<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web02.fireside.fm</fireside:hostname>
    <fireside:genDate>Thu, 30 Apr 2026 10:56:52 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>Increments - Episodes Tagged with “Belief”</title>
    <link>https://www.incrementspodcast.com/tags/belief</link>
    <pubDate>Fri, 20 Jun 2025 10:15:00 -0700</pubDate>
    <description>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>Science, Philosophy, Epistemology, Mayhem</itunes:subtitle>
    <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
    <itunes:summary>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/cover.jpg?v=18"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>Philosophy,Science,Ethics,Progress,Knowledge,Computer Science,Conversation,Error-Correction</itunes:keywords>
    <itunes:owner>
      <itunes:name>Ben Chugg and Vaden Masrani</itunes:name>
      <itunes:email>incrementspodcast@gmail.com</itunes:email>
    </itunes:owner>
<itunes:category text="Society &amp; Culture">
  <itunes:category text="Philosophy"/>
</itunes:category>
<itunes:category text="Science"/>
<item>
  <title>#87 - Gullibility, Belief, and Conformity (with Hugo Mercier)</title>
  <link>https://www.incrementspodcast.com/87</link>
  <guid isPermaLink="false">d20165d0-2913-4a2f-808e-c03ce3d9d906</guid>
  <pubDate>Fri, 20 Jun 2025 10:15:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/d20165d0-2913-4a2f-808e-c03ce3d9d906.mp3" length="52060994" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Hugo Mercier joins us to discuss his book "Not Born Yesterday" and his work on belief, gullibility, and how we change our minds. </itunes:subtitle>
  <itunes:duration>54:13</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/d/d20165d0-2913-4a2f-808e-c03ce3d9d906/cover.jpg?v=2"/>
  <description>Ben and Vaden test their French skills and have Hugo Mercier on the podcast to discuss who we trust and what we believe. Are humans gullible? Do we fall for propaganda and advertising campaigns? Do we follow expert consensus or forge ahead as independent thinkers? Can Vaden go for one episode without bringing up Trump? 
Hugo Mercier (https://sites.google.com/site/hugomercier/) is a research director at the CNRS (Institut Jean Nicod, Paris), where he works with the Evolution and Social Cognition team. Check out his two books: The Enigma of Reason (https://www.amazon.com/Enigma-Reason-Hugo-Mercier/dp/0674368304) and Not Born Yesterday (https://www.amazon.com/dp/0691208921). 
We discuss
Mercier's thoughts on the cognitive bias literature
Open vigilance mechanisms
Criticism of the System 1 vs System 2 dichotomy
Why Kahneman misinterpreted the bat and the ball thought experiment
Do flat earthers really believe the earth is flat?
The Asch conformity experiment 
Preference falsification vs internalization of professed beliefs 
How important is social signaling? 
Trump, MAGA, gullibility, and tariffs 
How effective are advertisements? 
How effective is propaganda? 
Is social science reforming? 
References
The Enigma of Reason (https://www.amazon.com/Enigma-Reason-Hugo-Mercier/dp/0674368304) by Hugo Mercier and Dan Sperber 
Not Born Yesterday (https://www.amazon.com/dp/0691208921)  
Our previous episodes on Not Born Yesterday (https://www.incrementspodcast.com/84) and The Enigma of Reason (https://www.incrementspodcast.com/39) 
Socials
Follow us on Twitter at @hugoreasoning, @IncrementsPod, @BennyChugg, @VadenMasrani
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Become a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
How much system 2 thinking does it take to misunderstand system 1 vs system 2? Tell us at incrementspodcast@gmail.com  Special Guest: Hugo Mercier.
</description>
  <itunes:keywords>reason, rationality, belief, information, communication, trust</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Ben and Vaden test their French skills and have Hugo Mercier on the podcast to discuss who we trust and what we believe. Are humans gullible? Do we fall for propaganda and advertising campaigns? Do we follow expert consensus or forge ahead as independent thinkers? Can Vaden go for one episode without bringing up Trump? </p>

<p><a href="https://sites.google.com/site/hugomercier/" rel="nofollow">Hugo Mercier</a> is a research director at the CNRS (Institut Jean Nicod, Paris), where he works with the Evolution and Social Cognition team. Check out his two books: <a href="https://www.amazon.com/Enigma-Reason-Hugo-Mercier/dp/0674368304" rel="nofollow">The Enigma of Reason</a> and <a href="https://www.amazon.com/dp/0691208921" rel="nofollow">Not Born Yesterday</a>. </p>

<h1>We discuss</h1>

<ul>
<li>Mercier&#39;s thoughts on the cognitive bias literature</li>
<li>Open vigilance mechanisms</li>
<li>Criticism of the System 1 vs System 2 dichotomy</li>
<li>Why Kahneman misinterpreted the bat and the ball thought experiment</li>
<li>Do flat earthers really believe the earth is flat?</li>
<li>The Asch conformity experiment </li>
<li>Preference falsification vs internalization of professed beliefs </li>
<li>How important is social signaling? </li>
<li>Trump, MAGA, gullibility, and tariffs </li>
<li>How effective are advertisements? </li>
<li>How effective is propaganda? </li>
<li>Is social science reforming? </li>
</ul>

<h1>References</h1>

<ul>
<li><a href="https://www.amazon.com/Enigma-Reason-Hugo-Mercier/dp/0674368304" rel="nofollow">The Enigma of Reason</a> by Hugo Mercier and Dan Sperber </li>
<li><a href="https://www.amazon.com/dp/0691208921" rel="nofollow">Not Born Yesterday</a><br></li>
<li>Our previous episodes on <a href="https://www.incrementspodcast.com/84" rel="nofollow">Not Born Yesterday</a> and <a href="https://www.incrementspodcast.com/39" rel="nofollow">The Enigma of Reason</a> </li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @hugoreasoning, @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Become a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>How much system 2 thinking does it take to misunderstand system 1 vs system 2? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a> </p><p>Special Guest: Hugo Mercier.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Ben and Vaden test their French skills and have Hugo Mercier on the podcast to discuss who we trust and what we believe. Are humans gullible? Do we fall for propaganda and advertising campaigns? Do we follow expert consensus or forge ahead as independent thinkers? Can Vaden go for one episode without bringing up Trump? </p>

<p><a href="https://sites.google.com/site/hugomercier/" rel="nofollow">Hugo Mercier</a> is a research director at the CNRS (Institut Jean Nicod, Paris), where he works with the Evolution and Social Cognition team. Check out his two books: <a href="https://www.amazon.com/Enigma-Reason-Hugo-Mercier/dp/0674368304" rel="nofollow">The Enigma of Reason</a> and <a href="https://www.amazon.com/dp/0691208921" rel="nofollow">Not Born Yesterday</a>. </p>

<h1>We discuss</h1>

<ul>
<li>Mercier&#39;s thoughts on the cognitive bias literature</li>
<li>Open vigilance mechanisms</li>
<li>Criticism of the System 1 vs System 2 dichotomy</li>
<li>Why Kahneman misinterpreted the bat and the ball thought experiment</li>
<li>Do flat earthers really believe the earth is flat?</li>
<li>The Asch conformity experiment </li>
<li>Preference falsification vs internalization of professed beliefs </li>
<li>How important is social signaling? </li>
<li>Trump, MAGA, gullibility, and tariffs </li>
<li>How effective are advertisements? </li>
<li>How effective is propaganda? </li>
<li>Is social science reforming? </li>
</ul>

<h1>References</h1>

<ul>
<li><a href="https://www.amazon.com/Enigma-Reason-Hugo-Mercier/dp/0674368304" rel="nofollow">The Enigma of Reason</a> by Hugo Mercier and Dan Sperber </li>
<li><a href="https://www.amazon.com/dp/0691208921" rel="nofollow">Not Born Yesterday</a><br></li>
<li>Our previous episodes on <a href="https://www.incrementspodcast.com/84" rel="nofollow">Not Born Yesterday</a> and <a href="https://www.incrementspodcast.com/39" rel="nofollow">The Enigma of Reason</a> </li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @hugoreasoning, @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Become a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>How much system 2 thinking does it take to misunderstand system 1 vs system 2? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a> </p><p>Special Guest: Hugo Mercier.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#85 (Reaction) - On Confidence and Evidence: Reacting to Brett Hall and Peter Boghossian (Part 1)</title>
  <link>https://www.incrementspodcast.com/85</link>
  <guid isPermaLink="false">2411225d-dc31-4f0f-9907-cf386fc6e475</guid>
  <pubDate>Thu, 08 May 2025 20:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/2411225d-dc31-4f0f-9907-cf386fc6e475.mp3" length="81702284" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Reacting to a discussion about belief, confidence, and epistemology between Brett Hall and Peter Boghossian</itunes:subtitle>
  <itunes:duration>1:49:48</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/2/2411225d-dc31-4f0f-9907-cf386fc6e475/cover.jpg?v=4"/>
  <description>We all knew that Vaden would release his inner Youtube debate bro at some point. Well he finally paid Ben enough to do it, and here we are: our first reaction video. Today we're commenting on the video What's the most rational way to know? (https://www.youtube.com/watch?v=vNQlmVJxySc&amp;t=3614s&amp;ab_channel=CordialCuriosity), a discussion between Brett Hall and Peter Boghossian on the relationship between confidence and evidence. Are we overly confident in our ability to make reaction videos? Evidently. 
Check out more from Brett Hall here (https://www.bretthall.org/) and Peter Boghossian here (https://peterboghossian.com/). 
We discuss
What is the relationship between confidence and evidence? 
The "formal apparatus of science" vs the "sociology" of science 
Eddington's famous experiment 
Why confidence and belief can't be mathematized (But why they are useful nonetheless)
Confidence as a function of falsifying experiments
Bayesianism vs critical rationalism  
References
Paper discussing how it took the wider scientific community over 40 years (after Eddington's experiment!) to become convinced of the truth of general relativity: The 1919 measurement of the deflection of light (https://arxiv.org/abs/1409.7812)
Eddington's original paper (https://w.astro.berkeley.edu/~kalas/labs/documents/dyson1920.pdf)
Vaden and Brett's blog exchange (https://vmasrani.github.io/blog/2023/predicting-human-behaviour/) 
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Become a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
Where were you last night, and why do you have condoms in your pocket? Tell us at incrementspodcast@gmail.com. 
</description>
  <itunes:keywords>epistemology, reaction video, confidence, belief, falsification</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>We all knew that Vaden would release his inner Youtube debate bro at some point. Well he finally paid Ben enough to do it, and here we are: our first reaction video. Today we&#39;re commenting on the video <a href="https://www.youtube.com/watch?v=vNQlmVJxySc&t=3614s&ab_channel=CordialCuriosity" rel="nofollow">What&#39;s the most rational way to know?</a>, a discussion between Brett Hall and Peter Boghossian on the relationship between confidence and evidence. Are we overly confident in our ability to make reaction videos? Evidently. </p>

<p>Check out more from Brett Hall <a href="https://www.bretthall.org/" rel="nofollow">here</a> and Peter Boghossian <a href="https://peterboghossian.com/" rel="nofollow">here</a>. </p>

<h1>We discuss</h1>

<ul>
<li>What is the relationship between confidence and evidence? </li>
<li>The &quot;formal apparatus of science&quot; vs the &quot;sociology&quot; of science </li>
<li>Eddington&#39;s famous experiment </li>
<li>Why confidence and belief can&#39;t be mathematized (But why they are useful nonetheless)</li>
<li>Confidence as a function of falsifying experiments</li>
<li>Bayesianism vs critical rationalism<br></li>
</ul>

<h1>References</h1>

<ul>
<li>Paper discussing how it took the wider scientific community over 40 years (after Eddington&#39;s experiment!) to become convinced of the truth of general relativity: <a href="https://arxiv.org/abs/1409.7812" rel="nofollow">The 1919 measurement of the deflection of light</a></li>
<li><a href="https://w.astro.berkeley.edu/%7Ekalas/labs/documents/dyson1920.pdf" rel="nofollow">Eddington&#39;s original paper</a></li>
<li><a href="https://vmasrani.github.io/blog/2023/predicting-human-behaviour/" rel="nofollow">Vaden and Brett&#39;s blog exchange</a> </li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Become a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>Where were you last night, and why do you have condoms in your pocket? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>We all knew that Vaden would release his inner Youtube debate bro at some point. Well he finally paid Ben enough to do it, and here we are: our first reaction video. Today we&#39;re commenting on the video <a href="https://www.youtube.com/watch?v=vNQlmVJxySc&t=3614s&ab_channel=CordialCuriosity" rel="nofollow">What&#39;s the most rational way to know?</a>, a discussion between Brett Hall and Peter Boghossian on the relationship between confidence and evidence. Are we overly confident in our ability to make reaction videos? Evidently. </p>

<p>Check out more from Brett Hall <a href="https://www.bretthall.org/" rel="nofollow">here</a> and Peter Boghossian <a href="https://peterboghossian.com/" rel="nofollow">here</a>. </p>

<h1>We discuss</h1>

<ul>
<li>What is the relationship between confidence and evidence? </li>
<li>The &quot;formal apparatus of science&quot; vs the &quot;sociology&quot; of science </li>
<li>Eddington&#39;s famous experiment </li>
<li>Why confidence and belief can&#39;t be mathematized (But why they are useful nonetheless)</li>
<li>Confidence as a function of falsifying experiments</li>
<li>Bayesianism vs critical rationalism<br></li>
</ul>

<h1>References</h1>

<ul>
<li>Paper discussing how it took the wider scientific community over 40 years (after Eddington&#39;s experiment!) to become convinced of the truth of general relativity: <a href="https://arxiv.org/abs/1409.7812" rel="nofollow">The 1919 measurement of the deflection of light</a></li>
<li><a href="https://w.astro.berkeley.edu/%7Ekalas/labs/documents/dyson1920.pdf" rel="nofollow">Eddington&#39;s original paper</a></li>
<li><a href="https://vmasrani.github.io/blog/2023/predicting-human-behaviour/" rel="nofollow">Vaden and Brett&#39;s blog exchange</a> </li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Become a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>Where were you last night, and why do you have condoms in your pocket? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#76 (Bonus) - Is P(doom) meaningful? Debating epistemology (w/ Liron Shapira)</title>
  <link>https://www.incrementspodcast.com/76</link>
  <guid isPermaLink="false">c2b5df9d-ecb4-43d0-9e80-a713495335d8</guid>
  <pubDate>Fri, 08 Nov 2024 14:30:00 -0800</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/c2b5df9d-ecb4-43d0-9e80-a713495335d8.mp3" length="98349666" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>We were invited onto Liron Shapira's "Doom debates" to discuss Bayesian versus Popperian epistemology, AI doom, and superintelligence. Unsurprisingly, we got about one third of the way through the first subject ... </itunes:subtitle>
  <itunes:duration>2:50:58</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/c/c2b5df9d-ecb4-43d0-9e80-a713495335d8/cover.jpg?v=2"/>
  <description>Liron Shapira, host of Doom Debates, invited us on to discuss Popperian versus Bayesian epistemology and whether we're worried about AI doom. As one might expect knowing us, we only got about halfway through the first subject, so get yourselves ready (presumably with many drinks) for part II in a few weeks! The era of Ben and Vaden's rowdy youtube debates has begun. Vaden is jubilant, Ben is uncomfortable, and the world has never been more annoyed by Popperians. 
Follow Liron on twitter (@liron) and check out the Doom Debates youtube channel (https://www.youtube.com/@DoomDebates) and podcast (https://podcasts.apple.com/us/podcast/doom-debates/id1751366208).  
We discuss
Whether we're concerned about AI doom 
Bayesian reasoning versus Popperian reasoning 
Whether it makes sense to put numbers on all your beliefs 
Solomonoff induction 
Objective vs subjective Bayesianism 
Prediction markets and superforecasting 
References
Vaden's blog post on Cox's Theorem and Yudkowsky's claims of "Laws of Rationality": https://vmasrani.github.io/blog/2021/the_credence_assumption/
Disproof of probabilistic induction (including Solomonoff Induction): https://arxiv.org/abs/2107.00749 
EA Post Vaden Mentioned regarding predictions being uncalibrated more than 1yr out: https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations
Article by Gavin Leech and Misha Yagudin on the reliability of forecasters: https://ifp.org/can-policymakers-trust-forecasters/
Superforecaster p(doom) is ~1%: https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/#:~:text=Domain%20experts%20in%20AI%20estimated,by%202100%20(around%2090%25).
The existential risk persuasion tournament https://www.astralcodexten.com/p/the-extinction-tournament
Some more info in Ben's article on superforecasting: https://benchugg.com/writing/superforecasting/
Slides on Content vs Probability: https://vmasrani.github.io/assets/pdf/popper_good.pdf
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Trust in the reverend Bayes and get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
What's your credence that the second debate is as fun as the first? Tell us at incrementspodcast@gmail.com 
 Special Guest: Liron Shapira.
</description>
  <itunes:keywords>AI, belief, Popper, Bayes, epistemology, prediction, induction</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Liron Shapira, host of Doom Debates, invited us on to discuss Popperian versus Bayesian epistemology and whether we&#39;re worried about AI doom. As one might expect knowing us, we only got about halfway through the first subject, so get yourselves ready (presumably with many drinks) for part II in a few weeks! The era of Ben and Vaden&#39;s rowdy youtube debates has begun. Vaden is jubilant, Ben is uncomfortable, and the world has never been more annoyed by Popperians. </p>

<p>Follow Liron on twitter (@liron) and check out the Doom Debates <a href="https://www.youtube.com/@DoomDebates" rel="nofollow">youtube channel</a> and <a href="https://podcasts.apple.com/us/podcast/doom-debates/id1751366208" rel="nofollow">podcast</a>.  </p>

<h1>We discuss</h1>

<ul>
<li>Whether we&#39;re concerned about AI doom </li>
<li>Bayesian reasoning versus Popperian reasoning </li>
<li>Whether it makes sense to put numbers on all your beliefs </li>
<li>Solomonoff induction </li>
<li>Objective vs subjective Bayesianism </li>
<li>Prediction markets and superforecasting </li>
</ul>

<h1>References</h1>

<ul>
<li>Vaden&#39;s blog post on Cox&#39;s Theorem and Yudkowsky&#39;s claims of &quot;Laws of Rationality&quot;: <a href="https://vmasrani.github.io/blog/2021/the_credence_assumption/" rel="nofollow">https://vmasrani.github.io/blog/2021/the_credence_assumption/</a></li>
<li>Disproof of probabilistic induction (including Solomonoff Induction): <a href="https://arxiv.org/abs/2107.00749" rel="nofollow">https://arxiv.org/abs/2107.00749</a> </li>
<li>EA Post Vaden Mentioned regarding predictions being uncalibrated more than 1yr out: <a href="https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations" rel="nofollow">https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations</a></li>
<li>Article by Gavin Leech and Misha Yagudin on the reliability of forecasters: <a href="https://ifp.org/can-policymakers-trust-forecasters/" rel="nofollow">https://ifp.org/can-policymakers-trust-forecasters/</a></li>
<li>Superforecaster p(doom) is ~1%: <a href="https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/#:%7E:text=Domain%20experts%20in%20AI%20estimated,by%202100%20(around%2090%25)" rel="nofollow">https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/#:~:text=Domain%20experts%20in%20AI%20estimated,by%202100%20(around%2090%25)</a>.</li>
<li>The existential risk persuasion tournament <a href="https://www.astralcodexten.com/p/the-extinction-tournament" rel="nofollow">https://www.astralcodexten.com/p/the-extinction-tournament</a></li>
<li>Some more info in Ben&#39;s article on superforecasting: <a href="https://benchugg.com/writing/superforecasting/" rel="nofollow">https://benchugg.com/writing/superforecasting/</a></li>
<li>Slides on Content vs Probability: <a href="https://vmasrani.github.io/assets/pdf/popper_good.pdf" rel="nofollow">https://vmasrani.github.io/assets/pdf/popper_good.pdf</a></li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Trust in the reverend Bayes and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>What&#39;s your credence that the second debate is as fun as the first? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a> </p><p>Special Guest: Liron Shapira.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Liron Shapira, host of Doom Debates, invited us on to discuss Popperian versus Bayesian epistemology and whether we&#39;re worried about AI doom. As one might expect knowing us, we only got about halfway through the first subject, so get yourselves ready (presumably with many drinks) for part II in a few weeks! The era of Ben and Vaden&#39;s rowdy youtube debates has begun. Vaden is jubilant, Ben is uncomfortable, and the world has never been more annoyed by Popperians. </p>

<p>Follow Liron on twitter (@liron) and check out the Doom Debates <a href="https://www.youtube.com/@DoomDebates" rel="nofollow">youtube channel</a> and <a href="https://podcasts.apple.com/us/podcast/doom-debates/id1751366208" rel="nofollow">podcast</a>.  </p>

<h1>We discuss</h1>

<ul>
<li>Whether we&#39;re concerned about AI doom </li>
<li>Bayesian reasoning versus Popperian reasoning </li>
<li>Whether it makes sense to put numbers on all your beliefs </li>
<li>Solomonoff induction </li>
<li>Objective vs subjective Bayesianism </li>
<li>Prediction markets and superforecasting </li>
</ul>

<h1>References</h1>

<ul>
<li>Vaden&#39;s blog post on Cox&#39;s Theorem and Yudkowsky&#39;s claims of &quot;Laws of Rationality&quot;: <a href="https://vmasrani.github.io/blog/2021/the_credence_assumption/" rel="nofollow">https://vmasrani.github.io/blog/2021/the_credence_assumption/</a></li>
<li>Disproof of probabilistic induction (including Solomonoff Induction): <a href="https://arxiv.org/abs/2107.00749" rel="nofollow">https://arxiv.org/abs/2107.00749</a> </li>
<li>EA Post Vaden Mentioned regarding predictions being uncalibrated more than 1yr out: <a href="https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations" rel="nofollow">https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations</a></li>
<li>Article by Gavin Leech and Misha Yagudin on the reliability of forecasters: <a href="https://ifp.org/can-policymakers-trust-forecasters/" rel="nofollow">https://ifp.org/can-policymakers-trust-forecasters/</a></li>
<li>Superforecaster p(doom) is ~1%: <a href="https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/#:%7E:text=Domain%20experts%20in%20AI%20estimated,by%202100%20(around%2090%25)" rel="nofollow">https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/#:~:text=Domain%20experts%20in%20AI%20estimated,by%202100%20(around%2090%25)</a>.</li>
<li>The existential risk persuasion tournament <a href="https://www.astralcodexten.com/p/the-extinction-tournament" rel="nofollow">https://www.astralcodexten.com/p/the-extinction-tournament</a></li>
<li>Some more info in Ben&#39;s article on superforecasting: <a href="https://benchugg.com/writing/superforecasting/" rel="nofollow">https://benchugg.com/writing/superforecasting/</a></li>
<li>Slides on Content vs Probability: <a href="https://vmasrani.github.io/assets/pdf/popper_good.pdf" rel="nofollow">https://vmasrani.github.io/assets/pdf/popper_good.pdf</a></li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Trust in the reverend Bayes and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>What&#39;s your credence that the second debate is as fun as the first? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a> </p><p>Special Guest: Liron Shapira.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#75 - The Problem of Induction, Relitigated (w/ Tamler Sommers)</title>
  <link>https://www.incrementspodcast.com/75</link>
  <guid isPermaLink="false">620c85f4-0377-4a5a-ba7e-71006bcb89b4</guid>
  <pubDate>Wed, 23 Oct 2024 09:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/620c85f4-0377-4a5a-ba7e-71006bcb89b4.mp3" length="98840196" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>When Very Bad Wizards meets Very Culty Popperians. Famed philosopher, podcaster, and Kant-hater Tamler Sommers joins the boys for a spirited disagreement over Popper, and whether he solved the Problem of Induction. </itunes:subtitle>
  <itunes:duration>1:41:13</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/6/620c85f4-0377-4a5a-ba7e-71006bcb89b4/cover.jpg?v=4"/>
  <description>When Very Bad Wizards meets Very Culty Popperians.  We finally decided to have a real life professional philosopher on the pod to call us out on our nonsense,  and are honored to have on Tamler Sommers, from the esteemed Very Bad Wizards podcast, to argue with us about the Problem of Induction. Did Popper solve it, or does his proposed solution, like all the other attempts, "fail decisively"? 
(Warning: One of the two hosts maaay have revealed their Popperian dogmatism a bit throughout this episode. Whichever host that is - they shall remain unnamed - apologizes quietly and stubbornly under their breath.) 
Check out Tamler's website (https://www.tamlersommers.com/), his podcast (Very Bad Wizards (https://verybadwizards.com/)), or follow him on twitter (@tamler). 
We discuss
What is the problem of induction? 
Whether regularities really exist in nature
The difference between certainty and justification 
Popper's solution to the problem of induction 
If whiskey will taste like orange juice next week
What makes a good theory?
Why prediction is secondary to explanation for Popper 
If science and meditation are in conflict 
The boundaries of science  
References
Very Bad Wizards episode on induction (https://verybadwizards.com/episode/episode-294-the-scandal-of-philosophy-humes-problem-of-induction)
The problem of induction, by Wesley Salmon (https://home.csulb.edu/~cwallis/100/articles/salmon.html)
Hume on induction (https://plato.stanford.edu/entries/induction-problem/#HumeProb)
Errata
Vaden mentions in the episode that "Einstein's theory is better because it can explain earth's gravitational constant". He got some of the details wrong here - it's actually the inverse square law, not the gravitational constant. Listen to Edward Witten explain it much better here (https://www.youtube.com/watch?v=A_9RqsHYEAs). 
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @tamler
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Trust in our regularity and get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
If you are a Very Bad Wizards listener, hello! We're exactly like Tamler and David, except younger. Come join the Cult of Popper over at incrementspodcast@gmail.com 
Image credit: From this Aeon essay on Hume (https://aeon.co/essays/hume-is-the-amiable-modest-generous-philosopher-we-need-today). Illustration by Petra Eriksson at Handsome Frank.  Special Guest: Tamler Sommers.
</description>
  <itunes:keywords>induction, popper, belief, certainty, justification, deduction, logic</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>When Very Bad Wizards meets Very Culty Popperians.  We finally decided to have a real life professional philosopher on the pod to call us out on our nonsense,  and are honored to have on Tamler Sommers, from the esteemed Very Bad Wizards podcast, to argue with us about the Problem of Induction. Did Popper solve it, or does his proposed solution, like all the other attempts, &quot;fail decisively&quot;? </p>

<p>(Warning: One of the two hosts maaay have revealed their Popperian dogmatism a bit throughout this episode. Whichever host that is - they shall remain unnamed - apologizes quietly and stubbornly under their breath.) </p>

<p>Check out <a href="https://www.tamlersommers.com/" rel="nofollow">Tamler&#39;s website</a>, his podcast (<a href="https://verybadwizards.com/" rel="nofollow">Very Bad Wizards</a>), or follow him on twitter (@tamler). </p>

<h1>We discuss</h1>

<ul>
<li>What is the problem of induction? </li>
<li>Whether regularities really exist in nature</li>
<li>The difference between certainty and justification </li>
<li>Popper&#39;s solution to the problem of induction </li>
<li>If whiskey will taste like orange juice next week</li>
<li>What makes a good theory?</li>
<li>Why prediction is secondary to explanation for Popper </li>
<li>If science and meditation are in conflict</li>
<li>The boundaries of science</li>
</ul>

<h1>References</h1>

<ul>
<li><a href="https://verybadwizards.com/episode/episode-294-the-scandal-of-philosophy-humes-problem-of-induction" rel="nofollow">Very Bad Wizards episode on induction</a></li>
<li><a href="https://home.csulb.edu/%7Ecwallis/100/articles/salmon.html" rel="nofollow">The problem of induction, by Wesley Salmon</a></li>
<li><a href="https://plato.stanford.edu/entries/induction-problem/#HumeProb" rel="nofollow">Hume on induction</a></li>
</ul>

<h1>Errata</h1>

<ul>
<li>Vaden mentions in the episode that &quot;Einstein&#39;s theory is better because it can explain earth&#39;s gravitational constant&quot;. He got some of the details wrong here - it&#39;s actually the inverse square law, not the gravitational constant. Listen to Edward Witten explain it much better <a href="https://www.youtube.com/watch?v=A_9RqsHYEAs" rel="nofollow">here</a>.</li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @tamler</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Trust in our regularity and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>If you are a Very Bad Wizards listener, hello! We&#39;re exactly like Tamler and David, except younger. Come join the Cult of Popper over at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a> </p>

<p>Image credit: From this <a href="https://aeon.co/essays/hume-is-the-amiable-modest-generous-philosopher-we-need-today" rel="nofollow">Aeon essay on Hume</a>. Illustration by Petra Eriksson at Handsome Frank. </p><p>Special Guest: Tamler Sommers.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>When Very Bad Wizards meets Very Culty Popperians.  We finally decided to have a real life professional philosopher on the pod to call us out on our nonsense,  and are honored to have on Tamler Sommers, from the esteemed Very Bad Wizards podcast, to argue with us about the Problem of Induction. Did Popper solve it, or does his proposed solution, like all the other attempts, &quot;fail decisively&quot;? </p>

<p>(Warning: One of the two hosts maaay have revealed their Popperian dogmatism a bit throughout this episode. Whichever host that is - they shall remain unnamed - apologizes quietly and stubbornly under their breath.) </p>

<p>Check out <a href="https://www.tamlersommers.com/" rel="nofollow">Tamler&#39;s website</a>, his podcast (<a href="https://verybadwizards.com/" rel="nofollow">Very Bad Wizards</a>), or follow him on twitter (@tamler). </p>

<h1>We discuss</h1>

<ul>
<li>What is the problem of induction? </li>
<li>Whether regularities really exist in nature</li>
<li>The difference between certainty and justification </li>
<li>Popper&#39;s solution to the problem of induction </li>
<li>If whiskey will taste like orange juice next week</li>
<li>What makes a good theory?</li>
<li>Why prediction is secondary to explanation for Popper </li>
<li>If science and meditation are in conflict</li>
<li>The boundaries of science</li>
</ul>

<h1>References</h1>

<ul>
<li><a href="https://verybadwizards.com/episode/episode-294-the-scandal-of-philosophy-humes-problem-of-induction" rel="nofollow">Very Bad Wizards episode on induction</a></li>
<li><a href="https://home.csulb.edu/%7Ecwallis/100/articles/salmon.html" rel="nofollow">The problem of induction, by Wesley Salmon</a></li>
<li><a href="https://plato.stanford.edu/entries/induction-problem/#HumeProb" rel="nofollow">Hume on induction</a></li>
</ul>

<h1>Errata</h1>

<ul>
<li>Vaden mentions in the episode that &quot;Einstein&#39;s theory is better because it can explain earth&#39;s gravitational constant&quot;. He got some of the details wrong here - it&#39;s actually the inverse square law, not the gravitational constant. Listen to Edward Witten explain it much better <a href="https://www.youtube.com/watch?v=A_9RqsHYEAs" rel="nofollow">here</a>.</li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @tamler</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Trust in our regularity and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>If you are a Very Bad Wizards listener, hello! We&#39;re exactly like Tamler and David, except younger. Come join the Cult of Popper over at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a> </p>

<p>Image credit: From this <a href="https://aeon.co/essays/hume-is-the-amiable-modest-generous-philosopher-we-need-today" rel="nofollow">Aeon essay on Hume</a>. Illustration by Petra Eriksson at Handsome Frank. </p><p>Special Guest: Tamler Sommers.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#74 - Disagreeing about Belief, Probability, and Truth (w/ David Deutsch)</title>
  <link>https://www.incrementspodcast.com/74</link>
  <guid isPermaLink="false">03508f9b-3a2a-4b15-9b23-fe30083b431b</guid>
  <pubDate>Tue, 01 Oct 2024 09:30:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/03508f9b-3a2a-4b15-9b23-fe30083b431b.mp3" length="88784483" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>We talk with David Deutsch about whether the concept of belief is a useful lens on human cognition, when probability and statistics are actually useful, and whether he disagrees with Karl Popper about the truth. </itunes:subtitle>
  <itunes:duration>1:32:02</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/0/03508f9b-3a2a-4b15-9b23-fe30083b431b/cover.jpg?v=9"/>
  <description>What do you do when one of your intellectual idols comes on the podcast? Bombard them with disagreements of course. We were thrilled to have David Deutsch on the podcast to discuss whether the concept of belief is a useful lens on human cognition, when probability and statistics should be deployed, and whether he disagrees with Karl Popper on abstractions, the truth, and nothing but the truth. 
Follow David on Twitter (@DavidDeutschOxf) or find his website here (https://www.daviddeutsch.org.uk/). 
We discuss
Whether belief is a fruitful lens through which to analyze ideas 
Whether a non-quantitative form of belief can be defended 
How does belief bottom out epistemologically? 
Whether statistics and probability are useful 
Where should statistics and probability be used in practice? 
The Popper-Miller theorem
Statements vs propositions and their relevance for truth 
Whether Popper and Deutsch disagree about truth 
References
The Popper-Miller theorem. See the original paper (https://www.nature.com/articles/302687a0) 
David's 2021 talk on the correspondence theory of truth (https://www.youtube.com/watch?v=DZ-opI-jghs) 
David's talk on physics without probability (https://www.youtube.com/watch?v=wfzSE4Hoxbc). 
Hempel's paradox (https://en.wikipedia.org/wiki/Raven_paradox) 
The Beginning of Infinity (https://www.amazon.com/Beginning-Infinity-Explanations-Transform-World/dp/0143121359)
Knowledge and the Body-Mind Problem (https://www.amazon.ca/Knowledge-Body-Mind-Problem-Defence-Interaction/dp/0415135567)
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @DavidDeutschOxf
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Believe in us and get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
What's the truth about your belief on the probability of useful statistics? Tell us over at incrementspodcast@gmail.com.  Special Guest: David Deutsch.
</description>
  <itunes:keywords>probability, statistics, truth, belief, epistemology, certainty, mathematics</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>What do you do when one of your intellectual idols comes on the podcast? Bombard them with disagreements of course. We were thrilled to have David Deutsch on the podcast to discuss whether the concept of belief is a useful lens on human cognition, when probability and statistics should be deployed, and whether he disagrees with Karl Popper on abstractions, the truth, and nothing but the truth. </p>

<p>Follow David on Twitter (@DavidDeutschOxf) or find his website <a href="https://www.daviddeutsch.org.uk/" rel="nofollow">here</a>. </p>

<h1>We discuss</h1>

<ul>
<li>Whether belief is a fruitful lens through which to analyze ideas </li>
<li>Whether a non-quantitative form of belief can be defended </li>
<li>How does belief bottom out epistemologically? </li>
<li>Whether statistics and probability are useful </li>
<li>Where should statistics and probability be used in practice? </li>
<li>The Popper-Miller theorem</li>
<li>Statements vs propositions and their relevance for truth </li>
<li>Whether Popper and Deutsch disagree about truth </li>
</ul>

<h1>References</h1>

<ul>
<li>The Popper-Miller theorem. See the <a href="https://www.nature.com/articles/302687a0" rel="nofollow">original paper</a> </li>
<li>David&#39;s 2021 talk on the <a href="https://www.youtube.com/watch?v=DZ-opI-jghs" rel="nofollow">correspondence theory of truth</a> </li>
<li>David&#39;s talk on <a href="https://www.youtube.com/watch?v=wfzSE4Hoxbc" rel="nofollow">physics without probability</a>. </li>
<li><a href="https://en.wikipedia.org/wiki/Raven_paradox" rel="nofollow">Hempel&#39;s paradox</a> </li>
<li><a href="https://www.amazon.com/Beginning-Infinity-Explanations-Transform-World/dp/0143121359" rel="nofollow">The Beginning of Infinity</a></li>
<li><a href="https://www.amazon.ca/Knowledge-Body-Mind-Problem-Defence-Interaction/dp/0415135567" rel="nofollow">Knowledge and the Body-Mind Problem</a></li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @DavidDeutschOxf</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Believe in us and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>What&#39;s the truth about your belief on the probability of useful statistics? Tell us over at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p><p>Special Guest: David Deutsch.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>What do you do when one of your intellectual idols comes on the podcast? Bombard them with disagreements of course. We were thrilled to have David Deutsch on the podcast to discuss whether the concept of belief is a useful lens on human cognition, when probability and statistics should be deployed, and whether he disagrees with Karl Popper on abstractions, the truth, and nothing but the truth. </p>

<p>Follow David on Twitter (@DavidDeutschOxf) or find his website <a href="https://www.daviddeutsch.org.uk/" rel="nofollow">here</a>. </p>

<h1>We discuss</h1>

<ul>
<li>Whether belief is a fruitful lens through which to analyze ideas </li>
<li>Whether a non-quantitative form of belief can be defended </li>
<li>How does belief bottom out epistemologically? </li>
<li>Whether statistics and probability are useful </li>
<li>Where should statistics and probability be used in practice? </li>
<li>The Popper-Miller theorem</li>
<li>Statements vs propositions and their relevance for truth </li>
<li>Whether Popper and Deutsch disagree about truth </li>
</ul>

<h1>References</h1>

<ul>
<li>The Popper-Miller theorem. See the <a href="https://www.nature.com/articles/302687a0" rel="nofollow">original paper</a> </li>
<li>David&#39;s 2021 talk on the <a href="https://www.youtube.com/watch?v=DZ-opI-jghs" rel="nofollow">correspondence theory of truth</a> </li>
<li>David&#39;s talk on <a href="https://www.youtube.com/watch?v=wfzSE4Hoxbc" rel="nofollow">physics without probability</a>. </li>
<li><a href="https://en.wikipedia.org/wiki/Raven_paradox" rel="nofollow">Hempel&#39;s paradox</a> </li>
<li><a href="https://www.amazon.com/Beginning-Infinity-Explanations-Transform-World/dp/0143121359" rel="nofollow">The Beginning of Infinity</a></li>
<li><a href="https://www.amazon.ca/Knowledge-Body-Mind-Problem-Defence-Interaction/dp/0415135567" rel="nofollow">Knowledge and the Body-Mind Problem</a></li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @DavidDeutschOxf</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Believe in us and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>What&#39;s the truth about your belief on the probability of useful statistics? Tell us over at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p><p>Special Guest: David Deutsch.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#56 - Ask Us Anything IV: Certainty, Emergence, and Popperian Imperatives</title>
  <link>https://www.incrementspodcast.com/56</link>
  <guid isPermaLink="false">d4e62324-29eb-46bd-99c1-d97f3b2ae8b7</guid>
  <pubDate>Wed, 01 Nov 2023 09:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/d4e62324-29eb-46bd-99c1-d97f3b2ae8b7.mp3" length="78287515" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Perhaps you thought, in your infinite ignorance, that the release of the previous episode marked the end of the age of the AMA! But nay: the age of the AMA has just begun!</itunes:subtitle>
  <itunes:duration>1:21:32</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/d/d4e62324-29eb-46bd-99c1-d97f3b2ae8b7/cover.jpg?v=1"/>
  <description>Perhaps you thought, in your infinite ignorance, that the release of the previous episode marked the end of the age of the AMA! But nay my friends, the age of the AMA has just begun! We'll answer your questions until the cows come home; until Godot arrives; until all the world's babies are potty-trained. Or, at least, until we stop laughing. 
We discuss
Potty training, taking babies seriously, and adult diapers 
Why Vaden never daydreams, fantasizes, or minds spending 10 hours in a car
Whether the subjective notions of certainty, belief, or confidence deserve a spot in the objective world of epistemology 
Whether sports are authoritarian 
Whether spreading Popper's epistemology is a moral imperative 
The role of school and educational institutions 
Whether emergence is the result of the interplay between physical reality and the reality of abstraction
Questions
(Tom) Can any thinking take place completely independent of any certainty (explicitly acknowledged or inexplicit) whatsoever? Or can we introduce alternative terms to 'certainty' and 'confidence' to describe how individuals process their convictions, consent, and agreement? If 'certainty' and 'confidence' connote justificationism, can a fallibilist dismiss these terms entirely?
(Tom) Can fallibilism, anti-authoritarianism, anti-justificationism, and critical rationalism overall operate effectively in the highly competitive space of sports, especially professional sports? 
(Andrew) If our best theory of how to make rapid progress comes from Popper's epistemology, should making it more widely known/understood be considered a moral imperative? If not, why? If so, thoughts? 
(Andrew) This one has been hanging about in my notes for a couple of years so I'm not sure it's a great question any more, but something zingy about the interplay between reality, abstractions and their effects on each other has pushed me to add it here: Is emergence the result of the interplay between physical reality and the reality of abstractions?
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Help us pay for diapers and get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help with diarrhea removal here (https://ko-fi.com/increments). 
Click dem like buttons on youtube over hur (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ).
Who is more annoying in the mornings? Tell us at incrementspodcast@gmail.com 
</description>
  <itunes:keywords>ask-us-anything, emergence, moral imperatives, certainty, confidence, belief</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Perhaps you thought, in your infinite ignorance, that the release of the previous episode marked the end of the age of the AMA! But nay my friends, the age of the AMA has just begun! We&#39;ll answer your questions until the cows come home; until Godot arrives; until all the world&#39;s babies are potty-trained. Or, at least, until we stop laughing. </p>

<h1>We discuss</h1>

<ul>
<li>Potty training, taking babies seriously, and adult diapers </li>
<li>Why Vaden never daydreams, fantasizes, or minds spending 10 hours in a car</li>
<li>Whether the subjective notions of certainty, belief, or confidence deserve a spot in the objective world of epistemology </li>
<li>Whether sports are authoritarian </li>
<li>Whether spreading Popper&#39;s epistemology is a moral imperative </li>
<li>The role of school and educational institutions </li>
<li>Whether emergence is the result of the interplay between physical reality and the reality of abstraction</li>
</ul>

<h1>Questions</h1>

<ol>
<li><p>(Tom) Can any thinking take place completely independent of any certainty (explicitly acknowledged or inexplicit) whatsoever? Or can we introduce alternative terms to &#39;certainty&#39; and &#39;confidence&#39; to describe how individuals process their convictions, consent, and agreement? If &#39;certainty&#39; and &#39;confidence&#39; connote justificationism, can a fallibilist dismiss these terms entirely?</p></li>
<li><p>(Tom) Can fallibilism, anti-authoritarianism, anti-justificationism, and critical rationalism overall operate effectively in the highly competitive space of sports, especially professional sports? </p></li>
<li><p>(Andrew) If our best theory of how to make rapid progress comes from Popper&#39;s epistemology, should making it more widely known/understood be considered a moral imperative? If not, why? If so, thoughts? </p></li>
<li><p>(Andrew) This one has been hanging about in my notes for a couple of years so I&#39;m not sure it&#39;s a great question any more, but something zingy about the interplay between reality, abstractions and their effects on each other has pushed me to add it here: Is emergence the result of the interplay between physical reality and the reality of abstractions?</p></li>
</ol>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Help us pay for diapers and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help with diarrhea removal <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube over hur</a>.</li>
</ul>

<p>Who is more annoying in the mornings? Tell us at <em><a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></em></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Perhaps you thought, in your infinite ignorance, that the release of the previous episode marked the end of the age of the AMA! But nay my friends, the age of the AMA has just begun! We&#39;ll answer your questions until the cows come home; until Godot arrives; until all the world&#39;s babies are potty-trained. Or, at least, until we stop laughing. </p>

<h1>We discuss</h1>

<ul>
<li>Potty training, taking babies seriously, and adult diapers </li>
<li>Why Vaden never daydreams, fantasizes, or minds spending 10 hours in a car</li>
<li>Whether the subjective notions of certainty, belief, or confidence deserve a spot in the objective world of epistemology </li>
<li>Whether sports are authoritarian </li>
<li>Whether spreading Popper&#39;s epistemology is a moral imperative </li>
<li>The role of school and educational institutions </li>
<li>Whether emergence is the result of the interplay between physical reality and the reality of abstraction</li>
</ul>

<h1>Questions</h1>

<ol>
<li><p>(Tom) Can any thinking take place completely independent of any certainty (explicitly acknowledged or inexplicit) whatsoever? Or can we introduce alternative terms to &#39;certainty&#39; and &#39;confidence&#39; to describe how individuals process their convictions, consent, and agreement? If &#39;certainty&#39; and &#39;confidence&#39; connote justificationism, can a fallibilist dismiss these terms entirely?</p></li>
<li><p>(Tom) Can fallibilism, anti-authoritarianism, anti-justificationism, and critical rationalism overall operate effectively in the highly competitive space of sports, especially professional sports? </p></li>
<li><p>(Andrew) If our best theory of how to make rapid progress comes from Popper&#39;s epistemology, should making it more widely known/understood be considered a moral imperative? If not, why? If so, thoughts? </p></li>
<li><p>(Andrew) This one has been hanging about in my notes for a couple of years so I&#39;m not sure it&#39;s a great question any more, but something zingy about the interplay between reality, abstractions and their effects on each other has pushed me to add it here: Is emergence the result of the interplay between physical reality and the reality of abstractions?</p></li>
</ol>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Help us pay for diapers and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help with diarrhea removal <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube over hur</a>.</li>
</ul>

<p>Who is more annoying in the mornings? Tell us at <em><a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></em></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#2 - Consequentialism II: Strange Beliefs</title>
  <link>https://www.incrementspodcast.com/2</link>
  <guid isPermaLink="false">Buzzsprout-3866813</guid>
  <pubDate>Thu, 21 May 2020 18:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/8243d2b5-6232-425b-8c8a-7a502b324440.mp3" length="64811713" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle></itunes:subtitle>
  <itunes:duration>1:29:30</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/8/8243d2b5-6232-425b-8c8a-7a502b324440/cover.jpg?v=1"/>
  <description>&lt;p&gt;An attempt to clean up the mess we made last episode. Ben still doesn't figure out how not to yell into his microphone, and Vaden finally realizes what Ben was saying and it was … perhaps not so interesting in the first place? Ben, all too pleased with himself, starts yammering on about future generations. Should we care? God — we promise that next week we’ll try to stick to whichever subject we pick. &lt;/p&gt;&lt;p&gt;&lt;b&gt;&lt;em&gt;References: &lt;/em&gt;&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;
&lt;a href="https://80000hours.org/podcast/episodes/why-the-long-run-future-matters-more-than-anything-else-and-what-we-should-do-about-it/"&gt;Why the long-term future matters&lt;/a&gt;, podcast with Toby Ord. &lt;/li&gt;&lt;/ul&gt; 
</description>
  <itunes:keywords>Consequentialism, belief, future generations </itunes:keywords>
  <content:encoded>
    <![CDATA[<p>An attempt to clean up the mess we made last episode. Ben still doesn&apos;t figure out how not to yell into his microphone, and Vaden finally realizes what Ben was saying and it was … perhaps not so interesting in the first place? Ben, all too pleased with himself, starts yammering on about future generations. Should we care? God — we promise that next week we’ll try to stick to whichever subject we pick. </p><p><b><em>References: </em></b></p><ul><li><a href='https://80000hours.org/podcast/episodes/why-the-long-run-future-matters-more-than-anything-else-and-what-we-should-do-about-it/'>Why the long-term future matters</a>, podcast with Toby Ord. </li></ul><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>An attempt to clean up the mess we made last episode. Ben still doesn&apos;t figure out how not to yell into his microphone, and Vaden finally realizes what Ben was saying and it was … perhaps not so interesting in the first place? Ben, all too pleased with himself, starts yammering on about future generations. Should we care? God — we promise that next week we’ll try to stick to whichever subject we pick. </p><p><b><em>References: </em></b></p><ul><li><a href='https://80000hours.org/podcast/episodes/why-the-long-run-future-matters-more-than-anything-else-and-what-we-should-do-about-it/'>Why the long-term future matters</a>, podcast with Toby Ord. </li></ul><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#1 - Consequentialism I: Epistemic Modesty</title>
  <link>https://www.incrementspodcast.com/1</link>
  <guid isPermaLink="false">Buzzsprout-3818885</guid>
  <pubDate>Thu, 21 May 2020 16:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/e73d04da-5d22-4097-ae4c-e2502387ad0e.mp3" length="48969291" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle></itunes:subtitle>
  <itunes:duration>1:07:20</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/e/e73d04da-5d22-4097-ae4c-e2502387ad0e/cover.jpg?v=3"/>
  <description>&lt;p&gt;We attempt to talk about &lt;em&gt;Epistemic Modesty&lt;/em&gt;: broadly, the idea that one should be modest in their beliefs when other people (with similar credentials) disagree with them. Vaden, however, entirely immodestly, tries abandoning the subject because he’s scared of Ben’s forceful arguments and derails the conversation onto the entirely uncontroversial subject of which systems of moral decision making are best suited for moral progress. A flabbergasted Ben tries to keep up, but too little too late. Most of the time he's just trying to get his microphone to behave anyway. &lt;/p&gt;&lt;p&gt;&lt;b&gt;&lt;em&gt;References:&lt;/em&gt;&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://forum.effectivealtruism.org/posts/WKPd79PESRGZHQ5GY/in-defence-of-epistemic-modesty"&gt;In defence of epistemic modesty&lt;/a&gt;; Greg Lewis. &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://forum.effectivealtruism.org/posts/ftshCQDZJ726RtY3s/against-modest-epistemology"&gt;Against Modest Epistemology&lt;/a&gt;; Eliezer Yudkowski. &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://80000hours.org/podcast/episodes/will-macaskill-moral-philosophy/"&gt;Podcast with Will MacAskill on moral uncertainty&lt;/a&gt;.  &lt;/li&gt;
&lt;/ul&gt; 
</description>
  <itunes:keywords>Epistemic Modesty, belief, morality, progress, decision making</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>We attempt to talk about <em>Epistemic Modesty</em>: broadly, the idea that one should be modest in their beliefs when other people (with similar credentials) disagree with them. Vaden, however, entirely immodestly, tries abandoning the subject because he’s scared of Ben’s forceful arguments and derails the conversation onto the entirely uncontroversial subject of which systems of moral decision making are best suited for moral progress. A flabbergasted Ben tries to keep up, but too little too late. Most of the time he&apos;s just trying to get his microphone to behave anyway. </p><p><b><em>References:</em></b></p><ul><li><a href='https://forum.effectivealtruism.org/posts/WKPd79PESRGZHQ5GY/in-defence-of-epistemic-modesty'>In defence of epistemic modesty</a>; Greg Lewis. </li><li><a href='https://forum.effectivealtruism.org/posts/ftshCQDZJ726RtY3s/against-modest-epistemology'>Against Modest Epistemology</a>; Eliezer Yudkowsky. </li><li><a href='https://80000hours.org/podcast/episodes/will-macaskill-moral-philosophy/'>Podcast with Will MacAskill on moral uncertainty</a>. </li></ul><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>We attempt to talk about <em>Epistemic Modesty</em>: broadly, the idea that one should be modest in their beliefs when other people (with similar credentials) disagree with them. Vaden, however, entirely immodestly, tries abandoning the subject because he’s scared of Ben’s forceful arguments and derails the conversation onto the entirely uncontroversial subject of which systems of moral decision making are best suited for moral progress. A flabbergasted Ben tries to keep up, but too little too late. Most of the time he&apos;s just trying to get his microphone to behave anyway. </p><p><b><em>References:</em></b></p><ul><li><a href='https://forum.effectivealtruism.org/posts/WKPd79PESRGZHQ5GY/in-defence-of-epistemic-modesty'>In defence of epistemic modesty</a>; Greg Lewis. </li><li><a href='https://forum.effectivealtruism.org/posts/ftshCQDZJ726RtY3s/against-modest-epistemology'>Against Modest Epistemology</a>; Eliezer Yudkowsky. </li><li><a href='https://80000hours.org/podcast/episodes/will-macaskill-moral-philosophy/'>Podcast with Will MacAskill on moral uncertainty</a>. </li></ul><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
