<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web02.fireside.fm</fireside:hostname>
    <fireside:genDate>Wed, 29 Apr 2026 07:22:24 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>Increments - Episodes Tagged with “Induction”</title>
    <link>https://www.incrementspodcast.com/tags/induction</link>
    <pubDate>Thu, 06 Nov 2025 10:00:00 -0800</pubDate>
    <description>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>Science, Philosophy, Epistemology, Mayhem</itunes:subtitle>
    <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
    <itunes:summary>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/cover.jpg?v=18"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>Philosophy,Science,Ethics,Progress,Knowledge,Computer Science,Conversation,Error-Correction</itunes:keywords>
    <itunes:owner>
      <itunes:name>Ben Chugg and Vaden Masrani</itunes:name>
      <itunes:email>incrementspodcast@gmail.com</itunes:email>
    </itunes:owner>
<itunes:category text="Society &amp; Culture">
  <itunes:category text="Philosophy"/>
</itunes:category>
<itunes:category text="Science"/>
<item>
  <title>#94 - Is AI Just a Tool? (w/ Scott Aaronson) </title>
  <link>https://www.incrementspodcast.com/94</link>
  <guid isPermaLink="false">b36467e9-f3b2-4477-86e8-14586cc5a5a9</guid>
  <pubDate>Thu, 06 Nov 2025 10:00:00 -0800</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/b36467e9-f3b2-4477-86e8-14586cc5a5a9.mp3" length="81765482" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Is there any reason to believe that AI's capabilities are fundamentally limited? Scott Aaronson comes on to scare us straight. </itunes:subtitle>
  <itunes:duration>1:24:46</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/b/b36467e9-f3b2-4477-86e8-14586cc5a5a9/cover.jpg?v=1"/>
  <description>The time has come for Vaden to defend his faith in the face of cold, hard scientific rationality. Will AI take over the world, automating away everything that makes humans distinct? Or can Vaden defend the church of just-ism, the radical belief that AI is simply "just a tool"? Scott Aaronson, professor of computer science at UT Austin, goes head to head against the zealotry. 
Check out Scott's website (https://www.scottaaronson.com/) and his blog, Shtetl Optimized (https://scottaaronson.blog/). 
We discuss
Scott's views on education. Should we radically reform K-12? 
Is ChatGPT changing Scott's approach to teaching? 
The religion of "justa-ism" 
Is AI just a tool? 
Is there any principle which lets us say that AI won't be as general as humans? 
Aaronson's thesis of Artificial Intelligence 
Computational universality vs explanatory universality 
The many-worlds interpretation of quantum mechanics 
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Become a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
Have you been converted? Tell us at incrementspodcast@gmail.com
 Special Guest: Scott Aaronson.
</description>
  <itunes:keywords>AI, induction, AI doom, computation, quantum mechanics</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>The time has come for Vaden to defend his faith in the face of cold, hard scientific rationality. Will AI take over the world, automating away everything that makes humans distinct? Or can Vaden defend the church of just-ism, the radical belief that AI is simply &quot;just a tool&quot;? Scott Aaronson, professor of computer science at UT Austin, goes head to head against the zealotry. </p>

<p>Check out Scott&#39;s <a href="https://www.scottaaronson.com/" rel="nofollow">website</a> and his blog, <a href="https://scottaaronson.blog/" rel="nofollow">Shtetl Optimized</a>. </p>

<h1>We discuss</h1>

<ul>
<li>Scott&#39;s views on education. Should we radically reform K-12? </li>
<li>Is ChatGPT changing Scott&#39;s approach to teaching? </li>
<li>The religion of &quot;justa-ism&quot; </li>
<li>Is AI just a tool? </li>
<li>Is there any principle which lets us say that AI won&#39;t be as general as humans? </li>
<li>Aaronson&#39;s thesis of Artificial Intelligence </li>
<li>Computational universality vs explanatory universality </li>
<li>The many-worlds interpretation of quantum mechanics </li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Become a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>Have you been converted? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p>Special Guest: Scott Aaronson.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>The time has come for Vaden to defend his faith in the face of cold, hard scientific rationality. Will AI take over the world, automating away everything that makes humans distinct? Or can Vaden defend the church of just-ism, the radical belief that AI is simply &quot;just a tool&quot;? Scott Aaronson, professor of computer science at UT Austin, goes head to head against the zealotry. </p>

<p>Check out Scott&#39;s <a href="https://www.scottaaronson.com/" rel="nofollow">website</a> and his blog, <a href="https://scottaaronson.blog/" rel="nofollow">Shtetl Optimized</a>. </p>

<h1>We discuss</h1>

<ul>
<li>Scott&#39;s views on education. Should we radically reform K-12? </li>
<li>Is ChatGPT changing Scott&#39;s approach to teaching? </li>
<li>The religion of &quot;justa-ism&quot; </li>
<li>Is AI just a tool? </li>
<li>Is there any principle which lets us say that AI won&#39;t be as general as humans? </li>
<li>Aaronson&#39;s thesis of Artificial Intelligence </li>
<li>Computational universality vs explanatory universality </li>
<li>The many-worlds interpretation of quantum mechanics </li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Become a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>Have you been converted? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p>Special Guest: Scott Aaronson.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#77 (Bonus) - AI Doom Debate (w/ Liron Shapira)</title>
  <link>https://www.incrementspodcast.com/77</link>
  <guid isPermaLink="false">24e93eab-5281-418f-bddf-9516c7c5f8d7</guid>
  <pubDate>Tue, 19 Nov 2024 13:30:00 -0800</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/24e93eab-5281-418f-bddf-9516c7c5f8d7.mp3" length="137335802" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Part II of the great debate! Is AI about to kill everyone? Should you cash in on those vacation days now? </itunes:subtitle>
  <itunes:duration>2:21:22</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/2/24e93eab-5281-418f-bddf-9516c7c5f8d7/cover.jpg?v=2"/>
  <description>Back on Liron's Doom Debates podcast! Will we actually get around to the subject of superintelligent AI this time? Is it time to worry about the end of the world? Will Ben and Vaden emotionally recover from the devastating youtube comments from the last episode? 
Follow Liron on twitter (@liron) and check out the Doom Debates youtube channel (https://www.youtube.com/@DoomDebates) and podcast (https://podcasts.apple.com/us/podcast/doom-debates/id1751366208).  
We discuss
Definitions of "new knowledge" 
The reliance of deep learning on induction 
Can AIs be creative? 
The limits of statistical prediction 
Predictions of what deep learning cannot accomplish 
Can ChatGPT write funny jokes? 
Trends versus principles 
The psychological consequences of doomerism
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
The world is going to end soon, might as well get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
Was Vaden's two week anti-debate bro reeducation camp successful? Tell us at incrementspodcast@gmail.com
 Special Guest: Liron Shapira.
</description>
  <itunes:keywords>AI, superintelligence, existential risk, novelty, induction, deep learning, comedy, creativity, knowledge</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Back on Liron&#39;s <strong>Doom Debates</strong> podcast! Will we actually get around to the subject of superintelligent AI this time? Is it time to worry about the end of the world? Will Ben and Vaden emotionally recover from the devastating youtube comments from the last episode? </p>

<p>Follow Liron on twitter (@liron) and check out the Doom Debates <a href="https://www.youtube.com/@DoomDebates" rel="nofollow">youtube channel</a> and <a href="https://podcasts.apple.com/us/podcast/doom-debates/id1751366208" rel="nofollow">podcast</a>.  </p>

<h1>We discuss</h1>

<ul>
<li>Definitions of &quot;new knowledge&quot; </li>
<li>The reliance of deep learning on induction </li>
<li>Can AIs be creative? </li>
<li>The limits of statistical prediction </li>
<li>Predictions of what deep learning cannot accomplish </li>
<li>Can ChatGPT write funny jokes? </li>
<li>Trends versus principles </li>
<li>The psychological consequences of doomerism</li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>The world is going to end soon, might as well get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>Was Vaden&#39;s two week anti-debate bro reeducation camp successful? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p>Special Guest: Liron Shapira.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Back on Liron&#39;s <strong>Doom Debates</strong> podcast! Will we actually get around to the subject of superintelligent AI this time? Is it time to worry about the end of the world? Will Ben and Vaden emotionally recover from the devastating youtube comments from the last episode? </p>

<p>Follow Liron on twitter (@liron) and check out the Doom Debates <a href="https://www.youtube.com/@DoomDebates" rel="nofollow">youtube channel</a> and <a href="https://podcasts.apple.com/us/podcast/doom-debates/id1751366208" rel="nofollow">podcast</a>.  </p>

<h1>We discuss</h1>

<ul>
<li>Definitions of &quot;new knowledge&quot; </li>
<li>The reliance of deep learning on induction </li>
<li>Can AIs be creative? </li>
<li>The limits of statistical prediction </li>
<li>Predictions of what deep learning cannot accomplish </li>
<li>Can ChatGPT write funny jokes? </li>
<li>Trends versus principles </li>
<li>The psychological consequences of doomerism</li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>The world is going to end soon, might as well get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>Was Vaden&#39;s two week anti-debate bro reeducation camp successful? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p>Special Guest: Liron Shapira.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#76 (Bonus) - Is P(doom) meaningful? Debating epistemology (w/ Liron Shapira) </title>
  <link>https://www.incrementspodcast.com/76</link>
  <guid isPermaLink="false">c2b5df9d-ecb4-43d0-9e80-a713495335d8</guid>
  <pubDate>Fri, 08 Nov 2024 14:30:00 -0800</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/c2b5df9d-ecb4-43d0-9e80-a713495335d8.mp3" length="98349666" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>We were invited onto Liron Shapira's "Doom debates" to discuss Bayesian versus Popperian epistemology, AI doom, and superintelligence. Unsurprisingly, we got about one third of the way through the first subject ... </itunes:subtitle>
  <itunes:duration>2:50:58</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/c/c2b5df9d-ecb4-43d0-9e80-a713495335d8/cover.jpg?v=2"/>
  <description>Liron Shapira, host of [Doom Debates], invited us on to discuss Popperian versus Bayesian epistemology and whether we're worried about AI doom. As one might expect knowing us, we only got about halfway through the first subject, so get yourselves ready (presumably with many drinks) for part II in a few weeks! The era of Ben and Vaden's rowdy youtube debates has begun. Vaden is jubilant, Ben is uncomfortable, and the world has never been more annoyed by Popperians. 
Follow Liron on twitter (@liron) and check out the Doom Debates youtube channel (https://www.youtube.com/@DoomDebates) and podcast (https://podcasts.apple.com/us/podcast/doom-debates/id1751366208).  
We discuss
Whether we're concerned about AI doom 
Bayesian reasoning versus Popperian reasoning 
Whether it makes sense to put numbers on all your beliefs 
Solomonoff induction 
Objective vs subjective Bayesianism 
Prediction markets and superforecasting 
References
Vaden's blog post on Cox's Theorem and Yudkowsky's claims of "Laws of Rationality": https://vmasrani.github.io/blog/2021/the_credence_assumption/
Disproof of probabilistic induction (including Solomonoff Induction): https://arxiv.org/abs/2107.00749 
EA Post Vaden Mentioned regarding predictions being uncalibrated more than 1yr out: https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations
Article by Gavin Leech and Misha Yagudin on the reliability of forecasters: https://ifp.org/can-policymakers-trust-forecasters/
Superforecaster p(doom) is ~1%: https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/#:~:text=Domain%20experts%20in%20AI%20estimated,by%202100%20(around%2090%25).
The existential risk persuasion tournament https://www.astralcodexten.com/p/the-extinction-tournament
Some more info in Ben's article on superforecasting: https://benchugg.com/writing/superforecasting/
Slides on Content vs Probability: https://vmasrani.github.io/assets/pdf/popper_good.pdf
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Trust in the reverend Bayes and get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
What's your credence that the second debate is as fun as the first? Tell us at incrementspodcast@gmail.com 
 Special Guest: Liron Shapira.
</description>
  <itunes:keywords>AI, belief, Popper, Bayes, epistemology, prediction, induction</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Liron Shapira, host of <strong>Doom Debates</strong>, invited us on to discuss Popperian versus Bayesian epistemology and whether we&#39;re worried about AI doom. As one might expect knowing us, we only got about halfway through the first subject, so get yourselves ready (presumably with many drinks) for part II in a few weeks! The era of Ben and Vaden&#39;s rowdy youtube debates has begun. Vaden is jubilant, Ben is uncomfortable, and the world has never been more annoyed by Popperians. </p>

<p>Follow Liron on twitter (@liron) and check out the Doom Debates <a href="https://www.youtube.com/@DoomDebates" rel="nofollow">youtube channel</a> and <a href="https://podcasts.apple.com/us/podcast/doom-debates/id1751366208" rel="nofollow">podcast</a>.  </p>

<h1>We discuss</h1>

<ul>
<li>Whether we&#39;re concerned about AI doom </li>
<li>Bayesian reasoning versus Popperian reasoning </li>
<li>Whether it makes sense to put numbers on all your beliefs </li>
<li>Solomonoff induction </li>
<li>Objective vs subjective Bayesianism </li>
<li>Prediction markets and superforecasting </li>
</ul>

<h1>References</h1>

<ul>
<li>Vaden&#39;s blog post on Cox&#39;s Theorem and Yudkowsky&#39;s claims of &quot;Laws of Rationality&quot;: <a href="https://vmasrani.github.io/blog/2021/the_credence_assumption/" rel="nofollow">https://vmasrani.github.io/blog/2021/the_credence_assumption/</a></li>
<li>Disproof of probabilistic induction (including Solomonoff Induction): <a href="https://arxiv.org/abs/2107.00749" rel="nofollow">https://arxiv.org/abs/2107.00749</a> </li>
<li>EA Post Vaden Mentioned regarding predictions being uncalibrated more than 1yr out: <a href="https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations" rel="nofollow">https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations</a></li>
<li>Article by Gavin Leech and Misha Yagudin on the reliability of forecasters: <a href="https://ifp.org/can-policymakers-trust-forecasters/" rel="nofollow">https://ifp.org/can-policymakers-trust-forecasters/</a></li>
<li>Superforecaster p(doom) is ~1%: <a href="https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/#:%7E:text=Domain%20experts%20in%20AI%20estimated,by%202100%20(around%2090%25)" rel="nofollow">https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/#:~:text=Domain%20experts%20in%20AI%20estimated,by%202100%20(around%2090%25)</a>.</li>
<li>The existential risk persuasion tournament <a href="https://www.astralcodexten.com/p/the-extinction-tournament" rel="nofollow">https://www.astralcodexten.com/p/the-extinction-tournament</a></li>
<li>Some more info in Ben&#39;s article on superforecasting: <a href="https://benchugg.com/writing/superforecasting/" rel="nofollow">https://benchugg.com/writing/superforecasting/</a></li>
<li>Slides on Content vs Probability: <a href="https://vmasrani.github.io/assets/pdf/popper_good.pdf" rel="nofollow">https://vmasrani.github.io/assets/pdf/popper_good.pdf</a></li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Trust in the reverend Bayes and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>What&#39;s your credence that the second debate is as fun as the first? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a> </p><p>Special Guest: Liron Shapira.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Liron Shapira, host of <strong>Doom Debates</strong>, invited us on to discuss Popperian versus Bayesian epistemology and whether we&#39;re worried about AI doom. As one might expect knowing us, we only got about halfway through the first subject, so get yourselves ready (presumably with many drinks) for part II in a few weeks! The era of Ben and Vaden&#39;s rowdy youtube debates has begun. Vaden is jubilant, Ben is uncomfortable, and the world has never been more annoyed by Popperians. </p>

<p>Follow Liron on twitter (@liron) and check out the Doom Debates <a href="https://www.youtube.com/@DoomDebates" rel="nofollow">youtube channel</a> and <a href="https://podcasts.apple.com/us/podcast/doom-debates/id1751366208" rel="nofollow">podcast</a>.  </p>

<h1>We discuss</h1>

<ul>
<li>Whether we&#39;re concerned about AI doom </li>
<li>Bayesian reasoning versus Popperian reasoning </li>
<li>Whether it makes sense to put numbers on all your beliefs </li>
<li>Solomonoff induction </li>
<li>Objective vs subjective Bayesianism </li>
<li>Prediction markets and superforecasting </li>
</ul>

<h1>References</h1>

<ul>
<li>Vaden&#39;s blog post on Cox&#39;s Theorem and Yudkowsky&#39;s claims of &quot;Laws of Rationality&quot;: <a href="https://vmasrani.github.io/blog/2021/the_credence_assumption/" rel="nofollow">https://vmasrani.github.io/blog/2021/the_credence_assumption/</a></li>
<li>Disproof of probabilistic induction (including Solomonoff Induction): <a href="https://arxiv.org/abs/2107.00749" rel="nofollow">https://arxiv.org/abs/2107.00749</a> </li>
<li>EA Post Vaden Mentioned regarding predictions being uncalibrated more than 1yr out: <a href="https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations" rel="nofollow">https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations</a></li>
<li>Article by Gavin Leech and Misha Yagudin on the reliability of forecasters: <a href="https://ifp.org/can-policymakers-trust-forecasters/" rel="nofollow">https://ifp.org/can-policymakers-trust-forecasters/</a></li>
<li>Superforecaster p(doom) is ~1%: <a href="https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/#:%7E:text=Domain%20experts%20in%20AI%20estimated,by%202100%20(around%2090%25)" rel="nofollow">https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/#:~:text=Domain%20experts%20in%20AI%20estimated,by%202100%20(around%2090%25)</a>.</li>
<li>The existential risk persuasion tournament <a href="https://www.astralcodexten.com/p/the-extinction-tournament" rel="nofollow">https://www.astralcodexten.com/p/the-extinction-tournament</a></li>
<li>Some more info in Ben&#39;s article on superforecasting: <a href="https://benchugg.com/writing/superforecasting/" rel="nofollow">https://benchugg.com/writing/superforecasting/</a></li>
<li>Slides on Content vs Probability: <a href="https://vmasrani.github.io/assets/pdf/popper_good.pdf" rel="nofollow">https://vmasrani.github.io/assets/pdf/popper_good.pdf</a></li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Trust in the reverend Bayes and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>What&#39;s your credence that the second debate is as fun as the first? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a> </p><p>Special Guest: Liron Shapira.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#75 - The Problem of Induction, Relitigated (w/ Tamler Sommers)</title>
  <link>https://www.incrementspodcast.com/75</link>
  <guid isPermaLink="false">620c85f4-0377-4a5a-ba7e-71006bcb89b4</guid>
  <pubDate>Wed, 23 Oct 2024 09:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/620c85f4-0377-4a5a-ba7e-71006bcb89b4.mp3" length="98840196" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>When Very Bad Wizards meets Very Culty Popperians. Famed philosopher, podcaster, and Kant-hater Tamler Sommers joins the boys for a spirited disagreement over Popper, and whether he solved the Problem of Induction. </itunes:subtitle>
  <itunes:duration>1:41:13</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/6/620c85f4-0377-4a5a-ba7e-71006bcb89b4/cover.jpg?v=4"/>
  <description>When Very Bad Wizards meets Very Culty Popperians. We finally decided to have a real-life professional philosopher on the pod to call us out on our nonsense, and are honored to have on Tamler Sommers, from the esteemed Very Bad Wizards podcast, to argue with us about the Problem of Induction. Did Popper solve it, or does his proposed solution, like all the other attempts, "fail decisively"? 
(Warning: One of the two hosts maaay have revealed their Popperian dogmatism a bit throughout this episode. Whichever host that is - they shall remain unnamed - apologizes quietly and stubbornly under their breath.) 
Check out Tamler's website (https://www.tamlersommers.com/), his podcast (Very Bad Wizards (https://verybadwizards.com/)), or follow him on twitter (@tamler). 
We discuss
What is the problem of induction? 
Whether regularities really exist in nature
The difference between certainty and justification 
Popper's solution to the problem of induction 
If whiskey will taste like orange juice next week
What makes a good theory?
Why prediction is secondary to explanation for Popper 
If science and meditation are in conflict 
The boundaries of science  
References
Very Bad Wizards episode on induction (https://verybadwizards.com/episode/episode-294-the-scandal-of-philosophy-humes-problem-of-induction)
The problem of induction, by Wesley Salmon (https://home.csulb.edu/~cwallis/100/articles/salmon.html)
Hume on induction (https://plato.stanford.edu/entries/induction-problem/#HumeProb)
Errata
Vaden mentions in the episode how "Einstein's theory is better because it can explain earth's gravitational constant". He got some of the details wrong here - it's actually the inverse square law, not the gravitational constant. Listen to Edward Witten explain it much better here (https://www.youtube.com/watch?v=A_9RqsHYEAs). 
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @tamler
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Trust in our regularity and get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
If you are a Very Bad Wizards listener, hello! We're exactly like Tamler and David, except younger. Come join the Cult of Popper over at incrementspodcast@gmail.com 
Image credit: From this Aeon essay on Hume (https://aeon.co/essays/hume-is-the-amiable-modest-generous-philosopher-we-need-today). Illustration by Petra Eriksson at Handsome Frank.  Special Guest: Tamler Sommers.
</description>
  <itunes:keywords>induction, popper, belief, certainty, justification, deduction, logic</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>When Very Bad Wizards meets Very Culty Popperians. We finally decided to have a real-life professional philosopher on the pod to call us out on our nonsense, and are honored to have on Tamler Sommers, from the esteemed Very Bad Wizards podcast, to argue with us about the Problem of Induction. Did Popper solve it, or does his proposed solution, like all the other attempts, &quot;fail decisively&quot;? </p>

<p>(Warning: One of the two hosts maaay have revealed their Popperian dogmatism a bit throughout this episode. Whichever host that is - they shall remain unnamed - apologizes quietly and stubbornly under their breath.) </p>

<p>Check out <a href="https://www.tamlersommers.com/" rel="nofollow">Tamler&#39;s website</a>, his podcast (<a href="https://verybadwizards.com/" rel="nofollow">Very Bad Wizards</a>), or follow him on twitter (@tamler). </p>

<h1>We discuss</h1>

<ul>
<li>What is the problem of induction? </li>
<li>Whether regularities really exist in nature</li>
<li>The difference between certainty and justification </li>
<li>Popper&#39;s solution to the problem of induction </li>
<li>If whiskey will taste like orange juice next week</li>
<li>What makes a good theory?</li>
<li>Why prediction is secondary to explanation for Popper </li>
<li>If science and meditation are in conflict</li>
<li>The boundaries of science<br></li>
</ul>

<h1>References</h1>

<ul>
<li><a href="https://verybadwizards.com/episode/episode-294-the-scandal-of-philosophy-humes-problem-of-induction" rel="nofollow">Very Bad Wizards episode on induction</a></li>
<li><a href="https://home.csulb.edu/%7Ecwallis/100/articles/salmon.html" rel="nofollow">The problem of induction, by Wesley Salmon</a></li>
<li><a href="https://plato.stanford.edu/entries/induction-problem/#HumeProb" rel="nofollow">Hume on induction</a></li>
</ul>

<h1>Errata</h1>

<ul>
<li>Vaden says in the episode that &quot;Einstein&#39;s theory is better because it can explain earth&#39;s gravitational constant&quot;. He got some of the details wrong - it&#39;s actually the inverse square law, not the gravitational constant. Listen to Edward Witten explain it much better <a href="https://www.youtube.com/watch?v=A_9RqsHYEAs" rel="nofollow">here</a>. </li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @tamler</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Trust in our regularity and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>If you are a Very Bad Wizards listener, hello! We&#39;re exactly like Tamler and David, except younger. Come join the Cult of Popper over at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a> </p>

<p>Image credit: From this <a href="https://aeon.co/essays/hume-is-the-amiable-modest-generous-philosopher-we-need-today" rel="nofollow">Aeon essay on Hume</a>. Illustration by Petra Eriksson at Handsome Frank. </p><p>Special Guest: Tamler Sommers.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>When Very Bad Wizards meets Very Culty Popperians. We finally decided to have a real-life professional philosopher on the pod to call us out on our nonsense, and are honored to have on Tamler Sommers, from the esteemed Very Bad Wizards podcast, to argue with us about the Problem of Induction. Did Popper solve it, or does his proposed solution, like all the other attempts, &quot;fail decisively&quot;? </p>

<p>(Warning: One of the two hosts maaay have revealed their Popperian dogmatism a bit throughout this episode. Whichever host that is - they shall remain unnamed - apologizes quietly and stubbornly under their breath.) </p>

<p>Check out <a href="https://www.tamlersommers.com/" rel="nofollow">Tamler&#39;s website</a>, his podcast (<a href="https://verybadwizards.com/" rel="nofollow">Very Bad Wizards</a>), or follow him on twitter (@tamler). </p>

<h1>We discuss</h1>

<ul>
<li>What is the problem of induction? </li>
<li>Whether regularities really exist in nature</li>
<li>The difference between certainty and justification </li>
<li>Popper&#39;s solution to the problem of induction </li>
<li>If whiskey will taste like orange juice next week</li>
<li>What makes a good theory?</li>
<li>Why prediction is secondary to explanation for Popper </li>
<li>If science and meditation are in conflict</li>
<li>The boundaries of science<br></li>
</ul>

<h1>References</h1>

<ul>
<li><a href="https://verybadwizards.com/episode/episode-294-the-scandal-of-philosophy-humes-problem-of-induction" rel="nofollow">Very Bad Wizards episode on induction</a></li>
<li><a href="https://home.csulb.edu/%7Ecwallis/100/articles/salmon.html" rel="nofollow">The problem of induction, by Wesley Salmon</a></li>
<li><a href="https://plato.stanford.edu/entries/induction-problem/#HumeProb" rel="nofollow">Hume on induction</a></li>
</ul>

<h1>Errata</h1>

<ul>
<li>Vaden says in the episode that &quot;Einstein&#39;s theory is better because it can explain earth&#39;s gravitational constant&quot;. He got some of the details wrong - it&#39;s actually the inverse square law, not the gravitational constant. Listen to Edward Witten explain it much better <a href="https://www.youtube.com/watch?v=A_9RqsHYEAs" rel="nofollow">here</a>. </li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @tamler</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Trust in our regularity and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>If you are a Very Bad Wizards listener, hello! We&#39;re exactly like Tamler and David, except younger. Come join the Cult of Popper over at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a> </p>

<p>Image credit: From this <a href="https://aeon.co/essays/hume-is-the-amiable-modest-generous-philosopher-we-need-today" rel="nofollow">Aeon essay on Hume</a>. Illustration by Petra Eriksson at Handsome Frank. </p><p>Special Guest: Tamler Sommers.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#59 (C&amp;R, Chap 8) - On the Status of Science and Metaphysics (Plus reflections on the Brett Hall blog exchange) </title>
  <link>https://www.incrementspodcast.com/59</link>
  <guid isPermaLink="false">6363ebbf-c232-45f7-adbc-140ab1f61037</guid>
  <pubDate>Fri, 22 Dec 2023 12:00:00 -0800</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/6363ebbf-c232-45f7-adbc-140ab1f61037.mp3" length="82956119" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Chapter 8 of conjectures and refutations! Back on the horse baby, talkin' bout Kant, induction, irrefutability - all the good stuff. Oh, and also Vaden's failed blog exchange w/ Brett Hall</itunes:subtitle>
  <itunes:duration>1:26:24</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/6/6363ebbf-c232-45f7-adbc-140ab1f61037/cover.jpg?v=1"/>
  <description>Back to the C&amp;R series baby! Feels goooooood. Need some bar-room explanations for why induction is impossible? We gotchu. Need some historical background on where your boy Isaac got his ideas? We gotchu. Need to know how to refute the irrefutable? Gotchu there too homie, because today we're diving into Conjectures and Refutations, Chapter 8: On the Status of Science and Metaphysics. 
Oh, and we also discuss, in admittedly frustrated tones, the failed blog exchange between Brett Hall and Vaden on prediction and Austrianism. If you want the full listening experience, we suggest reading both posts before hearing our kvetching:
Vaden's post (https://vmasrani.github.io/blog/2023/predicting-human-behaviour/) 
Brett's "response" (https://www.bretthall.org/blog/humans-are-creative) 
Hold on to your hats for this one listeners, because she starts off rather spicy. 
We discuss
Why Kant believed in the truth of Newtonian mechanics 
Newton and his assertion that he arrived at his theory via induction 
Why this isn't true and is logically impossible
Was Copernicus influenced by Platonic ideals?
How Kepler came up with the idea of elliptical orbits 
Why finite observations are always compatible with infinitely many theories 
Kant's paradox and his solution 
Popper's updated solution to Kant's paradox 
The irrefutability of philosophical theories 
How can we say that irrefutable theories are false?
Annnnnd perhaps a few cheap shots here and there about Austrian Economics as well. 
References 
Some background history (https://plato.stanford.edu/entries/copernicus/notes.html#note-6) on Copernicus and why Ben thinks Popper is wrong 
Quotes
Listening to this statement you may well wonder how I can possibly hold a theory to be false and irrefutable at one and the same time—I who claim to be a rationalist. For how can a rationalist say of a theory that it is false and irrefutable? Is he not bound, as a rationalist, to refute a theory before he asserts that it is false? And conversely, is he not bound to admit that if a theory is irrefutable, it is true?
Now if we look upon a theory as a proposed solution to a set of problems, then the theory immediately lends itself to critical discussion—even if it is non-empirical and irrefutable. For we can now ask questions such as, Does it solve the problem? Does it solve it better than other theories? Has it perhaps merely shifted the problem? Is the solution simple? Is it fruitful? Does it perhaps contradict other philosophical theories needed for solving other problems?
Because, as you [Kant] said, we are not passive receptors of sense data, but active organisms. Because we react to our environment not always merely instinctively, but sometimes consciously and freely. Because we can invent myths, stories, theories; because we have a thirst for explanation, an insatiable curiosity, a wish to know. Because we not only invent stories and theories, but try them out and see whether they work and how they work. Because by a great effort, by trying hard and making many mistakes, we may sometimes, if we are lucky, succeed in hitting upon a story, an explanation, which ‘saves the phenomena’; perhaps by making up a myth about ‘invisibles’, such as atoms or gravitational forces, which explain the visible. Because knowledge is an adventure of ideas. These ideas, it is true, are produced by us, and not by the world around us; they are not merely the traces of repeated sensations or stimuli or what not; here you were right. But we are more active and free than even you believed; for similar observations or similar environmental situations do not, as your theory implied, produce similar explanations in different men. Nor is the fact that we create our theories, and that we attempt to impose them upon the world, an explanation of their success, as you believed. For the overwhelming majority of our theories, of our freely invented ideas, are unsuccessful; they do not stand up to searching tests, and are discarded as falsified by experience. Only a very few of them succeed, for a time, in the competitive struggle for survival.
C&amp;R Chapter 2
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Help us fund more hour-long blog posts and get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover anger management here (https://ko-fi.com/increments).
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
Would you rather be wrong or boring? Tell us at incrementspodcast@gmail.com 
</description>
  <itunes:keywords>conjectures-and-refutations, induction, Kant, metaphysics, irrefutability, Copernicus, austrianism, prediction</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Back to the C&amp;R series baby! Feels goooooood. Need some bar-room explanations for why induction is impossible? We gotchu. Need some historical background on where your boy Isaac got his ideas? We gotchu. Need to know how to refute the irrefutable? Gotchu there too homie, because today we&#39;re diving into Conjectures and Refutations, Chapter 8: On the Status of Science and Metaphysics. </p>

<p>Oh, and we also discuss, in admittedly frustrated tones, the failed blog exchange between Brett Hall and Vaden on prediction and Austrianism. If you want the full listening experience, we suggest reading both posts before hearing our kvetching:</p>

<ul>
<li><a href="https://vmasrani.github.io/blog/2023/predicting-human-behaviour/" rel="nofollow">Vaden&#39;s post</a> </li>
<li><a href="https://www.bretthall.org/blog/humans-are-creative" rel="nofollow">Brett&#39;s &quot;response&quot;</a> </li>
</ul>

<p>Hold on to your hats for this one listeners, because she starts off rather spicy. </p>

<h1>We discuss</h1>

<ul>
<li>Why Kant believed in the truth of Newtonian mechanics </li>
<li>Newton and his assertion that he arrived at his theory via induction </li>
<li>Why this isn&#39;t true and is logically impossible</li>
<li>Was Copernicus influenced by Platonic ideals?</li>
<li>How Kepler came up with the idea of elliptical orbits </li>
<li>Why finite observations are always compatible with infinitely many theories </li>
<li>Kant&#39;s paradox and his solution </li>
<li>Popper&#39;s updated solution to Kant&#39;s paradox </li>
<li>The irrefutability of philosophical theories </li>
<li>How can we say that irrefutable theories are false?</li>
<li>Annnnnd perhaps a few cheap shots here and there about Austrian Economics as well.</li>
</ul>

<h1>References</h1>

<ul>
<li>Some <a href="https://plato.stanford.edu/entries/copernicus/notes.html#note-6" rel="nofollow">background history</a> on Copernicus and why Ben thinks Popper is wrong</li>
</ul>

<h1>Quotes</h1>

<blockquote>
<p>Listening to this statement you may well wonder how I can possibly hold a theory to be false and irrefutable at one and the same time—I who claim to be a rationalist. For how can a rationalist say of a theory that it is false and irrefutable? Is he not bound, as a rationalist, to refute a theory before he asserts that it is false? And conversely, is he not bound to admit that if a theory is irrefutable, it is true?</p>

<p>Now if we look upon a theory as a proposed solution to a set of problems, then the theory immediately lends itself to critical discussion—even if it is non-empirical and irrefutable. For we can now ask questions such as, Does it solve the problem? Does it solve it better than other theories? Has it perhaps merely shifted the problem? Is the solution simple? Is it fruitful? Does it perhaps contradict other philosophical theories needed for solving other problems?</p>

<p>Because, as you [Kant] said, we are not passive receptors of sense data, but active organisms. Because we react to our environment not always merely instinctively, but sometimes consciously and freely. Because we can invent myths, stories, theories; because we have a thirst for explanation, an insatiable curiosity, a wish to know. Because we not only invent stories and theories, but try them out and see whether they work and how they work. Because by a great effort, by trying hard and making many mistakes, we may sometimes, if we are lucky, succeed in hitting upon a story, an explanation, which ‘saves the phenomena’; perhaps by making up a myth about ‘invisibles’, such as atoms or gravitational forces, which explain the visible. Because knowledge is an adventure of ideas. These ideas, it is true, are produced by us, and not by the world around us; they are not merely the traces of repeated sensations or stimuli or what not; here you were right. But we are more active and free than even you believed; for similar observations or similar environmental situations do not, as your theory implied, produce similar explanations in different men. Nor is the fact that we create our theories, and that we attempt to impose them upon the world, an explanation of their success, as you believed. For the overwhelming majority of our theories, of our freely invented ideas, are unsuccessful; they do not stand up to searching tests, and are discarded as falsified by experience. Only a very few of them succeed, for a time, in the competitive struggle for survival.<br>
C&amp;R Chapter 2</p>

</blockquote>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Help us fund more hour-long blog posts and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover anger management <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>Would you rather be wrong or boring? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Back to the C&amp;R series baby! Feels goooooood. Need some bar-room explanations for why induction is impossible? We gotchu. Need some historical background on where your boy Isaac got his ideas? We gotchu. Need to know how to refute the irrefutable? Gotchu there too homie, because today we&#39;re diving into Conjectures and Refutations, Chapter 8: On the Status of Science and Metaphysics. </p>

<p>Oh, and we also discuss, in admittedly frustrated tones, the failed blog exchange between Brett Hall and Vaden on prediction and Austrianism. If you want the full listening experience, we suggest reading both posts before hearing our kvetching:</p>

<ul>
<li><a href="https://vmasrani.github.io/blog/2023/predicting-human-behaviour/" rel="nofollow">Vaden&#39;s post</a> </li>
<li><a href="https://www.bretthall.org/blog/humans-are-creative" rel="nofollow">Brett&#39;s &quot;response&quot;</a> </li>
</ul>

<p>Hold on to your hats for this one listeners, because she starts off rather spicy. </p>

<h1>We discuss</h1>

<ul>
<li>Why Kant believed in the truth of Newtonian mechanics </li>
<li>Newton and his assertion that he arrived at his theory via induction </li>
<li>Why this isn&#39;t true and is logically impossible</li>
<li>Was Copernicus influenced by Platonic ideals?</li>
<li>How Kepler came up with the idea of elliptical orbits </li>
<li>Why finite observations are always compatible with infinitely many theories </li>
<li>Kant&#39;s paradox and his solution </li>
<li>Popper&#39;s updated solution to Kant&#39;s paradox </li>
<li>The irrefutability of philosophical theories </li>
<li>How can we say that irrefutable theories are false?</li>
<li>Annnnnd perhaps a few cheap shots here and there about Austrian Economics as well.</li>
</ul>

<h1>References</h1>

<ul>
<li>Some <a href="https://plato.stanford.edu/entries/copernicus/notes.html#note-6" rel="nofollow">background history</a> on Copernicus and why Ben thinks Popper is wrong</li>
</ul>

<h1>Quotes</h1>

<blockquote>
<p>Listening to this statement you may well wonder how I can possibly hold a theory to be false and irrefutable at one and the same time—I who claim to be a rationalist. For how can a rationalist say of a theory that it is false and irrefutable? Is he not bound, as a rationalist, to refute a theory before he asserts that it is false? And conversely, is he not bound to admit that if a theory is irrefutable, it is true?</p>

<p>Now if we look upon a theory as a proposed solution to a set of problems, then the theory immediately lends itself to critical discussion—even if it is non-empirical and irrefutable. For we can now ask questions such as, Does it solve the problem? Does it solve it better than other theories? Has it perhaps merely shifted the problem? Is the solution simple? Is it fruitful? Does it perhaps contradict other philosophical theories needed for solving other problems?</p>

<p>Because, as you [Kant] said, we are not passive receptors of sense data, but active organisms. Because we react to our environment not always merely instinctively, but sometimes consciously and freely. Because we can invent myths, stories, theories; because we have a thirst for explanation, an insatiable curiosity, a wish to know. Because we not only invent stories and theories, but try them out and see whether they work and how they work. Because by a great effort, by trying hard and making many mistakes, we may sometimes, if we are lucky, succeed in hitting upon a story, an explanation, which ‘saves the phenomena’; perhaps by making up a myth about ‘invisibles’, such as atoms or gravitational forces, which explain the visible. Because knowledge is an adventure of ideas. These ideas, it is true, are produced by us, and not by the world around us; they are not merely the traces of repeated sensations or stimuli or what not; here you were right. But we are more active and free than even you believed; for similar observations or similar environmental situations do not, as your theory implied, produce similar explanations in different men. Nor is the fact that we create our theories, and that we attempt to impose them upon the world, an explanation of their success, as you believed. For the overwhelming majority of our theories, of our freely invented ideas, are unsuccessful; they do not stand up to searching tests, and are discarded as falsified by experience. Only a very few of them succeed, for a time, in the competitive struggle for survival.<br>
C&amp;R Chapter 2</p>

</blockquote>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Help us fund more hour-long blog posts and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover anger management <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>Would you rather be wrong or boring? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#21 (C&amp;R Series, Ch.1) - The Problem of Induction</title>
  <link>https://www.incrementspodcast.com/21</link>
  <guid isPermaLink="false">Buzzsprout-8195969</guid>
  <pubDate>Tue, 23 Mar 2021 09:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/86b770bb-6b37-44ec-acdc-9d810bee3b7f.mp3" length="45649800" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle></itunes:subtitle>
  <itunes:duration>53:58</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/cover.jpg?v=18"/>
  <description>&lt;p&gt;After a long digression, we finally return to the Conjectures and Refutations series. In this episode we cover Chapter 1: &lt;em&gt;Science: Conjectures and Refutations&lt;/em&gt;. In particular, we focus on one of the trickiest Popperian concepts to wrap one's head around - the problem of induction.  &lt;br&gt; &lt;br&gt;&lt;em&gt;References:&lt;/em&gt;&lt;/p&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Scientific_law"&gt;Wiki on scientific laws &lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://en.wikipedia.org/wiki/Dialogues_Concerning_Natural_Religion"&gt;Hume's dialogues concerning natural religion&lt;/a&gt;  &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://vmasrani.github.io/assets/pdf/prob_induction_disproof.pdf"&gt;Proof of the impossibility of probability induction&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;One of the &lt;a href="https://www.youtube.com/watch?v=Fd1U_MC_p3M&amp;amp;ab_channel=AeonVideo"&gt;YouTube videos&lt;/a&gt; on induction. &lt;/li&gt;
&lt;/ul&gt;&lt;p&gt;And in case you were wondering what happened to the two unfalsifiable theories Popper attacks in this chapter, you'll be pleased to know that they have merged into a super theory. We give you &lt;em&gt;Psychoanalytic-Marxism: &lt;/em&gt;&lt;a href="http://oldsite.english.ucsb.edu/faculty/janmohamed/Psychoanalytic-Marxism.pdf"&gt;http://oldsite.english.ucsb.edu/faculty/janmohamed/Psychoanalytic-Marxism.pdf&lt;/a&gt;.&lt;br&gt; &lt;br&gt;Send us your favorite unfalsifiable theory at &lt;em&gt;incrementspodcast@gmail.com&lt;/em&gt;&lt;/p&gt;

audio updated: 29/08/2021 
</description>
  <itunes:keywords>science, induction, law, popper</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>After a long digression, we finally return to the Conjectures and Refutations series. In this episode we cover Chapter 1: <em>Science: Conjectures and Refutations</em>. In particular, we focus on one of the trickiest Popperian concepts to wrap one&apos;s head around - the problem of induction.  <br/> <br/><em>References:</em></p><ul><li><a href='https://en.wikipedia.org/wiki/Scientific_law'>Wiki on scientific laws </a></li><li><a href='https://en.wikipedia.org/wiki/Dialogues_Concerning_Natural_Religion'>Hume&apos;s dialogues concerning natural religion</a>  </li><li><a href='https://vmasrani.github.io/assets/pdf/prob_induction_disproof.pdf'>Proof of the impossibility of probability induction</a> </li><li>One of the <a href='https://www.youtube.com/watch?v=Fd1U_MC_p3M&amp;ab_channel=AeonVideo'>YouTube videos</a> on induction. </li></ul><p>And in case you were wondering what happened to the two unfalsifiable theories Popper attacks in this chapter, you&apos;ll be pleased to know that they have merged into a super theory. We give you <em>Psychoanalytic-Marxism: </em><a href='http://oldsite.english.ucsb.edu/faculty/janmohamed/Psychoanalytic-Marxism.pdf'>http://oldsite.english.ucsb.edu/faculty/janmohamed/Psychoanalytic-Marxism.pdf</a>.<br/> <br/>Send us your favorite unfalsifiable theory at <em>incrementspodcast@gmail.com</em></p>

<p><em>audio updated: 29/08/2021</em></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>After a long digression, we finally return to the Conjectures and Refutations series. In this episode we cover Chapter 1: <em>Science: Conjectures and Refutations</em>. In particular, we focus on one of the trickiest Popperian concepts to wrap one&apos;s head around - the problem of induction.  <br/> <br/><em>References:</em></p><ul><li><a href='https://en.wikipedia.org/wiki/Scientific_law'>Wiki on scientific laws </a></li><li><a href='https://en.wikipedia.org/wiki/Dialogues_Concerning_Natural_Religion'>Hume&apos;s dialogues concerning natural religion</a>  </li><li><a href='https://vmasrani.github.io/assets/pdf/prob_induction_disproof.pdf'>Proof of the impossibility of probability induction</a> </li><li>One of the <a href='https://www.youtube.com/watch?v=Fd1U_MC_p3M&amp;ab_channel=AeonVideo'>YouTube videos</a> on induction. </li></ul><p>And in case you were wondering what happened to the two unfalsifiable theories Popper attacks in this chapter, you&apos;ll be pleased to know that they have merged into a super theory. We give you <em>Psychoanalytic-Marxism: </em><a href='http://oldsite.english.ucsb.edu/faculty/janmohamed/Psychoanalytic-Marxism.pdf'>http://oldsite.english.ucsb.edu/faculty/janmohamed/Psychoanalytic-Marxism.pdf</a>.<br/> <br/>Send us your favorite unfalsifiable theory at <em>incrementspodcast@gmail.com</em></p>

<p><em>audio updated: 29/08/2021</em></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#6 - Philosophy of Probability I: Introduction</title>
  <link>https://www.incrementspodcast.com/6</link>
  <guid isPermaLink="false">Buzzsprout-4407194</guid>
  <pubDate>Wed, 01 Jul 2020 18:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/eeb49cea-deb7-4957-8f51-8d5f0949c799.mp3" length="55868881" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle></itunes:subtitle>
  <itunes:duration>1:17:05</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/e/eeb49cea-deb7-4957-8f51-8d5f0949c799/cover.jpg?v=1"/>
  <description>&lt;p&gt;Don't leave yet - we swear this will be more interesting than it sounds ... &lt;br&gt;&lt;br&gt;... But a drink will definitely help. Ben and Vaden dive into the interpretations behind probability. What do people mean when they use the word, and why do we use this one tool to describe different concepts? The rowdiness truly kicks in when Vaden releases his pent-up critique of Bayesianism, thereby losing both his friends and PhD position. But at least he's ingratiated himself with Karl Popper. &lt;br&gt;&lt;br&gt;&lt;b&gt;&lt;em&gt;References:&lt;/em&gt;&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://vmasrani.github.io/assets/popper_good.pdf"&gt;Vaden's  Slides&lt;/a&gt; on a 1975 &lt;a href="https://www.jstor.org/stable/20115014?seq=1#metadata_info_tab_contents"&gt;paper&lt;/a&gt; by Irving John Good titled &lt;em&gt;Explicativity, Corroboration, and the Relative Odds of Hypotheses&lt;/em&gt;. The paper is I.J. Good’s response to Karl Popper, and in the presentation I compare the two philosophers’ views on probability, epistemology, induction, simplicity, and content.&lt;/li&gt;
&lt;li&gt;&lt;a href="http://www.mrcc.uqam.ca/Publications/articles/deElia_MWR2005_.pdf"&gt;Diversity in Interpretations of Probability: Implications for Weather Forecasting&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Andrew Gelman, &lt;a href="http://www.stat.columbia.edu/~gelman/research/published/philosophy.pdf"&gt;Philosophy and the practice of Bayesian statistics&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Popper quote: &lt;em&gt;"Those who identify confirmation with probability must believe that a high degree of probability is desirable. They implicitly accept the rule: ‘Always choose the most probable hypothesis!’ Now it can be easily shown that this rule is equivalent to the following rule: ‘Always choose the hypothesis which goes as little beyond the evidence as possible!’ And this, in turn, can be shown to be equivalent, not only to ‘Always accept the hypothesis with the lowest content (within the limits of your task, for example, your task of predicting)!’, but also to ‘Always choose the hypothesis which has the highest degree of ad hoc character (within the limits of your task)!’" &lt;/em&gt;(Conjectures and Refutations p.391) &lt;/li&gt;
&lt;/ul&gt;&lt;p&gt;Get in touch at incrementspodcast@gmail.com.&lt;br&gt;&lt;br&gt;&lt;em&gt;audio updated 13/12/2020&lt;/em&gt;&lt;/p&gt; 
</description>
  <itunes:keywords>probability, bayesianism, frequency, induction, epistemology</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Don&apos;t leave yet - we swear this will be more interesting than it sounds ... <br/><br/>... But a drink will definitely help. Ben and Vaden dive into the interpretations behind probability. What do people mean when they use the word, and why do we use this one tool to describe different concepts? The rowdiness truly kicks in when Vaden releases his pent-up critique of Bayesianism, thereby losing both his friends and PhD position. But at least he&apos;s ingratiated himself with Karl Popper. <br/><br/><b><em>References:</em></b></p><ul><li><a href='https://vmasrani.github.io/assets/popper_good.pdf'>Vaden&apos;s Slides</a> on a 1975 <a href='https://www.jstor.org/stable/20115014?seq=1#metadata_info_tab_contents'>paper</a> by Irving John Good titled <em>Explicativity, Corroboration, and the Relative Odds of Hypotheses</em>. The paper is I.J. Good’s response to Karl Popper, and in the presentation I compare the two philosophers’ views on probability, epistemology, induction, simplicity, and content.</li><li><a href='http://www.mrcc.uqam.ca/Publications/articles/deElia_MWR2005_.pdf'>Diversity in Interpretations of Probability: Implications for Weather Forecasting</a></li><li>Andrew Gelman, <a href='http://www.stat.columbia.edu/~gelman/research/published/philosophy.pdf'>Philosophy and the practice of Bayesian statistics</a></li><li>Popper quote: <em>&quot;Those who identify confirmation with probability must believe that a high degree of probability is desirable. 
They implicitly accept the rule: ‘Always choose the most probable hypothesis!’ Now it can be easily shown that this rule is equivalent to the following rule: ‘Always choose the hypothesis which goes as little beyond the evidence as possible!’ And this, in turn, can be shown to be equivalent, not only to ‘Always accept the hypothesis with the lowest content (within the limits of your task, for example, your task of predicting)!’, but also to ‘Always choose the hypothesis which has the highest degree of ad hoc character (within the limits of your task)!’&quot; </em>(Conjectures and Refutations p.391) </li></ul><p>Get in touch at incrementspodcast@gmail.com.<br/><br/><em>audio updated 13/12/2020</em></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Don&apos;t leave yet - we swear this will be more interesting than it sounds ... <br/><br/>... But a drink will definitely help. Ben and Vaden dive into the interpretations behind probability. What do people mean when they use the word, and why do we use this one tool to describe different concepts? The rowdiness truly kicks in when Vaden releases his pent-up critique of Bayesianism, thereby losing both his friends and PhD position. But at least he&apos;s ingratiated himself with Karl Popper. <br/><br/><b><em>References:</em></b></p><ul><li><a href='https://vmasrani.github.io/assets/popper_good.pdf'>Vaden&apos;s Slides</a> on a 1975 <a href='https://www.jstor.org/stable/20115014?seq=1#metadata_info_tab_contents'>paper</a> by Irving John Good titled <em>Explicativity, Corroboration, and the Relative Odds of Hypotheses</em>. The paper is I.J. Good’s response to Karl Popper, and in the presentation I compare the two philosophers’ views on probability, epistemology, induction, simplicity, and content.</li><li><a href='http://www.mrcc.uqam.ca/Publications/articles/deElia_MWR2005_.pdf'>Diversity in Interpretations of Probability: Implications for Weather Forecasting</a></li><li>Andrew Gelman, <a href='http://www.stat.columbia.edu/~gelman/research/published/philosophy.pdf'>Philosophy and the practice of Bayesian statistics</a></li><li>Popper quote: <em>&quot;Those who identify confirmation with probability must believe that a high degree of probability is desirable. 
They implicitly accept the rule: ‘Always choose the most probable hypothesis!’ Now it can be easily shown that this rule is equivalent to the following rule: ‘Always choose the hypothesis which goes as little beyond the evidence as possible!’ And this, in turn, can be shown to be equivalent, not only to ‘Always accept the hypothesis with the lowest content (within the limits of your task, for example, your task of predicting)!’, but also to ‘Always choose the hypothesis which has the highest degree of ad hoc character (within the limits of your task)!’&quot; </em>(Conjectures and Refutations p.391) </li></ul><p>Get in touch at incrementspodcast@gmail.com.<br/><br/><em>audio updated 13/12/2020</em></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
