<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web01.fireside.fm</fireside:hostname>
    <fireside:genDate>Tue, 07 Apr 2026 20:11:24 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>Increments - Episodes Tagged with “Bayes”</title>
    <link>https://www.incrementspodcast.com/tags/bayes</link>
    <pubDate>Fri, 08 Nov 2024 14:30:00 -0800</pubDate>
    <description>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>Science, Philosophy, Epistemology, Mayhem</itunes:subtitle>
    <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
    <itunes:summary>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/cover.jpg?v=18"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>Philosophy,Science,Ethics,Progress,Knowledge,Computer Science,Conversation,Error-Correction</itunes:keywords>
    <itunes:owner>
      <itunes:name>Ben Chugg and Vaden Masrani</itunes:name>
      <itunes:email>incrementspodcast@gmail.com</itunes:email>
    </itunes:owner>
<itunes:category text="Society &amp; Culture">
  <itunes:category text="Philosophy"/>
</itunes:category>
<itunes:category text="Science"/>
<item>
  <title>#76 (Bonus) - Is P(doom) meaningful? Debating epistemology (w/ Liron Shapira) </title>
  <link>https://www.incrementspodcast.com/76</link>
  <guid isPermaLink="false">c2b5df9d-ecb4-43d0-9e80-a713495335d8</guid>
  <pubDate>Fri, 08 Nov 2024 14:30:00 -0800</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/c2b5df9d-ecb4-43d0-9e80-a713495335d8.mp3" length="98349666" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>We were invited onto Liron Shapira's "Doom debates" to discuss Bayesian versus Popperian epistemology, AI doom, and superintelligence. Unsurprisingly, we got about one third of the way through the first subject ... </itunes:subtitle>
  <itunes:duration>2:50:58</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/c/c2b5df9d-ecb4-43d0-9e80-a713495335d8/cover.jpg?v=2"/>
  <description>Liron Shapira, host of Doom Debates, invited us on to discuss Popperian versus Bayesian epistemology and whether we're worried about AI doom. As one might expect knowing us, we only got about halfway through the first subject, so get yourselves ready (presumably with many drinks) for part II in a few weeks! The era of Ben and Vaden's rowdy youtube debates has begun. Vaden is jubilant, Ben is uncomfortable, and the world has never been more annoyed by Popperians. 
Follow Liron on twitter (@liron) and check out the Doom Debates youtube channel (https://www.youtube.com/@DoomDebates) and podcast (https://podcasts.apple.com/us/podcast/doom-debates/id1751366208).  
We discuss
Whether we're concerned about AI doom 
Bayesian reasoning versus Popperian reasoning 
Whether it makes sense to put numbers on all your beliefs 
Solomonoff induction 
Objective vs subjective Bayesianism 
Prediction markets and superforecasting 
References
Vaden's blog post on Cox's Theorem and Yudkowsky's claims of "Laws of Rationality": https://vmasrani.github.io/blog/2021/the_credence_assumption/
Disproof of probabilistic induction (including Solomonoff Induction): https://arxiv.org/abs/2107.00749 
EA Post Vaden Mentioned regarding predictions being uncalibrated more than 1yr out: https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations
Article by Gavin Leech and Misha Yagudin on the reliability of forecasters: https://ifp.org/can-policymakers-trust-forecasters/
Superforecaster p(doom) is ~1%: https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/#:~:text=Domain%20experts%20in%20AI%20estimated,by%202100%20(around%2090%25).
The existential risk persuasion tournament https://www.astralcodexten.com/p/the-extinction-tournament
Some more info in Ben's article on superforecasting: https://benchugg.com/writing/superforecasting/
Slides on Content vs Probability: https://vmasrani.github.io/assets/pdf/popper_good.pdf
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Trust in the reverend Bayes and get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
What's your credence that the second debate is as fun as the first? Tell us at incrementspodcast@gmail.com 
 Special Guest: Liron Shapira.
</description>
  <itunes:keywords>AI, belief, Popper, Bayes, epistemology, prediction, induction</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Liron Shapira, host of Doom Debates, invited us on to discuss Popperian versus Bayesian epistemology and whether we&#39;re worried about AI doom. As one might expect knowing us, we only got about halfway through the first subject, so get yourselves ready (presumably with many drinks) for part II in a few weeks! The era of Ben and Vaden&#39;s rowdy youtube debates has begun. Vaden is jubilant, Ben is uncomfortable, and the world has never been more annoyed by Popperians. </p>

<p>Follow Liron on twitter (@liron) and check out the Doom Debates <a href="https://www.youtube.com/@DoomDebates" rel="nofollow">youtube channel</a> and <a href="https://podcasts.apple.com/us/podcast/doom-debates/id1751366208" rel="nofollow">podcast</a>.  </p>

<h1>We discuss</h1>

<ul>
<li>Whether we&#39;re concerned about AI doom </li>
<li>Bayesian reasoning versus Popperian reasoning </li>
<li>Whether it makes sense to put numbers on all your beliefs </li>
<li>Solomonoff induction </li>
<li>Objective vs subjective Bayesianism </li>
<li>Prediction markets and superforecasting </li>
</ul>

<h1>References</h1>

<ul>
<li>Vaden&#39;s blog post on Cox&#39;s Theorem and Yudkowsky&#39;s claims of &quot;Laws of Rationality&quot;: <a href="https://vmasrani.github.io/blog/2021/the_credence_assumption/" rel="nofollow">https://vmasrani.github.io/blog/2021/the_credence_assumption/</a></li>
<li>Disproof of probabilistic induction (including Solomonoff Induction): <a href="https://arxiv.org/abs/2107.00749" rel="nofollow">https://arxiv.org/abs/2107.00749</a> </li>
<li>EA Post Vaden Mentioned regarding predictions being uncalibrated more than 1yr out: <a href="https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations" rel="nofollow">https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations</a></li>
<li>Article by Gavin Leech and Misha Yagudin on the reliability of forecasters: <a href="https://ifp.org/can-policymakers-trust-forecasters/" rel="nofollow">https://ifp.org/can-policymakers-trust-forecasters/</a></li>
<li>Superforecaster p(doom) is ~1%: <a href="https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/#:%7E:text=Domain%20experts%20in%20AI%20estimated,by%202100%20(around%2090%25)" rel="nofollow">https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/#:~:text=Domain%20experts%20in%20AI%20estimated,by%202100%20(around%2090%25)</a>.</li>
<li>The existential risk persuasion tournament <a href="https://www.astralcodexten.com/p/the-extinction-tournament" rel="nofollow">https://www.astralcodexten.com/p/the-extinction-tournament</a></li>
<li>Some more info in Ben&#39;s article on superforecasting: <a href="https://benchugg.com/writing/superforecasting/" rel="nofollow">https://benchugg.com/writing/superforecasting/</a></li>
<li>Slides on Content vs Probability: <a href="https://vmasrani.github.io/assets/pdf/popper_good.pdf" rel="nofollow">https://vmasrani.github.io/assets/pdf/popper_good.pdf</a></li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Trust in the reverend Bayes and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>What&#39;s your credence that the second debate is as fun as the first? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a> </p><p>Special Guest: Liron Shapira.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Liron Shapira, host of Doom Debates, invited us on to discuss Popperian versus Bayesian epistemology and whether we&#39;re worried about AI doom. As one might expect knowing us, we only got about halfway through the first subject, so get yourselves ready (presumably with many drinks) for part II in a few weeks! The era of Ben and Vaden&#39;s rowdy youtube debates has begun. Vaden is jubilant, Ben is uncomfortable, and the world has never been more annoyed by Popperians. </p>

<p>Follow Liron on twitter (@liron) and check out the Doom Debates <a href="https://www.youtube.com/@DoomDebates" rel="nofollow">youtube channel</a> and <a href="https://podcasts.apple.com/us/podcast/doom-debates/id1751366208" rel="nofollow">podcast</a>.  </p>

<h1>We discuss</h1>

<ul>
<li>Whether we&#39;re concerned about AI doom </li>
<li>Bayesian reasoning versus Popperian reasoning </li>
<li>Whether it makes sense to put numbers on all your beliefs </li>
<li>Solomonoff induction </li>
<li>Objective vs subjective Bayesianism </li>
<li>Prediction markets and superforecasting </li>
</ul>

<h1>References</h1>

<ul>
<li>Vaden&#39;s blog post on Cox&#39;s Theorem and Yudkowsky&#39;s claims of &quot;Laws of Rationality&quot;: <a href="https://vmasrani.github.io/blog/2021/the_credence_assumption/" rel="nofollow">https://vmasrani.github.io/blog/2021/the_credence_assumption/</a></li>
<li>Disproof of probabilistic induction (including Solomonoff Induction): <a href="https://arxiv.org/abs/2107.00749" rel="nofollow">https://arxiv.org/abs/2107.00749</a> </li>
<li>EA Post Vaden Mentioned regarding predictions being uncalibrated more than 1yr out: <a href="https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations" rel="nofollow">https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations</a></li>
<li>Article by Gavin Leech and Misha Yagudin on the reliability of forecasters: <a href="https://ifp.org/can-policymakers-trust-forecasters/" rel="nofollow">https://ifp.org/can-policymakers-trust-forecasters/</a></li>
<li>Superforecaster p(doom) is ~1%: <a href="https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/#:%7E:text=Domain%20experts%20in%20AI%20estimated,by%202100%20(around%2090%25)" rel="nofollow">https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/#:~:text=Domain%20experts%20in%20AI%20estimated,by%202100%20(around%2090%25)</a>.</li>
<li>The existential risk persuasion tournament <a href="https://www.astralcodexten.com/p/the-extinction-tournament" rel="nofollow">https://www.astralcodexten.com/p/the-extinction-tournament</a></li>
<li>Some more info in Ben&#39;s article on superforecasting: <a href="https://benchugg.com/writing/superforecasting/" rel="nofollow">https://benchugg.com/writing/superforecasting/</a></li>
<li>Slides on Content vs Probability: <a href="https://vmasrani.github.io/assets/pdf/popper_good.pdf" rel="nofollow">https://vmasrani.github.io/assets/pdf/popper_good.pdf</a></li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Trust in the reverend Bayes and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>What&#39;s your credence that the second debate is as fun as the first? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a> </p><p>Special Guest: Liron Shapira.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
