<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web02.fireside.fm</fireside:hostname>
    <fireside:genDate>Tue, 21 Apr 2026 08:37:52 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>Increments - Episodes Tagged with “Superintelligence”</title>
    <link>https://www.incrementspodcast.com/tags/superintelligence</link>
    <pubDate>Sun, 17 Aug 2025 18:00:00 -0700</pubDate>
    <description>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>Science, Philosophy, Epistemology, Mayhem</itunes:subtitle>
    <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
    <itunes:summary>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/cover.jpg?v=18"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>Philosophy,Science,Ethics,Progress,Knowledge,Computer Science,Conversation,Error-Correction</itunes:keywords>
    <itunes:owner>
      <itunes:name>Ben Chugg and Vaden Masrani</itunes:name>
      <itunes:email>incrementspodcast@gmail.com</itunes:email>
    </itunes:owner>
<itunes:category text="Society &amp; Culture">
  <itunes:category text="Philosophy"/>
</itunes:category>
<itunes:category text="Science"/>
<item>
  <title>#90 (Reaction) - Disbelieving AI 2027: Responding to "Why We're Not Ready For Superintelligence"</title>
  <link>https://www.incrementspodcast.com/90</link>
  <guid isPermaLink="false">5f0aa7bc-c0a9-4fe5-b95e-18e5ab93b228</guid>
  <pubDate>Sun, 17 Aug 2025 18:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/5f0aa7bc-c0a9-4fe5-b95e-18e5ab93b228.mp3" length="92100845" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>The boys are hooked on reaction videos. This time: 80,000 Hours' "Why We're Not Ready for Superintelligence."</itunes:subtitle>
  <itunes:duration>1:35:32</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/5/5f0aa7bc-c0a9-4fe5-b95e-18e5ab93b228/cover.jpg?v=1"/>
  <description>Always the uncool kids at the table, Ben and Vaden push back against the AGI hype dominating every second episode of every second podcast. We react to "We're not ready for superintelligence" (https://www.youtube.com/watch?v=5KVDDfAkRgc) by 80,000 Hours - a bleak portrayal of the pre- and post-AGI world. Can Ben keep Vaden's sass in check? Can the 80,000 Hours team find enough cubes for AGI? Is Agent-5 listening to you RIGHT NOW?
Listener Note:
We strongly recommend watching the video for this one, available both on youtube and spotify:
    - https://www.youtube.com/@incrementspod
    - https://open.spotify.com/show/1gKKSP5HKT4Nk3i0y4UseB 
We discuss
The incentives of superforecasters 
Arguments by authority
Whether superintelligence is right around the corner 
The difference between model size and data 
Are we running out of high quality data?
Does training on synthetic data work? 
The assumptions behind the AGI claims 
The pitfalls of reasoning from trends
References
Michael I Jordan (https://people.eecs.berkeley.edu/~jordan/)
Neil Lawrence (https://en.wikipedia.org/wiki/Neil_Lawrence)  
Important technical paper from Jordan pushing back on Doomerism: "A Collectivist, Economic Perspective on AI"
Jordan article talking about dangers of using AlphaFold data (https://news.berkeley.edu/2023/11/09/how-to-use-ai-for-discovery-without-leading-science-astray/)
Nature paper showing you can't use synthetic data to train bigger models  (https://www.nature.com/articles/s41586-024-07566-y)
Paper estimating when training data will run out (https://arxiv.org/abs/2211.04325v2) (Coincidentally enough, sometime between 2027 and 2028)
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Become a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
But how many cubes until we get to AGI though? Send a few of your cubes over to incrementspodcast@gmail.com
Episode header image from here (https://www.youtube.com/watch?app=desktop&amp;v=0Jsrux_XY8Y&amp;ab_channel=TheAlgorithmicVoice). 
</description>
  <itunes:keywords>AI, AGI, superintelligence, trends, doomerism, technology, progress</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Always the uncool kids at the table, Ben and Vaden push back against the AGI hype dominating every second episode of every second podcast. We react to <a href="https://www.youtube.com/watch?v=5KVDDfAkRgc" rel="nofollow">&quot;We&#39;re not ready for superintelligence&quot;</a> by 80,000 Hours - a bleak portrayal of the pre- and post-AGI world. Can Ben keep Vaden&#39;s sass in check? Can the 80,000 Hours team find enough cubes for AGI? Is Agent-5 listening to you RIGHT NOW?</p>

<h1>Listener Note:</h1>

<p>We strongly recommend watching the video for this one, available both on youtube and spotify:<br>
    - <a href="https://www.youtube.com/@incrementspod" rel="nofollow">https://www.youtube.com/@incrementspod</a><br>
    - <a href="https://open.spotify.com/show/1gKKSP5HKT4Nk3i0y4UseB" rel="nofollow">https://open.spotify.com/show/1gKKSP5HKT4Nk3i0y4UseB</a> </p>

<h1>We discuss</h1>

<ul>
<li>The incentives of superforecasters </li>
<li>Arguments by authority</li>
<li>Whether superintelligence is right around the corner </li>
<li>The difference between model size and data </li>
<li>Are we running out of high quality data?</li>
<li>Does training on synthetic data work? </li>
<li>The assumptions behind the AGI claims </li>
<li>The pitfalls of reasoning from trends</li>
</ul>

<h1>References</h1>

<ul>
<li><a href="https://people.eecs.berkeley.edu/%7Ejordan/" rel="nofollow">Michael I Jordan</a></li>
<li><a href="https://en.wikipedia.org/wiki/Neil_Lawrence" rel="nofollow">Neil Lawrence</a><br></li>
<li>Important technical paper from Jordan pushing back on Doomerism: <em>A Collectivist, Economic Perspective on AI</em></li>
<li><a href="https://news.berkeley.edu/2023/11/09/how-to-use-ai-for-discovery-without-leading-science-astray/" rel="nofollow">Jordan article talking about dangers of using AlphaFold data</a></li>
<li><a href="https://www.nature.com/articles/s41586-024-07566-y" rel="nofollow">Nature paper showing you can&#39;t use synthetic data to train bigger models </a></li>
<li><a href="https://arxiv.org/abs/2211.04325v2" rel="nofollow">Paper estimating when training data will run out</a> (Coincidentally enough, sometime between 2027 and 2028)</li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Become a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>But how many cubes until we get to AGI though? Send a few of your cubes over to <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p>

<p>Episode header image from <a href="https://www.youtube.com/watch?app=desktop&v=0Jsrux_XY8Y&ab_channel=TheAlgorithmicVoice" rel="nofollow">here</a>. </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Always the uncool kids at the table, Ben and Vaden push back against the AGI hype dominating every second episode of every second podcast. We react to <a href="https://www.youtube.com/watch?v=5KVDDfAkRgc" rel="nofollow">&quot;We&#39;re not ready for superintelligence&quot;</a> by 80,000 Hours - a bleak portrayal of the pre- and post-AGI world. Can Ben keep Vaden&#39;s sass in check? Can the 80,000 Hours team find enough cubes for AGI? Is Agent-5 listening to you RIGHT NOW?</p>

<h1>Listener Note:</h1>

<p>We strongly recommend watching the video for this one, available both on youtube and spotify:<br>
    - <a href="https://www.youtube.com/@incrementspod" rel="nofollow">https://www.youtube.com/@incrementspod</a><br>
    - <a href="https://open.spotify.com/show/1gKKSP5HKT4Nk3i0y4UseB" rel="nofollow">https://open.spotify.com/show/1gKKSP5HKT4Nk3i0y4UseB</a> </p>

<h1>We discuss</h1>

<ul>
<li>The incentives of superforecasters </li>
<li>Arguments by authority</li>
<li>Whether superintelligence is right around the corner </li>
<li>The difference between model size and data </li>
<li>Are we running out of high quality data?</li>
<li>Does training on synthetic data work? </li>
<li>The assumptions behind the AGI claims </li>
<li>The pitfalls of reasoning from trends</li>
</ul>

<h1>References</h1>

<ul>
<li><a href="https://people.eecs.berkeley.edu/%7Ejordan/" rel="nofollow">Michael I Jordan</a></li>
<li><a href="https://en.wikipedia.org/wiki/Neil_Lawrence" rel="nofollow">Neil Lawrence</a><br></li>
<li>Important technical paper from Jordan pushing back on Doomerism: <em>A Collectivist, Economic Perspective on AI</em></li>
<li><a href="https://news.berkeley.edu/2023/11/09/how-to-use-ai-for-discovery-without-leading-science-astray/" rel="nofollow">Jordan article talking about dangers of using AlphaFold data</a></li>
<li><a href="https://www.nature.com/articles/s41586-024-07566-y" rel="nofollow">Nature paper showing you can&#39;t use synthetic data to train bigger models </a></li>
<li><a href="https://arxiv.org/abs/2211.04325v2" rel="nofollow">Paper estimating when training data will run out</a> (Coincidentally enough, sometime between 2027 and 2028)</li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Become a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>But how many cubes until we get to AGI though? Send a few of your cubes over to <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p>

<p>Episode header image from <a href="https://www.youtube.com/watch?app=desktop&v=0Jsrux_XY8Y&ab_channel=TheAlgorithmicVoice" rel="nofollow">here</a>. </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#77 (Bonus) - AI Doom Debate (w/ Liron Shapira)</title>
  <link>https://www.incrementspodcast.com/77</link>
  <guid isPermaLink="false">24e93eab-5281-418f-bddf-9516c7c5f8d7</guid>
  <pubDate>Tue, 19 Nov 2024 13:30:00 -0800</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/24e93eab-5281-418f-bddf-9516c7c5f8d7.mp3" length="137335802" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Part II of the great debate! Is AI about to kill everyone? Should you cash in on those vacation days now? </itunes:subtitle>
  <itunes:duration>2:21:22</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/2/24e93eab-5281-418f-bddf-9516c7c5f8d7/cover.jpg?v=2"/>
  <description>Back on Liron's Doom Debates podcast! Will we actually get around to the subject of superintelligent AI this time? Is it time to worry about the end of the world? Will Ben and Vaden emotionally recover from the devastating youtube comments from the last episode? 
Follow Liron on twitter (@liron) and check out the Doom Debates youtube channel (https://www.youtube.com/@DoomDebates) and podcast (https://podcasts.apple.com/us/podcast/doom-debates/id1751366208).  
We discuss
Definitions of "new knowledge" 
The reliance of deep learning on induction 
Can AIs be creative? 
The limits of statistical prediction 
Predictions of what deep learning cannot accomplish 
Can ChatGPT write funny jokes? 
Trends versus principles 
The psychological consequences of doomerism
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
The world is going to end soon, might as well get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
Was Vaden's two week anti-debate bro reeducation camp successful? Tell us at incrementspodcast@gmail.com
 Special Guest: Liron Shapira.
</description>
  <itunes:keywords>AI, superintelligence, existential risk, novelty, induction, deep learning, comedy, creativity, knowledge</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Back on Liron&#39;s <strong>Doom Debates</strong> podcast! Will we actually get around to the subject of superintelligent AI this time? Is it time to worry about the end of the world? Will Ben and Vaden emotionally recover from the devastating youtube comments from the last episode? </p>

<p>Follow Liron on twitter (@liron) and check out the Doom Debates <a href="https://www.youtube.com/@DoomDebates" rel="nofollow">youtube channel</a> and <a href="https://podcasts.apple.com/us/podcast/doom-debates/id1751366208" rel="nofollow">podcast</a>.  </p>

<h1>We discuss</h1>

<ul>
<li>Definitions of &quot;new knowledge&quot; </li>
<li>The reliance of deep learning on induction </li>
<li>Can AIs be creative? </li>
<li>The limits of statistical prediction </li>
<li>Predictions of what deep learning cannot accomplish </li>
<li>Can ChatGPT write funny jokes? </li>
<li>Trends versus principles </li>
<li>The psychological consequences of doomerism</li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>The world is going to end soon, might as well get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>Was Vaden&#39;s two week anti-debate bro reeducation camp successful? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p>Special Guest: Liron Shapira.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Back on Liron&#39;s <strong>Doom Debates</strong> podcast! Will we actually get around to the subject of superintelligent AI this time? Is it time to worry about the end of the world? Will Ben and Vaden emotionally recover from the devastating youtube comments from the last episode? </p>

<p>Follow Liron on twitter (@liron) and check out the Doom Debates <a href="https://www.youtube.com/@DoomDebates" rel="nofollow">youtube channel</a> and <a href="https://podcasts.apple.com/us/podcast/doom-debates/id1751366208" rel="nofollow">podcast</a>.  </p>

<h1>We discuss</h1>

<ul>
<li>Definitions of &quot;new knowledge&quot; </li>
<li>The reliance of deep learning on induction </li>
<li>Can AIs be creative? </li>
<li>The limits of statistical prediction </li>
<li>Predictions of what deep learning cannot accomplish </li>
<li>Can ChatGPT write funny jokes? </li>
<li>Trends versus principles </li>
<li>The psychological consequences of doomerism</li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>The world is going to end soon, might as well get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>Was Vaden&#39;s two week anti-debate bro reeducation camp successful? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p>Special Guest: Liron Shapira.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
