<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web02.fireside.fm</fireside:hostname>
    <fireside:genDate>Sat, 02 May 2026 00:37:46 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>Increments - Episodes Tagged with “Complexity”</title>
    <link>https://www.incrementspodcast.com/tags/complexity</link>
    <pubDate>Mon, 31 Oct 2022 10:45:00 -0700</pubDate>
    <description>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>Science, Philosophy, Epistemology, Mayhem</itunes:subtitle>
    <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
    <itunes:summary>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/cover.jpg?v=18"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>Philosophy,Science,Ethics,Progress,Knowledge,Computer Science,Conversation,Error-Correction</itunes:keywords>
    <itunes:owner>
      <itunes:name>Ben Chugg and Vaden Masrani</itunes:name>
      <itunes:email>incrementspodcast@gmail.com</itunes:email>
    </itunes:owner>
<itunes:category text="Society &amp; Culture">
  <itunes:category text="Philosophy"/>
</itunes:category>
<itunes:category text="Science"/>
<item>
  <title>#45 - Four Central Fallacies of AI Research (with Melanie Mitchell)</title>
  <link>https://www.incrementspodcast.com/45</link>
  <guid isPermaLink="false">6ce3560d-1cbd-414c-8e21-54bd37bc5711</guid>
  <pubDate>Mon, 31 Oct 2022 10:45:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/6ce3560d-1cbd-414c-8e21-54bd37bc5711.mp3" length="51348374" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>We chat with Melanie Mitchell about our understanding of artificial intelligence, human intelligence, and whether we can reasonably expect to build sophisticated human-like automated systems anytime soon.</itunes:subtitle>
  <itunes:duration>53:29</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/6/6ce3560d-1cbd-414c-8e21-54bd37bc5711/cover.jpg?v=1"/>
  <description>&lt;p&gt;We were delighted to be joined by Melanie Mitchell, Davis Professor at the Santa Fe Institute! We chat about our understanding of artificial intelligence, human intelligence, and whether we can reasonably expect to build sophisticated human-like automated systems anytime soon.&lt;/p&gt;

&lt;p&gt;Follow Melanie on Twitter @MelMitchell1 and check out her website: &lt;a href="https://melaniemitchell.me/" target="_blank" rel="nofollow noopener"&gt;https://melaniemitchell.me/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We discuss:&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI hype through the ages &lt;/li&gt;
&lt;li&gt;How do we know if machines understand? &lt;/li&gt;
&lt;li&gt;Winograd schemas and the "WinoGrande" challenge. &lt;/li&gt;
&lt;li&gt;The importance of metaphor and analogies to intelligence &lt;/li&gt;
&lt;li&gt;The four fallacies in AI research: 

&lt;ul&gt;
&lt;li&gt;1. Narrow intelligence is on a continuum with general intelligence&lt;/li&gt;
&lt;li&gt;2. Easy things are easy and hard things are hard&lt;/li&gt;
&lt;li&gt;3. The lure of wishful mnemonics&lt;/li&gt;
&lt;li&gt;4. Intelligence is all in the brain&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Whether embodiment is necessary for true intelligence&lt;/li&gt;
&lt;li&gt;Douglas Hofstadter's views on AI &lt;/li&gt;
&lt;li&gt;Ray Kurzweil and the "singularity" &lt;/li&gt;
&lt;li&gt;The fact that Moore's law doesn't hold for software&lt;/li&gt;
&lt;li&gt;The difference between symbolic AI and machine learning &lt;/li&gt;
&lt;li&gt;What analogies have to teach us about human cognition &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Errata&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ben mistakenly says that Eliezer Yudkowsky has bet that everyone will die by 2025. It's actually by 2030. You can find the details of the bet here: &lt;a href="https://www.econlib.org/archives/2017/01/my_end-of-the-w.html" target="_blank" rel="nofollow noopener"&gt;https://www.econlib.org/archives/2017/01/my_end-of-the-w.html&lt;/a&gt;. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;References:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;NY Times &lt;a href="https://www.nytimes.com/1958/07/13/archives/electronic-brain-teaches-itself.html" target="_blank" rel="nofollow noopener"&gt;reporting on Perceptrons&lt;/a&gt;. &lt;/li&gt;
&lt;li&gt;The WinoGrande challenge &lt;a href="https://arxiv.org/abs/1907.10641" target="_blank" rel="nofollow noopener"&gt;paper&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/pdf/2104.12871.pdf" target="_blank" rel="nofollow noopener"&gt;Why AI is harder than we think&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://smile.amazon.com/Singularity-Near-Humans-Transcend-Biology/dp/0143037889?sa-no-redirect=1" target="_blank" rel="nofollow noopener"&gt;The Singularity is Near&lt;/a&gt;, by Ray Kurzweil&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Contact us&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani&lt;/li&gt;
&lt;li&gt;Check us out on YouTube at &lt;a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" target="_blank" rel="nofollow noopener"&gt;https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Come join our Discord server! DM us on Twitter or send us an email to get a super-secret link&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Eliezer was more scared than Douglas about AI, so he wrote a blog post about it. Who wrote the blog post, Eliezer or Douglas? Tell us over at &lt;a href="mailto:incrementspodcast@gmail.com" target="_blank" rel="nofollow noopener"&gt;incrementspodcast@gmail.com&lt;/a&gt;. Special Guest: Melanie Mitchell.&lt;/p&gt;
</description>
  <itunes:keywords>AI, intelligence, complexity, analogies</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>We were delighted to be joined by Melanie Mitchell, Davis Professor at the Santa Fe Institute! We chat about our understanding of artificial intelligence, human intelligence, and whether we can reasonably expect to build sophisticated human-like automated systems anytime soon.</p>

<p>Follow Melanie on Twitter @MelMitchell1 and check out her website: <a href="https://melaniemitchell.me/" rel="nofollow">https://melaniemitchell.me/</a></p>

<p><strong>We discuss:</strong> </p>

<ul>
<li>AI hype through the ages </li>
<li>How do we know if machines understand? </li>
<li>Winograd schemas and the &quot;WinoGrande&quot; challenge. </li>
<li>The importance of metaphor and analogies to intelligence </li>
<li>The four fallacies in AI research: 

<ul>
<li>1. Narrow intelligence is on a continuum with general intelligence</li>
<li>2. Easy things are easy and hard things are hard</li>
<li>3. The lure of wishful mnemonics</li>
<li>4. Intelligence is all in the brain</li>
</ul></li>
<li>Whether embodiment is necessary for true intelligence</li>
<li>Douglas Hofstadter&#39;s views on AI </li>
<li>Ray Kurzweil and the &quot;singularity&quot; </li>
<li>The fact that Moore&#39;s law doesn&#39;t hold for software</li>
<li>The difference between symbolic AI and machine learning </li>
<li>What analogies have to teach us about human cognition </li>
</ul>

<p><strong>Errata</strong> </p>

<ul>
<li>Ben mistakenly says that Eliezer Yudkowsky has bet that everyone will die by 2025. It&#39;s actually by 2030. You can find the details of the bet here: <a href="https://www.econlib.org/archives/2017/01/my_end-of-the-w.html" rel="nofollow">https://www.econlib.org/archives/2017/01/my_end-of-the-w.html</a>. </li>
</ul>

<p><strong>References:</strong></p>

<ul>
<li>NY Times <a href="https://www.nytimes.com/1958/07/13/archives/electronic-brain-teaches-itself.html" rel="nofollow">reporting on Perceptrons</a>. </li>
<li>The WinoGrande challenge <a href="https://arxiv.org/abs/1907.10641" rel="nofollow">paper</a></li>
<li><a href="https://arxiv.org/pdf/2104.12871.pdf" rel="nofollow">Why AI is harder than we think</a></li>
<li><a href="https://smile.amazon.com/Singularity-Near-Humans-Transcend-Biology/dp/0143037889?sa-no-redirect=1" rel="nofollow">The Singularity is Near</a>, by Ray Kurzweil</li>
</ul>

<p><strong>Contact us</strong></p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Check us out on YouTube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our Discord server! DM us on Twitter or send us an email to get a super-secret link</li>
</ul>

<p>Eliezer was more scared than Douglas about AI, so he wrote a blog post about it. Who wrote the blog post, Eliezer or Douglas? Tell us over at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>.</p><p>Special Guest: Melanie Mitchell.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>We were delighted to be joined by Melanie Mitchell, Davis Professor at the Santa Fe Institute! We chat about our understanding of artificial intelligence, human intelligence, and whether we can reasonably expect to build sophisticated human-like automated systems anytime soon.</p>

<p>Follow Melanie on Twitter @MelMitchell1 and check out her website: <a href="https://melaniemitchell.me/" rel="nofollow">https://melaniemitchell.me/</a></p>

<p><strong>We discuss:</strong> </p>

<ul>
<li>AI hype through the ages </li>
<li>How do we know if machines understand? </li>
<li>Winograd schemas and the &quot;WinoGrande&quot; challenge. </li>
<li>The importance of metaphor and analogies to intelligence </li>
<li>The four fallacies in AI research: 

<ul>
<li>1. Narrow intelligence is on a continuum with general intelligence</li>
<li>2. Easy things are easy and hard things are hard</li>
<li>3. The lure of wishful mnemonics</li>
<li>4. Intelligence is all in the brain</li>
</ul></li>
<li>Whether embodiment is necessary for true intelligence</li>
<li>Douglas Hofstadter&#39;s views on AI </li>
<li>Ray Kurzweil and the &quot;singularity&quot; </li>
<li>The fact that Moore&#39;s law doesn&#39;t hold for software</li>
<li>The difference between symbolic AI and machine learning </li>
<li>What analogies have to teach us about human cognition </li>
</ul>

<p><strong>Errata</strong> </p>

<ul>
<li>Ben mistakenly says that Eliezer Yudkowsky has bet that everyone will die by 2025. It&#39;s actually by 2030. You can find the details of the bet here: <a href="https://www.econlib.org/archives/2017/01/my_end-of-the-w.html" rel="nofollow">https://www.econlib.org/archives/2017/01/my_end-of-the-w.html</a>. </li>
</ul>

<p><strong>References:</strong></p>

<ul>
<li>NY Times <a href="https://www.nytimes.com/1958/07/13/archives/electronic-brain-teaches-itself.html" rel="nofollow">reporting on Perceptrons</a>. </li>
<li>The WinoGrande challenge <a href="https://arxiv.org/abs/1907.10641" rel="nofollow">paper</a></li>
<li><a href="https://arxiv.org/pdf/2104.12871.pdf" rel="nofollow">Why AI is harder than we think</a></li>
<li><a href="https://smile.amazon.com/Singularity-Near-Humans-Transcend-Biology/dp/0143037889?sa-no-redirect=1" rel="nofollow">The Singularity is Near</a>, by Ray Kurzweil</li>
</ul>

<p><strong>Contact us</strong></p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Check us out on YouTube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our Discord server! DM us on Twitter or send us an email to get a super-secret link</li>
</ul>

<p>Eliezer was more scared than Douglas about AI, so he wrote a blog post about it. Who wrote the blog post, Eliezer or Douglas? Tell us over at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>.</p><p>Special Guest: Melanie Mitchell.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
