<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web01.fireside.fm</fireside:hostname>
    <fireside:genDate>Sat, 09 May 2026 08:45:19 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>Increments - Episodes Tagged with “Computation”</title>
    <link>https://www.incrementspodcast.com/tags/computation</link>
    <pubDate>Thu, 06 Nov 2025 10:00:00 -0800</pubDate>
    <description>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>Science, Philosophy, Epistemology, Mayhem</itunes:subtitle>
    <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
    <itunes:summary>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/cover.jpg?v=18"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>Philosophy,Science,Ethics,Progress,Knowledge,Computer Science,Conversation,Error-Correction</itunes:keywords>
    <itunes:owner>
      <itunes:name>Ben Chugg and Vaden Masrani</itunes:name>
      <itunes:email>incrementspodcast@gmail.com</itunes:email>
    </itunes:owner>
<itunes:category text="Society &amp; Culture">
  <itunes:category text="Philosophy"/>
</itunes:category>
<itunes:category text="Science"/>
<item>
  <title>#94 - Is AI Just a Tool? (w/ Scott Aaronson)</title>
  <link>https://www.incrementspodcast.com/94</link>
  <guid isPermaLink="false">b36467e9-f3b2-4477-86e8-14586cc5a5a9</guid>
  <pubDate>Thu, 06 Nov 2025 10:00:00 -0800</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/b36467e9-f3b2-4477-86e8-14586cc5a5a9.mp3" length="81765482" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Is there any reason to believe that AI's capabilities are fundamentally limited? Scott Aaronson comes on to scare us straight. </itunes:subtitle>
  <itunes:duration>1:24:46</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/b/b36467e9-f3b2-4477-86e8-14586cc5a5a9/cover.jpg?v=1"/>
  <description>&lt;p&gt;The time has come for Vaden to defend his faith in the face of cold, hard scientific rationality. Will AI take over the world, automating away everything that makes humans distinct? Or can Vaden defend the church of justa-ism, the radical belief that AI is simply "just a tool"? Scott Aaronson, professor of computer science at UT Austin, goes head to head against the zealotry.&lt;/p&gt;

&lt;p&gt;Check out Scott's &lt;a href="https://www.scottaaronson.com/" target="_blank" rel="nofollow noopener"&gt;website&lt;/a&gt; and his blog, &lt;a href="https://scottaaronson.blog/" target="_blank" rel="nofollow noopener"&gt;Shtetl Optimized&lt;/a&gt;. &lt;/p&gt;

We discuss

&lt;ul&gt;
&lt;li&gt;Scott's views on education. Should we radically reform K-12?&lt;/li&gt;
&lt;li&gt;Is ChatGPT changing Scott's approach to teaching?&lt;/li&gt;
&lt;li&gt;The religion of "justa-ism" &lt;/li&gt;
&lt;li&gt;Is AI just a tool? &lt;/li&gt;
&lt;li&gt;Is there any principle which lets us say that AI won't be as general as humans? &lt;/li&gt;
&lt;li&gt;Aaronson's thesis of Artificial Intelligence &lt;/li&gt;
&lt;li&gt;Computational universality vs explanatory universality &lt;/li&gt;
&lt;li&gt;The many-worlds interpretation of quantum mechanics &lt;/li&gt;
&lt;/ul&gt;

Socials

&lt;ul&gt;
&lt;li&gt;Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani&lt;/li&gt;
&lt;li&gt;Come join our discord server! DM us on twitter or send us an email to get a supersecret link&lt;/li&gt;
&lt;li&gt;Become a patreon subscriber &lt;a href="https://www.patreon.com/Increments" target="_blank" rel="nofollow noopener"&gt;here&lt;/a&gt;. Or give us one-time cash donations to help cover our lack of cash donations &lt;a href="https://ko-fi.com/increments" target="_blank" rel="nofollow noopener"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Click dem like buttons on &lt;a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" target="_blank" rel="nofollow noopener"&gt;youtube&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Have you been converted? Tell us at &lt;a href="mailto:incrementspodcast@gmail.com" target="_blank" rel="nofollow noopener"&gt;incrementspodcast@gmail.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Special Guest: Scott Aaronson.&lt;/p&gt;
</description>
  <itunes:keywords>AI, induction, AI doom, computation, quantum mechanics</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>The time has come for Vaden to defend his faith in the face of cold, hard scientific rationality. Will AI take over the world, automating away everything that makes humans distinct? Or can Vaden defend the church of justa-ism, the radical belief that AI is simply &quot;just a tool&quot;? Scott Aaronson, professor of computer science at UT Austin, goes head to head against the zealotry.</p>

<p>Check out Scott&#39;s <a href="https://www.scottaaronson.com/" rel="nofollow">website</a> and his blog, <a href="https://scottaaronson.blog/" rel="nofollow">Shtetl Optimized</a>. </p>

<h1>We discuss</h1>

<ul>
<li>Scott&#39;s views on education. Should we radically reform K-12?</li>
<li>Is ChatGPT changing Scott&#39;s approach to teaching?</li>
<li>The religion of &quot;justa-ism&quot; </li>
<li>Is AI just a tool? </li>
<li>Is there any principle which lets us say that AI won&#39;t be as general as humans? </li>
<li>Aaronson&#39;s thesis of Artificial Intelligence </li>
<li>Computational universality vs explanatory universality </li>
<li>The many-worlds interpretation of quantum mechanics </li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Become a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>Have you been converted? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p>Special Guest: Scott Aaronson.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>The time has come for Vaden to defend his faith in the face of cold, hard scientific rationality. Will AI take over the world, automating away everything that makes humans distinct? Or can Vaden defend the church of justa-ism, the radical belief that AI is simply &quot;just a tool&quot;? Scott Aaronson, professor of computer science at UT Austin, goes head to head against the zealotry.</p>

<p>Check out Scott&#39;s <a href="https://www.scottaaronson.com/" rel="nofollow">website</a> and his blog, <a href="https://scottaaronson.blog/" rel="nofollow">Shtetl Optimized</a>. </p>

<h1>We discuss</h1>

<ul>
<li>Scott&#39;s views on education. Should we radically reform K-12?</li>
<li>Is ChatGPT changing Scott&#39;s approach to teaching?</li>
<li>The religion of &quot;justa-ism&quot; </li>
<li>Is AI just a tool? </li>
<li>Is there any principle which lets us say that AI won&#39;t be as general as humans? </li>
<li>Aaronson&#39;s thesis of Artificial Intelligence </li>
<li>Computational universality vs explanatory universality </li>
<li>The many-worlds interpretation of quantum mechanics </li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Become a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>Have you been converted? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p>Special Guest: Scott Aaronson.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#52 - Ask Us Anything I: Computation and Creativity</title>
  <link>https://www.incrementspodcast.com/52</link>
  <guid isPermaLink="false">e60dc6c5-1d0a-4061-85b0-e97bcb4b060f</guid>
  <pubDate>Mon, 10 Jul 2023 07:30:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/e60dc6c5-1d0a-4061-85b0-e97bcb4b060f.mp3" length="70556524" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Our first ask us anything episode! We get through a whopping ... two questions. </itunes:subtitle>
  <itunes:duration>1:13:29</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/e/e60dc6c5-1d0a-4061-85b0-e97bcb4b060f/cover.jpg?v=1"/>
  <description>&lt;p&gt;We debated calling this episode "An ode to Michael," because we set out to do an AMA but only get through his first two questions. But never fear, there are only 20 questions, so at this rate we should be done with the AMA by the end of 2024. Who said we weren't fans of longtermism?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Questions&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Hey, do you guys have a Patreon page or any way to support you?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;(Michael)&lt;/strong&gt; Not clear that humans are universal explainers. Standard argument for this is "to assume o.w. is to appeal to the supernatural," but this argument is weak b/c it does not explain &lt;em&gt;why&lt;/em&gt; humans could in principle explain everything. But all of Deutsch's ideas rest on this axiom. It's almost tautological - there &lt;em&gt;could&lt;/em&gt; be things humans cannot explain, but we wouldn't even know about these things b/c we wouldn't be able to explain them. I think this argument that humans are universal explainers and thus can achieve indefinite progress needs more rigor. It might be a step jump from animals to humans, but why could there not be more step jumps in intelligence beyond human intelligence that we do not even know about? I'd love to get your thoughts on this.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;(Michael)&lt;/strong&gt; Another pt I'd love to get your perspectives on is the idea of the "creative program." Standard discussion is "humans are special because we are creative, and we don't know what the creative program is." But we need to make progress on creativity at some point and it kind of feels like we are using the word "creativity" as a vague suitcase word to encapsulate "everything we don't yet know about intelligence." Simply saying "humans are creative" without properly defining what it means to be creative in a way that we can evaluate in machines is not helping us make progress on developing creative AI. It's unsatisfying to hear critiques of AI that say "this AI model is not 'truly intelligent' because it is not creative" without also proposing a way to evaluate its creativity. In this sense, critiques of AI that say AI is "not creative" are bad explanations because these critiques are easy to vary. Without proposing a proper test for creativity that can actually be evaluated, it is not possible for us to conduct a test to refute the critique. I'd love to get your thoughts on how we can construct evaluations for creativity in a way that enables us to make scientific progress on understanding the creative algorithm!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;References&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://podcasts.apple.com/us/podcast/episode-9-introduction-to-computational-theory/id1503194218?i=1000502266361" target="_blank" rel="nofollow noopener"&gt;Episode 9: Introduction to Computational Theory&lt;/a&gt;, &lt;a href="https://podcasts.apple.com/us/podcast/the-theory-of-anything/id1503194218" target="_blank" rel="nofollow noopener"&gt;Theory of Anything podcast&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;David Deutsch on Coleman Hughes' podcast: &lt;a href="https://en.padverb.com/er/conversations-with-coleman_rss-09-may-2023-multiverse-of-madness-with-david-deutsch" target="_blank" rel="nofollow noopener"&gt;Multiverse of Madness&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;John Cleese's excellent new book &lt;a href="https://www.amazon.ca/Creativity-Short-Cheerful-John-Cleese/dp/0385348274" target="_blank" rel="nofollow noopener"&gt;Creativity&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Contact us&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani&lt;/li&gt;
&lt;li&gt;Check us out on youtube at &lt;a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" target="_blank" rel="nofollow noopener"&gt;https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Come join our discord server! DM us on twitter or send us an email to get a supersecret link&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Support&lt;/strong&gt;&lt;br&gt;
You can support the project on Patreon (monthly donations, &lt;a href="https://www.patreon.com/Increments" target="_blank" rel="nofollow noopener"&gt;https://www.patreon.com/Increments&lt;/a&gt;) or  Ko-fi (one time donation, &lt;a href="https://ko-fi.com/increments" target="_blank" rel="nofollow noopener"&gt;https://ko-fi.com/increments&lt;/a&gt;). Thank you! &lt;/p&gt;

&lt;p&gt;How much explaining could a universal explainer explain if a universal explainer could explain explaining? Tell us at &lt;a href="mailto:incrementspodcast@gmail.com" target="_blank" rel="nofollow noopener"&gt;incrementspodcast@gmail.com&lt;/a&gt;.  &lt;/p&gt;
</description>
  <itunes:keywords>ask-us-anything, creativity, computation, universality</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>We debated calling this episode &quot;An ode to Michael,&quot; because we set out to do an AMA but only get through his first two questions. But never fear, there are only 20 questions, so at this rate we should be done with the AMA by the end of 2024. Who said we weren&#39;t fans of longtermism?</p>

<p><strong>Questions</strong>:</p>

<ol>
<li>Hey, do you guys have a Patreon page or any way to support you?</li>
<li><strong>(Michael)</strong> Not clear that humans are universal explainers. Standard argument for this is &quot;to assume o.w. is to appeal to the supernatural,&quot; but this argument is weak b/c it does not explain <em>why</em> humans could in principle explain everything. But all of Deutsch&#39;s ideas rest on this axiom. It&#39;s almost tautological - there <em>could</em> be things humans cannot explain, but we wouldn&#39;t even know about these things b/c we wouldn&#39;t be able to explain them. I think this argument that humans are universal explainers and thus can achieve indefinite progress needs more rigor. It might be a step jump from animals to humans, but why could there not be more step jumps in intelligence beyond human intelligence that we do not even know about? I&#39;d love to get your thoughts on this.</li>
<li><strong>(Michael)</strong> Another pt I&#39;d love to get your perspectives on is the idea of the &quot;creative program.&quot; Standard discussion is &quot;humans are special because we are creative, and we don&#39;t know what the creative program is.&quot; But we need to make progress on creativity at some point and it kind of feels like we are using the word &quot;creativity&quot; as a vague suitcase word to encapsulate &quot;everything we don&#39;t yet know about intelligence.&quot; Simply saying &quot;humans are creative&quot; without properly defining what it means to be creative in a way that we can evaluate in machines is not helping us make progress on developing creative AI. It&#39;s unsatisfying to hear critiques of AI that say &quot;this AI model is not &#39;truly intelligent&#39; because it is not creative&quot; without also proposing a way to evaluate its creativity. In this sense, critiques of AI that say AI is &quot;not creative&quot; are bad explanations because these critiques are easy to vary. Without proposing a proper test for creativity that can actually be evaluated, it is not possible for us to conduct a test to refute the critique. I&#39;d love to get your thoughts on how we can construct evaluations for creativity in a way that enables us to make scientific progress on understanding the creative algorithm!</li>
</ol>

<p><strong>References</strong>:</p>

<ul>
<li><a href="https://podcasts.apple.com/us/podcast/episode-9-introduction-to-computational-theory/id1503194218?i=1000502266361" rel="nofollow">Episode 9: Introduction to Computational Theory</a>, <a href="https://podcasts.apple.com/us/podcast/the-theory-of-anything/id1503194218" rel="nofollow">Theory of Anything podcast</a></li>
<li>David Deutsch on Coleman Hughes&#39; podcast: <a href="https://en.padverb.com/er/conversations-with-coleman_rss-09-may-2023-multiverse-of-madness-with-david-deutsch" rel="nofollow">Multiverse of Madness</a> </li>
<li>John Cleese&#39;s excellent new book <a href="https://www.amazon.ca/Creativity-Short-Cheerful-John-Cleese/dp/0385348274" rel="nofollow">Creativity</a> </li>
</ul>

<p><strong>Contact us</strong></p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Check us out on youtube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
</ul>

<p><strong>Support</strong><br>
You can support the project on Patreon (monthly donations, <a href="https://www.patreon.com/Increments" rel="nofollow">https://www.patreon.com/Increments</a>) or  Ko-fi (one time donation, <a href="https://ko-fi.com/increments" rel="nofollow">https://ko-fi.com/increments</a>). Thank you! </p>

<p>How much explaining could a universal explainer explain if a universal explainer could explain explaining? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>We debated calling this episode &quot;An ode to Michael,&quot; because we set out to do an AMA but only get through his first two questions. But never fear, there are only 20 questions, so at this rate we should be done with the AMA by the end of 2024. Who said we weren&#39;t fans of longtermism?</p>

<p><strong>Questions</strong>:</p>

<ol>
<li>Hey, do you guys have a Patreon page or any way to support you?</li>
<li><strong>(Michael)</strong> Not clear that humans are universal explainers. Standard argument for this is &quot;to assume o.w. is to appeal to the supernatural,&quot; but this argument is weak b/c it does not explain <em>why</em> humans could in principle explain everything. But all of Deutsch&#39;s ideas rest on this axiom. It&#39;s almost tautological - there <em>could</em> be things humans cannot explain, but we wouldn&#39;t even know about these things b/c we wouldn&#39;t be able to explain them. I think this argument that humans are universal explainers and thus can achieve indefinite progress needs more rigor. It might be a step jump from animals to humans, but why could there not be more step jumps in intelligence beyond human intelligence that we do not even know about? I&#39;d love to get your thoughts on this.</li>
<li><strong>(Michael)</strong> Another pt I&#39;d love to get your perspectives on is the idea of the &quot;creative program.&quot; Standard discussion is &quot;humans are special because we are creative, and we don&#39;t know what the creative program is.&quot; But we need to make progress on creativity at some point and it kind of feels like we are using the word &quot;creativity&quot; as a vague suitcase word to encapsulate &quot;everything we don&#39;t yet know about intelligence.&quot; Simply saying &quot;humans are creative&quot; without properly defining what it means to be creative in a way that we can evaluate in machines is not helping us make progress on developing creative AI. It&#39;s unsatisfying to hear critiques of AI that say &quot;this AI model is not &#39;truly intelligent&#39; because it is not creative&quot; without also proposing a way to evaluate its creativity. In this sense, critiques of AI that say AI is &quot;not creative&quot; are bad explanations because these critiques are easy to vary. Without proposing a proper test for creativity that can actually be evaluated, it is not possible for us to conduct a test to refute the critique. I&#39;d love to get your thoughts on how we can construct evaluations for creativity in a way that enables us to make scientific progress on understanding the creative algorithm!</li>
</ol>

<p><strong>References</strong>:</p>

<ul>
<li><a href="https://podcasts.apple.com/us/podcast/episode-9-introduction-to-computational-theory/id1503194218?i=1000502266361" rel="nofollow">Episode 9: Introduction to Computational Theory</a>, <a href="https://podcasts.apple.com/us/podcast/the-theory-of-anything/id1503194218" rel="nofollow">Theory of Anything podcast</a></li>
<li>David Deutsch on Coleman Hughes&#39; podcast: <a href="https://en.padverb.com/er/conversations-with-coleman_rss-09-may-2023-multiverse-of-madness-with-david-deutsch" rel="nofollow">Multiverse of Madness</a> </li>
<li>John Cleese&#39;s excellent new book <a href="https://www.amazon.ca/Creativity-Short-Cheerful-John-Cleese/dp/0385348274" rel="nofollow">Creativity</a> </li>
</ul>

<p><strong>Contact us</strong></p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Check us out on youtube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
</ul>

<p><strong>Support</strong><br>
You can support the project on Patreon (monthly donations, <a href="https://www.patreon.com/Increments" rel="nofollow">https://www.patreon.com/Increments</a>) or  Ko-fi (one time donation, <a href="https://ko-fi.com/increments" rel="nofollow">https://ko-fi.com/increments</a>). Thank you! </p>

<p>How much explaining could a universal explainer explain if a universal explainer could explain explaining? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
