<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web01.fireside.fm</fireside:hostname>
    <fireside:genDate>Wed, 15 Apr 2026 00:36:48 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>Increments - Episodes Tagged with “Creativity”</title>
    <link>https://www.incrementspodcast.com/tags/creativity</link>
    <pubDate>Tue, 19 Nov 2024 13:30:00 -0800</pubDate>
    <description>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>Science, Philosophy, Epistemology, Mayhem</itunes:subtitle>
    <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
    <itunes:summary>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/cover.jpg?v=18"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>Philosophy,Science,Ethics,Progress,Knowledge,Computer Science,Conversation,Error-Correction</itunes:keywords>
    <itunes:owner>
      <itunes:name>Ben Chugg and Vaden Masrani</itunes:name>
      <itunes:email>incrementspodcast@gmail.com</itunes:email>
    </itunes:owner>
<itunes:category text="Society &amp; Culture">
  <itunes:category text="Philosophy"/>
</itunes:category>
<itunes:category text="Science"/>
<item>
  <title>#77 (Bonus) - AI Doom Debate (w/ Liron Shapira)</title>
  <link>https://www.incrementspodcast.com/77</link>
  <guid isPermaLink="false">24e93eab-5281-418f-bddf-9516c7c5f8d7</guid>
  <pubDate>Tue, 19 Nov 2024 13:30:00 -0800</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/24e93eab-5281-418f-bddf-9516c7c5f8d7.mp3" length="137335802" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Part II of the great debate! Is AI about to kill everyone? Should you cash in on those vacation days now? </itunes:subtitle>
  <itunes:duration>2:21:22</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/2/24e93eab-5281-418f-bddf-9516c7c5f8d7/cover.jpg?v=2"/>
  <description>Back on Liron's Doom Debates podcast! Will we actually get around to the subject of superintelligent AI this time? Is it time to worry about the end of the world? Will Ben and Vaden emotionally recover from the devastating youtube comments from the last episode? 
Follow Liron on twitter (@liron) and check out the Doom Debates youtube channel (https://www.youtube.com/@DoomDebates) and podcast (https://podcasts.apple.com/us/podcast/doom-debates/id1751366208).  
We discuss
Definitions of "new knowledge" 
The reliance of deep learning on induction 
Can AIs be creative? 
The limits of statistical prediction 
Predictions of what deep learning cannot accomplish 
Can ChatGPT write funny jokes? 
Trends versus principles 
The psychological consequences of doomerism
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
The world is going to end soon, might as well get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
Was Vaden's two week anti-debate bro reeducation camp successful? Tell us at incrementspodcast@gmail.com
 Special Guest: Liron Shapira.
</description>
  <itunes:keywords>AI, superintelligence, existential risk, novelty, induction, deep learning, comedy, creativity, knowledge</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Back on Liron&#39;s <strong>Doom Debates</strong> podcast! Will we actually get around to the subject of superintelligent AI this time? Is it time to worry about the end of the world? Will Ben and Vaden emotionally recover from the devastating youtube comments from the last episode? </p>

<p>Follow Liron on twitter (@liron) and check out the Doom Debates <a href="https://www.youtube.com/@DoomDebates" rel="nofollow">youtube channel</a> and <a href="https://podcasts.apple.com/us/podcast/doom-debates/id1751366208" rel="nofollow">podcast</a>.  </p>

<h1>We discuss</h1>

<ul>
<li>Definitions of &quot;new knowledge&quot; </li>
<li>The reliance of deep learning on induction </li>
<li>Can AIs be creative? </li>
<li>The limits of statistical prediction </li>
<li>Predictions of what deep learning cannot accomplish </li>
<li>Can ChatGPT write funny jokes? </li>
<li>Trends versus principles </li>
<li>The psychological consequences of doomerism</li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>The world is going to end soon, might as well get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>Was Vaden&#39;s two week anti-debate bro reeducation camp successful? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p>Special Guest: Liron Shapira.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Back on Liron&#39;s <strong>Doom Debates</strong> podcast! Will we actually get around to the subject of superintelligent AI this time? Is it time to worry about the end of the world? Will Ben and Vaden emotionally recover from the devastating youtube comments from the last episode? </p>

<p>Follow Liron on twitter (@liron) and check out the Doom Debates <a href="https://www.youtube.com/@DoomDebates" rel="nofollow">youtube channel</a> and <a href="https://podcasts.apple.com/us/podcast/doom-debates/id1751366208" rel="nofollow">podcast</a>.  </p>

<h1>We discuss</h1>

<ul>
<li>Definitions of &quot;new knowledge&quot; </li>
<li>The reliance of deep learning on induction </li>
<li>Can AIs be creative? </li>
<li>The limits of statistical prediction </li>
<li>Predictions of what deep learning cannot accomplish </li>
<li>Can ChatGPT write funny jokes? </li>
<li>Trends versus principles </li>
<li>The psychological consequences of doomerism</li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>The world is going to end soon, might as well get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>Was Vaden&#39;s two week anti-debate bro reeducation camp successful? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p>Special Guest: Liron Shapira.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#60 - Creativity and Computational Universality (with Bruce Nielson) </title>
  <link>https://www.incrementspodcast.com/60</link>
  <guid isPermaLink="false">1c458a1d-9763-4387-9217-c1c90d50df23</guid>
  <pubDate>Wed, 03 Jan 2024 22:00:00 -0800</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/1c458a1d-9763-4387-9217-c1c90d50df23.mp3" length="87432005" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Bruce Nielson makes his first appearance on the podcast to push us on machine intelligence and creativity, computational universality, Roger Penrose, and everything in between! </itunes:subtitle>
  <itunes:duration>1:58:42</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/1/1c458a1d-9763-4387-9217-c1c90d50df23/cover.jpg?v=2"/>
  <description>Today we [finally] have on someone who actually knows what they're talking about: Mr. Bruce Nielson of the excellent Theory of Anything Podcast. We bring him on to straighten us out on the topics of creativity, machine intelligence, Turing machines, and computational universality. We build upon our previous conversation way back in Ask Us Anything I: Computation and Creativity (https://www.incrementspodcast.com/52), and suggest listening to that episode first. 
Go follow Bruce on twitter (https://twitter.com/bnielson01) and check out his Theory of Anything Podcast here (https://podcasts.apple.com/us/podcast/the-theory-of-anything/id1503194218). 
(Also Vaden's audio was acting up a bit in this episode, we humbly seek forgiveness.) 
We discuss
Does theorem proving count as creativity?
Is AlphaGo creative?
Determinism, predictability, and chaos theory
Essentialism and a misunderstanding of definitions
Animal memes and understanding
Turing Machines and computational universality
Penrose's "proof" that we need new physics 
References
Ask Us Anything I: Computation and Creativity (https://www.incrementspodcast.com/52) (Listen first!)
Logic theorist (https://en.wikipedia.org/wiki/Logic_Theorist) 
AlphaGo movie (https://en.wikipedia.org/wiki/AlphaGo_(film)) 
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Help us fund more 64 minute-long blog posts and get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
Create us up an email with something imaginatively rote, cliche and formulaic, and mail that creative stinker over to incrementspodcast@gmail.com
 Special Guest: Bruce Nielson.
</description>
  <itunes:keywords>creativity, turing-completeness, universality, determinism, chaos theory</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Today we [finally] have on someone who actually knows what they&#39;re talking about: Mr. Bruce Nielson of the excellent Theory of Anything Podcast. We bring him on to straighten us out on the topics of creativity, machine intelligence, Turing machines, and computational universality. We build upon our previous conversation way back in <a href="https://www.incrementspodcast.com/52" rel="nofollow">Ask Us Anything I: Computation and Creativity</a>, and suggest listening to that episode first. </p>

<p>Go follow Bruce on twitter (<a href="https://twitter.com/bnielson01" rel="nofollow">https://twitter.com/bnielson01</a>) and check out his Theory of Anything Podcast <a href="https://podcasts.apple.com/us/podcast/the-theory-of-anything/id1503194218" rel="nofollow">here</a>. </p>

<p>(Also Vaden&#39;s audio was acting up a bit in this episode, we humbly seek forgiveness.) </p>

<h1>We discuss</h1>

<ul>
<li>Does theorem proving count as creativity?</li>
<li>Is AlphaGo creative?</li>
<li>Determinism, predictability, and chaos theory</li>
<li>Essentialism and a misunderstanding of definitions</li>
<li>Animal memes and understanding</li>
<li>Turing Machines and computational universality</li>
<li>Penrose&#39;s &quot;proof&quot; that we need new physics </li>
</ul>

<h1>References</h1>

<ul>
<li><a href="https://www.incrementspodcast.com/52" rel="nofollow">Ask Us Anything I: Computation and Creativity</a> (Listen first!)</li>
<li><a href="https://en.wikipedia.org/wiki/Logic_Theorist" rel="nofollow">Logic theorist</a> </li>
<li><a href="https://en.wikipedia.org/wiki/AlphaGo_(film)" rel="nofollow">AlphaGo movie</a> </li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Help us fund more 64 minute-long blog posts and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>Create us up an email with something imaginatively rote, cliche and formulaic, and mail that creative stinker over to <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p>Special Guest: Bruce Nielson.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Today we [finally] have on someone who actually knows what they&#39;re talking about: Mr. Bruce Nielson of the excellent Theory of Anything Podcast. We bring him on to straighten us out on the topics of creativity, machine intelligence, Turing machines, and computational universality. We build upon our previous conversation way back in <a href="https://www.incrementspodcast.com/52" rel="nofollow">Ask Us Anything I: Computation and Creativity</a>, and suggest listening to that episode first. </p>

<p>Go follow Bruce on twitter (<a href="https://twitter.com/bnielson01" rel="nofollow">https://twitter.com/bnielson01</a>) and check out his Theory of Anything Podcast <a href="https://podcasts.apple.com/us/podcast/the-theory-of-anything/id1503194218" rel="nofollow">here</a>. </p>

<p>(Also Vaden&#39;s audio was acting up a bit in this episode, we humbly seek forgiveness.) </p>

<h1>We discuss</h1>

<ul>
<li>Does theorem proving count as creativity?</li>
<li>Is AlphaGo creative?</li>
<li>Determinism, predictability, and chaos theory</li>
<li>Essentialism and a misunderstanding of definitions</li>
<li>Animal memes and understanding</li>
<li>Turing Machines and computational universality</li>
<li>Penrose&#39;s &quot;proof&quot; that we need new physics </li>
</ul>

<h1>References</h1>

<ul>
<li><a href="https://www.incrementspodcast.com/52" rel="nofollow">Ask Us Anything I: Computation and Creativity</a> (Listen first!)</li>
<li><a href="https://en.wikipedia.org/wiki/Logic_Theorist" rel="nofollow">Logic theorist</a> </li>
<li><a href="https://en.wikipedia.org/wiki/AlphaGo_(film)" rel="nofollow">AlphaGo movie</a> </li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Help us fund more 64 minute-long blog posts and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>Create us up an email with something imaginatively rote, cliche and formulaic, and mail that creative stinker over to <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p>Special Guest: Bruce Nielson.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#52 - Ask Us Anything I: Computation and Creativity</title>
  <link>https://www.incrementspodcast.com/52</link>
  <guid isPermaLink="false">e60dc6c5-1d0a-4061-85b0-e97bcb4b060f</guid>
  <pubDate>Mon, 10 Jul 2023 07:30:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/e60dc6c5-1d0a-4061-85b0-e97bcb4b060f.mp3" length="70556524" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Our first ask us anything episode! We get through a whopping ... two questions. </itunes:subtitle>
  <itunes:duration>1:13:29</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/e/e60dc6c5-1d0a-4061-85b0-e97bcb4b060f/cover.jpg?v=1"/>
  <description>We debated calling this episode "An ode to Michael," because we set out to do an AMA but only get through his first two questions. But never fear, there are only 20 questions, so at this rate we should be done the AMA by the end of 2024. Who said we weren't fans of longtermism? 
Questions:
Hey do you guys have a Patreon page or any way to support you?
(Michael) Not clear that humans are universal explainers. Standard argument for this is "to assume o.w. is to appeal to the supernatural," but this argument is weak b/c it does not explain why humans could in principle explain everything. But all Deutsch's ideas rest on this axiom. It's almost tautological - there could be things humans cannot explain, but we wouldn't even know about these things b/c we wouldn't be able to explain them. I think this argument that humans are universal explainers and thus can achieve indefinite progress needs more rigor. It might be a step jump from animals to humans, but why could there not be more step jumps in intelligence beyond human intelligence that we do not even know about? I'd love to get your thoughts on this.
(Michael) Another pt I'd love to get your perspectives on is the idea of the "creative program." Standard discussion is "humans are special because we are creative, and we don't know what the creative program is." But we need to make progress on creativity at some point and it kind of feels like we are using the word "creativity" as a vague suitcase word to encapsulate "everything we don't yet know about intelligence." Simply saying "humans are creative" without properly defining what it means to be creative in a way that we can evaluate in machines is not helping us make progress on developing creative AI. It's unsatisfying to hear critiques of AI that say "this AI model is not 'truly intelligent' because it is not creative" without also proposing a way to evaluate its creativity. In this sense, critiques of AI that say AI is "not creative" are bad explanations because these critiques are easy to vary. Without proposing a proper test for creativity that can actually be evaluated, it is not possible for us to conduct a test to refute the critique. I'd love to get your thoughts on how we can construct evaluations for creativity in a way that enables us to make scientific progress on understanding the creative algorithm!
References:
- Episode 9: Introduction to Computational Theory (https://podcasts.apple.com/us/podcast/episode-9-introduction-to-computational-theory/id1503194218?i=1000502266361), Theory of Anything podcast (https://podcasts.apple.com/us/podcast/the-theory-of-anything/id1503194218)
- David Deutsch on Coleman Hughes' podcast: Multiverse of Madness (https://en.padverb.com/er/conversations-with-coleman_rss-09-may-2023-multiverse-of-madness-with-david-deutsch) 
- John Cleese's excellent new book Creativity (https://www.amazon.ca/Creativity-Short-Cheerful-John-Cleese/dp/0385348274) 
Contact us
- Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
- Check us out on youtube at https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ
- Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Support
You can support the project on Patreon (monthly donations, https://www.patreon.com/Increments) or  Ko-fi (one time donation, https://ko-fi.com/increments). Thank you! 
How much explaining could a universal explainer explain if a universal explainer could explain explaining? Tell us at incrementspodcast@gmail.com.  
</description>
  <itunes:keywords>ask-us-anything, creativity, computation, universality</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>We debated calling this episode &quot;An ode to Michael,&quot; because we set out to do an AMA but only get through his first two questions. But never fear, there are only 20 questions, so at this rate we should be done the AMA by the end of 2024. Who said we weren&#39;t fans of longtermism? </p>

<p><strong>Questions</strong>:</p>

<ol>
<li>Hey do you guys have a Patreon page or any way to support you?</li>
<li><strong>(Michael)</strong> Not clear that humans are universal explainers. Standard argument for this is &quot;to assume o.w. is to appeal to the supernatural,&quot; but this argument is weak b/c it does not explain <em>why</em> humans could in principle explain everything. But all Deutsch&#39;s ideas rest on this axiom. It&#39;s almost tautological - there <em>could</em> be things humans cannot explain, but we wouldn&#39;t even know about these things b/c we wouldn&#39;t be able to explain them. I think this argument that humans are universal explainers and thus can achieve indefinite progress needs more rigor. It might be a step jump from animals to humans, but why could there not be more step jumps in intelligence beyond human intelligence that we do not even know about? I&#39;d love to get your thoughts on this.</li>
<li><strong>(Michael)</strong> Another pt I&#39;d love to get your perspectives on is the idea of the &quot;creative program.&quot; Standard discussion is &quot;humans are special because we are creative, and we don&#39;t know what the creative program is.&quot; But we need to make progress on creativity at some point and it kind of feels like we are using the word &quot;creativity&quot; as a vague suitcase word to encapsulate &quot;everything we don&#39;t yet know about intelligence.&quot; Simply saying &quot;humans are creative&quot; without properly defining what it means to be creative in a way that we can evaluate in machines is not helping us make progress on developing creative AI. It&#39;s unsatisfying to hear critiques of AI that say &quot;this AI model is not &#39;truly intelligent&#39; because it is not creative&quot; without also proposing a way to evaluate its creativity. In this sense, critiques of AI that say AI is &quot;not creative&quot; are bad explanations because these critiques are easy to vary. Without proposing a proper test for creativity that can actually be evaluated, it is not possible for us to conduct a test to refute the critique. I&#39;d love to get your thoughts on how we can construct evaluations for creativity in a way that enables us to make scientific progress on understanding the creative algorithm!</li>
</ol>

<p><strong>References</strong>:</p>

<ul>
<li><a href="https://podcasts.apple.com/us/podcast/episode-9-introduction-to-computational-theory/id1503194218?i=1000502266361" rel="nofollow">Episode 9: Introduction to Computational Theory</a>, <a href="https://podcasts.apple.com/us/podcast/the-theory-of-anything/id1503194218" rel="nofollow">Theory of Anything podcast</a></li>
<li>David Deutsch on Coleman Hughes&#39; podcast: <a href="https://en.padverb.com/er/conversations-with-coleman_rss-09-may-2023-multiverse-of-madness-with-david-deutsch" rel="nofollow">Multiverse of Madness</a> </li>
<li>John Cleese&#39;s excellent new book <a href="https://www.amazon.ca/Creativity-Short-Cheerful-John-Cleese/dp/0385348274" rel="nofollow">Creativity</a> </li>
</ul>

<p><strong>Contact us</strong></p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Check us out on youtube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
</ul>

<p><strong>Support</strong><br>
You can support the project on Patreon (monthly donations, <a href="https://www.patreon.com/Increments" rel="nofollow">https://www.patreon.com/Increments</a>) or  Ko-fi (one time donation, <a href="https://ko-fi.com/increments" rel="nofollow">https://ko-fi.com/increments</a>). Thank you! </p>

<p>How much explaining could a universal explainer explain if a universal explainer could explain explaining? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>We debated calling this episode &quot;An ode to Michael,&quot; because we set out to do an AMA but only get through his first two questions. But never fear, there are only 20 questions, so at this rate we should be done the AMA by the end of 2024. Who said we weren&#39;t fans of longtermism? </p>

<p><strong>Questions</strong>:</p>

<ol>
<li>Hey do you guys have a Patreon page or any way to support you?</li>
<li><strong>(Michael)</strong> Not clear that humans are universal explainers. Standard argument for this is &quot;to assume o.w. is to appeal to the supernatural,&quot; but this argument is weak b/c it does not explain <em>why</em> humans could in principle explain everything. But all Deutsch&#39;s ideas rest on this axiom. It&#39;s almost tautological - there <em>could</em> be things humans cannot explain, but we wouldn&#39;t even know about these things b/c we wouldn&#39;t be able to explain them. I think this argument that humans are universal explainers and thus can achieve indefinite progress needs more rigor. It might be a step jump from animals to humans, but why could there not be more step jumps in intelligence beyond human intelligence that we do not even know about? I&#39;d love to get your thoughts on this.</li>
<li><strong>(Michael)</strong> Another pt I&#39;d love to get your perspectives on is the idea of the &quot;creative program.&quot; Standard discussion is &quot;humans are special because we are creative, and we don&#39;t know what the creative program is.&quot; But we need to make progress on creativity at some point and it kind of feels like we are using the word &quot;creativity&quot; as a vague suitcase word to encapsulate &quot;everything we don&#39;t yet know about intelligence.&quot; Simply saying &quot;humans are creative&quot; without properly defining what it means to be creative in a way that we can evaluate in machines is not helping us make progress on developing creative AI. It&#39;s unsatisfying to hear critiques of AI that say &quot;this AI model is not &#39;truly intelligent&#39; because it is not creative&quot; without also proposing a way to evaluate its creativity. In this sense, critiques of AI that say AI is &quot;not creative&quot; are bad explanations because these critiques are easy to vary. Without proposing a proper test for creativity that can actually be evaluated, it is not possible for us to conduct a test to refute the critique. I&#39;d love to get your thoughts on how we can construct evaluations for creativity in a way that enables us to make scientific progress on understanding the creative algorithm!</li>
</ol>

<p><strong>References</strong>:</p>

<ul>
<li><a href="https://podcasts.apple.com/us/podcast/episode-9-introduction-to-computational-theory/id1503194218?i=1000502266361" rel="nofollow">Episode 9: Introduction to Computational Theory</a>, <a href="https://podcasts.apple.com/us/podcast/the-theory-of-anything/id1503194218" rel="nofollow">Theory of Anything podcast</a></li>
<li>David Deutsch on Coleman Hughes&#39; podcast: <a href="https://en.padverb.com/er/conversations-with-coleman_rss-09-may-2023-multiverse-of-madness-with-david-deutsch" rel="nofollow">Multiverse of Madness</a> </li>
<li>John Cleese&#39;s excellent new book <a href="https://www.amazon.ca/Creativity-Short-Cheerful-John-Cleese/dp/0385348274" rel="nofollow">Creativity</a> </li>
</ul>

<p><strong>Contact us</strong></p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Check us out on youtube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
</ul>

<p><strong>Support</strong><br>
You can support the project on Patreon (monthly donations, <a href="https://www.patreon.com/Increments" rel="nofollow">https://www.patreon.com/Increments</a>) or  Ko-fi (one time donation, <a href="https://ko-fi.com/increments" rel="nofollow">https://ko-fi.com/increments</a>). Thank you! </p>

<p>How much explaining could a universal explainer explain if a universal explainer could explain explaining? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#49 - AGI: Could The End Be Nigh? (With Rosie Campbell)</title>
  <link>https://www.incrementspodcast.com/49</link>
  <guid isPermaLink="false">d190df1f-0cf0-4161-ba5f-544066c08c1f</guid>
  <pubDate>Wed, 22 Mar 2023 10:15:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/d190df1f-0cf0-4161-ba5f-544066c08c1f.mp3" length="81494098" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>The delightful Rosie Campbell joins us on the podcast to debate AI, AGI, superintelligence, and rogue computer viruses. </itunes:subtitle>
  <itunes:duration>1:24:53</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/d/d190df1f-0cf0-4161-ba5f-544066c08c1f/cover.jpg?v=1"/>
  <description>When big bearded men wearing fedoras begin yelling at you that the end is nigh (https://www.youtube.com/watch?v=gA1sNLL6yg4&amp;ab_channel=BanklessShows) and superintelligence is about to kill us all, what should you do? Vaden says don't panic, and Ben is simply awestruck by the ability to grow a beard in the first place. 
To help us think through the potential risks and rewards of ever more impressive machine learning models, we invited Rosie Campbell on the podcast. Rosie is on the safety team at OpenAI and, while she's more worried about the existential risks of AI than we are, she's just as keen on some debate over a bottle of wine. 
We discuss:
- Whether machine learning poses an existential threat 
- How concerned we should be about existing AI 
- Whether deep learning can get us to artificial general intelligence (AGI)
- If AI safety is simply quality assurance
- How we can test whether an AI system is creative
References:
- Mathgen: Randomly generated math papers (https://thatsmathematics.com/mathgen/) 
Contact us
- Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
- Follow Rosie at @RosieCampbell or https://www.rosiecampbell.xyz/
- Check us out on YouTube at https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ
- Come join our Discord server! DM us on Twitter or send us an email to get a supersecret link
Prove you're creative by inventing the next big thing and then send it to us at incrementspodcast@gmail.com
 Special Guest: Rosie Campbell.
</description>
  <itunes:keywords>AI, existential risks, creativity, progress</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>When big bearded men wearing fedoras begin yelling at you that <a href="https://www.youtube.com/watch?v=gA1sNLL6yg4&ab_channel=BanklessShows" rel="nofollow">the end is nigh</a> and superintelligence is about to kill us all, what should you do? Vaden says don&#39;t panic, and Ben is simply awestruck by the ability to grow a beard in the first place. </p>

<p>To help us think through the potential risks and rewards of ever more impressive machine learning models, we invited Rosie Campbell on the podcast. Rosie is on the safety team at OpenAI and, while she&#39;s more worried about the existential risks of AI than we are, she&#39;s just as keen on some debate over a bottle of wine. </p>

<p><strong>We discuss:</strong></p>

<ul>
<li>Whether machine learning poses an existential threat </li>
<li>How concerned we should be about existing AI </li>
<li>Whether deep learning can get us to artificial <em>general</em> intelligence (AGI)</li>
<li>If AI safety is simply quality assurance</li>
<li>How we can test whether an AI system is creative</li>
</ul>

<p><strong>References:</strong></p>

<ul>
<li><a href="https://thatsmathematics.com/mathgen/" rel="nofollow">Mathgen: Randomly generated math papers</a> </li>
</ul>

<p><strong>Contact us</strong></p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Follow Rosie at @RosieCampbell or <a href="https://www.rosiecampbell.xyz/" rel="nofollow">https://www.rosiecampbell.xyz/</a></li>
<li>Check us out on YouTube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our Discord server! DM us on Twitter or send us an email to get a supersecret link</li>
</ul>

<p>Prove you&#39;re creative by inventing the next big thing and then send it to us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p>Special Guest: Rosie Campbell.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>When big bearded men wearing fedoras begin yelling at you that <a href="https://www.youtube.com/watch?v=gA1sNLL6yg4&ab_channel=BanklessShows" rel="nofollow">the end is nigh</a> and superintelligence is about to kill us all, what should you do? Vaden says don&#39;t panic, and Ben is simply awestruck by the ability to grow a beard in the first place. </p>

<p>To help us think through the potential risks and rewards of ever more impressive machine learning models, we invited Rosie Campbell on the podcast. Rosie is on the safety team at OpenAI and, while she&#39;s more worried about the existential risks of AI than we are, she&#39;s just as keen on some debate over a bottle of wine. </p>

<p><strong>We discuss:</strong></p>

<ul>
<li>Whether machine learning poses an existential threat </li>
<li>How concerned we should be about existing AI </li>
<li>Whether deep learning can get us to artificial <em>general</em> intelligence (AGI)</li>
<li>If AI safety is simply quality assurance</li>
<li>How we can test whether an AI system is creative</li>
</ul>

<p><strong>References:</strong></p>

<ul>
<li><a href="https://thatsmathematics.com/mathgen/" rel="nofollow">Mathgen: Randomly generated math papers</a> </li>
</ul>

<p><strong>Contact us</strong></p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Follow Rosie at @RosieCampbell or <a href="https://www.rosiecampbell.xyz/" rel="nofollow">https://www.rosiecampbell.xyz/</a></li>
<li>Check us out on YouTube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our Discord server! DM us on Twitter or send us an email to get a supersecret link</li>
</ul>

<p>Prove you&#39;re creative by inventing the next big thing and then send it to us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p>Special Guest: Rosie Campbell.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
