<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" encoding="UTF-8" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web02.fireside.fm</fireside:hostname>
    <fireside:genDate>Thu, 16 Apr 2026 01:14:52 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>Increments - Episodes Tagged with “Existential Risk”</title>
    <link>https://www.incrementspodcast.com/tags/existential%20risk</link>
    <pubDate>Tue, 19 Nov 2024 13:30:00 -0800</pubDate>
    <description>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>Science, Philosophy, Epistemology, Mayhem</itunes:subtitle>
    <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
    <itunes:summary>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/cover.jpg?v=18"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>Philosophy,Science,Ethics,Progress,Knowledge,Computer Science,Conversation,Error-Correction</itunes:keywords>
    <itunes:owner>
      <itunes:name>Ben Chugg and Vaden Masrani</itunes:name>
      <itunes:email>incrementspodcast@gmail.com</itunes:email>
    </itunes:owner>
<itunes:category text="Society &amp; Culture">
  <itunes:category text="Philosophy"/>
</itunes:category>
<itunes:category text="Science"/>
<item>
  <title>#77 (Bonus) - AI Doom Debate (w/ Liron Shapira)</title>
  <link>https://www.incrementspodcast.com/77</link>
  <guid isPermaLink="false">24e93eab-5281-418f-bddf-9516c7c5f8d7</guid>
  <pubDate>Tue, 19 Nov 2024 13:30:00 -0800</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/24e93eab-5281-418f-bddf-9516c7c5f8d7.mp3" length="137335802" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Part II of the great debate! Is AI about to kill everyone? Should you cash in on those vacation days now? </itunes:subtitle>
  <itunes:duration>2:21:22</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/2/24e93eab-5281-418f-bddf-9516c7c5f8d7/cover.jpg?v=2"/>
  <description>Back on Liron's Doom Debates podcast! Will we actually get around to the subject of superintelligent AI this time? Is it time to worry about the end of the world? Will Ben and Vaden emotionally recover from the devastating youtube comments from the last episode? 
Follow Liron on twitter (@liron) and check out the Doom Debates youtube channel (https://www.youtube.com/@DoomDebates) and podcast (https://podcasts.apple.com/us/podcast/doom-debates/id1751366208).  
We discuss
Definitions of "new knowledge" 
The reliance of deep learning on induction 
Can AIs be creative? 
The limits of statistical prediction 
Predictions of what deep learning cannot accomplish 
Can ChatGPT write funny jokes? 
Trends versus principles 
The psychological consequences of doomerism
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
The world is going to end soon, might as well get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
Was Vaden's two-week anti-debate bro reeducation camp successful? Tell us at incrementspodcast@gmail.com
 Special Guest: Liron Shapira.
</description>
  <itunes:keywords>AI, superintelligence, existential risk, novelty, induction, deep learning, comedy, creativity, knowledge</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Back on Liron&#39;s <strong>Doom Debates</strong> podcast! Will we actually get around to the subject of superintelligent AI this time? Is it time to worry about the end of the world? Will Ben and Vaden emotionally recover from the devastating youtube comments from the last episode? </p>

<p>Follow Liron on twitter (@liron) and check out the Doom Debates <a href="https://www.youtube.com/@DoomDebates" rel="nofollow">youtube channel</a> and <a href="https://podcasts.apple.com/us/podcast/doom-debates/id1751366208" rel="nofollow">podcast</a>.  </p>

<h1>We discuss</h1>

<ul>
<li>Definitions of &quot;new knowledge&quot; </li>
<li>The reliance of deep learning on induction </li>
<li>Can AIs be creative? </li>
<li>The limits of statistical prediction </li>
<li>Predictions of what deep learning cannot accomplish </li>
<li>Can ChatGPT write funny jokes? </li>
<li>Trends versus principles </li>
<li>The psychological consequences of doomerism</li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>The world is going to end soon, might as well get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>Was Vaden&#39;s two-week anti-debate bro reeducation camp successful? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p>Special Guest: Liron Shapira.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Back on Liron&#39;s <strong>Doom Debates</strong> podcast! Will we actually get around to the subject of superintelligent AI this time? Is it time to worry about the end of the world? Will Ben and Vaden emotionally recover from the devastating youtube comments from the last episode? </p>

<p>Follow Liron on twitter (@liron) and check out the Doom Debates <a href="https://www.youtube.com/@DoomDebates" rel="nofollow">youtube channel</a> and <a href="https://podcasts.apple.com/us/podcast/doom-debates/id1751366208" rel="nofollow">podcast</a>.  </p>

<h1>We discuss</h1>

<ul>
<li>Definitions of &quot;new knowledge&quot; </li>
<li>The reliance of deep learning on induction </li>
<li>Can AIs be creative? </li>
<li>The limits of statistical prediction </li>
<li>Predictions of what deep learning cannot accomplish </li>
<li>Can ChatGPT write funny jokes? </li>
<li>Trends versus principles </li>
<li>The psychological consequences of doomerism</li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>The world is going to end soon, might as well get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>Was Vaden&#39;s two-week anti-debate bro reeducation camp successful? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p>Special Guest: Liron Shapira.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#43 - Artificial General Intelligence and the AI Safety debate</title>
  <link>https://www.incrementspodcast.com/43</link>
  <guid isPermaLink="false">49557cb4-fb21-4217-84d4-137505705a3e</guid>
  <pubDate>Sun, 28 Aug 2022 15:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/49557cb4-fb21-4217-84d4-137505705a3e.mp3" length="65129742" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Is advanced AI going to kill everyone? How close are we to building AGI? Is current AI creative? Put aside your philosophy textbooks, because we have the answers. </itunes:subtitle>
  <itunes:duration>1:07:50</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/4/49557cb4-fb21-4217-84d4-137505705a3e/cover.jpg?v=1"/>
  <description>Some people think (https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities) that advanced AI is going to kill everyone. Some people don't (https://www.nytimes.com/2019/10/31/opinion/superintelligent-artificial-intelligence.html). Who to believe?  Fortunately, Ben and Vaden are here to sort out the question once and for all. No need to think for yourselves after listening to this one, we've got you covered. 
We discuss: 
- How well does math fit reality? Is that surprising? 
- Should artificial general intelligence (AGI) be considered "a person"? 
- How could AI possibly "go rogue?"
- Can we know if current AI systems are being creative? 
- Is misplaced AI fear hampering progress? 
References: 
- The Unreasonable Effectiveness of Mathematics (https://www.maths.ed.ac.uk/~v1ranick/papers/wigner.pdf)
- Prohibition on autonomous weapons letter (https://techlaw.uottawa.ca/bankillerai)
- Google employee conversation with chat bot (https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917)
- Gary Marcus on the Turing test (https://garymarcus.substack.com/p/nonsense-on-stilts)
- Melanie Mitchell essay (https://arxiv.org/pdf/2104.12871.pdf). 
- Did MIRI give up? Their (half-sarcastic?) death with dignity strategy (https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy) 
- Kerry Vaughan on slowing down (https://twitter.com/KerryLVaughan/status/1545423249013620736) AGI development. 
Contact us 
- Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
- Check us out on youtube at https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ
- Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Which prompt would you send to GPT-3 in order to end the world? Tell us before you're turned into a paperclip over at incrementspodcast@gmail.com
</description>
  <itunes:keywords>AGI, AI Safety, existential risk, empiricism</itunes:keywords>
  <content:encoded>
    <![CDATA[<p><a href="https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities" rel="nofollow">Some people think</a> that advanced AI is going to kill everyone. <a href="https://www.nytimes.com/2019/10/31/opinion/superintelligent-artificial-intelligence.html" rel="nofollow">Some people don&#39;t</a>. Who to believe?  Fortunately, Ben and Vaden are here to sort out the question once and for all. No need to think for yourselves after listening to this one, we&#39;ve got you covered. </p>

<p><strong>We discuss</strong>: </p>

<ul>
<li>How well does math fit reality? Is that surprising? </li>
<li>Should artificial general intelligence (AGI) be considered &quot;a person&quot;? </li>
<li>How could AI possibly &quot;go rogue?&quot;</li>
<li>Can we know if current AI systems are being creative? </li>
<li>Is misplaced AI fear hampering progress? </li>
</ul>

<p><strong>References</strong>: </p>

<ul>
<li><a href="https://www.maths.ed.ac.uk/%7Ev1ranick/papers/wigner.pdf" rel="nofollow">The Unreasonable effectiveness of mathematics</a></li>
<li><a href="https://techlaw.uottawa.ca/bankillerai" rel="nofollow">Prohibition on autonomous weapons letter</a></li>
<li><a href="https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917" rel="nofollow">Google employee conversation with chat bot</a></li>
<li><a href="https://garymarcus.substack.com/p/nonsense-on-stilts" rel="nofollow">Gary marcus on the Turing test</a></li>
<li>Melanie Mitchell <a href="https://arxiv.org/pdf/2104.12871.pdf" rel="nofollow">essay</a>. </li>
<li>Did MIRI give up? Their (half-sarcastic?) <a href="https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy" rel="nofollow">death with dignity strategy</a> </li>
<li>Kerry Vaughan on <a href="https://twitter.com/KerryLVaughan/status/1545423249013620736" rel="nofollow">slowing down</a> AGI development. </li>
</ul>

<p><strong>Contact us</strong> </p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Check us out on youtube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
</ul>

<p>Which prompt would you send to GPT-3 in order to end the world? Tell us before you&#39;re turned into a paperclip over at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p><a href="https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities" rel="nofollow">Some people think</a> that advanced AI is going to kill everyone. <a href="https://www.nytimes.com/2019/10/31/opinion/superintelligent-artificial-intelligence.html" rel="nofollow">Some people don&#39;t</a>. Who to believe?  Fortunately, Ben and Vaden are here to sort out the question once and for all. No need to think for yourselves after listening to this one, we&#39;ve got you covered. </p>

<p><strong>We discuss</strong>: </p>

<ul>
<li>How well does math fit reality? Is that surprising? </li>
<li>Should artificial general intelligence (AGI) be considered &quot;a person&quot;? </li>
<li>How could AI possibly &quot;go rogue?&quot;</li>
<li>Can we know if current AI systems are being creative? </li>
<li>Is misplaced AI fear hampering progress? </li>
</ul>

<p><strong>References</strong>: </p>

<ul>
<li><a href="https://www.maths.ed.ac.uk/%7Ev1ranick/papers/wigner.pdf" rel="nofollow">The Unreasonable effectiveness of mathematics</a></li>
<li><a href="https://techlaw.uottawa.ca/bankillerai" rel="nofollow">Prohibition on autonomous weapons letter</a></li>
<li><a href="https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917" rel="nofollow">Google employee conversation with chat bot</a></li>
<li><a href="https://garymarcus.substack.com/p/nonsense-on-stilts" rel="nofollow">Gary marcus on the Turing test</a></li>
<li>Melanie Mitchell <a href="https://arxiv.org/pdf/2104.12871.pdf" rel="nofollow">essay</a>. </li>
<li>Did MIRI give up? Their (half-sarcastic?) <a href="https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy" rel="nofollow">death with dignity strategy</a> </li>
<li>Kerry Vaughan on <a href="https://twitter.com/KerryLVaughan/status/1545423249013620736" rel="nofollow">slowing down</a> AGI development. </li>
</ul>

<p><strong>Contact us</strong> </p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Check us out on youtube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
</ul>

<p>Which prompt would you send to GPT-3 in order to end the world? Tell us before you&#39;re turned into a paperclip over at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#11 - Debating Existential Risk</title>
  <link>https://www.incrementspodcast.com/11</link>
  <guid isPermaLink="false">Buzzsprout-5475121</guid>
  <pubDate>Wed, 16 Sep 2020 16:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/4ed5459c-bf59-432a-966d-33c3dd5450f0.mp3" length="64654289" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle></itunes:subtitle>
  <itunes:duration>1:29:17</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/4/4ed5459c-bf59-432a-966d-33c3dd5450f0/cover.jpg?v=1"/>
  <description>&lt;p&gt;Vaden's arguments against Bayesian philosophy and existential risk are examined by someone who might actually know what they're talking about, i.e., not Ben. After writing a critique of our conversation in Episode 7, which started off &lt;a href="https://vmasrani.github.io/blog/2020/mauricio_first_response/"&gt;a series of blog posts&lt;/a&gt;, our good friend Mauricio (who studies political science, economics, and philosophy) kindly agrees to come on the podcast and try to figure out who's more confused. Does Vaden convert? &lt;br&gt;&lt;br&gt;
We apologize for the long wait between this episode and the last one. It was all Vaden's fault. &lt;br&gt;&lt;br&gt;Hit us up at &lt;em&gt;incrementspodcast@gmail.com&lt;/em&gt;!&lt;br&gt;&lt;br&gt;&lt;em&gt;Note from Vaden:  Upon relistening, I've just learned my new computer chair clicks in the most annoying possible way every time I get enthusiastic. My apologies - I'll work on being less enthusiastic in future episodes.  &lt;br&gt;&lt;br&gt;Second note from Vaden: Yeesh lots of audio issues with this episode - I replaced the file with a cleaned up version at 5:30pm September 17th. Still learning... &lt;br&gt;&lt;/em&gt;&lt;br&gt;&lt;/p&gt; 
</description>
  <itunes:keywords>existential risk, probability, bayesianism</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Vaden&apos;s arguments against Bayesian philosophy and existential risk are examined by someone who might actually know what they&apos;re talking about, i.e., not Ben. After writing a critique of our conversation in Episode 7, which started off <a href='https://vmasrani.github.io/blog/2020/mauricio_first_response/'>a series of blog posts</a>, our good friend Mauricio (who studies political science, economics, and philosophy) kindly agrees to come on the podcast and try to figure out who&apos;s more confused. Does Vaden convert? <br/><br/>
We apologize for the long wait between this episode and the last one. It was all Vaden&apos;s fault. <br/><br/>Hit us up at <em>incrementspodcast@gmail.com</em>!<br/><br/><em>Note from Vaden:  Upon relistening, I&apos;ve just learned my new computer chair clicks in the most annoying possible way every time I get enthusiastic. My apologies - I&apos;ll work on being less enthusiastic in future episodes.  <br/><br/>Second note from Vaden: Yeesh lots of audio issues with this episode - I replaced the file with a cleaned up version at 5:30pm September 17th. Still learning... <br/></em><br/></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Vaden&apos;s arguments against Bayesian philosophy and existential risk are examined by someone who might actually know what they&apos;re talking about, i.e., not Ben. After writing a critique of our conversation in Episode 7, which started off <a href='https://vmasrani.github.io/blog/2020/mauricio_first_response/'>a series of blog posts</a>, our good friend Mauricio (who studies political science, economics, and philosophy) kindly agrees to come on the podcast and try to figure out who&apos;s more confused. Does Vaden convert? <br/><br/>
We apologize for the long wait between this episode and the last one. It was all Vaden&apos;s fault. <br/><br/>Hit us up at <em>incrementspodcast@gmail.com</em>!<br/><br/><em>Note from Vaden:  Upon relistening, I&apos;ve just learned my new computer chair clicks in the most annoying possible way every time I get enthusiastic. My apologies - I&apos;ll work on being less enthusiastic in future episodes.  <br/><br/>Second note from Vaden: Yeesh lots of audio issues with this episode - I replaced the file with a cleaned up version at 5:30pm September 17th. Still learning... <br/></em><br/></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#7 - Philosophy of Probability II: Existential Risks </title>
  <link>https://www.incrementspodcast.com/7</link>
  <guid isPermaLink="false">Buzzsprout-4476590</guid>
  <pubDate>Tue, 07 Jul 2020 11:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/07a038fa-d44d-40e6-9942-39879969c038.mp3" length="70590859" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle></itunes:subtitle>
  <itunes:duration>1:37:32</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/0/07a038fa-d44d-40e6-9942-39879969c038/cover.jpg?v=1"/>
  <description>&lt;p&gt;Back down to earth we go! Or try to, at least. In this episode Ben and Vaden attempt to ground their previous discussion on the philosophy of probability by focusing on a real-world example, namely the book The Precipice by Toby Ord, recently featured on the Making Sense podcast. Vaden believes in arguments, and Ben argues for beliefs. &lt;br&gt;&lt;br&gt;&lt;b&gt;Quotes&lt;/b&gt;&lt;br&gt;"&lt;em&gt;A common approach to estimating the chance of an unprecedented event with earth-shaking consequences is to take a skeptical stance: to start with an extremely small probability and only raise it from there when a large amount of hard evidence is presented. But I disagree. Instead, I think the right method is to start with a probability that reflects our overall impressions, then adjust this in light of the scientific evidence. When there is a lot of evidence, these approaches converge. But when there isn’t, the starting point can matter. &lt;br&gt;&lt;br&gt;In the case of artificial intelligence, everyone agrees the evidence and arguments are far from watertight, but the question is where does this leave us? Very roughly, my approach is to start with the overall view of the expert community that there is something like a one in two chance that AI agents capable of outperforming humans in almost every task will be developed in the coming century. And conditional on that happening, we shouldn’t be shocked if these agents that outperform us across the board were to inherit our future. Especially if when looking into the details, we see great challenges in aligning these agents with our values.&lt;/em&gt;"&lt;br&gt;- The Precipice, p. 165&lt;br&gt;&lt;br&gt;"&lt;em&gt;Most of the risks arising from long-term trends remain beyond revealing quantification. What is the probability of China’s spectacular economic expansion stalling or even going into reverse? What is the likelihood that Islamic terrorism will develop into a massive, determined quest to destroy the West? Probability estimates of these outcomes based on expert opinion provide at best some constraining guidelines but do not offer any reliable basis for relative comparisons of diverse events or their interrelations. What is the likelihood that a massive wave of global Islamic terrorism will accelerate the Western transition to non–fossil fuel energies? To what extent will the globalization trend be enhanced or impeded by a faster-than-expected sea level rise or by a precipitous demise of the United States? Setting such odds or multipliers is beyond any meaningful quantification.&lt;/em&gt;" &lt;br&gt;- Global Catastrophes and Trends, p. 226&lt;br&gt;&lt;br&gt;"&lt;em&gt;And while computers have been used for many years to assemble other computers and machines, such deployments do not indicate any imminent self-reproductive capability. All those processes require human actions to initiate them, raw materials to build the hardware, and above all, energy to run them. I find it hard to visualize how those machines would (particularly in less than a generation) launch, integrate, and sustain an entirely independent exploration, extraction, conversion, and delivery of the requisite energies."&lt;/em&gt;&lt;br&gt;- Global Catastrophes and Trends, p. 26&lt;br&gt;&lt;br&gt;&lt;b&gt;References:&lt;/b&gt;&lt;br&gt;- &lt;a href="https://www.amazon.ca/dp/B08BSZ52TN/ref=dp-kindle-redirect?_encoding=UTF8&amp;amp;btkr=1"&gt;Global Catastrophes and Trends: The Next Fifty Years&lt;/a&gt;&lt;br&gt;- &lt;a href="https://www.amazon.ca/dp/B07V9GHKYP/ref=dp-kindle-redirect?_encoding=UTF8&amp;amp;btkr=1"&gt;The Precipice: Existential Risk and the Future of Humanity&lt;/a&gt;&lt;br&gt;- &lt;a href="https://samharris.org/podcasts/208-existential-risk/"&gt;Making Sense podcast w/ Ord&lt;/a&gt; (Clip starts around 40:00)&lt;br&gt;- &lt;a href="https://en.wikipedia.org/wiki/Mere_addition_paradox"&gt;Repugnant conclusion&lt;/a&gt;&lt;br&gt;- &lt;a href="https://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem"&gt;Arrow's theorem&lt;/a&gt;&lt;br&gt;- &lt;a href="https://en.wikipedia.org/wiki/Apportionment_paradox"&gt;Balinski–Young theorem&lt;/a&gt;&lt;/p&gt;
</description>
  <itunes:keywords>existential risk, AI, bayesianism, expected value</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Back down to earth we go! Or try to, at least. In this episode Ben and Vaden attempt to ground their previous discussion on the philosophy of probability by focusing on a real-world example, namely the book The Precipice by Toby Ord, recently featured on the Making Sense podcast. Vaden believes in arguments, and Ben argues for beliefs. <br/><br/><b>Quotes</b><br/>&quot;<em>A common approach to estimating the chance of an unprecedented event with earth-shaking consequences is to take a skeptical stance: to start with an extremely small probability and only raise it from there when a large amount of hard evidence is presented. But I disagree. Instead, I think the right method is to start with a probability that reflects our overall impressions, then adjust this in light of the scientific evidence. When there is a lot of evidence, these approaches converge. But when there isn’t, the starting point can matter. <br/><br/>In the case of artificial intelligence, everyone agrees the evidence and arguments are far from watertight, but the question is where does this leave us? Very roughly, my approach is to start with the overall view of the expert community that there is something like a one in two chance that AI agents capable of outperforming humans in almost every task will be developed in the coming century. And conditional on that happening, we shouldn’t be shocked if these agents that outperform us across the board were to inherit our future. Especially if when looking into the details, we see great challenges in aligning these agents with our values.</em>&quot;<br/>- The Precipice, p. 165<br/><br/>&quot;<em>Most of the risks arising from long-term trends remain beyond revealing quantification. What is the probability of China’s spectacular economic expansion stalling or even going into reverse? What is the likelihood that Islamic terrorism will develop into a massive, determined quest to destroy the West? Probability estimates of these outcomes based on expert opinion provide at best some constraining guidelines but do not offer any reliable basis for relative comparisons of diverse events or their interrelations. What is the likelihood that a massive wave of global Islamic terrorism will accelerate the Western transition to non–fossil fuel energies? To what extent will the globalization trend be enhanced or impeded by a faster-than-expected sea level rise or by a precipitous demise of the United States? Setting such odds or multipliers is beyond any meaningful quantification.</em>&quot; <br/>- Global Catastrophes and Trends, p. 226<br/><br/>&quot;<em>And while computers have been used for many years to assemble other computers and machines, such deployments do not indicate any imminent self-reproductive capability. All those processes require human actions to initiate them, raw materials to build the hardware, and above all, energy to run them. I find it hard to visualize how those machines would (particularly in less than a generation) launch, integrate, and sustain an entirely independent exploration, extraction, conversion, and delivery of the requisite energies.&quot;</em><br/>- Global Catastrophes and Trends, p. 26<br/><br/><b>References:</b><br/>- <a href='https://www.amazon.ca/dp/B08BSZ52TN/ref=dp-kindle-redirect?_encoding=UTF8&amp;btkr=1'>Global Catastrophes and Trends: The Next Fifty Years</a><br/>- <a href='https://www.amazon.ca/dp/B07V9GHKYP/ref=dp-kindle-redirect?_encoding=UTF8&amp;btkr=1'>The Precipice: Existential Risk and the Future of Humanity</a><br/>- <a href='https://samharris.org/podcasts/208-existential-risk/'>Making Sense podcast w/ Ord</a> (Clip starts around 40:00)<br/>- <a href='https://en.wikipedia.org/wiki/Mere_addition_paradox'>Repugnant conclusion</a><br/>- <a href='https://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem'>Arrow&apos;s theorem</a><br/>- <a href='https://en.wikipedia.org/wiki/Apportionment_paradox'>Balinski–Young theorem</a></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Back down to earth we go! Or try to, at least. In this episode Ben and Vaden attempt to ground their previous discussion on the philosophy of probability by focusing on a real-world example, namely the book The Precipice by Toby Ord, recently featured on the Making Sense podcast. Vaden believes in arguments, and Ben argues for beliefs. <br/><br/><b>Quotes</b><br/>&quot;<em>A common approach to estimating the chance of an unprecedented event with earth-shaking consequences is to take a skeptical stance: to start with an extremely small probability and only raise it from there when a large amount of hard evidence is presented. But I disagree. Instead, I think the right method is to start with a probability that reflects our overall impressions, then adjust this in light of the scientific evidence. When there is a lot of evidence, these approaches converge. But when there isn’t, the starting point can matter. <br/><br/>In the case of artificial intelligence, everyone agrees the evidence and arguments are far from watertight, but the question is where does this leave us? Very roughly, my approach is to start with the overall view of the expert community that there is something like a one in two chance that AI agents capable of outperforming humans in almost every task will be developed in the coming century. And conditional on that happening, we shouldn’t be shocked if these agents that outperform us across the board were to inherit our future. Especially if when looking into the details, we see great challenges in aligning these agents with our values.</em>&quot;<br/>- The Precipice, p. 165<br/><br/>&quot;<em>Most of the risks arising from long-term trends remain beyond revealing quantification. What is the probability of China’s spectacular economic expansion stalling or even going into reverse? What is the likelihood that Islamic terrorism will develop into a massive, determined quest to destroy the West? Probability estimates of these outcomes based on expert opinion provide at best some constraining guidelines but do not offer any reliable basis for relative comparisons of diverse events or their interrelations. What is the likelihood that a massive wave of global Islamic terrorism will accelerate the Western transition to non–fossil fuel energies? To what extent will the globalization trend be enhanced or impeded by a faster-than-expected sea level rise or by a precipitous demise of the United States? Setting such odds or multipliers is beyond any meaningful quantification.</em>&quot; <br/>- Global Catastrophes and Trends, p. 226<br/><br/>&quot;<em>And while computers have been used for many years to assemble other computers and machines, such deployments do not indicate any imminent self-reproductive capability. All those processes require human actions to initiate them, raw materials to build the hardware, and above all, energy to run them. I find it hard to visualize how those machines would (particularly in less than a generation) launch, integrate, and sustain an entirely independent exploration, extraction, conversion, and delivery of the requisite energies.&quot;</em><br/>- Global Catastrophes and Trends, p. 26<br/><br/><b>References:</b><br/>- <a href='https://www.amazon.ca/dp/B08BSZ52TN/ref=dp-kindle-redirect?_encoding=UTF8&amp;btkr=1'>Global Catastrophes and Trends: The Next Fifty Years</a><br/>- <a href='https://www.amazon.ca/dp/B07V9GHKYP/ref=dp-kindle-redirect?_encoding=UTF8&amp;btkr=1'>The Precipice: Existential Risk and the Future of Humanity</a><br/>- <a href='https://samharris.org/podcasts/208-existential-risk/'>Making Sense podcast w/ Ord</a> (Clip starts around 40:00)<br/>- <a href='https://en.wikipedia.org/wiki/Mere_addition_paradox'>Repugnant conclusion</a><br/>- <a href='https://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem'>Arrow&apos;s theorem</a><br/>- <a href='https://en.wikipedia.org/wiki/Apportionment_paradox'>Balinski–Young theorem</a></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
