<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" encoding="UTF-8" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web01.fireside.fm</fireside:hostname>
    <fireside:genDate>Wed, 29 Apr 2026 21:48:53 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>Increments - Episodes Tagged with “Existential Risks”</title>
    <link>https://www.incrementspodcast.com/tags/existential%20risks</link>
    <pubDate>Wed, 22 Mar 2023 10:15:00 -0700</pubDate>
    <description>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>Science, Philosophy, Epistemology, Mayhem</itunes:subtitle>
    <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
    <itunes:summary>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/cover.jpg?v=18"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>Philosophy,Science,Ethics,Progress,Knowledge,Computer Science,Conversation,Error-Correction</itunes:keywords>
    <itunes:owner>
      <itunes:name>Ben Chugg and Vaden Masrani</itunes:name>
      <itunes:email>incrementspodcast@gmail.com</itunes:email>
    </itunes:owner>
<itunes:category text="Society &amp; Culture">
  <itunes:category text="Philosophy"/>
</itunes:category>
<itunes:category text="Science"/>
<item>
  <title>#49 - AGI: Could The End Be Nigh? (With Rosie Campbell)</title>
  <link>https://www.incrementspodcast.com/49</link>
  <guid isPermaLink="false">d190df1f-0cf0-4161-ba5f-544066c08c1f</guid>
  <pubDate>Wed, 22 Mar 2023 10:15:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/d190df1f-0cf0-4161-ba5f-544066c08c1f.mp3" length="81494098" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>The delightful Rosie Campbell joins us on the podcast to debate AI, AGI, superintelligence, and rogue computer viruses. </itunes:subtitle>
  <itunes:duration>1:24:53</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/d/d190df1f-0cf0-4161-ba5f-544066c08c1f/cover.jpg?v=1"/>
  <description>When big bearded men wearing fedoras begin yelling at you that the end is nigh (https://www.youtube.com/watch?v=gA1sNLL6yg4&amp;ab_channel=BanklessShows) and superintelligence is about to kill us all, what should you do? Vaden says don't panic, and Ben is simply awestruck by the ability to grow a beard in the first place. 
To help us think through the potential risks and rewards of ever more impressive machine learning models, we invited Rosie Campbell on the podcast. Rosie is on the safety team at OpenAI and, while she's more worried about the existential risks of AI than we are, she's just as keen on some debate over a bottle of wine. 
We discuss:
- Whether machine learning poses an existential threat 
- How concerned we should be about existing AI 
- Whether deep learning can get us to artificial general intelligence (AGI)
- Whether AI safety is simply quality assurance
- How we can test whether an AI system is creative
References:
- Mathgen: Randomly generated math papers (https://thatsmathematics.com/mathgen/) 
Contact us
- Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
- Follow Rosie at @RosieCampbell or https://www.rosiecampbell.xyz/
- Check us out on YouTube at https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ
- Come join our Discord server! DM us on Twitter or send us an email to get a supersecret link
Prove you're creative by inventing the next big thing and then send it to us at incrementspodcast@gmail.com
 Special Guest: Rosie Campbell.
</description>
  <itunes:keywords>AI, existential risks, creativity, progress</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>When big bearded men wearing fedoras begin yelling at you that <a href="https://www.youtube.com/watch?v=gA1sNLL6yg4&ab_channel=BanklessShows" rel="nofollow">the end is nigh</a> and superintelligence is about to kill us all, what should you do? Vaden says don&#39;t panic, and Ben is simply awestruck by the ability to grow a beard in the first place. </p>

<p>To help us think through the potential risks and rewards of ever more impressive machine learning models, we invited Rosie Campbell on the podcast. Rosie is on the safety team at OpenAI and, while she&#39;s more worried about the existential risks of AI than we are, she&#39;s just as keen on some debate over a bottle of wine. </p>

<p><strong>We discuss:</strong></p>

<ul>
<li>Whether machine learning poses an existential threat </li>
<li>How concerned we should be about existing AI </li>
<li>Whether deep learning can get us to artificial <em>general</em> intelligence (AGI)</li>
<li>Whether AI safety is simply quality assurance</li>
<li>How we can test whether an AI system is creative</li>
</ul>

<p><strong>References:</strong></p>

<ul>
<li><a href="https://thatsmathematics.com/mathgen/" rel="nofollow">Mathgen: Randomly generated math papers</a> </li>
</ul>

<p><strong>Contact us</strong></p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Follow Rosie at @RosieCampbell or <a href="https://www.rosiecampbell.xyz/" rel="nofollow">https://www.rosiecampbell.xyz/</a></li>
<li>Check us out on YouTube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our Discord server! DM us on Twitter or send us an email to get a supersecret link</li>
</ul>

<p>Prove you&#39;re creative by inventing the next big thing and then send it to us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p>Special Guest: Rosie Campbell.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>When big bearded men wearing fedoras begin yelling at you that <a href="https://www.youtube.com/watch?v=gA1sNLL6yg4&ab_channel=BanklessShows" rel="nofollow">the end is nigh</a> and superintelligence is about to kill us all, what should you do? Vaden says don&#39;t panic, and Ben is simply awestruck by the ability to grow a beard in the first place. </p>

<p>To help us think through the potential risks and rewards of ever more impressive machine learning models, we invited Rosie Campbell on the podcast. Rosie is on the safety team at OpenAI and, while she&#39;s more worried about the existential risks of AI than we are, she&#39;s just as keen on some debate over a bottle of wine. </p>

<p><strong>We discuss:</strong></p>

<ul>
<li>Whether machine learning poses an existential threat </li>
<li>How concerned we should be about existing AI </li>
<li>Whether deep learning can get us to artificial <em>general</em> intelligence (AGI)</li>
<li>Whether AI safety is simply quality assurance</li>
<li>How we can test whether an AI system is creative</li>
</ul>

<p><strong>References:</strong></p>

<ul>
<li><a href="https://thatsmathematics.com/mathgen/" rel="nofollow">Mathgen: Randomly generated math papers</a> </li>
</ul>

<p><strong>Contact us</strong></p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Follow Rosie at @RosieCampbell or <a href="https://www.rosiecampbell.xyz/" rel="nofollow">https://www.rosiecampbell.xyz/</a></li>
<li>Check us out on YouTube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our Discord server! DM us on Twitter or send us an email to get a supersecret link</li>
</ul>

<p>Prove you&#39;re creative by inventing the next big thing and then send it to us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p>Special Guest: Rosie Campbell.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
