<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web01.fireside.fm</fireside:hostname>
    <fireside:genDate>Sat, 16 May 2026 10:17:45 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>Increments - Episodes Tagged with “Longtermism”</title>
    <link>https://www.incrementspodcast.com/tags/longtermism</link>
    <pubDate>Mon, 19 Dec 2022 12:30:00 -0800</pubDate>
    <description>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>Science, Philosophy, Epistemology, Mayhem</itunes:subtitle>
    <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
    <itunes:summary>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/cover.jpg?v=18"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>Philosophy,Science,Ethics,Progress,Knowledge,Computer Science,Conversation,Error-Correction</itunes:keywords>
    <itunes:owner>
      <itunes:name>Ben Chugg and Vaden Masrani</itunes:name>
      <itunes:email>incrementspodcast@gmail.com</itunes:email>
    </itunes:owner>
<itunes:category text="Society &amp; Culture">
  <itunes:category text="Philosophy"/>
</itunes:category>
<itunes:category text="Science"/>
<item>
  <title>#46 (Bonus) - Arguing about probability (with Nick Anyos)</title>
  <link>https://www.incrementspodcast.com/46</link>
  <guid isPermaLink="false">4b26dbf2-7bcd-44e6-ac65-c3dbca70c897</guid>
  <pubDate>Mon, 19 Dec 2022 12:30:00 -0800</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/4b26dbf2-7bcd-44e6-ac65-c3dbca70c897.mp3" length="85872117" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Ben and Vaden make a guest appearance on Nick Anyos' podcast to discuss criticisms of effective altruism. As usual, they end up arguing about probability for most of it.</itunes:subtitle>
  <itunes:duration>1:59:16</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/4/4b26dbf2-7bcd-44e6-ac65-c3dbca70c897/cover.jpg?v=1"/>
  <description>&lt;p&gt;We make a guest appearance on Nick Anyos' podcast to talk about effective altruism, longtermism, and probability. Nick (very politely) pushes back on our anti-Bayesian credo, and we get deep into the weeds of probability and epistemology. &lt;/p&gt;

&lt;p&gt;You can find Nick's podcast on institutional design &lt;a href="https://institutionaldesign.podbean.com/" target="_blank" rel="nofollow noopener"&gt;here&lt;/a&gt;, and his substack &lt;a href="https://institutionaldesign.substack.com/?utm_source=substack&amp;amp;utm_medium=web&amp;amp;utm_campaign=substack_profile" target="_blank" rel="nofollow noopener"&gt;here&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We discuss:&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The lack of feedback loops in longtermism &lt;/li&gt;
&lt;li&gt;Whether quantifying your beliefs is helpful &lt;/li&gt;
&lt;li&gt;Objective versus subjective knowledge &lt;/li&gt;
&lt;li&gt;The difference between prediction and explanation&lt;/li&gt;
&lt;li&gt;The difference between Bayesian epistemology and Bayesian statistics&lt;/li&gt;
&lt;li&gt;Statistical modelling and when statistics is useful &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Links&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="http://www.stat.columbia.edu/%7Egelman/research/published/philosophy.pdf" target="_blank" rel="nofollow noopener"&gt;Philosophy and the practice of Bayesian statistics&lt;/a&gt; by Andrew Gelman and Cosma Shalizi&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations" target="_blank" rel="nofollow noopener"&gt;EA forum post&lt;/a&gt; showing all forecasts beyond a year out are uncalibrated. &lt;/li&gt;
&lt;li&gt;&lt;p&gt;Vaclav Smil quote in which he predicts a pandemic by 2021:&lt;br&gt;
 &amp;gt; &lt;em&gt;The following realities indicate the imminence of the risk. The typical frequency of influenza pandemics was once every 50–60 years between 1700 and 1889 (the longest known gap was 52 years, between the pandemics of 1729–1733 and 1781–1782) and only once every 10–40 years since 1889. The recurrence interval, calculated simply as the mean time elapsed between the last six known pandemics, is about 28 years, with the extremes of 6 and 53 years. Adding the mean and the highest interval to 1968 gives a span between 1996 and 2021. We are, probabilistically speaking, very much inside a high-risk zone.&lt;/em&gt;&lt;br&gt;
 &amp;gt; &lt;br&gt;
 &amp;gt; &lt;em&gt;- Global Catastrophes and Trends, p.46&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reference for Tetlock's superforecasters failing to predict the pandemic. &lt;a href="https://wearenotsaved.com/2020/04/18/pandemic-uncovers-the-ridiculousness-of-superforecasting/" target="_blank" rel="nofollow noopener"&gt;"On February 20th, Tetlock’s superforecasters predicted only a 3% chance that there would be 200,000+ coronavirus cases a month later (there were)."&lt;/a&gt; &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Contact us&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani&lt;/li&gt;
&lt;li&gt;Check us out on youtube at &lt;a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" target="_blank" rel="nofollow noopener"&gt;https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Come join our discord server! DM us on twitter or send us an email to get a supersecret link&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Errata&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;At the beginning of the episode Vaden says he hasn't been interviewed on another podcast before. He forgot &lt;a href="https://www.thedeclarationonline.com/podcast/2019/7/23/chesto-and-vaden-debatecast" target="_blank" rel="nofollow noopener"&gt;his appearance&lt;/a&gt; on The Declaration Podcast in 2019, which will be appearing as a bonus episode on our feed in the coming weeks. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sick of hearing us talk about this subject? Understandable! Send topic suggestions over to &lt;a href="mailto:incrementspodcast@gmail.com" target="_blank" rel="nofollow noopener"&gt;incrementspodcast@gmail.com&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Photo credit: &lt;a href="http://www.obrien-studio.com/" target="_blank" rel="nofollow noopener"&gt;James O’Brien&lt;/a&gt; for &lt;a href="https://www.quantamagazine.org/where-quantum-probability-comes-from-20190909/" target="_blank" rel="nofollow noopener"&gt;Quanta Magazine&lt;/a&gt; &lt;/p&gt;
</description>
  <itunes:keywords>probability, longtermism, effective altruism, bayesianism, statistics</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>We make a guest appearance on Nick Anyos&#39; podcast to talk about effective altruism, longtermism, and probability. Nick (very politely) pushes back on our anti-Bayesian credo, and we get deep into the weeds of probability and epistemology. </p>

<p>You can find Nick&#39;s podcast on institutional design <a href="https://institutionaldesign.podbean.com/" rel="nofollow">here</a>, and his substack <a href="https://institutionaldesign.substack.com/?utm_source=substack&utm_medium=web&utm_campaign=substack_profile" rel="nofollow">here</a>. </p>

<p><strong>We discuss:</strong> </p>

<ul>
<li>The lack of feedback loops in longtermism </li>
<li>Whether quantifying your beliefs is helpful </li>
<li>Objective versus subjective knowledge </li>
<li>The difference between prediction and explanation</li>
<li>The difference between Bayesian epistemology and Bayesian statistics</li>
<li>Statistical modelling and when statistics is useful </li>
</ul>

<p><strong>Links</strong></p>

<ul>
<li><a href="http://www.stat.columbia.edu/%7Egelman/research/published/philosophy.pdf" rel="nofollow">Philosophy and the practice of Bayesian statistics</a> by Andrew Gelman and Cosma Shalizi</li>
<li><a href="https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations" rel="nofollow">EA forum post</a> showing all forecasts beyond a year out are uncalibrated. </li>
<li><p>Vaclav Smil quote in which he predicts a pandemic by 2021:</p>

<blockquote>
<p><em>The following realities indicate the imminence of the risk. The typical frequency of influenza pandemics was once every 50–60 years between 1700 and 1889 (the longest known gap was 52 years, between the pandemics of 1729–1733 and 1781–1782) and only once every 10–40 years since 1889. The recurrence interval, calculated simply as the mean time elapsed between the last six known pandemics, is about 28 years, with the extremes of 6 and 53 years. Adding the mean and the highest interval to 1968 gives a span between 1996 and 2021. We are, probabilistically speaking, very much inside a high-risk zone.</em></p>

<p><em>- Global Catastrophes and Trends, p.46</em></p>
</blockquote></li>
<li><p>Reference for Tetlock&#39;s superforecasters failing to predict the pandemic. <a href="https://wearenotsaved.com/2020/04/18/pandemic-uncovers-the-ridiculousness-of-superforecasting/" rel="nofollow">&quot;On February 20th, Tetlock’s superforecasters predicted only a 3% chance that there would be 200,000+ coronavirus cases a month later (there were).&quot;</a> </p></li>
</ul>

<p><strong>Contact us</strong></p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Check us out on youtube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
</ul>

<p><strong>Errata</strong></p>

<ul>
<li>At the beginning of the episode Vaden says he hasn&#39;t been interviewed on another podcast before. He forgot <a href="https://www.thedeclarationonline.com/podcast/2019/7/23/chesto-and-vaden-debatecast" rel="nofollow">his appearance</a> on The Declaration Podcast in 2019, which will be appearing as a bonus episode on our feed in the coming weeks. </li>
</ul>

<p>Sick of hearing us talk about this subject? Understandable! Send topic suggestions over to <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p>

<p>Photo credit: <a href="http://www.obrien-studio.com/" rel="nofollow">James O’Brien</a> for <a href="https://www.quantamagazine.org/where-quantum-probability-comes-from-20190909/" rel="nofollow">Quanta Magazine</a></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>We make a guest appearance on Nick Anyos&#39; podcast to talk about effective altruism, longtermism, and probability. Nick (very politely) pushes back on our anti-Bayesian credo, and we get deep into the weeds of probability and epistemology. </p>

<p>You can find Nick&#39;s podcast on institutional design <a href="https://institutionaldesign.podbean.com/" rel="nofollow">here</a>, and his substack <a href="https://institutionaldesign.substack.com/?utm_source=substack&utm_medium=web&utm_campaign=substack_profile" rel="nofollow">here</a>. </p>

<p><strong>We discuss:</strong> </p>

<ul>
<li>The lack of feedback loops in longtermism </li>
<li>Whether quantifying your beliefs is helpful </li>
<li>Objective versus subjective knowledge </li>
<li>The difference between prediction and explanation</li>
<li>The difference between Bayesian epistemology and Bayesian statistics</li>
<li>Statistical modelling and when statistics is useful </li>
</ul>

<p><strong>Links</strong></p>

<ul>
<li><a href="http://www.stat.columbia.edu/%7Egelman/research/published/philosophy.pdf" rel="nofollow">Philosophy and the practice of Bayesian statistics</a> by Andrew Gelman and Cosma Shalizi</li>
<li><a href="https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations" rel="nofollow">EA forum post</a> showing all forecasts beyond a year out are uncalibrated. </li>
<li><p>Vaclav Smil quote in which he predicts a pandemic by 2021:</p>

<blockquote>
<p><em>The following realities indicate the imminence of the risk. The typical frequency of influenza pandemics was once every 50–60 years between 1700 and 1889 (the longest known gap was 52 years, between the pandemics of 1729–1733 and 1781–1782) and only once every 10–40 years since 1889. The recurrence interval, calculated simply as the mean time elapsed between the last six known pandemics, is about 28 years, with the extremes of 6 and 53 years. Adding the mean and the highest interval to 1968 gives a span between 1996 and 2021. We are, probabilistically speaking, very much inside a high-risk zone.</em></p>

<p><em>- Global Catastrophes and Trends, p.46</em></p>
</blockquote></li>
<li><p>Reference for Tetlock&#39;s superforecasters failing to predict the pandemic. <a href="https://wearenotsaved.com/2020/04/18/pandemic-uncovers-the-ridiculousness-of-superforecasting/" rel="nofollow">&quot;On February 20th, Tetlock’s superforecasters predicted only a 3% chance that there would be 200,000+ coronavirus cases a month later (there were).&quot;</a> </p></li>
</ul>

<p><strong>Contact us</strong></p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Check us out on youtube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
</ul>

<p><strong>Errata</strong></p>

<ul>
<li>At the beginning of the episode Vaden says he hasn&#39;t been interviewed on another podcast before. He forgot <a href="https://www.thedeclarationonline.com/podcast/2019/7/23/chesto-and-vaden-debatecast" rel="nofollow">his appearance</a> on The Declaration Podcast in 2019, which will be appearing as a bonus episode on our feed in the coming weeks. </li>
</ul>

<p>Sick of hearing us talk about this subject? Understandable! Send topic suggestions over to <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p>

<p>Photo credit: <a href="http://www.obrien-studio.com/" rel="nofollow">James O’Brien</a> for <a href="https://www.quantamagazine.org/where-quantum-probability-comes-from-20190909/" rel="nofollow">Quanta Magazine</a></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#44 - Longtermism Revisited: What We Owe the Future</title>
  <link>https://www.incrementspodcast.com/44</link>
  <guid isPermaLink="false">6c02f356-e380-4b16-a69c-d43b882b4746</guid>
  <pubDate>Mon, 03 Oct 2022 10:45:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/6c02f356-e380-4b16-a69c-d43b882b4746.mp3" length="59599306" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Could have seen this one coming. We discuss Will MacAskill's new book "What We Owe the Future." </itunes:subtitle>
  <itunes:duration>1:02:04</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/6/6c02f356-e380-4b16-a69c-d43b882b4746/cover.jpg?v=1"/>
  <description>&lt;p&gt;Like moths to a flame, we come back to longtermism once again. But it's not our fault. Will MacAskill published a new book, &lt;em&gt;What We Owe the Future&lt;/em&gt;, and billions (trillions!) of lives are at stake if we don't review it. Sisyphus had his task and we have ours. We're doing it for the (great great great ... great) grandchildren. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We discuss:&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Whether longtermism is actionable &lt;/li&gt;
&lt;li&gt;Whether the book is a faithful representation of longtermism as practiced &lt;/li&gt;
&lt;li&gt;Why humans are actually cool, despite what you might hear &lt;/li&gt;
&lt;li&gt;Some cool ideas from the book including career advice and allowing vaccines on the free market &lt;/li&gt;
&lt;li&gt;Ben's love of charter cities and whether he's a totalitarian at heart &lt;/li&gt;
&lt;li&gt;The plausibility of "value lock-in"&lt;/li&gt;
&lt;li&gt;The bizarro world of population ethics &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;References&lt;/strong&gt;:&lt;br&gt;
"Bait-and-switch" critique from a longtermist blogger: &lt;a href="https://forum.effectivealtruism.org/posts/9Y6Y6qoAigRC7A8eX/my-take-on-what-we-owe-the-future" target="_blank" rel="nofollow noopener"&gt;https://forum.effectivealtruism.org/posts/9Y6Y6qoAigRC7A8eX/my-take-on-what-we-owe-the-future&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Quote: "For instance, I’m worried people will feel bait-and-switched if they get into EA via WWOTF then do an 80,000 Hours call or hang out around their EA university group and realize most people think AI risk is the biggest longtermist priority, many thinking this by a large margin."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Contact us&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani&lt;/li&gt;
&lt;li&gt;Check us out on youtube at &lt;a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" target="_blank" rel="nofollow noopener"&gt;https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Come join our discord server! DM us on twitter or send us an email to get a supersecret link&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;How long is your termist? Tell us at &lt;a href="mailto:incrementspodcast@gmail.com" target="_blank" rel="nofollow noopener"&gt;incrementspodcast@gmail.com&lt;/a&gt;  &lt;/p&gt;
</description>
  <itunes:keywords>longtermism, effective altruism, philosophy, ethics</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Like moths to a flame, we come back to longtermism once again. But it&#39;s not our fault. Will MacAskill published a new book, <em>What We Owe the Future</em>, and billions (trillions!) of lives are at stake if we don&#39;t review it. Sisyphus had his task and we have ours. We&#39;re doing it for the (great great great ... great) grandchildren. </p>

<p><strong>We discuss:</strong> </p>

<ul>
<li>Whether longtermism is actionable </li>
<li>Whether the book is a faithful representation of longtermism as practiced </li>
<li>Why humans are actually cool, despite what you might hear </li>
<li>Some cool ideas from the book including career advice and allowing vaccines on the free market </li>
<li>Ben&#39;s love of charter cities and whether he&#39;s a totalitarian at heart </li>
<li>The plausibility of &quot;value lock-in&quot;</li>
<li>The bizarro world of population ethics </li>
</ul>

<p><strong>References</strong>:<br>
&quot;Bait-and-switch&quot; critique from a longtermist blogger: <a href="https://forum.effectivealtruism.org/posts/9Y6Y6qoAigRC7A8eX/my-take-on-what-we-owe-the-future" rel="nofollow">https://forum.effectivealtruism.org/posts/9Y6Y6qoAigRC7A8eX/my-take-on-what-we-owe-the-future</a></p>

<p>Quote: &quot;For instance, I’m worried people will feel bait-and-switched if they get into EA via WWOTF then do an 80,000 Hours call or hang out around their EA university group and realize most people think AI risk is the biggest longtermist priority, many thinking this by a large margin.&quot;</p>

<p><strong>Contact us</strong> </p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Check us out on youtube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
</ul>

<p>How long is your termist? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a> </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Like moths to a flame, we come back to longtermism once again. But it&#39;s not our fault. Will MacAskill published a new book, <em>What We Owe the Future</em>, and billions (trillions!) of lives are at stake if we don&#39;t review it. Sisyphus had his task and we have ours. We&#39;re doing it for the (great great great ... great) grandchildren. </p>

<p><strong>We discuss:</strong> </p>

<ul>
<li>Whether longtermism is actionable </li>
<li>Whether the book is a faithful representation of longtermism as practiced </li>
<li>Why humans are actually cool, despite what you might hear </li>
<li>Some cool ideas from the book including career advice and allowing vaccines on the free market </li>
<li>Ben&#39;s love of charter cities and whether he&#39;s a totalitarian at heart </li>
<li>The plausibility of &quot;value lock-in&quot;</li>
<li>The bizarro world of population ethics </li>
</ul>

<p><strong>References</strong>:<br>
&quot;Bait-and-switch&quot; critique from a longtermist blogger: <a href="https://forum.effectivealtruism.org/posts/9Y6Y6qoAigRC7A8eX/my-take-on-what-we-owe-the-future" rel="nofollow">https://forum.effectivealtruism.org/posts/9Y6Y6qoAigRC7A8eX/my-take-on-what-we-owe-the-future</a></p>

<p>Quote: &quot;For instance, I’m worried people will feel bait-and-switched if they get into EA via WWOTF then do an 80,000 Hours call or hang out around their EA university group and realize most people think AI risk is the biggest longtermist priority, many thinking this by a large margin.&quot;</p>

<p><strong>Contact us</strong> </p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Check us out on youtube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
</ul>

<p>How long is your termist? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a> </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#20 (HTI crossover episode) - Roundtable Longtermism Discussion</title>
  <link>https://www.incrementspodcast.com/20</link>
  <guid isPermaLink="false">Buzzsprout-8100547</guid>
  <pubDate>Mon, 08 Mar 2021 10:00:00 -0800</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/b82f2199-72ee-4dc7-8a04-a72b67bb3efe.mp3" length="93479914" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle></itunes:subtitle>
  <itunes:duration>3:14:44</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/cover.jpg?v=18"/>
  <description>&lt;p&gt;Hello and sorry for the delay! We finally got together with Fin and Luca from the excellent &lt;a href="https://hearthisidea.com/" target="_blank" rel="nofollow noopener"&gt;HearThisIdea&lt;/a&gt; podcast for a nice roundtable discussion on longtermism. We laughed, we cried, we tried our best to communicate across the divide. &lt;br&gt;&lt;br&gt;Material referenced in the discussion:&lt;br&gt;&lt;br&gt;- &lt;a href="https://80000hours.org/problem-profiles/" target="_blank" rel="nofollow noopener"&gt;80k Hours Problem Profiles&lt;/a&gt;&lt;br&gt;- &lt;a href="https://web.archive.org/web/20191023155157/https://foundational-research.org/s-risks-talk-eag-boston-2017/" target="_blank" rel="nofollow noopener"&gt;Jon Hamm imprisons us in an Alexa&lt;/a&gt;&lt;br&gt;- &lt;a href="https://globalprioritiesinstitute.org/wp-content/uploads/Hilary-Greaves-and-William-MacAskill_strong-longtermism.pdf" target="_blank" rel="nofollow noopener"&gt;The Case for Strong Longtermism&lt;/a&gt;&lt;br&gt;- &lt;a href="https://vmasrani.github.io/blog/2020/against_longtermism/" target="_blank" rel="nofollow noopener"&gt;A Case Against Strong Longtermism&lt;/a&gt;&lt;br&gt;- &lt;a href="https://nickbostrom.com/existential/risks.html" target="_blank" rel="nofollow noopener"&gt;Nick Bostrom's seminal paper on existential risks&lt;/a&gt;&lt;br&gt;&lt;br&gt;Quote: "[Events like Chernobyl, Bhopal, volcano eruptions, earthquakes, draughts, World War I, World War II, epidemics of influenza, smallpox, black plague, and AIDS] have occurred many times and our cultural attitudes towards risk have been shaped by trial-and-error in managing such hazards. 
But tragic as such events are to the people immediately affected, in the big picture of things – from the perspective of humankind as a whole – &lt;em&gt;even the worst of these catastrophes are mere ripples on the surface of the great sea of life.&lt;/em&gt;  (italics added)"&lt;br&gt;&lt;br&gt;- Nick Bostrom's "&lt;a href="https://www.nickbostrom.com/papers/survey.pdf" target="_blank" rel="nofollow noopener"&gt;A survey of expert opinion&lt;/a&gt;" (errata: Vaden incorrectly said this paper was coauthored by Nick Bostrom and Toby Ord. It's actually authored by Vincent C. Müller and Nick Bostrom - Toby Ord and Anders Sandberg are acknowledged on page 15 for having helped design the questionnaire.) &lt;br&gt;&lt;br&gt;Send us a survey of expert credences over at &lt;a href="mailto:incrementspodcast@gmail.com" target="_blank" rel="nofollow noopener"&gt;incrementspodcast@gmail.com&lt;/a&gt;&lt;/p&gt; Special Guests: Fin Moorhouse and Luca Righetti.
</description>
  <itunes:keywords>debate, longtermism, effective altruism</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Hello and sorry for the delay! We finally got together with Fin and Luca from the excellent <a href='https://hearthisidea.com/'>HearThisIdea</a> podcast for a nice roundtable discussion on longtermism. We laughed, we cried, we tried our best to communicate across the divide.  <br/><br/>Material referenced in the discussion:<br/><br/>- <a href='https://80000hours.org/problem-profiles/'>80k Hours Problem Profiles</a><br/>- <a href='https://web.archive.org/web/20191023155157/https://foundational-research.org/s-risks-talk-eag-boston-2017/'>Jon Hamm  imprisons us in an Alexa</a><br/>- <a href='https://globalprioritiesinstitute.org/wp-content/uploads/Hilary-Greaves-and-William-MacAskill_strong-longtermism.pdf'>The Case for Strong Longtermism</a><br/>- <a href='https://vmasrani.github.io/blog/2020/against_longtermism/'>A Case Against Strong Longtermism</a><br/>- <a href='https://nickbostrom.com/existential/risks.html'>Nick Bostrom&apos;s seminal paper on existential risks</a><br/><br/>Quote:  &quot;[Events like Chernobyl, Bhopal, volcano eruptions, earthquakes, draughts, World War I, World War II, epidemics of influenza, smallpox, black plague, and AIDS. ] have occurred many times and our cultural attitudes towards risk have been shaped by trial-and-error in managing such hazards. But tragic as such events are to the people immediately affected, in the big picture of things – from the perspective of humankind as a whole – <em>even the worst of these catastrophes are mere ripples on the surface of the great sea of life.</em>  (italics added)&quot;<br/><br/>- Nick Bostrom&apos;s &quot;<a href='https://www.nickbostrom.com/papers/survey.pdf'>A survey of expert opinion</a>&quot; (errata: Vaden incorrectly said this paper was coauthored by Nick Bostrom and Toby Ord. It&apos;s actually authored by Vincent C. Müller and Nick Bostrom - Toby Ord and Anders Sandberg are acknowledged on page 15 for having helped design the questionnaire.) 
<br/><br/>Send us a survey of expert credences over at incrementspodcast@gmail.com</p><p>Special Guests: Fin Moorhouse and Luca Righetti.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Hello and sorry for the delay! We finally got together with Fin and Luca from the excellent <a href='https://hearthisidea.com/'>HearThisIdea</a> podcast for a nice roundtable discussion on longtermism. We laughed, we cried, we tried our best to communicate across the divide.  <br/><br/>Material referenced in the discussion:<br/><br/>- <a href='https://80000hours.org/problem-profiles/'>80k Hours Problem Profiles</a><br/>- <a href='https://web.archive.org/web/20191023155157/https://foundational-research.org/s-risks-talk-eag-boston-2017/'>Jon Hamm  imprisons us in an Alexa</a><br/>- <a href='https://globalprioritiesinstitute.org/wp-content/uploads/Hilary-Greaves-and-William-MacAskill_strong-longtermism.pdf'>The Case for Strong Longtermism</a><br/>- <a href='https://vmasrani.github.io/blog/2020/against_longtermism/'>A Case Against Strong Longtermism</a><br/>- <a href='https://nickbostrom.com/existential/risks.html'>Nick Bostrom&apos;s seminal paper on existential risks</a><br/><br/>Quote:  &quot;[Events like Chernobyl, Bhopal, volcano eruptions, earthquakes, draughts, World War I, World War II, epidemics of influenza, smallpox, black plague, and AIDS. ] have occurred many times and our cultural attitudes towards risk have been shaped by trial-and-error in managing such hazards. But tragic as such events are to the people immediately affected, in the big picture of things – from the perspective of humankind as a whole – <em>even the worst of these catastrophes are mere ripples on the surface of the great sea of life.</em>  (italics added)&quot;<br/><br/>- Nick Bostrom&apos;s &quot;<a href='https://www.nickbostrom.com/papers/survey.pdf'>A survey of expert opinion</a>&quot; (errata: Vaden incorrectly said this paper was coauthored by Nick Bostrom and Toby Ord. It&apos;s actually authored by Vincent C. Müller and Nick Bostrom - Toby Ord and Anders Sandberg are acknowledged on page 15 for having helped design the questionnaire.) 
<br/><br/>Send us a survey of expert credences over at incrementspodcast@gmail.com</p><p>Special Guests: Fin Moorhouse and Luca Righetti.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#19 - Against Longtermism FAQ</title>
  <link>https://www.incrementspodcast.com/19</link>
  <guid isPermaLink="false">Buzzsprout-7623718</guid>
  <pubDate>Mon, 01 Feb 2021 20:00:00 -0800</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/5b58b507-52f8-4dd7-8abd-471f6371691d.mp3" length="65372208" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle></itunes:subtitle>
  <itunes:duration>1:30:44</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/cover.jpg?v=18"/>
  <description>&lt;p&gt;Back in the ring for round two on longtermism! We (Ben somewhat drunkenly) respond to some of the criticism of episode #17 and our two essays (&lt;a href="https://forum.effectivealtruism.org/posts/2NJszbnBTwibfdpo7/strong-longtermism-irrefutability-and-moral-progress" target="_blank" rel="nofollow noopener"&gt;Ben's&lt;/a&gt;, &lt;a href="https://vmasrani.github.io/blog/2020/against_longtermism/" target="_blank" rel="nofollow noopener"&gt;Vaden's&lt;/a&gt;). We touch on: &lt;/p&gt;&lt;ul&gt;
&lt;li&gt;Ben's hate mail from his &lt;a href="https://medium.com/conjecture-magazine/the-dangers-of-cliodynamics-c48392b4a985" target="_blank" rel="nofollow noopener"&gt;piece on cliodynamics&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Longtermism as implying altruistic portfolio shuffling&lt;/li&gt;
&lt;li&gt;What on earth is Bayesian epistemology &lt;/li&gt;
&lt;li&gt;&lt;a href="http://colyvan.com/papers/pasadena.pdf" target="_blank" rel="nofollow noopener"&gt;The Pasadena game&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Authoritarianism and the danger of seeking perfection &lt;/li&gt;
&lt;li&gt;Arrow's theorem&lt;/li&gt;
&lt;li&gt;Alternative decision theories focusing on error correction &lt;/li&gt;
&lt;li&gt;What's the probability of nuclear war before 2100?&lt;/li&gt;
&lt;li&gt;When are models reliable &lt;/li&gt;
&lt;li&gt;What problems to work on &lt;/li&gt;
&lt;/ul&gt;&lt;p&gt;You will, dear listener, be either pleased or horrified to learn that this will not be our last foray into longtermism. It's like choose your own adventure ... except we're choosing the adventure, and the adventure is longtermism. Next stop is the &lt;a href="https://hearthisidea.com/" target="_blank" rel="nofollow noopener"&gt;Hear this Idea podcast&lt;/a&gt;!&lt;br&gt;&lt;br&gt;Send us your best longterm prediction at incrementspodcast@gmail.com&lt;/p&gt; 
</description>
  <itunes:keywords>longtermism</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Back in the ring for round two on longtermism! We (Ben somewhat drunkenly) respond to some of the criticism of episode #17 and our two essays (<a href='https://forum.effectivealtruism.org/posts/2NJszbnBTwibfdpo7/strong-longtermism-irrefutability-and-moral-progress'>Ben&apos;s</a>, <a href='https://vmasrani.github.io/blog/2020/against_longtermism/'>Vaden&apos;s</a>). We touch on: </p><ul><li>Ben&apos;s hate mail from his <a href='https://medium.com/conjecture-magazine/the-dangers-of-cliodynamics-c48392b4a985'>piece on cliodynamics</a></li><li>Longtermism as implying altruistic portfolio shuffling</li><li>What on earth is Bayesian epistemology </li><li><a href='http://colyvan.com/papers/pasadena.pdf'>The Pasadena game</a></li><li>Authoritarianism and the danger of seeking perfection </li><li>Arrow&apos;s theorem</li><li>Alternative decision theories focusing on error correction </li><li>What&apos;s the probability of nuclear war before 2100?</li><li>When are models reliable </li><li>What problems to work on </li></ul><p>You will, dear listener, be either pleased or horrified to learn that this will not be our last foray into longtermism. It&apos;s like choose your own adventure ... except we&apos;re choosing the adventure, and the adventure is longtermism. Next stop is the <a href='https://hearthisidea.com/'>Hear this Idea podcast</a>!<br/><br/>Send us your best longterm prediction at incrementspodcast@gmail.com</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Back in the ring for round two on longtermism! We (Ben somewhat drunkenly) respond to some of the criticism of episode #17 and our two essays (<a href='https://forum.effectivealtruism.org/posts/2NJszbnBTwibfdpo7/strong-longtermism-irrefutability-and-moral-progress'>Ben&apos;s</a>, <a href='https://vmasrani.github.io/blog/2020/against_longtermism/'>Vaden&apos;s</a>). We touch on: </p><ul><li>Ben&apos;s hate mail from his <a href='https://medium.com/conjecture-magazine/the-dangers-of-cliodynamics-c48392b4a985'>piece on cliodynamics</a></li><li>Longtermism as implying altruistic portfolio shuffling</li><li>What on earth is Bayesian epistemology </li><li><a href='http://colyvan.com/papers/pasadena.pdf'>The Pasadena game</a></li><li>Authoritarianism and the danger of seeking perfection </li><li>Arrow&apos;s theorem</li><li>Alternative decision theories focusing on error correction </li><li>What&apos;s the probability of nuclear war before 2100?</li><li>When are models reliable </li><li>What problems to work on </li></ul><p>You will, dear listener, be either pleased or horrified to learn that this will not be our last foray into longtermism. It&apos;s like choose your own adventure ... except we&apos;re choosing the adventure, and the adventure is longtermism. Next stop is the <a href='https://hearthisidea.com/'>Hear this Idea podcast</a>!<br/><br/>Send us your best longterm prediction at incrementspodcast@gmail.com</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#17 - Against Longtermism</title>
  <link>https://www.incrementspodcast.com/17</link>
  <guid isPermaLink="false">Buzzsprout-6919628</guid>
  <pubDate>Fri, 18 Dec 2020 19:00:00 -0800</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/f1e65451-076d-4ca4-bef0-5f938e81d70d.mp3" length="64853211" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle></itunes:subtitle>
  <itunes:duration>1:30:01</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/cover.jpg?v=18"/>
  <description>&lt;p&gt;Well, there's no avoiding controversy with this one. We explain, examine, and attempt to refute the shiny new moral philosophy of &lt;em&gt;longtermism.&lt;/em&gt; Our critique focuses on &lt;a href="https://static1.squarespace.com/static/5506078de4b02d88372eee4e/t/5f1704905c33720e61cd3214/1595344019788/The_Case_for_Strong_Longtermism.pdf" target="_blank" rel="nofollow noopener"&gt;&lt;em&gt;The Case for Strong Longtermism&lt;/em&gt;&lt;/a&gt;&lt;em&gt; &lt;/em&gt;by Hilary Greaves and Will MacAskill. &lt;br&gt;&lt;br&gt;We say so in the episode, but it's important to emphasize that we harbour no animosity towards anyone in the effective altruism community. However, we both think that longtermism is pretty f***ing scary and do our best to communicate why.&lt;br&gt;&lt;br&gt;Confused as to why there's no charming, witty, and hilarious intro? Us too. Somehow, Ben managed to corrupt his audio. Classic. Oh well, some of you tell us you dislike the intros anyway. &lt;br&gt;&lt;br&gt;&lt;b&gt;References&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://static1.squarespace.com/static/5506078de4b02d88372eee4e/t/5f1704905c33720e61cd3214/1595344019788/The_Case_for_Strong_Longtermism.pdf" target="_blank" rel="nofollow noopener"&gt;The Case for Strong Longtermism&lt;/a&gt;, by Greaves and MacAskill&lt;/li&gt;
&lt;li&gt;Vaden's &lt;a href="https://forum.effectivealtruism.org/posts/7MPTzAnPtu5HKesMX/a-case-against-strong-longtermism" target="_blank" rel="nofollow noopener"&gt;EA forum post&lt;/a&gt; on longtermism&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://www.reddit.com/r/EffectiveAltruism/comments/kd41jw/a_case_against_strong_longtermism/" target="_blank" rel="nofollow noopener"&gt;reddit discussion&lt;/a&gt; surrounding Vaden's piece&lt;/li&gt;
&lt;li&gt;Ben's &lt;a href="https://benchugg.medium.com/against-strong-longtermism-a-response-to-greaves-and-macaskill-cb4bb9681982" target="_blank" rel="nofollow noopener"&gt;piece on longtermism&lt;/a&gt; (which he has hidden in the depths of Medium because he's scared of the EA forum) &lt;/li&gt;
&lt;li&gt;Ben on &lt;a href="https://medium.com/conjecture-magazine/pascals-mugging-and-the-poverty-of-the-expected-value-calculus-70b190d953cd" target="_blank" rel="nofollow noopener"&gt;Pascal's Mugging and Expected Values&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Gwern and Robin Hanson &lt;a href="https://twitter.com/robinhanson/status/1339956546801954816?s=20" target="_blank" rel="nofollow noopener"&gt;making fun&lt;/a&gt; of Ben's piece &lt;/li&gt;
&lt;/ul&gt;&lt;p&gt;&lt;br&gt;Yell at us on the EA forum, on Reddit, on Medium, or over email at incrementspodcast@gmail.com. &lt;/p&gt; 
</description>
  <itunes:keywords>longtermism, expected value, bayesianism, effective altruism</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Well, there&apos;s no avoiding controversy with this one. We explain, examine, and attempt to refute the shiny new moral philosophy of <em>longtermism.</em> Our critique focuses on <a href='https://static1.squarespace.com/static/5506078de4b02d88372eee4e/t/5f1704905c33720e61cd3214/1595344019788/The_Case_for_Strong_Longtermism.pdf'><em>The Case for Strong Longtermism</em></a><em> </em>by Hilary Greaves and Will MacAskill. <br/><br/>We say so in the episode, but it&apos;s important to emphasize that we harbour no animosity towards anyone in the effective altruism community. However, we both think that longtermism is pretty f***ing scary and do our best to communicate why.<br/><br/>Confused as to why there&apos;s no charming, witty, and hilarious intro? Us too. Somehow, Ben managed to corrupt his audio. Classic. Oh well, some of you tell us you dislike the intros anyway. <br/><br/><b>References</b></p><ul><li><a href='https://static1.squarespace.com/static/5506078de4b02d88372eee4e/t/5f1704905c33720e61cd3214/1595344019788/The_Case_for_Strong_Longtermism.pdf'>The Case for Strong Longtermism</a>, by Greaves and MacAskill</li><li>Vaden&apos;s <a href='https://forum.effectivealtruism.org/posts/7MPTzAnPtu5HKesMX/a-case-against-strong-longtermism'>EA forum post</a> on longtermism</li><li>The <a href='https://www.reddit.com/r/EffectiveAltruism/comments/kd41jw/a_case_against_strong_longtermism/'>reddit discussion</a> surrounding Vaden&apos;s piece</li><li>Ben&apos;s <a href='https://benchugg.medium.com/against-strong-longtermism-a-response-to-greaves-and-macaskill-cb4bb9681982'>piece on longtermism</a> (which he has hidden in the depths of Medium because he&apos;s scared of the EA forum) </li><li>Ben on <a href='https://medium.com/conjecture-magazine/pascals-mugging-and-the-poverty-of-the-expected-value-calculus-70b190d953cd'>Pascal&apos;s Mugging and Expected Values</a></li><li>Gwern and Robin Hanson <a 
href='https://twitter.com/robinhanson/status/1339956546801954816?s=20'>making fun</a> of Ben&apos;s piece </li></ul><p><br/>Yell at us on the EA forum, on Reddit, on Medium, or over email at incrementspodcast@gmail.com. </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Well, there&apos;s no avoiding controversy with this one. We explain, examine, and attempt to refute the shiny new moral philosophy of <em>longtermism.</em> Our critique focuses on <a href='https://static1.squarespace.com/static/5506078de4b02d88372eee4e/t/5f1704905c33720e61cd3214/1595344019788/The_Case_for_Strong_Longtermism.pdf'><em>The Case for Strong Longtermism</em></a><em> </em>by Hilary Greaves and Will MacAskill. <br/><br/>We say so in the episode, but it&apos;s important to emphasize that we harbour no animosity towards anyone in the effective altruism community. However, we both think that longtermism is pretty f***ing scary and do our best to communicate why.<br/><br/>Confused as to why there&apos;s no charming, witty, and hilarious intro? Us too. Somehow, Ben managed to corrupt his audio. Classic. Oh well, some of you tell us you dislike the intros anyway. <br/><br/><b>References</b></p><ul><li><a href='https://static1.squarespace.com/static/5506078de4b02d88372eee4e/t/5f1704905c33720e61cd3214/1595344019788/The_Case_for_Strong_Longtermism.pdf'>The Case for Strong Longtermism</a>, by Greaves and MacAskill</li><li>Vaden&apos;s <a href='https://forum.effectivealtruism.org/posts/7MPTzAnPtu5HKesMX/a-case-against-strong-longtermism'>EA forum post</a> on longtermism</li><li>The <a href='https://www.reddit.com/r/EffectiveAltruism/comments/kd41jw/a_case_against_strong_longtermism/'>reddit discussion</a> surrounding Vaden&apos;s piece</li><li>Ben&apos;s <a href='https://benchugg.medium.com/against-strong-longtermism-a-response-to-greaves-and-macaskill-cb4bb9681982'>piece on longtermism</a> (which he has hidden in the depths of Medium because he&apos;s scared of the EA forum) </li><li>Ben on <a href='https://medium.com/conjecture-magazine/pascals-mugging-and-the-poverty-of-the-expected-value-calculus-70b190d953cd'>Pascal&apos;s Mugging and Expected Values</a></li><li>Gwern and Robin Hanson <a 
href='https://twitter.com/robinhanson/status/1339956546801954816?s=20'>making fun</a> of Ben&apos;s piece </li></ul><p><br/>Yell at us on the EA forum, on Reddit, on Medium, or over email at incrementspodcast@gmail.com. </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
