<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web01.fireside.fm</fireside:hostname>
    <fireside:genDate>Fri, 24 Apr 2026 10:17:43 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>Increments - Episodes Tagged with “Deception”</title>
    <link>https://www.incrementspodcast.com/tags/deception</link>
    <pubDate>Thu, 22 Jan 2026 16:15:00 -0800</pubDate>
    <description>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>Science, Philosophy, Epistemology, Mayhem</itunes:subtitle>
    <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
    <itunes:summary>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/cover.jpg?v=18"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>Philosophy,Science,Ethics,Progress,Knowledge,Computer Science,Conversation,Error-Correction</itunes:keywords>
    <itunes:owner>
      <itunes:name>Ben Chugg and Vaden Masrani</itunes:name>
      <itunes:email>incrementspodcast@gmail.com</itunes:email>
    </itunes:owner>
<itunes:category text="Society &amp; Culture">
  <itunes:category text="Philosophy"/>
</itunes:category>
<itunes:category text="Science"/>
<item>
  <title>#97 - Did Effective Altruism Have Ulterior Motives From the Beginning?</title>
  <link>https://www.incrementspodcast.com/97</link>
  <guid isPermaLink="false">93913978-d551-461a-a966-1bf35def6f4c</guid>
  <pubDate>Thu, 22 Jan 2026 16:15:00 -0800</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/93913978-d551-461a-a966-1bf35def6f4c.mp3" length="98009352" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Was EA a front for AI safety the whole time? </itunes:subtitle>
  <itunes:duration>1:41:42</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/9/93913978-d551-461a-a966-1bf35def6f4c/cover.jpg?v=1"/>
  <description>Two years without discussing effective altruism -- did you miss it? Not as much as Vaden, surely. And probably a right bit more than Ben. 
Well, we're back in the game with a spicy one. Was EA a front for AI safety from the beginning? Did the leaders care not a whit for global poverty? Is Ben going to throw himself out the window if Vaden keeps this up? 
We discuss
Feedback on our introspection episode 
The motives of the EA founders 
The Felicifia forum 
Is this a conspiracy theory?  
EA's strategic ambiguity 
Bostromism, transhumanism, and AI safety 
EA funding 
The public/core divide and the funnel model 
Quotes
new effective altruists tend to start off concerned about global poverty or animal suffering and then hear, take seriously, and often are convinced by the arguments for existential risk mitigation
- Will MacAskill 
Existential risk isn’t the most useful public face for effective altruism – everyone inc[l]uding Eliezer Yudkowsky agrees about that 
- Scott Alexander, 2015
Utilitymonster: GWWC is explicitly poverty-focused but high impact careers (HIC) is not. In fact, hardcore members of GWWC are heavily interested in x-risk, and I estimate that 10-15% of its general membership is as well. I’d take them seriously as a group for promoting utilitarianism in general.
I’m a GWWC leader.
[Redacted]: but HIC always seems to talk about things in terms of “lives saved”, ive never heard them mentioning other things to donate to. […]
Utilitymonster: That’s exactly the right thing for HIC to do. Talk about lives saved with their public face, let hardcore members hear about x-risk, and then, in the future, if some excellent x-risk opportunity arises, direct resources to x-risk. 
- From the Felicifia forum. 
References
Gleiberman's paper: https://medialibrary.uantwerpen.be/files/8518/61565cb6-e056-4e35-bd2e-d14d58e35231.pdf
Old EA Wikipedia page (web archive): https://web.archive.org/web/20170409171350/https://en.wikipedia.org/wiki/Effective_altruism
Old CEA webpage (web archive): https://web.archive.org/web/20161219031827/https://www.centreforeffectivealtruism.org/fundraising/
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
Come join our Discord server! DM us on Twitter or send us an email to get a supersecret link
Become a Patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
Click dem like buttons on YouTube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
Let us funnel you into the core group of super secret Patreon supporters. Send us an email at incrementspodcast@gmail.com
</description>
  <itunes:keywords>effective altruism, ulterior motives, AI safety, deception</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Two years without discussing effective altruism -- did you miss it? Not as much as Vaden, surely. And probably a right bit more than Ben. </p>

<p>Well, we&#39;re back in the game with a spicy one. Was EA a front for AI safety from the beginning? Did the leaders care not a whit for global poverty? Is Ben going to throw himself out the window if Vaden keeps this up? </p>

<h1>We discuss</h1>

<ul>
<li>Feedback on our introspection episode </li>
<li>The motives of the EA founders </li>
<li>The Felicifia forum</li>
<li>Is this a conspiracy theory?<br></li>
<li>EA&#39;s strategic ambiguity </li>
<li>Bostromism, transhumanism, and AI safety </li>
<li>EA funding </li>
<li>The public/core divide and the funnel model </li>
</ul>

<h1>Quotes</h1>

<blockquote>
<p>new effective altruists tend to start off concerned about global poverty or animal suffering and then hear, take seriously, and often are convinced by the arguments for existential risk mitigation<br>
- Will MacAskill </p>

<p>Existential risk isn’t the most useful public face for effective altruism – everyone inc[l]uding Eliezer Yudkowsky agrees about that <br>
- Scott Alexander, 2015</p>

<p>Utilitymonster: GWWC is explicitly poverty-focused but high impact careers (HIC) is not. In fact, hardcore members of GWWC are heavily interested in x-risk, and I estimate that 10-15% of its general membership is as well. I’d take them seriously as a group for promoting utilitarianism in general.<br>
I’m a GWWC leader.<br>
[Redacted]: but HIC always seems to talk about things in terms of “lives saved”, ive never heard them mentioning other things to donate to. […]<br>
Utilitymonster: That’s exactly the right thing for HIC to do. Talk about lives saved with their public face, let hardcore members hear about x-risk, and then, in the future, if some excellent x-risk opportunity arises, direct resources to x-risk. <br>
- From the Felicifia forum.</p>
</blockquote>

<h1>References</h1>

<ul>
<li>Gleiberman&#39;s paper: <a href="https://medialibrary.uantwerpen.be/files/8518/61565cb6-e056-4e35-bd2e-d14d58e35231.pdf" rel="nofollow">https://medialibrary.uantwerpen.be/files/8518/61565cb6-e056-4e35-bd2e-d14d58e35231.pdf</a></li>
<li>Old EA Wikipedia page (web archive): <a href="https://web.archive.org/web/20170409171350/https://en.wikipedia.org/wiki/Effective_altruism" rel="nofollow">https://web.archive.org/web/20170409171350/https://en.wikipedia.org/wiki/Effective_altruism</a></li>
<li>Old CEA webpage (web archive): <a href="https://web.archive.org/web/20161219031827/https://www.centreforeffectivealtruism.org/fundraising/" rel="nofollow">https://web.archive.org/web/20161219031827/https://www.centreforeffectivealtruism.org/fundraising/</a></li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our Discord server! DM us on Twitter or send us an email to get a supersecret link</li>
<li>Become a Patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">YouTube</a></li>
</ul>

<p>Let us funnel you into the core group of super secret Patreon supporters. Send us an email at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Two years without discussing effective altruism -- did you miss it? Not as much as Vaden, surely. And probably a right bit more than Ben. </p>

<p>Well, we&#39;re back in the game with a spicy one. Was EA a front for AI safety from the beginning? Did the leaders care not a whit for global poverty? Is Ben going to throw himself out the window if Vaden keeps this up? </p>

<h1>We discuss</h1>

<ul>
<li>Feedback on our introspection episode </li>
<li>The motives of the EA founders </li>
<li>The Felicifia forum</li>
<li>Is this a conspiracy theory?<br></li>
<li>EA&#39;s strategic ambiguity </li>
<li>Bostromism, transhumanism, and AI safety </li>
<li>EA funding </li>
<li>The public/core divide and the funnel model </li>
</ul>

<h1>Quotes</h1>

<blockquote>
<p>new effective altruists tend to start off concerned about global poverty or animal suffering and then hear, take seriously, and often are convinced by the arguments for existential risk mitigation<br>
- Will MacAskill </p>

<p>Existential risk isn’t the most useful public face for effective altruism – everyone inc[l]uding Eliezer Yudkowsky agrees about that <br>
- Scott Alexander, 2015</p>

<p>Utilitymonster: GWWC is explicitly poverty-focused but high impact careers (HIC) is not. In fact, hardcore members of GWWC are heavily interested in x-risk, and I estimate that 10-15% of its general membership is as well. I’d take them seriously as a group for promoting utilitarianism in general.<br>
I’m a GWWC leader.<br>
[Redacted]: but HIC always seems to talk about things in terms of “lives saved”, ive never heard them mentioning other things to donate to. […]<br>
Utilitymonster: That’s exactly the right thing for HIC to do. Talk about lives saved with their public face, let hardcore members hear about x-risk, and then, in the future, if some excellent x-risk opportunity arises, direct resources to x-risk. <br>
- From the Felicifia forum.</p>
</blockquote>

<h1>References</h1>

<ul>
<li>Gleiberman&#39;s paper: <a href="https://medialibrary.uantwerpen.be/files/8518/61565cb6-e056-4e35-bd2e-d14d58e35231.pdf" rel="nofollow">https://medialibrary.uantwerpen.be/files/8518/61565cb6-e056-4e35-bd2e-d14d58e35231.pdf</a></li>
<li>Old EA Wikipedia page (web archive): <a href="https://web.archive.org/web/20170409171350/https://en.wikipedia.org/wiki/Effective_altruism" rel="nofollow">https://web.archive.org/web/20170409171350/https://en.wikipedia.org/wiki/Effective_altruism</a></li>
<li>Old CEA webpage (web archive): <a href="https://web.archive.org/web/20161219031827/https://www.centreforeffectivealtruism.org/fundraising/" rel="nofollow">https://web.archive.org/web/20161219031827/https://www.centreforeffectivealtruism.org/fundraising/</a></li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our Discord server! DM us on Twitter or send us an email to get a supersecret link</li>
<li>Become a Patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">YouTube</a></li>
</ul>

<p>Let us funnel you into the core group of super secret Patreon supporters. Send us an email at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
