<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" encoding="UTF-8" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web02.fireside.fm</fireside:hostname>
    <fireside:genDate>Mon, 04 May 2026 04:58:40 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>Increments - Episodes Tagged with “Decision Making”</title>
    <link>https://www.incrementspodcast.com/tags/decision-making</link>
    <pubDate>Tue, 09 Jul 2024 10:00:00 -0700</pubDate>
    <description>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>Science, Philosophy, Epistemology, Mayhem</itunes:subtitle>
    <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
    <itunes:summary>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/cover.jpg?v=18"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>Philosophy,Science,Ethics,Progress,Knowledge,Computer Science,Conversation,Error-Correction</itunes:keywords>
    <itunes:owner>
      <itunes:name>Ben Chugg and Vaden Masrani</itunes:name>
      <itunes:email>incrementspodcast@gmail.com</itunes:email>
    </itunes:owner>
<itunes:category text="Society &amp; Culture">
  <itunes:category text="Philosophy"/>
</itunes:category>
<itunes:category text="Science"/>
<item>
  <title>#70 - ... and Bayes Bites Back (w/ Richard Meadows)</title>
  <link>https://www.incrementspodcast.com/70</link>
  <guid isPermaLink="false">a9b0b76a-e2e7-449c-8318-06efecf1c13d</guid>
  <pubDate>Tue, 09 Jul 2024 10:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/a9b0b76a-e2e7-449c-8318-06efecf1c13d.mp3" length="88283500" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Rich comes on to defend Scott Alexander against our criticisms. Are we being unfair? Are the Bayesians simply the Most Rational People (MRP) and we can't handle it? </itunes:subtitle>
  <itunes:duration>1:30:34</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/a/a9b0b76a-e2e7-449c-8318-06efecf1c13d/cover.jpg?v=4"/>
  <description>&lt;p&gt;Sick of hearing us shouting about Bayesianism? Well today you're in luck, because this time, someone shouts at &lt;em&gt;us&lt;/em&gt; about Bayesianism! Richard Meadows, finance journalist, author, and Ben's secretive podcast paramour, takes us to task. Are we being unfair to the Bayesians? Is Bayesian rationality optimal in theory, and the rest of us are just coping with an uncertain world? Is this why the Bayesian rationalists have so much cultural influence (and money, and fame, and media attention, and ...), and we, ahem, uhhh, don't?  &lt;/p&gt;

&lt;p&gt;Check out Rich's &lt;a href="https://thedeepdish.org/start" target="_blank" rel="nofollow noopener"&gt;website&lt;/a&gt;, his book &lt;a href="https://www.amazon.ca/Optionality-Survive-Thrive-Volatile-World/dp/0473545500" target="_blank" rel="nofollow noopener"&gt;&lt;strong&gt;Optionality:&lt;/strong&gt; How to Survive and Thrive in a Volatile World&lt;/a&gt;, and his &lt;a href="https://doyouevenlit.podbean.com/" target="_blank" rel="nofollow noopener"&gt;podcast&lt;/a&gt;. &lt;/p&gt;

&lt;h1&gt;We discuss&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;The pros of the rationality and EA communities &lt;/li&gt;
&lt;li&gt;Whether Bayesian epistemology contributes to open-mindedness&lt;/li&gt;
&lt;li&gt;The fact that evidence doesn't speak for itself &lt;/li&gt;
&lt;li&gt;The fact that the world doesn't come bundled as discrete chunks of evidence &lt;/li&gt;
&lt;li&gt;Whether Bayesian epistemology would be "optimal" for Laplace's demon &lt;/li&gt;
&lt;li&gt;The difference between truth and certainty&lt;/li&gt;
&lt;li&gt;Vaden's tone issues and why he gets animated about this subject. &lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;References&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Scott's original piece: &lt;a href="https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist" target="_blank" rel="nofollow noopener"&gt;In continued defense of non-frequentist probabilities&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Scott Alexander's &lt;a href="https://www.astralcodexten.com/p/practically-a-book-review-rootclaim/comments" target="_blank" rel="nofollow noopener"&gt;post about Rootclaim&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Our previous episode on Scott's piece: &lt;a href="https://www.incrementspodcast.com/69" target="_blank" rel="nofollow noopener"&gt;#69 - Contra Scott Alexander on Probability&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.rootclaim.com/" target="_blank" rel="nofollow noopener"&gt;Rootclaim&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Ben's blogpost &lt;a href="https://benchugg.com/writing/you-need-a-theory/" target="_blank" rel="nofollow noopener"&gt;You need a theory for that theory&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Cox%27s_theorem" target="_blank" rel="nofollow noopener"&gt;Cox's theorem&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Aumann%27s_agreement_theorem" target="_blank" rel="nofollow noopener"&gt;Aumann's agreement theorem&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Vaden's blogposts mentioned in the episode:

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://vmasrani.github.io/blog/2020/vaden_second_response/" target="_blank" rel="nofollow noopener"&gt;Critical Rationalism and Bayesian Epistemology&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://vmasrani.github.io/blog/2021/proving_too_much/" target="_blank" rel="nofollow noopener"&gt;Proving Too Much&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;Socials&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani&lt;/li&gt;
&lt;li&gt;Follow Rich at @MeadowsRichard&lt;/li&gt;
&lt;li&gt;Come join our Discord server! DM us on Twitter or send us an email to get a supersecret link&lt;/li&gt;
&lt;li&gt;Help us calibrate our credences and get exclusive bonus content by becoming a patreon subscriber &lt;a href="https://www.patreon.com/Increments" target="_blank" rel="nofollow noopener"&gt;here&lt;/a&gt;. Or give us one-time cash donations to help cover our lack of cash donations &lt;a href="https://ko-fi.com/increments" target="_blank" rel="nofollow noopener"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Click dem like buttons on &lt;a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" target="_blank" rel="nofollow noopener"&gt;YouTube&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What's your favorite theory that is neither true nor useful? Tell us over at &lt;a href="mailto:incrementspodcast@gmail.com" target="_blank" rel="nofollow noopener"&gt;incrementspodcast@gmail.com&lt;/a&gt;.  Special Guest: Richard Meadows.&lt;/p&gt;
</description>
  <itunes:keywords>probability, bayesianism, rationality, uncertainty, decision-making</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Sick of hearing us shouting about Bayesianism? Well today you&#39;re in luck, because this time, someone shouts at <em>us</em> about Bayesianism! Richard Meadows, finance journalist, author, and Ben&#39;s secretive podcast paramour, takes us to task. Are we being unfair to the Bayesians? Is Bayesian rationality optimal in theory, and the rest of us are just coping with an uncertain world? Is this why the Bayesian rationalists have so much cultural influence (and money, and fame, and media attention, and ...), and we, ahem, uhhh, don&#39;t?  </p>

<p>Check out Rich&#39;s <a href="https://thedeepdish.org/start" rel="nofollow">website</a>, his book <a href="https://www.amazon.ca/Optionality-Survive-Thrive-Volatile-World/dp/0473545500" rel="nofollow"><strong>Optionality:</strong> How to Survive and Thrive in a Volatile World</a>, and his <a href="https://doyouevenlit.podbean.com/" rel="nofollow">podcast</a>. </p>

<h1>We discuss</h1>

<ul>
<li>The pros of the rationality and EA communities </li>
<li>Whether Bayesian epistemology contributes to open-mindedness</li>
<li>The fact that evidence doesn&#39;t speak for itself </li>
<li>The fact that the world doesn&#39;t come bundled as discrete chunks of evidence </li>
<li>Whether Bayesian epistemology would be &quot;optimal&quot; for Laplace&#39;s demon </li>
<li>The difference between truth and certainty</li>
<li>Vaden&#39;s tone issues and why he gets animated about this subject. </li>
</ul>

<h1>References</h1>

<ul>
<li>Scott&#39;s original piece: <a href="https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist" rel="nofollow">In continued defense of non-frequentist probabilities</a></li>
<li>Scott Alexander&#39;s <a href="https://www.astralcodexten.com/p/practically-a-book-review-rootclaim/comments" rel="nofollow">post about Rootclaim</a></li>
<li>Our previous episode on Scott&#39;s piece: <a href="https://www.incrementspodcast.com/69" rel="nofollow">#69 - Contra Scott Alexander on Probability</a> </li>
<li><a href="https://www.rootclaim.com/" rel="nofollow">Rootclaim</a></li>
<li>Ben&#39;s blogpost <a href="https://benchugg.com/writing/you-need-a-theory/" rel="nofollow">You need a theory for that theory</a> </li>
<li><a href="https://en.wikipedia.org/wiki/Cox%27s_theorem" rel="nofollow">Cox&#39;s theorem</a> </li>
<li><a href="https://en.wikipedia.org/wiki/Aumann%27s_agreement_theorem" rel="nofollow">Aumann&#39;s agreement theorem</a> </li>
<li>Vaden&#39;s blogposts mentioned in the episode:

<ul>
<li><a href="https://vmasrani.github.io/blog/2020/vaden_second_response/" rel="nofollow">Critical Rationalism and Bayesian Epistemology</a></li>
<li><a href="https://vmasrani.github.io/blog/2021/proving_too_much/" rel="nofollow">Proving Too Much</a></li>
</ul></li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Follow Rich at @MeadowsRichard</li>
<li>Come join our Discord server! DM us on Twitter or send us an email to get a supersecret link</li>
<li>Help us calibrate our credences and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">YouTube</a></li>
</ul>

<p>What&#39;s your favorite theory that is neither true nor useful? Tell us over at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p><p>Special Guest: Richard Meadows.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Sick of hearing us shouting about Bayesianism? Well today you&#39;re in luck, because this time, someone shouts at <em>us</em> about Bayesianism! Richard Meadows, finance journalist, author, and Ben&#39;s secretive podcast paramour, takes us to task. Are we being unfair to the Bayesians? Is Bayesian rationality optimal in theory, and the rest of us are just coping with an uncertain world? Is this why the Bayesian rationalists have so much cultural influence (and money, and fame, and media attention, and ...), and we, ahem, uhhh, don&#39;t?  </p>

<p>Check out Rich&#39;s <a href="https://thedeepdish.org/start" rel="nofollow">website</a>, his book <a href="https://www.amazon.ca/Optionality-Survive-Thrive-Volatile-World/dp/0473545500" rel="nofollow"><strong>Optionality:</strong> How to Survive and Thrive in a Volatile World</a>, and his <a href="https://doyouevenlit.podbean.com/" rel="nofollow">podcast</a>. </p>

<h1>We discuss</h1>

<ul>
<li>The pros of the rationality and EA communities </li>
<li>Whether Bayesian epistemology contributes to open-mindedness</li>
<li>The fact that evidence doesn&#39;t speak for itself </li>
<li>The fact that the world doesn&#39;t come bundled as discrete chunks of evidence </li>
<li>Whether Bayesian epistemology would be &quot;optimal&quot; for Laplace&#39;s demon </li>
<li>The difference between truth and certainty</li>
<li>Vaden&#39;s tone issues and why he gets animated about this subject. </li>
</ul>

<h1>References</h1>

<ul>
<li>Scott&#39;s original piece: <a href="https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist" rel="nofollow">In continued defense of non-frequentist probabilities</a></li>
<li>Scott Alexander&#39;s <a href="https://www.astralcodexten.com/p/practically-a-book-review-rootclaim/comments" rel="nofollow">post about Rootclaim</a></li>
<li>Our previous episode on Scott&#39;s piece: <a href="https://www.incrementspodcast.com/69" rel="nofollow">#69 - Contra Scott Alexander on Probability</a> </li>
<li><a href="https://www.rootclaim.com/" rel="nofollow">Rootclaim</a></li>
<li>Ben&#39;s blogpost <a href="https://benchugg.com/writing/you-need-a-theory/" rel="nofollow">You need a theory for that theory</a> </li>
<li><a href="https://en.wikipedia.org/wiki/Cox%27s_theorem" rel="nofollow">Cox&#39;s theorem</a> </li>
<li><a href="https://en.wikipedia.org/wiki/Aumann%27s_agreement_theorem" rel="nofollow">Aumann&#39;s agreement theorem</a> </li>
<li>Vaden&#39;s blogposts mentioned in the episode:

<ul>
<li><a href="https://vmasrani.github.io/blog/2020/vaden_second_response/" rel="nofollow">Critical Rationalism and Bayesian Epistemology</a></li>
<li><a href="https://vmasrani.github.io/blog/2021/proving_too_much/" rel="nofollow">Proving Too Much</a></li>
</ul></li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Follow Rich at @MeadowsRichard</li>
<li>Come join our Discord server! DM us on Twitter or send us an email to get a supersecret link</li>
<li>Help us calibrate our credences and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">YouTube</a></li>
</ul>

<p>What&#39;s your favorite theory that is neither true nor useful? Tell us over at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p><p>Special Guest: Richard Meadows.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#53 - Ask Us Anything II: Disagreements and Decisions</title>
  <link>https://www.incrementspodcast.com/53</link>
  <guid isPermaLink="false">1ffe1058-61dd-4c4d-8d9e-383a97549241</guid>
  <pubDate>Mon, 14 Aug 2023 11:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/1ffe1058-61dd-4c4d-8d9e-383a97549241.mp3" length="90414601" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Ask us anything? Ask us everything! Back at it again with AUA Part 2/N. We wax poetic and wane dramatic on disagreements, decision-making, EA, and probability</itunes:subtitle>
  <itunes:duration>1:34:10</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/1/1ffe1058-61dd-4c4d-8d9e-383a97549241/cover.jpg?v=1"/>
  <description>&lt;p&gt;Ask us anything? Ask us everything! Back at it again with AUA Part 2/N. We wax poetic and wane dramatic on a number of subjects, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ben's dark and despicable hidden historicist tendencies&lt;/li&gt;
&lt;li&gt;Expounding upon (one of our many) critiques of Bayesian Epistemology&lt;/li&gt;
&lt;li&gt;Ben's total abandonment of all of his principles&lt;/li&gt;
&lt;li&gt;Similarities and differences between human and computer decision making&lt;/li&gt;
&lt;li&gt;What can the critical rationalist community learn from Effective Altruism?&lt;/li&gt;
&lt;li&gt;Ben's new best friend Peter Turchin&lt;/li&gt;
&lt;li&gt;How to have effective disagreements and not take gleeful petty jabs at friends and co-hosts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Questions&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;(&lt;strong&gt;Michael&lt;/strong&gt;) A critique of Bayesian epistemology is that it "assigns scalars to feelings" in an ungrounded way. It's not clear to me that the problem-solving approach of Deutsch and Popper avoids this, because even during the conjecture-refutation process, the person needs to at some point decide whether the current problem has been solved satisfactorily enough to move on to the next problem. How is this satisfaction determined, if not via summarizing one's internal belief as a scalar that surpasses some threshold? If not this (which is essentially assigning scalars to feelings), by what mechanism is a problem determined to be solved?&lt;/li&gt;
&lt;li&gt;(&lt;strong&gt;Michael&lt;/strong&gt;) Is the claim that "humans create new choices whereas machines are constrained to choose within the event-space defined by the human" equivalent to saying "humans can perform abstraction while machines cannot?" Not clear what "create new choices" means, given that humans are also constrained in their vocabulary (and thus their event-space of possible thoughts)&lt;/li&gt;
&lt;li&gt;(&lt;strong&gt;Lulie&lt;/strong&gt;) In what ways could the critical rationalist culture improve by looking to EA?&lt;/li&gt;
&lt;li&gt;(&lt;strong&gt;Scott&lt;/strong&gt;) What principles do the @IncrementsPod duo apply to navigating effective conversations involving deep disagreement?&lt;/li&gt;
&lt;li&gt;(&lt;strong&gt;Scott&lt;/strong&gt;) Are there any contexts where bayesianism has utility? (steelman)&lt;/li&gt;
&lt;li&gt;(&lt;strong&gt;Scott&lt;/strong&gt;) What is Vaden going to do post-graduation?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Quotes&lt;/strong&gt; &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“The words or the language, as they are written or spoken,” he wrote, “do not seem to play any role in my mechanism of thought. The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be ‘voluntarily’ reproduced and combined...this combinatory play seems to be the essential feature in productive thought— before there is any connection with logical construction in words or other kinds of signs which can be communicated to others.” (Einstein)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Contact us&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani&lt;/li&gt;
&lt;li&gt;Check us out on YouTube at &lt;a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" target="_blank" rel="nofollow noopener"&gt;https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Come join our Discord server! DM us on Twitter or send us an email to get a supersecret link&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Send Ben an email asking him why god why over at incrementspodcast.com &lt;/p&gt;
</description>
  <itunes:keywords>ask-us-anything, disagreements, decision-making, bayesianism, probability</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Ask us anything? Ask us everything! Back at it again with AUA Part 2/N. We wax poetic and wane dramatic on a number of subjects, including:</p>

<ul>
<li>Ben&#39;s dark and despicable hidden historicist tendencies</li>
<li>Expounding upon (one of our many) critiques of Bayesian Epistemology</li>
<li>Ben&#39;s total abandonment of all of his principles</li>
<li>Similarities and differences between human and computer decision making</li>
<li>What can the critical rationalist community learn from Effective Altruism?</li>
<li>Ben&#39;s new best friend Peter Turchin</li>
<li>How to have effective disagreements and not take gleeful petty jabs at friends and co-hosts.</li>
</ul>

<p><strong>Questions</strong></p>

<ol>
<li>(<strong>Michael</strong>) A critique of Bayesian epistemology is that it &quot;assigns scalars to feelings&quot; in an ungrounded way. It&#39;s not clear to me that the problem-solving approach of Deutsch and Popper avoids this, because even during the conjecture-refutation process, the person needs to at some point decide whether the current problem has been solved satisfactorily enough to move on to the next problem. How is this satisfaction determined, if not via summarizing one&#39;s internal belief as a scalar that surpasses some threshold? If not this (which is essentially assigning scalars to feelings), by what mechanism is a problem determined to be solved?</li>
<li>(<strong>Michael</strong>) Is the claim that &quot;humans create new choices whereas machines are constrained to choose within the event-space defined by the human&quot; equivalent to saying &quot;humans can perform abstraction while machines cannot?&quot; Not clear what &quot;create new choices&quot; means, given that humans are also constrained in their vocabulary (and thus their event-space of possible thoughts)</li>
<li>(<strong>Lulie</strong>) In what ways could the critical rationalist culture improve by looking to EA?</li>
<li>(<strong>Scott</strong>) What principles do the @IncrementsPod duo apply to navigating effective conversations involving deep disagreement?</li>
<li>(<strong>Scott</strong>) Are there any contexts where bayesianism has utility? (steelman)</li>
<li>(<strong>Scott</strong>) What is Vaden going to do post-graduation?</li>
</ol>

<p><strong>Quotes</strong> </p>

<blockquote>
<p>“The words or the language, as they are written or spoken,” he wrote, “do not seem to play any role in my mechanism of thought. The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be ‘voluntarily’ reproduced and combined...this combinatory play seems to be the essential feature in productive thought— before there is any connection with logical construction in words or other kinds of signs which can be communicated to others.” (Einstein) </p>
</blockquote>

<p><strong>Contact us</strong></p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Check us out on YouTube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our Discord server! DM us on Twitter or send us an email to get a supersecret link</li>
</ul>

<p>Send Ben an email asking him why god why over at incrementspodcast.com</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Ask us anything? Ask us everything! Back at it again with AUA Part 2/N. We wax poetic and wane dramatic on a number of subjects, including:</p>

<ul>
<li>Ben&#39;s dark and despicable hidden historicist tendencies</li>
<li>Expounding upon (one of our many) critiques of Bayesian Epistemology</li>
<li>Ben&#39;s total abandonment of all of his principles</li>
<li>Similarities and differences between human and computer decision making</li>
<li>What can the critical rationalist community learn from Effective Altruism?</li>
<li>Ben&#39;s new best friend Peter Turchin</li>
<li>How to have effective disagreements and not take gleeful petty jabs at friends and co-hosts.</li>
</ul>

<p><strong>Questions</strong></p>

<ol>
<li>(<strong>Michael</strong>) A critique of Bayesian epistemology is that it &quot;assigns scalars to feelings&quot; in an ungrounded way. It&#39;s not clear to me that the problem-solving approach of Deutsch and Popper avoids this, because even during the conjecture-refutation process, the person needs to at some point decide whether the current problem has been solved satisfactorily enough to move on to the next problem. How is this satisfaction determined, if not via summarizing one&#39;s internal belief as a scalar that surpasses some threshold? If not this (which is essentially assigning scalars to feelings), by what mechanism is a problem determined to be solved?</li>
<li>(<strong>Michael</strong>) Is the claim that &quot;humans create new choices whereas machines are constrained to choose within the event-space defined by the human&quot; equivalent to saying &quot;humans can perform abstraction while machines cannot?&quot; Not clear what &quot;create new choices&quot; means, given that humans are also constrained in their vocabulary (and thus their event-space of possible thoughts)</li>
<li>(<strong>Lulie</strong>) In what ways could the critical rationalist culture improve by looking to EA?</li>
<li>(<strong>Scott</strong>) What principles do the @IncrementsPod duo apply to navigating effective conversations involving deep disagreement?</li>
<li>(<strong>Scott</strong>) Are there any contexts where bayesianism has utility? (steelman)</li>
<li>(<strong>Scott</strong>) What is Vaden going to do post-graduation?</li>
</ol>

<p><strong>Quotes</strong> </p>

<blockquote>
<p>“The words or the language, as they are written or spoken,” he wrote, “do not seem to play any role in my mechanism of thought. The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be ‘voluntarily’ reproduced and combined...this combinatory play seems to be the essential feature in productive thought— before there is any connection with logical construction in words or other kinds of signs which can be communicated to others.” (Einstein) </p>
</blockquote>

<p><strong>Contact us</strong></p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Check us out on YouTube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our Discord server! DM us on Twitter or send us an email to get a supersecret link</li>
</ul>

<p>Send Ben an email asking him why god why over at incrementspodcast.com</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
