<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web01.fireside.fm</fireside:hostname>
    <fireside:genDate>Sun, 19 Apr 2026 02:30:09 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>Increments - Episodes Tagged with “Bayesianism”</title>
    <link>https://www.incrementspodcast.com/tags/bayesianism</link>
    <pubDate>Thu, 16 Oct 2025 12:15:00 -0700</pubDate>
    <description>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>Science, Philosophy, Epistemology, Mayhem</itunes:subtitle>
    <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
    <itunes:summary>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/cover.jpg?v=18"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>Philosophy,Science,Ethics,Progress,Knowledge,Computer Science,Conversation,Error-Correction</itunes:keywords>
    <itunes:owner>
      <itunes:name>Ben Chugg and Vaden Masrani</itunes:name>
      <itunes:email>incrementspodcast@gmail.com</itunes:email>
    </itunes:owner>
<itunes:category text="Society &amp; Culture">
  <itunes:category text="Philosophy"/>
</itunes:category>
<itunes:category text="Science"/>
<item>
  <title>#93 (C&amp;R Chap 10, Part I) - An Introduction to Popper's Theory of Content</title>
  <link>https://www.incrementspodcast.com/93</link>
  <guid isPermaLink="false">614c7d46-abe3-4651-946a-b20d77e84f84</guid>
  <pubDate>Thu, 16 Oct 2025 12:15:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/614c7d46-abe3-4651-946a-b20d77e84f84.mp3" length="103477292" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>An introduction to Popper's theory of content, following Chapter 10 of Conjectures and Refutations. Plus a lot of arguing about Bayesianism. </itunes:subtitle>
  <itunes:duration>1:47:23</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/6/614c7d46-abe3-4651-946a-b20d77e84f84/cover.jpg?v=1"/>
  <description>Back to basics baby. We're doing a couple introductory episodes on Popper's philosophy of science, following Chapter 10 of Conjectures and Refutations. We start with Popper's theory of content: what makes a good scientific theory? Can we judge some theories as better than others before we even run any empirical tests? Should we be looking for theories with high probability? 
Ben and Vaden also return to their roots in another way, and get into a nice little fight about how content relates to Bayesianism. 
We discuss
Vaden's skin care routine 
If you find your friend's lost watch and proceed to lose it, are you responsible for the watch?
Empirical vs logical content 
Whether and how content can be measured and compared 
How content relates to probability 
Quotes
My aim in this lecture is to stress the significance of one particular aspect of science—its need to grow, or, if you like, its need to progress. I do not have in mind here the practical or social significance of this need. What I wish to discuss is rather its intellectual significance. I assert that continued growth is essential to the rational and empirical character of scientific knowledge; that if science ceases to grow it must lose that character. It is the way of its growth which makes science rational and empirical; the way, that is, in which scientists discriminate between available theories and choose the better one or (in the absence of a satisfactory theory) the way they give reasons for rejecting all the available theories, thereby suggesting some of the conditions with which a satisfactory theory should comply.
You will have noticed from this formulation that it is not the accumulation of observations which I have in mind when I speak of the growth of scientific knowledge, but the repeated overthrow of scientific theories and their replacement by better or more satisfactory ones. This, incidentally, is a procedure which might be found worthy of attention even by those who see the most important aspect of the growth of scientific knowledge in new experiments and in new observations.
- C&amp;R p. 291
Thus it is my first thesis that we can know of a theory, even before it has been tested, that if it passes certain tests it will be better than some other theory. 
My first thesis implies that we have a criterion of relative potential satisfactoriness, or of potential progressiveness, which can be applied to a theory even before we know whether or not it will turn out, by the passing of some crucial tests, to be satisfactory in fact.
This criterion of relative potential satisfactoriness (which I formulated some time ago, and which, incidentally, allows us to grade theories according to their degree of relative potential satisfactoriness) is extremely simple and intuitive. It characterizes as preferable the theory which tells us more; that is to say, the theory which contains the greater amount of empirical information or content; which is logically stronger; which has the greater explanatory and predictive power; and which can therefore be more severely tested by comparing predicted facts with observations. In short, we prefer an interesting, daring, and highly informative theory to a trivial one.
- C&amp;R p. 294
Let a be the statement ‘It will rain on Friday’; b the statement ‘It will be fine on Saturday’; and ab the statement ‘It will rain on Friday and it will be fine on Saturday’: it is then obvious that the informative content of this last statement, the conjunction ab, will exceed that of its component a and also that of its component b. And it will also be obvious that the probability of ab (or, what is the same, the probability that ab will be true) will be smaller than that of either of its components.
Writing Ct(a) for ‘the content of the statement a’, and Ct(ab) for ‘the content of the conjunction a and b’, we have
(1) Ct(a) &lt;= Ct(ab) &gt;= Ct(b).
This contrasts with the corresponding law of the calculus of probability,
(2) p(a) &gt;= p(ab) &lt;= p(b),
where the inequality signs of (1) are inverted. Together these two laws, (1) and (2), state that with increasing content, probability decreases, and vice versa; or in other words, that content increases with increasing improbability. (This analysis is of course in full agreement with the general idea of the logical content of a statement as the class of all those statements which are logically entailed by it. We may also say that a statement a is logically stronger than a statement b if its content is greater than that of b—that is to say, if it entails more than b does.)
This trivial fact has the following inescapable consequences: if growth of knowledge means that we operate with theories of increasing content, it must also mean that we operate with theories of decreasing probability (in the sense of the calculus of probability). Thus if our aim is the advancement or growth of knowledge, then a high probability (in the sense of the calculus of probability) cannot possibly be our aim as well: these two aims are incompatible.
- C&amp;R p. 295
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Become a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
How much content does the theory "dish soap is the ultimate face cleanser" have? Send your order of infinity over to incrementspodcast@gmail.com
</description>
  <itunes:keywords>popper, content, philosophy of science, probability, bayesianism</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Back to basics baby. We&#39;re doing a couple introductory episodes on Popper&#39;s philosophy of science, following Chapter 10 of Conjectures and Refutations. We start with Popper&#39;s theory of <em>content</em>: what makes a good scientific theory? Can we judge some theories as better than others before we even run any empirical tests? Should we be looking for theories with high probability? </p>

<p>Ben and Vaden also return to their roots in another way, and get into a nice little fight about how content relates to Bayesianism. </p>

<h1>We discuss</h1>

<ul>
<li>Vaden&#39;s skin care routine </li>
<li>If you find your friend&#39;s lost watch and proceed to lose it, are you responsible for the watch?</li>
<li>Empirical vs logical content </li>
<li>Whether and how content can be measured and compared </li>
<li>How content relates to probability </li>
</ul>

<h1>Quotes</h1>

<blockquote>
<p>My aim in this lecture is to stress the significance of one particular aspect of science—its need to grow, or, if you like, its need to progress. I do not have in mind here the practical or social significance of this need. What I wish to discuss is rather its intellectual significance. I assert that continued growth is essential to the rational and empirical character of scientific knowledge; that if science ceases to grow it must lose that character. It is the way of its growth which makes science rational and empirical; the way, that is, in which scientists discriminate between available theories and choose the better one or (in the absence of a satisfactory theory) the way they give reasons for rejecting all the available theories, thereby suggesting some of the conditions with which a satisfactory theory should comply.</p>

<p>You will have noticed from this formulation that it is not the accumulation of observations which I have in mind when I speak of the growth of scientific knowledge, but the repeated overthrow of scientific theories and their replacement by better or more satisfactory ones. This, incidentally, is a procedure which might be found worthy of attention even by those who see the most important aspect of the growth of scientific knowledge in new experiments and in new observations.</p>

<ul>
<li><em>C&amp;R p. 291</em></li>
</ul>
</blockquote>

<hr>

<blockquote>
<p>Thus it is my first thesis that we can know of a theory, even before it has been tested, that if it passes certain tests it will be better than some other theory. </p>

<p>My first thesis implies that we have a criterion of relative potential satisfactoriness, or of potential progressiveness, which can be applied to a theory even before we know whether or not it will turn out, by the passing of some crucial tests, to be satisfactory in <em>fact</em>.</p>

<p>This criterion of relative potential satisfactoriness (which I formulated some time ago, and which, incidentally, allows us to grade theories according to their degree of relative potential satisfactoriness) is extremely simple and intuitive. It characterizes as preferable the theory which tells us more; that is to say, the theory which contains the greater amount of empirical information or <em>content</em>; which is logically stronger; which has the greater explanatory and predictive power; and which can therefore be <em>more severely tested</em> by comparing predicted facts with observations. In short, we prefer an interesting, daring, and highly informative theory to a trivial one.</p>

<ul>
<li><em>C&amp;R p. 294</em></li>
</ul>

<p>Let a be the statement ‘It will rain on Friday’; b the statement ‘It will be fine on Saturday’; and ab the statement ‘It will rain on Friday and it will be fine on Saturday’: it is then obvious that the informative content of this last statement, the conjunction ab, will exceed that of its component a and also that of its component b. And it will also be obvious that the probability of ab (or, what is the same, the probability that ab will be true) will be smaller than that of either of its components.</p>

<p>Writing Ct(a) for ‘the content of the statement a’, and Ct(ab) for ‘the content of the conjunction a and b’, we have<br>
(1) Ct(a) &lt;= Ct(ab) &gt;= Ct(b).</p>

<p>This contrasts with the corresponding law of the calculus of probability,</p>

<p>(2) p(a) &gt;= p(ab) &lt;= p(b),</p>

<p>where the inequality signs of (1) are inverted. Together these two laws, (1) and (2), state that with increasing content, probability decreases, and vice versa; or in other words, that content increases with increasing improbability. (This analysis is of course in full agreement with the general idea of the logical content of a statement as the class of all those statements which are logically entailed by it. We may also say that a statement a is logically stronger than a statement b if its content is greater than that of b—that is to say, if it entails more than b does.)</p>

<p>This trivial fact has the following inescapable consequences: if growth of knowledge means that we operate with theories of increasing content, it must also mean that we operate with theories of decreasing probability (in the sense of the calculus of probability). Thus if our aim is the advancement or growth of knowledge, then a high probability (in the sense of the calculus of probability) cannot possibly be our aim as well: these two aims are incompatible.</p>

<ul>
<li><em>C&amp;R p. 295</em></li>
</ul>
</blockquote>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Become a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>How much content does the theory &quot;dish soap is the ultimate face cleanser&quot; have? Send your order of infinity over to <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Back to basics baby. We&#39;re doing a couple introductory episodes on Popper&#39;s philosophy of science, following Chapter 10 of Conjectures and Refutations. We start with Popper&#39;s theory of <em>content</em>: what makes a good scientific theory? Can we judge some theories as better than others before we even run any empirical tests? Should we be looking for theories with high probability? </p>

<p>Ben and Vaden also return to their roots in another way, and get into a nice little fight about how content relates to Bayesianism. </p>

<h1>We discuss</h1>

<ul>
<li>Vaden&#39;s skin care routine </li>
<li>If you find your friend&#39;s lost watch and proceed to lose it, are you responsible for the watch?</li>
<li>Empirical vs logical content </li>
<li>Whether and how content can be measured and compared </li>
<li>How content relates to probability </li>
</ul>

<h1>Quotes</h1>

<blockquote>
<p>My aim in this lecture is to stress the significance of one particular aspect of science—its need to grow, or, if you like, its need to progress. I do not have in mind here the practical or social significance of this need. What I wish to discuss is rather its intellectual significance. I assert that continued growth is essential to the rational and empirical character of scientific knowledge; that if science ceases to grow it must lose that character. It is the way of its growth which makes science rational and empirical; the way, that is, in which scientists discriminate between available theories and choose the better one or (in the absence of a satisfactory theory) the way they give reasons for rejecting all the available theories, thereby suggesting some of the conditions with which a satisfactory theory should comply.</p>

<p>You will have noticed from this formulation that it is not the accumulation of observations which I have in mind when I speak of the growth of scientific knowledge, but the repeated overthrow of scientific theories and their replacement by better or more satisfactory ones. This, incidentally, is a procedure which might be found worthy of attention even by those who see the most important aspect of the growth of scientific knowledge in new experiments and in new observations.</p>

<ul>
<li><em>C&amp;R p. 291</em></li>
</ul>
</blockquote>

<hr>

<blockquote>
<p>Thus it is my first thesis that we can know of a theory, even before it has been tested, that if it passes certain tests it will be better than some other theory. </p>

<p>My first thesis implies that we have a criterion of relative potential satisfactoriness, or of potential progressiveness, which can be applied to a theory even before we know whether or not it will turn out, by the passing of some crucial tests, to be satisfactory in <em>fact</em>.</p>

<p>This criterion of relative potential satisfactoriness (which I formulated some time ago, and which, incidentally, allows us to grade theories according to their degree of relative potential satisfactoriness) is extremely simple and intuitive. It characterizes as preferable the theory which tells us more; that is to say, the theory which contains the greater amount of empirical information or <em>content</em>; which is logically stronger; which has the greater explanatory and predictive power; and which can therefore be <em>more severely tested</em> by comparing predicted facts with observations. In short, we prefer an interesting, daring, and highly informative theory to a trivial one.</p>

<ul>
<li><em>C&amp;R p. 294</em></li>
</ul>

<p>Let a be the statement ‘It will rain on Friday’; b the statement ‘It will be fine on Saturday’; and ab the statement ‘It will rain on Friday and it will be fine on Saturday’: it is then obvious that the informative content of this last statement, the conjunction ab, will exceed that of its component a and also that of its component b. And it will also be obvious that the probability of ab (or, what is the same, the probability that ab will be true) will be smaller than that of either of its components.</p>

<p>Writing Ct(a) for ‘the content of the statement a’, and Ct(ab) for ‘the content of the conjunction a and b’, we have<br>
(1) Ct(a) &lt;= Ct(ab) &gt;= Ct(b).</p>

<p>This contrasts with the corresponding law of the calculus of probability,</p>

<p>(2) p(a) &gt;= p(ab) &lt;= p(b),</p>

<p>where the inequality signs of (1) are inverted. Together these two laws, (1) and (2), state that with increasing content, probability decreases, and vice versa; or in other words, that content increases with increasing improbability. (This analysis is of course in full agreement with the general idea of the logical content of a statement as the class of all those statements which are logically entailed by it. We may also say that a statement a is logically stronger than a statement b if its content is greater than that of b—that is to say, if it entails more than b does.)</p>

<p>This trivial fact has the following inescapable consequences: if growth of knowledge means that we operate with theories of increasing content, it must also mean that we operate with theories of decreasing probability (in the sense of the calculus of probability). Thus if our aim is the advancement or growth of knowledge, then a high probability (in the sense of the calculus of probability) cannot possibly be our aim as well: these two aims are incompatible.</p>

<ul>
<li><em>C&amp;R p. 295</em></li>
</ul>
</blockquote>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Become a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>How much content does the theory &quot;dish soap is the ultimate face cleanser&quot; have? Send your order of infinity over to <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#70 - ... and Bayes Bites Back (w/ Richard Meadows) </title>
  <link>https://www.incrementspodcast.com/70</link>
  <guid isPermaLink="false">a9b0b76a-e2e7-449c-8318-06efecf1c13d</guid>
  <pubDate>Tue, 09 Jul 2024 10:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/a9b0b76a-e2e7-449c-8318-06efecf1c13d.mp3" length="88283500" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Rich comes on to defend Scott Alexander against our criticisms. Are we being unfair? Are the Bayesians simply the Most Rational People (MRP) and we can't handle it? </itunes:subtitle>
  <itunes:duration>1:30:34</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/a/a9b0b76a-e2e7-449c-8318-06efecf1c13d/cover.jpg?v=4"/>
  <description>Sick of hearing us shouting about Bayesianism? Well today you're in luck, because this time, someone shouts at us about Bayesianism! Richard Meadows, finance journalist, author, and Ben's secretive podcast paramour, takes us to task. Are we being unfair to the Bayesians? Is Bayesian rationality optimal in theory, and the rest of us are just coping with an uncertain world? Is this why the Bayesian rationalists have so much cultural influence (and money, and fame, and media attention, and ...), and we, ahem, uhhh, don't?  
Check out Rich's website (https://thedeepdish.org/start), his book Optionality: How to Survive and Thrive in a Volatile World (https://www.amazon.ca/Optionality-Survive-Thrive-Volatile-World/dp/0473545500), and his podcast (https://doyouevenlit.podbean.com/). 
We discuss
The pros of the rationality and EA communities 
Whether Bayesian epistemology contributes to open-mindedness
The fact that evidence doesn't speak for itself 
The fact that the world doesn't come bundled as discrete chunks of evidence 
Whether Bayesian epistemology would be "optimal" for Laplace's demon 
The difference between truth and certainty
Vaden's tone issues and why he gets animated about this subject. 
References
Scott's original piece: In continued defense of non-frequentist probabilities (https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist)
Scott Alexander's post about rootclaim (https://www.astralcodexten.com/p/practically-a-book-review-rootclaim/comments) 
Our previous episode on Scott's piece: #69 - Contra Scott Alexander on Probability (https://www.incrementspodcast.com/69) 
Rootclaim (https://www.rootclaim.com/)
Ben's blogpost You need a theory for that theory (https://benchugg.com/writing/you-need-a-theory/) 
Cox's theorem (https://en.wikipedia.org/wiki/Cox%27s_theorem) 
Aumann's agreement theorem (https://en.wikipedia.org/wiki/Aumann%27s_agreement_theorem) 
Vaden's blogposts mentioned in the episode:
Critical Rationalism and Bayesian Epistemology (https://vmasrani.github.io/blog/2020/vaden_second_response/)
Proving Too Much (https://vmasrani.github.io/blog/2021/proving_too_much/)
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
Follow Rich at @MeadowsRichard
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Help us calibrate our credences and get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
What's your favorite theory that is neither true nor useful? Tell us over at incrementspodcast@gmail.com.  Special Guest: Richard Meadows.
</description>
  <itunes:keywords>probability, bayesianism, rationality, uncertainty, decision-making</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Sick of hearing us shouting about Bayesianism? Well today you&#39;re in luck, because this time, someone shouts at <em>us</em> about Bayesianism! Richard Meadows, finance journalist, author, and Ben&#39;s secretive podcast paramour, takes us to task. Are we being unfair to the Bayesians? Is Bayesian rationality optimal in theory, and the rest of us are just coping with an uncertain world? Is this why the Bayesian rationalists have so much cultural influence (and money, and fame, and media attention, and ...), and we, ahem, uhhh, don&#39;t?  </p>

<p>Check out Rich&#39;s <a href="https://thedeepdish.org/start" rel="nofollow">website</a>, his book <a href="https://www.amazon.ca/Optionality-Survive-Thrive-Volatile-World/dp/0473545500" rel="nofollow"><strong>Optionality:</strong> How to Survive and Thrive in a Volatile World</a>, and his <a href="https://doyouevenlit.podbean.com/" rel="nofollow">podcast</a>. </p>

<h1>We discuss</h1>

<ul>
<li>The pros of the rationality and EA communities </li>
<li>Whether Bayesian epistemology contributes to open-mindedness</li>
<li>The fact that evidence doesn&#39;t speak for itself </li>
<li>The fact that the world doesn&#39;t come bundled as discrete chunks of evidence </li>
<li>Whether Bayesian epistemology would be &quot;optimal&quot; for Laplace&#39;s demon </li>
<li>The difference between truth and certainty</li>
<li>Vaden&#39;s tone issues and why he gets animated about this subject. </li>
</ul>

<h1>References</h1>

<ul>
<li>Scott&#39;s original piece: <a href="https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist" rel="nofollow">In continued defense of non-frequentist probabilities</a></li>
<li>Scott Alexander&#39;s <a href="https://www.astralcodexten.com/p/practically-a-book-review-rootclaim/comments" rel="nofollow">post about rootclaim</a> </li>
<li>Our previous episode on Scott&#39;s piece: <a href="https://www.incrementspodcast.com/69" rel="nofollow">#69 - Contra Scott Alexander on Probability</a> </li>
<li><a href="https://www.rootclaim.com/" rel="nofollow">Rootclaim</a></li>
<li>Ben&#39;s blogpost <a href="https://benchugg.com/writing/you-need-a-theory/" rel="nofollow">You need a theory for that theory</a> </li>
<li><a href="https://en.wikipedia.org/wiki/Cox%27s_theorem" rel="nofollow">Cox&#39;s theorem</a> </li>
<li><a href="https://en.wikipedia.org/wiki/Aumann%27s_agreement_theorem" rel="nofollow">Aumann&#39;s agreement theorem</a> </li>
<li>Vaden&#39;s blogposts mentioned in the episode:

<ul>
<li><a href="https://vmasrani.github.io/blog/2020/vaden_second_response/" rel="nofollow">Critical Rationalism and Bayesian Epistemology</a></li>
<li><a href="https://vmasrani.github.io/blog/2021/proving_too_much/" rel="nofollow">Proving Too Much</a></li>
</ul></li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Follow Rich at @MeadowsRichard</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Help us calibrate our credences and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>What&#39;s your favorite theory that is neither true nor useful? Tell us over at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p><p>Special Guest: Richard Meadows.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Sick of hearing us shouting about Bayesianism? Well today you&#39;re in luck, because this time, someone shouts at <em>us</em> about Bayesianism! Richard Meadows, finance journalist, author, and Ben&#39;s secretive podcast paramour, takes us to task. Are we being unfair to the Bayesians? Is Bayesian rationality optimal in theory, and the rest of us are just coping with an uncertain world? Is this why the Bayesian rationalists have so much cultural influence (and money, and fame, and media attention, and ...), and we, ahem, uhhh, don&#39;t?  </p>

<p>Check out Rich&#39;s <a href="https://thedeepdish.org/start" rel="nofollow">website</a>, his book <a href="https://www.amazon.ca/Optionality-Survive-Thrive-Volatile-World/dp/0473545500" rel="nofollow"><strong>Optionality:</strong> How to Survive and Thrive in a Volatile World</a>, and his <a href="https://doyouevenlit.podbean.com/" rel="nofollow">podcast</a>. </p>

<h1>We discuss</h1>

<ul>
<li>The pros of the rationality and EA communities </li>
<li>Whether Bayesian epistemology contributes to open-mindedness</li>
<li>The fact that evidence doesn&#39;t speak for itself </li>
<li>The fact that the world doesn&#39;t come bundled as discrete chunks of evidence </li>
<li>Whether Bayesian epistemology would be &quot;optimal&quot; for Laplace&#39;s demon </li>
<li>The difference between truth and certainty</li>
<li>Vaden&#39;s tone issues and why he gets animated about this subject. </li>
</ul>

<h1>References</h1>

<ul>
<li>Scott&#39;s original piece: <a href="https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist" rel="nofollow">In continued defense of non-frequentist probabilities</a></li>
<li>Scott Alexander&#39;s <a href="https://www.astralcodexten.com/p/practically-a-book-review-rootclaim/comments" rel="nofollow">post about rootclaim</a> </li>
<li>Our previous episode on Scott&#39;s piece: <a href="https://www.incrementspodcast.com/69" rel="nofollow">#69 - Contra Scott Alexander on Probability</a> </li>
<li><a href="https://www.rootclaim.com/" rel="nofollow">Rootclaim</a></li>
<li>Ben&#39;s blogpost <a href="https://benchugg.com/writing/you-need-a-theory/" rel="nofollow">You need a theory for that theory</a> </li>
<li><a href="https://en.wikipedia.org/wiki/Cox%27s_theorem" rel="nofollow">Cox&#39;s theorem</a> </li>
<li><a href="https://en.wikipedia.org/wiki/Aumann%27s_agreement_theorem" rel="nofollow">Aumann&#39;s agreement theorem</a> </li>
<li>Vaden&#39;s blogposts mentioned in the episode:

<ul>
<li><a href="https://vmasrani.github.io/blog/2020/vaden_second_response/" rel="nofollow">Critical Rationalism and Bayesian Epistemology</a></li>
<li><a href="https://vmasrani.github.io/blog/2021/proving_too_much/" rel="nofollow">Proving Too Much</a></li>
</ul></li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Follow Rich at @MeadowsRichard</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Help us calibrate our credences and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>What&#39;s your favorite theory that is neither true nor useful? Tell us over at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p><p>Special Guest: Richard Meadows.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#69 - Contra Scott Alexander on Probability</title>
  <link>https://www.incrementspodcast.com/69</link>
  <guid isPermaLink="false">3ac225c1-a486-428e-bdcf-2d1973d2c80b</guid>
  <pubDate>Thu, 20 Jun 2024 08:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/3ac225c1-a486-428e-bdcf-2d1973d2c80b.mp3" length="101992679" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle> Cursed to return to this subject again, we attack the big man himself on probability. What's your credence that we're correct?</itunes:subtitle>
  <itunes:duration>1:45:09</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/3/3ac225c1-a486-428e-bdcf-2d1973d2c80b/cover.jpg?v=2"/>
  <description>After four episodes spent fawning over Scott Alexander's "Non-libertarian FAQ", we turn around and attack the good man instead. In this episode we respond to Scott's piece "In Continued Defense of Non-Frequentist Probabilities", addressing each of his five arguments defending Bayesian probability. Like moths to a flame, we apparently cannot let the probability subject slide, sorry people. But the good news is that before getting there, you get to hear about some therapists and pedophiles (therapeutic pedophilia?). What's the probability that Scott changes his mind based on this episode?
We discuss
Why we're not defending frequentism as a philosophy 
The Bayesian interpretation of probability 
The importance of being explicit about assumptions 
Why it's insane to think that 50% should mean both "equally likely" and "I have no effing idea". 
Why Scott's interpretation of probability is crippling our ability to communicate 
How super are Superforecasters? 
Marginal versus conditional guarantees (this is exactly as boring as it sounds) 
How to pronounce Samotsvety and are they Italian or Eastern European or what?
References
In Continued Defense Of Non-Frequentist Probabilities (https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist)
Article on superforecasting by Gavin Leech and Misha Yugadin (https://progress.institute/can-policymakers-trust-forecasters/) 
Essay by Michael Story on superforecasting (https://www.samstack.io/p/five-questions-for-michael-story) 
Existential risk tournament: Superforecasters vs AI doomers (https://forecastingresearch.org/news/results-from-the-2022-existential-risk-persuasion-tournament) and Ben's blogpost about it (https://benchugg.com/writing/superforecasting/) 
The Good Judgment Project (https://goodjudgment.com/) 
Quotes
During the pandemic, Dominic Cummings said some of the most useful stuff that he received and circulated in the British government was not forecasting. It was qualitative information explaining the general model of what’s going on, which enabled decision-makers to think more clearly about their options for action and the likely consequences. If you’re worried about a new disease outbreak, you don’t just want a percentage probability estimate about future case numbers, you want an explanation of how the virus is likely to spread, what you can do about it, how you can prevent it.
- Michael Story (https://www.samstack.io/p/five-questions-for-michael-story) 
Is it bad that one term can mean both perfect information (as in 1) and total lack of information (as in 3)? No. This is no different from how we discuss things when we’re not using probability.
Do vaccines cause autism? No. Does drinking monkey blood cause autism? Also no. My evidence on the vaccines question is dozens of excellent studies, conducted so effectively that we’re as sure about this as we are about anything in biology. My evidence on the monkey blood question is that nobody’s ever proposed this and it would be weird if it were true. Still, it’s perfectly fine to say the single-word answer “no” to both of them to describe where I currently stand. If someone wants to know how much evidence/certainty is behind my “no”, they can ask, and I’ll tell them.
- SA, Section 2
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Help us calibrate our credences and get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
What's your credence in Bayesianism? Tell us over at incrementspodcast@gmail.com. 
</description>
  <itunes:keywords>probability, bayesianism, frequentism, Scott Alexander, superforecasting, credences</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>After four episodes spent fawning over Scott Alexander&#39;s &quot;Non-libertarian FAQ&quot;, we turn around and attack the good man instead. In this episode we respond to Scott&#39;s piece &quot;In Continued Defense of Non-Frequentist Probabilities&quot;, addressing each of his five arguments defending Bayesian probability. Like moths to a flame, we apparently cannot let the probability subject slide, sorry people. But the good news is that before getting there, you get to hear about some therapists and pedophiles (therapeutic pedophilia?). What&#39;s the probability that Scott changes his mind based on this episode?</p>

<h1>We discuss</h1>

<ul>
<li>Why we&#39;re not defending frequentism as a philosophy </li>
<li>The Bayesian interpretation of probability </li>
<li>The importance of being explicit about assumptions </li>
<li>Why it&#39;s insane to think that 50% should mean both &quot;equally likely&quot; and &quot;I have no effing idea&quot;. </li>
<li>Why Scott&#39;s interpretation of probability is crippling <em>our</em> ability to communicate </li>
<li>How super are Superforecasters? </li>
<li>Marginal versus conditional guarantees (this is exactly as boring as it sounds) </li>
<li>How to pronounce Samotsvety and are they Italian or Eastern European or what?</li>
</ul>

<h1>References</h1>

<ul>
<li><a href="https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist" rel="nofollow">In Continued Defense Of Non-Frequentist Probabilities</a></li>
<li><a href="https://progress.institute/can-policymakers-trust-forecasters/" rel="nofollow">Article on superforecasting by Gavin Leech and Misha Yugadin</a> </li>
<li><a href="https://www.samstack.io/p/five-questions-for-michael-story" rel="nofollow">Essay by Michael Story on superforecasting</a> </li>
<li><a href="https://forecastingresearch.org/news/results-from-the-2022-existential-risk-persuasion-tournament" rel="nofollow">Existential risk tournament: Superforecasters vs AI doomers</a> and <a href="https://benchugg.com/writing/superforecasting/" rel="nofollow">Ben&#39;s blogpost about it</a> </li>
<li><a href="https://goodjudgment.com/" rel="nofollow">The Good Judgment Project</a> </li>
</ul>

<h1>Quotes</h1>

<blockquote>
<p>During the pandemic, Dominic Cummings said some of the most useful stuff that he received and circulated in the British government was not forecasting. It was qualitative information explaining the general model of what’s going on, which enabled decision-makers to think more clearly about their options for action and the likely consequences. If you’re worried about a new disease outbreak, you don’t just want a percentage probability estimate about future case numbers, you want an explanation of how the virus is likely to spread, what you can do about it, how you can prevent it.<br>
- <a href="https://www.samstack.io/p/five-questions-for-michael-story" rel="nofollow">Michael Story</a> </p>

<p>Is it bad that one term can mean both perfect information (as in 1) and total lack of information (as in 3)? No. This is no different from how we discuss things when we’re not using probability.</p>

<p>Do vaccines cause autism? No. Does drinking monkey blood cause autism? Also no. My evidence on the vaccines question is dozens of excellent studies, conducted so effectively that we’re as sure about this as we are about anything in biology. My evidence on the monkey blood question is that nobody’s ever proposed this and it would be weird if it were true. Still, it’s perfectly fine to say the single-word answer “no” to both of them to describe where I currently stand. If someone wants to know how much evidence/certainty is behind my “no”, they can ask, and I’ll tell them.<br>
- SA, Section 2</p>
</blockquote>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Help us calibrate our credences and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>What&#39;s your credence in Bayesianism? Tell us over at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>After four episodes spent fawning over Scott Alexander&#39;s &quot;Non-libertarian FAQ&quot;, we turn around and attack the good man instead. In this episode we respond to Scott&#39;s piece &quot;In Continued Defense of Non-Frequentist Probabilities&quot;, addressing each of his five arguments defending Bayesian probability. Like moths to a flame, we apparently cannot let the probability subject slide, sorry people. But the good news is that before getting there, you get to hear about some therapists and pedophiles (therapeutic pedophilia?). What&#39;s the probability that Scott changes his mind based on this episode?</p>

<h1>We discuss</h1>

<ul>
<li>Why we&#39;re not defending frequentism as a philosophy </li>
<li>The Bayesian interpretation of probability </li>
<li>The importance of being explicit about assumptions </li>
<li>Why it&#39;s insane to think that 50% should mean both &quot;equally likely&quot; and &quot;I have no effing idea&quot;. </li>
<li>Why Scott&#39;s interpretation of probability is crippling <em>our</em> ability to communicate </li>
<li>How super are Superforecasters? </li>
<li>Marginal versus conditional guarantees (this is exactly as boring as it sounds) </li>
<li>How to pronounce Samotsvety and are they Italian or Eastern European or what?</li>
</ul>

<h1>References</h1>

<ul>
<li><a href="https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist" rel="nofollow">In Continued Defense Of Non-Frequentist Probabilities</a></li>
<li><a href="https://progress.institute/can-policymakers-trust-forecasters/" rel="nofollow">Article on superforecasting by Gavin Leech and Misha Yugadin</a> </li>
<li><a href="https://www.samstack.io/p/five-questions-for-michael-story" rel="nofollow">Essay by Michael Story on superforecasting</a> </li>
<li><a href="https://forecastingresearch.org/news/results-from-the-2022-existential-risk-persuasion-tournament" rel="nofollow">Existential risk tournament: Superforecasters vs AI doomers</a> and <a href="https://benchugg.com/writing/superforecasting/" rel="nofollow">Ben&#39;s blogpost about it</a> </li>
<li><a href="https://goodjudgment.com/" rel="nofollow">The Good Judgment Project</a> </li>
</ul>

<h1>Quotes</h1>

<blockquote>
<p>During the pandemic, Dominic Cummings said some of the most useful stuff that he received and circulated in the British government was not forecasting. It was qualitative information explaining the general model of what’s going on, which enabled decision-makers to think more clearly about their options for action and the likely consequences. If you’re worried about a new disease outbreak, you don’t just want a percentage probability estimate about future case numbers, you want an explanation of how the virus is likely to spread, what you can do about it, how you can prevent it.<br>
- <a href="https://www.samstack.io/p/five-questions-for-michael-story" rel="nofollow">Michael Story</a> </p>

<p>Is it bad that one term can mean both perfect information (as in 1) and total lack of information (as in 3)? No. This is no different from how we discuss things when we’re not using probability.</p>

<p>Do vaccines cause autism? No. Does drinking monkey blood cause autism? Also no. My evidence on the vaccines question is dozens of excellent studies, conducted so effectively that we’re as sure about this as we are about anything in biology. My evidence on the monkey blood question is that nobody’s ever proposed this and it would be weird if it were true. Still, it’s perfectly fine to say the single-word answer “no” to both of them to describe where I currently stand. If someone wants to know how much evidence/certainty is behind my “no”, they can ask, and I’ll tell them.<br>
- SA, Section 2</p>
</blockquote>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Help us calibrate our credences and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>What&#39;s your credence in Bayesianism? Tell us over at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#53 - Ask Us Anything II: Disagreements and Decisions</title>
  <link>https://www.incrementspodcast.com/53</link>
  <guid isPermaLink="false">1ffe1058-61dd-4c4d-8d9e-383a97549241</guid>
  <pubDate>Mon, 14 Aug 2023 11:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/1ffe1058-61dd-4c4d-8d9e-383a97549241.mp3" length="90414601" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Ask us anything? Ask us everything! Back at it again with AUA Part 2/N. We wax poetic and wane dramatic on disagreements, decision-making, EA, and probability</itunes:subtitle>
  <itunes:duration>1:34:10</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/1/1ffe1058-61dd-4c4d-8d9e-383a97549241/cover.jpg?v=1"/>
  <description>Ask us anything? Ask us everything! Back at it again with AUA Part 2/N. We wax poetic and wane dramatic on a number of subjects, including:
- Ben's dark and despicable hidden historicist tendencies
- Expounding upon (one of our many) critiques of Bayesian Epistemology
- Ben's total abandonment of all of his principles
- Similarities and differences between human and computer decision making
- What can the critical rationalist community learn from Effective Altruism?
- Ben's new best friend Peter Turchin
- How to have effective disagreements and not take gleeful petty jabs at friends and co-hosts.
Questions
(Michael) A critique of Bayesian epistemology is that it "assigns scalars to feelings" in an ungrounded way. It's not clear to me that the problem-solving approach of Deutsch and Popper avoids this, because even during the conjecture-refutation process, the person needs at some point to decide whether the current problem has been solved satisfactorily enough to move on to the next problem. How is this satisfaction determined, if not via summarizing one's internal belief as a scalar that surpasses some threshold? If not this (which is essentially assigning scalars to feelings), by what mechanism is a problem determined to be solved?
(Michael) Is the claim that "humans create new choices whereas machines are constrained to choose within the event-space defined by the human" equivalent to saying "humans can perform abstraction while machines cannot?" Not clear what "create new choices" means, given that humans are also constrained in their vocabulary (and thus their event-space of possible thoughts)
(Lulie) In what ways could the critical rationalist culture improve by looking to EA?
(Scott) What principles do the @IncrementsPod duo apply to navigating effective conversations involving deep disagreement?
(Scott) Are there any contexts where bayesianism has utility? (steelman)
(Scott) What is Vaden going to do post graduation?
Quotes 
“The words or the language, as they are written or spoken,” he wrote, “do not seem to play any role in my mechanism of thought. The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be ‘voluntarily’ reproduced and combined...this combinatory play seems to be the essential feature in productive thought— before there is any connection with logical construction in words or other kinds of signs which can be communicated to others.” (Einstein) 
Contact us
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
Check us out on youtube at https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Send Ben an email asking him why god why over at incrementspodcast.com 
</description>
  <itunes:keywords>ask-us-anything, disagreements, decision-making, bayesianism, probability </itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Ask us anything? Ask us everything! Back at it again with AUA Part 2/N. We wax poetic and wane dramatic on a number of subjects, including:</p>

<ul>
<li>Ben&#39;s dark and despicable hidden historicist tendencies</li>
<li>Expounding upon (one of our many) critiques of Bayesian Epistemology</li>
<li>Ben&#39;s total abandonment of all of his principles</li>
<li>Similarities and differences between human and computer decision making</li>
<li>What can the critical rationalist community learn from Effective Altruism?</li>
<li>Ben&#39;s new best friend Peter Turchin</li>
<li>How to have effective disagreements and not take gleeful petty jabs at friends and co-hosts.</li>
</ul>

<p><strong>Questions</strong></p>

<ol>
<li>(<strong>Michael</strong>) A critique of Bayesian epistemology is that it &quot;assigns scalars to feelings&quot; in an ungrounded way. It&#39;s not clear to me that the problem-solving approach of Deutsch and Popper avoids this, because even during the conjecture-refutation process, the person needs at some point to decide whether the current problem has been solved satisfactorily enough to move on to the next problem. How is this satisfaction determined, if not via summarizing one&#39;s internal belief as a scalar that surpasses some threshold? If not this (which is essentially assigning scalars to feelings), by what mechanism is a problem determined to be solved?</li>
<li>(<strong>Michael</strong>) Is the claim that &quot;humans create new choices whereas machines are constrained to choose within the event-space defined by the human&quot; equivalent to saying &quot;humans can perform abstraction while machines cannot?&quot; Not clear what &quot;create new choices&quot; means, given that humans are also constrained in their vocabulary (and thus their event-space of possible thoughts)</li>
<li>(<strong>Lulie</strong>) In what ways could the critical rationalist culture improve by looking to EA?</li>
<li>(<strong>Scott</strong>) What principles do the @IncrementsPod duo apply to navigating effective conversations involving deep disagreement?</li>
<li>(<strong>Scott</strong>) Are there any contexts where bayesianism has utility? (steelman)</li>
<li>(<strong>Scott</strong>) What is Vaden going to do post graduation?</li>
</ol>

<p><strong>Quotes</strong> </p>

<blockquote>
<p>“The words or the language, as they are written or spoken,” he wrote, “do not seem to play any role in my mechanism of thought. The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be ‘voluntarily’ reproduced and combined...this combinatory play seems to be the essential feature in productive thought— before there is any connection with logical construction in words or other kinds of signs which can be communicated to others.” (Einstein) </p>
</blockquote>

<p><strong>Contact us</strong></p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Check us out on youtube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
</ul>

<p>Send Ben an email asking him why god why over at incrementspodcast.com</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Ask us anything? Ask us everything! Back at it again with AUA Part 2/N. We wax poetic and wane dramatic on a number of subjects, including:</p>

<ul>
<li>Ben&#39;s dark and despicable hidden historicist tendencies</li>
<li>Expounding upon (one of our many) critiques of Bayesian Epistemology</li>
<li>Ben&#39;s total abandonment of all of his principles</li>
<li>Similarities and differences between human and computer decision making</li>
<li>What can the critical rationalist community learn from Effective Altruism?</li>
<li>Ben&#39;s new best friend Peter Turchin</li>
<li>How to have effective disagreements and not take gleeful petty jabs at friends and co-hosts.</li>
</ul>

<p><strong>Questions</strong></p>

<ol>
<li>(<strong>Michael</strong>) A critique of Bayesian epistemology is that it &quot;assigns scalars to feelings&quot; in an ungrounded way. It&#39;s not clear to me that the problem-solving approach of Deutsch and Popper avoids this, because even during the conjecture-refutation process, the person needs at some point to decide whether the current problem has been solved satisfactorily enough to move on to the next problem. How is this satisfaction determined, if not via summarizing one&#39;s internal belief as a scalar that surpasses some threshold? If not this (which is essentially assigning scalars to feelings), by what mechanism is a problem determined to be solved?</li>
<li>(<strong>Michael</strong>) Is the claim that &quot;humans create new choices whereas machines are constrained to choose within the event-space defined by the human&quot; equivalent to saying &quot;humans can perform abstraction while machines cannot?&quot; Not clear what &quot;create new choices&quot; means, given that humans are also constrained in their vocabulary (and thus their event-space of possible thoughts)</li>
<li>(<strong>Lulie</strong>) In what ways could the critical rationalist culture improve by looking to EA?</li>
<li>(<strong>Scott</strong>) What principles do the @IncrementsPod duo apply to navigating effective conversations involving deep disagreement?</li>
<li>(<strong>Scott</strong>) Are there any contexts where Bayesianism has utility? (steelman)</li>
<li>(<strong>Scott</strong>) What is Vaden going to do post-graduation?</li>
</ol>

<p><strong>Quotes</strong> </p>

<blockquote>
<p>“The words or the language, as they are written or spoken,” he wrote, “do not seem to play any role in my mechanism of thought. The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be ‘voluntarily’ reproduced and combined...this combinatory play seems to be the essential feature in productive thought— before there is any connection with logical construction in words or other kinds of signs which can be communicated to others.” (Einstein) </p>
</blockquote>

<p><strong>Contact us</strong></p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Check us out on youtube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
</ul>

<p>Send Ben an email asking him why god why over at incrementspodcast.com</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#46 (Bonus) - Arguing about probability (with Nick Anyos)</title>
  <link>https://www.incrementspodcast.com/46</link>
  <guid isPermaLink="false">4b26dbf2-7bcd-44e6-ac65-c3dbca70c897</guid>
  <pubDate>Mon, 19 Dec 2022 12:30:00 -0800</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/4b26dbf2-7bcd-44e6-ac65-c3dbca70c897.mp3" length="85872117" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Ben and Vaden make a guest appearance on Nick Anyos' podcast to discuss criticisms of effective altruism. As usual, they end up arguing about probability for most of it. </itunes:subtitle>
  <itunes:duration>1:59:16</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/4/4b26dbf2-7bcd-44e6-ac65-c3dbca70c897/cover.jpg?v=1"/>
  <description>We make a guest appearance on Nick Anyos' podcast to talk about effective altruism, longtermism, and probability. Nick (very politely) pushes back on our anti-Bayesian credo, and we get deep into the weeds of probability and epistemology. 
You can find Nick's podcast on institutional design here (https://institutionaldesign.podbean.com/), and his substack here (https://institutionaldesign.substack.com/?utm_source=substack&amp;utm_medium=web&amp;utm_campaign=substack_profile). 
We discuss: 
- The lack of feedback loops in longtermism 
- Whether quantifying your beliefs is helpful 
- Objective versus subjective knowledge 
- The difference between prediction and explanation
- The difference between Bayesian epistemology and Bayesian statistics
- Statistical modelling and when statistics is useful 
Links
- Philosophy and the practice of Bayesian statistics (http://www.stat.columbia.edu/~gelman/research/published/philosophy.pdf) by Andrew Gelman and Cosma Shalizi
- EA forum post (https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations) showing all forecasts beyond a year out are uncalibrated. 
- Vaclav Smil quote where he predicts a pandemic by 2021:
     &gt; The following realities indicate the imminence of the risk. The typical frequency of influenza pandemics was once every 50–60 years between 1700 and 1889 (the longest known gap was 52 years, between the pandemics of 1729–1733 and 1781–1782) and only once every 10–40 years since 1889. The recurrence interval, calculated simply as the mean time elapsed between the last six known pandemics, is about 28 years, with the extremes of 6 and 53 years. Adding the mean and the highest interval to 1968 gives a span between 1996 and 2021. We are, probabilistically speaking, very much inside a high-risk zone.
     &gt; 
     &gt; - Global Catastrophes and Trends, p. 46
Reference for Tetlock's superforecasters failing to predict the pandemic. "On February 20th, Tetlock’s superforecasters predicted only a 3% chance that there would be 200,000+ coronavirus cases a month later (there were)." (https://wearenotsaved.com/2020/04/18/pandemic-uncovers-the-ridiculousness-of-superforecasting/) 
Contact us
- Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
- Check us out on youtube at https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ
- Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Errata
- At the beginning of the episode Vaden says he hasn't been interviewed on another podcast before. He forgot his appearance (https://www.thedeclarationonline.com/podcast/2019/7/23/chesto-and-vaden-debatecast) on The Declaration Podcast in 2019, which will appear as a bonus episode on our feed in the coming weeks. 
Sick of hearing us talk about this subject? Understandable! Send topic suggestions over to incrementspodcast@gmail.com. 
Photo credit: James O’Brien (http://www.obrien-studio.com/) for Quanta Magazine (https://www.quantamagazine.org/where-quantum-probability-comes-from-20190909/) 
</description>
  <itunes:keywords>probability, longtermism, effective altruism, bayesianism, statistics</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>We make a guest appearance on Nick Anyos&#39; podcast to talk about effective altruism, longtermism, and probability. Nick (very politely) pushes back on our anti-Bayesian credo, and we get deep into the weeds of probability and epistemology. </p>

<p>You can find Nick&#39;s podcast on institutional design <a href="https://institutionaldesign.podbean.com/" rel="nofollow">here</a>, and his substack <a href="https://institutionaldesign.substack.com/?utm_source=substack&utm_medium=web&utm_campaign=substack_profile" rel="nofollow">here</a>. </p>

<p><strong>We discuss:</strong> </p>

<ul>
<li>The lack of feedback loops in longtermism </li>
<li>Whether quantifying your beliefs is helpful </li>
<li>Objective versus subjective knowledge </li>
<li>The difference between prediction and explanation</li>
<li>The difference between Bayesian epistemology and Bayesian statistics</li>
<li>Statistical modelling and when statistics is useful </li>
</ul>

<p><strong>Links</strong></p>

<ul>
<li><a href="http://www.stat.columbia.edu/%7Egelman/research/published/philosophy.pdf" rel="nofollow">Philosophy and the practice of Bayesian statistics</a> by Andrew Gelman and Cosma Shalizi</li>
<li><a href="https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations" rel="nofollow">EA forum post</a> showing all forecasts beyond a year out are uncalibrated. </li>
<li><p>Vaclav Smil quote where he predicts a pandemic by 2021:</p>

<blockquote>
<p><em>The following realities indicate the imminence of the risk. The typical frequency of influenza pandemics was once every 50–60 years between 1700 and 1889 (the longest known gap was 52 years, between the pandemics of 1729–1733 and 1781–1782) and only once every 10–40 years since 1889. The recurrence interval, calculated simply as the mean time elapsed between the last six known pandemics, is about 28 years, with the extremes of 6 and 53 years. Adding the mean and the highest interval to 1968 gives a span between 1996 and 2021. We are, probabilistically speaking, very much inside a high-risk zone.</em></p>

<p><em>- Global Catastrophes and Trends, p. 46</em></p>
</blockquote></li>
<li><p>Reference for Tetlock&#39;s superforecasters failing to predict the pandemic. <a href="https://wearenotsaved.com/2020/04/18/pandemic-uncovers-the-ridiculousness-of-superforecasting/" rel="nofollow">&quot;On February 20th, Tetlock’s superforecasters predicted only a 3% chance that there would be 200,000+ coronavirus cases a month later (there were).&quot;</a> </p></li>
</ul>

<p><strong>Contact us</strong></p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Check us out on youtube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
</ul>

<p><strong>Errata</strong></p>

<ul>
<li>At the beginning of the episode Vaden says he hasn&#39;t been interviewed on another podcast before. He forgot <a href="https://www.thedeclarationonline.com/podcast/2019/7/23/chesto-and-vaden-debatecast" rel="nofollow">his appearance</a> on The Declaration Podcast in 2019, which will appear as a bonus episode on our feed in the coming weeks. </li>
</ul>

<p>Sick of hearing us talk about this subject? Understandable! Send topic suggestions over to <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p>

<p>Photo credit: <a href="http://www.obrien-studio.com/" rel="nofollow">James O’Brien</a> for <a href="https://www.quantamagazine.org/where-quantum-probability-comes-from-20190909/" rel="nofollow">Quanta Magazine</a></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>We make a guest appearance on Nick Anyos&#39; podcast to talk about effective altruism, longtermism, and probability. Nick (very politely) pushes back on our anti-Bayesian credo, and we get deep into the weeds of probability and epistemology. </p>

<p>You can find Nick&#39;s podcast on institutional design <a href="https://institutionaldesign.podbean.com/" rel="nofollow">here</a>, and his substack <a href="https://institutionaldesign.substack.com/?utm_source=substack&utm_medium=web&utm_campaign=substack_profile" rel="nofollow">here</a>. </p>

<p><strong>We discuss:</strong> </p>

<ul>
<li>The lack of feedback loops in longtermism </li>
<li>Whether quantifying your beliefs is helpful </li>
<li>Objective versus subjective knowledge </li>
<li>The difference between prediction and explanation</li>
<li>The difference between Bayesian epistemology and Bayesian statistics</li>
<li>Statistical modelling and when statistics is useful </li>
</ul>

<p><strong>Links</strong></p>

<ul>
<li><a href="http://www.stat.columbia.edu/%7Egelman/research/published/philosophy.pdf" rel="nofollow">Philosophy and the practice of Bayesian statistics</a> by Andrew Gelman and Cosma Shalizi</li>
<li><a href="https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations" rel="nofollow">EA forum post</a> showing all forecasts beyond a year out are uncalibrated. </li>
<li><p>Vaclav Smil quote where he predicts a pandemic by 2021:</p>

<blockquote>
<p><em>The following realities indicate the imminence of the risk. The typical frequency of influenza pandemics was once every 50–60 years between 1700 and 1889 (the longest known gap was 52 years, between the pandemics of 1729–1733 and 1781–1782) and only once every 10–40 years since 1889. The recurrence interval, calculated simply as the mean time elapsed between the last six known pandemics, is about 28 years, with the extremes of 6 and 53 years. Adding the mean and the highest interval to 1968 gives a span between 1996 and 2021. We are, probabilistically speaking, very much inside a high-risk zone.</em></p>

<p><em>- Global Catastrophes and Trends, p. 46</em></p>
</blockquote></li>
<li><p>Reference for Tetlock&#39;s superforecasters failing to predict the pandemic. <a href="https://wearenotsaved.com/2020/04/18/pandemic-uncovers-the-ridiculousness-of-superforecasting/" rel="nofollow">&quot;On February 20th, Tetlock’s superforecasters predicted only a 3% chance that there would be 200,000+ coronavirus cases a month later (there were).&quot;</a> </p></li>
</ul>

<p><strong>Contact us</strong></p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Check us out on youtube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
</ul>

<p><strong>Errata</strong></p>

<ul>
<li>At the beginning of the episode Vaden says he hasn&#39;t been interviewed on another podcast before. He forgot <a href="https://www.thedeclarationonline.com/podcast/2019/7/23/chesto-and-vaden-debatecast" rel="nofollow">his appearance</a> on The Declaration Podcast in 2019, which will appear as a bonus episode on our feed in the coming weeks. </li>
</ul>

<p>Sick of hearing us talk about this subject? Understandable! Send topic suggestions over to <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p>

<p>Photo credit: <a href="http://www.obrien-studio.com/" rel="nofollow">James O’Brien</a> for <a href="https://www.quantamagazine.org/where-quantum-probability-comes-from-20190909/" rel="nofollow">Quanta Magazine</a></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#41 - Parenting, Epistemology, and EA (w/ Lulie Tanett) </title>
  <link>https://www.incrementspodcast.com/41</link>
  <guid isPermaLink="false">8ed5f8dd-a838-4df0-8791-af0372ee011d</guid>
  <pubDate>Mon, 20 Jun 2022 16:15:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/8ed5f8dd-a838-4df0-8791-af0372ee011d.mp3" length="77460808" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>We're joined by the wonderful Lulie Tanett to talk about effective altruism, pulling spouses out of burning buildings, and why you should prefer critical rationalism to Bayesianism for your mom's sake.</itunes:subtitle>
  <itunes:duration>1:18:15</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/8/8ed5f8dd-a838-4df0-8791-af0372ee011d/cover.jpg?v=1"/>
  <description>We're joined by the wonderful Lulie Tanett to talk about effective altruism, pulling spouses out of burning buildings, and why you should prefer critical rationalism to Bayesianism for your mom's sake. Buckle up! 
We discuss:
- Lulie's recent experience at EA Global 
- Bayesianism and how it differs from critical rationalism 
- Common arguments in favor of Bayesianism 
- Taking Children Seriously 
- What it was like for Lulie growing up without going to school 
- The Alexander Technique, Internal Family Systems, Gendlin's Focusing, and Belief Reporting 
References 
- EA Global (https://www.eaglobal.org/)
- Taking Children Seriously (https://www.fitz-claridge.com/taking-children-seriously/) 
- Alexander Technique (https://expandingawareness.org/blog/what-is-the-alexander-technique/)
- Internal Family Systems (https://ifs-institute.com/)
- Gendlin Focusing (https://en.wikipedia.org/wiki/Focusing_(psychotherapy))
Social Media Everywhere 
Follow Lulie on Twitter @reasonisfun. Follow us at @VadenMasrani, @BennyChugg, @IncrementsPod, or on Youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ). 
Report your beliefs and focus your Gendlin's at incrementspodcast@gmail.com.   Special Guest: Lulie Tanett.
</description>
  <itunes:keywords>effective altruism, epistemology, rationality, bayesianism, critical rationalism</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>We&#39;re joined by the wonderful Lulie Tanett to talk about effective altruism, pulling spouses out of burning buildings, and why you should prefer critical rationalism to Bayesianism for your mom&#39;s sake. Buckle up! </p>

<p><strong>We discuss:</strong></p>

<ul>
<li>Lulie&#39;s recent experience at EA Global </li>
<li>Bayesianism and how it differs from critical rationalism </li>
<li>Common arguments in favor of Bayesianism </li>
<li>Taking Children Seriously </li>
<li>What it was like for Lulie growing up without going to school </li>
<li>The Alexander Technique, Internal Family Systems, Gendlin&#39;s Focusing, and Belief Reporting </li>
</ul>

<p><strong>References</strong> </p>

<ul>
<li><a href="https://www.eaglobal.org/" rel="nofollow">EA Global</a></li>
<li><a href="https://www.fitz-claridge.com/taking-children-seriously/" rel="nofollow">Taking Children Seriously</a> </li>
<li><a href="https://expandingawareness.org/blog/what-is-the-alexander-technique/" rel="nofollow">Alexander Technique</a></li>
<li><a href="https://ifs-institute.com/" rel="nofollow">Internal Family Systems</a></li>
<li><a href="https://en.wikipedia.org/wiki/Focusing_(psychotherapy)" rel="nofollow">Gendlin Focusing</a></li>
</ul>

<p><strong>Social Media Everywhere</strong> <br>
Follow Lulie on Twitter @reasonisfun. Follow us at @VadenMasrani, @BennyChugg, @IncrementsPod, or on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">Youtube</a>. </p>

<p>Report your beliefs and focus your Gendlin&#39;s at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>.  </p><p>Special Guest: Lulie Tanett.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>We&#39;re joined by the wonderful Lulie Tanett to talk about effective altruism, pulling spouses out of burning buildings, and why you should prefer critical rationalism to Bayesianism for your mom&#39;s sake. Buckle up! </p>

<p><strong>We discuss:</strong></p>

<ul>
<li>Lulie&#39;s recent experience at EA Global </li>
<li>Bayesianism and how it differs from critical rationalism </li>
<li>Common arguments in favor of Bayesianism </li>
<li>Taking Children Seriously </li>
<li>What it was like for Lulie growing up without going to school </li>
<li>The Alexander Technique, Internal Family Systems, Gendlin&#39;s Focusing, and Belief Reporting </li>
</ul>

<p><strong>References</strong> </p>

<ul>
<li><a href="https://www.eaglobal.org/" rel="nofollow">EA Global</a></li>
<li><a href="https://www.fitz-claridge.com/taking-children-seriously/" rel="nofollow">Taking Children Seriously</a> </li>
<li><a href="https://expandingawareness.org/blog/what-is-the-alexander-technique/" rel="nofollow">Alexander Technique</a></li>
<li><a href="https://ifs-institute.com/" rel="nofollow">Internal Family Systems</a></li>
<li><a href="https://en.wikipedia.org/wiki/Focusing_(psychotherapy)" rel="nofollow">Gendlin Focusing</a></li>
</ul>

<p><strong>Social Media Everywhere</strong> <br>
Follow Lulie on Twitter @reasonisfun. Follow us at @VadenMasrani, @BennyChugg, @IncrementsPod, or on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">Youtube</a>. </p>

<p>Report your beliefs and focus your Gendlin&#39;s at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>.  </p><p>Special Guest: Lulie Tanett.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#17 - Against Longtermism</title>
  <link>https://www.incrementspodcast.com/17</link>
  <guid isPermaLink="false">Buzzsprout-6919628</guid>
  <pubDate>Fri, 18 Dec 2020 19:00:00 -0800</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/f1e65451-076d-4ca4-bef0-5f938e81d70d.mp3" length="64853211" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle></itunes:subtitle>
  <itunes:duration>1:30:01</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/cover.jpg?v=18"/>
  <description>&lt;p&gt;Well, there's no avoiding controversy with this one. We explain, examine, and attempt to refute the shiny new moral philosophy of &lt;em&gt;longtermism.&lt;/em&gt; Our critique focuses on &lt;a href="https://static1.squarespace.com/static/5506078de4b02d88372eee4e/t/5f1704905c33720e61cd3214/1595344019788/The_Case_for_Strong_Longtermism.pdf"&gt;&lt;em&gt;The Case for Strong Longtermism&lt;/em&gt;&lt;/a&gt;&lt;em&gt; &lt;/em&gt;by Hilary Greaves and Will MacAskill. &lt;br&gt;&lt;br&gt;We say so in the episode, but it's important to emphasize that we harbour no animosity towards anyone in the effective altruism community. However, we both think that longtermism is pretty f***ing scary and do our best to communicate why.&lt;br&gt;&lt;br&gt;Confused as to why there's no charming, witty, and hilarious intro? Us too. Somehow, Ben managed to corrupt his audio. Classic. Oh well, some of you tell us you dislike the intros anyway. &lt;br&gt;&lt;br&gt;&lt;b&gt;References&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://static1.squarespace.com/static/5506078de4b02d88372eee4e/t/5f1704905c33720e61cd3214/1595344019788/The_Case_for_Strong_Longtermism.pdf"&gt;The Case for Strong Longtermism&lt;/a&gt;, by Greaves and MacAskill&lt;/li&gt;
&lt;li&gt;Vaden's &lt;a href="https://forum.effectivealtruism.org/posts/7MPTzAnPtu5HKesMX/a-case-against-strong-longtermism"&gt;EA forum post&lt;/a&gt; on longtermism&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://www.reddit.com/r/EffectiveAltruism/comments/kd41jw/a_case_against_strong_longtermism/"&gt;reddit discussion&lt;/a&gt; surrounding Vaden's piece&lt;/li&gt;
&lt;li&gt;Ben's &lt;a href="https://benchugg.medium.com/against-strong-longtermism-a-response-to-greaves-and-macaskill-cb4bb9681982"&gt;piece on longtermism&lt;/a&gt; (which he has hidden in the depths of Medium because he's scared of the EA forum) &lt;/li&gt;
&lt;li&gt;Ben on &lt;a href="https://medium.com/conjecture-magazine/pascals-mugging-and-the-poverty-of-the-expected-value-calculus-70b190d953cd"&gt;Pascal's Mugging and Expected Values&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Gwern and Robin Hanson &lt;a href="https://twitter.com/robinhanson/status/1339956546801954816?s=20"&gt;making fun&lt;/a&gt; of Ben's piece &lt;/li&gt;
&lt;/ul&gt;&lt;p&gt;&lt;br&gt;Yell at us on the EA forum, on Reddit, on Medium, or over email at incrementspodcast@gmail.com. &lt;/p&gt; 
</description>
  <itunes:keywords>longtermism, expected value, bayesianism, effective altruism</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Well, there&apos;s no avoiding controversy with this one. We explain, examine, and attempt to refute the shiny new moral philosophy of <em>longtermism.</em> Our critique focuses on <a href='https://static1.squarespace.com/static/5506078de4b02d88372eee4e/t/5f1704905c33720e61cd3214/1595344019788/The_Case_for_Strong_Longtermism.pdf'><em>The Case for Strong Longtermism</em></a><em> </em>by Hilary Greaves and Will MacAskill. <br/><br/>We say so in the episode, but it&apos;s important to emphasize that we harbour no animosity towards anyone in the effective altruism community. However, we both think that longtermism is pretty f***ing scary and do our best to communicate why.<br/><br/>Confused as to why there&apos;s no charming, witty, and hilarious intro? Us too. Somehow, Ben managed to corrupt his audio. Classic. Oh well, some of you tell us you dislike the intros anyway. <br/><br/><b>References</b></p><ul><li><a href='https://static1.squarespace.com/static/5506078de4b02d88372eee4e/t/5f1704905c33720e61cd3214/1595344019788/The_Case_for_Strong_Longtermism.pdf'>The Case for Strong Longtermism</a>, by Greaves and MacAskill</li><li>Vaden&apos;s <a href='https://forum.effectivealtruism.org/posts/7MPTzAnPtu5HKesMX/a-case-against-strong-longtermism'>EA forum post</a> on longtermism</li><li>The <a href='https://www.reddit.com/r/EffectiveAltruism/comments/kd41jw/a_case_against_strong_longtermism/'>reddit discussion</a> surrounding Vaden&apos;s piece</li><li>Ben&apos;s <a href='https://benchugg.medium.com/against-strong-longtermism-a-response-to-greaves-and-macaskill-cb4bb9681982'>piece on longtermism</a> (which he has hidden in the depths of Medium because he&apos;s scared of the EA forum) </li><li>Ben on <a href='https://medium.com/conjecture-magazine/pascals-mugging-and-the-poverty-of-the-expected-value-calculus-70b190d953cd'>Pascal&apos;s Mugging and Expected Values</a></li><li>Gwern and Robin Hanson <a href='https://twitter.com/robinhanson/status/1339956546801954816?s=20'>making fun</a> of Ben&apos;s piece </li></ul><p><br/>Yell at us on the EA forum, on Reddit, on Medium, or over email at incrementspodcast@gmail.com. </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Well, there&apos;s no avoiding controversy with this one. We explain, examine, and attempt to refute the shiny new moral philosophy of <em>longtermism.</em> Our critique focuses on <a href='https://static1.squarespace.com/static/5506078de4b02d88372eee4e/t/5f1704905c33720e61cd3214/1595344019788/The_Case_for_Strong_Longtermism.pdf'><em>The Case for Strong Longtermism</em></a><em> </em>by Hilary Greaves and Will MacAskill. <br/><br/>We say so in the episode, but it&apos;s important to emphasize that we harbour no animosity towards anyone in the effective altruism community. However, we both think that longtermism is pretty f***ing scary and do our best to communicate why.<br/><br/>Confused as to why there&apos;s no charming, witty, and hilarious intro? Us too. Somehow, Ben managed to corrupt his audio. Classic. Oh well, some of you tell us you dislike the intros anyway. <br/><br/><b>References</b></p><ul><li><a href='https://static1.squarespace.com/static/5506078de4b02d88372eee4e/t/5f1704905c33720e61cd3214/1595344019788/The_Case_for_Strong_Longtermism.pdf'>The Case for Strong Longtermism</a>, by Greaves and MacAskill</li><li>Vaden&apos;s <a href='https://forum.effectivealtruism.org/posts/7MPTzAnPtu5HKesMX/a-case-against-strong-longtermism'>EA forum post</a> on longtermism</li><li>The <a href='https://www.reddit.com/r/EffectiveAltruism/comments/kd41jw/a_case_against_strong_longtermism/'>reddit discussion</a> surrounding Vaden&apos;s piece</li><li>Ben&apos;s <a href='https://benchugg.medium.com/against-strong-longtermism-a-response-to-greaves-and-macaskill-cb4bb9681982'>piece on longtermism</a> (which he has hidden in the depths of Medium because he&apos;s scared of the EA forum) </li><li>Ben on <a href='https://medium.com/conjecture-magazine/pascals-mugging-and-the-poverty-of-the-expected-value-calculus-70b190d953cd'>Pascal&apos;s Mugging and Expected Values</a></li><li>Gwern and Robin Hanson <a href='https://twitter.com/robinhanson/status/1339956546801954816?s=20'>making fun</a> of Ben&apos;s piece </li></ul><p><br/>Yell at us on the EA forum, on Reddit, on Medium, or over email at incrementspodcast@gmail.com. </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#11 - Debating Existential Risk</title>
  <link>https://www.incrementspodcast.com/11</link>
  <guid isPermaLink="false">Buzzsprout-5475121</guid>
  <pubDate>Wed, 16 Sep 2020 16:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/4ed5459c-bf59-432a-966d-33c3dd5450f0.mp3" length="64654289" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle></itunes:subtitle>
  <itunes:duration>1:29:17</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/4/4ed5459c-bf59-432a-966d-33c3dd5450f0/cover.jpg?v=1"/>
  <description>&lt;p&gt;Vaden's arguments against Bayesian philosophy and existential risk are examined by someone who might actually know what they're talking about, i.e., not Ben. After writing a critique of our conversation in Episode 7, which started off &lt;a href="https://vmasrani.github.io/blog/2020/mauricio_first_response/"&gt;a series of blog posts&lt;/a&gt;, our good friend Mauricio (who studies political science, economics, and philosophy) kindly agrees to come on the podcast and try to figure out who's more confused. Does Vaden convert? &lt;br&gt;&lt;br&gt;
We apologize for the long wait between this episode and the last one. It was all Vaden's fault. &lt;br&gt;&lt;br&gt;Hit us up at &lt;em&gt;incrementspodcast@gmail.com&lt;/em&gt;!&lt;br&gt;&lt;br&gt;&lt;em&gt;Note from Vaden:  Upon relistening, I've just learned my new computer chair clicks in the most annoying possible way every time I get enthusiastic. My apologies - I'll work on being less enthusiastic in future episodes.  &lt;br&gt;&lt;br&gt;Second note from Vaden: Yeesh lots of audio issues with this episode - I replaced the file with a cleaned up version at 5:30pm September 17th. Still learning... &lt;br&gt;&lt;/em&gt;&lt;br&gt;&lt;/p&gt; 
</description>
  <itunes:keywords>existential risk, probability, bayesianism</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Vaden&apos;s arguments against Bayesian philosophy and existential risk are examined by someone who might actually know what they&apos;re talking about, i.e., not Ben. After writing a critique of our conversation in Episode 7, which started off <a href='https://vmasrani.github.io/blog/2020/mauricio_first_response/'>a series of blog posts</a>, our good friend Mauricio (who studies political science, economics, and philosophy) kindly agrees to come on the podcast and try to figure out who&apos;s more confused. Does Vaden convert? <br/><br/>
We apologize for the long wait between this episode and the last one. It was all Vaden&apos;s fault. <br/><br/>Hit us up at <em>incrementspodcast@gmail.com</em>!<br/><br/><em>Note from Vaden:  Upon relistening, I&apos;ve just learned my new computer chair clicks in the most annoying possible way every time I get enthusiastic. My apologies - I&apos;ll work on being less enthusiastic in future episodes.  <br/><br/>Second note from Vaden: Yeesh lots of audio issues with this episode - I replaced the file with a cleaned up version at 5:30pm September 17th. Still learning... <br/></em><br/></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Vaden&apos;s arguments against Bayesian philosophy and existential risk are examined by someone who might actually know what they&apos;re talking about, i.e., not Ben. After writing a critique of our conversation in Episode 7, which started off <a href='https://vmasrani.github.io/blog/2020/mauricio_first_response/'>a series of blog posts</a>, our good friend Mauricio (who studies political science, economics, and philosophy) kindly agrees to come on the podcast and try to figure out who&apos;s more confused. Does Vaden convert? <br/><br/>
We apologize for the long wait between this episode and the last one. It was all Vaden&apos;s fault. <br/><br/>Hit us up at <em>incrementspodcast@gmail.com</em>!<br/><br/><em>Note from Vaden:  Upon relistening, I&apos;ve just learned my new computer chair clicks in the most annoying possible way every time I get enthusiastic. My apologies - I&apos;ll work on being less enthusiastic in future episodes.  <br/><br/>Second note from Vaden: Yeesh lots of audio issues with this episode - I replaced the file with a cleaned up version at 5:30pm September 17th. Still learning... <br/></em><br/></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#7 - Philosophy of Probability II: Existential Risks </title>
  <link>https://www.incrementspodcast.com/7</link>
  <guid isPermaLink="false">Buzzsprout-4476590</guid>
  <pubDate>Tue, 07 Jul 2020 11:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/07a038fa-d44d-40e6-9942-39879969c038.mp3" length="70590859" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle></itunes:subtitle>
  <itunes:duration>1:37:32</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/0/07a038fa-d44d-40e6-9942-39879969c038/cover.jpg?v=1"/>
  <description>&lt;p&gt;Back down to earth we go! Or try to, at least. In this episode Ben and Vaden attempt to ground their previous discussion on the philosophy of probability by focusing on a real-world example, namely the book The Precipice by Toby Ord, recently featured on the Making Sense podcast. Vaden believes in arguments, and Ben argues for beliefs. &lt;br&gt;&lt;br&gt;&lt;b&gt;Quotes&lt;/b&gt;&lt;br&gt;"&lt;em&gt;A common approach to estimating the chance of an unprecedented event with earth-shaking consequences is to take a skeptical stance: to start with an extremely small probability and only raise it from there when a large amount of hard evidence is presented. But I disagree. Instead, I think the right method is to start with a probability that reflects our overall impressions, then adjust this in light of the scientific evidence. When there is a lot of evidence, these approaches converge. But when there isn’t, the starting point can matter. &lt;br&gt;&lt;br&gt;In the case of artificial intelligence, everyone agrees the evidence and arguments are far from watertight, but the question is where does this leave us? Very roughly, my approach is to start with the overall view of the expert community that there is something like a one in two chance that AI agents capable of outperforming humans in almost every task will be developed in the coming century. And conditional on that happening, we shouldn’t be shocked if these agents that outperform us across the board were to inherit our future. Especially if when looking into the details, we see great challenges in aligning these agents with our values.&lt;/em&gt;"&lt;br&gt;- The Precipice, p. 165&lt;br&gt;&lt;br&gt;"&lt;em&gt;Most of the risks arising from long-term trends remain beyond revealing quantification. What is the probability of China’s spectacular economic expansion stalling or even going into reverse? What is the likelihood that Islamic terrorism will develop into a massive, determined quest to destroy the West? Probability estimates of these outcomes based on expert opinion provide at best some constraining guidelines but do not offer any reliable basis for relative comparisons of diverse events or their interrelations. What is the likelihood that a massive wave of global Islamic terrorism will accelerate the Western transition to non–fossil fuel energies? To what extent will the globalization trend be enhanced or impeded by a faster-than-expected sea level rise or by a precipitous demise of the United States? Setting such odds or multipliers is beyond any meaningful quantification.&lt;/em&gt;" &lt;br&gt;- Global Catastrophes and Trends, p. 226&lt;br&gt;&lt;br&gt;"&lt;em&gt;And while computers have been used for many years to assemble other  computers and machines, such deployments do not indicate any imminent self-reproductive capability. All those processes require human actions to initiate them,  raw materials to build the hardware, and above all, energy to run them. I find it hard to visualize how those machines would (particularly in less than a generation) launch, integrate, and sustain an entirely independent exploration, extraction, conversion, and delivery of the requisite energies."&lt;/em&gt;&lt;br&gt;- Global Catastrophes and Trends, p. 26&lt;br&gt;&lt;br&gt;&lt;b&gt;References:&lt;/b&gt;&lt;br&gt;- &lt;a href="https://www.amazon.ca/dp/B08BSZ52TN/ref=dp-kindle-redirect?_encoding=UTF8&amp;amp;btkr=1"&gt;Global Catastrophes and Trends: The Next Fifty Years&lt;/a&gt;&lt;br&gt;- &lt;a href="https://www.amazon.ca/dp/B07V9GHKYP/ref=dp-kindle-redirect?_encoding=UTF8&amp;amp;btkr=1"&gt;The Precipice: Existential Risk and the Future of Humanity&lt;/a&gt;&lt;br&gt;- &lt;a href="https://samharris.org/podcasts/208-existential-risk/"&gt;Making Sense podcast w/ Ord&lt;/a&gt;  (Clip starts around 40:00)&lt;br&gt;- &lt;a href="https://en.wikipedia.org/wiki/Mere_addition_paradox"&gt;Repugnant conclusion&lt;/a&gt;&lt;br&gt;- &lt;a href="https://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem"&gt;Arrow's theorem&lt;/a&gt;&lt;br&gt;- &lt;a href="https://en.wikipedia.org/wiki/Apportionment_paradox"&gt;Balinski–Young theorem&lt;/a&gt;&lt;/p&gt; 
</description>
  <itunes:keywords>existential risk, AI, bayesianism, expected value</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Back down to earth we go! Or try to, at least. In this episode Ben and Vaden attempt to ground their previous discussion on the philosophy of probability by focusing on a real-world example, namely the book The Precipice by Toby Ord, recently featured on the Making Sense podcast. Vaden believes in arguments, and Ben argues for beliefs. <br/><br/><b>Quotes</b><br/>&quot;<em>A common approach to estimating the chance of an unprecedented event with earth-shaking consequences is to take a skeptical stance: to start with an extremely small probability and only raise it from there when a large amount of hard evidence is presented. But I disagree. Instead, I think the right method is to start with a probability that reflects our overall impressions, then adjust this in light of the scientific evidence. When there is a lot of evidence, these approaches converge. But when there isn’t, the starting point can matter. <br/><br/>In the case of artificial intelligence, everyone agrees the evidence and arguments are far from watertight, but the question is where does this leave us? Very roughly, my approach is to start with the overall view of the expert community that there is something like a one in two chance that AI agents capable of outperforming humans in almost every task will be developed in the coming century. And conditional on that happening, we shouldn’t be shocked if these agents that outperform us across the board were to inherit our future. Especially if when looking into the details, we see great challenges in aligning these agents with our values.</em>&quot;<br/>- The Precipice, p. 165<br/><br/>&quot;<em>Most of the risks arising from long-term trends remain beyond revealing quantification. What is the probability of China’s spectacular economic expansion stalling or even going into reverse? What is the likelihood that Islamic terrorism will develop into a massive, determined quest to destroy the West? Probability estimates of these outcomes based on expert opinion provide at best some constraining guidelines but do not offer any reliable basis for relative comparisons of diverse events or their interrelations. What is the likelihood that a massive wave of global Islamic terrorism will accelerate the Western transition to non–fossil fuel energies? To what extent will the globalization trend be enhanced or impeded by a faster-than-expected sea level rise or by a precipitous demise of the United States? Setting such odds or multipliers is beyond any meaningful quantification.</em>&quot; <br/>- Global Catastrophes and Trends, p. 226<br/><br/>&quot;<em>And while computers have been used for many years to assemble other  computers and machines, such deployments do not indicate any imminent self-reproductive capability. All those processes require human actions to initiate them,  raw materials to build the hardware, and above all, energy to run them. I find it hard to visualize how those machines would (particularly in less than a generation) launch, integrate, and sustain an entirely independent exploration, extraction, conversion, and delivery of the requisite energies.&quot;</em><br/>- Global Catastrophes and Trends, p. 26<br/><br/><b>References:</b><br/>- <a href='https://www.amazon.ca/dp/B08BSZ52TN/ref=dp-kindle-redirect?_encoding=UTF8&amp;btkr=1'>Global Catastrophes and Trends: The Next Fifty Years</a><br/>- <a href='https://www.amazon.ca/dp/B07V9GHKYP/ref=dp-kindle-redirect?_encoding=UTF8&amp;btkr=1'>The Precipice: Existential Risk and the Future of Humanity</a><br/>- <a href='https://samharris.org/podcasts/208-existential-risk/'>Making Sense podcast w/ Ord</a>  (Clip starts around 40:00)<br/>- <a href='https://en.wikipedia.org/wiki/Mere_addition_paradox'>Repugnant conclusion</a><br/>- <a href='https://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem'>Arrow&apos;s theorem</a><br/>- <a href='https://en.wikipedia.org/wiki/Apportionment_paradox'>Balinski–Young theorem</a></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Back down to earth we go! Or try to, at least. In this episode Ben and Vaden attempt to ground their previous discussion on the philosophy of probability by focusing on a real-world example, namely the book The Precipice by Toby Ord, recently featured on the Making Sense podcast. Vaden believes in arguments, and Ben argues for beliefs. <br/><br/><b>Quotes</b><br/>&quot;<em>A common approach to estimating the chance of an unprecedented event with earth-shaking consequences is to take a skeptical stance: to start with an extremely small probability and only raise it from there when a large amount of hard evidence is presented. But I disagree. Instead, I think the right method is to start with a probability that reflects our overall impressions, then adjust this in light of the scientific evidence. When there is a lot of evidence, these approaches converge. But when there isn’t, the starting point can matter. <br/><br/>In the case of artificial intelligence, everyone agrees the evidence and arguments are far from watertight, but the question is where does this leave us? Very roughly, my approach is to start with the overall view of the expert community that there is something like a one in two chance that AI agents capable of outperforming humans in almost every task will be developed in the coming century. And conditional on that happening, we shouldn’t be shocked if these agents that outperform us across the board were to inherit our future. Especially if when looking into the details, we see great challenges in aligning these agents with our values.</em>&quot;<br/>- The Precipice, p. 165<br/><br/>&quot;<em>Most of the risks arising from long-term trends remain beyond revealing quantification. What is the probability of China’s spectacular economic expansion stalling or even going into reverse? What is the likelihood that Islamic terrorism will develop into a massive, determined quest to destroy the West? Probability estimates of these outcomes based on expert opinion provide at best some constraining guidelines but do not offer any reliable basis for relative comparisons of diverse events or their interrelations. What is the likelihood that a massive wave of global Islamic terrorism will accelerate the Western transition to non–fossil fuel energies? To what extent will the globalization trend be enhanced or impeded by a faster-than-expected sea level rise or by a precipitous demise of the United States? Setting such odds or multipliers is beyond any meaningful quantification.</em>&quot; <br/>- Global Catastrophes and Trends, p. 226<br/><br/>&quot;<em>And while computers have been used for many years to assemble other  computers and machines, such deployments do not indicate any imminent self-reproductive capability. All those processes require human actions to initiate them,  raw materials to build the hardware, and above all, energy to run them. I find it hard to visualize how those machines would (particularly in less than a generation) launch, integrate, and sustain an entirely independent exploration, extraction, conversion, and delivery of the requisite energies.&quot;</em><br/>- Global Catastrophes and Trends, p. 26<br/><br/><b>References:</b><br/>- <a href='https://www.amazon.ca/dp/B08BSZ52TN/ref=dp-kindle-redirect?_encoding=UTF8&amp;btkr=1'>Global Catastrophes and Trends: The Next Fifty Years</a><br/>- <a href='https://www.amazon.ca/dp/B07V9GHKYP/ref=dp-kindle-redirect?_encoding=UTF8&amp;btkr=1'>The Precipice: Existential Risk and the Future of Humanity</a><br/>- <a href='https://samharris.org/podcasts/208-existential-risk/'>Making Sense podcast w/ Ord</a>  (Clip starts around 40:00)<br/>- <a href='https://en.wikipedia.org/wiki/Mere_addition_paradox'>Repugnant conclusion</a><br/>- <a href='https://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem'>Arrow&apos;s theorem</a><br/>- <a href='https://en.wikipedia.org/wiki/Apportionment_paradox'>Balinski–Young theorem</a></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#6 - Philosophy of Probability I: Introduction</title>
  <link>https://www.incrementspodcast.com/6</link>
  <guid isPermaLink="false">Buzzsprout-4407194</guid>
  <pubDate>Wed, 01 Jul 2020 18:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/eeb49cea-deb7-4957-8f51-8d5f0949c799.mp3" length="55868881" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle></itunes:subtitle>
  <itunes:duration>1:17:05</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/e/eeb49cea-deb7-4957-8f51-8d5f0949c799/cover.jpg?v=1"/>
  <description>&lt;p&gt;Don't leave yet - we swear this will be more interesting than it sounds ... &lt;br&gt;&lt;br&gt;... But a drink will definitely help. Ben and Vaden dive into the interpretations behind probability. What do people mean when they use the word, and why do we use this one tool to describe different concepts? The rowdiness truly kicks in when Vaden releases his pent-up critique of Bayesianism, thereby losing both his friends and PhD position. But at least he's ingratiated himself with Karl Popper. &lt;br&gt;&lt;br&gt;&lt;b&gt;&lt;em&gt;References:&lt;/em&gt;&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://vmasrani.github.io/assets/popper_good.pdf"&gt;Vaden's  Slides&lt;/a&gt; on a 1975 &lt;a href="https://www.jstor.org/stable/20115014?seq=1#metadata_info_tab_contents"&gt;paper&lt;/a&gt; by Irving John Good titled &lt;em&gt;Explicativity, Corroboration, and the Relative Odds of Hypotheses&lt;/em&gt;. The paper is I.J. Good’s response to Karl Popper, and in the presentation I compare the two philosophers’ views on probability, epistemology, induction, simplicity, and content.&lt;/li&gt;
&lt;li&gt;&lt;a href="http://www.mrcc.uqam.ca/Publications/articles/deElia_MWR2005_.pdf"&gt;Diversity in Interpretations of Probability: Implications for Weather Forecasting&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Andrew Gelman, &lt;a href="http://www.stat.columbia.edu/~gelman/research/published/philosophy.pdf"&gt;Philosophy and the practice of Bayesian statistics&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Popper quote: &lt;em&gt;"Those who identify confirmation with probability must believe that a high degree of probability is desirable. They implicitly accept the rule: ‘Always choose the most probable hypothesis!’ Now it can be easily shown that this rule is equivalent to the following rule: ‘Always choose the hypothesis which goes as little beyond the evidence as possible!’ And this, in turn, can be shown to be equivalent, not only to ‘Always accept the hypothesis with the lowest content (within the limits of your task, for example, your task of predicting)!’, but also to ‘Always choose the hypothesis which has the highest degree of ad hoc character (within the limits of your task)!’" &lt;/em&gt;(Conjectures and Refutations p.391) &lt;/li&gt;
&lt;/ul&gt;&lt;p&gt;Get in touch at incrementspodcast@gmail.com.&lt;br&gt;&lt;br&gt;&lt;em&gt;audio updated 13/12/2020&lt;/em&gt;&lt;/p&gt; 
</description>
  <itunes:keywords>probability, bayesianism, frequency, induction, epistemology</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Don&apos;t leave yet - we swear this will be more interesting than it sounds ... <br/><br/>... But a drink will definitely help. Ben and Vaden dive into the interpretations behind probability. What do people mean when they use the word, and why do we use this one tool to describe different concepts? The rowdiness truly kicks in when Vaden releases his pent-up critique of Bayesianism, thereby losing both his friends and PhD position. But at least he&apos;s ingratiated himself with Karl Popper. <br/><br/><b><em>References:</em></b></p><ul><li><a href='https://vmasrani.github.io/assets/popper_good.pdf'>Vaden&apos;s  Slides</a> on a 1975 <a href='https://www.jstor.org/stable/20115014?seq=1#metadata_info_tab_contents'>paper</a> by Irving John Good titled <em>Explicativity, Corroboration, and the Relative Odds of Hypotheses</em>. The paper is I.J. Good’s response to Karl Popper, and in the presentation I compare the two philosophers’ views on probability, epistemology, induction, simplicity, and content.</li><li><a href='http://www.mrcc.uqam.ca/Publications/articles/deElia_MWR2005_.pdf'>Diversity in Interpretations of Probability: Implications for Weather Forecasting</a></li><li>Andrew Gelman, <a href='http://www.stat.columbia.edu/~gelman/research/published/philosophy.pdf'>Philosophy and the practice of Bayesian statistics</a></li><li>Popper quote: <em>&quot;Those who identify confirmation with probability must believe that a high degree of probability is desirable. They implicitly accept the rule: ‘Always choose the most probable hypothesis!’ Now it can be easily shown that this rule is equivalent to the following rule: ‘Always choose the hypothesis which goes as little beyond the evidence as possible!’ And this, in turn, can be shown to be equivalent, not only to ‘Always accept the hypothesis with the lowest content (within the limits of your task, for example, your task of predicting)!’, but also to ‘Always choose the hypothesis which has the highest degree of ad hoc character (within the limits of your task)!’&quot; </em>(Conjectures and Refutations p.391) </li></ul><p>Get in touch at incrementspodcast@gmail.com.<br/><br/><em>audio updated 13/12/2020</em></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Don&apos;t leave yet - we swear this will be more interesting than it sounds ... <br/><br/>... But a drink will definitely help. Ben and Vaden dive into the interpretations behind probability. What do people mean when they use the word, and why do we use this one tool to describe different concepts? The rowdiness truly kicks in when Vaden releases his pent-up critique of Bayesianism, thereby losing both his friends and PhD position. But at least he&apos;s ingratiated himself with Karl Popper. <br/><br/><b><em>References:</em></b></p><ul><li><a href='https://vmasrani.github.io/assets/popper_good.pdf'>Vaden&apos;s  Slides</a> on a 1975 <a href='https://www.jstor.org/stable/20115014?seq=1#metadata_info_tab_contents'>paper</a> by Irving John Good titled <em>Explicativity, Corroboration, and the Relative Odds of Hypotheses</em>. The paper is I.J. Good’s response to Karl Popper, and in the presentation I compare the two philosophers’ views on probability, epistemology, induction, simplicity, and content.</li><li><a href='http://www.mrcc.uqam.ca/Publications/articles/deElia_MWR2005_.pdf'>Diversity in Interpretations of Probability: Implications for Weather Forecasting</a></li><li>Andrew Gelman, <a href='http://www.stat.columbia.edu/~gelman/research/published/philosophy.pdf'>Philosophy and the practice of Bayesian statistics</a></li><li>Popper quote: <em>&quot;Those who identify confirmation with probability must believe that a high degree of probability is desirable. They implicitly accept the rule: ‘Always choose the most probable hypothesis!’ Now it can be easily shown that this rule is equivalent to the following rule: ‘Always choose the hypothesis which goes as little beyond the evidence as possible!’ And this, in turn, can be shown to be equivalent, not only to ‘Always accept the hypothesis with the lowest content (within the limits of your task, for example, your task of predicting)!’, but also to ‘Always choose the hypothesis which has the highest degree of ad hoc character (within the limits of your task)!’&quot; </em>(Conjectures and Refutations p.391) </li></ul><p>Get in touch at incrementspodcast@gmail.com.<br/><br/><em>audio updated 13/12/2020</em></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
