<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web02.fireside.fm</fireside:hostname>
    <fireside:genDate>Sun, 19 Apr 2026 23:31:08 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>Increments - Episodes Tagged with “Probability”</title>
    <link>https://www.incrementspodcast.com/tags/probability</link>
    <pubDate>Sat, 29 Nov 2025 13:00:00 -0800</pubDate>
    <description>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>Science, Philosophy, Epistemology, Mayhem</itunes:subtitle>
    <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
    <itunes:summary>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/cover.jpg?v=18"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>Philosophy,Science,Ethics,Progress,Knowledge,Computer Science,Conversation,Error-Correction</itunes:keywords>
    <itunes:owner>
      <itunes:name>Ben Chugg and Vaden Masrani</itunes:name>
      <itunes:email>incrementspodcast@gmail.com</itunes:email>
    </itunes:owner>
<itunes:category text="Society &amp; Culture">
  <itunes:category text="Philosophy"/>
</itunes:category>
<itunes:category text="Science"/>
<item>
  <title>#95 (C&amp;R Chap 10, Part II) - A Problem-First View of Scientific Progress </title>
  <link>https://www.incrementspodcast.com/95</link>
  <guid isPermaLink="false">189bdf89-18ae-4bfd-a90b-9adbaa2353d3</guid>
  <pubDate>Sat, 29 Nov 2025 13:00:00 -0800</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/189bdf89-18ae-4bfd-a90b-9adbaa2353d3.mp3" length="55671326" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>After unsuccessfully trying to resolve our dispute about Popper's theory of content, we're back for part II of Chapter 10 of the Conjectures and Refutations Series. </itunes:subtitle>
  <itunes:duration>57:59</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/1/189bdf89-18ae-4bfd-a90b-9adbaa2353d3/cover.jpg?v=1"/>
  <description>After a long hiatus where we both saw grief counsellors over our fight about Popper's theory of content in the last C&amp;amp;R episode, we are back. And we're ready to play nice ... for about 30 seconds until Vaden admits that two sentences from Popper changed his mind about something Ben had been arguing for literally years. 
But eventually putting those disagreements aside, we return to the subject at hand: The Conjectures and Refutations Series: Chapter 10: Truth, Rationality, and the Growth of Scientific Knowledge (Part II). Here all goes smoothly. Just kidding, we start fighting about content again almost immediately. Where are the guests to break us up when you need them? 
We discuss
Why Vaden changed his mind about "all thought is problem solving" 
Something that rhymes with wero horship 
Is Popper sloppy when it comes to writing about probability and content? 
Is all modern data science based on the wrong idea? (Hint: No) 
Popper's problem-focused view of scientific progress 
How much formalization is too much? 
The difference between high verisimilitude and high probability 
Why do we value simplicity in science? 
Historical examples of science progressing via theories with increasing content 
Quotes
Consciousness, world 2, was presumably an evaluating and discerning consciousness, a problem-solving consciousness, right from the start. I have said of the animate part of the physical world 1 that all organisms are problem solvers. My basic assumption regarding world 2 is that this problem-solving activity of the animate part of world 1 resulted in the emergence of world 2, of the world of consciousness. But I do not mean by this that consciousness solves problems all the time, as I asserted of the organisms. On the contrary. The organisms are preoccupied with problem-solving day in, day out, but consciousness is not only concerned with the solving of problems, although that is its most important biological function. My hypothesis is that the original task of consciousness was to anticipate success and failure in problem-solving and to signal to the organism in the form of pleasure and pain whether it was on the right or wrong path to the solution of the problem.
In Search of a Better World, p.17 (emphasis added) 
The criterion of potential satisfactoriness is thus testability, or improbability: only a highly testable or improbable theory is worth testing, and is actually (and not merely potentially) satisfactory if it withstands severe tests—especially those tests to which we could point as crucial for the theory before they were ever undertaken. 
- C&amp;amp;R, Chapter 10 
Consequently there is little merit in formalizing and elaborating a deductive system (intended for use as an empirical science) beyond the requirements of the task of criticizing and testing it, and of comparing it critically with competitors.
- C&amp;amp;R, Chapter 10 
Admittedly, our expectations, and thus our theories, may precede, historically, even our problems. Yet science starts only with problems. Problems crop up especially when we are disappointed in our expectations, or when our theories involve us in difficulties, in contradictions; and these may arise either within a theory, or between two different theories, or as the result of a clash between our theories and our observations.
- C&amp;amp;R, Chapter 10 
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Become a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
Is "Ben and Vaden will fight about content" high or low probability? Tell us at incrementspodcast@gmail.com  
</description>
  <itunes:keywords>popper, philosophy of science, probability, epistemology, content, simplicity, verisimilitude</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>After a long hiatus where we both saw grief counsellors over our fight about Popper&#39;s theory of content in the last C&amp;R episode, we are back. And we&#39;re ready to play nice ... for about 30 seconds until Vaden admits that two sentences from Popper changed his mind about something Ben had been arguing for literally years. </p>

<p>But eventually putting those disagreements aside, we return to the subject at hand: The Conjectures and Refutations Series: Chapter 10: Truth, Rationality, and the Growth of Scientific Knowledge (Part II). Here all goes smoothly. Just kidding, we start fighting about content again almost immediately. Where are the guests to break us up when you need them? </p>

<h1>We discuss</h1>

<ul>
<li>Why Vaden changed his mind about &quot;all thought is problem solving&quot; </li>
<li>Something that rhymes with wero horship </li>
<li>Is Popper sloppy when it comes to writing about probability and content? </li>
<li>Is all modern data science based on the wrong idea? (Hint: No) </li>
<li>Popper&#39;s problem-focused view of scientific progress </li>
<li>How much formalization is too much? </li>
<li>The difference between high verisimilitude and high probability </li>
<li>Why do we value simplicity in science? </li>
<li>Historical examples of science progressing via theories with increasing content </li>
</ul>

<h1>Quotes</h1>

<blockquote>
<p>Consciousness, world 2, was presumably <em>an evaluating and discerning consciousness</em>, a problem-solving consciousness, right from the start. I have said of the animate part of the physical world 1 that all organisms are problem solvers. My basic assumption regarding world 2 is that this problem-solving activity of the animate part of world 1 resulted in the emergence of world 2, of the world of consciousness. But I do not mean by this that consciousness solves problems all the time, as I asserted of the organisms. On the contrary. The organisms are preoccupied with problem-solving day in, day out, but consciousness <em>is not only concerned</em> with the solving of problems, although that is its most important biological function. <strong>My hypothesis is that the original task of consciousness was to anticipate success and failure in problem-solving and to signal to the organism in the form of pleasure and pain whether it was on the right or wrong path to the solution of the problem.</strong></p>

<ul>
<li>In Search of a Better World, p.17 (emphasis added) </li>
</ul>

<p>The criterion of potential satisfactoriness is thus testability, or improbability: only a highly testable or improbable theory is worth testing, and is actually (and not merely potentially) satisfactory if it withstands severe tests—especially those tests to which we could point as crucial for the theory before they were ever undertaken. <br>
- C&amp;R, Chapter 10 </p>

<p>Consequently there is little merit in formalizing and elaborating a deductive system (intended for use as an empirical science) beyond the requirements of the task of criticizing and testing it, and of comparing it critically with competitors.<br>
- C&amp;R, Chapter 10 </p>

<p>Admittedly, our expectations, and thus our theories, may precede, historically, even our problems. Yet science starts only with problems. Problems crop up especially when we are disappointed in our expectations, or when our theories involve us in difficulties, in contradictions; and these may arise either within a theory, or between two different theories, or as the result of a clash between our theories and our observations.<br>
- C&amp;R, Chapter 10 </p>
</blockquote>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Become a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>Is &quot;Ben and Vaden will fight about content&quot; high or low probability? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a> </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>After a long hiatus where we both saw grief counsellors over our fight about Popper&#39;s theory of content in the last C&amp;R episode, we are back. And we&#39;re ready to play nice ... for about 30 seconds until Vaden admits that two sentences from Popper changed his mind about something Ben had been arguing for literally years. </p>

<p>But eventually putting those disagreements aside, we return to the subject at hand: The Conjectures and Refutations Series: Chapter 10: Truth, Rationality, and the Growth of Scientific Knowledge (Part II). Here all goes smoothly. Just kidding, we start fighting about content again almost immediately. Where are the guests to break us up when you need them? </p>

<h1>We discuss</h1>

<ul>
<li>Why Vaden changed his mind about &quot;all thought is problem solving&quot; </li>
<li>Something that rhymes with wero horship </li>
<li>Is Popper sloppy when it comes to writing about probability and content? </li>
<li>Is all modern data science based on the wrong idea? (Hint: No) </li>
<li>Popper&#39;s problem-focused view of scientific progress </li>
<li>How much formalization is too much? </li>
<li>The difference between high verisimilitude and high probability </li>
<li>Why do we value simplicity in science? </li>
<li>Historical examples of science progressing via theories with increasing content </li>
</ul>

<h1>Quotes</h1>

<blockquote>
<p>Consciousness, world 2, was presumably <em>an evaluating and discerning consciousness</em>, a problem-solving consciousness, right from the start. I have said of the animate part of the physical world 1 that all organisms are problem solvers. My basic assumption regarding world 2 is that this problem-solving activity of the animate part of world 1 resulted in the emergence of world 2, of the world of consciousness. But I do not mean by this that consciousness solves problems all the time, as I asserted of the organisms. On the contrary. The organisms are preoccupied with problem-solving day in, day out, but consciousness <em>is not only concerned</em> with the solving of problems, although that is its most important biological function. <strong>My hypothesis is that the original task of consciousness was to anticipate success and failure in problem-solving and to signal to the organism in the form of pleasure and pain whether it was on the right or wrong path to the solution of the problem.</strong></p>

<ul>
<li>In Search of a Better World, p.17 (emphasis added) </li>
</ul>

<p>The criterion of potential satisfactoriness is thus testability, or improbability: only a highly testable or improbable theory is worth testing, and is actually (and not merely potentially) satisfactory if it withstands severe tests—especially those tests to which we could point as crucial for the theory before they were ever undertaken. <br>
- C&amp;R, Chapter 10 </p>

<p>Consequently there is little merit in formalizing and elaborating a deductive system (intended for use as an empirical science) beyond the requirements of the task of criticizing and testing it, and of comparing it critically with competitors.<br>
- C&amp;R, Chapter 10 </p>

<p>Admittedly, our expectations, and thus our theories, may precede, historically, even our problems. Yet science starts only with problems. Problems crop up especially when we are disappointed in our expectations, or when our theories involve us in difficulties, in contradictions; and these may arise either within a theory, or between two different theories, or as the result of a clash between our theories and our observations.<br>
- C&amp;R, Chapter 10 </p>
</blockquote>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Become a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>Is &quot;Ben and Vaden will fight about content&quot; high or low probability? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a> </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#93 (C&amp;R Chap 10, Part I) - An Introduction to Popper's Theory of Content</title>
  <link>https://www.incrementspodcast.com/93</link>
  <guid isPermaLink="false">614c7d46-abe3-4651-946a-b20d77e84f84</guid>
  <pubDate>Thu, 16 Oct 2025 12:15:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/614c7d46-abe3-4651-946a-b20d77e84f84.mp3" length="103477292" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>An introduction to Popper's theory of content, following Chapter 10 of Conjectures and Refutations. Plus a lot of arguing about Bayesianism. </itunes:subtitle>
  <itunes:duration>1:47:23</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/6/614c7d46-abe3-4651-946a-b20d77e84f84/cover.jpg?v=1"/>
  <description>Back to basics baby. We're doing a couple introductory episodes on Popper's philosophy of science, following Chapter 10 of Conjectures and Refutations. We start with Popper's theory of content: what makes a good scientific theory? Can we judge some theories as better than others before we even run any empirical tests? Should we be looking for theories with high probability? 
Ben and Vaden also return to their roots in another way, and get into a nice little fight about how content relates to Bayesianism. 
We discuss
Vaden's skin care routine 
If you find your friend's lost watch and proceed to lose it, are you responsible for the watch?
Empirical vs logical content 
Whether and how content can be measured and compared 
How content relates to probability 
Quotes
My aim in this lecture is to stress the significance of one particular aspect of science—its need to grow, or, if you like, its need to progress. I do not have in mind here the practical or social significance of this need. What I wish to discuss is rather its intellectual significance. I assert that continued growth is essential to the rational and empirical character of scientific knowledge; that if science ceases to grow it must lose that character. It is the way of its growth which makes science rational and empirical; the way, that is, in which scientists discriminate between available theories and choose the better one or (in the absence of a satisfactory theory) the way they give reasons for rejecting all the available theories, thereby suggesting some of the conditions with which a satisfactory theory should comply.
You will have noticed from this formulation that it is not the accumulation of observations which I have in mind when I speak of the growth of scientific knowledge, but the repeated overthrow of scientific theories and their replacement by better or more satisfactory ones. This, incidentally, is a procedure which might be found worthy of attention even by those who see the most important aspect of the growth of scientific knowledge in new experiments and in new observations.
- C&amp;amp;R p. 291
Thus it is my first thesis that we can know of a theory, even before it has been tested, that if it passes certain tests it will be better than some other theory. 
My first thesis implies that we have a criterion of relative potential satisfactoriness, or of potential progressiveness, which can be applied to a theory even before we know whether or not it will turn out, by the passing of some crucial tests, to be satisfactory in fact.
This criterion of relative potential satisfactoriness (which I formulated some time ago,2 and which, incidentally, allows us to grade theories according to their degree of relative potential satisfactoriness) is extremely simple and intuitive. It characterizes as preferable the theory which tells us more; that is to say, the theory which contains the greater amount of empirical information or content; which is logically stronger; which has the greater explanatory and predictive power; and which can therefore be more severely tested by comparing predicted facts with observations. In short, we prefer an interesting, daring, and highly informative theory to a trivial one.
- C&amp;amp;R p.294
Let a be the statement ‘It will rain on Friday’; b the statement ‘It will be fine on Saturday’; and ab the statement ‘It will rain on Friday and it will be fine on Saturday’: it is then obvious that the informative content of this last statement, the conjunction ab, will exceed that of its component a and also that of its component b. And it will also be obvious that the probability of ab (or, what is the same, the probability that ab will be true) will be smaller than that of either of its components.
Writing Ct(a) for ‘the content of the statement a’, and Ct(ab) for ‘the content of the conjunction a and b’, we have
(1) Ct(a) &amp;lt;= Ct(ab)  &amp;gt;= Ct(b).
This contrasts with the corresponding law of the calculus of probability,
(2) p(a) &amp;gt;= p(ab) &amp;lt;= p(b),
where the inequality signs of (1) are inverted. Together these two laws, (1) and (2), state that with increasing content, probability decreases, and vice versa; or in other words, that content increases with increasing improbability. (This analysis is of course in full agreement with the general idea of the logical content of a statement as the class of all those statements which are logically entailed by it. We may also say that a statement a is logically stronger than a statement b if its content is greater than that of b—that is to say, if it entails more than b does.)
This trivial fact has the following inescapable consequences: if growth of knowledge means that we operate with theories of increasing content, it must also mean that we operate with theories of decreasing probability (in the sense of the calculus of probability). Thus if our aim is the advancement or growth of knowledge, then a high probability (in the sense of the calculus of probability) cannot possibly be our aim as well: these two aims are incompatible.
- C&amp;amp;R p.295
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Become a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
How much content does the theory "dish soap is the ultimate face cleanser" have? Send your order of infinity over to incrementspodcast@gmail.com
</description>
  <itunes:keywords>popper, content, philosophy of science, probability, bayesianism</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Back to basics baby. We&#39;re doing a couple introductory episodes on Popper&#39;s philosophy of science, following Chapter 10 of Conjectures and Refutations. We start with Popper&#39;s theory of <em>content</em>: what makes a good scientific theory? Can we judge some theories as better than others before we even run any empirical tests? Should we be looking for theories with high probability? </p>

<p>Ben and Vaden also return to their roots in another way, and get into a nice little fight about how content relates to Bayesianism. </p>

<h1>We discuss</h1>

<ul>
<li>Vaden&#39;s skin care routine </li>
<li>If you find your friend&#39;s lost watch and proceed to lose it, are you responsible for the watch?</li>
<li>Empirical vs logical content </li>
<li>Whether and how content can be measured and compared </li>
<li>How content relates to probability </li>
</ul>

<h1>Quotes</h1>

<blockquote>
<p>My aim in this lecture is to stress the significance of one particular aspect of science—its need to grow, or, if you like, its need to progress. I do not have in mind here the practical or social significance of this need. What I wish to discuss is rather its intellectual significance. I assert that continued growth is essential to the rational and empirical character of scientific knowledge; that if science ceases to grow it must lose that character. It is the way of its growth which makes science rational and empirical; the way, that is, in which scientists discriminate between available theories and choose the better one or (in the absence of a satisfactory theory) the way they give reasons for rejecting all the available theories, thereby suggesting some of the conditions with which a satisfactory theory should comply.</p>

<p>You will have noticed from this formulation that it is not the accumulation of observations which I have in mind when I speak of the growth of scientific knowledge, but the repeated overthrow of scientific theories and their replacement by better or more satisfactory ones. This, incidentally, is a procedure which might be found worthy of attention even by those who see the most important aspect of the growth of scientific knowledge in new experiments and in new observations.</p>

<ul>
<li><em>C&amp;R p. 291</em></li>
</ul>
</blockquote>

<hr>

<blockquote>
<p>Thus it is my first thesis that we can know of a theory, even before it has been tested, that if it passes certain tests it will be better than some other theory. </p>

<p>My first thesis implies that we have a criterion of relative potential satisfactoriness, or of potential progressiveness, which can be applied to a theory even before we know whether or not it will turn out, by the passing of some crucial tests, to be satisfactory in <em>fact</em>.</p>

<p>This criterion of relative potential satisfactoriness (which I formulated some time ago,2 and which, incidentally, allows us to grade theories according to their degree of relative potential satisfactoriness) is extremely simple and intuitive. It characterizes as preferable the theory which tells us more; that is to say, the theory which contains the greater amount of empirical information or <em>content</em>; which is logically stronger; which has the greater explanatory and predictive power; and which can therefore be <em>more severely tested</em> by comparing predicted facts with observations. In short, we prefer an interesting, daring, and highly informative theory to a trivial one.</p>

<ul>
<li><em>C&amp;R p.294</em></li>
</ul>

<p>Let a be the statement ‘It will rain on Friday’; b the statement ‘It will be fine on Saturday’; and ab the statement ‘It will rain on Friday and it will be fine on Saturday’: it is then obvious that the informative content of this last statement, the conjunction ab, will exceed that of its component a and also that of its component b. And it will also be obvious that the probability of ab (or, what is the same, the probability that ab will be true) will be smaller than that of either of its components.</p>

<p>Writing Ct(a) for ‘the content of the statement a’, and Ct(ab) for ‘the content of the conjunction a and b’, we have<br>
(1) Ct(a) &lt;= Ct(ab)  &gt;= Ct(b).</p>

<p>This contrasts with the corresponding law of the calculus of probability,</p>

<p>(2) p(a) &gt;= p(ab) &lt;= p(b),</p>

<p>where the inequality signs of (1) are inverted. Together these two laws, (1) and (2), state that with increasing content, probability decreases, and vice versa; or in other words, that content increases with increasing improbability. (This analysis is of course in full agreement with the general idea of the logical content of a statement as the class of all those statements which are logically entailed by it. We may also say that a statement a is logically stronger than a statement b if its content is greater than that of b—that is to say, if it entails more than b does.)</p>

<p>This trivial fact has the following inescapable consequences: if growth of knowledge means that we operate with theories of increasing content, it must also mean that we operate with theories of decreasing probability (in the sense of the calculus of probability). Thus if our aim is the advancement or growth of knowledge, then a high probability (in the sense of the calculus of probability) cannot possibly be our aim as well: these two aims are incompatible.</p>

<ul>
<li><em>C&amp;R p.295</em></li>
</ul>
</blockquote>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Become a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>How much content does the theory &quot;dish soap is the ultimate face cleanser&quot; have? Send your order of infinity over to <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Back to basics baby. We&#39;re doing a couple introductory episodes on Popper&#39;s philosophy of science, following Chapter 10 of Conjectures and Refutations. We start with Popper&#39;s theory of <em>content</em>: what makes a good scientific theory? Can we judge some theories as better than others before we even run any empirical tests? Should we be looking for theories with high probability? </p>

<p>Ben and Vaden also return to their roots in another way, and get into a nice little fight about how content relates to Bayesianism. </p>

<h1>We discuss</h1>

<ul>
<li>Vaden&#39;s skin care routine </li>
<li>If you find your friend&#39;s lost watch and proceed to lose it, are you responsible for the watch?</li>
<li>Empirical vs logical content </li>
<li>Whether and how content can be measured and compared </li>
<li>How content relates to probability </li>
</ul>

<h1>Quotes</h1>

<blockquote>
<p>My aim in this lecture is to stress the significance of one particular aspect of science—its need to grow, or, if you like, its need to progress. I do not have in mind here the practical or social significance of this need. What I wish to discuss is rather its intellectual significance. I assert that continued growth is essential to the rational and empirical character of scientific knowledge; that if science ceases to grow it must lose that character. It is the way of its growth which makes science rational and empirical; the way, that is, in which scientists discriminate between available theories and choose the better one or (in the absence of a satisfactory theory) the way they give reasons for rejecting all the available theories, thereby suggesting some of the conditions with which a satisfactory theory should comply.</p>

<p>You will have noticed from this formulation that it is not the accumulation of observations which I have in mind when I speak of the growth of scientific knowledge, but the repeated overthrow of scientific theories and their replacement by better or more satisfactory ones. This, incidentally, is a procedure which might be found worthy of attention even by those who see the most important aspect of the growth of scientific knowledge in new experiments and in new observations.</p>

<ul>
<li><em>C&amp;R p. 291</em></li>
</ul>
</blockquote>

<hr>

<blockquote>
<p>Thus it is my first thesis that we can know of a theory, even before it has been tested, that if it passes certain tests it will be better than some other theory. </p>

<p>My first thesis implies that we have a criterion of relative potential satisfactoriness, or of potential progressiveness, which can be applied to a theory even before we know whether or not it will turn out, by the passing of some crucial tests, to be satisfactory in <em>fact</em>.</p>

<p>This criterion of relative potential satisfactoriness (which I formulated some time ago, and which, incidentally, allows us to grade theories according to their degree of relative potential satisfactoriness) is extremely simple and intuitive. It characterizes as preferable the theory which tells us more; that is to say, the theory which contains the greater amount of empirical information or <em>content</em>; which is logically stronger; which has the greater explanatory and predictive power; and which can therefore be <em>more severely tested</em> by comparing predicted facts with observations. In short, we prefer an interesting, daring, and highly informative theory to a trivial one.</p>

<ul>
<li><em>C&amp;R p.294</em></li>
</ul>

<p>Let a be the statement ‘It will rain on Friday’; b the statement ‘It will be fine on Saturday’; and ab the statement ‘It will rain on Friday and it will be fine on Saturday’: it is then obvious that the informative content of this last statement, the conjunction ab, will exceed that of its component a and also that of its component b. And it will also be obvious that the probability of ab (or, what is the same, the probability that ab will be true) will be smaller than that of either of its components.</p>

<p>Writing Ct(a) for ‘the content of the statement a’, and Ct(ab) for ‘the content of the conjunction a and b’, we have<br>
(1) Ct(a) &lt;= Ct(ab)  &gt;= Ct(b).</p>

<p>This contrasts with the corresponding law of the calculus of probability,</p>

<p>(2) p(a) &gt;= p(ab) &lt;= p(b),</p>

<p>where the inequality signs of (1) are inverted. Together these two laws, (1) and (2), state that with increasing content, probability decreases, and vice versa; or in other words, that content increases with increasing improbability. (This analysis is of course in full agreement with the general idea of the logical content of a statement as the class of all those statements which are logically entailed by it. We may also say that a statement a is logically stronger than a statement b if its content is greater than that of b—that is to say, if it entails more than b does.)</p>

<p>This trivial fact has the following inescapable consequences: if growth of knowledge means that we operate with theories of increasing content, it must also mean that we operate with theories of decreasing probability (in the sense of the calculus of probability). Thus if our aim is the advancement or growth of knowledge, then a high probability (in the sense of the calculus of probability) cannot possibly be our aim as well: these two aims are incompatible.</p>

<ul>
<li><em>C&amp;R p.295</em></li>
</ul>
</blockquote>
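Popper's two laws quoted above can be checked numerically. A minimal sketch, assuming independence of the two weather statements and taking Ct(x) = 1 − p(x) as one simple content measure (both assumptions are for illustration only, not part of Popper's text):

```python
# Popper's laws (1) and (2): the conjunction ab is at most as probable
# as either conjunct, so its content is at least as great as either's.
p_a = 0.7           # hypothetical p('It will rain on Friday')
p_b = 0.6           # hypothetical p('It will be fine on Saturday')
p_ab = p_a * p_b    # probability of the conjunction, assuming independence

# Law (2): p(a) >= p(ab) <= p(b)
assert p_a >= p_ab <= p_b

# Law (1): Ct(a) <= Ct(ab) >= Ct(b), with the inequality signs inverted,
# using the illustrative content measure Ct(x) = 1 - p(x)
ct = lambda p: 1 - p
assert ct(p_a) <= ct(p_ab) >= ct(p_b)

print(p_ab, ct(p_ab))
```

Any joint probability would do in place of the independence assumption: p(ab) can never exceed min(p(a), p(b)), so content always moves opposite to probability.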

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Become a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>How much content does the theory &quot;dish soap is the ultimate face cleanser&quot; have? Send your order of infinity over to <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#74 - Disagreeing about Belief, Probability, and Truth (w/ David Deutsch)</title>
  <link>https://www.incrementspodcast.com/74</link>
  <guid isPermaLink="false">03508f9b-3a2a-4b15-9b23-fe30083b431b</guid>
  <pubDate>Tue, 01 Oct 2024 09:30:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/03508f9b-3a2a-4b15-9b23-fe30083b431b.mp3" length="88784483" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>We talk with David Deutsch about whether the concept of belief is a useful lens on human cognition, when probability and statistics are actually useful, and whether he disagrees with Karl Popper about the truth. </itunes:subtitle>
  <itunes:duration>1:32:02</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/0/03508f9b-3a2a-4b15-9b23-fe30083b431b/cover.jpg?v=9"/>
  <description>What do you do when one of your intellectual idols comes on the podcast? Bombard them with disagreements of course. We were thrilled to have David Deutsch on the podcast to discuss whether the concept of belief is a useful lens on human cognition, when probability and statistics should be deployed, and whether he disagrees with Karl Popper on abstractions, the truth, and nothing but the truth. 
Follow David on Twitter (@DavidDeutschOxf) or find his website here (https://www.daviddeutsch.org.uk/). 
We discuss
Whether belief is a fruitful lens through which to analyze ideas 
Whether a non-quantitative form of belief can be defended 
How does belief bottom out epistemologically? 
Whether statistics and probability are useful 
Where should statistics and probability be used in practice? 
The Popper-Miller theorem
Statements vs propositions and their relevance for truth 
Whether Popper and Deutsch disagree about truth 
References
The Popper-Miller theorem. See the original paper (https://www.nature.com/articles/302687a0) 
David's 2021 talk on the correspondence theory of truth (https://www.youtube.com/watch?v=DZ-opI-jghs) 
David's talk on physics without probability (https://www.youtube.com/watch?v=wfzSE4Hoxbc). 
Hempel's paradox (https://en.wikipedia.org/wiki/Raven_paradox) 
The Beginning of Infinity (https://www.amazon.com/Beginning-Infinity-Explanations-Transform-World/dp/0143121359)
Knowledge and the Body-Mind Problem (https://www.amazon.ca/Knowledge-Body-Mind-Problem-Defence-Interaction/dp/0415135567)
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @DavidDeutschOxf
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Believe in us and get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
What's the truth about your belief on the probability of useful statistics? Tell us over at incrementspodcast@gmail.com.  Special Guest: David Deutsch.
</description>
  <itunes:keywords>probability, statistics, truth, belief, epistemology, certainty, mathematics</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>What do you do when one of your intellectual idols comes on the podcast? Bombard them with disagreements of course. We were thrilled to have David Deutsch on the podcast to discuss whether the concept of belief is a useful lens on human cognition, when probability and statistics should be deployed, and whether he disagrees with Karl Popper on abstractions, the truth, and nothing but the truth. </p>

<p>Follow David on Twitter (@DavidDeutschOxf) or find his website <a href="https://www.daviddeutsch.org.uk/" rel="nofollow">here</a>. </p>

<h1>We discuss</h1>

<ul>
<li>Whether belief is a fruitful lens through which to analyze ideas </li>
<li>Whether a non-quantitative form of belief can be defended </li>
<li>How does belief bottom out epistemologically? </li>
<li>Whether statistics and probability are useful </li>
<li>Where should statistics and probability be used in practice? </li>
<li>The Popper-Miller theorem</li>
<li>Statements vs propositions and their relevance for truth </li>
<li>Whether Popper and Deutsch disagree about truth </li>
</ul>

<h1>References</h1>

<ul>
<li>The Popper-Miller theorem. See the <a href="https://www.nature.com/articles/302687a0" rel="nofollow">original paper</a> </li>
<li>David&#39;s 2021 talk on the <a href="https://www.youtube.com/watch?v=DZ-opI-jghs" rel="nofollow">correspondence theory of truth</a> </li>
<li>David&#39;s talk on <a href="https://www.youtube.com/watch?v=wfzSE4Hoxbc" rel="nofollow">physics without probability</a>. </li>
<li><a href="https://en.wikipedia.org/wiki/Raven_paradox" rel="nofollow">Hempel&#39;s paradox</a> </li>
<li><a href="https://www.amazon.com/Beginning-Infinity-Explanations-Transform-World/dp/0143121359" rel="nofollow">The Beginning of Infinity</a></li>
<li><a href="https://www.amazon.ca/Knowledge-Body-Mind-Problem-Defence-Interaction/dp/0415135567" rel="nofollow">Knowledge and the Body-Mind Problem</a></li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @DavidDeutschOxf</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Believe in us and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>What&#39;s the truth about your belief on the probability of useful statistics? Tell us over at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p><p>Special Guest: David Deutsch.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>What do you do when one of your intellectual idols comes on the podcast? Bombard them with disagreements of course. We were thrilled to have David Deutsch on the podcast to discuss whether the concept of belief is a useful lens on human cognition, when probability and statistics should be deployed, and whether he disagrees with Karl Popper on abstractions, the truth, and nothing but the truth. </p>

<p>Follow David on Twitter (@DavidDeutschOxf) or find his website <a href="https://www.daviddeutsch.org.uk/" rel="nofollow">here</a>. </p>

<h1>We discuss</h1>

<ul>
<li>Whether belief is a fruitful lens through which to analyze ideas </li>
<li>Whether a non-quantitative form of belief can be defended </li>
<li>How does belief bottom out epistemologically? </li>
<li>Whether statistics and probability are useful </li>
<li>Where should statistics and probability be used in practice? </li>
<li>The Popper-Miller theorem</li>
<li>Statements vs propositions and their relevance for truth </li>
<li>Whether Popper and Deutsch disagree about truth </li>
</ul>

<h1>References</h1>

<ul>
<li>The Popper-Miller theorem. See the <a href="https://www.nature.com/articles/302687a0" rel="nofollow">original paper</a> </li>
<li>David&#39;s 2021 talk on the <a href="https://www.youtube.com/watch?v=DZ-opI-jghs" rel="nofollow">correspondence theory of truth</a> </li>
<li>David&#39;s talk on <a href="https://www.youtube.com/watch?v=wfzSE4Hoxbc" rel="nofollow">physics without probability</a>. </li>
<li><a href="https://en.wikipedia.org/wiki/Raven_paradox" rel="nofollow">Hempel&#39;s paradox</a> </li>
<li><a href="https://www.amazon.com/Beginning-Infinity-Explanations-Transform-World/dp/0143121359" rel="nofollow">The Beginning of Infinity</a></li>
<li><a href="https://www.amazon.ca/Knowledge-Body-Mind-Problem-Defence-Interaction/dp/0415135567" rel="nofollow">Knowledge and the Body-Mind Problem</a></li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @DavidDeutschOxf</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Believe in us and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>What&#39;s the truth about your belief on the probability of useful statistics? Tell us over at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p><p>Special Guest: David Deutsch.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#70 - ... and Bayes Bites Back (w/ Richard Meadows) </title>
  <link>https://www.incrementspodcast.com/70</link>
  <guid isPermaLink="false">a9b0b76a-e2e7-449c-8318-06efecf1c13d</guid>
  <pubDate>Tue, 09 Jul 2024 10:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/a9b0b76a-e2e7-449c-8318-06efecf1c13d.mp3" length="88283500" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Rich comes on to defend Scott Alexander against our criticisms. Are we being unfair? Are the Bayesians simply the Most Rational People (MRP) and we can't handle it? </itunes:subtitle>
  <itunes:duration>1:30:34</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/a/a9b0b76a-e2e7-449c-8318-06efecf1c13d/cover.jpg?v=4"/>
  <description>Sick of hearing us shouting about Bayesianism? Well today you're in luck, because this time, someone shouts at us about Bayesianism! Richard Meadows, finance journalist, author, and Ben's secretive podcast paramour, takes us to task. Are we being unfair to the Bayesians? Is Bayesian rationality optimal in theory, and the rest of us are just coping with an uncertain world? Is this why the Bayesian rationalists have so much cultural influence (and money, and fame, and media attention, and ...), and we, ahem, uhhh, don't?  
Check out Rich's website (https://thedeepdish.org/start), his book Optionality: How to Survive and Thrive in a Volatile World (https://www.amazon.ca/Optionality-Survive-Thrive-Volatile-World/dp/0473545500), and his podcast (https://doyouevenlit.podbean.com/). 
We discuss
The pros of the rationality and EA communities 
Whether Bayesian epistemology contributes to open-mindedness
The fact that evidence doesn't speak for itself 
The fact that the world doesn't come bundled as discrete chunks of evidence 
Whether Bayesian epistemology would be "optimal" for Laplace's demon 
The difference between truth and certainty
Vaden's tone issues and why he gets animated about this subject. 
References
Scott's original piece: In continued defense of non-frequentist probabilities (https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist)
Scott Alexander's post about rootclaim (https://www.astralcodexten.com/p/practically-a-book-review-rootclaim/comments) 
Our previous episode on Scott's piece: #69 - Contra Scott Alexander on Probability (https://www.incrementspodcast.com/69) 
Rootclaim (https://www.rootclaim.com/)
Ben's blogpost You need a theory for that theory (https://benchugg.com/writing/you-need-a-theory/) 
Cox's theorem (https://en.wikipedia.org/wiki/Cox%27s_theorem) 
Aumann's agreement theorem (https://en.wikipedia.org/wiki/Aumann%27s_agreement_theorem) 
Vaden's blogposts mentioned in the episode:
Critical Rationalism and Bayesian Epistemology (https://vmasrani.github.io/blog/2020/vaden_second_response/)
Proving Too Much (https://vmasrani.github.io/blog/2021/proving_too_much/)
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
Follow Rich at @MeadowsRichard
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Help us calibrate our credences and get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
What's your favorite theory that is neither true nor useful? Tell us over at incrementspodcast@gmail.com.  Special Guest: Richard Meadows.
</description>
  <itunes:keywords>probability, bayesianism, rationality, uncertainty, decision-making</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Sick of hearing us shouting about Bayesianism? Well today you&#39;re in luck, because this time, someone shouts at <em>us</em> about Bayesianism! Richard Meadows, finance journalist, author, and Ben&#39;s secretive podcast paramour, takes us to task. Are we being unfair to the Bayesians? Is Bayesian rationality optimal in theory, and the rest of us are just coping with an uncertain world? Is this why the Bayesian rationalists have so much cultural influence (and money, and fame, and media attention, and ...), and we, ahem, uhhh, don&#39;t?  </p>

<p>Check out Rich&#39;s <a href="https://thedeepdish.org/start" rel="nofollow">website</a>, his book <a href="https://www.amazon.ca/Optionality-Survive-Thrive-Volatile-World/dp/0473545500" rel="nofollow"><strong>Optionality:</strong> How to Survive and Thrive in a Volatile World</a>, and his <a href="https://doyouevenlit.podbean.com/" rel="nofollow">podcast</a>. </p>

<h1>We discuss</h1>

<ul>
<li>The pros of the rationality and EA communities </li>
<li>Whether Bayesian epistemology contributes to open-mindedness</li>
<li>The fact that evidence doesn&#39;t speak for itself </li>
<li>The fact that the world doesn&#39;t come bundled as discrete chunks of evidence </li>
<li>Whether Bayesian epistemology would be &quot;optimal&quot; for Laplace&#39;s demon </li>
<li>The difference between truth and certainty</li>
<li>Vaden&#39;s tone issues and why he gets animated about this subject. </li>
</ul>

<h1>References</h1>

<ul>
<li>Scott&#39;s original piece: <a href="https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist" rel="nofollow">In continued defense of non-frequentist probabilities</a></li>
<li>Scott Alexander&#39;s <a href="https://www.astralcodexten.com/p/practically-a-book-review-rootclaim/comments" rel="nofollow">post about rootclaim</a> </li>
<li>Our previous episode on Scott&#39;s piece: <a href="https://www.incrementspodcast.com/69" rel="nofollow">#69 - Contra Scott Alexander on Probability</a> </li>
<li><a href="https://www.rootclaim.com/" rel="nofollow">Rootclaim</a></li>
<li>Ben&#39;s blogpost <a href="https://benchugg.com/writing/you-need-a-theory/" rel="nofollow">You need a theory for that theory</a> </li>
<li><a href="https://en.wikipedia.org/wiki/Cox%27s_theorem" rel="nofollow">Cox&#39;s theorem</a> </li>
<li><a href="https://en.wikipedia.org/wiki/Aumann%27s_agreement_theorem" rel="nofollow">Aumann&#39;s agreement theorem</a> </li>
<li>Vaden&#39;s blogposts mentioned in the episode:

<ul>
<li><a href="https://vmasrani.github.io/blog/2020/vaden_second_response/" rel="nofollow">Critical Rationalism and Bayesian Epistemology</a></li>
<li><a href="https://vmasrani.github.io/blog/2021/proving_too_much/" rel="nofollow">Proving Too Much</a></li>
</ul></li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Follow Rich at @MeadowsRichard</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Help us calibrate our credences and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>What&#39;s your favorite theory that is neither true nor useful? Tell us over at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p><p>Special Guest: Richard Meadows.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Sick of hearing us shouting about Bayesianism? Well today you&#39;re in luck, because this time, someone shouts at <em>us</em> about Bayesianism! Richard Meadows, finance journalist, author, and Ben&#39;s secretive podcast paramour, takes us to task. Are we being unfair to the Bayesians? Is Bayesian rationality optimal in theory, and the rest of us are just coping with an uncertain world? Is this why the Bayesian rationalists have so much cultural influence (and money, and fame, and media attention, and ...), and we, ahem, uhhh, don&#39;t?  </p>

<p>Check out Rich&#39;s <a href="https://thedeepdish.org/start" rel="nofollow">website</a>, his book <a href="https://www.amazon.ca/Optionality-Survive-Thrive-Volatile-World/dp/0473545500" rel="nofollow"><strong>Optionality:</strong> How to Survive and Thrive in a Volatile World</a>, and his <a href="https://doyouevenlit.podbean.com/" rel="nofollow">podcast</a>. </p>

<h1>We discuss</h1>

<ul>
<li>The pros of the rationality and EA communities </li>
<li>Whether Bayesian epistemology contributes to open-mindedness</li>
<li>The fact that evidence doesn&#39;t speak for itself </li>
<li>The fact that the world doesn&#39;t come bundled as discrete chunks of evidence </li>
<li>Whether Bayesian epistemology would be &quot;optimal&quot; for Laplace&#39;s demon </li>
<li>The difference between truth and certainty</li>
<li>Vaden&#39;s tone issues and why he gets animated about this subject. </li>
</ul>

<h1>References</h1>

<ul>
<li>Scott&#39;s original piece: <a href="https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist" rel="nofollow">In continued defense of non-frequentist probabilities</a></li>
<li>Scott Alexander&#39;s <a href="https://www.astralcodexten.com/p/practically-a-book-review-rootclaim/comments" rel="nofollow">post about rootclaim</a> </li>
<li>Our previous episode on Scott&#39;s piece: <a href="https://www.incrementspodcast.com/69" rel="nofollow">#69 - Contra Scott Alexander on Probability</a> </li>
<li><a href="https://www.rootclaim.com/" rel="nofollow">Rootclaim</a></li>
<li>Ben&#39;s blogpost <a href="https://benchugg.com/writing/you-need-a-theory/" rel="nofollow">You need a theory for that theory</a> </li>
<li><a href="https://en.wikipedia.org/wiki/Cox%27s_theorem" rel="nofollow">Cox&#39;s theorem</a> </li>
<li><a href="https://en.wikipedia.org/wiki/Aumann%27s_agreement_theorem" rel="nofollow">Aumann&#39;s agreement theorem</a> </li>
<li>Vaden&#39;s blogposts mentioned in the episode:

<ul>
<li><a href="https://vmasrani.github.io/blog/2020/vaden_second_response/" rel="nofollow">Critical Rationalism and Bayesian Epistemology</a></li>
<li><a href="https://vmasrani.github.io/blog/2021/proving_too_much/" rel="nofollow">Proving Too Much</a></li>
</ul></li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Follow Rich at @MeadowsRichard</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Help us calibrate our credences and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>What&#39;s your favorite theory that is neither true nor useful? Tell us over at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p><p>Special Guest: Richard Meadows.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#69 - Contra Scott Alexander on Probability</title>
  <link>https://www.incrementspodcast.com/69</link>
  <guid isPermaLink="false">3ac225c1-a486-428e-bdcf-2d1973d2c80b</guid>
  <pubDate>Thu, 20 Jun 2024 08:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/3ac225c1-a486-428e-bdcf-2d1973d2c80b.mp3" length="101992679" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle> Cursed to return to this subject again, we attack the big man himself on probability. What's your credence that we're correct?</itunes:subtitle>
  <itunes:duration>1:45:09</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/3/3ac225c1-a486-428e-bdcf-2d1973d2c80b/cover.jpg?v=2"/>
  <description>After four episodes spent fawning over Scott Alexander's "Non-libertarian FAQ", we turn around and attack the good man instead. In this episode we respond to Scott's piece "In Continued Defense of Non-Frequentist Probabilities", and respond to each of his five arguments defending Bayesian probability. Like moths to a flame, we apparently cannot let the probability subject slide, sorry people. But the good news is that before getting there, you get to hear about some therapists and pedophiles (therapeutic pedophilia?). What's the probability that Scott changes his mind based on this episode?
We discuss
Why we're not defending frequentism as a philosophy 
The Bayesian interpretation of probability 
The importance of being explicit about assumptions 
Why it's insane to think that 50% should mean both "equally likely" and "I have no effing idea". 
Why Scott's interpretation of probability is crippling our ability to communicate 
How super are Superforecasters? 
Marginal versus conditional guarantees (this is exactly as boring as it sounds) 
How to pronounce Samotsvety and are they Italian or Eastern European or what?
References
In Continued Defense Of Non-Frequentist Probabilities (https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist)
Article on superforecasting by Gavin Leech and Misha Yugadin (https://progress.institute/can-policymakers-trust-forecasters/) 
Essay by Michael Story on superforecasting (https://www.samstack.io/p/five-questions-for-michael-story) 
Existential risk tournament: Superforecasters vs AI doomers (https://forecastingresearch.org/news/results-from-the-2022-existential-risk-persuasion-tournament) and Ben's blogpost about it (https://benchugg.com/writing/superforecasting/) 
The Good Judgment Project (https://goodjudgment.com/) 
Quotes
During the pandemic, Dominic Cummings said some of the most useful stuff that he received and circulated in the British government was not forecasting. It was qualitative information explaining the general model of what’s going on, which enabled decision-makers to think more clearly about their options for action and the likely consequences. If you’re worried about a new disease outbreak, you don’t just want a percentage probability estimate about future case numbers, you want an explanation of how the virus is likely to spread, what you can do about it, how you can prevent it.
- Michael Story (https://www.samstack.io/p/five-questions-for-michael-story) 
Is it bad that one term can mean both perfect information (as in 1) and total lack of information (as in 3)? No. This is no different from how we discuss things when we’re not using probability.
Do vaccines cause autism? No. Does drinking monkey blood cause autism? Also no. My evidence on the vaccines question is dozens of excellent studies, conducted so effectively that we’re as sure about this as we are about anything in biology. My evidence on the monkey blood question is that nobody’s ever proposed this and it would be weird if it were true. Still, it’s perfectly fine to say the single-word answer “no” to both of them to describe where I currently stand. If someone wants to know how much evidence/certainty is behind my “no”, they can ask, and I’ll tell them.
- SA, Section 2
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Help us calibrate our credences and get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
What's your credence in Bayesianism? Tell us over at incrementspodcast@gmail.com. 
</description>
  <itunes:keywords>probability, bayesianism, frequentism, Scott Alexander, superforecasting, credences</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>After four episodes spent fawning over Scott Alexander&#39;s &quot;Non-libertarian FAQ&quot;, we turn around and attack the good man instead. In this episode we respond to Scott&#39;s piece &quot;In Continued Defense of Non-Frequentist Probabilities&quot;, and respond to each of his five arguments defending Bayesian probability. Like moths to a flame, we apparently cannot let the probability subject slide, sorry people. But the good news is that before getting there, you get to hear about some therapists and pedophiles (therapeutic pedophilia?). What&#39;s the probability that Scott changes his mind based on this episode?</p>

<h1>We discuss</h1>

<ul>
<li>Why we&#39;re not defending frequentism as a philosophy </li>
<li>The Bayesian interpretation of probability </li>
<li>The importance of being explicit about assumptions </li>
<li>Why it&#39;s insane to think that 50% should mean both &quot;equally likely&quot; and &quot;I have no effing idea&quot;. </li>
<li>Why Scott&#39;s interpretation of probability is crippling <em>our</em> ability to communicate </li>
<li>How super are Superforecasters? </li>
<li>Marginal versus conditional guarantees (this is exactly as boring as it sounds) </li>
<li>How to pronounce Samotsvety and are they Italian or Eastern European or what?</li>
</ul>

<h1>References</h1>

<ul>
<li><a href="https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist" rel="nofollow">In Continued Defense Of Non-Frequentist Probabilities</a></li>
<li><a href="https://progress.institute/can-policymakers-trust-forecasters/" rel="nofollow">Article on superforecasting by Gavin Leech and Misha Yugadin</a> </li>
<li><a href="https://www.samstack.io/p/five-questions-for-michael-story" rel="nofollow">Essay by Michael Story on superforecasting</a> </li>
<li><a href="https://forecastingresearch.org/news/results-from-the-2022-existential-risk-persuasion-tournament" rel="nofollow">Existential risk tournament: Superforecasters vs AI doomers</a> and <a href="https://benchugg.com/writing/superforecasting/" rel="nofollow">Ben&#39;s blogpost about it</a> </li>
<li><a href="https://goodjudgment.com/" rel="nofollow">The Good Judgment Project</a> </li>
</ul>

<h1>Quotes</h1>

<blockquote>
<p>During the pandemic, Dominic Cummings said some of the most useful stuff that he received and circulated in the British government was not forecasting. It was qualitative information explaining the general model of what’s going on, which enabled decision-makers to think more clearly about their options for action and the likely consequences. If you’re worried about a new disease outbreak, you don’t just want a percentage probability estimate about future case numbers, you want an explanation of how the virus is likely to spread, what you can do about it, how you can prevent it.<br>
- <a href="https://www.samstack.io/p/five-questions-for-michael-story" rel="nofollow">Michael Story</a> </p>

<p>Is it bad that one term can mean both perfect information (as in 1) and total lack of information (as in 3)? No. This is no different from how we discuss things when we’re not using probability.</p>

<p>Do vaccines cause autism? No. Does drinking monkey blood cause autism? Also no. My evidence on the vaccines question is dozens of excellent studies, conducted so effectively that we’re as sure about this as we are about anything in biology. My evidence on the monkey blood question is that nobody’s ever proposed this and it would be weird if it were true. Still, it’s perfectly fine to say the single-word answer “no” to both of them to describe where I currently stand. If someone wants to know how much evidence/certainty is behind my “no”, they can ask, and I’ll tell them.<br>
- SA, Section 2</p>
</blockquote>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Help us calibrate our credences and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>What&#39;s your credence in Bayesianism? Tell us over at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>After four episodes spent fawning over Scott Alexander&#39;s &quot;Non-libertarian FAQ&quot;, we turn around and attack the good man instead. In this episode we respond to Scott&#39;s piece &quot;In Continued Defense of Non-Frequentist Probabilities&quot;, addressing each of his five arguments defending Bayesian probability. Like moths to a flame, we apparently cannot let the probability subject slide, sorry people. But the good news is that before getting there, you get to hear about some therapists and pedophiles (therapeutic pedophilia?). What&#39;s the probability that Scott changes his mind based on this episode?</p>

<h1>We discuss</h1>

<ul>
<li>Why we&#39;re not defending frequentism as a philosophy </li>
<li>The Bayesian interpretation of probability </li>
<li>The importance of being explicit about assumptions </li>
<li>Why it&#39;s insane to think that 50% should mean both &quot;equally likely&quot; and &quot;I have no effing idea&quot;</li>
<li>Why Scott&#39;s interpretation of probability is crippling <em>our</em> ability to communicate </li>
<li>How super are Superforecasters? </li>
<li>Marginal versus conditional guarantees (this is exactly as boring as it sounds) </li>
<li>How to pronounce Samotsvety and are they Italian or Eastern European or what?</li>
</ul>

<h1>References</h1>

<ul>
<li><a href="https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist" rel="nofollow">In Continued Defense Of Non-Frequentist Probabilities</a></li>
<li><a href="https://progress.institute/can-policymakers-trust-forecasters/" rel="nofollow">Article on superforecasting by Gavin Leech and Misha Yagudin</a> </li>
<li><a href="https://www.samstack.io/p/five-questions-for-michael-story" rel="nofollow">Essay by Michael Story on superforecasting</a> </li>
<li><a href="https://forecastingresearch.org/news/results-from-the-2022-existential-risk-persuasion-tournament" rel="nofollow">Existential risk tournament: Superforecasters vs AI doomers</a> and <a href="https://benchugg.com/writing/superforecasting/" rel="nofollow">Ben&#39;s blogpost about it</a> </li>
<li><a href="https://goodjudgment.com/" rel="nofollow">The Good Judgment Project</a> </li>
</ul>

<h1>Quotes</h1>

<blockquote>
<p>During the pandemic, Dominic Cummings said some of the most useful stuff that he received and circulated in the British government was not forecasting. It was qualitative information explaining the general model of what’s going on, which enabled decision-makers to think more clearly about their options for action and the likely consequences. If you’re worried about a new disease outbreak, you don’t just want a percentage probability estimate about future case numbers, you want an explanation of how the virus is likely to spread, what you can do about it, how you can prevent it.<br>
- <a href="https://www.samstack.io/p/five-questions-for-michael-story" rel="nofollow">Michael Story</a> </p>

<p>Is it bad that one term can mean both perfect information (as in 1) and total lack of information (as in 3)? No. This is no different from how we discuss things when we’re not using probability.</p>

<p>Do vaccines cause autism? No. Does drinking monkey blood cause autism? Also no. My evidence on the vaccines question is dozens of excellent studies, conducted so effectively that we’re as sure about this as we are about anything in biology. My evidence on the monkey blood question is that nobody’s ever proposed this and it would be weird if it were true. Still, it’s perfectly fine to say the single-word answer “no” to both of them to describe where I currently stand. If someone wants to know how much evidence/certainty is behind my “no”, they can ask, and I’ll tell them.<br>
- SA, Section 2</p>
</blockquote>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Help us calibrate our credences and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>What&#39;s your credence in Bayesianism? Tell us over at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#53 - Ask Us Anything II: Disagreements and Decisions</title>
  <link>https://www.incrementspodcast.com/53</link>
  <guid isPermaLink="false">1ffe1058-61dd-4c4d-8d9e-383a97549241</guid>
  <pubDate>Mon, 14 Aug 2023 11:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/1ffe1058-61dd-4c4d-8d9e-383a97549241.mp3" length="90414601" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Ask us anything? Ask us everything! Back at it again with AUA Part 2/N. We wax poetic and wane dramatic on disagreements, decision-making, EA, and probability</itunes:subtitle>
  <itunes:duration>1:34:10</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/1/1ffe1058-61dd-4c4d-8d9e-383a97549241/cover.jpg?v=1"/>
  <description>Ask us anything? Ask us everything! Back at it again with AUA Part 2/N. We wax poetic and wane dramatic on a number of subjects, including:
- Ben's dark and despicable hidden historicist tendencies
- Expounding upon (one of our many) critiques of Bayesian Epistemology
- Ben's total abandonment of all of his principles
- Similarities and differences between human and computer decision making
- What can the critical rationalist community learn from Effective Altruism?
- Ben's new best friend Peter Turchin
- How to have effective disagreements and not take gleeful petty jabs at friends and co-hosts.
Questions
(Michael) A critique of Bayesian epistemology is that it "assigns scalars to feelings" in an ungrounded way. It's not clear to me that the problem-solving approach of Deutsch and Popper avoid this, because even during the conjecture-refutation process, the person needs to at some point decide whether the current problem has been solved satisfactorily enough to move on to the next problem. How is this satisfaction determined, if not via summarizing one's internal belief as a scalar that surpasses some threshold? If not this (which is essentially assigning scalars to feelings), by what mechanism is a problem determined to be solved?
(Michael) Is the claim that "humans create new choices whereas machines are constrained to choose within the event-space defined by the human" equivalent to saying "humans can perform abstraction while machines cannot?" Not clear what "create new choices" means, given that humans are also constrained in their vocabulary (and thus their event-space of possible thoughts)
(Lulie) In what ways could the critical rationalist culture improve by looking to EA?
(Scott) What principles do the @IncrementsPod duo apply to navigating effective conversations involving deep disagreement?
(Scott) Are there any contexts where bayesianism has utility? (steelman)
(Scott) What is Vaden going to do post graduation?
Quotes 
“The words or the language, as they are written or spoken,” he wrote, “do not seem to play any role in my mechanism of thought. The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be ‘voluntarily’ reproduced and combined...this combinatory play seems to be the essential feature in productive thought— before there is any connection with logical construction in words or other kinds of signs which can be communicated to others.” (Einstein) 
Contact us
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
Check us out on youtube at https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Send Ben an email asking him why god why over at incrementspodcast.com 
</description>
  <itunes:keywords>ask-us-anything, disagreements, decision-making, bayesianism, probability </itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Ask us anything? Ask us everything! Back at it again with AUA Part 2/N. We wax poetic and wane dramatic on a number of subjects, including:</p>

<ul>
<li>Ben&#39;s dark and despicable hidden historicist tendencies</li>
<li>Expounding upon (one of our many) critiques of Bayesian Epistemology</li>
<li>Ben&#39;s total abandonment of all of his principles</li>
<li>Similarities and differences between human and computer decision making</li>
<li>What can the critical rationalist community learn from Effective Altruism?</li>
<li>Ben&#39;s new best friend Peter Turchin</li>
<li>How to have effective disagreements and not take gleeful petty jabs at friends and co-hosts.</li>
</ul>

<p><strong>Questions</strong></p>

<ol>
<li>(<strong>Michael</strong>) A critique of Bayesian epistemology is that it &quot;assigns scalars to feelings&quot; in an ungrounded way. It&#39;s not clear to me that the problem-solving approach of Deutsch and Popper avoid this, because even during the conjecture-refutation process, the person needs to at some point decide whether the current problem has been solved satisfactorily enough to move on to the next problem. How is this satisfaction determined, if not via summarizing one&#39;s internal belief as a scalar that surpasses some threshold? If not this (which is essentially assigning scalars to feelings), by what mechanism is a problem determined to be solved?</li>
<li>(<strong>Michael</strong>) Is the claim that &quot;humans create new choices whereas machines are constrained to choose within the event-space defined by the human&quot; equivalent to saying &quot;humans can perform abstraction while machines cannot?&quot; Not clear what &quot;create new choices&quot; means, given that humans are also constrained in their vocabulary (and thus their event-space of possible thoughts)</li>
<li>(<strong>Lulie</strong>) In what ways could the critical rationalist culture improve by looking to EA?</li>
<li>(<strong>Scott</strong>) What principles do the @IncrementsPod duo apply to navigating effective conversations involving deep disagreement?</li>
<li>(<strong>Scott</strong>) Are there any contexts where bayesianism has utility? (steelman)</li>
<li>(<strong>Scott</strong>) What is Vaden going to do post graduation?</li>
</ol>

<p><strong>Quotes</strong> </p>

<blockquote>
<p>“The words or the language, as they are written or spoken,” he wrote, “do not seem to play any role in my mechanism of thought. The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be ‘voluntarily’ reproduced and combined...this combinatory play seems to be the essential feature in productive thought— before there is any connection with logical construction in words or other kinds of signs which can be communicated to others.” (Einstein) </p>
</blockquote>

<p><strong>Contact us</strong></p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Check us out on youtube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
</ul>

<p>Send Ben an email asking him why god why over at incrementspodcast.com</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Ask us anything? Ask us everything! Back at it again with AUA Part 2/N. We wax poetic and wane dramatic on a number of subjects, including:</p>

<ul>
<li>Ben&#39;s dark and despicable hidden historicist tendencies</li>
<li>Expounding upon (one of our many) critiques of Bayesian Epistemology</li>
<li>Ben&#39;s total abandonment of all of his principles</li>
<li>Similarities and differences between human and computer decision making</li>
<li>What can the critical rationalist community learn from Effective Altruism?</li>
<li>Ben&#39;s new best friend Peter Turchin</li>
<li>How to have effective disagreements and not take gleeful petty jabs at friends and co-hosts.</li>
</ul>

<p><strong>Questions</strong></p>

<ol>
<li>(<strong>Michael</strong>) A critique of Bayesian epistemology is that it &quot;assigns scalars to feelings&quot; in an ungrounded way. It&#39;s not clear to me that the problem-solving approach of Deutsch and Popper avoid this, because even during the conjecture-refutation process, the person needs to at some point decide whether the current problem has been solved satisfactorily enough to move on to the next problem. How is this satisfaction determined, if not via summarizing one&#39;s internal belief as a scalar that surpasses some threshold? If not this (which is essentially assigning scalars to feelings), by what mechanism is a problem determined to be solved?</li>
<li>(<strong>Michael</strong>) Is the claim that &quot;humans create new choices whereas machines are constrained to choose within the event-space defined by the human&quot; equivalent to saying &quot;humans can perform abstraction while machines cannot?&quot; Not clear what &quot;create new choices&quot; means, given that humans are also constrained in their vocabulary (and thus their event-space of possible thoughts)</li>
<li>(<strong>Lulie</strong>) In what ways could the critical rationalist culture improve by looking to EA?</li>
<li>(<strong>Scott</strong>) What principles do the @IncrementsPod duo apply to navigating effective conversations involving deep disagreement?</li>
<li>(<strong>Scott</strong>) Are there any contexts where bayesianism has utility? (steelman)</li>
<li>(<strong>Scott</strong>) What is Vaden going to do post graduation?</li>
</ol>

<p><strong>Quotes</strong> </p>

<blockquote>
<p>“The words or the language, as they are written or spoken,” he wrote, “do not seem to play any role in my mechanism of thought. The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be ‘voluntarily’ reproduced and combined...this combinatory play seems to be the essential feature in productive thought— before there is any connection with logical construction in words or other kinds of signs which can be communicated to others.” (Einstein) </p>
</blockquote>

<p><strong>Contact us</strong></p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Check us out on youtube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
</ul>

<p>Send Ben an email asking him why god why over at incrementspodcast.com</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#46 (Bonus) - Arguing about probability (with Nick Anyos)</title>
  <link>https://www.incrementspodcast.com/46</link>
  <guid isPermaLink="false">4b26dbf2-7bcd-44e6-ac65-c3dbca70c897</guid>
  <pubDate>Mon, 19 Dec 2022 12:30:00 -0800</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/4b26dbf2-7bcd-44e6-ac65-c3dbca70c897.mp3" length="85872117" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Ben and Vaden make a guest appearance on Nick Anyos' podcast on criticisms of effective altruism. As usual, they end up arguing about probability for most of it. </itunes:subtitle>
  <itunes:duration>1:59:16</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/4/4b26dbf2-7bcd-44e6-ac65-c3dbca70c897/cover.jpg?v=1"/>
  <description>We make a guest appearance on Nick Anyos' podcast to talk about effective altruism, longtermism, and probability. Nick (very politely) pushes back on our anti-Bayesian credo, and we get deep into the weeds of probability and epistemology. 
You can find Nick's podcast on institutional design here (https://institutionaldesign.podbean.com/), and his substack here (https://institutionaldesign.substack.com/?utm_source=substack&amp;amp;utm_medium=web&amp;amp;utm_campaign=substack_profile). 
We discuss: 
- The lack of feedback loops in longtermism 
- Whether quantifying your beliefs is helpful 
- Objective versus subjective knowledge 
- The difference between prediction and explanation
- The difference between Bayesian epistemology and Bayesian statistics
- Statistical modelling and when statistics is useful 
Links
- Philosophy and the practice of Bayesian statistics (http://www.stat.columbia.edu/~gelman/research/published/philosophy.pdf) by Andrew Gelman and Cosma Shalizi
- EA forum post (https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations) showing all forecasts beyond a year out are uncalibrated. 
- Vaclav Smil quote where he predicts a pandemic by 2021:
     &amp;gt; The following realities indicate the imminence of the risk. The typical frequency of influenza pandemics was once every 50–60 years between 1700 and 1889 (the longest known gap was 52 years, between the pandemics of 1729–1733 and 1781–1782) and only once every 10–40 years since 1889. The recurrence interval, calculated simply as the mean time elapsed between the last six known pandemics, is about 28 years, with the extremes of 6 and 53 years. Adding the mean and the highest interval to 1968 gives a span between 1996 and 2021. We are, probabilistically speaking, very much inside a high-risk zone.
     &amp;gt; 
     &amp;gt; - Global Catastrophes and Trends, p.46
Reference for Tetlock's superforecasters failing to predict the pandemic. "On February 20th, Tetlock’s superforecasters predicted only a 3% chance that there would be 200,000+ coronavirus cases a month later (there were)." (https://wearenotsaved.com/2020/04/18/pandemic-uncovers-the-ridiculousness-of-superforecasting/) 
Contact us
- Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
- Check us out on youtube at https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ
- Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Errata
- At the beginning of the episode Vaden says he hasn't been interviewed on another podcast before. He forgot his appearance (https://www.thedeclarationonline.com/podcast/2019/7/23/chesto-and-vaden-debatecast) on The Declaration Podcast in 2019, which will be appearing as a bonus episode on our feed in the coming weeks. 
Sick of hearing us talk about this subject? Understandable! Send topic suggestions over to incrementspodcast@gmail.com. 
Photo credit: James O’Brien (http://www.obrien-studio.com/) for Quanta Magazine (https://www.quantamagazine.org/where-quantum-probability-comes-from-20190909/) 
</description>
  <itunes:keywords>probability, longtermism, effective altruism, bayesianism, statistics</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>We make a guest appearance on Nick Anyos&#39; podcast to talk about effective altruism, longtermism, and probability. Nick (very politely) pushes back on our anti-Bayesian credo, and we get deep into the weeds of probability and epistemology. </p>

<p>You can find Nick&#39;s podcast on institutional design <a href="https://institutionaldesign.podbean.com/" rel="nofollow">here</a>, and his substack <a href="https://institutionaldesign.substack.com/?utm_source=substack&utm_medium=web&utm_campaign=substack_profile" rel="nofollow">here</a>. </p>

<p><strong>We discuss:</strong> </p>

<ul>
<li>The lack of feedback loops in longtermism </li>
<li>Whether quantifying your beliefs is helpful </li>
<li>Objective versus subjective knowledge </li>
<li>The difference between prediction and explanation</li>
<li>The difference between Bayesian epistemology and Bayesian statistics</li>
<li>Statistical modelling and when statistics is useful </li>
</ul>

<p><strong>Links</strong></p>

<ul>
<li><a href="http://www.stat.columbia.edu/%7Egelman/research/published/philosophy.pdf" rel="nofollow">Philosophy and the practice of Bayesian statistics</a> by Andrew Gelman and Cosma Shalizi</li>
<li><a href="https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations" rel="nofollow">EA forum post</a> showing all forecasts beyond a year out are uncalibrated. </li>
<li><p>Vaclav Smil quote where he predicts a pandemic by 2021:</p>

<blockquote>
<p><em>The following realities indicate the imminence of the risk. The typical frequency of influenza pandemics was once every 50–60 years between 1700 and 1889 (the longest known gap was 52 years, between the pandemics of 1729–1733 and 1781–1782) and only once every 10–40 years since 1889. The recurrence interval, calculated simply as the mean time elapsed between the last six known pandemics, is about 28 years, with the extremes of 6 and 53 years. Adding the mean and the highest interval to 1968 gives a span between 1996 and 2021. We are, probabilistically speaking, very much inside a high-risk zone.</em></p>

<p><em>- Global Catastrophes and Trends, p.46</em></p>
</blockquote></li>
<li><p>Reference for Tetlock&#39;s superforecasters failing to predict the pandemic. <a href="https://wearenotsaved.com/2020/04/18/pandemic-uncovers-the-ridiculousness-of-superforecasting/" rel="nofollow">&quot;On February 20th, Tetlock’s superforecasters predicted only a 3% chance that there would be 200,000+ coronavirus cases a month later (there were).&quot;</a> </p></li>
</ul>

<p><strong>Contact us</strong></p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Check us out on youtube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
</ul>

<p><strong>Errata</strong></p>

<ul>
<li>At the beginning of the episode Vaden says he hasn&#39;t been interviewed on another podcast before. He forgot <a href="https://www.thedeclarationonline.com/podcast/2019/7/23/chesto-and-vaden-debatecast" rel="nofollow">his appearance</a> on The Declaration Podcast in 2019, which will be appearing as a bonus episode on our feed in the coming weeks. </li>
</ul>

<p>Sick of hearing us talk about this subject? Understandable! Send topic suggestions over to <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p>

<p>Photo credit: <a href="http://www.obrien-studio.com/" rel="nofollow">James O’Brien</a> for <a href="https://www.quantamagazine.org/where-quantum-probability-comes-from-20190909/" rel="nofollow">Quanta Magazine</a></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>We make a guest appearance on Nick Anyos&#39; podcast to talk about effective altruism, longtermism, and probability. Nick (very politely) pushes back on our anti-Bayesian credo, and we get deep into the weeds of probability and epistemology. </p>

<p>You can find Nick&#39;s podcast on institutional design <a href="https://institutionaldesign.podbean.com/" rel="nofollow">here</a>, and his substack <a href="https://institutionaldesign.substack.com/?utm_source=substack&utm_medium=web&utm_campaign=substack_profile" rel="nofollow">here</a>. </p>

<p><strong>We discuss:</strong> </p>

<ul>
<li>The lack of feedback loops in longtermism </li>
<li>Whether quantifying your beliefs is helpful </li>
<li>Objective versus subjective knowledge </li>
<li>The difference between prediction and explanation</li>
<li>The difference between Bayesian epistemology and Bayesian statistics</li>
<li>Statistical modelling and when statistics is useful </li>
</ul>

<p><strong>Links</strong></p>

<ul>
<li><a href="http://www.stat.columbia.edu/%7Egelman/research/published/philosophy.pdf" rel="nofollow">Philosophy and the practice of Bayesian statistics</a> by Andrew Gelman and Cosma Shalizi</li>
<li><a href="https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations" rel="nofollow">EA forum post</a> showing all forecasts beyond a year out are uncalibrated. </li>
<li><p>Vaclav Smil quote where he predicts a pandemic by 2021:</p>

<blockquote>
<p><em>The following realities indicate the imminence of the risk. The typical frequency of influenza pandemics was once every 50–60 years between 1700 and 1889 (the longest known gap was 52 years, between the pandemics of 1729–1733 and 1781–1782) and only once every 10–40 years since 1889. The recurrence interval, calculated simply as the mean time elapsed between the last six known pandemics, is about 28 years, with the extremes of 6 and 53 years. Adding the mean and the highest interval to 1968 gives a span between 1996 and 2021. We are, probabilistically speaking, very much inside a high-risk zone.</em></p>

<p><em>- Global Catastrophes and Trends, p.46</em></p>
</blockquote></li>
<li><p>Reference for Tetlock&#39;s superforecasters failing to predict the pandemic. <a href="https://wearenotsaved.com/2020/04/18/pandemic-uncovers-the-ridiculousness-of-superforecasting/" rel="nofollow">&quot;On February 20th, Tetlock’s superforecasters predicted only a 3% chance that there would be 200,000+ coronavirus cases a month later (there were).&quot;</a> </p></li>
</ul>

<p><strong>Contact us</strong></p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Check us out on youtube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
</ul>

<p><strong>Errata</strong></p>

<ul>
<li>At the beginning of the episode Vaden says he hasn&#39;t been interviewed on another podcast before. He forgot <a href="https://www.thedeclarationonline.com/podcast/2019/7/23/chesto-and-vaden-debatecast" rel="nofollow">his appearance</a> on The Declaration Podcast in 2019, which will be appearing as a bonus episode on our feed in the coming weeks. </li>
</ul>

<p>Sick of hearing us talk about this subject? Understandable! Send topic suggestions over to <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p>

<p>Photo credit: <a href="http://www.obrien-studio.com/" rel="nofollow">James O’Brien</a> for <a href="https://www.quantamagazine.org/where-quantum-probability-comes-from-20190909/" rel="nofollow">Quanta Magazine</a></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#29 - Some Scattered Thoughts on Superforecasting</title>
  <link>https://www.incrementspodcast.com/29</link>
  <guid isPermaLink="false">3cd18700-daac-4eb2-b515-e8022a526436</guid>
  <pubDate>Mon, 16 Aug 2021 14:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/3cd18700-daac-4eb2-b515-e8022a526436.mp3" length="33224972" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>We discuss Philip Tetlock's work on Superforecasting.</itunes:subtitle>
  <itunes:duration>45:20</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/3/3cd18700-daac-4eb2-b515-e8022a526436/cover.jpg?v=2"/>
  <description>We're back! Apologies for the delay, but Vaden got married and Ben was summoned to be an astronaut on the next billionaire's vacation to Venus. This week we're talking about how to forecast the future (with this one simple and easy trick! Astrologers hate them!). Specifically, we're diving into Philip Tetlock's work on Superforecasting (https://en.wikipedia.org/wiki/Superforecasting:_The_Art_and_Science_of_Prediction). 
So what's the deal? Is it possible to "harness the wisdom of the crowd to forecast world events" (https://en.wikipedia.org/wiki/The_Good_Judgment_Project)? Or is the whole thing just a result of sloppy statistics? We believe the latter is likely to be true with probability 64.9% - no, wait, 66.1%. 
Intro segment:
"The Sentience Debate": The moral value of shrimps, insects, and oysters (https://www.facebook.com/103405457813911/videos/254164216090604)
Relevant timestamps:
10:05: "Even if there's only a one in one hundred chance, or one in one thousand chance, that insects are sentient given current information, and if we're killing trillions or quadrillions of insects in ways that are preventable or avoidable or that we can in various ways mitigate that harm... then we should consider that possibility."
25:47: "If you're all going to work on pain in invertebrates, I pity you in many respects... In my previous work, I was used to running experiments and getting a clear answer, and I could say what these animals do and what they don't do. But when I started to think about what they might be feeling, you meet this frustration, that after maybe about 15 years of research, if someone asks me do they feel pain, my answer is 'maybe'... a strong 'maybe'... you cannot discount the possibility."
46:47: "It is not 100% clear to me that plants are non sentient. I do think that animals including insects are much more likely to be sentient than plants are, but I would not have a credence of zero that plants are sentient."
1:01:59:  "So the hard problem I would like to ask the panel is: If you were to compare the moral weight of one ant to the moral weight of one human, what ratio would you put? How much more is a human worth than an ant? 100:1? 1000:1? 10:1? Or maybe 1:1? ... Let's start with Jamie."
Main References:
Superforecasting: The Art and Science of Prediction - Wikipedia (https://en.wikipedia.org/wiki/Superforecasting:_The_Art_and_Science_of_Prediction)
How Policymakers Can Improve Crisis Planning (https://www.foreignaffairs.com/articles/united-states/2020-10-13/better-crystal-ball)
The Good Judgment Project - Wikipedia (https://en.wikipedia.org/wiki/The_Good_Judgment_Project)
Expert Political Judgment: How Good Is It? How Can We Know?: Tetlock, Philip E.: 9780691128719: Books - Amazon.ca (https://www.amazon.ca/Expert-Political-Judgment-Good-Know/dp/0691128715)
Additional references mentioned in the episode:
The Drunkard's Walk: How Randomness Rules Our Lives (https://en.wikipedia.org/wiki/The_Drunkard%27s_Walk)
The Black Swan: The Impact of the Highly Improbable - Wikipedia (https://en.wikipedia.org/wiki/The_Black_Swan:_The_Impact_of_the_Highly_Improbable)
Book Review: Superforecasting | Slate Star Codex (https://slatestarcodex.com/2016/02/04/book-review-superforecasting/)
Pandemic Uncovers the Limitations of Superforecasting – We Are Not Saved (https://wearenotsaved.com/2020/04/18/pandemic-uncovers-the-ridiculousness-of-superforecasting/)
My Final Case Against Superforecasting (with criticisms considered, objections noted, and assumptions buttressed) – We Are Not Saved (https://wearenotsaved.com/2020/05/30/my-final-case-against-superforecasting-with-criticisms-considered-objections-noted-and-assumptions-buttressed/)
Use your Good Judgement and send us an email at incrementspodcast@gmail.com.  
</description>
  <itunes:keywords>Superforecasting, Good Judgement Project, Philip Tetlock, Politics, Probability</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>We&#39;re back! Apologies for the delay, but Vaden got married and Ben was summoned to be an astronaut on the next billionaire&#39;s vacation to Venus. This week we&#39;re talking about how to forecast the future (with this one simple and easy trick! Astrologers <em>hate</em> them!). Specifically, we&#39;re diving into Philip Tetlock&#39;s work on <a href="https://en.wikipedia.org/wiki/Superforecasting:_The_Art_and_Science_of_Prediction" rel="nofollow">Superforecasting</a>. </p>

<p>So what&#39;s the deal? Is it possible to <a href="https://en.wikipedia.org/wiki/The_Good_Judgment_Project" rel="nofollow">&quot;harness the wisdom of the crowd to forecast world events&quot;</a>? Or is the whole thing just a result of sloppy statistics? We believe the latter is likely to be true with probability 64.9% - no, wait, 66.1%. </p>

<p><strong>Intro segment:</strong></p>

<p><a href="https://www.facebook.com/103405457813911/videos/254164216090604" rel="nofollow">&quot;The Sentience Debate&quot;: The moral value of shrimps, insects, and oysters</a></p>

<p>Relevant timestamps:</p>

<ul>
<li><strong>10:05:</strong> &quot;Even if there&#39;s only a one in one hundred chance, or one in one thousand chance, that insects are sentient given current information, and if we&#39;re killing trillions or quadrillions of insects in ways that are preventable or avoidable or that we can in various ways mitigate that harm... then we should consider that possibility.&quot;</li>
<li><strong>25:47:</strong> &quot;If you&#39;re all going to work on pain in invertebrates, I pity you in many respects... In my previous work, I was used to running experiments and getting a clear answer, and I could say what these animals do and what they don&#39;t do. But when I started to think about what they might be feeling, you meet this frustration, that after maybe about 15 years of research, if someone asks me do they feel pain, my answer is &#39;maybe&#39;... a strong &#39;maybe&#39;... you cannot discount the possibility.&quot;</li>
<li><strong>46:47:</strong> &quot;It is not 100% clear to me that plants are non sentient. I do think that animals including insects are much more likely to be sentient than plants are, but I would not have a credence of zero that plants are sentient.&quot;</li>
<li><strong>1:01:59:</strong>  &quot;So the hard problem I would like to ask the panel is: If you were to compare the moral weight of one ant to the moral weight of one human, what ratio would you put? How much more is a human worth than an ant? 100:1? 1000:1? 10:1? Or maybe 1:1? ... Let&#39;s start with Jamie.&quot;</li>
</ul>

<p><strong>Main References:</strong></p>

<ul>
<li><a href="https://en.wikipedia.org/wiki/Superforecasting:_The_Art_and_Science_of_Prediction" rel="nofollow">Superforecasting: The Art and Science of Prediction - Wikipedia</a></li>
<li><a href="https://www.foreignaffairs.com/articles/united-states/2020-10-13/better-crystal-ball" rel="nofollow">How Policymakers Can Improve Crisis Planning</a></li>
<li><a href="https://en.wikipedia.org/wiki/The_Good_Judgment_Project" rel="nofollow">The Good Judgment Project - Wikipedia</a></li>
<li><a href="https://www.amazon.ca/Expert-Political-Judgment-Good-Know/dp/0691128715" rel="nofollow">Expert Political Judgment: How Good Is It? How Can We Know?: Tetlock, Philip E.: 9780691128719: Books - Amazon.ca</a></li>
</ul>

<p>Additional references mentioned in the episode:</p>

<ul>
<li><a href="https://en.wikipedia.org/wiki/The_Drunkard%27s_Walk" rel="nofollow">The Drunkard&#39;s Walk: How Randomness Rules Our Lives</a></li>
<li><a href="https://en.wikipedia.org/wiki/The_Black_Swan:_The_Impact_of_the_Highly_Improbable" rel="nofollow">The Black Swan: The Impact of the Highly Improbable - Wikipedia</a></li>
<li><a href="https://slatestarcodex.com/2016/02/04/book-review-superforecasting/" rel="nofollow">Book Review: Superforecasting | Slate Star Codex</a></li>
<li><a href="https://wearenotsaved.com/2020/04/18/pandemic-uncovers-the-ridiculousness-of-superforecasting/" rel="nofollow">Pandemic Uncovers the Limitations of Superforecasting – We Are Not Saved</a></li>
<li><a href="https://wearenotsaved.com/2020/05/30/my-final-case-against-superforecasting-with-criticisms-considered-objections-noted-and-assumptions-buttressed/" rel="nofollow">My Final Case Against Superforecasting (with criticisms considered, objections noted, and assumptions buttressed) – We Are Not Saved</a></li>
</ul>

<p>Use your Good Judgement and send us an email at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>We&#39;re back! Apologies for the delay, but Vaden got married and Ben was summoned to be an astronaut on the next billionaire&#39;s vacation to Venus. This week we&#39;re talking about how to forecast the future (with this one simple and easy trick! Astrologers <em>hate</em> them!). Specifically, we&#39;re diving into Philip Tetlock&#39;s work on <a href="https://en.wikipedia.org/wiki/Superforecasting:_The_Art_and_Science_of_Prediction" rel="nofollow">Superforecasting</a>. </p>

<p>So what&#39;s the deal? Is it possible to <a href="https://en.wikipedia.org/wiki/The_Good_Judgment_Project" rel="nofollow">&quot;harness the wisdom of the crowd to forecast world events&quot;</a>? Or is the whole thing just a result of sloppy statistics? We believe the latter is likely to be true with probability 64.9% - no, wait, 66.1%. </p>

<p><strong>Intro segment:</strong></p>

<p><a href="https://www.facebook.com/103405457813911/videos/254164216090604" rel="nofollow">&quot;The Sentience Debate&quot;: The moral value of shrimps, insects, and oysters</a></p>

<p>Relevant timestamps:</p>

<ul>
<li><strong>10:05:</strong> &quot;Even if there&#39;s only a one in one hundred chance, or one in one thousand chance, that insects are sentient given current information, and if we&#39;re killing trillions or quadrillions of insects in ways that are preventable or avoidable or that we can in various ways mitigate that harm... then we should consider that possibility.&quot;</li>
<li><strong>25:47:</strong> &quot;If you&#39;re all going to work on pain in invertebrates, I pity you in many respects... In my previous work, I was used to running experiments and getting a clear answer, and I could say what these animals do and what they don&#39;t do. But when I started to think about what they might be feeling, you meet this frustration, that after maybe about 15 years of research, if someone asks me do they feel pain, my answer is &#39;maybe&#39;... a strong &#39;maybe&#39;... you cannot discount the possibility.&quot;</li>
<li><strong>46:47:</strong> &quot;It is not 100% clear to me that plants are non sentient. I do think that animals including insects are much more likely to be sentient than plants are, but I would not have a credence of zero that plants are sentient.&quot;</li>
<li><strong>1:01:59:</strong>  &quot;So the hard problem I would like to ask the panel is: If you were to compare the moral weight of one ant to the moral weight of one human, what ratio would you put? How much more is a human worth than an ant? 100:1? 1000:1? 10:1? Or maybe 1:1? ... Let&#39;s start with Jamie.&quot;</li>
</ul>

<p><strong>Main References:</strong></p>

<ul>
<li><a href="https://en.wikipedia.org/wiki/Superforecasting:_The_Art_and_Science_of_Prediction" rel="nofollow">Superforecasting: The Art and Science of Prediction - Wikipedia</a></li>
<li><a href="https://www.foreignaffairs.com/articles/united-states/2020-10-13/better-crystal-ball" rel="nofollow">How Policymakers Can Improve Crisis Planning</a></li>
<li><a href="https://en.wikipedia.org/wiki/The_Good_Judgment_Project" rel="nofollow">The Good Judgment Project - Wikipedia</a></li>
<li><a href="https://www.amazon.ca/Expert-Political-Judgment-Good-Know/dp/0691128715" rel="nofollow">Expert Political Judgment: How Good Is It? How Can We Know?: Tetlock, Philip E.: 9780691128719: Books - Amazon.ca</a></li>
</ul>

<p>Additional references mentioned in the episode:</p>

<ul>
<li><a href="https://en.wikipedia.org/wiki/The_Drunkard%27s_Walk" rel="nofollow">The Drunkard&#39;s Walk: How Randomness Rules Our Lives</a></li>
<li><a href="https://en.wikipedia.org/wiki/The_Black_Swan:_The_Impact_of_the_Highly_Improbable" rel="nofollow">The Black Swan: The Impact of the Highly Improbable - Wikipedia</a></li>
<li><a href="https://slatestarcodex.com/2016/02/04/book-review-superforecasting/" rel="nofollow">Book Review: Superforecasting | Slate Star Codex</a></li>
<li><a href="https://wearenotsaved.com/2020/04/18/pandemic-uncovers-the-ridiculousness-of-superforecasting/" rel="nofollow">Pandemic Uncovers the Limitations of Superforecasting – We Are Not Saved</a></li>
<li><a href="https://wearenotsaved.com/2020/05/30/my-final-case-against-superforecasting-with-criticisms-considered-objections-noted-and-assumptions-buttressed/" rel="nofollow">My Final Case Against Superforecasting (with criticisms considered, objections noted, and assumptions buttressed) – We Are Not Saved</a></li>
</ul>

<p>Use your Good Judgement and send us an email at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#25 - Mathematical Explanation with Mark Colyvan</title>
  <link>https://www.incrementspodcast.com/25</link>
  <guid isPermaLink="false">1a5864a9-d5d7-43af-b8d6-e78dcb1d90c3</guid>
  <pubDate>Mon, 24 May 2021 14:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/1a5864a9-d5d7-43af-b8d6-e78dcb1d90c3.mp3" length="61259231" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>We're joined by professor Mark Colyvan to talk about the philosophy of mathematics, logic, and thought experiments. </itunes:subtitle>
  <itunes:duration>2:07:37</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/cover.jpg?v=18"/>
  <description>We often talk of explanation in the context of empirical sciences, but what about explanation in logic and mathematics? Is there such a thing? If so, what does it look like and what are the consequences? In this episode we sit down with professor of philosophy Mark Colyvan and explore 
How mathematical explanation differs from explanation in the natural sciences
Counterfactual reasoning in mathematics 
Intra versus extra mathematical explanation 
Alternate logics 
Mathematical thought experiments 
The use of probability in the courtroom
References: 
- The Unreasonable Effectiveness of Mathematics in the Natural Sciences (https://www.maths.ed.ac.uk/~v1ranick/papers/wigner.pdf) by Eugene Wigner. 
- Proofs and Refutations (https://en.wikipedia.org/wiki/Proofs_and_Refutations#:~:text=Proofs%20and%20Refutations%3A%20The%20Logic,characteristic%20defined%20for%20the%20polyhedron.) by Imre Lakatos. 
Mark Colyvan (http://www.colyvan.com/) is a professor of philosophy at the University of Sydney, and a visiting professor (and, previously, Humboldt fellow) at Ludwig-Maximilians University in Munich. He has a wide array of research interests, including the philosophy of mathematics, philosophy of logic, decision theory, environmental philosophy, and ecology. He has authored three books: The Indispensability of Mathematics (Oxford University Press, 2001), Ecological Orbits: How Planets Move and Populations Grow (Oxford University Press, 2004, co-authored with Lev Ginzburg), and An Introduction to the Philosophy of Mathematics (Cambridge University Press, 2012).
 Special Guest: Mark Colyvan.
</description>
  <itunes:keywords>counterfactual, explanation, philosophy of mathematics, logic, thought experiments</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>We often talk of explanation in the context of empirical sciences, but what about explanation in logic and mathematics? Is there such a thing? If so, what does it look like and what are the consequences? In this episode we sit down with professor of philosophy Mark Colyvan and explore </p>

<ul>
<li>How mathematical explanation differs from explanation in the natural sciences</li>
<li>Counterfactual reasoning in mathematics </li>
<li>Intra versus extra mathematical explanation </li>
<li>Alternate logics </li>
<li>Mathematical thought experiments </li>
<li>The use of probability in the courtroom</li>
</ul>

<p>References: </p>

<ul>
<li><a href="https://www.maths.ed.ac.uk/%7Ev1ranick/papers/wigner.pdf" rel="nofollow">The Unreasonable Effectiveness of Mathematics in the Natural Sciences</a> by Eugene Wigner. </li>
<li><a href="https://en.wikipedia.org/wiki/Proofs_and_Refutations#:%7E:text=Proofs%20and%20Refutations%3A%20The%20Logic,characteristic%20defined%20for%20the%20polyhedron." rel="nofollow">Proofs and Refutations</a> by Imre Lakatos. </li>
</ul>

<p><em><a href="http://www.colyvan.com/" rel="nofollow">Mark Colyvan</a> is a professor of philosophy at the University of Sydney, and a visiting professor (and, previously, Humboldt fellow) at Ludwig-Maximilians University in Munich. He has a wide array of research interests, including the philosophy of mathematics, philosophy of logic, decision theory, environmental philosophy, and ecology. He has authored three books: The Indispensability of Mathematics (Oxford University Press, 2001), Ecological Orbits: How Planets Move and Populations Grow (Oxford University Press, 2004, co-authored with Lev Ginzburg), and An Introduction to the Philosophy of Mathematics (Cambridge University Press, 2012).</em></p><p>Special Guest: Mark Colyvan.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>We often talk of explanation in the context of empirical sciences, but what about explanation in logic and mathematics? Is there such a thing? If so, what does it look like and what are the consequences? In this episode we sit down with professor of philosophy Mark Colyvan and explore </p>

<ul>
<li>How mathematical explanation differs from explanation in the natural sciences</li>
<li>Counterfactual reasoning in mathematics </li>
<li>Intra versus extra mathematical explanation </li>
<li>Alternate logics </li>
<li>Mathematical thought experiments </li>
<li>The use of probability in the courtroom</li>
</ul>

<p>References: </p>

<ul>
<li><a href="https://www.maths.ed.ac.uk/%7Ev1ranick/papers/wigner.pdf" rel="nofollow">The Unreasonable Effectiveness of Mathematics in the Natural Sciences</a> by Eugene Wigner. </li>
<li><a href="https://en.wikipedia.org/wiki/Proofs_and_Refutations#:%7E:text=Proofs%20and%20Refutations%3A%20The%20Logic,characteristic%20defined%20for%20the%20polyhedron." rel="nofollow">Proofs and Refutations</a> by Imre Lakatos. </li>
</ul>

<p><em><a href="http://www.colyvan.com/" rel="nofollow">Mark Colyvan</a> is a professor of philosophy at the University of Sydney, and a visiting professor (and, previously, Humboldt fellow) at Ludwig-Maximilians University in Munich. He has a wide array of research interests, including the philosophy of mathematics, philosophy of logic, decision theory, environmental philosophy, and ecology. He has authored three books: The Indispensability of Mathematics (Oxford University Press, 2001), Ecological Orbits: How Planets Move and Populations Grow (Oxford University Press, 2004, co-authored with Lev Ginzburg), and An Introduction to the Philosophy of Mathematics (Cambridge University Press, 2012).</em></p><p>Special Guest: Mark Colyvan.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#19 - Against Longtermism FAQ</title>
  <link>https://www.incrementspodcast.com/19</link>
  <guid isPermaLink="false">Buzzsprout-7623718</guid>
  <pubDate>Mon, 01 Feb 2021 20:00:00 -0800</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/5b58b507-52f8-4dd7-8abd-471f6371691d.mp3" length="65372208" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle></itunes:subtitle>
  <itunes:duration>1:30:44</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/cover.jpg?v=18"/>
  <description>&lt;p&gt;Back in the ring for round two on longtermism! We (Ben somewhat drunkenly) respond to some of the criticism of episode #17 and our two essays (&lt;a href="https://forum.effectivealtruism.org/posts/2NJszbnBTwibfdpo7/strong-longtermism-irrefutability-and-moral-progress"&gt;Ben's&lt;/a&gt;, &lt;a href="https://vmasrani.github.io/blog/2020/against_longtermism/"&gt;Vaden's&lt;/a&gt;) We touch on: &lt;/p&gt;&lt;ul&gt;
&lt;li&gt;Ben's hate mail from his &lt;a href="https://medium.com/conjecture-magazine/the-dangers-of-cliodynamics-c48392b4a985"&gt;piece on cliodynamics&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Longtermism as implying altruistic portfolio shuffling&lt;/li&gt;
&lt;li&gt;What on earth is Bayesian epistemology &lt;/li&gt;
&lt;li&gt;&lt;a href="http://colyvan.com/papers/pasadena.pdf"&gt;The Pasadena game&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Authoritarianism and the danger of seeking perfection &lt;/li&gt;
&lt;li&gt;Arrow's theorem&lt;/li&gt;
&lt;li&gt;Alternative decision theories focusing on error correction &lt;/li&gt;
&lt;li&gt;What's the probability of nuclear war before 2100?&lt;/li&gt;
&lt;li&gt;When are models reliable &lt;/li&gt;
&lt;li&gt;What problems to work on &lt;/li&gt;
&lt;/ul&gt;&lt;p&gt;You will, dear listener, be either pleased or horrified to learn that this will not be our last foray into longtermism. It's like choose your own adventure ... except we're choosing the adventure, and the adventure is longtermism. Next stop is the &lt;a href="https://hearthisidea.com/"&gt;Hear this Idea podcast&lt;/a&gt;!&lt;br&gt;&lt;br&gt;Send us your best longterm prediction at incrementspodcast@gmail.com.&lt;/p&gt; 
</description>
  <itunes:keywords>longtermism</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Back in the ring for round two on longtermism! We (Ben somewhat drunkenly) respond to some of the criticism of episode #17 and our two essays (<a href='https://forum.effectivealtruism.org/posts/2NJszbnBTwibfdpo7/strong-longtermism-irrefutability-and-moral-progress'>Ben&apos;s</a>, <a href='https://vmasrani.github.io/blog/2020/against_longtermism/'>Vaden&apos;s</a>). We touch on: </p><ul><li>Ben&apos;s hate mail from his <a href='https://medium.com/conjecture-magazine/the-dangers-of-cliodynamics-c48392b4a985'>piece on cliodynamics</a></li><li>Longtermism as implying altruistic portfolio shuffling</li><li>What on earth is Bayesian epistemology </li><li><a href='http://colyvan.com/papers/pasadena.pdf'>The Pasadena game</a></li><li>Authoritarianism and the danger of seeking perfection </li><li>Arrow&apos;s theorem</li><li>Alternative decision theories focusing on error correction </li><li>What&apos;s the probability of nuclear war before 2100?</li><li>When are models reliable </li><li>What problems to work on </li></ul><p>You will, dear listener, be either pleased or horrified to learn that this will not be our last foray into longtermism. It&apos;s like choose your own adventure ... except we&apos;re choosing the adventure, and the adventure is longtermism. Next stop is the <a href='https://hearthisidea.com/'>Hear this Idea podcast</a>!<br/><br/>Send us your best longterm prediction at incrementspodcast@gmail.com.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Back in the ring for round two on longtermism! We (Ben somewhat drunkenly) respond to some of the criticism of episode #17 and our two essays (<a href='https://forum.effectivealtruism.org/posts/2NJszbnBTwibfdpo7/strong-longtermism-irrefutability-and-moral-progress'>Ben&apos;s</a>, <a href='https://vmasrani.github.io/blog/2020/against_longtermism/'>Vaden&apos;s</a>). We touch on: </p><ul><li>Ben&apos;s hate mail from his <a href='https://medium.com/conjecture-magazine/the-dangers-of-cliodynamics-c48392b4a985'>piece on cliodynamics</a></li><li>Longtermism as implying altruistic portfolio shuffling</li><li>What on earth is Bayesian epistemology </li><li><a href='http://colyvan.com/papers/pasadena.pdf'>The Pasadena game</a></li><li>Authoritarianism and the danger of seeking perfection </li><li>Arrow&apos;s theorem</li><li>Alternative decision theories focusing on error correction </li><li>What&apos;s the probability of nuclear war before 2100?</li><li>When are models reliable </li><li>What problems to work on </li></ul><p>You will, dear listener, be either pleased or horrified to learn that this will not be our last foray into longtermism. It&apos;s like choose your own adventure ... except we&apos;re choosing the adventure, and the adventure is longtermism. Next stop is the <a href='https://hearthisidea.com/'>Hear this Idea podcast</a>!<br/><br/>Send us your best longterm prediction at incrementspodcast@gmail.com.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#11 - Debating Existential Risk</title>
  <link>https://www.incrementspodcast.com/11</link>
  <guid isPermaLink="false">Buzzsprout-5475121</guid>
  <pubDate>Wed, 16 Sep 2020 16:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/4ed5459c-bf59-432a-966d-33c3dd5450f0.mp3" length="64654289" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle></itunes:subtitle>
  <itunes:duration>1:29:17</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/4/4ed5459c-bf59-432a-966d-33c3dd5450f0/cover.jpg?v=1"/>
  <description>&lt;p&gt;Vaden's arguments against Bayesian philosophy and existential risk are examined by someone who might actually know what they're talking about, i.e., not Ben. After writing a critique of our conversation in Episode 7, which started off &lt;a href="https://vmasrani.github.io/blog/2020/mauricio_first_response/"&gt;a series of blog posts&lt;/a&gt;, our good friend Mauricio (who studies political science, economics, and philosophy) kindly agrees to come on the podcast and try to figure out who's more confused. Does Vaden convert? &lt;br&gt;&lt;br&gt;
We apologize for the long wait between this episode and the last one. It was all Vaden's fault. &lt;br&gt;&lt;br&gt;Hit us up at &lt;em&gt;incrementspodcast@gmail.com&lt;/em&gt;!&lt;br&gt;&lt;br&gt;&lt;em&gt;Note from Vaden:  Upon relistening, I've just learned my new computer chair clicks in the most annoying possible way every time I get enthusiastic. My apologies - I'll work on being less enthusiastic in future episodes.  &lt;br&gt;&lt;br&gt;Second note from Vaden: Yeesh lots of audio issues with this episode - I replaced the file with a cleaned up version at 5:30pm September 17th. Still learning... &lt;br&gt;&lt;/em&gt;&lt;br&gt;&lt;/p&gt; 
</description>
  <itunes:keywords>existential risk, probability, bayesianism</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Vaden&apos;s arguments against Bayesian philosophy and existential risk are examined by someone who might actually know what they&apos;re talking about, i.e., not Ben. After writing a critique of our conversation in Episode 7, which started off <a href='https://vmasrani.github.io/blog/2020/mauricio_first_response/'>a series of blog posts</a>, our good friend Mauricio (who studies political science, economics, and philosophy) kindly agrees to come on the podcast and try to figure out who&apos;s more confused. Does Vaden convert? <br/><br/>
We apologize for the long wait between this episode and the last one. It was all Vaden&apos;s fault. <br/><br/>Hit us up at <em>incrementspodcast@gmail.com</em>!<br/><br/><em>Note from Vaden:  Upon relistening, I&apos;ve just learned my new computer chair clicks in the most annoying possible way every time I get enthusiastic. My apologies - I&apos;ll work on being less enthusiastic in future episodes.  <br/><br/>Second note from Vaden: Yeesh lots of audio issues with this episode - I replaced the file with a cleaned up version at 5:30pm September 17th. Still learning... <br/></em><br/></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Vaden&apos;s arguments against Bayesian philosophy and existential risk are examined by someone who might actually know what they&apos;re talking about, i.e., not Ben. After writing a critique of our conversation in Episode 7, which started off <a href='https://vmasrani.github.io/blog/2020/mauricio_first_response/'>a series of blog posts</a>, our good friend Mauricio (who studies political science, economics, and philosophy) kindly agrees to come on the podcast and try to figure out who&apos;s more confused. Does Vaden convert? <br/><br/>
We apologize for the long wait between this episode and the last one. It was all Vaden&apos;s fault. <br/><br/>Hit us up at <em>incrementspodcast@gmail.com</em>!<br/><br/><em>Note from Vaden:  Upon relistening, I&apos;ve just learned my new computer chair clicks in the most annoying possible way every time I get enthusiastic. My apologies - I&apos;ll work on being less enthusiastic in future episodes.  <br/><br/>Second note from Vaden: Yeesh lots of audio issues with this episode - I replaced the file with a cleaned up version at 5:30pm September 17th. Still learning... <br/></em><br/></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#8 - Philosophy of Probability III: Conjectures and Refutations</title>
  <link>https://www.incrementspodcast.com/8</link>
  <guid isPermaLink="false">Buzzsprout-4756712</guid>
  <pubDate>Tue, 28 Jul 2020 16:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/731a65a4-1cd7-48ee-9cb2-b34a81d168b2.mp3" length="51393073" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle></itunes:subtitle>
  <itunes:duration>1:10:52</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/7/731a65a4-1cd7-48ee-9cb2-b34a81d168b2/cover.jpg?v=1"/>
  <description>&lt;p&gt;On the same page at last! Ben comes to the philosophical confessional to announce his probabilistic sins. The Bayesians will be pissed (with high probability). At least Vaden doesn't make him kiss anything. After too much agreement and self-congratulation, Ben and Vaden conclude the mini-series on the philosophy of probability, and "announce" an upcoming mega-series on Conjectures and Refutations. &lt;br&gt;&lt;br&gt;&lt;br&gt;&lt;b&gt;References:&lt;/b&gt;&lt;br&gt;- &lt;a href="https://www.lesswrong.com/posts/Ti3Z7eZtud32LhGZT/my-bayesian-enlightenment"&gt;My Bayesian Enlightenment&lt;/a&gt; by Eliezer Yudkowsky&lt;br&gt;&lt;br&gt;&lt;b&gt;Rationalist community blogs:&lt;/b&gt;&lt;br&gt;- &lt;a href="https://www.lesswrong.com/"&gt;Less Wrong&lt;/a&gt;&lt;br&gt;- &lt;a href="https://slatestarcodex.com/"&gt;Slate Star Codex&lt;/a&gt;&lt;br&gt;- &lt;a href="https://marginalrevolution.com/"&gt;Marginal Revolution&lt;/a&gt;&lt;br&gt;&lt;br&gt;Yell at us at incrementspodcast@gmail.com. &lt;br&gt;&lt;br&gt;&lt;br&gt;&lt;br&gt;&lt;/p&gt; 
</description>
  <itunes:keywords>Karl Popper, conjectures and refutations, probability</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>On the same page at last! Ben comes to the philosophical confessional to announce his probabilistic sins. The Bayesians will be pissed (with high probability). At least Vaden doesn&apos;t make him kiss anything. After too much agreement and self-congratulation, Ben and Vaden conclude the mini-series on the philosophy of probability, and &quot;announce&quot; an upcoming mega-series on Conjectures and Refutations. <br/><br/><br/><b>References:</b><br/>- <a href='https://www.lesswrong.com/posts/Ti3Z7eZtud32LhGZT/my-bayesian-enlightenment'>My Bayesian Enlightenment</a> by Eliezer Yudkowsky<br/><br/><b>Rationalist community blogs:</b><br/>- <a href='https://www.lesswrong.com/'>Less Wrong</a><br/>- <a href='https://slatestarcodex.com/'>Slate Star Codex</a><br/>- <a href='https://marginalrevolution.com/'>Marginal Revolution</a><br/><br/>Yell at us at incrementspodcast@gmail.com. <br/><br/><br/><br/></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>On the same page at last! Ben comes to the philosophical confessional to announce his probabilistic sins. The Bayesians will be pissed (with high probability). At least Vaden doesn&apos;t make him kiss anything. After too much agreement and self-congratulation, Ben and Vaden conclude the mini-series on the philosophy of probability, and &quot;announce&quot; an upcoming mega-series on Conjectures and Refutations. <br/><br/><br/><b>References:</b><br/>- <a href='https://www.lesswrong.com/posts/Ti3Z7eZtud32LhGZT/my-bayesian-enlightenment'>My Bayesian Enlightenment</a> by Eliezer Yudkowsky<br/><br/><b>Rationalist community blogs:</b><br/>- <a href='https://www.lesswrong.com/'>Less Wrong</a><br/>- <a href='https://slatestarcodex.com/'>Slate Star Codex</a><br/>- <a href='https://marginalrevolution.com/'>Marginal Revolution</a><br/><br/>Yell at us at incrementspodcast@gmail.com. <br/><br/><br/><br/></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#6 - Philosophy of Probability I: Introduction</title>
  <link>https://www.incrementspodcast.com/6</link>
  <guid isPermaLink="false">Buzzsprout-4407194</guid>
  <pubDate>Wed, 01 Jul 2020 18:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/eeb49cea-deb7-4957-8f51-8d5f0949c799.mp3" length="55868881" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle></itunes:subtitle>
  <itunes:duration>1:17:05</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/e/eeb49cea-deb7-4957-8f51-8d5f0949c799/cover.jpg?v=1"/>
  <description>&lt;p&gt;Don't leave yet - we swear this will be more interesting than it sounds ... &lt;br&gt;&lt;br&gt;... But a drink will definitely help. Ben and Vaden dive into the interpretations behind probability. What do people mean when they use the word, and why do we use this one tool to describe different concepts? The rowdiness truly kicks in when Vaden releases his pent-up critique of Bayesianism, thereby losing both his friends and PhD position. But at least he's ingratiated himself with Karl Popper. &lt;br&gt;&lt;br&gt;&lt;b&gt;&lt;em&gt;References:&lt;/em&gt;&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://vmasrani.github.io/assets/popper_good.pdf"&gt;Vaden's  Slides&lt;/a&gt; on a 1975 &lt;a href="https://www.jstor.org/stable/20115014?seq=1#metadata_info_tab_contents"&gt;paper&lt;/a&gt; by Irving John Good titled &lt;em&gt;Explicativity, Corroboration, and the Relative Odds of Hypotheses&lt;/em&gt;. The paper is I.J. Good’s response to Karl Popper, and in the presentation I compare the two philosophers’ views on probability, epistemology, induction, simplicity, and content.&lt;/li&gt;
&lt;li&gt;&lt;a href="http://www.mrcc.uqam.ca/Publications/articles/deElia_MWR2005_.pdf"&gt;Diversity in Interpretations of Probability: Implications for Weather Forecasting&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Andrew Gelman, &lt;a href="http://www.stat.columbia.edu/~gelman/research/published/philosophy.pdf"&gt;Philosophy and the practice of Bayesian statistics&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Popper quote: &lt;em&gt;"Those who identify confirmation with probability must believe that a high degree of probability is desirable. They implicitly accept the rule: ‘Always choose the most probable hypothesis!’ Now it can be easily shown that this rule is equivalent to the following rule: ‘Always choose the hypothesis which goes as little beyond the evidence as possible!’ And this, in turn, can be shown to be equivalent, not only to ‘Always accept the hypothesis with the lowest content (within the limits of your task, for example, your task of predicting)!’, but also to ‘Always choose the hypothesis which has the highest degree of ad hoc character (within the limits of your task)!’" &lt;/em&gt;(Conjectures and Refutations p.391) &lt;/li&gt;
&lt;/ul&gt;&lt;p&gt;Get in touch at incrementspodcast@gmail.com.&lt;br&gt;&lt;br&gt;&lt;em&gt;audio updated 13/12/2020&lt;/em&gt;&lt;/p&gt; 
</description>
  <itunes:keywords>probability, bayesianism, frequency, induction, epistemology</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Don&apos;t leave yet - we swear this will be more interesting than it sounds ... <br/><br/>... But a drink will definitely help. Ben and Vaden dive into the interpretations behind probability. What do people mean when they use the word, and why do we use this one tool to describe different concepts? The rowdiness truly kicks in when Vaden releases his pent-up critique of Bayesianism, thereby losing both his friends and PhD position. But at least he&apos;s ingratiated himself with Karl Popper. <br/><br/><b><em>References:</em></b></p><ul><li><a href='https://vmasrani.github.io/assets/popper_good.pdf'>Vaden&apos;s Slides</a> on a 1975 <a href='https://www.jstor.org/stable/20115014?seq=1#metadata_info_tab_contents'>paper</a> by Irving John Good titled <em>Explicativity, Corroboration, and the Relative Odds of Hypotheses</em>. The paper is I.J. Good’s response to Karl Popper, and in the presentation I compare the two philosophers’ views on probability, epistemology, induction, simplicity, and content.</li><li><a href='http://www.mrcc.uqam.ca/Publications/articles/deElia_MWR2005_.pdf'>Diversity in Interpretations of Probability: Implications for Weather Forecasting</a></li><li>Andrew Gelman, <a href='http://www.stat.columbia.edu/~gelman/research/published/philosophy.pdf'>Philosophy and the practice of Bayesian statistics</a></li><li>Popper quote: <em>&quot;Those who identify confirmation with probability must believe that a high degree of probability is desirable. 
They implicitly accept the rule: ‘Always choose the most probable hypothesis!’ Now it can be easily shown that this rule is equivalent to the following rule: ‘Always choose the hypothesis which goes as little beyond the evidence as possible!’ And this, in turn, can be shown to be equivalent, not only to ‘Always accept the hypothesis with the lowest content (within the limits of your task, for example, your task of predicting)!’, but also to ‘Always choose the hypothesis which has the highest degree of ad hoc character (within the limits of your task)!’&quot; </em>(Conjectures and Refutations p.391) </li></ul><p>Get in touch at incrementspodcast@gmail.com.<br/><br/><em>audio updated 13/12/2020</em></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Don&apos;t leave yet - we swear this will be more interesting than it sounds ... <br/><br/>... But a drink will definitely help. Ben and Vaden dive into the interpretations behind probability. What do people mean when they use the word, and why do we use this one tool to describe different concepts? The rowdiness truly kicks in when Vaden releases his pent-up critique of Bayesianism, thereby losing both his friends and PhD position. But at least he&apos;s ingratiated himself with Karl Popper. <br/><br/><b><em>References:</em></b></p><ul><li><a href='https://vmasrani.github.io/assets/popper_good.pdf'>Vaden&apos;s Slides</a> on a 1975 <a href='https://www.jstor.org/stable/20115014?seq=1#metadata_info_tab_contents'>paper</a> by Irving John Good titled <em>Explicativity, Corroboration, and the Relative Odds of Hypotheses</em>. The paper is I.J. Good’s response to Karl Popper, and in the presentation I compare the two philosophers’ views on probability, epistemology, induction, simplicity, and content.</li><li><a href='http://www.mrcc.uqam.ca/Publications/articles/deElia_MWR2005_.pdf'>Diversity in Interpretations of Probability: Implications for Weather Forecasting</a></li><li>Andrew Gelman, <a href='http://www.stat.columbia.edu/~gelman/research/published/philosophy.pdf'>Philosophy and the practice of Bayesian statistics</a></li><li>Popper quote: <em>&quot;Those who identify confirmation with probability must believe that a high degree of probability is desirable. 
They implicitly accept the rule: ‘Always choose the most probable hypothesis!’ Now it can be easily shown that this rule is equivalent to the following rule: ‘Always choose the hypothesis which goes as little beyond the evidence as possible!’ And this, in turn, can be shown to be equivalent, not only to ‘Always accept the hypothesis with the lowest content (within the limits of your task, for example, your task of predicting)!’, but also to ‘Always choose the hypothesis which has the highest degree of ad hoc character (within the limits of your task)!’&quot; </em>(Conjectures and Refutations p.391) </li></ul><p>Get in touch at incrementspodcast@gmail.com.<br/><br/><em>audio updated 13/12/2020</em></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
