<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" encoding="UTF-8" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web02.fireside.fm</fireside:hostname>
    <fireside:genDate>Sat, 02 May 2026 12:25:30 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>Increments - Episodes Tagged with “Epistemology”</title>
    <link>https://www.incrementspodcast.com/tags/epistemology</link>
    <pubDate>Sat, 18 Apr 2026 17:00:00 -0700</pubDate>
    <description>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>Science, Philosophy, Epistemology, Mayhem</itunes:subtitle>
    <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
    <itunes:summary>Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. 
Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com. 
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/cover.jpg?v=18"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>Philosophy,Science,Ethics,Progress,Knowledge,Computer Science,Conversation,Error-Correction</itunes:keywords>
    <itunes:owner>
      <itunes:name>Ben Chugg and Vaden Masrani</itunes:name>
      <itunes:email>incrementspodcast@gmail.com</itunes:email>
    </itunes:owner>
<itunes:category text="Society &amp; Culture">
  <itunes:category text="Philosophy"/>
</itunes:category>
<itunes:category text="Science"/>
<item>
  <title>#101 (C&amp;R Chap 10, Part IV) - Was Popper Wrong about Verisimilitude?</title>
  <link>https://www.incrementspodcast.com/101</link>
  <guid isPermaLink="false">e06cffeb-8d9d-4301-bbf3-f758d27c089a</guid>
  <pubDate>Sat, 18 Apr 2026 17:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/e06cffeb-8d9d-4301-bbf3-f758d27c089a.mp3" length="74497489" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Conjectures and refutations, Chapter 10, Part 4 baby. What's the deal with corroboration and verisimilitude?</itunes:subtitle>
  <itunes:duration>1:17:01</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/e/e06cffeb-8d9d-4301-bbf3-f758d27c089a/cover.jpg?v=2"/>
  <description>Wasn't Popper a falsificationist? Then why did he try to develop ideas about corroboration and verisimilitude - the extent to which one theory is closer to the truth than another? Isn't this verging dangerously close to verificationist territory? 
In our fourth ep on Chapter 10 in C&amp;amp;R, we wrestle with Popper's treatment of verisimilitude, both the formal and informal versions. Did the project fail? Was Popper out of his mind? Does this invalidate everything?
We discuss
Murders with ball-peen hammers 
Walking the line between verification and falsification
Is science only after truth?
Verisimilitude and its formalization 
Why the formalization fails 
Popper's three requirements for the growth of knowledge
Popper's ratchet and the no ad-hoc rule 
Quotes
Like many other philosophers I am at times inclined to classify philosophers as belonging to two main groups—those with whom I disagree, and those who agree with me.
- C&amp;amp;R, page 309 
I shall give here a somewhat unsystematic list of six types of cases in which we should be inclined to say of a theory t1 that it is superseded by t2 in the sense that t2 seems—as far as we know—to correspond better to the facts than t1, in some sense or other.
- t2 makes more precise assertions than t1, and these more precise assertions stand up to more precise tests.
- t2 takes account of, and explains, more facts than t1 (which will include for example the above case that, other things being equal, t2’s assertions are more precise).
- t2 describes, or explains, the facts in more detail than t1.
- t2 has passed tests which t1 has failed to pass.
- t2 has suggested new experimental tests, not considered before t2 was designed (and not suggested by t1, and perhaps not even applicable to t1); and t2 has passed these tests.
- t2 has unified or connected various hitherto unrelated problems.
- C&amp;amp;R, page 315
Let me first say that I do not suggest that the explicit introduction of the idea of verisimilitude will lead to any changes in the theory of method. On the contrary, I think that my theory of testability or corroboration by empirical tests is the proper methodological counterpart to this new metalogical idea. The only improvement is one of clarification.
- C&amp;amp;R, page 318
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Become a patreon subscriber&amp;nbsp;here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations&amp;nbsp;here (https://ko-fi.com/increments).
Click dem like buttons on&amp;nbsp;youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
How many chromosomes does diethyl-methyl pentophosphate have, exactly? Tell us at incrementspodcast@gmail.com
</description>
  <itunes:keywords>popper, verisimilitude, falsification, verificationism, conjectures-and-refutations, epistemology</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Wasn&#39;t Popper a falsificationist? Then why did he try to develop ideas about corroboration and verisimilitude - the extent to which one theory is closer to the truth than another? Isn&#39;t this verging dangerously close to verificationist territory? </p>

<p>In our fourth ep on Chapter 10 in C&amp;R, we wrestle with Popper&#39;s treatment of verisimilitude, both the formal and informal versions. Did the project fail? Was Popper out of his mind? Does this invalidate everything?</p>

<h1>We discuss</h1>

<ul>
<li>Murders with ball-peen hammers </li>
<li>Walking the line between verification and falsification</li>
<li>Is science only after truth?</li>
<li>Verisimilitude and its formalization </li>
<li>Why the formalization fails </li>
<li>Popper&#39;s three requirements for the growth of knowledge</li>
<li>Popper&#39;s ratchet and the no ad-hoc rule </li>
</ul>

<h1>Quotes</h1>

<blockquote>
<p>Like many other philosophers I am at times inclined to classify philosophers as belonging to two main groups—those with whom I disagree, and those who agree with me.<br>
- C&amp;R, page 309 </p>

<p>I shall give here a somewhat unsystematic list of six types of cases in which we should be inclined to say of a theory t1 that it is superseded by t2 in the sense that t2 seems—as far as we know—to correspond better to the facts than t1, in some sense or other.</p>

<ul>
<li>t2 makes more precise assertions than t1, and these more precise assertions stand up to more precise tests.</li>
<li>t2 takes account of, and explains, more facts than t1 (which will include for example the above case that, other things being equal, t2’s assertions are more precise).</li>
<li>t2 describes, or explains, the facts in more detail than t1.</li>
<li>t2 has passed tests which t1 has failed to pass.</li>
<li>t2 has suggested new experimental tests, not considered before t2 was designed (and not suggested by t1, and perhaps not even applicable to t1); and t2 has passed these tests.</li>
<li>t2 has unified or connected various hitherto unrelated problems.</li>
</ul>

<p>- C&amp;R, page 315</p>

<p>Let me first say that I do not suggest that the explicit introduction of the idea of verisimilitude will lead to any changes in the theory of method. On the contrary, I think that my theory of testability or corroboration by empirical tests is the proper methodological counterpart to this new metalogical idea. The only improvement is one of clarification.<br>
- C&amp;R, page 318</p>
</blockquote>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Become a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>How many chromosomes does diethyl-methyl pentophosphate have, exactly? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Wasn&#39;t Popper a falsificationist? Then why did he try to develop ideas about corroboration and verisimilitude - the extent to which one theory is closer to the truth than another? Isn&#39;t this verging dangerously close to verificationist territory? </p>

<p>In our fourth ep on Chapter 10 in C&amp;R, we wrestle with Popper&#39;s treatment of verisimilitude, both the formal and informal versions. Did the project fail? Was Popper out of his mind? Does this invalidate everything?</p>

<h1>We discuss</h1>

<ul>
<li>Murders with ball-peen hammers </li>
<li>Walking the line between verification and falsification</li>
<li>Is science only after truth?</li>
<li>Verisimilitude and its formalization </li>
<li>Why the formalization fails </li>
<li>Popper&#39;s three requirements for the growth of knowledge</li>
<li>Popper&#39;s ratchet and the no ad-hoc rule </li>
</ul>

<h1>Quotes</h1>

<blockquote>
<p>Like many other philosophers I am at times inclined to classify philosophers as belonging to two main groups—those with whom I disagree, and those who agree with me.<br>
- C&amp;R, page 309 </p>

<p>I shall give here a somewhat unsystematic list of six types of cases in which we should be inclined to say of a theory t1 that it is superseded by t2 in the sense that t2 seems—as far as we know—to correspond better to the facts than t1, in some sense or other.</p>

<ul>
<li>t2 makes more precise assertions than t1, and these more precise assertions stand up to more precise tests.</li>
<li>t2 takes account of, and explains, more facts than t1 (which will include for example the above case that, other things being equal, t2’s assertions are more precise).</li>
<li>t2 describes, or explains, the facts in more detail than t1.</li>
<li>t2 has passed tests which t1 has failed to pass.</li>
<li>t2 has suggested new experimental tests, not considered before t2 was designed (and not suggested by t1, and perhaps not even applicable to t1); and t2 has passed these tests.</li>
<li>t2 has unified or connected various hitherto unrelated problems.</li>
</ul>

<p>- C&amp;R, page 315</p>

<p>Let me first say that I do not suggest that the explicit introduction of the idea of verisimilitude will lead to any changes in the theory of method. On the contrary, I think that my theory of testability or corroboration by empirical tests is the proper methodological counterpart to this new metalogical idea. The only improvement is one of clarification.<br>
- C&amp;R, page 318</p>
</blockquote>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Become a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>How many chromosomes does diethyl-methyl pentophosphate have, exactly? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#95 (C&amp;R Chap 10, Part II) - A Problem-First View of Scientific Progress </title>
  <link>https://www.incrementspodcast.com/95</link>
  <guid isPermaLink="false">189bdf89-18ae-4bfd-a90b-9adbaa2353d3</guid>
  <pubDate>Sat, 29 Nov 2025 13:00:00 -0800</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/189bdf89-18ae-4bfd-a90b-9adbaa2353d3.mp3" length="55671326" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>After unsuccessfully trying to resolve our dispute about Popper's theory of content, we're back for part II of Chapter 10 of the Conjectures and Refutations Series. </itunes:subtitle>
  <itunes:duration>57:59</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/1/189bdf89-18ae-4bfd-a90b-9adbaa2353d3/cover.jpg?v=1"/>
  <description>After a long hiatus where we both saw grief counsellors over our fight about Popper's theory of content in the last C&amp;amp;R episode, we are back. And we're ready to play nice ... for about 30 seconds until Vaden admits that two sentences from Popper changed his mind about something Ben had been arguing for literally years. 
But eventually, putting those disagreements aside, we return to the subject at hand: The Conjectures and Refutations Series: Chapter 10: Truth, Rationality, and the Growth of Scientific Knowledge (Part II). Here all goes smoothly. Just kidding, we start fighting about content again almost immediately. Where are the guests to break us up when you need them? 
We discuss
Why Vaden changed his mind about "all thought is problem solving" 
Something that rhymes with wero horship 
Is Popper sloppy when it comes to writing about probability and content? 
Is all modern data science based on the wrong idea? (Hint: No) 
Popper's problem-focused view of scientific progress 
How much formalization is too much? 
The difference between high verisimilitude and high probability 
Why do we value simplicity in science? 
Historical examples of science progressing via theories with increasing content 
Quotes
Consciousness, world 2, was presumably an evaluating and discerning consciousness, a problem-solving consciousness, right from the start. I have said of the animate part of the physical world 1 that all organisms are problem solvers. My basic assumption regarding world 2 is that this problem-solving activity of the animate part of world 1 resulted in the emergence of world 2, of the world of consciousness. But I do not mean by this that consciousness solves problems all the time, as I asserted of the organisms. On the contrary. The organisms are preoccupied with problem-solving day in, day out, but consciousness is not only concerned with the solving of problems, although that is its most important biological function. My hypothesis is that the original task of consciousness was to anticipate success and failure in problem-solving and to signal to the organism in the form of pleasure and pain whether it was on the right or wrong path to the solution of the problem.
- In Search of a Better World, p.17 (emphasis added) 
The criterion of potential satisfactoriness is thus testability, or improbability: only a highly testable or improbable theory is worth testing, and is actually (and not merely potentially) satisfactory if it withstands severe tests—especially those tests to which we could point as crucial for the theory before they were ever undertaken. 
- C&amp;amp;R, Chapter 10 
Consequently there is little merit in formalizing and elaborating a deductive system (intended for use as an empirical science) beyond the requirements of the task of criticizing and testing it, and of comparing it critically with competitors.
- C&amp;amp;R, Chapter 10 
Admittedly, our expectations, and thus our theories, may precede, historically, even our problems. Yet science starts only with problems. Problems crop up especially when we are disappointed in our expectations, or when our theories involve us in difficulties, in contradictions; and these may arise either within a theory, or between two different theories, or as the result of a clash between our theories and our observations.
- C&amp;amp;R, Chapter 10 
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Become a patreon subscriber&amp;nbsp;here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations&amp;nbsp;here (https://ko-fi.com/increments).
Click dem like buttons on&amp;nbsp;youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
Is "Ben and Vaden will fight about content" high or low probability? Tell us at incrementspodcast@gmail.com  
</description>
  <itunes:keywords>popper, philosophy of science, probability, epistemology, content, simplicity, verisimilitude</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>After a long hiatus where we both saw grief counsellors over our fight about Popper&#39;s theory of content in the last C&amp;R episode, we are back. And we&#39;re ready to play nice ... for about 30 seconds until Vaden admits that two sentences from Popper changed his mind about something Ben had been arguing for literally years. </p>

<p>But eventually, putting those disagreements aside, we return to the subject at hand: The Conjectures and Refutations Series: Chapter 10: Truth, Rationality, and the Growth of Scientific Knowledge (Part II). Here all goes smoothly. Just kidding, we start fighting about content again almost immediately. Where are the guests to break us up when you need them? </p>

<h1>We discuss</h1>

<ul>
<li>Why Vaden changed his mind about &quot;all thought is problem solving&quot; </li>
<li>Something that rhymes with wero horship </li>
<li>Is Popper sloppy when it comes to writing about probability and content? </li>
<li>Is all modern data science based on the wrong idea? (Hint: No) </li>
<li>Popper&#39;s problem-focused view of scientific progress </li>
<li>How much formalization is too much? </li>
<li>The difference between high verisimilitude and high probability </li>
<li>Why do we value simplicity in science? </li>
<li>Historical examples of science progressing via theories with increasing content </li>
</ul>

<h1>Quotes</h1>

<blockquote>
<p>Consciousness, world 2, was presumably <em>an evaluating and discerning consciousness</em>, a problem-solving consciousness, right from the start. I have said of the animate part of the physical world 1 that all organisms are problem solvers. My basic assumption regarding world 2 is that this problem-solving activity of the animate part of world 1 resulted in the emergence of world 2, of the world of consciousness. But I do not mean by this that consciousness solves problems all the time, as I asserted of the organisms. On the contrary. The organisms are preoccupied with problem-solving day in, day out, but consciousness <em>is not only concerned</em> with the solving of problems, although that is its most important biological function. <strong>My hypothesis is that the original task of consciousness was to anticipate success and failure in problem-solving and to signal to the organism in the form of pleasure and pain whether it was on the right or wrong path to the solution of the problem.</strong></p>

<ul>
<li>In Search of a Better World, p.17 (emphasis added) </li>
</ul>

<p>The criterion of potential satisfactoriness is thus testability, or improbability: only a highly testable or improbable theory is worth testing, and is actually (and not merely potentially) satisfactory if it withstands severe tests—especially those tests to which we could point as crucial for the theory before they were ever undertaken. <br>
- C&amp;R, Chapter 10 </p>

<p>Consequently there is little merit in formalizing and elaborating a deductive system (intended for use as an empirical science) beyond the requirements of the task of criticizing and testing it, and of comparing it critically with competitors.<br>
- C&amp;R, Chapter 10 </p>

<p>Admittedly, our expectations, and thus our theories, may precede, historically, even our problems. Yet science starts only with problems. Problems crop up especially when we are disappointed in our expectations, or when our theories involve us in difficulties, in contradictions; and these may arise either within a theory, or between two different theories, or as the result of a clash between our theories and our observations.<br>
- C&amp;R, Chapter 10 </p>
</blockquote>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Become a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>Is &quot;Ben and Vaden will fight about content&quot; high or low probability? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a> </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>After a long hiatus where we both saw grief counsellors over our fight about Popper&#39;s theory of content in the last C&amp;R episode, we are back. And we&#39;re ready to play nice ... for about 30 seconds until Vaden admits that two sentences from Popper changed his mind about something Ben had been arguing for literally years. </p>

<p>But eventually, putting those disagreements aside, we return to the subject at hand: The Conjectures and Refutations Series: Chapter 10: Truth, Rationality, and the Growth of Scientific Knowledge (Part II). Here all goes smoothly. Just kidding, we start fighting about content again almost immediately. Where are the guests to break us up when you need them? </p>

<h1>We discuss</h1>

<ul>
<li>Why Vaden changed his mind about &quot;all thought is problem solving&quot; </li>
<li>Something that rhymes with wero horship </li>
<li>Is Popper sloppy when it comes to writing about probability and content? </li>
<li>Is all modern data science based on the wrong idea? (Hint: No) </li>
<li>Popper&#39;s problem-focused view of scientific progress </li>
<li>How much formalization is too much? </li>
<li>The difference between high verisimilitude and high probability </li>
<li>Why do we value simplicity in science? </li>
<li>Historical examples of science progressing via theories with increasing content </li>
</ul>

<h1>Quotes</h1>

<blockquote>
<p>Consciousness, world 2, was presumably <em>an evaluating and discerning consciousness</em>, a problem-solving consciousness, right from the start. I have said of the animate part of the physical world 1 that all organisms are problem solvers. My basic assumption regarding world 2 is that this problem-solving activity of the animate part of world 1 resulted in the emergence of world 2, of the world of consciousness. But I do not mean by this that consciousness solves problems all the time, as I asserted of the organisms. On the contrary. The organisms are preoccupied with problem-solving day in, day out, but consciousness <em>is not only concerned</em> with the solving of problems, although that is its most important biological function. <strong>My hypothesis is that the original task of consciousness was to anticipate success and failure in problem-solving and to signal to the organism in the form of pleasure and pain whether it was on the right or wrong path to the solution of the problem.</strong></p>

<ul>
<li>In Search of a Better World, p.17 (emphasis added) </li>
</ul>

<p>The criterion of potential satisfactoriness is thus testability, or improbability: only a highly testable or improbable theory is worth testing, and is actually (and not merely potentially) satisfactory if it withstands severe tests—especially those tests to which we could point as crucial for the theory before they were ever undertaken. <br>
- C&amp;R, Chapter 10 </p>

<p>Consequently there is little merit in formalizing and elaborating a deductive system (intended for use as an empirical science) beyond the requirements of the task of criticizing and testing it, and of comparing it critically with competitors.<br>
- C&amp;R, Chapter 10 </p>

<p>Admittedly, our expectations, and thus our theories, may precede, historically, even our problems. Yet science starts only with problems. Problems crop up especially when we are disappointed in our expectations, or when our theories involve us in difficulties, in contradictions; and these may arise either within a theory, or between two different theories, or as the result of a clash between our theories and our observations.<br>
- C&amp;R, Chapter 10 </p>
</blockquote>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Become a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>Is &quot;Ben and Vaden will fight about content&quot; high or low probability? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a> </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#85 (Reaction) - On Confidence and Evidence: Reacting to Brett Hall and Peter Boghossian (Part 1) </title>
  <link>https://www.incrementspodcast.com/85</link>
  <guid isPermaLink="false">2411225d-dc31-4f0f-9907-cf386fc6e475</guid>
  <pubDate>Thu, 08 May 2025 20:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/2411225d-dc31-4f0f-9907-cf386fc6e475.mp3" length="81702284" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Reacting to a discussion about belief, confidence, and epistemology between Brett Hall and Peter Boghossian</itunes:subtitle>
  <itunes:duration>1:49:48</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/2/2411225d-dc31-4f0f-9907-cf386fc6e475/cover.jpg?v=4"/>
  <description>We all knew that Vaden would release his inner Youtube debate bro at some point. Well he finally paid Ben enough to do it, and here we are: our first reaction video. Today we're commenting on the video What's the most rational way to know? (https://www.youtube.com/watch?v=vNQlmVJxySc&amp;amp;t=3614s&amp;amp;ab_channel=CordialCuriosity), a discussion between Brett Hall and Peter Boghossian on the relationship between confidence and evidence. Are we overly confident in our ability to make reaction videos? Evidently. 
Check out more from Brett Hall here (https://www.bretthall.org/) and Peter Boghossian here (https://peterboghossian.com/). 
We discuss
What is the relationship between confidence and evidence? 
The "formal apparatus of science" vs the "sociology" of science 
Eddington's famous experiment 
Why confidence and belief can't be mathematized (But why they are useful nonetheless)
Confidence as a function of falsifying experiments
Bayesianism vs critical rationalism  
References
Paper discussing how it took the wider scientific community over 40 years (after Eddington's experiment!) to become convinced in the truth of general relativity: The 1919 measurement of the deflection of light (https://arxiv.org/abs/1409.7812)
Eddington's original paper (https://w.astro.berkeley.edu/~kalas/labs/documents/dyson1920.pdf)
Vaden and Brett's blog exchange (https://vmasrani.github.io/blog/2023/predicting-human-behaviour/) 
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Become a patreon subscriber&amp;nbsp;here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations&amp;nbsp;here (https://ko-fi.com/increments).
Click dem like buttons on&amp;nbsp;youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
Where were you last night, and why do you have condoms in your pocket? Tell us at incrementspodcast@gmail.com. 
</description>
  <itunes:keywords>epistemology, reaction video, confidence, belief, falsification</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>We all knew that Vaden would release his inner Youtube debate bro at some point. Well he finally paid Ben enough to do it, and here we are: our first reaction video. Today we&#39;re commenting on the video <a href="https://www.youtube.com/watch?v=vNQlmVJxySc&t=3614s&ab_channel=CordialCuriosity" rel="nofollow">What&#39;s the most rational way to know?</a>, a discussion between Brett Hall and Peter Boghossian on the relationship between confidence and evidence. Are we overly confident in our ability to make reaction videos? Evidently. </p>

<p>Check out more from Brett Hall <a href="https://www.bretthall.org/" rel="nofollow">here</a> and Peter Boghossian <a href="https://peterboghossian.com/" rel="nofollow">here</a>. </p>

<h1>We discuss</h1>

<ul>
<li>What is the relationship between confidence and evidence? </li>
<li>The &quot;formal apparatus of science&quot; vs the &quot;sociology&quot; of science </li>
<li>Eddington&#39;s famous experiment </li>
<li>Why confidence and belief can&#39;t be mathematized (but why they are useful nonetheless)</li>
<li>Confidence as a function of falsifying experiments</li>
<li>Bayesianism vs critical rationalism<br></li>
</ul>

<h1>References</h1>

<ul>
<li>Paper discussing how it took the wider scientific community over 40 years (after Eddington&#39;s experiment!) to become convinced of the truth of general relativity: <a href="https://arxiv.org/abs/1409.7812" rel="nofollow">The 1919 measurement of the deflection of light</a></li>
<li><a href="https://w.astro.berkeley.edu/%7Ekalas/labs/documents/dyson1920.pdf" rel="nofollow">Eddington&#39;s original paper</a></li>
<li><a href="https://vmasrani.github.io/blog/2023/predicting-human-behaviour/" rel="nofollow">Vaden and Brett&#39;s blog exchange</a> </li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Become a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>Where were you last night, and why do you have condoms in your pocket? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>We all knew that Vaden would release his inner Youtube debate bro at some point. Well he finally paid Ben enough to do it, and here we are: our first reaction video. Today we&#39;re commenting on the video <a href="https://www.youtube.com/watch?v=vNQlmVJxySc&t=3614s&ab_channel=CordialCuriosity" rel="nofollow">What&#39;s the most rational way to know?</a>, a discussion between Brett Hall and Peter Boghossian on the relationship between confidence and evidence. Are we overly confident in our ability to make reaction videos? Evidently. </p>

<p>Check out more from Brett Hall <a href="https://www.bretthall.org/" rel="nofollow">here</a> and Peter Boghossian <a href="https://peterboghossian.com/" rel="nofollow">here</a>. </p>

<h1>We discuss</h1>

<ul>
<li>What is the relationship between confidence and evidence? </li>
<li>The &quot;formal apparatus of science&quot; vs the &quot;sociology&quot; of science </li>
<li>Eddington&#39;s famous experiment </li>
<li>Why confidence and belief can&#39;t be mathematized (but why they are useful nonetheless)</li>
<li>Confidence as a function of falsifying experiments</li>
<li>Bayesianism vs critical rationalism<br></li>
</ul>

<h1>References</h1>

<ul>
<li>Paper discussing how it took the wider scientific community over 40 years (after Eddington&#39;s experiment!) to become convinced of the truth of general relativity: <a href="https://arxiv.org/abs/1409.7812" rel="nofollow">The 1919 measurement of the deflection of light</a></li>
<li><a href="https://w.astro.berkeley.edu/%7Ekalas/labs/documents/dyson1920.pdf" rel="nofollow">Eddington&#39;s original paper</a></li>
<li><a href="https://vmasrani.github.io/blog/2023/predicting-human-behaviour/" rel="nofollow">Vaden and Brett&#39;s blog exchange</a> </li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Become a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>Where were you last night, and why do you have condoms in your pocket? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#76 (Bonus) - Is P(doom) meaningful? Debating epistemology (w/ Liron Shapira) </title>
  <link>https://www.incrementspodcast.com/76</link>
  <guid isPermaLink="false">c2b5df9d-ecb4-43d0-9e80-a713495335d8</guid>
  <pubDate>Fri, 08 Nov 2024 14:30:00 -0800</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/c2b5df9d-ecb4-43d0-9e80-a713495335d8.mp3" length="98349666" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>We were invited onto Liron Shapira's "Doom debates" to discuss Bayesian versus Popperian epistemology, AI doom, and superintelligence. Unsurprisingly, we got about one third of the way through the first subject ... </itunes:subtitle>
  <itunes:duration>2:50:58</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/c/c2b5df9d-ecb4-43d0-9e80-a713495335d8/cover.jpg?v=2"/>
  <description>&lt;p&gt;Liron Shapira, host of Doom Debates, invited us on to discuss Popperian versus Bayesian epistemology and whether we're worried about AI doom. As one might expect knowing us, we only got about halfway through the first subject, so get yourselves ready (presumably with many drinks) for part II in a few weeks! The era of Ben and Vaden's rowdy youtube debates has begun. Vaden is jubilant, Ben is uncomfortable, and the world has never been more annoyed by Popperians. &lt;/p&gt;

&lt;p&gt;Follow Liron on twitter (@liron) and check out the Doom Debates &lt;a href="https://www.youtube.com/@DoomDebates" target="_blank" rel="nofollow noopener"&gt;youtube channel&lt;/a&gt; and &lt;a href="https://podcasts.apple.com/us/podcast/doom-debates/id1751366208" target="_blank" rel="nofollow noopener"&gt;podcast&lt;/a&gt;.  &lt;/p&gt;

We discuss

&lt;ul&gt;
&lt;li&gt;Whether we're concerned about AI doom &lt;/li&gt;
&lt;li&gt;Bayesian reasoning versus Popperian reasoning &lt;/li&gt;
&lt;li&gt;Whether it makes sense to put numbers on all your beliefs &lt;/li&gt;
&lt;li&gt;Solomonoff induction &lt;/li&gt;
&lt;li&gt;Objective vs subjective Bayesianism &lt;/li&gt;
&lt;li&gt;Prediction markets and superforecasting &lt;/li&gt;
&lt;/ul&gt;

References

&lt;ul&gt;
&lt;li&gt;Vaden's blog post on Cox's Theorem and Yudkowsky's claims of "Laws of Rationality": &lt;a href="https://vmasrani.github.io/blog/2021/the_credence_assumption/" target="_blank" rel="nofollow noopener"&gt;https://vmasrani.github.io/blog/2021/the_credence_assumption/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Disproof of probabilistic induction (including Solomonoff induction): &lt;a href="https://arxiv.org/abs/2107.00749" target="_blank" rel="nofollow noopener"&gt;https://arxiv.org/abs/2107.00749&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;EA post Vaden mentioned regarding predictions being uncalibrated more than one year out: &lt;a href="https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations" target="_blank" rel="nofollow noopener"&gt;https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Article by Gavin Leech and Misha Yagudin on the reliability of forecasters: &lt;a href="https://ifp.org/can-policymakers-trust-forecasters/" target="_blank" rel="nofollow noopener"&gt;https://ifp.org/can-policymakers-trust-forecasters/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Superforecaster p(doom) is ~1%: &lt;a href="https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/#:%7E:text=Domain%20experts%20in%20AI%20estimated,by%202100%20(around%2090%25)" target="_blank" rel="nofollow noopener"&gt;https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/#:~:text=Domain%20experts%20in%20AI%20estimated,by%202100%20(around%2090%25)&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;The existential risk persuasion tournament: &lt;a href="https://www.astralcodexten.com/p/the-extinction-tournament" target="_blank" rel="nofollow noopener"&gt;https://www.astralcodexten.com/p/the-extinction-tournament&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Some more info in Ben's article on superforecasting: &lt;a href="https://benchugg.com/writing/superforecasting/" target="_blank" rel="nofollow noopener"&gt;https://benchugg.com/writing/superforecasting/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Slides on Content vs Probability: &lt;a href="https://vmasrani.github.io/assets/pdf/popper_good.pdf" target="_blank" rel="nofollow noopener"&gt;https://vmasrani.github.io/assets/pdf/popper_good.pdf&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

Socials

&lt;ul&gt;
&lt;li&gt;Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron&lt;/li&gt;
&lt;li&gt;Come join our discord server! DM us on twitter or send us an email to get a supersecret link&lt;/li&gt;
&lt;li&gt;Trust in the reverend Bayes and get exclusive bonus content by becoming a patreon subscriber &lt;a href="https://www.patreon.com/Increments" target="_blank" rel="nofollow noopener"&gt;here&lt;/a&gt;. Or give us one-time cash donations to help cover our lack of cash donations &lt;a href="https://ko-fi.com/increments" target="_blank" rel="nofollow noopener"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Click dem like buttons on &lt;a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" target="_blank" rel="nofollow noopener"&gt;youtube&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What's your credence that the second debate is as fun as the first? Tell us at &lt;a href="mailto:incrementspodcast@gmail.com" target="_blank" rel="nofollow noopener"&gt;incrementspodcast@gmail.com&lt;/a&gt; &lt;br&gt;
 Special Guest: Liron Shapira.&lt;/p&gt;
</description>
  <itunes:keywords>AI, belief, Popper, Bayes, epistemology, prediction, induction</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Liron Shapira, host of Doom Debates, invited us on to discuss Popperian versus Bayesian epistemology and whether we&#39;re worried about AI doom. As one might expect knowing us, we only got about halfway through the first subject, so get yourselves ready (presumably with many drinks) for part II in a few weeks! The era of Ben and Vaden&#39;s rowdy youtube debates has begun. Vaden is jubilant, Ben is uncomfortable, and the world has never been more annoyed by Popperians. </p>

<p>Follow Liron on twitter (@liron) and check out the Doom Debates <a href="https://www.youtube.com/@DoomDebates" rel="nofollow">youtube channel</a> and <a href="https://podcasts.apple.com/us/podcast/doom-debates/id1751366208" rel="nofollow">podcast</a>.  </p>

<h1>We discuss</h1>

<ul>
<li>Whether we&#39;re concerned about AI doom </li>
<li>Bayesian reasoning versus Popperian reasoning </li>
<li>Whether it makes sense to put numbers on all your beliefs </li>
<li>Solomonoff induction </li>
<li>Objective vs subjective Bayesianism </li>
<li>Prediction markets and superforecasting </li>
</ul>

<h1>References</h1>

<ul>
<li>Vaden&#39;s blog post on Cox&#39;s Theorem and Yudkowsky&#39;s claims of &quot;Laws of Rationality&quot;: <a href="https://vmasrani.github.io/blog/2021/the_credence_assumption/" rel="nofollow">https://vmasrani.github.io/blog/2021/the_credence_assumption/</a></li>
<li>Disproof of probabilistic induction (including Solomonoff induction): <a href="https://arxiv.org/abs/2107.00749" rel="nofollow">https://arxiv.org/abs/2107.00749</a> </li>
<li>EA post Vaden mentioned regarding predictions being uncalibrated more than one year out: <a href="https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations" rel="nofollow">https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations</a></li>
<li>Article by Gavin Leech and Misha Yagudin on the reliability of forecasters: <a href="https://ifp.org/can-policymakers-trust-forecasters/" rel="nofollow">https://ifp.org/can-policymakers-trust-forecasters/</a></li>
<li>Superforecaster p(doom) is ~1%: <a href="https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/#:%7E:text=Domain%20experts%20in%20AI%20estimated,by%202100%20(around%2090%25)" rel="nofollow">https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/#:~:text=Domain%20experts%20in%20AI%20estimated,by%202100%20(around%2090%25)</a>.</li>
<li>The existential risk persuasion tournament: <a href="https://www.astralcodexten.com/p/the-extinction-tournament" rel="nofollow">https://www.astralcodexten.com/p/the-extinction-tournament</a></li>
<li>Some more info in Ben&#39;s article on superforecasting: <a href="https://benchugg.com/writing/superforecasting/" rel="nofollow">https://benchugg.com/writing/superforecasting/</a></li>
<li>Slides on Content vs Probability: <a href="https://vmasrani.github.io/assets/pdf/popper_good.pdf" rel="nofollow">https://vmasrani.github.io/assets/pdf/popper_good.pdf</a></li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Trust in the reverend Bayes and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>What&#39;s your credence that the second debate is as fun as the first? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a> </p><p>Special Guest: Liron Shapira.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Liron Shapira, host of Doom Debates, invited us on to discuss Popperian versus Bayesian epistemology and whether we&#39;re worried about AI doom. As one might expect knowing us, we only got about halfway through the first subject, so get yourselves ready (presumably with many drinks) for part II in a few weeks! The era of Ben and Vaden&#39;s rowdy youtube debates has begun. Vaden is jubilant, Ben is uncomfortable, and the world has never been more annoyed by Popperians. </p>

<p>Follow Liron on twitter (@liron) and check out the Doom Debates <a href="https://www.youtube.com/@DoomDebates" rel="nofollow">youtube channel</a> and <a href="https://podcasts.apple.com/us/podcast/doom-debates/id1751366208" rel="nofollow">podcast</a>.  </p>

<h1>We discuss</h1>

<ul>
<li>Whether we&#39;re concerned about AI doom </li>
<li>Bayesian reasoning versus Popperian reasoning </li>
<li>Whether it makes sense to put numbers on all your beliefs </li>
<li>Solomonoff induction </li>
<li>Objective vs subjective Bayesianism </li>
<li>Prediction markets and superforecasting </li>
</ul>

<h1>References</h1>

<ul>
<li>Vaden&#39;s blog post on Cox&#39;s Theorem and Yudkowsky&#39;s claims of &quot;Laws of Rationality&quot;: <a href="https://vmasrani.github.io/blog/2021/the_credence_assumption/" rel="nofollow">https://vmasrani.github.io/blog/2021/the_credence_assumption/</a></li>
<li>Disproof of probabilistic induction (including Solomonoff induction): <a href="https://arxiv.org/abs/2107.00749" rel="nofollow">https://arxiv.org/abs/2107.00749</a> </li>
<li>EA post Vaden mentioned regarding predictions being uncalibrated more than one year out: <a href="https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations" rel="nofollow">https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations</a></li>
<li>Article by Gavin Leech and Misha Yagudin on the reliability of forecasters: <a href="https://ifp.org/can-policymakers-trust-forecasters/" rel="nofollow">https://ifp.org/can-policymakers-trust-forecasters/</a></li>
<li>Superforecaster p(doom) is ~1%: <a href="https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/#:%7E:text=Domain%20experts%20in%20AI%20estimated,by%202100%20(around%2090%25)" rel="nofollow">https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/#:~:text=Domain%20experts%20in%20AI%20estimated,by%202100%20(around%2090%25)</a>.</li>
<li>The existential risk persuasion tournament: <a href="https://www.astralcodexten.com/p/the-extinction-tournament" rel="nofollow">https://www.astralcodexten.com/p/the-extinction-tournament</a></li>
<li>Some more info in Ben&#39;s article on superforecasting: <a href="https://benchugg.com/writing/superforecasting/" rel="nofollow">https://benchugg.com/writing/superforecasting/</a></li>
<li>Slides on Content vs Probability: <a href="https://vmasrani.github.io/assets/pdf/popper_good.pdf" rel="nofollow">https://vmasrani.github.io/assets/pdf/popper_good.pdf</a></li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @liron</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Trust in the reverend Bayes and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>What&#39;s your credence that the second debate is as fun as the first? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a> </p><p>Special Guest: Liron Shapira.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#74 - Disagreeing about Belief, Probability, and Truth (w/ David Deutsch)</title>
  <link>https://www.incrementspodcast.com/74</link>
  <guid isPermaLink="false">03508f9b-3a2a-4b15-9b23-fe30083b431b</guid>
  <pubDate>Tue, 01 Oct 2024 09:30:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/03508f9b-3a2a-4b15-9b23-fe30083b431b.mp3" length="88784483" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>We talk with David Deutsch about whether the concept of belief is a useful lens on human cognition, when probability and statistics are actually useful, and whether he disagrees with Karl Popper about the truth. </itunes:subtitle>
  <itunes:duration>1:32:02</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/0/03508f9b-3a2a-4b15-9b23-fe30083b431b/cover.jpg?v=9"/>
  <description>What do you do when one of your intellectual idols comes on the podcast? Bombard them with disagreements, of course. We were thrilled to have David Deutsch on the podcast to discuss whether the concept of belief is a useful lens on human cognition, when probability and statistics should be deployed, and whether he disagrees with Karl Popper on abstractions, the truth, and nothing but the truth. 
Follow David on Twitter (@DavidDeutschOxf) or find his website here (https://www.daviddeutsch.org.uk/). 
We discuss
Whether belief is a fruitful lens through which to analyze ideas 
Whether a non-quantitative form of belief can be defended 
How does belief bottom out epistemologically? 
Whether statistics and probability are useful 
Where should statistics and probability be used in practice? 
The Popper-Miller theorem
Statements vs propositions and their relevance for truth 
Whether Popper and Deutsch disagree about truth 
References
The Popper-Miller theorem. See the original paper (https://www.nature.com/articles/302687a0) 
David's 2021 talk on the correspondence theory of truth (https://www.youtube.com/watch?v=DZ-opI-jghs) 
David's talk on physics without probability (https://www.youtube.com/watch?v=wfzSE4Hoxbc). 
Hempel's paradox (https://en.wikipedia.org/wiki/Raven_paradox) 
The Beginning of Infinity (https://www.amazon.com/Beginning-Infinity-Explanations-Transform-World/dp/0143121359)
Knowledge and the Body-Mind Problem (https://www.amazon.ca/Knowledge-Body-Mind-Problem-Defence-Interaction/dp/0415135567)
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @DavidDeutschOxf
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Believe in us and get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
What's the truth about your belief on the probability of useful statistics? Tell us over at incrementspodcast@gmail.com.  Special Guest: David Deutsch.
</description>
  <itunes:keywords>probability, statistics, truth, belief, epistemology, certainty, mathematics</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>What do you do when one of your intellectual idols comes on the podcast? Bombard them with disagreements, of course. We were thrilled to have David Deutsch on the podcast to discuss whether the concept of belief is a useful lens on human cognition, when probability and statistics should be deployed, and whether he disagrees with Karl Popper on abstractions, the truth, and nothing but the truth. </p>

<p>Follow David on Twitter (@DavidDeutschOxf) or find his website <a href="https://www.daviddeutsch.org.uk/" rel="nofollow">here</a>. </p>

<h1>We discuss</h1>

<ul>
<li>Whether belief is a fruitful lens through which to analyze ideas </li>
<li>Whether a non-quantitative form of belief can be defended </li>
<li>How does belief bottom out epistemologically? </li>
<li>Whether statistics and probability are useful </li>
<li>Where should statistics and probability be used in practice? </li>
<li>The Popper-Miller theorem</li>
<li>Statements vs propositions and their relevance for truth </li>
<li>Whether Popper and Deutsch disagree about truth </li>
</ul>

<h1>References</h1>

<ul>
<li>The Popper-Miller theorem. See the <a href="https://www.nature.com/articles/302687a0" rel="nofollow">original paper</a> </li>
<li>David&#39;s 2021 talk on the <a href="https://www.youtube.com/watch?v=DZ-opI-jghs" rel="nofollow">correspondence theory of truth</a> </li>
<li>David&#39;s talk on <a href="https://www.youtube.com/watch?v=wfzSE4Hoxbc" rel="nofollow">physics without probability</a>. </li>
<li><a href="https://en.wikipedia.org/wiki/Raven_paradox" rel="nofollow">Hempel&#39;s paradox</a> </li>
<li><a href="https://www.amazon.com/Beginning-Infinity-Explanations-Transform-World/dp/0143121359" rel="nofollow">The Beginning of Infinity</a></li>
<li><a href="https://www.amazon.ca/Knowledge-Body-Mind-Problem-Defence-Interaction/dp/0415135567" rel="nofollow">Knowledge and the Body-Mind Problem</a></li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @DavidDeutschOxf</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Believe in us and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>What&#39;s the truth about your belief on the probability of useful statistics? Tell us over at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p><p>Special Guest: David Deutsch.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>What do you do when one of your intellectual idols comes on the podcast? Bombard them with disagreements, of course. We were thrilled to have David Deutsch on the podcast to discuss whether the concept of belief is a useful lens on human cognition, when probability and statistics should be deployed, and whether he disagrees with Karl Popper on abstractions, the truth, and nothing but the truth. </p>

<p>Follow David on Twitter (@DavidDeutschOxf) or find his website <a href="https://www.daviddeutsch.org.uk/" rel="nofollow">here</a>. </p>

<h1>We discuss</h1>

<ul>
<li>Whether belief is a fruitful lens through which to analyze ideas </li>
<li>Whether a non-quantitative form of belief can be defended </li>
<li>How does belief bottom out epistemologically? </li>
<li>Whether statistics and probability are useful </li>
<li>Where should statistics and probability be used in practice? </li>
<li>The Popper-Miller theorem</li>
<li>Statements vs propositions and their relevance for truth </li>
<li>Whether Popper and Deutsch disagree about truth </li>
</ul>

<h1>References</h1>

<ul>
<li>The Popper-Miller theorem. See the <a href="https://www.nature.com/articles/302687a0" rel="nofollow">original paper</a> </li>
<li>David&#39;s 2021 talk on the <a href="https://www.youtube.com/watch?v=DZ-opI-jghs" rel="nofollow">correspondence theory of truth</a> </li>
<li>David&#39;s talk on <a href="https://www.youtube.com/watch?v=wfzSE4Hoxbc" rel="nofollow">physics without probability</a>. </li>
<li><a href="https://en.wikipedia.org/wiki/Raven_paradox" rel="nofollow">Hempel&#39;s paradox</a> </li>
<li><a href="https://www.amazon.com/Beginning-Infinity-Explanations-Transform-World/dp/0143121359" rel="nofollow">The Beginning of Infinity</a></li>
<li><a href="https://www.amazon.ca/Knowledge-Body-Mind-Problem-Defence-Interaction/dp/0415135567" rel="nofollow">Knowledge and the Body-Mind Problem</a></li>
</ul>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani, @DavidDeutschOxf</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Believe in us and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>What&#39;s the truth about your belief on the probability of useful statistics? Tell us over at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>. </p><p>Special Guest: David Deutsch.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#62 (Bonus) - The Principle of Optimism (Vaden on the Theory of Anything Podcast) </title>
  <link>https://www.incrementspodcast.com/62</link>
  <guid isPermaLink="false">db9bb47c-e74e-43aa-b7e6-9f3550e239ab</guid>
  <pubDate>Wed, 31 Jan 2024 19:15:00 -0800</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/db9bb47c-e74e-43aa-b7e6-9f3550e239ab.mp3" length="54937324" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>Listen to Vaden's dulcet tones on Bruce Nielson's Theory of Anything Podcast discussing the principle of optimism. </itunes:subtitle>
  <itunes:duration>2:45:37</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/d/db9bb47c-e74e-43aa-b7e6-9f3550e239ab/cover.jpg?v=1"/>
  <description>Vaden has selfishly gone on vacation with his family, leaving beloved listeners to fend for themselves in the wide world of epistemological confusion. To repair some of the damage, we're releasing an episode of The Theory of Anything Podcast from last June in which Vaden contributed to a roundtable discussion on the principle of optimism. Featuring Bruce Nielson, Peter Johansen, Sam Kuypers, Hervé Eulacia, Micah Redding, Bill Rugolsky, and Daniel Buchfink. Enjoy! 
From The Theory of Anything Podcast description: Are all evils due to a lack of knowledge? Are all interesting problems soluble? ALL the problems, really?!?! And what exactly is meant by interesting? Also, should “good guys” ignore the precautionary principle, and do they always win? What is the difference between cynicism, pessimism, and skepticism? And why is pessimism so attractive to so many humans? 
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Help us solve problems and get exclusive bonus content by becoming a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments).
Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ)
Which unsolvable problem would you most like to solve? Send your answer via quantum tunneling to incrementspodcast@gmail.com
 Special Guests: Bruce Nielson and Sam Kuypers.
</description>
  <itunes:keywords>optimism, physics, epistemology, progress, constraints</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Vaden has selfishly gone on vacation with his family, leaving beloved listeners to fend for themselves in the wide world of epistemological confusion. To repair some of the damage, we&#39;re releasing an episode of The Theory of Anything Podcast from last June in which Vaden contributed to a roundtable discussion on the principle of optimism. Featuring Bruce Nielson, Peter Johansen, Sam Kuypers, Hervé Eulacia, Micah Redding, Bill Rugolsky, and Daniel Buchfink. Enjoy! </p>

<p><strong>From The Theory of Anything Podcast description:</strong> Are all evils due to a lack of knowledge? Are all interesting problems soluble? ALL the problems, really?!?! And what exactly is meant by interesting? Also, should “good guys” ignore the precautionary principle, and do they always win? What is the difference between cynicism, pessimism, and skepticism? And why is pessimism so attractive to so many humans? </p>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Help us solve problems and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>Which unsolvable problem would you most like to solve? Send your answer via quantum tunneling to <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p>Special Guests: Bruce Nielson and Sam Kuypers.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Vaden has selfishly gone on vacation with his family, leaving beloved listeners to fend for themselves in the wide world of epistemological confusion. To repair some of the damage, we&#39;re releasing an episode of The Theory of Anything Podcast from last June in which Vaden contributed to a roundtable discussion on the principle of optimism. Featuring Bruce Nielson, Peter Johansen, Sam Kuypers, Hervé Eulacia, Micah Redding, Bill Rugolsky, and Daniel Buchfink. Enjoy! </p>

<p><strong>From The Theory of Anything Podcast description:</strong> Are all evils due to a lack of knowledge? Are all interesting problems soluble? ALL the problems, really?!?! And what exactly is meant by interesting? Also, should “good guys” ignore the precautionary principle, and do they always win? What is the difference between cynicism, pessimism, and skepticism? And why is pessimism so attractive to so many humans? </p>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Help us solve problems and get exclusive bonus content by becoming a patreon subscriber <a href="https://www.patreon.com/Increments" rel="nofollow">here</a>. Or give us one-time cash donations to help cover our lack of cash donations <a href="https://ko-fi.com/increments" rel="nofollow">here</a>.</li>
<li>Click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube</a></li>
</ul>

<p>Which unsolvable problem would you most like to solve? Send your answer via quantum tunneling to <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a></p><p>Special Guests: Bruce Nielson and Sam Kuypers.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#54 - Ask Us Anything III: Emotional Epistemology</title>
  <link>https://www.incrementspodcast.com/54</link>
  <guid isPermaLink="false">d8df9bc8-2935-4592-b1b3-db3aea025b55</guid>
  <pubDate>Mon, 18 Sep 2023 12:30:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/d8df9bc8-2935-4592-b1b3-db3aea025b55.mp3" length="75308720" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>The third of infinite installments in our ask us anything series. We touch on universality, emotions, epistemology, and whether all thinking is problem solving. </itunes:subtitle>
  <itunes:duration>1:18:26</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/d/d8df9bc8-2935-4592-b1b3-db3aea025b55/cover.jpg?v=1"/>
  <description>Back again with AUA #3 - we're getting there people! Only, uhh, seven questions to go? Incremental progress baby. Plus, we see a good old Vaden and Ben fight in this one! Thank God, because things were getting a little stale with Vaden hammering on longtermism and Ben on cliodynamics. We cover: 
Is hypnosis a real thing?
Types of universality contained within the genetic code 
Pressures associated with turning political/philosophical ideas into personal identities 
How do emotions/feelings interface with our rational/logical mind? How should they? 
Vaden's (hopefully one-off) experience with Bipolar Type-1 and psychosis
Is problem solving the sole purpose of thinking? Vaden says yes (with many caveats!) and Ben says wtf no you fool. Then we argue about how to watch TV.
Questions
(Neil Hudson) Are there any theories as to the type of universality achievable via the genetic code (in BOI it is presumed to fall short of coding for all possible life forms)?
(Neil Hudson) Wd be gd to get your take on: riffing on the Sperber/Mercier social thesis v. individual, if one is scarce private space/time then the need to constantly avow one’s public identity may “swamp” the critical evaluation of arguments one hears? Goes to seeking truth v status
(Arun Kannan) What are your thoughts on inexplicit knowledge (David Deutsch jargon) and more broadly emotions/feelings in the mind ? How do these interplay with explicit ideas / thoughts ? What should we prioritize ? If we don't prioritize one over the other, how to resolve conflicts between them ? Any tips, literature, Popperian wisdom you can share on this ?
(Tom Nassis) Is the sole purpose of all forms of thinking problem-solving? Or can thinking have purposes other than solving a problem?
Quotes
Reach always has an explanation. But this time, to the best of my knowledge, the explanation is not yet known. If the reason for the jump in reach was that it was a jump to universality, what was the universality? The genetic code is presumably not universal for specifying life forms, since it relies on specific types of chemicals, such as proteins. Could it be a universal constructor? Perhaps. It does manage to build with inorganic materials sometimes, such as the calcium phosphate in bones, or the magnetite in the navigation system inside a pigeon’s brain. Biotechnologists are already using it to manufacture hydrogen and to extract uranium from seawater. It can also program organisms to perform constructions outside their bodies: birds build nests; beavers build dams. Perhaps it would be possible to specify, in the genetic code, an organism whose life cycle includes building a nuclear-powered spaceship. Or perhaps not. I guess it has some lesser, and not yet understood, universality.
In 1994 the computer scientist and molecular biologist Leonard Adleman designed and built a computer composed of DNA together with some simple enzymes, and demonstrated that it was capable of performing some sophisticated computations. At the time, Adleman’s DNA computer was arguably the fastest computer in the world. Further, it was clear that a universal classical computer could be made in a similar way. Hence we know that, whatever that other universality of the DNA system was, the universality of computation had also been inherent in it for billions of years, without ever being used – until Adleman used it.
Beginning of Infinity, p.158 (emph added) 
References
Derren Brown makes people forget their stop (https://www.youtube.com/watch?v=6kSq7dPlw0A)
Bari Weiss's conversation (https://open.spotify.com/episode/2WvW8VnfzwIM155NcFXwe5) with Freddie deBoer on psychosis, bipolar, and mental health. This conversation addresses the New York Times article (https://www.nytimes.com/2022/05/17/magazine/antipsychotic-medications-mental-health.html) which views having schizophrenia, bipolar, etc. as no better or worse than not having schizophrenia, bipolar, etc. Also contains Vaden's favorite euphemism of 2022: "Nonconsensus Realities"
Sad existentialist cat (https://www.youtube.com/watch?v=pBjU3Ii7lfs)
Send Vaden an email with a thought you have not designed to solve a problem at incrementspodcast.com 
Socials
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
Toss us some coin over hur (patreon subscription approach (https://www.patreon.com/Increments/posts) or the ko-fi, just give us cash you animal approach (https://ko-fi.com/increments)), and click dem like buttons on youtube over hur (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ). 
</description>
  <itunes:keywords>ask-us-anything, universality, emotions, epistemology, problem-solving, thinking</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Back again with AUA #3 - we&#39;re getting there people! Only, uhh, seven questions to go? Incremental progress baby. Plus, we see a good old Vaden and Ben fight in this one! Thank God, because things were getting a little stale with Vaden hammering on longtermism and Ben on cliodynamics. We cover: </p>

<ul>
<li>Is hypnosis a real thing?</li>
<li>Types of universality contained within the genetic code </li>
<li>Pressures associated with turning political/philosophical ideas into personal identities </li>
<li>How do emotions/feelings interface with our rational/logical mind? How <em>should</em> they? </li>
<li>Vaden&#39;s (hopefully one-off) experience with Bipolar Type-1 and psychosis</li>
<li>Is problem solving the sole purpose of thinking? Vaden says yes (with many caveats!) and Ben says wtf no you fool. Then we argue about how to watch TV.</li>
</ul>

<h1>Questions</h1>

<ol>
<li><p><strong>(Neil Hudson)</strong> Are there any theories as to the type of universality achievable via the genetic code (in BOI it is presumed to fall short of coding for all possible life forms)?</p></li>
<li><p><strong>(Neil Hudson)</strong> Wd be gd to get your take on: riffing on the Sperber/Mercier social thesis v. individual, if one is scarce private space/time then the need to constantly avow one’s public identity may “swamp” the critical evaluation of arguments one hears? Goes to seeking truth v status</p></li>
<li><p><strong>(Arun Kannan)</strong> What are your thoughts on inexplicit knowledge (David Deutsch jargon) and more broadly emotions/feelings in the mind ? How do these interplay with explicit ideas / thoughts ? What should we prioritize ? If we don&#39;t prioritize one over the other, how to resolve conflicts between them ? Any tips, literature, Popperian wisdom you can share on this ?</p></li>
<li><p><strong>(Tom Nassis)</strong> Is the sole purpose of all forms of thinking problem-solving? Or can thinking have purposes other than solving a problem?</p></li>
</ol>

<h1>Quotes</h1>

<blockquote>
<p><em>Reach always has an explanation. But this time, to the best of my knowledge, the explanation is not yet known. If the reason for the jump in reach was that it was a jump to universality, what was the universality? The genetic code is presumably not universal <strong>for specifying life forms</strong>, since it relies on specific types of chemicals, such as proteins. Could it be a universal constructor? Perhaps. It does manage to build with inorganic materials sometimes, such as the calcium phosphate in bones, or the magnetite in the navigation system inside a pigeon’s brain. Biotechnologists are already using it to manufacture hydrogen and to extract uranium from seawater. It can also program organisms to perform constructions outside their bodies: birds build nests; beavers build dams. <strong>Perhaps it would be possible to specify, in the genetic code, an organism whose life cycle includes building a nuclear-powered spaceship. Or perhaps not. I guess it has some lesser, and not yet understood, universality.</strong></em></p>

<p><em>In 1994 the computer scientist and molecular biologist Leonard Adleman designed and built a computer composed of DNA together with some simple enzymes, and demonstrated that it was capable of performing some sophisticated computations. At the time, Adleman’s DNA computer was arguably the fastest computer in the world. Further, it was clear that a universal classical computer could be made in a similar way. <strong>Hence we know that, whatever that other universality of the DNA system was, the universality of computation had also been inherent in it for billions of years, without ever being used – until Adleman used it.</strong></em></p>

<p>Beginning of Infinity, p.158 (emph added) </p>
</blockquote>

<h1>References</h1>

<ul>
<li><a href="https://www.youtube.com/watch?v=6kSq7dPlw0A" rel="nofollow">Derren Brown makes people forget their stop</a></li>
<li>Bari Weiss&#39;s <a href="https://open.spotify.com/episode/2WvW8VnfzwIM155NcFXwe5" rel="nofollow">conversation</a> with Freddie deBoer on psychosis, bipolar, and mental health. This conversation addresses the New York Times <a href="https://www.nytimes.com/2022/05/17/magazine/antipsychotic-medications-mental-health.html" rel="nofollow">article</a> which views having schizophrenia, bipolar, etc. as no better or worse than not having schizophrenia, bipolar, etc. Also contains Vaden&#39;s favorite euphemism of 2022: &quot;Nonconsensus Realities&quot;</li>
<li><a href="https://www.youtube.com/watch?v=pBjU3Ii7lfs" rel="nofollow">Sad existentialist cat</a></li>
</ul>

<p>Send Vaden an email with a thought you have not designed to solve a problem at incrementspodcast.com </p>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Toss us some coin over hur (<a href="https://www.patreon.com/Increments/posts" rel="nofollow">patreon subscription approach</a> or the <a href="https://ko-fi.com/increments" rel="nofollow">ko-fi, just give us cash you animal approach</a>), and click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube over hur</a>. </li>
</ul><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Back again with AUA #3 - we&#39;re getting there people! Only, uhh, seven questions to go? Incremental progress baby. Plus, we see a good old Vaden and Ben fight in this one! Thank God, because things were getting a little stale with Vaden hammering on longtermism and Ben on cliodynamics. We cover: </p>

<ul>
<li>Is hypnosis a real thing?</li>
<li>Types of universality contained within the genetic code </li>
<li>Pressures associated with turning political/philosophical ideas into personal identities </li>
<li>How do emotions/feelings interface with our rational/logical mind? How <em>should</em> they? </li>
<li>Vaden&#39;s (hopefully one-off) experience with Bipolar Type-1 and psychosis</li>
<li>Is problem solving the sole purpose of thinking? Vaden says yes (with many caveats!) and Ben says wtf no you fool. Then we argue about how to watch TV.</li>
</ul>

<h1>Questions</h1>

<ol>
<li><p><strong>(Neil Hudson)</strong> Are there any theories as to the type of universality achievable via the genetic code (in BOI it is presumed to fall short of coding for all possible life forms)?</p></li>
<li><p><strong>(Neil Hudson)</strong> Wd be gd to get your take on: riffing on the Sperber/Mercier social thesis v. individual, if one is scarce private space/time then the need to constantly avow one’s public identity may “swamp” the critical evaluation of arguments one hears? Goes to seeking truth v status</p></li>
<li><p><strong>(Arun Kannan)</strong> What are your thoughts on inexplicit knowledge (David Deutsch jargon) and more broadly emotions/feelings in the mind ? How do these interplay with explicit ideas / thoughts ? What should we prioritize ? If we don&#39;t prioritize one over the other, how to resolve conflicts between them ? Any tips, literature, Popperian wisdom you can share on this ?</p></li>
<li><p><strong>(Tom Nassis)</strong> Is the sole purpose of all forms of thinking problem-solving? Or can thinking have purposes other than solving a problem?</p></li>
</ol>

<h1>Quotes</h1>

<blockquote>
<p><em>Reach always has an explanation. But this time, to the best of my knowledge, the explanation is not yet known. If the reason for the jump in reach was that it was a jump to universality, what was the universality? The genetic code is presumably not universal <strong>for specifying life forms</strong>, since it relies on specific types of chemicals, such as proteins. Could it be a universal constructor? Perhaps. It does manage to build with inorganic materials sometimes, such as the calcium phosphate in bones, or the magnetite in the navigation system inside a pigeon’s brain. Biotechnologists are already using it to manufacture hydrogen and to extract uranium from seawater. It can also program organisms to perform constructions outside their bodies: birds build nests; beavers build dams. <strong>Perhaps it would be possible to specify, in the genetic code, an organism whose life cycle includes building a nuclear-powered spaceship. Or perhaps not. I guess it has some lesser, and not yet understood, universality.</strong></em></p>

<p><em>In 1994 the computer scientist and molecular biologist Leonard Adleman designed and built a computer composed of DNA together with some simple enzymes, and demonstrated that it was capable of performing some sophisticated computations. At the time, Adleman’s DNA computer was arguably the fastest computer in the world. Further, it was clear that a universal classical computer could be made in a similar way. <strong>Hence we know that, whatever that other universality of the DNA system was, the universality of computation had also been inherent in it for billions of years, without ever being used – until Adleman used it.</strong></em></p>

<p>Beginning of Infinity, p.158 (emph added) </p>
</blockquote>

<h1>References</h1>

<ul>
<li><a href="https://www.youtube.com/watch?v=6kSq7dPlw0A" rel="nofollow">Derren Brown makes people forget their stop</a></li>
<li>Bari Weiss&#39;s <a href="https://open.spotify.com/episode/2WvW8VnfzwIM155NcFXwe5" rel="nofollow">conversation</a> with Freddie deBoer on psychosis, bipolar, and mental health. This conversation addresses the New York Times <a href="https://www.nytimes.com/2022/05/17/magazine/antipsychotic-medications-mental-health.html" rel="nofollow">article</a> which views having schizophrenia, bipolar, etc. as no better or worse than not having schizophrenia, bipolar, etc. Also contains Vaden&#39;s favorite euphemism of 2022: &quot;Nonconsensus Realities&quot;</li>
<li><a href="https://www.youtube.com/watch?v=pBjU3Ii7lfs" rel="nofollow">Sad existentialist cat</a></li>
</ul>

<p>Send Vaden an email with a thought you have not designed to solve a problem at incrementspodcast.com </p>

<h1>Socials</h1>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Come join our discord server! DM us on twitter or send us an email to get a supersecret link</li>
<li>Toss us some coin over hur (<a href="https://www.patreon.com/Increments/posts" rel="nofollow">patreon subscription approach</a> or the <a href="https://ko-fi.com/increments" rel="nofollow">ko-fi, just give us cash you animal approach</a>), and click dem like buttons on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">youtube over hur</a>. </li>
</ul><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#51 - Truth, Moose, and Refrigerated Eggplant: Critiquing Chapman's Meta-Rationality</title>
  <link>https://www.incrementspodcast.com/51</link>
  <guid isPermaLink="false">bdd4d364-d829-4857-abc8-d121dccdaf5a</guid>
  <pubDate>Mon, 29 May 2023 04:30:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/bdd4d364-d829-4857-abc8-d121dccdaf5a.mp3" length="69211532" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>We discuss David Chapman's work on nebulosity, the correspondence theory of truth, and how it relates to Karl Popper's epistemology. </itunes:subtitle>
  <itunes:duration>1:12:05</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/b/bdd4d364-d829-4857-abc8-d121dccdaf5a/cover.jpg?v=1"/>
  <description>Vaden comes out swinging against David Chapman's work on meta-rationality. Is Chapman pointing out a fatal flaw, or has Popper solved these problems long ago? Do moose see cups? Does Ben see cups? What the f*** is a cup? 
We discuss 
- Chapman's concept of nebulosity 
- Whether this concept is covered by Popper 
- The relationship of nebulosity and the vagueness of language 
- The correspondence theory of truth 
- If the concept of "problem situation" saves us from Chapman's critique 
- Why "conjecture and criticism" isn't everything 
References
- The excellent Do Explain (https://doexplain.buzzsprout.com/) podcast. Go listen, right now!
- In the cells of the eggplant (https://metarationality.com/), David Chapman
- Chapman's website (https://meaningness.com/about-my-sites)
- Jake Orthwein on Do Explain (https://www.youtube.com/watch?v=irmwL97zGcM&amp;ab_channel=DoExplainwithChristoferL%C3%B6vgren), Part I 
Chapman Quotes 
Reasonableness is not interested in universality. It aims to get practical work done in specific situations. Precise definitions and absolute truths are rarely necessary or helpful for that. Is this thing an eggplant? Depends on what you are trying to do with it. Is there water in the refrigerator? Well, what do you want it for? What counts as baldness, fruit, red, or water depends on your purposes, and on all sorts of details of the situation. Those details are so numerous and various that they can’t all be taken into account ahead of time to make a general formal theory. Any factor might matter in some situation. On the other hand, nearly all are irrelevant in any specific situation, so determining whether the water in an eggplant counts, or if Alain is bald, is usually easy.
David Chapman, When will you go bald? (https://metarationality.com/vagueness)
Do cow hairs that have come out of the follicle but that are stuck to the cow by friction, sweat, or blood count as part of the cow? How about ones that are on the verge of falling out, but are stuck in the follicle by only the weakest of bonds? The reasonable answer is “Dude! It doesn’t matter!”
David Chapman, Objects, objectively (https://metarationality.com/objective-objects)
We use words as tools to get things done; and to get things done, we improvise, making use of whatever materials are ready to hand. If you want to whack a piece of sheet metal to bend it, and don’t know or care what the “right” tool is (if there even is one), you might take a quick look around the garage, grab a large screwdriver at the “wrong” end, and hit the target with its hard rubber handle. A hand tool may have one or two standard uses; some less common but pretty obvious ones; and unusual, creative ones. But these are not clearly distinct categories of usage.
David Chapman, The purpose of meaning (https://metarationality.com/purpose-of-meaning)
Popper Quotes 
Observation is always selective. It needs a chosen object, a definite task, an interest, a point of view, a problem. And its description presupposes a descriptive language, with property words; it presupposes similarity and classification, which in their turn presuppose interests, points of view, and problems. ‘A hungry animal’, writes Katz,  ‘divides the environment into edible and inedible things. An animal in flight sees roads to escape and hiding places . . . Generally speaking, objects change . . . according to the needs of the animal.’ We may add that objects can be classified, and can become similar or dissimilar, only in this way—by being related to needs and interests. This rule applies not only to animals but also to scientists. For the animal a point of view is provided by its needs, the task of the moment, and its expectations; for the scientist by his theoretical interests, the special problem under investigation, his conjectures and anticipations, and the theories which he accepts as a kind of background: his frame of reference, his "horizon of expectations".
Conjectures and Refutations p. 61 (italics added)
I believe that there is a limited analogy between this situation and the way we ‘use our terms’ in science. The analogy can be described in this way. In a branch of mathematics in which we operate with signs defined by implicit definition, the fact that these signs have no ‘definite meaning’ does not affect our operating with them, or the precision of our theories. Why is that so? Because we do not overburden the signs. We do not attach a ‘meaning’ to them, beyond that shadow of a meaning that is warranted by our implicit definitions. (And if we attach to them an intuitive meaning, then we are careful to treat this as a private auxiliary device, which must not interfere with the theory.) In this way, we try to keep, as it were, within the ‘penumbra of vagueness’ or of ambiguity, and to avoid touching the problem of the precise limits of this penumbra or range; and it turns out that we can achieve a great deal without discussing the meaning of these signs; for nothing depends on their meaning. In a similar way, I believe, we can operate with these terms whose meaning we have learned ‘operationally’. We use them, as it were, so that nothing depends upon their meaning, or as little as possible. Our ‘operational definitions’ have the advantage of helping us to shift the problem into a field in which nothing or little depends on words. Clear speaking is speaking in such a way that words do not matter.
OSE p. 841 (italics in original)
Frege’s opinion is different; for he writes: “A definition of a concept ... must determine unambiguously of any object whether or not it falls under the concept . . . Using a metaphor, we may say: the concept must have a sharp boundary.” But it is clear that for this kind of absolute precision to be demanded of a defined concept, it must first be demanded of the defining concepts, and ultimately of our undefined, or primitive, terms. Yet this is impossible. For either our undefined or primitive terms have a traditional meaning (which is never very precise) or they are introduced by so-called “implicit definitions”—that is, through the way they are used in the context of a theory. This last way of introducing them—if they have to be “introduced”—seems to be the best. But it makes the meaning of the concepts depend on that of the theory, and most theories can be interpreted in more than one way. As a result, implicitly defined concepts, and thus all concepts which are defined explicitly with their help, become not merely “vague” but systematically ambiguous. And the various systematically ambiguous interpretations (such as the points and straight lines of projective geometry) may be completely distinct.
Unending Quest, p. 27 (italics added)
What I do suggest is that it is always undesirable to make an effort to increase precision for its own sake—especially linguistic precision—since this usually leads to loss of clarity, and to a waste of time and effort on preliminaries which often turn out to be useless, because they are bypassed by the real advance of the subject: one should never try to be more precise than the problem situation demands. ...  One further result is, quite simply, the realization that the quest for precision, in words or concepts or meanings, is a wild-goose chase. There simply is no such thing as a precise concept (say, in Frege’s sense), though concepts like “price of this kettle” and “thirty pence” are usually precise enough for the problem context in which they are used. 
Unending Quest, p. 22 (italics in original)
Contact us
Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
Check us out on youtube at https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ
Come join our discord server! DM us on twitter or send us an email to get a supersecret link
How nebulous is your eggplant? Tell us at incrementspodcast@gmail.com.  
</description>
  <itunes:keywords>chapman, popper, epistemology, rationality, nebulosity</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Vaden comes out swinging against David Chapman&#39;s work on meta-rationality. Is Chapman pointing out a fatal flaw, or has Popper solved these problems long ago? Do moose see cups? Does Ben see cups? What the f*** <em>is</em> a cup? </p>

<p><strong>We discuss</strong> </p>

<ul>
<li>Chapman&#39;s concept of nebulosity </li>
<li>Whether this concept is covered by Popper </li>
<li>The relationship of nebulosity and the vagueness of language </li>
<li>The correspondence theory of truth </li>
<li>If the concept of &quot;problem situation&quot; saves us from Chapman&#39;s critique </li>
<li>Why &quot;conjecture and criticism&quot; isn&#39;t everything </li>
</ul>

<p><strong>References</strong></p>

<ul>
<li>The excellent <a href="https://doexplain.buzzsprout.com/" rel="nofollow">Do Explain</a> podcast. Go listen, right now!</li>
<li><a href="https://metarationality.com/" rel="nofollow">In the cells of the eggplant</a>, David Chapman</li>
<li><a href="https://meaningness.com/about-my-sites" rel="nofollow">Chapman&#39;s website</a></li>
<li><a href="https://www.youtube.com/watch?v=irmwL97zGcM&ab_channel=DoExplainwithChristoferL%C3%B6vgren" rel="nofollow">Jake Orthwein on Do Explain</a>, Part I </li>
</ul>

<p><strong>Chapman Quotes</strong> </p>

<blockquote>
<p>Reasonableness is not interested in universality. It aims to get practical work done in specific situations. Precise definitions and absolute truths are rarely necessary or helpful for that. Is this thing an eggplant? Depends on what you are trying to do with it. Is there water in the refrigerator? Well, what do you want it for? What counts as baldness, fruit, red, or water depends on your purposes, and on all sorts of details of the situation. Those details are so numerous and various that they can’t all be taken into account ahead of time to make a general formal theory. Any factor might matter in <em>some</em> situation. On the other hand, nearly all are irrelevant in any specific situation, so determining whether the water in an eggplant counts, or if Alain is bald, is usually easy.</p>

<ul>
<li>David Chapman, <a href="https://metarationality.com/vagueness" rel="nofollow">When will you go bald?</a></li>
</ul>

<p>Do cow hairs that have come out of the follicle but that are stuck to the cow by friction, sweat, or blood count as part of the cow? How about ones that are on the verge of falling out, but are stuck in the follicle by only the weakest of bonds? The reasonable answer is “Dude! It doesn’t matter!”</p>

<ul>
<li>David Chapman, <a href="https://metarationality.com/objective-objects" rel="nofollow">Objects, objectively</a></li>
</ul>

<p>We use words as tools to get things done; and to get things done, we improvise, making use of whatever materials are ready to hand. If you want to whack a piece of sheet metal to bend it, and don’t know or care what the “right” tool is (if there even is one), you might take a quick look around the garage, grab a large screwdriver at the “wrong” end, and hit the target with its hard rubber handle. A hand tool may have one or two standard uses; some less common but pretty obvious ones; and unusual, creative ones. But these are not clearly distinct categories of usage.</p>

<ul>
<li>David Chapman, <a href="https://metarationality.com/purpose-of-meaning" rel="nofollow">The purpose of meaning</a></li>
</ul>
</blockquote>

<p><strong>Popper Quotes</strong> </p>

<blockquote>
<p>Observation is always selective. It needs a chosen object, a definite task, an interest, a point of view, a problem. And its description presupposes a descriptive language, with property words; <em>it presupposes similarity and classification, which in their turn presuppose interests, points of view, and problems. ‘A hungry animal’, writes Katz,  ‘divides the environment into edible and inedible things. An animal in flight sees roads to escape and hiding places . . . Generally speaking, objects change . . . according to the needs of the animal.’ We may add that objects can be classified, and can become similar or dissimilar, only in this way—by being related to needs and interests.</em> This rule applies not only to animals but also to scientists. For the animal a point of view is provided by its needs, the task of the moment, and its expectations; for the scientist by his theoretical interests, the special problem under investigation, his conjectures and anticipations, and the theories which he accepts as a kind of background: his frame of reference, his &quot;horizon of expectations&quot;.</p>

<ul>
<li>Conjectures and Refutations p. 61 (italics added)</li>
</ul>

<p>I believe that there is a limited analogy between this situation and the way we ‘use our terms’ in science. The analogy can be described in this way. In a branch of mathematics in which we operate with signs defined by implicit definition, the fact that these signs have no ‘definite meaning’ does not affect our operating with them, or the precision of our theories. Why is that so? Because we do not overburden the signs. We do not attach a ‘meaning’ to them, beyond that shadow of a meaning that is warranted by our implicit definitions. (And if we attach to them an intuitive meaning, then we are careful to treat this as a private auxiliary device, which must not interfere with the theory.) In this way, we try to keep, as it were, within the ‘penumbra of vagueness’ or of ambiguity, and to avoid touching the problem of the precise limits of this penumbra or range; and it turns out that we can achieve a great deal without discussing the meaning of these signs; for nothing depends on their meaning. In a similar way, I believe, we can operate with these terms whose meaning we have learned ‘operationally’. We use them, as it were, so that nothing depends upon their meaning, or as little as possible. Our ‘operational definitions’ have the advantage of helping us to shift the problem into a field in which nothing or little depends on words. <em>Clear speaking is speaking in such a way that words do not matter.</em></p>

<ul>
<li>OSE p. 841 (italics in original)</li>
</ul>

<p><em>Frege’s opinion is different; for he writes: “A definition of a concept ... must determine unambiguously of any object whether or not it falls under the concept . . . Using a metaphor, we may say: the concept must have a sharp boundary.” But it is clear that for this kind of absolute precision to be demanded of a defined concept, it must first be demanded of the defining concepts, and ultimately of our undefined, or primitive, terms. Yet this is impossible.</em> For either our undefined or primitive terms have a traditional meaning (which is never very precise) or they are introduced by so-called “implicit definitions”—that is, through the way they are used in the context of a theory. This last way of introducing them—if they have to be “introduced”—seems to be the best. But it makes the meaning of the concepts depend on that of the theory, and most theories can be interpreted in more than one way. As a result, implicitly defined concepts, and thus all concepts which are defined explicitly with their help, become not merely “vague” but systematically ambiguous. And the various systematically ambiguous interpretations (such as the points and straight lines of projective geometry) may be completely distinct.</p>

<ul>
<li>Unending Quest, p. 27 (italics added)</li>
</ul>

<p>What I do suggest is that <em>it is always undesirable to make an effort to increase precision for its own sake—especially linguistic precision—since this usually leads to loss of clarity</em>, and to a waste of time and effort on preliminaries which often turn out to be useless, because they are bypassed by the real advance of the subject: <em>one should never try to be more precise than the problem situation demands.</em> ...  One further result is, quite simply, the realization that the quest for precision, in words or concepts or meanings, is a wild-goose chase. There simply is no such thing as a precise concept (say, in Frege’s sense), though concepts like “price of this kettle” and “thirty pence” are usually precise enough for the problem context in which they are used. </p>

<ul>
<li>Unending Quest, p. 22 (italics in original)</li>
</ul>
</blockquote>

<p><strong>Contact us</strong></p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Check us out on YouTube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our Discord server! DM us on Twitter or send us an email to get a supersecret link</li>
</ul>

<p>How nebulous is <em>your</em> eggplant? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>.  </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Vaden comes out swinging against David Chapman&#39;s work on meta-rationality. Is Chapman pointing out a fatal flaw, or has Popper solved these problems long ago? Do moose see cups? Does Ben see cups? What the f*** <em>is</em> a cup? </p>

<p><strong>We discuss</strong> </p>

<ul>
<li>Chapman&#39;s concept of nebulosity </li>
<li>Whether this concept is covered by Popper </li>
<li>The relationship of nebulosity and the vagueness of language </li>
<li>The correspondence theory of truth </li>
<li>If the concept of &quot;problem situation&quot; saves us from Chapman&#39;s critique </li>
<li>Why &quot;conjecture and criticism&quot; isn&#39;t everything </li>
</ul>

<p><strong>References</strong></p>

<ul>
<li>The excellent <a href="https://doexplain.buzzsprout.com/" rel="nofollow">Do Explain</a> podcast. Go listen, right now!</li>
<li><a href="https://metarationality.com/" rel="nofollow">In the cells of the eggplant</a>, David Chapman</li>
<li><a href="https://meaningness.com/about-my-sites" rel="nofollow">Chapman&#39;s website</a></li>
<li><a href="https://www.youtube.com/watch?v=irmwL97zGcM&ab_channel=DoExplainwithChristoferL%C3%B6vgren" rel="nofollow">Jake Orthwein on Do Explain</a>, Part I </li>
</ul>

<p><strong>Chapman Quotes</strong> </p>

<blockquote>
<p>Reasonableness is not interested in universality. It aims to get practical work done in specific situations. Precise definitions and absolute truths are rarely necessary or helpful for that. Is this thing an eggplant? Depends on what you are trying to do with it. Is there water in the refrigerator? Well, what do you want it for? What counts as baldness, fruit, red, or water depends on your purposes, and on all sorts of details of the situation. Those details are so numerous and various that they can’t all be taken into account ahead of time to make a general formal theory. Any factor might matter in <em>some</em> situation. On the other hand, nearly all are irrelevant in any specific situation, so determining whether the water in an eggplant counts, or if Alain is bald, is usually easy.</p>

<ul>
<li>David Chapman, <a href="https://metarationality.com/vagueness" rel="nofollow">When will you go bald?</a></li>
</ul>

<p>Do cow hairs that have come out of the follicle but that are stuck to the cow by friction, sweat, or blood count as part of the cow? How about ones that are on the verge of falling out, but are stuck in the follicle by only the weakest of bonds? The reasonable answer is “Dude! It doesn’t matter!”</p>

<ul>
<li>David Chapman, <a href="https://metarationality.com/objective-objects" rel="nofollow">Objects, objectively</a></li>
</ul>

<p>We use words as tools to get things done; and to get things done, we improvise, making use of whatever materials are ready to hand. If you want to whack a piece of sheet metal to bend it, and don’t know or care what the “right” tool is (if there even is one), you might take a quick look around the garage, grab a large screwdriver at the “wrong” end, and hit the target with its hard rubber handle. A hand tool may have one or two standard uses; some less common but pretty obvious ones; and unusual, creative ones. But these are not clearly distinct categories of usage.</p>

<ul>
<li>David Chapman, <a href="https://metarationality.com/purpose-of-meaning" rel="nofollow">The purpose of meaning</a></li>
</ul>
</blockquote>

<p><strong>Popper Quotes</strong> </p>

<blockquote>
<p>Observation is always selective. It needs a chosen object, a definite task, an interest, a point of view, a problem. And its description presupposes a descriptive language, with property words; <em>it presupposes similarity and classification, which in their turn presuppose interests, points of view, and problems. ‘A hungry animal’, writes Katz,  ‘divides the environment into edible and inedible things. An animal in flight sees roads to escape and hiding places . . . Generally speaking, objects change . . . according to the needs of the animal.’ We may add that objects can be classified, and can become similar or dissimilar, only in this way—by being related to needs and interests.</em> This rule applies not only to animals but also to scientists. For the animal a point of view is provided by its needs, the task of the moment, and its expectations; for the scientist by his theoretical interests, the special problem under investigation, his conjectures and anticipations, and the theories which he accepts as a kind of background: his frame of reference, his &quot;horizon of expectations&quot;.</p>

<ul>
<li>Conjectures and Refutations p. 61 (italics added)</li>
</ul>

<p>I believe that there is a limited analogy between this situation and the way we ‘use our terms’ in science. The analogy can be described in this way. In a branch of mathematics in which we operate with signs defined by implicit definition, the fact that these signs have no ‘definite meaning’ does not affect our operating with them, or the precision of our theories. Why is that so? Because we do not overburden the signs. We do not attach a ‘meaning’ to them, beyond that shadow of a meaning that is warranted by our implicit definitions. (And if we attach to them an intuitive meaning, then we are careful to treat this as a private auxiliary device, which must not interfere with the theory.) In this way, we try to keep, as it were, within the ‘penumbra of vagueness’ or of ambiguity, and to avoid touching the problem of the precise limits of this penumbra or range; and it turns out that we can achieve a great deal without discussing the meaning of these signs; for nothing depends on their meaning. In a similar way, I believe, we can operate with these terms whose meaning we have learned ‘operationally’. We use them, as it were, so that nothing depends upon their meaning, or as little as possible. Our ‘operational definitions’ have the advantage of helping us to shift the problem into a field in which nothing or little depends on words. <em>Clear speaking is speaking in such a way that words do not matter.</em></p>

<ul>
<li>OSE p. 841 (italics in original)</li>
</ul>

<p><em>Frege’s opinion is different; for he writes: “A definition of a concept ... must determine unambiguously of any object whether or not it falls under the concept . . . Using a metaphor, we may say: the concept must have a sharp boundary.” But it is clear that for this kind of absolute precision to be demanded of a defined concept, it must first be demanded of the defining concepts, and ultimately of our undefined, or primitive, terms. Yet this is impossible.</em> For either our undefined or primitive terms have a traditional meaning (which is never very precise) or they are introduced by so-called “implicit definitions”—that is, through the way they are used in the context of a theory. This last way of introducing them—if they have to be “introduced”—seems to be the best. But it makes the meaning of the concepts depend on that of the theory, and most theories can be interpreted in more than one way. As a result, implicitly defined concepts, and thus all concepts which are defined explicitly with their help, become not merely “vague” but systematically ambiguous. And the various systematically ambiguous interpretations (such as the points and straight lines of projective geometry) may be completely distinct.</p>

<ul>
<li>Unending Quest, p. 27 (italics added)</li>
</ul>

<p>What I do suggest is that <em>it is always undesirable to make an effort to increase precision for its own sake—especially linguistic precision—since this usually leads to loss of clarity</em>, and to a waste of time and effort on preliminaries which often turn out to be useless, because they are bypassed by the real advance of the subject: <em>one should never try to be more precise than the problem situation demands.</em> ...  One further result is, quite simply, the realization that the quest for precision, in words or concepts or meanings, is a wild-goose chase. There simply is no such thing as a precise concept (say, in Frege’s sense), though concepts like “price of this kettle” and “thirty pence” are usually precise enough for the problem context in which they are used. </p>

<ul>
<li>Unending Quest, p. 22 (italics in original)</li>
</ul>
</blockquote>

<p><strong>Contact us</strong></p>

<ul>
<li>Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani</li>
<li>Check us out on YouTube at <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ</a></li>
<li>Come join our Discord server! DM us on Twitter or send us an email to get a supersecret link</li>
</ul>

<p>How nebulous is <em>your</em> eggplant? Tell us at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>.  </p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#41 - Parenting, Epistemology, and EA (w/ Lulie Tanett) </title>
  <link>https://www.incrementspodcast.com/41</link>
  <guid isPermaLink="false">8ed5f8dd-a838-4df0-8791-af0372ee011d</guid>
  <pubDate>Mon, 20 Jun 2022 16:15:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/8ed5f8dd-a838-4df0-8791-af0372ee011d.mp3" length="77460808" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle>We're joined by the wonderful Lulie Tanett to talk about effective altruism, pulling spouses out of burning buildings, and why you should prefer critical rationalism to Bayesianism for your mom's sake.</itunes:subtitle>
  <itunes:duration>1:18:15</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/8/8ed5f8dd-a838-4df0-8791-af0372ee011d/cover.jpg?v=1"/>
  <description>We're joined by the wonderful Lulie Tanett to talk about effective altruism, pulling spouses out of burning buildings, and why you should prefer critical rationalism to Bayesianism for your mom's sake. Buckle up! 
We discuss:
- Lulie's recent experience at EA Global 
- Bayesianism and how it differs from critical rationalism 
- Common arguments in favor of Bayesianism 
- Taking Children Seriously 
- What it was like for Lulie growing up without going to school 
- The Alexander Technique, Internal Family Systems, Gendlin's Focusing, and Belief Reporting 
References 
- EA Global (https://www.eaglobal.org/)
- Taking Children Seriously (https://www.fitz-claridge.com/taking-children-seriously/) 
- Alexander Technique (https://expandingawareness.org/blog/what-is-the-alexander-technique/)
- Internal Family Systems (https://ifs-institute.com/)
- Gendlin Focusing (https://en.wikipedia.org/wiki/Focusing_(psychotherapy))
Social Media Everywhere 
Follow Lulie on Twitter @reasonisfun. Follow us at @VadenMasrani, @BennyChugg, @IncrementsPod, or on Youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ). 
Report your beliefs and focus your Gendlin's at incrementspodcast@gmail.com.   Special Guest: Lulie Tanett.
</description>
  <itunes:keywords>effective altruism, epistemology, rationality, bayesianism, critical rationalism</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>We&#39;re joined by the wonderful Lulie Tanett to talk about effective altruism, pulling spouses out of burning buildings, and why you should prefer critical rationalism to Bayesianism for your mom&#39;s sake. Buckle up! </p>

<p><strong>We discuss:</strong></p>

<ul>
<li>Lulie&#39;s recent experience at EA Global </li>
<li>Bayesianism and how it differs from critical rationalism </li>
<li>Common arguments in favor of Bayesianism </li>
<li>Taking Children Seriously </li>
<li>What it was like for Lulie growing up without going to school </li>
<li>The Alexander Technique, Internal Family Systems, Gendlin&#39;s Focusing, and Belief Reporting </li>
</ul>

<p><strong>References</strong> </p>

<ul>
<li><a href="https://www.eaglobal.org/" rel="nofollow">EA Global</a></li>
<li><a href="https://www.fitz-claridge.com/taking-children-seriously/" rel="nofollow">Taking Children Seriously</a> </li>
<li><a href="https://expandingawareness.org/blog/what-is-the-alexander-technique/" rel="nofollow">Alexander Technique</a></li>
<li><a href="https://ifs-institute.com/" rel="nofollow">Internal Family Systems</a></li>
<li><a href="https://en.wikipedia.org/wiki/Focusing_(psychotherapy)" rel="nofollow">Gendlin Focusing</a></li>
</ul>

<p><strong>Social Media Everywhere</strong> <br>
Follow Lulie on Twitter @reasonisfun. Follow us at @VadenMasrani, @BennyChugg, @IncrementsPod, or on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">Youtube</a>. </p>

<p>Report your beliefs and focus your Gendlin&#39;s at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>.  </p><p>Special Guest: Lulie Tanett.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>We&#39;re joined by the wonderful Lulie Tanett to talk about effective altruism, pulling spouses out of burning buildings, and why you should prefer critical rationalism to Bayesianism for your mom&#39;s sake. Buckle up! </p>

<p><strong>We discuss:</strong></p>

<ul>
<li>Lulie&#39;s recent experience at EA Global </li>
<li>Bayesianism and how it differs from critical rationalism </li>
<li>Common arguments in favor of Bayesianism </li>
<li>Taking Children Seriously </li>
<li>What it was like for Lulie growing up without going to school </li>
<li>The Alexander Technique, Internal Family Systems, Gendlin&#39;s Focusing, and Belief Reporting </li>
</ul>

<p><strong>References</strong> </p>

<ul>
<li><a href="https://www.eaglobal.org/" rel="nofollow">EA Global</a></li>
<li><a href="https://www.fitz-claridge.com/taking-children-seriously/" rel="nofollow">Taking Children Seriously</a> </li>
<li><a href="https://expandingawareness.org/blog/what-is-the-alexander-technique/" rel="nofollow">Alexander Technique</a></li>
<li><a href="https://ifs-institute.com/" rel="nofollow">Internal Family Systems</a></li>
<li><a href="https://en.wikipedia.org/wiki/Focusing_(psychotherapy)" rel="nofollow">Gendlin Focusing</a></li>
</ul>

<p><strong>Social Media Everywhere</strong> <br>
Follow Lulie on Twitter @reasonisfun. Follow us at @VadenMasrani, @BennyChugg, @IncrementsPod, or on <a href="https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ" rel="nofollow">Youtube</a>. </p>

<p>Report your beliefs and focus your Gendlin&#39;s at <a href="mailto:incrementspodcast@gmail.com" rel="nofollow">incrementspodcast@gmail.com</a>.  </p><p>Special Guest: Lulie Tanett.</p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#21 (C&amp;R Series, Ch.1) - The Problem of Induction</title>
  <link>https://www.incrementspodcast.com/21</link>
  <guid isPermaLink="false">Buzzsprout-8195969</guid>
  <pubDate>Tue, 23 Mar 2021 09:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/86b770bb-6b37-44ec-acdc-9d810bee3b7f.mp3" length="45649800" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle></itunes:subtitle>
  <itunes:duration>53:58</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/cover.jpg?v=18"/>
  <description>&lt;p&gt;After a long digression, we finally return to the Conjectures and Refutations series. In this episode we cover Chapter 1: &lt;em&gt;Science: Conjectures and Refutations&lt;/em&gt;. In particular, we focus on one of the trickiest Popperian concepts to wrap one's head around - the problem of induction.  &lt;br&gt; &lt;br&gt;&lt;em&gt;References:&lt;/em&gt;&lt;/p&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Scientific_law" target="_blank" rel="nofollow noopener"&gt;Wiki on scientific laws &lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://en.wikipedia.org/wiki/Dialogues_Concerning_Natural_Religion" target="_blank" rel="nofollow noopener"&gt;Hume's dialogues concerning natural religion&lt;/a&gt;  &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://vmasrani.github.io/assets/pdf/prob_induction_disproof.pdf" target="_blank" rel="nofollow noopener"&gt;Proof of the impossibility of probability induction&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;One of the &lt;a href="https://www.youtube.com/watch?v=Fd1U_MC_p3M&amp;amp;ab_channel=AeonVideo" target="_blank" rel="nofollow noopener"&gt;YouTube videos&lt;/a&gt; on induction. &lt;/li&gt;
&lt;/ul&gt;&lt;p&gt;And in case you were wondering what happened to the two unfalsifiable theories Popper attacks in this chapter, you'll be pleased to know that they have merged into a super theory. We give you &lt;em&gt;Psychoanalytic-Marxism: &lt;/em&gt;&lt;a href="http://oldsite.english.ucsb.edu/faculty/janmohamed/Psychoanalytic-Marxism.pdf" target="_blank" rel="nofollow noopener"&gt;http://oldsite.english.ucsb.edu/faculty/janmohamed/Psychoanalytic-Marxism.pdf&lt;/a&gt;.&lt;br&gt; &lt;br&gt;Send us your favorite unfalsifiable theory at &lt;em&gt;incrementspodcast@gmail.com&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;audio updated: 29/08/2021&lt;/em&gt; &lt;/p&gt;
</description>
  <itunes:keywords>science, induction, law, popper</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>After a long digression, we finally return to the Conjectures and Refutations series. In this episode we cover Chapter 1: <em>Science: Conjectures and Refutations</em>. In particular, we focus on one of the trickiest Popperian concepts to wrap one&apos;s head around - the problem of induction.  <br/> <br/><em>References:</em></p><ul><li><a href='https://en.wikipedia.org/wiki/Scientific_law'>Wiki on scientific laws </a></li><li><a href='https://en.wikipedia.org/wiki/Dialogues_Concerning_Natural_Religion'>Hume&apos;s dialogues concerning natural religion</a>  </li><li><a href='https://vmasrani.github.io/assets/pdf/prob_induction_disproof.pdf'>Proof of the impossibility of probability induction</a> </li><li>One of the <a href='https://www.youtube.com/watch?v=Fd1U_MC_p3M&amp;ab_channel=AeonVideo'>YouTube videos</a> on induction. </li></ul><p>And in case you were wondering what happened to the two unfalsifiable theories Popper attacks in this chapter, you&apos;ll be pleased to know that they have merged into a super theory. We give you <em>Psychoanalytic-Marxism: </em><a href='http://oldsite.english.ucsb.edu/faculty/janmohamed/Psychoanalytic-Marxism.pdf'>http://oldsite.english.ucsb.edu/faculty/janmohamed/Psychoanalytic-Marxism.pdf</a>.<br/> <br/>Send us your favorite unfalsifiable theory at <em>incrementspodcast@gmail.com</em></p>

<p><em>audio updated: 29/08/2021</em></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>After a long digression, we finally return to the Conjectures and Refutations series. In this episode we cover Chapter 1: <em>Science: Conjectures and Refutations</em>. In particular, we focus on one of the trickiest Popperian concepts to wrap one&apos;s head around - the problem of induction.  <br/> <br/><em>References:</em></p><ul><li><a href='https://en.wikipedia.org/wiki/Scientific_law'>Wiki on scientific laws </a></li><li><a href='https://en.wikipedia.org/wiki/Dialogues_Concerning_Natural_Religion'>Hume&apos;s dialogues concerning natural religion</a>  </li><li><a href='https://vmasrani.github.io/assets/pdf/prob_induction_disproof.pdf'>Proof of the impossibility of probability induction</a> </li><li>One of the <a href='https://www.youtube.com/watch?v=Fd1U_MC_p3M&amp;ab_channel=AeonVideo'>YouTube videos</a> on induction. </li></ul><p>And in case you were wondering what happened to the two unfalsifiable theories Popper attacks in this chapter, you&apos;ll be pleased to know that they have merged into a super theory. We give you <em>Psychoanalytic-Marxism: </em><a href='http://oldsite.english.ucsb.edu/faculty/janmohamed/Psychoanalytic-Marxism.pdf'>http://oldsite.english.ucsb.edu/faculty/janmohamed/Psychoanalytic-Marxism.pdf</a>.<br/> <br/>Send us your favorite unfalsifiable theory at <em>incrementspodcast@gmail.com</em></p>

<p><em>audio updated: 29/08/2021</em></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
<item>
  <title>#6 - Philosophy of Probability I: Introduction</title>
  <link>https://www.incrementspodcast.com/6</link>
  <guid isPermaLink="false">Buzzsprout-4407194</guid>
  <pubDate>Wed, 01 Jul 2020 18:00:00 -0700</pubDate>
  <author>Ben Chugg and Vaden Masrani</author>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/https://chrt.fm/track/1F5B4D/aphid.fireside.fm/d/1437767933/3229e340-4bf1-42a5-a5b7-4f508a27131c/eeb49cea-deb7-4957-8f51-8d5f0949c799.mp3" length="55868881" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Ben Chugg and Vaden Masrani</itunes:author>
  <itunes:subtitle></itunes:subtitle>
  <itunes:duration>1:17:05</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/3/3229e340-4bf1-42a5-a5b7-4f508a27131c/episodes/e/eeb49cea-deb7-4957-8f51-8d5f0949c799/cover.jpg?v=1"/>
  <description>&lt;p&gt;Don't leave yet - we swear this will be more interesting than it sounds ... &lt;br&gt;&lt;br&gt;... But a drink will definitely help. Ben and Vaden dive into the interpretations behind probability. What do people mean when they use the word, and why do we use this one tool to describe different concepts? The rowdiness truly kicks in when Vaden releases his pent-up critique of Bayesianism, thereby losing both his friends and PhD position. But at least he's ingratiated himself with Karl Popper. &lt;br&gt;&lt;br&gt;&lt;b&gt;&lt;em&gt;References:&lt;/em&gt;&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;a href="https://vmasrani.github.io/assets/popper_good.pdf"&gt;Vaden's&amp;nbsp; Slides&lt;/a&gt; on a 1975 &lt;a href="https://www.jstor.org/stable/20115014?seq=1#metadata_info_tab_contents"&gt;paper&lt;/a&gt; by Irving John Good titled &lt;em&gt;Explicativity, Corroboration, and the Relative Odds of Hypotheses&lt;/em&gt;. The paper is I.J. Good’s response to Karl Popper, and in the presentation I compare the two philosophers’ views on probability, epistemology, induction, simplicity, and content.&lt;/li&gt;&lt;li&gt;&lt;a href="http://www.mrcc.uqam.ca/Publications/articles/deElia_MWR2005_.pdf"&gt;Diversity in Interpretations of Probability: Implications for Weather Forecasting&lt;/a&gt;&lt;/li&gt;&lt;li&gt;Andrew Gelman, &lt;a href="http://www.stat.columbia.edu/~gelman/research/published/philosophy.pdf"&gt;Philosophy and the practice of Bayesian statistics&lt;/a&gt;&lt;/li&gt;&lt;li&gt;Popper quote: &lt;em&gt;"Those who identify confirmation with probability must believe that a high degree of probability is desirable. 
They implicitly accept the rule: ‘Always choose the most probable hypothesis!’ Now it can be easily shown that this rule is equivalent to the following rule: ‘Always choose the hypothesis which goes as little beyond the evidence as possible!’ And this, in turn, can be shown to be equivalent, not only to ‘Always accept the hypothesis with the lowest content (within the limits of your task, for example, your task of predicting)!’, but also to ‘Always choose the hypothesis which has the highest degree of ad hoc character (within the limits of your task)!’" &lt;/em&gt;(Conjectures and Refutations p.391)&amp;nbsp;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Get in touch at incrementspodcast@gmail.com.&lt;br&gt;&lt;br&gt;&lt;em&gt;audio updated 13/12/2020&lt;/em&gt;&lt;/p&gt; 
</description>
  <itunes:keywords>probability, bayesianism, frequency, induction, epistemology</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Don&apos;t leave yet - we swear this will be more interesting than it sounds ... <br/><br/>... But a drink will definitely help. Ben and Vaden dive into the interpretations behind probability. What do people mean when they use the word, and why do we use this one tool to describe different concepts? The rowdiness truly kicks in when Vaden releases his pent-up critique of Bayesianism, thereby losing both his friends and PhD position. But at least he&apos;s ingratiated himself with Karl Popper. <br/><br/><b><em>References:</em></b></p><ul><li><a href='https://vmasrani.github.io/assets/popper_good.pdf'>Vaden&apos;s  Slides</a> on a 1975 <a href='https://www.jstor.org/stable/20115014?seq=1#metadata_info_tab_contents'>paper</a> by Irving John Good titled <em>Explicativity, Corroboration, and the Relative Odds of Hypotheses</em>. The paper is I.J. Good’s response to Karl Popper, and in the presentation I compare the two philosophers’ views on probability, epistemology, induction, simplicity, and content.</li><li><a href='http://www.mrcc.uqam.ca/Publications/articles/deElia_MWR2005_.pdf'>Diversity in Interpretations of Probability: Implications for Weather Forecasting</a></li><li>Andrew Gelman, <a href='http://www.stat.columbia.edu/~gelman/research/published/philosophy.pdf'>Philosophy and the practice of Bayesian statistics</a></li><li>Popper quote: <em>&quot;Those who identify confirmation with probability must believe that a high degree of probability is desirable. 
They implicitly accept the rule: ‘Always choose the most probable hypothesis!’ Now it can be easily shown that this rule is equivalent to the following rule: ‘Always choose the hypothesis which goes as little beyond the evidence as possible!’ And this, in turn, can be shown to be equivalent, not only to ‘Always accept the hypothesis with the lowest content (within the limits of your task, for example, your task of predicting)!’, but also to ‘Always choose the hypothesis which has the highest degree of ad hoc character (within the limits of your task)!’&quot; </em>(Conjectures and Refutations p.391) </li></ul><p>Get in touch at incrementspodcast@gmail.com.<br/><br/><em>audio updated 13/12/2020</em></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Don&apos;t leave yet - we swear this will be more interesting than it sounds ... <br/><br/>... But a drink will definitely help. Ben and Vaden dive into the interpretations of probability: what do people mean when they use the word, and why do we use this one tool to describe such different concepts? The rowdiness truly kicks in when Vaden releases his pent-up critique of Bayesianism, thereby losing both his friends and his PhD position. But at least he&apos;s ingratiated himself with Karl Popper. <br/><br/><b><em>References:</em></b></p><ul><li><a href='https://vmasrani.github.io/assets/popper_good.pdf'>Vaden&apos;s slides</a> on a 1975 <a href='https://www.jstor.org/stable/20115014?seq=1#metadata_info_tab_contents'>paper</a> by Irving John Good titled <em>Explicativity, Corroboration, and the Relative Odds of Hypotheses</em>. The paper is I.J. Good’s response to Karl Popper, and the presentation compares the two philosophers’ views on probability, epistemology, induction, simplicity, and content.</li><li><a href='http://www.mrcc.uqam.ca/Publications/articles/deElia_MWR2005_.pdf'>Diversity in Interpretations of Probability: Implications for Weather Forecasting</a></li><li>Andrew Gelman, <a href='http://www.stat.columbia.edu/~gelman/research/published/philosophy.pdf'>Philosophy and the practice of Bayesian statistics</a></li><li>Popper quote: <em>&quot;Those who identify confirmation with probability must believe that a high degree of probability is desirable. 
They implicitly accept the rule: ‘Always choose the most probable hypothesis!’ Now it can be easily shown that this rule is equivalent to the following rule: ‘Always choose the hypothesis which goes as little beyond the evidence as possible!’ And this, in turn, can be shown to be equivalent, not only to ‘Always accept the hypothesis with the lowest content (within the limits of your task, for example, your task of predicting)!’, but also to ‘Always choose the hypothesis which has the highest degree of ad hoc character (within the limits of your task)!’&quot; </em>(Conjectures and Refutations p.391) </li></ul><p>Get in touch at incrementspodcast@gmail.com.<br/><br/><em>audio updated 13/12/2020</em></p><p><a rel="payment" href="https://www.patreon.com/Increments">Support Increments</a></p>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
