
There are two dominant paradigms in the theory of qualitative belief change. While belief revision theory attempts to describe the way in which rational agents revise their beliefs upon gaining new evidence about an essentially static world, the theory of belief update is concerned with describing how such agents change their qualitative beliefs upon learning that the world is changing in some way. A similar distinction can be made when it comes to describing the way in which a rational agent changes their subjective probability assignments, or `credences', over time. On the one hand, we need a way to describe how these credences evolve when the agent learns something new about a static environment. On the other hand, we need a way to describe how they evolve when the agent learns that the world has changed. According to orthodoxy, the correct answers to the questions of how an agent should revise their qualitative beliefs and numerical credences upon obtaining new information about a static world are given by the axiomatic AGM theory of belief revision and Bayesian conditionalization, respectively. Now, under the influential Lockean theory of belief, an agent believes a proposition p if and only if their credence in p is sufficiently high (where what counts as `sufficiently high' is determined by some threshold value t ≥ 1/2). Thus, assuming a Lockean theory of belief, Bayesian conditionalization defines an alternative theory of qualitative belief revision, where p is in the revised belief set if and only if the agent's posterior credence in p is above the relevant threshold after conditionalizing on the new evidence. Call this theory of belief revision `Lockean revision'. The relationship between Lockean revision and the AGM theory of belief revision was systematically described by Shear and Fitelson (forthcoming).
With regard to belief updating, the most widely accepted answers to the questions of how an agent should revise their qualitative beliefs and numerical credences upon obtaining new information about how the world is changing over time are given by Katsuno and Mendelzon's axiomatic theory of belief update (KM-update) and Lewis's technique of probabilistic imaging, respectively. In this sequel to our study of the relationship between Bayesian (viz., Lockean) revision and AGM revision, we investigate the relationship between Bayesian (viz., Lockean) imaging and KM updating.
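Lockean revision, as described above, can be sketched in a few lines. The following is a toy illustration only (the worlds, numbers, and threshold are my own, not from the talk): conditionalize a credence function on the evidence, then believe exactly those propositions whose posterior credence meets the threshold t.

```python
def conditionalize(credence, evidence):
    """Bayesian conditionalization: zero out the non-E worlds and renormalize."""
    p_e = sum(p for w, p in credence.items() if w in evidence)
    if p_e == 0:
        raise ValueError("cannot conditionalize on a zero-probability event")
    return {w: (p / p_e if w in evidence else 0.0) for w, p in credence.items()}

def believes(credence, proposition, t=0.9):
    """Lockean thesis: believe a proposition iff one's credence in it is >= t."""
    return sum(p for w, p in credence.items() if w in proposition) >= t

# Four possible worlds; propositions are modeled as sets of worlds.
prior = {"w1": 0.4, "w2": 0.3, "w3": 0.2, "w4": 0.1}
E = {"w1", "w2"}   # evidence: the actual world is w1 or w2

posterior = conditionalize(prior, E)
print(believes(posterior, {"w1"}, t=0.5))   # posterior credence 4/7 clears t = 0.5
```

The revised Lockean belief set is then just the collection of propositions for which `believes` returns true after conditionalization.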

In this talk, I explore an epistemic approach to the paradoxes of self-reference. The basic idea is to use the notion of objective correctness (or objective ought) for belief as our guiding light. When this is done, a novel approach to the paradoxes emerges. The upshot is that we should view the paradoxes as providing compelling reason to reject the epistemic normativity of logic.

The standard assumption made by contemporary epistemic utility theorists is (Veritism) that truth/accuracy (of belief) is the only thing of positive epistemic value. We look at what happens when knowledge (rather than mere true belief) is given pride of place in epistemic utility theory. Some interesting new rational requirements for belief may be derived. We explain how, and we also look at some applications.

In this talk, we discuss various necessary, sufficient, and necessary & sufficient conditions for the transmission of confirmation via deductive entailment.

Khoo & Mandelkern (2017) provide an illuminating discussion of my recent triviality results for indicative conditionals. In these brief comments, I discuss two putative probabilistic counterexamples to some Import-Export principles they discuss in their paper. This brings out some interesting connections between confirmation, Simpson's Paradox, import-export, and indicative conditionals.

In this talk, we compare and contrast two approaches for revising qualitative (viz., “full”) beliefs. The first approach is a naïve Bayesian (*viz.*, Lockean) one, which operates via conditionalization and a Lockean thesis. The second approach is AGM (the classical, *logical* approach to revision). Our aim here is to provide the most straightforward explanation of the ways in which these two approaches agree and disagree with each other.

In this talk, I compare and contrast two approaches to the representation of the doxastic states of rational agents. First, I outline a (broadly) Bayesian (or "imprecise probability") approach. Then, I describe Spohn's Ranking Theory approach (as outlined in his recent book *The Laws of Belief*). Some new results (and conjectures) regarding the relationship between these two paradigms (as well as a decision procedure for Ranking Theory) are presented.

In this talk I will explain the basics of epistemic utility theory and how it can be used to ground coherence requirements for various sorts of judgments (e.g., both degree of belief and full belief). The main idea is that, while belief (and degree of belief) aims at truth (i.e., minimizing inaccuracy), epistemic rationality only requires that one's judgments minimize expected inaccuracy. This fact about the nature of epistemic rationality allows us to resolve various problems and puzzles in contemporary epistemology (e.g., lottery paradoxes, preface paradoxes, etc.).

In this talk, I offer a purely confirmation-theoretic explanation of the (seeming) paradoxicality of Simpson's Paradox. I also (briefly) contrast my approach with a recent (causal) explanation due to Judea Pearl.

Here, we elaborate the suggestion (first discussed by Sides et al., 2001) that in standard conjunction problems the fallacious probability judgments experimentally observed are typically guided by sound assessments of confirmation relations, meant in terms of contemporary Bayesian confirmation theory. Our main formal result is a confirmation-theoretic account of the conjunction fallacy which is proven robust (i.e., not depending on various alternative ways of measuring degrees of confirmation).
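The key structural point behind this account can be checked with a toy numeric model (the numbers below are my own, purely for illustration, not from the paper): evidence E can *confirm* a conjunction H1 & H2 more than it confirms the conjunct H2 alone, even though the conjunction is, of course, never more *probable* than H2.

```python
# Prior over the four cells (H1, H2), and likelihoods P(E | cell).
prior = {(True, True): 0.1, (False, True): 0.3,
         (True, False): 0.4, (False, False): 0.2}
likelihood = {(True, True): 0.9, (False, True): 0.1,
              (True, False): 0.8, (False, False): 0.1}

p_e = sum(prior[c] * likelihood[c] for c in prior)
posterior = {c: prior[c] * likelihood[c] / p_e for c in prior}  # Bayes' theorem

def prob(dist, pred):
    return sum(p for c, p in dist.items() if pred(c))

def d(pred):
    """Difference measure of confirmation: d(H, E) = P(H | E) - P(H)."""
    return prob(posterior, pred) - prob(prior, pred)

h2 = lambda c: c[1]                  # the lone conjunct H2
h1_and_h2 = lambda c: c[0] and c[1]  # the conjunction H1 & H2

print(prob(posterior, h1_and_h2) < prob(posterior, h2))  # conjunction less probable
print(d(h1_and_h2) > d(h2))                              # yet more confirmed
```

Here E disconfirms H2 overall (by disconfirming H2 & not-H1) while confirming H1 & H2, so a judgment tracking confirmation rather than probability would rank the conjunction higher. The paper's robustness result is that this pattern does not depend on the particular confirmation measure chosen; the difference measure above is just one convenient instance.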

In this talk, I describe generalizations/strengthenings of the triviality results of Gibbard and Lewis regarding the indicative conditional. More precisely, I aim to do two things in this talk: (1) present an axiomatic generalization of Gibbard's (logical) triviality result for indicative conditionals, and (2) present an algebraic strengthening of Lewis's (probabilistic) triviality result for indicative conditionals. Both results start from a very weak background theory (either logical or probabilistic) of the indicative conditional, and (relative to these weak backgrounds) both results will rely only on the so-called Import-Export Law. So, these results can be viewed as (general, and strong) "odd consequences" of Import-Export.

In this talk, I investigate several interpretations of Richard Feldman's "Evidence of Evidence is Evidence" (EEE) principle. My talk draws heavily on recent work by Tal & Comesaña.

Conference University of San Diego, July 2015

In this talk, I explain how the distinction between convergent and linked premises (in argumentation/argument diagramming theory) can be given an elegant Bayesian explication.

After some general remarks about closure and counter-closure, I (a) review some (alleged) counterexamples to counter-closure, (b) discuss a popular strategy for responding to such cases, and (c) pose a dilemma for this popular strategy.

Conference Tilburg Center for Logic, Ethics, and Philosophy of Science, October 2014

I use naive epistemic utility theory to ground a synchronic (Lockean) coherence requirement for full belief. Then, I compare and contrast this approach with Leitgeb's "Stability Theory".

Taking Joyce’s (1998; 2009) recent argument(s) for probabilism as our point of departure, we propose a new way of grounding formal, synchronic, epistemic coherence requirements for (opinionated) full belief. Our approach yields principled alternatives to deductive consistency, sheds new light on the preface and lottery paradoxes, and reveals novel conceptual connections between alethic and evidential epistemic norms.

Conference University of Wisconsin-Madison, April 2014

A generalization of Joyce’s (2009) argumentative strategy for establishing probabilism as a coherence requirement for numerical degrees of confidence (credences) is developed and applied to comparative confidence judgments.

Paradoxes of individual coherence (e.g., the preface paradox) and group coherence (e.g., the doctrinal paradox for judgment aggregation) typically presuppose that deductive consistency is a coherence requirement for both individual and group judgment. In this paper, we introduce a new coherence requirement for (individual) full belief, and we explain how this new approach to individual coherence leads to an amelioration of the traditional paradoxes. In particular, we explain why our new coherence requirement gets around the standard doctrinal paradox. However, we also prove a new impossibility result, which reveals that (more complex) varieties of the doctrinal paradox can arise even for our new notion of coherence.

Colloquium New York University, December 2011

I sketch an argument for the claim that any epistemic argument against classical logic (either deductive or inductive) will have to rely on bridge principle(s), which are either (a) implausible, or (b) too weak to yield classically valid arguments. [Recently, Florian Steinberger has convincingly filled in my argument sketch for this dilemma.]

Conference LMU Munich, September 2011

Joyce (1998) argues that for any credence function that doesn't satisfy the probability axioms, there is another function that dominates it in terms of accuracy. But if some potential credence functions are ruled out as violations of the Principal Principle, then some non-probabilistic credence functions fail to be dominated. We argue that to fix Joyce’s argument, one must show that all epistemic values for credence functions derive from accuracy.

First, a brief historical trace of the developments in confirmation theory leading up to Goodman’s infamous “grue” paradox is presented. Then, Goodman’s argument is analyzed from both Hempelian and Bayesian perspectives. A guiding analogy is drawn between certain arguments against classical deductive logic, and Goodman’s “grue” argument against classical inductive logic. The upshot of this analogy is that the “New Riddle” is not as vexing as many commentators have claimed (especially, from a Bayesian inductive-logical point of view). This talk is based (at this point, somewhat loosely) on my "grue" paper.

I give a tutorial for PrSAT, my *Mathematica*-based decision procedure for probability calculus. There is also a *Mathematica* notebook that goes along with this tutorial. And, here is a PDF format version of that notebook.

The (recent, Bayesian) cognitive science literature on the Wason Task (WT) has been modeled largely after the (not-so-recent, Bayesian) philosophy of science literature on the Paradox of Confirmation (POC). In this paper, we apply some insights from more recent Bayesian approaches to the POC to analogous models of the WT. This involves, first, retracing the history of the POC and, then, re-examining the WT with these historico-philosophical insights in mind. This talk is based on our Wason paper.

First, I discuss the (philosophical) distinction between probability and confirmation. This distinction dates back to a dispute between Popper and Carnap. More recently, cognitive scientists have been investigating the ways in which confirmation judgment and probability judgment interact. I close the talk with some recent empirical results along these lines.

In Chapter 1 of *Evidence and Evolution*, Sober (2008) defends a Likelihoodist account of favoring. The main tenet of Likelihoodism is the so-called Law of Likelihood. In this talk, I explain why the Law of Likelihood fails to undergird an adequate explication of favoring. This talk was subsequently published.

Naive deductivist accounts of confirmation have the undesirable consequence that if *E* confirms *H*, then *E* also confirms the conjunction *H* & *X*, for any *X*—even if *X* is completely irrelevant to *E* and *H*. Bayesian accounts of confirmation may appear to have the same problem. We show how to simplify and improve upon Fitelson’s original solution to the irrelevant conjunction problem.
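The basic shape of the Bayesian response can be illustrated numerically (my own toy numbers below, not the paper's): when X is probabilistically irrelevant to H and E, the evidence E still confirms H & X, but strictly *less* than it confirms H alone, here measured by the difference measure d(H, E) = P(H|E) - P(H).

```python
# Toy setup: X is independent of H, E, and their combination.
p_h, p_e_given_h, p_e_given_not_h, p_x = 0.3, 0.8, 0.2, 0.5

p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h   # total probability
p_h_given_e = p_h * p_e_given_h / p_e                   # Bayes' theorem

d_h = p_h_given_e - p_h             # confirmation of H by E
# Irrelevance of X gives P(H & X | E) = P(X) * P(H | E), and likewise for the prior,
# so the conjunction's boost is scaled down by P(X) < 1:
d_hx = p_x * p_h_given_e - p_x * p_h

print(d_h > 0)          # E confirms H
print(0 < d_hx < d_h)   # E confirms H & X too, but strictly less
```

So on this sort of account the "problem" dissolves: tacking on an irrelevant conjunct never raises, and (for P(X) < 1) strictly lowers, the degree of confirmation.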

In this talk, we investigate various possible (Bayesian) precisifications of the (somewhat vague) statements of “the equal weight view” (EWV) that have appeared in the recent literature on disagreement. We will show that the renditions of (EWV) that immediately suggest themselves are untenable from a Bayesian point of view. In the end, we will propose some tenable (but not necessarily desirable) interpretations of (EWV). Our aim here will not be to defend any particular Bayesian precisification of (EWV), but rather to raise awareness about some of the difficulties inherent in formulating such precisifications.

Conference University of Michigan, May 2009

We examine some of the consequences of the separability assumption (for scoring rules) -- specifically, we discuss some potential weaknesses in scoring-rule-based arguments for probabilism that may be caused by the separability assumption.

Conference University of Michigan, May 2009

We argue that (in the context of scoring-rule based arguments for probabilism) evaluated sets of attitudes should always include *entire* Boolean algebras (and not merely partitions or subsets of them).

Colloquium Indiana, February 2009

A decision procedure (PrSAT) for classical (Kolmogorov) probability calculus is presented. This decision procedure is based on an existing decision procedure for the theory of real closed fields, which has recently been implemented in *Mathematica*. A *Mathematica* implementation of PrSAT is also described, along with several applications to various non-trivial problems in the probability calculus. Click here for the *Mathematica* notebook that goes with this talk (and here for a PDF version thereof).
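To convey the flavor of the problem PrSAT solves, here is a deliberately crude toy (not PrSAT itself, which works by reduction to the decidable theory of real closed fields): a probabilistic claim over a two-proposition algebra is translated into constraints on the probabilities of the four state descriptions, and a grid search checks whether any assignment satisfies them. All names and numbers below are my own illustrative choices.

```python
def satisfiable(constraints, step=0.05):
    """Grid search over probabilities of the four atoms, which must sum to 1.
    Atoms: a[0] = P(A & B), a[1] = P(A & ~B), a[2] = P(~A & B), a[3] = P(~A & ~B).
    Each constraint is a function of the atom vector that should equal zero."""
    n = round(1 / step)
    for i in range(n + 1):
        for j in range(n + 1 - i):
            for k in range(n + 1 - i - j):
                a = (i * step, j * step, k * step, 1 - (i + j + k) * step)
                if all(abs(c(a)) < 1e-9 for c in constraints):
                    return True
    return False

# P(A) = 0.6 and P(A & B) = 0.3: jointly satisfiable.
ok = [lambda a: a[0] + a[1] - 0.6, lambda a: a[0] - 0.3]
# P(A) = 0.6 and P(A & B) = 0.7: unsatisfiable, since P(A & B) <= P(A).
bad = [lambda a: a[0] + a[1] - 0.6, lambda a: a[0] - 0.7]

print(satisfiable(ok))   # True
print(satisfiable(bad))  # False
```

Unlike this grid search, the real-closed-fields approach is a genuine decision procedure: it handles arbitrary polynomial constraints exactly and returns a definitive verdict rather than searching a finite mesh.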

In this talk, I discuss the role that "old evidence" plays in (my recent Bayesian inductive-logical reconstruction of) Goodman's "Grue" argument.

In this discussion of Sherri Roush's book *Tracking Truth*, I explain some of the virtues of thinking about truth-tracking in terms of likelihoods rather than counterfactuals.

I offer some comments on an early draft of Jill North's paper, which was later published under the title "An empirical approach to symmetry and probability".

In this talk, I offer some criticisms of an argument of Kim's for the claim that disjunctive laws are unconfirmable.

I provide a historical survey of Bayesian Confirmation Theory.

This talk explains how to model logical non-omniscience within a Bayesian framework (Garber-style).

This talk provides a historical survey of arguments surrounding the Paradox of Confirmation.

Conference Oregon State University, August 2005

I describe some basic techniques for using automated reasoning tools to solve problems in various modal logics.

Carnap’s inductive logic (or confirmation) project is revisited from an “increase in firmness” (or probabilistic relevance) point of view. It is argued that Carnap’s main desiderata can be satisfied in this setting, without the need for a theory of “logical probability”. The emphasis here will be on explaining how Carnap’s epistemological desiderata for inductive logic will need to be modified in this new setting. The key move is to abandon Carnap’s goal of bridging confirmation and credence, in favor of bridging confirmation and evidential support.

Colloquium University of Konstanz, July 2004

In this talk, I explain the debate between Likelihoodists and Bayesians over the so-called "Law of Likelihood". There is also a color plot and a *Mathematica* notebook (also available in PDF format) that goes with the discussion of the Monty Hall Problem at the end of this talk. This talk is based on a paper of mine.

Conference London School of Economics, June 2004

In this talk, I describe some refinements to my probabilistic account of coherence. There are three background files that accompany this talk. There is a note explaining two technical corrections to my original coherence measure, and there is a *Mathematica* notebook which works through the results in that note. The *Mathematica* notebook is also available in PDF format here.

In this talk, I explain how the probability/confirmation distinction can be used to explain both the 'base rate fallacy' and the 'conjunction fallacy'. Here is a podcast in which I discuss both 'fallacies' in the style of this talk.

Conference Benjamin N. Cardozo School of Law, April 2003

Here, I offer comments on a paper of James Franklin's. Papers presented at this conference later appeared in a special issue of the journal *Law, Probability, and Risk*.

Conference San Francisco, March 2003

Here, I offer comments on a paper by Kenneth Presting. Some of the ideas advanced by Presting have since gained traction in the 'Less Wrong' community (but Presting doesn't seem to receive credit for them, and as far as I know his paper was never published).

Colloquium UC-Berkeley, March 2003

In this talk, I summarize some (then) new results in algebra and sentential logic that were discovered via automated reasoning. Here is a webpage which complements this talk.

In this talk, I describe my dissertation research on the foundations of Bayesian confirmation theory. Here is a link to my thesis.

Other Lawrence Livermore National Laboratory, August 2002

In this talk, I provide a historical overview of some controversies in the philosophy of statistics.

Here I present some (general purpose) techniques for using automated reasoning to solve problems in modal logics. Here is a webpage with various files related to this lecture.

In this talk, I discuss various proposed solutions to the problem of old evidence, and I sketch a new approach. Here is an unpublished paper that describes the proposed solution.

Conference Seattle, March 2002

Here, I give some comments on a paper by Tomoji Shogenji. His paper was later published under the same title in the volume *Perspectives on Coherentism*.

Colloquium University of Colorado, Boulder, November 2001

In this talk, I explain my original approach to the irrelevant conjunction problem. This has since been superseded by my joint work with Jim Hawthorne.

In this talk, I explain my Bayesian account of independent evidence. This paper was later published in the proceedings of PSA 2000.

Conference Kansas City, October 1998

In this talk, I describe the problem of measure sensitivity for Bayesian confirmation theory. This paper was later published in the proceedings of PSA 1998.

Conference Champaign, Illinois, October 1997

In this talk, I explain how to use *Mathematica* to make Bill McCune's proof of the Robbins Conjecture more 'human readable'. Here is a webpage which contains my notebooks as well as the published version of this paper.