In this paper, we use various automated reasoning techniques to settle six open questions raised by Ulrich and Meredith regarding shortest single axioms for classical sentential logic. Along the way, we also pose several new open questions and challenge problems for automated reasoning.
Simpson’s Paradox has received a lot of attention in the contemporary literature. Typically, presentations of the paradox focus only on its qualitative structure and on various explanations of its “paradoxicality”. In this paper, we discuss quantitative aspects of Simpson’s Paradox, via the use of various Bayesian measures of degree of confirmation. This leads to some interesting new results, both for the general phenomenon of Simpson’s Paradox and for Bayesian confirmation theory.
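For readers unfamiliar with the qualitative phenomenon, here is a quick numerical illustration in Python (the familiar kidney-stone-style figures, used purely for illustration and not drawn from the paper): a treatment can look better within each subpopulation, yet worse in the aggregate.

```python
from fractions import Fraction as F

# Hypothetical (successes, total) counts for two subpopulations.
data = {
    "subpop-1": {"treatment": (81, 87),   "control": (234, 270)},
    "subpop-2": {"treatment": (192, 263), "control": (55, 80)},
}

def rate(successes, total):
    return F(successes, total)

# Within each subpopulation, treatment beats control...
for name, d in data.items():
    assert rate(*d["treatment"]) > rate(*d["control"])

# ...but after aggregating the counts, the comparison reverses.
agg_treatment = rate(81 + 192, 87 + 263)   # 273/350
agg_control   = rate(234 + 55, 270 + 80)   # 289/350
assert agg_treatment < agg_control
print(float(agg_treatment), float(agg_control))
```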
We explain how to use PrSAT to verify, generalize, and discover impossibility theorems for algorithmic fairness measures.
In contemporary epistemology and decision theory, much is made of the distinction between indicative and subjunctive supposition. Here, we systematically catalogue the relationship between the four most salient classes of theories of suppositional judgement: qualitative theories of indicative supposition, quantitative theories of indicative supposition, qualitative theories of subjunctive supposition and quantitative theories of subjunctive supposition. We conclude by drawing general lessons concerning the norms of suppositional judgement and the distinction between subjunctive and indicative supposition.
This note reports a new-and-improved version of my previous Gibbardian collapse theorem for indicative conditionals.
In this paper, I present a serious problem for confirmation measure Z.
Undergraduates from underrepresented groups are invited to study logic — five topics over five days, with ten top international instructors — at Northeastern University for one week in the summer of 2022. There is a competitive application process for this Summer Program, and we will accept 10 students. Transportation to and from Northeastern, as well as room and board are provided. A modest stipend is awarded to each participant. By participating in this supportive program, undergraduates study logic in an inclusive setting and learn that there is a place for them in the field and that they have something valuable to offer the discipline.
This volume features more than fifteen essays written in honor of Peter D. Klein. It explores the work and legacy of this prominent philosopher, who has had and continues to have a tremendous influence on the development of epistemology. The essays reflect the breadth and depth of Klein's work. They engage directly with his views and with the views of his interlocutors. In addition, a comprehensive introduction discusses the overall impact of Klein's philosophical work. It also explains how each of the essays in the book fits within that legacy. Coverage includes such topics as a knowledge-first account of defeasible reasoning, felicitous falsehoods, the possibility of foundationalist justification, the many formal faces of defeat, radical scepticism, and more. Overall, the book provides readers with an overview of Klein's contributions to epistemology, his importance to twentieth and twenty-first-century philosophy, and a survey of his philosophical ideas and accomplishments. It's not only a celebration of the work of an important philosopher. It also offers readers an insightful journey into the nature of knowledge, scepticism, and justification.
Undergraduates from underrepresented groups are invited to study logic — five topics over five days, with ten top international instructors — at Northeastern University for one week in the summer of 2018. There is a competitive application process for this Summer Program, and we will accept 10 students. Transportation to and from Northeastern, as well as room and board are provided. A modest stipend is awarded to each participant. By participating in this supportive program, undergraduates study logic in an inclusive setting and learn that there is a place for them in the field and that they have something valuable to offer the discipline.
The aim of this book project is to explain how to generalize Joyce-style epistemic utility arguments to ground coherence requirements not only for numerical credences, but also for comparative confidence orderings and qualitative (or "full") beliefs.
Click here to get a taste of what the ("full") belief part of the book (based on joint work with Kenny Easwaran) is going to look like, and here to get a sense for what the comparative confidence part of the book (based on joint work with David McCarthy) is going to look like. For a sense of the "big picture", have a look at the webpage for my Coherence seminar at the ILLC, which contains lectures and readings, in addition to drafts of the three parts of the book. I plan to complete the book in 2016 (and my hope is to publish it with OUP).
In this paper, we compare and contrast two methods for the revision of qualitative (viz., "full") beliefs. The first ("Bayesian") method is generated by a simplistic diachronic Lockean thesis requiring coherence with the agent's posterior credences after conditionalization. The second ("Logical") method is the orthodox AGM approach to belief revision. Our primary aim is to determine when the two methods may disagree in their recommendations and when they must agree. We establish a number of novel results about their relative behavior. Our most notable (and mysterious) finding is that the inverse of the golden ratio emerges as a non-arbitrary bound on the Bayesian method's free-parameter --- the Lockean threshold. This "golden threshold" surfaces in two of our results and turns out to be crucial for understanding the relation between the two methods.
In this paper, I review some recent treatments of Simpson's Paradox, and I propose a new rationalizing explanation of its (apparent) paradoxicality.
Charles Stein discovered a paradox in 1955 that many statisticians think is of fundamental importance. Here we explore its philosophical implications. We outline the nature of Stein’s result and of subsequent work on shrinkage estimators; then we describe how these results are related to Bayesianism and to model selection criteria like the Akaike Information Criterion. We also discuss their bearing on scientific realism and instrumentalism. We argue that results concerning shrinkage estimators underwrite a surprising form of holistic pragmatism.
In response to a paper by Harris & Fitelson, Slaney states several open questions concerning possible strategies for proving distributivity in a wide class of positive sentential logics. In this note, I provide answers to all of Slaney’s open questions. The result is a better understanding of the class of positive logics in which distributivity holds.
After some general remarks about closure and counter-closure, I review some historical examples that appear to be cases of knowledge obtained via (rational) inference from unknown premises (i.e., some alleged counterexamples to counter-closure). Then, I discuss a popular strategy for responding to such cases. Finally, I pose a dilemma for this popular strategy, which involves both closure and counter-closure.
This was an international workshop which brought together scholars from the NYC area, Amsterdam (ILLC), and Munich (MCMP). It was inspired by the first Bridges workshop, which was organized by the MCMP and held in NYC in Fall 2014.
In this discussion note, we explain how to relax some of the standard assumptions made in Garber-style solutions to the Problem of Old Evidence. The result is a more general and explanatory Bayesian approach.
In this note, I prove the strongest possible Lewisian triviality result for the indicative conditional and compare it with some previous (weaker) triviality results.
Taking Joyce’s (1998; 2009) recent argument(s) for probabilism as our point of departure, we propose a new way of grounding formal, synchronic, epistemic coherence requirements for (opinionated) full belief. Our approach yields principled alternatives to deductive consistency, sheds new light on the preface and lottery paradoxes, and reveals novel conceptual connections between alethic and evidential epistemic norms.
Here is a podcast in which I discuss some of the ideas in this paper.
Paradoxes of individual coherence (e.g., the preface paradox for individual judgment) and group coherence (e.g., the doctrinal paradox for judgment aggregation) typically presuppose that deductive consistency is a coherence requirement for both individual and group judgment. In this paper, we introduce a new coherence requirement for (individual) full belief, and we explain how this new approach to individual coherence leads to an amelioration of the traditional paradoxes. In particular, we explain why our new coherence requirement gets around the standard doctrinal paradox. However, we also prove a new impossibility result, which reveals that (more complex) varieties of the doctrinal paradox can arise even for our new notion of coherence.
Here is a podcast in which I discuss some of the ideas in this paper.
In this note, we show that standard statistical tests for randomness of finite sequences are language-dependent in an inductively pernicious way.
On this webpage, I provide links to the talks and background readings for a workshop led by Justin Dallmann, Kenny Easwaran, and myself at NYU's Norms of Inquiry Conference (organized by Susanna Siegel and Jane Friedman).
We provide an algorithm for using (merit information, plus) historical ZIP+4 code admissions data to help improve geographical diversity of incoming classes.
According to orthodox (Kolmogorovian) probability theory, conditional probabilities are by definition certain ratios of unconditional probabilities. As a result, orthodox conditional probabilities are regarded as undefined whenever their antecedents have zero unconditional probability. This has important ramifications for the notion of probabilistic independence. Traditionally, independence is defined in terms of unconditional probabilities (the factorization of the relevant joint unconditional probabilities). Various “equivalent” formulations of independence can be given using conditional probabilities. But these “equivalences” break down if conditional probabilities are permitted to have conditions with zero unconditional probability. We reconsider probabilistic independence in this more general setting. We argue that a less orthodox but more general (Popperian) theory of conditional probability should be used, and that much of the conventional wisdom about probabilistic independence needs to be rethought.
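A minimal Python sketch of the asymmetry (with toy numbers of my own devising): the factorization definition of independence remains well-defined, and is even satisfied, when P(B) = 0, while the ratio definition of P(A|B) simply breaks down.

```python
from fractions import Fraction as F

# A toy space over four atomic states; event B receives zero probability.
# (Numbers are illustrative only.)
P = {"ab": F(0), "a~b": F(1, 2), "~ab": F(0), "~a~b": F(1, 2)}

pA  = P["ab"] + P["a~b"]   # P(A) = 1/2
pB  = P["ab"] + P["~ab"]   # P(B) = 0
pAB = P["ab"]              # P(A & B) = 0

# Factorization definition: still well-defined, and satisfied here.
independent_by_factorization = (pAB == pA * pB)
print(independent_by_factorization)  # True

# Ratio analysis of conditional probability: undefined when P(B) = 0,
# so the "P(A|B) = P(A)" formulation of independence cannot even be stated.
def cond(p_joint, p_condition):
    if p_condition == 0:
        raise ZeroDivisionError("P(A|B) undefined on the ratio analysis")
    return p_joint / p_condition
```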
In this note, I explain how Gibbard's Collapse Theorem for the indicative conditional can be generalized, by providing an axiomatic formulation of (the essential premises of) Gibbard's argument.
Here is an input file (tptp/fof syntax) for investigating Gibbardian collapse (using various AR tools). And, here is an input file (prover9 syntax) for verifying the main proof in the paper. Here is a handout which explains this result, and its relation to Lewis's triviality result. The handout reports a new result — that collapse entails import-export, relative to our weak background theory. Here is an input file (prover9 syntax) which verifies this new result.
Bayesian orthodoxy posits a tight relationship between conditional probability and updating. Namely, the probability of an event A after learning B should equal the conditional probability of A given B prior to learning B. We examine whether ordinary judgment conforms to the orthodox view. In three experiments we found substantial differences between the conditional probability of an event A supposing an event B compared to the probability of A after having learned B. Specifically, supposing B appears to have less impact on the credibility of A than learning that B is true.
Joyce (1998) argues that for any credence function that doesn't satisfy the probability axioms, there is another function that dominates it in terms of accuracy. But if some potential credence functions are ruled out as violations of the Principal Principle, then some non-probabilistic credence functions fail to be dominated. We argue that to fix Joyce's argument, one must show that all epistemic values for credence functions derive from accuracy. [Here is a helpful response by Richard Pettigrew.]
In this note, I explain how a variant of David Miller’s (1975) argument concerning the language-dependence of the accuracy of predictions can be applied to Joyce’s (1998) notion of the accuracy of “estimates of numerical truth-values” (viz., Joycean credences). This leads to a potential problem for Joyce’s accuracy-dominance-based argument for the conclusion that credences (understood as “estimates of numerical truth-values” in Joyce’s sense) should obey the probability calculus.
Here is a Mathematica notebook, which verifies the main example reported in the paper. Here is a helpful response by Paul Pedersen and Clark Glymour. And, here is a similar discussion by Richard Pettigrew. In light of these replies (and related considerations involving other judgment types), I have abandoned this line of criticism of such Joycean arguments.
Bayesianism provides a rich theoretical framework, which lends itself rather naturally to the explication of various “contrastive” and “non-contrastive” concepts. In this (brief) discussion, I will focus on issues involving “contrastivism”, as they arise in some of the recent philosophy of science, epistemology, and cognitive science literature surrounding Bayesian confirmation theory.
On this webpage, I explain how to run MacLogic (an old Mac natural deduction tutor) under emulation, which allows it to run on modern Apple Macintosh computers. [Warning: I no longer keep this page up to date.]
In Chapter 1 of Evidence and Evolution, Sober (2008) defends a Likelihoodist account of favoring. The main tenet of Likelihoodism is the so-called Law of Likelihood. In this note, I explain why the Law of Likelihood fails to undergird an adequate explication of favoring. (This was presented as a conference paper at a book symposium for Evidence and Evolution at the APA Central Division Meetings in 2011.)
A number of theories of causation posit that causes raise the probability of their effects. In this paper, we survey a number of proposals for analyzing causal strength in terms of probabilities. We attempt to characterize just what each one measures, discuss the relationships between the measures, and discuss a number of properties of each measure. One encounters the notion of ‘causal strength’ in many contexts. In linear causal models with continuous variables, the regression coefficients are naturally interpreted as causal strengths. In Newtonian Mechanics, the total force acting on a body can be decomposed into component forces due to different sources. Connectionist networks are governed by a system of ‘synaptic weights’ that are naturally interpreted as causal strengths. And in Lewis’s account of ‘causation as influence’ (Lewis 2000), Lewis claims that the extent to which we regard one event as a cause of another depends upon the degree to which one event ‘influences’ the other. In this paper, we examine the concept of causal strength as it arises within probabilistic approaches to causation. In particular, we are interested in attempts to measure the causal strength of one binary variable for another in probabilistic terms. Our discussion parallels similar discussions in confirmation theory, in which a number of probabilistic measures of degree of confirmational support have been proposed. Fitelson (1999) and Joyce (MS) are two recent surveys of such measures.
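As a toy illustration (with made-up numbers, not an example from the paper), here are two of the simplest probabilistic candidates for causal strength: the familiar probability-difference measure P(E|C) − P(E|¬C) and the probability-ratio measure P(E|C)/P(E|¬C).

```python
from fractions import Fraction as F

# Illustrative joint distribution over binary cause C and effect E.
P = {("C", "E"): F(3, 10), ("C", "~E"): F(1, 10),
     ("~C", "E"): F(1, 10), ("~C", "~E"): F(5, 10)}

pC  = P[("C", "E")] + P[("C", "~E")]    # P(C)  = 2/5
pnC = P[("~C", "E")] + P[("~C", "~E")]  # P(~C) = 3/5

pE_given_C  = P[("C", "E")] / pC        # P(E|C)  = 3/4
pE_given_nC = P[("~C", "E")] / pnC      # P(E|~C) = 1/6

delta_p = pE_given_C - pE_given_nC      # difference measure: 7/12
ratio   = pE_given_C / pE_given_nC      # ratio measure: 9/2

print(delta_p, ratio)
```

Both measures agree that C raises the probability of E here, but they quantify the strength of that raising quite differently, which is part of what makes comparing such measures interesting.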
Click here to download the companion Mathematica notebook that goes along with this paper, click here for a PDF version of that notebook, and click here to download the Mathematica package PrSAT that it uses.
From 2004-2010, I was a co-organizer (at first, just with Sahotra Sarkar, and then with many local organizers around the world) of the annual Formal Epistemology Workshops. I am no longer involved with organizing these meetings, but I am very happy to say that the workshops are going strong -- thanks to a terrific community!
The (recent, Bayesian) cognitive science literature on the Wason Task (WT) has been modeled largely after the (not-so-recent, Bayesian) philosophy of science literature on the Paradox of Confirmation (POC). In this paper, we apply some insights from more recent Bayesian approaches to the (POC) to analogous models of (WT). This involves, first, retracing the history of the (POC), and, then, re-examining the (WT) with these historico-philosophical insights in mind.
In this note, I offer a strengthened version of one of Warfield's examples of knowledge from falsehood.
John Pollock has done a lot of interesting and important work on the metaphysics and epistemology of probability over several decades. In Thinking About Acting, we find many fascinating and thought-provoking ideas and arguments (both old and new) about probability. Owing to limitations of space, I will be confining my remarks to a handful of issues addressed in the book pertaining to probability, logic, and epistemology. First, I will discuss some of Pollock’s arguments against Bayesian Epistemology (BE). Here, I’ll try to defend (BE) from what I take to be less than decisive objections. Then, I will make some critical remarks concerning Pollock’s alternative approach to “probabilistic epistemology”, which is based on his (non-Bayesian) theory of “nomic probability”.
The Paradox of the Ravens (aka, The Paradox of Confirmation) is indeed an old chestnut. A great many things have been written and said about this paradox and its implications for the logic of evidential support. The first part of this paper will provide a brief survey of the early history of the paradox. This will include the original formulation of the paradox and the early responses of Hempel, Goodman, and Quine. The second part of the paper will describe attempts to resolve the paradox within a Bayesian framework, and show how to improve upon them. This part begins with a discussion of how probabilistic methods can help to clarify the statement of the paradox itself. And it describes some of the early responses to probabilistic explications. We then inspect the assumptions employed by traditional (canonical) Bayesian approaches to the paradox. These assumptions may appear to be overly strong. So, drawing on weaker assumptions, we formulate a new-and-improved Bayesian confirmation-theoretic resolution of the Paradox of the Ravens.
In this paper, we investigate various possible (Bayesian) precisifications of the (somewhat vague) statements of “the equal weight view” (EWV) that have appeared in the recent literature on disagreement. We will show that the renditions of (EWV) that immediately suggest themselves are untenable from a Bayesian point of view. In the end, we will propose some tenable (but not necessarily desirable) interpretations of (EWV). Our aim here will not be to defend any particular Bayesian precisification of (EWV), but rather to raise awareness about some of the difficulties inherent in formulating such precisifications.
Click here to download the companion Mathematica notebook that goes along with this paper, click here for a PDF version of that notebook, and click here to download the Mathematica package PrSAT that it uses.
A decision procedure (PrSAT) for classical (Kolmogorov) probability calculus is presented. This decision procedure is based on an existing decision procedure for the theory of real closed fields, which has recently been implemented in Mathematica. A Mathematica implementation of PrSAT is also described, along with several applications to various non-trivial problems in the probability calculus.
Click here to download the companion Mathematica notebook that goes along with this paper, click here for a PDF version of that notebook, and click here to download the latest version of my Mathematica package PrSAT itself.
In applying Bayes’s theorem to the history of science, Bayesians sometimes assume – often without argument – that they can safely ignore very implausible theories. This assumption is false: ignoring such theories can seriously distort the history of science, as well as the mathematics and applicability of Bayes’s theorem. There are intuitively very plausible counter-examples. In fact, one can ignore very implausible or unknown theories only if at least one of two conditions is satisfied: (i) one is certain that there are no unknown theories which explain the phenomenon in question, or (ii) the likelihood of at least one of the known theories used in the calculation of the posterior is reasonably large. Often in the history of science, a very surprising phenomenon is observed, and neither of these criteria is satisfied.
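A toy Python calculation (hypothetical priors and likelihoods; the theory labels T1, T2, T3 are invented for illustration) shows how things can go wrong when condition (ii) fails, i.e., when every known theory has a small likelihood:

```python
from fractions import Fraction as F

# Hypothetical priors and likelihoods: two known theories T1, T2,
# and one "very implausible" theory T3 that explains the data well.
prior = {"T1": F(50, 100), "T2": F(49, 100), "T3": F(1, 100)}
lik   = {"T1": F(1, 1000), "T2": F(1, 1000), "T3": F(9, 10)}  # P(E | Ti)

def posterior(theories):
    z = sum(prior[t] * lik[t] for t in theories)
    return {t: prior[t] * lik[t] / z for t in theories}

# Dropping T3 on the grounds of implausibility: T1 looks well supported.
print(posterior(["T1", "T2"])["T1"])             # 50/99

# Keeping T3: it dominates, and T1's posterior collapses.
full = posterior(["T1", "T2", "T3"])
print(float(full["T1"]), float(full["T3"]))
```

The point of the sketch is just that a low-prior, high-likelihood theory can swamp the posterior, so excluding it by fiat distorts the calculation.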
First, a brief historical trace of the developments in confirmation theory leading up to Goodman’s infamous “grue” paradox is presented. Then, Goodman’s argument is analyzed from both Hempelian and Bayesian perspectives. A guiding analogy is drawn between certain arguments against classical deductive logic, and Goodman’s “grue” argument against classical inductive logic. The upshot of this analogy is that the “New Riddle” is not as vexing as many commentators have claimed (especially, from a Bayesian inductive-logical point of view). Specifically, the analogy reveals an intimate connection between Goodman’s problem, and the “problem of old evidence”. Several other novel aspects of Goodman’s argument are also discussed (mainly, from a Bayesian perspective).
The conjunction fallacy has been a key topic in debates on the rationality of human reasoning and its limitations. Despite extensive inquiry, however, the attempt to provide a satisfactory account of the phenomenon has proved challenging. Here we elaborate the suggestion (first discussed by Sides, Osherson, Bonini, & Viale, 2002) that in standard conjunction problems the fallacious probability judgments observed experimentally are typically guided by sound assessments of confirmation relations, meant in terms of contemporary Bayesian confirmation theory. Our main formal result is a confirmation-theoretic account of the conjunction fallacy, which is proven robust (i.e., not depending on various alternative ways of measuring degrees of confirmation). The proposed analysis is shown distinct from contentions that the conjunction effect is in fact not a fallacy, and is compared with major competing explanations of the phenomenon, including earlier references to a confirmation-theoretic account.
In this paper, we describe our initial investigations in computational metaphysics. Our method is to implement axiomatic metaphysics in an automated reasoning system. In this paper, we describe what we have discovered when the theory of abstract objects is implemented in prover9 (a first-order automated reasoning system which is the successor to otter). After reviewing the second-order, axiomatic theory of abstract objects, we show (1) how to represent a fragment of that theory in prover9’s first-order syntax, and (2) how prover9 then finds proofs of interesting theorems of metaphysics, such as that every possible world is maximal. We conclude the paper by discussing some issues for further research.
Here is a link to the Computational Metaphysics project webpage.
Likelihoodists and Bayesians seem to have a fundamental disagreement about the proper probabilistic explication of relational (or contrastive) conceptions of evidential support (or confirmation). In this paper, I will survey some recent arguments and results in this area, with an eye toward pinpointing the nexus of the dispute. This will lead, first, to an important shift in the way the debate has been couched, and, second, to an alternative explication of relational support, which is in some sense a “middle way” between Likelihoodism and Bayesianism. In the process, I will propose some new work for an old probability puzzle: the “Monty Hall” problem.
Hempel first introduced the paradox of confirmation in 1937. Since then, a very extensive literature on the paradox has evolved (Vranas 2004). Much of this literature can be seen as responding to Hempel's subsequent discussions and analyses of the paradox (Hempel 1945). Recently, it was noted that Hempel's intuitive (and plausible) resolution of the paradox was inconsistent with his official theory of confirmation (Fitelson and Hawthorne 2006). In this article, we will try to explain how this inconsistency affects the historical dialectic about the paradox and how it illuminates the nature of confirmation. In the end, we will argue that Hempel's intuitions about the paradox of confirmation were (basically) correct, and that it is his theory that should be rejected, in favor of a (broadly) Bayesian account of confirmation.
Here is a link to the Teaching & Learning Guide that goes along with this article.
This entry provides an (opinionated, historical) overview of inductive logic.
This entry provides an (opinionated, historical) overview of (interpretations of) probability.
Carnap’s inductive logic (or confirmation) project is revisited from an “increase in firmness” (or probabilistic relevance) point of view. It is argued that Carnap’s main desiderata can be satisfied in this setting, without the need for a theory of “logical probability”. The emphasis here will be on explaining how Carnap’s epistemological desiderata for inductive logic will need to be modified in this new setting. The key move is to abandon Carnap’s goal of bridging confirmation and credence, in favor of bridging confirmation and evidential support. (This was presented as part of a symposium on Confirmation and Inductive Logic with Jim Joyce and Patrick Maher at PSA 2004).
Here I review Dick Jeffrey's final book.
We provide an informal introduction to automated reasoning and some of its recent applications.
Here I review Stephan Hartmann and Luc Bovens's monograph.
Michael Strevens has proposed an interesting and novel Bayesian analysis of the Quine-Duhem (Q–D) problem (i.e., the problem of auxiliary hypotheses). Strevens’s analysis involves the use of a simplifying idealization concerning the original Q–D problem. We will show that this idealization is far stronger than it might appear. Indeed, we argue that Strevens’s idealization oversimplifies the Q–D problem, and we propose a diagnosis of the source(s) of the oversimplification.
Naive deductivist accounts of confirmation have the undesirable consequence that if E confirms H, then E also confirms the conjunction H&X, for any X—even if X is completely irrelevant to E and H. Bayesian accounts of confirmation may appear to have the same problem. In a recent article in this journal, Fitelson (2002) argued that existing Bayesian attempts to resolve this problem are inadequate in several important respects. Fitelson then proposes a new-and-improved Bayesian account that overcomes the problem of irrelevant conjunction, and does so in a more general setting than past attempts. We will show how to simplify and improve upon Fitelson’s solution.
Here I review Swinburne's edited volume on Bayes's Theorem.
Here I review Hacking's textbook.
Here I review Howie's historical treatise.
I present a new probabilistic explication of the "degree of coherence" of a set of propositions.
Here is a note explaining two technical corrections to my original coherence measure, and here is a Mathematica notebook which works through the results in that note. The Mathematica notebook is also available in PDF format here.
Here I review Joyce's (first) book on decision theory.
We apply some basic ideas from confirmation theory to the so-called "Monty Hall" Problem.
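For reference, here is the standard Bayes-theorem calculation behind the problem, worked out exactly in Python (this is just the textbook computation, not the confirmation-theoretic analysis of the paper):

```python
from fractions import Fraction as F

# You pick door 1; Monty (who knows where the car is) opens door 3.
prior = {1: F(1, 3), 2: F(1, 3), 3: F(1, 3)}

# Likelihood of Monty opening door 3, given the car's location:
lik_open3 = {1: F(1, 2),  # car behind your door: he picks door 2 or 3 at random
             2: F(1),     # car behind door 2: he must open door 3
             3: F(0)}     # he never reveals the car

z = sum(prior[d] * lik_open3[d] for d in prior)
post = {d: prior[d] * lik_open3[d] / z for d in prior}

print(post[1])  # 1/3 -- probability of winning by staying
print(post[2])  # 2/3 -- probability of winning by switching
```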
Naive deductive accounts of confirmation have the undesirable consequence that if E confirms H, then E also confirms the conjunction H & X, for any X --- even if X is utterly irrelevant to H (and E). Bayesian accounts of confirmation also have this property (in the case of deductive evidence). Several Bayesians have attempted to soften the impact of this fact by arguing that --- according to Bayesian accounts of confirmation --- E will confirm the conjunction H & X less strongly than E confirms H (again, in the case of deductive evidence). I argue that existing Bayesian "resolutions" of this problem are inadequate in several important respects. In the end, I suggest a new-and-improved Bayesian account (and understanding) of the problem of irrelevant conjunction.
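As a simple numerical illustration of the softening claim (made-up numbers; X's irrelevance is modeled here as probabilistic independence from H, which is one natural but not the only reading), the difference measure of confirmation assigns the conjunction a positive but strictly smaller degree of confirmation:

```python
from fractions import Fraction as F

# Toy setup: H entails E (so H & X entails E too); X is independent of H.
pH, pE, pX = F(1, 4), F(1, 2), F(1, 2)
pHX = pH * pX  # irrelevance of X modeled as independence

def diff_confirmation(p_hyp, p_e):
    # For any hypothesis entailing E: P(hyp | E) = P(hyp) / P(E),
    # so the difference measure is P(hyp)/P(E) - P(hyp).
    return p_hyp / p_e - p_hyp

c_H  = diff_confirmation(pH, pE)   # 1/4
c_HX = diff_confirmation(pHX, pE)  # 1/8

print(c_H, c_HX)
assert c_H > c_HX > 0  # both confirmed, the conjunction less strongly
```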
In ‘Corroborating Testimony, Probability and Surprise’, Erik J. Olsson ascribes to L. Jonathan Cohen the claims that if two witnesses provide us with the same information, then the less probable the information is, the more confident we may be that the information is true (C), and the more strongly the information is corroborated (C*). We question whether Cohen intends anything like claims (C) and (C*). Furthermore, he discusses the concurrence of witness reports within a context of independent witnesses, whereas the witnesses in Olsson’s model are not independent in the standard sense. We argue that there is much more than, in Olsson’s words, ‘a grain of truth’ to claim (C), both on his own characterization as well as on Cohen’s characterization of the witnesses. We present an analysis for independent witnesses in the contexts of decision-making under risk and decision-making under uncertainty and generalize the model for n witnesses. As to claim (C*), Olsson’s argument is contingent on the choice of a particular measure of corroboration and is not robust in the face of alternative measures. Finally, we delimit the set of cases to which Olsson’s model is applicable.
Shortest possible axiomatizations for the strict implicational fragments of the modal logics S4 and S5 are reported. Among these axiomatizations is included a shortest single axiom for implicational S4—which to our knowledge is the first reported single axiom for that system—and several new shortest single axioms for implicational S5. A variety of automated reasoning strategies were essential to our discoveries.
With the inclusion of an effective methodology, this article answers in detail a question that, for a quarter of a century, remained open despite intense study by various researchers. Is the formula XCB = e(x,e(e(e(x,y),e(z,y)),z)) a single axiom for the classical equivalential calculus when the rules of inference consist of detachment (modus ponens) and substitution? Where the function e represents equivalence, this calculus can be axiomatized quite naturally with the formulas e(x,x), e(e(x,y), e(y,x)), and e(e(x,y), e(e(y,z),e(x,z))), which correspond to reflexivity, symmetry, and transitivity, respectively. (We note that e(x,x) is dependent on the other two axioms.) Heretofore, thirteen shortest single axioms for classical equivalence of length eleven had been discovered, and XCB was the only remaining formula of that length whose status was undetermined. To show that XCB is indeed such a single axiom, we focus on the rule of condensed detachment, a rule that captures detachment together with an appropriately general, but restricted, form of substitution. The proof we present in this paper consists of twenty-five applications of condensed detachment, concluding with the deduction of transitivity followed by a deduction of symmetry. We also discuss some factors that may explain in part why XCB resisted relinquishing its treasure for so long. Our approach relied on diverse strategies applied by the automated reasoning program OTTER. Thus ends the search for shortest single axioms for the equivalential calculus.
We present short single equational axioms for Boolean algebra in terms of disjunction and negation and in terms of the Sheffer stroke. Previously known single axioms for these theories are much longer than the ones we present. We show that there is no shorter axiom in terms of the Sheffer stroke than the ones we present. Automated deduction techniques were used for several different aspects of the work.
Several forms of symmetry in degrees of evidential support are considered. Some of these symmetries are shown not to hold in general. This has implications for the adequacy of many measures of degree of evidential support that have been proposed and defended in the philosophical literature. [Our paper was reprinted in Philosophy of Probability: Contemporary Readings, A. Eagle (ed.), 2009, Routledge.]
In Chapter 12 of Warrant and Proper Function, Alvin Plantinga constructs two arguments against evolutionary naturalism, which he construes as a conjunction E&N. The hypothesis E says that “human cognitive faculties arose by way of the mechanisms to which contemporary evolutionary thought directs our attention (p. 220).” With respect to proposition N, Plantinga (p. 270) says “it isn’t easy to say precisely what naturalism is,” but then adds that “crucial to metaphysical naturalism, of course, is the view that there is no such person as the God of traditional theism.” Plantinga tries to cast doubt on the conjunction E&N in two ways. His “preliminary argument” aims to show that the conjunction is probably false, given the fact (R) that our psychological mechanisms for forming beliefs about the world are generally reliable. His “main argument” aims to show that the conjunction E&N is self-defeating: if you believe E&N, then you should stop believing that conjunction. Plantinga further develops the main argument in his unpublished paper “Naturalism Defeated” (Plantinga 1994). We will try to show that both arguments contain serious errors. [Our paper was reprinted in Intelligent Design Creationism and its Critics, R. Pennock (ed.), 2001, MIT Press. Plantinga has since responded to our paper in both his book Warranted Christian Belief and his paper "Probability and Defeaters". We hope to write a rejoinder someday.]
A Bayesian account of independent evidential support is outlined. This account is partly inspired by the work of C.S. Peirce. I show that a large class of quantitative Bayesian measures of confirmation satisfy some basic desiderata suggested by Peirce for adequate accounts of independent evidence. I argue that, by considering further natural constraints on a probabilistic account of independent evidence, all but a very small class of Bayesian measures of confirmation can be ruled out. In closing, an application of my account to the problem of evidential diversity is discussed. [This paper was presented at PSA 2000, and it won the PSA Graduate Student Essay Prize that year.]
This article features long-sought proofs with intriguing properties (such as the absence of double negation and the avoidance of lemmas that appeared to be indispensable), and it features the automated methods for finding them. The theorems of concern are taken from various areas of logic that include two-valued sentential (or propositional) calculus and infinite-valued sentential calculus. Many of the proofs (in effect) answer questions that had remained open for decades, questions focusing on axiomatic proofs. The approaches we take are of added interest in that all rely heavily on the use of a single program that offers logical reasoning, William McCune's automated reasoning program OTTER. The nature of the successes and approaches suggests that this program offers researchers a valuable automated assistant. This article has three main components. First, in view of the interdisciplinary nature of the audience, we discuss the means for using the program in question (OTTER), which flags, parameters, and lists have which effects, and how the proofs it finds are easily read. Second, because of the variety of proofs that we have found and their significance, we discuss them in a manner that permits comparison with the literature. Among those proofs, we offer a proof shorter than that given by Meredith and Prior in their treatment of Łukasiewicz's shortest single axiom for the implicational fragment of two-valued sentential calculus, and we offer a proof for the Łukasiewicz 23-letter single axiom for the full calculus. Third, with the intent of producing a fruitful dialogue, we pose questions concerning the properties of proofs and, even more pressing, invite questions similar to those this article answers.
Certain distributivity results for Łukasiewicz’s infinite-valued logic Łℵ0 are proved axiomatically (for the first time) with the help of the automated reasoning program Otter. In addition, non-distributivity results are established for a wide variety of positive substructural logics by the use of logical matrices discovered with the automated model finding programs Mace and MaGIC.
We present the simplest known axiomatization of the relevant (implicational) logic RM→.
Urquhart and Méndez and Salto claim to establish completeness theorems for the system C and two of its negation extensions. In this note, we do the following three things: (1) provide a counterexample to all of these alleged completeness theorems, (2) attempt to diagnose the mistakes in the reported completeness proofs, and (3) provide complete axiomatizations of the desired systems.
This is my Ph.D. thesis (UW-Madison, 2001).
In this essay, we reply to David Christensen's paper "Measuring Confirmation".
Here we provide a detailed critical review of Dembski's (first) book. [Our review was reprinted in Intelligent Design Creationism and its Critics, R. Pennock (ed.), 2001, MIT Press.]
Here we review Skyrms's (first) book on evolutionary game theory.
Contemporary Bayesian confirmation theorists measure degree of (incremental) confirmation using a variety of non-equivalent relevance measures. As a result, a great many of the arguments surrounding quantitative Bayesian confirmation theory are implicitly sensitive to choice of measure of confirmation. Such arguments are enthymematic, since they tacitly presuppose that certain relevance measures should be used (for various purposes) rather than other relevance measures that have been proposed and defended in the philosophical literature. I present a survey of this pervasive class of Bayesian confirmation-theoretic enthymemes, and a brief analysis of some recent attempts to resolve the problem of measure sensitivity. [This paper was presented at PSA 1998.]
In October 1996, Bill McCune et al. at Argonne National Laboratory made national headlines (see Kolata (1996) and Peterson (1997)) with their computer proof of the long-standing Robbins conjecture. The Robbins conjecture had stumped some of the world's best mathematicians for over 60 years. Amazingly, the Argonne team's automated theorem-proving program EQP took only 8 days to find a proof of it. Unfortunately, the proof found by EQP is quite complex and difficult to follow. Some of the steps of the EQP proof require highly complex and unintuitive substitution strategies. As a result, it is nearly impossible to reconstruct or verify the computer proof of the Robbins conjecture entirely by hand. This is where the unique symbolic capabilities of Mathematica 3 come in handy. With the help of Mathematica, it is relatively easy to work out and explain each step of the dense EQP proof in detail. In this paper, I use Mathematica to provide a detailed, step-by-step reconstruction of the highly complex EQP proof of the Robbins conjecture. [This paper was presented at the 1997 Mathematica Developer Conference, and subsequently published in Mathematica in Education and Research.]
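The equation at issue is easy to state: a Robbins algebra has a binary operation + and a unary operation n satisfying n(n(x + y) + n(x + n(y))) = x, and the conjecture was that every such algebra is Boolean. The sketch below is my own illustration (unrelated to the EQP or Mathematica proofs): it merely confirms the easy direction, that the Robbins equation holds in the two-element Boolean algebra, with + read as join and n as complement.

```python
# Sanity check (my illustration, not part of the EQP proof): the
# Robbins equation n(n(x + y) + n(x + n(y))) = x holds in the
# two-element Boolean algebra, reading + as join and n as complement.
# This is the trivial direction; what EQP proved is the converse,
# that the equation forces an algebra to be Boolean.

def n(a):
    """Complement on {0, 1}."""
    return 1 - a

def j(a, b):
    """Join (disjunction) on {0, 1}."""
    return a | b

def robbins(x, y):
    return n(j(n(j(x, y)), n(j(x, n(y))))) == x

print(all(robbins(x, y) for x in (0, 1) for y in (0, 1)))
```

Checking the equation in a model takes microseconds; deriving the Boolean axioms from it is what took sixty years and, ultimately, eight days of EQP's search.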
Wayne (1995) critiques the Bayesian explication of the confirmational significance of evidential diversity (CSED) offered by Horwich (1982). Here, I argue that Wayne’s reconstruction of Horwich’s account of CSED is uncharitable. As a result, Wayne’s criticisms ultimately present no real problem for Horwich. I try to provide a more faithful and charitable rendition of Horwich’s account of CSED. Unfortunately, even when Horwich’s approach is charitably reconstructed, it is still not completely satisfying.
The investigation of the visibility of Earth-bound satellites using three Deep Space Network (DSN) stations is performed in four steps, which progress from a very simplistic two-dimensional model to a general three-dimensional paradigm. [I completed this report while working as a summer intern at a NASA contractor (CSC).]