Volume 19 (2019)

    Hobbes’s Laws of Nature in Leviathan as a Synthetic Demonstration: Thought Experiments and Knowing the Causes

    Marcus P. Adams

    January 2019, vol. 19, no. 05, pp. 1-23

    The status of the laws of nature in Hobbes’s Leviathan has been a continual point of disagreement among scholars. Many agree that since Hobbes claims that civil philosophy is a science, the answer lies in an understanding of the nature of Hobbesian science more generally. In this paper, I argue that Hobbes’s view of the construction of geometrical figures sheds light upon the status of the laws of nature. In short, I claim that the laws play the same role as the component parts – what Hobbes calls the “cause” – of geometrical figures. To make this argument, I show that in both geometry and civil philosophy, Hobbes proceeds by a method of synthetic demonstration as follows: 1) offering a thought experiment by privation; 2) providing definitions by explication of “simple conceptions” within the thought experiment; and 3) formulating generative definitions by making use of those definitions by explication. In just the same way that Hobbes says that the geometer should “put together” the parts of a square to learn its cause, I argue that the laws of nature are the cause of peace.

    Groundwork for an Explanationist Account of Epistemic Coincidence

    David Faraci

    January 2019, vol. 19, no. 04, pp. 1-26

    Many philosophers hold out hope that some final condition on knowledge will allow us to overcome the limitations of the classic “justified true belief” analysis. The most popular intuitive glosses on this condition frame it as an absence of epistemic coincidence (accident, luck). In this paper, I lay the groundwork for an explanationist account of epistemic coincidence—one according to which, roughly, beliefs are non-coincidentally true if and only if they bear the right sort of explanatory relation to the truth. The paper contains both positive arguments for explanationism and negative arguments against its competitors: views that understand coincidence in terms of causal, modal, and/or counterfactual relations. But the relationship between these elements is tighter than is typical. I aim to show not only that explanationism is independently plausible and superior to its competitors, but also that it helps make sense of both the appeal and the failings of those competitors.

    Are There Indefeasible Epistemic Rules?

    Darren Bradley

    January 2019, vol. 19, no. 03, pp. 1-19

    What if your peers tell you that you should disregard your perceptions? Worse, what if your peers tell you to disregard the testimony of your peers? How should we respond if we get evidence that seems to undermine our epistemic rules? Several philosophers (e.g. Elga 2010, Titelbaum 2015) have argued that some epistemic rules are indefeasible. I will argue that all epistemic rules are defeasible. The result is a kind of epistemic particularism, according to which there are no simple rules connecting descriptive and normative facts. I will argue that this type of particularism is more plausible in epistemology than in ethics. What remains is an unwieldy and possibly infinitely long epistemic rule — an Uber-rule. I will argue that the Uber-rule applies to all agents but is still defeasible — one may get misleading evidence against it and rationally lower one’s credence in it.

    Merleau-Ponty and Naïve Realism

    Keith Allen

    January 2019, vol. 19, no. 02, pp. 1-25

    This paper has two aims. The first is to use contemporary discussions of naïve realist theories of perception to offer an interpretation of Merleau-Ponty’s theory of perception. The second is to use consideration of Merleau-Ponty’s theory of perception to outline a distinctive version of a naïve realist theory of perception. In a Merleau-Pontian spirit, these two aims are inter-dependent.

    Interventions in Premise Semantics

    Paolo Santorio

    January 2019, vol. 19, no. 01, pp. 1-27

    This paper investigates what happens when we merge two different lines of theorizing about counterfactuals. One is the comparative closeness view, developed by Stalnaker and Lewis in the framework of possible worlds semantics. The second is the interventionist view, which is part of the causal models framework developed in statistics and computer science. Common lore and the existing literature have it that the two views fit together easily, aside from a few details. I argue that, on the contrary, transplanting causal-models-inspired ideas into a possible worlds framework yields a new semantics. The difference is grounded in different algorithms for handling inconsistent information, and hence touches on issues at the very heart of a semantics for contrary-to-fact conditionals. Roughly, Stalnaker/Lewis semantics requires us to evaluate the consequent of a counterfactual at all closest antecedent-verifying possibilities. Causal-models-based semantics also does this, but in addition it uses the information contained in the antecedent, together with background causal information, to shift which worlds count as closest. This makes systematically different predictions and generates a new logic. The upshot is that we have a new semantics to study, and a substantial theoretical choice to make.
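
    As an illustrative gloss on the contrast described in this abstract (not the paper’s own formalism; the ordering $\leq_w$, the bracket notation $\llbracket A \rrbracket$, and the revised ordering in the second clause are assumed shorthand), the two truth conditions might be rendered roughly as:

    \[
    w \Vdash A > C \quad \text{iff} \quad \text{for all } w' \in \min\nolimits_{\leq_w}(\llbracket A \rrbracket),\ w' \Vdash C \qquad \text{(comparative closeness)}
    \]
    \[
    w \Vdash A > C \quad \text{iff} \quad \text{for all } w' \in \min\nolimits_{\leq_w^{A,\mathcal{C}}}(\llbracket A \rrbracket),\ w' \Vdash C \qquad \text{(causal-models-inspired)}
    \]

    where $\leq_w^{A,\mathcal{C}}$ stands for the closeness ordering revised in light of the antecedent $A$ and background causal information $\mathcal{C}$; on the first clause, the ordering is fixed independently of $A$.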