Volume 20 (2020)
Samuel C. Fletcher
January 2020, vol. 20, no. 03, pp. 1-22
How can inferences from models to the phenomena they represent be justified when those models represent only imperfectly? Pierre Duhem considered just this problem, arguing that inferences from mathematical models of phenomena to real physical applications must also be demonstrated to be approximately correct when the assumptions of the model are only approximately true. Though little discussed among philosophers, this challenge was taken up (if only sometimes implicitly) by mathematicians and physicists both contemporaneous with and subsequent to Duhem, yielding a novel and rich mathematical theory of stability with epistemological consequences.
Michael Glanzberg and Jeffrey C. King
January 2020, vol. 20, no. 02, pp. 1-29
In this paper, we defend a traditional approach to semantics, which holds that the outputs of compositional semantics are propositional, i.e. truth conditions (or anything else appropriate to be the objects of assertions or the contents of attitudes). Though traditional, this view has been challenged on a number of fronts over the years. Since the classic work of Lewis, arguments have been offered which purport to show that semantic composition requires values that are relativized, e.g. to times, or to other parameters that render them no longer propositional. Focusing on recent variants of these arguments involving quantification and binding, we argue that a correct understanding of how composition works gives no reason to relativize semantic values, and that propositional semantic values are in fact the preferred option. We take our argument to be mainly empirical, but along the way, we defend some more general theses. Simple propositional semantic values are viable in composition, we maintain, because composition is itself a complex phenomenon, involving multiple modes of composition. Furthermore, some composition principles make adjustments to the meanings of constituents in the course of composition. These adjustments are triggered by syntactic environments. We argue that such small contributions of meaning from syntactic structure are acceptable.
Sara Aronowitz and Tania Lombrozo
January 2020, vol. 20, no. 01, pp. 1-18
Mental simulation — such as imagining tilting a glass to figure out the angle at which water would spill — can be a way of coming to know the answer to an internally or externally posed query. Is this form of learning a species of inference or a form of observation? We argue that it is neither: learning through simulation is a genuinely distinct form of learning. On our account, simulation can provide knowledge of the answer to a query even when the basis for that answer is opaque to the learner. Moreover, through repeated simulation, the learner can reduce this opacity, supporting self-training and the acquisition of more accurate models of the world. Simulation is thus an essential part of the story of how creatures like us become effective learners and knowers.