Of the things far removed from immediate human concerns – the very small, the very distant, events in the remote past, processes unfolding over very long periods of time – it is commonly assumed that science can, and does, give us knowledge. Perhaps science got many things wrong in the past, but it is assumed that many, if not most, contemporary scientific theories are definitely on the right lines, if not plainly true; this is scientific realism. But what makes us so sure we are right now, when we have been wrong so often in the past? If genuine scientific practice can readily lead to false theories of the very small, the very distant, and so on, doesn’t that cast significant doubt on what is today commonly referred to as ‘scientific knowledge’? Philosophers of science have spent more than thirty years refining their response to this challenge. This project addresses one of the most sophisticated contemporary scientific realist positions, summarised by the following claim:

When a scientific theory (broadly construed) brings about substantial scientific successes (e.g. novel predictions), then the elements of that theory which did the work to bring about those successes are very likely at least approximately true.

Call this selective scientific realism. Despite deep issues concerning several of the key concepts here, there is widespread agreement that selective scientific realism can be tested by the history of science. That is, one can look to specific episodes in the history of science to see whether they support this claim. One can think of the position as analogous to a scientific theory which has been put forward, warranting extensive and thorough tests. But the position has not been thoroughly tested by the historical record: philosophers have focused on the same few cases for the past thirty years. In addition, critics have not adequately tracked contemporary realist responses; in particular, they have rarely concerned themselves with identifying which components of a theory were responsible for its novel predictive success.

GRQ: Which cases in the history of science threaten selective realism, and which substantive versions of selective realism (if any) are capable of addressing those cases?

The reference to different ‘versions’ is necessary because there are different takes on what should be meant by ‘did the work’, ‘approximately true’, and so on. The currently established versions of selective realism to be considered are semirealism (Chakravartty 1998), structural realism (Worrall 1989; Ladyman 2008), and the positions put forward in Kitcher (1993) and Psillos (1999). Other possible characterisations of the position will also be considered, especially those building on the suggestions in Lyons (2002, 2006, 2009a, 2009b) and Vickers (2013).

The historical episodes to be considered have been selected by the PI and Co-I from previous work (e.g. the list in Vickers 2013). The following are the most promising candidates for moving the debate forward significantly, and in directions not previously explored.

Cases from Thermodynamics

(a) Rankine’s vortex theory of thermodynamics

(b) Taking the thermodynamic limit

(c) Models in statistical mechanics which assume that particles do not interact and yet share energy

Cases from the History of Chemistry

(d) Scheele’s phlogiston theory (a distinctive history of science not currently detailed in the literature on ‘phlogiston theory’)

(e) Dalton’s atomic theory

(f) Mendeleev’s periodic law

(g) Kekulé’s theory of the benzene molecule

Cases from the History of Biology and Medicine

(h) Drug receptor theory

(i) Teleomechanist theories of organic development and the prediction of gill slits in human ontogenetic development

(j) Reduction division in the formation of sex cells

In several of these cases important work has already been done by historians. For example, in the case of (a), Ben Marsden (Aberdeen) is currently completing the first major scientific biography of Macquorn Rankine. In the case of (h), Holger Maehle (Durham) has been at the forefront of work on the history of drug receptors (e.g. Prüll et al. 2009); this will serve as a test case from the history of medicine, a field completely unexplored within the realism debate. And Kyle Stanford (2006, 2009) has very briefly introduced cases (i) and (j), but has yet to draw on the wealth of available historical work (e.g. Churchill 1970, Gould 1977, Roll-Hansen 2009).

Going beyond these specific examples, the investigators have reason to anticipate that the exploration of further cases will be fruitful. Some of these are listed in Lyons (2002, 2006, 2013, footnote 13) and Vickers (2013), but others have yet to be publicly introduced to the realism debate. A PhD student will take on at least two major case studies over the course of the project. A balance will be struck between investigating cases in depth and investigating a significant number of cases; this balance will be dictated at all times by the significance of the analysis for answering the guiding research question (GRQ), as given above.