James Nguyen

Contact details

Name: James Nguyen
Institute: Institute of Philosophy
Email address: James.Nguyen@sas.ac.uk

Research Summary and Profile

Research interests: Philosophy
Publication Details

Related publications/articles:

Date         Details
12-Sep-2020  Modelling Nature: An Opinionated Introduction to Scientific Representation (monograph)
08-Sep-2020  Judgement aggregation in scientific collaborations: The case for waiving expertise (article)
09-May-2020  Models and Denotation (chapter)

Publications available on SAS-space:

Date Details
Mar-2019  It's not a game: accurate representation with toy models (peer reviewed)

Drawing on 'interpretational' accounts of scientific representation, I argue that the use of so-called 'toy models' poses no particular philosophical puzzle. More specifically, I argue that once one gives up the idea that models are accurate representations of their targets only if they are appropriately similar, then simple and highly idealised models can be accurate in the same way that more complex models can be. Their differences turn on trading precision for generality, but, if they are appropriately interpreted, toy models should nevertheless be considered accurate representations. A corollary of my discussion is a novel way of thinking about idealisation more generally: idealised models may distort features of their targets, but they needn't misrepresent them.

Apr-2019  Mirrors Without Warnings (peer reviewed)

Veritism, the position that truth is necessary for epistemic acceptability, seems to be in tension with the observation that much of our best science is not, strictly speaking, true when interpreted literally. This generates a paradox: (i) truth is necessary for epistemic acceptability; (ii) the claims of science have to be taken literally; (iii) much of what science produces is not literally true and yet it is acceptable. We frame Elgin’s project in True Enough as being motivated by, and offering a particular resolution to, this paradox. We discuss the paradox with a particular focus on scientific models and argue that there is another resolution available which is compatible with retaining veritism: rejecting the idea that scientific models should be interpreted literally.

May-2019  The limitations of the Arrovian consistency of domains with a fixed preference (peer reviewed)

In this paper I investigate the properties of social welfare functions defined on domains where the preferences of one agent remain fixed. Such domains are degenerate cases of those investigated, and proved Arrow-consistent, by Sakai and Shimoji (Soc Choice Welf 26(3):435–445, 2006). Thus they admit functions from them to a social preference that satisfy Arrow's conditions of Weak Pareto, Independence of Irrelevant Alternatives, and Non-dictatorship. However, I prove that under any function satisfying these conditions on such a domain, for any triple of alternatives, if the agent with the fixed preferences does not determine the social preference on any pair of them, then some other agent determines the social preference on the entire triple.
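
To make the setup concrete, here is a minimal brute-force sketch in Python (my illustration, not code from the paper): two agents rank three alternatives, agent 0's preference is held fixed at a > b > c, and the script exhaustively searches the 6^6 = 46,656 social welfare functions on that degenerate domain for those satisfying the three Arrow conditions. The formalisations are the standard ones; the encoding and names are mine.

from itertools import permutations, product

ALTS = ("a", "b", "c")
ORDERS = list(permutations(ALTS))          # the 6 strict linear orders on {a, b, c}
FIXED = ("a", "b", "c")                    # agent 0's fixed preference: a > b > c
PROFILES = [(FIXED, o) for o in ORDERS]    # agent 1's preference varies freely
ORDERED_PAIRS = [(x, y) for x in ALTS for y in ALTS if x != y]

def prefers(order, x, y):
    # True iff x is ranked strictly above y in the given linear order.
    return order.index(x) < order.index(y)

def weak_pareto(swf):
    # If every agent ranks x above y, society must rank x above y.
    return all(prefers(swf[p], x, y)
               for p in PROFILES
               for (x, y) in ORDERED_PAIRS
               if all(prefers(r, x, y) for r in p))

def iia(swf):
    # Society's ranking of a pair depends only on the agents' rankings of that pair.
    for p, q in product(PROFILES, repeat=2):
        for (x, y) in ORDERED_PAIRS:
            if (all(prefers(p[i], x, y) == prefers(q[i], x, y) for i in (0, 1))
                    and prefers(swf[p], x, y) != prefers(swf[q], x, y)):
                return False
    return True

def dictator(swf, i):
    # Agent i dictates: society copies agent i's ranking of every pair at every profile.
    return all(prefers(swf[p], x, y) == prefers(p[i], x, y)
               for p in PROFILES
               for (x, y) in ORDERED_PAIRS)

# Enumerate all 6**6 = 46,656 candidate social welfare functions on this domain.
survivors = []
for outputs in product(ORDERS, repeat=len(PROFILES)):
    swf = dict(zip(PROFILES, outputs))  # one social linear order per profile
    if (weak_pareto(swf) and iia(swf)
            and not dictator(swf, 0) and not dictator(swf, 1)):
        survivors.append(swf)

print(len(survivors), "functions satisfy Weak Pareto, IIA and Non-dictatorship")

Consistently with the Arrow-consistency result of Sakai and Shimoji that the abstract cites, the search turns up surviving functions; and because Non-dictatorship rules out agent 1 settling the whole triple, every survivor must have agent 0's fixed preference settling the social ranking of at least one pair, which is just the contrapositive, for a single triple, of the result described above.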

Nov-2020  Unlocking Limits (peer reviewed)

In a series of recent papers we have developed what we call the DEKI account of scientific representation, according to which models represent their targets via keys. These keys provide a systematic way to move from model-features to features to be imputed to their targets. We show how keys allow for accurate representation in the presence of idealisation, and further illustrate how investigating them provides novel ways to approach certain currently debated questions in the philosophy of science. To add specificity, we offer a detailed analysis of a kind of key that is crucial in many parts of physics, namely what we call limit keys. These keys exploit the fact that the features exemplified by such models are limits of the features of the target.
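
As a toy illustration of the idea (my own gloss, not an example from the paper): a frictionless inclined-plane model exemplifies the acceleration $g\sin\theta$, while a real block with kinetic friction coefficient $\mu$ accelerates at

a_{\mu} = g(\sin\theta - \mu\cos\theta), \qquad \lim_{\mu \to 0} a_{\mu} = g\sin\theta

so the model's feature is the $\mu \to 0$ limit of the target's feature, and a limit key would license imputing to a low-friction target not the exact model value but approximately that value, with the discrepancy controlled by $\mu$.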

Nov-2020  Do Fictions Explain? (peer reviewed)

I argue that fictional models, construed as models that misrepresent certain ontological aspects of their target systems, can nevertheless explain why the latter exhibit certain behaviour. They can do this by accurately representing whatever it is that that behaviour counterfactually depends on. However, we should be sufficiently sensitive to different explanatory questions, i.e., ‘why does certain behaviour occur?’ vs. ‘why does the counterfactual dependency invoked to answer that question actually hold?’. With this distinction in mind, I argue that whilst fictional models can answer the first sort of question, they do so in an unmysterious way (contrary to what one might initially think about such models). Moreover, I claim that the second question poses a dilemma for the defender of the idea that fictions can explain: either these models cannot answer these sorts of explanatory questions, precisely because they are fictional; or they can, but in a way that requires reinterpreting them such that they end up accurately representing the ontological basis of the counterfactual dependency, i.e., reinterpreting them so as to rob them of their fictional status. Thus, the existence of explanatory fictions does not put pressure on the idea that accurate representation of some aspect of a target system is a necessary condition on explaining that aspect.

Research Projects & Supervisions

Research projects:

Epistemological Pluralism
