On the pragmatic equivalence between representing data and phenomena

James Nguyen1

2016 Philosophy of Science 83(2) pp. 171-191
http://www.journals.uchicago.edu/doi/full/10.1086/684959

1 j.nguyen1@lse.ac.uk I am particularly grateful to Roman Frigg, Alexandru Marcoci, F. A. Muller, Bryan Roberts, and two anonymous referees for this journal for extensive feedback on earlier drafts of this article. Thanks also to audiences at Mathematizing Science II at the University of East Anglia, the BSPS

Abstract

Van Fraassen (2008) argues that data provide the target-end structures required by structuralist accounts of scientific representation. But models represent phenomena, not data. Van Fraassen agrees, but argues that there is no pragmatic difference between taking a scientific model to accurately represent a physical system and taking it to accurately represent data extracted from it. In this paper I reconstruct his argument and show that it turns on the false premise that the pragmatic content of acts of representation includes doxastic commitments.

1. Introduction

Models are important units of science and one of their primary roles is to represent phenomena. This much is uncontroversial. How they do so, and, not unrelatedly, how they do so accurately, isn't. One popular suggestion is to take models to be mathematical structures, and appeal to a morphism between models and their target systems as constituting, at least in part, the representation relation between the two.2 An alternative suggestion is to focus on accurate representation, and appeal to such morphisms as establishing this instead. But target systems are physical phenomena, and thus are not straightforwardly the kind of things that can enter into morphisms.3 So the structuralist of either stripe is required to provide an account of where the target-end structures come from. One suggestion is that data models supply them. But as van Fraassen (2008) notes, scientific models ultimately represent, and by implication, accurately or inaccurately represent, phenomena, not data. His response is to argue that, for a particular individual, in a given context, there is no pragmatic difference between accurately representing the two. When using data to locate a target system in the logical space of a scientific model, a model user cannot doubt that the model accurately represents the target whilst granting that it accurately represents the data, on pain of a pragmatic contradiction.

2 There are different ways of cashing out the formal details of this suggestion. Some take mathematical structures to be set-theoretic (Suppes 1960). Others prefer state-spaces (van Fraassen 1980) or partial structures (French and Ladyman 1999, Bueno and French 2011). The particular morphism appealed to also varies. Isomorphisms (Suppes 1960), homomorphisms (Mundy 1986), partial isomorphisms or homomorphisms (accompanying partial structures), and isomorphic embeddings (van Fraassen 1980, 2008) have all been suggested. The terminology used throughout this paper is set-theoretic but the same points could be made in other languages (i.e. category theory). I use 'morphism' to remain neutral between the invoked structure-preserving mappings.

3 Throughout this paper I use the terms 'target systems' and 'target phenomena' interchangeably, with no suggestion that as 'systems' they are thereby structured.
This contradiction is akin to Moore's paradox, stated in terms of representation rather than assertion. But the argument requires that the act of using a data model to locate the target system in logical space induces certain pragmatic commitments. I will argue that this is false, so the argument is unsound.

The structure of this paper is as follows. In Section 2 I distinguish between two different structuralist claims. Model-target morphisms may be invoked to establish that models represent their targets, or that they represent them accurately. I consider whether data supply the target-end structures required by either claim. I point out that since data and phenomena are distinct, this poses a problem for the structuralist. In Section 3 I reconstruct van Fraassen's solution: pragmatically, in a given context, for a particular individual, there is no difference between accurately representing the two. This reconstruction requires significant clarification regarding where van Fraassen's argument concerns representation simpliciter and where it concerns accurate representation. In Section 4 I make explicit how the argument requires that acts of representation induce pragmatic commitments and demonstrate that this is false. I consider and rebut possible responses to my concerns and argue that van Fraassen's apparent focus on providing an account of how models represent accurately pre-judges the more fundamental question: how do they represent in the first place?

2. Representation, accurate representation, and data

The distinction between accurate representation and representation simpliciter has been explicit in the literature since at least Suárez (2003). Models can misrepresent their targets, by attributing to them features they don't have, but still represent them. The history of science provides numerous examples. Despite the efforts to carefully distinguish between philosophical accounts of scientific representation and accurate representation, the temptation to blur the two remains, especially in discussions of the structuralist position. Critics have typically taken structuralists to be committed to the claim that model-phenomena morphisms constitute, at least in part, representational relationships (Suárez 2003; Frigg 2006). But others have interpreted structuralists as claiming that they play a role in establishing representational accuracy (Contessa 2011).4 Furthermore, it's plausible that they are supposed to help address both notions. Muller (2011) suggests choosing tailor-made morphisms on a case-by-case basis, where a weaker morphism establishes the representational relationship and a stronger one establishes representational accuracy. Similar suggestions have been made by advocates of the partial isomorphism approach (Bueno and French 2011).

Van Fraassen's argument for the pragmatic equivalence between taking a model to accurately represent phenomena and data is explicitly couched in terms that concern accuracy (or, in line with his constructive empiricism, accurate representation of observable phenomena): 'empirical adequacy', 'fit', 'match', and so on. But as discussed below, his argument makes use of both types of representational relationship, and clarifying what he is concerned with, and where, is an important, and often non-trivial, task.

4 Even once the role played by morphisms is fixed, the extent of that role is contentious. Some attempt to reduce representation to model-target morphisms (French 2003, albeit with caveats). Others, including van Fraassen, are happy to adopt a non-reductive strategy and additionally appeal to the intentions of model users. Throughout this paper I use the term 'structuralist' to refer to any account that takes a model-target morphism to be at least a necessary condition on either representation or accurate representation.
All structuralists are faced with the problem that morphisms, by definition, only hold between mathematical objects. And many target systems, animal populations, celestial bodies, and so on, are not structures, at least in any obvious way. Set-theoretic structures are abstract, mathematical entities. Targets are physical. So the onus is on the structuralist to provide an account of where to find the structure at the target-end of the morphism. One suggestion is that targets instantiate structures in the sense that individuals in a system can be collected into a domain and the physical relations in the system provide extensional relations defined over it. I do not discuss this suggestion here, beyond noting that van Fraassen himself describes it as the '"dormitive virtue" response [which is not] only … merely verbal, but … also hijacks a term from mathematics for unwarranted use elsewhere' (2010, 549). An alternative is to appeal to data models as supplying the requisite target-end structures. This approach originated in Suppes (1962), and is found in van Fraassen's earlier work, but van Fraassen (2008) provides the most fully developed account of the role of data in the structuralist tradition, and as such is my primary interest here.5

5 In the following sections, all references given only with page numbers refer to Scientific Representation: Paradoxes of Perspective (2008). Formatting in quotations is from the original unless indicated otherwise.

What are data models? Experimental measuring processes gather raw data. These are then cleaned (with anomalous data points rejected and measurement error taken into account) and usually idealized, e.g. discrete data points may be replaced by a continuous function. Often, although not always, the result is a smooth curve through the data points that satisfies certain theoretical desiderata.6 These resulting data models can be treated as set-theoretic structures. Assuming that the data points are numeric, the smooth curve is a function that can be treated as a relation defined over ℝ, or ℝⁿ, or intervals thereof.7

6 See Harris (2003); van Fraassen (2008, 166-68), for further elaboration on this process. Van Fraassen's discussion throws up a terminological issue that needs regimenting to avoid confusion. Throughout this paper I use 'data model' rather than van Fraassen's 'surface model' to refer to the end result of the cleaning and idealizing process. I also use 'scientific model' in place of van Fraassen's 'theoretical model'.

7 The example of numerical data is illustrative. As van Fraassen notes, the process of creating data models is not restricted to 'number assigning', and the resulting structures do not have to have ℝ as their domain. For example, a measurement procedure may only provide an ordinal ranking, and therefore deliver a different kind of structure (pp.158-60). This has no bearing on the discussion below.

Thus, if data models are invoked as supplying target-end structures then the structuralist can conclude that a scientific model represents, or accurately represents, a data model only if the two are appropriately morphic. But the following points should make us suspicious about whether this suffices as an account of scientific representation:
1. Phenomena ≠ Data: They are not the same. As van Fraassen puts it: 'phenomena are actual objects, events, and processes, while [data models] are the products of our independent intellectual activity' (p.259). That a scientific model represents data does not straightforwardly establish that it represents the phenomenon from which the data were gathered.

2. Loss of Reality: Models (ultimately at least) represent phenomena. And by implication, models (ultimately at least) accurately or inaccurately represent phenomena. This does not preclude data being represented, or accurately represented; it just requires that phenomena are the ultimate targets of scientific representation.

These points aren't new. Bogen and Woodward (1988) introduced the data-phenomena distinction, where the latter term is liberally interpreted as referring to objects, features of objects, events, processes, mechanisms, and so on. Their example of the discovery of weak neutral currents makes explicit that data (e.g. bubble chamber photographs or a data model extracted from them) and phenomena (interactions between neutrinos and bosons) should not be conflated. This is particularly pressing when we focus on representation simpliciter: our best theory of elementary particles, the so-called 'standard model', is about particles and their interactions, not bubble chamber photographs. But the concern remains when transposed into the context of accurate representation. If it is phenomena that are ultimately represented by our scientific models, then it is they that are accurately or inaccurately represented. Morphisms between scientific models and data can provide evidence for whether this is the case, but the relation of accurate representation is ultimately directed at phenomena.

One could argue that it is data, not phenomena, that are represented (in either sense). But then Loss of Reality looms. Muller, in discussing Suppes' use of data models, pithily states a version of the objection as follows: 'The best one could say is that a data structure D seems to act as simulacrum of the concrete actual being B … But this is not good enough. We don't want simulacra. We want the real thing. Come on.' (2011, 98)

Van Fraassen is acutely aware of this (he coined the phrase 'Loss of Reality' (p.258)). He states the concern as follows: 'Oh, so you say that the only "matching" is between data models and theoretical [scientific] models. Hence the theory does not confront the observable phenomena, those things, events, and processes out there, but only certain representations [i.e. data models] of them.' (ibid.) And he claims that '[a]n empiricist account of what the sciences are all about must absolutely answer this objection' (ibid.). Without an answer, the structural empiricist is left in the uncomfortable position whereby it is data, the 'products of our independent intellectual activity', not phenomena, that are the ultimate targets of scientific models.
His phrasing in the quotation above suggests that he is concerned with the question of accurate representation rather than representation simpliciter. But this isn't straightforward (cf. Thomson-Jones 2011). He starts the discussion with the claim that the fundamental question to be answered is:

'How can an abstract entity, such as a mathematical structure, represent something that is not abstract, something in nature?' (p.240)

But he then shifts to the question of how a structure can do so accurately:

'The question how an abstract structure can represent something … is just this: how, or in what sense, can such an abstract entity as a model "save" or fail to "save" this concrete phenomenon' (p.245)

And then, when presenting his solution, he couches it in terms of 'fit', 'match', 'empirical adequacy' and so on, and explicitly states:

'If a model were offered to represent the phenomenon, that structural relation [a model-phenomenon morphism] would determine whether the model was adequate with respect to its purpose' (pp.249-250, emphasis added)

As discussed above, regardless of whether he is concerned with accurate representation or representation simpliciter, answering Loss of Reality requires an account of how morphisms between models and data establish that phenomena are the ultimate targets of scientific models. For the purposes of this paper I take van Fraassen's solution, stated in terms of accurate representation, at its word. I interpret his argument as an attempt to establish the pragmatic equivalence between taking a scientific model to accurately represent a phenomenon and to accurately represent (in virtue of a morphism) data extracted from it. But his argument utilizes, and at times equivocates between, both representational notions and, as I argue in Section 4, this equivocation is at least partly to blame for its eventual failure.

3. Van Fraassen's argument

Van Fraassen's strategy for dealing with Loss of Reality is to defuse it with what he describes as a 'Wittgensteinian move' (p.254) by invoking pragmatic features in the contexts of using scientific models. He claims that despite the data-phenomena distinction, for a given scientist, in a given context, there is no difference between accurately representing the two. That accurately representing data is the same as accurately representing the system that provided it is claimed to be a 'pragmatic tautology … [something that is] … logically contingent but undeniable nonetheless' (p.259). Van Fraassen's argument for this is one of the most significant contributions of the book, but it hasn't received the attention it deserves. I can only speculate about why this is, but I suspect that it is: in part due to the considerable novelty of many of the central notions used; in part due to the fact that the argument is spread throughout the book, interwoven with substantial broader discussions of representation, measurement, and empiricism; and in part due to a style of presentation that is often difficult to penetrate. In fact, the project of extracting a coherent position from the rich and intricate lines of thought is beset with exegetic challenges. In this section I first isolate the important notions van Fraassen invokes and then reconstruct his argument. This is a necessary first step in any critical evaluation of van Fraassen's developed philosophical position.

3.1 Toolbox

I. Hauptsatz: 'There is no representation except in the sense that some things are used, made, or taken, to represent things as thus or so' (p.23).
There are two important things to note about this. Firstly, it is clearly non-reductive as it invokes the intentions and acts of agents. Secondly, it involves representation-as, rather than representation-of. Van Fraassen (p.16) explicitly refers to Goodman (1976) as the source of the distinction, and following them I assume that x is a representation-of y iff x denotes y. Representation-as is stronger: x represents y as thus or so iff x denotes y and attributes certain features to y. If y has those features then x accurately represents y with respect to them. To use one of van Fraassen's examples, the proper name 'Margaret Thatcher' is a representation-of Margaret Thatcher, since it denotes her. But a caricature of Margaret Thatcher also represents her as thus or so, e.g. if she is depicted with horns and a tail then it represents her as being draconian (pp.13-15).

II. Use of Representations: Hauptsatz makes clear that representations only represent when they are used to do so. But in addition, certain representations have particular uses: 'they are typically produced for a certain use, with a certain purpose or goal' (p.76). Using maps to navigate provides an illustrative example: '[a] map is designed to help one get around in the landscape it depicts' (ibid.). Throughout this paper, I assume that the analogous use of models is to generate predictions about their target systems. This is supported by van Fraassen's analogy between using a map to navigate and using the Aviation Model (AVN) for weather forecasting, i.e. to generate predictions about the weather (p.77).

III. Logical Space: Representations are associated with 'logical spaces'. This is a very general notion. Examples include PVT space in elementary gas theory, phase spaces in classical mechanics, and Hilbert spaces in quantum theory (p.164). Locations in PVT space are combinations of pressure, volume and temperature. Routes through a phase space are possible trajectories of an object, and locations in a Hilbert space are possible quantum states of a system.

IV. Self-location: A necessary condition on using a map to navigate, or a model to predict, is that the user self-locate in the logical space provided. They 'must be in some pertinent sense able to relate him or herself, his or her current situation, to the representation' (p.80). In order to navigate with a map, the user must be able to locate themselves in the terrain depicted and associate that location with an area on the map. They distinguish a particular map region as representing where they are, they orient the map to correspond to the direction they're facing, and so on. In doing so they locate themselves with respect to the map. When it comes to scientific models, van Fraassen claims:

'Suppose now that science gives us a model which putatively represents the world in full detail. Suppose even we believe that this is so. Suppose we regard ourselves as knowing that it is so. Then still, before we can go on to use that model, to make predictions and build bridges, we must locate ourselves with respect to that model. So apparently we need to have something in addition to what science has given us here. The extra is the self-ascription of location.' (p.83)

It's worth clarifying what 'self-location' could mean in the spaces under consideration. Although suggested by van Fraassen's cartographic analogy, I presume that it doesn't require that the model user locate herself in logical space.
When it comes to measuring the pressure of his tire (p.181), what would it mean for van Fraassen to locate himself in PVT space? Van Fraassen is 100 psi? A more plausible reading of 'self-location' is that the model users themselves actively locate the target system in logical space. And this proceeds in two steps. The model user first adopts a certain perspective towards the target by taking it to be the sort of thing that can be located in the logical space provided by the model. For example: van Fraassen takes the tire to be the sort of object that can be located in PVT space. But although this may be a necessary condition on using a model to generate a prediction, it is not the condition van Fraassen has in mind when he invokes the cartographic analogy. It isn't enough that a navigator is located somewhere in the terrain depicted; we need to delineate a specific point, or at least an interval or region of the space. This is the second step in self-location. When it comes to generating predictions using scientific models, this is done by inputting the target's initial and boundary conditions:

'The AVN itself requires input to be run at all, of course: namely initial conditions and lateral boundary conditions obtained from operational weather centers in the relevant area … The model presents a space of possible states and their evolution over time—the input locates the weather forecaster in that space, at the outset of the forecasting process.' (p.78)

Self-location demands more than that the system is in fact thereby located: the model user must perform an act of location. To speak loosely, the user distinguishes a region in logical space with the claim 'that target system is there'.

V. Measurement as Location in Logical Space: 'the act of measurement is an act – performed in accordance with certain operational rules – of locating an item in logical space' (p.165). And these measurements deliver data models. As van Fraassen notes, the location needn't be a point, but can be a region (ibid.). This can, but doesn't have to, be the result of measurement imprecision. Even a perfectly precise pressure reading p determines only a region of PVT space since there are multiple volume-temperature pairs compatible with p.

VI. Measurement as Representation: locating a system in logical space involves representing it as thus or so. This form of representation is not established by a morphism (recall van Fraassen's worry about invoking a 'dormitive virtue'). Instead, data models represent because:

'A measurement is a physical interaction, set up by agents, in a way that allows them to gather information. The outcome of a measurement provides a representation of the entity (object, event, process) measured…' (pp.179-180)

A data model represents the system measured as having the features corresponding to the region of logical space where it is thereby located. If the system has those features, the data model is accurate.

VII. Pragmatic Tautology: 'a pragmatic tautology is a statement which is logically contingent, but undeniable nevertheless. Similarly, a pragmatic contradiction is a statement that is logically contingent, but cannot be asserted' (p.259). Moore's paradox – utterances of the form 'P and it is not the case that I believe that P' – is a classic example of the latter.
They are logically contingent – their form is an agent i asserting 'P & ¬B_i(P)', where B_i(P) means i believes that P – and neither conjunct semantically entails the negation of the other (if they did, i would be clairvoyant). Such sentences are pragmatic contradictions because, in the context of i asserting P, i commits herself to believing P. It is this commitment that, when combined with the second conjunct, makes the sentence unassertable. Since van Fraassen's account of scientific representation does not involve linguistic representation, his argument requires generalizing from the assertability of sentences to certain acts of representation.

With the above notions in mind, we can now turn to van Fraassen's argument for the pragmatic equivalence between taking scientific models to accurately represent data and phenomena. My primary interest here is not the relationship between data and phenomena. For my current purposes I simply grant that data represent the systems from which they were gathered (as per VI. Measurement as Representation). I further grant that morphisms play a role in establishing whether a scientific model represents, accurately or otherwise, data. I'm concerned with representational relationships between scientific models and phenomena. Representation (accurate or simpliciter) is not a transitive relation: that a scientific model M represents a data model D, which in turn represents a target system T, does not establish that M represents T (see Frigg (2002, 11-12); Suárez (2003, 232-233)). And although accurately representing D might provide us with evidence that M is an accurate representation of T, this does not establish any representational relationship between M and T. Without this, Loss of Reality remains.

3.2 The Wittgensteinian move

Van Fraassen's resolution to Loss of Reality is to claim that in the context of use there's no difference between accurately representing data and phenomena. The reasoning, which is found on pp.254-260, is illustrated with an example. I present it here before reconstructing the argument that underpins it. The example in question concerns only observable features of a target system (the observable-unobservable distinction is largely irrelevant in the current context). Focusing on observables makes it clear how important the Wittgensteinian move is to van Fraassen's project. If he fails to establish the pragmatic equivalence with respect to observable phenomena, then they fail to feature in his structuralist account of scientific representation. The result is a far more radical anti-realist position than has been offered previously, and one more radical than I suspect van Fraassen would accept.

He begins by considering a scientist representing the growth of the deer population in the Princeton region. The scientific model used includes assumptions about environmental features: luscious gardens, the council's culling instinct, its tendency to experiment with birth-control measures for the local animal population, and so on. The data model D is supplied by a graph constructed from cleaned-up data points gathered by field researchers measuring samples of 'values of various parameters over time' (p.255). Van Fraassen does not specify which parameters are measured, but given that the theory concerns the deer population growth, I assume that the scientist literally counts deer in representative regions throughout the duration of the experiment. So the graph plots the number of deer against time.
The target system is the deer population itself. The scientist has a model M about deer population growth and argues that M is morphic to D. Van Fraassen imagines a philosophical interlocutor arguing that although M accurately represents D, the question is whether M accurately represents the population itself (p.254). The scientist's showing the interlocutor that it matches D does not establish this. Van Fraassen replies that the scientist has 'no leeway' to deny that the model accurately represents the actual population without withdrawing the graph altogether (p.256). According to him, the scientist should say:

'Since this is my representation of the deer population growth, there is for me no difference between the question whether [M] fits the graph and the question whether [M] fits the deer population growth. If I were to opt for a denial or even a doubt, though without withdrawing my graph, I would in effect be offering a reply of form:

• The deer population growth in Princeton is thus or so, but the sentence "The deer population growth in Princeton is thus or so" is not true, for all I know or believe' (ibid.).

And since a scientist who replied this way would be faced with a Moorean paradox, the scientist simply cannot doubt that the model accurately represents the target system whilst accepting that it accurately represents the graph. This is supposed to establish the pragmatic equivalence between the two.

That's the example; now let's work out why the scientist might be forced into such a position. In the rest of this section I reconstruct the argument for this conclusion in detail. I break it down into three sub-arguments, and show how the notions laid out in the previous subsection are utilized. It's important to notice that the first two arguments – which establish that the scientist must locate the target in the logical space of the model in order to use it at all, and that this is done with the graph – are explicitly concerned with representation simpliciter. This accounts for the scientist's claim that 'this [data model] is my representation of the deer population growth' (ibid.). The third argument then shifts to the question of accurate representation in an attempt to establish that, for that scientist, there is 'no difference between the question whether [M] fits the graph and the question whether [M] fits the deer population growth' (ibid.). The first premise in the third argument makes it explicit how van Fraassen requires that the necessary act of representation established in the first two arguments must generate doxastic commitments, i.e. commit the model user to certain beliefs, if the third argument is to generate the pragmatic equivalence.

(A) The argument for self-location:

A1. A scientist S is using M to represent a target system T for certain purposes P. (Premise)
A2. If S is using M to represent a target T for purposes P, then S must self-locate in the logical space, L, provided by the model. (Premise)
A3. S must self-locate in L. (From A1 and A2)

M is a model of deer population growth, T is the target deer population, and the scientist is using M to represent T for the purpose of generating a prediction (II. Use of Representations). M provides a logical space L, the space of possible deer populations and their growth through time (III. Logical Space). A necessary condition on using M to generate a prediction about T is self-location in L (IV. Self-location).

(B) The argument from self-location to representation-as:
B1. S self-locates in L using a data model D. (Premise specifying A3)
B2. If S uses D to self-locate in L, then S uses D to represent T as thus or so (Π). (Premise)
B3. S uses D to represent T as Π. (From B1 and B2)

Argument (A) required that the scientist self-locate in L. In van Fraassen's example, this is done using a data model D, a graph of the deer population. When S uses D to represent the target system, S locates T in the logical space provided by the model (V. Measurement as location). Locating T in a region of L requires representing T as having the features corresponding to that region (VI. Measurement as representation). Let Π be the conjunction of predicates that corresponds to that region. This may be a region, not a point, so these predicates are of the form 'the magnitude of A is in region Δ'. In this instance A is the size of the deer population at particular times, and the size of Δ corresponds to the potential measurement error induced by the counting process and the generalization from representative samples to the population as a whole. So when using D to locate T in logical space the scientist represents T as Π.

(C) The argument from representation-as to the pragmatic tautology:

C1. The (pragmatic) content of S using D to represent T as Π includes S believing that T is Π. (Premise)
C2. If S is able to take M to accurately represent D, but not T, then S is able to express disbelief in any proposition concerning T that S commits herself to in using D to represent T. (Premise)
C3. If S is able to take M to accurately represent D, but not T, then S is able to express disbelief that T is Π. (From B3, C1, and C2)
C4. It is not the case that S is able to express disbelief that T is Π (whilst using D to represent T), on pain of pragmatic contradiction. (Premise)
C5. It is not the case that S is able to take M to accurately represent D, but not T. (From C3 and C4)

I return to C1 and C2 in Section 4. C3 follows from B3, C1 and C2. S represents T as Π (B3), and in doing so commits herself to the belief that T is Π (C1). This instantiates the universal quantifier in C2, delivering C3. C4 is the instance of Moore's paradox that van Fraassen is concerned with. He claims that if the scientist were to accept that M accurately represents D but not T, whilst using D to represent T as Π, S would be offering a reply in the form of Moore's paradox ('the deer population is thus or so but…'). Taking D to represent T is analogous to asserting the first conjunct. Denying that M accurately represents T is analogous to asserting the second conjunct (VII. Pragmatic tautology). This generates the pragmatic equivalence between accurately representing T and D (C5).
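Before scrutinizing the argument, it may help to set out schematically the commitments that (C) needs. The following is my own gloss in the notation already introduced (B_S for S's belief operator, Π(T) for the claim that T has the features Π); it is not van Fraassen's own formalization:

\begin{align*}
% My schematic gloss of (C): the commitment incurred by the act clashes with the attitude expressed.
&\text{C1: using } D \text{ to represent } T \text{ as } \Pi \text{ is taken to commit } S \text{ to } B_S[\Pi(T)];\\
&\text{C2--C3: taking } M \text{ to accurately represent } D \text{ but not } T \text{ licenses expressing } \neg B_S[\Pi(T)];\\
&\text{jointly: a commitment to } B_S[\Pi(T)] \text{ alongside an expression of } \neg B_S[\Pi(T)],\\
&\text{structurally parallel to asserting } P \wedge \neg B_i(P).
\end{align*}

As with Moore's paradox, the clash is pragmatic rather than semantic: it is the commitment generated by the act, not the content of what is represented, that conflicts with the expressed disbelief.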
4. The argument scrutinized

With the argument reconstructed, I now turn to my critical discussion. My objections are the following. Firstly, the pragmatics of representation don't induce doxastic commitments: acts of representation don't commit the agent doing the representing to any relevant beliefs. So C1 is false. Secondly, one option available to van Fraassen is to amend C1 to the claim that S takes D to accurately represent T as Π. But this isn't supported by (A) and (B): it would require that in order to use a scientific model to generate a prediction, the model user must believe the inputted initial/boundary conditions. This is false. My final objection concerns C2: I argue that without an account of scientific representation (irrespective of accuracy), it's difficult to get a grip on what it would mean for S to deny that M accurately represents T.

4.1. The pragmatics of representation

The argument has the following macro structure. Models are used to generate predictions about their targets and a necessary condition on doing this is that the user locate the target in the model's logical space (A). This is typically done with a data model, and when S uses a data model to locate a target system T in such a way, S represents T as Π (B). So far so good.

C1 is vital for the rest of the argument, since it is the move from S representing T as Π to pragmatically committing herself to the belief that T is Π that is required to generate the pragmatic tautology. Using the data model to represent the target system is supposed to commit S to the belief that the deer population is thus or so in a way analogous to asserting the first conjunct of the Moorean paradox. The denial that the model accurately represents the deer population then provides the analogy with asserting the second. But all that arguments (A) and (B) established is that S represents T as Π. And acts of representation do not incur the same pragmatic commitments as acts of assertion.

Consider the example of representing Margaret Thatcher as draconian. A caricaturist can represent Thatcher as such without committing herself to the belief that Thatcher is draconian. There is a vital pragmatic difference between acts of representation and assertions. If the caricaturist were to assert that Margaret Thatcher was draconian, then she would commit herself to believing as much. But the caricaturist doesn't do this; she merely represents Thatcher in such a way. The artist could have been commissioned to draw the caricature despite having only a vague idea of who Thatcher was, and no knowledge about her time as Prime Minister. The artist can reasonably draw the caricature, thereby representing her as draconian, whilst at the same time remaining agnostic about her character. The same point applies to scientific representation: S's act of representing the target system in a certain way doesn't pragmatically commit her to the belief that the target is that way.

It pays to be careful here. My claim does not concern whether or not S actually believes that T is Π; it is a conceptual point regarding the pragmatics of assertion and representation. Presumably in most cases, model users do believe that the initial/boundary conditions used are (at least approximately) accurate. But this does not establish that an agent's act of representing something in a particular way commits that agent to any particular beliefs in the way that acts of assertion do in the traditional version of Moore's paradox. So C1 is false: S's act of representing a target system as thus or so doesn't commit S to the belief that the target is thus or so. Therefore (C) is unsound.

A possible response is to invoke a weaker doxastic attitude than belief as being incurred in representing a target system. And although this attitude might not deliver the Moorean paradox van Fraassen discusses, it may deliver a closely related pragmatic contradiction that still allows a version of (C) to go through. In other contexts van Fraassen invokes the attitude of acceptance (Muller and van Fraassen 2008).
To accept a theory, or model, is to take it to be empirically adequate: to believe its observable content and to remain agnostic about its unobservable content (acceptance is typically applied to scientific models, but here I'm considering applying it to data). So, what happens if, in using D to represent T as Π, S commits herself to accepting that T is Π? Well, that depends on T and Π. We can distinguish between the observable and unobservable content of Π(T), denoted Π(T)_O and Π(T)_U respectively. If S accepts Π(T), then S commits herself to believing Π(T)_O and being agnostic about Π(T)_U, i.e. not believing Π(T)_U or ¬Π(T)_U (see op. cit., 204). For neither of these types of content will acceptance do the work required.

Regarding observable content we are back where we started. Accepting that Thatcher is draconian entails believing that she is. And an agent can represent her in such a way without taking on this commitment. Regarding unobservable content, S accepting Π(T)_U entails ¬B_S[Π(T)_U] and ¬B_S[¬Π(T)_U]. But this will not generate a pragmatic contradiction when combined with the second conjunct of van Fraassen's instance of Moore's paradox, i.e. ¬B_S[Π(T)] (even restricted to its unobservable content). Invoking acceptance when an agent uses a data model to represent a target doesn't work.

But the above discussion suggests another strategy available to van Fraassen. It proceeds in two steps. Firstly, introduce a weaker act than assertion – call it entertaining – and assume that an act of entertaining that P incurs a commitment to not believing ¬P. Again this alone doesn't generate a pragmatic contradiction when combined with ¬B_i(P). But it does when combined with B_i(¬P). The second step is to move from a Moorean paradox of the form P & ¬B_i(P) to one of the form P & B_i(¬P) – i.e. from sentences like 'it's raining and I don't believe it's raining' to 'it's raining and I believe that it's not raining'. (C) then becomes (C'):

C1'. The (pragmatic) content of S using D to represent T as Π includes S not believing that it is not the case that T is Π. (Premise)
C2'. If S is able to take M to accurately represent D, but not T, then S is able to express belief in the negation of any proposition concerning T that S commits herself to in using D to represent T. (Premise)
C3'. If S is able to take M to accurately represent D, but not T, then S is able to express belief that it is not the case that T is Π. (From B3, C1', and C2')
C4'. It is not the case that S is able to express belief that it is not the case that T is Π (whilst using D to represent T), on pain of pragmatic contradiction. (Premise)
C5'. It is not the case that S is able to take M to accurately represent D, but not T. (From C3' and C4')

Assuming that an act of representation is an act of entertaining, in using D to represent T as Π, S pragmatically commits herself to not believing that it is not the case that T is Π, i.e. ¬B_S[¬Π(T)] (C1'). Further assume that S's denying that M accurately represents T, whilst accepting that it accurately represents D, induces a commitment to believing that it is not the case that T is Π (C2' and C3'). This is a stronger commitment than assumed in (C): B_S[¬Π(T)] rather than ¬B_S[Π(T)]. Under these assumptions, if S were to take M to accurately represent D but not T, whilst at the same time using D to represent T, she would be offering a reply with the following commitments: 'It's not the case that I believe that T isn't Π and I believe that T isn't Π'. This would be a pragmatic contradiction.
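The candidate commitment structures just canvassed can be summarized as follows; this is my own tabulation in the notation used above, not something van Fraassen provides:

\begin{align*}
% My summary of the commitment structures discussed above.
\text{acceptance, observable content:} \quad & B_S[\Pi(T)_O] && \text{as strong as belief, so not incurred by mere representation}\\
\text{acceptance, unobservable content:} \quad & \neg B_S[\Pi(T)_U] \wedge \neg B_S[\neg\Pi(T)_U] && \text{consistent with } \neg B_S[\Pi(T)_U]\text{, so no clash}\\
\text{entertaining } \Pi(T): \quad & \neg B_S[\neg\Pi(T)] && \text{clashes with } B_S[\neg\Pi(T)]
\end{align*}

This is why (C') strengthens the second conjunct from ¬B_S[Π(T)] to B_S[¬Π(T)]: only then does the commitment allegedly incurred by the act of representation contradict the attitude expressed in denying that M accurately represents T.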
(C1') requires that in using D to represent T as Π, S entertain that T is Π, and therefore commit herself to not believing that T isn't Π. However, the following example shows that even this commitment is not incurred by acts of representation. Consider a different caricaturist representing Margaret Thatcher as draconian. This time assume that the Labour Party has commissioned the caricature and the artist is a staunch Conservative. He goes ahead and draws the picture because he is desperate for the money. In drawing the caricature the artist represents Thatcher as draconian, but he certainly doesn't believe it. In fact, he explicitly believes that she isn't draconian, to the extent that he sings her praises whilst drawing the caricature. This makes him feel better about drawing something that goes so strongly against his political beliefs. Now, if, in representing Thatcher as draconian, the artist commits himself to not believing that she isn't, then his act of drawing her as such whilst singing the negation would be a pragmatic contradiction. But although this is a strange situation, it is not a pragmatic contradiction. Acts of representing that P don't incur the pragmatic commitment to ¬B_i(¬P). So C1' is false, and (C') unsound.

4.2 From self-location to belief

Despite van Fraassen's phrasing, the above concerns suggest that argument (C) shouldn't start from the premise that S uses D to represent T as Π, but rather that S takes D to accurately represent T as Π. Rather than:

'Since [D] is my representation of the deer population growth, there is for me no difference between the question whether [M] fits [D] and the question whether [M] fits the deer population growth' (p.256)

the scientist should say:

Since I take D to be an accurate representation of the deer population growth, there is for me …

It's plausible that in taking D to accurately represent T as Π, S commits herself to believing that T is Π. But since B3 only got us as far as representation, the preceding argument needs amending. Argument (A) stays as it is. In order to use a model to generate a prediction, the model user must self-locate in its logical space. (B) gets revised to (B*):

B1*. S self-locates in L using a data model D. (Premise specifying A3)
B2*. If S uses D to self-locate in L, then S takes D to accurately represent T as Π. (Premise)
B3*. S takes D to accurately represent T as Π. (From B1* and B2*)

And if this can be established, then a revised version of (C) runs as follows:8

C1*. The (pragmatic) content of S taking D to accurately represent T as Π includes S believing that T is Π. (Premise)
C2*. If S is able to take M to accurately represent D, but not T, then S is able to express disbelief in any proposition concerning T that S commits herself to in taking D to accurately represent T. (Premise)
C3*. If S is able to take M to accurately represent D, but not T, then S is able to express disbelief that T is Π. (From B3*, C1*, and C2*)
C4*. It is not the case that S is able to express disbelief that T is Π (whilst using D to represent T), on pain of pragmatic contradiction. (Premise)
C5*. It is not the case that S is able to take M to accurately represent D, but not T. (From C3* and C4*)

But although (C*) seems plausible in isolation, the argument as a whole is not, for B2* is false.

8 (C*) is a revised version of (C), not (C'), but my criticisms can be run against a revised version of the latter as well.
To see why, recall what self-location required. The model user had to adopt a certain perspective towards the target by taking it to be the sort of thing that could be located in the model's logical space. She then had to delineate an area within that space for the target. This was a necessary condition on generating a prediction using the model. But neither of these steps commits the agent to any beliefs. In particular, in using a data model to self-locate in a model's logical space, the model user does not thereby commit herself to the data's accuracy.

Consider again the example of the deer population. In order to use her model to generate a prediction about its size, the scientist had to input an initial number of, and fitness values for, the deer. The model allows the scientist to make any number of predictions about the future size of the population. If the scientist inputs a low fitness value – imagine a pro-cull council – then the model will predict a small future population. If the scientist initially assumes that the deer population is too large for the region to support, then the model will predict population decline. And so on. The scientist can use the model to generate numerous predictions about the deer population regardless of whether or not she believes these values to be accurate. All of these inputs serve to delineate the logical space of the model, and some input is necessary to generate a prediction about the target. But she is not required to believe them.

Other examples abound. Some are of scientists failing to believe that the logical space of a model is correct. Ptolemaic models can be used to generate predictions about planetary orbits without the user believing that those planets are in fact located anywhere in the model's logical space. State-of-the-art global climate models (GCMs) contain variables that are known to describe model processes with no direct real-world correlates. These variables – in that context typically referred to as 'parameters' – are loosely related to sub-grid processes such as small-scale convection and cloud coverage. However, their values depend to a large extent on details of the particular computational scheme used rather than on the state of the world. So we have here a case where scientists don't believe that the logical space is correct (at least not completely correct), and yet they pick values for certain variables in order to make calculations (Bradley et al. forthcoming). And this is no isolated instance; one can find similar cases, for example, in economics (Friedman 1953) and population dynamics.9

9 See for example Weisberg and Reisman (2008), who offer individual-based versions of the Lotka-Volterra model that start from the assumption that individuals move about on a 30x30 toroidal lattice.

The problems don't end here. Even supposing that the scientist believes that the logical space is correct, they still needn't believe that the target is located in the region delineated by the model input. For example, representative concentration pathways (RCPs) are used to locate the global climate in the logical space of GCMs. They supply concentration trajectories of the main forcing agents of climate change. One particular pathway, RCP2.6, requires that we essentially eliminate greenhouse gas emissions immediately, something that no one believes is, or will be, the case. And yet RCP2.6 is widely used to generate numerous predictions about the global climate (see the IPCC report (2013), Ch.12 in particular).
The point is that scientists can use models to generate predictions about target systems without adopting any epistemological position towards the model, or towards where the target is located in its logical space. As stressed previously, this isn't to say that scientists don't believe that their data models and initial/boundary conditions are (at least approximately) accurate. My claim is that this belief is not a necessary condition on using a model to generate a prediction. As such it is not part of the pragmatic content of locating a target in logical space. And this is what van Fraassen requires. So, although (C*) may seem plausible in isolation, it rests on (B*) for its support, which in turn rests on the false premise B2*. So the argument as a whole is unsound.

4.3 Representation and accurate representation

I hope by now to have shown that van Fraassen's argument fails. But there is a further problem that indicates a more general issue. His argument concerns how scientific models, as set-theoretic structures, can accurately represent physical phenomena. It's worth stepping back and taking stock of what could be gained by answering this question. If it's an attempt to establish in virtue of what a pre-existing representational relationship between the model and phenomenon is accurate, then the question of what establishes representation simpliciter remains unanswered. We still don't have an account of in virtue of what scientific models represent their targets (cf. Thomson-Jones 2011). This is particularly worrying given that it is plausible that we should answer this question before we investigate representational accuracy. This has been stressed by Suárez (2004), Contessa (2007), and Frigg (2010), who all provide accounts of scientific representation before commenting on the notions of representational accuracy that result. They all take the question of representation as conceptually prior to accurate representation. Moreover, nothing precludes them from accepting that model-data morphisms provide evidence that model-phenomena representational relationships are accurate. But if this is all van Fraassen is attempting to establish, then the whole machinery driving the pragmatic tautology becomes irrelevant.

Van Fraassen himself starts his argument by claiming that the fundamental question to be answered is: 'How can an abstract entity, such as a mathematical structure, represent something that is not abstract, something in nature?' (p.240) But his Wittgensteinian solution does not address this question. I suspect that van Fraassen would fall back on his Hauptsatz and claim that representation cannot be analyzed beyond this. But this does not help when we look at C2 (or its variants). In particular, what would it mean for S to deny that M accurately represents T? Van Fraassen's phrasing suggests that in doing so S would take M to represent T, but to do so inaccurately. In this sense the deer scientist would be effectively asserting the second conjunct of van Fraassen's version of Moore's paradox (the sentence 'the deer population is thus or so' is not true for all I know or believe). But this needn't be the question the philosophical interlocutor asks.
Rather than asking whether the scientific model matches the phenomenon, they can ask whether the model represents it in the first place. This is the fundamental question after all. How can one use, make, or take a mathematical structure to represent something that is not abstract, something in nature?10 And if the scientist were to doubt that M represents T in this sense, then (C) (and its variants) will again fail irrespective of my previous criticisms. I have established that acts of representation fail to incur doxastic commitments. But what about representational denial, as it occurs in C2/C2'/C2*? Consider a caricature that depicts David Cameron as draconian. Denying that it represents Margaret Thatcher doesn't incur a commitment to believing (or disbelieving) anything about Thatcher, other than that she isn't the one caricatured. That an agent incurs any doxastic commitments in the denial of representational relationships is even less plausible than her incurring them when affirming them.

10 I'm not demanding what Suárez (2015) calls a 'substantive account' of scientific representation. The deflationary ones offered there, although they only pick out 'platitudes', are enough to get a grip on the concept. And if van Fraassen's Hauptsatz is supposed to tell us that bare representational intentions of model users suffice to establish scientific representation, then it falls foul of the same problems as Callender and Cohen (2006) (see Frigg 2010; Toon 2010).

5. Conclusion

My concern in this paper is van Fraassen's claim that, for an individual scientist, in a given context, taking a scientific model to accurately represent data and taking it to accurately represent the phenomenon from which the data were extracted are pragmatically equivalent. I showed that the argument as he states it relies on the false premise that acts of representation induce doxastic commitments in the way that assertions do. I considered an alternative formulation of the argument that would have led to the appropriate commitments, but argued that it turned on a false premise concerning necessary conditions on using models to generate predictions. My final objection concerned van Fraassen's focus on accurate representation, rather than representation simpliciter. Without a clear account of the latter, one of his central premises, and indeed the dialectical structure of his argument, fails to get off the ground.

As such the question of target-end structure remains. Unless van Fraassen is willing to revisit the idea that these structures are to be found 'in the world', he is left with two options: either give up on a structuralist account of scientific representation, or adopt a radically anti-realist position whereby only data are represented. The latter seems implausible. It provides an account of science according to which models don't represent, accurately or simpliciter, what they are typically taken to represent: physical objects, or features of objects, or events, or processes, or mechanisms. Instead they represent data, abstract mathematical objects that are the product of our independent intellectual activity.
Such a position is strange when the models concern unobservables – e.g. the model used to predict the existence of weak neutral currents represents bubble chamber photographs, not weak neutral currents – but the situation is even more troubling when the model concerns observables. To use van Fraassen's example, if all that are represented are data, then the replicator model represents a graph of Princeton's deer population, not actual deer. Since his argument for the pragmatic equivalence fails, this seems to me like a reductio of the claim that data, rather than phenomena, are the targets of scientific models.

Although the conclusions of this paper are largely negative, I hope it stimulates further investigation into the pragmatics of scientific representation and the role of data in scientific representation broadly construed. Both questions require further research.

References

Bogen, James and Woodward, James. 1988. 'Saving the Phenomena', Philosophical Review, 97, 303-52.

Bradley, Richard, Roman Frigg, Katie Steele, Erica Thompson, and Charlotte Werndl. Forthcoming. 'The Philosophy of Climate Science', in Carlos Galles, Pablo Lorenzano, Eduardo Ortiz, and Hans-Jörg Rheinberger (eds.): History and Philosophy of Science and Technology, Encyclopedia of Life Support Systems Volume 4 (Isle of Man: Eolss).

Bueno, Otávio and French, Steven. 2011. 'How Theories Represent', British Journal for the Philosophy of Science, 62, 857-894.

Contessa, Gabriele. 2007. 'Scientific Representation, Interpretation, and Surrogative Reasoning', Philosophy of Science, 74, 48-68.

--- 2011. 'Scientific Models and Representation', in Steven French and Juha Saatsi (eds.): The Continuum Companion to the Philosophy of Science (Continuum Press), 120-137.

French, Steven. 2003. 'A Model-Theoretic Account of Representation (Or, I Don't Know Much about Art...but I Know It Involves Isomorphism)', Philosophy of Science, 70, 1472-83.

French, Steven and Ladyman, James. 1999. 'Reinflating the Semantic Approach', International Studies in the Philosophy of Science, 13, 103-21.

Friedman, Milton. 1953. 'The Methodology of Positive Economics', reprinted in Daniel Hausman (ed.) (2008): The Philosophy of Economics: An Anthology (3rd ed., Cambridge University Press), 145-178.

Frigg, Roman. 2002. 'Models and Representation: Why Structures Are Not Enough', Measurement in Physics and Economics Project Discussion Paper Series, DP MEAS 25/02, London School of Economics.

--- 2006. 'Scientific Representation and the Semantic View of Theories', Theoria, 55, 49-65.

--- 2010. 'Fiction and Scientific Representation', in Roman Frigg and Matthew Hunter (eds.): Beyond Mimesis and Nominalism: Representation in Art and Science (Berlin and New York: Springer), 97-138.

Goodman, Nelson. 1976. Languages of Art (2nd ed., Indianapolis and Cambridge: Hackett).

Harris, Todd. 2003. 'Data Models and the Acquisition and Manipulation of Data', Philosophy of Science, 70, 1508-17.

IPCC. 2013. Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.) (Cambridge University Press).

Muller, F.A. 2011. 'Reflections on the Revolution at Stanford', Synthese, 183, 87-114.

Muller, F.A. and van Fraassen, Bas C. 2008. 'How to talk about unobservables', Analysis, 68, 197-205.
Mundy, Brent. 1986. 'On the General Theory of Meaningful Representation', Synthese, 67, 391-437.

Suárez, Mauricio. 2003. 'Scientific Representation: Against Similarity and Isomorphism', International Studies in the Philosophy of Science, 17, 225-244.

--- 2004. 'An Inferential Conception of Scientific Representation', Philosophy of Science, 71, 767-779.

--- 2015. 'Deflationary representation, inference, and practice', Studies in History and Philosophy of Science, 49, 36-47.

Suppes, Patrick. 1960. 'A Comparison of the Meaning and Uses of Models in Mathematics and the Empirical Sciences', in Patrick Suppes (ed.) (1969), Studies in the Methodology and Foundations of Science: Selected Papers from 1951 to 1969 (Dordrecht: Reidel), 10-23.

--- 1962. 'Models of Data', in Patrick Suppes (ed.) (1969), Studies in the Methodology and Foundations of Science: Selected Papers from 1951 to 1969 (Dordrecht: Reidel), 24-35.

Thomson-Jones, Martin. 2011. 'Review of Scientific Representation: Paradoxes of Perspective by Bas C. van Fraassen', Australasian Journal of Philosophy, 89, 567-570.

Toon, Adam. 2010. 'Models as Make-Believe', in Roman Frigg and Matthew Hunter (eds.): Beyond Mimesis and Nominalism: Representation in Art and Science (Berlin and New York: Springer), 71-96.

van Fraassen, Bas C. 1980. The Scientific Image (Oxford University Press).

--- 2008. Scientific Representation: Paradoxes of Perspective (Oxford University Press).

--- 2010. 'Reply to Contessa, Ghins, and Healey', Analysis Reviews, 70, 547-56.

Weisberg, Michael and Reisman, Kenneth. 2008. 'The Robust Volterra Principle', Philosophy of Science, 75, 106-131.