Descartes’ influence on Turing

SHPS 897, Model 5G, 17 September 2011

Highlights

• I first show that the Turing test is not an expression of behaviourism.
• To demonstrate this, I outline Turing’s necessary condition for intelligence.
• Then I show that Alan Turing was likely aware of Descartes’s ‘language test’.
• Last I argue that Descartes’s and Turing’s tests have similar epistemic purposes.
Studies in History and Philosophy of Science xxx (2011) xxx–xxx

Descartes’ influence on Turing

Darren Abramson
Department of Philosophy, Dalhousie University, Halifax, Nova Scotia, Canada
E-mail address: da@dal.ca

When citing this paper, please use the full journal title Studies in History and Philosophy of Science.

1. Introduction

Alan Turing, in his 1950 Mind paper ‘Computing Machinery and Intelligence,’ introduces what is now called ‘The Turing test’ (Turing, 1950). Turing’s paper has inspired countless papers in support of, and critical of, the claim that computers can think. The received view of Turing’s philosophy of mind is that he was a behaviorist. This view has persisted despite numerous critical evaluations of his work that argue to the contrary.

In this paper I begin by briefly comparing reasons that have been offered for the claim that Turing was not a behaviorist (despite his apparent commitment to the claim that thinking requires nothing more than displaying verbal behavior indistinguishable from a person). The strongest reason for understanding Turing this way, I argue, is his commitment to a non-behavioral necessary condition for intelligence. Then I show that

1. Turing was aware of Descartes’ ‘language test’, and likely had it in mind when writing his 1950 Mind paper that introduces the Turing test; and,

2. Turing intended the imitation game to play an epistemological role that is similar to the role that Descartes intended the language test to play.

If Turing wasn’t offering a behaviorist view, unlike many of his contemporaries, what non-behaviorist influences (if any) planted the seed in Turing’s mind of what may seem, at first glance, a behaviorist understanding of thinking? I answer this question by a close reading of some of Turing’s personal papers from the years immediately preceding the publication of the paper introducing the Turing test.
With historical influences in place, I argue that, far from being coincidentally similar, Descartes’ language test and Turing’s imitation game are both intended as nearly certain tests for thinking, and as tests for internal, particular causes of thinking (although Turing and Descartes disagree on what the necessary internal causes of thinking are).

0039-3681/$ - see front matter © 2011 Elsevier Ltd. All rights reserved. doi:10.1016/j.shpsa.2011.09.004

Please cite this article in press as: Abramson, D. Descartes’ influence on Turing. Studies in History and Philosophy of Science (2011), doi:10.1016/j.shpsa.2011.09.004

2. Turing and behaviorism

2.1. Definitions

In his 1950 article, Turing explains a party game he calls the ‘imitation game.’ In it, an interrogator (C), the judge, must determine, solely through written queries and responses, which of two other participants is a man (A) and which is a woman (B). The judge is aware that one of the participants is a man and one is a woman. Turing proposes to replace the question ‘can machines think’ with the following:

What will happen when a machine takes the part of A in this game? Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, ‘Can machines think?’ (Turing, 1950, p. 434)

Let us fix our attention on one particular digital computer C. Is it true that by modifying this computer to have adequate storage, suitably increasing its speed of action, and providing it with an appropriate programme, C can be made to play satisfactorily the part of A [the man contestant] in the imitation game, the part of B [the woman contestant in the imitation game] being taken by a man? (Turing, 1950, p. 442)

Later commentators have, almost universally, interpreted the ‘modified imitation game,’ now called the Turing test, as follows: can a judge, communicating entirely through typed text, distinguish a human from a computer? This interpretation irons out two ambiguities of Turing’s presentation. Some readers have argued that Turing intended the judge in the computer version of the imitation game to be answering a question about the gender of the players.1 However, there is ample evidence that Turing did not intend the computer version of his test to involve gender issues.2 Also, there is the question of what adequate performance amounts to. I will use the formulation that the judge distinguishes the computer from the person at a rate no better than chance.

1 For example, Sterrett (2000).
2 A compelling case, with an overwhelming (but not exhaustive) amount of textual support for the standard interpretation, can be found in Piccinini (2000).
Given Turing’s position, and the influential logical and methodological behaviorists he was contemporary with (Gilbert Ryle’s The Concept of Mind had been published the previous year; B.F. Skinner’s Science and Human Behavior would be published in 1953), a behaviorist interpretation of Turing’s views is almost irresistible. On both sides of the Atlantic Ocean, there were pushes to understand the mind in terms of behavior. The Turing test, at first blush, is a paradigm of behaviorism: Turing says outright that the question of whether machines can think is strictly meaningless, and must be ‘replaced’ with the question of whether a machine can pass the imitation game (Turing, 1950, p. 442).

By the mid-1960s, the behaviorist interpretation of Turing’s article was presented without critical assessment in popular philosophical texts. For example, consider the anthology Minds and Machines, edited by Alan Ross Anderson (1964), which contains as its first article Turing’s ‘Computing Machinery and Intelligence’. The second article reprints Michael Scriven’s 1953 article, also published in Mind, but with a short addendum. The first part of the addendum reads: “This article can be taken as a statement of the difficulties in (and was written partly as a reaction to) Turing’s extention [sic] of behaviorism into the computer field.” (Scriven, 1964, p. 42). Minds and Machines collected a number of the most significant articles of the era on the computational theory of mind by Turing, Scriven, Keith Gunderson, J.R. Lucas, and Hilary Putnam. I am not claiming that this anthology engendered the behaviorist interpretation of Turing, but it is an early sign of its widespread adoption.

2.2. A first response: Turing’s consideration of ‘contrary views’

As mentioned above, Turing uses language that unequivocally asks for the replacement of questions of machine mentality with questions of behavior, and does so by appealing to the meaninglessness of the former questions. The replacement of mentality questions with behavioral questions appears to betray a commitment to something like a verificationist criterion of meaning. A verificationist interpretation of the Turing test, though, is inconsistent with much of what Turing says in his article.

As some have pointed out,3 the sixth section of Turing’s 1950 paper deals with objections that he can’t seriously consider if he takes the passing of his test to be equivalent to the possession of mind. I will mention just two examples. In his consideration of the mathematical objection, Turing takes seriously the idea that there might be in-principle limitations that distinguish computers from humans, and that one might not be able to ascertain whether the candidates in the imitation game were subject to those limitations. Turing’s response does not at all address the detectability of those limitations in the test, but in fact denies that computers are subject to them while humans are not.4 In his consideration of the argument from consciousness, Turing considers at length an objector who claims that despite excellent performance in the test, a machine wouldn’t have consciousness.
Turing does not respond by denying the reality of inner conscious states to mental entities. Instead, Turing offers the skeptic a parity argument, according to which consciousness can no more be denied of machines that pass the test than it can be denied of other people.5

Still, someone might argue, this only shows that Turing lacks consistency in his presentation; although he veers away from his behaviorist line in defending from criticism the claim that machines can think, his central positive project retains a criterion for thought that is behaviorist. This claim can be answered: there are stronger reasons for denying Turing a behaviorist interpretation, which I will now present.

3 For example, Leiber (1995, p. 63).
4 See, for example, Turing (1950, pp. 444–445).
5 I am not, of course, endorsing Turing’s response to a skeptic of computer consciousness, but merely pointing out the inconsistency of Turing’s response with behaviorism.
6 For example, see Block (1981, pp. 15–16).
7 Daniel Dennett makes this point. See Dennett (1985, p. 4).
8 The most famous criticisms are by Ned Block and John Searle, namely the Chinese gym and Chinese room arguments, respectively.

2.3. The second response: necessary vs. sufficient conditions

Many commentators have pointed out that ‘Computing Machinery and Intelligence’ cannot be read as a bare statement of logical behaviorism.6 Here is the passage that obviates such an interpretation of the article:

May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection (Turing, 1950, p. 435).

In this passage, Turing reveals that he is only committed to the sufficiency of passing the test for thinking, and not its necessity. Therefore, he cannot be offering a definition or an analysis of thinking. Put more simply, the passage quoted is consistent with the existence of thinking things that don’t display the particular behaviors under consideration. Logical behaviorism purports to provide the referents for mental terms—so, Turing cannot be a logical behaviorist.7

Many have noticed this property of Turing’s position (its ‘sufficiency behaviorism’) and object to this, calling it behaviorism all the same. For example, Ned Block targets this view of Turing’s directly, and views prior attacks on behaviorism as deficient because they don’t rule out sufficiency behaviorism (Block, 1981, pp. 15–16). John Searle also identifies sufficiency behaviorism and makes it his target: “The Turing test, as you will have noticed, expresses a kind of behaviorism. It says that the behavioral test is conclusive for the presence of mental states” (Searle, 2004, p. 70).

Once Turing’s position has been thus clarified, many have been happy to simply call the view that behavior is sufficient for intelligence a form of behaviorism, thus reuniting Turing’s views in a general way with his famous contemporaries. There are many well known criticisms of Turing’s claim that passing the Turing test is sufficient for thinking,8 but I will not wade into these debates. Instead, I will now show that Turing is not, in fact, a strict sufficiency behaviorist.

2.4. The third response: the strength of the Turing test

A third, subtle response one could make to the charge of behaviorism is that Turing is committed to the view that his test only provides a sufficient condition for intelligence because it measures some non-behaviorally defined property. Now, if Turing understands his own test this way (as I will argue he does), then whether or not one agrees that the test is a measurement tool for this non-behavioral property, Turing is not even a strict ‘sufficiency behaviorist’. That is, Turing cannot be understood as believing that possessing certain behaviors is always sufficient for intelligence. Instead, the interpretation goes, possessing certain behaviors is evidence for some other property, and the possession of the other property is required for intelligence.

This interpretation of Turing is not new. It is offered first by James Moor, who describes success in the Turing test as ‘inductive
Instead, the terpretation goes, possessing certain behaviors is evidence for me other property, and the possession of the other property is quired for intelligence. This interpretation of Turing is not new. It is offered first by es Moor, who describes success in the Turing test as ‘inductive but merely pointing out the inconsistency of Turing’s response with behaviorism. nd Chinese room arguments, respectively. ng. Studies in History and Philosophy of Science (2011), doi:10.1016/ http://dx.doi.org/10.1016/j.shpsa.2011.09.004 http://dx.doi.org/10.1016/j.shpsa.2011.09.004 188 s 189 k 190 g, 191 e 192 e 193 l 194 , 195 s 196 - 197 - 198 s 199 e 200 ) 201 - 202 203 t 204 o 205 k 206 y 207 208 209 210 o 211 - 212 i- 213 is 214 is 215 216 - 217 e 218 - 219 c 220 - 221 n 222 ’s 223 y 224 - 225 - 226 is 227 228 , 229 o 230 - 231 n 232 t 233 r 234 l- 235 - 236 ’s 237 e 238 n 239 e 240 s 241 242 y 243 o 244 245- 246 247g 248d 249g, 250- 251s 252 253- 254g, 255f 256f 257is 258h 259e 260, 261 262f- 263g 264- 265g. 266- 267n 268 269- 270s, 271, 272e 273- 274e 275is 276e 277e 278 279r 280- 281d 282l 283s, 284l 285 286 287n 288p 289g 290 291o 292s 293e 294t 295e 296- 297g 298e 299e 300 301- 302d 303 D. Abramson / Studies in History and Philosophy of Science xxx (2011) xxx–xxx 3 SHPS 897 No. of Pages 9, Model 5G 17 September 2011 evidence’ of thinking, where the conclusion that something think is subject to scientific scrutiny (Moor, 1976, pp. 252–253). Jac Copeland summarizes some of the behaviorist accounts of Turin and claims that ‘‘twenty-five years [after Moor’s 1976 article], th lesson has still not been learned that there is no definition to b found in Turing’s paper of 1950’’ (Copeland, 2001, p. 522). Danie Dennett argues for this view in a bit more detail (Dennett, 1985 p. 6). 
Dennett does not provide textual evidence that Turing had this understanding of his own test, but instead a philosophical argument to convince the reader that this is the most reasonable understanding of the test. In particular, Dennett considers conditions under which the ‘quick-probe assumption’ (that success on the Turing test implies success on an indefinite number of other tasks) is false, and concludes that these conditions involve illicit constraints on the Turing test.

Later I will argue that Dennett’s understanding of Turing’s test can be given additional historical support. For now, I will turn to yet another set of reasons, closely related to the third set, to think that Turing ought not to have been considered a behaviorist, in any sense.

2.5. The third response reconsidered: the ‘epistemic-limitation condition’

Elsewhere I have argued that Turing reveals, in his response to ‘Lady Lovelace’s objection’ (defined below), a commitment to a necessary condition for thought (Abramson, 2008). I call this the epistemic-limitation condition, and find evidence for it both in his 1950 paper, and in writings of Turing’s unpublished during his lifetime.

In short, the epistemic-limitation condition states that for a computer to think, its behavior must be unpredictable, even by someone who has access to its programming. Methods of constructing machines to pass the test, by preprogramming in responses to specific questions, would cause failure of this necessary condition. This condition is mentioned by Turing in a number of places, most often in response to some form of Lady Lovelace’s objection. Lady Lovelace’s objection says that machines cannot think, since any behavior they display is the result of their programmer’s intention for them to display that behavior. First I will provide a few of the texts in which Turing expresses this condition, and then make a few comments on the significance of the condition for Turing.

. . .
Let us return for a moment to Lady Lovelace’s objection, which stated that the machine can only do what we tell it to do . . . An important feature of a learning machine is that its teacher will often be very largely ignorant of quite what is going on inside, although he may still be able to some extent to predict his pupil’s behavior. This should apply most strongly to the later education of a machine arising from a child-machine of well-tried design (or programme). This is in clear contrast with a normal procedure when using a machine to do computations: one’s object is then to have a clear mental picture of the state of the machine at each moment in the computation. This object can only be achieved with a struggle. The view that ‘the machine can only do what we know how to order it to do’, appears strange in the face of this (Turing, 1950, pp. 454, 458–459).9

It would be quite easy to arrange the experiences in such a way that they automatically caused the structure of the machine to build up into a previously intended form, and this would obviously be a gross form of cheating, almost on a par with having a man inside the machine (Turing, 1951b, p. 473).

If we give the machine a programme which results in its doing something interesting which we had not anticipated, I should be inclined to say that the machine had originated something, rather than to claim that its behaviour was implicit in the programme, and therefore that the originality lies entirely with us (Turing, 1951a, p. 485).

9 Despite Turing’s beginning his section on learning machines with an expressed interest in revisiting Lady Lovelace’s objection, some seize upon isolated comments to conclude that for Turing, learning machines are merely an expedient path to building thinking machines. See, for example, Davidson (1990, p. 86).
10 I am grateful to an anonymous referee for pointing out possible interpretations of these quotations and provoking clarification of their significance.
[As] soon as one can see the cause and effect working themselves out in the brain, one regards it as not being thinking but a sort of unimaginative donkey-work. From this point of view one might be tempted to define thinking as consisting of ‘those mental processes that we don’t understand’. If this is right, then to make a thinking machine is to make one which does interesting things without our really understanding quite how it is done (Turing, Braithwaite, Jefferson, & Newman, 1952, p. 500).

The first and second of these quotations suggest at least two different ways that computers can be unpredictable.10 Perhaps Turing intends merely that unpredictability of machines be faced by someone who has no knowledge of the program the machine is running. Another possibility, which I claim is supported by the other quotations, is that the computer runs a program that is unpredictable even if one has access to the program itself.

Notice that the first interpretation is quite weak. Suppose a programmer devises a clever algorithm for producing symphonies, each of which she wrote in a previous career as a composer. Then, so long as she doesn’t tell anyone what the algorithm is (suppose that the programmer/composer takes their compositions and algorithm to the grave the moment the computer is switched on), the first interpretation suggests that Turing would be satisfied that the computer meets Lady Lovelace’s objection. This is absurd; no one would, in this case, agree that the computer had originated the symphonies.

The first quotation does not hold up well independently under this interpretation. Turing points out that the knowledge in question of the computer, in the normal case, ‘can only be achieved with a struggle.’ If the ‘mental picture’ refers to a computational state, the programmer is in an excellent position to know this, either by producing a ‘system dump’ (a description of the total internal state of the computer) or working through the program and its input by hand.
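The gap between holding a program’s full text and having a clear mental picture of its behavior can be illustrated with a small example of my own (it is not from Turing or Abramson): the Collatz iteration is a program anyone can read in its entirety, yet how long it runs on a given input is, as far as anyone currently knows, discoverable only by executing it.

```python
# Illustrative sketch, not from the paper: a three-line rule whose complete
# source is in front of us, yet whose run-length for an arbitrary input
# cannot at present be predicted without executing it (whether it always
# reaches 1 is the open Collatz conjecture).

def collatz_steps(n: int) -> int:
    """Count iterations of the Collatz rule until n reaches 1."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2  # odd: 3n+1; even: halve
        steps += 1
    return steps

# Full access to the program does not make the answer obvious:
print(collatz_steps(27))  # 111 steps, found only by running it
```

Nothing here settles Turing’s claim, of course; the example only shows that ‘seeing the program’ and having ‘a clear mental picture of the state of the machine at each moment’ can come apart, which is exactly the gap the epistemic-limitation condition names.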
On the other hand, as programmers know very well, if ‘mental picture’ refers to a more general description of the gross functional properties of the program, a system dump will often be insufficient for such clarity. This is why debugging is such a ‘struggle.’

The third quotation has as its goal, as do the previous two, to account for how machines can originate their own behavior, as opposed to merely acting as stand-ins for the ingenuity of the programmer. In this case, though, Turing explicitly supposes that we give the machine a program. One might wonder how someone can have a computer program in their hands that, when run, results in unanticipated behavior. The short answer is that, as Turing proved in his 1936 paper, under a reasonable assumption (the Church-Turing thesis), there will always be computers that are unpredictable even for someone who knows how they work.

The fourth quotation brings this point home. Rather than imagining that there is some lack of knowledge that makes brains and computers unpredictable, Turing imagines cases in which we peer inside each and lack understanding of what we see. Again, Turing is an expert on the existence of such cases. It would be very strange to hold the first interpretation of these quotations in attempting to explain the use of the word ‘understanding’ in the fourth quote. After all, those who observe the computer described above presenting its symphonies don’t merely lack understanding of how the computer composes—they lack knowledge of how the computer operates altogether.

The epistemic-limitation condition names the lack of understanding that one has even after seeing how a machine works, that is, by observing its program. Learning computers provide a possible route to constructing such machines. In some cases, construction of a learning computer will fail to result in a machine that satisfies the epistemic-limitation condition. However, only building previously understood programs into machines is guaranteed to result in machines that fail the epistemic-limitation condition. Thus the first two quotations emphasize the importance of not constructing machines that contain previously understood forms.

So, there is ample textual evidence that, in addition to providing a sufficient condition on intelligence, namely, passing the Turing test, Turing also holds a necessary condition on intelligence: the epistemic-limitation condition, as I have called it. An obvious question is, how can Turing consistently hold both of these? Doesn’t calling the Turing test a sufficient condition mean that no other necessary conditions must hold for something that passes it?
In short, Turing is committed to the empirical claim that satisfaction of his sufficient condition (passing the Turing test) implies satisfaction of his necessary condition (the epistemic-limitation one) for having intelligence. To use a term from a widely cited and anthologized paper on the Turing test, Turing has what Ned Block calls a ‘psychologistic’ condition on thinking, but thinks that this condition will be satisfied by anything that passes the sufficient condition.11

The last response to Turing’s claimed behaviorism is intimately connected to Dennett’s response. In fact, Dennett’s response can be thought of as the claim that the implication from satisfaction of the sufficient condition to satisfaction of the necessary condition can be justified on a priori grounds.12 I won’t offer an argument in support of that here. In the absence of such an argument on Turing’s part, the parsimonious reading is that he simply believed a connection between his necessary and sufficient conditions for intelligence is likely, and worth testing.

2.6. Summary

Of the four responses to the claim that Turing offers a behavioral analysis of the possession of mental states, the last is the strongest. It does attribute to Turing the claim that passing the Turing test is a sufficient condition for intelligence, an apparently behavioral criterion. However, on the strength of considerable textual evidence, Turing believes that satisfaction of this behavioral criterion implies satisfaction of a non-behavioral criterion. Furthermore, Turing believes that this non-behavioral condition must be satisfied—is necessary for—having a mind.

So far I have been merely setting up the problem. Now that I have shown that Turing wasn’t a behaviorist in any sense, one can ask: what influences was Turing acting under, if not his zeitgeist? I will show in the next section of the paper a significant influence for Turing in the formulation of his sufficient condition for intelligence.

11 See Block (1981). Clearly, Ned Block does not find this view in Turing’s own work. However, Block’s paper can be understood as a defense of the epistemic-limitation condition, together with an argument that its relationship to Turing’s sufficient condition must be contingent, not necessary.
12 Recently, Stuart Shieber has offered a sustained argument that the connection between satisfaction of plausible versions of Block’s psychologistic requirement and passing the Turing test can be justified on mildly empirical grounds. See Shieber (2007, p. 709).
3. A possible source for the Turing test

3.1. Introduction

In arguing for Turing’s commitment to his necessary condition I have suggested that the usual understanding of the Turing test is mistaken: it is not an expression of behaviorism. It bears explaining, then, what historical influences (if any) contributed to Turing’s formulation of his test, and whether these provide additional insights into how to understand Turing’s conception of his test.

Now I will show that his sufficient condition was not original to Turing, but taken, with light modification, from a significant figure in the history of philosophy. So, in the remainder of this paper, I will discuss the origin of the Turing test. First I review some hypotheses concerning what, if any, influences contributed to Turing’s development of his test. Then I offer and justify a particular hypothesis. Finally, I try to show that the test, and its source, share deep commonalities. In short, I claim that Turing’s test, and Descartes’ so-called language test, are epistemologically analogous—they play similar roles for each in collecting information about whether some object thinks. I both appeal to existing interpretations of Descartes and Turing, and offer new historical evidence in support of this interpretation.

3.2. Descartes and Turing

Some commentators have tried to deduce the origin of the Turing test from an analysis of Turing’s work.
Here is part of an attempt by Hodges, in his biography of Turing:

The discrete state machine, communicating by teleprinter alone, was like an ideal for [Turing’s] own life, in which he would be left alone in a room of his own, to deal with the outside world solely by rational argument. It was the embodiment of a perfect J.S. Mill liberal, concentrating upon the free will and free speech of the individual. From this point of view, his model was a natural development of the argument for his definition of ‘computable’ that he had framed in 1936, the one in which the Turing machine was to emulate anything done by the individual mind, working on pieces of paper. (Hodges, 1983, p. 425)

So, the Turing test, according to Hodges, is the confluence of Turing’s views on the equivalence of effectively computable functions and Turing computable functions, and his own personal political and social temperament. In a similar vein, A.K. Dewdney writes: ‘[Turing’s] proposal [for the Test] was the essence of British fair play: A human judge would interact with either a computer or a human and then guess which was which’ (Dewdney, 1992, p. 30).

Daniel Dennett is, to my knowledge, the only person to have even considered the possibility that Turing may have been inspired by previous philosophical thinking on the difference between minds and machines. In the same article in which he denies that Turing is a behaviorist, Dennett writes, ‘Perhaps [Turing] was inspired by Descartes, who in his Discourse on Method, plausibly argued that there was no more demanding test of human mentality than the capacity to hold an intelligent conversation’ (Dennett, 1985, pp. 5–6). In the relevant passage, Descartes argues that there are sure ways to distinguish beings that think from mere machines. I will quote a slightly longer passage than Dennett does from the Discourse.
sfaction of plausible versions of Block’s psychologistic requirement and passing the ng. Studies in History and Philosophy of Science (2011), doi:10.1016/ http://dx.doi.org/10.1016/j.shpsa.2011.09.004 http://dx.doi.org/10.1016/j.shpsa.2011.09.004 D Cross-Out D Replacement Text a computer D Inserted Text Turing D Inserted Text and life 415 a 416 d 417 y 418 h 419 r 420 e 421 t 422 e 423 e 424 a 425 s 426 n 427 t 428 t, 429 d 430 - 431 e, 432 h 433 - 434 h 435 g 436 - 437 s 438 n 439 s 440 o 441 h 442 443 - 444 if 445 - 446 - 447 a 448 - 449 d 450 - 451 i- 452 s. 453 - 454 e 455 s 456 y 457 - 458 459 460 o 461 e 462 - 463 e 464 - 465 l- 466 e 467 d 468 t- 469 , 470 i- 471 n 472 473 y, 474 g 475 e 476 ll 477 n 478- 479 480 481i- 482i- 483r, 484n 485 486d 487y 488s 489n 490 491g 492- 493r, 494t 495 496d 497 498- 499- 500g 501- 502- 503 504r 505t 506e 507 508t 509f 510s 511e 512a 513e 514r 515d 516e 517e 518o 519e 520o 521l- 522f 523r 524- 525t 526n 527n 528f- 529a 530t 531y 532e 533e 534e 535 536- 537s 538’s 539h D. Abramson / Studies in History and Philosophy of Science xxx (2011) xxx–xxx 5 SHPS 897 No. of Pages 9, Model 5G 17 September 2011 . . . if any such machines had the organs and outward shape of monkey or of some other animal that lacks reason, we shoul have no means of knowing that they did not possess entirel the same nature as these animals; whereas if any suc machines bore a resemblance to our bodies and imitated ou actions as closely as possible for all practical purposes, w should still have two very certain means of recognizing tha they were not real men. The first is that they could never us words, or put together other signs, as we do in order to declar our thoughts to others. For we can certainly conceive of machine so constructed that it utters words, and even utter words which correspond to bodily actions causing a change i its organs (e.g. 
if you touch it in one spot it asks what you wan of it, if you touch it in another it cries out that you are hurting i and so on). But it is not conceivable that such a machine shoul produce different arrangements of words so as to give an appro priately meaningful answer to whatever is said in its presenc as the dullest of men can do. Secondly, even though suc machines might do some things as well as we do them, or per haps even better, they would inevitably fail in others, whic would reveal that they were acting not through understandin but only from the disposition of their organs. For whereas rea son is a universal instrument which can be used in all kind of situations, these organs need some particular dispositio for each particular action; hence it is for all practical purpose impossible for a machine to have enough different organs t make it act in all the contingencies of life in the way in whic our reason makes us act (Descartes, 1637, pp. 139–140). First, Descartes seems to think that, for machines lacking ratio nality, identical stimuli must give rise to identical responses (‘ you touch it one spot, it asks what you want . . . But it is not con ceivable that such a machine should produce different arrange ments . . ..’). Second, Descartes seems to think that once machine has been assembled, there is a fixed, finite number of cir cumstances it can behave appropriately in (‘these organs nee some particular disposition for each particular action . . ..’). The sec ond test is like the first, but involves observing an open ended var ety of abilities to accomplish physical, as opposed to verbal task The two related limitations of machines just mentioned pre clude, on Descartes’ account, the ability of a machine to acquir new dispositions, either for improving responses to circumstance it is ill-suited to in its beginning, or for circumstances it is initiall unable to respond to at all. Descartes’ reasoning thus leads natu rally to Lady Lovelace’s objection. 
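Read through modern eyes, the machine Descartes envisages amounts to a fixed, finite table of stimulus-response dispositions. The following sketch is an illustrative anachronism, not anything Descartes or Turing describes; the class name and its entries are invented for this example. It exhibits both limitations at once: identical stimuli always yield identical responses, and a stimulus with no prepared disposition gets no apt response.

```python
# A machine in Descartes' sense: a fixed table of dispositions,
# assembled once and never extended.  (Hypothetical illustration only.)

class CartesianMachine:
    def __init__(self):
        # One disposition ("organ") per anticipated stimulus.
        self._dispositions = {
            "touch shoulder": "What do you want of me?",
            "touch hand": "You are hurting me!",
        }

    def respond(self, stimulus):
        # Identical stimuli always produce identical responses; a
        # stimulus outside the table yields nothing meaningful.
        return self._dispositions.get(stimulus, "<no apt response>")

machine = CartesianMachine()
print(machine.respond("touch shoulder"))    # same stimulus, same reply, every time
print(machine.respond("touch shoulder"))
print(machine.respond("what is justice?"))  # outside the fixed table
```

On this picture, acquiring a new disposition would require physically adding an "organ" to the table, which is precisely what Descartes takes assembled machines to be unable to do.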
3.3. Descartes, Turing, and irony

Many of us, in trying to motivate the idea of the Turing test to students or colleagues, present these comments from the Discourse as a tonic for the complaint that Turing was an unreflective behaviorist. In fact, Jack Copeland, who also identifies precursors of the Turing test in the writings of Descartes and the Cartesian de Cordemoy, writes: 'The idea that the ability to use language is the hallmark of a thinking being has a long history. Ironically, the seventeenth century French philosopher René Descartes proposed conversation as a sure way of distinguishing any machine, no matter how subtle, from a genuine thinking being' (Copeland, 1993, pp. 38–39). The irony, I take it, is that Turing, an apparent materialist about mind, and Descartes, a dualist, agree on how we can determine that machines do or don't have minds.

However, other commentators have suggested, alternatively, that Descartes and Turing have distinct motivations for offering their criteria for the presence of mind, and that their tests are not even comparable (for example, Chomsky (2004)). First I will establish a likely influence, for Turing, in formulating his test. Then I will examine each of these claims concerning the similarities between Descartes and Turing.

3.4. The Turing test: an adapted language test

In this section I will argue that Turing's primary source of inspiration for the Turing test was not his British upbringing, his social idiosyncrasies, nor even his views in computability theory. Rather, Turing's test finds its likely origin in, yes, Descartes' comments in the Discourse.

It is widely known that Turing, in writing his 1950 paper, read and responded to a paper called 'The Mind of Mechanical Man' by the neurosurgeon Geoffrey Jefferson. This paper was delivered as the Lister Oration at the Royal College of Surgeons of England on June 9, 1949.
In responding to what he calls 'The Argument from Consciousness' against the possibility of machine thought, Turing quotes Jefferson at length. The online Digital Turing Archive contains an image of the page, from the preprint of Jefferson's paper in Turing's possession as he was writing his 1950 Mind paper, that is the source of this quote. In the margin, next to the passage from Jefferson that Turing quotes, there is a heavy line made in colored pencil (http://www.turingarchive.org/viewer/?id=504&title=a).

The King's College Archive, at Cambridge University, in notes recorded when this preprint was donated to it, indicates that annotations to the preprint were in Turing's hand. The Archive's catalog entry describes the preprint, in a batch of documents left to the Archive by Robin Gandy, as having 'annotations by AMT (Alan Turing)' (http://www.turingarchive.org/browse.php/B/33-57).

However, in examining Turing's preprint of the Jefferson paper in the physical archive, I found a second heavy line, so heavy that the indentation from the pencil carries through 5 pages. Here is the other passage that Turing annotated:

Descartes made the point, and a basic one it is, that a parrot repeated only what it had been taught and only a fragment of that; it never uses words to express its own thoughts. If, he goes on to say, on the one hand one had a machine that had the shape and appearance of a monkey or other animal without a reasoning soul (i.e., without a human mind) there would be no means of knowing which was the counterfeit. On the other hand, if there was a machine that appeared to be a man, and imitated his actions so far as it would be possible to do so, we should always have two very certain means of recognizing the deceit. First, the machine could not use words as we do to declare our thoughts to others.
Secondly, although like some animals they might show more industry than we do, and do some things better than we, yet they would act without knowledge of what they were about simply by the arrangement of their organs, their mechanisms, each particularly designed for each particular action (cp. Karel Čapek's Robots). Descartes concluded: 'From which it comes that it is morally impossible that there be enough diversity in a machine for it to be able to act in all the occurrences of life in the same way that our reason would cause us to act. By these means we can recognize the difference between man and beasts.' He could even conceive of a machine that might speak and, if touched in one spot, might ask what one wanted—if touched in another that it would cry out that it hurt, and similar things. But he could not conceive of an automaton of sufficient diversity to respond to the sense of all that could be said in its presence. It would fail because it had no mind. (Jefferson, 1949, p. 1106)

It is therefore extremely likely that Turing was aware of Descartes' views on the claimed in-principle difference between minds and machines. Descartes' views at least helped crystallize Turing's own conception of the Turing test, and at most presented him with the idea in toto. The King's College Archive contains one preprint of the paper that Turing read and quoted from. That preprint contains two annotations that, according to the Archive, are in Turing's own hand. One annotation is of a passage explicitly quoted by Turing; the other is an annotation of an expression of the central idea of the 1950 paper: that thinking things can be distinguished from non-thinking things by a flexible ability to use natural language.

Jefferson's paper not only paraphrases Descartes' views, but endorses them. Jefferson asserts a materialist view, and presents his paper as an attempt to reject Descartes' dualism concerning brain and mental function. He states that the notion that minds are physical objects seems to offend both our sense of the richness of mental states, and our ethical and political self-image. However, Jefferson goes on to try to show that although minds are physical things, no computer could ever pass Descartes' language test. Jefferson's reason for thinking this is that he is 'quite sure that the extreme variety, flexibility, and complexity of nervous mechanisms are greatly underestimated by the physicists, who naturally omit everything unfavourable to a point of view' (Jefferson, 1949, p. 1110).

In consideration of 'the argument from consciousness,' Turing quotes Jefferson to the effect that machines cannot think because, no matter what they do, they will lack accompanying emotions and feeling. In the quoted passage, Jefferson lapses into a position inconsistent with the one that appears elsewhere in his paper. That is, Jefferson claims that even if a computer could perform language tasks, one could still question whether or not consciousness or reason were behind the expressions. This comment is made despite Jefferson's approving presentation of Descartes elsewhere in the paper. Turing's selective presentation of Jefferson's views may have prevented later readers of Turing from investigating Descartes' influence on Turing, via Jefferson, further.

Jefferson, to use contemporary terms from cognitive science, rejects multiple realizability and adopts something like the dynamical hypothesis, claiming that machines can only be imperfect mimics of the brain: 'however [the human brain's] functions may be mimicked by machines, it remains itself and is unique in Nature' (Jefferson, 1949, p. 1106).

In the next section I will argue that Descartes and Turing both understand their own tests in the same way: as empirical hypotheses concerning a theoretical commitment to the nature of mind. That is, I will show that each of them thinks that satisfaction of their test implies the presence of some inner, necessary condition for a mind. But, we will see, their commitment to this implication is subject to empirical investigation.

3.5. Moral Impossibility

First I will present a widely held understanding of Descartes' language test, argue briefly in favor of it against an alternative, and then show that this yields a deep epistemic commonality between the two tests. To begin, let us pose the difficult historical question: if Descartes were made aware both of the Turing test, and of a machine that passed it, would he be compelled to abandon his view that mental substances and physical substances are distinct? Or, at least, we can pose the slightly less difficult question: what view would be consistent with Descartes' remarks on the language test?

Descartes' comments in the Discourse use qualifications to describe the possibility of machines that pass the language test, but do not possess reason (and therefore a soul). Descartes describes the most complicated machines we can conceive of, and then says that they 'could never' perform as even the stupidest humans do with natural language (Descartes, 1637, p. 140). Finally, Descartes says that for any machine that is as close to a person 'as possible for practical purposes', we would have 'very certain' ways of telling it apart from real people.

I want to suppose, then, that the qualification of 'moral impossibility' applies to the case of a machine that passes the language test. Now, it is unlikely that Turing would have been aware of what this term meant for Descartes. So we cannot argue from Turing's exposure to Descartes' view that he understood the Turing test to similarly provide the same level of certainty. On the other hand, our question is served by examining the definition of this technical phrase.
In Principles of Philosophy, Part Four, Descartes writes:

It would be disingenuous, however, not to point out that some things are considered as morally certain, that is, as having sufficient certainty for application to ordinary life, even though they may be uncertain in relation to the absolute power of God. (Descartes, 1644, pp. 289–290)

'Morally certain' in this passage means 'not absolutely certain'. It is therefore reasonable to interpret 'morally impossible' in the passage from the Discourse as 'not absolutely impossible'. Then there are at least two different interpretations of Descartes' remarks in the Discourse, one of which allows him to maintain his position even after being presented with the Turing test and a machine that passes it, and another that does not.

I call the first interpretation the 'conditional probability' one. According to it, the probability that a given object has a soul, given the evidence that it passes the Turing test, is extremely high. However, this evidence can be defeated on the discovery that the object is a mere machine. Let us interpret the modal operator □ epistemically. That is, for any sentence P, □P will mean 'I believe it to be nearly impossible (but not absolutely impossible) that P is false.' In formal terms, the first reading of Descartes' commitment to the language test/Turing test can be expressed as

∀x□(PassesTheTuringTest(x) → NotAMachine(x))

I will call the second interpretation the 'Turing' interpretation. According to the Turing interpretation, Descartes holds a universal claim with near certainty. The claim is

□∀x(PassesTheTuringTest(x) → NotAMachine(x))

Notice that on this latter interpretation, Descartes is committed to the material conditional that passing the Turing test implies the presence of reason, as opposed to mere mechanism. However, although his level of commitment is high, it is possible that Descartes could be mistaken, in which case there is no implication from the ability to use natural language to the presence of some non-mechanical process.
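The difference between the two interpretations is a scope distinction familiar from modal logic; it may help to set the two readings side by side. The notation below is a standard way of marking the contrast, not notation drawn from Descartes or Turing, with □P read as 'I believe it to be nearly impossible (but not absolutely impossible) that P is false':

```latex
% Conditional probability interpretation (de re): for each object taken
% singly, near-certainty that passing implies not being a machine.  The
% belief about any one object can be given up without disturbing the rest.
\forall x\, \Box\bigl(\mathit{PassesTheTuringTest}(x) \rightarrow \mathit{NotAMachine}(x)\bigr)

% Turing interpretation (de dicto): near-certainty attaches to the
% universal claim itself, so a single counterexample defeats the whole
% generalization rather than one instance of it.
\Box\, \forall x\, \bigl(\mathit{PassesTheTuringTest}(x) \rightarrow \mathit{NotAMachine}(x)\bigr)
```

On the first reading Descartes can absorb a talking machine as a rare defeated instance; on the second, a single such machine forces him to revisit the generalization, and with it the network of commitments that supports it.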
The Turing interpretation is supported by some readers of Descartes. For example, in his analysis of Cartesian dualism, John Cottingham claims that Descartes provides divergent arguments that are in tension with one another. In particular, Cottingham claims that the language test displays a 'scientific' motivation for dualism, which is defeasible on possible evidence, whereas Descartes' metaphysical argument (involving an argument for the separability of body and mind) is not subject to empirical evidence (Cottingham, 1992).

Cottingham neatly ties together Lady Lovelace's objection and Descartes' views by suggesting that Descartes merely assumes that no machine can be built that is unpredictable by its creator (Cottingham, 1992, p. 250). Presumably, the creator of a mechanical device can see first hand the assemblage of organs with determined dispositions that will produce the device's behaviors. If a machine passes the language test, then it will have to perform in ways that its creator cannot anticipate, since otherwise the programmer would have to imagine all of the indefinite things the machine can do.

On Cottingham's view, Descartes believes that the language test is sufficient for distinguishing thinking beings from machines precisely because Descartes cannot imagine that a machine will ever satisfy the epistemic-limitation condition on intelligence.
Note that the Turing interpretation of Descartes' views of his language test leaves open what Descartes would actually do if he were confronted by a talking machine. Cottingham's interpretation implies that Descartes would give up his test in the face of an apparent counterexample. On the other hand, Keith Gunderson writes: 'Even if "another Prometheus" made a highly convincing talking mechanical man, I believe it is more likely that Descartes would rather have claimed that a generous God had granted the clever fellow an extra soul to go with his invention, than submit to the conclusion that we had no soul at all' (Gunderson, 1971, p. 34). Whether or not Gunderson is right about what Descartes would do, Gunderson in this passage attributes the Turing interpretation to Descartes. That is, Gunderson thinks that for Descartes, the alternative hypothesis, that the language test is insufficient, is less probable than some alternative involving a soul. Gunderson does not think that Descartes has open to him the possibility that he has been confronted by the rare machine that can converse without having a soul.

Cottingham most clearly endorses the Turing interpretation of Descartes' language test over the conditional probability interpretation with his claim that Descartes believes strongly that the limits of physics would prevent any object, operating according to purely mechanical principles, from having the ability to converse in natural language (Cottingham, 1992, p. 252).

I will now provide some brief additional philosophical considerations in support of the Turing interpretation of the language test over the conditional probability interpretation. Suppose that a machine can be built that passes the Turing test, and that Descartes is presented with it. Given a manufacturing technique that can produce a single machine that passes a Turing test, many more such machines can be created by just copying the first one.
So, if the very small likelihood obtains that a machine exists that passes the Turing test, one can, conceivably, revise the probability of some object's being a non-thinking possessor of natural language ability to approach any measure of likelihood. Such scenarios include ones in which the machines being manufactured take control of the manufacturing process. Descartes clearly does not intend his commitment to the moral impossibility of language-using machines to rely on individual empirical observations; rather, admitting the existence of a language-using machine, even in a single instance, requires rejecting whole networks of beliefs and commitments. Therefore, even given the qualifications that Descartes offers, he would be compelled to at least revisit his dualism if Turing's assertion, that a single machine can be built that passes the Turing test, is correct.

Here is an analogous commitment for Turing that both makes clear that he is not a behaviorist and highlights the similarity between his understanding of his test and Descartes' (SatisfiesEpLim(x) means x satisfies the epistemic-limitation condition):

□∀x(PassesTheTuringTest(x) → SatisfiesEpLim(x))

This is the view that I interpret Turing as holding. He holds, with a high degree of certainty, that satisfaction of his sufficient condition for intelligence implies satisfaction of his necessary condition. Consider now a conditional probability position on the relationship between the necessary and sufficient conditions for intelligence that Turing offers:

∀x□(PassesTheTuringTest(x) → SatisfiesEpLim(x))

Turing cannot hold the weaker, conditional probability view, and maintain a necessary condition on intelligence, while holding that passing the Turing test is a sufficient condition for having a mind. Given the evidence for Turing's commitment to his necessary condition on intelligence, I claim that the first interpretation of Turing's position, analogous to what I have called the 'Turing' interpretation of Descartes, is more plausible. In both cases, there is a strong commitment to a relationship between possession of properties that may be falsified through further empirical and theoretical investigation. Both Turing and Descartes hold their tests in the same epistemic status, mutatis mutandis for each's necessary condition for having a mind.

There is a limit, of course, to how similar Descartes and Turing can be in their understanding of their tests. Descartes subscribes to a necessary, internal, and sufficient condition for the possession of intelligence: the having of an immaterial soul. Descartes believes, though, that we can't observe souls directly in others, and must rely on the test to detect their presence. Turing, on the other hand, has his test (constituting a scientific commitment rather than a statement of a behaviorist condition for intelligence), but no independent sufficient condition for mind. Perhaps we can make sense, then, of Turing's distaste for discussions of the meaning of terms like 'thinking,' and his (merely apparent) suggestions that the imitation game operationalizes intelligence. By rejecting dualism, Turing has no alternative, internal, sufficient condition for intelligence. But, Turing encourages us, this gap in our understanding need not preclude scientific inquiry.

4. Conclusion

So, I believe there is ample evidence that Turing at least conceived of his own test as fulfilling just the purpose that Descartes' fulfilled for him. I have argued for this by presenting extant interpretations of Descartes, analysis of Turing's texts, and philosophical analysis of the views of each.
Turing is in a dialogue spanning centuries in which he is presented with the view that, due to some hidden property, humans are able to engage in natural language conversations, but computers aren't. Faith that a machine can be built that passes the Turing test constitutes a denial of this claim, together with the belief that such a machine can be built lacking any special physical or metaphysical property. Viewed this way, the Turing test is not a merely rhetorical tool designed to influence scientific or social commitments, but instead a concrete method for settling philosophical disputes over what can be taken to indicate the presence of a mind. I claim that the Turing test and Descartes' language test fulfill exactly the same purpose: testing for the presence of some property that is necessary for mind, and claimed by some to be unimplementable in mere machines.

Turing was aware of Descartes' language test, and likely was inspired by it to come up with the Turing test. Finally, on a defensible reading of both Descartes and Turing, performance in natural language contexts indicates to both a hidden, necessary property for intelligence.

Acknowledgements

This research was supported by a Dalhousie Faculty of Arts and Social Sciences Research Development Fund grant, from funds provided by the Social Sciences and Humanities Research Council of Canada. I am grateful for helpful feedback from an anonymous referee, participants at the 2008 meeting of the Canadian Society for the History of Philosophy and Science, and participants of the Dalhousie Philosophy Colloquium Series. I am also grateful to Joy Abramson, Duncan MacIntosh, Tom Vinci, Sara Parks and Lisa Kretz for encouragement and discussion while working through previous versions of this paper.
References

Abramson, D. (2008). Turing's responses to two objections. Minds and Machines, 18(2), 147–167.
Anderson, A. R. (Ed.). (1964). Minds and machines. Prentice-Hall.
Block, N. (1981). Psychologism and behaviorism. The Philosophical Review, 90(1), 5–43.
Chomsky, N. (2004). Turing on the 'imitation game'. In S. Shieber (Ed.), The Turing test: Verbal behavior as the hallmark of intelligence (pp. 317–321). MIT Press.
Copeland, J. (1993). Artificial intelligence: A philosophical introduction. Blackwell Publishers.
Copeland, B. J. (2001). The Turing test. Minds and Machines, 10, 519–539.
Cottingham, J. (1992). Cartesian dualism: Theology, metaphysics, and science. In J. Cottingham (Ed.), The Cambridge companion to Descartes. Cambridge University Press.
Davidson, D. (2004/1990). Turing's test. In Problems of rationality. Oxford University Press.
Dennett, D. (1998/1985). Can machines think? With postscripts 1985 and 1997. In Brainchildren: Essays on designing minds. MIT Press.
Descartes, R. (1985/1637). Discourse on method. In The philosophical writings of Descartes (Vol. I) (J. Cottingham, R. Stoothoff, & D. Murdoch, Trans.). Cambridge University Press.
Descartes, R. (1985/1644). Principles of philosophy. In The philosophical writings of Descartes (Vol. I) (J. Cottingham, R. Stoothoff, & D. Murdoch, Trans.). Cambridge University Press.
Dewdney, A. K. (1992). Turing test. Scientific American, 266(1), 30–31.
Gunderson, K. (1971). Mentality and machines. Doubleday.
Hodges, A. (1983). Alan Turing: The enigma. Burnett Books.
Jefferson, G. (1949). The mind of mechanical man. British Medical Journal, 1(4616), 1105–1110.
Leiber, J. (1995). On Turing's Turing test and why the matter matters. Synthese, 104, 59–69.
Moor, J. H. (1976). An analysis of the Turing test. Philosophical Studies, 30, 249–257.
Piccinini, G. (2000). Turing's rules for the imitation game. Minds and Machines, 10(4), 573–582.
Scriven, M. (1964). The mechanical concept of mind (with postscript). In A. R. Anderson (Ed.), Minds and machines (pp. 31–42). Prentice-Hall.
Searle, J. (2004). Mind. Oxford University Press.
Shieber, S. (2007). The Turing test as interactive proof. Noûs, 41(4), 686–713.
Sterrett, S. (2000). Turing's two tests for intelligence. Minds and Machines, 10(4), 541–559.
Turing, A. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.
Turing, A. (2004/1951a). Can digital computers think? In B. J. Copeland (Ed.), The essential Turing. Oxford University Press.
Turing, A. (2004/1951b). Intelligent machinery, a heretical theory. In B. J. Copeland (Ed.), The essential Turing. Oxford University Press.
Turing, A., Braithwaite, R., Jefferson, G., & Newman, M. (2004/1952). Can an automatic calculating machine be said to think? In B. J. Copeland (Ed.), The essential Turing. Oxford University Press.