THE ART OF BEING HUMAN: A PROJECT FOR GENERAL PHILOSOPHY OF SCIENCE

Steve Fuller

0. Introduction: The Road Back from Stanford to a Re-Humanised Science

The decline of general philosophy of science can be traced to the influence of Thomas Kuhn on a generation of scholars, born around 1940, who started to become prominent in the history, philosophy and social studies of science in the late 1970s (Fuller 2000: chap. 6). Within ten years, a critical mass of these post-generalists was assembled at Stanford University, centred on Ian Hacking and Nancy Cartwright, and including such younger scholars as John Dupré and Peter Galison. Despite working in substantively different areas, they shared certain metatheoretic views:

(a) anti-determinism and a more general scepticism about the reality of natural laws;
(b) ontological pluralism as a pretext for methodological relativism and cross-disciplinary tolerance more generally;
(c) a revival of interest in a localised sense of teleology and essentialism while renouncing more universalist versions of these doctrines;
(d) a shift from physics to biology as the paradigmatic science and hence a shift in historiographical orientation from the Newton-to-Einstein to the Aristotle-to-Darwin trajectory;
(e) a shift in empirical focus from the language of science to science’s non-linguistic practices;
(f) an aversion to embracing a normative perspective that is distinct from, let alone in conflict with, that of the scientific practitioners under investigation.

The Stanford School published a landmark volume (Galison and Stump 1996) that extended the reach of its anti-generalist line to fellow travellers of a more postmodernist, even posthumanist approach, as represented by, say, Donna Haraway and Bruno Latour.
The result has been the establishment of a diffuse but relatively stable consensus of bespoke thinking in science and technology studies (STS) that considers science in all its various social and material entanglements, without supposing that science requires an understanding that transcends and unifies its diverse array of practices. For these anti-generalists, as long as there are people called ‘scientists’ who refer to what they do as ‘science’, science continues to exist as an object of investigation. Indeed, the scientific agents need not even be people, if they are recognized by other recognized scientists as producers of reliable knowledge. In effect, rather than using sociology to flesh out a normative philosophical ideal, the Stanford School cedes to sociology conceptual ground that philosophers previously treated as their own.

My own approach to social epistemology stands in direct opposition to the Stanford School (see especially Fuller 2007c). However, the Stanford School is a useful foil for my argument because they recognize – if only to reject – the integral relationship between scientific unificationism, determinism, physics-mindedness and human-centredness. In what follows, I reassert all of these unfashionable positions but in a new key that acknowledges that the recent changes in the conduct and justification of science have coincided with a new sense of openness about what it means to be ‘human’. Throughout the medieval and modern periods, in various sacred and secular guises, the unification of all forms of knowledge under the rubric of ‘science’ has been taken as the prerogative of humanity as a species. However, as our sense of species privilege has been called increasingly into question, so too has the very salience of ‘humanity’ and ‘science’ as general categories, let alone ones that might bear some essential relationship to each other.

1. Science as Humanity’s Means to Manage Modality

Grammatically speaking, modern science was born concessive, by which I mean that species of the subjunctive mood captured in English by ‘despite’ and ‘although’. The original image conveyed by these words was one of modal conflict between overriding necessity and entrenched contingency, conceived either synchronically or diachronically: on the one hand, the resolution of the widest range of empirical phenomena in terms of the smallest number of formal principles; on the other, the unfolding of a preordained plan over the course of a difficult history. In short: Newton or Hegel. In both cases, necessity and contingency are engaged in a ‘dialectical’ relationship. While the ‘necessary’ might be cast in terms of Newton’s laws or Hegel’s world-historic spirit, anything that resisted, refracted, diverted or dragged such a dominating force would count as ‘contingent’. What the Newtonian world-view defined as ‘secondary qualities’ or ‘boundary conditions’, the Hegelian one treated as ‘historically myopic’ or ‘culturally relative’. Both the synchronic and diachronic versions of this dialectic descended from divine teleology, where the end might be construed as either God’s original plan (Newton) or its ultimate outworking (Hegel). However, modernity is marked by the removal of God as primum mobile, something that arguably Newton himself made possible once he defined inertia as a body’s intrinsic capacity for motion (Blumenberg 1983).
To be sure, de-deification has turned out to be a tricky move, since human confidence in science as a species-defining project is based on the Biblical idea that we have been created ‘in the image and likeness of God’ (Fuller 2008b). In that case, the project of modern science may be understood as the gradual removal of anthropomorphic elements from an ineliminably anthropocentric conception of inquiry. By ‘anthropocentric’ I mean the assumption of reality’s deep intelligibility – that is, reality’s tractability to our intelligence. In other words, over successive generations, organized science has not only repaid the initial effort invested but also issued in sufficient profit to inspire increased investment, resulting in a reconstitution of the life-world in the scientific image. In the early modern era, the great Cartesian theologian Nicolas Malebranche provided a vivid metaphysical grounding for this sensibility by speaking of our ‘vision in God’, that is, our capacity to think some of God’s own thoughts – specifically, those expressible in analytic geometry, through which the motions of all bodies can be comprehensively grasped at once. Secular doctrines of a priori knowledge descend from this idea of an overlap in human and divine cognition. Moreover, the repeated empirical success of physicists’ vaunted theoretical preference for the ‘elegance’ of mathematically simple forms cannot be dismissed as merely a tribal fetish. Those aesthetic intuitions are reasonably interpreted as somehow tapping into a feature of ultimate reality, the nature of which is not necessarily tied to our embodiment – but may be tied to some other aspect of our being.

Is the naturalist’s explanation for science’s success an improvement on the Cartesian one? After all, what exactly is the survival value of concentrating significant material and cultural resources on some hypothesised ‘universe’ that extends far beyond the sense of ‘environment’ that is of direct relevance to Homo sapiens? From a strictly Darwinian standpoint, a fetish that perhaps arose as a neutral by-product of a genetic survival strategy by a subset of the Eurasian population may only serve to undo the entire species in the long term. Specifically, the mentality that originally enabled a few academics to enter ‘The Mind of God’ has also been responsible for nuclear physics and the massive existential challenges that have followed in its wake (Noble 1997; Fuller 2010a: chap. 1). In this respect, humanity’s bloody-mindedness in the pursuit of science reflects a residual confidence in our godlike capacities, even after secularisation has discouraged us from connecting with its source.

Two senses of freedom are implied in this theological rendering of humanity’s scientific impulse. On the one hand, we literally share God’s spontaneous desire to understand everything as a unified rational whole, which drives us to see nature in terms of the laws by which the deity created it. The clear intellectual and practical efficacy of this project stands in stark contrast to the risk to which it exposes our continued biological survival, since at least for the time being we are not God. This serves to bias our cost-accounting for science’s technological presence in the world, whereby we tend to credit the good effects to the underlying science while blaming the bad effects on science’s specific appliers or users.
On the other hand, our distinctiveness from God lies in the detail in which we seek a comprehensive understanding of reality, given our own unique status as creatures. This latter condition gives us a sphere of meaningful contingency, or ‘room to manoeuvre’ (Spielraum), to recall the phrase of the late 19th-century German physiologist and probability theorist Johannes von Kries, who greatly influenced Max Weber’s thinking about what distinguishes humanity from the other lawfully governed entities in nature (Weber 1949: 164-88). Humans are self-legislating, in that even in a world that is determined by overarching principles (‘overdetermined’, as it were), we have the power to resolve any remaining uncertainty in how these principles are instantiated. Indeed, Weber appeared to suggest that our humanity rests on the fact that we treat overdetermined situations as providing meaningful choices – that the way or style in which something is done in the short term matters, even if in the long term it simply reinforces the fact that the thing had to be done. Vivid examples are provided by the extraordinary ethical debates that continue to surround birth and death – the simple biological fact that people come into and go out of existence. More striking than the range of opinion that these debates elicit is that they are had at all, since regardless of where one stands on abortion, contraception, euthanasia, suicide or – for that matter – murder, mortality is a certainty that befalls natural biological selves (Fuller 2006a: chap. 11).

This opening meditation on modality suggests that science relates to our humanity in a rather sophisticated way. We are dualistic beings, which in the first instance may be understood as our ongoing enactment of some version of spirit persevering in the face of recalcitrant matter. However else they differ, Newton and Hegel both belong to this moment. In that case, science’s steadfast focus on necessity appears indifferent, if not cruel, with respect to the contingencies of particular human lives. But Max Weber – no doubt moved by the popular determinisms of his day (Marxism, psychoanalysis, energeticism) – recognised that these contingencies were precisely the means by which humans, even after conceding certain overriding tendencies in nature, express their distinctive identities. In that sense, for Weber, humanity is a variable by-product of the exigencies of the life-situations faced by our species. But there may also be collective by-products of those exigencies – and this is where what Karl Popper called ‘objective knowledge’, including science as a unique human achievement, belongs.

2. Science: A By-Product of Life That Becomes Its Point and Then Imposes Its Own Sense of Justice

Karl Popper stood apart from most modern epistemologists and philosophers of science in refusing to identify knowledge claims – hypotheses and theories – with the formal expression of beliefs, which he took to be irreducibly subjective and more about self-affirmation than knowledge as such. Popper had an admirably literal, thing-like understanding of ‘objective knowledge’ as external entities, contact with which is generative of systematic thought (Popper 1972). An attractive feature of this conception of knowledge is that its sense of objectivity is studiously neutral on the metaphysical debate between idealism and materialism (Fuller 1988: chap. 2).
Thus, both Plato’s Forms and Popper’s own example of the last library left standing after a nuclear holocaust would count as objective knowledge, in that each would enable thinking beings – whatever their provenance – to create a civilised world. Considering that, as we shall later see, humanity may proceed in at least two distinct directions in the future, Popper’s view usefully refuses to tie science, understood as humanity’s distinguishing feature, to our current biological makeup.

Popper’s account of the origin of objective knowledge follows a line that in his youth had been advanced in the Vienna Circle as a transformation in the concept of ‘economy’. It amounted to a definition of civilisation as the reversal of means and ends, once a means has achieved its end (Schlick 1974: 94-101). Thus, the practice of counting arose to solve problems related to survival but then, once those problems were solved (or at least managed), became a project pursued for its own sake in the form of number theory. In the transition, the relevant sense of ‘economy’ shifted from producing ideas that minimise effort to a more focused concern for the minimal ideas needed to account for effort as such. Moreover, this long-term pursuit came to be seen as providing the basis for a still deeper economisation of effort that could be imparted in pedagogy. Here I mean the subsumption of the ‘practical arts’ in their broadest sense under ‘science’ – that is, the relation in which engineering, medicine, business and law currently stand to physics, biology, economics and the socio-political sciences, respectively, in the academic curriculum.

This general relationship is traceable to William Whewell’s insistence that science requires properly trained ‘scientists’, not simply amateurs who happen to stumble upon epistemic breakthroughs. The philosophical privileging of the ‘context of justification’ over the ‘context of discovery’ in science, which comes into its own in the second half of the 19th century, is ultimately about devaluing the contingent features of scientific achievement, so as to avoid a sense of ‘path dependency’ in science that would lose sight of its ultimate aim of universal knowledge. While in the short term Whewell’s policies aimed to undercut the mechanics and inventors who over the previous century had flourished outside the clerically controlled academic networks, in the long term his policies made it possible – if not inevitable – that scientific knowledge would be, at least in principle, available to everyone, specifically those who had not undergone any idiosyncratic creative process or did not belong to the right social network (Laudan 1981; Fuller 2000: chap. 1). In this respect, in reasserting the authority of Oxbridge in the face of interloping parvenus, Whewell struck a blow for epistemic justice (Fuller 2007a: 24-29).

In broadest terms, epistemic justice is about ensuring that individual inputs into the collective pursuit of knowledge are recognised in proportion to their relevance and credibility. Curiously, analytic philosophers frame this problem as one of epistemic injustice, namely, identifying and remedying presumptively clear cases in which the requisite sense of proportion has been violated (McConkey 2004). To be sure, such cases are easily conjured: medical research that studies only men but then pronounces on everyone; intelligence testing that fails to recognise its own cultural biases; psychological research that samples only students to draw inferences about all age groups.
Research designs that systematically ignore significant segments of humanity undermine science’s aspiration to knowledge of universal scope and respect. Who could disagree? But the way forward is far from clear, which suggests that we need to get clear about what is meant by ‘epistemic justice’ before speaking of ‘injustice’. Consider the options available for a research design that claims to draw conclusions that are representative of all of humanity:

(1) It must include a range of individuals who belong to the human subgroups relevant to the research topic (Longino 2001).
(2) It must include a range of the relevant perspectives on the research topic, regardless of who represents them (Kitcher 2001).
(3) It must include people who can spontaneously adopt a sense of ‘critical distance’ on the research topic by virtue of having no necessary stake in whatever conclusions might be reached (Fuller 2006b: chap. 6).

Each option makes an implicit modal cut: that is, a line is drawn between what is necessary and what is contingent in the properties of subjects in order to ensure that knowledge is produced of universal purchase. For example, (1) is committed to, say, flesh-and-blood women as representatives of a woman’s point-of-view in a way that (2) is not. Yet, as long as women are allowed to represent themselves verbally (as opposed to by more direct physical means), it is entirely possible that their responses to research protocols will not deviate substantially from those of men (e.g. if they ‘correct’ their spontaneous phenomenology). In that case, perhaps some specially trained men (in Gender Studies?) would represent a woman’s perspective better than actual women, just as a trained linguist might know an indigenous language better than an assimilated person of indigenous descent. Indeed, such a prospect might even be welcomed for preventing knowledge from being so closely tied to one’s being in the world that it effectively becomes a source of rent, such that one cannot know a particular thing without being a particular person, in which case if one is not such a person, one needs to rent his or her services (Fuller 2010b).

For science to live up to its universalist aspirations, it must oppose regimes of ‘information feudalism’ generated by ‘intellectual property’ of all sorts, not least epistemic rent, as discussed above, which sometimes travels under the politically correct label of ‘identity politics’. In all these cases, matters of epistemology would be converted to ones of ontology. Here the university, with its Humboldtian mandate to incorporate the fruits of research into teaching, stands as a bulwark against such conversion of knowledge into property, the overall lesson being that any knowledge originally acquired by one person can be, in principle, acquired by anyone (Fuller 2010b). But the implications of this maxim for a more ‘inclusive’ science are not clear. Option (3) suggests that perhaps subjects should be allowed to make the modal cut for themselves by considering situations tangentially related to their self-understanding. They would thus need to gauge both the likelihood of a situation’s relevance to their own lives and the difference it would make, were it relevant. The idea here would be to simulate a semi-detached but interested standpoint (Fuller 2007a: 110-114).
Here it is worth observing that models (1) and (2) – and quite possibly (3) – of epistemic justice sit uncomfortably with John Stuart Mill’s classic argument in On Liberty for the free expression of thought as the optimal vehicle for collective truth-seeking. His particular formulation of this ideal assumed neither that people’s beliefs are fixed by who they are (unlike 1) nor that certain beliefs deserved special representation in the polis (unlike 2). To be sure, Mill held that the expression of many different viewpoints contributed greatly to organized inquiry, and that special care had to be taken to ensure that minority positions were heard. However, he meant this activity to transpire entirely in the open, such that when decisions were taken, everyone knew where everyone else stood. In that way, people could draw their own conclusions about how others had reasoned, and on that basis perhaps adjust their own positions. Thus, in Mill’s version of the open society, voting itself would be an open process, whose outcomes could be filed for future reference. A close analogue is the iterative character of the Delphi method developed by Nicholas Rescher (1998) and colleagues at the RAND Corporation in the 1950s to convert collective decision-making into a genuine learning experience that avoided the pre-emptive intellectual closure associated with ‘groupthink’. As we shall now see, a groupthink-averse social epistemology is precisely what is needed for placing our future understanding of ‘science’ and ‘humanity’ in some reflective equilibrium.

3. Science’s Continual Re-specification of that Projectible Predicate ‘Human’

To speak of ‘projectible predicates’ is to recall that old warhorse of analytic epistemology, the so-called grue paradox. According to Nelson Goodman (1955), ‘grue’ is the property of being green before a given time and blue thereafter. This property enjoys just as much empirical support as the property of being green when hypothetically applied to all known emeralds. For Goodman, this was a ‘new riddle of induction’ because, unlike Hume’s original example of induction – how do we know that the sun will rise tomorrow just given our past experience? – his problem suggests that our propensity to inductive inference is shaped not simply by our prior experience but by the language in which that experience has been cast. Unfortunately, Goodman drew a conservative conclusion from this situation, namely, that we are generally right to ‘project’ the more familiar predicate ‘green’ over ‘grue’ when making predictions about the colour of future emeralds. Why? Well, because that predicate is more ‘entrenched’, which is a bit of jargon for the rather unphilosophical stance of ‘if it ain’t broke, don’t fix it’. The prospect that a predicate like ‘grue’ might contribute to a more adequate account of all emeralds (both known and unknown) than ‘green’ is certainly familiar from the history of science. It trades on the idea that the periodic inability of our best theories to predict the future may rest on our failure to have understood the past all along. In short, we may have thought we lived in one sort of world, when in fact we have been always living in another one. After all, the ‘grue’ and ‘green’ worlds have looked exactly the same until now.
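The equivalence of the two predicates on all past evidence can be made concrete in a few lines of code. What follows is a minimal illustrative sketch of my own, not anything from Goodman or from this paper: the cutoff year T0, the predicate definitions and the toy evidence are all hypothetical choices made for the example.

```python
# Illustrative sketch of Goodman's 'grue' riddle. The cutoff year T0
# and the data below are hypothetical, chosen only for the example.

T0 = 2030  # hypothetical time after which 'grue' things look blue

def is_green(colour, year):
    # 'Green': the emerald looks green whenever it is examined.
    return colour == "green"

def is_grue(colour, year):
    # 'Grue': the emerald looks green if examined before T0,
    # blue if examined thereafter.
    return colour == ("green" if year < T0 else "blue")

# Every emerald examined so far (i.e. before T0) has looked green.
past_evidence = [("green", 1900), ("green", 1955), ("green", 2012)]

# Both predicates fit all the past evidence equally well...
assert all(is_green(c, y) for c, y in past_evidence)
assert all(is_grue(c, y) for c, y in past_evidence)

# ...yet they project incompatible colours for an emerald examined in 2031.
print("'green' projects:", "green")
print("'grue' projects:", "green" if 2031 < T0 else "blue")  # -> blue
```

The point of the sketch is that no observation collected before T0 can discriminate between the two hypotheses; the choice of which predicate to project is therefore not settled by past evidence alone, which is precisely the riddle.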
In this respect, Goodman showed that induction is about locating the actual world in which a prediction is made within the set of possible worlds by proposing causal narratives that purport to connect past and future events, ‘green’ and ‘grue’ constituting two alternative accounts vis-à-vis the colour of emeralds. This is a profound point, especially for scientific realists, the full implications of which have yet to be explored – even now, more than half a century after Goodman’s original formulation (cf. Stanford 2006 on the problem of ‘unconceived alternatives’ to the best scientific explanation at a given time). In particular, Goodman suggests how the ‘paradigm shifts’ that Kuhn (1970) identified with ‘scientific revolutions’ should be expected if we take the fallibility of our theories as temporally symmetrical – that is, that the outcome of any prediction of a future state has implications for what we believed about relevantly similar states in the past. In this respect, every substantial new discovery is always an invitation to do revisionist history. For, as science increases our breadth of knowledge by revealing previously unknown phenomena, it also increases our depth by revising our understanding of previously known phenomena so as to incorporate them within the newly forged understanding. Thus, Newton did not simply add to Aristotle’s project but superseded it altogether by showing that Aristotle had not fully grasped what he thought he had understood. Indeed, if Newton is to be believed, Aristotle literally did not know what he was talking about, since everything that he said that we still deem to be true could be said just as well – and better – by dispensing with his overarching metaphysical framework.

In terms of my own version of social epistemology (Fuller 1988), science may be defined as the form of organized inquiry that is dedicated to reproducing Goodman’s new riddle of induction on a regular basis. To be sure, speculatively conjured predicates like ‘grue’ rarely lead to successful predictions, so some skill must be involved to make them work, whereby they acquire the leverage for rethinking inquiry’s prior history. Such skill is on display in first-rate scientific theorising. In this way, scientific revolutions metamorphose from Kuhn’s realm of the unwanted and the unintended to Popper’s (1981) positive vision of deliberately instigated ‘permanent revolutions’.

One predicate whose projectibility will be increasingly queried vis-à-vis developments in science is ‘human’. Already there is a small but interesting body of literature related to this topic, some arguing that science is confined to the human (Rescher 1999) and others that science exceeds the human (Humphreys 2004). But in both cases, ‘human’ is treated more like ‘green’ than ‘grue’. In contrast, I propose that scientific inquiry is so bound up with what it means to be human that advances in science may render ‘human’ grue-like, such that David Chalmers’ (1996) ‘hard problem of consciousness’ (which involves saving the phenomenology of human experience within a physicalistic world-view) is relegated to referring to accidental rather than essential features of our humanity. Prima facie evidence for the grue-likeness of ‘human’ is provided by a recent on-line debate on ‘the greatest technological advance of the 20th century’ (The Economist 2010).
The challengers were the computer and artificial fertiliser, the former extending certain human capacities for formal reasoning beyond their natural limits, the latter extending the quantity and quality of human lives to unprecedented levels. The debate was conducted largely in these terms, resulting in a 3:1 vote in favour of the computer. A more philosophically salient indicator is the persistence of the mind-body problem: those who would track the human through successive experiential states of our animal bodies in natural environments perennially meet resistance from those who hold that our humanity is only contingently tied to our embodiment and could, at least in principle, be realized – perhaps even more profoundly – in some other radically different medium, such as a computer. In the latter case, the human is associated primarily with the capacity to think, to reason and to reflect comprehensively (i.e. ‘consciousness’ in the strongest sense of ‘self-consciousness’), where the primary terms of reference do not depend on a particular material instantiation. Increasingly prominent as a battleground for these conflicting intuitions is the concept of ‘dignity’, perhaps the quality associated with our humanity that is most directly tied to our bodies (Cochrane 2010).

Thus, put starkly, the relevant ‘green’ vs ‘grue’ alternatives consist in whether the future of the human should be projected through (1) the reproductive patterns of Homo sapiens and our evolutionary successors, even if natural selection turns out not to favour what we currently regard as our distinctive mental powers, or (2) whatever physical means are required to preserve and more fully realize those distinctive mental powers, even if they involve shifting from a carbon to a silicon base altogether. In terms of these alternatives, the philosopher of animal liberation Peter Singer (1999) would represent the first projection of the human and the transhumanist guru Ray Kurzweil (1999) the second. Key indicators here are, on the one hand, Singer’s ‘levelling’ of the threshold of moral relevance from a unique human power such as rational agency to pain avoidance, a state that humans share with all animals in possession of a nervous system, and, on the other, Kurzweil’s view of death as representing more a remediable technological shortcoming than a natural state of the human condition. (For more on this contrast, see Fuller 2007b: chap. 2; Fuller 2011: chap. 2.)

The middle ground between these two extremes is occupied by the programme of eugenics, whose advocates from Francis Galton to Julian Huxley have aimed to preserve and extend humanity’s distinctive mental traits, while staying within the parameters of our biological heritage. The idea was that strategic interventions into both our reproductive capacity and nature’s selection pressures would enable us to direct the course of evolution. The idea is prima facie plausible if we truly believe that Homo sapiens has had more impact on the planet in recent aeons than any other species. Over the past decade, science policy initiatives on both sides of the Atlantic to foster the ‘convergence’ of nano- and biotechnology research have given eugenics an ideological makeover, with ‘negative eugenics’ now re-branded as ‘therapeutic’ and ‘positive eugenics’ as ‘enhancing’ (Fuller 2011: chap. 3).
Although much ink has been spilled about the dehumanising and irreligious character of eugenics research and policy, in fact many of its most distinguished contributors were, broadly speaking, non-conformist Christians. In any case, taken at face value (and not in terms of the political excesses it spawned), eugenics is most naturally understood as extending the idea of humanity’s Biblical stewardship of the Earth from agriculture to human culture. It is not by accident that a notable early impact of eugenics-based thinking was ‘puericulture’, the science (or, perhaps better, technology) of ‘raising children’ in the literal sense of raising plants or animals (Pichot 2009).

Of these three projections of ‘human’ – Singer’s posthumanist evolutionary mutation, Kurzweil’s transhumanist android, and the eugenicist’s middle ground – we may ask a question inspired by George Sarton’s attempt to update Comte’s view that the history of science constitutes the narrative through which humanity’s collective self-realization is told (Sarton 1924; cf. Fuller 2010a: chap. 2): From which (if any) of these visions are likely to come beings who would be recognised by the historic exemplars of our humanity as their descendants? The intuition guiding the question is clear and reasonable: if we claim that, say, Newton or Goethe exemplifies our sense of humanity, then we should imagine that, given the opportunity, he would return the compliment and accept us as his intellectual offspring. Such thought experiments in mutual recognition may be seen as a more analytic and normative version of the hermeneutical ‘fusion of horizons’ that Gadamer held to be a precondition of historical understanding (cf. Fuller 2008a). These exercises bear on the projectibility of ‘human’ because they force us to imagine ourselves as, say, Newton or Goethe passing judgement on whether we have adequately captured – if not amplified – the spirit (aka essence) of their original projects. Insofar as he would recoil from any of the three projected versions of ‘Humanity 2.0’, we are then faced with the following choice:

(a) to dissociate ourselves from his legacy (and perhaps try to locate some intellectually more congenial ‘humanist’ ancestor);
(b) to admit that there are now two species of ‘humanity’ that nevertheless share a common ancestor;
(c) to abandon ‘humanity’ altogether as the conceptual banner under which our collective project travels.

If the reader is disoriented by this array of choices, the reason may be that I take seriously the prospect that, if ‘humanity’ is identified with our species-distinctive mental traits that are only imperfectly realized in our current biological form, our successor being may not look or feel very much like us at all – especially if Ray Kurzweil gets his way. For example, traditional virtues such as prudence and compassion have been compelling for beings with roughly our bodies whose experiences are confined to familiar social environments. But the meanings of these virtues are already being stretched as people who are still grounded in the bodies of their birth move in increasingly heterogeneous social environments, courtesy of air travel, mass media and cyberspace. Now, what happens once these people loosen their self-identification with their biological bodies, as in the phenomenon that Sherry Turkle (1984) originally called the ‘second self’, which has now been normalised as ‘Second Life’ avatars?
If the value placed on an activity is measured in terms of the time spent on it, then many people appear to be in the process of intellectually, if not physically, evacuating their biological bodies. A general philosophy of science curriculum is required for this emerging Humanity 2.0. Its aim would be to provide an intellectual framework to ensure that the three projections of the human outlined above are made the subject of educated judgements and not simply allowed to track the market forces that currently guide their development – specifically, those driven by ecological interests (for Singer’s posthumanism), biomedical interests (for the new eugenics) and cybernetic interests (for Kurzweil’s transhumanism). Two hundred years ago, the German idealists first gave philosophy a clear disciplinary mission – namely, to provide the metatheory of the university by articulating what qualifies the various bodies of disciplined knowledge as ‘science’. The practical side of this task was executed by an integrated course of study that would enable the student to learn enough about each of the disciplines to arrive at a personal synthesis, a ‘science of oneself’, as it were. Here philosophy offered principles for unifying knowledge in the service of life. In this context, Goethe was seen as superior to Newton as a human being for having engaged this task more thoroughly and creatively. A ‘general philosophy of science’ for our own time would re-forge the idealist link between the unity of the self and the unity of science – but in a new key. Instead of the familiar post-Kantian task of integrating the sciences of ‘nature’ and ‘spirit’, we now face the more daunting challenge of designing a scheme that draws together the increasingly marked ecological, biomedical and cybernetic interests that are charting the course of Humanity 2.0.

References

Blumenberg, H. (1983). The Legitimacy of the Modern Age. (Orig. 1966.) Cambridge MA: MIT Press.
Chalmers, D. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford: Oxford University Press.
Cochrane, A. (2010). ‘Undignified Bioethics’. Bioethics 24 (5): 234-41.
The Economist (2010). ‘This house believes the development of computing was the most significant technological advance of the 20th century.’ (On-line debate, 19-29 October.) http://www.economist.com/debate/days/view/598#mod_module
Fuller, S. (1988). Social Epistemology. Bloomington IN: Indiana University Press.
Fuller, S. (2000). Thomas Kuhn: A Philosophical History for Our Times. Chicago: University of Chicago Press.
Fuller, S. (2006a). The New Sociological Imagination. London: Sage.
Fuller, S. (2006b). The Philosophy of Science and Technology Studies. London: Routledge.
Fuller, S. (2007a). The Knowledge Book: Key Concepts in Philosophy, Science and Culture. Durham UK: Acumen.
Fuller, S. (2007b). Science vs Religion? Cambridge UK: Polity.
Fuller, S. (2007c). New Frontiers in Science and Technology Studies. Cambridge UK: Polity.
Fuller, S. (2008a). ‘The Normative Turn: Counterfactuals and a Philosophical Historiography of Science’. Isis 99: 576-584.
Fuller, S. (2008b). Dissent over Descent. Cambridge UK: Icon.
Fuller, S. (2010a). Science: The Art of Living. Durham UK: Acumen.
Fuller, S. (2010b). ‘Capitalism and Knowledge: The University between Commodification and Entrepreneurship’. In H. Radder (ed.), The Commodification of Academic Research: Science and the Modern University. (Pp. 277-306.) Pittsburgh: University of Pittsburgh Press.
Fuller, S. (2011). Humanity 2.0: The Past, Present and Future of What It Means to Be Human. London: Palgrave Macmillan.
Galison, P. and Stump, D., eds. (1996). The Disunity of Science: Boundaries, Contexts, and Power. Palo Alto CA: Stanford University Press.
Goodman, N. (1955). Fact, Fiction and Forecast. Cambridge MA: Harvard University Press.
Humphreys, P. (2004). Extending Ourselves: Computational Science, Empiricism and the Scientific Method. Oxford: Oxford University Press.
Kitcher, P. (2001). Science, Truth and Democracy. Oxford: Oxford University Press.
Kuhn, T.S. (1970). The Structure of Scientific Revolutions. 2nd edn. (Orig. 1962.) Chicago: University of Chicago Press.
Kurzweil, R. (1999). The Age of Spiritual Machines. New York: Random House.
Laudan, L. (1981). Science and Hypothesis. Dordrecht: D. Reidel.
Longino, H. (2001). The Fate of Knowledge. Princeton: Princeton University Press.
McConkey, J. (2004). ‘Knowledge and Acknowledgement: “Epistemic Injustice” as a Problem of Recognition’. Politics 24 (3): 198-205.
Noble, D. (1997). The Religion of Technology: The Divinity of Man and the Spirit of Invention. New York: Alfred Knopf.
Pichot, A. (2009). The Pure Society: From Darwin to Hitler. London: Verso.
Popper, K. (1972). Objective Knowledge. Oxford: Oxford University Press.
Popper, K. (1981). ‘The Rationality of Scientific Revolutions’. In I. Hacking (ed.), Scientific Revolutions. (Pp. 80-106.) Oxford: Oxford University Press.
Rescher, N. (1998). Predicting the Future. Albany NY: SUNY Press.
Rescher, N. (1999). The Limits of Science. Pittsburgh: University of Pittsburgh Press.
Sarton, G. (1924). ‘The New Humanism’. Isis 6: 9-24.
Schlick, M. (1974). The General Theory of Knowledge. (Orig. 1925.) Berlin: Springer-Verlag.
Singer, P. (1999). A Darwinian Left. London: Weidenfeld & Nicolson.
Stanford, P.K. (2006). Exceeding Our Grasp. Oxford: Oxford University Press.
Turkle, S. (1984). The Second Self: Computers and the Human Spirit. Cambridge MA: MIT Press.
Weber, M. (1949). The Methodology of the Social Sciences. New York: Free Press.