To be published in Philosophy of Science (Proceedings of PSA 2012)

Synthetic Modeling and Mechanistic Account: Material Recombination and Beyond

Tarja Knuuttila, University of Helsinki
Andrea Loettgers, California Institute of Technology

Abstract: Recently, Bechtel and Abrahamsen have argued that mathematical models study the dynamics of mechanisms by recomposing the components and their operations into an appropriately organized system. We will study this claim through the practice of combinational modeling in circadian clock research. In combinational modeling, experiments on model organisms and mathematical/computational models are combined with a new type of model—a synthetic model. While we appreciate Bechtel and Abrahamsen’s point that mathematical/computational models are used to provide dynamic mechanistic explanations, we think that the strategy of recomposition is more complicated than what Bechtel and Abrahamsen indicate. Moreover, as we will show, synthetic modeling as a kind of material recomposition strategy also points beyond the mechanistic paradigm.

1. Introduction.

Modeling has begun to interest mechanistic philosophers only relatively recently. Although exceptions exist, the mechanistic philosophy of science has so far tended to concentrate on the level of detailed decomposition and localization in the analysis of biological mechanisms. Glennan (2005) has suggested that this is due to the realist tendencies of mechanist philosophy, as “most of the literature has focused on the properties of mechanisms themselves and not said much about […] their models or theoretical representations” (p. 443). Since then there has been more interest in modeling on the part of mechanists, but they have typically tended to assume that, because of their often highly idealized nature, models usually provide mechanism schemas or sketches only (cf. Craver 2006, Darden 2007). Thus according to Craver “mechanistic models explain,” yet in order to explain they are supposed to “account for all aspects of the phenomenon by describing how the component entities and activities are organized such that the phenomenon occurs” (Craver 2006, 374). From the perspective of actual modeling practice this seems to be a rather stringent requirement, which is, however, shared by many philosophers of science who tend to approach the idealizations and approximations involved in modeling merely as shortcomings (cf. Bailer-Jones 2009). Interestingly, in discussing Hodgkin and Huxley’s model of the action potential, Craver points out that they “knew a good deal more about action potentials than is included in their mathematical model” (ibid., 365). This remark is bound to raise two related questions. Firstly, why do modelers not include all their knowledge in their models—given that as such the models fall short of providing us with any “how-actually” explanations? Secondly, what is the epistemic rationale for using these often abstract and schematized representations—models? That is, what is the place and role of modeling in the mechanistic research program?

Recently, Bechtel and Abrahamsen have argued that models have the specific task of studying the dynamics of mechanisms (Bechtel 2011; Bechtel and Abrahamsen 2010, 2011). Mathematical models studying complex phenomena in terms of non-linear differential equations aim to account for how the mechanism’s parts and operations are “orchestrated in real time to produce dynamic phenomena” (Bechtel and Abrahamsen 2010, 322).
According to Bechtel and Abrahamsen, the mechanistic program has paid much more attention “to the ways of decomposing a mechanism into component parts and operations than to the ways of recomposing them into an appropriately organized system” (ibid.). However, this need not be the case. Bechtel and Abrahamsen use research on the circadian clock as an exemplary case of mechanistic research that has successfully combined the decompositional approach of finding out the basic components and operations of mechanisms with the study of their organization and dynamical orchestration in time by means of modeling (Bechtel and Abrahamsen 2010, 2011).

In what follows we will study this insight of Bechtel and Abrahamsen in more detail. Although we appreciate their basic point that the task of mathematical models and their simulations is to study the dynamics of a mechanism, we find their idea that mathematical/computational models could recompose mechanisms by “reassembling the parts and operations into an organized arrangement that constitutes the mechanism” (Bechtel and Abrahamsen 2009, 173) too optimistic. It does not pay enough attention to the limitations of mathematical modeling in taking into account a host of empirical details. Even in the area of circadian clock research, it can be shown that a gap remains between the experimental research into the basic components of the circadian mechanisms in various model organisms and the attempts to model these mechanisms mathematically. This gap is where the novel practice of synthetic modeling comes into play. We will discuss two instances of synthetic modeling in more detail, showing first how they partly contradicted what was expected on the basis of mathematical modeling, and second how they seem to point beyond the mechanistic paradigm.

2. Recomposition, Dynamic Explanation, and Circadian Clock Research.

In a series of recent articles Bechtel (e.g. 2011) and Bechtel and Abrahamsen (e.g. 2009, 2010, 2011) have developed the idea that mathematical and computational modeling studies the dynamic behavior of mechanisms. According to them, the mechanistic program has so far tended to concentrate on the decomposition of a mechanism into its components and their operations, paying less attention to how these components and operations are organized into a mechanism producing a certain kind of behavior. Bechtel and Abrahamsen call this task of integrating the components and their operations into the “orchestrated functioning of a mechanism” recomposition. Although the decompositional task is already challenging in itself, they suggest that the task of recomposition might prove even more challenging. This is due to two reasons: firstly, parts and operations in biological organisms are often highly integrated and capable of rearranging themselves as a result of changes in some part of a mechanism; secondly, the organization of biological systems is often cyclical, involving feedback loops, which makes the dynamics of these systems highly complex. Biological systems are autonomous far-from-equilibrium systems that are able to remake and maintain themselves in changing environmental circumstances. Cyclic organization is central to such entities and is subject to various kinds of oscillations (see Bechtel and Abrahamsen 2011, Bechtel 2011). Bechtel (2011) argues that what he calls the “basic mechanistic account” is unable to deal with this cyclic, nonsequential organization characteristic of living systems.
Because of this, the basic mechanistic account needs to be supplemented with computational modeling providing dynamic mechanistic explanations of the cyclic organization of biological organisms. According to Bechtel, the mechanists have so far limited themselves to an account of mechanistic explanation that involves a “sequential execution of qualitatively characterized operations” and is as such insufficient for explaining the biological phenomena involved in cyclic organization (2011, 533). Bechtel attributes this view to several prominent mechanists (e.g. Machamer, Darden and Craver 2000, Darden and Craver 2002). The “basic mechanistic account” construes a mechanism as generating a phenomenon (e.g. protein synthesis) through “a start-to-finish sequence of qualitatively characterized operations performed by component parts” (ibid., 534). A typical representation of the organization of a mechanism considered by the proponents of the basic mechanistic explanation is a diagram accompanied by a commentary to help interpret it. The reader of the diagram is then supposed to mentally “step through” the diagram from start to termination conditions. However, the situation changes if the mechanism involves feedback loops leading to cyclic phenomena, which are by their nature nonsequential and thus too complex to be rehearsed as a series of operations in the mind of a theorist. Various types of tools are needed to analyze these kinds of systems. Such tools are offered by computational modeling and dynamic systems theory, which can deal with complex nonlinear relations between the components of a mechanism (the nonlinearity is due to the feedback loops). Thus Bechtel and Abrahamsen envisage the mechanistic explanation as consisting of two parts: decomposition and recomposition. Decomposition is supposed to happen experimentally, typically through inhibiting or stimulating a certain part of the assumed mechanism and detecting the result. In recomposing the mechanism, the theorists, in turn, typically set up “systems of differential equations in which variables and other terms correspond to properties of the mechanism’s parts and operations” (Bechtel 2011, 552). Accordingly, Bechtel and Abrahamsen seem to conceive of decomposition and recomposition as two sides of the same coin. While in the cognitive sciences some adherents of the dynamic paradigm have claimed that it is incongruent with the reductionistic mechanistic approach (cf. Stepp, Chemero and Turvey 2011), Bechtel and Abrahamsen use the research into the circadian clock as an example of a line of research that has successfully combined the two.

Circadian clock research, which studies the day and night rhythms of organisms, indeed provides a good example of the interplay between decomposition and recomposition. In this area, scientists have been able to experimentally isolate several genes and proteins involved in the creation of circadian rhythms in various model organisms, and the knowledge of these components and their operations has in turn informed the mathematical/computational modeling of circadian oscillations. We find Bechtel and Abrahamsen’s insight concerning the role of mathematical/computational modeling in mechanistic research very important. Yet it seems that they paint a somewhat too seamless and optimistic picture of the possibilities of recomposition by mathematical modeling. To be sure, Bechtel and Abrahamsen are certainly aware of the limitations of modeling.
According to them, “[t]he equations [of models] cannot capture all the salient aspects of the mechanism and may even misrepresent or leave out critical components.” However, they go on to claim, “insofar as the focus in modeling is on the effects of organization, they do demonstrate that if particular parameter values were realized in the actual system the effects should correspond to those in the model” (Bechtel and Abrahamsen 2010, 325). Interestingly, this kind of supposition has been questioned by synthetic modeling, a novel modeling practice that has arisen precisely in the context of the study of gene regulatory networks, of which the circadian clock provides one of the most studied examples. Synthetic models are constructed on the basis of mathematical/computational models but from biological material, such as genes and proteins. Moreover, they are implemented in the natural cell environment. As such, they come closest to what a realization of a mathematical model could be, yet they have already produced results that were not expected on the basis of the underlying mathematical models, and have thus provided novel perspectives on the dynamic organization of biological systems.

3. Synthetic Modeling.

In their review article “The Pedestrian Watchmaker: Genetic Clocks from Engineered Oscillators,” Cookson et al. describe the construction of synthetic models in the following way: “First, genetic wiring diagrams are translated into equations that can be analyzed. […] Next, tools from applied math and computer science are used to analyze the model in order to extract the ‘design criteria’ for a desired output. Then, modern recombinant DNA techniques are used to construct gene-regulatory networks in living cells according to the design specifications. […] Lastly, micro- and nanotechnologies are developed to acquire the precise single-cell measurements that are needed for comparison with model predictions and design refinement” (2009, 3931). What seems already clear in the light of this brief characterization is that apart from being constructed on the basis of mathematical models, synthetic models are supposed to be used in combination with them. Sprinzak and Elowitz (2005) call this combinational use of experimentation on model organisms, mathematical modeling, and synthetic modeling the “synthetic paradigm.” The question is why researchers should invoke this combinational strategy, and what the role of synthetic modeling is in this fabric. A closer look at two synthetic models may help us answer this question.

3.1. The Repressilator.

Interestingly, one of the first and most famous synthetic models, the Repressilator, was presented precisely in the context of circadian clock research. It is an oscillatory genetic network introduced in 2000 by Michael Elowitz and Stanislas Leibler (2000). The first step in constructing the Repressilator consisted in designing a mathematical model, which was used to explore the known basic biochemical parameters and their interactions. Having constructed a mathematical model of a gene regulatory network, Elowitz and Leibler performed computer simulations on the basis of the model. They showed that there were two possible types of solutions: “The system may converge toward a stable steady state, or the steady state may become unstable, leading to sustained limit-cycle oscillations” (Elowitz and Leibler 2000, 336).
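To give a concrete sense of what such a mathematical model and its simulation look like, the following minimal sketch numerically integrates dimensionless equations of the kind reported by Elowitz and Leibler, in which each mRNA is produced at a rate that falls off with the concentration of the protein repressing its gene. The parameter values, variable names, and the simple amplitude check are illustrative choices made for this sketch, not those of the original study.

import numpy as np
from scipy.integrate import odeint

# Dimensionless deterministic Repressilator-type equations:
#   dm_i/dt = -m_i + alpha / (1 + p_j^n) + alpha0
#   dp_i/dt = -beta * (p_i - m_i)
# where protein p_j represses gene i around the cycle 1 -> 2 -> 3 -> 1.
alpha, alpha0, beta, n = 216.0, 0.216, 5.0, 2.0   # illustrative values only

def repressilator(y, t):
    m1, m2, m3, p1, p2, p3 = y
    dm1 = -m1 + alpha / (1.0 + p3**n) + alpha0    # gene 1 repressed by protein 3
    dm2 = -m2 + alpha / (1.0 + p1**n) + alpha0    # gene 2 repressed by protein 1
    dm3 = -m3 + alpha / (1.0 + p2**n) + alpha0    # gene 3 repressed by protein 2
    dp1 = -beta * (p1 - m1)                       # translation and protein decay
    dp2 = -beta * (p2 - m2)
    dp3 = -beta * (p3 - m3)
    return [dm1, dm2, dm3, dp1, dp2, dp3]

t = np.linspace(0.0, 100.0, 5000)
y0 = [1.0, 2.0, 3.0, 0.0, 0.0, 0.0]               # asymmetric start breaks the symmetry
trajectory = odeint(repressilator, y0, t)

late = trajectory[len(t) // 2:, 3]                # protein 1 after the initial transient
print("peak-to-trough amplitude of protein 1:", late.max() - late.min())

With strong, cooperative repression and comparable mRNA and protein decay rates, a run of this kind settles into sustained limit-cycle oscillations; weakening the repression sufficiently (lowering alpha) makes the same equations converge to a stable steady state, which is the other type of solution mentioned in the quotation above.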
Furthermore, the numerical analysis of the model gave insights into the experimental parameters relevant for constructing the synthetic model by showing that “[...] oscillations are favoured by strong promoters coupled to efficient ribosome binding sites, tight transcriptional repression (low ‘leakiness’), cooperative repression characteristics, and comparable protein and mRNA decay rates” (ibid., 336). The latter point helped in choosing the three genes used in the design of the network. The mathematical model functioned as a blueprint for the engineering of the biological system. The structure of the Repressilator is depicted in the following diagram:

Figure 1. The main components of the Repressilator (left-hand side) and the Reporter (right-hand side) (Elowitz and Leibler 2000, 336).

In the diagram the synthetic genetic regulatory network, the Repressilator, is shown on the left-hand side, and it consists of two parts. The outer part is an illustration of the plasmid constructed by Elowitz and Leibler. The plasmid is an extrachromosomal DNA molecule integrating the three genes of the Repressilator. Plasmids occur naturally in bacteria. In the state of competence, bacteria are able to take up extrachromosomal DNA from the environment. In the case of the Repressilator, this property allowed the integration of the specifically designed plasmid into E. coli bacteria. The inner part of the illustration represents the dynamics between the three genes, TetR, LacI, and λ cI. The three genes are connected by a negative feedback loop. The right-hand side of the diagram shows the Reporter, consisting of a gene expressing a green fluorescent protein (GFP), which is fused to one of the three genes of the Repressilator. The oscillations in the GFP level made the behavior of the transformed cells visible, allowing researchers to study them over time using fluorescence microscopy.

The construction of the Repressilator was enabled by the development of new methods and technologies, such as the construction of plasmids and the polymerase chain reaction (PCR). It is important to note that the Repressilator is an artificial biological construction: its components (and their number and arrangement) had to be chosen in view of what would be optimal for the behavior under study. The genes used in the Repressilator do not occur in such a combination in any known biological system but were chosen and tuned, on the basis of the simulations of the underlying mathematical model and other background knowledge, in such a way that the resulting mechanism would allow for sustained oscillations. These technical constraints imply a constraint on what can be explored by such synthetic models: possible design principles in biological systems. In this respect synthetic models are like mathematical models: they still provide only “how-possibly” explanations. Yet in their “right kind” of materiality they share some important features with experiments. For example, anomalous experimental findings are more likely to incur change in our theoretical commitments than unexpected results from simulations.1 This was precisely the case with the Repressilator, which went on to initiate a new research program. The Repressilator was only a limited success: it was able to produce oscillations at the protein level, but these showed irregularities. Interestingly, to find out what was causing such noisy behavior, Elowitz and Leibler reverted to modeling.
In designing the Repressilator, Elowitz and Leibler had used a deterministic model. A deterministic model does not take into account stochastic effects such as fluctuations in gene expression. Performing computer simulations on a stochastic version of the original mathematical model, Elowitz and Leibler were able to reproduce variations in the oscillations similar to those observed in the synthetic model. This led researchers to the conclusion that stochastic effects may play a role in gene regulation—which gave rise to a new research program attempting to identify sources of noise in biological systems and the effects of noise on the dynamics of such systems (e.g. Swain et al. 2002).

1 For a discussion of the epistemic importance of the “same stuff” or “right kind” of materiality, see e.g. Morgan 2003 and Parker 2009.

3.2. The Dual-Feedback Synthetic Oscillator.

Another good example of how synthetic models can surprise researchers is provided by the work of Jeff Hasty and his group. They constructed a dual-feedback synthetic oscillator exhibiting coupled positive and negative feedback, in which a promoter drives the production of both its own activator and repressor (Cookson et al. 2009). They introduced the synthetic oscillator by explaining how it was based on both experimental work on the Drosophila melanogaster (fruit fly) clock and earlier theoretical work, i.e. a mathematical model set forth by Hasty et al. (2002), which, as they put it, “is theoretically capable of exhibiting periodic behavior” (ibid., 3932). Figure 2 shows the network diagram of the dual-feedback oscillator.

Figure 2. A diagrammatic representation of the dual-feedback oscillator (Cookson et al. 2009, 3934).

The synthetic network consists of two genes: araC and lacI. A hybrid promoter, Plac/ara-1 (the two small adjacent boxes in the diagram), drives the transcription of araC and lacI, forming positive and negative feedback loops. It is activated by the AraC protein in the presence of arabinose and repressed by the LacI protein in the absence of isopropyl β-D-1-thiogalactopyranoside (IPTG). As in the case of the Repressilator, the oscillations are made visible by a green fluorescent protein (GFP). This synthetic system provides a material system for studying the oscillatory dynamics of the dual-feedback design: the basic properties and conditions of the oscillatory dynamics can be estimated on the basis of the components of the model and their interactions. When arabinose and IPTG are added to the system, the promoter becomes activated and the system’s two genes, araC and lacI, are transcribed. An increased production of the protein AraC in the presence of arabinose results in a positive feedback loop that increases the promoter activity. On the other hand, the increase in LacI production results in a linked negative feedback loop that decreases promoter activity. The difference in the promoter activities of the two feedback loops leads to the oscillatory behavior.

On the basis of the analysis of the synthetic system, the researchers made three observations that were difficult to reconcile with the original mathematical model, which had nevertheless formed the basis of the synthetic model. The most drastic of these findings concerned the robustness of the oscillations.
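Before turning to that finding, it may help to indicate schematically what it means for a deterministic model of this kind to sustain oscillations only within a restricted region of parameter space. The sketch below is not the Hasty group’s model; it simply reuses Repressilator-style equations of the sort given in section 3.1 and scans one illustrative parameter, the maximal repression strength, reporting for each value whether oscillations survive the initial transient.

import numpy as np
from scipy.integrate import odeint

# Schematic parameter scan: for each value of the maximal repression
# strength alpha, integrate Repressilator-style equations and check
# whether the protein level still oscillates after the transient.
alpha0, beta, n = 0.2, 5.0, 2.0                   # illustrative values only

def rhs(y, t, alpha):
    m1, m2, m3, p1, p2, p3 = y
    return [
        -m1 + alpha / (1.0 + p3**n) + alpha0,
        -m2 + alpha / (1.0 + p1**n) + alpha0,
        -m3 + alpha / (1.0 + p2**n) + alpha0,
        -beta * (p1 - m1),
        -beta * (p2 - m2),
        -beta * (p3 - m3),
    ]

t = np.linspace(0.0, 400.0, 20000)
y0 = [1.0, 2.0, 3.0, 0.0, 0.0, 0.0]

for alpha in [1.0, 5.0, 20.0, 100.0, 300.0]:
    traj = odeint(rhs, y0, t, args=(alpha,))
    late = traj[len(t) // 2:, 3]                  # protein 1, second half of the run
    relative = (late.max() - late.min()) / max(late.mean(), 1e-9)
    verdict = "oscillates" if relative > 0.2 else "steady state"
    print(f"alpha = {alpha:6.1f}: relative amplitude {relative:5.2f} -> {verdict}")

In a deterministic model of this kind, sustained oscillations appear only once repression is strong and cooperative enough; outside that region the trajectories settle down. The question probed with the synthetic dual-feedback system is how sensitively the oscillations of the material system depend on such conditions.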
While the mathematical model predicted robust oscillations only for a small set of parameter values, the synthetic model contradicted it by showing robust oscillations over a much larger region of parameter space. The researchers were perplexed by the fact that “[i]t was difficult to find inducer levels at which the system did not oscillate” (3934, italics in the original). These contradictory results led the researchers to reconsider the original model. Cookson et al. wrote about this as follows: “In other words, it became increasingly clear that the observed oscillations did not necessarily validate the model, even though the model predicted oscillations. We were able to resolve the discrepancy between the model and theory by reevaluating the assumptions that led to the derivation of the model equations” (Cookson et al. 2009, 3934). The key finding in the reevaluation of the mathematical model was that a component of the complex biochemical process that had been underestimated and left out of the mathematical model turned out to be of central importance for the robustness of the oscillations observed in the synthetic model. Without going into greater detail, the processes involved in the production of the transcription factors led to a posttranslational coupling between the activator (AraC) and the repressor (LacI), which is a consequence of an unintended interaction with the host cell.

These unintended interactions with the host cell show that synthetic systems do not comprise “perfect” modules but rather that interactions with the cell environment occur and can even be advantageous, as the case of the dual-feedback oscillator demonstrates. In fact, the most recent research in synthetic biology focuses precisely on how to develop a new generation of synthetic models that integrate more closely with the endogenous cellular processes of the host organisms (cf. Nandagopal and Elowitz 2011). It seems, rather paradoxically, that the program of synthetic biology, which was originally based on the idea of constructing synthetic circuits from well-characterized components, is actually accumulating evidence on the limitations of the assumption of modularity. But then, we would like to suggest, the probing of the appropriateness of the mathematical methods and engineering concepts adapted to synthetic biology from physics and engineering was also on the agenda from early on. Apart from inquiring into the basic design principles driving and regulating the dynamic behavior of biological organisms, synthetic biology also provides a test for the principles and the theoretical assumptions underlying this very endeavor.

4. From Mathematical to Material Recomposition.

Above we have studied two synthetic models in response to Bechtel and Abrahamsen’s claim that the task of mathematical/computational modeling is to recompose mechanisms in order to offer dynamic mechanistic explanations. It appears legitimate to conclude that Bechtel and Abrahamsen are in fact stressing that mathematical/computational models should not be considered as mere mechanism schemas or sketches in need of further articulation. The models contribute to mechanistic accounts of science something that cannot be accomplished by traditional experimental means or diagrammatic representations. We find this a very important point and a needed contribution.
However, the practice of synthetic modeling shows that Bechtel and Abrahamsen’s idea of mathematical modeling as recomposing the decomposed components and their operations is not as straightforward in actual practice as it may seem at first sight. Bechtel and Abrahamsen appear to pay too little attention to the respective constraints of mathematical modeling and experimental practice that led synthetic biologists to develop a novel modeling strategy for studying the dynamics of gene regulatory networks. This novel modeling strategy, synthetic modeling, can also be conceived of as a recomposition strategy, albeit a material one.

While we have already indirectly touched upon the constraints of mathematical modeling in the presentation of our two cases, let us first mention briefly some constraints of the experimental practice in circadian clock research. Apart from the technological constraints, they are largely due to the fact that gene knockout experiments allow only for indirect observation of the network architecture and its dynamics, in terms of the results of the intervention. As even such genetically relatively tractable model organisms as Drosophila are actually very complex organisms, it is extremely difficult on the basis of experimental practice alone to decide whether one has found all the components and interactions of a mechanism. Clearly, mathematical modeling is needed for this task, as well as for the interpretation of the empirical results.

Mathematical modeling, however, has its own characteristic constraints as well. Firstly, the level of abstraction of mathematical models typically renders them mechanism schemas at best, as suggested by Darden and Craver, for example. What is more, the mathematical templates and methods used in the study of gene regulatory networks are typically borrowed from other disciplines, presenting further challenges to the modeling approach. Cookson et al. comment on this in the following way: “[t]here is a crucial yet often deemphasized difference between the design of electronic and genetic circuits [i.e. gene regulatory networks]. The description of electronic circuits follows from physical laws (e.g. Maxwell’s equations) describing the connection of engineered components in a controlled physical environment. Genetic circuits, on the other hand, consist of evolved components that reside in the highly complex biological environment of the cell. Given this complexity, it would be exceedingly optimistic to expect that mesoscopic laws describing gene regulation will elegantly arise from the underlying physical chemistry (in this century)” (2009, 3931).

Last but not least, although informed by the continuously advancing empirical research on the molecular basis of the circadian clock, mathematical/computational modeling in this area is still burdened by the problem of underdetermination. Craver (2006, 366) quotes Huxley, who wrote about the Hodgkin-Huxley model: “It was clear that the formulation we had used was not the only one that might have fitted the voltage clamps adequately.” This applies also to the research on gene regulatory networks. Various kinds of oscillatory phenomena have long been studied by physicists, and there are many well-established ways of creating them mathematically (see e.g. Strogatz 1994). Consequently, circadian clock modelers have been able to utilize this established “toolbox” to model oscillations.
But the problem has been that the alternative models are still too general and underdetermined by the available data. In sum, there seems to remain a gap between experimental research and mathematical modeling, which, in the area of circadian clock research, has been widened further by the fact that many of the mathematical templates, methods and concepts used were not originally devised with biological organisms in mind. Synthetic models, as we have shown, partly fill this gap between mathematical modeling and experimentation on model organisms by offering a tool for identifying possible network design principles and showing whether they might be realizable in biological organisms. Interestingly, coupled with synthetic modeling, mathematical modeling can also demonstrate some of its particular strengths: apart from using mathematical models as blueprints, the modelers in both cases reverted to mathematical modeling in trying to explain the behavior of their synthetic models.

5. Conclusion: Beyond the Mechanistic Paradigm?

We have argued that synthetic modeling can be approached as a kind of material recomposition strategy, which complements mathematical/computational modeling. Synthetic modeling provides an important corrective to mathematical modeling, as synthetic systems may—and in fact usually do—behave in unexpected or even counterintuitive ways, as both the Repressilator and the dual-feedback synthetic oscillator show. While it certainly is the case that circadian clock research provides a particularly good example of the mechanistic strategy, it also seems that synthetic modeling in particular is simultaneously probing the limits of the mechanistic approach. The Repressilator showed that both internal and external noise are phenomena to be reckoned with in understanding the functioning of gene regulatory networks. While in engineered artifacts oscillations are traditionally considered a disturbance, in the context of circadian clock research this was not the case. The researchers sought to create oscillations with their models, but the created oscillations proved to be irregular, i.e. noisy. This in turn led the researchers to entertain the possibility that noise could be functional for biological systems. It is not clear how stochastic noise, not to mention its supposed functional role, fits into the mechanist agenda. On the other hand, Cookson et al. (2009) pointed to the role of the interaction between the oscillator and the host cell components. Such interactions may make the oscillator function more robustly, thus bringing into question the mechanistic assumption of modularity (Nandagopal and Elowitz 2011, 1245; see also Stricker et al. 2008). Of course, the integration of a mechanism into higher-level mechanisms is on the agenda of the mechanistic program, but one may ask what difference it makes if a supposed mechanism’s interaction with its environment significantly improves its functioning.

References

Bailer-Jones, Daniela M. 2009. Scientific Models in Philosophy of Science. Pittsburgh: University of Pittsburgh Press.
Bechtel, William. 2011. “Mechanism and Biological Explanation.” Philosophy of Science 78: 533-557.
Bechtel, William, and Adele Abrahamsen. 2009. “Decomposing, Recomposing, and Situating Circadian Mechanisms: Three Tasks in Developing Mechanistic Explanations.” In Reduction and Elimination in Philosophy of Mind and Philosophy of Neuroscience, eds. Hannes Leitgeb and Alexander Hieke, 173-186. Frankfurt: Ontos Verlag.
Bechtel, William, and Adele Abrahamsen. 2010. “Dynamic Mechanistic Explanation: Computational Modeling of Circadian Rhythms as an Exemplar for Cognitive Science.” Studies in History and Philosophy of Science 41: 321-333.
Bechtel, William, and Adele Abrahamsen. 2011. “Complex Biological Mechanisms: Cyclic, Oscillatory, and Autonomous.” In Philosophy of Complex Systems, Handbook of the Philosophy of Science, vol. 10, ed. C. A. Hooker, 257-285. Oxford: Elsevier.
Cookson, Natalia A., Lev S. Tsimring, and Jeff Hasty. 2009. “The Pedestrian Watchmaker: Genetic Clocks from Engineered Oscillators.” FEBS Letters 583: 3931-3937.
Craver, Carl F. 2006. “When Mechanistic Models Explain.” Synthese 153: 355-376.
Darden, Lindley. 2007. “Mechanisms and Models.” In The Cambridge Companion to the Philosophy of Biology, eds. David L. Hull and Michael Ruse, 139-159. New York: Cambridge University Press.
Darden, Lindley, and Carl Craver. 2002. “Strategies in the Interfield Discovery of the Mechanism of Protein Synthesis.” Studies in History and Philosophy of Biological and Biomedical Sciences 33: 1-28.
Elowitz, Michael B., and Stanislas Leibler. 2000. “A Synthetic Oscillatory Network of Transcriptional Regulators.” Nature 403: 335-338.
Glennan, Stuart S. 2005. “Modeling Mechanisms.” Studies in History and Philosophy of Science Part C 36: 443-464.
Hasty, Jeff, Milos Dolnik, Vivi Rottschäfer, and James J. Collins. 2002. “Synthetic Gene Network for Entraining and Amplifying Cellular Oscillations.” Physical Review Letters 88: 148101.
Kaplan, David M., and William Bechtel. 2011. “Dynamical Models: An Alternative or Complement to Mechanistic Explanations?” Topics in Cognitive Science 3: 438-444.
Machamer, Peter, Lindley Darden, and Carl F. Craver. 2000. “Thinking about Mechanisms.” Philosophy of Science 67: 1-25.
Morgan, Mary. 2003. “Experiments without Material Intervention: Model Experiments, Virtual Experiments and Virtually Experiments.” In The Philosophy of Scientific Experimentation, ed. Hans Radder, 216-235. Pittsburgh: University of Pittsburgh Press.
Nandagopal, Nagarajan, and Michael B. Elowitz. 2011. “Synthetic Biology: Integrated Gene Circuits.” Science 333: 1244-1248.
Parker, Wendy. 2009. “Does Matter Really Matter? Computer Simulations, Experiments, and Materiality.” Synthese 169: 483-496.
Sprinzak, David, and Michael B. Elowitz. 2005. “Reconstruction of Genetic Circuits.” Nature 438: 442-448.
Stepp, Nigel, Anthony Chemero, and Michael T. Turvey. 2011. “Philosophy of the Rest of Cognitive Science.” Topics in Cognitive Science 3: 425-437.
Stricker, Jesse, Scott Cookson, Matthew R. Bennett, William H. Mather, Lev S. Tsimring, and Jeff Hasty. 2008. “A Fast, Robust and Tunable Synthetic Gene Oscillator.” Nature 456: 516-519.
Strogatz, Steven H. 1994. Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering. Cambridge, MA: Perseus Books.
Swain, Peter S., Michael B. Elowitz, and Eric D. Siggia. 2002. “Intrinsic and Extrinsic Contributions to Stochasticity in Gene Expression.” Proceedings of the National Academy of Sciences 99: 12795-12800.