title: General Theorizing and Historical Specificity in the 'Keynes Versus the Classics' Dispute
authors: Togati, Teodoro Dario
date: 2020-10-06
journal: East Econ J
DOI: 10.1057/s41302-020-00177-1

This paper addresses the issues of general theorizing and historical specificity in the 'Keynes versus the Classics' dispute and puts forward two main arguments. First, the current macroeconomic orthodoxy wins the 'relative' generality contest because it implies that institutions influence outcomes, such as the natural rate of unemployment, in contrast with Keynes's 'internalist' approach, which neglects historical specificity. Secondly, mainstream macro is not truly general in an 'absolute' sense, since it only makes sense under very special real-world institutional conditions.

In their stimulating recent contributions, Hodgson (2019) and O'Donnell (2019a, b) discuss the issues of general theorizing and historical specificity in the 'Keynes versus the Classics' dispute. Two limitations of their analysis emerge. First, they address the issues without placing them in the full context of the macroeconomic frameworks involved. For example, while correctly regarding the 'logic of choice' as a universal principle, they neglect that standard theorists themselves fail to apply it in full to phenomena such as money and expectations. This reveals both the impasse of their microfoundations project-stressed by Hodgson himself in other insightful contributions (see, for example, 2001, pp. 15-6; 2006, p. 126)-and, as I suggest below, the reasons why such theorists end up placing emphasis on institutions. Secondly, the two authors fail to compare the two theories and draw clear-cut conclusions about their generality dispute, which I believe is quite important in order to assess the foundations of current macroeconomic theory and policymaking in the light of recent crises.
1 On one side, Hodgson suspends his judgement-in his important book, How Economics Forgot History, he holds, for example, that 'it is difficult to say whether Keynes's theory or orthodox theory is the more general' (Hodgson 2001, p. 221)-as he considers them to be equally flawed instances of the universalistic approach to general theorizing (i.e. theorizing claimed to be true for all times and all places), which necessarily involves the neglect of historical specificity. On the other, while holding that Keynes's generality claim (against Pigou) is still valid today against the axiomatic general equilibrium approach, O'Donnell eschews direct comparison, since he regards the General Theory (hereafter, GT) as involving a criterion of general theorizing alternative to the standard one. 2 In order to overcome such limitations, this paper views theories as Lakatosian 'research programmes' (RP). On one side, this notion favours direct comparison because it provides a neutral benchmark for 'internally consistent' theorizing: it suggests that, in principle, all 'good' theories should possess various parts, organized in a hierarchical manner, such as 'hard core' beliefs, 'protective belt' assumptions and 'heuristics'. 3 On the other, due to its holistic nature (i.e. all parts must be considered together), the RP concept shows that historical specificity is compatible with general theorizing, while placing drastic limitations on its validity. 4 More specifically, based on this concept, the paper discusses the following arguments. First, general theorizing is not always synonymous with universal theorizing.
For example, both Keynes's theory and standard macro share a common feature: while accepting a universalistic approach to theorizing-stressing behavioural hard-core 'drivers' such as agents' optimizing choices or psychological laws-they are not truly universal in their scope or object of analysis: both theories also include among their premises 'enabling' institutions of a capitalistic type, such as the ones underlying the 'complete markets' assumption in the Arrow-Debreu Model (hereafter, ADM), or fiat money and the stock market in the GT. This means that both theories only make sense within the broad context of 'modern capitalism' (in various forms), and the essence of their dispute is one of 'relative' generality: that is, which theory provides the most general account of it. Secondly, general theories can accommodate historical specificity if they allow institutions to play a causal role by influencing outcomes, like income. I argue that the current orthodoxy wins the generality contest because its RP is correctly formulated and reconciles general theorizing, i.e. the 'hard-core' logic of choice, with historical specificity, captured by specific models in the 'protective belt' in which institutions influence outcomes, like the natural rate of unemployment. On the other hand, Keynes's major flaw is that he did not make a sharp distinction between hard-core principles and lower-level 'modelling' claims: that is, he conflated them at a single analytical level.

1 One major question which arises today, for example, is whether the current 'consensus' macro is general enough to accommodate the post-Covid-19 scenario.
2 O'Donnell opposes the standard MTM (multiple-theory model) approach, consisting of three different stages (axioms, extra assumptions and particular cases), to Keynes's OTM (one-theory model), which he correctly characterizes as a 'one-stage' approach: '[it] has several senses in which it can express its generality, all are internal to the model and do not require external assistance of any kind' (2019a, p. 718, emphasis in the text). Based on this, I label Keynes's approach as 'internalist'.
3 I suggest that the RP criterion is a step forward from past comparisons of theories based exclusively on formal models, such as ISLM. As noted by Leijonhufvud, for example, it is insufficient to list model properties to distinguish between theories, in contrast with most economists, who use 'model' and 'theory' as synonyms: 'A "theory" is a patterned set of substantive beliefs about how the economic system works… A "model" is the formal representation of a "theory", or a subset of it… We will seldom… have a model that gives us an exhaustive account of the hard core of the corresponding theory. We may miss quite essential characteristics of the research programmes, therefore, if we approach the problem only by inference from "lists" of distinctive model properties' (1976, p. 70). An example is the controversy between Monetarists and Keynesians in the 1970s, in which economists like Friedman-holding that the differences between such schools only concerned the values of certain parameters of the ISLM model (such as the interest-elasticity of the demand for money function)-drew the conclusion that they belonged to the same RP. But according to Leijonhufvud, these schools belong to two distinct RPs insofar as they imply different visions of the economy (ibid., p. 71).
4 As noted by Hands, for example, the RP notion is useful, especially 'for understanding the structure of economics… Economic theories do seem to have hard cores, protective belts, positive and negative heuristics…' (2001, p. 296). This notion is a more articulated version of the MTM criterion mentioned by O'Donnell.
For this reason, he developed his 'internalist' approach, stressing ultimate psychological factors and, as emphasized by Hodgson, neglecting historical specificity. 6 Thirdly, historical specificity also places constraints on general theorizing. I emphasize that what undermines mainstream macro is that, while being general in a 'relative' sense, it is not truly general in an 'absolute' sense, namely when it is evaluated not just in terms of internal consistency but also in terms of an 'externalist' criterion, such as its applicability to real-world contexts. The point is that this theory is not entirely self-contained, due to intrinsic limitations of its microfoundations project. As noted long ago by Frank Hahn, for example, money and expectations are irreducible to the timeless logic of choice, as revealed by their inconsistency with the ADM, which he regards as the 'best model' of the economy. 7 On close inspection, they turn out to be time-contingent 'conventions', requiring 'external' anchors, such as institutional factors, to be sustainable over a period of time. Now, analysis of such anchors in real-world contexts suggests that standard theory only makes sense under very special institutional conditions, such as those needed to grant the 'communism of models' underlying Lucas' rational expectations hypothesis.

5 In other words, unlike O'Donnell, I regard Keynes's OTM approach as a vice rather than a virtue.
6 Keynes did not consider, for example, that 'the scope for governmental management of the level of effective demand would depend crucially on the economic institutions of a particular country and the nature and extent of its engagement with world markets … It may be possible to regard Keynes's work as a framework for viable analyses that addressed such specific circumstances, but Keynes himself did not lay down guidelines for the development of historically sensitive theories' (Hodgson 2001, p. 225).
7 'The most serious challenge that the existence of money poses to the theorist is this: the best developed model of the economy cannot find room for it' (Hahn 1982, p. 1), and 'we have no theory of expectations firmly founded on elementary principles comparable, say, to our theory of consumer choice' (ibid., p. 3).

To develop these points, this paper is organized as follows: 'Keynes's "Internalist" Approach to Generality' section focuses on Keynes's 'internalist' approach to generality. 'The Reasons for the Success of the Axiomatic Approach in the "Relative" Generality Dispute' section analyses the success of the axiomatic approach in the 'relative' generality contest vs Keynes. 'Is Standard Macro Theory Truly General?' section discusses the limitations of standard theory concerning 'absolute' generality.

There are five main reasons why we might regard Keynes as pursuing an 'internalist' approach to generality. For simplicity's sake, we shall regard these reasons as 'pillars' of his generalist strategy, stressing different properties of the GT. The first pillar is Keynes's focus on the 'intrinsic' characteristics of a real-world 'monetary economy', which make it quite different from the 'real wage economy' abstraction underlying standard theory. As he himself put it: Moreover, the characteristics of the special case assumed by the classical theory happen not to be those of the economic society in which we actually live, with the result that its teaching is misleading and disastrous if we attempt to apply it to the facts of experience. (Keynes 1936, p.
3, my italics). More specifically, Keynes placed a lot of emphasis on characteristics such as true uncertainty and conventional behaviour, as well as on institutions of modern capitalism such as the stock market and fiat money, and on the elasticities of production and substitution, which explain both why this economy is irreducible to a barter economy (for example, the money interest rate cannot be put on a par with the 'own' rates of interest of other commodities) and why a lack of effective demand is possible. In particular, I suggest that in the GT this phenomenon is not 'caused' but simply 'enabled' by the above institutions. It is ultimately due to behavioural drivers, such as agents' decisions to hold money rather than buy goods. 8 The second pillar of Keynes's internalist approach is his belief that scientific analysis should be carried out in terms of 'self-contained models'; his main aim was to improve Pigou's model. As he pointed out in his letter to Harrod, 'Progress in economics consists almost entirely in a progressive improvement in the choice of models. The grave fault of the later classical school, exemplified by Pigou, has been to overwork a too simple or out of date model, and in not seeing that progress lay in improving the model' (Keynes 1938, p. 296). Indeed, his revolution, establishing macroeconomics as an autonomous subject, can be regarded as the generalization of the Marshallian partial equilibrium model to the economy as a whole.
Another pillar of Keynes's strategy is the link between the generality of his theory and its ability to account for multiple equilibria referring to actual, 'internal' states of the economy: that is, those corresponding to different levels of unemployment, in contrast with standard theory, which is associated with full employment: I shall argue that the postulates of the classical theory are applicable to a special case only and not to the general case, the situation which it assumes being a limiting point of the possible positions of equilibrium […] the classical theory is only applicable to the case of full employment, it is fallacious to apply it to the problems of involuntary unemployment. (Keynes 1936, pp. 3, 16) Keynes's argument has two major implications. First of all, it implies the existence of a 'direct' relationship between theory and 'external reality' (see, for example, Pernecky and Wojic 2019, p. 773). In particular, he thought it was possible to define various employment states 'objectively' and to establish a one-to-one correspondence between such states and theories. Secondly, Keynes regarded his theory as covering the normal case of the economy, namely persistent high unemployment, in contrast with Pigou, for whom unemployment is only a cyclical phenomenon due to short-lived 'external' shocks. The fourth pillar of Keynes's internalist approach is his proposed view of rationality, which is broader than the standard one. While the latter stresses atomistic agents' substantive rationality in abstract, oversimplified contexts, Keynes focused instead on how they actually behave in real-world contexts. As he clarified, especially in his 1937 summary article formulating the 'practical theory of the future', in order to make decisions in such contexts, agents are forced to rely upon conventional techniques, such as assuming that tomorrow will be like today or falling back on other agents' views.
It can be argued that in his works Keynes mainly provided a descriptive account of such techniques, 9 which led him to capture essentially the 'psychological' dimension of conventions-i.e. the fact that they are formed through the interaction of individual agents, each seeking to find reassurance in the behaviour of others-rather than their theoretical dimension. He failed to underline, for example, that they are not just 'internal' market structures; being forms of precarious knowledge, they also call for 'external' anchors, including institutional factors, to be sustainable through time. 10 Moreover, he took the aggregate propensities underlying his model as quite impenetrable 'data'. 11 Keynes nevertheless regarded conventions as playing a key role in his theory. On the one hand, he described key variables, such as the monetary rate of interest and the money wage, as conventional features; 12 on the other, he used them to support his generality claim. He noted that standard theory is just one particular conventional technique; for example, he accused '…classical economic theory of being itself one of these pretty, polite techniques which tries to deal with the present by abstracting from the fact that we know very little about the future' (1937, p. 115). In particular, Keynes hinted at two different aspects of standard theory as a convention that should be distinguished: this theory can be regarded either as a technique used by economic agents to lull their disquietude in the face of uncertainty (it amounts to projecting the existing situation into the future 'to save the face of rational men') or as a technique used by theorists themselves to lull the horror vacui of instability (for example, it amounts to relying on various ceteris paribus assumptions).
In this case, standard theory could be regarded not as a true statement about the world or a substantive theory, but simply as a useful heuristic assumption to describe complex behaviour. This is the sense in which Keynes himself treated the key standard assumption of maximizing behaviour in the GT. Indeed, one key implication of his principle of effective demand-seen as a substantive theory accounting, so to speak, for agents' ex ante behaviour-is that firms do not primarily act by trying to maximize profit (for example, they decide how much labour to hire on the grounds of expected demand, not by looking at the real wage and labour productivity, as in standard theory, which takes maximizing behaviour as the key starting point of the analysis).

10 In other words, in line with Hodgson's critique, it is true that in the GT Keynes failed to carry out a full-blown 'conditional' analysis focusing on the specific role of institutions as anchors to conventions, an analysis which would have led him to go beyond 'self-contained' modelling and make reference to specific historical contexts. This is an important limitation for the applicability of demand policies, as they can only be devised in the light of such contexts. Paradoxically, instances of this kind of analysis can be found in some of Keynes's earlier writings, such as The Economic Consequences of the Peace, where he noted that the applicability of standard theory-which he viewed as a form of convention-to the conditions of the Victorian age depended upon the existence of an implicit 'social pact' between workers and capitalists: This remarkable system depended for its growth on a double bluff or deception. On the one hand the labouring classes accepted from ignorance or powerlessness, or were compelled, persuaded, or cajoled by custom, convention, authority, and the well-established order of Society into accepting, a situation in which they could call their own very little of the cake that they and Nature and the capitalists were co-operating to produce. And, on the other hand, the capitalist classes were allowed to call the best part of the cake theirs and were theoretically free to consume it, on the tacit underlying condition that they consumed very little of it in practice. The duty of 'saving' became nine-tenths of virtue and the growth of the cake the object of true religion. (Keynes 1919, p. 4) Another instance of conditional analysis is provided by A Treatise on Money, where he refers to the Chartalist theory, according to which money is a convention that arises from voluntary exchanges in markets to solve the problems of trade but becomes stable only when it is supported by the state (see Keynes 1930, pp. 4, 6).
11 Keynes left readers quite unclear about whether conventions or psychological laws should be taken as the ultimate foundations of his aggregate propensities and the true 'drivers' of his theory. While referring to conventions, he did not regard them as the common background of all his functions. Suffice it to note, for example, that he viewed his ultimate independent variables 'as consisting of the three fundamental psychological factors, the psychological propensity to consume, the psychological attitude to liquidity and the psychological expectation of future yield from capital-assets…' (Keynes 1936, p. 247).
12 For example, he noted that 'It might be more accurate, perhaps, to say that the rate of interest is a highly conventional, rather than a highly psychological, phenomenon. For its actual value is largely governed by the prevailing view as to what its value is expected to be' (Keynes 1936, p. 203).
Following his Marshallian background, Keynes used the optimizing assumption merely to characterize ex post equilibrium in his aggregate demand and supply model. In other words, he actually viewed firms as maximizers, but only in the context of the principle of effective demand as key driver of the analysis. The last pillar of Keynes's internalist approach concerns his 'orderly method of thinking', according to which, in order to deal with complex reality, where the relevant factors are organically interdependent, a theorist cannot consider all variables at once but is compelled to make simplifying distinctions such as the isolation of the true causal factors and the 'freezing' of other variables, at least temporarily. 13 Keynes clarified his approach in chapter 18 of the book in particular, where he summarizes his 'static' analysis of the determinants of income (i.e. relating to equilibrium at a point in time) by making a distinction between two types of givens, which can be labelled respectively 'primary' and 'secondary'. On the one hand, he singled out his ultimate independent variables as consisting of (1) the three fundamental psychological factors, the psychological propensity to consume, the psychological attitude to liquidity and the psychological expectation of future yield from capital-assets, (2) the wage-unit as determined by the bargains reached between employers and employed, and (3) the quantity of money as determined by the action of the central bank. 
(Keynes 1936, p. 247) On the other, he isolated the 'given' factors, which essentially correspond to the deep parameters of standard theory, that is, preferences and resources: We take as given the existing skill, the existing quality and quantity of available equipment, the existing technique, the degree of competition, the tastes and habits of the consumer, the disutility of different intensities of labour and of the activities of supervision and organization, as well as the social structure including the forces, other than our variables set forth below, which determine the distribution of the national income. This does not mean that we assume these factors to be constant; but merely that, in this place and context, we are not considering or taking into account the effects and consequences of changes in them. (ibid., 245) On this basis, one can understand why Keynes's 'orderly method' supports his generality claim.

13 As Keynes put it: The object of our analysis is, not to provide a machine, or method of blind manipulation, which will furnish an infallible answer, but to provide ourselves with an organized and orderly method of thinking out particular problems; and, after we have reached a provisional conclusion by isolating the complicating factors one by one, we then have to go back on ourselves and allow, as well as we can, for the probable interactions of the factors among themselves. This is the nature of economic thinking. Any other way of applying our formal principles of thought (without which, however, we shall be lost in the wood) will lead us into error. It is a great fault of symbolic pseudo-mathematical methods of formalising a system of economic analysis … that they expressly assume strict independence between the factors involved and lose all their cogency and authority if this hypothesis is disallowed… (ibid., 297).
While not focusing on growth explicitly, it can be argued that, in principle, Keynes regarded effective demand as being relevant to discussing not just fluctuations but also growth. 14 One implication of his organicist perspective-in contrast with the 'strict independence' view underlying mainstream theory-is that when the passage of time is considered, the primary demand factors are not truly independent but can be shaped by interaction with the secondary factors. This means that aggregate demand represents both the cause and the effect of key trends or phenomena, such as technological progress or population growth, namely changes in the factors that Keynes takes as 'secondary' givens. On the one hand, it is clear, for example, that without a sufficiently high propensity to invest there can be no technological progress. On the other, there is no doubting that the latter also influences aggregate demand.

14 Suffice it to note that conventions are intrinsic features of a monetary economy and thus do not vanish with the passage of time simply because one analyses the rate of change of income rather than just its level. However, it is true that Keynes somehow failed to present his principle of effective demand as the permanent driver of the economy. In Chapter 18, he pointed out, for example, that the order between the two sets of factors is reversible and depends upon the problem at hand (ibid., p. 247).

In this section, I suggest that the RP notion helps us to understand why Keynes's internalist approach, while successful against Pigou, has revealed itself to be a blunt weapon in the face of current standard macro based on Debreu's axiomatic formulation of general equilibrium theory (GET). One can single out at least five different reasons-all relating to the introduction of institutions into the analysis-for the victory of the axiomatic approach vis-à-vis Keynes's GT over the 'relative' generality issue. First of all, the RP logic shows why Keynes's emphasis on the intrinsic characteristics of a 'monetary economy' is not sufficient to support his generality claim against the modern standard approach. The point is that the latter builds an 'augmented' modelling dimension, capable in principle of accommodating all states of the economy, including those generated by a monetary economy. This augmentation can be understood if one considers that axiomatization-as an attempt to deal with the new unstable world, reflected in the 'crisis of foundations' in all fields at the end of the nineteenth century-creates what can be regarded as a certain degree of 'modelling freedom' by calling into question the one-to-one correspondence between theory and the 'objective' structure of the world, which still underlay Walras' GET and Marshallian theory. The specific solution to this crisis provided by axiomatization is the shift from 'correspondence with external reality' to 'internal consistency' as the main prerequisite of successful theory. Strictly speaking, the new standard axiomatic approach does not imply a sharp break with Walras or Marshall in terms of substantive theory or 'cosmological' beliefs as to the internal stability of the economy, the key role of individual agents pursuing self-interest, and utility value theory. The new approach implies instead a streamlined or 'purified' reformulation of such beliefs, thus avoiding reference to realistic features, such as those typical of the Cambridge tradition. The reason why the ADM, as a formal representation of the hard-core beliefs of the neo-Walrasian RP, 15 involves an augmented modelling freedom is that it implies a kind of 'relativistic' turn: it no longer assumes an 'absolute' notion of a commodity-based simply on its intrinsic nature-but radically generalizes it by differentiating commodities also by time, place of delivery and states of the world.
The ADM thus makes a very strong assumption, namely that of 'complete markets'-i.e. there exists a market, with forward prices, for every commodity at all time periods, in all places and states of the world-which necessarily involves a certain role for institutions as enabling factors. Indeed, the existence of all these markets has nothing to do with primitive 'barter' or the 'natural' tendency of men to exchange goods, noted for example by Adam Smith, but instead calls for a very 'artificial', hyper-capitalistic institutional setting to arise. 16 The reason why this, in turn, generates an augmented modelling dimension, one in principle more general than Keynes's, can thus be readily understood: when commodities are specified as conditional on various 'states of the world', the ADM incorporates not just the 'actual' or 'observable' states of the economy reflecting the intrinsic properties of a monetary economy considered by Keynes, but also 'potential' or 'unobservable' ones. In this context, the GT appears to apply to a special case only, namely the kind of economy in which many futures markets are missing. Secondly, the RP logic shows why Keynes's attempt to generalize Pigou's model is not sufficient to support his generality claim against the modern standard approach: it amounts to calling into question the 'protective belt' of the classical macro RP more than its 'hard core'. 17 Based on the ADM and its sophisticated institutional setting, this hard core plays a key role in modern economics because it turns 'choice theory' into the only possible 'natural laws'-i.e. those forms of agents' behaviour that are true whatever the context-generating the basic rules of 'grammar' of economics, from which serious theorists cannot simply depart if they want to be understood by their peers.
This grammar represents the purely linguistic formulation of the (relatively informal) hard-core cosmological beliefs, and it is 'neutral' in the sense that it is independent of all possible interpretations of external reality. 18 In other words, the neo-Walrasian RP implies that there is no 'direct access' to external reality (or to models reflecting it); this access can only be mediated by interpretations based on the ADM grammar. Strictly speaking, this does not mean that there is no ontology-i.e. claims about the real world-behind this RP, but only that it consists of those claims about agents-that they have preferences, pursue self-interest, calculate opportunity costs and so on-that appear so elementary or 'familiar' that they can be taken for granted and treated as axioms. Theorists are thus able to concentrate on the purely linguistic dimension. 'Grammatical' interpretations (i.e. those that respect the basic hard-core principles) take the shape of more specific 'models' in the protective belt. It is at this stage that another form of modelling freedom becomes possible: there can be many interpretations, depending, for example, on whether theorists take the ADM as a positive benchmark (namely, if one takes it as fully descriptive of external reality, as the Chicago school does) or a negative benchmark (that is, if one takes it as a tool for understanding what goes wrong in actual economies, as the New Keynesians do). 19 Based on this, one can see once again why Keynes's GT appears as a particular case within the axiomatic approach: when seen through the lens of New Keynesian 'grammatical' contributions, it represents merely a specific interpretation of capitalism, giving rise to a particular class of models, among many others, in the protective belt; it does not directly call into question the axioms of choice theory underlying the hard core of this RP.
17 Keynes's approach is justified by the fact that he shared with Pigou many Marshallian cosmological beliefs, which were so 'open' as to appear compatible with alternative approaches. For example, Marshall regarded agents as being 'reasonable', as opposed to the fully rational agents of contemporary standard theory. Moreover, his beliefs were not even strictly individualistic and reductionist, as shown for example by his mix of biological and mechanical metaphors and the 'representative firm' device, which represented a halfway construction between micro and macro rather than a simple magnification of micro thinking, as it is today (see, for example, Hodgson 1993, pp. 101-2; Togati 2019).
18 As Debreu noted, in his approach, 'Allegiance to rigor dictates the axiomatic form of the analysis where the theory, in the strict sense, is logically entirely disconnected from its interpretations' (Debreu 1959, p. x).
19 For this distinction, see, for example, Hahn 1982.

Thirdly, in the light of the RP notion, by stressing the link between standard theory and full employment, Keynes's multiple equilibria story also appears insufficient to establish his generality claim. In line with post-positivist approaches, this notion implies the existence of a gap between substantive theoretical claims (or cosmological beliefs) at the hard-core level and observable states of unemployment. On the one hand, cosmological beliefs are metaphysical and 'unfalsifiable', meaning that one cannot dismiss standard postulates by simply noting that they are at variance with the 'facts'. On the other, 'facts do not speak by themselves': it is always possible to interpret them in alternative ways. Thus, while certainly legitimate, Keynes's multiple equilibria story is just one of many possible stories that can, in principle, explain labour market outcomes. Suffice it to note that standard theory itself manages to explain even high unemployment rates as equilibrium phenomena.
In particular, two key factors account for this result. First, the ADM also generates a broader notion of equilibrium. While Keynes regarded equilibrium simply as a 'state of rest' at a point in time, for standard theorists it has an inter-temporal dimension: it is 'stochastic', and in their view an economy following a multivariate stochastic process may be described as being in equilibrium. Following this broader notion, these theorists are able in principle to conceptualize and interpret actual states of the economy-such as relatively 'mild' fluctuations-in terms of Walrasian theory. More specifically, the stochastic approach is able to reconcile observed outcomes (not a remote 'long-run equilibrium' state, but the actual states generated by the stochastic process) with the 'underlying structure' of the economy (described by the 'deep' parameters). Secondly, thanks to its treatment of institutional factors as causal factors capable of determining outcomes, this approach is consistent not only with 'mild' fluctuations but also with Keynes's persistent underemployment equilibria, thus once again reducing the GT to particular case status. This is made especially clear by Friedman, who introduces a number of institutional factors in his 'Natural Rate of Unemployment' (NRU) concept. In his view, the latter designates '(…) the level which would be ground out by the Walrasian system of general equilibrium equations, provided that there are imbedded in them the actual structural characteristics of the labor and commodity markets, including market imperfections, stochastic variability in demands and supplies, the cost of gathering information about job vacancies and labor availabilities, the costs of mobility and so on' (1968, p. 8). In this way, Friedman manages to interpret any actual rate of unemployment as 'structural' unemployment, rather than induced by a lack of aggregate demand. As he noted, the NRU 'is not a fixed number.
It is not 6%, or 5%, or some other magic number… The natural rate is a concept that does have a numerical counterpart-but that counterpart is not easy to estimate and will depend on particular circumstances of time and place' (Friedman 1996). 20

Fourthly, the Lakatosian benchmark also helps us to see that Keynes's 'descriptive' analysis of conventions is not sufficient to explain the 'data' of his model and thus establish his generality claim. The problem with such data is that they too 'do not speak by themselves', but must be interpreted. Now, without a clear, full-blown justification of such data at the hard-core level in the GT-for example, Keynes failed to regard conventions as the alternative 'natural laws' of macroeconomics (in contrast with the logic of choice) 21 or even to clarify whether psychology or conventions are the ultimate independent variables of his analysis-it is not surprising that interpreters following the axiomatic approach have been predominant ever since the late 1930s. Such authors have taken two 'micro-foundation' steps, which appear to justify the generality of standard theory against the GT. The first was to replace Keynes's original formulation of his aggregate functions, which were widely regarded as being ad hoc-i.e. lacking proper theoretical justification 22 -with alternative ones based on choice theory. In particular, following the key heuristic principle of methodological individualism-according to which social scientists should seek to understand all macro phenomena as the result of the interaction of individual optimizing atomistic agents-authors like Modigliani and Tobin managed to obtain new formulations of the consumption and liquidity preference functions capable of accommodating Keynes's original insights as special cases. 23 The second step in the generalization process of standard theory vis-à-vis the GT-taken by monetarist authors like Lucas-was to construct a broader analytical dimension at the protective belt level (concerning more specific macro models) by providing an 'endogenous' account of expectations, which Keynesians of all persuasions had continued to regard essentially as exogenous givens. Once again, this greater generality has been achieved by placing the emphasis on institutional factors, such as policy rules, that were essentially ignored by Keynes. The existence of a link between expectations and policy rules in this approach arises because it implies the 'communism of models' assumption, according to which in the real-world economy not just rational agents but also policy-makers believe in standard models and base their expectations and actions upon them. 24 In particular, according to this assumption, the only way governments and central banks can succeed in granting stability to market economies populated by rational agents is to adopt policy rules that are in tune with the standard 'natural laws', such as the need to balance budgets or carry out structural reforms of markets.

20 This stance seems to contradict Friedman's earlier views-stated, for example, in his 1946 paper on Lange-according to which 'the ultimate test of the validity of a theory is not conformity to the canons of formal logic but the ability to deduce facts that have not yet been observed, that are capable of being contradicted by observation, and that subsequent observation does not contradict' (Friedman 1946, p. 631). It can be argued that Friedman shifts from a Marshallian stance that sees theory as an 'engine for the discovery of concrete truth' (Marshall [1885] 1925, p. 159) to a stance that regards the simple macro models deriving from the ADM as having descriptive or positive value on a priori grounds. This does not mean that they correctly describe all real-world phenomena but that they capture their 'essential' or 'normal' features, including cyclical co-movements and growth. As Reder (1982) notes, for example, adherents to this view 'assume … that … one may treat observed prices and quantities as good approximation to their long-run competitive equilibrium values' (1982, p. 12). Investigators can thus 'abstract from the effects of transitory market imperfections resulting in misallocation or underutilization of resources' (ibid.). For example, in line with the neo-Walrasian hard-core claims, 'Chicago economists have always recognized (that) prices (especially wage rates) are sticky in the short-run' (ibid.), but this phenomenon is either temporary or in need of reconciliation with the maintained hypothesis of continuous optimization. Indeed, such theorists 'are far less willing than others to accept reports of irrational or inefficient behaviour at face value, including money illusion, and typically seek to discredit or reinterpret such reports so as to protect the basic theory' (ibid., p. 15, my italics).

21 In principle, the conventional techniques emphasized by Keynes could be regarded as the 'natural' features of agents' macroeconomic behaviour because they apply whatever the context. In this way, they could play the same role in his theory as the 'logic of choice' in standard theory, that is, abstract hard-core principles from which all the propositions of his theory, such as the key role of aggregate demand and the possibility of involuntary unemployment, follow.

22 Leontief was the first to note that Keynes elevates 'a questionable theorem to the status of a fundamental postulate and interprets the … new term as an independent datum' (Leontief 1937, p. 345). From the standpoint of GET, to which he subscribed, Keynes's aggregate propensities cannot be truly independent data because, in this theory, only the 'deep parameters' enjoy this status. (For analysis of these issues, see, for example, Togati 1998, 2019.)

23 For example, the strong link between consumption and current income emphasized by Keynes can be shown to be a special case within Modigliani's Life-Cycle theory, which occurs when liquidity constraints are introduced.

24 'The rational expectations hypothesis imposes a communism of models: the people being modeled know the model. This makes the economic analyst, the policy maker and the agents being modeled all share the same model, i.e. the same probability distribution over sequences of outcomes' (Hansen and Sargent 2000, p. 1).

Ultimately, in the light of our Lakatosian benchmark, it also appears that Keynes's 'abstract' reference to his 'orderly method of thinking' is not sufficient to establish his generality claim. Because Keynes essentially focused on the role of aggregate demand in his instantaneous equilibrium model, he did not actually implement the method he advocated (at least implicitly) to study growth, as he simply took the key propensities underlying this model as his 'primary givens'. He thus failed to provide an account of how they could change in the face of changes in the 'secondary givens', such as technology or population, which is necessary for a discussion of growth based on aggregate demand. It is not surprising therefore that this limitation opened the way for standard theorists to reverse Keynes's original generality claim once again. Initially, theorists like Samuelson and Solow accomplished this task by regarding the GT as relevant for the analysis of temporary fluctuations only, in contrast with neoclassical theory, which was capable of accounting for growth trends on the grounds of models stressing the key role of supply-side factors, such as technology, the saving ratio and population. In more recent times, by contrast, many standard theorists have established their generality claims by ruling out reference to the GT altogether.
On the one hand, new classical macroeconomists-on the basis of the notion of stochastic equilibrium-regard supply-side factors as relevant to both fluctuations and growth; on the other, important strands in growth theory embrace an endogenous perspective which resembles Keynes's orderly method inasmuch as it seeks to explain the supply-side factors themselves, which were taken as exogenous in old growth models à la Solow. However, in this perspective, institutions play the same explanatory, causal role in determining outcomes as aggregate demand (at least in principle) does in Keynes's approach; indeed, they appear to be the 'deep' factors of growth. In particular, based on the distinction between the 'proximate' causes of growth, which act in the medium term, and the 'deep' causes acting in the long run, this institutionalist perspective amounts to explaining 'fundamental' structural factors assumed as given in the medium run, such as technology, the stock of capital and labour and workers' productivity, on the grounds of further 'deep' structural and institutional factors, such as the capability to innovate, the saving ratio, the education system, the rule of law and the quality of government (see, for example, Acemoglu et al. 2005). Three basic implications of this perspective should be underlined. The first is a 'relativistic' one: the growth of a market economy is no longer 'automatic', but occurs only when an adequate pro-growth institutional context exists in the real world.
Secondly, in order to explain why growth may not occur, standard theorists further broaden the scope of their analysis by considering new causal factors, such as market failures-including the inability of pure market forces to provide a sufficient amount of particular goods, such as R&D or education, which it is the task of 'structural' policies to remedy-or institutional failures, including 'extractive' institutions, corruption or bad political systems (see, for example, Acemoglu and Robinson 2012, 2013) that create uncertainty and impair the smooth working of markets. Thirdly, this perspective shows why standard theory appears more general than its Keynesian competitor. Just as the NRU concept fits any labour market evidence, standard growth theory is able to account for all observable growth conditions, including 'negative' ones such as depression and stagnation, without making any reference to Keynes's analysis of persistent unemployment as being due to a lack of aggregate demand. But is this really the end of the generality battle? Does the endless stream of models and textbooks produced on the grounds of the standard premises really incorporate superior scientific knowledge that can, always and everywhere (in the context of modern capitalism), be taken as the correct foundation for policy moves? In this section, basing myself on the RP structure, I provide a negative answer to this question by emphasising one limitation of the generality of standard theory, which arises from its approach to institutions. More specifically, there is reason to regard this theory as an 'analytical giant with institutional feet of clay'. In order to make this claim clear, it is necessary to realize that a 'basic flaw' undermines the generality of the standard RP: namely, its key heuristic principle, whereby theorists ought to search for microfoundations, cannot actually be implemented.
As noted by Frank Hahn, for example, key phenomena such as money and expectations cannot be fully explained in terms of the logic of choice underlying the ADM. Indeed, it is because the latter relies on the complete markets assumption-whereby all choices are made instantly for all dates and contingencies-that it simply cannot find room for money and expectations, 25 and hence for phenomena such as fluctuations and crises induced by them, which lie at the heart of macroeconomics. Based on this, the rational expectations revolution appears in a new light. Instead of providing an explanation of expectations in terms of first principles, it amounts to an ingenious solution to the impasse created by the failure to find this explanation. Suffice it to note that, when considered together with other assumptions-such as the one whereby equilibrium is unique and stable, 26 or the device of a representative agent-the rational expectations hypothesis turns out to be a heuristic, simplifying assumption or technical device that is not 'true', but allows a smooth transition from the hard-core claims of the ADM (where there are complete markets) to the more specific aggregate models (where many futures markets are missing), 27 which form the protective belt of the neo-Walrasian RP. 28 More specifically, this hypothesis allows standard theorists to derive macroeconomic results-such as the convergence of the actual dynamics of the economy to the 'natural' levels of output and employment generated by the real, supply-side factors, which both money and expectations are unable to affect-that are in tune with the hard-core 'natural laws'.

25 'The most serious challenge that the existence of money poses to the theorists is this: the best developed model of the economy cannot find room for it' (Hahn 1982, p. 1); moreover, 'we have no theory of expectations firmly founded on elementary principles comparable say, to our theory of consumer choice' (ibid., p. 3). Strictly speaking, by placing the emphasis on such claims, I do not mean to neglect the existence of a large number of contributions in the literature that address the 'microfoundations of money' issue, that is, that seek to show its consistency with the first principles of choice theory. For a useful survey, see, for example, Lagos et al. (2014). What I hold instead is that such approaches do not seem to solve the basic problem identified by Hahn. They show, for example, that the role of money can only be justified by relying on features emphasized by search theory or game theory, such as imperfect information and strategic interaction, and/or various types of 'frictions' or ad hoc factors-such as money entering individual utility functions or aggregate production functions, or cash-in-advance constraints-that are not part of the original ADM. As Lagos, Rocheteau and Wright themselves underline: 'A defining characteristic unifying the work discussed below is that it models the exchange process explicitly, in the sense that agents trade with each other, at least in some if not all situations. That is not true in GE (general equilibrium) theory, where agents only "trade" against their budget lines' (2014, p. 2). One advantage of our RP approach is to clarify that by departing from the ADM-which is not just a model but the hard-core of the neo-Walrasian RP-such approaches also implicitly depart from general equilibrium tout court and from standard macroeconomics based on it. In other words, they enter a post-Walrasian world.

26 Strictly speaking, it is true that there can be many Walrasian equilibria for a given specification of preferences and endowments; however, this appears to be only a 'mathematical' result (that is, linked to the absence of restrictions on consumers' preferences; equilibrium can be shown to be unique and stable instead when there are restrictions on such preferences). This means that from the 'analytical' or 'theoretical' point of view, it is possible to demonstrate the existence of a unique link between the standard set of fundamentals and equilibrium outcomes. It should be noted that in the literature various authors seek to reconcile general equilibrium with multiple equilibria seen as a theoretical (rather than simply mathematical) result. However, they can only draw this conclusion by making assumptions which appear to be in contrast with the neo-Walrasian RP. For example, Farmer (2014, 2017) broadens the set of fundamentals considered by the ADM by adding animal spirits to the standard 'deep' parameters.

It must be noted, however, that if modern macro models are not fully anchored to the logic of choice but essentially turn out to be conventional features, based on a set of technical devices or assumptions, two major consequences follow. First of all, standard macro theory itself can no longer be regarded as a self-contained entity, obviously and unquestionably applying to all forms of modern capitalism. Indeed, despite the Lucasian rhetoric on this theory as being the 'true' model of the economy or 'the only game in town', its 'truth' or validity is far from obvious. The point is that conventions are not intrinsically true; their 'truth' or validity is contingent and conditional: it all depends upon the strength of their external anchors. Secondly, we need to consider a new dimension of generality, namely that of 'absolute' generality, which concerns the relationship between standard theory and real-world economies. While to discuss the 'relative' generality of this theory it is sufficient to focus only on 'internal consistency', to discuss its 'absolute' generality we must instead clarify the conditions under which it holds in real-world economies.
More specifically, we have to single out its external anchors and establish their solidity. To address this absolute generality issue, we need to reconsider in greater detail the 'communism of models' assumption underlying rational expectations. In particular, this assumption implies that all agents form expectations (in the shape of 'subjective' probability distributions) based directly on the models and predictions of standard theory itself (formulated in terms of 'theoretical' probability distributions), which in turn conform to actual events, represented by 'objective' probability distributions. In other words, this view presupposes that three different 'worlds'-to use Karl Popper's terminology (see Popper 1979, p. 154; also Togati 1998, pp. 69-80)-converge to draw the full conventional circle in which agents' expectations (world-2) are influenced by standard models (world-3). This means that agents learn the 'natural' laws of macroeconomics, such as thinking in 'real' terms, and act upon them. Their actions bring about observed events that justify their expectations (world-1), thus closing the circle. On this basis, we can conclude that standard models are applicable to the real world only if the successful closure of this conventional circle actually occurs. It may be argued that this closure cannot emerge spontaneously but only if three strict institutional conditions are realized in practice. The first anchor is the affirmation of standard theory itself as the 'best' popular model of the economy. As already noted, the claim that the neo-Walrasian RP is the 'best' one is almost entirely based on its internal consistency. However, because of its 'basic flaw', this feature is not sufficient per se to command the spontaneous, unanimous consensus (among peers) based on genuine scientific prestige, comparable, say, to Einstein's relativity insight.
This means that to emerge as the only game in town, the neo-Walrasian RP needs institutional features-such as research funding, scientific journals and academic careers-that all forcefully support this view. The second anchor is the convergence of policymakers in identifying and adopting the single 'best' policy rules, such as austerity, the Taylor rule or a structural reform agenda, in tune with the standard natural laws. Because of the basic flaw, which implies that money and expectations in real-world economies may generate outcomes that are far from the 'natural' ones expected on the grounds of standard models, policymakers cannot wait for spontaneous consensus over policies to emerge as a result of 'good' practice. The only way to 'save' the standard model is thus to enforce the 'best' rules a priori. One example is the Friedmanite 'automatic pilot' prescription, according to which central banks should stick to a simple money supply rule, whatever the context, inspired by a crude version of the quantity theory of money. Another is the pre-commitment to austerity rules in the European Union, which reduces the scope for discretionary fiscal policies for individual member states. The third anchor guarantees that all agents eventually come to learn standard theory itself. In principle, the latter should be selected by agents on the grounds of an inductive 'natural' (psychological) learning process that weeds out errors and weaker competitors (in terms of criteria such as internal consistency or predictive ability) and thus justifies the convergence of expectations to the probability distribution captured by standard models. However, it may be argued once again that agents' learning is not really spontaneous, for various reasons.
First, there seems to be no inductive learning process at all: not only do data not speak by themselves and fail to disconfirm theories, but the opposite often seems to be true-econometric techniques are used in a kind of 'cooking-the-books' manner to fashion empirical evidence in support of a priori thinking (e.g. calibration). Secondly, due to its basic flaw, the theory does not generate predictions that 'shine through' and strike agents as being obviously correct. Thirdly, in macroeconomics there are no controlled experiments capable of confirming theories, as there are in physics, for example. On these grounds, one can conclude that agents' learning of standard models, too, ultimately calls for institutional mechanisms, such as teaching, textbooks and newspaper editorials, which simply enforce thinking along orthodox lines: that is, shape public opinion in a way consistent with the natural laws. But to what extent is the fulfilment of such conditions for the closure of the conventional circle likely to occur in the real world? It can be argued that today, in the aftermath of the Global Financial Crisis (GFC), this closure is at risk because many developments potentially undermine the solidity of the anchors of standard theory, thus revealing its particular nature: namely, that it applies to a special case only. Notable developments are the emergence of divisions within the academic community and the proliferation of journals and ideas, including pleas for alternative paradigms in the discipline, which undermine the rhetoric of the 'only game in town'. Moreover, policymakers seem to be even more pragmatic than in the past, as they tend to adapt their stances to circumstances rather than impose 'correct' rules that follow from theory, quantitative easing being an obvious example.
In the end, a major gap between agents' expectations and actual events also tends to emerge today, one instance of which is the recent rise of 'populist' political parties in Europe, which reflects growing popular dissatisfaction with the a priori implementation of austerity rules. This gap creates new scope for pluralism, as shown by the multiple, 'non-official' cultural resources and ways of learning made available to ordinary people and students by the internet. However, it is important to be aware of at least three major factors that prevent the circle from breaking even in difficult times such as these. The first is the unity of method underlying current mainstream approaches: the 'only game in town' logic in particular is defended by the enormous flexibility of partial equilibrium modelling along choice-theoretic lines, which allows standard macroeconomists to accommodate all topics in a piecemeal fashion, including those that appear as striking anomalies from the standpoint of simple aggregate general equilibrium models. For example, while admitting that DSGE models failed to predict the crisis and to incorporate many important features of real-world economies, such as financial factors, mainstream theorists believe there is nothing fundamentally wrong with the discipline at the methodological level, since partial equilibrium models addressing such factors already exist. 29 The challenge that lies ahead for macro theorists is not to change method but, at best, to integrate such contributions into current aggregate models-a task that always appears feasible in such models, with relatively minor twists. But that is not all. An important role in defending the status quo is also played by institutional features that create a kind of methodological 'entry barrier'. For example, in most leading journals, articles that are not cast in standard formal terms are rejected straightaway whatever their contents: that is, it is not that new ideas are forbidden per se, but that they can make sense or have an impact only if they are cast in 'grammatical' terms. Another key example is the ever-growing technical background that economics students must acquire before thinking seriously about substantive issues, such as unemployment. The second factor is the stabilizing power of Keynesian policies. Indeed, the success of ultra-expansionary fiscal and monetary policies accounts for the resilience of major economies in the face of the GFC: put simply, thanks to their implementation the world has not collapsed as it did during the Great Depression. The last factor that contributes to the present stalemate is the lack of a full-blown alternative paradigm in post-Keynesian macroeconomics capable of challenging the standard RP on its own grounds and representing a unifying reference point for the galaxy of heterodox contributions.

29 This stance is well described by Saint-Paul: 'While the ultimate cause of the crisis is not entirely understood, many of its mechanisms are familiar to economists. This is because they already had the models to analyse those mechanisms. For example, it has been known for decades that falls in asset prices tend to reduce the wealth of households, which has a negative impact on consumption…. Other phenomena such as contagion or the epidemic spreading of insolvency … are less well understood. But many economists have been working on phenomena like asset bubbles and crashes, bank insolvency and illiquidity, bail-outs and moral hazard … for decades… In that sense, there is no reason why economics should change because of the crisis… the crisis is not a major challenge for economics research. In fact, the crisis has triggered an explosion of research in the areas I mentioned but this new research uses the same methodology and assumptions as the preceding one.'
This means that neoclassical macro models are the best ones not because of their scientific superiority but simply because of the TINA ('there is no alternative') syndrome. 30 Based on the RP concept, this paper leads to a few conclusions that are relevant to the discussion started by Hodgson and O'Donnell on the general theorizing versus historical specificity issue. First of all, the key flaw of standard macro is not its neglect of historical specificity per se. Institutions play a major role in its success in the 'relative' generality contest, since they appear both as premises or enabling factors, making sense of hard-core claims, and as causal factors, reconciling such claims with phenomenological reality. The true weakness of standard theory emerges instead in the 'absolute' generality contest, in terms of its applicability to real-world conditions: namely, on account of its failure to accommodate money and expectations, the validity of the general theorizing it pursues (i.e. the logic of choice) is highly conditional: it only holds under highly specific 'institutional anchors', such as those implied by the 'communism of models'. For this reason, standard theory appears as 'an analytical giant with institutional feet of clay', whose key characteristic is to presuppose that these very special conditions are actually always in place. 31

30 Pasinetti, for example, notes that for post-Keynesians, the GT does not play the same unifying role as the ADM does in standard macroeconomics, as shown by the fact that there is no comprehensive 'monetary theory of production' paradigm, which was Keynes's main objective in the GT: 'The model of a pure exchange economy was in fact going through an analytical process that was making it into the most clearly formulated economic theory so far. The Arrow-Debreu general equilibrium model, which has become the quintessence of it, presents itself as an extremely attractive, elegant formulation. Nothing similar has emerged for a monetary theory of production' (1999, p. 11).

Secondly, as noted by Hodgson, Keynes's major flaw is to neglect 'historical specificity'. However, while confirming his view, our analysis supports it on different grounds. This neglect is due not to the pursuit of general theorizing per se, but to the fact that Keynes chose to rely on the wrong general principle, namely psychology rather than convention. Indeed, what undermines an effective causal role of institutions in the GT is that Keynes failed to treat conventions as 'hard-core' features: that is, as the alternative 'natural laws' of macroeconomics, in contrast with the standard logic of choice. He did not directly question the latter but attacked the more specific assumptions of the classical theory of employment, replacing it with an alternative 'model' in which the key aggregate propensities appear as exogenous 'givens', ultimately based on psychology. Moreover, he neglected the fact that insofar as they are forms of precarious knowledge, conventions also call for external 'anchors', such as institutional factors, including policy rules and implicit social pacts, to be sustainable through time. Paradoxically, it is because it has managed to incorporate this feature-which accounts for the causal role of institutions-that the rational expectations revolution has conquered the macroeconomic field and virtually pushed Keynes out of it. I have argued, finally, that Keynes's economics calls for a new RP. The key point is that the economics profession regards Keynesian policies-despite their continual success as last-resort stabilizers-as mere pragmatic remedies, devoid of major theoretical consequences. Thus, as post-GFC history confirms, what this policy success ultimately achieves is, paradoxically, the restoration of the 'business as usual' atmosphere, rather than the original revolution.
If this analysis is correct, the reconstruction of this RP is the most important task that heterodox macroeconomists should embark on. In order to carry out this task, it would be necessary first of all to emphasize the role of conventions as the alternative natural laws of macroeconomics. Indeed, the key reason why standard macro has increasingly become a technical discipline is that most economists (including many Keynesians) simply take for granted the logic of choice-i.e. they all share the same grammar-so they can concentrate on the technical, modelling side. In the light of these new laws, one could then fully justify Keynes's (relative) generality claims about standard theory being just a particular conventional technique based on a number of ceteris paribus conditions, as well as integrate institutions as causal factors into Keynes's economics, thus remedying its lack of historical specificity. This integration could also allow Keynes's theory to regain its generality status in 'absolute' terms, for at least one major reason. Unlike the standard natural laws, which constrain the role of institutions but cannot be influenced by them, such conventional laws are much more 'malleable': they are in principle consistent with a much broader range of external anchors, including, for example, discretionary policy moves.

References

Acemoglu and Robinson. Why Nations Fail: The Origins of Power, Prosperity and Poverty.
Acemoglu and Robinson. The End of Low Hanging Fruit? Why Nations Fail Blog.
Acemoglu, Johnson and Robinson. Institutions as a Fundamental Cause of Long-Run Growth.
Debreu. Theory of Value.
Farmer. How the Economy Works: Confidence, Crashes and Self-fulfilling Prophecies.
Farmer. Neo-Paleo-Keynesianism: A Suggested Definition. Roger Farmer's Economic Window blog.
Farmer. Post Keynesian Dynamic Stochastic General Equilibrium Theory.
Friedman. Lange on Price Flexibility and Employment: A Methodological Criticism.
Friedman. The Role of Monetary Policy.
Friedman. The Fed and the Natural Rate.
Hahn. Money and Inflation.
Hands. Reflection Without Rules: Economic Methodology and Contemporary Science Theory.
Hansen and Sargent. Wanting Robustness in Macroeconomics.
Hodgson. Economics and Evolution: Bringing Life Back Into Economics.
Hodgson. How Economics Forgot History: The Problem of Historical Specificity in Social Science.
Hodgson. Economics in the Shadows of Darwin and Marx: Essays on Institutional and Evolutionary Themes.
Hodgson. Keynes and the Historical Specificity of Institutions: A Response to Rod O'Donnell.
O'Donnell. On the Logical Properties of Keynes's Theorising, and Different Approaches to the Keynes-Institutionalism Nexus.
O'Donnell. Keynes's 'Revolution'-The Major Event of Twentieth Century Economics? The Problematic Nature and Consequences of the Effort to Force Keynes into the Conceptual Cul-de-Sac of Walrasian Economics.
Popper. Objective Knowledge.
Reder. Chicago Economics: Permanence and Change.
Saint-Paul. How has the crisis changed economics? The Economist.
Togati. Keynes and the Neoclassical Synthesis: Einsteinian Versus Newtonian Macroeconomics.
Togati. How Can We Restore the Generality of the General Theory? Making Keynes's 'Implicit Theorizing' Explicit.
Weintraub. General Equilibrium Analysis: Studies in Appraisal.

Acknowledgements I am most grateful to two anonymous referees for their comments.