OPTIMAL JUDGMENT AGGREGATION

Jesús Zamora Bonilla
UNED, Madrid, jpzb@fsof.uned.es

1. JUDGMENT AGGREGATION: A CONSTITUTIONAL VIEW.

After centuries of debate, philosophers, psychologists, and social scientists are hardly in agreement about whether ‘knowledge’ must be essentially conceived as a cognitive state of individual minds, or must be attributed to some collective entity, i.e., whether it is me, or we, who ‘really’ knows. Without attempting to settle this question in a definitive way, a promising analytical approach has emerged in recent years, which is throwing new light on a much more specific problem: what formal connections exist between ‘knowledge’ as a social entity and ‘knowledge’ as a property of the individuals composing the social whole? Authors within this judgment aggregation approach have mainly employed social choice theory [1] as a mathematical framework within which to analyse how individual opinions or judgments determine, according to well specified rules of aggregation, the claims endorsed by the collective, and also to what extent the rationality that individuals may display in their own epistemic states is transferred to the group’s opinions. [2] The most consequential result to emerge from this approach has been the phenomenon known as the ‘doctrinal paradox’ or the ‘discursive dilemma’, first described by Lewis Kornhauser and Lawrence Sager, and generalised in a number of ‘impossibility theorems’ by other authors, mainly Philip Pettit, Christian List, and Franz Dietrich. [3] According to these results, when the members of a group disagree about certain statements, and they form a ‘collective opinion’ by means of some democratic aggregation procedure, it is possible for the group to reach conclusions that are mutually contradictory, even if each member has an internally coherent set of opinions.

[1] A classical introduction to social choice theory is Sen (1970).
Table 1 illustrates this possibility: though a majority of the three members of the group accepts claim A, and a (different) majority accepts B, the majority still rejects the conjoined proposition A&B, and so the set of claims collectively endorsed becomes logically inconsistent.

                     A     B     A&B
Agent 1              T     F     F
Agent 2              F     T     F
Agent 3              T     T     T
Majority decision    T     T     F

Table 1

[2] A related but different problem, i.e., whether some aggregation rules are capable of producing rational collective judgments out of not-so-rational individual opinions, has not been attacked within this framework until now, but we think it would nevertheless be an interesting problem for further discussion.

[3] Kornhauser and Sager (1986); List and Pettit (2002); Dietrich (2006).

Independently of the position one may have in the philosophical debate about the social nature of knowledge, the negative results we have just mentioned constitute a serious problem, both from a theoretical and from a pragmatic point of view, for there is no doubt that many instances of what we take as ‘knowledge’ in our complex modern societies, grounded on public deliberation and on the division of intellectual labour, are the result of the ‘aggregation’ (‘interconnection’, ‘networking’, etc.) of a large amount of epistemic inputs provided by separate but interrelated individuals. A particularly important case is that of scientific knowledge itself, which, according to contemporary expositions, basically consists in some kind of negotiated consensus among the specialists, but a consensus that always conceals a number of more or less significant disagreements. [4]
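The dilemma of Table 1 can be reproduced mechanically. The following sketch (our illustration, not part of the original argument; the helper names are ours) applies proposition-wise majority voting to the three agents of the table:

```python
# Minimal sketch reproducing the discursive dilemma of Table 1.
# The data structures and helper names are our own illustration.

# Each agent's judgments on A, B, and A&B (True = accept).
agents = [
    {"A": True,  "B": False, "A&B": False},  # agent 1
    {"A": False, "B": True,  "A&B": False},  # agent 2
    {"A": True,  "B": True,  "A&B": True},   # agent 3
]

def majority(claim):
    """Proposition-wise majority vote over all agents."""
    return sum(a[claim] for a in agents) > len(agents) / 2

collective = {c: majority(c) for c in ("A", "B", "A&B")}
print(collective)  # {'A': True, 'B': True, 'A&B': False}

# Each individual judgment set is consistent, but the collective set accepts
# A and B while rejecting their conjunction: a logically inconsistent outcome.
assert collective["A"] and collective["B"] and not collective["A&B"]
```

Each row of the table passes an internal consistency check, yet the column-wise majorities do not, which is exactly the point of the paradox.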
It is surprising that the literature on judgment aggregation has escaped the attention of relativistic philosophers of science, for apparently it might provide new arguments to show the lack of objectivity of accepted scientific claims; however, those tempted to follow this road should also be aware that some results have been proved about the probability of ‘good’ epistemic outputs arising from the aggregation of individual opinions (e.g., List (2005); see also Goldman (2004)). Our aim in this paper is precisely to show that, in spite of the ‘impossibility theorems’ referred to in the preceding section, certain mechanisms do exist that, at least under some specified circumstances, guarantee the optimality of the procedure according to which individual claims are aggregated; in these mechanisms, some of the assumptions made in those theorems are, of course, necessarily relaxed.

[4] This has been supported both from ‘rationalistic’ perspectives (e.g., Kitcher (1993)) and from ‘social constructivist’ ones (e.g., Knorr-Cetina (1999)).

Contrary to a high proportion of the work done until now on this problem, the approach we will follow here is not based on social choice theory, but on a different conceptual resource from the theoretical economist’s toolkit: constitutional political economy. This is a theory about the collective choice of norms by rational agents. [5]
The main idea underlying this approach is that, when a group of people interact to produce some ‘social’ outcome under some regulated procedure, it is possible that the members of the group have the capacity of jointly deciding what specific rules to establish; if this is the case, we can reasonably assume that the chosen rules will be efficient (in the sense that no other conceivable set of rules would have been better for some member, and at least as good for the rest), and furthermore, if these norms are to be in force during a long period of time, and have to be acceptable to people with different, and even highly conflicting, interests, then they will tend to be impartial (in the sense that they would correspond to a choice made ‘under a veil of ignorance’). The latter hypothesis can be made operational by assuming that the chosen rules will maximise the average expected utility of the group’s members. We will discuss in the remaining sections of the paper two different aggregation situations, which depend on the reasons that individuals may have in the first place to want an aggregation mechanism. Why do they worry at all about having something like a ‘collective opinion’, as something different from the mere enumeration of their individual judgments? In most cases this will be due not to some intrinsic interest of the members of the relevant collective, but to the demands of people external to the group. For example, the customers of a firm do not care about the particular opinions of the company’s workers or counsellors; citizens demand that a single and coherent law be passed by the Parliament; engineers want scientists to tell them what the laws governing some physical system are, and so on.

[5] See Brennan and Buchanan (1985) for an introduction to constitutional economics. Interestingly enough, in recent papers List and Pettit refer to the aggregation mechanisms themselves as ‘constitutions’ (cf. List and Pettit (2006)).
Thus, many groups are under external pressure to provide unified claims, which, of course, have to be based in some way or another on the opinions of their individual members. Furthermore, that Philip Pettit and his collaborators have been among the main actors in the booming literature on judgment aggregation has surely been due to the fact that the ‘discursive dilemma’ constitutes a threat to the deliberative republicanism he has been advocating (cf. Pettit (2001a,b)). According to this view, a necessary condition for democracy is that political or administrative agencies are ‘rationally contestable’, i.e., it must be possible for other agents to engage in a reasoned deliberation with them. Hence, it is not only that in many circumstances we want collective agencies to ‘speak with a single voice’; we also want the claims endorsed by this ‘voice’ to be logically articulated, so that we can argue against them if we do not find them reasonable, or defend them with good reasons if we want to persuade somebody else of their validity. If any judgment aggregation mechanism is liable to lead groups in some cases to inconsistent sets of claims, then the very concept of ‘public deliberation’ becomes problematic. On the other hand, the examples usually employed in the literature on judgment aggregation have not been too fortunately chosen in at least one respect. In general, they have the form of an argument about whether to take one decision or another. In the case of Table 1, the ‘reasons’ would correspond to the first two columns (suppose, e.g., that A is the proposition ‘the hurricane will pass through our city’, and B ‘the hurricane will be of force x at least’; assume that the decision is whether to take some preventive measures, and that all agree that these must be taken if and only if both A and B are true).
The opinions about A and B are epistemic judgments, for they refer to purely factual questions. On the contrary, the judgment presupposed in the third column of the table is a practical one, for it is about what decision to take. Most discussions of the discursive dilemma seem to assume that the members of the group only care about this practical decision, [6] and that the judgments of the first two columns have for them nothing but an instrumental value in helping to arrive at the ‘right’ decision. We think this asymmetry has to be taken explicitly into account, since the kind of reasons that may help to solve disagreements about factual claims are, in principle, different from the reasons employed to settle practical disputes. What we propose is, instead, to separate the problem of ‘judgment aggregation’ referring to merely epistemic decisions (what propositions the group must take as true) from the related problem of the aggregation of practical judgments (what propositions they must make true). Or, stated somewhat differently, we need to separate the problem of judgment aggregation from the related, but different, problem of the aggregation of decisions or preferences. When people agree on the facts, but vary in their values or interests, we are just in the traditional realm of social choice theory, which studies the aggregation of preferences, and here we will simply ignore this case. The discussion of judgment aggregation presupposes, hence, that there is some disagreement on the relevant facts, but this still leaves room for several possible scenarios.

[6] And, furthermore, that all individuals have the same bottom-line preferences, i.e., they only disagree about whether the appropriate circumstances to take a decision obtain or not. In fact, most practical disagreements refer to real conflicts of interests, which, perhaps just for analytical convenience, have usually been ignored in the literature.

In the following
sections we will consider two ideal cases: [7] in the first place, we will assume that agents have purely epistemic preferences, i.e., they only care about the ‘distance’ between their own individual opinions and the collective claims, but are not worried at all about the practical consequences this distance may entail. In the second place, we will make the opposite assumption: individuals do not care in any way about the epistemic difference between the collective opinion and their own, but they fear that, the more informative this collective opinion is, the higher the risk of the group taking a decision contrary to their individual interests (if they happen to disagree with the group’s opinion). We shall prove that in both scenarios there is an optimal judgment aggregation mechanism (but a different one in each case), under the assumption that the members of the group constrain their decision to the choice of a consistent and deductively closed set of claims. This assumption contradicts one of the conditions on which the impossibility theorems are based: systematicity, i.e., that the same aggregation rule is uniformly applied to all the propositions one by one. [8]

[7] We do not assume that these two extreme cases exhaust the logical possibilities, but leave for further work the analysis of intermediate situations.

[8] The rules considered here correspond to the class that List and Pettit (2006, p. 12) identify as ‘set-wise supervenience’: ‘The set of group judgments on all the propositions in the agenda is robustly a function of the individual sets of judgments on (some or all of) these propositions’.

But, leaving aside our opinion that there is no a priori reason to prefer rules that obey systematicity, our assumption allows us to solve in a simple way the problem of the possible inconsistency of the collective judgments: since the choice is now directly made on sets of propositions, and not claim by claim, the members of the group just abstain from including among the available options those sets that are internally inconsistent. Furthermore, if the options only include deductively closed sets of propositions, then all the possible deductive relations between
propositions are automatically taken into account in the collective choice, and this weakens the paradoxical appearance of the discursive dilemma (which is due to the fact that a different collective decision may emerge depending on whether the choice is made at the level of the ‘premises’ or at the level of the ‘conclusions’); we assume, instead, that the group is not choosing isolated claims, but a theory (in the logico-mathematical sense of the word), which necessarily incorporates all the relevant deductive connections between the propositions it contains. Furthermore, there is another, very important aspect of the process of judgment aggregation that needs to be taken into account and that is usually neglected in the literature. We think it is reasonable to assume that the aggregation process takes place only after every individual has taken into account the judgments of the others. That is, we assume there is a previous process of public deliberation, during which each agent presents her reasons for or against each debated statement, and in this process it is possible that those reasons lead some individuals to change their judgments. (Perhaps, as Christian List has suggested in a personal communication, this can be modelled as if each individual carried out a process of judgment aggregation within her own mind.) Public aggregation, instead, proceeds when the deliberation has finished, i.e., when none of the reasons presented makes anybody change her opinion any longer, or, stated differently, when an equilibrium is attained in the deliberation process.
Of course, we refer to the case where this equilibrium contains different individual judgments that are mutually contradictory, because if this is not the case, i.e., if deliberation leads to full consensus, the aggregation problem is trivial. This apparently innocent assumption, together with the hypothesis that the individuals are rational, has a dramatic consequence for the philosophical discussion of judgment aggregation: there is no reason to suppose that the collectively adopted judgment is ‘epistemically superior’ in any sense to the individual judgments. Stated differently: a judgment aggregation problem is not equivalent to an analogous problem of aggregation of information. In the latter case, we can take the opinion of every individual as a kind of statistical estimator of the truth of the relevant propositions, and, by knowing each individual’s reliability, it would be possible to make an inference to the theory that is most likely true. But we are assuming that, were there some logical or statistical argument showing that the collective judgment is more likely true than the opinion of an individual, this agent would have a reason to change her mind, and we have supposed that all rational changes of individual judgments have already been made. So, the constitution or approval of the collective judgment should not force the members of the group to change their individual opinions in any way. Hence, from an epistemic point of view, we must not think of the collective judgment as a ‘better’ opinion than the individual ones. The problem of judgment aggregation is not that of ‘finding the truth amongst a bundle of contradictory opinions’, but rather that of how to live and act in a group in which there are irreducible cognitive disagreements. And it is important not to forget that this is primarily a problem for the members of the group, and not for the philosopher observing them from the outside.
This is what justifies our contractarian approach: judgment aggregation mechanisms need not be justified by means of philosophical or mathematical arguments (though some of these can obviously be relevant), but mainly by means of the practical advantages or disadvantages that having one mechanism or another will have for the people whose judgments are going to be aggregated.

2. JUDGMENT AGGREGATION BY PURELY EPISTEMICALLY ORIENTED AGENTS.

Imagine there is a group composed of an odd number (n) of individuals, [9] each one having a certain opinion about k independent atomic propositions, p1, p2, ..., pk. Since we assume that every individual has a definite judgment about every proposition, we say that each agent believes a complete and consistent theory. In this propositional framework, complete theories can be axiomatised by a proposition of the form p = ±p1 & ±p2 & ... & ±pk, where each symbol ‘±’ is to be replaced by a negation or by nothing; since every proposition is assumed to be logically independent of the rest, these complete theories are consistent (an inconsistency would only arise if someone accepted both pj and ¬pj). Each complete and consistent theory is then equivalent to some row of a traditional truth table. The theory accepted by individual i will be called pi = ±p1i & ±p2i & ... & ±pki. Now suppose that the group has to take a collective decision about what complete and consistent theory represents in the best way the judgments of the group’s members. In order to answer this question, we need some information about the epistemic preferences of the individuals. The particular assumption we are going to make in this section is that every member of the group only cares about the ‘distance’ between the theory which is collectively adopted and the theory she personally believes.

[9] The assumption that n is odd is made to avoid the possibility of ties.
This distance can be measured in a very straightforward way: [10]

d(p,q) = (1/k) · (number of mismatches between p and q),

and hence we will assume that the utility i receives if theory q is collectively accepted is given by the formula ui(q) = 1 − d(q,pi). From these assumptions, several interesting theorems can be derived:

(1) For every distribution of individual opinions, there is one theory which maximises the sum of the individual utilities.

This is straightforward: given the opinions of the individuals, each complete theory will have associated with it a certain degree of total utility, and for one of these theories this sum will have a maximum value. [11] A much more relevant question is whether the members of the group have some way of finding which the optimum theory is. The following two theorems show that this is certainly the case: premise-based majority voting, PMV (which consists in each member casting a separate vote on each atomic proposition pj, and selecting as the collective judgment pj or ¬pj, depending on which option receives more votes), has the desired properties.

(2) The outcome of PMV maximises the sum of the individual utilities.

Proof: For each atomic proposition pj, majority voting guarantees that the outcome has fewer mismatches with the individual opinions about pj than its negation.

(3) PMV is non-manipulable.

Proof: For each atomic proposition, no individual can attain a higher utility level by voting the negation of the proposition she accepts than by voting this proposition.

[10] The notion of ‘logical distance’ between propositions or states of affairs has been particularly exploited within the literature on ‘truth approximation’ or ‘verisimilitude’ (cf. I. Niiniluoto, Truthlikeness, Dordrecht, D. Reidel, 1987).

[11] For every atomic proposition p, only one of p or ¬p minimises the sum of the distances to the individual beliefs about p; so, if A is optimal and B ≠ A, then B will contain at least one atomic proposition for which the sum of distances to individual judgments is not optimal, and hence B cannot be optimal. (The proof depends on there being an odd number of individuals, of course.)
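Under the distance-based utilities just defined, theorems (1) and (2) can be illustrated by brute force. The sketch below is our own illustration (the opinion profile and helper names are invented): it represents a complete theory as a k-tuple of truth values and checks that the PMV outcome coincides with the utility-maximising complete theory:

```python
# Sketch: complete theories over k atomic propositions as k-tuples of 0/1.
# We check by exhaustive search that premise-based majority voting (PMV)
# maximises the sum of utilities u_i(q) = 1 - d(q, p_i), with d the
# normalised Hamming distance. The opinion profile below is invented.

from itertools import product

k = 3
opinions = [(1, 0, 1), (0, 1, 1), (1, 1, 0)]  # p_i for an odd group, n = 3

def d(p, q):
    """Normalised 'logical distance': share of mismatched propositions."""
    return sum(a != b for a, b in zip(p, q)) / k

def total_utility(q):
    return sum(1 - d(q, p) for p in opinions)

# PMV: a separate majority vote on each atomic proposition.
pmv = tuple(int(sum(p[j] for p in opinions) > len(opinions) / 2)
            for j in range(k))
print(pmv)  # (1, 1, 1)

# Theorem (2): the PMV outcome attains the maximum total utility.
best = max(product((0, 1), repeat=k), key=total_utility)
assert total_utility(pmv) == total_utility(best)
```

The exhaustive search is exponential in k, of course; the point of theorem (2) is precisely that PMV reaches the same optimum proposition by proposition.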
So, PMV can be seen as the optimal judgment aggregation rule for agents that have the type of preferences assumed in this section. [12] Of course, by accepting the outcome of this voting mechanism, the group will be committed to accepting many claims that some members would individually reject, and it is also possible that some propositions the group is forced to accept (because they logically follow from the adopted complete theory) are rejected by a majority. Furthermore, it would not be strange if the outcome of PMV were a theory that everybody would individually reject! In our scenario, however, this is only an apparently dramatic conclusion, because the voted theory is just taken as something that represents in the best possible way the variety of opinions of the group’s members. It is really the result of a compromise, and, as in most cases of bargaining, the final outcome simply does not coincide with the optimum choice of any of the parties, though it minimises the aggregated losses. [13]

[12] PMV has been defended by Pettit (2001a, ch. 5) on the basis that it generates a consistent collective opinion. What our argument adds is that, in this idealised scenario, the rule is also optimal from an epistemic point of view, and that it forces individuals to sincerely reveal their true opinions.

3. JUDGMENT AGGREGATION BY CYNICAL AGENTS.

We are going to consider next a situation which, in a sense, is a mirror image of the previous one. In section 2, we assumed that the members of the group only care about how far the collective opinion lies from their own individual judgment about the truth.
The practical consequences that the group will draw from having formed one opinion or another have not been taken into account, or have simply been assumed to have an effect on each individual’s utility function that is strictly proportional to the distance between the collective and the individual judgment. But this will certainly not happen in many real situations (perhaps scientific research is the best example of an institution relatively close to the idealised one depicted in section 2). Now we will assume, instead, that the individuals are utterly cynical, in the sense that, no matter how ‘close’ your personal opinion is to the collectively agreed one, if the latter is inconsistent with the former (i.e., if your own preferred theory happens to ‘lose’ in the voting), then the collective judgment will be interpreted by the winners in the most beneficial way for them, and the least beneficial way for you (i.e., they will use the collective judgment to justify those practical decisions that satisfy in the best possible way their own interests, at the cost of yours). Logic imposes some limits on this cynical use of reason, but the limits are often wide enough to permit a considerable degree of exploitation of those that disagree with the public opinion.

[13] Two complications that will be studied in the final version of the paper refer to the conventional nature of the choice of atomic propositions, and to the possibility of individuals having a reservation utility from not reaching a consensus. The first relates to the problem known as ‘language variance’, well known in the literature on truthlikeness, and first identified by David Miller.
Our strategy in this section will be to assume that the members of the group, knowing this, may want to establish some constitutional mechanism that minimises the chances of being exploited, or, more exactly, that maximises the difference between the benefits they derive when they win and the costs they suffer when they lose. Now the agents can choose, not only amongst complete theories, but amongst all consistent and deductively closed sets of sentences. [14] In order to calculate the sum of individual utilities derived from the choice of a particular theory, we have to make explicit some assumptions about the individuals’ preferences (‘ui(A)’ indicates the utility agent i receives if theory A is the collective choice):

[14] Adding these options to the case of section 2 does not vary the result proved there, because it can be shown that all non-complete theories give a total utility lower than that associated with the optimum complete theory.

(4) (a) If pi ├─ A ├─ B, then ui(B) ≤ ui(A).
    (b) If A ├─ B ├─ ¬pi, then ui(A) ≤ ui(B).
    (c) ∀A ∀i ∀j, if pi ├─ A and pj ├─ ¬A, then ui(A) = −uj(A).
    (d) ∀A ∀i ∀j, if pi ├─ A and pj ├─ A, then ui(A) = uj(A).
    (e) ∀i, ui(Taut) = 0.

(4.a-b) assert that, amongst two theories that i accepts, she will prefer as the collective opinion the one with more content, and, amongst two theories she does not accept, she will prefer the less contentful one; these assumptions reflect the ‘cynical’ attitude agents have towards the collectively adopted claims: if a member of the group agrees with the collective opinion, then she will want this opinion to be as strong as possible, but if she disagrees, she will want it to be as weak as possible. On the other hand, (4.c-e) are assumed for analytical convenience (‘Taut’ stands for the tautology); in particular, (4.d) seems reasonable when discussing a choice made ‘under a veil of ignorance’. [15]
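One concrete way to see that assumptions (4.a-e) are jointly satisfiable is to build a toy model. In the sketch below (our construction, not the paper’s), a theory is represented by the set of truth-value assignments it allows, its ‘content’ is the share of assignments it excludes, and ui(A) is +content(A) when i accepts A and −content(A) when A contradicts i’s opinion; a brute-force loop then verifies (4.a) and (4.b):

```python
# Toy semantics (our own illustration, not the paper's formalism):
# a theory is the set of worlds (truth-value assignments) it allows.

from itertools import product, combinations

k = 2
worlds = set(product((0, 1), repeat=k))   # 4 assignments for 2 propositions

def content(A):
    # Share of worlds a theory excludes; Taut (= all worlds) has content 0.
    return (len(worlds) - len(A)) / len(worlds)

def entails(A, B):
    return A <= B          # fewer allowed worlds = logically stronger

def u(pi, A):
    """Utility of an agent whose complete theory pi is a singleton set of
    worlds, if theory A is collectively chosen."""
    if entails(pi, A):     # the agent accepts A
        return content(A)
    if not (pi & A):       # A is inconsistent with the agent's opinion
        return -content(A)
    return None            # neither accepted nor rejected: left undefined

theories = [set(s) for r in range(1, len(worlds) + 1)
            for s in combinations(sorted(worlds), r)]
pi = {(1, 1)}

# Brute-force verification of (4.a) and (4.b) for this utility function:
for A in theories:
    for B in theories:
        if entails(pi, A) and entails(A, B):      # pi |- A |- B
            assert u(pi, B) <= u(pi, A)           # (4.a)
        if entails(A, B) and not (B & pi):        # A |- B |- not-pi
            assert u(pi, A) <= u(pi, B)           # (4.b)
print("(4.a) and (4.b) hold for all", len(theories), "consistent theories")
```

(4.c-e) hold here by construction: disagreeing agents receive opposite utilities of equal size, agreeing agents receive the same utility, and the tautology yields 0 for everybody.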
A possible objection to (4.b) has been suggested to us by Franz Dietrich: if an individual believes p&q&r, this assumption entails that she will prefer that ¬p&q be collectively adopted rather than ¬p&q&r, though in the latter case a new proposition accepted by her has been added. Our answer is simply that those individuals that happen to have this type of preferences (or those situations that generate this type of payoffs) are well represented by the scenario modelled in section 2. On the contrary, when people are afraid of linguistic manipulation of reasons, the new assumptions seem more reasonable. Take into account that the ‘distances’ between the different theories may depend on the set of concepts with which the language operates (recall note 13), and, in the absence of a clearly predetermined way of ‘measuring’ the similarity between several propositions, those ‘distances’ become extremely subjective. Fig. 1 represents this possibility: the individual opinion is p&q&r (the dotted area), ¬p&q is the area shaded with horizontal lines, whereas ¬p&q&r is the area with vertical lines. In this example it is clear that the weaker theory ¬p&q is ‘closer’ to the individual opinion than the stronger theory ¬p&q&r, though the latter includes an additional claim the agent accepts. We are not assuming that examples like this one are the norm, but, as long as the possibility exists of using the collectively accepted claims to take decisions that serve to exploit the ‘dissidents’, our new hypotheses about individual utility functions become more justifiable.

[15] The hypotheses are not logically independent; for example, (4.b) can be derived from (4.a) and (4.c), and (4.c) also entails (4.d).
Figure 1

From the point of view of the group’s members, the most important fact is that, in each collective choice situation, there will be some theory for which the sum of individual utilities attains a maximum, and agents would like to have an aggregation procedure which systematically leads the group to accept that theory. We will show that there is a mechanism that, even if it may fail to select the optimum theory in each particular collective choice, generates an optimal pattern of choices on average. By theory-based majority voting (TMV) we will refer to a process in which the members of the group can form coalitions that propose a theory, A, which is then voted on. If a majority of the members of the full group vote in favour of A, it becomes the collective opinion. If no theory attains a majority, then the group suspends judgment (this can be represented by the choice of Taut), which results in everybody having a utility equal to 0. We introduce a further distinction between simple majority voting and qualified majority voting; in the latter case, some predetermined percentage w (≥ 0.5) of the group must vote in favour of the proposed theory if it is to be socially accepted (in the case of simple majority voting, w = 0.5). A theory A is w-defeatable if and only if there is another theory B such that the set of members for which ui(A) < ui(B) constitutes a w-majority. The following proposition states some very basic properties of this voting procedure (Sw represents the outcome of applying the w-TMV aggregation procedure; pi represents the complete theory accepted by i, as in section 2):

(5) (a) ∀w ∃i, j, ..., l, Sw = pi ∨ pj ∨ ... ∨ pl.
    (b) Sw is non-w-defeatable.

Proof: (5.a) follows from (4.a), for the individuals voting in favour of Sw will always prefer it to any other theory entailed by it.
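The notion of w-defeatability just defined lends itself to a mechanical check. The sketch below is our own illustration (the utility matrix is invented and is not derived from axioms (4.a-e)): given a table of utilities ui(A), it tests whether each theory can be beaten by a w-majority preferring some rival theory:

```python
# Sketch: testing w-defeatability under theory-based majority voting (w-TMV).
# The utility matrix is purely illustrative (our numbers, not the paper's).

import math

# u[i][t]: utility agent i receives if theory t is collectively chosen.
# Theory 0 plays the role of Taut (suspension of judgment, utility 0 for all).
u = [
    [0,  2, -2],   # agent 1: agrees with theory 1, rejects theory 2
    [0,  2, -2],   # agent 2: likewise
    [0, -2,  2],   # agent 3: holds the opposite opinion
]
n, theories = len(u), range(len(u[0]))

def w_defeatable(a, w):
    """A is w-defeatable iff some B is strictly preferred to A by strictly
    more than a proportion w of the group."""
    need = math.floor(w * n) + 1          # strictly more than w*n members
    return any(sum(u[i][b] > u[i][a] for i in range(n)) >= need
               for b in theories if b != a)

# With simple majority (w = 0.5), only theory 1 is non-defeatable here:
print([w_defeatable(t, 0.5) for t in theories])  # [True, False, True]
```

By (5.b), the outcome of w-TMV is a non-w-defeatable theory, so a search like this one identifies the possible collective choices for a given w.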
(5.b) follows from the fact that, if Sw were w-defeatable, another coalition would propose some theory which defeats Sw (this is equivalent to saying that Sw is the theory which maximises ui for the individuals belonging to the winning w-majority).

Let S* be the theory which would maximise the sum of individual utilities if collectively chosen. Then the most important result is the following theorem:

(6) There is some qualified majority level, w’, such that S* = Sw’.

Proof: (4.c-d) entail that S*, having a positive sum of individual utilities, will have a majority of members in favour of it (i.e., for which ui(S*) > ui(¬S*)). Let w’ be the proportion of members in favour of S*. If this theory were w’-defeatable, there would exist another theory, S’, such that at least the same number of members prefer S’ to S*; but this, together with (4.c-d), entails that S’ has a bigger aggregated utility than S*, contrary to the definition of S*.

The next relevant question is whether S* can be reached by simple majority voting (i.e., whether w’ = 0.5). It is easy to see that, in general, this will not be the case. [16] So, there will be, for each collective choice situation, a particular value of w’ guaranteeing that the outcome of w’-TMV selects the optimal theory. Nevertheless, the extent of the optimal qualified majority will probably change from case to case. What the members of the group would like to choose as a constitutional rule, under this ‘cynical’ scenario, will be some value of w that maximises the average value of the outcomes of w-TMV.

[16] (4.c-d) entail that the social utility associated with simple majority is equal to the utility of just one of the individuals voting for the winning theory (since there are 2n + 1 individuals, n + 1 of them will get ui(S), and the remaining n will get −ui(S)). Let S1 be the winning theory if w is set equal to (n+2)/(2n+1); in this case the total utility attained by the group is 3ui(S1) (by a similar argument), and so simple majority voting will be collectively better than w-majority voting only if ui(S)/ui(S1) > 3. Hence, if individual utility decreases ‘slowly’ from the level it attains with the outcome of simple majority voting to the level attained under unanimity (i.e., when w equals 1), S* will necessarily correspond to a majority level higher than 0.5.

The more predisposed they are towards exploiting the other
members of the group, the higher the value of w they will choose. The outcome of judgment aggregation in a situation like the one depicted in this section is a collective claim consisting simply in the disjunction of the beliefs of a high proportion of the members of the group. Perhaps this collective claim does not look like a powerful victory of deliberative reason, but we think it is an extremely important point to have shown that, even in circumstances utterly inhospitable to reason and dialogue, like the ones assumed in this section, agents can find a way of carrying out epistemic negotiations in an efficient way.

REFERENCES

Brennan, Geoffrey, and James Buchanan (1985), The Reason of Rules: Constitutional Political Economy, Cambridge, Cambridge University Press.
Dietrich, Franz (2006), “Judgment Aggregation: (Im)possibility Theorems”, Journal of Economic Theory, 126(1): 286-298.
Goldman, Alvin (2004), “Group Knowledge Versus Group Rationality: Two Approaches to Social Epistemology”, Episteme, 1(1): 11-22.
Kitcher, Philip (1993), The Advancement of Science: Science without Legend, Objectivity without Illusions, Oxford, Oxford University Press.
Knorr-Cetina, Karin (1999), Epistemic Cultures: How the Sciences Make Knowledge, Cambridge (Ma.), Harvard University Press.
Kornhauser, Lewis A., and Lawrence G. Sager (1986), “Unpacking the Court”, Yale Law Journal, 96(1): 82-117.
List, Christian (2005),
“The Probability of Inconsistencies in Complex Collective Decisions”, Social Choice and Welfare, 24(1): 3-32.
List, Christian, and Philip Pettit (2002), “Aggregating Sets of Judgments: An Impossibility Result”, Economics and Philosophy, 18(1): 89-110.
List, Christian, and Philip Pettit (2006), “Group Agency and Supervenience”, unpublished.
Pettit, Philip (2001a), A Theory of Freedom: From the Psychology to the Politics of Agency, Cambridge and New York, Polity Press and Oxford University Press.
Pettit, Philip (2001b), “Deliberative Democracy and the Discursive Dilemma”, Philosophical Issues, 11: 268-299.
Sen, Amartya (1970), Collective Choice and Social Welfare, San Francisco, Holden-Day.