Social choice theory

Social choice theory is a branch of economics and political science that analyzes the aggregation of individual preferences, utilities, or judgments into collective decisions or social welfare orderings. It originated in the 18th century with early paradoxes like the Condorcet paradox, where majority voting can produce cyclic preferences, but gained modern rigor through Arrow's 1951 work demonstrating inherent impossibilities in fair aggregation. The field's defining achievement is Arrow's impossibility theorem, which proves that no social welfare function can simultaneously satisfy universal domain, Pareto efficiency, independence of irrelevant alternatives, and non-dictatorship when aggregating ordinal preferences over three or more alternatives. This result underscores trade-offs in collective decision-making: attempts to ensure fairness and responsiveness in voting systems inevitably fail under reasonable axioms, leading to strategic manipulation or inequitable outcomes in practice. Subsequent developments, including the Gibbard-Satterthwaite theorem on strategic voting, extend these insights to reveal that non-dictatorial voting rules are susceptible to manipulation by strategic voters. Social choice theory thus challenges idealistic views of collective rationality, emphasizing empirical and logical limits to democratic aggregation while informing real-world applications in electoral design and mechanism design.

Core Concepts and Definitions

Social Welfare Functions and Ordinal Preferences

A social welfare function (SWF) in social choice theory is a rule that maps a profile of individual preference relations to a collective social preference relation over a set of alternatives. In the ordinal framework, individual preferences are represented solely by complete and transitive rankings (total preorders) of alternatives, without measurable intensities or interpersonal comparisons of preference strength. This ordinal approach assumes that agents can only express strict orderings, such as preferring alternative A to B and B to C, implying A preferred to C by transitivity, but provides no cardinal scale for how much more A is preferred over B versus B over C. Formally, let X denote a finite set of alternatives with at least three elements, and I the set of n \geq 2 individuals. Each individual i \in I has an ordinal preference relation R_i \subseteq X \times X, which is complete (for any x, y \in X, either x R_i y or y R_i x) and transitive (if x R_i y and y R_i z, then x R_i z). A profile is a tuple (R_i)_{i \in I} of such relations, drawn from a domain D of admissible profiles. The SWF, denoted f: D \to \mathcal{R}, where \mathcal{R} is the set of complete transitive relations on X, outputs R = f((R_i)_{i \in I}), the social preference, which must likewise be complete and transitive for rational social choice. Ordinal SWFs differ from cardinal variants, which incorporate utility functions u_i: X \to \mathbb{R} allowing aggregation via sums or weighted averages to reflect preference intensities. In the ordinal case, aggregation relies exclusively on orderings, precluding interpersonal comparisons and focusing on consistency with individual rationality, as formalized in Kenneth Arrow's 1951 framework where SWFs must satisfy conditions like universal domain (admitting all logically possible profiles) and produce transitive social orderings. 
This setup highlights challenges in deriving unanimous social rankings from diverse ordinal inputs, as simple rules like pairwise majority voting can yield intransitive outcomes, such as cycles where A beats B, B beats C, and C beats A across profiles. Examples of ordinal SWFs include the Borda count, which sums ranks across individuals (e.g., assigning scores of 2, 1, 0 for top-to-bottom positions in a three-alternative race), though it violates independence of irrelevant alternatives by considering full profiles.
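As an illustration, the Borda count described above can be sketched as a small ordinal SWF; the three-voter profile here is hypothetical, chosen to show a clean social ordering:

```python
def borda_ranking(profile):
    """Ordinal SWF via Borda scores: in a ranking over m alternatives,
    the alternative in position k (0-indexed, best first) earns m-1-k points."""
    m = len(profile[0])
    scores = {}
    for ranking in profile:
        for position, alt in enumerate(ranking):
            scores[alt] = scores.get(alt, 0) + (m - 1 - position)
    # The social ordering ranks alternatives by descending total score.
    return sorted(scores, key=lambda a: -scores[a]), scores

# Hypothetical profile: each tuple lists one voter's ranking, best first.
profile = [("A", "B", "C"), ("B", "A", "C"), ("A", "C", "B")]
order, scores = borda_ranking(profile)
# scores: A=5, B=3, C=1, giving the social ordering A > B > C
```

The function outputs a full social ordering rather than a single winner, which is what distinguishes an SWF from a social choice function.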

Aggregation Rules and Social Choice Functions


Aggregation rules in social choice theory constitute procedures for synthesizing individual preferences or utilities into a collective outcome, encompassing both ordinal and cardinal input structures. These rules operate on preference profiles, where each individual i \in I (with |I| = n \geq 2) expresses a preference ordering over a finite set of alternatives X (with |X| \geq 3), to yield either a social ranking or a selected subset of X.
Social choice functions represent a specific class of aggregation rules that map preference profiles directly to a non-empty subset of alternatives, often a singleton in resolute variants. Formally, given the domain \mathcal{D} of all possible profiles of strict weak orders on X, a social choice function f: \mathcal{D} \to 2^X \setminus \{\emptyset\} determines the socially chosen set for each profile. This contrasts with social welfare functions, which output a complete ordering on X, as social choice functions need not induce transitive social preferences across varying feasible sets. Arrow's 1951 framework initially emphasized welfare functions but extended implications to choice functions by deriving choices as maximal elements under the social ordering. Common examples of social choice functions include the plurality rule, where the alternative with the most first-place votes wins, and the Condorcet winner method, selecting the alternative that pairwise defeats all others if one exists. Dictatorial functions, where f(R) = \arg\max_{x \in X} R_i for a fixed i, trivially satisfy basic efficiency properties like Pareto optimality—defined as no alternative outside f(R) being preferred by all individuals to every element in f(R)—but violate anonymity and neutrality axioms requiring impartial treatment of individuals and alternatives, respectively. Amartya Sen's 1970 analysis in Collective Choice and Social Welfare highlighted how social choice functions can accommodate incomplete information or cardinal utilities, extending Arrow's ordinal focus to broader welfare considerations. Properties such as strategy-proofness—ensuring no individual can benefit by misrepresenting preferences—and onto-ness—requiring every alternative to be selectable under some profile—are central to evaluating aggregation rules, though impossibility results demonstrate inherent trade-offs for non-dictatorial functions over universal domains. 
Empirical applications, such as electoral systems, illustrate these functions' practical constraints, where rules like instant-runoff voting aggregate ranked preferences to mitigate spoilers but may still admit manipulability.
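A minimal sketch of two of the social choice functions named above, plurality and the Condorcet winner method, run on a hypothetical five-voter profile (the function names are illustrative):

```python
def plurality(profile):
    """Return the set of alternatives with the most first-place votes."""
    tallies = {}
    for ranking in profile:
        tallies[ranking[0]] = tallies.get(ranking[0], 0) + 1
    best = max(tallies.values())
    return {a for a, t in tallies.items() if t == best}

def condorcet_winner(profile):
    """Return the alternative beating every other in a pairwise majority, or None."""
    alts = set(profile[0])
    for x in alts:
        if all(2 * sum(r.index(x) < r.index(y) for r in profile) > len(profile)
               for y in alts - {x}):
            return x
    return None

# Hypothetical profile of five voters' rankings, best first.
profile = [("A", "B", "C")] * 2 + [("B", "C", "A")] * 2 + [("C", "B", "A")]
# Plurality ties A and B on first-place votes, yet B is the Condorcet winner.
```

The divergence between the two outputs on the same profile illustrates why the choice of aggregation rule, not just the preferences themselves, determines the outcome.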

Historical Development

Precursors in Enlightenment and 19th Century

Early explorations in aggregating individual preferences into collective decisions emerged during the Enlightenment, particularly in France, amid discussions of electoral reforms for the French Académie des sciences and the impending French Revolution. In 1781, Jean-Charles de Borda proposed a ranking-based voting method, now known as the Borda count, wherein voters assign points to candidates according to their ranked preferences (e.g., m-1 points for first place among m candidates, decreasing sequentially), and the candidate with the highest total points wins. This approach aimed to account for the intensity of preferences beyond mere plurality, reflecting a probabilistic view of voter judgments influenced by Enlightenment rationalism. The Marquis de Condorcet advanced these ideas in his 1785 Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix, critiquing Borda's method for potentially favoring mediocrity over the majority will. Condorcet advocated for pairwise comparisons to identify a Condorcet winner—a candidate who defeats every opponent in head-to-head votes—and introduced probabilistic jury theorems to argue that larger electorates yield more accurate collective judgments under independence assumptions. He also identified the voting paradox, where transitive individual preferences aggregate into intransitive social rankings (e.g., A > B, B > C, C > A by majority), foreshadowing later impossibility results, though his work emphasized probabilistic resolutions over deterministic aggregation failures. Parallel to these ordinal voting innovations, utilitarian philosophers developed cardinal approaches to social welfare. Jeremy Bentham's 1789 An Introduction to the Principles of Morals and Legislation posited the "greatest happiness principle," aggregating individual utilities arithmetically to evaluate policies, assuming interpersonal comparability and measurability of pleasure and pain. 
John Stuart Mill refined this in his 1861 Utilitarianism, distinguishing higher and lower pleasures while retaining summation as the basis for social choice, influencing welfare economics but diverging from ordinal methods by prioritizing total utility over preference orderings. In the late 19th century, Charles Lutwidge Dodgson (Lewis Carroll) extended voting analysis through pamphlets such as his 1876 A Method of Taking Votes on More than Two Issues, critiquing plurality voting's flaws (e.g., vote-splitting) and proposing amendments to election procedures. Dodgson's work, motivated by Oxford University elections, explored preference aggregation under strategic incentives and advocated ranked-choice methods akin to Condorcet's, bridging 18th-century insights to modern concerns without formalizing general theorems. These precursors laid groundwork for social choice by highlighting tensions between fairness criteria, though they lacked the axiomatic rigor of 20th-century developments.

Mid-20th Century Formalization

The formalization of social choice theory in the mid-20th century marked a shift from descriptive analyses of paradoxes to axiomatic models of aggregation, emphasizing mathematical rigor in evaluating collective decision mechanisms. This period saw economists and political scientists develop frameworks to assess how individual ordinal preferences could be combined into coherent social orderings, highlighting inherent tensions in democratic processes. Duncan Black's 1948 paper "On the Rationale of Group Decision-Making," published in the Journal of Political Economy, provided an early axiomatic treatment of majority voting in committees. Black demonstrated that under single-peaked preferences—where voters' ideal points align along a single dimension—majority rule yields transitive outcomes equivalent to the median voter's preference, resolving potential cycles in pairwise comparisons. His analysis rediscovered and formalized 18th-century insights from Condorcet on spatial models, laying groundwork for later probabilistic and game-theoretic extensions while underscoring conditions under which majority voting avoids inconsistency. Independently, Kenneth Arrow advanced this formalization in his 1951 monograph Social Choice and Individual Values, originating from his doctoral dissertation at Columbia University. Arrow defined a social welfare function as a mapping from profiles of individual strict orderings over alternatives to a complete social ordering, subject to axioms including universal domain (all preference profiles possible), the weak Pareto principle (unanimous preference implies social preference), independence of irrelevant alternatives, and non-dictatorship (no single individual determines all social choices). This framework, developed amid postwar interest in mathematical economics at institutions like the Cowles Commission, established social choice as a branch of welfare economics, proving that no such function satisfies all axioms for three or more alternatives, thus formalizing impossibility under ordinal assumptions. 
These contributions, building on concepts from earlier voting theory, integrated social choice with welfare economics and anticipated mechanism design by clarifying the logical limits of non-manipulable aggregation rules. Black's dimensional restrictions complemented Arrow's general impossibility, influencing subsequent empirical tests of median voter predictions in legislatures and elections.
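Black's median voter result can be checked numerically; the ideal points and options below are hypothetical, with each voter assumed to prefer options closer to their ideal point (the single-peakedness condition):

```python
def pairwise_majority_winner(ideal_points, alternatives):
    """Find an alternative that beats every other in pairwise majority votes,
    assuming single-peaked (distance-based) preferences on a line."""
    def prefers(v, x, y):
        # Voter with ideal point v strictly prefers x to y if x is closer.
        return abs(x - v) < abs(y - v)
    for x in alternatives:
        if all(2 * sum(prefers(v, x, y) for v in ideal_points) > len(ideal_points)
               for y in alternatives if y != x):
            return x
    return None

voters = [1, 4, 7]      # hypothetical ideal points on a single dimension
options = [2, 4, 6]
winner = pairwise_majority_winner(voters, options)
# The pairwise majority winner is the option nearest the median ideal point (4),
# as Black's theorem predicts.
```

No cycle can occur here: the option closest to the median voter's peak defeats every rival head-to-head.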

Late 20th and Early 21st Century Extensions

In the 1980s, social choice theorists advanced characterizations of strategy-proof mechanisms, building on the Gibbard-Satterthwaite theorem by imposing domain restrictions to identify feasible rules. Hervé Moulin (1980) proved that on single-peaked preference domains, anonymous and strategy-proof social choice functions coincide with generalized median rules, which select outcomes at specific quantiles of voters' ideal points. Similarly, Border and Jordan (1983) extended these results to multidimensional settings, showing that strategy-proof rules under single-peaked preferences reduce to selecting medians along multiple dimensions. These findings highlighted how relaxing unrestricted domain assumptions could yield constructive aggregation methods, though they remained vulnerable to manipulation outside restricted domains. The 1990s saw geometric and topological approaches to dissecting voting paradoxes, with Donald Saari (1994) using geometric methods to quantify the prevalence of cycles in pairwise majority voting, demonstrating that Condorcet paradoxes arise generically but can be mitigated by certain scoring rules. Extensions to cardinal utilities also progressed, as Prasanta Pattanaik and others explored impossibility results for aggregating preference intensities while respecting ordinal information, revealing tensions between Pareto efficiency and rights protection in Sen's framework. These developments underscored the limits of ordinalism and prompted models incorporating interpersonal comparisons under partial comparability. Entering the early 2000s, judgment aggregation formalized the aggregation of binary opinions on logically linked propositions, generalizing Arrow's theorem to non-preference judgments. Christian List and Philip Pettit (2002) established an impossibility result: no non-dictatorial method aggregates individual judgments into consistent collective outputs while satisfying universal domain, anonymity, and systematicity, mirroring discursive dilemmas in group deliberation. 
This framework revealed doctrinal paradoxes in legal and ethical contexts, where majority voting on premises yields inconsistent conclusions despite consistent individual views. Concurrently, computational social choice integrated complexity theory into aggregation problems, formalizing itself as a field around 2000 with analyses of winner determination and manipulation hardness. Bartholdi, Tovey, and Trick (1989) demonstrated NP-hardness of winner determination under rules such as Dodgson's and Kemeny's, while early 2000s work by Conitzer and Sandholm (2002) quantified strategic manipulation costs, showing polynomial-time solvability for some rules but intractability for others, such as coalitional weighted manipulation of the Borda count. These extensions emphasized practical constraints, influencing real-world system design amid large-scale elections.

Fundamental Theorems and Impossibilities

Arrow's Impossibility Theorem (1951)

Arrow's impossibility theorem, formally proved by economist Kenneth J. Arrow in his 1951 monograph Social Choice and Individual Values, asserts that no social welfare function (SWF) exists that aggregates individual ordinal preferences into a transitive social ordering while satisfying four key axioms—unrestricted domain, Pareto efficiency, independence of irrelevant alternatives (IIA), and non-dictatorship—when there are at least three alternatives and at least two individuals. The theorem highlights a fundamental tension in democratic decision-making: preferences that are rational and complete at the individual level cannot be consistently aggregated into a rational social preference without violating one of these conditions. A social welfare function takes as input a profile of individual preference orderings—strict, complete, and transitive rankings over a set X of alternatives with |X| \geq 3—and outputs a social preference ordering that is also complete and transitive. The unrestricted domain axiom requires the SWF to be defined for every logically possible profile of individual preferences, ensuring broad applicability without restricting admissible inputs. Pareto efficiency (or unanimity) stipulates that if every individual strictly prefers alternative x to y, then the social ordering must rank x above y, capturing the intuitive idea that unanimous agreement should be respected. The IIA condition demands that the social ranking between any two alternatives x and y depends solely on individuals' relative rankings of x and y, unaffected by preferences over irrelevant third options, preventing arbitrary shifts from extraneous information. Finally, non-dictatorship prohibits any single individual from unilaterally determining the social ranking for every pair of alternatives across all profiles. The proof proceeds by contradiction, often employing a strategy of identifying "decisive" sets or using ordinal techniques to show that satisfying the first three axioms forces the existence of a dictator. 
One constructive approach, as outlined in simplified single-profile variants, assumes a two-person society and neutrality to derive contradictions under IIA and Pareto, extending to general cases via induction. For instance, Terence Tao's accessible proof leverages the axioms to demonstrate that the social ordering must replicate an individual's preferences in decisive scenarios, propagating to a full dictatorship. Computer-aided verification of the base case with two voters and three alternatives confirms that no compliant function exists, and inductive arguments extend this to larger electorates. The theorem's implications underscore the inescapability of trade-offs in preference aggregation: real-world voting systems, such as plurality or the Borda count, inevitably violate at least one axiom, often IIA, or lead to intransitivities akin to the Condorcet paradox. Arrow's result, building on earlier paradoxes, shifted focus from ordinal rankings to cardinal utilities or probabilistic methods in later social choice research, influencing fields like mechanism design. Despite critiques questioning realism—e.g., IIA's stringency in multi-issue settings—the theorem remains a cornerstone, rigorously establishing via logical deduction that fair, non-dictatorial ordinal aggregation is impossible under the specified conditions.
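To make the IIA condition concrete, the following sketch shows the Borda count violating it: across two hypothetical profiles, every voter's relative ranking of A versus B is unchanged, yet the social comparison of A and B flips because only C's position moved.

```python
def borda_scores(profile):
    """Borda scores: best of m alternatives gets m-1 points, worst gets 0."""
    m = len(profile[0])
    scores = {}
    for ranking in profile:
        for pos, alt in enumerate(ranking):
            scores[alt] = scores.get(alt, 0) + (m - 1 - pos)
    return scores

# Two profiles in which each voter's relative ranking of A vs. B is identical:
# voter 1 always puts A above B, voter 2 always puts B above A.
p1 = [("A", "B", "C"), ("B", "C", "A")]
p2 = [("A", "C", "B"), ("B", "A", "C")]
s1, s2 = borda_scores(p1), borda_scores(p2)
# In p1, B outscores A; in p2, A outscores B -- the social A-vs-B comparison
# changed even though no voter changed their A-vs-B ranking, violating IIA.
```

This is exactly the kind of sensitivity to "irrelevant" third options that Arrow's IIA axiom rules out.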

Condorcet Paradox and Preference Cycles

The Condorcet paradox refers to the situation in which a majority of voters prefer alternative A to B, a majority prefer B to C, and a majority prefer C to A, resulting in intransitive social preferences despite transitive individual preferences. This phenomenon was first identified by the Marquis de Condorcet in his 1785 work Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix, where he illustrated how pairwise comparisons among three or more alternatives can fail to produce a coherent social ordering. Condorcet noted this as a limitation of majority rule, observing that it could lead to inconsistent collective outcomes even under rational individual orderings. A canonical example involves three voters and three alternatives, A, B, and C, with the following strict preference orderings:
Voter    Preference Ranking
1        A > B > C
2        B > C > A
3        C > A > B
In pairwise contests, A defeats B by a 2-1 majority (Voters 1 and 3), B defeats C by a 2-1 majority (Voters 1 and 2), and C defeats A by a 2-1 majority (Voters 2 and 3), forming a cycle with no alternative that pairwise beats all others—that is, no Condorcet winner. This configuration, known as a "cycle of length three," demonstrates that majority aggregation does not preserve transitivity, a core property of rational choice, as the social preference relation lacks a maximal element under repeated majority comparisons. Preference cycles generalize this issue in social choice theory, occurring when aggregated preferences over any finite set of alternatives form a non-transitive loop, such as A socially preferred to B, B to C, and C to A, potentially extending to longer circuits. Such cycles arise under neutral aggregation rules like majority voting because individual preferences, even when otherwise well structured, can align in ways that induce instability in the social ordering; for instance, probabilistic models of preference generation show cycles with positive probability across diverse electorates. In formal terms, even if individual preferences are complete, reflexive, and transitive, the social preference induced by a profile of such orderings may violate these axioms, leading to violations of acyclicity and thus no stable social choice without additional restrictions. The paradox implies that pure majority rule cannot guarantee a transitive social preference relation, challenging the foundations of democratic aggregation and motivating criteria like the Condorcet winner condition, under which a rule selects an alternative that pairwise defeats all others if one exists. However, since cycles occur with non-zero likelihood in large electorates—estimated under impartial culture models to approach 8.77% for three alternatives as voter numbers grow—practical systems must either accept potential inconsistency or impose tie-breaking mechanisms, domain restrictions, or rules like scoring methods to resolve indeterminacy. 
This underscores a core impossibility in social choice: aggregating ordinal preferences via majorities yields rational social choices only probabilistically or under restrictive domain assumptions, such as single-peaked preferences that eliminate cycles.
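The pairwise tallies for the three-voter example above can be reproduced directly:

```python
from itertools import permutations

def pairwise_margins(profile):
    """For each ordered pair (x, y), count voters ranking x above y."""
    alts = profile[0]
    wins = {}
    for x, y in permutations(alts, 2):
        wins[(x, y)] = sum(r.index(x) < r.index(y) for r in profile)
    return wins

# The canonical cycle profile from the table above.
cycle_profile = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]
wins = pairwise_margins(cycle_profile)
# A beats B 2-1, B beats C 2-1, and C beats A 2-1: a majority cycle,
# so no Condorcet winner exists despite transitive individual rankings.
```

Each individual ranking is perfectly transitive; only the majority relation built from them cycles.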

Gibbard-Satterthwaite Theorem on Manipulation

The Gibbard-Satterthwaite theorem establishes a fundamental limitation on the design of strategy-proof social choice mechanisms. It states that any non-dictatorial social choice function selecting a single alternative from a set of at least three alternatives, for two or more voters with ordinal preferences, must be manipulable if it is surjective (i.e., every alternative can be chosen under some preference profile). A social choice function is manipulable if there exists at least one preference profile and one voter such that misreporting that voter's true preferences yields a more preferred outcome for them, while others report truthfully. Strategy-proofness requires truthful reporting to be a dominant strategy, preventing such profitable deviations. Mark Satterthwaite proved a version of the theorem in 1975 for deterministic voting procedures satisfying citizen sovereignty (surjectivity) and non-dictatorship, showing they violate strategy-proofness. Allan Gibbard proved the deterministic result independently in 1973 and extended it in 1977 to a broader class of non-deterministic social choice rules, demonstrating that only random dictatorships—where outcomes are probabilistic mixtures of individual dictators' choices—are incentive-compatible. The theorem assumes an unrestricted domain of strict ordinal preferences and focuses on non-dictatorial rules, excluding trivial cases like constant functions or single-voter scenarios. Proofs typically proceed by contradiction: assume a strategy-proof, surjective, non-dictatorial function exists, then identify a "pivotal" voter whose preference shift can alter the outcome between any pair of alternatives, implying local dictatorship; extending this globally yields full dictatorship, violating the assumption. Simpler variants first handle two-alternative cases (where majority rule is strategy-proof) before inducting to three or more. 
The result complements Arrow's theorem by shifting focus from collective rationality conditions (like transitivity and independence of irrelevant alternatives) to individual incentives, revealing that even weaker incentive constraints force dictatorial outcomes. Implications include the vulnerability of common voting rules like plurality, the Borda count, or instant-runoff to tactical voting, where voters rank insincerely to elevate preferred candidates or block disliked ones. For instance, in plurality voting with three candidates A, B, C, a voter preferring A > B > C may strategically vote for B instead of A to prevent C's win if others split votes. Exceptions arise in restricted domains (e.g., single-peaked preferences, where median voter rules are strategy-proof) or with two alternatives, but these do not generalize to unrestricted settings with three or more options. Quantitative extensions bound manipulability probabilities, showing that under impartial culture assumptions, neutral rules far from dictatorship admit profitable manipulations with non-negligible probability.
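The tactical-voting scenario above can be sketched with a hypothetical seven-voter electorate under plurality; alphabetical tie-breaking is an assumption added here for determinism:

```python
def plurality_winner(ballots):
    """Single-choice plurality; ties broken alphabetically for determinism."""
    tally = {}
    for b in ballots:
        tally[b] = tally.get(b, 0) + 1
    # max over alphabetically sorted candidates returns the first highest tally.
    return max(sorted(tally), key=lambda c: tally[c])

others = ["A", "B", "B", "C", "C", "C"]       # the other six voters' ballots
# The seventh voter's true preference is A > B > C.
sincere = plurality_winner(others + ["A"])    # C wins: their worst outcome
strategic = plurality_winner(others + ["B"])  # B wins: misreporting pays off
```

Because the voter strictly prefers B to C, abandoning their sincere top choice A is a profitable manipulation, exactly the behavior the theorem says cannot be ruled out.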

Additional Impossibility and Possibility Results

Amartya Sen's 1970 impossibility theorem, often termed the Paretian liberal paradox, establishes that no social decision function can satisfy three conditions simultaneously under unrestricted ordinal preferences: the weak Pareto principle (unanimous preference for an alternative over another implies social preference for it), minimal liberalism (at least two individuals each have decisive rights over a pair of alternatives affecting only themselves), and either the absence of social preference cycles or the existence of a socially maximal element for every profile. This result highlights tensions between efficiency and individual rights, as liberalism requires non-interference in personal domains, yet Pareto optimality can force overrides when individual rights conflict with unanimous preferences, as in Sen's example of a prude and a lewd individual disputing which of them should read a controversial book. Sen's theorem assumes at least three alternatives and two individuals, and it holds even without requiring full transitivity of social preference, underscoring that rights-based constraints exacerbate aggregation difficulties beyond Arrow's framework. The Muller-Satterthwaite theorem (1977) provides another impossibility for social choice functions selecting a single winner from a finite set of at least three alternatives. It states that any such function satisfying unanimity (if all voters rank A above every other alternative, A is selected), surjectivity (every alternative wins under some profile), and strong monotonicity (if A wins and an individual shifts support toward A without reversing any other pairwise comparison, A still wins) must be dictatorial. This refines Gibbard-Satterthwaite by focusing on monotonicity rather than strategy-proofness directly, showing that non-dictatorial rules inevitably violate one of these conditions for ordinal profiles; the proof exploits the contrapositive, demonstrating manipulation opportunities via monotone adjustments that preserve or strengthen support. Possibility results emerge when relaxing unrestricted domains. 
On single-peaked preference domains—where alternatives are linearly ordered and each voter's preferences peak at one point, decreasing away from it—the median rule, selecting the median reported peak among an odd number of voters, is anonymous, Pareto efficient, strategy-proof, and non-dictatorial for any number of alternatives. This rule avoids cycles and manipulation because misreporting a peak cannot beneficially shift the median toward the misreporter's true peak, as formalized in Black's 1948 median voter analysis and extended to strategy-proofness by Moulin. Similarly, for single-crossing domains (preferences align along a single dimension without crossings), non-dictatorial strategy-proof rules exist, such as median-voter variants, enabling aggregation without Gibbard-Satterthwaite impossibilities. These domain restrictions, empirically relevant in spatial voting over ideological positions, demonstrate feasible non-dictatorial mechanisms when preferences exhibit structure absent in general cases.
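A minimal sketch of the median rule on a single-peaked domain, with hypothetical peaks, showing why a misreport cannot help:

```python
from statistics import median

def median_rule(reported_peaks):
    """Select the median of reported ideal points; with an odd number of
    voters on a single-peaked domain this rule is strategy-proof."""
    return median(reported_peaks)

truthful = median_rule([2, 5, 9])      # outcome 5
# The voter with true peak 9 tries to drag the outcome rightward by
# overstating, but any report above the current median leaves it unchanged...
overstated = median_rule([2, 5, 30])   # still 5
# ...while reporting below the median only moves the outcome farther away.
understated = median_rule([2, 5, 3])   # 3, which is worse for a peak at 9
```

The voter's report matters only through which side of the median it falls on, so no misreport moves the outcome closer to their true peak.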

Applications and Methodological Extensions

Mechanism Design and Incentive Compatibility

Mechanism design represents the engineering counterpart to social choice theory, focusing on the construction of institutional rules—termed mechanisms—that induce agents with private information to behave in ways that realize specific social objectives, such as Pareto-efficient allocations or equitable outcomes. Central to this approach is addressing strategic misrepresentation of preferences, a core challenge identified in social choice contexts like voting or resource distribution. Leonid Hurwicz pioneered the framework in the 1960s, emphasizing decentralized mechanisms where agents' incentives align with truthful reporting to achieve informational efficiency without a central authority possessing full knowledge. Incentive compatibility formalizes the requirement that a mechanism renders honest revelation of private types (preferences or valuations) an optimal strategy for each agent, either in dominant strategies—regardless of others' actions—or in Bayesian Nash equilibrium under uncertainty about types. A direct mechanism, where agents simply report their types to a social choice function determining outcomes and transfers, serves as the canonical form; indirect mechanisms, involving complex message spaces, can often be analyzed equivalently through this lens. The revelation principle establishes that any equilibrium outcome achievable by an arbitrary mechanism corresponds to a truthful direct mechanism, reducing the design space and enabling focus on truth-inducing rules. This result, rooted in equilibrium refinements from the 1970s onward, underpins much of modern analysis by confirming that strategic simplicity does not preclude complex implementations. Within social choice, incentive compatibility confronts foundational impossibilities. The Gibbard-Satterthwaite theorem proves that, for domains with at least three alternatives, no non-dictatorial social choice function—capable of selecting any alternative under some preference profile—is dominant-strategy incentive compatible, implying unavoidable manipulability in unrestricted preference aggregation like voting. 
This 1973-1975 result shifts attention from full strategy-proofness to weaker criteria or restricted domains, such as single-peaked preferences where median voter rules prove incentive compatible. Possibility theorems emerge in quasilinear settings, where agents' utilities decompose into value for outcomes plus linear monetary transfers, facilitating efficiency via side payments. The Vickrey-Clarke-Groves (VCG) mechanisms, building on Vickrey's 1961 second-price auction, Clarke's 1971 pivot rule, and Groves' 1973 generalization, implement welfare-maximizing social choice functions in dominant strategies by taxing each agent the externality their participation imposes on others' welfare. For instance, in public goods provision, VCG selects the efficient project and levies Clarke taxes to internalize externalities, ensuring truth-telling despite positive informational rents. These mechanisms apply to combinatorial allocation problems akin to social choice extensions, though they often violate budget balance or individual rationality ex post. Limitations persist even in quasilinear environments, as the Myerson-Satterthwaite theorem (1983) demonstrates: no mechanism can simultaneously achieve ex post efficiency, Bayesian incentive compatibility, individual rationality, and budget balance in bilateral trade with overlapping value supports and private information. This underscores trade-offs between incentive alignment and fiscal neutrality, prompting designs that approximate efficiency or relax axioms, such as posted-price mechanisms yielding constant-factor guarantees. In social choice applications, these insights inform hybrid rules blending ordinal aggregation with transfer-based approximations, prioritizing empirical tractability over theoretical ideals.
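In the single-item case, the VCG mechanism reduces to Vickrey's second-price auction: the winner pays the externality they impose on the others, which is exactly the second-highest bid. The bidder names below are hypothetical:

```python
def vcg_single_item(bids):
    """VCG for one indivisible item: the highest bidder wins and pays the
    welfare the losers forgo because of the winner's presence, i.e., the
    second-highest bid (Vickrey's second-price rule)."""
    ranked = sorted(bids.items(), key=lambda kv: -kv[1])
    winner, _ = ranked[0]
    payment = ranked[1][1] if len(ranked) > 1 else 0
    return winner, payment

winner, price = vcg_single_item({"alice": 10, "bob": 7, "carol": 4})
# alice wins and pays 7; because her payment does not depend on her own bid,
# bidding her true value is a dominant strategy.
```

The same externality-pricing logic generalizes to multi-item and public-goods settings, at the cost of the budget-balance failures noted above.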

Computational Social Choice

Computational social choice examines the algorithmic and computational aspects of aggregating individual preferences into collective decisions, drawing on techniques from theoretical computer science to analyze social choice mechanisms like voting rules. It addresses questions such as the efficiency of computing outcomes, the detectability of strategic deviations, and the design of computationally feasible protocols that respect incentive constraints. Originating from early investigations into the hardness of electoral problems, the field gained momentum in the late 1990s and early 2000s, integrating complexity theory to reveal when theoretical ideals clash with practical computability. A primary focus is the computational complexity of winner determination, where the input consists of voter preference rankings over candidates. Simple rules like plurality, which tally first-place votes, admit polynomial-time algorithms. In contrast, Kemeny-Young ranking, minimizing total pairwise disagreements across voters, is NP-hard even for a fixed small number of voters, as shown by reductions from the feedback arc set problem. Dodgson's method, counting minimal adjacent transpositions to establish a Condorcet winner, is likewise NP-hard via similar reductions. These results, established in the late 1980s and refined thereafter, underscore that many Pareto-efficient and strategy-resistant rules incur exponential costs, prompting shifts toward hybrid or approval-based systems in large-scale elections. Strategic manipulation—altering reported preferences to sway outcomes—forms another core strand, building on impossibility theorems like Gibbard-Satterthwaite by quantifying its feasibility. For many rules, including plurality and Borda, single-agent manipulation is solvable in polynomial time via greedy construction of the manipulator's ballot. However, coalitional manipulation with weighted voters is NP-complete for rules such as Borda, requiring enumeration over subset strategies. Bribery, where an attacker pays voters to change rankings, is NP-hard for most positional rules, with exact costs computable via integer programming but approximations needed for scale. 
Control actions, such as adding or deleting voters, exhibit similar hardness patterns, though parameterized analyses (e.g., by the number of control actions) yield fixed-parameter tractable algorithms for several rules. These findings suggest computational barriers deter widespread manipulation in complex rules, offering robustness absent in easy-to-game systems. Beyond elections, computational social choice extends to fair division and matching, analyzing complexity in allocating indivisible goods under envy-freeness or proportionality constraints—often NP-hard for cardinal utilities—and designing truthful mechanisms via algorithmic mechanism design. Communication complexity studies minimize information exchange for consensus, revealing exponential requirements for exact equilibria in some settings. Empirical tools, including benchmark datasets from real elections, facilitate testing heuristics, while approximation schemes mitigate hardness, as in constant-factor approximations for Kemeny rankings via LP relaxations. Applications span multi-agent systems, where rules must scale to millions of agents, and platform governance, prioritizing efficiency over perfection.
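The exponential cost of exact Kemeny aggregation is visible even in a brute-force sketch, which scans all m! candidate orderings and is therefore feasible only for tiny m:

```python
from itertools import permutations

def kemeny_ranking(profile):
    """Exact Kemeny: return the ordering minimizing total pairwise
    disagreements with the voters. The m! search explains why the rule
    does not scale, consistent with its NP-hardness in general."""
    alts = profile[0]
    def disagreements(order):
        pos = {a: i for i, a in enumerate(order)}
        # Count (voter, pair) combinations where the voter and the candidate
        # ordering rank the pair in opposite directions.
        return sum(sum(1 for x in alts for y in alts
                       if r.index(x) < r.index(y) and pos[x] > pos[y])
                   for r in profile)
    return min(permutations(alts), key=disagreements)

# Hypothetical three-voter profile; the Kemeny-optimal ordering is A > B > C.
profile = [("A", "B", "C"), ("A", "C", "B"), ("B", "A", "C")]
best = kemeny_ranking(profile)
```

Practical implementations replace this search with integer programming or the LP-relaxation approximations mentioned above.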

Voting Rules and Practical Systems

Voting rules in social choice theory constitute specific social choice functions designed to aggregate individual ordinal preferences into a collective decision, typically selecting a winner or set of winners from a slate of candidates. These rules vary in their mechanisms, such as plurality, where each voter selects a single preferred candidate and the one with the most first-place votes wins; the Borda count, which assigns points based on rank position (e.g., m-1 for first place in an m-candidate race, decreasing sequentially); and Condorcet methods, which evaluate pairwise majorities to identify a candidate who defeats every opponent head-to-head. Plurality voting, while computationally simple and widely implemented, fails several desirable properties: it violates the Condorcet criterion (electing a candidate who loses pairwise to another) and exhibits the spoiler effect, where similar candidates split votes, allowing a less preferred option to win, as analyzed in preference aggregation models. In contrast, the Borda count promotes compromise candidates by rewarding higher rankings across voters but is susceptible to strategic manipulation and violates independence of irrelevant alternatives (IIA), where adding a non-winning option can alter the outcome between top contenders. Condorcet-consistent rules, such as Copeland's method, prioritize majority pairwise victories but may encounter cycles where no Condorcet winner exists, necessitating tie-breaking procedures such as a random tiebreaker or fallback to another rule. Practical electoral systems draw from these rules, often hybridizing them to balance fairness, simplicity, and decisiveness amid real-world constraints like voter literacy and administrative costs. First-past-the-post (FPTP), a plurality variant, dominates single-winner elections in the United States (e.g., congressional districts since 1789) and the United Kingdom (general elections), fostering two-party dominance but criticized for underrepresenting minorities and amplifying tactical voting.
Two-round systems, requiring a majority in the first round or a runoff between the top two in the second, are employed in French presidential elections (since 1962), mitigating plurality's flaws by allowing preference expression in later stages while still vulnerable to strategic withdrawal. Alternative vote (AV) or instant-runoff voting (IRV), used in Australian House of Representatives elections since 1918, simulates sequential elimination by redistributing votes from eliminated candidates based on voters' ranked preferences, satisfying later-no-harm (ranking additional choices cannot hurt an earlier choice) but failing monotonicity and IIA and occasionally producing non-Condorcet winners. Single transferable vote (STV), a multi-winner extension of AV applied in Irish parliamentary elections since 1922, aims for proportional representation by filling quotas via surplus transfers and eliminations, though it can yield disproportional outcomes due to vote exhaustion and district magnitudes. Approval voting, where voters approve any number of candidates and the one with the most approvals wins, has been adopted experimentally in professional society elections and in Fargo, North Dakota municipal races (since 2018), offering resistance to spoilers but assuming cardinal utilities over ordinal rankings.
| Voting Rule | Key Mechanism | Strengths | Weaknesses | Real-World Use |
| --- | --- | --- | --- | --- |
| Plurality (FPTP) | Single vote per voter; most first-place votes wins | Simplicity, quick tally | Spoiler effect, fails Condorcet criterion | US Congress, UK Parliament |
| Borda Count | Points by rank position | Rewards consensus | IIA violation, manipulable | Some academic committees (historical) |
| Condorcet (e.g., Copeland) | Pairwise majorities | Elects majority-preferred candidate | Cycles, computational cost | Limited; proposed for reforms |
| Instant-Runoff (IRV/AV) | Ranked elimination with redistribution | Reduces vote-splitting | Non-monotonicity, IIA failure, complexity | Australia lower house |
| Approval | Multi-approval; most approvals wins | Expresses breadth of support, spoiler-resistant | Ignores rankings | Fargo, ND (2018–) |
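The divergence between these mechanisms can be seen on a single profile. A minimal sketch, using a hypothetical nine-voter profile in which plurality elects a candidate who loses every pairwise contest, while Borda, Condorcet, and IRV all agree on the compromise candidate:

```python
from collections import Counter

# 4 voters: A > B > C; 3 voters: B > C > A; 2 voters: C > B > A.
ballots = [("A", "B", "C")] * 4 + [("B", "C", "A")] * 3 + [("C", "B", "A")] * 2

def plurality(ballots):
    tally = Counter(b[0] for b in ballots)
    return max(tally, key=tally.get)

def borda(ballots):
    m = len(ballots[0])
    score = Counter()
    for b in ballots:
        for pos, c in enumerate(b):
            score[c] += m - 1 - pos        # m-1 points for first place, etc.
    return max(score, key=score.get)

def condorcet(ballots):
    cands = set(ballots[0])
    n = len(ballots)
    for c in cands:
        if all(sum(b.index(c) < b.index(d) for b in ballots) * 2 > n
               for d in cands - {c}):
            return c
    return None                            # cycle: no Condorcet winner

def irv(ballots):
    active = set(ballots[0])
    while len(active) > 1:
        tally = Counter(next(c for c in b if c in active) for b in ballots)
        if max(tally.values()) * 2 > len(ballots):
            return max(tally, key=tally.get)
        active.remove(min(tally, key=tally.get))
    return active.pop()

print(plurality(ballots), borda(ballots), condorcet(ballots), irv(ballots))
```

Here A tops the first-place tally, yet a 5-voter majority ranks B above A, so plurality elects a non-Condorcet winner exactly as the spoiler-effect discussion predicts.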
Despite theoretical impossibilities like Arrow's theorem precluding universally fair rules, practical systems persist by prioritizing implementability over full satisfaction of axioms such as IIA or strategy-proofness in large electorates, with empirical studies revealing persistent strategic behavior and legitimacy varying by context—preferential methods are often perceived as fairer in diverse electorates.

Interdisciplinary Connections

Relation to Public Choice Theory

Public choice theory emerged in the mid-20th century as an application of economic reasoning to political decision-making, treating voters, politicians, and bureaucrats as self-interested rational actors akin to market participants, and it draws foundational insights from social choice theory's analysis of preference aggregation. While social choice theory focuses on the normative and logical challenges of deriving coherent social preferences from individual ones—often revealing impossibilities under reasonable axioms—public choice extends this to positive explanations of political outcomes, emphasizing strategic behavior, rent-seeking, and institutional incentives that distort aggregation. For instance, public choice incorporates social choice results like the median voter theorem, originally formalized by Duncan Black in 1948 and popularized by Anthony Downs in 1957, to predict policy convergence toward the median voter's preference in two-party spatial competition under single-peaked preferences. A pivotal influence was Kenneth Arrow's 1951 impossibility theorem, which demonstrated that no voting system can simultaneously satisfy universal domain, non-dictatorship, Pareto efficiency, and independence of irrelevant alternatives, prompting theorists to reject idealized social welfare functions in favor of rule-based constraints. James M. Buchanan, a founder of public choice theory, critiqued Arrow's framework in his 1954 essay "Social Choice, Democracy, and Free Markets," arguing that preference cycles (as in the Condorcet paradox) do not undermine democracy but reflect opportunities for bargaining and market-like exchanges in politics, provided institutions allow voluntary agreement. Buchanan and Gordon Tullock further operationalized this in The Calculus of Consent (1962), using social choice concepts to model constitutional design as a unanimous contract minimizing decision costs and externalities, shifting focus from outcome aggregation to pre-commitment rules that limit majority tyranny. Despite overlaps, public choice diverges by prioritizing empirical and behavioral realism over pure axiomatic impossibilities, incorporating transaction costs, rent-seeking, and bureaucratic discretion absent in many social choice models, which often assume sincere revelation and ordinal preferences.
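The median voter prediction referenced above can be illustrated numerically. A minimal sketch, assuming voters with single-peaked (distance-based) preferences over a one-dimensional policy line; the ideal points are hypothetical:

```python
def pairwise_majority_winner(x, y, ideals):
    """With single-peaked preferences, each voter prefers the alternative
    closer to their ideal point; return the pairwise majority choice."""
    votes_x = sum(abs(i - x) < abs(i - y) for i in ideals)
    votes_y = sum(abs(i - y) < abs(i - x) for i in ideals)
    return x if votes_x > votes_y else y

ideals = [0.1, 0.3, 0.5, 0.8, 0.9]           # voter ideal points
median = sorted(ideals)[len(ideals) // 2]    # the median voter's peak

# Black's theorem: the median ideal point beats every challenger pairwise.
challengers = [0.0, 0.2, 0.4, 0.7, 1.0]
results = [pairwise_majority_winner(median, c, ideals) for c in challengers]
print(median, results)
```

Because preferences are single-peaked, a majority always sits on the median's side of any challenger, so the median position (0.5 here) wins every head-to-head vote and no Condorcet cycle can arise.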
This integration has informed critiques of centralized planning and advocacy for decentralized, incentive-compatible institutions, reflecting Buchanan's view that political exchange enables dynamic adjustment rather than stasis. Empirical applications in public choice economics, such as analyses of pork-barrel spending or rent-seeking, build on social choice's warnings about manipulability (e.g., the Gibbard-Satterthwaite theorem) to explain real-world inefficiencies.

Interpersonal Utility Comparisons and Welfare Economics

Interpersonal utility comparisons (IUC) refer to the evaluation of utility levels or differences across distinct individuals, essential for constructing social welfare functions that aggregate ordinal preferences into cardinal interpersonal assessments in welfare economics. In social choice theory, IUC extend beyond Arrow's ordinal framework by assuming utilities are not merely ranked within individuals but comparable between them, enabling criteria like utilitarianism, where total welfare is the sum of individual utilities. Without IUC, social choice is confined to Pareto comparisons, under which a change is deemed better only if it improves at least one person's position without worsening another's, limiting normative guidance in trade-offs. The debate originated in the 1930s with Lionel Robbins' argument in his 1932 book An Essay on the Nature and Significance of Economic Science that IUC lack scientific foundation, as utilities are subjective psychological magnitudes unverifiable empirically, advocating a shift to positive economics focused on revealed preferences and ordinalism. I.M.D. Little countered in his 1950 Critique of Welfare Economics that rejecting IUC renders welfare economics impotent for policy, as real decisions inherently involve such judgments, proposing pragmatic interpersonal equity weights based on ethical intuitions rather than strict measurability. Kenneth Arrow's 1951 impossibility theorem reinforced ordinalism by demonstrating no non-dictatorial aggregation of ordinal preferences satisfies basic fairness axioms, implicitly sidestepping IUC to highlight aggregation challenges without cardinal assumptions. John Harsanyi advanced a defense of IUC in his 1955 paper "Cardinal Welfare, Individualistic Ethics, and Interpersonal Comparisons of Utility," positing that under von Neumann-Morgenstern expected utility theory, individuals possess cardinal utilities, and impartial "extended preferences" over lotteries allow interpersonal ratios via an ethical observer averaging personal utilities equally.
This enables utilitarian social welfare functions, resolving Arrow's impossibility by incorporating comparability, though critics note it relies on untested axioms like neutrality in ethical judgments and assumes utilities reflect identical "common units" across diverse preferences. Empirical challenges persist, as utilities derive from introspective satisfaction, defying interpersonal calibration without behavioral proxies like willingness-to-pay, which conflate preferences with income effects and reveal biases in market data. In welfare economics, partial IUC—such as affine transformations preserving ratios of differences—support Kaldor-Hicks efficiency, where gains exceed losses in monetary terms, presuming compensation feasibility, but this invites Scitovsky reversals where reciprocal changes cycle efficiency rankings. Social choice extensions with full IUC permit diverse functionals, like Rawlsian maximin prioritizing the worst-off, but require resolving ordinal-cardinal tensions; Amartya Sen's capability approach sidesteps direct IUC by comparing functionings and freedoms, emphasizing observed outcomes over latent utilities. Despite such defenses, much of welfare economics remains skeptical, viewing IUC as value-laden rather than factual, prioritizing Paretian limits to avoid unsubstantiated ethical imports into analysis.
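The dependence of utilitarian judgments on interpersonal scaling, versus the scale-independence of the Pareto criterion, can be shown with a toy calculation. A minimal sketch with hypothetical two-person utility numbers; the weights stand in for the choice of "common units":

```python
def utilitarian_best(profiles, weights):
    """Pick the alternative maximizing the weighted utility sum; the
    weights encode an interpersonal comparison of utility units."""
    totals = {alt: sum(w * u for w, u in zip(weights, us))
              for alt, us in profiles.items()}
    return max(totals, key=totals.get)

def pareto_dominates(us, vs):
    """True if allocation us is at least as good for everyone and strictly
    better for someone."""
    return (all(a >= b for a, b in zip(us, vs))
            and any(a > b for a, b in zip(us, vs)))

# Utilities of (person 1, person 2) under two policies -- hypothetical.
profiles = {"X": (10, 2), "Y": (4, 6)}

# Neither policy Pareto-dominates the other, so Pareto reasoning is silent...
neither = (not pareto_dominates(profiles["X"], profiles["Y"])
           and not pareto_dominates(profiles["Y"], profiles["X"]))

# ...and the utilitarian verdict flips with the interpersonal scale.
print(neither,
      utilitarian_best(profiles, (1, 1)),   # equal units: X (12 vs 10)
      utilitarian_best(profiles, (1, 3)))   # person 2's units tripled: Y
```

Rescaling one person's utility, which is harmless within ordinal individual choice, reverses the social ranking, which is exactly the sensitivity Robbins' critique targets.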

Empirical and Experimental Investigations

Laboratory Experiments on Preference Aggregation

Laboratory experiments provide controlled environments to test theoretical predictions of social choice theory, such as the emergence of voting paradoxes, the prevalence of strategic voting, and the efficiency of aggregation rules under induced preferences. Participants typically express ordinal or cardinal preferences over alternatives, often with monetary incentives tied to outcomes, allowing researchers to isolate causal effects of rules like plurality or approval voting from real-world confounds. These studies reveal that theoretical impossibilities, like Condorcet cycles, occur but at lower frequencies than probabilistic models predict, often due to behavioral regularities such as sincere voting or coordination failures. Early experiments focused on Condorcet paradoxes, where majority preferences form cycles (e.g., A beats B, B beats C, C beats A). In a 2003 study with students playing a repeated game over rotation rules for a geometric task, artificial double-peaked preferences were induced to facilitate cycles; groups exhibited cyclical voting patterns, such as alternating between rules T and E, despite theoretical stability expectations, attributed to individual errors rather than inherent instability. Cycles appeared in some sessions but not others, with groups sometimes converging to steady states, suggesting paradoxes are fragile to noise. Experiments on information aggregation under divided electorates compare rules like plurality and approval voting. A laboratory study with 24 participants per session over 100 rounds tested scenarios with a divided majority receiving private signals about a common-value alternative; approval voting yielded higher welfare (e.g., 183.95 payoff units vs. 138.50 for plurality in small-minority treatments) by better aggregating signals and electing Condorcet winners more often, while plurality led to strategic convergence on front-runners in larger-minority cases but sincere voting otherwise. Sincere behavior dominated under approval voting with small minorities (91% frequency), but strategic abstention from weak candidates increased with minority size.
Deliberation experiments examine non-voting aggregation. In a 2015 study, groups of three deliberated over reciprocating gifts or lotteries; median-ranked members disproportionately influenced outcomes, with non-extreme positions amplifying impact via spatial centrality, and extreme preferences shifting toward prior group choices, indicating social influence beyond pure compromise. Relative intragroup position, not demographics, drove influence, aligning with median-voter models but revealing dynamic adjustments absent in static aggregation models. Overall, lab findings challenge pure rationality assumptions: strategic voting occurs, but sincere behavior persists due to low stakes or prosociality, and aggregation quality varies by rule, with approval voting outperforming plurality in uncertain environments. These results inform voting-rule design by highlighting the empirical robustness of certain rules, though scale limitations caution against direct real-world extrapolation.
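The benchmark against which observed cycle frequencies are compared can itself be simulated. A minimal Monte Carlo sketch under the standard "impartial culture" assumption (each voter draws one of the six rankings of three candidates uniformly), which predicts roughly an 8–9% chance of no Condorcet winner for large odd electorates:

```python
import random
from itertools import permutations

def has_condorcet_winner(ballots):
    """True if some candidate beats every rival in pairwise majority votes."""
    cands = ballots[0]
    n = len(ballots)
    return any(
        all(sum(b.index(c) < b.index(d) for b in ballots) * 2 > n
            for d in cands if d != c)
        for c in cands)

def cycle_frequency(n_voters, trials, seed=0):
    """Estimate how often random impartial-culture profiles lack a
    Condorcet winner (with an odd electorate, this means a cycle)."""
    rng = random.Random(seed)
    orders = list(permutations("ABC"))
    misses = sum(
        not has_condorcet_winner([rng.choice(orders) for _ in range(n_voters)])
        for _ in range(trials))
    return misses / trials

freq = cycle_frequency(n_voters=25, trials=2000)
print(round(freq, 3))
```

Laboratory and field profiles typically show fewer cycles than this baseline, consistent with the single-peakedness and behavioral regularities discussed above.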

Field Studies and Real-World Voting Data

Field studies in social choice theory have investigated the manifestation of theoretical paradoxes and strategic behaviors in actual elections and polls, often using partial preference data or surveys to approximate full rankings. The Condorcet paradox, where preferences form a cycle with no pairwise winner, has been observed empirically, though infrequently in large-scale settings. A prominent example arose in a 1997 Danish poll involving 1,109 respondents selecting preferred prime ministerial candidates from three major parties: the Social Democrats defeated the Conservatives by 51% to 49%, the Conservatives defeated the Radical Left by 53% to 47%, and the Radical Left defeated the Social Democrats by 52% to 48%, creating a clear cyclical preference. This case demonstrates the paradox's occurrence beyond small groups or theoretical constructs, challenging assumptions of transitive collective preferences in real voter data. Empirical analyses of voting data indicate that Condorcet paradoxes are rare in large electorates due to probabilistic tendencies toward single-peaked preferences or other structures mitigating cycles. Examinations of German Politbarometer surveys, which aggregate voter preferences over multiple polls, reveal low frequencies of cyclical outcomes, with the paradox appearing in fewer than 5% of simulated large-scale profiles derived from the data. Similarly, studies approximating pairwise comparisons from historical election results, such as U.S. congressional races, find Condorcet winners in most cases, with occasional non-transitive patterns when full rankings are inferred from runoff or approval data. These findings suggest that while Arrow's impossibility and Condorcet inconsistencies hold theoretically, real-world preference distributions often align with possibility theorems like single-peakedness, reducing paradox prevalence.
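The cyclicity of the Danish poll result can be verified mechanically from the reported percentages. A tiny sketch, taking the stated head-to-head margins as given:

```python
# Pairwise support percentages reported in the poll described above:
# each pair lists (winner's share, loser's share).
margins = {
    ("SocDem", "Conservative"): (51, 49),
    ("Conservative", "RadicalLeft"): (53, 47),
    ("RadicalLeft", "SocDem"): (52, 48),
}

# Every listed pair is a genuine majority win for its first entry...
beats = {pair for pair, (w, l) in margins.items() if w > l}

# ...and following the "beats" relation returns to the starting party,
# so the majority relation is intransitive: a Condorcet cycle.
chain = ["SocDem", "Conservative", "RadicalLeft", "SocDem"]
is_cycle = (beats == set(margins)
            and all((chain[i], chain[i + 1]) in beats for i in range(3)))
print(is_cycle)
```

No party defeats both rivals, so no Condorcet winner exists in this profile.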
Strategic voting, predicted by the Gibbard-Satterthwaite theorem, manifests robustly in field data, where voters misrepresent preferences to influence outcomes under non-strategy-proof rules. A regression discontinuity analysis of Japanese lower house elections from 1993 to 2005, exploiting close third-place finishes, estimates that strategic abandonment of trailing candidates increases two-party vote concentration by 7-10 percentage points, confirming Duverger's law empirically through coordinated sincere-to-strategic shifts. In multi-round systems like France's presidential elections, data from 1965-2017 show voters in first rounds supporting non-preferred candidates to block frontrunners, with strategic defection rates exceeding 20% in polarized contests based on exit polls and panel surveys. Such behaviors violate independence of irrelevant alternatives when adding minor candidates alters pairwise majorities, as observed in U.K. general elections where third-party vote shares correlate with major-party gains inconsistent with expressive models. Real-world voting data also highlight violations of social choice axioms in practical systems. Plurality rule frequently elects non-Condorcet winners; for instance, in over 80% of simulated U.S. presidential scenarios reconstructed from Gallup polls (1952-2020), the victor lacked a pairwise majority against at least one eliminated rival, underscoring trade-offs between simplicity and majority consistency. Voting-reform trials, such as Fargo's 2018 municipal elections using approval voting, reveal higher strategy resistance but still exhibit no-show paradoxes where abstaining strategically outperforms sincere participation in 12% of close races per post-election audits. These observations affirm that while theoretical impossibilities persist, empirical frequencies inform rule selection, with data-driven reforms like ranked-choice systems tested in jurisdictions showing reduced spoilers but persistent minor paradoxes.

Criticisms, Limitations, and Philosophical Implications

Challenges to Rationality Assumptions

Social choice theory conventionally posits that individuals hold complete, transitive preference orderings over alternatives, enabling the aggregation of ordinal rankings into collective decisions without interpersonal utility comparisons. This framework underpins theorems like Arrow's impossibility theorem, which requires transitive voter preferences for non-dictatorial social welfare functions. Empirical investigations in behavioral economics reveal systematic deviations from transitivity, where individual preferences form cycles (e.g., A preferred to B, B to C, C to A) rather than linear orders. A study of consumer choices across multiple product categories found intransitive preference patterns in only 8% of participants on average, with violations attributed to context-dependent evaluations and framing effects. Such findings extend to voting contexts, where single-voter analogs of the Condorcet paradox demonstrate intransitivities under risk, challenging the assumption that even isolated preferences are consistently rational. Further challenges arise from bounded rationality, as conceptualized by Herbert Simon, where cognitive limitations prevent full optimization; individuals instead "satisfice" by selecting satisfactory options amid incomplete information and computational constraints. Prospect theory, developed by Kahneman and Tversky, documents violations of expected utility maximization—such as loss aversion and probability weighting—that undermine the independence axiom central to rational choice models in social aggregation. Behavioral social choice theory addresses these by incorporating probabilistic models of choice, treating observed inconsistencies as noise around latent rational structures rather than outright rejections of rationality. Experiments reveal myopic discounting, fallible recall of past utilities, and errors in forecasting future preferences, all of which erode the predictive power of strict rationality in multi-agent settings. These deviations imply that social choice mechanisms assuming perfect rationality may amplify aggregation paradoxes in real-world applications, necessitating robust designs tolerant of behavioral irregularities.
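Detecting the transitivity violations described above is mechanical once pairwise choices are recorded. A minimal sketch that scans a revealed preference relation for triples breaching the transitivity axiom; the observed relation is a hypothetical cyclic example:

```python
from itertools import permutations

def transitivity_violations(prefers):
    """Return all triples (a, b, c) with a preferred to b and b to c,
    but a NOT preferred to c -- breaches of the transitivity axiom."""
    items = {x for pair in prefers for x in pair}
    return sorted(
        (a, b, c)
        for a, b, c in permutations(items, 3)
        if (a, b) in prefers and (b, c) in prefers and (a, c) not in prefers)

# A cyclic 'revealed' preference pattern: A over B, B over C, C over A.
observed = {("A", "B"), ("B", "C"), ("C", "A")}
violations = transitivity_violations(observed)
print(violations)
```

Every rotation of the cycle produces a violation, so a single intransitive voter already exhibits the within-person analog of the Condorcet paradox.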

Political and Economic Critiques of Centralized Aggregation

Centralized aggregation mechanisms in social choice theory, such as majority voting or representative deliberation, invite political critiques for fostering power concentration that undermines pluralism and individual autonomy. Proponents of these critiques, drawing from classical liberal thought, argue that no voting rule can consistently produce fair, non-dictatorial outcomes without violating key fairness axioms, as formalized by Arrow's impossibility theorem in 1951, which demonstrates the absence of a social welfare function satisfying universal domain, Pareto efficiency, independence of irrelevant alternatives, and non-dictatorship. Hayek extended this to contend that coherent centralized planning demands overriding democratic inconsistencies through coercive authority, rendering it incompatible with liberal democracy and prone to totalitarian drift, as democratic aggregation inherently produces cyclical or indeterminate preferences that planners must resolve undemocratically. Empirical observations of policy oscillations in parliamentary systems, where agenda control manipulates outcomes, support claims that centralized processes amplify elite capture rather than genuine collective will. From an economic standpoint, centralized aggregation falters in harnessing dispersed, tacit knowledge essential for efficient resource allocation, a limitation markets mitigate through price signals. Hayek's 1945 essay elucidates how individual-specific circumstances—such as a tinsmith's localized scarcity response—elude central authorities, who lack incentives and mechanisms to compile such data comprehensively, leading to systematic miscalculations. Public choice analysis reinforces this by modeling voters as rationally ignorant: in large electorates, the probability of a decisive vote approaches zero, deterring costly information acquisition and yielding uninformed aggregates prone to manipulation by special interests. 
Downs quantified this incentive structure in 1957, predicting voter underinvestment in policy details, which manifests in real-world phenomena like logrolling and pork-barrel spending that prioritize concentrated benefits over diffuse costs. Historical implementations of centralized planning provide stark empirical validation of these deficiencies, with the Soviet economy's chronic inefficiencies—evident in the 1980s grain import dependencies despite vast agricultural land—stemming from distorted signals absent market competition. Voting paradoxes exacerbate economic distortions; Condorcet cycles, where preferences form intransitive loops (A beats B, B beats C, C beats A), have appeared in large-scale surveys, such as the Danish poll on prime ministerial preferences discussed earlier, eroding the stability required for sound fiscal or regulatory decisions. Critics from public choice traditions, including Buchanan and Tullock, further highlight rent-seeking behaviors where aggregated votes enable coalitions to extract resources via taxation or regulation, yielding deadweight losses exceeding market equivalents, as quantified in studies of U.S. post-1970s regulatory expansions. These insights challenge reliance on centralized mechanisms, advocating decentralized alternatives like markets or competitive governance to better align incentives with dispersed expertise.

Recent Developments (2000s–2025)

Advances in Generative and AI-Driven Social Choice

Generative social choice emerged as a framework in 2023, integrating large language models (LLMs) with traditional social choice mechanisms to address limitations in expressing and aggregating preferences over unstructured or open-ended domains, such as policy proposals or textual statements. Unlike classical social choice, which assumes predefined alternatives, generative approaches leverage LLMs to elicit detailed user inputs, extrapolate latent preferences, and produce a proportional slate of representative outcomes that satisfy axioms like proportionality and justified representation. This methodology was formalized by Fish, Gölz, Parkes, Procaccia, and colleagues, who demonstrated its application in democratic deliberation by using LLMs to elicit and summarize diverse opinions into concise, representative aggregates. Subsequent advancements, such as "Generative Social Choice: The Next Generation" presented at ICML 2025, refined these techniques by incorporating iterative prompting to enhance representativeness in high-dimensional spaces, ensuring that generated slates minimize distortion from individual views while scaling to large electorates. Empirical evaluations showed that LLM-driven aggregation outperforms baselines in metrics like Kendall-tau distance for ranking agreement, particularly for nuanced topics like constitutional amendments. These methods extend to fair division by generating divisible resource allocations from textual demands, bridging social choice with generative AI's capacity for creative synthesis. In AI alignment, social choice theory informs agent evaluation by mapping multi-agent assessments to voting rules, revealing that reinforcement learning from human feedback (RLHF) often violates consistency axioms like Condorcet consistency unless constrained by probabilistic smoothing. For instance, studies from 2023–2025 highlight impossibilities in democratically aligning LLMs via RLHF without strategic manipulation risks, prompting hybrid mechanisms that use generative models to simulate diverse voter utilities and enforce fairness constraints.
This has practical implications for multi-agent systems, where AI-driven social choice enables robust mechanism designs and preference learning from sparse data, as explored in frameworks combining kernel methods with preference aggregation.
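The consistency issue raised for preference aggregation can be made concrete with a toy profile. As a minimal sketch (the specific ballots are hypothetical, and the positional score is a simplified stand-in for a mean-reward fit to pairwise comparison data, not any particular RLHF implementation), a score-based aggregate can rank a non-Condorcet winner first:

```python
from collections import Counter

# 3 evaluators: A > B > C; 2 evaluators: B > C > A.
ballots = [("A", "B", "C")] * 3 + [("B", "C", "A")] * 2

def borda_scores(ballots):
    """Positional (Borda) scores -- a toy proxy for averaged reward scores
    fit from pairwise preference data."""
    m = len(ballots[0])
    score = Counter()
    for b in ballots:
        for pos, c in enumerate(b):
            score[c] += m - 1 - pos
    return score

def condorcet_winner(ballots):
    cands = set(ballots[0])
    n = len(ballots)
    for c in cands:
        if all(sum(b.index(c) < b.index(d) for b in ballots) * 2 > n
               for d in cands - {c}):
            return c
    return None

scores = borda_scores(ballots)
top_by_score = max(scores, key=scores.get)
print(top_by_score, condorcet_winner(ballots))   # the two aggregates disagree
```

A majority (3 of 5) ranks A above both rivals, yet B accumulates the highest positional score, illustrating how score-maximizing aggregation can conflict with Condorcet consistency.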

Expansions to Fair Division, Tournaments, and Multi-Agent Systems

Social choice theory has extended its principles of preference aggregation to fair division, addressing the allocation of resources—such as indivisible goods or budgets—among agents with heterogeneous valuations, while incorporating axioms like proportionality (each agent receives at least 1/n of total value) and envy-freeness (no agent prefers another's allocation). These extensions build on impossibility results akin to Arrow's theorem, showing trade-offs between efficiency, fairness, and strategy-proofness in non-monetary settings, with computational algorithms enabling approximate solutions for large instances. In fair division, post-2000 developments include dynamic and temporal variants, where allocations repeat over periods (e.g., daily tasks or spectrum auctions), requiring mechanisms that balance short-term equity with long-term welfare; for instance, algorithms achieving proportional allocations in temporal settings with additive utilities over T days ensure each agent gets at least 1/n of their total value across periods. Recent work integrates social choice with participatory budgeting for public resource distribution, such as seats in committees or budgets, prioritizing welfare alongside fairness metrics verifiable via ordinal preferences. These advances mitigate strategic misreporting through incentive-compatible rules, tested in settings with up to thousands of agents. Tournaments represent another expansion, modeling complete directed graphs of pairwise comparisons (e.g., from voter preferences or match outcomes) to derive collective rankings despite potential cycles, with social choice analyzing solution concepts like the Copeland score (counting net wins) or the essential set of undominated alternatives. Since the 2000s, computational social choice has advanced tournament aggregation for applications in sports seeding, webpage ranking (e.g., adapting PageRank-style methods to preference tournaments), and biological dominance hierarchies, revealing vulnerabilities to manipulation—such as bribery by 1% of voters altering outcomes in Kemeny-Young rankings—and proposing robust methods like maximum-likelihood ranking for acyclic resolutions.
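The Copeland solution concept mentioned above reduces to simple counting on a tournament digraph. A minimal sketch over a hypothetical four-alternative tournament containing a cycle:

```python
def copeland_scores(beats):
    """Copeland score: pairwise wins minus pairwise losses in the
    majority digraph (a complete tournament)."""
    players = {x for pair in beats for x in pair}
    return {p: sum((p, q) in beats for q in players)
               - sum((q, p) in beats for q in players)
            for p in players}

# Hypothetical tournament: A beats everyone (Condorcet winner), while
# B, C, D form a cycle among themselves.
beats = {("A", "B"), ("A", "C"), ("A", "D"),
         ("B", "C"), ("C", "D"), ("D", "B")}

scores = copeland_scores(beats)
ranking = sorted(scores, key=lambda p: (-scores[p], p))
print(ranking, scores)
```

The Condorcet winner A tops the Copeland order, while the cycle among B, C, and D collapses into a three-way tie, showing how the score resolves (but does not eliminate) intransitivity.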
Empirical studies on real data, including NCAA basketball tournaments from 1985–2015, validate these solution concepts under Condorcet consistency, where a winner exists in 70–80% of cases. In multi-agent systems (MAS), social choice provides foundations for coordinating autonomous agents in distributed environments, such as voting for joint actions in robotics swarms or fair task delegation, by adapting results like the Gibbard-Satterthwaite theorem to handle incomplete information and incentives. Developments from the 2010s onward incorporate dynamic fairness, as in recommender systems where multi-agent aggregation balances provider and user utilities over time, achieving envy-free outcomes via sequential voting protocols scalable to thousands of agents through approximation algorithms. These extensions address real-time challenges, such as bid allocation in ad auctions or coordination in sensor networks, emphasizing computational tractability—e.g., NP-hardness of optimal aggregation relaxed to polynomial-time heuristics—and integration with reinforcement learning for adaptive preference elicitation. Cross-domain synergies, such as tournament-based leader election in MAS or fair division for resource sharing, have proliferated since 2020, driven by AI scalability needs.
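One widely used fair-division protocol in such multi-agent settings is round-robin picking, which for additive utilities yields an allocation that is envy-free up to one good (EF1). A minimal sketch with hypothetical valuations:

```python
def round_robin(valuations, goods):
    """Agents take turns picking their most-valued remaining good; with
    additive utilities the result is envy-free up to one good (EF1)."""
    remaining = set(goods)
    bundles = {agent: [] for agent in valuations}
    agents = list(valuations)
    turn = 0
    while remaining:
        agent = agents[turn % len(agents)]
        pick = max(remaining, key=lambda g: valuations[agent][g])
        bundles[agent].append(pick)
        remaining.remove(pick)
        turn += 1
    return bundles

# Hypothetical additive valuations over four indivisible goods.
valuations = {
    "alice": {"w": 9, "x": 5, "y": 4, "z": 1},
    "bob":   {"w": 8, "x": 2, "y": 7, "z": 6},
}
bundles = round_robin(valuations, ["w", "x", "y", "z"])
print(bundles)
```

Here Alice takes w then x (value 14 to her, versus 5 for Bob's bundle) and Bob takes y then z (value 13 to him, versus 10 for Alice's bundle), so in this instance neither agent envies the other at all.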

References

  1. [1]
    SOCIAL CHOICE THEORY - Princeton University
    Jan 1, 2024 · Social choice theory studies the aggregation of individual preferences, utilities, or other attributes into a social preference or choice ...
  2. [2]
    The Problem of Social Choice (in 700 Words) - Econlib
    Dec 22, 2023 · Social choice theory analyzes under which conditions the preferences and values of different individuals can (or cannot) be transformed into social preferences ...
  3. [3]
    [PDF] Introduction to Social Choice
    < We'll start by looking at six social choice procedures. 1. Condorcet's Method. 2. Plurality Voting. 3. Borda Count. 4. The Hare System. 5 ...
  4. [4]
    [PDF] arrow's impossibility theorem of social choice - UChicago Math
    Kenneth Arrow listed five conditions that he thought one should expect the social-welfare function and the relations R and P that it produces to fulfill. 1 ...
  5. [5]
    Understanding Arrow's Impossibility Theorem: Definition, History ...
    Arrow's impossibility theorem is part of social choice theory, which studies how a society can reflect individual preferences. It became an important tool for ...What Is Arrow's Impossibility... · Understanding the Theorem · Example
  6. [6]
    [PDF] Social choice theory - Duke People
    Social choice theory generally uses the same basic tools and concepts as the rest of decision and game theory: individuals are described in terms of their ...
  7. [7]
    [PDF] Notes on Social Choice Theory 1 Notation you should be ...
    Social choice theory is the study of how decisions are made collectively. It examines the idea that, for a given society, the preferences of individuals can ...
  8. [8]
    [PDF] Chapter 12: Social Choice Theory - Felix Munoz-Garcia
    individual preferences with the use of a social welfare functional (also referred to as social welfare aggregator); as defined below. Social welfare functional.
  9. [9]
    [PDF] 1 Social Choice Theory Jacob M. Nebel and John A. Weymark 1 ...
    Specifically, the definition of a social welfare function prevents the social ranking from considering non-preference welfare information. Many social ...
  10. [10]
    [PDF] Arrow's Theorem: Ordinalism and Republican Government
    I. INTRODUCTION. The theory of social choice tries to identify how individual preferences can be combined into a social policy that meaningfully represents ...
  11. [11]
    [PDF] Social Welfare Functions that Satisfy Pareto, Anonymity, and ...
    The best known is the (global) Borda rule which orders alternatives by the sums of the positions that alternatives have in individual preference orderings.
  12. [12]
    [PDF] Aggregation of utility and social choice: A topological characterization
    In characterizing aggregation rules, social choice theories have focused either on ordinal preferences,. i.e. without information regarding the magnitude or.
  13. [13]
    [PDF] 1 Introduction and Basic Definitions 2 Axioms of Social Choice
    If the output of the social choice function is always a singleton set, it is called resolute. We additionally define a social welfare function, which instead ...
  14. [14]
    [PDF] Dependence and Independence in Social Choice: Arrow's Theorem
    A social choice function is a function F W B ! O where O D }.X/ n f;g and ... Amartya Sen adeptly explains the social choice problem in his Nobel Prize ...
  15. [15]
    Collective Choice and Social Welfare: An Expanded Edition on JSTOR
    Collective Choice and Social Welfare: An Expanded Edition. AMARTYA SEN ... social choice function, i.e., we are interested in... Save. Cite.
  16. [16]
    [PDF] Borda, Condorcet and the Mathematics of Voting Theory
    Jun 12, 2021 · Both men were born in France in the first half of the eighteenth century, Borda in 1733 and Condorcet in 1743. This situates their lives and ...
  17. [17]
    The French Connection: Borda, Condorcet and the Mathematics of ...
    ... voting theory are now named: Jean Charles, Chevalier de Borda (1733–1799) and Marie-Jean-Antoine-Nicolas de Caritat, Marquis de Condorcet (1743–1794). In ...
  18. [18]
    [PDF] Condorcet's theory of voting - Numdam
    number of pairwise votes. Thus Condorcet's choice rule is consistent with his ranking rule whenever a majority candidate exists. While Condorcet's solution is ...
  19. [19]
    The History of Utilitarianism - Stanford Encyclopedia of Philosophy
    Mar 27, 2009 · Like Bentham, Mill sought to use utilitarianism to inform law and social policy. The aim of increasing happiness underlies his arguments for ...
  20. [20]
    Social Choice Theory - Stanford Encyclopedia of Philosophy
    Dec 18, 2013 · Social choice theory is the study of collective decision procedures and mechanisms. It is not a single theory, but a cluster of models and results.
  21. [21]
    Social Choice and Individual Values on JSTOR
    Originally published in 1951, Social Choice and Individual Values introduced "Arrow's Impossibility Theorem" and founded the field of social choice theory in ...
  22. [22]
    On the Rationale of Group Decision-making
    On the Rationale of Group Decision-making. Duncan Black.
  23. [23]
    Social Choice and Individual Values - Yale University Press
    Originally published in 1951, Social Choice and Individual Values introduced “Arrow's Impossibility Theorem” and founded the field of social choice theory.
  24. [24]
  25. [25]
    A Brief History of Social Choice and Welfare Theory - ResearchGate
    Aug 29, 2021 · this concept and to Aristotle's analysis of it. Precursors of social choice theory are generally associated with voting rather than welfare.
  26. [26]
    [PDF] The theory of judgment aggregation: An introductory review - LSE
    Abstract. This paper provides an introductory review of the theory of judgment aggregation. It introduces the paradoxes of majority voting that originally ...
  27. [27]
    [PDF] 1 Introduction to Computational Social Choice
    Social choice theory is the field of scientific inquiry that studies the aggregation of individual preferences towards a collective choice.
  28. [28]
    Social Choice and Individual Values, 1st ed.
    Kenneth J. Arrow & John Wiley & Sons. Publication Date: January 1951. Yale · Cowles Foundation for Research in Economics.
  29. [29]
    [PDF] arrow's theorem - terence tao - UCLA Mathematics
    Arrow's theorem: If there are at least three candidates, then the above six axioms are inconsistent.
  30. [30]
    [PDF] Arrow's Impossibility Theorem: Two Simple Single-Profile Version
    Our first Arrow impossibility theorem, which is extremely easy to prove, assumes that there are only two people in society. The proof relies on a neutrality ...
  31. [31]
    Marquis de Condorcet (1743 - 1794) - Biography - MacTutor
    The final of Condorcet's examples is today known as the 'Condorcet Paradox'. It points out that it is possible that a majority prefers option A over option ...
  32. [32]
    [PDF] Grokking Condorcet's 1785 Essai - jehps
    For this reason, throughout the 1785 Essai, Condorcet repeatedly says that v, the competence or enlightenment of decision makers is more important than any ...
  33. [33]
    Condorcet's paradox and the likelihood of its occurrence
    Condorcet, Marquis de (1785a). An essay on the application of probability theory to plurality decision making: An election between three candidates.
  34. [34]
    Condorcet's paradox under the maximal culture condition
    Most of the results that have been obtained are based on one of two basic probabilistic models for generating voter preference profiles. The first one (and most ...
  35. [35]
    [PDF] 17.810S21 Game Theory, Lecture Slides 8: Social Choice
    A brief history of social choice: • Begins with Nicolas de Condorcet (1743–1794) and Jean-Charles de Borda (1733–1799).
  36. [36]
    Condorcet Voting - Center for Effective Government
    Jul 8, 2025 · Would voters prefer a Condorcet voting system that uses a ranked ballot or instead has a voter simply choose her favorite from each pair of ...
  37. [37]
    [PDF] The Gibbard–Satterthwaite theorem: a simple proof
    The classic Gibbard–Satterthwaite theorem (Gibbard, 1973; Satterthwaite, 1975) states (essentially) that a dictatorship is the only non-manipulable voting ...
  38. [38]
    [PDF] Gibbard-Satterthwaite Theorem - Stanford University
    Everyone prefers A to B, so the social welfare function should not have B as a winner in order to satisfy Pareto optimality. Unanimity doesn't apply in this ...
  39. [39]
    [PDF] The Proof of the Gibbard-Satterthwaite Theorem Revisited
    This paper provides three short and very simple proofs of the classical Gibbard-Satterthwaite theorem. The theorem is first proved in the case with only two ...
  40. [40]
    [PDF] Gibbard-Satterthwaite theorem - Hal-Inria
    Sep 17, 2018 · Since then, the Gibbard-Satterthwaite theorem is at the core of social choice theory, game theory and mechanism design.
  41. [41]
    [PDF] a Quantitative Proof of the Gibbard Satterthwaite Theorem
    Gibbard and Satterthwaite proved that any social choice function which attains three or more values, and whose outcome does not depend on just one voter, must ...
  42. [42]
    [PDF] Amartya Sen - Nobel Lecture
    Social choice relates social judgments to individual interests, and social choice theory evaluates social possibilities, including social welfare, inequality, ...
  43. [43]
    Rights and the liberal paradoxes | Social Choice and Welfare
    Further, it is shown that much weaker (and fully plausible) conditions are needed to avoid Sen's paradox (the impossibility of a Paretian liberal) in this ...
  44. [44]
    Coalitional Structure of the Muller-Satterthwaite Theorem
    The Muller-Satterthwaite theorem states that social choice functions that satisfy unanimity and monotonicity are also dictatorial. Unlike Arrow's theorem ...
  45. [45]
    [PDF] Strategy-Proof Social Choice on Single-Peaked Domains - UC Davis
    But there are also single-peaked domains that give rise to impossibility results. For instance, the unrestricted domain envisaged by the Gibbard-Satterthwaite ...
  46. [46]
  47. [47]
    [PDF] Mechanism Design Theory - Nobel Prize
    Oct 15, 2007 · 1 Hurwicz's (1972) notion of incentive-compatibility can now be expressed as follows: the mechanism is incentive-compatible if it is a dom-.
  48. [48]
    [PDF] Leonid Hurwicz - Stanford University
    Early mention of incentive issues, and what appears to be the first coining of the term ``incentive compatibility,'' are due to Hurwicz (1960). The fuller ...
  49. [49]
    [PDF] Mechanism Theory - Stanford University
    A social choice function is also said to be strategy-proof if it is dominant strategy incentive compatible. The usefulness of the class of direct mechanisms as ...
  50. [50]
    How Economists Define the Revelation Principle - ThoughtCo
    Apr 10, 2019 · The revelation principle of economics is that truth-telling, direct revelation mechanisms can generally be designed to achieve the Bayesian Nash equilibrium ...
  51. [51]
    [PDF] The VCG Mechanism - MIT OpenCourseWare
    Theorem: The VCG mechanism implements the utilitarian solution as a truthtelling dominant strategy BNE in any private value social choice problem. Some things ...
  52. [52]
    [PDF] Efficient Mechanisms for Bilateral Trading * - cs.Princeton
    In this paper we analyze a more general class of mechanisms, using some techniques similar to those developed in Myerson [6] to analyze optimal auction design.
  53. [53]
    [PDF] How Pervasive is the Myerson-Satterthwaite Impossibility?
    Abstract. The Myerson-Satterthwaite theorem is a founda- tional impossibility result in mechanism design which states that no mechanism can be Bayes-Nash.
  54. [54]
    [PDF] Computational Social Choice: The First Ten Years and Beyond
    search area known as computational social choice, which combines ideas, models, and techniques from social choice theory with those of computer science. Social.
  55. [55]
    [PDF] A Short Introduction to Computational Social Choice - Lamsade
    Computational social choice is an interdisciplinary field of study at the interface of social choice theory and computer science, promoting an exchange of ...
  56. [56]
    [PDF] Handbook of Computational Social Choice - Ariel Procaccia
    This handbook, written by thirty-six prominent members of the computational social choice community, covers the field comprehensively. Chapters devoted to each ...
  57. [57]
    The computational difficulty of manipulating an election
    We show how computational complexity might protect the integrity of social choice. We exhibit a voting rule that efficiently computes winners but is comput.
  58. [58]
    [PDF] Social Choice Rules
    So each of these voters gets to mark a ballot, and can vote for only one of the candidates. What is the social choice rule here, for determining which candidate ...
  59. [59]
    [PDF] Voting methods with more than 2 alternatives 4.1 Social choice ...
    Second-step election between the top two vote-getters in plurality election if no candidate receives a majority.
  60. [60]
    [PDF] Social choice theory Introduction - Lamsade
    Remarks: the majority method works well with two candidates; when there are more than two candidates, organize a series of ...
  61. [61]
    How voting rules impact legitimacy | Humanities and Social ... - Nature
    May 29, 2024 · Our findings suggest that the perceived legitimacy of a voting method is context-dependent. Specifically, preferential voting methods are seen as more ...
  62. [62]
    [PDF] What Is Public Choice Theory? - AIER
    The now-famous “impossibility theorem,” as published in Arrow's book Social Choice and Individual Values (1951), stimulated an extended discussion. What ...
  63. [63]
    What is the difference between "Social Choice Theory", "Public ...
    Aug 27, 2015 · Social choice theory or social choice is a theoretical framework for analysis of combining individual opinions, preferences, interests, or welfares to reach a ...
  64. [64]
    [PDF] Public Choice and Arrow's Impossibility Theorem - ResearchGate
    Public choice theory and Arrow's impossibility theorem should play a more prominent role in the public policy discipline today. The theorem is substantial ...
  65. [65]
    [PDF] Social Choice, Democracy, and Free Markets
    SOCIAL CHOICE, DEMOCRACY, AND FREE MARKETS. JAMES M. BUCHANAN. Florida State University. PROFESSOR Kenneth Arrow's provocative essay, Social Choice and ...
  66. [66]
    James M. Buchanan and the Public Choice Tradition
    Mar 3, 2021 · For example, he criticized Arrow's impossibility theorem. He argued that the potential for cycling in policy selection was not a defect of ...
  67. [67]
    The Past, Present, and Future of Public Choice: Part I - Econlib
    Oct 2, 2023 · Different public choice scholars followed either one or the other approach, with social choice theory the exemplar of the formal theory ...
  68. [68]
    [PDF] INTERPERSONAL COMPARISONS OF UTILITY - Stanford University
    Many problems have been created in social choice theory and in welfare economics by the extreme reluctance to make any kind of interpersonal comparison of ...
  69. [69]
    Social Choice with Interpersonal Utility Comparisons - jstor
    It is commonplace in welfare economics to use social-welfare functions which are strictly quasiconcave in utilities. None of the information assumptions.
  70. [70]
    Interpersonal utility theory | Social Choice and Welfare
    A “triangulation” strategy is sketched for constructing an empirically testable theory of interpersonal comparisons of utility units. Bargaining.
  71. [71]
    The Empirical Meaningfulness of Interpersonal Utility Comparisons
    pure science" (139). The best-known defense of interpersonal comparison is that given by Little in the fourth chapter of his Critique of Welfare.
  72. [72]
    [PDF] Cardinal Welfare, Individualistic Ethics, and Interpersonal ...
    I shall deal with the problem of comparisons between total utilities only, neglecting the problem of comparisons between differences in utility, since the ...
  73. [73]
    JOHN HARSANYI ON INTERPERSONAL UTILITY COMPARISONS ...
    May 11, 2010 · This paper traces interpersonal utility comparisons and bargaining in the work of John Harsanyi from the 1950s to the mid-1960s.
  74. [74]
    Interpersonal Comparisons of Utility - Econlib
    Feb 1, 2022 · Interpersonal comparisons of utility (that is, of preferred positions on an individual's preference scale) are known to be scientifically impossible in ...
  75. [75]
    Dual interpersonal comparisons of utility and the welfare economics ...
    Interpersonal comparisons can be of utility levels and/or of utility differences. Comparisons of levels can be used to define equity in distributing income.
  76. [76]
    [PDF] 1 Introduction and Outline 1.1 Interpersonal Comparisons
    Welfare economic theory, however, and the related discipline of social choice theory, have retained their links to ethics.
  77. [77]
    [PDF] Interpersonal Comparisons of What? - PhilSci-Archive
    Apr 14, 2022 · I examine the once popular claim according to which interpersonal comparisons of welfare are necessary for social choice. I side with cur-.
  78. [78]
    Behavioural social choice: a status report - PMC - NIH
    We argue that the nature of preference distributions in electorates is ultimately an empirical question, which social choice theory has often neglected. We also ...
  79. [79]
    [PDF] The Condorcet paradox: an experimental approach to a voting process
    The game that they play is based on a set of rules that must be voted by the players themselves before a new session of the experiment will be run. The idea is ...
  80. [80]
    [PDF] Divided Majority and Information Aggregation: Theory and Experiment
    This paper both theoretically and experimentally studies the properties of plurality and approval voting when the majority is divided as a result of ...
  81. [81]
  82. [82]
    An Empirical Example of the Condorcet Paradox of Voting in a Large ...
    examples of such, and none in large electorates. This paper demonstrates the existence of a real cyclical majority in a poll of Danish voters' preferred prime ...
  83. [83]
    On the empirical relevance of Condorcet's paradox - ResearchGate
    Aug 9, 2025 · Since a definition of the paradox for even numbers of voters and alternatives, and for weak voter preferences is missing in the literature, we ...
  84. [84]
    [PDF] Three Empirical Analyses of Voting - VTechWorks
    May 5, 2022 · In this chapter, we analyze German Politbarometer data and then suggest a way to explain the observed frequency of the Condorcet paradox and ...
  85. [85]
    Elections and Social Choice: The State of the Evidence - jstor
    Social choice theory, an offshoot of welfare economics, is concerned with the normative properties of rules by which the preferences of individual.
  86. [86]
    [PDF] On the Prevalence of Condorcet's Paradox
    Jan 10, 2025 · Instead, we focus on whether the pattern of voter preferences would lead to a Condorcet paradox if amalgamated most simply and directly, ...
  87. [87]
    [PDF] A Regression Discontinuity Test of Strategic Voting and Duverger's ...
    Department of Economics, Princeton University, USA; fujiwara@princeton.edu. ABSTRACT. This paper uses exogenous variation in electoral rules to test the pre-.
  88. [88]
    [PDF] On the Extent of Strategic Voting
    Put differently, in large elections only a subset of candidates will be “in the race,” and strategic voters behave as if choosing only among those who are ...
  89. [89]
    Expressive vs. strategic voters: An empirical assessment
    I study the two leading paradigms of voter behavior. Novel empirical strategy to identify violations of both the pivotal and expressive voter model.
  90. [90]
    Paradoxes of Voting* | American Political Science Review
    Aug 1, 2014 · Five voting paradoxes are examined under procedures which determine social choice from voters' preference rankings.
  91. [91]
    Research and data on RCV in practice - FairVote
    We did not control for other factors, such as competitiveness of races on the ballot, which could drive turnout. Ranked choice voting and voter turnout.
  92. [92]
    20. Empirical examples of voting paradoxes - ElgarOnline
    Section 20.3 pertains to the Condorcet properties discussed in Chapters 6, 10, 14, 15 and 16;. I first examine the Condorcet paradox – that is, preference ...
  93. [93]
    Empirical evidence for intransitivity in consumer preferences - PMC
    Mar 4, 2020 · The results mostly showed that there was no evidence of transitivity in consumer preferences. On average, transitivity appeared in only 8% of the sample.
  94. [94]
    The voting paradox … with a single voter? Implications for transitivity ...
    Mar 4, 2019 · The voting paradox occurs when a democratic society seeking to aggregate individual preferences into a social preference reaches an intransitive ordering.
  95. [95]
    [PDF] On the Limits of Rational Choice Theory - Economic Thought
    A problem with the standard assumptions of rationality and expected utility maximisation is their lack of specific theoretical and conceptual content, ...
  96. [96]
    New Challenges to the Rationality Assumption
    Challenges include people being myopic, lacking skill in predicting future tastes, fallible memory, and incorrect evaluation of past experiences.
  97. [97]
    (PDF) Behavioural social choice: A status report - ResearchGate
    Aug 6, 2025 · Behavioural paradigms compare how rational actors should make certain types of decisions with how real decision makers behave empirically. We ...
  98. [98]
    [PDF] Hayek, Arrow, and the Problems of Democratic Decision-Making
    Abstract - Both Hayek and Arrow provide arguments about the inability of the vote process to yield a coherent social choice. Hayek demonstrated that ...
  99. [99]
    The Use of Knowledge in Society - FEE.org
    Hayek points out that sensibly allocating scarce resources requires knowledge dispersed among many people, with no individual or group of experts capable of ...
  100. [100]
    A Test for the Rational Ignorance Hypothesis: Evidence from a ...
    This paper tests the rational ignorance hypothesis by Downs (1957). This theory predicts that people do not acquire costly information to educate their ...
  101. [101]
    [PDF] Chapter II- 6-- The Death of Central Planning and the Birth of Markets
    A failed transition could lead to hyperinflation and a collapse in output, both horrible economic outcomes; but the political and social costs of failure would ...
  102. [102]
    [PDF] The Public Choice Revolution - Cato Institute
    The starting idea of public choice theory is disarmingly simple: Individuals, when acting as voters, politicians, or bureaucrats, continue to be self-.
  103. [103]
    [2309.01291] Generative Social Choice - arXiv
    Sep 3, 2023 · We introduce generative social choice, a design methodology for open-ended democratic processes that combines the rigor of social choice theory ...
  104. [104]
    ICML Poster Generative Social Choice: The Next Generation
    Jul 17, 2025 · Combining social choice and large language models, prior work has approached this challenge through a framework of generative social choice. We ...
  105. [105]
    [PDF] Generative Social Choice: The Next Generation - Ariel Procaccia
    Abstract. A key task in certain democratic processes is to produce a concise slate of statements that proportionally represents the full spectrum of user ...
  106. [106]
    Representative Social Choice: From Learning Theory to AI Alignment
    Oct 31, 2024 · In this study, we propose the representative social choice framework for the modeling of democratic representation in collective decisions.
  107. [107]
    [PDF] Expanding the Reach of Social Choice Theory - IJCAI
    In this paper, I present an overview of my efforts to expand the reach of social choice theory in the domains of fair division, voting, and tournaments.
  108. [108]
    Fair Public Decision Making: Allocating Budgets, Seats, and ...
    A current trend in social choice theory is to study new models that have the flavor of both collective decision-making and fair division.
  109. [109]
    [PDF] Temporal Fair Division - AAAI Publications
    Abstract. We study temporal fair division, whereby a set of agents are allocated a (possibly different) set of goods on each day for a period of days.
  110. [110]
    [PDF] Fair and Efficient Social Choice in Dynamic Settings - Rupert Freeman
    Abstract. We study a dynamic social choice problem in which an alternative is chosen at each round according to the reported valuations of a set of agents.
  111. [111]
    Elections and Fair Division: An Introduction to Social Choice Theory ...
    This graduate textbook introduces social choice theory, with a specific focus on elections and fair division, supported by mathematical theories and ...
  112. [112]
    [PDF] Tournaments in Computational Social Choice: Recent Developments
    In many practical scenarios, these decisions are made based on pairwise comparisons between alternatives, also known as tournaments.
  113. [113]
    Tournament Solutions (Chapter 3) - Handbook of Computational ...
    More precisely, we will be concerned with social choice functions (SCFs) that are based on the dominance relation only, that is, those SCFs that Fishburn (1977) ...
  114. [114]
    Sports Tournaments and Social Choice Theory - MDPI
    May 30, 2019 · As such, tournaments mirror aggregation methods in social choice theory, where diverse individual preferences are put together to form an ...
  115. [115]
    Social Choice Theory as a Foundation for Multiagent Systems
    Social choice theory is the study of mechanisms for collective decision making. While originally concerned with modelling and analysing political decision ...
  116. [116]
    Multi-agent Social Choice for Dynamic Fairness-aware ...
    Jul 4, 2022 · In this paper, we describe a framework in which the interests of providers and other stakeholders are represented as agents.
  117. [117]
    Dynamic fairness-aware recommendation through multi-agent social ...
    Mar 2, 2023 · We propose a model to formalize multistakeholder fairness in recommender systems as a two stage social choice problem.