
Majority function

In Boolean algebra and digital logic, the majority function (also known as the majority gate or median operator) is a threshold function that outputs 1 when more than half of its inputs are 1, and 0 otherwise; for an odd number of inputs n, this means at least (n+1)/2 inputs must be 1. For three inputs x, y, z \in \{0,1\}, it can be expressed as (x \land y) \lor (x \land z) \lor (y \land z), which evaluates to 1 precisely when at least two inputs are 1. The function is symmetric, monotone increasing, unate, and self-dual, meaning its output mirrors the majority value without requiring explicit complementation for duality.

The majority function plays a foundational role in circuit design and computational complexity theory, serving as a building block for more complex operations due to its ability to realize basic gates like AND and OR when combined with constants or inversion; for instance, \mathrm{AND}(a,b) = \mathrm{MAJ}(a, b, 0) and \mathrm{OR}(a,b) = \mathrm{MAJ}(a, b, 1). Together with the NOT gate, it forms a functionally complete set capable of expressing any Boolean function. In digital circuits, majority gates are integral to arithmetic components such as carry-lookahead adders (e.g., Brent-Kung and Han-Carlson designs), where they reduce gate count and propagation delay compared to traditional XOR-based implementations. Beyond conventional CMOS technology, majority logic has seen renewed interest in emerging paradigms like quantum-dot cellular automata (QCA), single-electron transistors (SET), magnetic tunnel junctions (MTJ), and even DNA computing, offering advantages in energy efficiency (often orders of magnitude lower power) and scalability for atomic-scale devices.

In computational complexity theory, the majority function exemplifies key concepts in the analysis of Boolean functions: it is evasive, requiring all n inputs to be queried in the worst case for evaluation (D(f) = n), and lies outside the class AC⁰, meaning it cannot be computed by constant-depth, polynomial-size circuits, even with the addition of counting gates modulo a fixed prime. It also exhibits noise stability, making it robust to input perturbations, and serves as a benchmark for approximate representations: \epsilon-approximations via disjunctive normal form (DNF) formulas are achievable with size 2^{O(\sqrt{n})}. Historically rooted in early threshold logic and reliability models from the mid-20th century, majority functions continue to influence areas like fault-tolerant voting in distributed systems and promise problems in learning theory.
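The AND and OR identities above can be checked exhaustively. The following Python sketch is purely illustrative (the function name maj3 is not from the sources) and verifies both identities over all input pairs:

```python
def maj3(x, y, z):
    """Three-input majority: 1 iff at least two of the inputs are 1."""
    return (x & y) | (x & z) | (y & z)

# Verify MAJ(a, b, 0) = AND(a, b) and MAJ(a, b, 1) = OR(a, b) for all inputs.
for a in (0, 1):
    for b in (0, 1):
        assert maj3(a, b, 0) == (a & b)
        assert maj3(a, b, 1) == (a | b)
print("MAJ(a,b,0) = AND and MAJ(a,b,1) = OR hold on every input")
```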

Introduction and Definition

Formal Definition

The majority function, denoted \mathrm{MAJ}_n(x_1, \dots, x_n), is a Boolean function from the domain \{0,1\}^n to \{0,1\} that outputs 1 if and only if at least \lfloor n/2 \rfloor + 1 of its inputs are 1, and 0 otherwise. It is conventionally defined for odd n to ensure a strict majority without ties, where the threshold is (n+1)/2. Formally, \mathrm{MAJ}_n(x_1, \dots, x_n) = 1 if \sum_{i=1}^n x_i \geq \lfloor n/2 \rfloor + 1, and 0 otherwise, with each x_i \in \{0,1\}. For even n, this requires more than half the inputs to be 1, outputting 0 in case of a tie, though some contexts use alternative tie-breaking rules. The majority function is a specific instance of a threshold function T_{k,n}, which outputs 1 if at least k of the n inputs are 1; the majority case corresponds to k = \lfloor n/2 \rfloor + 1. The notation \mathrm{MAJ}_n or \mathrm{Maj}_n is standard in circuit complexity and the analysis of Boolean functions, emphasizing its role as the unweighted threshold at the midpoint.
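A direct rendering of this definition (an illustrative sketch, not a prescribed implementation) counts the 1s and compares the count against the threshold \lfloor n/2 \rfloor + 1:

```python
from itertools import product

def maj(bits):
    """General majority: 1 iff at least floor(n/2) + 1 of the bits are 1.

    For even n this outputs 0 on a tie, matching the convention above.
    """
    n = len(bits)
    return 1 if sum(bits) >= n // 2 + 1 else 0

# Cross-check MAJ_3 against the two-out-of-three formula (x & y) | (x & z) | (y & z).
for x, y, z in product((0, 1), repeat=3):
    assert maj([x, y, z]) == (x & y) | (x & z) | (y & z)

assert maj([1, 0]) == 0   # even n: a tie defaults to 0 under this convention
```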

Basic Examples

The majority function for a single input, denoted MAJ_1(x), is the trivial case where the output simply equals the input x, as there is only one input to consider. For three inputs, the majority function MAJ_3(x, y, z) outputs 1 if at least two of the inputs are 1, and 0 otherwise. The complete truth table for this case is as follows:
x y z   MAJ_3(x, y, z)
0 0 0   0
0 0 1   0
0 1 0   0
0 1 1   1
1 0 0   0
1 0 1   1
1 1 0   1
1 1 1   1
For instance, the input (1,1,0) yields 1, while (1,0,0) yields 0. For five inputs, the majority function MAJ_5(x_1, x_2, x_3, x_4, x_5) outputs 1 if at least three of the inputs are 1. A representative example is the input (1,1,1,0,0), which outputs 1 since exactly three inputs are 1. The majority function exhibits symmetry in its inputs, meaning the output depends solely on the count of 1s among the variables, regardless of their specific positions or order.
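The symmetry just described can be checked directly: permuting the inputs never changes the output. A short illustrative Python sketch (the helper name maj is an assumption, not from the source):

```python
from itertools import permutations, product

def maj(bits):
    """Majority of an odd number of bits: 1 iff strictly more than half are 1."""
    return 1 if sum(bits) > len(bits) // 2 else 0

# Symmetry: the output is invariant under every permutation of the inputs,
# because it depends only on the number of 1s.
for x in product((0, 1), repeat=5):
    assert len({maj(p) for p in permutations(x)}) == 1

assert maj((1, 1, 1, 0, 0)) == 1   # the MAJ_5 example above: three 1s out of five
```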

Mathematical Properties

Algebraic and Functional Properties

The majority function, denoted \mathrm{Maj}_n for n variables where n is odd, exhibits several key algebraic and functional properties within Boolean function theory. It is fundamentally a threshold function, specifically \mathrm{Maj}_n(x_1, \dots, x_n) = T_{\lceil n/2 \rceil, n}(x_1, \dots, x_n), where T_{k,n} outputs 1 if and only if at least k of the n inputs are 1. This relation positions the majority function as a special case of linear threshold functions (LTFs), which compute the sign of a weighted sum of inputs compared to a threshold. The exact computation follows directly: \mathrm{Maj}_n(x) = 1 if and only if \sum_{i=1}^n x_i \geq \lceil n/2 \rceil, with each x_i \in \{0,1\}.

Algebraically, the majority function is self-dual, satisfying \mathrm{Maj}_n(\neg x_1, \dots, \neg x_n) = \neg \mathrm{Maj}_n(x_1, \dots, x_n). This property arises because negating all inputs inverts the count of 1s, thereby flipping the majority outcome for odd n. Functionally, it is unate, meaning it is non-decreasing in each variable: fixing all but one input, increasing that variable from 0 to 1 cannot change the output from 1 to 0. This unateness stems from its monotonic dependence on the input sum. In the context of the Boolean hypercube \{0,1\}^n, the majority function is linearly separable, as the sets where it outputs 0 and 1 have disjoint convex hulls separated by the hyperplane \sum x_i = n/2. However, while realizable as an LTF over the reals, integer-weight representations must choose the threshold carefully to avoid fractional values, distinguishing it from simpler separations in some discrete settings. These properties collectively underscore the majority function's role as a symmetric, structured primitive in Boolean function analysis.

The majority function is monotone, belonging to the class of functions that preserve the partial order on the Boolean lattice \{0,1\}^n. Specifically, for any input vectors x, y \in \{0,1\}^n with x \leq y componentwise (i.e., x_i \leq y_i for all i) and x \neq y, it holds that \mathrm{Maj}_n(x) \leq \mathrm{Maj}_n(y). Equality occurs exactly when the coordinates that increase do not cause the number of 1s to cross the threshold \lceil n/2 \rceil, so the function value remains unchanged despite the increase in some inputs. This monotonicity positions the majority function intermediately between the AND and OR functions within the family of threshold functions. The AND function requires all n inputs to be 1 (threshold t = n), while the OR function accepts any single 1 (threshold t = 1); the majority function uses threshold t = (n+1)/2 for odd n, outputting 1 when at least this many inputs are 1. Thus, it generalizes these extremal cases, exhibiting AND-like behavior for inputs with few 1s and OR-like behavior for inputs with many 1s, while being strictly more balanced in its decision boundary.

Each variable in the majority function exerts equal influence due to its symmetry. The influence of any single variable i is the probability that flipping x_i changes the output, which equals the probability that the remaining n-1 variables are exactly balanced at (n-1)/2 ones (assuming n odd for simplicity). This is given by \mathrm{Inf}_i[\mathrm{Maj}_n] = \frac{\binom{n-1}{(n-1)/2}}{2^{n-1}} \approx \sqrt{\frac{2}{\pi n}} for large n, reflecting the pivotal role of each input near the threshold. The total influence across all variables is then \Theta(\sqrt{n}).

The majority function is nonlinear over \mathrm{GF}(2), distinguishing it from affine functions (parities), which are linear combinations modulo 2.
Its algebraic normal form as a polynomial over \mathrm{GF}(2) includes terms of degree greater than 1; for example, \mathrm{Maj}_3(x_1,x_2,x_3) = x_1 x_2 + x_1 x_3 + x_2 x_3, a degree-2 polynomial. In general, the minimal degree required exceeds 1, underscoring its threshold nature rather than linearity.
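Both the GF(2) normal form above and the influence estimate from the preceding paragraph can be checked numerically. The sketch below is illustrative only; the values printed for the asymptotic estimate \sqrt{2/(\pi n)} are for comparison, not a proof:

```python
from itertools import product
from math import comb, pi, sqrt

def maj3(x1, x2, x3):
    return 1 if x1 + x2 + x3 >= 2 else 0

# Verify the algebraic normal form over GF(2): Maj_3 = x1*x2 XOR x1*x3 XOR x2*x3.
for x1, x2, x3 in product((0, 1), repeat=3):
    assert ((x1 * x2) ^ (x1 * x3) ^ (x2 * x3)) == maj3(x1, x2, x3)

# Influence of a single variable in Maj_n (n odd): exact value vs. sqrt(2/(pi*n)).
for n in (3, 101, 1001):
    exact = comb(n - 1, (n - 1) // 2) / 2 ** (n - 1)
    print(f"n={n:5d}  Inf_i exact={exact:.5f}  asymptotic={sqrt(2 / (pi * n)):.5f}")
```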

Implementations and Constructions

Realization in Boolean Circuits

The majority function for three variables, denoted MAJ_3(x, y, z), can be realized in a Boolean circuit using three 2-input AND gates and two 2-input OR gates, via the expression MAJ_3(x, y, z) = (x ∧ y) ∨ (x ∧ z) ∨ (y ∧ z). This circuit has five gates and, viewed as a two-level AND-OR form, depth 2. For the general case of n variables (assuming n is odd for simplicity), the majority function admits a Boolean circuit realization over the basis {∧, ∨, ¬} with fan-in two gates, achieving size O(n) and depth O(log n). One explicit construction computes the Hamming weight (the sum of the inputs) via a parallel addition network of linear size and logarithmic depth, followed by a comparison against \lceil n/2 \rceil in O(log n) depth. An alternative approach sorts the input bits with a sorting network and outputs the middle (median) bit; the Ajtai–Komlós–Szemerédi sorting network provides such a construction with size O(n log n) and depth O(log n). Any fan-in two Boolean circuit computing the n-variable majority function requires depth Ω(log n), since the function depends on every input variable and a gate at depth d can depend on at most 2^d inputs. The majority function appeared in early computational models through John von Neumann's 1956 lectures on probabilistic logics, where majority-vote (multiplexing) mechanisms were employed to synthesize reliable organisms and computations from unreliable components via redundancy and error correction.
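The sorting-network route can be illustrated on a small scale. The sketch below uses an odd-even transposition network, a simple O(n^2)-comparator stand-in for the AKS construction mentioned above (chosen purely for readability, not from the source); each comparator is the monotone pair (AND, OR), and the majority is read off the middle bit of the sorted sequence:

```python
from itertools import product

def sort_bits_monotone(bits):
    """Sort 0/1 values in descending order with an odd-even transposition network.

    Each comparator outputs (OR, AND) = (max, min), so the network uses only
    monotone gates. Size is O(n^2); AKS achieves O(n log n) size and O(log n)
    depth but is far more involved.
    """
    b = list(bits)
    n = len(b)
    for rnd in range(n):
        for i in range(rnd % 2, n - 1, 2):
            hi, lo = b[i] | b[i + 1], b[i] & b[i + 1]   # comparator
            b[i], b[i + 1] = hi, lo
    return b

def maj_via_sorting(bits):
    """Majority = middle element of the descending-sorted bit sequence (n odd)."""
    return sort_bits_monotone(bits)[len(bits) // 2]

for x in product((0, 1), repeat=5):
    assert maj_via_sorting(list(x)) == (1 if sum(x) >= 3 else 0)
```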

Monotone Formulae and Expressions

A monotone formula for the majority function employs only conjunctions (AND) and disjunctions (OR), without negations, preserving the function's monotonicity: setting more inputs to true can only change the output from false to true, never the reverse. For the case of three variables x_1, x_2, x_3, the function outputs true if at least two are true, and can be expressed as (x_1 \land x_2) \lor (x_1 \land x_3) \lor (x_2 \land x_3). This formula has three terms, each a conjunction of exactly two variables, reflecting the minimal input sets that achieve a majority.

In general, for an odd number n = 2k + 1 of variables, a straightforward monotone disjunctive normal form (DNF) representation of \mathrm{MAJ}_n consists of the disjunction over all conjunctions of exactly k+1 distinct variables. This captures all minimal input sets where more than half the variables are true. The size of this formula, measured by the number of such terms, is the binomial coefficient \binom{n}{k+1}, which grows exponentially as \Theta(2^n / \sqrt{n}) by Stirling's approximation. This exponential size arises because the terms correspond to the largest level in the Boolean lattice, as guaranteed by Sperner's theorem, which states that the maximum size of an antichain in the power set of n elements is the middle binomial coefficient.

To achieve polynomial size, more efficient constructions rely on recursive composition. Valiant's probabilistic construction builds a logarithmic-depth monotone formula of size O(n^{5.3}) by repeatedly composing small monotone formulas on randomly chosen inputs, amplifying a slight bias toward the majority value until the result computes majority exactly while remaining negation-free; the same bound extends to any monotone symmetric threshold function, including majority. These recursive approaches exploit the structure of majority to avoid enumerating all threshold-reaching subsets. Regarding lower bounds, the majority function has been shown to require monotone formulas of size at least \Omega(n^{5/3}), a bound established by Razborov using methods based on the communication structure of computations; monotone circuits, which generalize formulas by allowing shared subcomputations, can be smaller. The upper bounds mentioned yield O(n^{5.3}) for monotone formulas, but monotone circuit constructions improve this to O(n^3) size with depth O(\log n), highlighting a separation between formula and circuit complexities in the monotone setting.

For small n, exact monotone formulas are straightforward, as in the n = 3 example. For larger n, efficient monotone approximations exist: any monotone linear threshold function, including majority, can be approximated to constant accuracy (e.g., within error 1/3) by a monotone Boolean formula of polynomial size, using probabilistic methods to select relevant conjunctions. These approximations are useful in learning and optimization contexts where exact computation is infeasible.
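The threshold-(k+1) DNF described above can be generated directly. The following sketch (illustrative only; the helper names are assumptions) builds every (k+1)-subset conjunction for MAJ_n and confirms that the term count equals \binom{n}{k+1}:

```python
from itertools import combinations, product
from math import comb

def monotone_dnf_terms(n):
    """Minimal terms of the monotone DNF for MAJ_n with n = 2k + 1:
    one conjunction per (k+1)-subset of the variables."""
    k = (n - 1) // 2
    return list(combinations(range(n), k + 1))

def eval_dnf(terms, x):
    return 1 if any(all(x[i] for i in term) for term in terms) else 0

n = 5
terms = monotone_dnf_terms(n)
assert len(terms) == comb(n, (n - 1) // 2 + 1)   # C(5, 3) = 10 terms
for x in product((0, 1), repeat=n):
    assert eval_dnf(terms, x) == (1 if sum(x) >= 3 else 0)
print(f"MAJ_{n}: monotone DNF with {len(terms)} terms")
```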

Handling Ties and Extensions

Ties in Odd and Even Cases

In the majority function defined over an odd number of Boolean inputs n, ties are inherently impossible. The threshold for outputting 1 is set at (n+1)/2, which exceeds n/2 and ensures that the output always reflects a strict majority of 1s (or, otherwise, of 0s). This design avoids ambiguity, as the number of inputs cannot be evenly split between 1 and 0.

For an even number of inputs n, a tie arises precisely when exactly n/2 inputs are 1, leaving no strict majority. In this scenario, the function's output requires a specified tie-breaking rule, as the standard majority criterion does not dictate a unique result. Common resolutions include defaulting to 0 (a conservative bias toward the default or false value), defaulting to 1 (an optimistic bias toward true), or resolving randomly to introduce fairness in probabilistic contexts. For instance, with n=2, the input pairs (0,1) or (1,0) each contain one 1, constituting a tie; here, implementations might arbitrarily output 0 for simplicity in circuit design or specify a rule based on input order.

Assuming uniform random Boolean inputs, the probability of encountering a tie for even n is given by the central binomial probability \binom{n}{n/2} / 2^n. For large even n, this approximates to 1 / \sqrt{\pi n / 2}, derived via Stirling's approximation of factorials, highlighting that ties become increasingly rare as n grows, though they remain a non-negligible edge case in finite systems.

The consideration of ties in majority-based decisions traces back to 18th-century voting theory, notably in the pairwise comparison methods proposed by the Marquis de Condorcet in his 1785 Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix. Condorcet's framework addressed potential ties in majority preferences among candidates, influencing later formalizations of threshold functions in Boolean algebra.
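The tie probability above can be tabulated against its asymptotic estimate. A minimal illustrative sketch (not from the source) comparing the exact central binomial probability with 1/\sqrt{\pi n / 2}:

```python
from math import comb, pi, sqrt

def tie_probability(n):
    """Probability of an exact tie among n uniform random bits (n even)."""
    assert n % 2 == 0
    return comb(n, n // 2) / 2 ** n

for n in (2, 10, 100, 1000):
    exact = tie_probability(n)
    approx = 1 / sqrt(pi * n / 2)   # Stirling-based estimate
    print(f"n={n:5d}  exact={exact:.5f}  approx={approx:.5f}")
```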

Generalizations to Weighted Majority

The weighted majority function extends the standard majority function by assigning positive weights w_i > 0 to each input x_i \in \{0,1\} for i = 1, \dots, n, with total weight W = \sum_{i=1}^n w_i. It outputs 1 if the weighted sum \sum_{i=1}^n w_i x_i \geq W/2 and 0 otherwise. This formulation ensures that the output favors the side holding at least half the total weight, generalizing the unweighted case where all w_i = 1. Weighted majority functions form a subclass of linear threshold functions (LTFs), defined as f(\mathbf{x}) = \operatorname{sgn}\left( \sum_{i=1}^n a_i x_i + a_0 \right) for real coefficients a_i, a_0, where the weights w_i \geq 0 are nonnegative and the threshold is set to W/2. The unweighted majority function arises as the special case of an LTF with equal weights w_i = 1 for all i and threshold (n+1)/2 (assuming n odd). In this broader context, weighted variants capture asymmetric influences, such as in voting systems where participants have unequal stakes.

In machine learning, weighted majority functions underpin key models and algorithms. Perceptrons, foundational single-layer neural networks, implement LTFs and thus encompass weighted majority as a core operation for binary classification via weighted sums compared against a threshold. Ensemble methods further leverage them through weighted voting, where predictions from multiple base learners are aggregated with weights reflecting past performance; the Weighted Majority Algorithm, for instance, dynamically adjusts expert weights online to achieve near-optimal mistake bounds.

The computational complexity of working with weighted majority functions varies with weight magnitude. Determining whether a given Boolean function can be realized as a weighted threshold function (the realization problem) is co-NP-complete in general, implying that synthesizing weights for arbitrary functions is computationally intractable. However, when weights are restricted to small values (polynomially bounded in n), realization and verification become feasible in polynomial time using techniques like dynamic programming over the possible weighted sums.

As an illustrative example, consider n=3 inputs with weights w = (1, 2, 3), so W=6 and threshold W/2 = 3. The function outputs 1 if \sum w_i x_i \geq 3. For input (x_1, x_2, x_3) = (0,1,0), the sum is 2 < 3, yielding 0; for (0,0,1), the sum is 3 \geq 3, yielding 1.
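A direct rendering of this definition, using the weights from the worked example above (the function name is an illustrative assumption):

```python
def weighted_majority(weights, x):
    """Output 1 iff the weighted sum of the inputs reaches half the total weight."""
    total = sum(weights)
    return 1 if sum(w * xi for w, xi in zip(weights, x)) >= total / 2 else 0

w = (1, 2, 3)                                   # total weight W = 6, threshold W/2 = 3
assert weighted_majority(w, (0, 1, 0)) == 0     # weighted sum 2 < 3
assert weighted_majority(w, (0, 0, 1)) == 1     # weighted sum 3 >= 3
assert weighted_majority((1, 1, 1), (1, 1, 0)) == 1   # equal weights recover MAJ_3
```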

Applications and Significance

In Voting and Decision Theory

The majority function, also known as majority voting or simple majority rule, has deep roots in the study of collective decision-making, tracing back to the Enlightenment-era work of the Marquis de Condorcet in 1785. In his seminal essay Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix, Condorcet explored probabilistic models of group deliberation, laying the groundwork for understanding how aggregated individual judgments could yield reliable outcomes in democratic processes. This historical development emphasized the potential of majority rule to harness collective wisdom, influencing subsequent theories in social choice and decision theory.

A cornerstone of the majority function's application in voting theory is Condorcet's jury theorem, which demonstrates that under certain conditions, majority voting converges to the correct decision as the number of voters increases toward infinity. Specifically, if each voter independently and truthfully votes for the correct alternative with probability greater than 0.5, the probability that the majority vote selects the correct outcome approaches 1 as the electorate size grows. This theorem assumes independence among voters and a binary choice, highlighting the epistemic advantages of large-scale democratic aggregation while underscoring vulnerabilities if these assumptions fail, such as correlated errors or uninformed participation.

In practical democratic systems, simple majority rule, requiring more than 50% of votes for passage, serves as the foundational mechanism for legislative decision-making in parliaments and assemblies worldwide. For instance, in many legislative bodies, bills advance only upon securing a simple majority of attending members, balancing efficiency with broad consent, while additional safeguards such as supermajority requirements protect against overreach. This rule underpins representative democracies by enabling decisive action on policy matters, though it can amplify the influence of larger coalitions.

However, the majority function encounters fundamental limitations in multi-candidate settings, as revealed by impossibility theorems in social choice theory. Kenneth Arrow's impossibility theorem (1951) proves that no voting system, including pairwise majority comparison, can consistently produce transitive social preferences from individual ordinal rankings while satisfying basic fairness criteria like unrestricted domain, non-dictatorship, and independence of irrelevant alternatives. In particular, majority voting can fail transitivity, leading to cycles (e.g., A beats B, B beats C, C beats A) known as the Condorcet paradox, which undermines coherent collective choices.

From a game-theoretic perspective, majority rule introduces strategic incentives, where rational voters may deviate from truthful preferences to influence outcomes. In elections, voters might engage in tactical voting to avoid spoilers or unwanted results, as modeled in equilibrium analyses showing that even small expressive components in payoffs can alter equilibria and encourage insincere ballots. These incentives persist under majority rule, potentially eroding the jury theorem's probabilistic guarantees if voters anticipate others' strategies, though truthful voting remains dominant in large electorates with low pivot probabilities.
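The convergence asserted by the jury theorem can be made concrete. The sketch below is illustrative only and assumes independent voters who are each correct with probability p = 0.6; it computes the exact probability that a majority of n voters is correct:

```python
from math import comb

def majority_correct_prob(n, p):
    """P(majority of n independent voters is correct), each correct w.p. p (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 101, 1001):
    print(f"n={n:5d}  P(majority correct) = {majority_correct_prob(n, 0.6):.4f}")
# With p > 1/2 the probability rises toward 1 as n grows, as the theorem states.
```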

In Computer Science and Algorithms

In computer science, the majority function plays a central role in designing efficient algorithms for processing large datasets, particularly in streaming and fault-tolerant contexts. One seminal application is the Boyer-Moore majority vote algorithm, which identifies a majority element in a sequence (an element appearing more than half the time) in linear time and constant space. The algorithm operates in two passes: the first pass maintains a single candidate and a counter, pairing off disagreeing elements so that any non-majority candidate is eventually eliminated and a potential majority candidate survives; the second pass verifies whether that candidate indeed accounts for more than half the occurrences. Its simplicity and optimality make it foundational for one-pass stream processing in resource-constrained environments.

In fault-tolerant computing, majority voting is integral to triple modular redundancy (TMR), a hardware technique that enhances reliability by replicating a system three times and selecting the output via a majority voter to mask single-point failures. TMR ensures correct operation as long as at most one module fails, with the voter resolving discrepancies by outputting the value agreed upon by at least two modules; this approach has been widely adopted in safety-critical systems such as aerospace and industrial controls since the mid-20th century. Extensions to approximate majority in streaming models further leverage sketches like the Count-Min sketch to identify heavy hitters (elements with frequency exceeding a threshold φ) in sublinear space, providing (ε, δ)-approximations for majority-like queries in massive data streams.

From a circuit complexity perspective, the majority function highlights limitations of constant-depth circuits: it cannot be computed by polynomial-size AC^0 circuits, a result established through switching lemmas that yield exponential size lower bounds for constant-depth circuits computing it. This places majority outside AC^0, similar to the parity function, which also requires superpolynomial resources in this model, underscoring the class's inability to handle counting or threshold tasks efficiently. In modern machine learning, majority voting aggregates predictions in ensemble methods, such as bagging, introduced by Breiman in 1996, where bootstrap samples train multiple classifiers and their majority vote reduces variance and improves accuracy on classification tasks.
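A standard rendering of the two-pass Boyer-Moore vote described above is sketched below; the variable names are illustrative, and the "stream" is simply a Python list so that the second, verifying pass can re-read it:

```python
def boyer_moore_majority(stream):
    """Return the element occurring more than len(stream)/2 times, or None.

    Pass 1: maintain a candidate and a counter in O(1) extra space.
    Pass 2: verify that the surviving candidate really is a majority.
    """
    candidate, count = None, 0
    for item in stream:
        if count == 0:
            candidate, count = item, 1
        elif item == candidate:
            count += 1
        else:
            count -= 1
    if candidate is not None and 2 * sum(1 for item in stream if item == candidate) > len(stream):
        return candidate
    return None

assert boyer_moore_majority([1, 0, 1, 1, 0, 1, 1]) == 1
assert boyer_moore_majority([1, 0, 1, 0]) is None   # no strict majority
```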
