
Combinatorial principles

Combinatorial principles form the foundational toolkit of combinatorics, a branch of mathematics dedicated to the enumeration, analysis, and optimization of discrete structures such as sets, graphs, and sequences. These principles provide systematic methods for counting possibilities in finite settings, addressing problems that arise in probability, computer science, and optimization by establishing equalities between seemingly different quantities or proving the existence of certain configurations. Among the most fundamental are the sum principle and product principle, which enable basic enumeration. The sum principle states that the size of a finite set equals the sum of the sizes of the blocks in a partition into disjoint subsets, allowing possibilities to be counted across non-overlapping cases, such as the total number of ways to choose items from mutually exclusive groups. In contrast, the product principle calculates the size of a set formed by selecting one element from each of multiple independent stages; for stages of sizes k_1, k_2, \dots, k_m, the total is \prod k_i, as seen in counting ordered tuples or functions between sets, like the n^k possible functions from a k-element set to an n-element set. More advanced principles handle complexities like overlaps and equivalences. The inclusion-exclusion principle computes the size of a union of sets by adding individual sizes, subtracting pairwise intersections, adding triple intersections, and alternating signs thereafter, correcting for overcounting in scenarios such as counting elements belonging to multiple categories. The pigeonhole principle asserts that if more than n elements are distributed into n containers, at least one container holds more than one element, providing existential proofs for collisions in applications like the hatcheck problem or generalized Ramsey-type results. Complementing these, the bijection principle equates the sizes of two sets via a one-to-one correspondence, underpinning proofs in permutation counting (where the number of k-permutations of n elements is n! / (n-k)!) and multiset enumerations, yielding formulas like \binom{n+k-1}{k} for multisets of size k from n types. 
The quotient principle further refines partitioning by dividing the total size by the common block size to find the number of equal-sized blocks, useful in circular arrangement and necklace-counting problems. Collectively, these principles not only solve direct counting tasks, such as the binomial coefficients \binom{n}{k} = n! / (k!(n-k)!) for subsets, but also facilitate double-counting arguments and bijective proofs that reveal deep structural equalities without exhaustive enumeration. Their influence extends beyond pure mathematics, informing algorithms in data structures, network design, and statistical modeling.

Basic Counting Rules

Rule of Sum

The rule of sum, a foundational principle in combinatorics, states that if A and B are disjoint finite sets, then the cardinality of their union equals the sum of their individual cardinalities: |A \cup B| = |A| + |B|. This holds because the elements of A and B do not overlap, allowing the total count to be obtained simply by addition without double-counting. This principle generalizes to any finite collection of pairwise disjoint finite sets A_1, A_2, \dots, A_n, where the cardinality of the union is the sum of the cardinalities: \left|\bigcup_{i=1}^n A_i\right| = \sum_{i=1}^n |A_i|. The proof follows directly from the axioms of set theory for finite sets, particularly the definition of cardinality via bijections and the fact that a partition into disjoint subsets allows independent counting of each component. The concepts underlying the rule of sum trace back to basic counting practices evident in early civilizations, and were formalized within modern combinatorics through 19th-century texts that emphasized enumerative methods. For instance, consider selecting a piece of fruit where there are 5 apples and 3 oranges available as distinct options; the total number of choices is 5 + 3 = 8. Similarly, if two non-overlapping classes have 20 and 15 students respectively, the total number of students is 20 + 15 = 35. These examples illustrate how the rule applies to alternatives drawn from mutually exclusive categories.

Rule of Product

The rule of product, also known as the multiplication principle, states that if there are m ways to make a first choice and n independent ways to make a second choice, then the total number of ways to make both choices is m \times n. This principle applies to scenarios involving successive, independent decisions where the options at each stage do not affect the others. The rule generalizes to k stages of independent choices, where the i-th stage has n_i options for i = 1, 2, \dots, k; the total number of possible outcomes is then the product n_1 \times n_2 \times \dots \times n_k. For instance, when rolling two fair six-sided dice, there are 6 outcomes for the first die and 6 independent outcomes for the second, yielding 6 \times 6 = 36 possible results. Similarly, for a three-character password using any of 26 lowercase letters per position, the total number of distinct passwords is 26 \times 26 \times 26 = 17{,}576. In applications, the rule of product forms the basis for counting basic permutations without repetition, such as arranging k distinct objects selected from n items, which gives n \times (n-1) \times \dots \times (n-k+1) ordered sequences. This extends to full permutations of n distinct objects, totaling n!. The rule can be proved by mathematical induction on the number of stages k. For the base case k=1, there are simply n_1 outcomes. Assume the statement holds for k stages with product n_1 \times \dots \times n_k; for k+1 stages, the first k stages have that product of outcomes, and each pairs independently with the n_{k+1} options of the final stage, yielding (n_1 \times \dots \times n_k) \times n_{k+1}. By induction, the principle holds for all k \geq 1.
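The password and permutation counts above follow directly from repeated multiplication. As a minimal sketch (the function names `count_passwords` and `count_k_permutations` are illustrative, not from any standard library):

```python
# Rule of product: multiply the number of options at each independent stage.

def count_passwords(length: int, alphabet_size: int = 26) -> int:
    """Each position is an independent choice among alphabet_size symbols."""
    total = 1
    for _ in range(length):
        total *= alphabet_size
    return total

def count_k_permutations(n: int, k: int) -> int:
    """n * (n-1) * ... * (n-k+1) ordered selections without repetition."""
    total = 1
    for i in range(k):
        total *= n - i
    return total

print(count_passwords(3))          # 17576 = 26^3
print(count_k_permutations(5, 3))  # 60 = 5 * 4 * 3
```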

Inclusion and Exclusion Techniques

Inclusion-Exclusion Principle

The inclusion-exclusion principle provides a method for calculating the cardinality of the union of finite sets that may overlap, by systematically adding the sizes of individual sets and subtracting or adding the sizes of their intersections to correct for overcounting. For two sets A and B, the formula is |A \cup B| = |A| + |B| - |A \cap B|, where the subtraction accounts for elements counted in both sets. This extends the rule of sum, which applies only to disjoint sets, to handle overlaps through corrective terms. For n finite sets A_1, A_2, \dots, A_n, the general formula is |A_1 \cup A_2 \cup \dots \cup A_n| = \sum_{i} |A_i| - \sum_{i < j} |A_i \cap A_j| + \sum_{i < j < k} |A_i \cap A_j \cap A_k| - \dots + (-1)^{n+1} |A_1 \cap A_2 \cap \dots \cap A_n|, where the k-th term sums the cardinalities of all intersections of exactly k sets, alternated in sign as (-1)^{k+1}. This alternating sum ensures each element in the union is counted exactly once, regardless of how many sets it belongs to. The principle originated in the early 18th century through applications in probability and games of chance, with early formulations by Abraham de Moivre in 1718, and later developments by Adrien-Marie Legendre in the late 18th century, Daniel Augusto da Silva in the mid-19th century, and Henri Poincaré in the late 19th century. A standard derivation uses indicator functions, defined for a set A \subseteq U as the function 1_A: U \to \{0,1\} with 1_A(x) = 1 if x \in A and 0 otherwise. The indicator of the union satisfies 1_{\cup A_i}(x) = 1 - \prod_i (1 - 1_{A_i}(x)), and expanding the product by the distributive law yields the inclusion-exclusion series, as the sum over U (or the expectation) of the indicator gives the cardinality. A classic example computes the number of positive integers up to n divisible by 2 or 3. Let A be the multiples of 2 and B the multiples of 3; then |A \cup B| = \lfloor n/2 \rfloor + \lfloor n/3 \rfloor - \lfloor n/6 \rfloor, since the multiples of 6 form the intersection. 
For n = 100, this yields 50 + 33 - 16 = 67. The principle also underpins Möbius inversion in number theory, where for functions f and g on positive integers with g(n) = \sum_{d|n} f(d), inversion gives f(n) = \sum_{d|n} \mu(d) g(n/d), with \mu the Möbius function incorporating inclusion-exclusion over divisors.
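The divisibility example is easy to check directly. This sketch (the helper name `count_div_2_or_3` is illustrative) compares the inclusion-exclusion formula against a brute-force count:

```python
# Inclusion-exclusion for |A ∪ B| where A = multiples of 2, B = multiples of 3
# among 1..n; multiples of 6 form the intersection A ∩ B.

def count_div_2_or_3(n: int) -> int:
    return n // 2 + n // 3 - n // 6

# Brute-force verification for n = 100.
brute = sum(1 for x in range(1, 101) if x % 2 == 0 or x % 3 == 0)
print(count_div_2_or_3(100), brute)  # 67 67
```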

Division Principle

The division principle in combinatorics addresses overcounting in arrangements where certain objects or configurations are indistinguishable or related by symmetries, by dividing the total count of arrangements by the size of the equivalence classes formed by these indistinguishabilities. Specifically, when arranging n objects where there are groups of indistinguishable items of sizes k_1, k_2, \dots, k_m (with k_1 + k_2 + \dots + k_m = n), the initial count of n! permutations treats all objects as distinct, but each unique arrangement is overcounted by the number of ways to permute within the identical groups, which is k_1! \cdot k_2! \dots k_m!. The formula for the number of distinct arrangements is the multinomial coefficient: \binom{n}{k_1, k_2, \dots, k_m} = \frac{n!}{k_1! \, k_2! \dots k_m!}. This yields the count of unique permutations of the multiset. The proof relies on grouping equivalent permutations: the set of all n! permutations is partitioned into equivalence classes where each class consists of permutations that differ only by rearrangements within the identical groups, and each such class has size k_1! \cdot k_2! \dots k_m!, leading to the total number of distinct classes being n! / (k_1! \cdot k_2! \dots k_m!) by the division rule, which states that if there is a k-to-1 correspondence between two sets, the size of the second set is the size of the first divided by k. A classic example is counting distinct necklaces formed by arranging n fixed beads of different colors around a circle, where rotations are considered the same; the n! linear arrangements are grouped into equivalence classes of size n (one for each rotation), yielding (n-1)! distinct necklaces via division by n. 
In applications, the principle extends to the stars-and-bars method for distributing n indistinguishable objects into k distinct bins, where the number of non-negative integer solutions to x_1 + \dots + x_k = n is \binom{n + k - 1}{k - 1}, accounting for the indistinguishability of objects and dividers in the arrangement of stars and bars. While effective for uniform symmetries like identical groups or cyclic rotations, the division principle assumes all equivalence classes are of equal size; for more general group actions with varying fixed points, it is extended by Burnside's lemma, which averages the number of fixed configurations over the group elements to count orbits precisely.
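The multinomial count and the stars-and-bars formula can both be checked on small cases. In this sketch (function names are hypothetical), the multiset-permutation count 5!/(2! 3!) is confirmed by deduplicating all 5! orderings of the letters AABBB:

```python
from math import factorial, comb
from itertools import permutations

def multinomial(*sizes: int) -> int:
    """n! / (k_1! k_2! ... k_m!) for group sizes k_i summing to n."""
    n = sum(sizes)
    result = factorial(n)
    for k in sizes:
        result //= factorial(k)
    return result

# Distinct arrangements of the multiset {A, A, B, B, B}: 5!/(2! 3!) = 10,
# confirmed by deduplicating all 120 permutations.
assert multinomial(2, 3) == len(set(permutations("AABBB"))) == 10

def stars_and_bars(n: int, k: int) -> int:
    """Non-negative integer solutions of x_1 + ... + x_k = n."""
    return comb(n + k - 1, k - 1)

print(stars_and_bars(5, 3))  # 21 = C(7, 2)
```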

Combinatorial Proof Techniques

Bijective Proofs

Bijective proofs constitute a fundamental technique in combinatorics for establishing identities by demonstrating a one-to-one correspondence, or bijection, between two finite sets whose cardinalities represent the quantities in question. This method equates the sizes of the sets directly, bypassing algebraic expansions or inductive arguments to reveal structural equivalences between combinatorial objects. The approach gained prominence in the 20th century through the work of enumerative combinatorialists such as Richard P. Stanley, who emphasized bijective methods as a way to provide intuitive validations of classical results. One key advantage of bijective proofs is their ability to uncover hidden relationships and symmetries among sets, offering conceptual clarity that algebraic proofs often lack. A prominent example illustrates the identity \sum_{k=0}^n \binom{n}{k} = 2^n, where the left side enumerates the total number of ways to select subsets of size k from an n-element set for all k, and the right side counts all possible subsets. The bijection simply identifies each k-subset with itself as an element of the power set, establishing a perfect matching that covers every subset exactly once. In graph theory, the handshaking lemma asserts that the sum of vertex degrees equals twice the number of edges. This is proved bijectively by considering the set of ordered pairs (v, e) where vertex v is incident to edge e; the cardinality equals the degree sum on one hand and twice the edge count on the other, with the natural incidence mapping providing the bijection. Bijective proofs for binomial theorem identities, such as expansions of (1 + x)^n, often employ mappings between weighted selections or lattice paths and monomial terms, highlighting generating function interpretations through set equivalences. 
Common techniques include explicit constructions like permutations that pair elements uniquely, involutions that serve as self-inverse pairings to simplify sets by canceling matched components, and sign-reversing involutions that handle alternating sums by assigning signs to paired elements (which cancel) while unpaired fixed points contribute positively or negatively to the net count.
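The identity \sum_{k=0}^n \binom{n}{k} = 2^n can be checked mechanically; this sketch enumerates every k-subset for all k and confirms that the total matches the power-set count:

```python
# Each k-subset (for every k) corresponds to exactly one element of the
# power set, so the counts must agree.
from itertools import combinations
from math import comb

n = 5
all_subsets = [s for k in range(n + 1) for s in combinations(range(n), k)]

assert len(all_subsets) == sum(comb(n, k) for k in range(n + 1)) == 2 ** n
print(len(all_subsets))  # 32
```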

Double Counting

Double counting is a fundamental technique in combinatorics for verifying identities by interpreting the same finite set or quantity in two distinct ways, leading to the equality of two expressions that each count that quantity. This method relies on the principle that if two counting arguments enumerate the same collection of objects, their totals must be equal. It is particularly useful for proving equalities without constructing explicit mappings between sets. A classic example arises in graph theory, where the handshaking lemma demonstrates double counting through the sum of vertex degrees. Consider an undirected graph with vertex set V and edge set E. Counting the incidences between vertices and edges from the vertex side yields \sum_{v \in V} \deg(v), the total number of edge endpoints. Counting from the edge side, each edge contributes two endpoints, giving 2|E|. Thus, \sum_{v \in V} \deg(v) = 2|E|. This equality follows because both expressions count the same set of endpoint incidences. Another illustrative example involves lattice paths in the plane. The number of monotonic lattice paths from (0,0) to (n,m) using right steps (1,0) and up steps (0,1) can be counted by selecting positions for the n right steps among the total n+m steps, yielding \binom{n+m}{n}. Alternatively, selecting positions for the m up steps gives \binom{n+m}{m}. Since both count the identical set of valid paths, \binom{n+m}{n} = \binom{n+m}{m}. This approach highlights how double counting establishes symmetries in enumerative problems. Double counting finds wide application in proving binomial identities. For instance, to verify \binom{n}{k} = \binom{n}{n-k}, consider the set of all k-element subsets of an n-element set. This number is \binom{n}{k}. Equivalently, each such subset corresponds to choosing n-k elements to exclude, counting the same collection as \binom{n}{n-k}. The equality arises directly from these dual interpretations of subset selection. 
While double counting often implies the existence of a bijection between sets (as in the complement map for binomial symmetry), it emphasizes the aggregate equality of counts rather than explicitly constructing the correspondence, distinguishing it as a more algebraic proof technique compared to direct bijective arguments. The validity holds for any finite set, as the two counts must match if they describe the same objects.
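The handshaking argument can be illustrated by counting vertex–edge incidences both ways on a small example (the graph below is an arbitrary illustration, not from the text):

```python
# Double counting incidences: summing degrees counts each endpoint once per
# vertex, while each edge contributes exactly two endpoints, so
# sum(deg) = 2|E|.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
vertices = {v for e in edges for v in e}

deg = {v: sum(1 for e in edges if v in e) for v in vertices}
assert sum(deg.values()) == 2 * len(edges)
print(sum(deg.values()))  # 8
```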

Extremal Principles

Pigeonhole Principle

The pigeonhole principle, also known as Dirichlet's box principle or the drawer principle, is a fundamental result in combinatorics that guarantees the existence of certain overlaps or concentrations when distributing discrete objects into discrete categories. Formally, it states that if n items are placed into m containers where n > m, then at least one container must contain more than one item. This principle derives its name from the intuitive image of pigeons (items) being placed into pigeonholes (containers), where overcrowding in at least one hole is inevitable if there are more pigeons than holes. A generalized version extends this to bounds on container occupancy: if n items are distributed into m containers such that n > mk for some positive integer k, then at least one container holds more than k items. Equivalently, via an averaging argument, if the average number of items per container exceeds k, then at least one container must hold more than k items, since the total would otherwise be at most mk. The proof of the basic version proceeds by contradiction: assume every container has at most one item, which accommodates at most m items total, contradicting the placement of n > m items; thus, some container has at least two. The averaging proof similarly notes that an average of n/m > 1 implies at least one container holds strictly more than one item, as containers each holding at most one item would yield an average of at most 1. Illustrative examples highlight its utility in establishing minimal guarantees. For instance, among 13 people, at least two must share the same birth month, since there are only 12 months and 13 > 12. In a more advanced application, any selection of 55 distinct integers from 1 to 100 must contain at least one pair differing by 9, one by 10, one by 12, and one by 13. The principle's strength shines in graph theory, where it proves that in any finite simple graph with at least two vertices, at least two vertices have the same degree: the n vertices would otherwise realize all n possible degrees 0 through n-1, yet degrees 0 and n-1 cannot occur together. Probabilistic variants integrate it with expectation arguments, such as showing that random constructions yield subgraphs with guaranteed properties beyond pure existence. 
Historically, the principle appeared in rudimentary form as early as 1622 in Jean Leurechon's Selectæ Propositiones, where it was used to argue that among 100 men, at least two have the same number of hairs (assuming fewer than 100 possible hair counts). It was later popularized in the 1830s–1840s by Peter Gustav Lejeune Dirichlet, who applied it in number-theoretic proofs, notably his theorem on Diophantine approximation, though he did not claim originality. An advanced extension is the Erdős–Ginzburg–Ziv theorem (1961), which asserts that among any 2n−1 integers, some n of them have a sum divisible by n, generalizing the pigeonhole idea to zero-sum problems.
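The birth-month example can be illustrated empirically; the assertion below holds on every trial because 13 people into 12 months must collide (random sampling is a demonstration here, not a proof, since the principle itself supplies the guarantee):

```python
# Pigeonhole in action: any assignment of 13 "people" to 12 birth months
# yields a repeated month.
import random
from collections import Counter

for trial in range(1000):
    months = [random.randrange(12) for _ in range(13)]
    counts = Counter(months)
    assert max(counts.values()) >= 2  # some month holds at least two people
print("collision found in every trial")
```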

Method of Distinguished Element

The method of the distinguished element is a proof technique in extremal combinatorics that involves selecting a special or extremal element within a combinatorial structure, such as a vertex of maximum degree in a graph or an element contained in the most sets of a set family, to simplify arguments for existence, bounds, or contradictions. By isolating this element, the technique reduces the problem to analyzing a smaller or modified instance, often via induction, deletion, or recursive decomposition, thereby establishing structural properties or extremal sizes. This approach contrasts with more global methods by focusing analysis on a single representative that captures key properties of the entire structure. A prominent application appears in the proof of Dirac's theorem on Hamiltonian cycles in dense graphs. Consider a simple graph G with n \geq 3 vertices where the minimum degree satisfies \delta(G) \geq n/2. To show G contains a Hamiltonian cycle, assume no Hamiltonian cycle exists and select a longest path P in G; its endpoints u and w then serve as distinguished elements, each with degree at least n/2 by the minimum degree condition. Since P is maximal, all neighbors of u and w must lie on P, and the high degrees force a position along P where u is adjacent to the successor of a neighbor of w, closing a cycle through every vertex of P; if that cycle omitted any vertex of G, connectivity would produce a longer path, a contradiction. In extremal set theory, the method underpins proofs of the Erdős–Ko–Rado theorem, which bounds the size of intersecting k-uniform families on an n-element ground set with n \geq 2k. For an intersecting family \mathcal{F}, select a distinguished element x that belongs to the maximum number of sets in \mathcal{F}. If x lies in every set, then \mathcal{F} is contained in the star centered at x (all k-subsets containing x), so |\mathcal{F}| \leq \binom{n-1}{k-1}, with equality when \mathcal{F} is exactly the star. If no element appears in all sets, the bound still holds, but establishing it requires combining the distinguished-element reduction with techniques such as shifting or double counting to confirm the extremal structure. 
For n < 2k, any two k-subsets of the ground set necessarily intersect, so the family of all k-subsets is intersecting and exceeds the star; the hypothesis n \geq 2k is therefore essential. The technique also applies to partially ordered sets (posets), where selecting a minimal element facilitates decompositions or bounds on chains and antichains. In a finite poset P, a minimal element m (one with no elements below it) can be deleted to yield a smaller poset P \setminus \{m\}, allowing inductive arguments for properties like the height (longest chain) or width (largest antichain). For instance, repeatedly removing minimal elements partitions P into levels, aiding proofs of duality theorems such as Mirsky's theorem, which equates the length of the longest chain to the minimum number of antichains covering P. This deletion strategy leverages the distinguished minimal element to recursively bound the structure.
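The Erdős–Ko–Rado bound can be verified exhaustively for very small parameters. This sketch brute-forces every family of 2-subsets of a 5-element ground set (feasible only because there are just 2^10 candidate families):

```python
# Brute-force check of the Erdős–Ko–Rado bound for n = 5, k = 2: the largest
# intersecting family of k-subsets has size C(n-1, k-1), attained by a star.
from itertools import combinations
from math import comb

n, k = 5, 2
subsets = [frozenset(s) for s in combinations(range(n), k)]

def intersecting(family):
    return all(a & b for a in family for b in family)

best = 0
for mask in range(1 << len(subsets)):  # every subfamily via bitmask
    family = [s for i, s in enumerate(subsets) if mask >> i & 1]
    if intersecting(family):
        best = max(best, len(family))

assert best == comb(n - 1, k - 1)  # star centered at one element: C(4,1) = 4
print(best)  # 4
```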

Advanced Enumerative Tools

Generating Functions

Generating functions provide a powerful algebraic framework for encoding sequences arising in combinatorial enumeration, where the coefficients of a formal power series capture the counts of objects of various sizes. Formally, a generating function G(x) = \sum_{n=0}^{\infty} a_n x^n is defined such that a_n represents the number of combinatorial objects of size n, allowing manipulations of the series to yield closed-form expressions or asymptotic behaviors for these counts. This approach transforms counting problems into operations on polynomials or series, facilitating solutions that might otherwise require intricate recursive or inclusion-based arguments. There are two primary types of generating functions in combinatorics: ordinary generating functions (OGFs) and exponential generating functions (EGFs). OGFs, of the form G(x) = \sum_{n=0}^{\infty} a_n x^n, are suited for unlabeled structures, such as permutations or partitions, where the coefficient a_n directly counts the objects without regard to internal labeling. In contrast, EGFs, given by G(x) = \sum_{n=0}^{\infty} a_n \frac{x^n}{n!}, account for labeled structures, like labeled graphs or trees, where the n! factor normalizes for the permutations of labels, making operations like multiplication correspond naturally to disjoint unions of labeled sets. The choice between OGFs and EGFs depends on whether the combinatorial objects are distinguished by labels or treated as indistinguishable except for structure. A classic example is the binomial theorem, which interprets the expansion of (1 + x)^n = \sum_{k=0}^{n} \binom{n}{k} x^k as the OGF for the number of subsets of an n-element set, where the coefficient \binom{n}{k} counts the k-element subsets. For subset sums, the OGF \prod_{i=1}^{n} (1 + x^i) encodes the number of subsets of \{1, 2, \dots, n\} with elements summing to k, as the product reflects independent choices of including or excluding each i, contributing x^i if included. 
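The subset-sum generating function lends itself to direct computation. This sketch multiplies out \prod_{i=1}^{n} (1 + x^i) using plain coefficient lists, with no external libraries assumed; the function name is illustrative:

```python
# OGF product (1+x)(1+x^2)...(1+x^n): the coefficient of x^k counts the
# subsets of {1,...,n} whose elements sum to k.
def subset_sum_counts(n: int) -> list[int]:
    coeffs = [1]  # the constant polynomial 1
    for i in range(1, n + 1):
        new = coeffs + [0] * i
        for j, c in enumerate(coeffs):
            new[j + i] += c  # multiply the running product by (1 + x^i)
        coeffs = new
    return coeffs

counts = subset_sum_counts(4)  # subsets of {1, 2, 3, 4}, sums 0..10
# e.g. sum 5 is achieved by {1,4} and {2,3}.
print(counts)  # [1, 1, 1, 2, 2, 2, 2, 2, 1, 1, 1]
assert counts[5] == 2
assert sum(counts) == 2 ** 4  # every subset contributes to exactly one sum
```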
These examples illustrate how generating functions distill combinatorial identities into algebraic equalities, such as deriving the identity \sum_{k=0}^{n} k \binom{n}{k} = n 2^{n-1} by differentiating (1 + x)^n. Key operations on generating functions mirror combinatorial constructions. Addition of two generating functions corresponds to the disjoint union of their respective structures, summing the counts for each size n. Multiplication represents sequences or convolutions, as the coefficient of x^n in G(x) H(x) is \sum_{k=0}^{n} a_k b_{n-k}, counting ways to partition n into components from each structure. Composition, G(H(x)), models hierarchical or substitutive structures, such as trees where substructures are plugged into nodes. For EGFs, these operations adjust for labeling, ensuring multiplication aligns with labeled disjoint unions. To extract coefficients from a generating function, particularly rational ones like \frac{P(x)}{Q(x)}, partial fraction decomposition expresses it as a sum of simpler terms, such as \sum \frac{A}{(1 - \beta x)^j}, whose series expansions yield explicit formulas via the binomial theorem for negative exponents. For more advanced analysis, basic complex analysis techniques, including residue theorems at poles, provide asymptotic approximations for large n, revealing growth rates without computing every coefficient. These methods often derive functional equations from recurrence relations, linking generating functions to iterative combinatorial builds. The concept of generating functions originated in the 18th century with Leonhard Euler, who employed infinite series to solve partition and pentagonal number problems in his 1748 work Introductio in analysin infinitorum. Euler's innovations laid the groundwork for their systematic use in enumeration, predating the formal naming by Pierre-Simon Laplace. 
In the 20th century, George Pólya expanded their application through his 1937 enumeration theorem, using cycle indices in generating functions to count symmetric structures like graphs and chemical isomers under group actions. In modern combinatorics, computational tools like SageMath implement generating function manipulations, including series expansion, coefficient extraction, and operations for both OGFs and EGFs, enabling efficient solving of large-scale enumeration problems beyond manual calculation. For instance, SageMath's built-in functions support symbolic computation of products and compositions, as well as numerical evaluation for asymptotic analysis in applied contexts.

Recurrence Relations

Recurrence relations provide a fundamental framework in combinatorics for defining sequences where each term is determined by preceding terms, enabling the recursive enumeration of combinatorial objects. Formally, a recurrence relation expresses a_n = f(a_{n-1}, \dots, a_{n-k}) for some function f and fixed order k \geq 1, where the sequence \{a_n\} counts the size of a combinatorial structure for input size n. These relations capture dependencies that arise naturally in problems involving hierarchical or decomposable structures, distinguishing them from direct counting methods by emphasizing step-by-step construction. A classic example is the Fibonacci sequence, which arises in the combinatorial problem of tiling a 2 \times n board with dominoes. Let a_n denote the number of such tilings; then a_n = a_{n-1} + a_{n-2}, since the tiling either ends with a vertical domino (leaving a 2 \times (n-1) board) or two horizontal dominoes (leaving a 2 \times (n-2) board). With initial conditions a_0 = 1 (the empty tiling) and a_1 = 1 (one vertical domino), this uniquely determines the sequence, yielding the Fibonacci numbers shifted by one index. Another linear example is the binomial coefficients, where \binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k} for 1 \leq k \leq n-1, reflecting the choice of including or excluding a particular element in a subset; boundary conditions \binom{n}{0} = \binom{n}{n} = 1 and \binom{n}{k} = 0 for k > n or k < 0 ensure uniqueness. Non-linear recurrences extend this approach to more intricate structures. For instance, the Catalan numbers C_n, which count the number of full binary trees with n+1 leaves or the number of valid parenthesis sequences of length 2n, satisfy C_0 = 1 and C_n = \sum_{i=0}^{n-1} C_i C_{n-1-i} for n \geq 1, where the sum decomposes the tree at its root into left and right subtrees. This quadratic dependence on previous terms illustrates how non-linearity arises in enumerating objects with multiplicative compositions. 
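The domino-tiling recurrence a_n = a_{n-1} + a_{n-2} can be evaluated bottom-up with dynamic programming, as in this sketch (the function name `tilings` is illustrative):

```python
# Tilings of a 2×n board with dominoes: a_0 = a_1 = 1 and
# a_n = a_{n-1} + a_{n-2}, computed iteratively in O(n) time.
def tilings(n: int) -> int:
    if n < 2:
        return 1
    prev2, prev1 = 1, 1
    for _ in range(2, n + 1):
        prev2, prev1 = prev1, prev1 + prev2
    return prev1

print([tilings(n) for n in range(8)])  # [1, 1, 2, 3, 5, 8, 13, 21]
```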
To solve linear homogeneous recurrences with constant coefficients, such as a_n = c_1 a_{n-1} + \dots + c_k a_{n-k}, one forms the characteristic equation r^k - c_1 r^{k-1} - \dots - c_k = 0; the roots determine the general solution as a linear combination of terms like r_i^n (or n^m r_i^n for repeated roots), fitted to initial conditions. Generating functions offer another tool, transforming the recurrence into an equation for the ordinary generating function \sum a_n x^n, often yielding a closed form via partial fractions, though full derivations lie beyond basic recurrence methods. Applications abound in recursive counting problems. Lattice paths from (0,0) to (n,k) in a grid, avoiding certain regions, follow recurrences obtained by considering the last step's direction. The number of rooted labeled trees on n vertices obeys t_n = n^{n-1} but can be approached recursively via Prüfer codes or decompositions summing over subtree sizes. Integer partitions p(n), the number of ways to write n as a sum of positive integers disregarding order, satisfy the recurrence p(n) = \sum_{k \neq 0} (-1)^{k-1} p(n - \omega(k)), where \omega(k) = k(3k - 1)/2 for k = \pm 1, \pm 2, \dots are the generalized pentagonal numbers, derived from the generating function's Euler product. In practice, especially for non-linear or high-order cases, sequences are computed iteratively using dynamic programming, storing prior terms in an array to build up to a_n in O(n) time for fixed order, or via matrix exponentiation for linear cases to reach a_n in O(\log n) matrix multiplications. The specification of initial conditions, typically the first k terms for an order-k recurrence, ensures the sequence is uniquely defined thereafter, as each subsequent term is a deterministic function of the prior ones; without them, infinitely many sequences may satisfy the relation. Recurrence relations have been integral to combinatorics since the 17th century, notably in Blaise Pascal's 1665 Traité du triangle arithmétique, which used them to compute binomial coefficients for probability calculations. 
Systematic study and solving techniques, including characteristic equations, were formalized in the 18th and 19th centuries by mathematicians such as Abraham de Moivre and Leonhard Euler.
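Euler's pentagonal-number recurrence for p(n) described above can be implemented by storing prior terms, as sketched here (assuming p(0) = 1 and p(m) = 0 for m < 0; the function name is illustrative):

```python
# Partition function via p(n) = sum_{k != 0} (-1)^{k-1} p(n - ω(k)),
# where ω(k) = k(3k-1)/2 over k = ±1, ±2, ... are generalized pentagonal
# numbers; terms with negative argument are zero.
def partitions(limit: int) -> list[int]:
    p = [1] + [0] * limit
    for n in range(1, limit + 1):
        k, total = 1, 0
        while True:
            pentagonals = [k * (3 * k - 1) // 2, k * (3 * k + 1) // 2]
            if pentagonals[0] > n:
                break
            sign = -1 if k % 2 == 0 else 1
            for g in pentagonals:
                if g <= n:
                    total += sign * p[n - g]
            k += 1
        p[n] = total
    return p

print(partitions(10))  # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```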

References

  1. [1]
    [PDF] Combinatorics Through Guided Discovery1 - Dartmouth Mathematics
    Nov 6, 2004 · illustrates one of the fundamental principles of combinatorial mathematics, the bijection principle: Two sets have the same size if and only ...<|control11|><|separator|>
  2. [2]
    cardinality of disjoint union of finite sets - PlanetMath
    Mar 22, 2013 · Let {Ak}nk=1 { A k } k = 1 n be a family of mutually disjoint, finite sets. Then ∣∣⋃nk=1Ak∣∣=∑nk=1∣Ak∣ ∣ ⋃ k = 1 n A k ∣ = ∑ k = 1 n ∣ A k ∣ .
  3. [3]
    [PDF] Basic Counting - UCSD Math
    By the Rule of Sum, this gives a total of 325 lists, or 326 if we count the empty list. In Exercise 1.2.11 you are asked to obtain an estimate when “5-set” is ...Missing: textbook | Show results with:textbook
  4. [4]
    [PDF] Combinatorics Sum and Product Rules Some Subtler Examples
    The Sum Rule: If there are n(A) ways to do A and, distinct from them, n(B) ways to do B, then the number of ways to do A or B is n(A) + n(B). • This rule ...
  5. [5]
    [PDF] An Introduction to Combinatorics and Graph Theory - Whitman College
    addition principle here: set A1 is all pairs (1,x), set A2 is all pairs (2 ... Graph Sequences, Bulletin of the Australian Mathematics Society, vol. 33 ...
  6. [6]
    [PDF] 1 The Inclusion-Exclusion Principle - Arizona Math
    Take the expectation, and use the fact that the expectation of the indicator function 1A is the probability P(A). indicator functions may thus be written.
  7. [7]
    Principle of Inclusion and Exclusion (PIE) | Brilliant Math & Science ...
    The principle of inclusion and exclusion (PIE) is a counting technique that computes the number of elements that satisfy at least one of several properties.
  8. [8]
    [PDF] M¨OBIUS INVERSION FORMULA 1. Introduction Many problems in ...
    The Principle of Inclusion and Exclusion (PIE) is used to calculate the size of the union of finite sets. We will use the notation PIEn to denote the Principle ...
  9. [9]
    [PDF] Combinatorics - Cornell: Computer Science
    Division Rule: If there is a k-to-1 correspondence between of objects of type A with objects of type B, and there are n(A) objects of type A, then there are n(A)/ ...<|control11|><|separator|>
  10. [10]
    [PDF] Lecture 5: Permutations of multisets - UC Davis Mathematics
    Jan 13, 2021 · Assume that n is a total number of elements in a multiset S (counting repetition). Then an r-permutation is again an ordering of r elements ...Missing: combinatorics | Show results with:combinatorics
  11. [11]
    CTGD Groups Acting on Sets
    When we used the quotient principle to count circular seating arrangements or necklaces, we partitioned up a set of lists of people or beads into blocks of ...<|separator|>
  12. [12]
    Stars and Bars - Discrete Mathematics
    The point: elements of the domain are distinguished, cookies are indistinguishable. This is analogous to the distinction between permutations (like counting ...
  13. [13]
    [PDF] Analysis and Applications of Burnside's Lemma - MIT Mathematics
    May 17, 2018 · Abstract. Burnside's Lemma, also referred to as Cauchy-Frobenius Theorem, is a result of group theory that is used to count distinct objects.
  14. [14]
    Website for "Bijective Combinatorics" by Nick Loehr
    Bijections and bijective proofs are introduced at an early stage and are then applied to help count compositions, multisets, and Dyck paths. The end of the ...
  15. [15]
    [PDF] BIJECTIVE PROOF PROBLEMS
    Aug 18, 2009 · The statements in each problem are to be proved combinatorially, in most cases by exhibiting an explicit bijection between two sets.
  16. [16]
    [PDF] Enumerative Combinatorics Volume 1 second edition - Mathematics
    What is Enumerative Combinatorics? Enumerative combinatorics has undergone enormous development since the publication of the first edition of this book in 1986.
  17. [17]
    [PDF] CMSC 27130 Honors Discrete Mathematics - Full-Time Faculty
    Theorem 21.16 (Handshaking lemma). Let G = (V,E) be a graph with e edges. Then $2e = \sum_{v \in V} \deg_G(v)$. Proof. Let S be the set of tuples (e, v) where e ∈ E ...
  18. [18]
    Summary of Techniques in Bijective Proofs (Involutions and Sign-Reversing)
  19. [19]
    Counting-based Reasoning - CSC 208: Discrete Structures
    In other words, if we can count a collection of objects in two different ways, those two different ways must be equal. This is the principle of double counting ...
  20. [20]
    [PDF] The pigeon-hole principle and double counting - Pedro Tamaroff
    Sep 12, 2018 · Double counting gives us the following result, which implies in particular the so-called handshaking lemma: $\sum_{v \in V} \deg(v) = 2|E|$.
  21. [21]
    Proofs in Combinatorics - Discrete Mathematics
    $\binom{n}{0} + \binom{n}{1} + \binom{n}{2} + \dots + \binom{n}{n} = 2^n$. Now try giving a combinatorial proof that uses lattice paths. You will want to consider a lot of lattice ...
  22. [22]
    [PDF] Counting
    proof that uses one of the following methods. • A double counting proof uses counting arguments to prove that both sides of an identity count the same objects ...
  23. [23]
    [PDF] Counting in Two Ways - Yufei Zhao
    Jun 26, 2007 · ... combinatorics problems involve looking at a quantity in at least two different ways. This technique is often called “double counting.” In ...
  24. [24]
    Pigeonhole Principle - Interactive Mathematics Miscellany and Puzzles
    Variously known as the Dirichlet Principle, the statement admits an equivalent formulation: (2). If n > m pigeons are put into m pigeonholes, there's a hole ...
  25. [25]
    [PDF] The Pigeonhole Principle, Variants and Applications - Squarespace
    The Pigeonhole Principle is actually based on a more general fact concerning the average of a set of real numbers. Theorem 3.2. Let $S = \{a_1, a_2, \dots, a_n\}$ be a ...
  26. [26]
    [PDF] Pigeonhole Principle and the Probabilistic Method - MIT Mathematics
    Feb 20, 2015 · The degree of a vertex v in a graph is the number of edges containing v. Theorem 1. In any finite graph, there are two vertices of equal degree.
  27. [27]
    Proof Bell-Number $B(n)=\sum_{k=0}^{n-1}\binom{n-1}{k}B(k)$
    Jun 14, 2020 · The method of distinguished element can be applied. The result is trivial for the singleton set and the empty set, so we will prove that Bn+ ...
  28. [28]
    [PDF] Dirac's theorem
    Proof: The proof is by an explicit construction, that is, we show that if G satisfies the condition in the theorem then we can construct a Hamiltonian cycle in ...
  29. [29]
    [PDF] generatingfunctionology - Penn Math
    May 21, 1992 · This book is about generating functions and some of their uses in discrete mathematics. The subject is so vast that I have not attempted to give ...
  30. [30]
    [PDF] AC.pdf - Analytic Combinatorics
    Analytic combinatorics aims to enable precise quantitative predictions of the properties of large combinatorial structures. The theory has emerged over ...
  31. [31]
    [PDF] Generating Functions
    Mar 1, 2015 · We are going to discuss enumeration problems, and how to solve them using a powerful tool: generating functions.
  32. [32]
    AC Exponential generating functions - Applied Combinatorics
    While an ordinary generating function has the form $\sum_n a_n x^n$, an exponential generating function is based on the power series for the exponential function ...
  33. [33]
    3. Generating Functions
    3.2 Exponential Generating Functions. Some sequences are more conveniently handled by a generating function that involves a normalizing factor: Definition.
  34. [34]
    [PDF] Generating Functions - Berkeley Math
    This example shows that the Binomial Theorem, which can be proved by mathematical induction, can be used to derive the formula for the number of r-combinations ...
  35. [35]
    Chapter 4: Formal Series and Generating Functions - enumeration.ca
    In this chapter we discuss formal series, which provide the algebraic background needed to define and compute with generating series.
  36. [36]
    Generating Series - Combinatorics
    This file makes a number of extensions to lazy power series by endowing them with some semantic content for how they're to be interpreted.
  37. [37]
    Introduction to combinatorics in Sage
    It covers mainly the treatment in Sage of the following combinatorial problems: enumeration (how many elements are there in a set?), listing (generate all the ...
  38. [38]
    [PDF] Recurrence Relations and Generating Functions
    A formula that recursively defines a function is called a “recurrence relation” or a “recurrence equation”.
  39. [39]
    [PDF] Recurrences - Cornell Mathematics
    This document aims to give the reader an introduction to the use of recurrence relations in combinatorics. Recurrences can be used to count families of ...
  40. [40]
    5.4 Counting Fibonacci numbers with tiles
    The number of ways to tile an n -board is a Fibonacci number! This means that anything we did with Fibonacci numbers can now be considered as tiling questions.
  41. [41]
    From Pascal's Triangle to the Bell-shaped Curve
    Pascal also pioneered the use of the binomial coefficients in the analysis of games of chance, giving the start to modern probability theory. In this column we ...
  42. [42]
    3.5 Catalan Numbers
    It is possible to see directly that $A_0=A_1=1$ and that the numbers $A_n$ satisfy the same recurrence relation as do the $C_n$, which implies that $A_n=C_n$, without ...
  43. [43]
    3.4 Recurrence Relations - Generating Functions
    A recurrence relation defines a sequence $\{a_i\}_{i=0}^{\infty}$ by expressing a typical term $a_n$ in terms of earlier terms, $a_i$ for $i<n$. For example, the famous Fibonacci ...
  44. [44]
    [PDF] Combinatorics - UCLA Mathematics
    It is interesting to note that the generating function for a linear recurrence is always a rational function where the denominator depends only on the recurrence ...
  45. [45]
    [PDF] Lecture 6: Combinatorics - Steven Skiena
    Sum Rule – The sum rule states that if there are |A| possibilities from set A and |B| possibilities from set B, then there are |A| + |B| ways for either A ...
  46. [46]
    [PDF] 1 Recurrence Relations and Generating Functions - DSpace@MIT
    Feb 6, 2009 · 1.2 A Combinatorial Interpretation of the Fibonacci Num ... The number of domino tilings of a 2-by-n grid is counted by the nth Fibonacci number, ...
  47. [47]
    [PDF] NOTES FOR MATH 188 Contents 1. Linear recurrence relations 2 ...
    In general, if we want to define a sequence using a linear recurrence relation of order d, we need to specify the first d initial values $a_0, a_1, \dots, a_{d-1}$ to allow ...
  48. [48]
    [PDF] Pascal's Treatise on the Arithmetical Triangle
    Pascal, however, was the first to connect binomial coefficients with combinatorial coefficients in probability. In fact, a major motivation for Pascal was a ...