Marginal distribution

In probability theory and statistics, a marginal distribution is the probability distribution of a single random variable or a subset of random variables derived from a joint probability distribution by summing or integrating out the probabilities of the other variables. This process effectively ignores the dependencies on the excluded variables, providing the unconditional probability distribution for the variable(s) of interest. For discrete random variables, the marginal probability mass function is obtained by summing the joint probabilities over all possible values of the other variables, as in f_X(x) = \sum_y f_{X,Y}(x,y). In the continuous case, the marginal probability density function results from integrating the joint density, such as f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x,y) \, dy.

Marginal distributions are fundamental in multivariate analysis because they allow researchers to focus on individual variables without considering interactions, which is essential for tasks like hypothesis testing, simulation, and model simplification. For instance, in a bivariate joint distribution table, the marginal probabilities appear in the row and column totals, representing the distributions of each variable alone. They differ from conditional distributions, which account for the value of another variable, and from the full joint distribution, which captures all interdependencies. Understanding marginals is crucial in fields like Bayesian statistics, where summing out latent variables yields predictive distributions, and machine learning, where hidden states are marginalized out in probabilistic models.

The concept extends to higher dimensions, where marginalizing over multiple variables produces the distribution for any subset, facilitating analysis of complex datasets. Properties of marginal distributions include preserving the means and variances of the retained variables and enabling the calculation of expectations via iterated sums or integrals, as per the law of total expectation. In practice, computing marginals analytically is straightforward for simple cases but often requires numerical methods or approximations for high-dimensional or non-standard distributions.

Definition

General concept

In probability theory and statistics, a marginal distribution is the probability distribution of one or more random variables from a larger set, derived from their joint distribution by summing or integrating out the probabilities associated with the remaining variables. This process, known as marginalization, effectively isolates the distribution of the variables of interest while preserving the total probability mass or density. The resulting marginal distribution captures the behavior of the selected variables without regard to the specific values of the others, making it a fundamental tool for simplifying multivariate analyses. The term "marginal distribution" originates from the practice of recording totals in the margins of joint probability tables, a convention that emerged in early 20th-century statistics with the development of contingency table analysis around 1900. Intuitively, obtaining a marginal distribution is akin to collapsing a multi-dimensional table into a lower-dimensional one by summing the entries along rows or columns, thereby focusing on the totals for the variables of interest. In general, for two random variables X and Y with joint distribution P(X, Y), the marginal distribution of X, denoted P(X), is obtained through the marginalization operation over Y. This framework extends naturally to subsets of any collection of random variables, providing a way to extract univariate or lower-dimensional distributions from more complex joint structures.

Discrete case

In the discrete case, the marginal distribution of a discrete random variable X from a joint distribution of discrete random variables X and Y is defined by its probability mass function (PMF), given by p_X(x) = \sum_y p_{X,Y}(x,y), where the sum is over all possible values of Y, and p_{X,Y}(x,y) is the joint PMF. This formula extracts the distribution of X by aggregating the joint probabilities across the support of Y. Similarly, the marginal PMF for Y is p_Y(y) = \sum_x p_{X,Y}(x,y). The marginal cumulative distribution function (CDF) for the discrete variable X is then obtained by summing the marginal PMF up to x: F_X(x) = \sum_{k \leq x} p_X(k), where the sum is over all discrete points k in the support of X that are less than or equal to x. This CDF fully characterizes the marginal distribution, reflecting the countable nature of the outcomes. To compute the marginal PMF in practice, one constructs a joint PMF table representing p_{X,Y}(x,y) for the finite or countable supports of X and Y, then sums the entries along the rows (for p_X(x)) or columns (for p_Y(y)). For instance, consider a simple bivariate distribution where X takes values {1, 2} and Y takes values {a, b}, with the following joint PMF table:
                   y = a    y = b    Marginal p_X(x)
  x = 1            0.2      0.3      0.5
  x = 2            0.1      0.4      0.5
  Marginal p_Y(y)  0.3      0.7      1.0
Here, the marginal probability for X = 1 is 0.2 + 0.3 = 0.5, and similarly for the others, ensuring the marginals sum to 1 by the law of total probability. This tabular approach facilitates verification that \sum_x p_X(x) = 1 and \sum_y p_Y(y) = 1, a property inherent to the additivity of probabilities. In statistical applications, these marginal distributions often appear in the margins of contingency tables, which tabulate observed frequencies analogous to joint PMFs in probability models, allowing inference on individual variables while ignoring associations. This framework contrasts with the continuous case, where integration replaces summation to obtain marginals.
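
To make the row and column sums concrete, here is a minimal sketch (assuming NumPy is available) that computes both marginal PMFs from the joint table above; the array entries are exactly those in the example.

```python
import numpy as np

# Joint PMF from the table above: rows are x = 1, 2; columns are y = a, b.
joint = np.array([[0.2, 0.3],
                  [0.1, 0.4]])

p_X = joint.sum(axis=1)  # sum across the values of Y -> marginal of X
p_Y = joint.sum(axis=0)  # sum across the values of X -> marginal of Y

print(p_X)                    # [0.5 0.5]
print(p_Y)                    # [0.3 0.7]
print(p_X.sum(), p_Y.sum())   # 1.0 1.0, as total probability requires
```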

Continuous case

In the continuous case, the marginal distribution of a random variable is derived from the joint probability density function (PDF) of two or more continuous random variables. For jointly continuous random variables X and Y with joint PDF f_{X,Y}(x,y), the marginal PDF of X, denoted f_X(x), is obtained by integrating the joint PDF over all possible values of Y: f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x,y) \, dy. This integral represents the total probability density associated with each value of x, marginalizing out the dependence on y. The limits of integration must align with the support of the joint distribution to ensure the result is well-defined; if the joint PDF has a restricted support region (e.g., y only defined for 0 \leq y \leq 1 given x), the integral bounds are adjusted accordingly, such as \int_{0}^{1} f_{X,Y}(x,y) \, dy, rather than extending to infinity. Improper integrals arise naturally when the support is unbounded, but the joint PDF's normalization guarantees that f_X(x) integrates to 1 over its domain. The marginal cumulative distribution function (CDF) of X, F_X(x) = P(X \leq x), follows from the marginal PDF as F_X(x) = \int_{-\infty}^{x} f_X(t) \, dt. This can also be expressed directly via the joint PDF by iterated integration: F_X(x) = \int_{-\infty}^{x} \int_{-\infty}^{\infty} f_{X,Y}(t,y) \, dy \, dt, though the marginal PDF route is typically more straightforward for computation. Unlike the discrete case, which relies on summation over probability mass functions, the continuous marginal distribution emphasizes probability densities and requires integration, often necessitating techniques such as a change of variables (e.g., Jacobian transformations) when the joint PDF is expressed in non-Cartesian coordinates or over complex support regions.
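
As an illustration of the integration step, the sketch below uses an assumed joint density f_{X,Y}(x,y) = x + y on the unit square (not taken from this article) and recovers the marginal f_X(x) = x + 1/2 by numerical integration with SciPy's quad routine; note that the integration limits follow the support of Y rather than extending to infinity.

```python
from scipy.integrate import quad

# Assumed joint PDF for illustration: f(x, y) = x + y on [0, 1] x [0, 1].
def joint_pdf(x, y):
    return x + y if (0.0 <= x <= 1.0 and 0.0 <= y <= 1.0) else 0.0

# Marginal PDF of X: integrate the joint PDF over the support of Y.
def marginal_pdf_x(x):
    value, _ = quad(lambda y: joint_pdf(x, y), 0.0, 1.0)
    return value

print([round(marginal_pdf_x(x), 3) for x in (0.0, 0.25, 0.5, 0.75, 1.0)])
# [0.5, 0.75, 1.0, 1.25, 1.5] -- matches the analytic marginal x + 1/2
print(round(quad(marginal_pdf_x, 0.0, 1.0)[0], 6))  # 1.0, so f_X is a valid density
```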

Relations to Other Distributions

With joint distributions

Marginal distributions are obtained from joint distributions through the process of marginalization, which involves summing the probability mass function (PMF) over the possible values of the other variables in the discrete case, or integrating the probability density function (PDF) over the range of the other variables in the continuous case. For two discrete random variables X and Y, the marginal PMF of X is given by P_X(x) = \sum_y P_{X,Y}(x,y), where the sum is taken over all possible values of Y. Similarly, for continuous variables, the marginal PDF of X is f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x,y) \, dy.[22] This marginalization acts as a projection from the multidimensional joint space onto the lower-dimensional space of the variable of interest, effectively discarding information about dependencies between variables. The joint distribution cannot be uniquely recovered from the marginal distributions alone, as multiple joint distributions can yield the same marginals; reconstruction requires supplementary information, such as conditional distributions. In multivariate joint distributions involving more than two variables, the operation of marginalization exhibits associativity: the marginal distribution for a subset of variables remains the same irrespective of the sequence in which the remaining variables are marginalized out, due to the associative nature of summation and integration. A key implication arises under independence: if random variables X and Y are independent, their joint distribution factors into the product of their marginal distributions, P_{X,Y}(x,y) = P_X(x) P_Y(y), ensuring that the marginal of X extracted from the joint coincides precisely with its standalone marginal, with no influence from Y. This factorization highlights how independence eliminates dependencies, simplifying the relationship between joint and marginal forms. The connection between joint, marginal, and conditional distributions is bridged by the formula expressing the joint PMF in terms of the marginal and conditional: P_{X,Y}(x,y) = P_X(x) P_{Y|X}(y|x), where P_{Y|X}(y|x) is the conditional PMF of Y given X = x. An analogous relation holds for PDFs: f_{X,Y}(x,y) = f_X(x) f_{Y|X}(y|x). This decomposition underscores the complementary role of marginal and conditional components in fully specifying the joint distribution.
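
The non-uniqueness point can be checked numerically. The sketch below (NumPy assumed, with illustrative numbers reusing the marginals from the earlier table) builds two different joint PMFs, one the independent product of the marginals and one with dependence, and shows that both collapse to the same marginals.

```python
import numpy as np

p_X = np.array([0.5, 0.5])
p_Y = np.array([0.3, 0.7])

# Joint under independence: the outer product of the marginals.
joint_independent = np.outer(p_X, p_Y)

# A different, dependent joint chosen (illustratively) to share the same marginals.
joint_dependent = np.array([[0.30, 0.20],
                            [0.00, 0.50]])

for joint in (joint_independent, joint_dependent):
    print(joint.sum(axis=1), joint.sum(axis=0))
# Both print [0.5 0.5] [0.3 0.7]: the marginals alone do not determine the joint.
```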

With conditional distributions

In probability theory, the conditional distribution describes the probability distribution of one random variable given the value of another. For discrete random variables X and Y, the conditional probability mass function is defined as P_{X \mid Y}(x \mid y) = \frac{P_{X,Y}(x,y)}{P_Y(y)}, provided that P_Y(y) > 0, where P_{X,Y}(x,y) is the joint probability mass function and P_Y(y) is the marginal probability mass function of Y. For continuous random variables, the conditional probability density function is similarly given by f_{X \mid Y}(x \mid y) = \frac{f_{X,Y}(x,y)}{f_Y(y)}, assuming f_Y(y) > 0, with f_{X,Y}(x,y) denoting the joint density and f_Y(y) the marginal density of Y. Both marginal and conditional distributions are derived from the joint distribution of the variables. A key distinction between marginal and conditional distributions lies in how they handle the conditioning variable. The marginal distribution of X ignores Y entirely by averaging over all possible values of Y through summation or integration of the joint distribution, providing an unconditional view of X's behavior. In contrast, the conditional distribution fixes Y at a specific value y, restricting the analysis to the subset of outcomes where Y = y and revealing how X behaves under that condition. This difference underscores their complementary roles in probabilistic modeling: marginals capture the overall, unconditional characteristics of a random variable, while conditionals account for dependencies and provide context-specific insights. In probabilistic inference, marginal distributions are used to assess the general properties of a random variable without additional constraints, such as computing expected values or variances in isolation. Conditional distributions, however, play a central role in scenario-based reasoning, enabling updates to beliefs based on observed evidence; for instance, they form the basis of Bayes' theorem, where the posterior distribution is a conditional distribution proportional to the likelihood times the prior. A fundamental property linking the two is that the marginal distribution of X can be obtained by averaging the conditional distribution over the marginal of Y, as per the law of total probability: P_X(x) = \sum_y P_{X \mid Y}(x \mid y) P_Y(y) for the discrete case, or the integral analog f_X(x) = \int_{-\infty}^{\infty} f_{X \mid Y}(x \mid y) f_Y(y) \, dy for the continuous case, demonstrating that marginalizing over the conditional recovers the unconditional marginal.
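
A short numerical check of this law of total probability, using the joint table from the discrete example (NumPy assumed): averaging the conditional PMF of X given Y over the marginal of Y reproduces the marginal of X.

```python
import numpy as np

joint = np.array([[0.2, 0.3],    # rows: values of X
                  [0.1, 0.4]])   # columns: values of Y

p_Y = joint.sum(axis=0)          # marginal PMF of Y
cond_X_given_Y = joint / p_Y     # column y holds P(X = x | Y = y)

# Law of total probability: average the conditional over the marginal of Y.
p_X_recovered = cond_X_given_Y @ p_Y
print(p_X_recovered)                                   # [0.5 0.5]
print(np.allclose(p_X_recovered, joint.sum(axis=1)))   # True
```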

Examples and Applications

Bivariate example

A classic example of a bivariate distribution arises when rolling two six-sided dice. Let X denote the outcome of the first die and Y the outcome of the second die, each taking values in \{1, 2, 3, 4, 5, 6\}. Since the dice are fair and independent, the joint probability mass function (PMF) is uniform: p_{X,Y}(i,j) = P(X = i, Y = j) = \frac{1}{36} \quad \text{for } i,j = 1, 2, \dots, 6. This PMF can be visualized in a 6×6 table, where each cell (i,j) contains the probability \frac{1}{36}, the row sums (marginals for X) are each \frac{6}{36} = \frac{1}{6}, and the column sums (marginals for Y) are similarly \frac{1}{6}. To compute the marginal PMF of X, sum the joint probabilities over all possible values of Y: p_X(i) = \sum_{j=1}^6 p_{X,Y}(i,j) = \sum_{j=1}^6 \frac{1}{36} = \frac{6}{36} = \frac{1}{6} \quad \text{for } i = 1, 2, \dots, 6. By symmetry, the marginal PMF of Y is identical: p_Y(j) = \frac{1}{6} for j = 1, 2, \dots, 6. This follows the general formula for the marginal PMF in the discrete case. The resulting marginal distributions are uniform over \{1, 2, 3, 4, 5, 6\}, matching the known distribution of a single fair die, which verifies the computation since the dice are independent.
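
The dice computation is easy to replicate; the sketch below (NumPy assumed) fills the 6×6 joint table with 1/36 and confirms that both marginals are uniform.

```python
import numpy as np

# Joint PMF for two fair, independent six-sided dice.
joint = np.full((6, 6), 1.0 / 36.0)

p_X = joint.sum(axis=1)  # marginal of the first die
p_Y = joint.sum(axis=0)  # marginal of the second die

print(np.allclose(p_X, 1.0 / 6.0))  # True: uniform over {1, ..., 6}
print(np.allclose(p_Y, 1.0 / 6.0))  # True, by symmetry
```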

Real-world application

In demographic analysis, a practical application of marginal distributions arises in processing U.S. Census Bureau data on household income and age. The American Community Survey (ACS) publishes joint distributions in tables such as B19037, which cross-tabulates age groups of householders (e.g., under 25, 25–44, 45–64, and 65 and over) with binned income categories (e.g., less than $10,000 to $200,000 or more), providing counts or percentages for each combination based on survey responses. This binned joint distribution reflects real-world data from millions of households, capturing variation driven by demographic and economic factors. To obtain the marginal income distribution, analysts sum the joint table entries across all age groups for each income bin, yielding the overall proportion or count of households in each income category irrespective of age. For instance, this computation, aggregating data from the 2022 ACS, reveals that approximately 5.5% of households earned less than $10,000, while 11.5% earned $200,000 or more, derived directly from the totals of the joint table. These marginal distributions simplify complex analyses by focusing on aggregate income patterns, enabling policymakers to calculate key metrics like the median household income ($74,755 as of the 2022 ACS 1-year estimates) without conditioning on age, which supports broad economic indicators and inequality assessments. In policy contexts, such marginals inform decisions on taxation, social welfare programs, and poverty thresholds; for example, the Census Bureau uses them to track income inequality via the Gini index, guiding federal budget allocations for social programs. This aggregation highlights overall economic health, as seen in reports where stagnant median incomes for younger age groups influence youth-targeted initiatives. Real-world joint data often faces incompleteness due to survey nonresponse, privacy protections, and aggregation for disclosure avoidance, particularly since the 2020 Census adopted differential privacy techniques that introduce noise to prevent re-identification, potentially distorting fine-grained distributions. To address this, analysts employ approximations such as histogram-based marginals from binned tables or imputation methods to reconstruct reliable aggregates from noisy or partial joint data, ensuring usability for policy while maintaining confidentiality.
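
The aggregation step described above amounts to summing a binned age-by-income table over its age dimension. The sketch below uses entirely made-up counts (not ACS figures) in a table shaped like a simplified age-by-income cross-tabulation, purely to show the computation.

```python
import numpy as np

# Hypothetical counts, NOT actual ACS data; rows are age groups, columns are income bins.
age_groups = ["<25", "25-44", "45-64", "65+"]
income_bins = ["<$10k", "$10k-$50k", "$50k-$200k", "$200k+"]
counts = np.array([
    [ 300, 1200,  900,   50],
    [ 500, 3000, 5200,  800],
    [ 400, 2500, 5600, 1100],
    [ 600, 3200, 2800,  350],
])

# Marginal income distribution: sum over the age groups, then normalize.
income_totals = counts.sum(axis=0)
income_shares = income_totals / counts.sum()
for label, share in zip(income_bins, income_shares):
    print(f"{label:>11}: {share:.1%}")
```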

Multivariate Extensions

Definition in higher dimensions

In multivariate settings, the concept of marginal distribution generalizes naturally to higher dimensions, where a joint distribution involves three or more random variables and one seeks the distribution of a subset of them. Consider a random vector \mathbf{X} = (X_1, \dots, X_k) forming a subvector of a larger collection of n > k random variables with joint probability mass function (PMF) p(\mathbf{x}, \mathbf{y}) in the discrete case, or joint probability density function (PDF) f(\mathbf{x}, \mathbf{y}) in the continuous case, where \mathbf{y} denotes the complementary vector of the remaining n - k variables. The marginal distribution of \mathbf{X} is derived by eliminating the dependence on \mathbf{y} through summation over all possible values of \mathbf{y} in the discrete case, or integration over the support of \mathbf{y} in the continuous case. For the discrete case, the marginal PMF is given by p_{\mathbf{X}}(\mathbf{x}) = \sum_{\mathbf{y}} p(\mathbf{x}, \mathbf{y}), where the sum is taken over all possible outcomes of \mathbf{y}. In the continuous case, the marginal PDF is f_{\mathbf{X}}(\mathbf{x}) = \int \cdots \int f(\mathbf{x}, \mathbf{y}) \, d\mathbf{y}, with the integral extending over the appropriate domain for \mathbf{y}. This process, known as marginalization, can be applied iteratively if the subvector \mathbf{X} involves non-consecutive variables, building on bivariate marginalization as a foundational step. Notation for marginal distributions in higher dimensions often employs subscripts to indicate the specific subset of variables. For instance, if the full joint distribution is over random variables X_1, \dots, X_n, the marginal distribution over the first m variables (m < n) is denoted p_{X_{1:m}}(x_{1:m}) or f_{X_{1:m}}(x_{1:m}), obtained by summing or integrating out X_{m+1}, \dots, X_n. This subscript convention facilitates precise reference to arbitrary subsets, such as X_S where S \subseteq \{1, \dots, n\}, and underscores the reduction in dimensionality from the full joint to the desired marginal.
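
With the joint PMF stored as a multidimensional array, marginalizing a subset is a sum over the complementary axes. The sketch below (NumPy assumed, with a randomly generated three-variable joint PMF) also illustrates that the order of marginalization does not matter.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random joint PMF over (X1, X2, X3) with 2, 3, and 4 outcomes respectively.
joint = rng.random((2, 3, 4))
joint /= joint.sum()

# Marginal of (X1, X2): sum out X3 (axis 2).
p_12 = joint.sum(axis=2)

# Marginal of X1 alone, summing out the other variables directly or stepwise.
p_1_direct = joint.sum(axis=(1, 2))
p_1_stepwise = p_12.sum(axis=1)
print(np.allclose(p_1_direct, p_1_stepwise))  # True: marginalization order is irrelevant
print(round(p_1_direct.sum(), 6))             # 1.0: the marginal is a valid PMF
```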

Properties and computations

In the multivariate setting, the marginal distribution of a subvector of random variables from a joint distribution over \mathbf{X} = (X_1, \dots, X_n) is obtained by integrating the joint probability density function (PDF) over the complementary variables or, in discrete cases, by summing the joint probability mass function (PMF). For a continuous joint PDF f(\mathbf{x}), the marginal PDF for a subvector \mathbf{X}_S corresponding to an index set S \subset \{1, \dots, n\} is given by f_{\mathbf{X}_S}(\mathbf{x}_S) = \int_{\mathbb{R}^{n - |S|}} f(\mathbf{x}) \, d\mathbf{x}_{S^c}, where S^c denotes the complement of S. Similarly, for a joint PMF p(\mathbf{x}), the marginal PMF is p_{\mathbf{X}_S}(\mathbf{x}_S) = \sum_{\mathbf{x}_{S^c}} p(\mathbf{x}). These operations ensure that the resulting marginal is a valid probability distribution, integrating (or summing) to 1 over its support. A key property is that the joint distribution uniquely determines all possible marginal distributions for subsets of any size, but the marginals do not uniquely determine the joint; multiple joints can share identical marginals, reflecting the loss of dependence information upon marginalization. Marginal distributions inherit certain structural properties from the joint when the latter belongs to a family closed under marginalization. For instance, in the multivariate normal distribution \mathbf{X} \sim \mathcal{N}_n(\boldsymbol{\mu}, \boldsymbol{\Sigma}), the marginal distribution of any subvector \mathbf{X}_S is also multivariate normal, specifically \mathbf{X}_S \sim \mathcal{N}_{|S|}(\boldsymbol{\mu}_S, \boldsymbol{\Sigma}_{S,S}), where \boldsymbol{\mu}_S and \boldsymbol{\Sigma}_{S,S} are the mean subvector and principal covariance submatrix corresponding to S. This closure property facilitates analytical computations without approximation for Gaussian joints. The mean and covariance of the marginal follow directly: \mathbb{E}[\mathbf{X}_S] = \boldsymbol{\mu}_S and \mathrm{Cov}(\mathbf{X}_S) = \boldsymbol{\Sigma}_{S,S}. Computations of marginals are straightforward in low dimensions or when closed-form expressions exist, as in the multivariate normal case, where the marginal PDF is explicitly f_{\mathbf{X}_S}(\mathbf{x}_S) = (2\pi)^{-|S|/2} |\boldsymbol{\Sigma}_{S,S}|^{-1/2} \exp\left( -\frac{1}{2} (\mathbf{x}_S - \boldsymbol{\mu}_S)^\top \boldsymbol{\Sigma}_{S,S}^{-1} (\mathbf{x}_S - \boldsymbol{\mu}_S) \right). For discrete multivariate distributions, such as the multinomial, marginals are obtained via summation and often retain a related family form, as with the binomial marginals of a multinomial distribution. In higher dimensions or for distributions without closed-form marginals (e.g., certain copula-based models), numerical methods are required, including Monte Carlo integration (sampling from the joint and averaging over the unwanted dimensions) or quadrature rules for deterministic approximation. These techniques scale poorly with dimension due to the curse of dimensionality, emphasizing the importance of exploiting joint structure when available.
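
For the multivariate normal closure property, the marginal of a subvector is read off by selecting the corresponding mean entries and principal covariance submatrix. The sketch below (SciPy and NumPy assumed, with illustrative parameters) does this for a three-dimensional Gaussian and cross-checks the result against Monte Carlo samples from the joint.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative (assumed) parameters of a 3-dimensional Gaussian.
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.6, 0.3],
                  [0.6, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])

S = [0, 2]                      # keep X1 and X3, marginalize out X2
mu_S = mu[S]                    # mean subvector
Sigma_S = Sigma[np.ix_(S, S)]   # principal covariance submatrix

# The closed-form marginal of (X1, X3) is itself Gaussian with these parameters.
marginal = multivariate_normal(mean=mu_S, cov=Sigma_S)
print(marginal.pdf(mu_S))       # marginal density evaluated at its own mean

# Monte Carlo cross-check: sample the joint and keep only the selected coordinates.
samples = multivariate_normal(mean=mu, cov=Sigma).rvs(size=200_000, random_state=1)
print(np.allclose(samples[:, S].mean(axis=0), mu_S, atol=0.02))   # True (approximately)
print(np.allclose(np.cov(samples[:, S].T), Sigma_S, atol=0.05))   # True (approximately)
```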

References

  1. [1]
    17.1 - Two Discrete Random Variables | STAT 414
    Then, the probability mass function of alone, which is called the marginal probability mass function of , is defined by: f X ( x ) = ∑ y f ( x , y ) = P ( X = ...
  2. [2]
    Probability
    If the world consists of random variables X and Y, then the marginal distribution of X is the function P(X) specifying P(X=x)=∑y∈Val(y)P(X=x,Y=y). In shorthand, ...
  3. [3]
    [PDF] Lecture 8: Joint Probability Distributions
    Definition: Two continuous random variables X and Y are independent if f(x,y) = fX (x)fY (y) for all (x,y), where for example fX (x) is the marginal density of ...
  4. [4]
    [PDF] Chapter 4 Jointly Distributed Random Variables
    Notes: - Marginal distributions describe how one variable behaves ignoring the other variable. - Conditional distributions describe how one variable behaves ...
  5. [5]
    Math 480 lecture 3
    The marginal probability distributions are given in the last column and last row of the table. They are the probabilities for the outcomes of the first (resp ...
  6. [6]
    [PDF] 5.3 Marginal and Conditional Distributions
    A marginal distribution gives the distribution for one random variable. Conditional distributions are based on conditional probability. For discrete, sum joint ...
  7. [7]
    6.1 Probability Rundown | Introduction to Artificial Intelligence
    The marginal distribution of A, B can be obtained by summing out all possible values that variable C can take as P(A,B) = ∑_c P(A,B,C=c) ...
  8. [8]
    [PDF] Joint and Marginal Distributions - Arizona Math
    Oct 23, 2008 · Joint distribution considers multiple variables, while marginal distribution is the distribution of an individual random variable. For discrete ...
  9. [9]
    Marginal Distribution - an overview | ScienceDirect Topics
    Marginal distribution refers to the distribution of a subset of variables within a larger set, obtained by integrating or summing over the other variables.
  10. [10]
    Marginal distribution function - StatLect
    The probability distribution of one of its components, considered in isolation, is called marginal distribution.
  11. [11]
    2.8 Marginal distributions | An Introduction to Probability ... - Bookdown
    In multiple random variables, the distribution of any one random variable is called a marginal distribution.
  12. [12]
    [PDF] STEPHEN STIGLER - The missing early history of contingency tables
    now in all widely used general introductory statistical texts. of the 19th century were basically "look at margins of the table and/or dichotomize the ...
  13. [13]
    Marginal Distribution: Definition & Finding - Statistics By Jim
    In fact, the origin of the term originates from these tables. The table below organizes the data we collected for our computer type by gender study.
  14. [14]
    [PDF] Chapters 5. Multivariate Probability Distributions - Brown University
    Marginal Distributions. Consider a random vector (X, Y ). 1. Discrete random vector: The marginal distribution for X is given by. P(X = xi) = X.
  15. [15]
    [PDF] Review of Probability Theory - CS229
    If X and Y are discrete random variables, then the joint probability mass function pXY : R×R → ... In this case, we refer to pX(x) as the marginal probability ...
  16. [16]
    [PDF] STAT 24400 Lecture 5 Section 3.1-3.3 Joint & Marginal Distributions ...
    Marginal Distribution. The marginal probability mass functions (marginal PMF's) of X and of Y are obtained by summing p(x, y) over values of the other variable.
  17. [17]
    [PDF] Discrete random variables
    • Cumulative Distribution function (cdf):. FX(x) = P(X ≤ x) = P w≤x p(w), for discrete random variables. 1. Page 2. • Continuous random variables: A random ...
  18. [18]
    [PDF] Random variables and discrete distributions - Stat@Duke
    A random variable is discrete if its cumulative distribution function (cdf) is a step function. A cdf is defined by F(x) = P(X ≤ x).
  19. [19]
    [PDF] Random Variables and Probability Distributions - Kosuke Imai
    Feb 22, 2006 · Definition 10 Let X and Y be random variables with the marginal probability mass (density) func- tion, fX(x) and fY (y), and the joint ...
  20. [20]
    [PDF] Introduction to Contingency Tables - University of Washington
    Nov 16, 2006 · {πij} is the joint distribution of the data. Marginal distributions are sums over rows or columns πi. = X j πij π.j = X i πij. The conditional ...
  21. [21]
    Module 5.1: Simple, Joint, Marginal and Conditional Probabilities
    This probability is called a simple probability when I am just looking at one categorical variable. It is called a marginal probability when we are looking at ...
  22. [22]
    Marginal probability density function | Definition, derivation, examples
    Marginal probability density function. by Marco Taboga, PhD. Consider a continuous random vector, whose entries are continuous random variables.
  23. [23]
    20.1 - Two Continuous Random Variables | STAT 414
    We'll look at formal definitions of joint probability density functions, marginal probability density functions, expectation and independence.
  24. [24]
    5.2: Joint Distributions of Continuous Random Variables
    Mar 25, 2020 · In order to find this probability, we need to find the region over which we will integrate the joint pdf. To do this, look for the intersection ...
  25. [25]
    5.2.1 Joint Probability Density Function (PDF)
    Here, we will define jointly continuous random variables. Basically, two random variables are jointly continuous if they have a joint probability density ...
  26. [26]
    [PDF] STA 611: Introduction to Mathematical Statistics Lecture 4 - Stat@Duke
    Two random variables X and Y have a continuous joint distribution if there exists ... The joint cumulative distribution function (joint cdf) of two random.
  27. [27]
    [PDF] ECE 302: Lecture 5.1 Joint PDF and CDF - Purdue Engineering
    Figure: A joint PMF for a pair of discrete random variables consists of an array of impulses. To measure the size of the event A, we sum all the impulses inside ...
  28. [28]
    How to Find Marginal Distribution from Joint Distribution
    Sep 16, 2024 · To find a marginal distribution from a joint distribution, you essentially sum or integrate over the variables that you are not interested in.
  29. [29]
    joint and marginal probability in nLab
    Jul 18, 2024 · The joint probability has more information than the marginal probabilities alone, because of the presence of correlation or other stochastic interactions.
  30. [30]
    [PDF] On statistical and causal models associated with acyclic directed ...
    Mar 27, 2025 · Because marginalization is associative, it suffices to prove this for ˜V = V \ {Vj} for all Vj ∈ V. Consider P ∈ PE(G,V), so V satisfy the ...
  31. [31]
    6.1.1 Joint Distributions and Independence - Probability Course
    Random variables X1, X2, ..., Xn are said to be independent and identically distributed (i.i.d.) if they are independent, and they have the same marginal ...
  32. [32]
    Conditional probability mass function - StatLect
    Conditional probability mass function · Definition · How to compute the conditional pmf · An example · How to derive the joint pmf from the conditional and marginal.
  33. [33]
    Conditioning | Independence | CDF - Probability Course
    Here, we will discuss conditioning for random variables more in detail and introduce the conditional PMF, conditional CDF, and conditional expectation.
  34. [34]
    Conditioning and Independence | Law of Total Probability
    Here, we will discuss conditioning for continuous random variables. In particular, we will discuss the conditional PDF, conditional CDF, and conditional ...
  35. [35]
    Joint Probability Mass Function | Marginal PMF
    The joint probability mass function of two discrete random variables X and Y is defined as PXY(x,y)=P(X=x,Y=y).
  36. [36]
    Probability: Joint vs. Marginal vs. Conditional - GeeksforGeeks
    Jul 23, 2025 · Marginal probability is the probability of a single event. Joint probability is the probability of multiple events happening together. ...
  37. [37]
    Law of Total Probability | Partitions | Formulas
    1.4. 2 Law of Total Probability​​ and using the definition of conditional probability, P(A∩B)=P(A|B)P(B), we can write P(A)=P(A|B)P(B)+P(A|Bc)P(Bc). We can state ...
  38. [38]
    [PDF] Multivariate Distributions - math.binghamton.edu
    Example: Roll two dice: X = # on first die, T = total on both dice. What are the marginal distributions of X and T? Marginal Distribution. - For a ...
  39. [39]
    [PDF] 18.05 S22 Reading 7a: Joint Distributions, Independence
    In Example 2 we rolled two dice and let X be the value on the first die and. T be the total on both dice. Compute the marginal pmf for X and for T . Solution: ...
  40. [40]
    Historical Income Tables: Households - U.S. Census Bureau
    Aug 25, 2025 · Table H-10. Age of Householder by Median and Mean Income. xls.svg. All Races [<1.0 MB]. Table H-11. Size of Household by Median and Mean Income.
  41. [41]
    The U.S. Income Distribution: Trends and Issues | Congress.gov
    This report describes recent and long-term income distribution trends; provides a summary of research on key factors that contribute to recent distributional ...
  42. [42]
    [PDF] Multivariate Distributions
    Marginal Distribution. The marginal distribution of a multivariate normal random vector is itself multivariate normal. In particular, Xi ∼ MN(µi, Σii), for i = ...