
Sample space

In probability theory, a sample space, often denoted by \Omega (omega), is the set of all possible outcomes of a random experiment or process. It serves as the foundational set upon which probabilities are defined, with individual outcomes called sample points or elementary events. For example, in the experiment of flipping a fair coin, the sample space consists of the two outcomes: heads or tails.

The sample space is a key component of a probability space, formally defined as the triple (\Omega, \mathcal{F}, P), where \mathcal{F} is a \sigma-algebra of measurable subsets of \Omega (known as events), and P is a probability measure assigning non-negative probabilities to these events such that P(\Omega) = 1. This axiomatic framework was established by Andrey Kolmogorov in his 1933 monograph Foundations of the Theory of Probability, providing a rigorous mathematical basis for probability that extends beyond finite cases to handle infinite and continuous outcomes. Events, as subsets of the sample space, represent collections of outcomes to which probabilities are assigned, enabling the computation of likelihoods for complex scenarios.

Sample spaces can be discrete (finite or countably infinite, such as the outcomes of repeated die rolls) or continuous (uncountably infinite, like points on the real line for measuring a length). This distinction influences the choice of probability measure: discrete cases often use probability mass functions, while continuous ones employ probability density functions. It is crucial for modeling real-world phenomena, from simple games of chance to stochastic processes in physics and finance, and it ensures that probabilities adhere to Kolmogorov's axioms: non-negativity, normalization to 1 for the entire space, and countable additivity for disjoint events.
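The triple and the axioms above can be illustrated with a minimal sketch in Python, assuming a fair six-sided die with equal mass on each sample point (the names `omega`, `P`, and `prob` are illustrative, not standard notation):

```python
from fractions import Fraction

# A minimal finite probability space: a fair six-sided die (illustrative).
omega = {1, 2, 3, 4, 5, 6}
P = {w: Fraction(1, 6) for w in omega}   # mass of each sample point

def prob(event):
    """Probability of an event, i.e. a subset of the sample space."""
    return sum(P[w] for w in event)

# Kolmogorov's axioms hold on this space:
assert all(P[w] >= 0 for w in omega)                   # non-negativity
assert prob(omega) == 1                                # normalization
evens, odds = {2, 4, 6}, {1, 3, 5}
assert prob(evens | odds) == prob(evens) + prob(odds)  # additivity for disjoint events
```

Exact rational arithmetic is used so the normalization check is exact rather than subject to floating-point rounding.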

Fundamental Concepts

Definition

In probability theory, a sample space represents the set of all possible outcomes that can arise from a random experiment or stochastic process. This concept provides the foundational framework for modeling uncertainty, where each outcome corresponds to a conceivable result under the defined conditions of the experiment. The sample space encapsulates the totality of scenarios without assigning probabilities at this stage, serving as the starting point for further probabilistic analysis.

Formally, a sample space, denoted by the symbol \Omega, is defined as a set whose elements, known as sample points or outcomes and often represented by lowercase \omega, exhaustively capture every possible result of the experiment. This definition assumes familiarity with basic set theory, including the notions of sets and their elements, where \Omega acts as the universal collection for the given context. The structure ensures completeness, meaning no outcome is omitted, which is essential for rigorous probabilistic constructions.

The sample space \Omega functions as the universal set within the probability model, distinguishing it from the event space, whose members are events defined as subsets of \Omega. While \Omega itself represents the entire range of possibilities, events capture specific collections of outcomes, enabling the assignment of probabilities to meaningful scenarios derived from the experiment. This separation underscores the sample space's role as the foundational layer upon which event-based reasoning is built.

Basic Examples

A sample space in probability theory is exemplified by simple experiments with finite outcomes, where the set lists all possible results that are mutually exclusive (no two outcomes can occur simultaneously) and collectively exhaustive (every conceivable result of the experiment is covered). For a single coin flip, the sample space consists of two outcomes: heads or tails, denoted as S = \{H, T\}. This setup assumes a fair coin with distinct faces, ensuring the outcomes partition the possibilities completely without overlap.

Rolling a standard six-sided die provides another basic illustration, with the sample space S = \{1, 2, 3, 4, 5, 6\} representing the possible face values. Each number is mutually exclusive from the others, and together they exhaust all potential results of the roll. For instance, the event of rolling an even number corresponds to the subset \{2, 4, 6\}.

Drawing a single card from a standard deck of 52 playing cards yields a sample space comprising all unique cards, specified by their suit (hearts, diamonds, clubs, spades) and rank (ace through king), such as the ace of spades or the seven of hearts. This finite set ensures mutual exclusivity, since only one card can be drawn at a time, and exhaustiveness, since every card in the deck is a possible outcome.

In an urn containing balls of different colors, say three red and two blue, the sample space for drawing one ball at random is the set of all individual balls, S = \{R_1, R_2, R_3, B_1, B_2\}, often abstracted to \{red, blue\} when the balls are indistinguishable beyond color. The outcomes remain mutually exclusive and exhaustive, capturing every possible draw from the urn.
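The examples above can be enumerated directly; the following sketch builds each sample space as a Python set (the variable names and rank labels are illustrative conventions, not standard notation):

```python
from itertools import product

# Sample spaces for the basic examples (names are illustrative).
coin = {"H", "T"}
die = {1, 2, 3, 4, 5, 6}
suits = ["hearts", "diamonds", "clubs", "spades"]
ranks = ["A"] + [str(n) for n in range(2, 11)] + ["J", "Q", "K"]
deck = {(rank, suit) for rank, suit in product(ranks, suits)}
urn = {"R1", "R2", "R3", "B1", "B2"}

# Events are subsets of the sample space: the even-roll event from the die example.
even_roll = {w for w in die if w % 2 == 0}
assert even_roll == {2, 4, 6}
assert len(deck) == 52   # 13 ranks x 4 suits, mutually exclusive and exhaustive
```

Because Python sets cannot contain duplicates, mutual exclusivity of the listed sample points is enforced automatically by the data structure.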

Properties and Structure

Conditions and Requirements

A valid sample space \Omega in probability theory must satisfy specific mathematical conditions to provide a robust foundation for probability models. These conditions ensure that the sample space accurately captures the universe of possible outcomes from a random experiment without gaps or redundancies. The sample space must be non-empty, denoted \Omega \neq \emptyset, guaranteeing the existence of at least one possible outcome and preventing the modeling of impossible scenarios. It must also be exhaustive, incorporating every conceivable outcome of the experiment to fully represent the scope of uncertainty involved. Additionally, the sample points, the distinct elements of \Omega, must be mutually exclusive, meaning they are non-overlapping such that no two outcomes can occur simultaneously in a single trial.

These requirements (non-emptiness, exhaustiveness, and mutual exclusivity) collectively ensure the completeness of the sample space for modeling uncertainty. By including all possibilities precisely once, they establish a clear, unambiguous structure that allows probabilities to be assigned consistently across all scenarios, facilitating reliable inference about random processes. Without these properties, the sample space could fail to represent the experiment adequately, leading to incomplete or inconsistent probability assessments.

For finite or countably infinite \Omega, the sample space further supports the construction of the event space, commonly taken as the power set of \Omega, which comprises all subsets of the outcomes and enables the definition of events as unions of sample points; for uncountable \Omega, a smaller \sigma-algebra of measurable subsets is used instead. This framework is essential for applying probability measures to arbitrary collections of outcomes.
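A hypothetical helper can check the three conditions on a candidate list of sample points. In this sketch each sample point is encoded as a set of finer-grained results, an illustrative encoding chosen to make overlap detectable, not standard practice:

```python
def is_valid_partition(outcomes, universe):
    """Check non-emptiness, exhaustiveness, and mutual exclusivity.
    'outcomes' is a list of candidate sample points, each modeled as a set
    of underlying results (illustrative encoding, not a standard API)."""
    if not outcomes:
        return False                      # non-emptiness
    union = set().union(*outcomes)
    if union != universe:
        return False                      # exhaustiveness
    total = sum(len(o) for o in outcomes)
    return total == len(union)            # mutual exclusivity (no overlap)

# Die results grouped as {low, high}: a coarser but still valid sample space.
universe = {1, 2, 3, 4, 5, 6}
assert is_valid_partition([{1, 2, 3}, {4, 5, 6}], universe)
assert not is_valid_partition([{1, 2, 3}, {3, 4, 5, 6}], universe)  # overlap
```

The final check exploits the fact that a family of sets covers its union without overlap exactly when the sizes of the members sum to the size of the union.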

Relation to Events

In probability theory, an event is formally defined as a subset A \subseteq \Omega of the sample space \Omega, representing a collection of possible outcomes that satisfy a particular condition of interest. The set of all such events, known as the event space, forms a sigma-algebra \mathcal{F} on \Omega: a collection of subsets closed under complementation and countable unions, ensuring a structured framework for probabilistic analysis. For finite sample spaces, the event space is typically the power set 2^\Omega, consisting of all possible subsets of \Omega, as this collection satisfies the properties of a sigma-algebra. In advanced contexts, especially with uncountable \Omega, the sigma-algebra \mathcal{F} restricts attention to measurable subsets, allowing for the consistent definition of events in measure-theoretic probability.

Within this structure, the sample space \Omega itself denotes the certain event, encompassing every possible outcome, while the empty set \emptyset represents the impossible event, containing no outcomes. The sample space thus underpins event-based probability by providing a universal set of outcomes from which subsets can be selected as events, enabling the systematic classification and manipulation of uncertainties without ambiguity. For a simple illustration, consider a coin flip with \Omega = \{\text{heads}, \text{tails}\}; the event of obtaining heads corresponds to the subset \{\text{heads}\}.
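For a small finite \Omega, the power-set event space and its sigma-algebra closure properties can be verified by brute force. This is a sketch assuming the two-outcome coin-flip space; for a finite family, closure under finite unions suffices:

```python
from itertools import chain, combinations

omega = frozenset({"heads", "tails"})
# For a finite sample space, the power set serves as the event space.
F = {frozenset(s) for s in chain.from_iterable(
    combinations(omega, r) for r in range(len(omega) + 1))}

# Sigma-algebra properties, checked exhaustively on this finite family:
assert omega in F and frozenset() in F        # certain and impossible events
assert all(omega - A in F for A in F)         # closed under complementation
assert all(A | B in F for A in F for B in F)  # closed under (finite) unions
```

Frozen sets are used so that events themselves can be members of the set `F`.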

Special Cases

Equally Likely Outcomes

In probability theory, equally likely outcomes refer to a scenario in a finite sample space where each individual outcome, or sample point, possesses the same probability of occurrence. This assumption forms the basis of classical probability, stipulating that for a sample space \Omega with finite cardinality |\Omega|, the probability assigned to each outcome \omega \in \Omega is P(\omega) = \frac{1}{|\Omega|}. This uniform distribution simplifies probability calculations by treating all outcomes as symmetric in likelihood.

Under this framework, the probability of an event A \subseteq \Omega is determined by the ratio of the number of favorable outcomes to the total number of possible outcomes, given by the formula P(A) = \frac{|A|}{|\Omega|}, where |A| denotes the cardinality of A. This approach originated in Pierre-Simon Laplace's classical theory of probability, as outlined in his Théorie Analytique des Probabilités (1812), where he defined probability as the ratio of favorable cases to all possible cases under the principle of insufficient reason, assuming equiprobability when no distinguishing factors exist.

The assumption of equally likely outcomes is valid in situations where the experimental setup exhibits symmetry and lacks bias, such as rolling a fair six-sided die, where each face from 1 to 6 has an equal chance of landing up. However, this model has limitations and does not apply universally; for instance, with a biased coin, the outcomes heads and tails do not share equal probabilities, rendering the uniform assignment inaccurate and necessitating alternative probability measures.
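The classical rule P(A) = |A| / |\Omega| translates directly into code; this sketch uses exact rational arithmetic, and the helper name is illustrative:

```python
from fractions import Fraction

def classical_prob(event, omega):
    """Laplace's rule: favorable cases over all cases (equally likely outcomes)."""
    assert event <= omega, "an event must be a subset of the sample space"
    return Fraction(len(event), len(omega))

die = {1, 2, 3, 4, 5, 6}
assert classical_prob({2, 4, 6}, die) == Fraction(1, 2)   # rolling an even number
assert classical_prob({6}, die) == Fraction(1, 6)         # rolling a six
```

The subset assertion guards against the common mistake of counting "favorable cases" that are not actually possible outcomes.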

Finite and Countable Sample Spaces

In probability theory, a finite sample space is a set Ω with finite cardinality, denoted |Ω| < ∞, consisting of a limited number of distinct possible outcomes. For instance, the sample space for rolling a standard six-sided die is Ω = {1, 2, 3, 4, 5, 6}, where each outcome represents a face value. Similarly, tossing a coin once yields Ω = {H, T}, with H for heads and T for tails.

A countably infinite sample space, in contrast, is an enumerable set Ω whose elements can be placed in one-to-one correspondence with the natural numbers, allowing them to be listed in a sequence despite being infinitely many. An example is the sample space for the number of flips until the first heads appears in repeated fair coin tosses, Ω = {1, 2, 3, ...}, where the integer k represents k − 1 tails followed by a heads. By contrast, the set of all infinite sequences of heads and tails, Ω = {H, T}^ℕ, which arises when modeling unending runs of independent Bernoulli trials, is uncountable rather than countable and requires the measure-theoretic treatment described in the next section.

For both finite and countably infinite sample spaces, a probability measure P assigns non-negative probabilities to each outcome ω ∈ Ω such that the total probability sums to unity: ∑_{ω ∈ Ω} P(ω) = 1. This follows from the Kolmogorov axioms, which ensure the probability of the entire sample space is 1 and probabilities are additive over disjoint outcomes. In finite cases, uniform distributions are common, where each outcome has equal probability P(ω) = 1/|Ω|.

These discrete sample spaces offer key advantages in probability calculations, as their outcomes can be explicitly enumerated, facilitating the direct summation of probabilities for events as subsets of Ω. This enumerability simplifies theoretical analysis and computational implementation compared to non-discrete cases, enabling straightforward verification of probability axioms and event probabilities through listing or indexing.
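The flips-until-first-heads example can be checked numerically: for a fair coin, P(first head on flip k) = (1/2)^k, and the partial sums approach 1 as countable additivity requires. A sketch, assuming a fair coin:

```python
from fractions import Fraction

# Countably infinite sample space Omega = {1, 2, 3, ...}: the flip number
# on which the first head appears, for a fair coin.
def p_first_head(k):
    """P(first head on flip k) = (1/2)**k for a fair coin."""
    return Fraction(1, 2) ** k

# Partial sums of the outcome probabilities converge to 1:
partial = sum(p_first_head(k) for k in range(1, 51))
assert 1 - partial == Fraction(1, 2) ** 50   # remaining mass is exactly (1/2)^50
```

Exact fractions make the geometric-series identity visible: the mass not yet accounted for after 50 terms is precisely (1/2)^50.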

Uncountable and Infinite Sample Spaces

In probability theory, uncountable sample spaces arise when the set of possible outcomes \Omega has the cardinality of the continuum, meaning it cannot be put into one-to-one correspondence with the natural numbers. A classic example is the sample space \Omega = [0,1], which models a uniform random variable where every point in the interval represents a possible outcome, such as a randomly selected point on a unit line segment. This construction is fundamental in continuous probability models, where outcomes form a continuum rather than discrete points.

A key challenge in uncountable sample spaces is that it is impossible to assign the same positive probability to every singleton outcome \{\omega\} while keeping the total probability equal to 1, since any fixed positive mass placed on uncountably many points would force the total measure to be infinite. Instead, probabilities are defined over intervals or sets of outcomes using probability density functions (PDFs), which describe the relative likelihood of outcomes in a continuous manner; the probability of an event is then obtained by integrating the density over the relevant subset of \Omega. For instance, in the uniform distribution on [0,1], the PDF is f(x) = 1 for x \in [0,1], and the probability of an interval [a,b] is b - a.

To rigorously handle these spaces, modern probability theory employs a measure-theoretic framework, where a probability space is defined as a triple (\Omega, \mathcal{F}, \mu) consisting of the sample space \Omega, a \sigma-algebra \mathcal{F} of measurable events, and a probability measure \mu: \mathcal{F} \to [0,1] satisfying \mu(\Omega) = 1, non-negativity, and countable additivity. This approach, axiomatized by Andrey Kolmogorov, allows probabilities to be extended consistently to uncountable sets without assigning measures to non-measurable subsets.
For sample spaces like subsets of the real line, the Borel \sigma-algebra \mathcal{B}(\mathbb{R})—generated by the open intervals—is typically used to ensure measurability, as it includes all sets arising from practical probability applications such as limits of intervals. Examples of uncountable sample spaces include the time until an event occurs, modeled by \Omega = (0, \infty) with an exponential distribution density f(t) = \lambda e^{-\lambda t} for t > 0, where \lambda > 0 is the rate parameter, capturing waiting times in processes like radioactive decay. Similarly, spatial positions in a plane can be represented by \Omega = \mathbb{R}^2, equipped with the Borel \sigma-algebra on the Euclidean topology, to model continuous random vectors such as particle locations in physics. These constructions highlight how measure theory resolves the paradoxes of continuity in probability while maintaining mathematical consistency.
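The integral formulation can be sanity-checked numerically. The sketch below approximates the exponential probability \int_0^t \lambda e^{-\lambda s}\,ds by a midpoint Riemann sum and compares it with the closed form 1 - e^{-\lambda t}; the function names are illustrative:

```python
import math

# Uniform distribution on [0, 1]: P([a, b]) = b - a.
def uniform_prob(a, b):
    return b - a

def expo_prob_upto(t, lam=1.0):
    """Approximate P(T <= t) for an exponential waiting time by integrating
    the density f(s) = lam * exp(-lam * s) with a midpoint Riemann sum."""
    n = 100_000
    ds = t / n
    return sum(lam * math.exp(-lam * (i + 0.5) * ds) * ds for i in range(n))

assert uniform_prob(0.25, 0.75) == 0.5
# The numeric integral matches the closed form 1 - exp(-lam * t):
assert abs(expo_prob_upto(2.0, lam=1.0) - (1 - math.exp(-2.0))) < 1e-6
```

Note that singletons get probability zero here: `uniform_prob(a, a)` is 0, consistent with the impossibility of positive singleton mass on a continuum.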

Multiple Sample Spaces

In probability theory, when modeling multiple independent experiments within a single overall experiment, the sample space is often defined as the Cartesian product of the individual sample spaces. For two independent experiments with sample spaces \Omega_1 and \Omega_2, the product sample space is \Omega = \Omega_1 \times \Omega_2, where each outcome is an ordered pair (\omega_1, \omega_2) with \omega_1 \in \Omega_1 and \omega_2 \in \Omega_2. This construction ensures that the combined space enumerates all possible joint outcomes while preserving the independence of the components.

A classic example is the experiment of flipping two fair coins. The sample space for one coin flip is \{H, T\}, so the product sample space is \{H, T\} \times \{H, T\} = \{HH, HT, TH, TT\}, representing all possible sequences of heads (H) and tails (T). Each outcome in this space corresponds to a unique combination of the two flips. Joint events in the product space are defined as subsets of \Omega that specify conditions on the paired outcomes, such as the event of at least one head, which is \{HH, HT, TH\}. These subsets facilitate the analysis of combined occurrences across the experiments.

Product sample spaces are particularly useful for compound experiments involving sequential or simultaneous independent trials, as well as for representing multivariate outcomes where each dimension corresponds to a separate source of randomness. This framework is essential in applications like repeated Bernoulli trials or joint distributions. Unlike partitioning a single sample space into mutually exclusive outcomes, the product construction explicitly builds the space from separate, independent components, avoiding the need to enumerate outcomes in a flattened, monolithic set.
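The two-coin product space can be built mechanically with a Cartesian product; a short sketch with illustrative labels:

```python
from itertools import product
from fractions import Fraction

coin = ["H", "T"]
# Product sample space for two independent flips of a fair coin:
omega = [a + b for a, b in product(coin, coin)]
assert omega == ["HH", "HT", "TH", "TT"]

# The joint event "at least one head" as a subset of the product space:
at_least_one_head = {w for w in omega if "H" in w}
assert at_least_one_head == {"HH", "HT", "TH"}
# With equally likely joint outcomes, P(at least one head) = 3/4.
assert Fraction(len(at_least_one_head), len(omega)) == Fraction(3, 4)
```

The same pattern scales to more trials via `product(coin, repeat=n)`, giving a space of size 2^n.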

Applications and Extensions

Probability Measures on Sample Spaces

A probability measure on a sample space quantifies the likelihood of subsets of outcomes, forming the foundation of modern probability theory. Formally, a probability space consists of a sample space \Omega, a \sigma-algebra \mathcal{F} of subsets of \Omega (events), and a probability measure P: \mathcal{F} \to [0,1] that satisfies specific axioms. This measure assigns non-negative probabilities to events, ensuring the entire sample space has probability 1, and extends naturally to combinations of events.

The axioms governing the probability measure P, known as Kolmogorov's axioms, are: (1) P(A) \geq 0 for every event A \in \mathcal{F}; (2) P(\Omega) = 1; and (3) for any countable collection of pairwise disjoint events A_1, A_2, \dots \in \mathcal{F}, P\left(\bigcup_{i=1}^\infty A_i\right) = \sum_{i=1}^\infty P(A_i). These axioms, introduced in 1933, provide a rigorous measure-theoretic framework for probability, applicable to both finite and infinite sample spaces.

For discrete sample spaces, where \Omega is finite or countably infinite, the probability measure is defined by assigning probabilities P(\{\omega\}) to each singleton outcome \omega \in \Omega, such that \sum_{\omega \in \Omega} P(\{\omega\}) = 1. The probability of any event A \subseteq \Omega is then the sum P(A) = \sum_{\omega \in A} P(\{\omega\}). This additive structure ensures consistency with the Kolmogorov axioms and normalization over the sample space.

In continuous sample spaces, such as \Omega \subseteq \mathbb{R}^n, probabilities are defined using a probability density function f: \Omega \to [0, \infty) that integrates to 1 over \Omega, i.e., \int_\Omega f(\omega) \, d\omega = 1. For an event A \in \mathcal{F}, the probability is P(A) = \int_A f(\omega) \, d\omega. This integral formulation extends the discrete sum while adhering to countable additivity and the total probability normalization.
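For a discrete space, a measure defined by singleton masses can be checked against the axioms directly. The masses below are an assumed, illustrative biased distribution on a three-outcome space:

```python
from fractions import Fraction

# Hypothetical biased three-outcome space, defined by singleton masses.
masses = {"a": Fraction(1, 2), "b": Fraction(1, 3), "c": Fraction(1, 6)}

def P(event):
    """Discrete measure: P(A) is the sum of the singleton masses P({omega})
    over the outcomes omega in A."""
    return sum(masses[w] for w in event)

assert all(m >= 0 for m in masses.values())   # axiom 1: non-negativity
assert P(masses.keys()) == 1                  # axiom 2: normalization
assert P({"a", "b"}) == P({"a"}) + P({"b"})   # axiom 3 on disjoint events
```

Any non-negative masses summing to 1 would pass the same checks, which is exactly the freedom the discrete construction allows.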

Sample Spaces in Random Sampling

In simple random sampling, the population is denoted as \Omega with size N = |\Omega|, and a sample of fixed size n is selected such that every possible subset of n elements from \Omega is equally likely to be chosen. The sample space for this procedure consists of all possible unordered samples of size n, of which there are \binom{N}{n}. This construction ensures unbiased representation of the population, where each element in \Omega has an equal probability of inclusion in the sample.

The distinction between sampling with replacement and without replacement significantly affects the structure and size of the sample space. In sampling without replacement, as typically used in simple random sampling to avoid duplicates, the sample space size remains \binom{N}{n}, reflecting unordered selections where order does not matter. Conversely, sampling with replacement allows repetitions, resulting in a sample space of all ordered n-tuples from \Omega, with size N^n. This larger space is relevant in scenarios like bootstrap methods or when the population is effectively infinite; because each draw is made from the full population, successive draws are independent, which simplifies probability assignments.

Stratified sampling extends the simple random approach by partitioning the population \Omega into disjoint, homogeneous subgroups called strata, with independent simple random samples drawn from each. The overall sample space is then the Cartesian product of the individual stratum sample spaces, where each stratum contributes \binom{N_h}{n_h} possible samples for stratum size N_h and allocated sample size n_h. This partitioned structure reduces variance in estimates compared to simple random sampling by ensuring proportional representation within key subpopulations.
In survey design, the sample space underpins the selection mechanism, enabling the calculation of inclusion probabilities for each population unit, which are essential for weighting responses and constructing unbiased estimators. For statistical inference, properties of the sample space—such as its uniformity in simple cases—facilitate design-based approaches to generalize sample statistics to the population, supporting hypothesis tests and confidence intervals while controlling for sampling error.
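The sample-space sizes discussed above follow from elementary counting; a sketch with assumed population and stratum sizes:

```python
from math import comb

N, n = 10, 3
# Simple random sampling without replacement: unordered subsets of size n.
assert comb(N, n) == 120
# Ordered sampling with replacement: all n-tuples from the population.
assert N ** n == 1000

# Stratified sampling (illustrative sizes): the sample space is the Cartesian
# product of the per-stratum spaces, so its size is the product of comb terms.
strata = [(6, 2), (4, 1)]        # (N_h, n_h) per stratum -- assumed values
size = 1
for N_h, n_h in strata:
    size *= comb(N_h, n_h)
assert size == comb(6, 2) * comb(4, 1) == 60
```

Comparing 120 with 1000 makes concrete how much larger the with-replacement space is, and why the two schemes require different probability assignments.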
