Monotone likelihood ratio

In statistics, the monotone likelihood ratio (MLR) property is a property exhibited by certain families of probability distributions, characterized by the likelihood ratio f_{\theta_2}(x) / f_{\theta_1}(x) being non-decreasing in a real-valued statistic Y(x) whenever \theta_1 < \theta_2 and the densities are positive. This condition ensures that the family admits uniformly most powerful (UMP) tests for one-sided composite hypotheses, such as H_0: \theta \leq \theta_0 versus H_1: \theta > \theta_0, where the optimal test rejects the null when the statistic Y(x) exceeds a threshold. The MLR property facilitates powerful inference by aligning the ordering of observations with the parameter space, controlling type I and type II errors in a principled manner. Prominent examples of distributions possessing the MLR property include one-parameter exponential families (such as the exponential and gamma distributions), the uniform distribution on (0, \theta) with respect to the sample maximum, the Poisson distribution with respect to the sum of observations, and the normal distribution for testing the mean with known variance. Additionally, noncentral versions of the t, chi-squared, and F distributions exhibit MLR in their noncentrality parameters, extending the property's utility to more complex testing scenarios. The concept, formalized in foundational works on hypothesis testing such as Karlin and Rubin (1956), underpins much of modern statistical decision theory by providing a criterion for the existence of optimal tests without requiring full specification of alternative parameters.

Definition and Intuition

Formal Definition

A family of probability distributions \{P_\theta \mid \theta \in \Theta\} is said to have the monotone likelihood ratio (MLR) property in a statistic T if, for all \theta_1 < \theta_2 in \Theta, the likelihood ratio \frac{L(\theta_2; x)}{L(\theta_1; x)} is a non-decreasing function of T(x) for every x in the support of the distribution. Here, L(\theta; x) denotes the likelihood function, which corresponds to the density f(x \mid \theta) with respect to a dominating measure \mu (such as Lebesgue measure for continuous distributions or counting measure for discrete distributions) under P_\theta. This formulation assumes that \Theta is an open interval of the real line, T is a sufficient statistic for \theta, and the family \{P_\theta\} is dominated by \mu, ensuring the existence of densities f(x \mid \theta). In the discrete case, where the distributions have probability mass functions p(x \mid \theta) with respect to the counting measure, the ratio \frac{p(x \mid \theta_2)}{p(x \mid \theta_1)} is non-decreasing in T(x), allowing for plateaus where the ratio remains constant (i.e., equality holds over intervals of T(x)). Similarly, in the continuous case with densities f(x \mid \theta) with respect to Lebesgue measure, \frac{f(x \mid \theta_2)}{f(x \mid \theta_1)} is non-decreasing in T(x), again permitting equality on subregions. Both cases unify under the dominated-family framework, where the non-strict inequality accommodates regions over which the ratio is constant. Equivalently, the MLR property can be expressed in terms of the conditional expectation of the likelihood ratio given T = t: \Lambda(\theta_1, \theta_2; t) = \mathbb{E}\left[ \frac{L(\theta_2; X)}{L(\theta_1; X)} \,\Big|\, T = t \right], which is non-decreasing in t for \theta_1 < \theta_2. By the sufficiency of T, this conditional expectation equals the ratio of the induced densities of T under \theta_2 and \theta_1, providing a precise condition on the marginal distribution of the sufficient statistic.

Intuitive Explanation

The monotone likelihood ratio (MLR) property captures the idea that, within a parameterized family of probability distributions, the evidence supporting a larger parameter value θ₂ over a smaller one θ₁ grows steadily stronger as a relevant summary statistic T (derived from the data) increases. Intuitively, this monotonicity reflects an ordered structure in the data: higher observed values of T tilt the balance more decisively toward θ₂, making the comparison between parameter values predictable and consistent across the range of possible observations. This property simplifies inference by ensuring that the likelihood ratio Λ(θ₁, θ₂; x) = f(x | θ₂) / f(x | θ₁)—where f denotes the density or mass function—behaves in a non-decreasing manner with respect to T(x), as formalized in the preceding definition. This ordering implies that the family of distributions is well-behaved for one-sided comparisons, where larger data realizations align intuitively with higher parameter values, enhancing the reliability of decisions like rejecting a null hypothesis in favor of an alternative. For instance, in settings where θ might represent a mean or rate parameter, the MLR ensures that accumulating evidence from the data reinforces the case for larger θ without reversals or ambiguities as T grows. Such intuitive alignment is crucial for practical statistical procedures, as it allows tests to leverage this monotonicity for maximal efficiency. The MLR property builds on the Neyman-Pearson lemma from the 1930s and was formalized by Samuel Karlin and Herman Rubin in 1956 to handle optimal tests for composite hypotheses. In families lacking MLR, however, the likelihood ratio may fluctuate non-monotonically with T, resulting in tests whose rejection regions cannot be simply threshold-based and often lack uniform optimality, thereby complicating inference and reducing power in one-sided scenarios.

Basic Example

A simple and illustrative example of the monotone likelihood ratio (MLR) property arises in the context of n independent Bernoulli trials, each with success probability \theta, where the sufficient statistic T is the total number of successes k in the n trials. The likelihood function for observing k successes is L(\theta; k) = \binom{n}{k} \theta^k (1 - \theta)^{n - k}. For two values \theta_1 < \theta_2, the likelihood ratio is given by \frac{L(\theta_2; k)}{L(\theta_1; k)} = \left( \frac{\theta_2}{\theta_1} \right)^k \left( \frac{1 - \theta_2}{1 - \theta_1} \right)^{n - k}. To verify monotonicity, consider the ratio of consecutive values: \frac{\frac{L(\theta_2; k+1)}{L(\theta_1; k+1)}}{\frac{L(\theta_2; k)}{L(\theta_1; k)}} = \frac{\theta_2 (1 - \theta_1)}{\theta_1 (1 - \theta_2)} > 1, since \theta_2 > \theta_1 implies \frac{\theta_2}{\theta_1} > 1 and \frac{1 - \theta_1}{1 - \theta_2} > 1. Thus, the likelihood ratio is strictly increasing in k. This demonstrates the MLR property because a larger value of k (more successes) provides progressively stronger evidence in favor of the higher \theta_2 over \theta_1, as the ratio grows with k. The Bernoulli example is particularly useful for highlighting MLR in binary outcome settings, which are foundational in introductory statistics.
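The calculation is easy to check numerically. Below is a minimal Python sketch (the values n = 10, \theta_1 = 0.3, \theta_2 = 0.6 are arbitrary illustrative choices); each consecutive ratio grows by the constant factor \theta_2(1 - \theta_1) / [\theta_1(1 - \theta_2)] = 3.5.

```python
import math

def binom_lr(k, n, th1, th2):
    """Likelihood ratio L(th2; k) / L(th1; k) for k successes in n trials.
    The binomial coefficient is common to both likelihoods and cancels."""
    return (th2 / th1) ** k * ((1 - th2) / (1 - th1)) ** (n - k)

n, th1, th2 = 10, 0.3, 0.6                     # illustrative values
ratios = [binom_lr(k, n, th1, th2) for k in range(n + 1)]

step = th2 * (1 - th1) / (th1 * (1 - th2))     # constant factor > 1
assert all(r2 > r1 for r1, r2 in zip(ratios, ratios[1:]))
assert all(math.isclose(r2 / r1, step) for r1, r2 in zip(ratios, ratios[1:]))
print(step)  # 3.5: the ratio strictly increases in k
```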

Distributions Exhibiting MLR

Common Parametric Families

Several well-known parametric families of distributions possess the monotone likelihood ratio (MLR) property, which facilitates the construction of uniformly most powerful tests for one-sided hypotheses. These families are often one-parameter exponential families or closely related, where the likelihood ratio is monotone in a sufficient statistic T. Examples include the binomial, Poisson, normal (with known variance), exponential, gamma (with fixed shape), Weibull (with fixed shape parameter), and uniform distributions on [0, \theta]. For the binomial distribution \text{Bin}(n, p) with fixed n, the likelihood ratio for p_2 > p_1 is increasing in the number of successes k, as it takes the form \left(\frac{p_2}{p_1}\right)^k \left(\frac{1-p_1}{1-p_2}\right)^{n-k}, where the first factor increases with k and the second factor, a power of a constant exceeding one, also increases as the exponent n - k falls. For the Poisson distribution \text{Po}(\lambda), the ratio for \lambda_2 > \lambda_1 is \left(\frac{\lambda_2}{\lambda_1}\right)^k e^{\lambda_1 - \lambda_2}, which increases in the observation k: the exponential factor is a constant less than 1, while the power term grows with k. In the normal distribution \mathcal{N}(\mu, \sigma^2) with known \sigma^2, the ratio for \mu_2 > \mu_1 is monotone increasing in the sample mean \bar{x}, reflecting the location family's shift structure. The exponential distribution with rate \lambda (or scale 1/\lambda) has an MLR in the sum of observations \sum x_i: for \lambda_2 > \lambda_1 the ratio \left(\frac{\lambda_2}{\lambda_1}\right)^n e^{(\lambda_1 - \lambda_2) \sum x_i} decreases in the sum, but equivalently increases when the family is reparameterized by the scale. For the gamma distribution \text{Gamma}(\alpha, \beta) with fixed shape \alpha and scale \beta, the likelihood ratio for \beta_2 > \beta_1 is increasing in \sum x_i, taking the form \left(\frac{\beta_1}{\beta_2}\right)^{n\alpha} e^{(\beta_2 - \beta_1) \sum x_i / (\beta_1 \beta_2)}, which is monotone because the exponential factor grows with the sum while the power factor is constant. Similarly, the Weibull distribution with fixed shape k and scale \lambda exhibits MLR in \sum x_i^k, where the ratio for \lambda_2 > \lambda_1 increases in this statistic, akin to the exponential case since X^k is exponentially distributed. Finally, the uniform distribution on [0, \theta] has MLR in the maximum X_{(n)}, with the ratio for \theta_2 > \theta_1 being (\theta_1 / \theta_2)^n if X_{(n)} \leq \theta_1 and infinite otherwise, which is non-decreasing in X_{(n)}. A key reason many of these families satisfy MLR is that they belong to the class of one-parameter exponential families, where the natural parameterization is monotone and the log-partition function is convex, ensuring the likelihood ratio is increasing in the sufficient statistic (further details in the section on exponential families). A numerical spot-check of several of these claims follows the table below.
| Family | Parameter(s) | Sufficient statistic T |
|---|---|---|
| Binomial(n, p) | p (n fixed) | Number of successes \sum k_i |
| Poisson(λ) | λ | Sum \sum x_i |
| Normal(μ, σ²) | μ (σ² known) | Sample mean \bar{x} |
| Exponential(λ) | λ (rate) | Sum \sum x_i |
| Gamma(α, β) | β (α fixed) | Sum \sum x_i |
| Weibull(k, λ) | λ (k fixed) | Sum \sum x_i^k |
| Uniform[0, θ] | θ | Maximum X_{(n)} |
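The monotonicity claims above can be verified numerically. The sketch below (assuming SciPy is available; the parameter values are arbitrary) evaluates the log-likelihood ratio of a single observation on a grid and confirms it is non-decreasing in the tabulated statistic; for a single observation the sample statistics \sum x_i reduce to x (or k) itself.

```python
import numpy as np
from scipy import stats

def is_mlr_increasing(logpdf, t, th1, th2, grid):
    """Numerically check that log f(x; th2) - log f(x; th1)
    is non-decreasing in T(x) over the given grid."""
    order = np.argsort([t(x) for x in grid])
    log_lr = np.array([logpdf(x, th2) - logpdf(x, th1) for x in grid])[order]
    return bool(np.all(np.diff(log_lr) >= -1e-12))

grid = np.linspace(0.1, 10, 200)
# Normal mean (sigma known), T(x) = x:
print(is_mlr_increasing(lambda x, m: stats.norm.logpdf(x, loc=m),
                        lambda x: x, 0.0, 1.0, grid))            # True
# Gamma with fixed shape, scale parameter beta, T(x) = x:
print(is_mlr_increasing(lambda x, b: stats.gamma.logpdf(x, a=2.0, scale=b),
                        lambda x: x, 1.0, 2.0, grid))            # True
# Poisson, T(k) = k on the integer support:
kgrid = np.arange(0, 30)
print(is_mlr_increasing(lambda k, lam: stats.poisson.logpmf(k, lam),
                        lambda k: k, 1.0, 3.0, kgrid))           # True
```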

Characterization Conditions

A family of probability distributions parameterized by θ exhibits the monotone likelihood ratio (MLR) property with respect to a statistic T(x) if, for any \theta_1 < \theta_2, the likelihood ratio \frac{f(x; \theta_2)}{f(x; \theta_1)} is a non-decreasing function of T(x) almost everywhere. This condition ensures that higher values of the statistic T(x) provide stronger evidence in favor of the larger parameter value \theta_2. In the continuous case with T(x) = x, the MLR property holds if and only if \frac{\partial}{\partial x} \log \left[ \frac{f(x; \theta_2)}{f(x; \theta_1)} \right] \geq 0 almost everywhere for \theta_1 < \theta_2. This derivative condition is equivalent to the likelihood ratio being non-decreasing in x, since the logarithm preserves monotonicity. A sufficient condition for this to hold across the parameter space is that the mixed partial derivative \frac{\partial^2}{\partial \theta \partial x} \log f(x; \theta) \geq 0 for all \theta and x in the support. Under this assumption, the difference in score functions integrates to a non-negative value, confirming the monotonicity. For distributions in exponential form, f(x; \theta) = c(\theta) h(x) \exp\{\theta t(x)\} with t(x) non-decreasing, the family possesses the MLR property with respect to t(x). This structure directly implies that the likelihood ratio \frac{f(x; \theta_2)}{f(x; \theta_1)} = \frac{c(\theta_2)}{c(\theta_1)} \exp\{(\theta_2 - \theta_1) t(x)\} increases with t(x) when \theta_2 > \theta_1, since the exponential term is increasing in t(x). Lehmann (1959) established that the MLR property is equivalent to specific convexity conditions on the underlying functions of the distribution. For location-parameter families f_\theta(x) = g(x - \theta), MLR holds if and only if -\log g(x) is a convex function (i.e., g is log-concave). Similarly, for scale-parameter families f_\theta(x) = \theta^{-1} h(x / \theta), taking logarithms reduces the family to a location family in \log \theta, and the condition becomes convexity of -\log\left[e^{u} h(e^{u})\right] in u = \log x. These equivalences link the MLR to geometric properties of the log-density, facilitating verification in parametric settings.
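The mixed-partial sufficient condition can be checked symbolically for a concrete family. A small sketch using SymPy for the normal location family with known \sigma (an illustrative choice, not the only family satisfying the condition):

```python
import sympy as sp

x, theta = sp.symbols('x theta', real=True)
sigma = sp.Symbol('sigma', positive=True)

# Normal location family with known sigma: log f(x; theta)
log_f = -(x - theta)**2 / (2 * sigma**2) - sp.log(sigma * sp.sqrt(2 * sp.pi))

# Mixed partial d^2/(d theta d x) of log f: equals 1/sigma^2 >= 0, so MLR holds.
mixed = sp.diff(log_f, theta, x)
print(sp.simplify(mixed))  # 1/sigma**2
```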

Implications for Statistical Inference

Uniformly Most Powerful Tests

In statistical hypothesis testing, the monotone likelihood ratio (MLR) property plays a crucial role in establishing the existence of uniformly most powerful (UMP) tests for one-sided composite hypotheses of the form H_0: \theta \leq \theta_0 versus H_1: \theta > \theta_0, where \theta is a real-valued parameter indexing a family of distributions. Specifically, if the family admits an MLR in a statistic T, then the test that rejects H_0 for sufficiently large values of T achieves the maximum possible power among all tests of a given size \alpha for every \theta > \theta_0. This test is UMP because the MLR ensures that the power function is non-decreasing in \theta, allowing the same critical region to optimize power uniformly across the alternative. This result extends the Neyman-Pearson lemma, which guarantees a most powerful test for simple hypotheses H_0: \theta = \theta_0 versus H_1: \theta = \theta_1 > \theta_0 by rejecting when the likelihood ratio f_{\theta_1}(x)/f_{\theta_0}(x) exceeds a threshold. Under MLR, the likelihood ratio is monotone non-decreasing in T, so the rejection region simplifies to \{T \geq c\} for some c, and this form remains optimal even when H_0 and H_1 are composite, as the monotonicity preserves the power ordering across all \theta > \theta_0. Without the MLR property, UMP tests generally do not exist for such composite one-sided problems, as no single test can simultaneously maximize power against all alternatives in the composite H_1. To implement the UMP test, the critical value c is chosen such that the size is exactly \alpha, i.e., \sup_{\theta \leq \theta_0} P_\theta(T \geq c) = \alpha, a supremum attained at \theta = \theta_0 because the power function is non-decreasing. For continuous distributions, c is determined by solving P_{\theta_0}(T \geq c) = \alpha; in discrete cases, randomization may be required at the boundary point where P_{\theta_0}(T > c) < \alpha \leq P_{\theta_0}(T \geq c), with the test rejecting with probability \gamma = [\alpha - P_{\theta_0}(T > c)] / P_{\theta_0}(T = c) when T = c. The Karlin-Rubin theorem provides a formal statement of this result under MLR.
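As an illustration of the discrete case, the following sketch (Python with SciPy; the values of n, \theta_0, and \alpha are arbitrary choices) computes the critical value c and the randomization probability \gamma for a one-sided binomial test:

```python
from scipy import stats

def ump_onesided_binom(n, theta0, alpha):
    """Critical value c and randomization probability gamma for the UMP test of
    H0: theta <= theta0 vs H1: theta > theta0 based on T ~ Bin(n, theta).
    Reject if T > c; reject with probability gamma if T == c."""
    # Smallest c with P(T > c) <= alpha under theta0.
    c = 0
    while stats.binom.sf(c, n, theta0) > alpha:
        c += 1
    p_gt = stats.binom.sf(c, n, theta0)    # P(T > c)
    p_eq = stats.binom.pmf(c, n, theta0)   # P(T = c)
    gamma = (alpha - p_gt) / p_eq
    return c, gamma

c, gamma = ump_onesided_binom(n=20, theta0=0.5, alpha=0.05)
print(c, round(gamma, 4))  # exact size: P(T > c) + gamma * P(T = c) = 0.05
```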

Karlin-Rubin Theorem

The Karlin-Rubin theorem provides a foundational result in statistical hypothesis testing for families of distributions possessing the monotone likelihood ratio (MLR) property. Specifically, consider a family of distributions parameterized by a scalar \theta with a statistic T such that the family has MLR in T. For testing the composite hypotheses H_0: \theta \leq \theta_0 versus H_1: \theta > \theta_0 at level \alpha, the uniformly most powerful (UMP) test of size \alpha rejects H_0 with probability \phi(t) = 1 if t > c, \phi(t) = \gamma if t = c, and \phi(t) = 0 if t < c, where the constants c and 0 \leq \gamma \leq 1 are chosen such that \sup_{\theta \leq \theta_0} E_\theta[\phi(T)] = \alpha. This theorem, originally established by Samuel Karlin and Herman Rubin in 1956, extends the Neyman-Pearson lemma to composite hypotheses under MLR conditions and applies equally to discrete and continuous distributions. A key property of this test is that its power function \beta(\theta) = E_\theta[\phi(T)] = P_\theta(T > c) + \gamma P_\theta(T = c) is non-decreasing in \theta, reflecting the monotonicity inherent in the MLR property and ensuring the test's power increases as the alternative moves further from the null. To outline the proof, the argument leverages the MLR condition to demonstrate that the proposed test is most powerful against any specific alternative \theta_1 > \theta_0 by applying the Neyman-Pearson lemma, which yields a test of the same form. Monotonicity of the likelihood ratio then implies that this test controls the type I error rate over the entire composite null while achieving power at least as high as any other size-\alpha test against all alternatives \theta > \theta_0, establishing uniform most powerfulness; this step relies on the sign-variation-diminishing properties of the totally positive kernels associated with MLR families.
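Continuing the binomial sketch from the preceding subsection, the power function of the randomized test can be computed directly and checked to be non-decreasing in \theta, as the theorem asserts (the value of \gamma is carried over, rounded, from that example):

```python
import numpy as np
from scipy import stats

# Power of the randomized binomial test with n = 20, c = 14, gamma ~ 0.7926:
# beta(theta) = P_theta(T > c) + gamma * P_theta(T = c).
n, c, gamma = 20, 14, 0.7926
thetas = np.linspace(0.01, 0.99, 99)
power = stats.binom.sf(c, n, thetas) + gamma * stats.binom.pmf(c, n, thetas)
assert np.all(np.diff(power) >= 0)  # non-decreasing in theta
print(power[49])  # power at theta = 0.5, i.e. the size, approximately 0.05
```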

Median-Unbiased Estimation

In families of distributions possessing the monotone likelihood ratio (MLR) property in a statistic T, optimal median-unbiased estimators can be constructed by inverting uniformly most powerful (UMP) tests for one-sided hypotheses. Specifically, the estimator \hat{\theta} is defined as the solution to P_{\theta}(T \geq t \mid \theta = \hat{\theta}) = 1/2, where t is the observed value of T, ensuring that \hat{\theta} is median-unbiased, meaning P_{\theta}(\hat{\theta} \leq \theta \mid \theta) = 1/2 for all \theta. This construction leverages the monotonicity of the distribution function of T in \theta, which guarantees the existence and uniqueness of such an increasing estimator, and the resulting estimator is optimal among median-unbiased estimators in the sense of uniformly maximizing the coverage probabilities of the associated confidence intervals. This approach extends the Lehmann-Scheffé theory from mean-unbiased estimation to the median-unbiased setting, particularly when the sufficient statistic T is boundedly complete. In such cases, the resulting estimator not only achieves median-unbiasedness but also exhibits optimality properties with respect to absolute-error loss, minimizing the maximum risk over the parameter space among all median-unbiased estimators. A concrete example arises in the exponential distribution with scale \theta > 0, where the density is f(x \mid \theta) = (1/\theta) \exp(-x/\theta) for x > 0. For a single observation X, the sufficient statistic is T = X, and the distribution function is F(t \mid \theta) = 1 - \exp(-t/\theta). The median-unbiased estimator \hat{\theta} solves F(t \mid \hat{\theta}) = 1/2, yielding \hat{\theta} = t / \ln 2 \approx 1.4427 t. This estimator is stochastically optimal among median-unbiased alternatives, outperforming others in terms of higher probability of falling within any symmetric interval around the true \theta. In contrast, the UMVUE \hat{\theta} = t (which equals the maximum likelihood estimator) is mean-unbiased but has a median of \ln 2 \cdot \theta \approx 0.693 \theta, highlighting the distinct optimality criteria.
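A quick simulation illustrates the exponential example (a sketch using NumPy; the true \theta = 2 and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0                          # true scale of the exponential distribution
x = rng.exponential(theta, size=100_000)

# Median-unbiased estimator from a single observation: theta_hat = X / ln 2.
theta_hat = x / np.log(2)

# Median-unbiasedness: each estimate falls below the true theta with prob. 1/2.
print(np.mean(theta_hat <= theta))   # ~0.5
# The mean-unbiased estimator theta_hat = X has median theta * ln 2, not theta:
print(np.median(x) / theta)          # ~0.693
```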

Connections to Other Properties

Exponential Families

In one-parameter exponential families written in canonical form, the distributions possess the monotone likelihood ratio (MLR) property in the natural sufficient statistic. This property arises from the canonical parameterization, where the family is structured to yield monotonicity of the likelihood ratio as a function of the natural statistic. The density function takes the form f(x; \theta) = h(x) \exp\left\{ \theta t(x) - A(\theta) \right\}, where \theta denotes the canonical (natural) parameter, t(x) is the natural sufficient statistic, h(x) serves as the base measure, and A(\theta) is the log-partition (cumulant) function. For distinct parameters \theta_1 < \theta_2, the logarithm of the likelihood ratio is given by (\theta_2 - \theta_1) t(x) - [A(\theta_2) - A(\theta_1)]. Differentiating this log-ratio with respect to t(x) yields \theta_2 - \theta_1 > 0, demonstrating that the ratio is strictly increasing in t(x). The function A(\theta) is convex in \theta, a property that guarantees the integrability of the density over the natural parameter space and, via the derivatives of A, well-defined moments of the natural statistic. All regular exponential families (those with an open natural parameter space) exhibit the MLR property with respect to the canonical parameter. Common parametric families such as the binomial, Poisson, normal, and gamma illustrate this connection.
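The exact linearity of the log-likelihood ratio in t(x) can be seen concretely for the Poisson family, whose natural parameter is \theta = \log \lambda with t(k) = k and A(\theta) = e^\theta (a sketch assuming SciPy; the rates are arbitrary):

```python
import numpy as np
from scipy import stats

# Poisson in natural form: theta = log(lambda), t(k) = k, A(theta) = exp(theta).
lam1, lam2 = 2.0, 5.0
th1, th2 = np.log(lam1), np.log(lam2)
k = np.arange(0, 25)

log_lr = stats.poisson.logpmf(k, lam2) - stats.poisson.logpmf(k, lam1)
# Exactly linear in t(k) = k with positive slope theta2 - theta1:
slope = th2 - th1
expected = slope * k - (np.exp(th2) - np.exp(th1))
assert np.allclose(log_lr, expected)
print(slope)  # log(5/2) > 0
```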

Stochastic Dominance

A family of distributions \{P_\theta : \theta \in \Theta\} parameterized by a scalar \theta is said to have the monotone likelihood ratio (MLR) property with respect to an ordering of the sample space if, for \theta_1 < \theta_2, the likelihood ratio \frac{dP_{\theta_2}}{dP_{\theta_1}}(x) is non-decreasing in x. This property establishes an equivalence with first-order stochastic dominance within the family: for \theta_1 < \theta_2, P_{\theta_2} first-order stochastically dominates P_{\theta_1}, meaning the cumulative distribution functions satisfy F_{\theta_1}(t) \geq F_{\theta_2}(t) for all t \in \mathbb{R}. This connection arises because the likelihood ratio order, which underpins the MLR property, is stronger than the usual stochastic order in the theory of stochastic comparisons. Shaked and Shanthikumar (2007) formalized this relationship, showing that the likelihood ratio order implies first-order stochastic dominance (Theorem 1.C.1). A sketch of the proof relies on the densities f_\theta: the non-decreasing nature of f_{\theta_2}(x)/f_{\theta_1}(x) ensures that the cumulative distribution functions do not cross, as integrating the densities up to any t preserves the dominance relation through the monotone weighting. Specifically, for any t, \int_{-\infty}^t f_{\theta_2}(x) \, dx \leq \int_{-\infty}^t f_{\theta_1}(x) \, dx, derived from the ratio's monotonicity and normalization. This equivalence is particularly useful for comparing risks in decision-making under uncertainty, as stochastic dominance provides a robust ordering criterion that holds whenever the MLR assumption is satisfied, without requiring verification of stronger conditions like full likelihood computations.
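The implication is straightforward to check numerically for a familiar MLR family, the normal location family (a minimal sketch; the means and grid are arbitrary choices):

```python
import numpy as np
from scipy import stats

# MLR family: normal means with known sigma.
# First-order stochastic dominance: F_{mu1}(t) >= F_{mu2}(t) for mu1 < mu2.
t = np.linspace(-6, 6, 500)
mu1, mu2 = 0.0, 1.5
assert np.all(stats.norm.cdf(t, loc=mu1) >= stats.norm.cdf(t, loc=mu2))
print("first-order stochastic dominance holds on the grid")
```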

Monotone Hazard Rates

In the context of lifetime distributions parameterized by a scale parameter θ, where larger values of θ correspond to greater reliability (longer expected lifetimes), families exhibiting the monotone likelihood ratio (MLR) property in the lifetime variable satisfy a hazard rate ordering: the instantaneous failure risk at every age decreases as θ increases. This ordering complements the increasing failure rate (IFR) property, under which the hazard rate h(t; θ) is non-decreasing in t for each fixed θ, a property central to reliability theory for modeling wearout phenomena. The hazard rate, also known as the failure rate, for a lifetime distribution with density f(t; θ) and survival function S(t; θ) = ∫_t^∞ f(u; θ) du is defined as h(t; \theta) = \frac{f(t; \theta)}{S(t; \theta)}. Under the MLR property, where the likelihood ratio f(t; θ_2)/f(t; θ_1) is non-decreasing in t for θ_2 > θ_1, the family satisfies ∂/∂θ log h(t; θ) ≤ 0 for all t, indicating that the hazard rate is non-increasing in the reliability parameter θ (equivalently, the instantaneous failure risk decreases as reliability improves). A representative example is the exponential distribution, parameterized by rate λ (with mean 1/λ, so a smaller rate implies higher reliability). The hazard rate is h(t; λ) = λ, which is constant in t and strictly decreasing as the mean 1/λ grows, satisfying monotonicity in the reliability direction.
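Both orderings can be checked for the Weibull family with fixed shape k = 2, where the scale \lambda plays the role of the reliability parameter (a sketch with SciPy; the parameter choices are illustrative):

```python
import numpy as np
from scipy import stats

def weibull_hazard(t, k, lam):
    """Hazard f / S for a Weibull with shape k and scale lam."""
    f = stats.weibull_min.pdf(t, k, scale=lam)
    S = stats.weibull_min.sf(t, k, scale=lam)
    return f / S

t = np.linspace(0.1, 5, 100)
k = 2.0                                  # fixed shape > 1: wearout regime
h_low = weibull_hazard(t, k, lam=1.0)
h_high = weibull_hazard(t, k, lam=2.0)   # larger scale = more reliable
assert np.all(np.diff(h_low) > 0)        # IFR: hazard increases in t
assert np.all(h_high < h_low)            # hazard ordering: decreases in the scale
```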

Applications

Economics

In principal-agent models, the monotone likelihood ratio (MLR) property plays a crucial role in justifying the optimality of monotone contracts, where higher effort levels by the agent increase the likelihood of higher output realizations in a stochastically increasing manner. This condition ensures that the principal's optimal contract rewards the agent more generously for better performance outcomes, aligning incentives under moral hazard, where the agent's effort is unobservable. For instance, if an agent's effort e influences their productivity parameter \theta, and the output distribution satisfies MLR with respect to \theta, the resulting optimal contract is increasing in observed output, mitigating shirking incentives. Seminal work by James Mirrlees in the 1970s established MLR as a key assumption for validating the first-order approach to solving these models, particularly in addressing moral hazard problems where unobservable actions complicate efficient contracting. Building on this, Bengt Holmström's 1979 analysis of moral hazard and observability further demonstrated that under MLR the agent's compensation should be monotone in performance signals in settings with imperfect monitoring. The MLR property extends to auction theory, where it underpins the existence of monotone bidding strategies in equilibrium. In models of common value auctions, MLR in bidders' signal distributions implies that bids increase with private information, facilitating revenue comparisons across auction formats and supporting the linkage principle for seller revenue maximization.

Survival Analysis and Reliability

In reliability engineering, families of lifetime distributions exhibiting the monotone likelihood ratio (MLR) property, such as the Weibull distribution with fixed shape parameter, facilitate hypothesis testing for aging characteristics. Specifically, the Weibull model allows for uniformly most powerful (UMP) tests to assess whether the hazard rate magnitude is elevated under stress (indicating accelerated degradation) against alternatives of lower hazard levels, assuming an increasing hazard function form. This is particularly useful in evaluating component durability, where the MLR property ensures the test statistic based on observed failure times rejects the null hypothesis in a controlled manner for one-sided alternatives. A representative application involves testing the lifetime of components under varying stress levels parameterized by θ, where the time-to-failure T follows a Weibull distribution with scale inversely related to θ. Under the MLR property in θ, a UMP test for the hypotheses H_0: θ ≤ θ_0 versus H_1: θ > θ_0 can be constructed using the sufficient statistic derived from the sum of (suitably transformed) failure times, enabling reliable inference on whether the stress exceeds a threshold that compromises reliability. This approach leverages the Karlin-Rubin theorem to guarantee optimality in power for such tests in survival data contexts. The MLR property also plays a key role in accelerated life testing (ALT), where data collected under elevated stresses are extrapolated to predict performance under normal conditions. For instance, in ALT designs assuming Weibull lifetimes, the MLR ensures that the likelihood ratios remain valid for model extrapolation, supporting accurate reliability predictions without distortions from non-monotonic behaviors. Wayne Nelson's seminal work highlights this in the analysis of step-stress and constant-stress tests, where MLR-based models underpin the estimation of life quantiles at use conditions. In contemporary Bayesian survival analysis, priors informed by the MLR property are employed to model monotone hazard ratios, particularly in scenarios with censored data from clinical or reliability studies. These priors, often constructed to enforce monotonicity in hazard ratios or failure rates, yield posteriors that preserve the MLR structure, improving inference for time-to-event outcomes under constraints like increasing failure risks. This approach addresses challenges in traditional methods by incorporating prior knowledge of monotonicity, as explored in frameworks for proportional hazards models with monotone constraints.
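As a simplified sketch of such a test, take Weibull shape 1, i.e., exponential lifetimes with failure rate \theta, so that the total observed lifetime is gamma-distributed under the null (all parameter values below are illustrative assumptions):

```python
import numpy as np
from scipy import stats

def ump_exp_rate_test(x, theta0, alpha):
    """UMP level-alpha test of H0: theta <= theta0 vs H1: theta > theta0 for
    i.i.d. exponential lifetimes with failure rate theta (higher rate = worse).
    The likelihood ratio is decreasing in sum(x), so reject for small total
    lifetime. Under theta0, sum(x) ~ Gamma(n, scale = 1/theta0)."""
    n, s = len(x), np.sum(x)
    critical = stats.gamma.ppf(alpha, a=n, scale=1 / theta0)
    return s <= critical

rng = np.random.default_rng(1)
lifetimes = rng.exponential(scale=1 / 3.0, size=25)  # true rate 3 > theta0
print(ump_exp_rate_test(lifetimes, theta0=1.0, alpha=0.05))  # True (rejects H0)
```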

References

  1. [1] Neyman-Pearson lemma and monotone likelihood ratio [PDF].
  2. [2]
  3. [3]
  4. [4] Chapter 6: Testing [PDF] (course notes; cites Lehmann and Romano on the MLR of the noncentral t, χ², and F distributions).
  5. [5] Lecture 21: Significance Tests, I [PDF].
  6. [6] §5.1 Monotone likelihood ratio tests [PDF].
  7. [7] Uniformly most powerful tests (UMP) and likelihood ratio tests [PDF], March 29, 2016.
  8. [8] Springer Texts in Statistics [PDF].
  9. [9] Lehmann, E. L. Testing Statistical Hypotheses (First Edition). John Wiley & Sons, New York.
  10. [10] Karlin, S., and Rubin, H. The Theory of Decision Procedures for Distributions with Monotone Likelihood Ratio (1956).
  11. [11] Median-Unbiased Estimators [PDF]. Kyushu University.
  12. [12] Chapter 8: The Exponential Family: Basics [PDF].
  13. [13]
  14. [14] Properties of Probability Distributions with Monotone Hazard Rate.
  15. [15] The First-Order Approach to Principal-Agent Problems. JSTOR.
  16. [16] The Theory of Moral Hazard and Unobservable Behaviour: Part I. JSTOR.
  17. [17] Moral Hazard and Observability. JSTOR.
  18. [18]
  19. [19] Nelson, Wayne. Applied Life Data Analysis. Wiley Series in Probability and Mathematical Statistics.
  20. [20] Bayesian analysis for monotone hazard ratio (July 16, 2010).