
Expected loss

In probability theory and statistics, expected loss is the expected value of a random loss variable, calculated as the sum (or integral) of each possible loss multiplied by its probability. It represents the long-run average loss from repeated occurrences of the random event. In fields such as insurance, finance, and risk management, expected loss quantifies anticipated losses under uncertainty.

In credit risk analysis, expected loss (EL) is a key metric representing the anticipated average loss from a portfolio of loans or investments due to borrower defaults or credit events. It is computed using the formula EL = PD × LGD × EAD, where PD (probability of default) is the likelihood of a borrower failing to meet obligations, LGD (loss given default) is the portion of exposure not recovered upon default, and EAD (exposure at default) is the total outstanding amount at risk during a default event. This metric enables financial institutions to estimate potential losses across diversified portfolios rather than individual exposures, facilitating proactive provisioning and capital allocation. Components like PD are derived from historical data and statistical models assessing borrower creditworthiness, while LGD accounts for recovery rates from collateral or guarantees, and EAD incorporates undrawn commitments or future draws on credit lines.

Expected loss plays a central role in global regulatory standards to promote financial stability. Under the Basel Accords, including Basel III, expected losses are typically covered by provisions, while banks must hold regulatory capital to absorb unexpected losses arising from credit exposures. International Financial Reporting Standard 9 (IFRS 9), effective from January 1, 2018, mandates recognition of expected credit losses (ECL) on a forward-looking basis, using past events, current conditions, and reasonable forecasts, unlike prior incurred-loss models that delayed provisioning until impairment was evident. ECL is categorized into stages: 12-month ECL for performing assets (Stage 1), lifetime ECL for assets with significantly increased risk (Stage 2), and lifetime ECL for credit-impaired assets (Stage 3). In the U.S., the Current Expected Credit Loss (CECL) model under U.S. GAAP similarly requires lifetime loss estimates at origination, enhancing transparency but increasing operational complexity for lenders.
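As a minimal sketch, the EL = PD × LGD × EAD formula can be applied to a single hypothetical exposure; all figures below are illustrative assumptions, not data from any real loan:

```python
# Illustrative expected-loss calculation: EL = PD * LGD * EAD.
# All inputs are hypothetical assumptions chosen for the example.

pd_ = 0.02          # probability of default over the horizon (2%)
lgd = 0.45          # loss given default: 45% of exposure unrecovered
ead = 1_000_000.0   # exposure at default, in dollars

expected_loss = pd_ * lgd * ead
print(round(expected_loss, 2))  # 2% of $1,000,000, scaled by 45% severity
```

Because expectation is linear, the same calculation summed across loans gives a portfolio-level EL.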

Conceptual Foundations

Definition

Expected loss is the long-run average value of losses incurred over many repetitions of a random process, representing the anticipated loss under uncertainty. Formally, it is defined as the expected value of a non-negative random variable L \geq 0 that models the loss outcome. This distinguishes expected loss from the more general concept of expected value, which applies to any random variable and may yield positive, negative, or zero results depending on the distribution. Key properties of expected loss include the linearity of expectation, which holds regardless of dependence between loss components, allowing the total expected loss to be expressed as the sum of expected losses for individual components. Additionally, since the underlying random variable L is non-negative, the expected loss is always greater than or equal to zero.

The concept of expected value has roots in 17th-century work on games of chance, initiated by Blaise Pascal and Pierre de Fermat's correspondence on the problem of points in 1654, and systematically studied by Christiaan Huygens in his 1657 treatise De ratiociniis in ludo aleae. Jacob Bernoulli further developed these ideas in Ars Conjectandi (1713), where he introduced the law of large numbers that underpins the long-run interpretation. The concept was later formalized within modern measure-theoretic probability by Andrey Kolmogorov in his axiomatic framework, establishing expectation as a Lebesgue integral over the probability space.
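The linearity and non-negativity properties can be checked on a small, made-up joint distribution of two clearly dependent losses:

```python
# Sketch: linearity of expectation holds even for dependent losses.
# The joint pmf below is a hypothetical example with strong dependence.

joint_pmf = {  # (l1, l2): probability
    (0, 0): 0.5,
    (100, 100): 0.3,   # the two losses tend to occur together
    (100, 0): 0.1,
    (0, 500): 0.1,
}

e_l1 = sum(l1 * p for (l1, l2), p in joint_pmf.items())
e_l2 = sum(l2 * p for (l1, l2), p in joint_pmf.items())
e_sum = sum((l1 + l2) * p for (l1, l2), p in joint_pmf.items())

# E[L1 + L2] equals E[L1] + E[L2] despite the dependence.
print(round(e_l1, 2), round(e_l2, 2), round(e_sum, 2))
```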

Probabilistic Interpretation

In measure-theoretic probability, the expected loss E[L] for a loss random variable L defined on a probability space (\Omega, \mathcal{F}, P) is fundamentally grounded in Kolmogorov's axiomatic framework, which establishes probability as a countably additive measure on a sigma-algebra. This foundation ensures the linearity of expectation, E[L_1 + L_2] = E[L_1] + E[L_2], even for dependent risks L_1 and L_2, allowing the expected loss of a portfolio to be the sum of individual expected losses without requiring independence for additivity. Measure-theoretically, the expected loss is expressed as the Lebesgue integral E[L] = \int_\Omega L(\omega) \, dP(\omega), where L: \Omega \to [0, \infty) is a measurable function representing losses, and the integral exists provided L is integrable, meaning E[|L|] < \infty. For non-negative loss variables, integrability holds whenever the integral is finite, enabling the computation of long-run averages over the probability space without reliance on specific distributions. This integral formulation generalizes classical expectations and aligns with the additivity axiom for disjoint events, ensuring consistency in probabilistic modeling of uncertain losses.

The interpretation of expected loss as a long-run average is justified by the law of large numbers (LLN), which states that for independent and identically distributed losses L_1, L_2, \dots, L_n with finite expectation, the sample mean \bar{L}_n = n^{-1} \sum_{i=1}^n L_i converges almost surely to E[L] as n \to \infty. Kolmogorov's strong LLN provides this rigorous convergence under minimal conditions such as integrability, underscoring why expected loss serves as a reliable predictor of average outcomes in repeated trials, such as insurance claims in large pools.

While expected loss captures the central tendency, it does not account for the variability or spread of possible losses; this is quantified separately by the variance \mathrm{Var}(L) = E[(L - E[L])^2] or its square root, the standard deviation \sqrt{\mathrm{Var}(L)}, which measures dispersion around the mean and highlights risks beyond the average. In actuarial contexts, focusing solely on expected loss may understate uncertainty, necessitating complementary measures like the standard deviation to assess the full risk profile.
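A short simulation illustrates both points: the sample mean of many i.i.d. draws approaches E[L], while the standard deviation captures spread that the mean alone ignores. The three-outcome loss distribution below is a hypothetical example:

```python
# Simulation sketch of the law-of-large-numbers interpretation.
import random

random.seed(0)  # reproducible illustration

outcomes, probs = [0, 100, 500], [0.7, 0.2, 0.1]
expected = sum(l * p for l, p in zip(outcomes, probs))                   # 70
variance = sum((l - expected) ** 2 * p for l, p in zip(outcomes, probs))

n = 100_000
sample = random.choices(outcomes, weights=probs, k=n)
sample_mean = sum(sample) / n

# Mean 70, standard deviation about 148.7; the sample mean lands near 70.
print(expected, round(variance ** 0.5, 2), round(sample_mean, 2))
```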

Mathematical Formulation

Discrete Case

In the discrete case, the expected loss E[L] for a random variable L taking values in a countable set \{l_i : i \in \mathcal{I}\} (where \mathcal{I} is typically the non-negative integers) is given by the summation formula E[L] = \sum_{i \in \mathcal{I}} l_i \, P(L = l_i), where P(L = l_i) denotes the probability mass function (pmf) of L at l_i. This formula represents the long-run average loss per realization when the outcomes are countable and probabilities are assigned to each discrete point. The derivation follows from the general definition of expectation as the Lebesgue integral E[L] = \int L \, dP over the probability space, specialized to the discrete case. For a discrete random variable, the sigma-algebra is generated by the singletons \{l_i\}, so the integral reduces to a sum over these atoms: E[L] = \sum_{i \in \mathcal{I}} l_i \, P(\{l_i\}), where P(\{l_i\}) = P(L = l_i). This specialization holds under the measure-theoretic framework, ensuring consistency with the probabilistic interpretation of expectation as a weighted average.

The formula assumes that the pmf satisfies \sum_{i \in \mathcal{I}} P(L = l_i) = 1, which is a defining property of any probability distribution. For the expectation to exist and be finite (especially with unbounded support, where \mathcal{I} is countably infinite), the series must converge absolutely, i.e., \sum_{i \in \mathcal{I}} |l_i| \, P(L = l_i) < \infty; otherwise, E[L] may be infinite or undefined. When the support is finite (e.g., \mathcal{I} = \{0, 1, \dots, n\}), the sum is computed directly. For infinite support, such as in distributions with unbounded tails (e.g., claim-count models in insurance, where the number of claims follows a Poisson distribution), exact closed-form expressions may exist, or the sum can be truncated at a point where the remaining tail probability is negligible for numerical approximation, provided convergence holds.
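The truncation idea can be sketched for a simple compound model. Assume, purely for illustration, that the number of claims is Poisson with mean 2 and each claim costs a flat $1,000, so E[L] = 2 × 1000 = 2000; the truncated sum recovers this once the remaining tail probability is negligible:

```python
# Sketch: truncating an infinite discrete sum for a Poisson claim count.
import math

lam, cost = 2.0, 1000.0  # hypothetical Poisson mean and flat claim cost

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

el, cum_prob, k = 0.0, 0.0, 0
while cum_prob < 1 - 1e-12:      # stop once the remaining tail is negligible
    p = poisson_pmf(k, lam)
    el += k * cost * p           # loss contribution of the k-claim outcome
    cum_prob += p
    k += 1

print(round(el, 2))  # close to lam * cost = 2000.0
```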

Continuous Case

In the continuous case, the expected loss E[L] for a non-negative continuous random variable L representing loss is defined as the integral of the loss values weighted by their probability density function f_L(l), taken over the support [0, \infty): E[L] = \int_{0}^{\infty} l \cdot f_L(l) \, dl. This formulation arises as the continuous analog to the discrete summation, providing a measure of the average loss under the distribution's density. An alternative expression, particularly useful for positive random variables, utilizes the survival function S_L(l) = P(L > l) = 1 - F_L(l), where F_L is the cumulative distribution function. For non-negative L, the expected value can be rewritten as E[L] = \int_{0}^{\infty} S_L(l) \, dl. This form, often derived via Fubini's theorem by interchanging the order of integration in the density-based integral, facilitates computations involving tail probabilities without explicitly requiring the density.

The general definition of the expected value for a continuous random variable stems from the Riemann-Stieltjes integral with respect to the distribution function F_L: E[L] = \int_{0}^{\infty} l \, dF_L(l). When F_L is absolutely continuous, this reduces to the Lebesgue integral with respect to the density measure f_L(l) \, dl. For the expectation to exist and be finite, the random variable L must satisfy absolute integrability, i.e., E[|L|] < \infty, ensuring the expectation is well-defined and not infinite.

For common loss distributions such as the exponential (with rate \lambda > 0) or Pareto (Type I, with shape \alpha > 1 and scale x_m > 0), the tail-expectation structure follows directly from these forms. The exponential survival function S_L(l) = e^{-\lambda l} yields E[L] = \int_{0}^{\infty} e^{-\lambda l} \, dl = 1/\lambda, while the Pareto survival function S_L(l) = (x_m / l)^{\alpha} for l \geq x_m (and S_L(l) = 1 for l < x_m) gives E[L] = x_m + \int_{x_m}^{\infty} (x_m / l)^{\alpha} \, dl = x_m + x_m/(\alpha - 1) = \alpha x_m / (\alpha - 1), highlighting how heavy tails in the Pareto case lead to finite expectations only for \alpha > 1. These structures underscore the applicability of the integral forms in modeling unbounded losses.
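The survival-function form lends itself to simple numerical checks. The sketch below approximates E[L] = \int_0^\infty S_L(l) \, dl with a midpoint rule (the parameter choices and truncation point are illustrative assumptions) and compares the results against the closed forms:

```python
# Numerical sketch: E[L] as the integral of the survival function,
# checked against closed forms for exponential and Pareto losses.
import math

def expected_loss_from_survival(survival, upper, steps=500_000):
    """Midpoint-rule approximation of the integral of S(l) over [0, upper]."""
    h = upper / steps
    return sum(survival((i + 0.5) * h) for i in range(steps)) * h

# Exponential with rate lam: E[L] = 1 / lam
lam = 0.5
exp_el = expected_loss_from_survival(lambda l: math.exp(-lam * l), upper=100.0)
print(round(exp_el, 3))  # close to 1 / lam = 2.0

# Pareto Type I with shape alpha, scale x_m: E[L] = alpha * x_m / (alpha - 1)
alpha, x_m = 3.0, 100.0

def pareto_survival(l):
    return 1.0 if l < x_m else (x_m / l) ** alpha

par_el = expected_loss_from_survival(pareto_survival, upper=1_000_000.0)
print(round(par_el, 1))  # close to 3 * 100 / 2 = 150.0
```

Note that for the Pareto case the truncation at a large upper bound is what makes the improper integral computable; the heavier the tail (smaller \alpha), the larger the bound must be.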

Calculation Examples

Basic Example

Consider a basic scenario for computing expected loss in a discrete setting: a single event, such as a potential insurance claim, with three possible outcomes—a loss of $0 occurring with probability 0.7, a loss of $100 with probability 0.2, and a loss of $500 with probability 0.1. The expected loss E[L] follows the discrete expected value formula, where each outcome is weighted by its probability: E[L] = \sum_i p_i l_i = (0 \times 0.7) + (100 \times 0.2) + (500 \times 0.1). The first term, $0 \times 0.7 = 0, accounts for the most likely outcome of no loss. The second term, $100 \times 0.2 = 20, reflects the expected contribution from the moderate loss scenario. The third term, $500 \times 0.1 = 50, captures the impact of the severe but less probable loss. Summing these gives E[L] = 0 + 20 + 50 = 70, so the expected loss is $70. This value predicts the average loss over many repetitions of the event; for 1000 independent trials, the total expected loss approximates $70,000. Common pitfalls in such calculations include neglecting to confirm that the probabilities sum to 1, which is essential for a valid discrete probability distribution, and incorrectly allowing negative loss values, as loss random variables are defined to be non-negative.
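The arithmetic above can be reproduced with a short script that also guards against the two pitfalls mentioned (probabilities not summing to 1, negative losses):

```python
# The worked example as a script: validate the pmf, then compute
# the probability-weighted sum.

pmf = {0: 0.7, 100: 0.2, 500: 0.1}   # loss amount -> probability

assert abs(sum(pmf.values()) - 1.0) < 1e-12, "probabilities must sum to 1"
assert all(loss >= 0 for loss in pmf), "losses must be non-negative"

expected_loss = sum(loss * p for loss, p in pmf.items())
print(round(expected_loss, 2))  # 0 + 20 + 50 = 70.0
```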

Recalculation and Sensitivity

In risk management, recalculation of expected loss often involves adjusting input parameters to simulate varying scenarios and assess potential impacts. Extending the basic example of loss outcomes, suppose the probability of incurring the $500 loss increases to 0.15, while the probabilities of the other loss amounts are scaled proportionally to ensure they sum to 1. The revised probabilities are approximately p($0) = 0.661, p($100) = 0.189, and p($500) = 0.15. The revised expected loss is then computed as the sum of each adjusted probability multiplied by its corresponding loss amount, yielding E[L] ≈ $94. This represents approximately a 34% increase from the original expected loss of $70, demonstrating how even modest shifts in probability can significantly amplify overall risk exposure.

Sensitivity analysis further quantifies how expected loss responds to such parameter variations, providing insight into the relative influence of each component. For a discrete expected loss E[L] = \sum p_i l_i, the partial derivative with respect to an individual probability p_i is \partial E[L] / \partial p_i = l_i, indicating that the change in expected loss per unit change in probability equals the loss amount itself. This relationship underscores the leverage of high-loss events: alterations in probabilities tied to large l_i produce disproportionately greater effects on E[L] than those tied to smaller losses, enabling prioritization of monitoring for tail risks. Practically, these recalculation and sensitivity techniques facilitate stress testing in risk management by revealing vulnerabilities to input uncertainties without requiring exhaustive simulations. For instance, organizations can iteratively adjust probabilities based on emerging data or stress conditions to forecast shifts in expected loss, informing targeted mitigation strategies.
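The recalculation above can be sketched directly, raising p($500) to 0.15, rescaling the remaining probabilities, and recomputing E[L]:

```python
# Recalculation sketch: shift tail probability and rescale the rest.

pmf = {0: 0.7, 100: 0.2, 500: 0.1}
base_el = sum(loss * p for loss, p in pmf.items())        # 70.0

new_p500 = 0.15
scale = (1 - new_p500) / (1 - pmf[500])                   # 0.85 / 0.9
revised = {0: 0.7 * scale, 100: 0.2 * scale, 500: new_p500}
assert abs(sum(revised.values()) - 1.0) < 1e-12           # still a valid pmf

revised_el = sum(loss * p for loss, p in revised.items())
pct_change = (revised_el / base_el - 1) * 100
print(round(revised_el, 1), round(pct_change, 1))  # about 93.9 and 34.1%
```

Consistent with the sensitivity result \partial E[L] / \partial p_i = l_i, most of the increase comes from the $500 outcome.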

Applications

Insurance and Actuarial Science

In insurance and actuarial science, expected loss serves as the foundation for premium calculation, representing the anticipated cost of claims per unit of exposure. The pure premium is defined as the expected loss, \mathbb{E}[L], which covers the projected payouts for insured events without additional margins. This is typically computed as the product of claim frequency and average severity, where frequency is the expected number of claims per exposure unit and severity is the expected loss amount per claim. For aggregate claims distributions, such as in workers' compensation insurance, the pure premium might be derived from historical data adjusted for exposure, yielding a base rate like $44.53 per $1,000 of payroll after accounting for the distribution's mean. To arrive at the full premium rate, loadings are added for operational expenses, administrative costs, and profit contingencies; for instance, the rate formula incorporates fixed expenses per exposure, variable expenses as a percentage of the rate, and a profit factor, often expressed as R = ( \mathbb{E}[L] + F ) / (1 - V - Q), where F is fixed expenses, V is the variable expense ratio, and Q is the profit loading. An example calculation with a frequency of 0.25 claims per exposure, severity of $100, fixed expenses of $10, V = 0.20, and Q = 0.05 results in a rate of (25 + 10) / 0.75 ≈ $46.67 per exposure unit.

Expected loss also plays a key role in loss reserving, particularly for estimating incurred but not reported (IBNR) claims, where actuaries project future payments based on incomplete data. The chain-ladder method, a standard technique, uses historical loss patterns from run-off triangles of paid and incurred losses to forecast ultimate losses, with IBNR calculated as the difference between projected ultimates and currently reported amounts; for example, applying cumulative development factors like 1.292 to recent accident years can yield IBNR reserves of approximately $25.7 million.
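The rate formula can be sketched with the illustrative inputs just given:

```python
# Rate-formula sketch: R = (E[L] + F) / (1 - V - Q).

frequency, severity = 0.25, 100.0    # claims per exposure, $ per claim
pure_premium = frequency * severity  # E[L] = 25.0

F, V, Q = 10.0, 0.20, 0.05           # fixed expense, variable ratio, profit
rate = (pure_premium + F) / (1 - V - Q)
print(round(rate, 2))  # (25 + 10) / 0.75
```

Dividing by (1 - V - Q) rather than adding V and Q reflects that variable expenses and profit are taken as percentages of the final rate, not of the pure premium.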
However, to incorporate prior expectations of loss levels, the Bornhuetter-Ferguson method blends chain-ladder projections with a priori expected losses, weighting the estimate as (1 - maturity) \times expected ultimate loss + maturity \times developed ultimate loss, where the expected ultimate loss is derived from earned premium and an a priori loss ratio (e.g., 0.7). This approach reduces reliance on immature data for recent periods, producing IBNR estimates like $75,203 for a book of business with 40,000 exposure units.

In integrating risk measures, expected loss provides a baseline for average anticipated claims, while value at risk (VaR) addresses tail risks by quantifying potential extreme losses at a high confidence level, such as the 99% quantile of the loss distribution. Unlike expected loss, which averages all outcomes, VaR ignores losses beyond its threshold, potentially underestimating tail exposure in scenarios like concentrated portfolios or market stress; for instance, in property-casualty insurance, VaR might miss threats from catastrophic events that expected loss smooths over. Actuaries often pair expected loss with VaR or expected shortfall (the average loss exceeding VaR) to balance mean projections with extreme risk assessments, ensuring reserves cover both routine and catastrophic claims.

Regulatory frameworks like Solvency II, implemented via Directive 2009/138/EC and updated post-2010, mandate expected loss projections for calculating technical provisions and ensuring capital adequacy in European insurance. The best estimate liability, a core component of technical provisions, is the expected present value of future cash flows from claims and expenses, discounted appropriately and projected over the contract boundary using cash-flow models or roll-forward approximations for quarterly updates. These projections inform the Solvency Capital Requirement (SCR), where expected losses help calibrate the risk margin via cost-of-capital methods, requiring full projections of future SCRs to reflect run-off risks and diversification effects, thus maintaining a solvency ratio above 100% to absorb unexpected deviations.
For non-life insurers, this ensures reserves for IBNR and other obligations align with projected expected losses under varying economic scenarios. As of 2025, the Solvency II review through Directive (EU) 2025/2 has introduced amendments effective January 2025, refining the volatility adjustment and risk margin calculations to better incorporate expected losses in low-interest environments and enhancing proportionality for smaller insurers.
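The Bornhuetter-Ferguson blend described above can be sketched with hypothetical inputs; the premium, a priori loss ratio, reported losses, and development factor below are assumptions for illustration only:

```python
# Bornhuetter-Ferguson sketch: blend a chain-ladder (developed) estimate
# with an a priori expected ultimate loss. All inputs are hypothetical.

premium = 100_000.0
a_priori_loss_ratio = 0.7
expected_ultimate = premium * a_priori_loss_ratio    # a priori view: 70,000

reported_to_date = 40_000.0
cdf = 1.6                      # assumed cumulative development factor
maturity = 1.0 / cdf           # fraction of ultimate expected to be reported

developed_ultimate = reported_to_date * cdf          # chain-ladder view
bf_ultimate = (maturity * developed_ultimate
               + (1 - maturity) * expected_ultimate)
ibnr = bf_ultimate - reported_to_date
print(round(bf_ultimate, 2), round(ibnr, 2))
```

Since maturity × developed ultimate equals reported losses, the BF estimate reduces to reported losses plus the unreported share of the a priori ultimate, which is why it leans on the prior for immature periods.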

Finance and Credit Risk

In finance, expected loss quantifies anticipated financial impacts from borrower defaults in lending, beyond the basic definitions above. For portfolios comprising multiple loans or exposures, expected loss aggregates linearly via the linearity of expectation, yielding the total as the sum of individual expected losses weighted by exposure sizes, without dependence on correlations between obligors. Diversification across borrowers or sectors reduces the variance of portfolio losses, enhancing stability in the distribution of unexpected losses, but does not alter the mean expected loss, which remains additive regardless of default interdependencies. This facilitates scalable risk assessment in large portfolios, though variance considerations remain critical for capital planning beyond the mean.

Under the Basel II framework, expected loss is integrated into regulatory capital requirements by deducting any shortfall between EL estimates and eligible provisions from Tier 1 and Tier 2 capital (50% each), ensuring banks hold sufficient buffers against anticipated shortfalls while focusing capital on unexpected losses. Basel III, introduced post-2008 to bolster resilience, refined these provisions by mandating deductions from Common Equity Tier 1 (CET1) capital for EL-provision shortfalls and enhancing risk models to better capture systemic risks, thereby improving overall bank resilience amid volatile conditions. In April 2025, the Basel Committee updated its principles for credit risk management, emphasizing forward-looking, data-driven approaches to expected loss estimation, aligning with Basel III finalisation implementations starting July 2025 in jurisdictions such as the EU and UK, which strengthen output floor requirements and risk-weighted asset calculations for credit exposures. The 2008 financial crisis underscored vulnerabilities in expected loss modeling, as underestimation of correlated defaults, particularly in mortgage-backed securities, amplified actual losses far beyond projections, leading to widespread bank failures and necessitating regulatory reforms.
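The portfolio additivity and the Basel-style provision shortfall can be sketched together; the loan parameters and provision level below are illustrative assumptions, not regulatory figures:

```python
# Sketch: portfolio EL adds linearly across exposures, and the shortfall
# of eligible provisions relative to EL is what gets deducted from capital.

loans = [  # hypothetical (PD, LGD, EAD) triples
    (0.01, 0.40, 2_000_000),
    (0.03, 0.55, 800_000),
    (0.10, 0.70, 150_000),
]

portfolio_el = sum(pd_ * lgd * ead for pd_, lgd, ead in loans)

eligible_provisions = 25_000.0   # assumed provisions already held
shortfall = max(portfolio_el - eligible_provisions, 0.0)

print(round(portfolio_el, 2), round(shortfall, 2))
```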
