The following is a review of the Market Risk Measurement and Management principles designed to address the
learning objectives set forth by GARP®. Cross-reference to GARP assigned reading—Dowd, Chapter 3.
READING 1
ESTIMATING MARKET RISK MEASURES: AN
INTRODUCTION AND OVERVIEW
Study Session 1
EXAM FOCUS
In this reading, the focus is on the estimation of market risk measures, such as value at
risk (VaR). VaR identifies the loss level that will be exceeded with only a pre-specified
probability (the significance level). For the exam, be prepared to evaluate and calculate VaR using historical
simulation and parametric models (both normal and lognormal return distributions).
One drawback to VaR is that it does not estimate losses in the tail of the returns
distribution. Expected shortfall (ES) does, however, estimate the loss in the tail (i.e.,
after the VaR threshold has been breached) by averaging loss levels at different
confidence levels. Coherent risk measures incorporate personal risk aversion across the
entire distribution and are more general than expected shortfall. Quantile-quantile (QQ)
plots are used to visually inspect if an empirical distribution matches a theoretical
distribution.
ESTIMATING RETURNS
To better understand the material in this reading, it is helpful to recall the
computations of arithmetic and geometric returns. Note that the convention when
computing these returns (as well as VaR) is to quote return losses as positive values.
For example, if a portfolio is expected to decrease in value by $1 million, we use the
terminology “expected loss is $1 million” rather than “expected profit is –$1 million.”
Profit/loss data: Change in value of asset/portfolio, Pt, at the end of period t plus any
interim payments, Dt.
Arithmetic return data: Assumption is that interim payments do not earn a return
(i.e., no reinvestment). Hence, this approach is not appropriate for long investment
horizons.
Geometric return data: Assumption is that interim payments are continuously
reinvested. Note that this approach ensures that asset price can never be negative.
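To make the two return conventions concrete, here is a minimal Python sketch (the prices and interim payment below are hypothetical, chosen only for illustration):

```python
import math

def arithmetic_return(p_t, p_prev, d_t=0.0):
    # (P_t + D_t - P_{t-1}) / P_{t-1}; the interim payment D_t earns no return
    return (p_t + d_t - p_prev) / p_prev

def geometric_return(p_t, p_prev, d_t=0.0):
    # ln[(P_t + D_t) / P_{t-1}]; assumes continuous reinvestment of D_t
    return math.log((p_t + d_t) / p_prev)

# hypothetical one-period example: price moves 100 -> 105 with a $2 payment
r_arith = arithmetic_return(105.0, 100.0, d_t=2.0)  # 0.07
r_geom = geometric_return(105.0, 100.0, d_t=2.0)    # ln(1.07), about 0.0677
```

Because ln(1 + r) < r for positive r, the geometric return sits slightly below the arithmetic return; the gap is negligible over short horizons, which is why the normal and lognormal VaR variants later in this reading give similar answers for short periods.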
MODULE 1.1: HISTORICAL AND PARAMETRIC ESTIMATION
APPROACHES
Historical Simulation Approach
LO 1.a: Estimate VaR using a historical simulation approach.
Estimating VaR with a historical simulation approach is by far the simplest and most
straightforward VaR method. To make this calculation, you simply order the loss
observations from largest to smallest. The observation that follows the threshold loss
level denotes the VaR limit. We are essentially searching for the observation that
separates the tail from the body of the distribution. More generally, the observation that
determines VaR for n observations at the (1 − α) confidence level would be: (α × n) + 1.
PROFESSOR’S NOTE
Recall that the confidence level, (1 − α), is typically a large value (e.g., 95%)
whereas the significance level, usually denoted as α, is much smaller (e.g., 5%).
To illustrate this VaR method, assume you have gathered 1,000 monthly returns for a
security and produced the distribution shown in Figure 1.1. You decide that you want to
compute the monthly VaR for this security at a confidence level of 95%. At a 95%
confidence level, the lower tail displays the lowest 5% of the underlying distribution’s
returns. For this distribution, the value associated with a 95% confidence level is a
return of –15.5%. If you have $1,000,000 invested in this security, the one-month VaR is
$155,000 (= 15.5% × $1,000,000, quoting the loss as a positive value).
Figure 1.1: Histogram of Monthly Returns
EXAMPLE: Identifying the VaR limit
Identify the ordered observation in a sample of 1,000 data points that corresponds
to VaR at a 95% confidence level.
Answer:
Since VaR is to be estimated at 95% confidence, this means that 5% (i.e., 50) of the
ordered observations would fall in the tail of the distribution. Therefore, the 51st
ordered loss observation would separate the 5% of largest losses from the
remaining 95% of returns.
PROFESSOR’S NOTE
VaR is the quantile that separates the tail from the body of the distribution.
With 1,000 observations at a 95% confidence level, there is a certain level of
arbitrariness in how the ordered observations relate to VaR. In other words,
should VaR be the 50th observation (i.e., α × n), the 51st observation [i.e., (α ×
n) + 1], or some combination of these observations? In this example, using the
51st observation was the approximation for VaR, and the method used in the
assigned reading. However, on past FRM exams, VaR using the historical
simulation method has been calculated as just: (α × n), in this case, as the 50th
observation.
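The ordering logic above can be sketched in Python. The loss series below is hypothetical, and the function uses the assigned reading's (α × n) + 1 convention (the 51st largest loss for 1,000 observations at 95% confidence):

```python
def historical_var(losses, confidence=0.95):
    # losses are quoted as positive values, per the reading's convention
    alpha = 1.0 - confidence
    ordered = sorted(losses, reverse=True)   # largest loss first
    k = int(alpha * len(losses))             # number of observations in the tail
    return ordered[k]                        # the (alpha*n + 1)-th ordered loss

# hypothetical sample: 1,000 losses of 0.1%, 0.2%, ..., 100.0%
losses = [i / 1000 for i in range(1, 1001)]
var_95 = historical_var(losses)              # 51st largest loss = 0.95
```

Changing `ordered[k]` to `ordered[k - 1]` gives the (α × n) convention used on some past exams.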
EXAMPLE: Computing VaR
A long history of profit/loss data closely approximates a standard normal
distribution (mean equals zero; standard deviation equals one). Estimate the 5%
VaR using the historical simulation approach.
Answer:
The VaR limit will be at the observation that separates the tail loss with area equal
to 5% from the remainder of the distribution. Since the distribution is closely
approximated by the standard normal distribution, the VaR is 1.65 (5% critical
value from the z-table). Recall that since VaR is a one-tailed measure, the entire
significance level of 5% lies in the left tail of the returns distribution.
From a practical perspective, the historical simulation approach is sensible only if you
expect future performance to follow the same return generating process as in the past.
Furthermore, this approach is unable to adjust for changing economic conditions or
abrupt shifts in parameter values.
Parametric Estimation Approaches
LO 1.b: Estimate VaR using a parametric approach for both normal and
lognormal return distributions.
In contrast to the historical simulation method, the parametric approach (e.g., the
delta-normal approach) explicitly assumes a distribution for the underlying
observations. In this section, we will analyze two cases: (1) VaR for returns that follow
a normal distribution and (2) VaR for returns that follow a lognormal distribution.
Normal VaR
Intuitively, the VaR for a given confidence level denotes the point that separates the tail
losses from the remaining distribution. The VaR cutoff will be in the left tail of the
returns distribution. Hence, the calculated value at risk is negative, but is typically
reported as a positive value since the negative amount is implied (i.e., it is the value that
is at risk). In equation form, the VaR at significance level α is:
VaR = −µ + (σ × zα)
where µ and σ denote the mean and standard deviation of the profit/loss distribution
and z denotes the critical value (i.e., quantile) of the standard normal. In practice, the
population parameters μ and σ are not likely known, in which case the researcher will
use the sample mean and standard deviation.
EXAMPLE: Computing VaR (normal distribution)
Assume that the profit/loss distribution for XYZ is normally distributed with an
annual mean of $15 million and a standard deviation of $10 million. Calculate the
VaR at the 95% and 99% confidence levels using a parametric approach.
Answer:
VaR(5%) = −$15 million + $10 million × 1.65 = $1.5 million. Therefore, XYZ expects
to lose at most $1.5 million over the next year with 95% confidence. Equivalently,
XYZ expects to lose more than $1.5 million with a 5% probability.
VaR(1%) = −$15 million + $10 million × 2.33 = $8.3 million. Note that the VaR (at
99% confidence) is greater than the VaR (at 95% confidence) as follows from the
definition of value at risk.
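The XYZ numbers can be checked with a short Python sketch. Note that statistics.NormalDist supplies the exact quantile (z ≈ 1.6449 rather than the rounded 1.65), so the 95% result is ≈ $1.45 million rather than the rounded $1.5 million above:

```python
from statistics import NormalDist

def normal_var(mu, sigma, confidence):
    # delta-normal VaR on P/L data: VaR = -mu + sigma * z
    z = NormalDist().inv_cdf(confidence)
    return -mu + sigma * z

var_95 = normal_var(15.0, 10.0, 0.95)  # about 1.45 ($ millions)
var_99 = normal_var(15.0, 10.0, 0.99)  # about 8.26 ($ millions)
```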
Now suppose that the data you are using is arithmetic return data rather than
profit/loss data. The arithmetic returns follow a normal distribution as well. As you
would expect, because of the relationship between prices, profits/losses, and returns,
the corresponding VaR is very similar in format:
VaR = (−µR + σR × zα) × Pt−1
where µR and σR denote the mean and standard deviation of arithmetic returns and Pt−1 is the beginning-of-period portfolio value.
EXAMPLE: Computing VaR (arithmetic returns)
A portfolio has a beginning period value of $100. The arithmetic returns follow a
normal distribution with a mean of 10% and a standard deviation of 20%.
Calculate VaR at both the 95% and 99% confidence levels.
Answer:
VaR(5%) = [−0.10 + (0.20 × 1.65)] × $100 = $23.00
VaR(1%) = [−0.10 + (0.20 × 2.33)] × $100 = $36.60
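A numerical check of the arithmetic-return case, in Python (exact z-quantiles from statistics.NormalDist, so the figures differ slightly from those computed with the rounded z-values 1.65 and 2.33):

```python
from statistics import NormalDist

def arithmetic_return_var(mu, sigma, value, confidence):
    # VaR = (-mu_R + sigma_R * z) * P_{t-1}
    z = NormalDist().inv_cdf(confidence)
    return (-mu + sigma * z) * value

var_95 = arithmetic_return_var(0.10, 0.20, 100.0, 0.95)  # about $22.90
var_99 = arithmetic_return_var(0.10, 0.20, 100.0, 0.99)  # about $36.53
```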
Lognormal VaR
The lognormal distribution is right-skewed with positive outliers and bounded below
by zero. As a result, the lognormal distribution is commonly used to counter the
possibility of negative asset prices (Pt). Technically, if we assume that geometric returns
follow a normal distribution (μR, σR), then the natural logarithm of asset prices follows
a normal distribution and Pt follows a lognormal distribution. After some algebraic
manipulation, we can derive the following expression for lognormal VaR:
VaR = Pt−1 × (1 − e^(µR − σR × zα))
EXAMPLE: Computing VaR (lognormal distribution)
A diversified portfolio exhibits a normally distributed geometric return with mean
and standard deviation of 10% and 20%, respectively. Calculate the 5% and 1%
lognormal VaR assuming the beginning period portfolio value is $100.
Answer:
VaR(5%) = $100 × [1 − e^(0.10 − 0.20 × 1.65)] = $100 × (1 − e^−0.23) = $20.55
VaR(1%) = $100 × [1 − e^(0.10 − 0.20 × 2.33)] = $100 × (1 − e^−0.366) = $30.65
Note that the calculation of lognormal VaR (geometric returns) and normal VaR
(arithmetic returns) will be similar when we are dealing with short time periods and
practical return estimates.
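The lognormal example can be verified the same way; with exact z-quantiles the answers land close to the rounded figures above:

```python
import math
from statistics import NormalDist

def lognormal_var(mu, sigma, value, confidence):
    # VaR = P_{t-1} * (1 - exp(mu_R - sigma_R * z))
    z = NormalDist().inv_cdf(confidence)
    return value * (1.0 - math.exp(mu - sigma * z))

var_95 = lognormal_var(0.10, 0.20, 100.0, 0.95)  # about $20.46
var_99 = lognormal_var(0.10, 0.20, 100.0, 0.99)  # about $30.60
```

Compared with the arithmetic-return VaR of roughly $22.90 at 95%, the lognormal figure is smaller: the exponential form caps the worst possible loss at the full portfolio value, reflecting the floor of zero on asset prices.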
MODULE QUIZ 1.1
1. The VaR at a 95% confidence level is estimated to be 1.56 from a historical simulation of 1,000
observations. Which of the following statements is most likely true?
A. The parametric assumption of normal returns is correct.
B. The parametric assumption of lognormal returns is correct.
C. The historical distribution has fatter tails than a normal distribution.
D. The historical distribution has thinner tails than a normal distribution.
2. Assume the profit/loss distribution for XYZ is normally distributed with an annual mean of $20
million and a standard deviation of $10 million. The 5% VaR is calculated and interpreted as which
of the following statements?
A. 5% probability of losses of at least $3.50 million.
B. 5% probability of earnings of at least $3.50 million.
C. 95% probability of losses of at least $3.50 million.
D. 95% probability of earnings of at least $3.50 million.
MODULE 1.2: RISK MEASURES
Expected Shortfall
LO 1.c: Estimate the expected shortfall given profit and loss (P&L) or return data.
A major limitation of the VaR measure is that it does not tell the investor the amount or
magnitude of the actual loss. VaR only provides the maximum value we can lose for a
given confidence level. The expected shortfall (ES) provides an estimate of the tail
loss by averaging the VaRs for increasing confidence levels in the tail. Specifically, the
tail mass is divided into n equal slices and the corresponding n − 1 VaRs are computed.
For example, if n = 5, we can construct the following table based on the normal
distribution:
Figure 1.2: Estimating Expected Shortfall (n = 5)

Confidence Level   VaR      Difference
96%                1.7507   —
97%                1.8808   0.1301
98%                2.0537   0.1729
99%                2.3263   0.2726
Average            2.0029
Observe from the Difference column that the VaR increases at an increasing rate: because the tail grows progressively thinner, each successive slice of equal probability mass (1%) must span a wider range of losses. The average of the
four computed VaRs is 2.003 and represents the probability-weighted expected tail loss
(a.k.a. expected shortfall). Note that as n increases, the expected shortfall will increase
and approach the theoretical true loss [2.063 in this case; the average of a high number
of VaRs (e.g., greater than 10,000)].
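The slicing procedure can be sketched as follows. The code assumes a standard normal loss distribution and averages the n − 1 interior VaRs, reproducing the 2.003 figure for n = 5:

```python
from statistics import NormalDist

def expected_shortfall(confidence=0.95, n=5):
    # average the n - 1 VaRs at equally spaced confidence levels in the tail
    nd = NormalDist()
    tail = 1.0 - confidence
    vars_in_tail = [nd.inv_cdf(confidence + k * tail / n) for k in range(1, n)]
    return sum(vars_in_tail) / len(vars_in_tail)

es_5 = expected_shortfall(n=5)        # about 2.003, the Figure 1.2 average
es_fine = expected_shortfall(n=5000)  # approaches the true ES of about 2.063
```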
Estimating Coherent Risk Measures
LO 1.d: Estimate risk measures by estimating quantiles.
A more general risk measure than either VaR or ES is known as a coherent risk
measure. A coherent risk measure is a weighted average of the quantiles of the loss
distribution where the weights are user-specific based on individual risk aversion. ES is
a special case of a coherent risk measure. When modeling ES, the weighting function is
set to [1 / (1 − confidence level)] for all tail losses. All other quantiles will have a
weight of zero.
Under expected shortfall estimation, the tail region is divided into equal probability
slices and then multiplied by the corresponding quantiles. Under the more general
coherent risk measure, the entire distribution is divided into equal probability slices
weighted by the more general risk aversion (weighting) function.
This procedure is illustrated for n = 10. First, the entire return distribution is divided
into nine (i.e., n − 1) equal probability mass slices at 10%, 20%, …, 90% (i.e., loss
quantiles). Each breakpoint corresponds to a different quantile. For example, the 10%
quantile (confidence level = 10%) relates to −1.2816, the 20% quantile (confidence
level = 20%) relates to −0.8416, and the 90% quantile (confidence level = 90%) relates
to 1.2816. Next, each quantile is weighted by the specific risk aversion function and
then averaged to arrive at the value of the coherent risk measure.
This coherent risk measure is more sensitive to the choice of n than expected shortfall,
but will converge to the risk measure’s true value for a sufficiently large number of
observations. The intuition is that as n increases, the quantiles will be further into the
tails where more extreme values of the distribution are located.
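A sketch of the general estimator follows. The risk-aversion weights here are hypothetical (rising linearly toward the loss tail); any non-negative weighting function could be substituted, and concentrating equal weights on only the tail quantiles recovers expected shortfall:

```python
from statistics import NormalDist

def coherent_risk_measure(weights, n=10):
    # n - 1 equal-probability quantiles at 10%, 20%, ..., 90% for n = 10
    nd = NormalDist()
    quantiles = [nd.inv_cdf(k / n) for k in range(1, n)]
    # weighted average of quantiles; the weights encode risk aversion
    return sum(w * q for w, q in zip(weights, quantiles)) / sum(weights)

risky_weights = list(range(1, 10))             # hypothetical: overweight big losses
measure = coherent_risk_measure(risky_weights)  # about 0.40
```

With equal weights on all quantiles the symmetric values cancel and the measure is zero; tilting the weights toward the loss tail produces a positive risk figure, which is the role of the risk-aversion function.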
Even though the risk measure estimate converges to the true value once the number of
observations is sufficiently large, knowing how large n must be is useful. One approach
involves beginning with a small value of n and repeatedly doubling it until the risk
measure estimates stabilize. Each time the number of observations is doubled, the
width of the tail slices is cut in half. This process allows for the calculation of the
“halving error,” and the ideal number of tail slices is found when the halving error is
near zero (i.e., the difference between successive risk measure estimates as n doubles
is minimal).
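The doubling search can be sketched as below, assuming a standard normal loss distribution and the slice-averaging ES estimator from earlier in this module (the stopping tolerance is an arbitrary choice for illustration):

```python
from statistics import NormalDist

def expected_shortfall(confidence, n):
    # average of the n - 1 interior tail VaRs
    nd = NormalDist()
    tail = 1.0 - confidence
    qs = [nd.inv_cdf(confidence + k * tail / n) for k in range(1, n)]
    return sum(qs) / len(qs)

n, tolerance = 2, 1e-3
estimate = expected_shortfall(0.95, n)
while True:
    n *= 2                                   # doubling n halves each slice width
    new_estimate = expected_shortfall(0.95, n)
    halving_error = abs(new_estimate - estimate)
    if halving_error < tolerance:            # estimates have stabilized
        break
    estimate = new_estimate
```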
LO 1.e: Evaluate estimators of risk measures by estimating their standard errors.
Sound risk management practice reminds us that estimators are only as useful as their
precision. That is, estimators that are less precise (i.e., have large standard errors and
wide confidence intervals) will have limited practical value. Therefore, it is best
practice to also compute the standard error for all coherent risk measures.
PROFESSOR’S NOTE
The process of estimating standard errors for estimators of coherent risk
measures is quite complex, so your focus should be on interpretation of this
concept.
First, let’s start with a sample size of n and arbitrary bin width of h around quantile, q.
Bin width is just the width of the intervals, sometimes called “bins,” in a histogram.
Computing standard error is done by realizing that the square root of the variance of
the quantile is equal to the standard error of the quantile. After finding the standard
error, a confidence interval for a risk measure such as VaR can be constructed as
follows:
[q − (z × se(q)), q + (z × se(q))]
where z is the critical value for the desired confidence level and se(q) is the standard error of the quantile.
EXAMPLE: Estimating standard errors
Construct a 90% confidence interval for 5% VaR (the 95% quantile) drawn from a
standard normal distribution. Assume bin width = 0.1 and that the sample size is
equal to 500.
Answer:
The quantile value, q, corresponds to the 5% VaR which occurs at 1.65 for the
standard normal distribution. The confidence interval takes the following form:
PROFESSOR’S NOTE
Recall that a confidence interval is a two-tailed test (unlike VaR), so a 90%
confidence level will have 5% in each tail. Given that this is equivalent to
the 5% significance level of VaR, the critical values of 1.65 will be the same
in both cases.
Since bin width is 0.1, q lies in the bin 1.65 ± 0.1/2 = [1.6, 1.7]. Note that the left tail
probability, p, is the area beyond 1.7 (equivalently, the area to the left of −1.7) for a
standard normal distribution.
Next, calculate the probability mass between 1.6 and 1.7, represented as f(q). From
the standard normal table, the probability of a loss greater than 1.7 is 0.045 (left
tail). Similarly, the probability of a loss less than 1.6 (right tail) is 0.945.
Collectively, f(q) = 1 − 0.045 − 0.945 = 0.01.
The standard error of the quantile is derived from the variance approximation of q
and is equal to:
se(q) = √[p(1 − p) / n] / [f(q) / h] = √[(0.045 × 0.955) / 500] / (0.01 / 0.1) ≈ 0.0093 / 0.1 = 0.093
Now we are ready to substitute in the variance approximation to calculate the
confidence interval for VaR:
[1.65 − (1.65 × 0.093), 1.65 + (1.65 × 0.093)] = [1.50, 1.80]
Let’s return to the variance approximation and perform some basic comparative
statics. What happens if we increase the sample size, holding all other factors
constant? Intuitively, the larger the sample size the smaller the standard error and the
narrower the confidence interval.
Now suppose we increase the bin size, h, holding all else constant. This will increase the
probability mass f(q) and reduce p, the probability in the left tail. The standard error
will decrease and the confidence interval will again narrow.
Lastly, suppose that p increases indicating that tail probabilities are more likely.
Intuitively, the estimator becomes less precise and standard errors increase, which
widens the confidence interval. Note that the expression p(1 − p) will be maximized at
p = 0.5.
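The worked example above (n = 500, h = 0.1, q = 1.65) can be reproduced numerically; the density at the quantile is estimated as the bin's probability mass divided by the bin width:

```python
import math

n, h, q = 500, 0.1, 1.65
p = 0.045                       # tail probability beyond the bin edge 1.7
bin_mass = 1 - 0.045 - 0.945    # f(q): mass inside the bin [1.6, 1.7] = 0.01
density = bin_mass / h          # estimated density at q, about 0.1

se = math.sqrt(p * (1 - p) / n) / density        # about 0.093
ci = (q - 1.65 * se, q + 1.65 * se)              # about (1.50, 1.80)
```

Raising n shrinks se and narrows the interval; raising p toward 0.5 widens it, consistent with the comparative statics above.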
The above analysis was based on one quantile of the loss distribution. Just as the
previous section generalized the expected shortfall to the coherent risk measure, we
can do the same for the standard error computation. Thankfully, this complex process is
not the focus of the LO.
Quantile-Quantile Plots
LO 1.f: Interpret quantile-quantile (QQ) plots to identify the characteristics of a
distribution.
A natural question to ask in the course of our analysis is, “From what distribution is the
data drawn?” The truth is that you will never really know since you only observe the
realizations from random draws of an unknown distribution. However, visual
inspection can be a very simple but powerful technique.
In particular, the quantile-quantile (QQ) plot is a straightforward way to visually
examine if empirical data fits the reference or hypothesized theoretical distribution
(assume standard normal distribution for this discussion). The process graphs the
quantiles at regular confidence intervals for the empirical distribution against the
theoretical distribution. As an example, if both the empirical and theoretical data are
drawn from the same distribution, then the median (confidence level = 50%) of the
empirical distribution would plot very close to zero, while the median of the
theoretical distribution would plot exactly at zero.
Continuing in this fashion for other quantiles (40%, 60%, and so on) will map out a
function. If the two distributions are very similar, the resulting QQ plot will be linear.
Let us compare a theoretical standard normal distribution relative to an empirical t-
distribution (assume that the degrees of freedom for the t-distribution are sufficiently
small and that there are noticeable differences from the normal distribution). We know
that both distributions are symmetric, but the t-distribution will have fatter tails.
Hence, the quantiles near zero (confidence level = 50%) will match up quite closely. As
we move further into the tails, the quantiles between the t-distribution and the normal
will diverge (see Figure 1.3). For example, at a confidence level of 95%, the critical z-
value is −1.65, but for the t-distribution, it is closer to −1.68 (degrees of freedom of
approximately 40). At 97.5% confidence, the difference is even larger, as the z-value is
equal to −1.96 and the t-stat is equal to −2.02. More generally, if the middles of the QQ
plot match up, but the tails do not, then the empirical distribution can be interpreted as
symmetric with tails that differ from a normal distribution (either fatter or thinner).
Figure 1.3: QQ Plot
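The QQ construction is straightforward to sketch: plot empirical quantiles of a sample against theoretical standard-normal quantiles. The sample below is simulated normal data (the seed is chosen arbitrarily), so the points should fall near the 45-degree line:

```python
import random
from statistics import NormalDist

def qq_points(sample, n_points=19):
    # pairs (theoretical quantile, empirical quantile) at 5%, 10%, ..., 95%
    nd = NormalDist()
    ordered = sorted(sample)
    pts = []
    for k in range(1, n_points + 1):
        prob = k / (n_points + 1)
        theoretical = nd.inv_cdf(prob)
        empirical = ordered[int(prob * len(ordered))]
        pts.append((theoretical, empirical))
    return pts

random.seed(42)
normal_sample = [random.gauss(0.0, 1.0) for _ in range(10_000)]
pts = qq_points(normal_sample)   # near-linear: sample matches the reference
```

Feeding in a fat-tailed sample instead (e.g., a Student's t with few degrees of freedom) would bend the extreme points away from the line, which is exactly the tail divergence described above.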
MODULE QUIZ 1.2
1. Which of the following statements about expected shortfall estimates and coherent risk measures
are true?
A. Expected shortfall and coherent risk measures estimate quantiles for the entire loss
distribution.
B. Expected shortfall and coherent risk measures estimate quantiles for the tail region.
C. Expected shortfall estimates quantiles for the tail region and coherent risk measures estimate
quantiles for the non-tail region only.
D. Expected shortfall estimates quantiles for the entire distribution and coherent risk measures
estimate quantiles for the tail region only.
2. Which of the following statements most likely increases standard errors from coherent risk
measures?
A. Increasing sample size and increasing the left tail probability.
B. Increasing sample size and decreasing the left tail probability.
C. Decreasing sample size and increasing the left tail probability.
D. Decreasing sample size and decreasing the left tail probability.
3. The quantile-quantile plot is best used for what purpose?
A. Testing an empirical distribution from a theoretical distribution.
B. Testing a theoretical distribution from an empirical distribution.
C. Identifying an empirical distribution from a theoretical distribution.
D. Identifying a theoretical distribution from an empirical distribution.
KEY CONCEPTS
LO 1.a
Historical simulation is the easiest method to estimate value at risk. All that is required
is to reorder the profit/loss observations in increasing magnitude of losses and identify
the breakpoint between the tail region and the remainder of distribution.
LO 1.b
Parametric estimation of VaR requires a specific distribution of prices or equivalently,
returns. This method can be used to calculate VaR with either a normal distribution or
a lognormal distribution.
Under the assumption of a normal distribution, VaR (i.e., delta-normal VaR) is
calculated as follows:
VaR = −µ + (σ × zα)
Under the assumption of a lognormal distribution, lognormal VaR is calculated as
follows:
VaR = Pt−1 × (1 − e^(µR − σR × zα))
LO 1.c
VaR identifies the lower bound of the profit/loss distribution, but it does not estimate
the expected tail loss. Expected shortfall overcomes this deficiency by dividing the tail
region into equal probability mass slices and averaging their corresponding VaRs.
LO 1.d
A more general risk measure than either VaR or ES is known as a coherent risk
measure. A coherent risk measure is a weighted average of the quantiles of the loss
distribution where the weights are user-specific based on individual risk aversion. A
coherent risk measure will assign each quantile (not just tail quantiles) a weight. The
average of the weighted VaRs is the estimated loss.
LO 1.e
Sound risk management requires the computation of the standard error of a coherent
risk measure to estimate the precision of the risk measure itself. The simplest method
creates a confidence interval around the quantile in question. To compute standard
error, it is necessary to find the variance of the quantile, which will require estimates
from the underlying distribution.
LO 1.f
The quantile-quantile (QQ) plot is a visual inspection of an empirical quantile relative
to a hypothesized theoretical distribution. If the empirical distribution closely matches
the theoretical distribution, the QQ plot would be linear.
ANSWER KEY FOR MODULE QUIZZES
Module Quiz 1.1
1. D The historical simulation indicates that the 5% tail loss begins at 1.56, which is
less than the 1.65 predicted by a standard normal distribution. Therefore, the
historical simulation has thinner tails than a standard normal distribution. (LO
1.a)
2. D The value at risk calculation at 95% confidence is: −20 million + 1.65 × 10 million
= −$3.50 million. Since the expected loss is negative and VaR is an implied
negative amount, the interpretation is that XYZ will earn less than +$3.50 million
with 5% probability, which is equivalent to XYZ earning at least $3.50 million
with 95% probability. (LO 1.b)
Module Quiz 1.2
1. B ES estimates quantiles for n − 1 equal probability masses in the tail region only.
The coherent risk measure estimates quantiles for the entire distribution
including the tail region. (LO 1.c)
2. C Decreasing sample size clearly increases the standard error of the coherent risk
measure given that standard error is defined as:
se(q) = √[p(1 − p) / n] / f(q), where f(q) is the estimated density at the quantile.
As the left tail probability, p, increases, the probability of tail events increases,
which also increases the standard error. Mathematically, p(1 − p) increases as p
increases until p = 0.5. Small values of p imply smaller standard errors. (LO 1.e)
3. C Once a sample is obtained, it can be compared to a reference distribution for
possible identification. The QQ plot maps the quantiles one to one. If the
relationship is close to linear, then a match for the empirical distribution is found.
The QQ plot is used for visual inspection only without any formal statistical test.
(LO 1.f)