Risk beyond the Hedge - Actuarieel Genootschap

Risk beyond the Hedge
Options and guarantees embedded in life insurance
products in incomplete markets
Leendert van Gastel
March 6, 2013
[Cover figure: profits and losses of a hedge (cost versus time)]
Thesis
Master of Actuarial Science and Mathematical Finance
Universiteit van Amsterdam
in cooperation with
De Nederlandsche Bank N.V.
Contents

1 Introduction
   1.1 Insurance in Distress
   1.2 Towards a New Supervisory Regime
   1.3 Research Questions
2 Overview of Embedded Options and Guarantees
   2.1 Terms and Definitions
   2.2 Maturity Guarantee
   2.3 Profit Sharing
   2.4 Surrender Option
   2.5 Guaranteed Annuity Option
   2.6 Trends
3 Valuation in Incomplete Markets
   3.1 Embedded Options as Financial Options
   3.2 Towards a More General Theory
   3.3 Incompleteness
   3.4 Local Risk Minimization
   3.5 Monte Carlo Scenarios
   3.6 Unhedgeable Residual Risk
   3.7 A Framework for Valuation in Incomplete Markets
   3.8 Supervisory Perspective
   3.9 Further Research
4 Closed Form Valuation in Complete Markets
   4.1 Endowments
   4.2 Maturity Guarantee
   4.3 Profit Sharing
   4.4 Guaranteed Annuity Option
5 Applications
   5.1 Maturity Guarantee
      5.1.1 Interest Rate
      5.1.2 Mortality
      5.1.3 Super-replication
      5.1.4 Maturity
      5.1.5 Equity
      5.1.6 Elasticity Analysis
      5.1.7 Summary
   5.2 Guaranteed Annuity Option
      5.2.1 Quanto Hedge
      5.2.2 Stochastic Mortality
      5.2.3 Equity
      5.2.4 Elasticity Analysis
      5.2.5 Summary
6 Conclusion
7 Bibliography
8 Appendix: Mathematical Background
   8.1 Local Risk Minimization in Discrete Time
      8.1.1 Set-up and Preliminaries
      8.1.2 Local Risk Minimization
      8.1.3 Föllmer-Schweizer decomposition
   8.2 Monte Carlo Setting
      8.2.1 Base functions
      8.2.2 Approximate Doob decomposition
      8.2.3 Continuation Value for Martingales
      8.2.4 Approximate Föllmer-Schweizer Decomposition
9 Appendix: Implementation
   9.1 Object-Oriented Implementation
   9.2 Classes and Methods
   9.3 Choice of Models
      9.3.1 Interest
      9.3.2 Equity
      9.3.3 Mortality
10 Acknowledgements
1 Introduction
A crisis with a large impact on insurance and regulation is the starting point for this research. It leads directly to three research questions, which the successive chapters address one by one. We arrive at a framework for valuing options and guarantees embedded in insurance products in an incomplete market. The next chapter is devoted to practical applications. The final chapter summarizes the answers to the research questions and discusses the light this work sheds on the relation between the actuarial and financial perspectives on risk.
1.1 Insurance in Distress
It is 2012. British taxpayers are to pay compensation of around £1.5 billion to nearly a million policyholders of Equitable Life, the oldest life insurance mutual in England, because of shortcomings of the British supervisory authority (see Kingdom, 2010). What happened? In the account below we rely on O'Brien (2006), Boyle and Hardy (2003) and Roberts (2012).
Equitable Life was founded in 1762 and had an outstanding reputation. For instance, it was the first to put life insurance policies on an actuarial foundation, and its reporting standards were an example to others. At the end of the twentieth century it was the fourth-largest life insurer in Britain and, being a mutual, it was known to serve its customers well. In the 1970s and 1980s many insurers started to offer financial guarantees to make their policies more attractive, and Equitable Life was active and innovative in this field. Typically, the guarantee levels were far out-of-the-money, so the probability that a guarantee would pay out was perceived to be negligible.
The Guaranteed Annuity Option was a popular choice. It offered a minimum rate at which the accumulated benefit could be converted into an annuity. So if the minimum rate was 10% and the accumulated amount was 10,000, the policyholder would receive 1,000 annually for the rest of his life. At the time the policies were written, long-term interest rates were very high. But in the nineties these rates started to fall, and the value of the embedded options started to rise. The negative effect was amplified by two factors. By the time large numbers of policies matured, there had been a boom in the stock market, so many policies had accumulated a large benefit; the downturn that followed severely affected the assets of the mutual. Moreover, actuaries at the time still worked with a mortality table that was some decades old. The discrepancies with actual mortality were considerable and multiplied the value of the guaranteed annuity option by up to a factor of eight (Boyle and Hardy, 2003, Figure 2).
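The conversion mechanics described above can be made concrete with a minimal sketch. The guaranteed rate is the 10% from the example; the market annuity factor (the market price of a life annuity of 1 per year for the policyholder's age) is an illustrative assumption, not a figure from the Equitable Life case:

```python
def gao_payoff(fund, g, annuity_factor):
    """Value at maturity of a guaranteed annuity option.

    Converting the fund at the guaranteed rate g yields a yearly annuity
    of g * fund, whose market value is g * fund * annuity_factor.  The
    policyholder converts only when that exceeds taking the fund in cash,
    so the option is worth fund * (g * annuity_factor - 1)^+.
    """
    return fund * max(g * annuity_factor - 1.0, 0.0)

# With the 10% guaranteed rate from the text and a fund of 10,000:
# if falling interest rates and improved longevity push the market
# annuity factor to 14, the option pays 10,000 * (0.10 * 14 - 1) = 4,000.
print(round(gao_payoff(10_000, 0.10, 14.0), 2))   # 4000.0
# At a market annuity factor of 8 the guarantee is out-of-the-money.
print(gao_payoff(10_000, 0.10, 8.0))              # 0.0
```

The sketch shows why falling rates and rising longevity both hurt: each pushes the annuity factor up, and the pay-off grows linearly once the guarantee is in-the-money.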
The mutual did notice the problems but thought it could fall back on a paragraph in the contract for exceptional situations. The policyholders did not agree. This led to a court case, which the mutual lost, and Equitable Life had to close to new business. Severe mistakes had been made; managers, accountants and actuaries were blamed, and many court cases followed. However, no claims were honoured, as no malicious intent was proven. The retired chief actuary, who had once introduced the guarantees, offered to return his pension. The Parliamentary Ombudsman also received complaints from policyholders about the role of the supervisors. His investigation did indeed find some ground for these complaints, which led to the aforementioned compensation for the policyholders.
1.2 Towards a New Supervisory Regime
The dramatic situation at Equitable Life drew worldwide attention and is extensively discussed in the literature. See Boyle and Hardy (2003) for a discussion of the stochastic nature of the stock market, interest and mortality, and O'Brien (2006) for a historic overview and management issues. But the case was certainly not unique. Briys and de Varenne (1997) report over a hundred bankruptcies of life insurance companies in the United States at that time, in the so-called Savings and Loans debacle. Similar problems occurred in other countries, such as Canada and the Scandinavian countries. Gatzert (2009) mentions the downfall of Mannheimer Leben in 2003; news items of that time mention the "stille Lasten" (hidden liabilities) as the cause. In Japan, the Nissan Mutual life insurance group collapsed due to interest rate guarantees of 4.7%.
These worldwide problems became a driving force in rethinking how insurance could be organized in a less risky way (Roberts, 2012). Politicians became convinced that supervision should be reorganized in a more risk-oriented way, with fair valuation of insurance contracts as the leading idea. This entails that every contract is assigned a fair market value at which another party should be willing to take over the contract. In particular, it becomes important to assess the value of and risk in options and guarantees embedded in insurance contracts in a way that conforms to the market. Currently, preparations for the introduction of the Solvency II framework are taking place. This new framework stresses the market value approach and asks insurers to maintain a capital buffer sufficient to stay solvent in 99.5% of all cases over a one-year horizon.
1.3 Research Questions
In view of the risky nature of options and guarantees embedded in insurance contracts, and the attention to fair value in the new supervisory regimes, the following research questions arise.
1. Which options and guarantees embedded in insurance contracts occur in practice?
2. Which valuation methods are suitable to assess
(a) a fair market value in the above sense, and
(b) the risks involved?
3. Which simple methods approximate the value and risk sufficiently, and what can be said about the approximation error?
The last question is especially motivated by smaller insurers, who may not have the means to operate complex models.
The answer to the first research question will be provided by an extensive literature overview. Forty years of academic research have been devoted to this topic. The literature on options and guarantees embedded in insurance parallels, with a small time-lag, the literature on pure financial options. Most research papers consider one typical embedded option or guarantee and describe valuation methods, but some papers give an overview of embedded options and guarantees.
With regard to the second research question, about valuation methods, we encounter a cyclic development in the literature. Answers were given that were satisfactory at the time. But then reality turned out to be more complex, and more stochasticity had to be added or assumptions had to be weakened, until again temporarily satisfactory answers were found. And so on. In this development, attention was first paid to equity risk; some fifteen years later, interest rate risk was included. More recently, attention has shifted towards risk in policyholder behavior and towards mortality, in particular longevity risk. We believe that, in the case of embedded options and guarantees, fair valuation methods should take all four of these risks into account. We will give an overview of the current state of the art of valuation methods in the literature.
Another level of complexity, whose importance cannot be overestimated, is market incompleteness. Incompleteness here means that there are fewer financial instruments than stochastic factors, so there will be residual risk. Moreover, option prices are no longer unique, since they depend on the risk attitude of the market participants: no-arbitrage no longer leads to non-arbitrary prices, and the standard financial valuation approach based on equivalent martingale measures does not suffice. A very successful approach to market incompleteness is the local risk minimization theory of Föllmer, Sondermann, and Schweizer. It delivers a hedging strategy that is locally optimal given the incompleteness (see Schweizer, 2001). The valuation process falls apart into determining a hedging strategy and determining a martingale that represents the residual risk. In a complete market the residual risk vanishes, and the price of the locally optimal hedge coincides with the standard value of the embedded option.
To be able to compute valuations in actual problems from practice, we extend an idea of Longstaff and Schwartz (2001) for valuing American options along the lines of Potters et al. (2001). We start with Monte Carlo simulations in discrete time and then use a backward recursive least squares approach to determine conditional expectations with the help of base functions. This approach enables us to find the locally optimal hedging strategy and the initial price of the hedge. Moreover, the distribution of the hedging profits and losses emerges under the real probability measure, so in the spirit of Solvency II we can determine a capital buffer sufficient to withstand default up to a given level of probability. The resulting method is, to our knowledge, the most general approach available, and it seems less sensitive to the curse of dimensionality than other methods. For any other hedging strategy, this approach also yields a distribution of hedging profits and losses, so we can compare hedging strategies and valuation methods. It also enables us to compare model choices and to estimate parameter sensitivities. Our approach will therefore not only answer the original research question, but also lead to a framework that encompasses the valuation of non-linearities in liabilities, including embedded options and guarantees. We have developed an object-oriented implementation of this framework in the statistical software environment R.
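The backward recursive least squares idea can be illustrated with a minimal sketch. The thesis's actual implementation is object-oriented R code; the Python sketch below uses illustrative parameters, a plain put payoff, zero interest, and a pooled (rather than state-wise) hedge ratio, so it only shows the mechanics of rolling conditional expectations back through Monte Carlo paths with base functions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: geometric Brownian motion paths simulated
# under the real-world measure, zero interest, and a plain put payoff.
n_paths, n_steps, dt = 20_000, 12, 1.0 / 12
mu, sigma, s0, strike = 0.05, 0.2, 100.0, 100.0

z = rng.standard_normal((n_paths, n_steps))
increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
paths = s0 * np.exp(np.concatenate(
    [np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)], axis=1))

value = np.maximum(strike - paths[:, -1], 0.0)   # payoff at maturity

# Backward recursion: regress the one-step-ahead value on polynomial
# base functions of the current stock price to approximate the
# conditional expectation, and regress the unexplained value change on
# the stock change to get a (here pooled, not state-wise) hedge ratio.
for t in range(n_steps - 1, -1, -1):
    basis = np.vander(paths[:, t] / s0, 4)        # base functions 1, x, x^2, x^3
    coef, *_ = np.linalg.lstsq(basis, value, rcond=None)
    cont = basis @ coef                           # approx. E[value | S_t]
    ds = paths[:, t + 1] - paths[:, t]
    xi = np.dot(ds, value - cont) / np.dot(ds, ds)  # hedge ratio at step t
    value = cont                                  # roll the expectation back

# Monte Carlo estimate of the initial value and the time-0 hedge ratio.
print(round(float(value.mean()), 2), round(float(xi), 3))
```

The residuals `value - cont - xi * ds` left over at each step are the unhedgeable part; collecting them along each path gives the distribution of hedging profits and losses from which a Solvency II-style buffer can be read off.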
This framework will also help to settle the last research question. Making assumptions yields simpler valuation methods: taking the forward measure as the equivalent martingale measure and correcting it for mortality rates that we assume deterministic, we arrive at closed formulas that, combined with an assumption of geometric Brownian motion, have sufficient explanatory power. To assess the approximation error, we use the above local risk minimization framework.
In the traditional actuarial setting, reserving and buffering are the way to mitigate risk, while in the financial setting hedging is the natural choice. In this thesis we see how the reality of incomplete markets leads in a natural way to risk mitigation for options and guarantees embedded in insurance policies by an optimal combination of hedging, reserving and buffering. We will apply this approach to two concrete examples of embedded options and guarantees in insurance policies, among which the Guaranteed Annuity Option. In this last case, the approach sheds new light on the Equitable Life case, from both the financial and the actuarial perspective.
2 Overview of Embedded Options and Guarantees
To provide a bird's-eye view of the landscape of life insurance policies, we first fix our terminology and then examine existing categorizations and overviews. We discern four main types of options and guarantees embedded in life insurance policies: the maturity guarantee, profit sharing, surrender / paid-up options, and the guaranteed annuity option. The nature of these products is discussed, and the relevant literature is introduced. We end this overview with a short discussion of current trends.
2.1 Terms and Definitions
Let us define a life insurance policy as a contract of a certain maturity between a
customer and an insurer where in return for the premium paid by the customer, the
insurer is held to a certain settlement at a particular covered life event. The benefit
is fixed or determined by the value of some financial reference or actual fund, which
we will call the underlying.
The traditional policy is based on a fixed interest rate, and delivers a fixed
benefit. For prudential reasons this fixed interest rate was rather low compared to
market rates. In the late 18th century insurance companies had already introduced
so-called with-profits or participating policies to enable policyholders to participate
in the company's profit (Linton et al., 2001). And in the 1960s unit-linked policies
were introduced where the value of the policy is linked to an asset or a basket of
assets. The idea was to let the policyholder share in growth opportunities in the
market.
Gatzert (2009) gives an extensive overview of implicit options in life insurance
products. She focuses especially on participating policies. Following her work, we list the most common forms of life insurance policies. In Europe we encounter term policies (payout of a benefit at death before maturity), endowment policies (consisting of a term policy and a pure endowment that pays out a benefit when the insured is alive at maturity), and unit-linked policies (payout can be at death and/or at maturity). In the United States there are mainly term policies, whole life policies (like a term contract, but running until death), and universal life policies (flexible premium payment, value depending on insurance results).
For a definition of a guarantee and an option embedded in an insurance product
we follow the technical standard QIS5 of Solvency II (TP 2.71 - 2.73).
“A contractual option is defined as a right to change the benefits, to be
taken at the choice of its holder (generally the policyholder), on terms
that are established in advance. Thus, in order to trigger an option, a
deliberate decision of its holder is necessary.
[. . . ]
A financial guarantee is present when there is the possibility to pass losses to the undertaking or to receive additional benefits as a result of the evolution of financial variables (solely or in conjunction with non-financial variables) (e.g. investment return of the underlying asset portfolio, performance of indices, etc.). In the case of guarantees, the trigger is generally automatic (the mechanism would be set in the policy's terms and conditions) and thus not dependent on a deliberate decision of the policyholder / beneficiary. In financial terms, a guarantee is linked to option valuation."
We note that deliberate decisions to trigger an option are also encountered in financial instruments, for instance in American options. It turns out that the contractual options as defined above are also linked to financial instrument valuation. In the same technical standard QIS5 we find a list of the main examples of embedded options:
• “Surrender value option, where the policyholder has the right to fully or partially surrender the policy and receive a pre-defined lump sum amount;
• Paid-up policy option, where the policyholder has the right to stop paying
premiums and change the policy to a paid-up status;
• “Annuity conversion option, where the policyholder has the right to convert a lump sum survival benefit into an annuity at a pre-defined minimum rate of conversion;
• “Policy conversion option, where the policyholder has the right to convert from one policy to another at pre-specified terms and conditions;
• Extended coverage option, where the policyholder has the right to extend
the coverage period at the expiry of the original contract without producing
further evidence of health.”
The annuity conversion option is the same as the guaranteed annuity option discussed in Section 1.1.
The QIS5 standard also mentions the main forms of guarantees in life insurance contracts:
• “Guaranteed invested capital;
• Guaranteed minimum investment return;
• Profit sharing.”
We may encounter the first and second guarantees in unit-linked policies. Since the value of the underlying may fluctuate, a return guarantee makes sense, even at the 0% level, as in the first guarantee of the invested capital.
We also encounter these types of guarantees in participating policies. They typically comprise a minimum interest rate guarantee and a profit-sharing feature that gives policyholders a share in the profit of the insurer due to positive results in investment, mortality and expenses. If this share is granted on a year-by-year basis it is often called a reversionary bonus; if it is granted only at maturity, it is called a terminal bonus.
So essentially the embedded options and guarantees listed in the QIS5 standard fall into four categories:
(i) minimum investment return (maturity guarantee, minimum interest rate guarantee);
(ii) profit sharing (participating policies);
(iii) exchange of the policy for another policy or for a lump sum (surrender, paid-up, conversion);
(iv) guaranteed annuity option.
Below we will discuss typical examples of these four categories.
2.2 Maturity Guarantee
Consider a unit-linked policy that pays out at maturity if the policyholder is alive.
The value of such a policy is determined by the performance of a basket S of stocks.
For the premium, a number of units of the basket S is bought, and the value of the
policy is the number of units times the value of a single basket. To make up for
an unfortunate low value at maturity, such a policy often contains a rule that at
maturity T at least a minimum amount K will be paid if the insured is alive. Let
St be the value of the stock basket at t. If the insured is alive, the pay-off of the
maturity guarantee on top of the unit-linked value ST is then
(K − ST)+
where the notation a+ stands for max(a, 0). Apart from mortality, this guarantee can be viewed as a put option on the stock basket St with strike price K. However, a typical maturity of a life insurance policy is 20 to 30 years, while in the market for put options on stocks, maturities of more than 5 years are rare.
Brennan and Schwartz (1976) publish the first paper on embedded options and guarantees from a financial option perspective, just three years after the seminal work of Black and Scholes (1973) and Merton (1973). They discuss the maturity guarantee and its similarity to a put option.
The main idea of Brennan and Schwartz (1976) is that by correcting the discount factors for mortality rates in an actuarial way, the Black-Scholes set-up can be used. They derive a Black-Scholes type of formula for the single premium case. The authors bring forward several interesting points. They point out that the risk involved here is non-diversifiable: if the guarantee is in-the-money for one policy, it will probably be in the money for many policies at the same time, so it makes sense to mitigate this kind of risk by hedging. In their words:
“A stock market collapse will render the company simultaneously liable
under the guarantees of all expiring policies. This has been a matter
of some concern to actuaries and regulatory bodies concerned with the
solvency of insurance companies.”
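Under Brennan and Schwartz's own assumptions (constant interest rate, geometric Brownian motion, deterministic mortality), their single-premium recipe reduces to the survival probability times a Black-Scholes put. A sketch, with all parameter values illustrative assumptions:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def bs_put(s0, k, r, sigma, t):
    """Black-Scholes price of a European put option."""
    d1 = (log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    n = NormalDist().cdf
    return k * exp(-r * t) * n(-d2) - s0 * n(-d1)

def maturity_guarantee(s0, k, r, sigma, t, p_survival):
    """Single-premium maturity guarantee in the spirit of Brennan and
    Schwartz (1976): the put is owed only if the insured survives to
    maturity, so with deterministic mortality its value is the survival
    probability times the Black-Scholes put."""
    return p_survival * bs_put(s0, k, r, sigma, t)

# Illustrative 20-year policy: unit value 100, guarantee 100, flat 3%
# rate, 20% volatility, 90% probability of surviving to maturity.
print(round(maturity_guarantee(100, 100, 0.03, 0.2, 20, 0.9), 2))
```

Because the survival probability is below one, the guarantee is always worth less than the corresponding financial put, which is exactly why assuming no mortality super-replicates it (Section 5.1 returns to this).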
Moreover, they remark that one encounters in a natural way the put-call parity of derivatives. Indeed, the total pay-out at maturity, if the insured is alive, is the maximum of the stock value and the guaranteed amount, max(ST, K). We can write this as the sum of the unit-linked value ST and a put option with the guarantee K as strike, or as the sum of the guaranteed amount K and a call option on the unit with, again, the guaranteed amount as strike:
max(ST, K) = ST + (K − ST)+ = K + (ST − K)+
So the minimum guarantee for a unit-linked policy is essentially the same as a profit-sharing guarantee for an interest-based policy.
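The parity identity above can be checked mechanically for any terminal value; the numbers below are illustrative:

```python
def payoff_identity(s_T, k):
    """Verify max(S_T, K) = S_T + put payoff = K + call payoff,
    the decomposition that identifies the unit-linked guarantee
    with a profit-sharing guarantee on a fixed benefit."""
    total = max(s_T, k)
    as_unit_plus_put = s_T + max(k - s_T, 0.0)    # unit-linked + put
    as_fixed_plus_call = k + max(s_T - k, 0.0)    # fixed benefit + call
    assert total == as_unit_plus_put == as_fixed_plus_call
    return total

print(payoff_identity(120.0, 100.0))   # 120.0  (guarantee expires worthless)
print(payoff_identity(80.0, 100.0))    # 100.0  (guarantee pays the shortfall)
```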
At this point we would like to stress that from an accounting point of view, the two forms are treated quite differently. In the first form, the unit-linked policy itself contains no risk for the insurer, as all risk is borne by the policyholder, but the guarantee introduces risk for the insurer that should be hedged. In the second form, the policy is treated as a fixed-income policy. Profit sharing will only be done when there are profits, so no hedge for this option is in place. But there is an asymmetry: losses are for the company, while profits are partly for the customer, so the company runs a risk if this is not properly hedged. In either case, the fair market value of the whole product should be the same.
Not everybody was immediately convinced that the financial instruments perspective was sound. When maturity guarantees in policies took off in the UK in the 1970s, a Maturity Guarantees Working Party was installed by the Institute of Actuaries. In their report (Ford et al., 1980), reserving based on quantiles and conditional tail expectations is the proposed solution to cover the risk of maturity guarantees. About immunization by options they wrote:
“The Working Party spent some time studying the subject and reached
varying degrees of confidence that the mathematics were sound. In
some cases the confidence was derived from the fact that nobody seems
to have seriously challenged the underlying theory. [...] The theory,
however, does seem to have serious practical disadvantages because it
depends upon several underlying assumptions.”
Indeed, Brennan and Schwartz (1976) still assumed a constant interest rate for the whole contract period, geometric Brownian motion and deterministic mortality. We agree with the Working Party that this set of assumptions is not very realistic. Due to the seminal work of Jamshidian (1989) and Geman et al. (1995), the assumption about interest has since been weakened. For example, in valuing options on bonds, the so-called forward measure can be used; see Brigo and Mercurio (2006) for a broad overview. Bacinello and Ortu (1993) were the first to address this in the context of embedded options and guarantees.
For the case of periodic premiums, Brennan and Schwartz (1976) give a numerical recipe. Nielsen and Sandmann (1995) were the first to realize that in a stochastic interest environment the periodic premium case leads to an Asian-like option, and they obtain prices using Monte Carlo techniques. Schrager and Pelsser (2004) give a general formula and work out the case of the LIBOR market model for interest.
In the approach of Brennan and Schwartz (1976), mortality is still deterministic. Milevsky and Promislow (2001) were the first to consider contingent claims with both interest rates and mortality stochastic; Dahl (2004) generalizes this in his master's thesis. In the case of a maturity guarantee, one might expect that the sensitivity to mortality is not so large, so it makes sense to super-replicate the guarantee by assuming no mortality at all. This yields a purely financial put option with only equity and interest risk, and provides an upper limit for the value of the guarantee. In Section 5.1 we will illustrate this with examples.
2.3 Profit Sharing
Profit sharing guarantees entitle the policyholder to a share in the profit of the insurance company. They are typically found in with-profits or participating policies (Linton et al., 2001). These usually contain a minimum interest rate guarantee and a dividend option that gives policyholders a share in the profit of the insurer. If this share is granted on a year-by-year basis it is often called a reversionary bonus; if it is granted only at maturity, it is a terminal bonus. Profits for insurers typically come from mortality results, interest and equity results, and control of expenses. The profit may be defined on the basis of a reference portfolio or an actual portfolio held by the insurer, or on the profit of the company as determined at the discretion of management.
There are various ways in which the payout of the guarantee may be turned over to the policyholder: it is possible to increase the benefit, to reduce the premium, to cash out immediately, or to extend the benefit at death, possibly without an extra medical examination. Bacinello and Ortu (1996) work out a pricing model where either the payout is reinvested and the benefit accumulates, or the payout is returned immediately. Their work is based on Cox et al. (1985). It yields closed-form formulas for the no-reinvestment case, but the reinvestment case requires simulations.
From a marketing perspective, a profit sharing feature is attractive to potential customers. Although it looks similar to a dividend for shareholders, there is a noteworthy difference: there is not really a downside, as there is for a shareholder when the share price decreases. From an accounting point of view, a contractual obligation such as this guarantee should be considered a liability. Its level may not yet be determined, but it is a future cost, and profits ought to be defined only after all costs are deducted from income. So regarding this guarantee as part of the profit does not add to accounting transparency.
In practice, the minimum interest rate guarantee turned out to be quite costly for the insurer. Later on we also encounter other forms of the option, where there is a bonus account next to the policy account. Profit shares are added to this bonus account. When in some year the actual return is below the guaranteed return, the deficit for the policy account is borne by the bonus account before drawing on the company's reserve. For an international comparison of interest rate guarantees, see Cummins et al. (2004).
If rt is the return of the reference portfolio or the profit level of the company, and r̂ is the fixed interest rate of the policy, then the yearly pay-out of the dividend option is
at(rt − r̂)+,
where at is the policy value at the beginning of year t. This kind of cashflow is similar to the financial instrument known as a cap, consisting of yearly caplets.
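The caplet structure of this pay-out can be sketched in a few lines; the return path, policy value and guaranteed rate below are illustrative assumptions:

```python
def dividend_payout(policy_value, reference_return, guaranteed_rate):
    """Yearly pay-out of the dividend option, a_t * (r_t - r_hat)^+:
    effectively a caplet on the reference return struck at the
    policy's fixed rate."""
    return policy_value * max(reference_return - guaranteed_rate, 0.0)

# Illustrative path of reference-portfolio returns against a 3% policy
# rate on a policy value of 1,000; the stream of yearly pay-outs is the
# cash-flow profile of a cap.
returns = [0.05, 0.02, 0.045, 0.01]
payouts = [round(dividend_payout(1000.0, r, 0.03), 2) for r in returns]
print(payouts)   # [20.0, 0.0, 15.0, 0.0]
```

The asymmetry discussed above is visible directly: years with returns below the guaranteed rate contribute nothing to the policyholder, while the shortfall in those years stays with the insurer.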
In the Netherlands, an often-used reference portfolio is the so-called U-yield. This figure is published monthly by the Centrum voor Verzekeringsstatistiek (Centre for Insurance Statistics) as a standardized yield for use in insurance products. It is computed on the basis of Dutch government bonds and uses averaging over bond categories and over time. Replicating the U-yield may be cumbersome, but Plat (2005) shows how it can be approximated by financial instruments.
Modeling discretionary profit sharing is notoriously difficult, as there are arbitrary elements such as the discretion of management to define profits. Briys and de Varenne (1997) were the first to present a model of participating policies with both a minimum interest guarantee and a dividend-sharing feature. They reformulate the guarantees as options on the insurance company in the style of Merton (1973), and show how the guarantees change the duration of the liabilities: short durations increase, while long durations decrease. Grosen and Jørgensen (2000) build upon the work of Briys and de Varenne (1997) by introducing more realistic modeling of assets and of bonus distribution. Computations are done with the help of a year-by-year balance sheet approach. Later work by the same authors also introduces mortality risk. In his thesis, Rüfenacht (2012) takes this further to a general balance-sheet-based framework. He applies it, with several interest rate models, to the complete Swiss market to find market values of the guarantees. Comparing the value of the guarantees between 2004 and 2009, he finds large discrepancies. As his main finding, he states:
“Performing best estimate simulation runs as at the end of 2004 and
2009 impressively demonstrates where the limitations of the market consistent valuation methodology are when applying a modeling approach
mixing both, market consistent and statutory valuation methodology.
According to this, insurers should successively move towards a comprehensive and entirely consistent application of the market consistent
methodologies regarding valuation, reserving and, as far as possible, also
pricing.”
In his approach no hedging is explicitly implemented; he considers the assets and
liabilities of the insurers just as they are. Rüfenacht mentions that this approach
can be used for other embedded options and guarantees as well. However, in our
opinion the need for hedging will only become more apparent.
2.4 Surrender Option
Surrender of a policy means that the policy is stopped before maturity, returning to
the policyholder the surrender value. Typically, surrender values are determined at
the contract date, when quite prudent interest and mortality rates are assumed.
However, developments in the market and mortality may lead to a surrender option
that is in-the-money.
Albizzati and Geman (1994) were the first to model a surrender option as a
financial option. They model early exercise as a deterministic function of the interest
rate, and arrive at a European option formula. Grosen and Jørgensen (1997) were
the first to recognize that a surrender option may be viewed as an American or
Bermudan option, where the surrender value is the strike price. As American options
are hard to value, the same holds for surrender options. Grosen and Jørgensen
use a binomial tree that becomes quite large, as both the stock and the interest rate
are stochastic. Steffensen (2000) derives in his master's thesis a variational
inequality for the early exercise that also generalizes Thiele’s differential equation
for the policy reserve. He shows in fact how Thiele’s equation is related to the
Black-Scholes differential equation.
Bacinello (2005) considers the case of periodic premium payment, whereas other
authors restrict themselves to the single premium case. Several authors comment
that more factors influence policyholder behavior than only financial optimization
of the policy itself, for instance life events like changes in marital status, retirement
plans, etc. Adverse selection, the asymmetry that the insured has better information about his or her health status than the insurer, may also come into play. These
phenomena have a significant impact on the value and the reserve necessary for early
exercise of the surrender option, so they should be modeled. See e.g. Anzilli and Cesare
(2007). Policyholder behavior may well change in the future. The
role of insurance intermediaries is changing as they become more independent of
the insurance company. This may lead to a more alert attitude towards surrender
opportunities for policyholders. Moreover, Internet and social media will change
information flows. It is foreseeable that policyholders will inform each other easily of surrender opportunities. These developments may lead to a surrender value
that comes closer to the financial optimum as characterized by the analogy of the
American option.
Next to the surrender option, there are other embedded options of conversion
type that can be treated in a similar way. For instance, the paid-up option gives the
policyholder the choice to stop premium payment of the policy. This will lower the
benefit. The policy may even have a resumption option, where premium payment can
be started again. Also options like flexible expiration, coverage extension, etc. (see
Gatzert (2009)) can be seen as the right to exchange the current policy for another
policy, as is remarked by Dillmann and Ruß (2003). In that paper, valuation is
done with the help of a two-dimensional tree: a binomial tree for stock and a bounded
trinomial tree for interest rates as in Hull and White (1994).
2.5 Guaranteed Annuity Option
The guaranteed annuity option gives the holder of a unit-linked policy a choice at
maturity whether the settlement will be in the form of a lump sum or an annuity
that consists of a yearly fraction of the lump sum. This fraction, the annuity rate,
is guaranteed at the moment the contract is signed. So if a person of 30 years old
signs a unit-linked policy with a maturity of 35 years, and lives until 90, then the life
span of this guarantee turns out to be as long as 60 years.
If S_t is the value of the unit at time t, g is the guaranteed annuity factor, and
ä_{x,t} is the value of a life-time annuity for a person of age x at time t, then the value at
maturity T of the guarantee is

S_T (g ä_{x,T} − 1)^+ = g S_T (ä_{x,T} − 1/g)^+ .
The second form shows that this is in fact a kind of call option on the annuity
with strike price 1/g, multiplied, however, by a stochastic factor S_T, the value of
the unit. In finance, a quanto option is the term for an option in one currency on
a security that is traded in another currency. The payoff is multiplied with the
exchange rate as a factor. The guaranteed annuity option is comparable to such a
quanto option with the unit value as exchange rate.
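As an illustration, the payoff identity above can be checked numerically. The following Python sketch (the function names and numbers are ours, chosen purely for illustration) evaluates both forms of the payoff:

```python
def gao_payoff(S_T, g, annuity):
    """Payoff at maturity T of the guaranteed annuity option:
    S_T * (g * annuity - 1)^+, with annuity the market annuity factor."""
    return S_T * max(g * annuity - 1.0, 0.0)

def gao_payoff_quanto_form(S_T, g, annuity):
    """The same payoff rewritten as g * S_T times a call on the annuity
    with strike 1/g (the quanto-option reading)."""
    return g * S_T * max(annuity - 1.0 / g, 0.0)

# Hypothetical figures: a guaranteed rate of 1/9 per year is in the money
# once market annuity factors exceed 9.
S_T, g, a = 100.0, 1.0 / 9.0, 11.0
assert abs(gao_payoff(S_T, g, a) - gao_payoff_quanto_form(S_T, g, a)) < 1e-9
print(gao_payoff(S_T, g, a))  # 100 * (11/9 - 1) = 200/9
```

Both forms agree by construction; the second form makes the role of the strike 1/g and of the quanto factor S_T explicit.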
The example of Equitable Life (see 1.1) shows how sensitive the value of the
option is to mortality, interest and stock. Alarmed by the unfortunate development
at Equitable Life, the British Annuity Guarantees Working Party organized in 1996
a survey of life assurance companies offering guaranteed annuity options. It turned
out that there was no consensus among the companies on how to reserve for such
options. The report (Bolton et al., 1997) shows several approaches to reserving, but
does not come to a recommendation.
It did spark a very fruitful period of academic research. In the first papers we
see that the stock market and interest are modeled stochastically, but mortality is
still deterministic. Ballotta and Haberman (2003) use a geometric Brownian motion
for stock and a one-factor Heath-Jarrow-Morton model for the interest, and derive
a formula for the value of the guaranteed annuity option.
Pelsser (2003) does not consider the unit-linked policy, but rather a with-profits
policy. He derives a formula on the basis of a multi-factor Heath-Jarrow-Morton
model for the interest, where the annuity is used as numéraire, and shows how static
replication with swaptions can be done. Every time a share of the profit becomes
available for the policyholder, new swaptions should be bought to extend the hedge.
Boyle and Hardy (2003) discuss in a very informative way the case of Equitable
Life. On the basis of geometric Brownian motion for stock and the Hull-White one-factor interest model, they come to a formula as well. All papers use the change of
numéraire technique, and often use Jamshidian’s trick (Jamshidian, 1989) to express
an option on a sum of bonds as a sum of options on a single bond.
Later papers introduce more stochasticity, e.g. Ballotta and Haberman (2006)
extend their earlier work with stochastic mortality. Van Haastrecht et al. (2010)
combine the Schöbel-Zhu stochastic volatility model with a one- or two-factor Hull-White model for interest, and arrive at semi-closed formulas for the value of a
guaranteed annuity option.
The issue of hedging a guaranteed annuity option is still left open, as Boyle and
Hardy remark. First of all, the very long maturities for guaranteed annuity options
give problems, as financial instruments typically have much shorter maturities. Although there are swaptions with maturities up to 60 years, Boyle and Hardy point
out that hedging by swaptions, as Pelsser proposed, does not take care of equity risk.
Moreover, uncertainty about longevity makes it difficult to know the right hedge ratios. As Dahl (2004) notes, introducing longevity bonds and other mortality-linked
securities might solve this last issue by transferring the risk to other parties. But
there is not yet a liquid market for these securities.
2.6 Trends
Since the savings and loan debacle, the downfall of Equitable Life and other incidents,
insurers have become more cautious in providing embedded options and guarantees in
life insurance products. Customers are more aware of the value of guarantees, and
ask for more transparent pricing. Regulators have started demanding that a fair
market value is determined for the embedded options and guarantees. We can
discern two trends. The first is that fewer options and guarantees are offered
embedded in policies, because of the troubles they may generate for the insurer.
Gatzert (2009) also notes a trend away from the current traditional contract
design towards more transparent modular forms such as variable annuities. These
are deferred annuities which are linked to stocks.
The following four classes of guarantees are typically found in Variable Annuities:
• Guaranteed Minimum Death Benefit (GMDB): a guaranteed minimum benefit
in the event of death before the end of the contract.
• Guaranteed Minimum Accumulation Benefit (GMAB): for intervals of the
contract period, a minimum guarantee for the growth is given.
• Guaranteed Minimum Income Benefit (GMIB): this guarantees a minimum
income stream (life annuity). The amount of the guaranteed minimum income
may be fixed in absolute terms, in terms of premiums paid, or in terms of
the fund value.
• Guaranteed Minimum Withdrawal Benefit (GMWB): similar to GMIBs, but
at death the remainder of the fund is paid out to the heirs.
Motivated by the need for assured retirement income, this type of annuity has
grown to a large share of the market, notably in the US, Japan and the UK (see
Ledlie et al., 2008). In the US, the market share in 2012 was twice as large as that of
fixed classical annuities (Institute, 2012). In the rest of Europe, there is much more
hesitation among insurers to offer variable annuities.
The advantage of variable annuities over previous types of policies is the transparency: the guarantees have a clear definition and price, and customers may choose
a guarantee or not. However, the reputation of variable annuities has suffered from
reportedly high costs for managing the funds. Moreover, the recent very low interest rates make this form of annuity, with its guarantees, less and less attractive for
insurers. So it remains to be seen whether this type of annuity will last.
3 Valuation in Incomplete Markets
The etymological origin of the word hedge is the Old English hecg, meaning a living
or artificial fence. Since the seventeenth century, the word is encountered in the
context of insuring oneself against a loss.
In finance, a hedge to protect oneself against a future risky claim is a portfolio
that has the opposite risk in itself. A good hedging strategy adapts itself to changes
in the economic environment and neutralizes the risk in the long run. Hence, the
starting point for valuation of a claim is the cost of a hedging strategy that neutralizes the risk. However, in life insurance there is more risk around than there
are instruments to counter the risk. This situation is called an incomplete market. It implies that there is risk left beyond the hedge, and that the price for this
unhedgeable risk is not visible in the cost of the hedging strategy.
In this chapter we give an account of a framework for finding a good hedging
strategy and determining the remaining unhedgeable risk. We start off with a
historically oriented overview. We then present the theory, and finally we come to a
practical method for using this theory on Monte Carlo simulations. The mathematical background is presented in Appendix 8. The framework applies well to risk due
to options and guarantees in life insurance products in incomplete markets, as we
will see in Chapter 5.
3.1 Embedded Options as Financial Options
Brennan and Schwartz (1976) were among the first to realize that guarantees and
options embedded in insurance contracts can be modeled as financial derivatives
such as call or put options on stock. For instance, a unit-linked product with a
minimum guarantee on the fund value at maturity can be characterized as a
put option on the funds invested in stock.
In 1973, just a few years before, Black and Scholes (1973) published their famous
work on the valuation of financial derivatives. Merton (1973) placed the results in
a framework for contingent claim valuation. He showed that one can create a risk-free portfolio that includes the financial derivative and that is self-financing (no
intermediate losses or profits). The complement of the financial derivative in the
risk-free portfolio is called a hedge. This hedge essentially neutralizes the risk in
the derivative. The cost of this hedge is the market value to take over the financial
derivative, so it is the fair price of the financial derivative. The original assumptions
of Black, Scholes and Merton are a stock market with continuous frictionless trading, a geometric Brownian motion for stock with constant stock return, constant
volatility, and a constant interest rate.
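Under these original assumptions, the Brennan and Schwartz put characterization can be made concrete with the standard Black-Scholes formula. The following is a minimal Python sketch, not the authors' own implementation; the parameter values are hypothetical, and the deterministic survival correction they applied is omitted for clarity:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def bs_put(S, K, r, sigma, T):
    """Black-Scholes price of a European put with strike K on fund value S:
    the value of a maturity guarantee K in the Brennan-Schwartz reading."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return K * exp(-r * T) * N(-d2) - S * N(-d1)

# Hypothetical parameters: a 10-year guarantee at the current fund level.
print(round(bs_put(S=100, K=100, r=0.03, sigma=0.2, T=10), 2))  # ≈ 10.93
```

The long maturity T is exactly where, as noted above, the constant-volatility and constant-rate assumptions come under strain.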
By inserting deterministic survival rates as corrections for discount factors, Brennan and Schwartz could apply the Black and Scholes framework in a straightforward
manner. Although the valuation problem seems solved, as in principle a replicating portfolio can be formed using financial derivatives, they do point out that the
issue is more complex. Due to the specific nature of life insurance products with
their long maturities, there is no liquid market to buy these long-term financial
derivatives directly. The design of a hedge is possible, but the long term puts the
assumptions of constant volatility and interest rate under strain.
3.2 Towards a More General Theory
The Nobel-prize winning work of Black, Scholes and Merton sparked a strong theoretical development in the academic world about valuation of contingent claims,
especially when the link with the theory on martingales and stochastic integration
was made clear by Harrison and Pliska (1981). A martingale can be seen as a
mathematical characterization of a fair game. A stochastic process is a martingale
if future losses and gains are expected to cancel out, i.e. if the current position is the
best estimate for the future position. This is related to the absence of arbitrage, i.e.
the absence of opportunities to realize a risk-free profit by some trading strategy. In an equilibrium situation, it makes sense that arbitrage opportunities are not
present, as otherwise prices would rebalance. In the discrete setting, it is possible to
show that the market is arbitrage free if and only if the discounted price processes
are martingales after a change of the probability measure to a risk neutral one.
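The discrete statement can be illustrated in the simplest possible setting, a one-period binomial model. The following Python sketch (with hypothetical numbers of our choosing) computes the unique risk-neutral probability and verifies the martingale property of the discounted price:

```python
def risk_neutral_prob(S0, Su, Sd, r):
    """One-period binomial model: the unique probability q that makes the
    discounted stock price a martingale, E_q[S1] / (1 + r) = S0.
    The market is arbitrage-free iff Sd < S0*(1+r) < Su, i.e. 0 < q < 1."""
    return (S0 * (1.0 + r) - Sd) / (Su - Sd)

# Hypothetical one-period market: up to 120 or down to 90 at 2% interest.
q = risk_neutral_prob(S0=100.0, Su=120.0, Sd=90.0, r=0.02)
assert 0.0 < q < 1.0                                     # no arbitrage
assert abs((q * 120.0 + (1 - q) * 90.0) / 1.02 - 100.0) < 1e-9  # martingale
print(q)  # 0.4
```

In this complete one-period market the martingale measure is unique; with a second source of randomness and no extra instrument, a whole family of such q would qualify.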
Consider an arbitrage-free market in continuous time. Under the assumption
that there is at least one probability measure such that the discounted stock price
process is a martingale, Harrison and Pliska (1981) showed that if
the market is complete, i.e. if for every contingent claim there is a self-financing
portfolio that has the same payoff as this claim, then this probability measure
is unique. Moreover, the converse is true, so incompleteness implies that there
are several distinct martingale-yielding probability measures, and therefore several
price systems that can be considered fair. The assumptions of the Black and Scholes
setting can be weakened: the price process of the stocks only needs to be a semi-martingale, which means that it is the sum of a martingale and a process whose
variation is finite. But in their work, the interest rate is still constant.
Geman et al. (1995) interpreted discounting a security as taking the ratio of two securities, where typically a zero-coupon bond is the denominator. This last security
takes the role of the unit in which payments are made, and is therefore called the numéraire. They
showed that one can take any non-dividend-paying security as the denominator in this ratio. Changing the numéraire leads to changing the probability
measure, but the resulting prices will be the same because the self-financing hedging strategies are the same. Taking zero-coupon bonds instead of the constant-rate
savings account paves the way to include stochastic interest. The importance of
this result lies in the freedom to choose a suitable numéraire for a particular contingent claim, while the equivalent martingale measure approach can be
maintained.
3.3 Incompleteness
The work of Harrison and Pliska, Geman and others did remove many of the restrictive assumptions that were present in the original Black and Scholes setting.
The martingale approach still yields a unique price. But the – at first implicit –
assumption of completeness is of a different order. Incompleteness essentially means
that there are not enough instruments to cover the riskiness in the market. How the
remaining risk is priced is a matter of preference. This corresponds to the result
of Harrison and Pliska that in an incomplete market, there are several fair prices
compatible with no-arbitrage.
In fact, in modeling the insurance market, many sources of risk break down
the assumption of completeness. This is the case for stochastic volatility in equity
models if only stocks and bonds are available, and the same holds for jump diffusion
models for equity. The illiquidity of the market for long-term bonds and financial derivatives also ensures that the market relevant for life insurance is not complete.
Mortality risk by its nature also induces incompleteness: for example, it is not possible to
create a perfect hedge for a policy on a single life. Surrender of policies does not
only happen at financially optimal moments, but also for reasons outside the policy
setting. This, too, induces a source of incompleteness.
3.4 Local Risk Minimization
So the question arises how to handle the incompleteness of the market. As there
is no longer a unique price, what makes a good price and what can we say about
the remaining risk? We saw that in the complete market case, the price of a contingent claim is the initial cost of a self-financing hedging portfolio. Föllmer and
Sondermann (1986) started an approach now called local risk minimization that
was developed further, notably by Schweizer (1993) in his thesis and later work
(Schweizer, 2001). It entails the idea that a contingent claim that is not attainable
by a self-financing portfolio can be attained by a portfolio that is at least mean-self-financing, but for which adjustments are allowed. These adjustments can be seen
as profits and losses of the hedging process. Measuring the riskiness of these profits
and losses by a least squares criterion, we can try to minimize the risk at each point
in time. Just like in the set-up of Geman et al. (1995), the method works for any
numéraire. We note that since this numéraire is the unit in which quantities are
measured, the minimization criterion is in terms of units of the numéraire. Schweizer
also advocates another quadratic risk minimization approach called mean-variance
hedging, but in our opinion this does not fit well with the general stochastic cashflows
that we are considering here.
A fundamental result is the Föllmer-Schweizer decomposition: under (i) the so-called structure condition, and (ii) the assumption that the mean-variance trade-off
process is uniformly bounded, the contingent claim can be uniquely decomposed into
a hedgeable and a non-hedgeable part such that the risk is locally minimized (Monat
and Stricker, 1995). In the discrete setting, these conditions are automatically fulfilled as long as conditional expectations and variances are finite. For the hedgeable
part there is a self-financing hedging strategy, and the cost of the hedge yields a
“price” that is unique due to no-arbitrage. The unhedgeable part is the profits and
losses process that is locally minimized. This profits and losses process is a martingale, so the local risk-minimizing hedging portfolio is mean-self-financing. The risk
in this part cannot be covered by financial instruments, so there is not a unique market value. If the liabilities are transferred between parties, then the risk premium
depends on the preferences of the parties.
In the continuous case, the local risk minimization method is a deep result that
relies on the theory of stochastic integration and martingales. In the discrete case,
much less machinery is needed. As our computational setting is discrete, we will
restrict ourselves to the discrete case; see Appendix 8. In fact, hedging in practice is a
discrete process defined by the moments of actual trade. If we analyze a single step
in the discrete setting, we may recognize in the local risk minimization the minimum
variance hedge ratio as was already proposed by Johnson (1960). Statistically, this
relies on the fact that if the expectation is zero, minimizing the second moment
is equivalent to minimizing the variance. So the local risk minimization hedging
strategy can be seen as a sequence of minimum variance hedges.
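Such a single-step minimum variance hedge can be sketched in a few lines. The following Python illustration (the function name and the simulated, hypothetical increments are ours) estimates the ratio phi = Cov(dH, dS) / Var(dS) from samples:

```python
import random

def min_variance_hedge_ratio(dH, dS):
    """Single-step minimum variance hedge ratio (Johnson, 1960):
    phi = Cov(dH, dS) / Var(dS), estimated from sampled one-step
    increments of the claim value (dH) and the hedge asset (dS)."""
    n = len(dS)
    mH, mS = sum(dH) / n, sum(dS) / n
    cov = sum((h - mH) * (s - mS) for h, s in zip(dH, dS)) / n
    var = sum((s - mS) ** 2 for s in dS) / n
    return cov / var

# Hypothetical increments: the claim moves half as much as the asset,
# plus unhedgeable noise; the estimate should then be close to 0.5.
rng = random.Random(42)
dS = [rng.gauss(0.0, 1.0) for _ in range(20000)]
dH = [0.5 * s + rng.gauss(0.0, 0.2) for s in dS]
phi = min_variance_hedge_ratio(dH, dS)
print(round(phi, 2))  # close to 0.5
```

The noise term here plays the role of the unhedgeable part: no choice of phi removes it, which is exactly the residual of the Föllmer-Schweizer decomposition in one step.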
In the hedging practice of tracking a benchmark by a portfolio, a common way
of measuring the performance is looking at the tracking error. If we take as the
benchmark the liabilities due to the policy, then minimizing the tracking error boils
down to minimizing variance, so tracking the liabilities is a characterization of the
Föllmer-Schweizer hedging strategy. If we consider the liabilities arising from a
portfolio of policies instead of a single policy, the Föllmer-Schweizer decomposition
is a key to the asset liability management for the portfolio.
In the literature, there are some specific cases for which formulae for the Föllmer-Schweizer decomposition are worked out for insurance products (see Møller (1998),
Barbarin (2007), Vandaele and Vanmaele (2008)). Schweizer (1993) shows that for
continuous stock processes, there is a particular choice of a probability measure
that turns the local risk-minimizing price process into a martingale. It is intimately
linked to the market price of risk. This is the so-called minimal martingale measure
and connects the local-risk-minimization approach to the approach of pricing by
martingales.
Note that in the local-risk-minimization approach there is no need to change the
probability measure. Computations are done under the real-world probability measure, so
the distribution of profits and losses is meaningful.
Also note that there is no guarantee that this valuation is the all-time lowest,
as in principle it is possible that another hedging strategy leads to a better overall
result. But our impression is that the Föllmer-Schweizer hedging strategy is hard
to beat, as for complete markets it yields the unique price, notably the conditional
expectation value under the risk-neutral measure.
3.5 Monte Carlo Scenarios
To actually compute the Föllmer-Schweizer decomposition, we will use a backward
Monte-Carlo scheme based on work by Potters et al. (2001), who were inspired
by Longstaff and Schwartz (2001). We generalize their original approach to the
situation of any numéraire and any stochastic cashflow. This Monte Carlo local-risk-minimization framework is very general as it works for any contingent claim in
an incomplete market defined by practically any stock process, any interest model
and any mortality or surrender model. So it is well suited for obtaining an insight
into complex products in highly stochastic settings. The main drawback is that
it is a numerical process depending on the generation of scenarios, so a sufficient
number of scenarios is needed for convergence and the results are not in the form
of an analytical formula. At this moment in time, the limits of this approach are
not known yet.
For computing the Föllmer-Schweizer decomposition, we regard the hedge and
price as a function of the underlying and time, and we approximate these functions
by a finite set of base functions. This reduces the backward Monte Carlo local
risk minimization to an ordinary least squares regression at each point in discrete
time. In line with the findings of Potters et al., we see that convergence of the
procedure is very good, so the necessary number of Monte Carlo scenarios is modest.
This is probably due to the fact that the hedging strategy acts like a control variate
for the Monte Carlo simulations. There are some numerical issues with the choice
of base functions as the rank of the system of equations may become very small
and the hedge may explode halfway, but these can be resolved with the help of the
singular value decomposition algorithm (see Press et al., 1988).
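One such regression step can be sketched as follows. This is a minimal Python illustration of the idea, not the thesis's R implementation; the function name, the polynomial basis and the data are ours. NumPy's least squares solver is SVD-based, which copes with the near rank deficiency that a poor choice of base functions can cause:

```python
import numpy as np

def regress_on_basis(S, y, degree):
    """One regression step of the backward Monte Carlo scheme: approximate
    a target quantity y (e.g. a discounted continuation value) as a
    polynomial in the underlying S across all scenarios."""
    A = np.vander(S, degree + 1, increasing=True)  # basis 1, S, S^2, ...
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # SVD-based solver
    return coef

# Hypothetical check: a target that is exactly quadratic in S is recovered.
rng = np.random.default_rng(0)
S = rng.uniform(0.5, 1.5, size=1000)
y = 2.0 - S + 0.5 * S ** 2
coef = regress_on_basis(S, y, degree=2)
print(np.round(coef, 6))  # approximately [2, -1, 0.5]
```

In the actual scheme, one such regression is performed per time step, for the price function and for the hedge ratio, stepping backwards from maturity.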
Our setup will thus be a combination of a standard market model with a finite
set of stochastic processes representing securities, among which a numéraire, and an
actuarial model with stochastic cashflows. Bonds are present among the securities,
and typically accommodate the choice of a numéraire. The stochastic cashflow may
depend for instance on the economic state, on an age- and generation-dependent
mortality distribution and a lapse distribution. For a single embedded option, like
a maturity guarantee, the cashflow has the form of a single contingent claim as
is standard in financial mathematics. But if we take a portfolio of policies with
various maturities and various policy-holder ages, the cashflow will have a more
general stochastic pattern.
3.6 Unhedgeable Residual Risk
The residual risk due to market incompleteness appears in the Föllmer-Schweizer
decomposition as the non-hedgeable martingale part. Schweizer shows that it is
strongly orthogonal to the martingale part of the financial instruments, which means
that we cannot improve the hedge to diminish the residual risk in terms of numéraire
units. As far as hedging is concerned, it should be left untouched. Here the appropriate way to mitigate the residual risk is the actuarial approach of maintaining
a capital buffer as a reserve to cover losses due to this residual risk.
From a solvency perspective, the central issue is how large a buffer in numéraire units is
needed to cover the losses in the cost process. To stay solvent in a scenario means
that at any time the value of the cumulative profit and loss process should stay
within the buffer. Here all variables are supposed to be properly discounted. If the
requirement is that in q% of all scenarios the company stays solvent, then the buffer
can be determined from the empirical data as the q-th quantile of the scenarios. We
call this the Value-at-Risk at level q%. The costs due to raising and holding this
buffer can be seen as a market value margin that is to be added to take over the
residual risk as well.
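This quantile computation can be sketched as follows. The Python below is a minimal illustration with toy scenario data of our own choosing: it takes per-scenario profit and loss increments (assumed already discounted) and returns the buffer as the empirical q-quantile of the worst cumulative loss per scenario:

```python
import math

def solvency_buffer(pnl_paths, q=0.99):
    """Buffer such that in (at least) a fraction q of the scenarios the
    cumulative discounted profit and loss process stays above -buffer:
    the empirical q-quantile of the worst loss per scenario."""
    worst = []
    for path in pnl_paths:
        cum, low = 0.0, 0.0
        for pnl in path:
            cum += pnl
            low = min(low, cum)          # track the worst drawdown
        worst.append(-low)
    worst.sort()
    k = min(len(worst) - 1, math.ceil(q * len(worst)) - 1)
    return worst[k]

# Toy example: three hypothetical P&L paths.
paths = [[1.0, -3.0, 1.0], [-0.5, 0.5, 0.0], [0.2, 0.1, -0.1]]
print(solvency_buffer(paths, q=0.99))  # 2.0, the worst cumulative loss
```

With realistic scenario counts, the same routine yields the Value-at-Risk at level q as described above; note that it looks at the path minimum, not only the terminal value.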
An empirical distribution of the residual risk results from the scenario-based
approach for computing the Föllmer-Schweizer decomposition. For each scenario,
there is a time series of the development of the profits and losses. Typically, this
consists of a certain initial cost, and then a series of profits and losses. As the process is a martingale, the average change in the profit
and loss process is zero. Empirically, the other moments and other measures such
as quantiles may be determined. In particular, the Value-at-Risk can be estimated.
The Value-at-Risk is not the only risk measure that is important from a solvency
perspective. Artzner et al. (1999) discuss the failure of subadditivity as a weakness
of Value-at-Risk as risk measure, and they propose other risk measures like the
Tail-Value-at-Risk. These risk measures can be computed as well on the basis of
the empirical residual risk distribution. As these measures depend more heavily
on the tail of the distribution, typically more scenarios are needed for a reliable
estimate.
Typical risk factors that lead to substantial contributions to the residual risk
are mortality, policy holder behavior, and also illiquidity of financial instruments
with long maturity.
The fact that we do not know at which moment a person will decease leads to
an unhedgeable risk. Such a risk typically diversifies well, in the sense that the law
of large numbers makes the risk for n policies grow only with √n, so the risk
per policy diminishes with 1/√n. When the portfolio of policies of an insurer is
not large, reinsurance is a proper way of mitigating this risk. The longevity issue,
i.e. the risk that we cannot estimate the trend in mortality well, is a risk that does
not diversify. A portfolio of n policies will have n times the risk of a single policy.
So it affects all policies in the same way and can lead to large contributions in the
residual risk. In the market, we see the first steps in a trend of securitization of the
longevity risk. If such contracts are in force, we may include them in the payment
stream and recompute the locally optimal hedging strategy.
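The square-root behavior of the diversifiable part can be checked with a small simulation. The following Python sketch uses purely hypothetical mortality figures (a one-year death probability and a fixed benefit) and compares the portfolio risk for two portfolio sizes:

```python
import random
import statistics

def portfolio_loss_sd(n, n_sims=2000, p=0.01, loss=100.0, seed=1):
    """Standard deviation of the total loss of n independent policies,
    each paying `loss` on a death within the year, which occurs with
    probability p (hypothetical figures)."""
    rng = random.Random(seed)
    totals = [sum(loss for _ in range(n) if rng.random() < p)
              for _ in range(n_sims)]
    return statistics.pstdev(totals)

# 25x more policies should give roughly sqrt(25) = 5x the portfolio risk,
# i.e. 5x less risk per policy.
ratio = portfolio_loss_sd(2500) / portfolio_loss_sd(100)
print(round(ratio, 1))  # roughly 5
```

A trend risk such as longevity would shift p for all policies at once; the corresponding loss then scales with n itself, which is why it does not diversify.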
Policy-holder behavior may also contribute to residual risk. If the policyholder
decisions are purely motivated by financial optimization considerations, then the
behavior becomes predictable and the embedded option or guarantee can be considered as an American option, so this is hedgeable. As it does not diversify, it
should be hedged indeed. But the part of policyholder behavior that can be attributed to the personal situation or irrationality leads to unpredictable decisions.
This will lead to residual risk that in principle diversifies well.
For maturities longer than five years, vanilla put and call options and most
other equity dependent derivatives become less and less liquid. Also interest rate
derivatives become less liquid. There are bonds of long maturity, but no longer in
the form of zero-coupon-bonds. Only swaps and swaptions with long time spans
are readily available. Therefore long maturity hedging will have to resort to equity
shares, bonds, swaps and swaptions. This implies that some market behavior factors
like stochastic volatility and jumps cannot be hedged by including derivatives as
financial instruments and might contribute to the residual risk. As this kind of
unhedgeable market risk does not diversify well, it deserves serious attention.
3.7 A Framework for Valuation in Incomplete Markets
For any hedging strategy and any scenario, the profits and losses can be computed.
So for a set of scenarios, a distribution over time of the profits and losses process
can be determined. Different hedging strategies may be compared by inspecting the
hedge costs and the residual risk. The approximate Föllmer-Schweizer decomposition delivers an optimized trading strategy in this form, and thus may be compared
to other trading strategies. Also models or parameter values can be compared by
considering profits and losses.
We arrive at a practical valuation framework that takes as input:
1. a stochastic cashflow;
2. a set of securities as hedging instruments; and
3. a set of scenarios;
and gives the following output:
1. a hedging strategy with locally minimized risk given a numéraire;
2. a distribution of profits and losses.
It consists of the following steps:
1. Choose models for state variables for equity, interest, mortality, etc.;
2. Generate Monte Carlo simulations;
3. Determine a hedging strategy with Locally Minimized Risk;
4. Given a scenario set and a hedge strategy as a function of state variables, determine the cost of the initial hedge and the distribution of the profit and loss
process.
If a set of scenarios is given, only the last two steps are necessary. If a hedge strategy
is already given as a function of the discounted asset and time, then only the last step
is needed.
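The last step can be sketched in a few lines. This minimal Python illustration (the sign convention and the toy numbers are ours; the thesis framework itself is implemented in R) computes the profit and loss increments of a discrete hedging strategy in discounted units:

```python
def pnl_process(V, S, phi):
    """Profit and loss increments of a discrete hedging strategy, in
    numéraire (discounted) units: the change in the liability value V
    not explained by the hedge position phi in the asset S.
    V, S: discounted values at times 0..T; phi: positions held over
    each interval [t, t+1)."""
    return [-(V[t + 1] - V[t] - phi[t] * (S[t + 1] - S[t]))
            for t in range(len(phi))]

# Toy check: a liability that moves exactly as 2 units of the asset is
# perfectly hedged by phi = 2, so all P&L increments vanish.
S = [100.0, 105.0, 98.0]
V = [200.0, 210.0, 196.0]
phi = [2.0, 2.0]
print(pnl_process(V, S, phi))  # [0.0, 0.0]
```

Running this over every scenario, with V and phi supplied as functions of the state variables, yields the empirical profit and loss distribution of step 4.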
On the one hand, the power of this framework lies in the fact that in an incomplete market setting with stochastic equity, interest and mortality, a hedging
strategy for complex non-linear liabilities is provided that will be hard to beat.
On the other hand, it accommodates the comparison of hedging strategies, choices
of models and sensitivities of parameters, and gives insight into the effects of these
choices on the pricing, hedging and remaining risk due to market incompleteness.
In Appendix 8 the mathematical theory is provided for the case of discrete time
and two securities, including a numéraire. The mathematical treatment is based
on Föllmer and Schweizer (1989) and Schweizer (2001). We extend the approach
to general payment streams as Schweizer (2008) does in the continuous case. For
the risk minimization in a Monte Carlo setting, we extend the approach of Potters
et al. (2001) to a general numéraire discount and a general stochastic cashflow.
In Appendix 9 we show how we developed the framework for analyzing embedded options and guarantees in an object-oriented way in the statistical computing
environment R. Object-oriented classes and methods for insurance policies and for
embedded options and guarantees are developed. Both deterministic and stochastic
models are implemented, in particular geometric Brownian motion and jump diffusion for equity, the Hull-White 1-factor interest rate model, and the Lee-Carter
model for mortality. In this framework, both analytical closed form approaches and
Monte-Carlo type of simulations for valuation are implemented.
In Chapter 5 we illustrate the use of this framework by taking two typical examples from the four categories mentioned in Section 2.1 to obtain the Föllmer-Schweizer decomposition. We chose the maturity guarantee and the guaranteed
annuity option. Profit sharing with respect to a reference portfolio may be handled
along the same lines. For discretionary profit sharing and for surrender-type options, models of human behavior should be included. This adds complexity, but we
feel that it does not add to insight into the method. We show how the sensitivity to
parameters can be analyzed, and we show the effect of another choice of the model
for interest, equity or mortality.
3.8 Supervisory Perspective
This research is motivated by the desire of the Dutch supervising authority to raise
the quality and transparency of the management of risk in options and guarantees
embedded in insurance contracts. The results may well be translated into implications for regulation and supervision of insurance companies holding portfolios with
complex products. We would like to put forward the following consequences.
• A working valuation algorithm is now available for valuation of complex products with non-linear components like embedded options and guarantees. It
will work together with any economic scenario generator.
• A framework is now available to assess the impact on hedging costs and the
unhedgeable residual risk of a diversity of hedge strategies, models, parameter
sensitivities, etc.
• The valuation algorithm specifies which and how many assets to hold in the
hedge strategy and in the capital buffer. Care should be taken if the latter is a
volatile security.
• The outcome of a valuation in an incomplete market is only meaningful if
the appropriate hedging and buffering strategy is implemented. From a risk
control point of view, insurer and supervisor should be aware of this condition.
Regulation can benefit from this approach since it enables a quantitative inspection
and judgment of risk due to non-linear components in liabilities. The report of the
Groupe Consultatif (2012) mentions that this way of separating the hedgeable and
unhedgeable parts of the liabilities is currently not yet used for regulatory purposes.
A conditio sine qua non for usage is that these non-linearities are effectively
reported, which might be difficult for very large portfolios of policies. At this
point, a replicating portfolio can offer help. For instance, a thousand maturity
guarantees with the same maturity and age of the insured may be combined in a
single “mortality indexed put option”. This can serve as input for the local risk
minimization algorithm. Beyond this quantitative inspection, supervisory activity
can focus on the quality of the choice of models and parameters for equity, interest,
mortality and policyholder behavior.
3.9 Further Research
We note that the unit of the capital buffer in the Föllmer-Schweizer decomposition
is the numéraire. So if this is a bond with certain maturity, this particular bond
should be kept in the capital buffer, otherwise an uncovered extra risk is introduced
in the hedging strategy. In the example of the guaranteed annuity option that we
will work out in Chapter 5, the choice for the numéraire is the equity share. So
in this case the minimization in the Föllmer-Schweizer hedging strategy is done
with a ’capital buffer’ in equity. One might argue that such a possibly volatile
equity share is not appropriate as a capital buffer in the ordinary sense. This buffer
should rather be seen as a kind of second-order hedge facility, designed to cover the
hedging errors. In the complete market, the work of Geman et al. (1995) showed
numéraire independence. Indeed, the hedge then comes with zero profits and losses,
so the numéraire dependence disappears. But in the incomplete market, we cannot
a priori expect this to hold.
If we want to know the profits and losses in money, then the money account is the
preferred numéraire. But it is an open question whether this will yield the best
hedging and buffering performance from the viewpoint of risk mitigation. We believe
that we are led to new questions for research. What is the effect of a numéraire
change? Which numéraire is optimal, and what are optimality criteria? Is the
hedging and buffering performance a good perspective to address transaction costs?
As a second topic for further research we propose the art of local risk minimization, which concerns the design choices for the computational parameters. How
many state variables have to be taken into account? How many base functions
suffice? How many scenarios are necessary for good convergence? How big or small
should the time step be? How well does this approach work for combinations of
embedded options and guarantees?
Finally, we remark that the local risk minimization bridges an actuarial and a
financial approach to insurance. The actuarial way is based on statistical expectations and risk measures, while the financial way is based on hedging strategies. The
Föllmer-Schweizer decomposition gives an expected best estimate of the hedging
strategy and returns the distribution of the remaining profits and losses in the form
of a martingale. This martingale may be subject to actuarial risk measurements.
In fact, when there are no financial instruments to hedge, the Föllmer-Schweizer
decomposition returns the actuarial fair provision. If there are sufficiently many financial instruments with regard to the risk factors, so if the market is complete, the
Föllmer-Schweizer decomposition returns the delta hedging strategy. The question
whether a risk can be hedged is not a dichotomous question but a gradual one, and
the local risk minimization approach quantifies to what extent this is the case. This
raises interesting questions: Is this approach helpful for determining market prices
for the transfer of portfolios? Does this approach lead to a way of determining
the market value margin of a liability? Does this have consequences for regulation?
4 Closed Form Valuation in Complete Markets
In the literature there are many results for the valuation of embedded options and
guarantees (see Chapter 2). Here we present a selection of closed formulas that do
take stochasticity of equity and interest into consideration, but are still rather easy
to compute. This is important for smaller insurers who are not in the position to
perform large calculations. Care is taken to derive the formulas with a common
numéraire, so that a unified approach is achieved. The formulas have been tried out
in a well-known spreadsheet program.
We discuss three of the main types of embedded options and guarantees, but not
the surrender/paid-up type which asks for the extra risk dimension of policyholder
behavior. Interesting as it may be, we do not need the extra complexity for the
purpose of this thesis.
There are several modeling assumptions underlying these formulas. In particular
we will need the assumption of completeness. As we saw in Section 3.3, for a realistic
match with reality, such assumptions may be judged as too strong. Nevertheless, the
formulas may be useful when they turn out to approximate the price sufficiently well.
The valuation framework (See Section 3.7) can be used to see what the size of the
approximation error is by a comparison with results from local risk minimization.
4.1 Endowments
Let us denote by qx,t the probability that somebody who is now x-year old dies in
year t from now. We suppose that this probability is deterministic for all x and t.
The generational mortality table is then given by
\[ \chi = \big(q_{x,t}\big)_{x\in\{0,\dots,\omega\}}^{\,t\in\{0,\dots,T\}}, \]
where ω is the maximum age. Denote by T −t px,t the conditional probability that
somebody who is now x-years old and who is still alive in year t from now, will
survive the following T − t years.
Let us denote by P (t, T ) the value of a zero-coupon bond to be acquired at
time t and with bond tenor T − t. This asset can be used as numéraire, and it
yields the forward measure, which we will denote by Q (see Jamshidian (1989) and
Brigo and Mercurio (2006)). Let us suppose that an interest yield curve is given at
time t = 0, so the prices P (0, T ) are known for all maturities.
Consider a portfolio of policies on i.i.d. lives, all of the same age x. Suppose that
mortality and market variables such as interest are independent. Then the actuarial
discount that takes interest and mortality into account is given by the estimated
value of the pure endowment
\[ P_\chi(t,T) := {}_{T-t}p_{x,t}\, P(t,T). \]
By the law of large numbers, the standard deviation per endowment decreases
proportionally to $1/\sqrt{n}$ as the number $n$ of policies increases. So the risk in the endowment is diversified
due to the number of policies. In practice, the number of policies is finite, so there
is some risk left. But we assume that the portfolio of policies is large enough to
suppose complete diversification.
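As a small numerical illustration (with a made-up constant hazard rate and a flat yield curve, assumptions made purely for the sake of the example), the actuarial discount and the $1/\sqrt{n}$ diversification effect can be checked in Python:

```python
import numpy as np

# Illustrative sketch of P_chi(t,T) = {T-t}p_{x,t} * P(t,T); the constant
# hazard and flat yield below are assumptions, not calibrated values.
def survival(x, t, T, hazard=0.01):
    # {T-t}p_{x,t} under a constant force of mortality (illustrative)
    return np.exp(-hazard * (T - t))

def P(t, T, r=0.03):
    # zero-coupon bond price on a flat yield curve (illustrative)
    return np.exp(-r * (T - t))

def P_chi(x, t, T):
    return survival(x, t, T) * P(t, T)

print(P_chi(35, 0, 30))

# Diversification: the standard deviation of the average payout per portfolio
# shrinks like 1/sqrt(n) as the number of policies n grows.
rng = np.random.default_rng(1)
spread = {}
for n in (100, 10_000):
    alive = rng.random((2000, n)) < survival(35, 0, 30)  # survival indicators
    spread[n] = alive.mean(axis=1).std()                 # per-portfolio averages
print(spread)
```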
Next we consider an endowment with a continuous coupon stream with rate $\rho$
that is reinvested in the endowment. If the insured person is alive at maturity $T$,
the pay-out is $e^{\rho T}$. So, seen from time $t$, its expected worth at time $T$ leads us to define the coupon-bearing endowment value
\[ P_\chi^{\rho}(t,T) := e^{\rho(T-t)}\, P_\chi(t,T). \]
Note that the coupon-bearing endowment is not as such a traded security on the
market. By our assumptions, it only differs from a zero-coupon bond by a deterministic factor. Its main purpose is to clean up the formulas.
4.2 Maturity Guarantee
Consider a portfolio of unit-linked policies with equity unit $S$ on $x$-year-old persons and a guaranteed minimum amount of $K$ at maturity $T$. Under the above
assumptions an amount is paid equal to
\[ V_T = {}_{T-t}p_{x,t}\, (K - S_T)^{+}. \]
Under the forward measure, the quotient $V_t/P(t,T)$ is a martingale, so
\[ \frac{V_t}{P(t,T)} = E_Q\!\left[ \frac{V_T}{P(T,T)} \,\Big|\, \mathcal{F}_t \right] = {}_{T-t}p_{x,t}\, E_Q\!\left[ (K - S_T)^{+} \,\big|\, \mathcal{F}_t \right]. \]
So we obtain for $t = 0$ the valuation
\[ V_0 = P_\chi(0,T)\, E_Q\!\left[ (K - S_T)^{+} \right]. \tag{4.1} \]
If the maturity guarantee is expressed in terms of a guaranteed interest rate $\rho$, so
if $K = e^{\rho T} S_0$, then it is natural to express this in terms of the coupon-bearing
endowment. Just as above, we see that
\[ V_0 = S_0\, P_\chi^{\rho}(0,T)\, E_Q\!\left[ \Big(1 - \frac{S_T}{S_0 e^{\rho T}}\Big)^{\!+} \right]. \tag{4.2} \]
So the maturity guarantee boils down to a put option on the ratio of actual growth
and the guaranteed growth.
If $S_t/P_\chi^{\rho}(t,T)$ is log-normal with constant variance $\sigma^2$, then following Geman
et al. (1995), the value is given by Black's formula. In this case it takes the form
\[ V_0 = S_0 \big( P_\chi^{\rho}(0,T)\, \Phi(-d_2) - \Phi(-d_1) \big), \tag{4.3} \]
\[ d_1 = \frac{-\log(P_\chi(0,T)) - \rho T + \tfrac{1}{2}\sigma^2 T}{\sigma\sqrt{T}}, \tag{4.4} \]
\[ d_2 = \frac{-\log(P_\chi(0,T)) - \rho T - \tfrac{1}{2}\sigma^2 T}{\sigma\sqrt{T}}. \tag{4.5} \]
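For concreteness, formulas (4.3)-(4.5) can be evaluated directly. In this Python sketch the inputs for $P_\chi(0,T)$, $\rho$ and $\sigma$ are illustrative assumptions, not calibrated values:

```python
from math import exp, log, sqrt
from statistics import NormalDist

# Sketch of (4.3)-(4.5); all numeric inputs below are illustrative assumptions.
Phi = NormalDist().cdf

def maturity_guarantee(S0, P_chi, rho, T, sigma):
    # P_chi is P_chi(0, T); the coupon-bearing value is e^{rho T} * P_chi(0, T)
    d1 = (-log(P_chi) - rho * T + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
    d2 = (-log(P_chi) - rho * T - 0.5 * sigma**2 * T) / (sigma * sqrt(T))
    P_chi_rho = exp(rho * T) * P_chi
    return S0 * (P_chi_rho * Phi(-d2) - Phi(-d1))

V0 = maturity_guarantee(S0=100.0, P_chi=0.40, rho=0.03, T=30.0, sigma=0.25)
print(V0)
```

A lower equity volatility gives a cheaper guarantee, as one expects for a put.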
In particular, if we assume that St and P (t, T ) are both geometric Brownian motions
with known constant covariance, then this holds for the quotient as well and the
variance can be calculated by Itô's lemma. Summarizing, we see that this formula is
valid on the basis of the following assumptions:
• a complete market;
• mortality probabilities are deterministic;
• mortality is independent from market variables under Q;
• the portfolio of policies is fully diversified;
• the dynamics of equity is a geometric Brownian motion;
• the dynamics of the zero-coupon bond is a geometric Brownian motion;
• the covariance between equity and zero-coupon bonds is constant.
We are aware that these assumptions are too strong for a realistic model. The
framework of Section 3.7 will help us to assess the error made by these assumptions
through a comparison with results from local risk minimization applied to situations
with weaker assumptions.
4.3 Profit Sharing
We consider a with-profits policy with a minimum interest rate guarantee. Let r be
the guaranteed rate, and let r̃(t) be the floating yearly rate of return of a reference
portfolio. Let π be the single premium. At year τ the guaranteed amount is πeρτ ,
where ρ = log(1 + r). Let us write ρ̃(t) = log(1 + r̃(t)). At each year a diversified
pay-out is honored of size
\[ b_\tau(\tau) = {}_{T-t}p_{x,t}\, \pi e^{\rho\tau}\, \big(\tilde\rho(\tau) - \rho\big)^{+}. \]
Here we are dealing with a call option on the rate of return of the reference portfolio.
Denote by bτ (t) the value at time t of this option. When we take as above the
numéraire P (t, τ ), we obtain under the forward measure Q the martingale
\[ \frac{b_\tau(t)}{P(t,\tau)}. \]
The invariance of the conditional expectation over time implies that we can value
the single option by
\[ b_\tau(0) = \pi\, P_\chi^{\rho}(0,\tau)\, E_Q\big[(\tilde\rho(\tau) - \rho)^{+}\big]. \tag{4.6} \]
We see it yields a weighted call option on the return of the reference portfolio with
the guaranteed rate as strike.
The complete profit-sharing option consists of these separate pay-outs. So the
value of the complete profit-sharing option is the sum of all call options
\[ B(0) = \sum_{\tau=1}^{T} b_\tau(0). \]
Under the assumptions that the floating rate r̃(τ ) and the zero-coupon bond are
log-normally distributed with known fixed covariance, a single summand can be
valued just as the maturity guarantee above by applying Black’s formula and Itô’s
lemma.
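A hedged sketch of this summation: the fragment below values $B(0)$ as a sum of yearly Black-type calls on the log-return, with an assumed lognormal floating rate, a flat curve and a constant hazard, all purely illustrative stand-ins for the thesis's models:

```python
from math import exp, log, sqrt
from statistics import NormalDist

# Illustrative sketch of B(0) as a sum of caplet-like calls (4.6); the flat
# curve, constant hazard and lognormal rate are assumptions for the example.
Phi = NormalDist().cdf

def caplet(pi, P_chi_rho, forward, rho, sigma, tau):
    # Black call on rho_tilde(tau), strike rho, assumed forward level `forward`
    d1 = (log(forward / rho) + 0.5 * sigma**2 * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return pi * P_chi_rho * (forward * Phi(d1) - rho * Phi(d2))

def profit_sharing(pi, T, rho, forward, sigma, r=0.03, hazard=0.01):
    total = 0.0
    for tau in range(1, T + 1):
        # coupon-bearing endowment discount on a flat curve with flat hazard
        P_chi_rho = exp(rho * tau) * exp(-(r + hazard) * tau)
        total += caplet(pi, P_chi_rho, forward, rho, sigma, tau)
    return total

B0 = profit_sharing(pi=100.0, T=10, rho=0.03, forward=0.035, sigma=0.2)
print(B0)
```

As for ordinary caps, a higher rate volatility makes the profit-sharing option more valuable.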
This approach is similar to the standard way of valuing a cap option consisting
of a combination of caplets. See Brigo and Mercurio (2006). The only difference
here is that the face values of the caplets are different for each year. This formula
is valid on the basis of the following assumptions:
• a complete market;
• mortality probabilities are deterministic;
• mortality is independent from market variables under Q;
• the portfolio of policies is fully diversified;
• the dynamics of the rate of return of the reference portfolio is a geometric Brownian motion;
• the dynamics of the zero-coupon bond is a geometric Brownian motion;
• the covariance between the reference portfolio and zero-coupon bonds is constant.
4.4 Guaranteed Annuity Option
Consider a unit-linked policy based on equity $S$ that offers to pay out the accumulated sum in the form of an annuity with a guaranteed payout rate $g$. Let $g_\chi(t)$ be
the market annuity payout rate at time $t$, so
\[ g_\chi(t) := \frac{1}{\ddot a_\chi(t)}, \]
where the life-time annuity is given by the expected value of the sum of endowments
\[ \ddot a_\chi(t) := \sum_{s=t}^{\infty} P_\chi(t,s). \]
In principle this is an infinite summation, but without loss of generality we may
truncate it at the maximum age $\omega$. The value of the benefit at maturity time $T$ is
\[ B_T = {}_{T-t}p_{x,t}\, \big(S_T\, g\, \ddot a_\chi(T) - S_T\big)^{+} = {}_{T-t}p_{x,t}\, g\, S_T\, \big(\ddot a_\chi(T) - g^{-1}\big)^{+}. \]
We recognize a quanto call option on the annuity in the unit $S$. Following the
approach of Boyle and Hardy (2003), we take the forward measure as numéraire,
and we arrive at
\[ B_0 = P_\chi(0,T)\, g\, E_Q\!\left[ S_T\, \big(\ddot a_\chi(T) - g^{-1}\big)^{+} \right]. \]
Let us assume that equity and interest rates are independent; then we may write
this as
\[ B_0 = P_\chi(0,T)\, g\, E_Q[S_T]\, E_Q\!\left[ \big(\ddot a_\chi(T) - g^{-1}\big)^{+} \right]. \]
Under the forward measure $Q$ the quotient $S_t/P(t,T)$ is a martingale, so
\[ S_0 = P(0,T)\, E_Q[S_T]. \]
Hence
\[ B_0 = {}_{T-t}p_{x,t}\, g\, S_0\, E_Q\!\left[ \big(\ddot a_\chi(T) - g^{-1}\big)^{+} \right]. \]
This is essentially a call option on the life-time annuity, which is in itself a weighted
sum of zero coupon bonds. To evaluate this option, we make use of a result of
Jamshidian (1989) which is currently known as Jamshidian’s trick. This “trick”
can be used if a one-factor short rate interest model is used, and works as follows.
Consider a call option with strike price $K$ on a sum of zero coupon bonds
\[ \sum_i a_i\, P(t, T+i) \]
with positive coefficients $a_i$. As we assume that the interest is determined by the
short rate $r_t$, all functions $f_i(r_t) := a_i\, P(t, T+i)$ are monotonically decreasing in the
short rate $r_t$, so the same holds for the sum $\sum_i f_i(r_t)$. Suppose there is a solution $\hat r$
to
\[ \sum_i f_i(r) = K. \]
So r̂ makes the option exactly at the money. By the monotonicity, this solution is
unique. Define for each i a strike price
\[ K_i := f_i(\hat r). \]
Jamshidian's observation is that we may decompose the pay-out as follows:
\[
\Big( \sum_i f_i(r) - K \Big)^{\!+}
= \Big( \sum_i \big(f_i(r) - f_i(\hat r)\big) \Big)^{\!+}
= \Big( \sum_i \big(f_i(r) - f_i(\hat r)\big) \Big) \cdot 1_{\{r \le \hat r\}}
= \sum_i \big(f_i(r) - f_i(\hat r)\big) \cdot 1_{\{r \le \hat r\}}
= \sum_i \big(f_i(r) - K_i\big)^{\!+},
\]
which implies that the option on the sum of bonds decomposes into a sum of
options.
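Jamshidian's trick is easy to verify numerically. The following Python sketch (with assumed exponential-affine bond prices standing in for any one-factor short-rate model) finds $\hat r$ by bisection and checks the decomposition pointwise:

```python
import numpy as np

# Sketch of Jamshidian's trick; the bond-price functions f_i(r) below are an
# illustrative assumption, not a calibrated short-rate model.
a = np.array([1.0, 1.0, 1.0])              # annuity weights a_i
maturities = np.array([1.0, 2.0, 3.0])

def f(r):
    # monotonically decreasing bond prices a_i * P(T, T+i) as functions of r
    return a * np.exp(-r * maturities)

K = 2.5                                    # strike on the bond portfolio

# bisection for r_hat solving sum_i f_i(r_hat) = K (f decreasing in r)
lo, hi = -1.0, 1.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid).sum() > K else (lo, mid)
r_hat = 0.5 * (lo + hi)
K_i = f(r_hat)                             # per-bond strikes K_i = f_i(r_hat)

# decomposition check: (sum_i f_i(r) - K)^+ == sum_i (f_i(r) - K_i)^+
for r in (-0.05, 0.0, 0.05, 0.10):
    lhs = max(f(r).sum() - K, 0.0)
    rhs = np.maximum(f(r) - K_i, 0.0).sum()
    print(r, lhs, rhs)
```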
Still following Boyle and Hardy (2003), we opt for the one-factor Hull-White
model of the short rate, specified for parameters $a$ and $\sigma$ by
\[ dr_t = a\big(\theta(t) - r_t\big)\,dt + \sigma\,dW_t, \tag{4.7} \]
where $\theta(t)$ is a function determined by the term structure, and $W_t$ is a Brownian
motion. The bond value is given in this case by
\[ P(t,T) = \frac{P(0,T)}{P(0,t)} \exp\!\Big( B(t,T)\, f(0,t) - \frac{\sigma^2}{4a}\, B(t,T)^2 \big(1 - e^{-2at}\big) - B(t,T)\, r_t \Big), \]
\[ B(t,T) = \frac{1}{a} \big( 1 - e^{-a(T-t)} \big), \]
where $f(0,t)$ is the $t$-year forward rate. The price of a call option with maturity $T$
and strike $K_i$ on a zero coupon bond $P(t, T+i)$ with bond maturity $i$ is then given
by a Black-type formula
\[ V_t(i, K_i) = P(t, T+i)\, \Phi\big(d_1(i)\big) - K_i\, P(t,T)\, \Phi\big(d_2(i)\big) \]
with parameters
\[ d_1(i) = \frac{1}{\sigma_P(i)} \log\frac{P(t,T+i)}{P(t,T)\,K_i} + \frac{\sigma_P(i)}{2}, \]
\[ d_2(i) = \frac{1}{\sigma_P(i)} \log\frac{P(t,T+i)}{P(t,T)\,K_i} - \frac{\sigma_P(i)}{2}, \]
\[ \sigma_P(i) = \sigma\, \sqrt{\frac{1 - e^{-2a(T-t)}}{2a}}\; \frac{1 - e^{-ai}}{a}. \]
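These formulas can be evaluated directly. In the Python sketch below the flat initial curve $P(0,T) = e^{-0.03T}$ is an illustrative assumption, while $a$ and $\sigma$ follow the Boyle and Hardy (2003) values used later in Chapter 5:

```python
from math import exp, log, sqrt
from statistics import NormalDist

# Sketch of the Black-type bond option price V_t(i, K_i) under Hull-White;
# the flat initial curve is an illustrative assumption.
Phi = NormalDist().cdf

def P0(T, r0=0.03):
    return exp(-r0 * T)                    # assumed flat initial curve

def sigma_P(i, T, t, a, sigma):
    return sigma * sqrt((1 - exp(-2 * a * (T - t))) / (2 * a)) * (1 - exp(-a * i)) / a

def bond_call(i, K_i, T, t, a, sigma):
    sp = sigma_P(i, T, t, a, sigma)
    d1 = log(P0(T + i) / (P0(T) * K_i)) / sp + sp / 2
    d2 = d1 - sp
    return P0(T + i) * Phi(d1) - K_i * P0(T) * Phi(d2)

# option with 10-year expiry on a bond maturing 5 years later
V = bond_call(i=5.0, K_i=0.85, T=10.0, t=0.0, a=0.35, sigma=0.025)
print(V)
```

A higher strike gives a cheaper call, as a quick sanity check confirms.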
Putting the pieces together, we have found a closed form formula for the value of
the guaranteed annuity option
\[ V_t = g\, S_0 \sum_{i=0}^{\omega} {}_{T+i-t}p_{x,t}\; V_t(i, K_i), \]
where the $K_i$ are determined by Jamshidian's trick. This formula is valid on the
basis of the following assumptions:
• a complete market;
• mortality probabilities are deterministic;
• mortality is independent from market variables under Q;
• the portfolio of policies is fully diversified;
• the dynamics of equity is independent of the dynamics of the interest rate under Q;
• the interest rate is defined by a one-factor short rate model;
• in particular, the interest rate is governed by the Hull-White short rate model.
5 Applications
We discuss two applications of the framework for local risk minimization and comparison of hedging strategies. The first case is the classical example that was already
discussed by Brennan and Schwartz (1976): a policy with a maturity guarantee. We
start with the original Black-Scholes setting and then step by step we make the model
more stochastic and the market more incomplete. So the assumptions for the closed
form valuation are loosened, and we will compare the results with outcomes of the closed
formulas of Chapter 4 as far as possible. We will show how elasticities with regard to parameters can be determined, both of the initial hedge and of statistics of the profits
and losses.
As a second application, we will look at the example that led to the downfall of
Equitable Life (see Section 1.1): the guaranteed annuity option. We will see that the quanto
style of hedging leads to a fair risk mitigation. We will encounter an example of
market incompleteness in its very essence when it turns out that a higher volatility
leads to a cheaper hedge.
5.1 Maturity Guarantee
Consider the set-up of Section 4.2, consisting of a unit-linked policy with equity unit $S$
with $S_0 = 100$ on an $x = 35$-year-old person and a guaranteed minimum amount
at maturity $T = 30$ of $K = S_0 e^{0.03\,T}$. A benefit is paid of size
\[ V_T = 1_{T < T_x}\, (K - S_T)^{+}, \]
where Tx is the moment of death of an x-year old person, and 1T <Tx is the indicator
function returning 1 if the person is alive at T and 0 else.
5.1.1 Interest Rate
First we consider the case as in Brennan and Schwartz (1976) of a maturity guarantee ep1 with fixed interest rate, deterministic mortality and geometric Brownian
motion dynamics for equity. Market parameters mm1 are r = 0.027 for the interest
rate, and expected return of 7% and volatility of 25% for equity. Mortality data
come from Actuarieel Genootschap (2010). We give the valuation, consisting of the
price of the initial hedge and the hedge coefficient, both in the case of the closed
“Black-Scholes”-formula, and also for the local risk minimization algorithm. For
this algorithm we take as numéraire bonds with the same maturity, and settings
setup.lrm1 of N = 2000 simulations, stepsize 0.02, and a population size of 10,000.
For the base functions we take the weighted Laguerre polynomials up to degree five.
In the R-implementation (see Appendix 9) this is computed by
value(ep1, mm1) # closed formula
[1] Price: 53.3392767446385
[1] Hedge: -0.249721941539991
value(ep1, mm1, setup.lrm1) # local risk minimization
[1] Price: 53.5638111762768
[1] Hedge: -0.257216419995145
So here the difference is about 0.5% for the price of the hedge, while the hedge
coefficient shows about 3% difference. Taking up to ten base functions reduces the
difference to less than 1%. The profits and losses for the N scenarios are plotted in
Figure 1. Note that these profits and losses are expressed in terms of the numéraire
bond, valued at t = 0. In this way, the profits and losses are discounted. Note that
due to the asymmetry in the geometric Brownian motion, the losses show higher
peaks than the profits. The average approximates zero at each point in time, so
27
-10
-5
Cost
0
5
10
Profits and Losses
0
5
10
15
20
25
30
Time
Figure 1: Profits and Losses for Maturity Guarantee
the median typically shows a small profit (here a negative value). For example, for
t = 25 we obtain the statistics:
     Min.   1st Qu.    Median      Mean   3rd Qu.      Max.
-0.401600 -0.183000 -0.031920  0.000974  0.054230  4.760000
For solvency, we are interested in the distribution of cumulative profits and losses
up to maturity. The histogram is given in Figure 2. We get statistics
Figure 2: Cumulative Profits and Losses
      Min.    1st Qu.     Median       Mean    3rd Qu.       Max.
-23.910000  -3.425000   0.282100  -0.000943   3.696000  25.370000
The 99.5% quantile is in this case 14.6. So a capital buffer of 15 is sufficient for
99.5% of all scenarios if it is invested in bonds with maturity T . This capital buffer
reflects the imperfection of hedging with discrete steps and the approximation error
due to the Longstaff-Schwartz approach. Taking smaller steps will improve the
accordance with the Black-Scholes formula, but will not reduce the capital buffer.
A different set of base functions may do so, especially if it is better adapted to the
last step in time. For instance, taking ten base functions reduces the buffer by 25%.
In fact, another set of base functions yields in principle a different hedging strategy.
For convergence to the risk-free limit situation in the Black-Scholes formula, we
need to let the step size go to zero, the number of base functions to infinity, and
both the number of simulations and the number of policies go to infinity. Then the
profits and losses distribution will vanish. But already for the current parameters,
we see that the algorithm approximates quite well. Moreover, in practice hedging
cannot be done infinitely often with an infinitely small time step.
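The capital buffer computation itself is a one-liner on the simulated cumulative profits and losses. In this minimal sketch a normal distribution stands in for the output of the hedging algorithm; its scale is an illustrative assumption:

```python
import numpy as np

# Sketch of deriving the 99.5% capital buffer from simulated cumulative
# profits and losses; the normal P&L distribution is an assumption standing
# in for the output of the local risk minimization algorithm.
rng = np.random.default_rng(42)
cumulative_pnl = rng.normal(loc=0.0, scale=5.3, size=2000)   # losses positive

buffer = np.quantile(cumulative_pnl, 0.995)  # in units of the numeraire bond
print(buffer)
```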
The next instance we will consider is where the interest is given by a market
model mm2 with a yield curve. Black’s formula returns in this case
> value(ep1, mm2) # closed formula
[1] Price: 51.8423521980796
[1] Hedge: -0.245367361615871
while the local risk minimization algorithm with the parameter settings as before
gives
> value(ep1, mm2, setup.lrm1) # local risk minimization
[1] Price: 52.0799529196114
[1] Hedge: -0.250904304147194
The only difference is in the numéraire. Note that these values are close to the
constant interest rate case. This is not a surprise, because the numéraire is still
not stochastic, and the level of the fixed interest was intentionally set to the average of
the yield curve. The profits and losses distribution also does not differ from the fixed
interest case.

Figure 3: Profits and Losses with Hull-White Interest Model

The valuation results do change when we consider the case where
the interest is governed by the Hull-White dynamics (see 4.7). Set parameters in
market model mm3 to a = 0.35 and σ = 0.025, just as in Boyle and Hardy (2003).
The yield curve is the same as before. As the stochasticity increases, we increase
the number of simulations to 4000, and the number of base functions to 8. For this
situation, we still have a closed formula (see Brigo and Mercurio, 2006, B.2).
> value(ep1, mm3) # closed formula
[1] Price: 57.3665046590934
[1] Hedge: -0.228869742653555
> value(ep1, mm3, setup.lrm.2) # local risk minimization
[1] Price: 57.4880467832121
[1] Hedge: -0.235492749702711
Note that the price for the hedge is increasing, while the hedge coefficient decreases.
So the amount of bonds in the initial hedge becomes larger. The graph displays
a growth of the profits and losses. Indeed we see that the 99.5% quantile for the
capital buffer grows up to 15.2. In our trials we encountered this pattern often:
more stochasticity leads to a higher price of the initial hedge, a decreasing hedge
coefficient and a growing capital buffer.
Rüfenacht (2012) gives a calibration of α = 0.0167 and σ = 0.0079 for the Swiss
market in recent years. This leads to much larger values
> value(ep1, mm3b) # closed formula
[1] Price: 100.797547566288
[1] Hedge: -0.27990858947193
> value(ep1, mm3b, setup.lrm.2) # local risk minimization
[1] Price: 101.467518074209
[1] Hedge: -0.285781506910869
     Min.   1st Qu.    Median      Mean   3rd Qu.      Max.
-41.34000  -4.83700   0.31160   0.03757   5.18600  54.20000

       50%        95%      99.5%
 0.3116479 13.5400418 23.0914550
We may conclude at this point that adding stochasticity in interest by means of the
Hull-White model does lead to an increase of the valuation and profits and losses,
but the overall character does not change, so hedging will still mitigate most of the
extra risk.
5.1.2 Mortality
One source of market incompleteness is the finite number of insured lives.
There are no financial instruments connected to individual lives. But, as we saw in
Section 3.6, this is a typical insurance risk that diversifies well. We will look at
examples of 10 and of 100 policies and compare these to the above choice of 10,000
policies. Clearly the profits for the insurer due to premature deaths are visible as
negative peaks. In the case of 100 policies, we see a clear decline of the peaks.
Passing to the above situation of 10,000 policies, we see that the influence on the
profits and losses distribution of market incompleteness due to the limited number
of policies vanishes due to the diversification. There is no effect on the capital buffers
needed to cover the 99.5%. This is due to the special character of a maturity
guarantee on persons that are expected to stay alive. The impact on the capital
buffer for a term insurance where payout is at a premature death will be quite large.
Another source of incompleteness is the uncertainty in the mortality model. Up
to now we have been computing with the AG-mortality table. First we shift to the
forecast from the Lee-Carter model, which is just another deterministic table.
> value(ep1, mm3a) # closed formula
[1] Price: 56.7164853593461
[1] Hedge: -0.226276421851859
> value(ep1, mm3a, setup.lrm.2) # local risk minimization
[1] Price: 56.8409910637224
[1] Hedge: -0.232836839028175
30
-5
-15
-10
Cost
0
5
10
Profits and Losses
0
5
10
15
20
25
30
Time
Figure 4: Profits and Losses for 10 policies
It leads to a small decrease in the price of the hedge, while the hedge coefficient and capital
buffer stay the same. Changing to the full stochastic Lee-Carter model does not
lead to a notable difference in this case.
> value(ep1, mm6a, setup.ohmc.2)
[1] Price: 56.7258560791702
[1] Hedge: -0.232052716393696
Also the new profits and losses distribution turns out to be quite similar. This can
be understood because of the typical large fraction of policy holders that will survive
up to maturity. This raises two interesting questions. Does it make sense to neglect
mortality altogether? And does the effect of including uncertainty in the mortality
parameters become visible when we pass to policies with longer maturities? In the
next subsections, we will address these questions.
5.1.3 Super-replication
For the maturity guarantee there is a natural upper bound obtained by neglecting
mortality. So we let everybody survive in our model. We can see this as a form of
super-replication, since it replicates an upper bound to the liabilities. The
main advantage is that it eliminates the mortality risk factor. A plain vanilla put
option p1 with the corresponding strike gives, when discounted with the yield curve:
> value(p1, mm2) # closed formula
[1] Price: 54.7218535957527
[1] Hedge: -0.258995903353652
> value(p1, mm2, setup.lrm1) # local risk minimization
[1] Price: 54.8008977582851
[1] Hedge: -0.262395847158384
and a capital buffer of 14.8. Similarly, the super-replicating put option in the case
of a Hull-White interest rate model yields
> value(p1, mm3) # closed formula
[1] Price: 60.5528363616816
[1] Hedge: -0.241581950258217
> value(p1, mm3, setup.lrm.2) # local risk minimization
[1] Price: 60.7781858014183
[1] Hedge: -0.24911635011735

Figure 5: Profits and Losses for 100 policies
and a capital buffer of 15.0. So we see in both cases a modest increase, if any, of the
valuation outcomes, and we may conclude that super-replication is a worthwhile
approach.
The effect of uncertainty in mortality becomes much larger when we consider
policies of maturity T = 50. These are rather untypical in the market, but we
discuss them because they will give useful insights.
5.1.4 Maturity
When we raise the maturity to 50, we expect to see a clearer effect of uncertainty
in mortality. Before, in the situation of a policy of maturity 30 on a person aged
35, the effect was quite modest, as we could expect since survival rates up to 65 are
quite high. In practice, there may not be many of these policies with maturity 50
sold, but this example does give a feeling for the tail of guaranteed annuities.
The price of the initial hedge decreases, which we might expect as the fraction
of survivors will decrease substantially.
> value(ep6, mm6, setup.lrm.6)
[1] Price: 24.23459
[1] Hedge: -0.07258544
The profits and losses behavior dramatically changes, as can be seen from the quantiles of the cumulative distribution
      90%      95%      99%    99.5%
 54.07106 58.95687 63.57376 65.80374
and from Figure 6 that illustrates the statistics
      Min.    1st Qu.     Median       Mean    3rd Qu.       Max.
-1294.0000    -1.4190     2.0580    -0.0267    22.3900    70.7000
Figure 6: Profits and Losses for Maturity 50
The large profits are due to scenarios where the mortality turned out to be higher.
The mean-self-financing local risk minimization algorithm pushes the other scenarios
upwards, so the accumulated losses increase. This effect disappears when we take
a deterministic mortality table.
We see that the effect of mortality becomes much more pronounced than that of the
dynamics of equity. This is because we can hedge the stock dynamics, but not the
uncertainty in the mortality parameters. So Figure 6 shows the effect of market
incompleteness due to mortality.
5.1.5 Equity
Unpredictable jumps in equity prices introduce market incompleteness. We can
model such equity behavior by a jump-diffusion model mm4. To the geometric Brownian motion we have added log-normal jumps that occur at Poisson-distributed
moments in time. For comparison, we compute the valuation also for a standard
geometric Brownian motion with the same variation and return.
> value(ep1, mm2, setup.lrm1) # GB
[1] Price:
51.4897720151086
[1] Hedge:
-0.248049081065266
>
> value(ep2, mm4, setup.lrm1) # JD
[1] Price:
48.8158969935822
[1] Hedge:
-0.221679230212979
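As an illustration only, a jump-diffusion path of the kind described above can be simulated in a few lines. The thesis computations are done in R (model mm4); the Python sketch below and all its parameter values are assumptions, not the thesis code:

```python
import numpy as np

def simulate_jump_diffusion(s0, mu, sigma, lam, mu_j, sigma_j, T, steps, rng):
    """One path of a Merton-style jump diffusion: geometric Brownian motion
    with log-normal jumps arriving at Poisson-distributed moments."""
    dt = T / steps
    path = np.empty(steps + 1)
    path[0] = s0
    for i in range(steps):
        # diffusion contribution to the log-return over dt
        dlog = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        # Poisson number of jumps in this step, each log-normally sized
        for _ in range(rng.poisson(lam * dt)):
            dlog += rng.normal(mu_j, sigma_j)
        path[i + 1] = path[i] * np.exp(dlog)
    return path

rng = np.random.default_rng(2013)
path = simulate_jump_diffusion(100.0, mu=0.07, sigma=0.25, lam=0.5,
                               mu_j=-0.05, sigma_j=0.10, T=30.0, steps=120,
                               rng=rng)
print(len(path), path.min() > 0)
```

Because the jump sizes act multiplicatively through the exponential, the simulated prices stay strictly positive, as required of an equity model.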
We note that the price and hedge values are lower for the jump diffusion model.
However, the profits and losses distribution has a different form, and the 0.995
quantile jumps from 14.4 to 42.3, a substantial increase. From Figure 7 it is clear
that the profits and losses occur during the whole period; they are no longer
concentrated at the end. Apparently the hedging strategy becomes less effective:
the hedge itself becomes cheaper, but the profits and losses are much higher.
Figure 7: Profits and Losses for a Jump Diffusion process

Figure 8: Distribution of Accumulated Profits and Losses for a Jump Diffusion
process
5.1.6 Elasticity Analysis
We compute for a typical policy with a maturity guarantee the elasticities with
respect to the main parameters. Recall that the elasticity of y = y(x) with respect
to a variable x is defined as the expression

    (dy/y) · (x/dx),
so it represents the ratio of the relative infinitesimal change in y to the relative
change in x. If the elasticity and the values of x and y are given, then the sensitivity,
i.e. the partial derivative, follows. A discrete approximation can be constructed by
taking ∆x instead of dx. The starting point is a unit-linked policy with a guarantee
of rg = 0.03 at maturity T = 30, and an initial investment of 100 in the unit.
We include stochasticity for all risk factors, so we take the Hull-White short rate
interest model, the Lee-Carter model for mortality, and geometric Brownian motion
for equity.
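As a sketch of the discrete approximation just described (in Python for illustration; the thesis computations use R), the elasticity of a function of one parameter can be estimated as:

```python
def elasticity(f, x, factor=1.25):
    """Discrete approximation of the elasticity (dy/y) · (x/dx):
    perturb x multiplicatively and compare the relative changes."""
    y0, y1 = f(x), f(x * factor)
    return ((y1 - y0) / y0) / (factor - 1.0)

# For y = x^2 the exact elasticity is 2; the 25% step gives 2.25,
# which illustrates the discretization error of such a large step.
print(elasticity(lambda x: x * x, 3.0))  # → 2.25
```

The same 25% step is used below, so the reported elasticities likewise carry a discretization error for strongly non-linear dependencies.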
Initial values of the parameters are the following

  rg     µS     σS    ∆r    αr     σr       δ   error
0.03   0.07   0.25     0  0.35  0.025  −1.662   0.134

where rg stands for the guaranteed rate, µS and σS govern the equity dynamics, ∆r
induces a parallel shift of the yield curve, αr and σr are the Hull-White parameters,
and δ is the drift term and the last parameter the error size in the Lee-Carter
mortality model.
We approximate the elasticities by multiplying the parameter values one by one
by a factor of 1.25. In Table 1 these elasticities are displayed, alongside the initial
values of the variables for the policy in the first column.
            value      rg      µS      σS      ∆r      αr      σr       δ   error
price      54.049   1.385   0.002   0.716  −1.117  −0.113   0.056   0.017   0.002
hedge      −0.242   0.833   0.045  −0.911  −0.729   0.017  −0.086   0.017   0.002
SD          5.549   0.925  −0.240   0.597  −0.750  −0.082   0.055   0.001  −0.003
skewness   −0.143  −0.660   0.322  −1.959  −0.364  −0.036   0.026  −0.034   0.005
kurtosis    0.706   0.097   1.712   2.403   0.143   0.046   0.019   0.052   0.002
q0.9        6.759   0.739  −0.196   0.452  −0.775  −0.085   0.038   0.036  −0.010
q0.95       9.002   0.652  −0.500   0.379  −0.779  −0.153   0.030  −0.014  −0.040
q0.995     15.179   1.146  −0.337   0.942  −0.747  −0.069   0.152   0.000  −0.014

Table 1: Elasticities for a Policy with Maturity Guarantee
As expected, we see that an increase in the guaranteed rate rg leads to an
increase of the price, the hedge, and the size of the profits and losses distribution.
A well-known phenomenon from the complete market situation is that the price
of the option does not depend on the annual return µS of the underlying. This is
reflected in the elasticity values of price and hedge, which are almost zero. Meanwhile,
µS does have an effect on the profits and losses distribution. Its size diminishes, as
can be read off from the standard deviation and the quantiles; the distribution also
becomes less skewed, but shows more kurtosis. As in the complete market situation,
an increase of the equity volatility σS increases the price of hedging, and causes the
hedge to start with relatively more bonds. Moreover, it leads to overall higher profits
and losses, more symmetry and fatter tails. Especially the increase of the kurtosis,
also visible in the highest quantile, marks the necessary increase of the capital buffer.
A positive parallel shift of the yield curve ∆r reduces the price and the overall
profits and losses distribution. The kurtosis increases, which might be due to the
relatively higher impact of the equity dynamics. An increase of the mean reversion
parameter αr leads to a small decrease of the price, while the other values are about
zero. So these changes do not lead to a change of the hedge. An increase of the
Hull-White volatility parameter σr only shows an effect on the highest quantile. But
it is important to realize that in practice the calibration of these parameters often
leads to very different outcomes. We saw before that a large change of both
parameters does lead to an increase of price and profits and losses. The effects of an
increase of the drift and the error term in the mortality model turn out to be
negligible. But we already saw that this has to do with the fact that the maturity
stays far below life expectancy in this case.
Note that we have only considered the elasticities around a single point. This
can be misleading, as we can see from mortality, which has a negligible effect for
this particular policy but a large effect for longer maturities. As the effects are
often non-linear, a more extensive study might be worthwhile. To check for stability
we have repeated these elasticity computations several times with varying step sizes,
numbers of simulations, function bases, and randomization seeds. The outcomes are
stable, but more runs are needed to obtain confidence intervals.
5.1.7 Summary
For a maturity guarantee we see that the effect of introducing stochasticity for
interest is substantial. So for valuing a maturity guarantee a stochastic model for
interest is mandatory. Careful calibration of the parameters is important, as the
valuation is quite sensitive to them. Market incompleteness due to jumps especially
increases the capital buffer, as is demonstrated by a jump diffusion model with a
similar market price of risk. Elasticities and sensitivities can be computed for a
particular policy, but care should be taken when drawing conclusions, as the
dependencies are often non-linear.
In the case of a maturity guarantee of moderate length and a younger age of the
policy holder, the impact of uncertainty in mortality is quite limited. Incompleteness
due to the finite number of policies affects neither the valuation nor the capital buffer
for the maturity guarantee. In this case, super-replication by neglecting mortality
makes sense. It is a prudent action that removes a risk factor, and it increases the
values only modestly. However, if we consider longer maturities that come near life
expectancy, the situation changes dramatically.
We have seen that for this type of products, the profits and losses distributions
are skewed. Losses are mainly due to equity and interest movements, and profits
arise from uncertainty in mortality.
5.2 Guaranteed Annuity Option
We take a set-up that consists of a unit-linked policy based on equity S with
starting value S0 = 100 that offers to pay out the accumulated sum in the form
of an annuity with a guaranteed payout rate g = 0.07. The age of the insured is
x = 35, the maturity is set to T = 30 years, and the maximum age is set to 102,
which implies that the annuity is truncated after 37 years. We determine the value
of the benefit at maturity time T as

BT = 1T<Tx (ST g äχ(T) − ST)+ = 1T<Tx g ST (äχ(T) − g−1)+,
where the life-time annuity is given by the sum of endowments

    äχ(t) := Σ_{s=t}^∞ Pχ(t, s).
We recognize this payout as a quanto call option on the annuity in the unit S.
The annuity is a random variable depending on the stochastic nature of interest
and mortality. As is clear from the form of the payout, stochasticity of interest
and mortality is essential, since otherwise the annuity is constant and the valuation
becomes trivial. This contrasts with the situation for the maturity guarantee. So we
start with Hull-White parameters α = 0.05 and σ = 0.005, and we take the Lee-Carter
mortality model that is calibrated for Dutch female mortality. For equity, we take a
stock with starting value S0 = 100 and dynamics with return µ = 0.06 and volatility
σ = 0.20.
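The annuity factor äχ(T) that drives the payout is a sum of survival-weighted discount factors. The Python sketch below illustrates only the mechanics, using a flat toy mortality and interest curve instead of the Lee-Carter and Hull-White models of the thesis; all numbers are assumptions:

```python
import numpy as np

def annuity_factor(survival, discount):
    """ä(t) = sum over s >= t of P(t, s): expected discounted value of a
    payment of 1 per year while the insured survives."""
    return float(np.sum(np.asarray(survival) * np.asarray(discount)))

# Toy inputs: flat 3% interest and a flat 2% annual mortality rate,
# truncated after 37 years (maximum age 102 for an insured aged 65).
years = np.arange(37)
survival = 0.98 ** years
discount = 1.03 ** -years

a = annuity_factor(survival, discount)
g = 0.07
moneyness = max(a - 1.0 / g, 0.0)   # the (ä − g^{-1})^+ part of the payout
print(a > 1.0 / g, round(moneyness, 3))
```

In this toy setting the annuity factor exceeds g⁻¹ ≈ 14.29, so the guarantee ends in the money; with stochastic interest and mortality the factor, and hence the option payoff, becomes random.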
5.2.1 Quanto Hedge
For the local risk minimization algorithm, we need to compute conditional expectations. In this case they depend on the states of annuity, bonds and the numéraire
stock. A priori, the base functions are weighted polynomials in two “discounted”
variables Xt = äχ (t)/St and Yt = P (t, T )/St , so the dimension of the space of base
functions grows considerably in comparison with earlier situations. But in fact,
it is possible to restrict the number of base functions by means of the following
heuristics. The “discounted” payout has the form
BT /ST = g (äχ (T ) − g −1 Pχ (T, T ))+ ,
So it is similar to an ordinary call option. We know that for such a call option the
dynamic hedge strategy essentially depends on the ratio of the assets äχ(t)/Pχ(T, T),
so on the ratio Xt/Yt. Therefore we heuristically choose to restrict our base functions
in the local risk minimization to one-dimensional polynomials in Xt/Yt.
When the local risk minimization is done, it turns out that hedging is roughly
performed by maintaining opposite positions of equal value in the annuity and the
bond, and a position in stock corresponding to the total value of the hedge. This
is a well-known way to perform a quanto hedge when a call option in another
currency is hedged.
Unfortunately, there is no liquid market for annuities, so in practice such an
annuity should be approximated by taking positions in bonds or in swaptions, with
survival rates as coefficients.
In Boyle and Hardy (2003), the hedging strategy for a guaranteed annuity option
is still considered to be problematic. We think that our approach does offer a
solution; however, the capital buffer should consist of equity shares, which is a
questionable choice. As we remarked in Section 3.6, it is an open question whether
other numéraire choices will give better performance from a risk mitigation
perspective.
Next we show how well this approach performs. The local minimization setup
setup.lrm.2 consists of N = 4,000 simulations, a step size equal to 0.25, and a
portfolio size of 10,000.

Figure 9: Profits and Losses in Numéraire for a Guaranteed Annuity Option
> value(gao, mm3, setup.lrm.2)
# Lee Carter
[1] Price:
29.9216317916123
[1] Hedge:
8.86607010124701 -200.76975401193
The profits and losses profile is different from the maturity guarantee, as can be
seen in Figure 9. During most of the time up to maturity there is more symmetry,
but the highest peaks are at the end, where there is the at-the-money discontinuity.
Figure 10: Histogram of Cumulative Profits and Losses
This leads to a cumulative profits and losses distribution that is wide compared to
the maturity guarantee, in the sense that the standard deviation of 11 is large. It is
rather symmetrical, with a skewness of 0.96. The computation reports a kurtosis of
3.8, so the distribution is leptokurtic, i.e. it is concentrated about the mean and has
relatively fat tails (see Figure 10). More statistics for the cumulative profits and
losses distribution are given by
    Min.  1st Qu.   Median    Mean  3rd Qu.     Max.
-46.2100  -5.3700  -0.1595  0.2394   3.9870  70.4700

and quantiles for the losses in the right tail are

     90%      95%    99.5%
13.15068 19.80736 42.96175
5.2.2 Stochastic Mortality
The above model incorporates stochastic mortality by means of the Lee-Carter
model fitted to the Dutch female population. If we take a deterministic model
instead, then the value diminishes considerably. This effect can be understood
because the variability of the annuity decreases. Therefore, we think that this
valuation is of limited importance.
value(gao, mm2, setup.lrm.2) # forecast from Lee Carter
[1] Price:
8.82733621312252
[1] Hedge:
7.20678167717752 -131.800487127849
with a smaller profits and losses distribution, with a standard deviation of 7.8 and
27.8 as the 99.5% quantile. We have also considered the effect of the number of
policies. This is similar to that for a maturity guarantee, but somewhat less
pronounced. See Figure 11 for scenarios with a portfolio of five policies. Again, as
the benefits are paid when the policyholder is alive at maturity, no extra capital
buffer is necessary.
Figure 11: Profits and Losses for 5 Policies
5.2.3 Equity
We compare a geometric Brownian model and a jump diffusion model with the
same volatility and market price of risk. The jump model has only a very small
influence on the initial hedge.
> value(ep1, mm7a, setup.lrm.2) # GB
[1] Price:
29.9216317916123
[1] Hedge:
8.86607010124701 -200.76975401193
> value(ep2, mm7b, setup.lrm.2) # JD
[1] Price:
30.0081288550579
[1] Hedge:
9.0191258577545 -204.316014491571
For the geometric Brownian model we obtain

    Min.  1st Qu.   Median    Mean  3rd Qu.     Max.
-46.2100  -5.3700  -0.1595  0.2394   3.9870  70.4700

     90%      95%    99.5%
13.15068 19.80736 42.96175
For the jump diffusion model we obtain

    Min.  1st Qu.   Median    Mean  3rd Qu.      Max.
-52.9700  -6.0520  -0.3317  0.2423   4.4860  137.6000

     90%      95%    99.5%
14.25672 20.76598 43.10924
So the effect of going over to the jump diffusion model is not very strong. This can
be explained by the fact that the hedge is essentially linear in stock, and the buffer
is in numéraire units, so in stock as well.
5.2.4 Elasticity Analysis
We determine the elasticities by multiplying the parameter values one by one by
a factor of 1.25. In Table 2 these are displayed, together with the initial values of
the variables in the first column. We take a unit-linked policy with a guaranteed
annuity option with a rate rg = 0.06 and a setup with N = 10,000 simulations, a
step size of 0.2 and base functions up to degree eight. We start with the following
model parameters.
  rg     µS    σS    ∆r    αr     σr       δ   error
0.06   0.06   0.2     0  0.05  0.005  −1.662   0.134
Elasticities are computed by multiplying the parameter by a factor of 1.25 and
determining the result. The very high elasticity for the guaranteed annuity rate is
            value      rg      µS      σS      ∆r      αr      σr       δ   error
price      23.781   4.884   0.007   0.021  −1.589  −0.084  −0.261   0.463   0.016
hedge1      8.677   1.341   0.006  −0.004   0.465   0.459  −0.422   0.016  −0.003
hedge2   −189.577   1.336   0.005  −0.003   0.058   0.391  −0.419   0.111   0.001
SD         11.522   1.229  −0.008   0.442  −0.739  −0.698   0.918   0.149  −0.002
kurtosis    6.451  −0.520  −0.055  −1.097   0.538  −0.390   1.245  −0.296   0.000
q0.9       13.058   1.296   0.012   0.719  −0.776  −0.665   0.901   0.227  −0.005
q0.95      20.245   1.132  −0.006   0.565  −0.761  −0.755   1.052   0.172  −0.048
q0.995     46.289   1.004  −0.034   0.246  −0.769  −0.981   1.280  −0.019  −0.014

Table 2: Elasticities for GAO, N=10,000
well-known since the debâcle of Equitable Life. It reflects the strong combined
effect of the stochastic nature of equity, mortality and interest. The price of the
initial hedge increases, and the same holds for the profits and losses distribution,
as can be seen from the standard deviation and quantiles.
An increase in the return µS of the stock affects neither the hedge nor the profits
and losses distribution, just as in the case of the maturity guarantee. An increase
of σS does not have any impact on the hedge, but it does lead to an overall increase
of the profits and losses distribution as measured by the standard deviation and the
quantiles.
The effect of a positive parallel shift ∆r of the yield curve is as expected. Both
price of the hedge and profits and losses diminish considerably. For the mean
reversion parameter αr we see that the price of the hedge is not affected, but
the ratio between the hedge coefficient in the annuity and the bond is altered.
Meanwhile, the profits and losses diminish when the mean reversion is higher.
Paradoxically, an increase of the volatility σr leads to a decrease of the price
of the initial hedge. A possible explanation is that there is less to hedge. Hedging
needs some predictability, and when movements become more erratic, there is less
that a hedge strategy can do. This aspect demonstrates incompleteness pur sang,
but it deserves more extensive study. The increase of σr does lead to higher profits
and losses, and also to fatter tails.
The increase in the drift δ as a mortality parameter in the Lee-Carter model leads
to an increase in the price of the hedge and in the standard deviation of profits and
losses. But the outer quantiles are less affected. So if this trend is well estimated,
it seems that it can be hedged well. An increase in the error term of the Lee-Carter
model does not seem to have any impact on hedge or profits and losses. Probably
this is due to averaging over the long period until death.
5.2.5 Summary
When we consider guaranteed annuity options for unit-linked policies, we see that
a meaningful valuation arises only if the stochasticity of equity, interest and
mortality is taken into account together. The natural choice of the stock as
numéraire leads to a quanto type of hedge that performs well, while the
dimensionality in the local risk minimization algorithm can be controlled by a
clever exploitation of the quanto nature of the embedded guarantee. However, an
essential part of the risk mitigation strategy is that the capital buffer is formed by
units of the numéraire, so by shares of stock. This buffer should be seen as a special
economic facility to absorb the errors of this hedging strategy, not as a capital
buffer for generic risk absorption.
The mere fact of going from a deterministic mortality model to a stochastic
model adds to the unhedgeable risk, so the market becomes more incomplete. When
a jump diffusion model is taken for equity, the hedge and the profits and losses
distribution do not really change.
Elasticity analysis shows interesting phenomena. First of all, the extreme
sensitivity of the price of the initial hedge with respect to the guaranteed annuity
rate is confirmed. An increase in the volatility of interest σr leads to a decrease of
the price of the initial hedge, and a sharp increase of kurtosis. A possible
explanation is that there is more noise in the system, and there is less that a hedge
can do.
Up to now, no other hedging strategy has been described in the literature for
guaranteed annuity options. We present a hedging strategy based on local risk
minimization that takes out much of the risk. But this supposes that the “capital
buffer” for this type of product is invested in the stock, which might be risky in
itself. Further research may shed light on optimal solutions in terms of both
hedging and buffering strategies.
6 Conclusion
Let us recall the research questions of Section 1.3:
1. Which options and guarantees embedded in insurance contracts do occur?
2. Which valuation methods are suitable to assess
(a) a fair market value [. . . ], and
(b) the risks involved?
3. Which simple methods do approximate the value and risk sufficiently and
what can be said about the approximation error?
A comprehensive overview in Chapter 2 of the literature on options and guarantees
embedded in insurance contracts answers the first research question. The example
of Equitable Life (Section 1.1) may serve as a lesson for companies and regulators
about the urgency of careful handling of risks in non-linear components of liabilities.
To address the second research question, we first observe that embedded options
and guarantees face four main risk factors: interest, equity, mortality and
policyholder behavior. None of these risk factors diversifies completely, so risk
mitigation by the law of large numbers does not suffice, and hedging by market
instruments makes sense. Moreover, all four risk factors lead to incompleteness of
the market in the sense that there are unhedgeable residual risks that cannot be
neutralized by market instruments. As there is no unique way of pricing the residual
risk, there is no unique fair market value for an embedded option or guarantee in
an incomplete market setting.
So we are thrown back to the economic fundamentals of pricing: a fair price is
the cost of production (or, for risks: annihilation). An important development in
the academic field of incomplete markets is the work on local risk minimization by
Föllmer and Schweizer, among others. It yields a hedging strategy that serves as an
instrument for matching assets and liabilities as well as possible. Here “well” means
that the residual risk is minimized in terms of the numéraire security at each step,
and this part cannot be decreased any further by hedging. This local risk
minimization hedging strategy is a theoretical method that can be used for general
stochastic cashflows and market processes. However, to actually compute this
strategy in practice, we need to know conditional expectations.
Based on work by Longstaff and Schwartz (2001), and Potters et al. (2001),
we show that if a set of scenarios is given for stochastic processes, then conditional
expectations can well be approximated by projections on truncated base functions in
a Hilbert space. Combining these two approaches leads to a powerful method that
works on any set of scenarios, including Monte Carlo simulations. This method
is independent of the model choice, and it seems to be well suited for the study
of complex issues like the impact of model choices, parameter sensitivities, the
introduction of longevity bonds, transaction costs, etc. It yields an approximate
Föllmer-Schweizer decomposition into a hedgeable and an unhedgeable component
that provide the price of the hedge and the residual risk, respectively. In principle,
this decomposition fits the Solvency II framework.
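The projection idea can be illustrated in a few lines. The Python sketch below is a toy stand-in for the R computations of the thesis: it regresses a noisy "payoff" on a truncated polynomial basis to estimate a conditional expectation, in the spirit of Longstaff-Schwartz; the state variable, payoff and basis degree are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Scenarios for a state variable X and a "payoff" Y with E[Y | X] = X^2.
x = rng.normal(size=10_000)
y = x**2 + rng.normal(scale=0.5, size=x.size)

# Least-squares projection of Y onto the truncated basis 1, x, ..., x^4.
basis = np.vander(x, 5)
coeffs, *_ = np.linalg.lstsq(basis, y, rcond=None)
cond_exp = basis @ coeffs

# The fitted values approximate E[Y | X] = X^2 up to Monte Carlo error.
print(float(np.mean(np.abs(cond_exp - x**2))))
```

Because the true conditional expectation lies in the span of the basis, the regression recovers it up to sampling noise; in the thesis setting the basis must be chosen with care, as discussed for the quanto hedge above.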
In fact, based on a set of scenarios for any hedging strategy, optimal or not, the
costs of the initial hedge and the distribution of profits and losses can be calculated
for diverse models and parameter choices. No assumptions are needed, just a set of
scenarios. Nevertheless, the dimensionality of the computations will grow with the
number of hedging instruments and in non-Markovian situations. To generate such
a set of scenarios, models may be used, but also historic data sets. This approach
yields a framework that accommodates the following steps:
1. Choose models for state variables for equity, interest, mortality, etc.;
2. Generate Monte Carlo simulations;
3. Determine hedging strategy with locally minimized risk for given liabilities;
4. Given scenario set, liabilities and hedging strategy, compute the profits and
losses;
5. Compute technical provision and capital requirement.
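In miniature, and under heavy simplifications (a single rebalancing date, zero interest, one stock, assumed parameter values), the five steps can be sketched in Python as follows; this is an illustration of the pipeline, not the thesis implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1.-2. Model and Monte Carlo scenarios for the stock under the P-measure.
s0, mu, sigma, T = 100.0, 0.07, 0.25, 1.0
z = rng.standard_normal(100_000)
sT = s0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

# Liability: a put-style maturity guarantee at level K.
K = 100.0
h = np.maximum(K - sT, 0.0)

# 3. Risk-minimizing hedge ratio: regression of the claim on the stock move.
ds = sT - s0
xi = np.cov(h, ds)[0, 1] / np.var(ds)
v0 = h.mean() - xi * ds.mean()   # initial capital making the P&L mean zero

# 4. Profits and losses of the hedged position per scenario.
pnl = h - (v0 + xi * ds)

# 5. Provision (cost of the hedge) and a quantile-based capital requirement.
print(round(v0, 2), round(float(np.quantile(pnl, 0.995)), 2))
```

By construction the residual P&L has mean zero under P, so v0 plays the role of the technical provision and the upper quantile of the residual indicates the required buffer.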
This framework offers advantages: no arbitrary choices of probability measures are
required, since it works with the P-measure; it works for complex products; it works
for incomplete markets; and it accommodates the comparison of different hedging
strategies, models, etc.
All together, we see that the second research question touches upon a deep issue
due to the market incompleteness of the insurance market. We believe that this
framework provides a complete valuation answer in the form of a dynamic hedging
strategy and a distribution of residual risk.
This very same framework can assist in answering the last research question. In
the literature (see Chapter 2) many formulae for the valuation of embedded options
and guarantees are available, and they rely on a plethora of assumptions. The
framework can be used to determine how accurate such formulae are under various
conditions and models. In Chapter 4 we discuss a selection of formulae based on
the forward measure. In the worked-out examples, we see that where the market
is nearly complete, the formulae are quite useful and correspond to the hedgeable
part. But to understand the error made, the profits and losses have to be computed.
We work out the local risk minimization approach for two types of embedded
options and guarantees, which leads to insight into the nature of the products
and the effects of parameters. As model choices we have considered geometric
Brownian motion and jump diffusion for equity, the Hull-White one-factor model
for interest, and the Lee-Carter model for mortality. We see that for a guaranteed
amount for a unit-linked policy at a maturity well before life expectancy,
super-replication by a vanilla put option makes sense. But for very long maturities,
the market incompleteness has a considerable effect on profits and losses. The
guaranteed annuity option can be hedged well, and the risk due to market
incompleteness is sufficiently mitigated as long as there is a ‘capital buffer’ formed
by equity shares. If this is not the case, the unhedgeable risk becomes quite delicate
to handle.
Elasticity analysis shows that sometimes more stochasticity leads to a lower price
of the initial hedge position. This stresses that valuation of the guarantee requires
more than determining the price of the initial hedge, as the unhedgeable risk should
be valued as well. This is an important difference with the situation of a complete
market. The high sensitivity of the valuation with respect to the guaranteed
annuity rate illustrates that setting a good price a priori for a product such as a
guaranteed annuity option is delicate.
The application of the local risk minimization approach leads to further research
questions about the optimal capital buffer in relation to the numéraire, about the
design choices to obtain good results, and about the support this approach can give
to determine market value margins.
The local risk minimization approach works with the real-world probabilities
only; therefore we regard it, on the whole, as a natural choice for the valuation of
risks in an incomplete market. The cost of the initial hedge is a good starting point
for valuation, because the remaining profits and losses process is a martingale.
Beyond the hedge, this martingale yields risks that may need quite careful handling.
The sad story of Equitable Life illustrated this by showing us the combined forces
of the risk factors of mortality, interest and equity.
7 Bibliography
F. Abergel and N. Millot. Nonquadratic local risk-minimization for hedging contingent claims in incomplete markets. SIAM Journal on Financial Mathematics, 2
(1):342–356, 2011.
M. Abramowitz and I.A. Stegun. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. New York: Dover Publications, 1972.
ISBN 978-0-486-61272-0.
Actuarieel Genootschap. Actuarieel genootschap prognosetafel 2010-2060, 2010.
URL http://www.ag-ai.nl.
M.-O. Albizzati and H. Geman. Interest rate risk management and valuation of the
surrender option in life insurance policies. Journal of Risk and Insurance, 61(4):
616–637, 1994.
L. Anzilli and L. De Cesare. Valuation of the surrender option in unit-linked life
insurance policies in a non-rational behaviour framework. Quaderni dsems, Dipartimento di Scienze Economiche, Matematiche e Statistiche, Universita’ di Foggia,
2007. URL http://EconPapers.repec.org/RePEc:ufg:qdsems:20-2007.
P. Artzner, F. Delbaen, J.-M. Eber, and D. Heath. Coherent measures of risk.
Mathematical Finance, 9(3):203–228, 1999.
A.R. Bacinello. Endogenous model of surrender conditions in equity-linked life
insurance. Insurance: Mathematics and Economics, 37:270–296, 2005.
A.R. Bacinello and F. Ortu. Pricing guaranteed securities-linked life insurance
under interest-rate risk. In Actuarial Approach for Financial Risks, volume 1 of
Transactions of the 3rd AFIR International Colloquium, pages 35–55, 1993.
A.R. Bacinello and F. Ortu. Fixed income linked life insurance policies with minimum guarantees: Pricing models and numerical results. European Journal of
Operational Research, 91:235–249, 1996.
L. Ballotta and S. Haberman. Valuation of guaranteed annuity conversion options.
Insurance: Mathematics and Economics, 33:87–108, 2003.
L. Ballotta and S. Haberman. The fair valuation problem of guaranteed annuity
options: The stochastic mortality environment case. Insurance: Mathematics and
Economics, 38:195–214, 2006.
J. Barbarin. Risk minimizing strategies for life insurance contracts with surrender option. Working Paper, March 2007. URL http://ssrn.com/abstract=1334580
or http://dx.doi.org/10.2139/ssrn.1334580.
F. Black and M. Scholes. The pricing of options and corporate liabilities. Journal
of Political Economy, 81(3):637–654, 1973.
M. Bolton et al. Reserving for annuity guarantees, report of the annuity guarantees
working party. Technical report, Institute of Actuaries, London, 1997.
Ph. Boyle and M. Hardy. Guaranteed annuity options. ASTIN Bulletin, 33(2):
125–152, 2003.
M.J. Brennan and E.S. Schwartz. The pricing of equity-linked life insurance policies
with an asset value guarantee. Journal of Financial Economics, 3:195–213, 1976.
D. Brigo and F. Mercurio. Interest Rate Models - Theory and Practice: With Smile,
Inflation and Credit. Springer Verlag Berlin, 2006.
E. Briys and F. de Varenne. On the risk of insurance liabilities: Debunking some
common pitfalls. Journal of Risk and Insurance, 64(4):673–694, 1997.
J.F. Carrière. Valuation of the early-exercise price for derivative securities using
simulations and splines. Insurance: Mathematics and Economics, 19:19–30, 1996.
E. Clément, D. Lamberton, and P. Protter. An analysis of a least squares regression
method for American option pricing. Finance and Stochastics, 6:449–471, 2002.
J. Cox, J. Ingersoll, and S. Ross. A theory of the term structure of interest rates.
Econometrica, 53(2):385–408, 1985.
CRAN. Comprehensive R Archive Network, 2012. URL http://cran.r-project.org/.
J.D. Cummins, K.R. Miltersen, and S.-A. Persson. International comparison of
interest rate guarantees in life insurance. Conference Proceedings ASTIN, 2004.
URL http://ssrn.com/abstract=1071863.
M. Dahl. Stochastic mortality in life insurance: Market reserves and mortality-linked
insurance contracts. Insurance: Mathematics and Economics, 35:113–136, 2004.
S. Darses and I. Nourdin. Stochastic derivatives for fractional diffusions. Annals of
Probability, 35(5):1998–2020, 2007.
T. Dillmann and J. Ruß. Implicit options in life insurance contracts - from option
pricing to the price of the option. In Proceedings of the 7th Conference of the
Asia-Pacific Risk and Insurance Association, 2003.
H. Föllmer and M. Schweizer. Hedging by sequential regression: An introduction
to the mathematics of option trading. ASTIN Bulletin, 18:147–160, 1989.
H. Föllmer and D. Sondermann. Hedging of non-redundant contingent claims. In
W. Hildenbrand et al., editor, Contributions to Mathematical Economics, pages
205–223. North-Holland, 1986.
A. Ford et al. Report of the maturity guarantees working party. Journal of the
Institute of Actuaries, 107:101–231, 1980.
N. Gatzert. Implicit options in life insurance: An overview. Zeitschrift für die
Gesamte Versicherungswissenschaft, 98:141–164, 2009.
H. Geman, N. El Karoui, and J.-C. Rochet. Changes of numéraire, changes of
probability measure and option pricing. Journal of Applied Probability, 32(2):
443–458, 1995.
E. Gobet, J.-Ph. Lemor, and X. Warin. A regression-based Monte Carlo method to
solve backward stochastic differential equations. Annals of Applied Probability,
15(3):2172–2202, 2005.
A. Grosen and P. Jørgensen. Valuation of early exercisable interest rate guarantees.
Journal of Risk and Insurance, 64(3):481–503, 1997.
A. Grosen and P.L. Jørgensen. Fair valuation of life insurance liabilities: The impact
of interest rate guarantees, surrender options, and bonus policies. Insurance:
Mathematics and Economics, 26:37–57, 2000.
Groupe Consultatif. Market consistency, 2012. URL http://www.gcactuaries.
org.
J.M. Harrison and S.R. Pliska. Martingales and stochastic integrals in the theory
of continuous trading. Stochastic Processes and their Applications, 11:215–260,
1981.
J. Hull and A. White. Numerical procedures for implementing term structure models
II: two-factor models. Journal of Derivatives, 2(2):37–48, 1994.
Insured Retirement Institute. State of the insured retirement industry. Technical
report, Insured Retirement Institute, December 2012. URL https://www.myirionline.org/eweb/uploads/December%20Report.pdf.
F. Jamshidian. An exact bond option formula. Journal of Finance, 44(1):205–209,
1989.
L.L. Johnson. The theory of hedging and speculation in commodity futures. The
Review of Economic Studies, 27(3):139–151, 1960.
Chancellor of the Exchequer, United Kingdom. Chancellor’s Spending Review Statement, 2010. URL
http://www.hm-treasury.gov.uk/spend_sr2010_speech.htm.
M. Kohler. A review on regression-based Monte Carlo methods for pricing American
options. In L. Devroye et al., editors, Recent Developments in Applied Probability
and Statistics. Springer-Verlag Berlin Heidelberg, 2010.
M.C. Ledlie, D.P. Corry, G.S. Finkelstein, A.J. Ritchie, K. Su, and D.C.E. Wilson.
Variable annuities. British Actuarial Journal, 14(2):327–389, 2008.
R.D. Lee and L.R. Carter. Modeling and forecasting U.S. mortality. Journal of the
American Statistical Association, 87(419):659–671, 1992.
E. Linton et al. A description and classification of with-profits policies. Technical
report, Financial Services Authority, October 2001.
F.A. Longstaff and E.S. Schwartz. Valuing American options by simulation: A
simple least-squares approach. The Review of Financial Studies, 14(1):113–147,
2001.
H. Markowitz. Portfolio selection. Journal of Finance, 7(1):77–91, 1952.
R. Merton. Theory of rational option pricing. Bell Journal of Economics and
Management Science, 4(1):141–183, 1973.
R.C. Merton. Option pricing when underlying stock returns are discontinuous. Journal of Financial Economics, 3:125–144, 1976.
M. Milevsky and S. Promislow. Mortality derivatives and the option to annuitise.
Insurance: Mathematics and Economics, 29:299–318, 2001.
T. Møller. Risk-minimizing hedging strategies for unit-linked life insurance contracts. ASTIN Bulletin, 28(1):17–47, 1998.
P. Monat and C. Stricker. Föllmer-schweizer decomposition and mean-variance
hedging for general claims. Annals of Probability, 23(2):605–628, 1995.
J.A. Nielsen and K. Sandmann. Equity-linked life insurance: A model with stochastic interest rates. Insurance: Mathematics and Economics, 16:225–253, 1995.
G. Di Nunno. Stochastic integral representations, stochastic derivatives and minimal
variance hedging. Stochastics & Stochastic Reports, 73:181–198, 2002.
C. O’Brien. Downfall of Equitable Life in the United Kingdom: The mismatch of
strategy and risk management. Risk Management and Insurance Review, 9(2):
189–204, 2006.
A.A.J. Pelsser. Pricing and hedging guaranteed annuity options via static option
replication. Insurance: Mathematics and Economics, 33:283–296, 2003.
A. Petrelli, O. Siu, R. Chatterjee, J. Zhang, and V. Kapoor. Optimal dynamic
hedging of cliquets. Working Paper, 2008.
R. Plat. Analytische waardering van opties op u-rendement [Analytic valuation of options on the u-yield]. De Actuaris, September 2005, pages 39–41.
M. Potters, J.-P. Bouchaud, and D. Sestovic. Hedged Monte Carlo: Low variance
derivative pricing with objective probabilities. Physica A: Statistical Mechanics
and its Applications, 289(3-4):517–525, 2001.
W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery. Numerical Recipes
in C: the Art of Scientific Computing. Cambridge University Press, 1988. ISBN
0-521-43108-5.
EIOPA QIS5. Fifth Quantitative Impact Study for Solvency II. URL https://eiopa.europa.eu.
R. Roberts. Did anyone learn anything from the Equitable Life? Technical report,
ICBH, King’s College London, September 2012.
N. Rüfenacht. Implicit Embedded Options in Life Insurance contracts. A Market
Consistent Valuation Framework. Physica-Verlag, Springer, 2012.
P.A. Samuelson. Mathematics of speculative price. SIAM Review, 15(1):1–42, 1973.
D.F. Schrager and A.A.J. Pelsser. Pricing rate of return guarantees in regular
premium unit linked insurance. Insurance: Mathematics and Economics, 35:
369–398, 2004.
M. Schweizer. Option hedging for semimartingales. Stochastic Processes and their
Applications, 37:339–363, 1993.
M. Schweizer. A guided tour through quadratic hedging approaches. In E. Jouini,
J. Cvitanic, and M. Musiela, editors, Option Pricing, Interest Rates and Risk
Management, pages 538–574. Cambridge University Press, 2001.
M. Schweizer. Local risk-minimization for multidimensional assets and payment
streams. Banach Center Publications, 83:213–229, 2008.
M. Steffensen. A no arbitrage approach to thieles differential equation. Insurance:
Mathematics and Economics, 27:201–214, 2000.
J. Tilley. Valuing american options in a path simulation model. Transactions of the
Society of Actuaries, 45:83–104, 1993.
A. Van Haastrecht, A.J.J. Pelsser, and R. Plat. Valuation of guaranteed annuity options using a stochastic volatility model for equity prices. Insurance: Mathematics
and Economics, 47:266–277, 2010.
N. Vandaele and M. Vanmaele. A locally risk-minimizing hedging strategy for
unit-linked life insurance contracts in a Lévy process financial market. Insurance:
Mathematics and Economics, 42:1128–1137, 2008.
8 Appendix: Mathematical Background

8.1 Local Risk Minimization in Discrete Time
In our presentation of the theory we follow Föllmer and Schweizer (1989) and
Schweizer (2001), but restrict ourselves to the discrete time case for two assets.
One of the assets will be used as numéraire. We extend the approach to general
payment streams as Schweizer (2008) does in the continuous case. For the risk
minimization in a Monte Carlo setting, the main reference is Potters et al. (2001).
Here we extend their approach to a general numéraire discount and a general payment stream that may depend on survival. Their work is based on Longstaff and
Schwartz (2001), which gives more information and background.
8.1.1 Set-up and Preliminaries
We consider a model with discrete time-steps k = 0, . . . , T . Let (Ω, F, F, P ) be
a filtered probability space, where Ω is the sample space, F is a σ-algebra on Ω,
F = {Ft | t ∈ {0, . . . , T }} is a filtration of σ-algebras and P : F → [0, 1] is a
probability measure on F = FT . Intuitively, Ω stands for the possible economic
states, so a scenario is represented by a time series (ωt )t∈{0,...,T } . The σ-algebra Ft
represents the information up to time t, and P gives the probability of economic
states from the set Ω.
Consider a frictionless market in discrete time with two primary tradable
assets modeled as stochastic processes S i = (Sti )t∈{0,...,T } for i = 0, 1. Let us recall
the following definitions.
Definition 8.1. A stochastic process A is called adapted with respect to F if the
random variable Ak is Fk -measurable for all k.
We suppose that S i is adapted to F. Assume that S 0 has a strictly positive price.
We will use S 0 as numéraire for discounting, and write the discounted asset as

X := S 1 /S 0 .

Note that the numéraire may be stochastic. We assume that Xk : Ω → R is
P -square-integrable for every k, and write this as Xk ∈ L2 (P ).
Definition 8.2. A stochastic process A is called predictable with respect to F if Ak
is Fk−1 -measurable for all k.
In discrete time, predictability means that the value of the process is known one
step ahead. As an example, think of monthly interest rates, which are set at the
beginning of the month.
Definition 8.3. A martingale is an adapted stochastic process M such that Mt is
integrable, and such that for all s ∈ {0, . . . , T } and all t ∈ {s, . . . , T }

E[Mt | Fs ] = Ms .
Theorem 8.4. (Doob’s decomposition) An integrable adapted stochastic process X
on (Ω, F, F, P ) can be uniquely decomposed as
X =A+M
where A is a predictable process with A0 = 0 and M is an adapted martingale with
M0 = X0 .
48
We give the proof here because it prefigures our main risk minimization algorithm.
The key is to consider conditional expectations of the differences ∆X and ∆A; the
remainder term will turn out to be a martingale.
Proof. Define A by A0 = 0 and, step by step for n > 0, by the estimates

∆An := E[∆Xn | Fn−1 ],

so by induction

An = Σ_{k=1}^{n} E[∆Xk | Fk−1 ].
By construction A is predictable. Define M as the rest term
Mn := Xn − An ,
which is adapted because X and A are adapted. Consider the conditional expectation E[∆Mn |Fn−1 ]. We have
E[∆Mn |Fn−1 ] = E[∆Xn − ∆An |Fn−1 ]
= E[∆Xn |Fn−1 ] − E[∆Xn |Fn−1 ]
= 0.
So M is a martingale.
To prove uniqueness write two decompositions of X as X = A + M = B + N .
Then we have an equality
A − B = N − M.
Looking at the right-hand side we see this is a martingale; looking at the left-hand side we see it is a predictable process. So
(A − B)n−1 = (N − M )n−1 = E[(N − M )n |Fn−1 ] = E[(A − B)n |Fn−1 ] = (A − B)n ,
which shows it is a constant process. As moreover A0 = B0 = 0, uniqueness
follows.
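As a small numerical illustration of this construction (our own example, not from the original text), take a random walk with known drift, where the conditional expectation E[∆Xn | Fn−1 ] is simply the drift µ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random walk with drift: X_n = X_{n-1} + mu + sigma * eps_n.
# The predictable part of the Doob decomposition is
# Delta A_n = E[Delta X_n | F_{n-1}] = mu.
mu, sigma, T = 0.1, 0.3, 50
eps = rng.standard_normal(T)
X = np.concatenate(([0.0], np.cumsum(mu + sigma * eps)))

A = mu * np.arange(T + 1)   # predictable part, A_0 = 0
M = X - A                   # martingale part, M_0 = X_0

# The increments of M are the centered terms sigma * eps_n,
# so their sample mean should be close to zero.
print(np.mean(np.diff(M)))
```

Here the conditional expectation is known in closed form; Section 8.2.2 treats the case where it must be estimated from scenarios by regression.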
Remark 8.5. This decomposition can be provided with some geometric intuition.
The proof shows that Ak+1 is the best estimate at moment in time k, constructed by
summing estimated differences over the time grid. This is similar to constructing a
numerical integral representation of X by summing estimated derivatives. Indeed,
Nelson (1967) suggests as forward stochastic derivative with respect to time (see
Darses and Nourdin, 2007)

Dt X = lim_{∆t↓0} E[Xt+∆t − Xt | Ft ] / ∆t.
Meyer proved in 1962 a continuous version of the Doob decomposition, which is a
profound result in martingale theory.
Consider a discounted payment stream whose size depends on the economic
states. For instance, such a stream arises in an actuarial setting with premiums and
payouts, where both may depend on the economic states. The main idea behind
risk mitigation is to construct a trading strategy that offsets the risks involved;
such a trading strategy then serves as a hedging strategy. We provide the formal
definitions.
Definition 8.6. A payment stream H = (Ht )t∈{0,...,T } is an adapted, real-valued
and P -square-integrable process. For s < t the amount Ht − Hs stands for the
discounted stochastic cashflow (incomes with positive sign, payments with negative
sign) between s and t (including t but excluding s). Thus Ht stands for the
cumulative value up to time t.

At time τ ∈ {0, . . . , T } the process L(H, τ ) := Hτ − H stands for the remainder
of discounted payments minus incomes, so it represents the liabilities (with premia
subtracted). For a time moment t > τ the value L(H, τ )t = Hτ − Ht gives the
liabilities from τ up to t. We see that H + L(H, τ ) = Hτ is Fτ -measurable, so the
future part of the payment stream forms the liabilities (with a minus sign, to comply
with the balance sheet tradition of denoting the liabilities as a positive value).
The provision for the liabilities minus premia is the sum of the expected remaining
discounted payments minus incomes, so it equals

P (H)τ := Σ_{i=τ}^{T−1} E[Hi − Hi+1 | Fτ ] = E[Hτ − HT | Fτ ] = Hτ − E[HT | Fτ ].
Example 8.7. An insurance policy is a payment stream that starts with premium
payments and ends with paying out some benefits. So after the premium income
is paid, Ht is positive, and the payout of the benefits will diminish it. If the net
premium is at an actuarial fair level, then 0 = E[HT |F0 ]. A higher premium can
be seen as a reward for the risk-taking by the insurer.
Example 8.8. A contingent claim is a negative payment stream which is non-zero
only at a single moment in time. Typically we take for this the last moment in our
interval, so it is non-zero at time T . Let V be the market price for hedging this
claim. For the hedger there is the payment stream that jumps up from 0 to level
V at t = 0 and goes down by the claimed amount at t = T . As is well known,
the market price for hedging such a contingent claim is the risk neutral expectation
V := EQ [HT |F0 ], so just as in the previous example, the premium is at such a level
that the expectation of HT is zero, albeit in the risk-neutral measure (see 8.28 and
3.2).
Let us fix for the remainder a payment stream H.
Definition 8.9. An L2 (P )-trading strategy is a pair ϕ = (ϑ, η) where ϑ is an
R-valued predictable process and η is an adapted process such that the value process

Vk (ϕ) := ϑk · Xk + ηk

is P -square-integrable for each k.
Suppose that a company faces a payment stream H and follows the trading
strategy ϕ, so at time k the assets are given by ϕk with value Vk (ϕ). Then the
balance sheet surplus, which consists of assets minus liabilities, is given by
Vk (ϕ) − P (H)k .
Just after time k has passed, the hedging position ϑk is changed into ϑk+1 . So
the change
∆Vk+1 = ϑk+1 ∆Xk+1 + (∆ϑk+1 Xk + ∆ηk+1 )
can be decomposed in the gain ϑk+1 ∆Xk+1 due to the change in the value of X
over the period [k, k + 1], and the re-hedging costs of changing the portfolio
∆ϑk+1 Xk + ∆ηk+1 .
The total of costs over the period from time k to k + 1 is given by the difference of
rehedging costs and the incoming cashflow
∆Ck+1 (ϕ) := ∆ϑk+1 Xk + ∆ηk+1 − ∆Hk+1        (8.1)
           = ∆Vk+1 − ϑk+1 ∆Xk+1 − ∆Hk+1 .    (8.2)
Accumulating leads to the following definitions.

Definition 8.10. The cumulative gains process of a trading strategy ϕ is defined
for t ∈ {0, . . . , T } by

Gt (ϕ) := Σ_{s=1}^{t} ϑs ∆Xs .
Definition 8.11. The cumulative cost process of trading strategy ϕ with respect
to payment stream H is defined at time k by

CkH (ϕ) = Vk (ϕ) − Σ_{j=1}^{k} ϑj ∆Xj − Hk = Vk (ϕ) − Gk (ϕ) − Hk ,        (8.3)

and C0H (ϕ) = V0 (ϕ) = η0 . This last value C0H (ϕ) is called the initial price
of the trading strategy ϕ, as it stands for the cost of acquiring the initial hedge
position ϕ0 . Once this hedge position is in place, ∆CkH (ϕ) − ∆Hk for k > 0 accounts
for the profits and losses of the trading strategy ϕ.
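Definitions 8.10 and 8.11 translate directly into array operations on sampled paths. A minimal Python sketch (the function and variable names are ours, not from the text):

```python
import numpy as np

def gains(theta, X):
    """Cumulative gains G_k = sum_{s=1}^k theta_s * (X_s - X_{s-1}), Definition 8.10.

    theta[s] is the (predictable) position held over the step from s-1 to s;
    theta[0] is unused.
    """
    dX = np.diff(X)
    return np.concatenate(([0.0], np.cumsum(theta[1:] * dX)))

def cost(V, theta, X, H):
    """Cumulative cost C_k^H = V_k - G_k - H_k, formula (8.3)."""
    return V - gains(theta, X) - H

# Buy-and-hold example: hold one unit of X throughout, no payment stream.
X = np.array([100.0, 102.0, 101.0, 105.0])
theta = np.ones_like(X)
H = np.zeros_like(X)
V = X.copy()                      # the portfolio is worth one unit of X
C = cost(V, theta, X, H)
# Buy-and-hold is self-financing: the cost process stays at C_0 = 100.
assert np.allclose(C, 100.0)
```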
Remark 8.12. Often hedging performance at time k is expressed as the relative
cost ∆CkH (ϕ)/Vk (ϕ).
After the last payment is done, there is no value left in the contract, so we
consider only trading strategies with VT (ϕ) = 0.
Definition 8.13. An L2 -strategy ϕ is called zero-achieving if VT (ϕ) = 0 holds
P -almost surely.

Definition 8.14. An L2 -strategy ϕ is called self-financing if the cost process
C H (ϕ) is constant P -almost surely.
Definition 8.15. An L2 -strategy ϕ is called mean-self-financing if the cost process
C H (ϕ) is a martingale under P .
So for a self-financing ϕ, the hedging profits and losses are almost surely zero,
while for a mean-self-financing ϕ, we only expect that the hedging profits and losses
are zero.
Definition 8.16. The local risk process R is defined for k ∈ {0, . . . , T } by R0 = 0
and

∆Rk (ϕ) := E[(∆CkH (ϕ))^2 | Fk−1 ].        (8.4)
In the remainder of the text, we will drop the ϕ if there is no ambiguity.
8.1.2 Local Risk Minimization
The central idea in Föllmer and Schweizer (1989) is that minimizing the local risk
process leads to a trading strategy that makes a good hedge.
Theorem 8.17. (Föllmer-Schweizer) Let the discounted stock process X and the
payment stream H be adapted, square-integrable stochastic processes. There exists
a unique zero-achieving L2 -trading strategy ϕ = (ϑ, η) with minimum local risk for
all k. It is given by

ϑk+1 = Cov(Vk+1 (ϕ) − Hk+1 , ∆Xk+1 | Fk ) / Var(∆Xk+1 | Fk ),      (8.5)
Vk (ϕ) = Hk + E[Vk+1 (ϕ) − Hk+1 − ϑk+1 ∆Xk+1 | Fk ],               (8.6)
ηk = Vk (ϕ) − ϑk Xk .                                              (8.7)

This trading strategy is mean-self-financing.
Proof. The approach is a backward sequential regression that runs as follows. Let
ϑk+2 , . . . , ϑT and Vk+1 , . . . , VT be given. Note the shift in the index, as ϑ is supposed
to be predictable, which is not assumed for ηk nor for Vk . At time k, the values of
Xk and Hk are known as X and H are adapted. So we have
∆Rk+1 = E[(∆Ck+1 )^2 | Fk ]
      = E[(Vk+1 − Vk − ϑk+1 ∆Xk+1 − Hk+1 + Hk )^2 | Fk ],

where in the last expression all variables are given except ϑk+1 and Vk . These are
to be determined such that ∆Rk+1 is at a minimum. This can be done by setting
the derivatives of ∆Rk+1 to zero. We recognize this minimization as an ordinary
least squares regression: minimizing ∆Rk+1 is just finding the best linear estimate
for regressand Vk+1 with regressor ∆Xk+1 and coefficients ϑk+1 and Vk (translated
over Hk ). The minimization yields the expressions

ϑk+1 = Cov(Vk+1 (ϕ) − Hk+1 , ∆Xk+1 | Fk ) / Var(∆Xk+1 | Fk )

and

Vk = Hk + E[Vk+1 (ϕ) − Hk+1 − ϑk+1 ∆Xk+1 | Fk ].
Note that if all ϑk and Vk are known, then ηk is determined as ηk = Vk −ϑk Xk . The
minimization also yields that E[∆Ck+1 |Fk ] = 0, so the cost process is a martingale,
and the trading strategy is mean-self-financing. Altogether, this completes the
proof.
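A single backward step of this regression can be sketched numerically: with samples of ∆Xk+1 and of the target Vk+1 − Hk+1 , formulae (8.5) and (8.6) reduce to sample moments. A minimal illustration (all names and parameters are hypothetical; we take H = 0):

```python
import numpy as np

rng = np.random.default_rng(1)

# One period: hedge a call-type payoff V_{k+1} = max(X_{k+1} - K, 0), with H = 0.
N, K, X_k = 100_000, 1.0, 1.0
dX = 0.2 * rng.standard_normal(N)        # increments of the discounted asset
V_k1 = np.maximum(X_k + dX - K, 0.0)

# (8.5): the conditional minimum variance hedge ratio via sample moments.
theta = np.cov(V_k1, dX)[0, 1] / np.var(dX)
# (8.6): V_k makes the expected hedging cost over the step zero.
V_k = np.mean(V_k1 - theta * dX)

# For this at-the-money payoff with symmetric increments, theta is close to 1/2.
```

Repeating this step backward in time, with the conditional moments estimated by the regressions of Section 8.2, yields the full strategy.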
Remark 8.18. For the single step from k to k + 1, the value of ϑk+1 is the well-known conditional minimum variance hedge ratio (see Johnson (1960) and Markowitz (1952)).
Remark 8.19. Conditioning on Fk the values of Vk and Hk are constant, so we
may include these terms in expression (8.5). We now see that the expression for
ϑk+1 can be geometrically interpreted as a directional derivative of Vk+1 (ϕ) − Hk+1
in the direction ∆Xk+1 where the inner product on L2 is given by the conditional
covariance (see Nunno (2002) for the case where X is a martingale). So we may
geometrically interpret the predictable coefficients in the Föllmer-Schweizer decomposition as a kind of stochastic derivative, but now with respect to the stochastic
process X. The Föllmer-Schweizer decomposition can be seen as an integral representation with predictable derivatives and as residue a minimized martingale that
is orthogonal to the movement ∆X.
Remark 8.20. Tracking a particular benchmark by a trading strategy is a common
hedging task. Let Rb be the return of the benchmark over a given interval [t, t+∆t],
and Rp the return of the portfolio over this interval. The amount Rp −Rb shows how
much the trading strategy is off the exact track of the benchmark, in other words
how much the trading strategy is not self-financing. The tracking performance is
measured by the tracking error E[(Rp − Rb )^2 | Ft ]^{1/2} . Local risk minimization is similar
to a tracking process. If we include the cashflow, the cost process corresponds to
the cumulative tracking error.
8.1.3 Föllmer-Schweizer decomposition
Definition 8.21. The martingales M and N are called strongly orthogonal if the
product M · N is a martingale.
This definition is equivalent to the statement that Cov(∆Mk+1 , ∆Nk+1 |Fk ) = 0
for every k, since
0 = Cov(∆Mk+1 , ∆Nk+1 |Fk )
  = E[∆Mk+1 ∆Nk+1 |Fk ]
  = E[∆(M · N )k+1 |Fk ].
Definition 8.22. Let X and Y be integrable stochastic processes. Let the Doob
decomposition of X be given by X = A + M . A decomposition

Yk = Σ_{j=1}^{k} ϑj ∆Xj + Nk

is called a Föllmer-Schweizer decomposition of Y with respect to X if ϑ is predictable
and N is a martingale that is strongly orthogonal to the martingale part M of X.
Theorem 8.23. Given a payment stream H, a zero-achieving L2 -trading strategy
ϕ = (ϑ, η) with value process Vk (ϕ) has minimum local risk for all k if and only if
the decomposition

Vk (ϕ) − Hk = Σ_{j=1}^{k} ϑj ∆Xj + CkH (ϕ)

is a Föllmer-Schweizer decomposition with respect to X.
Proof. Write the Doob decomposition of X as X = A + M . Since the trading
strategy has minimum local risk, it is mean-self-financing, so the cost process C is
a martingale.
The expression (8.5) for ϑk+1 is determined at time k, so ϑ is predictable. This
expression is equivalent to

0 = Cov(∆Ck+1 , ∆Xk+1 | Fk ) = Cov(∆Ck+1 , ∆Mk+1 | Fk ),        (8.8)

as ∆Ak+1 is known at time k since A is predictable. Both C and M are F-adapted
martingales, so this is equivalent to

0 = Cov(∆Ck+1 , ∆Mk+1 | Fk )
  = E[∆Ck+1 ∆Mk+1 | Fk ]
  = E[∆(C · M )k+1 | Fk ],
in other words, C and M are strongly orthogonal in the L2 -space of martingales.
Remark 8.24. Local risk minimization is not global risk minimization. See the
example in Schweizer (2001).
Definition 8.25. The market price of risk for the discounted security X with Doob
decomposition X = A + M is the predictable process λ defined for k = 1, . . . , T by

λk := E[∆Xk | Fk−1 ] / Var(∆Xk | Fk−1 ) = ∆Ak / E[(∆Mk )^2 | Fk−1 ].
Associate to λ a stochastic process Z by Z0 = 1 and the inductive rule
Zk+1 := Zk (1 − λk+1 ∆Mk+1 ).
Then we have the following property.
Proposition 8.26. The processes Z · X and Z · (V (ϕ) − H) are martingales.
Proof. We give the proof of the second statement:

Vk (ϕ) − Hk = E[Vk+1 (ϕ) − Hk+1 − ϑk+1 ∆Xk+1 | Fk ]                              (8.9)
            = E[(Vk+1 (ϕ) − Hk+1 ) · (1 − (∆Ak+1 / E[(∆Mk+1 )^2 | Fk ]) ∆Mk+1 ) | Fk ]   (8.10)
            = 1/Zk · E[Zk+1 (Vk+1 (ϕ) − Hk+1 ) | Fk ].                           (8.11)
The proof of the first statement uses the same arguments.
If Z happens to be strictly positive, we can take Z as a Radon-Nikodym derivative, yielding an equivalent probability measure. The above result implies that a
change of probability measure defined by this Radon-Nikodym derivative leads to
an equivalent martingale measure Q. It is called the minimal martingale measure
(see Schweizer, 1993).
Remark 8.27. The situation in continuous time is similar: a Föllmer-Schweizer
decomposition is equivalent to a zero-achieving mean-self-financing local-risk-minimizing L2 -trading strategy. The existence of such a strategy can be proved under
the so-called structure condition and bounds on the market price of risk for X.
For a continuous semi-martingale X these conditions are satisfied. See Schweizer
(2001).
Remark 8.28. The market is called complete if every contingent claim is attainable,
in the sense that there is a self-financing L2 -trading strategy ϕ that realizes the
claim. This simplifies the situation: there is no risk left, so the cost process will be
zero apart from an initial cost. By the result of Harrison and Pliska, there is only
one martingale measure, so this must be the minimal martingale measure Q. Moreover,
the price process is a martingale

Vt = EQ [HT | Ft ],

and this uniquely determines the price of the option.
Remark 8.29. To actually compute the local risk minimizing strategy one has to
be able to compute conditional expectations. For example, this is possible if the
conditional distributions are known. If we know the stochastic evolution of X on
a tree, then conditional expectations can be computed as every node has a finite
number of children nodes. We can compute the Doob decomposition directly. For
the Föllmer-Schweizer decomposition we need backward induction and the formulae
(8.5), (8.6), and (8.7).
Remark 8.30. If we compute the Föllmer-Schweizer decomposition on a binary
tree, then locally at every node we get two equations for the two unknowns, so
the system is solved in such a way that Rk = 0. In other words, the trading strategy
is risk-free. As Föllmer and Schweizer (1989) and also Potters et al. (2001) point
out, this should be considered an exceptional situation due to the special binary
structure of the tree. As the famous Black-Scholes approach can be considered a
limit of binary trees, it does deliver a risk-free strategy. But underneath lies the
strong assumption that stock movements can be decomposed into elementary binary
movements that allow a new hedge position to be taken each time. It is doubtful
whether in practice we can observe such elementary movements and re-hedge so
often.
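The two-equations-in-two-unknowns remark can be made concrete at a single node (a sketch in our own notation, assuming the numéraire discount has already been applied):

```python
# One node of a binary tree: the discounted asset moves from x to x*u or x*d,
# and the claim pays c_u or c_d.  Replication requires
#   V + theta * (x*u - x) = c_u   and   V + theta * (x*d - x) = c_d:
# two equations, two unknowns (V, theta), so the local risk R_k is exactly zero.
x, u, d = 100.0, 1.2, 0.8
c_u, c_d = 20.0, 0.0                    # e.g. a call struck at 100

theta = (c_u - c_d) / (x * u - x * d)   # the hedge ratio ("delta")
V = c_u - theta * (x * u - x)           # exact replication value

assert abs(V + theta * (x * u - x) - c_u) < 1e-12
assert abs(V + theta * (x * d - x) - c_d) < 1e-12
```

With three or more successor states the system is overdetermined and a residual cost remains, which is exactly the incomplete-market situation treated above.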
Remark 8.31. Abergel and Millot (2011) notice that the square function in the
definition of the risk process equally penalizes profits and losses. They present an
approach where the square function is replaced by some risk function f that must
satisfy some criteria and may put more weight on losses than profits. In this thesis
we stay with the definition of Föllmer and Schweizer, but we do compute how much
extra capital is required to cover the risk of a loss.
8.2 Monte Carlo Setting
We show how conditional expectations can be approximated when a set of scenarios
is given. For example, such a set of scenarios arises when Monte Carlo simulations
are done. The approach is based on the least squares method developed by Longstaff
and Schwartz (2001) for valuing American options on the basis of Monte Carlo
simulations. In fact, Tilley (1993) presented a sorting and bundling algorithm, and
was the first to show that Monte Carlo simulations could be used for determining
American option values. Carrière (1996) was the first to use regression with help
of a set of base functions for this purpose. But Longstaff and Schwartz made this
much more practical and more easily accessible.
Their approach is based on two approximations: (1) replace the infinite expansion
of a function in a Hilbert L2 -space of functions by a truncation, (2) approximate
conditional expectations and variances by sample means and variances computed
on the set of scenarios. So we rely on convergence in the infinite dimensional Hilbert
space of functions, and on the central limit theorem. After these approximations
the computation boils down to a standard linear regression. Potters et al. (2001)
show how this approach can be used in a setting of risk minimization and point out
that the method can handle quite exotic and path-dependent options. For instance,
Petrelli et al. (2008) use this method for hedging cliquet options.
Our presentation will be a little more general as in our setup we consider a
general numéraire and a payment stream that may depend on survival and other
stochastic variables. To illustrate the approach, first we show how the Doob decomposition can be approximated. It could very well be that this is a new result
as we did not find it in the literature. We can use it, for instance, to compute the
market price of risk for a set of scenarios. Then we apply the same approach to the
valuation of a martingale, which is the setting of Longstaff and Schwartz. From that
point on it is only a small step to apply it to the Föllmer-Schweizer decomposition.
As a byproduct of the regression, we obtain an empirical distribution for the cost
process and the risk process.
An important advantage of this approach is that there are no a priori conditions.
Any set of scenarios will give results, and convergence is typically quite fast. The
method is well suited for incomplete markets and involved payment streams or
exotic options. A disadvantage is that we do not get a closed formula. Moreover,
it should be said that the selection of base functions sometimes needs to be done
carefully. But as Longstaff and Schwartz (2001) write:

“Our experience suggests that the number of basis functions necessary
to approximate the conditional expectation function may be very
manageable even for high-dimensional problems.”
So the effect of the so-called curse of dimensionality does not seem to be so strong
in this approach. We think this is a potentially very important advantage.
8.2.1 Base functions
Consider the Hilbert space L2 ([a, b]) of square-integrable functions on an interval
from a to b (the interval boundaries may be infinite). It is well known that it can
be equipped with a measure µ which defines an inner product

⟨f, g⟩ := ∫_a^b f · g dµ.
A standard example of a measure is µ(dx) = w(x)dx for a continuous function w.
The space is complete under taking limits and allows many choices of countable
bases. Let {Pj (x)}0≤j<∞ be such a basis for the Hilbert space L2 ([a, b]). So for
any function f (x) we can write an infinite expansion

f (x) = Σ_{j=0}^{∞} βj Pj (x),

where equality holds up to a set of measure zero. If the basis is orthogonal, then
the coefficients are given by

βj = ⟨f, Pj ⟩ / ⟨Pj , Pj ⟩.

By truncating the expansion at level M , we can approximate the function f (x) by
a finite series

f (x) ≈ Σ_{j=0}^{M} βj Pj (x).

Geometrically, the truncation boils down to the projection on the finite-dimensional
subspace spanned by the functions P0 , . . . , PM . We assume that P0 is a constant
function.
For the choice of a basis, there is quite some freedom. Longstaff and Schwartz
mention the orthogonal polynomials from Chapter 22 of the Handbook of Mathematical
Functions by Abramowitz and Stegun (1972), like Legendre, Laguerre and Hermite
polynomials, and also Fourier series or monomials in the state variables. In the
setting of non-parametric regression, Carrière (1996) proposes splines and the local
polynomial smoothers developed by Cleveland et al. Indicator functions on hypercubes
and local polynomials on Voronoi partitions are considered by Gobet et al. (2005).
Potters et al. (2001) report they have used piecewise quadratic functions with
adaptive breakpoints, but do not specify how the breakpoints are chosen. Petrelli
et al. (2008) use for hedging cliquet options the method of Potters and Bouchaud
with piecewise Hermite cubic base functions that they fully describe in an appendix.
Most authors report that the method is quite robust under a change of base functions.
Following Longstaff and Schwartz, in our implementation we choose weighted
Laguerre, Legendre and Hermite polynomials, where the weight is the square root of
the function w that comes from the measure, so that these base functions are
orthogonal under the standard measure dx. We also tried the piecewise cubic functions of
Petrelli et al. (2008), but convergence turned out to be slower.
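As an illustrative sketch of such a basis regression (using NumPy's polynomial module; the target function, degree, and sample size are our own choices, not from the thesis), one can regress noisy observations on the first few probabilists' Hermite polynomials:

```python
import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(2)

# Noisy observations of f(X) = max(X, 0) at standard normal design points.
N, M = 50_000, 4
X = rng.standard_normal(N)
Y = np.maximum(X, 0.0) + 0.1 * rng.standard_normal(N)

# Design matrix with columns He_0(X), ..., He_M(X) (probabilists' Hermite).
basis = np.stack([He.hermeval(X, np.eye(M + 1)[j]) for j in range(M + 1)], axis=1)
beta, *_ = np.linalg.lstsq(basis, Y, rcond=None)
fitted = basis @ beta

# The truncated expansion approximates E[Y | X] = max(X, 0) on the bulk
# of the distribution; the truncation error shows up mostly in the tails.
```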
8.2.2 Approximate Doob decomposition
Let the evolution of an integrable process X be given by a set of N scenarios that
all start with the same X0 . Denote by Xk^i the value at time k in the i-th scenario.
We will give an algorithm to approximate the Doob decomposition by

X = Ã + M̃,

with Ã predictable and M̃ an adapted process that is approximately a martingale.
Before we go into the details, we give some motivation. In the exact Doob
decomposition, the predictable term is given by

∆Ak+1 = E[∆Xk+1 | Fk ].

This conditional expectation is Fk -measurable, so it may depend on the information
up to time k. For ease of exposition, we assume that the process X satisfies the
Markov property, which implies that the conditional expectation only depends on
Xk . The idea is to write this as a function of k and Xk :

E[∆Xk+1 | Fk ] = f (k, Xk ).

For this function, we write the truncated expansion

∆Ãk+1 := Σ_{j=0}^{M} βj (k)Pj (Xk ),
with βj (k) still to be determined. The objective is to do that in such a way that
the expected values coincide,

E[∆Xk+1 − ∆Ãk+1 | Fk ] = 0,

and that the variance is minimized,

min_{βj} Var(∆Xk+1 − ∆Ãk+1 | Fk ).

However, we do not know the conditional distribution of Xk+1 . All we have is a set
of scenarios, so for each k we have samples Xk+1^i . By the central limit theorem, the
sample mean of (Xk+1^i )1≤i≤N will be approximately normally distributed. It is well
known that the sample mean

x̄ := (1/N ) Σ_{i=1}^{N} xi

and sample variance

s^2 := (1/(N − 1)) Σ_{i=1}^{N} (xi − x̄)^2

are the minimum variance unbiased estimators in case of a normal distribution.
So it makes sense to estimate the mean and variance in the above expressions by
these estimators. Please note that the Markov-property is not essential. We may
include other Xi as well if we add base functions accordingly, although this will
add to the computational complexity. But as we are working with scenarios, path
dependence may often be circumvented by adding some extra variables at time k.
For instance, for an Asian option depending on an average of Xi , it is sufficient to
add this average as extra variable.
As for the martingale part of the Doob decomposition, we will see that for all k

Σ_{i=1}^{N} ∆M̃k^i = 0.
This sample mean regarded at time k estimates the mean E[∆M̃k+1 | Fk ], so we see
that

E[∆M̃k+1 | Fk ] ≈ 0.
We will encounter this kind of approximation several times, so we formalize the
notion in the following definition.
Definition 8.32. Given a set N = (ωk^i ) (i = 1, . . . , N ; k = 0, . . . , T ) of scenarios,
we say that a process A is N -approximately a martingale when for every k

Σ_{i=1}^{N} ∆Ak^i = 0,

where Ak^i is the value of the process A in the i-th scenario of N at time moment k.
In the same vein, two processes A and B that are N -approximately martingales
are N -approximately strongly orthogonal when for every k

Σ_{i=1}^{N} ∆Ak^i · ∆Bk^i = 0.
Definition 8.33. For each k write

∆Ãk+1 := Σ_{j=0}^{M} βj (k)Pj (Xk ),

and determine βj (k) by minimizing the expression

Σ_{i=1}^{N} ( ∆Xk+1^i − Σ_{j=0}^{M} βj (k)Pj (Xk^i ) )^2 .

In fact, we are regressing ∆Xk+1 on {Pj (Xk ) | j = 0, . . . , M }. Define the process Ã
by Ã0 = 0 and for all k

Ãk := Σ_{j=1}^{k} ∆Ãj .

Define the process M̃ by

M̃ := X − Ã.
Proposition 8.34. (Monte Carlo Doob decomposition) Let the evolution of an
integrable process X be given by a set N = (ωk^i ) (i = 1, . . . , N ; k = 0, . . . , T ) of
scenarios that all start with the same X0 . There is a decomposition

X = Ã + M̃

with

1. Ã a predictable process with Ã0 = 0;

2. M̃ an adapted process with M̃0 = X0 ;

3. sample mean ∆M̃ = 0 at every time step, so M̃ is N -approximately a martingale.
Proof. First the β_j(k) are determined by the sequence of regressions. Given the β_j(k), by definition Ã_{k+1} only depends on the values of X_j up to j = k, so it is predictable. At time k the value of X_k is known, as well as the value of Ã_k. So
\[ \tilde{M}_k = X_k - \tilde{A}_k \]
is well-defined and adapted. The samples of ΔM̃_{k+1} form the error term of the regression. As the base functions contain the constant function, we obtain that
\[ \sum_{i=1}^{N} \Delta\tilde{M}^i_{k+1} = 0. \]
In other words, the sample mean is zero.

So from the scenario set, regression leads to the predictable best estimate. The error terms reflect the martingale part.
Note that the treatment of the information represented by the filtration is delicate. We are using the complete information of the scenario set to determine the coefficients β_j. Once these are given, the resulting Ã is predictable and M̃ is adapted. So for new scenarios of X, the values for Ã_{k+1} and M̃_k can be computed at time k.
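The regression step of Proposition 8.34 can be sketched in code. The following is a hedged Python illustration (the thesis implementation itself is in R) using a monomial basis and a synthetic drifted random walk as scenario set; the name `doob_decompose` is ours, not the thesis's.

```python
import numpy as np

def doob_decompose(X, degree=2):
    """Approximate Doob decomposition X = A + M from a scenario set.

    X has shape (N, T+1): row i is scenario i, column k is time k.
    A is built from the regression estimates of the conditional
    increments; M = X - A has increments with zero sample mean.
    """
    N, T1 = X.shape
    A = np.zeros_like(X, dtype=float)
    for k in range(T1 - 1):
        # Monomial basis P_j(X_k) = X_k^j, j = 0..degree (constant included).
        P = np.vander(X[:, k], degree + 1, increasing=True)
        dX = X[:, k + 1] - X[:, k]
        beta, *_ = np.linalg.lstsq(P, dX, rcond=None)
        A[:, k + 1] = A[:, k] + P @ beta  # predictable increment estimate
    M = X - A
    return A, M

rng = np.random.default_rng(0)
N, T = 500, 10
steps = rng.normal(0.1, 1.0, size=(N, T))     # drifted random walk scenarios
X = np.concatenate([np.zeros((N, 1)), np.cumsum(steps, axis=1)], axis=1)
A, M = doob_decompose(X)
# Sample means of the martingale increments vanish up to machine precision:
print(np.max(np.abs(np.diff(M, axis=1).mean(axis=0))))
```

Because the constant function is in the basis, the regression residuals sum to zero at every k, which is exactly the N-approximate martingale property of M̃.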
8.2.3 Continuation Value for Martingales
As before, let the evolution of an integrable process X be given by a set N of N
scenarios that all start with the same X0 . Let V be a martingale whose future
value VT at time T depends functionally on the values of a stochastic process X up
to time T , but for which a priori other values Vt are not known.
We approximate the value V_t with the following recursion. Set Ṽ^i_T = V^i_T for i = 1, . . . , N. Suppose Ṽ^i_{k+1} given. Set
\[ \tilde{V}_k := \sum_{j=0}^{M} \beta_j(k) P_j(X_k) \]
where the β_j(k) are determined by minimizing the expression
\[ \sum_{i=1}^{N} \Big( \tilde{V}^i_{k+1} - \sum_{j=0}^{M} \beta_j(k) P_j(X^i_k) \Big)^2. \]
This leads to a new expression
\[ \tilde{V}^i_k := \sum_{j=0}^{M} \beta_j(k) P_j(X^i_k). \]
Proposition 8.35. (Continuation Value Decomposition) Let the evolution of an integrable process X be given by a set N = (ω^i_k), i = 1, . . . , N, k = 0, . . . , T, of scenarios that all start with the same X_0. The process Ṽ is adapted and it is N-approximately a martingale.

Proof. Just as above, minimizing the sample variance yields that the sample mean of ΔṼ_{k+1} is zero, so Ṽ is N-approximately a martingale. Given the β_j, the expression \( \sum_{j=0}^{M} \beta_j(k) P_j(X_k) \) is F_k-measurable, so Ṽ is adapted to F.
After a change to an equivalent martingale measure, the value process of a
discounted European option is a martingale. So this proposition can be used to
compute the continuation value when a set of scenarios is given. In particular, it
approximates a price V0 of the option under this probability measure.
This very proposition is key in the determination of American option values in the approaches of Carrière (1996) and of Longstaff and Schwartz (2001). After the change of measure, the value is determined by replacing at every stage Ṽ_k by the maximum of the intrinsic value and the continuation value. Clément et al. (2002) prove convergence of the algorithm for American options, also when the time step decreases. For an up-to-date review of regression-based Monte Carlo methods for pricing American options, see Kohler (2010).
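The backward recursion can be sketched as follows. This is a hedged Python illustration (the thesis code is in R) for the discounted European put of the running example (S0 = 100, K = 120, r = 0.04, σ = 0.25, maturity 5 years), with a monomial basis rather than the Laguerre basis; `continuation_value` is an illustrative name.

```python
import numpy as np

def continuation_value(X, VT, degree=3):
    """Backward regression recursion of Proposition 8.35.

    X:  (N, T+1) scenario values of the state process.
    VT: (N,) terminal values of the martingale V.
    Returns V with V[:, k] the fitted continuation value at time k.
    """
    N, T1 = X.shape
    V = np.zeros((N, T1))
    V[:, -1] = VT
    for k in range(T1 - 2, -1, -1):
        # Regress V_{k+1} on the basis P_j(X_k) = X_k^j, j = 0..degree.
        P = np.vander(X[:, k], degree + 1, increasing=True)
        beta, *_ = np.linalg.lstsq(P, V[:, k + 1], rcond=None)
        V[:, k] = P @ beta
    return V

# Risk-neutral GBM scenarios for the discounted European put.
rng = np.random.default_rng(1963)
N, steps, T = 2000, 50, 5.0
dt, r, sigma, S0, K = T / steps, 0.04, 0.25, 100.0, 120.0
Z = rng.normal(size=(N, steps))
logret = (r - sigma**2 / 2) * dt + sigma * np.sqrt(dt) * Z
S = np.concatenate([np.full((N, 1), S0),
                    S0 * np.exp(np.cumsum(logret, axis=1))], axis=1)
payoff = np.exp(-r * T) * np.maximum(K - S[:, -1], 0.0)  # discounted payoff
V = continuation_value(S, payoff)
print(V[0, 0])  # price estimate at time 0
```

Since all scenarios share the same X_0 and the basis contains the constant, the fitted value at time 0 is constant across scenarios and equals the plain Monte Carlo mean of the discounted payoff.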
8.2.4 Approximate Föllmer-Schweizer Decomposition
As before, let the evolution of an integrable process X starting at X0 at time k = 0
be given by a set of N scenarios. Let H be a payment stream. We write
\[ \tilde{V}_k := \sum_{j=0}^{M} \beta_j(k) P_j(X_k) \]
and
\[ \tilde{\vartheta}_{k+1} := \sum_{j=0}^{M} \gamma_j(k) P_j(X_k). \]
The Föllmer-Schweizer approach is a backward recursive minimization of
\[ \mathrm{Var}(\Delta V_{k+1} - \vartheta_{k+1} \Delta X_{k+1} - \Delta H_{k+1} \mid X_k). \]
On the basis of the scenario set, we now minimize as above
\[ \sum_{i=1}^{N} \Big( \tilde{V}^i_{k+1} - \Delta H^i_{k+1} - \sum_{j=0}^{M} \beta_j(k) P_j(X^i_k) - \sum_{j=0}^{M} \gamma_j(k) P_j(X^i_k)\, \Delta X^i_{k+1} \Big)^2. \]
The determination of β_j(k) and γ_j(k) leads to a trading strategy ϕ̃ that is an approximation of the local risk minimizing trading strategy. As in the case of the Doob decomposition, the regression error term is approximately a martingale and has zero sample mean. Here it concerns precisely the cost process of the trading strategy ϕ̃.
Proposition 8.36. Let the evolution of an integrable process X be given by a set N = (ω^i_k), i = 1, . . . , N, k = 0, . . . , T, of scenarios that all start with the same X_0. In this setting, it holds that:
1. The hedge coefficient ϑ̃ is predictable;
2. The process Ṽ is adapted;
3. Together they lead to a trading strategy ϕ̃ = (ϑ̃, η̃) with
\[ \tilde{\eta}_k := \tilde{V}_k - \tilde{\vartheta}_k \cdot X_k \]
that can be seen to "approximate" the optimal local risk minimizing trading strategy in the following sense:
4. The cost process C^H(ϕ̃) is N-approximately a martingale; and
5. The cost process C^H(ϕ̃) is N-approximately strongly orthogonal to the N-approximate martingale part of the Doob decomposition of X.
Proof. Once the sequential regression is done, the β_j(k) and γ_j(k) are fixed and the first two statements follow from the definitions above.
By definition, the increment of the cost process of ϕ̃,
\[ \Delta C^H(\tilde{\varphi})^i_k = \tilde{V}^i_{k+1} - \Delta H^i_{k+1} - \sum_{j=0}^{M} \beta_j(k) P_j(X^i_k) - \sum_{j=0}^{M} \gamma_j(k) P_j(X^i_k)\, \Delta X^i_{k+1}, \]
is the error term in the regression. So the sample mean is zero; in other words, C^H(ϕ̃) is N-approximately a martingale. Moreover, minimizing the cost process term ΔC^H(ϕ̃) leads to an expression for ϑ̃_{k+1} just as in (8.8) that implies
\[ 0 = \mathrm{Cov}_N(\Delta C^H(\tilde{\varphi})_k, \Delta X_k) = \sum_{i=1}^{N} \Delta C^H(\tilde{\varphi})^i_k \cdot \Delta\tilde{M}^i_k \]
where M̃ is the N-approximate martingale part of X. So the cost process C^H(ϕ̃) is N-approximately strongly orthogonal to M̃.
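A single backward step of the approximate Föllmer-Schweizer recursion amounts to one joint regression. The following hedged Python sketch uses synthetic one-step data and a monomial basis (`fs_step` and the toy inputs are ours): the residual, which samples the cost increment, has zero sample mean and zero sample covariance with the increment ΔX, in line with Proposition 8.36.

```python
import numpy as np

def fs_step(Xk, dX, target, degree=2):
    """One backward step of the approximate Foellmer-Schweizer recursion.

    Jointly regress target = V_{k+1} - dH_{k+1} on the basis P_j(X_k)
    and on P_j(X_k) * dX_{k+1}; the fitted parts give the value V_k and
    the hedge ratio, the residual samples the cost increment.
    """
    P = np.vander(Xk, degree + 1, increasing=True)
    design = np.hstack([P, P * dX[:, None]])   # [beta block | gamma block]
    coef, *_ = np.linalg.lstsq(design, target, rcond=None)
    beta, gamma = coef[:degree + 1], coef[degree + 1:]
    Vk = P @ beta                   # fitted value at time k
    hedge = P @ gamma               # hedge ratio theta_{k+1} at X_k
    residual = target - Vk - hedge * dX        # cost increment samples
    return Vk, hedge, residual

rng = np.random.default_rng(7)
n = 1000
Xk = 100.0 * np.exp(rng.normal(0.0, 0.2, n))   # toy state at time k
dX = Xk * rng.normal(0.0, 0.05, n)             # toy increment to k+1
target = np.maximum(120.0 - (Xk + dX), 0.0)    # toy V_{k+1}, with dH = 0
Vk, hedge, res = fs_step(Xk, dX, target)
print(res.mean(), np.mean(res * dX))
```

Because the constant function and the column 1 · ΔX are both in the joint design matrix, the least squares residual is orthogonal to them, which gives the two approximate properties of the cost process.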
As for convergence, Gobet et al. (2005) present a sequential regression-based Monte Carlo method for solving forward-backward stochastic differential equations. They prove several convergence results, among which convergence for the setting of Potters et al. (2001).
Potters and Bouchaud compare the risk neutral continuation value method and the above local risk minimization method. They observe that in practice the latter converges much faster. The reason for this is that the included hedge term acts as a control variate in the Monte Carlo setting. The variation in the process is absorbed by the hedge term, as far as can be done predictably. The sequence of regressions means that once an error is introduced, it will propagate. So it is important to check the quality of the regression at each step, for instance by inspecting the measure R2.
It can happen that the matrices that are inverted in the regression process are nearly singular. This gives rise to numerical instability problems, and it is advisable to use the singular value decomposition, where cut-off levels for the reciprocals of small singular values can be provided (see Press et al., 1988). As the theoretical Föllmer-Schweizer decomposition does not depend on these computational parameters, a result that is sensitive to changes in these computational parameters should be distrusted.
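The singular value cut-off can be illustrated with numpy's pseudo-inverse, where `rcond` discards singular values below a given fraction of the largest one, capping the reciprocals used in the inversion. A sketch with two almost collinear basis columns and synthetic data:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=200)
# Two nearly identical basis columns make the design almost rank deficient.
design = np.column_stack([np.ones_like(x), x,
                          x + 1e-12 * rng.normal(size=200)])
y = 2.0 + 3.0 * x + 0.1 * rng.normal(size=200)

# rcond=1e-8 cuts off singular values below 1e-8 times the largest one,
# so the tiny singular value of the collinear pair is discarded.
coef = np.linalg.pinv(design, rcond=1e-8) @ y
fitted = design @ coef
print(coef)
```

The fit itself is stable: the slope mass is split over the two collinear columns, but their sum recovers the true coefficient, and the fitted values are insensitive to the near-degeneracy.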
Overall, the approximative local risk minimization method has clear advantages, because it describes the complete picture. A dynamic hedge strategy is proposed which is probably hard to beat, a distribution of the real costs is predicted, and a price of the initial hedging strategy is provided. The method needs a careful choice of base functions and time step, and use of the singular value decomposition. Results will of course only be as realistic as the scenarios. The method works for any stochastic payment stream, so it may be applied to a single policy as well as to a complete portfolio.
9 Appendix: Implementation
In answering the research questions (1.3) we arrived in (3.7) at a valuation framework that supports the local risk minimization hedging strategy. We have implemented
this framework in the statistical programming language R. In this chapter, we discuss the object-oriented way of implementing the framework, and explore the way it
works by considering a plain vanilla put option.
In the second part of this chapter, we specify the actual models that we will use in
our implementation: geometric Brownian motion and jump-diffusion for equity, the
Hull-White one-factor model for interest, and the Lee-Carter model for mortality.
Our simulations will be based on these models. Perhaps it is needless to say, but
the framework will work for any other choice of models, as long as scenarios can be
generated.
9.1 Object-Oriented Implementation
In a software implementation we need a versatile platform that matches the valuation framework in the sense that it can handle diverse valuation methods for
various financial objects, it can accommodate diverse model choices and it allows
the generation of scenario simulations. From a computer science point of view, the
valuation framework is well suited for an object-oriented approach.
In an object-oriented programming environment, objects may be defined as instances of classes; these classes are organized in a hierarchy. Function calls, called
methods, can be defined on a generic class level while their actual implementation
may depend on the more specific subclass. In our situation, we would like to head for
a generic “value”-function, whose particular implementation depends on the specific
financial object, on the model choices, and the desired calculation algorithm.
A natural choice to realize such a flexible and demanding task is the statistical programming environment R (see CRAN, 2012). It is an interactive programming environment with impressive support for statistics; it originated from the S project and is geared towards object-oriented programming.
The advantage of this choice is that it results in clean, modular programming
code that consists of small fragments. A lot of case-switching is taken care of by the
object-oriented structure. Such code is also easier to extend and to maintain. A disadvantage is that some programming overhead is necessary, and that changes in the
object-oriented design take care and time. But once the object-oriented structure
is sound, this disadvantage is outweighed by the versatility and maintainability.
[Figure 12: Class Hierarchy for Securities — Security, with subclasses Stock, Bond (ZeroCouponBond) and LifeInsurancePolicy (FixedIncomePolicy, UnitLinkedPolicy, Endowment, Annuity)]
[Figure 13: Class Hierarchy for Models — Model, with EquityModel (GeometricBrownian, JumpDiffusion), InterestModel (FixedRate, YieldCurve, HullWhite) and MortalityModel (MortalityTable, LeeCarter)]
9.2 Classes and Methods
To give an impression of the implementation, we will look at an example. A put
option on an equity share is defined by
stock1 <- new_Stock_GeometricBrownian(startingvalue=100,
mu=0.07, sigma=0.25)
put1 <- new_EuropeanPutOption(maturity=5, Strike=120)
where the constructor methods for stock and put options are called. The classes are organized in a class hierarchy; for instance, zero-coupon bonds are a subclass of bonds. For financial securities, we have implemented the class hierarchy of Figure 12.
Given a constant interest rate of 0.04 we can define a market model
r.1 <- new_FixedRate(0.04)
mm.1 <- new_MarketModel(r.1, stock1)
The closed form valuation of put1 can be called by
value(put1, mm.1)
and R will print the initial price of the hedge and the hedge coefficient
[1] Price: 20.9561312911024
[1] Hedge: -0.3778498875296
The price is computed with the help of the Black-Scholes formula, and the hedge coefficient is the Delta-hedge. Next we do a valuation by Monte Carlo simulation, that is, by computing the expectation in a risk neutral setting. First we generate Monte Carlo data and fix the seed of the pseudo random number generator to be able to repeat the computation exactly.
[Figure 14: Profits and Losses per Scenario]
setup.mc <- new_MonteCarloSetup(nrOfSimulations=1000,
stepSize=0.1, time=10, seed=1963)
value(put1, mm.1, setup.mc)
It is sufficient to add the Monte Carlo setup to the call. It yields
[1] Price: 20.5092413559576
Indeed, this method does not compute a hedge coefficient. The local risk minimization, here with the Laguerre basis functions up to degree 6, is called as follows
setup.lrm <- new_LRMSetup(setup.mc, basis="Laguerre", degree=6)
value(put1, mm.1, setup.lrm)
it returns with
[1] Price: 20.9319462197389
[1] Hedge: -0.375706393034422
We see that in this example, convergence is much better than for the Monte Carlo
simulation. This is typical for this algorithm as the hedging subsumes most of the
variance. Moreover, an initial hedge coefficient is returned.
In fact, the value method does more. It computes a valuation object that contains all relevant information: not only the initial price and hedge coefficient, but also the complete hedging strategy, the fit quality in terms of an R2 coefficient for each step, the distribution of profits and losses for the given scenarios, the payment stream, and the actual evolution of stock and interest.
For instance, we may inspect the profits and losses for all scenarios. Each scenario is represented by a colored line. The losses are positive in the graph, the profits negative. Note that the profits and losses are quite modest in size. This was to be expected since the market is complete. Moreover, note that the biggest profits and losses occur at the last time step. This is due to the fact that it is hard to approximate the piecewise linear pay-off function with the smooth Laguerre polynomials. Reducing the time step will reduce the hedging profits and losses along the way, but not this non-smoothness effect. A suggestion is to incorporate the pay-off function in the base functions; this will probably yield a better result in this case, but the basis is then no longer generically suited for an arbitrary payment stream. We obtain the fit quality in Figure 15.

[Figure 15: R2 Fit Quality]

We see that all fits are quite good, but the last fit is the worst, as we already saw in the profits and losses distribution. In the next chapter we will use this value method for payment streams coming from life insurance policies with embedded options and guarantees.
9.3 Choice of Models
For the setting of this thesis, a selection of models has been implemented that at
least displays the stochastic nature of the risk factors and leads to incompleteness
of the market. The class hierarchy of models in Figure 13 gives an overview.

[Figure 16: Hull-White short rate simulations]
9.3.1 Interest
[Figure 17: Hull-White spot rate simulations]
As far as the interest rate models are concerned, the FixedRate class assumes a constant interest rate, the YieldCurve class models an interest term structure, and the HullWhite class is governed by equation (4.7). These equations form the Hull-White one-factor model (Hull and White, 1994) for the short rate:
\[ dr_t = \alpha(\theta(t) - r_t)\,dt + \sigma\,dW_t, \tag{9.1} \]
where θ(t) is a function determined by the term structure, Wt is a Brownian motion,
and parameters α and σ determine mean-reversion and volatility. See the original
paper and Brigo and Mercurio (2006) for more details on this model.
Let us take the values α = 0.35 and σ = 0.025, as in Boyle and Hardy (2003). In Figure 16 we show a hundred simulations of the short rate according to this model. On the one hand, the paths are quite erratic; on the other hand, the mean reversion is visible, because the bandwidth seems to be quite stable. This leads to the simulated spot rates in Figure 17. The yield curve is also displayed, and we see how the spot rate simulations wrap around this curve.
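A minimal Euler discretization of (9.1) reproduces the mean-reverting behaviour seen in Figure 16. This hedged Python sketch assumes a flat θ(t) = 0.04 for simplicity (the thesis determines θ from the term structure), with α and σ as in Boyle and Hardy (2003):

```python
import numpy as np

rng = np.random.default_rng(1963)
alpha, sigma, theta, r0 = 0.35, 0.025, 0.04, 0.02
n_paths, n_steps, dt = 100, 600, 0.1           # horizon of 60 years
r = np.empty((n_paths, n_steps + 1))
r[:, 0] = r0
for k in range(n_steps):
    dW = np.sqrt(dt) * rng.normal(size=n_paths)
    # Euler step of dr = alpha * (theta - r) dt + sigma dW.
    r[:, k + 1] = r[:, k] + alpha * (theta - r[:, k]) * dt + sigma * dW
print(r[:, -1].mean())  # close to theta, by mean reversion
```

The cross-sectional mean drifts from r0 toward θ at rate α, while the stationary bandwidth is governed by σ²/(2α), which is the stable band visible in the figure.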
[Figure 18: Geometric Brownian motion]
[Figure 19: Jump diffusion]
9.3.2 Equity
The well-known geometric Brownian model, originally due to Samuelson (1973), is of the form
\[ dS_t / S_t = \mu\,dt + \sigma\,dW_t \]
where W_t is a Brownian motion, µ is the rate of return and σ is the volatility.
The jump-diffusion model, due to Merton (1976), adds to this a compound Poisson jump process. Define a Poisson distributed process N_t with intensity λ, so λ is the mean number of arrivals per unit of time. Let J denote the jump ratio, so S_t jumps from S_t to J S_t when dN_t = 1, and the relative change in asset price is J − 1. We choose for log(J) a normal distribution with variance σ_J² and mean equal to µ_J = −σ_J²/2. Then we have that
\[ E[J - 1] = 0. \]
The differential equation governing the jump diffusion process is
\[ \frac{dS_t}{S_t} = \mu\,dt + \sigma\,dW_t + (J - 1)\,dN_t. \]

[Figure 20: Jumps in a jump diffusion process]
So µ and σ determine the drift and volatility of the diffusion, and the jumps are
caused by the last term.
In Figure 19 we show a jump diffusion process with a similar market price of risk
as the geometric Brownian motion above. And in Figure 20 we show pure jumps,
so a process with zero diffusion part.
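Paths of this kind can be simulated by combining exact geometric Brownian steps with compound Poisson jumps. A hedged Python sketch with illustrative parameters (λ and σ_J are ours, not a calibration from the thesis):

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, lam, sigma_J = 0.07, 0.25, 0.5, 0.15
n_paths, n_steps, T = 2000, 250, 10.0
dt = T / n_steps
S = np.full(n_paths, 100.0)
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.normal(size=n_paths)
    dN = rng.poisson(lam * dt, size=n_paths)          # jump counts per step
    # Sum of dN i.i.d. jumps with log J ~ N(-sigma_J^2/2, sigma_J^2),
    # so E[J - 1] = 0 and the jumps do not shift the mean growth rate.
    logJ = rng.normal(-sigma_J**2 / 2 * dN, sigma_J * np.sqrt(dN))
    S = S * np.exp((mu - sigma**2 / 2) * dt + sigma * dW + logJ)
print(S.mean())
```

Because E[J − 1] = 0, the compound jump factor has unit expectation, and the average terminal value stays near S_0 e^{µT} as for the pure diffusion.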
9.3.3 Mortality
We take as starting point the mortality of females in the period of 1950 up to 2000.
In Figure 21 for each year a graph is drawn that represents the death rates per age.
The coloring is of rainbow type: the oldest years in red, and the most recent years
in purple. For a deterministic generational table a mortality-table class is defined.
We are also considering stochastic models as future mortality is uncertain. We opt
for the model from Lee and Carter (1992). It is a popular choice that can be easily
implemented, often with good results. We believe it is sufficient for the purpose
of this thesis, but our framework can handle other models as well. Let q_{x,t} be the central death rate for age x at time t. The Lee-Carter model is governed by the equation
\[ \log q_{x,t} = a_x + b_x \kappa_t + \varepsilon_{x,t}, \]
where the a_x coefficients describe the average shape of the age profile, and the b_x coefficients describe the pattern of deviations from this age profile when the parameter κ_t varies with t. Here the model is fitted to the Dutch mortality data with the help of the R-package "demography" (CRAN, 2012, written by R. Hyndman). Forecasting with the Lee-Carter model is based on an estimate of the drift δ, which
is the change of κ_t over the years. A common approach is to take the first and last value of κ and extrapolate linearly between them. The historical deviations of the intermediate values from this interpolation yield an estimate of the variance of the drift.

[Figure 21: Female Death Rates (1950-2000)]
When we generate a simulation on the basis of the Lee-Carter model, we draw random numbers for ε_{x,t} and for δ from a normal distribution with zero mean and a variance that corresponds to the historical data. Next a sample is drawn from the simulated mortality distribution. The sample size corresponds to the number of policies that we consider at the same time.
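The forecasting step can be sketched as a random walk with drift for κ_t. This Python illustration uses synthetic a_x, b_x and drift values, not the coefficients fitted to the Dutch data with the demography package:

```python
import numpy as np

rng = np.random.default_rng(2013)
ages = np.arange(0, 101)
a = -9.0 + 0.07 * ages                     # synthetic average log rates
b = np.full(ages.shape, 1.0 / ages.size)   # synthetic deviation pattern
kappa0, delta, sigma_delta = 0.0, -1.5, 0.5
n_scen, horizon = 100, 40

kappa = np.empty((n_scen, horizon + 1))
kappa[:, 0] = kappa0
for t in range(horizon):
    # Random walk with drift: kappa_{t+1} = kappa_t + delta + noise.
    kappa[:, t + 1] = kappa[:, t] + delta + sigma_delta * rng.normal(size=n_scen)

# Simulated central death rates per age at the forecast horizon.
log_q = a[None, :] + b[None, :] * kappa[:, -1][:, None]
q = np.exp(log_q)
print(q.shape)  # → (100, 101)
```

With a negative drift, κ declines scenario by scenario, so the simulated death rates fall below the base profile a_x on average, as in the fan of curves of Figure 23.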
[Figure 22: Forecast from Lee-Carter model (2010-2050)]

[Figure 23: Simulations based on Lee-Carter model]
10 Acknowledgements
First of all, I would like to thank my thesis supervisor Michel Vellekoop (UvA) for the challenging questions and stimulating discussions. My thanks go to De Nederlandsche Bank N.V. for the opportunity to carry out this research project and for their hospitality. Erwin Bramer (DNB) put me on the right trail and supported me throughout the project, and after. Geert Dozeman and Wilbert van der Hoeven (DNB) guided me through the world of insurance and have become my friends. I am indebted to Anneke, Myrthe, Isa and Jolijn for their patience and support when I was just sitting behind the computer.