All posts by Alan

The Term Structure of the Equity Risk Premium

What is the equity risk premium, abbreviated ERP? It’s the market’s best point estimate, today, of what “stocks in the aggregate” will return in the future — after subtracting a risk-free interest rate. By “stocks in the aggregate”, I am taking a US perspective: think of an equity investment matching the S&P500 (with dividends accounted for).

If the future is very distant, say the next 20 years, a widely held belief is that the ERP will average around 4-6% per year.  The ERP has often been called “the most important number in finance”.

Speaking of interest rates, it’s well-known that rates vary over time and, at each time, have a term structure. For example, if r were a US Treasury rate, we would write r_{t,T} to denote the annual yield at time t for a Treasury bond maturing at time T.

Just like interest rates, the ERP is time-varying and has a term structure — so we also write \mbox{ERP}_{t,T}. For example, we could ask: what is the ERP today for holding stocks over the next 6 months? Regardless of the horizon, even a very near one, the convention is to quote the ERP as an annualized percentage rate. This is the same convention as for interest rates: even if you are borrowing money for just a couple of weeks, your borrowing cost will be quoted as an annualized percentage rate.
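To make the quoting convention concrete, here is a minimal sketch in Python, with made-up numbers (a 1.5% expected excess return over 6 months). I use a compounding annualization; simple scaling is also common, and nothing below is from the estimates in this post.

```python
def annualized(excess_return, horizon_years):
    """Quote a holding-period excess return as an annualized rate
    (compounded convention; purely illustrative numbers)."""
    return (1.0 + excess_return) ** (1.0 / horizon_years) - 1.0

# A 1.5% expected excess return over the next 6 months ...
print(annualized(0.015, 0.5))  # ... quotes as about 3.02% per year
```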

The fact that the ERP has a term structure is not widely appreciated. In contrast, the term structure of interest rates is well-known and readily visible. For example, just look in the Wall Street Journal for the current term structure of US Treasury rates  — or find it in many places on the web.  The ERP term structure is not directly visible and needs to be estimated. Exactly how? That’s the question I answer in a new research paper, recently posted at the arXiv, titled:

Option-based Equity Risk Premiums

As an example, I show below my estimate of the (US) ERP term structure for Feb 7, 2018, 2 days after the so-called ‘Volpocalypse’. That Feb 5, 2018 volatility event was the day of the Dow Jones Industrial Average’s largest point loss ever, although the percentage loss was less than 5%. The two charts show the same ERP, but the bottom uses a log time scale in order to better show the various, closely-spaced option expirations. As you can see, the ERP declines from about 26% per year for the nearest dates (2 days away) toward the longer-term values mentioned above. The dark lines are central estimates and the red lines estimate the uncertainty. My assumptions and other details are found in the article.

The ERP term structure, estimated for Feb 7, 2018, two days after the ‘Volpocalypse’.

 

 

Option Valuation under Stochastic Volatility II — Extended Excerpts

I wrote a column for Wilmott Magazine for a couple of years. Then I reprinted those columns in the second part of my book “Option Valuation under Stochastic Volatility II”. (The first part was 550 pages of new material).

Sometimes I get requests for those magazine articles. So this post will serve as a repository for those reprinted chapters that I’d like to make available more widely. I’ll update it from time to time.

The first one is a recent request for Ch. 18, “American options under jump diffusions: an introduction”: Lewis.Vol2.Ch18

Update: Here is another one: a request for Ch. 22, “Volatility jump-diffusions”: Lewis.Vol2.Ch22

Heston Model Reference Prices

A few years back, over at the Wilmott forum, somebody requested some high-precision reference prices for the Heston ’93 stochastic volatility model, to check an implementation against. I responded with some prices computed in Mathematica to high precision. A number of people found that thread quite useful, and other cases were added over time. Unfortunately, the thread became corrupted after a forum revision. I recently got a request for the same data, so I reproduce some of those results here as perhaps a more permanent record.

In SDE notation, the Heston model is the system

\displaystyle \left \{ \begin{array}{l}dS_t = (r-q) \, S_t \, dt + \sqrt{V_t} S_t \, dB_t, \\dV_t = (\omega - \theta V_t) \, dt + \xi \, \sqrt{V_t} \, dW_t, \quad dB_t \, dW_t = \rho \, dt. \end{array}\right. \quad\quad\quad (1)

I use the parameter set:

\displaystyle r = \frac{1}{100},\quad q = \frac{2}{100}, \quad S_0 = 100, \quad T = 1, \quad V_0 = \frac{4}{100}, \quad \omega = 1, \quad \theta = 4, \quad \xi = 1, \quad \mbox{and} \quad \rho = -\frac{1}{2}.

With WorkingPrecision=50, I asked Mathematica for 20 good digits, which is probably a conservative estimate of how many good digits there are in the result. (The results have been confirmed by others to at least 15-16 good digits). With that, the results for Put and Call option values were:

Strike   Type (P=Put, C=Call)   Option Price
80       P      7.958878113256768285213263077598987193482161301733
80       C      26.774758743998854221382195325726949201687074848341
90       P      12.017966707346304987709573290236471654992071308187
90       C      20.933349000596710388139445766564068085476194042256
100      P      17.055270961270109413522653999411000974895436309183
100      C      16.070154917028834278213466703938231827658768230714
110      P      23.017825898442800538908781834822560777763225722188
110      C      12.132211516709844867860534767549426052805766831181
120      P      29.811026202682471843340682293165857439167301370697
120      C      9.024913483457835636553375454092357136489051667150
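For readers who want to replicate the panel above without Mathematica, here is a sketch in Python. It only reaches double precision (not the 20+ digits shown), and it uses the common “little Heston trap” form of the characteristic function plus the standard two-probability representation of the call price — my choices for this sketch, not the code actually used to generate the table.

```python
import numpy as np
from scipy.integrate import quad

# Parameters in the notation dV = (omega - theta*V) dt + xi*sqrt(V) dW,
# i.e. mean-reversion speed kappa = theta and long-run variance vbar = omega/theta.
r, q, S0, T, V0 = 0.01, 0.02, 100.0, 1.0, 0.04
omega, theta, xi, rho = 1.0, 4.0, 1.0, -0.5
kappa, vbar = theta, omega / theta

def cf(u):
    """Characteristic function of ln S_T ('little trap' form)."""
    iu = 1j * u
    d = np.sqrt((rho * xi * iu - kappa) ** 2 + xi ** 2 * (iu + u ** 2))
    g = (kappa - rho * xi * iu - d) / (kappa - rho * xi * iu + d)
    edt = np.exp(-d * T)
    C = (kappa * vbar / xi ** 2) * ((kappa - rho * xi * iu - d) * T
                                    - 2.0 * np.log((1.0 - g * edt) / (1.0 - g)))
    D = ((kappa - rho * xi * iu - d) / xi ** 2) * (1.0 - edt) / (1.0 - g * edt)
    return np.exp(iu * (np.log(S0) + (r - q) * T) + C + D * V0)

def heston_call(K):
    k = np.log(K)
    F = S0 * np.exp((r - q) * T)   # cf(-1j) equals F (martingale property)
    P1 = 0.5 + quad(lambda u: (np.exp(-1j * u * k) * cf(u - 1j)
                               / (1j * u * F)).real, 0.0, 100.0)[0] / np.pi
    P2 = 0.5 + quad(lambda u: (np.exp(-1j * u * k) * cf(u)
                               / (1j * u)).real, 0.0, 100.0)[0] / np.pi
    return S0 * np.exp(-q * T) * P1 - K * np.exp(-r * T) * P2

print(heston_call(100.0))  # should land near the tabulated 16.0701549170...
```

Puts then follow from put-call parity rather than a separate integration.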

Subsequently, Mark Joshi, a well-known researcher who has since passed away, asked for more extreme cases with small T and small volatility, specifically T = V_0 = 0.01. With those changes, but otherwise the same parameters, I found:

Strike   Type (P=Put, C=Call)   Option Price
90       P      4.5183603586861772614990106188215872180542*10^-8
90       C      9.989001595065276544935948045293485530832966049263
95       P      0.000461954855653851579672612557018857858641926937
95       C      4.989963479738160122154264702582719627807098780529
100      P      0.477781171629504680023239655436072890669645669297
100      C      0.467782671512844263098248405184095087949465507760
105      P      5.009501052563650299130635110520904481889436667608
105      C      2.527447823194706060519991248106500619490942*10^-6
110      P      10.008998550115123724684210555728039829315964456261
110      C      1.29932760052624920704881258510264466*10^-13

Mark found agreement with his results to 6 digits.

Update (Feb 21, 2019). Recently, Wilmott forum member zukimaten did additional checks. He confirmed the accuracy of the first panel of results to machine precision (say 15 digits). For the more difficult second panel, he confirmed the call values with strikes (90,95,100) to 15 leading digits, and the strikes (105, 110) to 13 leading digits. Of course, by put-call parity, similar results can be inferred for the corresponding puts.
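Put-call parity also gives a quick self-consistency check directly on the first panel: each put/call pair must satisfy C − P = S_0 e^{-qT} − K e^{-rT}. A short Python verification, using the tabulated values truncated to double precision:

```python
import math

r, q, S0, T = 0.01, 0.02, 100.0, 1.0
# (strike, put, call) triples, truncated from the first panel above
panel = [
    (80.0, 7.958878113256768, 26.774758743998854),
    (100.0, 17.055270961270109, 16.070154917028834),
    (120.0, 29.811026202682472, 9.024913483457836),
]
for K, P, C in panel:
    parity = S0 * math.exp(-q * T) - K * math.exp(-r * T)
    assert abs((C - P) - parity) < 1e-10
print("put-call parity holds to machine precision")
```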

Introducing XGBM — a new solvable stochastic volatility model having a stationary distribution

I have posted a new research paper to the arXiv titled “Exact Solutions for a GBM-type Stochastic Volatility Model having a Stationary Distribution”. The article may be found here.

Let me explain the title and motivation. The most popular stochastic volatility model is certainly the Heston ’93 model. Its virtues are that it is exactly solvable and it has some very nice features: a mean-reverting, stochastic volatility process that can be negatively correlated with equity returns. This process has the well-known “square-root model” form

\displaystyle \mbox{Heston:} \quad d v_t = (\omega - \theta v_t) \, dt + \xi \,\sqrt{ v_t} \, dW_t,

 

where v_t is the instantaneous variance rate of the asset in question. We could also write this evolution in terms of the instantaneous volatility \sigma_t = \sqrt{v_t}. However, time series analysis of the volatility of broad-based indices, such as the S&P500 index, suggests that a better specification would have d \sigma_t \sim \xi \sigma_t dW_t, which is a geometric Brownian motion (GBM)-type volatility. Taking d \sigma_t = \xi \sigma_t dW_t is the well-known SABR model. The problem with that one is that the volatility is not mean-reverting and does not admit a stationary density \psi(\sigma). To get a stationary density, you need a mean-reverting drift term. It would also be very nice to have a model with exact solutions. The XGBM model may be the first to combine all these properties. It is a standard bivariate model for the pair (S_t,\sigma_t), but here I’m just going to write the volatility evolution, which is

\displaystyle \mbox{XGBM:} \quad d \sigma_t = \sigma_t (\omega - \theta \sigma_t) \, dt + \xi \, \sigma_t \, dW_t.
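To see the mean reversion at work, here is a small simulation sketch in Python. By Itô’s lemma the log-volatility satisfies d \ln \sigma_t = (\omega - \theta \sigma_t - \xi^2/2) dt + \xi dW_t, and simulating the log keeps \sigma strictly positive. The parameter values are purely illustrative, not fits from the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
omega, theta, xi = 4.0, 20.0, 1.0        # illustrative values only
dt, n_steps, n_paths = 1.0 / 252, 2520, 2000

# Simulate ln(sigma), so sigma stays strictly positive:
# d ln(sigma) = (omega - theta*sigma - xi^2/2) dt + xi dW   (Ito's lemma)
log_sig = np.full(n_paths, np.log(0.2))
for _ in range(n_steps):
    sig = np.exp(log_sig)
    log_sig += (omega - theta * sig - 0.5 * xi ** 2) * dt \
               + xi * np.sqrt(dt) * rng.standard_normal(n_paths)

sig = np.exp(log_sig)
# Paths cluster near (omega - xi^2/2)/theta = 0.175 for these values,
# instead of drifting off as a pure GBM volatility would
print(sig.min() > 0, sig.mean())
```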

 

To see how this model is closer to the “real-world” than the Heston ’93 model, take a look at the figures at the end, which show Maximum Likelihood fits to a proxy series for \{\sigma_t\}. The proxy is the daily (annualized) volatility for the S&P500, taken from the Oxford-Man Institute’s “realized library”. They maintain a number of estimators — I am using a basic one (rv5), which is simply the daily (intraday) volatility using 5-minute log-return observations. I am using all the data available at the time of this study, which is January 3, 2000 – September 28, 2018 (4705 volatility observations). The first figure shows this volatility time series. You can see there is a maximum of around 140%, which is the annualized volatility at the height of the 2007-2008 Financial Crisis.

The next figures show the stationary volatility fits. For these histograms, I am using the annualized volatility in decimal (not percent), so the time series maximum is a histogram entry at \sigma \approx 1.4. That small bump is not really visible, but you can see it is accounted for by the axis extension out to that value.

As one can see, the visual fit is better for XGBM vs. Heston, with corresponding log-likelihoods:  5356 (Heston) vs 5920 (XGBM). In fact, the fit is even better (LL = 6055) for the figure labeled “GARCH -3/2 model”, which is the stationary density corresponding to both the GARCH diffusion model and the 3/2-model. In any event, the point of the exercise is to motivate my new paper and interested readers may find it at the link above.

Stationary volatility density fits for three models

Update. This research paper is now published. The cite is: Lewis, Alan L. (2019), “Exact solutions for a GBM-type stochastic volatility model having a stationary distribution”, Wilmott magazine, Vol. 2019, Issue 101, May, pp. 20-41. The editors were kind enough to make it a cover story. As part of that publication, I wrote an additional Introduction, which is available here: XGBMIntro.WMag.Final.

A jump for the record books

Facebook released disappointing earnings yesterday (July 25, 2018) after the close and fell about 20%. While a -20% jump due to earnings is actually quite routine, the associated market value loss was a record. According to Bloomberg,

The company’s shares fell the most in its history as a public company, wiping out more than $120 billion in market value. It marks the largest ever loss of value in one day for a U.S. traded company.

Here is a chart of the stock price move, about 2 hours into today’s regular session:

Facebook after earnings release on Jul 25, 2018

In financial modelling, there are various approaches to handling jumps. The most common is probably “Poisson-driven” jumps, also known as compound Poisson models. In those, you know neither when the jump will occur nor what magnitude it will be. When you add stochastic volatility, such models are often called SVJ models, such as the ones I discuss here.

Such models are inappropriate for jumps associated with earnings releases, because you know exactly when the jump will occur.

Instead, drop the Poisson factor: all you need to model is the jump distribution. (And often a log-normal suffices for the price jump.) I like to call these types of events “scheduled jumps”. There are several pages of discussion in “Option Valuation under Stochastic Volatility II”, mostly using stocks with “equity VIXes” as examples.

From an investor’s point of view, the good thing about (bad) earnings release jumps is that they are largely idiosyncratic. In this case,
although there was some spill-over to other internet stocks (and QQQ), the broad US equity market was largely unchanged by the event. As always, an investor’s first line of defense in the stock market is diversification.

While there’s no Facebook VIX, as I point out in the mentioned book, one could always construct one historically — perhaps easiest would be to use the “old VIX” methodology. It would be interesting to do so and see how it behaved through this event. The point of any VIX method is the benefit of a ‘standardized’ implied volatility when studying how the option’s implied vol changed across the jump event. In general, earnings releases impose a “sawtooth-like” seasonal pattern upon the (otherwise noisy) time series of option implied volatility.
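A back-of-the-envelope way to see the sawtooth: if the log-price jump at the next earnings date has standard deviation \delta, and the diffusive volatility is \sigma, then to a rough at-the-money approximation the total implied variance is \sigma^2 T + \delta^2. Implied vol thus builds as expiration T shrinks toward the event and collapses just after it. A sketch with made-up numbers:

```python
import numpy as np

sigma, delta = 0.30, 0.15                   # illustrative diffusive vol, jump std
T = np.array([30, 20, 10, 5, 1]) / 252.0    # trading days to expiration

# Rough ATM approximation: total variance = diffusive variance + jump variance
implied_vol = np.sqrt(sigma**2 + delta**2 / T)
print(np.round(implied_vol, 3))
# implied vol rises as T shrinks toward the event: the build-up
# phase of the "sawtooth", which resets once the jump is realized
```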

Calibration of the GARCH Diffusion Model

The GARCH diffusion model  is one of the running examples of bivariate stochastic volatility models in my first book. (Others include the well-known Heston ’93 model and the so-called 3/2-model). As with most finance models, it comes in two versions: a real-world (aka P-measure) version and a risk-neutral (aka Q-measure) version. The latter is used to value options.

For equity applications, the stochastic process pair is \{S_t,v_t\}. Here  S_t \ge 0 is a stock price and v_t = \sigma_t^2 > 0 is the associated stochastic variance rate. Then, the risk-neutral version, as an SDE (stochastic differential equation) system, reads

\displaystyle \left \{ \begin{array}{l}dS_t = (r-q) S_t dt + \sigma_t S_t dB_t,  \\d v_t = \kappa (\bar{v} - v_t) \, dt + \xi \, v_t \, dW_t, \quad dB_t \, dW_t = \rho \, dt. \end{array}\right. \quad\quad\quad\quad\quad (1)

It’s a very simple and natural model which, unfortunately, lacks exact solutions for option values or transition densities. To determine the unknown parameters in (1), one needs to calculate option prices (numerically) and fit the model to option chains, a procedure generally called “calibration”. To do this efficiently and accurately for this model (and many others) requires a PDE approach. (There have been some earlier calibrations of this model via Monte Carlo.)
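The outer calibration loop itself is generic, whatever numerical pricer is plugged in. Here is a least-squares sketch in Python; since the GARCH diffusion model needs a numerical (e.g. PDE) pricer, I substitute a Black-Scholes stand-in pricer purely to make the sketch self-contained. The structure is the point; all names and numbers below are my own illustrations.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import norm

def bs_call(S0, K, T, r, q, vol):
    """Black-Scholes call: a stand-in for the real (PDE) model pricer."""
    d1 = (np.log(S0 / K) + (r - q + 0.5 * vol**2) * T) / (vol * np.sqrt(T))
    d2 = d1 - vol * np.sqrt(T)
    return S0 * np.exp(-q * T) * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Synthetic "market" option chain, generated at vol = 0.2 (illustrative)
S0, r, q, T = 100.0, 0.01, 0.02, 0.5
strikes = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
market = bs_call(S0, strikes, T, r, q, 0.2)

def residuals(params):
    # For the real model, params would be (kappa, vbar, xi, rho, v0), and
    # bs_call would be replaced by the numerical GARCH diffusion pricer
    (vol,) = params
    return bs_call(S0, strikes, T, r, q, vol) - market

fit = least_squares(residuals, x0=[0.5], bounds=([0.01], [2.0]))
print(fit.x)  # recovers the vol used to build the synthetic chain, ~0.2
```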

The lack of an efficient — or, apparently, any — PDE calibration for this model prompted Yiannis Papadopoulos and me to perform one. Our methods and first results were recently posted on the arXiv:

A First Option Calibration of the GARCH Diffusion Model by a PDE Method

Yiannis’ own announcement may be found in his blog here. You will also find at that link a free downloadable GARCH diffusion calibrator demo that Yiannis has developed. (Windows executable). You can run it on a sample option chain that he supplies, and see a calibration in well under a minute (11 seconds on my desktop). Or you can run it on your own data, simply by imitating the provided data file format.

 

Errata for ‘Back to Basics: a new approach to the discrete dividend problem’

Here is a small correction to Table 6 in this 2003 article by E.G. Haug, J. Haug, and A. Lewis.

T   HHL (Published value)   HHL (Corrected value)
1   10.6606                 10.6606
2   15.1989                 15.2007
3   18.5984                 18.6002

Table 6 shows various approximations to the value of a European call option under discrete dividends, with multiple dividends. The table column labeled ‘HHL’ is referred to as an ‘exact integration’. This language suggests that all the displayed digits should be correct. However, a careful reader was attempting to replicate those values and did not agree with all the digits. (Thank you, Yiannis!). Rerunning the codes a little more carefully yields new best estimates shown above. Note the T = 1 entry is unchanged. The revised entries also agree, to the digits shown, with the mentioned reader’s independent calculations, which use a different method.

I frequently refer to this article in Ch. 9 of ‘Option Valuation under Stochastic Volatility II’. In particular, under ‘Suggested reader projects’ I suggest the reader extend the chapter numerics to call options and compare results to the HHL tables. So, for those who might be doing that, the above small revisions may be meaningful.

References

Espen G. Haug, Jørgen Haug, and Alan Lewis, “Back to Basics: a new approach to the discrete dividend problem”, Wilmott magazine, pp. 37-47, Sept. 2003.

The Joint Transition Density for SVJ Models — Part I

In my latest book, I have a chapter titled “A Closer Look at the Square-root and 3/2-Model”.  One of the things I do in that chapter is give the joint transition density for the Heston ’93 model (which is associated to the Square-root model) and for the 3/2-model. These two models are popular stochastic volatility models (abbreviated ‘SV’).

A reader over at the Wilmott forums asked the natural question: what is the corresponding joint transition density when one adds price jumps to the Heston model? The resulting class of models is often abbreviated ‘SVJ’, standing for ‘stochastic volatility + jumps’.

As it turns out, modifying the book formula to include price jumps is quite easy. Showing that is the subject of this post and the next one. In this first one, I’ll just give the result. In the second one, I’ll give the derivation.

First, let’s review the results given in the book without the jumps.

The Joint Transition Density for the Heston Model

The Heston model is the SV system:

\displaystyle \left \{ \begin{array}{l}dS_t = b \, S_t \, dt + \sqrt{V_t} S_t \, dB_t, \\dV_t = (\omega - \theta V_t) \, dt + \xi \, \sqrt{V_t} \, dW_t, \quad dB_t \, dW_t = \rho \, dt. \end{array}\right. \quad\quad\quad (1)

Actually, we are considering two slightly different models at once: the “risk-neutral” version and the “real-world” version. In the risk-neutral model, b = r - q, where r is an interest rate and q is a dividend yield. Under the risk-neutral model, e^{-(r-q) t} \, S_t is a martingale. Under the real-world model, b is simply a constant parameter that one estimates. With those two possibilities for b, the joint transition density is given by

p(t,S_t,V_t|S_0,V_0) = \frac{1}{\pi S_t} \int_0^{\infty} \Re \left\{(\frac{S_t e^{-b t}}{S_0})^{-i u} G(t;-u,V_0,V_t) \right\} \, du, \quad (2)

where

G(t;z,v_0,v_t) = \frac{d \, e^{(1+\nu) \hat{\theta} \tau_t/2}}{2 \sinh(d \tau_t/2)} \, e^{-B_t v_0 - C_t v_t}  \left( \frac{v_t}{v_0} \right)^{\nu/2} I_{\nu} \left( \frac{d \sqrt{v_0 v_t}}{\sinh(d \tau_t/2)} \right),

using

 \nu = \frac{2 \omega}{\xi^2} - 1, \quad \tau_t = \frac{1}{2}\xi^2 t, \quad B_t = \frac{r_1 - r_2 e^{d \tau_t}}{e^{d \tau_t}  - 1}, \quad C_t = \frac{r_1 e^{d \tau_t} - r_2}{e^{d \tau_t}  - 1}, 

\hat{\theta} = \frac{2}{\xi^2}(\theta + i z \rho \, \xi), \quad \tilde{c} = \frac{1}{\xi^2}(z^2 - i z), \quad d = (\hat{\theta}^2 + 4 \tilde{c})^{1/2}, \quad r_{1,2} = \frac{1}{2}(\hat{\theta} \pm d).

Equation (2) is equation (7.40) in the book. It looks messy, but essentially it’s just a single (numerical) integration involving elementary functions and the modified Bessel function I_{\nu}(z). It’s quite straightforward to implement in Mathematica, with the caveat that one should re-work the answer slightly to use only e^{-d \tau_t} terms, which helps prevent unexpected branch-cut crossings. The code, which uses that re-work, is given in the book in the Sec. 7.12 Appendix.
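One way to do that re-work is simply to multiply the numerators and denominators of B_t and C_t by e^{-d \tau_t}, so that only decaying exponentials appear. A quick Python check that the two forms agree, at arbitrarily chosen complex values (my numbers, purely illustrative):

```python
import numpy as np

# Arbitrary illustrative complex values for d, tau, r1, r2
d, tau = 1.0 + 0.5j, 0.3
r1, r2 = 0.8 - 0.2j, -0.3 + 0.1j

ep, em = np.exp(d * tau), np.exp(-d * tau)

# Book form (contains growing e^{d tau}) vs. re-worked form (e^{-d tau} only)
B_naive = (r1 - r2 * ep) / (ep - 1.0)
B_rework = (r1 * em - r2) / (1.0 - em)
C_naive = (r1 * ep - r2) / (ep - 1.0)
C_rework = (r1 - r2 * em) / (1.0 - em)

print(abs(B_naive - B_rework), abs(C_naive - C_rework))  # both tiny
```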

In addition to ‘b’, note that the other parameters will generally be different under the two versions of the model: real-world or risk-neutral. The exception is \xi, which must be the same, as well as the entire process \{V_t\}, which also must be the same.

Applications

Once you have the joint density, you can compute various interesting things. For example, with the risk-neutral version you could compute the value of a call option on a stock with a single discrete dividend prior to expiration, now accounting for stochastic volatility effects. Or, you could compute the value of a compound option. If you’re still reading this, you probably can think of your own applications. With the real-world version, given a time series of underlying security prices and a good proxy for the instantaneous volatility, you can make parameter estimates using Maximum Likelihood Estimation.

The Joint Transition Density for the SVJ class of Models

For our purposes, we will define the ‘SVJ class of models’ to mean the Heston model modified to include independent, Poisson-driven, price jumps. Specifically, the modified SDE system becomes:

\displaystyle \left \{ \begin{array}{l}dS_t = b \, S_t \, dt + \sqrt{V_t} S_t \, dB_t + S_t (e^{\Delta X_t} - 1) dN_t, \\dV_t = (\omega - \theta V_t) \, dt + \xi \, \sqrt{V_t} \, dW_t, \quad dB_t \, dW_t = \rho \, dt. \end{array}\right. \quad\quad\quad (3)

The new notations include N_t, a Poisson process which counts the number of jumps that have occurred. Letting \lambda be the corresponding Poisson intensity, we have E_t[dN_t] = \lambda \, dt in SDE language, where E_t[\cdots] is the time t expectation, conditional on the state of the system (S_t,V_t). (Again, all the parameters have both a ‘real-world’ value and a ‘risk-neutral’ value, but we are suppressing that distinction in a notational abuse.)

In addition, given that there is a jump at time t, the jump size (actually the change in \log S_t) is denoted by \Delta X_t. This is a random variate with support on (-\infty,\infty). It is drawn independently from some jump density p_J(x) (real-world) or q_J(x) (risk-neutral). In a brief notation for this draw:  \Delta X_t \sim p_J(\cdot), and similarly for the risk-neutral case.

Our discussion allows p_J(\cdot)  or q_J(\cdot) to be rather general. What are the restrictions? First, the q_J(\cdot) density is subject to the condition that the integral in equation (4) below is well-defined.

Second, a requirement from “no-arbitrage” considerations is that the two densities have the same support. For example, if there is a positive probability density that the stock market can crash (i.e., jump downward) by, let’s say, -20%, then there must also be a positive risk-neutral probability for the same event. However, the magnitudes can be, and usually are, quite different. Indeed, because investors are risk-averse, the risk-neutral probability of a broad-market crash is typically much higher than a plausible real-world probability estimate. In older literature, these risk-neutral probabilities are more aptly named ‘util-probs’, for ‘utility-adjusted probabilities’. See the mentioned book (Ch. 16, “Fear of Jumps”) for more on that utility connection.

Let’s note a special case. Namely, when both jump-size densities are normal densities, the resulting model is the Bates ’96 model, which blends the Heston model and Merton’s jump-diffusion model.

Another important point is that the constant ‘b’ in (3) has now changed under both interpretations: real-world or risk-neutral. That is, b is still a constant, but its numerical value is now generally going to be different from what it was under the pure Heston model. Under the real-world interpretation of the process, suppose you fix a parameterized density p_J(\cdot). Then, you go and estimate all the parameters by Maximum Likelihood Estimation. The estimate for b will change (even on the same observations) because the model has changed.

Under the risk-neutral interpretation of the process, we still need to enforce that S_t e^{-(r-q)t} is a martingale. Taking the time-t expectation of the first line of (3) yields the martingale condition:

\displaystyle r - q = b + \lambda \int_{-\infty}^{\infty} (e^x - 1) \, q_J(x) \, dx = b + \lambda (\hat{q}_J(-i) -1). \quad\quad (4)

Note that I have introduced the Fourier transform of the (risk-neutral) jump-size density: \hat{q}_J(z) = \int_{-\infty}^{\infty} e^{i z x} \, q_J(x) \, dx. In a similar notation, \hat{p}_J(z) denotes the transform of the real-world density.

Finally, using these notations, what is the modification of (2) that incorporates jumps? It turns out to be quite simple: just insert a relatively simple factor into the integrand:

p(t,S_t,V_t|S_0,V_0) = \frac{1}{\pi S_t} \int_0^{\infty} \Re \left\{(\frac{S_t e^{-b t}}{S_0})^{-i u} \, e^{\psi(u) t} \, G(t;-u,V_0,V_t) \right\} \, du, \quad (5)

where

\displaystyle \psi(u) = \left \{ \begin{array}{ll} \lambda  \, (\hat{p}_J(u) - 1), & \mbox{(real-world)}, \\  -i u \lambda \, (\hat{q}_J(-i) - 1) + \lambda \, (\hat{q}_J(u) - 1), & \mbox{(risk-neutral)}. \end{array}\right. 

In the risk-neutral case, you keep b = r - q in (5). Then the additional term in \psi(u) accounts for the modification of b given in equation (4).

For the special case of the Bates model, we can write the jump term in (3) as

 e^{\Delta X_t} - 1 = e^{\alpha + \delta \epsilon_t} -1,

where \epsilon_t is a draw from an independent standard normal distribution. Thus the (log) jump-sizes are normally distributed with mean \alpha and variance \delta^2. In that case (5) uses

\displaystyle \psi(u) = \left \{ \begin{array}{ll} \lambda  \, (e^{i u \alpha - u^2 \delta^2/2} - 1), & \mbox{(real-world)}, \\ -i u \lambda \, (e^{ \alpha + \delta^2/2} - 1) + \lambda \, (e^{i u \alpha - u^2 \delta^2/2} - 1), & \mbox{(risk-neutral)}. \end{array}\right. 

Note that the values of \lambda, \alpha, \, \mbox{and} \, \delta will generally differ between the real-world and risk-neutral cases, although my notation does not explicitly indicate this.
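One useful numerical sanity check: evaluating the risk-neutral \psi at u = -i must give exactly zero, which is just the martingale condition (4) in disguise. A quick sketch in Python for the Bates (lognormal-jump) case, with arbitrary parameter values of my choosing:

```python
import numpy as np

lam, alpha, delta = 0.5, -0.05, 0.1   # arbitrary illustrative jump parameters

def phat(u):
    """Fourier transform of the lognormal (Bates) log-jump-size density."""
    return np.exp(1j * u * alpha - 0.5 * u**2 * delta**2)

def psi_real_world(u):
    return lam * (phat(u) - 1.0)

def psi_risk_neutral(u):
    comp = np.exp(alpha + 0.5 * delta**2) - 1.0   # this is qhat(-i) - 1
    return -1j * u * lam * comp + lam * (phat(u) - 1.0)

# Martingale sanity check: psi(-i) = 0 in the risk-neutral case,
# because the two terms cancel exactly at u = -i
print(abs(psi_risk_neutral(-1j)))
```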

As it turns out, the modification here to add price jumps to the risk-neutral model is exactly the same as the modification discussed in J. Gatheral’s book “The Volatility Surface” (pg. 66) for the characteristic function that prices vanilla options. This is not too surprising — but deserves to be shown carefully: perhaps I will do that in a future post.

A Young Quant’s Illustrated Primer: the SABR Model

What is the SABR model?

The SABR model (pronounced as in light saber) is a popular stochastic volatility model. A somewhat more general version than the one we discuss here was introduced by P. Hagan and co-authors in 2002, with applications to interest rate-related derivatives. The model became popular because a (quite difficult) small-time analysis resulted in some easy-to-apply, approximate, “smile” formulas that could fit the authors’ target markets.

While equity applications are rare, the model makes mathematical sense for equities, where the stochastic process pair is \{S_t,\sigma_t\}. Here  S_t \ge 0 is a stock price and \sigma_t > 0 is the associated stochastic volatility. Using that equity language, the SABR model, as an SDE (stochastic differential equation) system, reads

\displaystyle \left \{ \begin{array}{l}dS_t = \sigma_t S_t^{\beta} dB_t, \quad (t < \tau_0), \\d \sigma_t = \nu \, \sigma_t \, dW_t, \quad dB_t \, dW_t = \rho \, dt. \end{array}\right. \quad\quad\quad\quad\quad (1)

In equation (1), \beta is any real number, and \nu > 0 is the “volatility of volatility”. The latter can be set equal to one without loss of generality, so we’ll do so for the remainder of this post. Although any real \beta makes mathematical sense, fits to market data will almost always have -\infty < \beta \le 1, so we’ll stick to that range as well. In addition, \rho \in (-1,1) is a correlation parameter. Finally, \tau_0 is the first time, if any, that the stock price hits 0.

Boundary conditions

The stock price can hit 0 with a positive probability whenever \beta is strictly less than 1. What happens after that requires a boundary condition. Of course, for equities, to avoid an arbitrage opportunity, once a stock price hits zero, it must stay there (unless the company is recapitalized). So, an additional specification of the equity version of the model is that S_t = 0 for all t \ge \tau_0.

For the original fixed income applications, the underlying security is a (forward) interest rate and other boundary conditions are possible.
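Here is a minimal Monte Carlo sketch (Python, illustrative parameters) of the equity-style boundary condition: paths are absorbed at zero and stay there. The Euler scheme and all numbers are my own choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=7)
S0, sig0, beta, rho = 1.0, 0.5, 0.5, -0.3   # illustrative; nu set to 1
dt, n_steps, n_paths = 1.0 / 252, 1008, 5000

S = np.full(n_paths, S0)
sig = np.full(n_paths, sig0)
alive = np.ones(n_paths, dtype=bool)

for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
    # Euler step for the price; only surviving paths move
    S[alive] += sig[alive] * np.maximum(S[alive], 0.0)**beta \
                * np.sqrt(dt) * z1[alive]
    sig *= np.exp(-0.5 * dt + np.sqrt(dt) * z2)   # exact GBM step, nu = 1
    hit = S <= 0.0
    S[hit] = 0.0            # absorb at zero, per the equity boundary condition
    alive &= ~hit

print(S.min(), (S == 0.0).mean())   # min is never negative; absorbed fraction
```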

Exact solutions

By a “solution” to equation (1), I mean a solution for the associated transition probability density. Once you have that, option values, for example, are found by an integration. Exact solutions are known for the following special cases:

  • Normal SABR model: \beta = 0 with \rho = 0.
  • Lognormal SABR model: \beta = 1 with any \rho.

McKean’s formula

The Normal SABR model solution is found by adapting a famous formula for the transition density found by Henry P. McKean in 1970. For that case, it makes more sense to switch gears slightly in the notation and use X_t instead of S_t, where X_t is now the location of a “particle” which can hit 0 and keep on going to arbitrarily negative values. With that new notation, the Normal SABR model process is:

\displaystyle \left \{ \begin{array}{l}dX_t = \sigma_t dB_t, \\d \sigma_t = \sigma_t \, dW_t. \end{array}\right. \quad\quad\quad\quad (2)

Now B_t and W_t are two independent Brownian motions. Equation (2) also describes a generalized type of Brownian motion, namely Brownian motion on a two-dimensional hyperbolic space.

Suppose we start the particle at X_0 = x_0 = 0 and \sigma_0 = 1. Then, the probability of finding the particle at subsequent times is given by McKean’s formula for G(t,X,\sigma|x_0,\sigma_0). If we let t range from 0.01 to 1, we can plot McKean’s formula to find the following evolution:

Evolution of McKean formula probability density for the Normal SABR model

If you watch closely, you’ll see that the probability density starts with a spike near (X,\sigma) = (0,1) and then spreads out in the X direction. In the \sigma direction, mass begins to accumulate near \sigma \rightarrow 0. That’s due to the lack of mean reversion in the volatility process and is an ‘unphysical’ aspect of the model.

Now that we’ve talked about it so much, you may be curious as to what this mysterious formula looks like. Here it is:

\displaystyle G(t,X,Y|x,y) = \frac{e^{-t/8}}{2 \, Y^2 \, (\pi t)^{3/2}} \int_d^{\infty} \frac{s \, e^{-s^2/(2 t)}}{\sqrt{\cosh s - \cosh d}} \, ds,

\displaystyle \mbox{where} \quad d = \cosh^{-1}z, \quad \mbox{using} \quad z = 1 + \frac{(X-x)^2 + (Y-y)^2}{2 y Y}.
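The formula is easy to evaluate numerically; the only care needed is the inverse-square-root singularity of the integrand at s = d, which can be removed by substituting \cosh s = \cosh d + v^2. Here is a sketch in Python (the substitution is my choice, not from the book):

```python
import numpy as np
from scipy.integrate import quad

def mckean_G(t, X, Y, x=0.0, y=1.0):
    """McKean-type density from the post, via cosh(s) = cosh(d) + v^2."""
    z = 1.0 + ((X - x)**2 + (Y - y)**2) / (2.0 * y * Y)
    d = np.arccosh(z)
    pref = np.exp(-t / 8.0) / (2.0 * Y**2 * (np.pi * t)**1.5)

    def integrand(v):
        # s runs from d upward as v runs from 0 upward; the substitution
        # cosh(s) - cosh(d) = v^2 removes the 1/sqrt singularity at s = d
        s = np.arccosh(np.cosh(d) + v**2)
        return 2.0 * s * np.exp(-s**2 / (2.0 * t)) / np.sinh(s)

    val, _ = quad(integrand, 0.0, 50.0)   # integrand is negligible beyond 50
    return pref * val

# Density spreads from the starting point (x, y) = (0, 1):
print(mckean_G(1.0, 0.5, 1.0), mckean_G(1.0, 3.0, 1.0))  # nearer point is denser
```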

To adapt the McKean solution for an equity model, you can use the well-known “method of images”. The method will ensure that the associated equity process S_t, which starts at S_0 > 0, is absorbed should it hit S=0.

(There’s much more to the model than we have touched upon, and we’ll certainly return to it in future blog posts. This post has been adapted from material in Ch. 8 (“A Closer Look at the SABR Model”) from “Option Valuation under Stochastic Volatility II”).

Further reading and codes

Animation code

Started adding book codes from ‘Option Valuation under Stochastic Volatility II’

I have started adding book codes to this site. Almost all will be Mathematica notebooks (ending in .nb), although there will be some C/C++. This may be a slow process since most files will require some clean-up before posting.

One file has been added so far: from Ch. 1: CIRJumpStart.nb, which was used to generate Fig. 1.6 in the book and below:

Fig. 1.6 CIR Bond Process with a Delayed Jump-start.

The figure shows a fit of the CIR bond process with a delayed jump-start (solid curve) to the US Treasury Zero Coupon yield curve (dashed curve), using Feb. 10, 2012 data. The delayed jump-start is/was a mechanism to model the end of the ZIRP (Zero Interest Rate Policy) regime. Details and related models are found in the book Chapter 1: “Slow Reflection, Jump-returns, and Short-term Interest Rates”.

The code files are located at the Latest Book link (just click on ‘Selected Codes’).

I will probably just work my way slowly through the book, but if you are a reader and particularly want to see certain book codes, please let me know by a comment or email. I will try to accommodate if possible.