I want to jointly calibrate the drift mu and the volatility sigma of a geometric Brownian motion,

log(S_t) = log(S_{t-1}) + (mu - 0.5*sigma^2)*Deltat + sigma*sqrt(Deltat)*Z_t

where Z_t is a standard normal random variable. I am testing this by generating data x = log(S_t) via

mu = 0.1; sigma = 0.2;      % true parameters
N = 1000; Deltat = 1/N;     % N points over one year
x = zeros(1, N);            % preallocate the log-price path
x(1) = 0;
for i = 2:N
  x(i) = x(i-1) + (mu - 0.5*sigma^2)*Deltat + sigma*sqrt(Deltat)*randn;
end

and my (log-)likelihood function is

function LL = gbmLogLik(x, pars, Deltat)  % renamed so the function and output names differ; Deltat passed in explicitly
mu    = pars(1);
sigma = pars(2);
Nt = numel(x);
LL = 0;
for j = 2:Nt
  % Gaussian transition density of the log-price over one step
  LH_j = normpdf(x(j), x(j-1) + (mu - 0.5*sigma^2)*Deltat, sigma*sqrt(Deltat));
  LL = LL + log(LH_j);
end
end
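As a sanity check (not part of my original code), the Gaussian increments also admit a closed-form MLE, which the optimizer should reproduce. A minimal sketch, assuming x and Deltat are the simulated path and step size from above:

% Closed-form MLE cross-check: the increments d_j = x(j) - x(j-1) are i.i.d.
% Normal((mu - 0.5*sigma^2)*Deltat, sigma^2*Deltat).
d        = diff(x);
sigmaHat = sqrt(mean((d - mean(d)).^2) / Deltat);  % MLE variance uses 1/N, not 1/(N-1)
muHat    = mean(d)/Deltat + 0.5*sigmaHat^2;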

which I maximize using fmincon, since sigma is constrained to be positive (fmincon minimizes, so I pass it the negated log-likelihood), with starting values [0.15, 0.3], true values [0.1, 0.2], and N = Nt = 1000 or 100000 generated points over one year (so Deltat = 0.001 or 0.00001).
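For completeness, the calibration call looks roughly like this (a sketch using the gbmLogLik function above; the lower bound of 1e-8 on sigma is my choice):

% Calibration sketch: fmincon minimizes, so negate the log-likelihood.
negLL   = @(pars) -gbmLogLik(x, pars, Deltat);
pars0   = [0.15, 0.3];                        % starting values
lb      = [-Inf, 1e-8];                       % keep sigma strictly positive
ub      = [ Inf,  Inf];
parsHat = fmincon(negLL, pars0, [], [], [], [], lb, ub);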

Calibrating the volatility alone yields a nice likelihood function with a maximum near the true parameter. However, for small Deltat (less than, say, 0.1), calibrating both mu and sigma persistently produces a (log-)likelihood surface that is very flat in mu (at least around the true parameter), whereas I would expect a maximum there too. I think it should be possible to calibrate a GBM model to a series of 100 stock prices over one year, i.e. an average Deltat of 0.01.
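To see the flatness directly, I can profile the log-likelihood in mu while holding sigma at its true value (a sketch, again assuming gbmLogLik, x, sigma, and Deltat from above):

% Profile the log-likelihood in mu, holding sigma fixed at its true value.
muGrid = linspace(-0.5, 0.7, 121);
LLmu   = arrayfun(@(m) gbmLogLik(x, [m, sigma], Deltat), muGrid);
plot(muGrid, LLmu), xlabel('mu'), ylabel('log-likelihood')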

Any shared experience or help is greatly appreciated (thoughts passing through my mind: the likelihood function is wrong / this is normal behaviour / too few data points / the data generation is incorrect / ...?).
Thanks!
