oercim · 4 years ago
R Question

# Unstable estimates for GARCH(1,1) with different initial values using MLE in R

I want to estimate a GARCH(1,1) process manually by maximum likelihood, which is also related to my previous post.

I wrote the below code:

```
library(fGarch)

#generate returns
set.seed(1)
garch11 <- garchSpec(model = list())
y <- garchSim(garch11, n = 1000)

n <- length(y)
sigma2 <- vector(length = n)

#estimating the parameters manually by MLE; theta = c(omega, alpha, beta)
LL <- function(theta)
{
  #initialize with the unconditional variance
  sigma2[1] <- theta[1]/(1 - theta[2] - theta[3])
  #sigma2[1] <- var(y)

  for (i in 1:(n - 1))
  {
    sigma2[i + 1] <- theta[1] + theta[2]*y[i]^2 + theta[3]*sigma2[i]
  }

  R <- dnorm(y[1:n], 0, sqrt(sigma2[1:n]), log = TRUE)
  L <- -sum(R)
  return(L)
}

fit1 <- optim(c(0.1, 0., 0.1), LL)
fit2 <- optim(c(0.0001, 0.5, 0.1), LL)
```

I use two sets of initial values and get two different sets of estimates (probably local minima of the negative log-likelihood).

```
> fit1
$par
[1] 0.000007919175 0.139449431273 0.190760467531

$value
[1] -4264.499

$counts
function gradient
     250       NA

$convergence
[1] 0

$message
NULL

> fit2
$par
[1] 0.000001281119 0.100213870720 0.790732098375

$value
[1] -4275.474

$counts
function gradient
     314       NA

$convergence
[1] 0

$message
NULL
```

The fit2 results are very close to the true parameter values, while fit1 is far from them. fit2 also attains the lower negative log-likelihood (-4275.474 vs. -4264.499).
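For reference, the same simulated series can be fitted with fGarch's built-in `garchFit` (the package is already loaded above). This is a sketch of a sanity check, not part of the manual estimator:

```r
library(fGarch)

# re-simulate the same series as above
set.seed(1)
garch11 <- garchSpec(model = list())
y <- garchSim(garch11, n = 1000)

# reference fit of the same GARCH(1,1), no mean term
ref <- garchFit(~ garch(1, 1), data = y, include.mean = FALSE, trace = FALSE)
coef(ref)  # omega, alpha1, beta1, for comparison with fit1/fit2
```

If a manual fit disagrees sharply with `garchFit` on the same data, that is usually a sign the optimizer stopped at a poor local optimum.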

This is just an experiment for me. Once I am convinced the code works, I will use it to estimate some models' parameters. However, getting different results for different initial values makes me distrust the code.
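A crude way to probe this kind of instability is a multi-start: restart the optimizer from several starting points and keep the run with the lowest negative log-likelihood. Here is a self-contained sketch on placeholder white-noise data (my assumption, not from the original post), with `nll` as a stand-in for the `LL` above:

```r
# placeholder data; replace with the simulated GARCH series
set.seed(1)
y <- rnorm(1000, 0, 0.01)
n <- length(y)

# negative log-likelihood; theta = c(omega, alpha, beta)
nll <- function(theta) {
  sigma2 <- numeric(n)
  sigma2[1] <- var(y)
  for (i in 1:(n - 1))
    sigma2[i + 1] <- theta[1] + theta[2]*y[i]^2 + theta[3]*sigma2[i]
  if (any(sigma2 <= 0)) return(Inf)   # guard against invalid parameters
  -sum(dnorm(y, 0, sqrt(sigma2), log = TRUE))
}

# several starting points; keep the run with the lowest objective
starts <- list(c(0.1, 0.1, 0.1), c(1e-4, 0.5, 0.1), c(1e-5, 0.05, 0.9))
fits <- lapply(starts, optim, fn = nll)
best <- fits[[which.min(sapply(fits, `[[`, "value"))]]
best$par   # parameters from the best of the runs
```

This does not guarantee the global optimum, but agreement across several starts is reassuring, and disagreement flags exactly the problem seen with fit1 and fit2.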

How can I improve the code to get more stable estimates?

Eldioo

I also wanted to do a manual MLE once and found the `mle()` function very unstable due to its sensitivity to the choice of starting values. Trying the `solnp()` function from the `Rsolnp` package worked for me - here's the code:

```
#data <- "your data set"
library(Rsolnp)

#negative log-likelihood that we want to minimize
solfun <- function(vec)
{
  alpha <- vec[1]
  beta  <- vec[2]
  omega <- vec[3]
  mu    <- vec[4]  #only if you want to include a mean in the GARCH model, else set mu=0
  sig0  <- vec[5]  #starting-value estimation; can be replaced by sd(data)
  LL_ret <- numeric(length(data))
  sigs   <- numeric(length(data) + 1)
  sigs[1] <- sig0^2
  for (i in 2:(length(data) + 1))
    sigs[i] <- omega + alpha*(data[i-1] - mu)^2 + beta*sigs[i-1]
  for (i in 1:length(data))
    LL_ret[i] <- -0.5*log(2*pi*sigs[i]) - ((data[i] - mu)^2)/(2*sigs[i])
  return(sum(-LL_ret))
}

#optimization (mu and sig0 only if you use them in solfun):
sol <- solnp(pars = c(alpha = 0.1, beta = 0.9, omega = 0, mu = mean(data), sig0 = sd(data)),
             fun = solfun, LB = c(0, 0.7, 0, -1, 0), UB = c(0.3, 1, 1, 1, 1))

#plugging the estimated parameters into the function for sigma^2 estimates:
LL2 <- function(alpha, beta, omega, mu, sig0)
{
  LL_ret <- numeric(length(data))
  sigs   <- numeric(length(data) + 1)
  sigs[1] <- sig0^2
  for (i in 2:(length(data) + 1))
    sigs[i] <- omega + alpha*(data[i-1] - mu)^2 + beta*sigs[i-1]
  for (i in 1:length(data))
    LL_ret[i] <- -0.5*log(2*pi*sigs[i]) - ((data[i] - mu)^2)/(2*sigs[i])
  return(list(LL_ret, sigs))
}

L2 <- LL2(alpha = sol$pars[1], beta = sol$pars[2],
          omega = sol$pars[3], mu = sol$pars[4], sig0 = sol$pars[5])

#Summarizing results of log likelihood, parameters and sigma estimates:
GG1 <- list(sigma = sqrt(L2[[2]]), llh = L2[[1]], llhs = sum(L2[[1]]), coef = sol$pars)
```

If you don't want to estimate the mean and the starting value, just remove those parts. Also, you'll obviously have to adjust the parameter bounds in the optimization to fit your data (in my case it was financial return data). The bounds don't have to be nearly as exact as the starting values required by the `mle()` function, though.
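Another way to stabilize a manual GARCH likelihood, sketched here on placeholder white-noise data (my assumption, not part of the original answer), is to optimize the logs of the parameters, so that every point the optimizer visits maps back to strictly positive coefficients without needing explicit bounds:

```r
# placeholder data; replace with your return series
set.seed(1)
y <- rnorm(1000, 0, 0.01)
n <- length(y)

# negative log-likelihood in log-parameters: psi = log(c(omega, alpha, beta))
nll_log <- function(psi) {
  theta <- exp(psi)              # back-transform: parameters always positive
  sigma2 <- numeric(n)
  sigma2[1] <- var(y)            # simple, always-valid starting variance
  for (i in 1:(n - 1))
    sigma2[i + 1] <- theta[1] + theta[2]*y[i]^2 + theta[3]*sigma2[i]
  -sum(dnorm(y, 0, sqrt(sigma2), log = TRUE))
}

fit <- optim(log(c(1e-4, 0.1, 0.8)), nll_log)
exp(fit$par)                     # back-transformed omega, alpha, beta
```

Because the conditional variances then stay positive for any `psi`, the likelihood surface has no invalid regions for the optimizer to stumble into, which removes one common source of sensitivity to starting values.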
