Question

R optim() L-BFGS-B needs finite values of 'fn' - Weibull

I am trying to estimate the three parameters a, b0 and b1 with the optim() function, but I always get the error:
Error in optim(par = c(1, 1, 1), fn = logweibull, method = "L-BFGS-B", :
L-BFGS-B needs finite values of 'fn'

t<-c(6,6,6,6,7,9,10,10,11,13,16,17,19,20,22,23,25,32,32,34,35,1,1,2,2,3,4,4,5,5,8,8,8,8,11,11,12,12,15,17,22,23)
d<-c(0,1,1,1,1,0,0,1,0,1,1,0,0,0,1,1,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1)
X<-c(1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)

logweibull <- function (a,b0,b1) {
  a <- v[1]; b0 <- v[2]; b1 <- v[3]
  sum(d*log(t^a*exp(b0+X*b1)-t^a*exp(b0+X*b1))) + sum(d + log((a*t^(a-1))/t^a))
}

v<-c(1,1,1)

optim(par = c(1,1,1), fn = logweibull, method = "L-BFGS-B", lower = c(0.1, 0.1, 0.1), upper = c(100, 100, 100), control = list(fnscale = -1))


Can you help me? Do you know what I did wrong?

Answer

The direct cause of the error is that your objective is never finite: its first term, d*log(t^a*exp(b0+X*b1) - t^a*exp(b0+X*b1)), is d*log(0), so optim() always sees a non-finite value of fn. Beyond fixing that, you may also consider

(1) passing the additional data variables (t, d, X) to the objective function along with the parameters you want to estimate,

(2) passing an explicit gradient function (added below), and

(3) simplifying the original objective function (as below).

logweibull <- function (v, t, d, X) {
  a  <- v[1]   # shape parameter
  b0 <- v[2]   # intercept
  b1 <- v[3]   # coefficient of the covariate X
  sum(d*(1 + a*log(t) + b0 + X*b1) - t^a*exp(b0+X*b1) + log(a/t)) # simplified objective
}
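
As a quick sanity check (not part of the original answer), the rewritten objective is finite at the starting values c(1,1,1), which is exactly what L-BFGS-B requires:

logweibull(c(1, 1, 1), t = t, d = d, X = X)  # returns a finite value, unlike the original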

grad.logweibull <- function (v, t, d, X) {
  a  <- v[1]
  b0 <- v[2]
  b1 <- v[3]
  c(sum(d*log(t) - t^a*log(t)*exp(b0+X*b1) + 1/a),  # partial derivative w.r.t. a
    sum(d - t^a*exp(b0+X*b1)),                      # partial derivative w.r.t. b0
    sum(d*X - t^a*X*exp(b0+X*b1)))                  # partial derivative w.r.t. b1
}

optim(par=c(1,1,1), fn = logweibull, gr = grad.logweibull,
      method = "L-BFGS-B",
      lower = c(0.1, 0.1,0.1), 
      upper = c(100, 100,100),
      control = list(fnscale = -1), 
      t=t, d=d, X=X)

with output

$par
[1] 0.2604334 0.1000000 0.1000000

$value
[1] -191.5938

$counts
function gradient 
      10       10 

$convergence
[1] 0

$message
[1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"

Also, below is a comparison of the convergence with and without an explicit gradient function (the latter using finite differences). With an explicit gradient function it takes 9 iterations to converge to the solution, whereas without it (using finite differences) it takes 126 iterations.

[Plot: convergence comparison, with vs. without an explicit gradient function]
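
A rough way to reproduce such a comparison (a sketch, not from the original answer) is to run optim() with and without the gr argument and look at the evaluation counts it reports:

fit.grad <- optim(par = c(1,1,1), fn = logweibull, gr = grad.logweibull,
                  method = "L-BFGS-B",
                  lower = c(0.1, 0.1, 0.1), upper = c(100, 100, 100),
                  control = list(fnscale = -1), t = t, d = d, X = X)

fit.fd <- optim(par = c(1,1,1), fn = logweibull,   # no gr: L-BFGS-B uses finite differences
                method = "L-BFGS-B",
                lower = c(0.1, 0.1, 0.1), upper = c(100, 100, 100),
                control = list(fnscale = -1), t = t, d = d, X = X)

fit.grad$counts  # function/gradient evaluations with the analytical gradient
fit.fd$counts    # evaluations when the gradient is approximated numerically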