Dylan Siegler - 2 months ago

Java Question

I am writing a basic neural network in Java and I am writing the activation functions (currently I have just written the sigmoid function). I am trying to use `double` values rather than `BigDecimal`. My function is:

```java
public static double sigmoid(double t) {
    return (1 / (1 + Math.pow(Math.E, -t)));
}
```

This function returns pretty precise values all the way down to `t = -100`, but once `t >= 37` it just returns `1.0`.
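The cutoff at `t >= 37` can be checked directly: near 1.0, adjacent `double` values are spaced about `2.2e-16` apart (`Math.ulp(1.0)`), and `e^-37` is already smaller than half that spacing, so `1 + e^-37` rounds to exactly `1.0`. A small sketch (the class name is illustrative):

```java
// Demonstrates why sigmoid saturates in double precision: once e^(-t)
// drops below half the spacing between doubles near 1.0, the sum
// 1 + e^(-t) rounds to exactly 1.0 and the quotient is exactly 1.0.
public class SigmoidSaturation {

    static double sigmoid(double t) {
        return 1 / (1 + Math.exp(-t));
    }

    public static void main(String[] args) {
        // Spacing between 1.0 and the next representable double: ~2.22e-16
        System.out.println("ulp(1.0)  = " + Math.ulp(1.0));
        // e^-37 is ~8.5e-17, below half an ulp of 1.0
        System.out.println("exp(-37)  = " + Math.exp(-37));
        // Still strictly below 1.0, since e^-36 exceeds half an ulp
        System.out.println("sigmoid(36) = " + sigmoid(36));
        // Exactly 1.0: the addition already rounded e^-37 away
        System.out.println("sigmoid(37) = " + sigmoid(37));
    }
}
```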

Answer

The surprising answer is that `double` actually has more precision than you need. This blog article by Pete Warden claims that even 8 bits are enough precision. And this is not just an academic idea: NVIDIA's Pascal chips emphasize their single-precision performance above everything else, because that is what matters for deep-learning training.

You should be normalizing your input neuron values. If extreme values still happen, it is fine to clamp them to -1 or +1. In fact, this answer shows doing that explicitly. (The other answers on that question are also interesting too, e.g. the suggestion to just pre-calculate 100 or so values, and not use `Math.exp()` or `Math.pow()` at all!)
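A minimal sketch combining both ideas, clamping extreme inputs and looking up precomputed values instead of calling `Math.exp()` per evaluation. The class name, table size, and clamp range here are illustrative choices, not taken from the linked answers:

```java
// Sketch: sigmoid via a small lookup table over a clamped input range.
// Beyond roughly |t| > 8 the sigmoid is within ~3e-4 of 0 or 1, so
// saturating there loses very little accuracy for a trained network.
public class FastSigmoid {
    private static final int SIZE = 1000;
    private static final double MIN = -8.0, MAX = 8.0;
    private static final double[] TABLE = new double[SIZE];

    static {
        // Precompute sigmoid at SIZE evenly spaced points in [MIN, MAX].
        for (int i = 0; i < SIZE; i++) {
            double t = MIN + (MAX - MIN) * i / (SIZE - 1);
            TABLE[i] = 1 / (1 + Math.exp(-t));
        }
    }

    /** Clamp t into [MIN, MAX], then return the nearest precomputed value. */
    public static double sigmoid(double t) {
        if (t <= MIN) return 0.0;  // saturated low
        if (t >= MAX) return 1.0;  // saturated high
        int i = (int) Math.round((t - MIN) / (MAX - MIN) * (SIZE - 1));
        return TABLE[i];
    }
}
```

With 1000 entries over [-8, 8] the table spacing is about 0.016, so nearest-neighbor lookup is accurate to roughly 0.004 (the sigmoid's maximum slope is 0.25), which is far coarser than `double` precision but, per the answer above, still plenty for a neural network.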

Source (Stack Overflow)
