Exponential linear units (ELUs) were introduced to alleviate the disadvantages of ReLU and LeakyReLU units: they push the mean activation closer to zero while still saturating to a negative value, which increases robustness against noise when the unit is in an off state (i.e. the input is very negative). The formula is f(x) = alpha * (exp(x) - 1) for x < 0 and f(x) = x for x >= 0. For the exact details see the corresponding paper. Corresponds to the Keras ELU Layer.
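As an illustration of the formula, here is a minimal NumPy sketch; the `elu` function and the sample inputs are our own for demonstration and are not part of the KNIME node or the Keras API:

```python
import numpy as np

def elu(x, alpha=1.0):
    # f(x) = x for x >= 0, f(x) = alpha * (exp(x) - 1) for x < 0
    return np.where(x >= 0, x, alpha * (np.exp(x) - 1.0))

x = np.array([-3.0, -1.0, 0.0, 2.0])
print(elu(x))  # negative inputs saturate towards -alpha, non-negatives pass through
```

Note how the negative branch approaches -alpha as the input becomes very negative, which is the saturation behavior described above.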
- Keras Network (Type: Keras Deep Learning Network): The Keras deep learning network to which to add an ELU layer.