# Keras ELU Layer

Exponential linear units (ELUs) were introduced to alleviate the disadvantages of ReLU and LeakyReLU units: they push the mean activation closer to zero while still saturating to a negative value, which increases robustness against noise when the unit is in an off state (i.e. the input is very negative). The formula is `f(x) = alpha * (exp(x) - 1)` for `x < 0` and `f(x) = x` for `x >= 0`. For the exact details, see the corresponding paper. Corresponds to the Keras ELU layer.
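The formula above can be sketched in plain Python (a minimal illustration of the activation itself, not of the KNIME node; in Keras the corresponding layer is `keras.layers.ELU`, whose `alpha` parameter matches the `alpha` below):

```python
import math

def elu(x: float, alpha: float = 1.0) -> float:
    """ELU activation: alpha * (exp(x) - 1) for x < 0, x for x >= 0."""
    return x if x >= 0 else alpha * (math.exp(x) - 1.0)

# For x >= 0 the unit is the identity; for very negative x it
# saturates to -alpha instead of growing without bound.
print(elu(2.0))     # -> 2.0
print(elu(0.0))     # -> 0.0
print(elu(-100.0))  # close to -1.0 (saturation at -alpha)
```

Note how the negative branch is bounded below by `-alpha`, which is what gives the unit its noise robustness in the off state.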

### Input Ports

- Type: Keras Deep Learning Network. The Keras deep learning network to which to add an `ELU` layer.

### Output Ports

- Type: Keras Deep Learning Network. The Keras deep learning network with an added `ELU` layer.
