Keras ELU Layer

Exponential linear units (ELUs) were introduced to alleviate disadvantages of ReLU and LeakyReLU units: they push the mean activation closer to zero while still saturating to a negative value, which increases robustness against noise when the unit is in an off state (i.e. the input is very negative). The formula is f(x) = alpha * (exp(x) - 1) for x < 0 and f(x) = x for x >= 0. For the exact details see the corresponding paper. Corresponds to the Keras ELU Layer.
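
As a quick illustration of this formula, here is a minimal NumPy sketch of the element-wise ELU (the function name and example values are illustrative only, not part of the node):

    import numpy as np

    def elu(x, alpha=1.0):
        # f(x) = x for x >= 0, alpha * (exp(x) - 1) for x < 0
        x = np.asarray(x, dtype=float)
        return np.where(x >= 0, x, alpha * (np.exp(x) - 1.0))

    print(elu([-3.0, -0.5, 0.0, 2.0]))  # negative inputs saturate towards -alpha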


Options

Name prefix
The name prefix of the layer. The prefix is complemented by an index suffix to obtain a unique layer name. If this option is unchecked, the name prefix is derived from the layer type.

Alpha
Scale for the negative factor of the exponential linear unit.

Input Ports

The Keras deep learning network to which to add an ELU layer.

Output Ports

The Keras deep learning network with an added ELU layer.
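
For orientation, the effect of this node roughly corresponds to appending an ELU activation layer in the Keras Python API. The sketch below assumes a small placeholder network standing in for the network arriving on the input port; the alpha value mirrors the node's Alpha option:

    from tensorflow.keras import layers, models

    # Illustrative upstream network, standing in for the input-port network.
    inputs = layers.Input(shape=(16,))
    hidden = layers.Dense(32)(inputs)

    # Append the ELU activation layer with the configured alpha.
    outputs = layers.ELU(alpha=1.0)(hidden)

    model = models.Model(inputs, outputs)
    model.summary()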


This node has no views




