Keras ELU Layer

Exponential linear units (ELUs) were introduced to alleviate the disadvantages of ReLU and LeakyReLU units. They push the mean activation closer to zero while still saturating to a negative value, which increases robustness against noise when the unit is in an off state (i.e. the input is very negative). The formula is f(x) = alpha * (exp(x) - 1) for x < 0 and f(x) = x for x >= 0. For the exact details, see the corresponding paper. Corresponds to the Keras ELU Layer.
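
For illustration only, the activation can be written out directly. The following is a minimal NumPy sketch of the formula above; the function name and the default alpha = 1.0 are illustrative, not part of this node:

    import numpy as np

    def elu(x, alpha=1.0):
        # f(x) = x                      for x >= 0
        # f(x) = alpha * (exp(x) - 1)   for x <  0
        # expm1 plus the clamp keeps the exponential numerically safe for large positive x
        return np.where(x >= 0, x, alpha * np.expm1(np.minimum(x, 0.0)))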

Options

Name prefix
The name prefix of the layer. The prefix is complemented by an index suffix to obtain a unique layer name. If this option is unchecked, the name prefix is derived from the layer type.
Alpha
Scale factor applied to the negative part of the exponential linear unit, i.e. the alpha in the formula above.
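
For orientation, adding this layer roughly corresponds to the following Keras (Python) sketch; the toy network, the layer name "elu_1", and alpha = 1.0 are illustrative assumptions rather than output of this node:

    from tensorflow import keras
    from tensorflow.keras import layers

    # Append an ELU activation to an existing network; alpha corresponds to the
    # node's "Alpha" option and name to the configured name prefix.
    inputs = keras.Input(shape=(16,))
    x = layers.Dense(32)(inputs)
    outputs = layers.ELU(alpha=1.0, name="elu_1")(x)
    model = keras.Model(inputs, outputs)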

Input Ports

The Keras deep learning network to which to add an ELU layer.

Output Ports

The Keras deep learning network with an added ELU layer.

Views

This node has no views
