Exponential linear units (ELUs) were introduced to address shortcomings of the ReLU and LeakyReLU activations: they push the mean activation closer to zero while still saturating to a negative value, which increases robustness against noise when the unit is in an off state (i.e. the input is strongly negative).
The formula is f(x) = alpha * (exp(x) - 1) for x < 0 and f(x) = x for x >= 0.
For the exact details see the ELU paper (Clevert et al., 2016).
Corresponds to the Keras ELU Layer.
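The piecewise formula above can be sketched in plain Python (a minimal illustration of the activation itself, not KNIME or Keras code; the default alpha = 1.0 matches the common Keras convention):

```python
import math

def elu(x, alpha=1.0):
    # For x >= 0 the unit is linear; for x < 0 it saturates toward -alpha.
    return x if x >= 0 else alpha * (math.exp(x) - 1.0)

# Negative inputs approach -alpha, giving the noise-robust "off" state:
print(elu(2.0))    # 2.0
print(elu(-1.0))   # about -0.632
print(elu(-10.0))  # about -0.99995, close to -alpha
```

Note how strongly negative inputs all map to values near -alpha, unlike LeakyReLU, whose output keeps growing more negative with the input.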
To use this node in KNIME, install the KNIME Deep Learning - Keras Integration extension from the update site listed below, following the NodePit Product and Node Installation Guide.
A zipped version of the update site can also be downloaded for offline installation.