Keras GRU Layer

This Node Is Deprecated. This version of the node has been replaced with a new and improved version. The old version is kept for backwards compatibility, but for all new workflows we suggest using the version linked below.
Suggested replacement: Keras GRU Layer

Gated recurrent unit as introduced by Cho et al. There are two variants. The default one is based on 1406.1078v3 and has the reset gate applied to the hidden state before the matrix multiplication. The other one is based on the original 1406.1078v1 and has the order reversed. Corresponds to the Keras GRU layer.
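For illustration, a minimal sketch of the two conventions expressed with the plain Keras GRU layer (the reset_after parameter name is taken from the Keras API; the units value is an arbitrary example):

from tensorflow.keras.layers import GRU

# Default variant (1406.1078v3): reset gate applied to the hidden state
# before the matrix multiplication.
gru_v3 = GRU(units=64, reset_after=False)

# Original variant (1406.1078v1): reset gate applied after the matrix
# multiplication.
gru_v1 = GRU(units=64, reset_after=True)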

Options

Name prefix
The name prefix of the layer. The prefix is complemented by an index suffix to obtain a unique layer name. If this option is unchecked, the name prefix is derived from the layer type.
Input tensor
The tensor to use as input for the layer.
Hidden state tensor
The tensor to use as initial hidden state in case the corresponding port is connected.
Units
Dimensionality of the output space.
Activation
The activation function to use on the input transformations.
Recurrent activation
The activation function to use for the recurrent step.
Use bias
If checked, a bias vector will be used.
Return sequences
Whether to return the last output in the output sequence or the full output sequence. If selected, the output will have shape [time, units]; otherwise the output will have shape [units].
Return state
Whether to return the hidden state in addition to the layer output. If selected, the layer returns two tensors: the normal output and the hidden state of the layer. If sequences are returned, this also applies to the hidden state.
Dropout
Fraction of the units to drop for the linear transformation of the input.
Recurrent dropout
Fraction of the units to drop for the linear transformation of the recurrent state.
Go backwards
Whether to go backwards in time, i.e. read the input sequence backwards.
Unroll
Whether to unroll the network, i.e. convert it into a feed-forward network that reuses the layer's weights for each time step. Unrolling can speed up an RNN, but it is more memory-intensive and only suitable for short sequences. If the layer is not unrolled, a symbolic loop is used.
Implementation
Mode 1 will structure its operations as a larger number of smaller dot products and additions, whereas mode 2 will batch them into fewer, larger operations. These modes will have different performance profiles on different hardware and for different applications.
Reset after
GRU convention (whether to apply reset gate after or before matrix multiplication).
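As a sketch, these options map onto keyword arguments of the Keras GRU layer as follows; the parameter names follow the Keras API, and the values shown are arbitrary examples, not the node's defaults:

from tensorflow.keras.layers import GRU

gru = GRU(
    units=64,                             # Units: dimensionality of the output space
    activation="tanh",                    # Activation
    recurrent_activation="hard_sigmoid",  # Recurrent activation
    use_bias=True,                        # Use bias
    return_sequences=True,                # Return sequences: output shape [time, units]
    return_state=False,                   # Return state: also return the hidden state
    dropout=0.0,                          # Dropout
    recurrent_dropout=0.0,                # Recurrent dropout
    go_backwards=False,                   # Go backwards
    unroll=False,                         # Unroll
    implementation=1,                     # Implementation: mode 1 or 2
    reset_after=False,                    # Reset after: GRU convention
)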

Initializers

Kernel initializer
Initializer for the weight matrix used for the linear transformations of the input. See initializers for details on the available initializers.
Recurrent initializer
Initializer for the weight matrix used for the linear transformation of the recurrent connection. See initializers for details on the available initializers.
Bias initializer
Initializer for the bias vector (if a bias is used). See initializers for details on the available initializers.
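For illustration, the three initializer options as keyword arguments of the Keras GRU layer (the specific initializers chosen here are only examples):

from tensorflow.keras.layers import GRU

gru = GRU(
    units=64,
    kernel_initializer="glorot_uniform",  # Kernel initializer: input weight matrix
    recurrent_initializer="orthogonal",   # Recurrent initializer: recurrent weight matrix
    bias_initializer="zeros",             # Bias initializer: bias vector (if a bias is used)
)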

Regularizers

Kernel regularizer
Regularizer function applied to the weight matrix. See regularizers for details on the available regularizers.
Recurrent regularizer
Regularizer function applied to the weight matrix for the recurrent connection. See regularizers for details on the available regularizers.
Bias regularizer
Regularizer function applied to the bias vector. See regularizers for details on the available regularizers.
Activity regularizer
Regularizer function applied to the output of the layer i.e. its activation. See regularizers for details on the available regularizers.
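A sketch of the four regularizer options as Keras keyword arguments (the L2 regularizer and its factor are only example choices):

from tensorflow.keras.layers import GRU
from tensorflow.keras.regularizers import l2

gru = GRU(
    units=64,
    kernel_regularizer=l2(1e-4),     # Kernel regularizer: input weight matrix
    recurrent_regularizer=l2(1e-4),  # Recurrent regularizer: recurrent weight matrix
    bias_regularizer=l2(1e-4),       # Bias regularizer: bias vector
    activity_regularizer=l2(1e-4),   # Activity regularizer: layer output
)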

Constraints

Kernel constraint
Constraint on the weight matrix for the input connection. See constraints for details on the available constraints.
Recurrent constraint
Constraint on the weight matrix for the recurrent connection. See constraints for details on the available constraints.
Bias constraint
Constraint on the bias vector. See constraints for details on the available constraints.
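A sketch of the three constraint options as Keras keyword arguments (max_norm is only an example constraint):

from tensorflow.keras.layers import GRU
from tensorflow.keras.constraints import max_norm

gru = GRU(
    units=64,
    kernel_constraint=max_norm(3.0),     # Kernel constraint: input weight matrix
    recurrent_constraint=max_norm(3.0),  # Recurrent constraint: recurrent weight matrix
    bias_constraint=max_norm(3.0),       # Bias constraint: bias vector
)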

Input Ports

Icon
The Keras deep learning network to which to add a GRU layer.
Icon
An optional Keras deep learning network providing the initial state for this GRU layer. The hidden state must have shape [units], where units must correspond to the number of units this layer uses.
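In plain Keras terms, connecting the optional hidden-state port roughly corresponds to passing an initial_state tensor of shape [units] when the layer is called; the shapes below are example values:

from tensorflow.keras import Input
from tensorflow.keras.layers import GRU

units = 64
sequence = Input(shape=(None, 16))     # input tensor: [time, features]
initial_state = Input(shape=(units,))  # initial hidden state: shape [units]

# The size of the initial hidden state must match the number of units of the layer.
outputs = GRU(units)(sequence, initial_state=initial_state)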

Output Ports

Icon
The Keras deep learning network with an added GRU layer.

Views

This node has no views
