FastRNNCell

LuxRecurrentLayers.FastRNNCell — Type

```julia
FastRNNCell(in_dims => out_dims, [activation];
    use_bias=true, train_state=false, init_bias=nothing,
    init_recurrent_bias=nothing, init_weight=nothing,
    init_recurrent_weight=nothing, init_state=zeros32,
    init_alpha=-3.0, init_beta=3.0)
```
Fast recurrent neural network cell.
Equations
\[\begin{aligned} \tilde{\mathbf{h}}(t) &= \sigma\left( \mathbf{W}_{ih} \mathbf{x}(t) + \mathbf{b}_{ih} + \mathbf{W}_{hh} \mathbf{h}(t-1) + \mathbf{b}_{hh} \right), \\ \mathbf{h}(t) &= \alpha \, \tilde{\mathbf{h}}(t) + \beta \, \mathbf{h}(t-1) \end{aligned}\]
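The update above can be sketched in plain Julia. This is an illustrative standalone sketch of the equations, not the package's internal implementation; the function name and signature are hypothetical:

```julia
# Hypothetical sketch of a single FastRNN step -- illustrates the
# equations only, not the cell's actual internals.
function fastrnn_step(Wih, bih, Whh, bhh, x, hprev, alpha, beta; act=tanh)
    hcand = act.(Wih * x .+ bih .+ Whh * hprev .+ bhh)  # candidate state h̃(t)
    return alpha .* hcand .+ beta .* hprev              # h(t) = α h̃(t) + β h(t-1)
end

# Toy sizes: in_dims = 3, out_dims = 5, batch_size = 2
Wih = randn(5, 3); Whh = randn(5, 5)
bih = zeros(5);    bhh = zeros(5)
x = randn(3, 2);   h0 = zeros(5, 2)
h1 = fastrnn_step(Wih, bih, Whh, bhh, x, h0, 0.5, 0.5)
```

Note that `alpha` and `beta` here act as fixed scalars; in the cell they are learnable parameters initialized from `init_alpha` and `init_beta`.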
Arguments

- `in_dims`: Input dimension.
- `out_dims`: Output (hidden state) dimension.
- `activation`: Activation function. Default is `tanh`.
Keyword arguments

- `use_bias`: Flag to use bias in the computation. Default set to `true`.
- `train_state`: Flag to set the initial hidden state as trainable. Default set to `false`.
- `init_bias`: Initializer for the input to hidden bias $\mathbf{b}_{ih}$. If set to `nothing`, the bias is initialized from a uniform distribution within `[-bound, bound]` where `bound = inv(sqrt(out_dims))`. Default is `nothing`.
- `init_recurrent_bias`: Initializer for the hidden to hidden bias $\mathbf{b}_{hh}$. If set to `nothing`, the bias is initialized from a uniform distribution within `[-bound, bound]` where `bound = inv(sqrt(out_dims))`. Default is `nothing`.
- `init_weight`: Initializer for the input to hidden weight $\mathbf{W}_{ih}$. If set to `nothing`, weights are initialized from a uniform distribution within `[-bound, bound]` where `bound = inv(sqrt(out_dims))`. Default is `nothing`.
- `init_recurrent_weight`: Initializer for the recurrent weight $\mathbf{W}_{hh}$. If set to `nothing`, weights are initialized from a uniform distribution within `[-bound, bound]` where `bound = inv(sqrt(out_dims))`. Default is `nothing`.
- `init_state`: Initializer for the hidden state. Default set to `zeros32`.
- `init_alpha`: Initializer for the learnable parameter $\alpha$. Default is `-3.0`.
- `init_beta`: Initializer for the learnable parameter $\beta$. Default is `3.0`.
Inputs

- Case 1a: Only a single input `x` of shape `(in_dims, batch_size)`, `train_state` is set to `false` - Creates a hidden state using `init_state` and proceeds to Case 2.
- Case 1b: Only a single input `x` of shape `(in_dims, batch_size)`, `train_state` is set to `true` - Repeats `hidden_state` from parameters to match the shape of `x` and proceeds to Case 2.
- Case 2: Tuple `(x, (h, ))` is provided, then the output and a tuple containing the updated hidden state is returned.
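Assuming the standard Lux recurrent-cell calling convention, the cases above can be sketched as follows (requires Lux and LuxRecurrentLayers; the exact return structure is as documented in Returns below):

```julia
using Lux, LuxRecurrentLayers, Random

rng = Random.default_rng()
cell = FastRNNCell(3 => 5)              # in_dims => out_dims
ps, st = Lux.setup(rng, cell)

x = rand(Float32, 3, 2)                 # (in_dims, batch_size)

# Case 1a/1b: single input -- the hidden state is created internally
(y, (h,)), st = cell(x, ps, st)

# Case 2: pass the carried hidden state explicitly on later steps
(y2, (h2,)), st = cell((x, (h,)), ps, st)
```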
Returns

- Tuple containing
  - Output $h_{new}$ of shape `(out_dims, batch_size)`
  - Tuple containing new hidden state $h_{new}$
- Updated model state
Parameters

- `weight_ih`: Weights to map from the input to the hidden state, $\mathbf{W}_{ih}$.
- `weight_hh`: Weights to map from hidden to the hidden state, $\mathbf{W}_{hh}$.
- `bias_ih`: Bias vector for the input-hidden connection, $\mathbf{b}_{ih}$ (not present if `use_bias=false`).
- `bias_hh`: Bias vector for the hidden-hidden connection, $\mathbf{b}_{hh}$ (not present if `use_bias=false`).
- `hidden_state`: Initial hidden state vector (not present if `train_state=false`).
- `alpha`: Learnable scalar to modulate the candidate state.
- `beta`: Learnable scalar to modulate the previous state.
States

- `rng`: Controls the randomness (if any) in the initial state generation.