FastGRNNCell
LuxRecurrentLayers.FastGRNNCell — Type

```julia
FastGRNNCell(input_size => hidden_size, [activation];
    use_bias=true, use_recurrent_bias=true, train_state=false,
    init_bias=nothing, init_recurrent_bias=nothing, init_weight=nothing,
    init_recurrent_weight=nothing, init_state=zeros32,
    init_zeta=1.0, init_nu=4.0)
```

Fast gated recurrent neural network cell.
Equations
\[\begin{aligned} \mathbf{z}(t) &= \sigma\left( \mathbf{W}_{ih} \mathbf{x}(t) + \mathbf{b}_{ih}^{z} + \mathbf{W}_{hh} \mathbf{h}(t-1) + \mathbf{b}_{hh}^{z} \right), \\ \tilde{\mathbf{h}}(t) &= \tanh\left( \mathbf{W}_{ih} \mathbf{x}(t) + \mathbf{b}_{ih}^{h} + \mathbf{W}_{hh} \mathbf{h}(t-1) + \mathbf{b}_{hh}^{h} \right), \\ \mathbf{h}(t) &= \left( \left( \sigma(\zeta) (1 - \mathbf{z}(t)) + \sigma(\nu) \right) \circ \tilde{\mathbf{h}}(t) \right) + \mathbf{z}(t) \circ \mathbf{h}(t-1) \end{aligned}\]
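To make the update concrete, here is a minimal transcription of the equations above into plain Julia. All names (`fastgrnn_step`, the bias arguments) are illustrative, not the cell's internal implementation. Note that $\mathbf{W}_{ih}$ and $\mathbf{W}_{hh}$ are shared between the gate and the candidate; only the biases differ.

```julia
# Illustrative sketch of one FastGRNN step; not the library's internals.
sigmoid(x) = one(x) / (one(x) + exp(-x))

function fastgrnn_step(x, h, W_ih, W_hh, bz_ih, bz_hh, bh_ih, bh_hh, ζ, ν)
    pre = W_ih * x .+ W_hh * h             # shared affine map for gate and candidate
    z = sigmoid.(pre .+ bz_ih .+ bz_hh)    # update gate z(t)
    h̃ = tanh.(pre .+ bh_ih .+ bh_hh)       # candidate state h̃(t)
    # ζ and ν are learnable scalars, passed through a sigmoid before use
    return (sigmoid(ζ) .* (1 .- z) .+ sigmoid(ν)) .* h̃ .+ z .* h
end
```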
Arguments
- `in_dims`: Input dimension
- `out_dims`: Output (hidden state) dimension
- `activation`: Activation function. Default is `tanh`
Keyword arguments
- `use_bias`: Flag to use the input-to-hidden bias $\mathbf{b}_{ih}$ in the computation. Default set to `true`.
- `use_recurrent_bias`: Flag to use the recurrent bias $\mathbf{b}_{hh}$ in the computation. Default set to `true`.
- `train_state`: Flag to set the initial hidden state as trainable. Default set to `false`.
- `init_bias`: Initializer for the input-to-hidden biases $\mathbf{b}_{ih}^z, \mathbf{b}_{ih}^h$. Must be a tuple containing 2 functions, e.g., `(glorot_normal, kaiming_uniform)`. If a single function `fn` is provided, it is automatically expanded into a 2-element tuple `(fn, fn)` (see the sketch after this list). If set to `nothing`, the biases are initialized from a uniform distribution within `[-bound, bound]` where `bound = inv(sqrt(out_dims))`. Default is `nothing`.
- `init_recurrent_bias`: Initializer for the hidden-to-hidden biases $\mathbf{b}_{hh}^z, \mathbf{b}_{hh}^h$. Must be a tuple containing 2 functions, e.g., `(glorot_normal, kaiming_uniform)`. If a single function `fn` is provided, it is automatically expanded into a 2-element tuple `(fn, fn)`. If set to `nothing`, the biases are initialized from a uniform distribution within `[-bound, bound]` where `bound = inv(sqrt(out_dims))`. Default is `nothing`.
- `init_weight`: Initializer for the input-to-hidden weight $\mathbf{W}_{ih}$. If set to `nothing`, weights are initialized from a uniform distribution within `[-bound, bound]` where `bound = inv(sqrt(out_dims))`. Default is `nothing`.
- `init_recurrent_weight`: Initializer for the recurrent weight $\mathbf{W}_{hh}$. If set to `nothing`, weights are initialized from a uniform distribution within `[-bound, bound]` where `bound = inv(sqrt(out_dims))`. Default is `nothing`.
- `init_state`: Initializer for the hidden state. Default set to `zeros32`.
- `init_zeta`: Initializer for the learnable parameter $\zeta$. Default is `1.0`.
- `init_nu`: Initializer for the learnable parameter $\nu$. Default is `4.0`.
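As an illustration of the tuple-expansion behavior described above, the following hedged sketch constructs a cell with per-gate bias initializers. It assumes the `glorot_normal` and `kaiming_uniform` initializers from WeightInitializers.jl; the argument values are chosen for illustration only.

```julia
using LuxRecurrentLayers
using WeightInitializers: glorot_normal, kaiming_uniform

# Tuple form: the first function initializes b_ih^z, the second b_ih^h.
cell = FastGRNNCell(4 => 8;
    init_bias=(glorot_normal, kaiming_uniform),
    init_weight=glorot_normal,       # a single fn applies to W_ih
    init_zeta=1.0, init_nu=4.0)      # defaults, shown explicitly
```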
Inputs
- Case 1a: Only a single input `x` of shape `(in_dims, batch_size)`, `train_state` is set to `false` - Creates a hidden state using `init_state` and proceeds to Case 2.
- Case 1b: Only a single input `x` of shape `(in_dims, batch_size)`, `train_state` is set to `true` - Repeats `hidden_state` from parameters to match the shape of `x` and proceeds to Case 2.
- Case 2: Tuple `(x, (h,))` is provided, then the output and a tuple containing the updated hidden state is returned. A usage sketch covering these cases appears after the Returns section below.
Returns
Tuple containing
- Output $h_{new}$ of shape `(out_dims, batch_size)`
- Tuple containing new hidden state $h_{new}$

Updated model state
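The sketch below follows the standard Lux.jl calling convention for recurrent cells (`Lux.setup` to create parameters and state, then `cell(input, ps, st)`); it is a minimal illustration of Cases 1a and 2 and of the return structure, with sizes chosen arbitrarily.

```julia
using Lux, LuxRecurrentLayers, Random

rng = Random.default_rng()
cell = FastGRNNCell(4 => 8)             # in_dims => out_dims
ps, st = Lux.setup(rng, cell)

x = rand(Float32, 4, 16)                # (in_dims, batch_size)

# Case 1a: single input; the hidden state is created via init_state
(y, (h,)), st = cell(x, ps, st)         # y == h, shape (8, 16)

# Case 2: pass the carried state explicitly for the next time step
(y2, (h2,)), st = cell((x, (h,)), ps, st)
```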
Parameters
- `weight_ih`: Concatenated weights mapping the input to the hidden state, $\mathbf{W}_{ih}$.
- `weight_hh`: Concatenated weights mapping the hidden state to the hidden state, $\mathbf{W}_{hh}$.
- `bias_ih`: Bias vector for the input-hidden connection (not present if `use_bias=false`) $\{ \mathbf{b}_{ih}^z, \mathbf{b}_{ih}^h \}$. The initializers in `init_bias` are applied in the order they appear: the first function is used for $\mathbf{b}_{ih}^z$, and the second for $\mathbf{b}_{ih}^h$.
- `bias_hh`: Bias vector for the hidden-hidden connection (not present if `use_recurrent_bias=false`) $\{ \mathbf{b}_{hh}^z, \mathbf{b}_{hh}^h \}$. The initializers in `init_recurrent_bias` are applied in the order they appear: the first function is used for $\mathbf{b}_{hh}^z$, and the second for $\mathbf{b}_{hh}^h$.
- `hidden_state`: Initial hidden state vector (not present if `train_state=false`).
- `zeta`: Learnable scalar that modulates the candidate state.
- `nu`: Learnable scalar offset in the candidate-state coefficient (see the update equation above). A quick parameter-inspection sketch follows this list.
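Under the same Lux.jl conventions assumed earlier, the parameter `NamedTuple` returned by `Lux.setup` should carry the fields listed above; a hedged inspection sketch:

```julia
using Lux, LuxRecurrentLayers, Random

ps, _ = Lux.setup(Random.default_rng(), FastGRNNCell(4 => 8))
# Field names per this docstring; the exact ordering may differ.
keys(ps)  # (:weight_ih, :weight_hh, :bias_ih, :bias_hh, :zeta, :nu)
```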
States
- `rng`: Controls the randomness (if any) in the initial state generation