GatedAntisymmetricRNNCell
LuxRecurrentLayers.GatedAntisymmetricRNNCell — Type

```julia
GatedAntisymmetricRNNCell(in_dims => out_dims, [activation];
    use_bias=true, use_recurrent_bias=true,
    train_state=false, init_bias=nothing,
    init_recurrent_bias=nothing, init_weight=nothing,
    init_recurrent_weight=nothing, init_state=zeros32,
    epsilon=1.0, gamma=0.0)
```

Antisymmetric recurrent cell with gating.
Equations
\[\begin{aligned}
\mathbf{z}(t) &= \sigma\left( (\mathbf{W}_{hh} - \mathbf{W}_{hh}^\top - \gamma \cdot \mathbf{I}) \mathbf{h}(t-1) + \mathbf{b}_{hh} + \mathbf{W}_{ih}^{z} \mathbf{x}(t) + \mathbf{b}_{ih}^{z} \right), \\
\mathbf{h}(t) &= \mathbf{h}(t-1) + \epsilon \cdot \mathbf{z}(t) \circ \tanh\left( (\mathbf{W}_{hh} - \mathbf{W}_{hh}^\top - \gamma \cdot \mathbf{I}) \mathbf{h}(t-1) + \mathbf{b}_{hh} + \mathbf{W}_{ih}^{h} \mathbf{x}(t) + \mathbf{b}_{ih}^{h} \right).
\end{aligned}\]
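For intuition, a single update step can be written directly from these equations. The following is a minimal sketch assuming dense weight matrices and the default `tanh` activation; the function and variable names are illustrative and do not correspond to the cell's internal fields.

```julia
using LinearAlgebra  # provides the identity operator `I`

# One gated antisymmetric update step, following the equations above.
# Whh: (out, out) recurrent weights; Wz, Wh: (out, in) input weights;
# bhh, bz, bh: bias vectors; h: hidden state; x: input.
function gated_antisymmetric_step(h, x, Whh, Wz, Wh, bhh, bz, bh;
        epsilon=1.0f0, gamma=0.0f0)
    A = Whh - Whh' - gamma * I                                 # (W_hh - W_hh' - γI)
    z = 1 ./ (1 .+ exp.(-(A * h .+ bhh .+ Wz * x .+ bz)))      # sigmoid gate z(t)
    return h .+ epsilon .* z .* tanh.(A * h .+ bhh .+ Wh * x .+ bh)
end
```

The antisymmetric recurrent matrix keeps the Jacobian of the hidden-state dynamics close to purely imaginary eigenvalues, while `gamma` adds diffusion for stability and `epsilon` controls the Euler step size.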
Arguments
- `in_dims`: Input Dimension
- `out_dims`: Output (Hidden State) Dimension
- `activation`: Activation function. Default is `tanh`
Keyword arguments
- `use_bias`: Flag to use bias $\mathbf{b}_{ih}$ in the computation. Default set to `true`.
- `use_recurrent_bias`: Flag to use recurrent bias $\mathbf{b}_{hh}$ in the computation. Default set to `true`.
- `train_state`: Flag to set the initial hidden state as trainable. Default set to `false`.
- `init_bias`: Initializer for the input to hidden biases $\mathbf{b}_{ih}^z, \mathbf{b}_{ih}^h$. Must be a tuple containing 2 functions, e.g., `(glorot_normal, kaiming_uniform)`. If a single function `fn` is provided, it is automatically expanded into a 2-element tuple `(fn, fn)`. If set to `nothing`, the biases are initialized from a uniform distribution within `[-bound, bound]` where `bound = inv(sqrt(out_dims))`. Default is `nothing`.
- `init_recurrent_bias`: Initializer for the hidden to hidden bias $\mathbf{b}_{hh}$. If set to `nothing`, the bias is initialized from a uniform distribution within `[-bound, bound]` where `bound = inv(sqrt(out_dims))`. Default is `nothing`.
- `init_weight`: Initializer for the input to hidden weights $\mathbf{W}_{ih}^z, \mathbf{W}_{ih}^h$. Must be a tuple containing 2 functions, e.g., `(glorot_normal, kaiming_uniform)`. If a single function `fn` is provided, it is automatically expanded into a 2-element tuple `(fn, fn)`. If set to `nothing`, the weights are initialized from a uniform distribution within `[-bound, bound]` where `bound = inv(sqrt(out_dims))`. Default is `nothing`. See the constructor sketch after this list for an example.
- `init_recurrent_weight`: Initializer for the recurrent weight $\mathbf{W}_{hh}$. If set to `nothing`, the weights are initialized from a uniform distribution within `[-bound, bound]` where `bound = inv(sqrt(out_dims))`. Default is `nothing`.
- `init_state`: Initializer for the hidden state. Default set to `zeros32`.
- `epsilon`: Step size $\epsilon$. Default is 1.0.
- `gamma`: Strength of diffusion $\gamma$. Default is 0.0.
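As a sketch of how these keywords can be combined, the example below constructs a cell with explicit initializers. It assumes `glorot_normal`, `kaiming_uniform`, and `zeros32` are available from WeightInitializers.jl (typically re-exported by Lux); the specific values chosen here are purely illustrative.

```julia
using Lux, LuxRecurrentLayers

# Tuple form assigns one initializer per input-to-hidden block;
# a single function is expanded to a 2-element tuple internally.
cell = GatedAntisymmetricRNNCell(4 => 8;
    init_weight=(glorot_normal, kaiming_uniform),  # W_ih^z, W_ih^h
    init_bias=zeros32,                             # expanded to (zeros32, zeros32)
    epsilon=0.5, gamma=0.01)
```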
Inputs
- Case 1a: Only a single input `x` of shape `(in_dims, batch_size)`, `train_state` is set to `false` - Creates a hidden state using `init_state` and proceeds to Case 2.
- Case 1b: Only a single input `x` of shape `(in_dims, batch_size)`, `train_state` is set to `true` - Repeats `hidden_state` from parameters to match the shape of `x` and proceeds to Case 2.
- Case 2: Tuple `(x, (h, ))` is provided, then the output and a tuple containing the updated hidden state is returned.
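Both cases are sketched below, assuming the cell follows the standard Lux setup/apply workflow (`Lux.setup` with an RNG, then calling the cell with parameters and states); the variable names are illustrative.

```julia
using Lux, LuxRecurrentLayers, Random

rng = Random.default_rng()
cell = GatedAntisymmetricRNNCell(3 => 5)
ps, st = Lux.setup(rng, cell)

x = rand(Float32, 3, 16)                  # (in_dims, batch_size)

# Case 1a: single input, hidden state created internally via init_state
(y, (h,)), st = cell(x, ps, st)

# Case 2: explicit carry, reusing the hidden state from the previous step
(y2, (h2,)), st = cell((x, (h,)), ps, st)
```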
Returns
- Tuple containing
  - Output $h_{new}$ of shape `(out_dims, batch_size)`
  - Tuple containing new hidden state $h_{new}$
- Updated model state
Parameters
- `weight_ih`: Concatenated weights to map from the input to the hidden state $\{ \mathbf{W}_{ih}^z, \mathbf{W}_{ih}^h \}$. The initializers in `init_weight` are applied in the order they appear: the first function is used for $\mathbf{W}_{ih}^z$, and the second for $\mathbf{W}_{ih}^h$.
- `weight_hh`: Weights to map the hidden state to the hidden state $\mathbf{W}_{hh}$.
- `bias_ih`: Bias vector for the input-hidden connection (not present if `use_bias=false`) $\{ \mathbf{b}_{ih}^z, \mathbf{b}_{ih}^h \}$. The initializers in `init_bias` are applied in the order they appear: the first function is used for $\mathbf{b}_{ih}^z$, and the second for $\mathbf{b}_{ih}^h$.
- `bias_hh`: Bias vector for the hidden-hidden connection (not present if `use_recurrent_bias=false`) $\mathbf{b}_{hh}$.
- `hidden_state`: Initial hidden state vector (not present if `train_state=false`).
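Continuing the forward-pass sketch above, the parameter NamedTuple can be inspected to see these fields. The shape comments are assumptions (they presume the two input-to-hidden blocks are concatenated along the first dimension), not guaranteed layout.

```julia
# Hypothetical inspection of the parameters returned by Lux.setup.
keys(ps)            # expected to include :weight_ih, :weight_hh, :bias_ih, :bias_hh
size(ps.weight_hh)  # (out_dims, out_dims), here (5, 5)
size(ps.weight_ih)  # assumed (2 * out_dims, in_dims) if the two blocks are stacked
```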
States
- `rng`: Controls the randomness (if any) in the initial state generation