AntisymmetricRNNCell

LuxRecurrentLayers.AntisymmetricRNNCell — Type
AntisymmetricRNNCell(in_dims => out_dims, [activation];
    use_bias=true, train_state=false, init_bias=nothing,
    init_recurrent_bias=nothing, init_weight=nothing,
    init_recurrent_weight=nothing, init_state=zeros32,
    epsilon=1.0, gamma=0.0)

Antisymmetric recurrent cell.

Equations

\[\begin{equation} \mathbf{h}(t) = \mathbf{h}(t-1) + \epsilon \cdot \tanh\left( \mathbf{W}_{ih} \mathbf{x}(t) + \mathbf{b}_{ih} + (\mathbf{W}_{hh} - \mathbf{W}_{hh}^\top - \gamma \cdot \mathbf{I}) \mathbf{h}(t-1) + \mathbf{b}_{hh} \right) \end{equation}\]
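The update above can be sketched in plain Julia. This is an illustrative reimplementation of the equation only, not the layer's internal code; the names `Wih`, `Whh`, `bih`, `bhh` are hypothetical stand-ins for the cell's parameters:

```julia
using LinearAlgebra

# Illustrative dimensions
in_dims, out_dims = 3, 4
Wih = randn(Float32, out_dims, in_dims)   # input-to-hidden weight
Whh = randn(Float32, out_dims, out_dims)  # hidden-to-hidden weight
bih = zeros(Float32, out_dims)            # input bias b_ih
bhh = zeros(Float32, out_dims)            # recurrent bias b_hh
eps, gamma = 1.0f0, 0.0f0                 # step size and diffusion strength

x = randn(Float32, in_dims)               # input at time t
h = zeros(Float32, out_dims)              # hidden state at time t-1

# (Whh - Whh' - gamma*I): antisymmetric matrix plus a diffusion term on the diagonal
A = Whh - Whh' - gamma * I
h_new = h .+ eps .* tanh.(Wih * x .+ bih .+ A * h .+ bhh)
```

Because `Whh - Whh'` has purely imaginary eigenvalues, the update behaves like a discretized stable ODE; the `gamma * I` term adds damping for numerical stability.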

Arguments

  • in_dims: Input dimension
  • out_dims: Output (Hidden State & Memory) dimension
  • activation: Activation function applied in the update. Default is tanh

Keyword arguments

  • use_bias: Flag to use bias in the computation. Default set to true.
  • train_state: Flag to set the initial hidden state as trainable. Default set to false.
  • init_bias: Initializer for bias $\mathbf{b}_{ih}$. If set to nothing, the bias is initialized from a uniform distribution within [-bound, bound] where bound = inv(sqrt(out_dims)). Default is nothing.
  • init_recurrent_bias: Initializer for recurrent bias $\mathbf{b}_{hh}$. If set to nothing, the bias is initialized from a uniform distribution within [-bound, bound] where bound = inv(sqrt(out_dims)). Default is nothing.
  • init_weight: Initializer for weight $\mathbf{W}_{ih}$. If set to nothing, weights are initialized from a uniform distribution within [-bound, bound] where bound = inv(sqrt(out_dims)). Default is nothing.
  • init_recurrent_weight: Initializer for recurrent weight $\mathbf{W}_{hh}$. If set to nothing, weights are initialized from a uniform distribution within [-bound, bound] where bound = inv(sqrt(out_dims)). Default is nothing.
  • init_state: Initializer for hidden state. Default set to zeros32.
  • epsilon: step size $\epsilon$. Default is 1.0.
  • gamma: strength of diffusion $\gamma$. Default is 0.0.

Inputs

  • Case 1a: Only a single input x of shape (in_dims, batch_size), train_state is set to false - Creates a hidden state using init_state and proceeds to Case 2.
  • Case 1b: Only a single input x of shape (in_dims, batch_size), train_state is set to true - Repeats hidden_state from parameters to match the shape of x and proceeds to Case 2.
  • Case 2: Tuple (x, (h, )) is provided, then the output and a tuple containing the updated hidden state is returned.
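A minimal usage sketch covering Case 1a and Case 2, assuming the standard Lux workflow (`Lux.setup` with an RNG, then calling the cell with parameters and state); the exact dimensions here are illustrative:

```julia
using Lux, LuxRecurrentLayers, Random

rng = Random.default_rng()
cell = AntisymmetricRNNCell(3 => 5)       # in_dims => out_dims
ps, st = Lux.setup(rng, cell)

x = rand(Float32, 3, 2)                   # (in_dims, batch_size)

# Case 1a: only x is given; the hidden state is created via init_state
(y, carry), st = cell(x, ps, st)

# Case 2: pass the carry explicitly for subsequent time steps
(y2, carry2), st = cell((x, carry), ps, st)
```

Here `y` has shape `(out_dims, batch_size)` and `carry` is a tuple holding the updated hidden state, matching the Returns section below.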

Returns

  • Tuple containing

    • Output $h_{new}$ of shape (out_dims, batch_size)
    • Tuple containing new hidden state $h_{new}$
  • Updated model state

Parameters

  • weight_ih: Concatenated weights to map from input to the hidden state $\mathbf{W}_{ih}$.
  • weight_hh: Concatenated weights to map from hidden to the hidden state $\mathbf{W}_{hh}$.
  • bias_ih: Bias vector for the input-hidden connection (not present if use_bias=false) $\mathbf{b}_{ih}$.
  • bias_hh: Bias vector for the hidden-hidden connection (not present if use_bias=false) $\mathbf{b}_{hh}$.
  • hidden_state: Initial hidden state vector (not present if train_state=false)

States

  • rng: Controls the randomness (if any) in the initial state generation