MinimalRNNCell

LuxRecurrentLayers.MinimalRNNCell
MinimalRNNCell(in_dims => out_dims;
    use_bias=true, train_state=false, train_memory=false,
    init_encoder_bias=nothing, init_recurrent_bias=nothing,
    init_memory_bias=nothing, init_encoder_weight=nothing,
    init_recurrent_weight=nothing, init_memory_weight=nothing,
    init_state=zeros32, init_memory=zeros32)

Minimal recurrent neural network unit.
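A minimal usage sketch follows, assuming MinimalRNNCell is exported by LuxRecurrentLayers and follows the standard Lux cell convention described on this page (Lux.setup, then calling the cell with input, parameters, and state); the dimensions are illustrative.

  using Lux, LuxRecurrentLayers, Random

  rng = Random.default_rng()
  cell = MinimalRNNCell(3 => 5)          # in_dims = 3, out_dims = 5
  ps, st = Lux.setup(rng, cell)

  x = rand(Float32, 3, 16)               # input of shape (in_dims, batch_size)
  (y, (h, c)), st = cell(x, ps, st)      # output and updated (hidden state, memory)
  size(y)                                # (5, 16) == (out_dims, batch_size)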

Equations

\[\begin{aligned}
    \mathbf{z}(t) &= \tanh\left( \mathbf{W}_{ih}^{z} \mathbf{x}(t) + \mathbf{b}_{ih}^{z} \right), \\
    \mathbf{u}(t) &= \sigma\left( \mathbf{W}_{hh}^{u} \mathbf{h}(t-1) + \mathbf{b}_{hh}^{u} + \mathbf{W}_{zh}^{u} \mathbf{z}(t) + \mathbf{b}_{zh}^{u} \right), \\
    \mathbf{h}(t) &= \mathbf{u}(t) \circ \mathbf{h}(t-1) + \left(1 - \mathbf{u}(t)\right) \circ \mathbf{z}(t)
\end{aligned}\]
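As a concrete reading of these equations, the sketch below implements a single update step in plain Julia (it is not the package's internal implementation); the weight matrices and bias vectors are assumed to be dense arrays of compatible sizes, with the memory bias $\mathbf{b}_{zh}^{u}$ entering the update gate alongside $\mathbf{W}_{zh}^{u}$.

  logistic(x) = 1 / (1 + exp(-x))   # σ in the equations above

  function minimalrnn_step(x, h_prev, W_ih, W_hh, W_zh, b_ih, b_hh, b_zh)
      z = tanh.(W_ih * x .+ b_ih)                               # encoded input
      u = logistic.(W_hh * h_prev .+ b_hh .+ W_zh * z .+ b_zh)  # update gate
      return u .* h_prev .+ (1 .- u) .* z                       # new hidden state
  end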

Arguments

  • in_dims: Input Dimension
  • out_dims: Output (Hidden State & Memory) Dimension

Keyword arguments

  • use_bias: Set to false to deactivate bias
  • train_state: Trainable initial hidden state can be activated by setting this to true
  • train_memory: Trainable initial memory can be activated by setting this to true
  • init_encoder_bias: Initializer for encoder bias $\mathbf{b}_{ih}^{z}$. Must be a single function. If nothing, initialized from a uniform distribution in [-bound, bound] where bound = inv(sqrt(out_dims)).
  • init_recurrent_bias: Initializer for recurrent bias $\mathbf{b}_{hh}^{u}$. Must be a single function. If nothing, initialized from a uniform distribution in [-bound, bound] where bound = inv(sqrt(out_dims)).
  • init_memory_bias: Initializer for memory bias $\mathbf{b}_{zh}^{u}$. Must be a single function. If nothing, initialized from a uniform distribution in [-bound, bound] where bound = inv(sqrt(out_dims)).
  • init_encoder_weight: Initializer for encoder weight $\mathbf{W}_{ih}^{z}$. Must be a single function. If nothing, initialized from a uniform distribution in [-bound, bound] where bound = inv(sqrt(out_dims)).
  • init_recurrent_weight: Initializer for recurrent weight $\mathbf{W}_{hh}^{u}$. Must be a single function. If nothing, initialized from a uniform distribution in [-bound, bound] where bound = inv(sqrt(out_dims)).
  • init_memory_weight: Initializer for memory weight $\mathbf{W}_{zh}^{u}$. Must be a single function. If nothing, initialized from a uniform distribution in [-bound, bound] where bound = inv(sqrt(out_dims)).
  • init_state: Initializer for hidden state
  • init_memory: Initializer for memory
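As a sketch, any of the default initializers above can be overridden with a function from WeightInitializers.jl (for example glorot_uniform, orthogonal, or zeros32); the particular choices here are illustrative, not recommendations.

  using LuxRecurrentLayers, WeightInitializers

  cell = MinimalRNNCell(3 => 5;
      train_state=true,
      init_encoder_weight=glorot_uniform,   # replaces the uniform [-bound, bound] default
      init_recurrent_weight=orthogonal,
      init_recurrent_bias=zeros32)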

Inputs

  • Case 1a: Only a single input x of shape (in_dims, batch_size), train_state is set to false, train_memory is set to false - Creates a hidden state using init_state and a hidden memory using init_memory, then proceeds to Case 2.
  • Case 1b: Only a single input x of shape (in_dims, batch_size), train_state is set to true, train_memory is set to false - Repeats the hidden_state vector from the parameters to match the shape of x, creates a hidden memory using init_memory, and proceeds to Case 2.
  • Case 1c: Only a single input x of shape (in_dims, batch_size), train_state is set to false, train_memory is set to true - Creates a hidden state using init_state, repeats the memory vector from the parameters to match the shape of x, and proceeds to Case 2.
  • Case 1d: Only a single input x of shape (in_dims, batch_size), train_state is set to true, train_memory is set to true - Repeats the hidden state and memory vectors from the parameters to match the shape of x and proceeds to Case 2.
  • Case 2: Tuple (x, (h, c)) is provided, then the output and a tuple containing the updated hidden state and memory are returned, as shown in the sketch below.
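The sketch below (illustrative dimensions, standard Lux cell calling convention) uses Case 1 for the first step of a sequence and Case 2 for the remaining steps; cell, ps, and st are as in the usage sketch above.

  function run_sequence(cell, xs, ps, st)
      (y, carry), st = cell(xs[1], ps, st)           # Case 1: state and memory created internally
      for x in xs[2:end]
          (y, carry), st = cell((x, carry), ps, st)  # Case 2: carry == (h, c) is passed explicitly
      end
      return y, carry, st
  end

  xs = [rand(Float32, 3, 16) for _ in 1:10]          # 10 time steps, batch of 16
  y, (h, c), st = run_sequence(cell, xs, ps, st)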

Returns

  • Tuple Containing

    • Output $h_{new}$ of shape (out_dims, batch_size)
    • Tuple containing new hidden state $h_{new}$ and new memory $c_{new}$
  • Updated model state

Parameters

  • weight_ih: Encoder weights $\{ \mathbf{W}_{ih}^{z} \}$
  • weight_hh: Recurrent weights $\{ \mathbf{W}_{hh}^{u} \}$
  • weight_mm: Memory weights $\{ \mathbf{W}_{zh}^{u} \}$
  • bias_ih: Encoder bias (if use_bias=true) $\{ \mathbf{b}_{ih}^{z} \}$
  • bias_hh: Recurrent bias (if use_bias=true) $\{ \mathbf{b}_{hh}^{u} \}$
  • bias_mm: Memory bias (if use_bias=true) $\{ \mathbf{b}_{zh}^{u} \}$
  • hidden_state: Initial hidden state vector $\mathbf{h}(0)$ (not present if train_state=false).
  • memory: Initial memory vector $\mathbf{c}(0)$ (not present if train_memory=false).
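As a sketch, the parameter NamedTuple produced by Lux.setup can be inspected directly; the field names follow the list above, while the shapes noted in the comments are assumptions based on the stated dimensions.

  using Lux, LuxRecurrentLayers, Random

  ps, st = Lux.setup(Random.default_rng(),
      MinimalRNNCell(3 => 5; train_state=true))

  keys(ps)             # expected to include :weight_ih, :weight_hh, :weight_mm,
                       # :bias_ih, :bias_hh, :bias_mm, and :hidden_state (train_state=true)
  size(ps.weight_ih)   # assumed (5, 3): out_dims × in_dims encoder weight
  size(ps.weight_hh)   # assumed (5, 5): out_dims × out_dims recurrent weight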

States

  • rng: Controls the randomness (if any) in the initial state generation