STARCell

LuxRecurrentLayers.STARCell — Type
STARCell(in_dims => out_dims;
    use_bias=true, train_state=false,
    init_bias=nothing, init_recurrent_bias=nothing,
    init_weight=nothing, init_recurrent_weight=nothing,
    init_state=zeros32)

Stackable recurrent (STAR) cell.

Equations

\[\begin{aligned} \mathbf{z}(t) &= \tanh\left(\mathbf{W}_{ih}^{z} \mathbf{x}(t) + \mathbf{b}_{ih}^{z}\right), \\ \mathbf{k}(t) &= \sigma\left(\mathbf{W}_{ih}^{k} \mathbf{x}(t) + \mathbf{b}_{ih}^{k} + \mathbf{W}_{hh}^{k} \mathbf{h}(t-1) + \mathbf{b}_{hh}^{k}\right), \\ \mathbf{h}(t) &= \tanh\left((1 - \mathbf{k}(t)) \circ \mathbf{h}(t-1) + \mathbf{k}(t) \circ \mathbf{z}(t)\right). \end{aligned}\]
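The update above can be sketched directly in plain Julia. This is a minimal illustration, not the layer's internal implementation; the variable names (Wz, Wk, Uk, bz, bk, ck) are hypothetical stand-ins for the weights and biases in the equations.

```julia
# Logistic sigmoid, as used for the gate k(t)
σ(x) = 1 / (1 + exp(-x))

# One STAR step: candidate z(t), gate k(t), new hidden state h(t).
# Wz, Wk ~ W_ih^z, W_ih^k; Uk ~ W_hh^k; bz, bk ~ b_ih^z, b_ih^k; ck ~ b_hh^k.
function star_step(x, h, Wz, Wk, Uk, bz, bk, ck)
    z = tanh.(Wz * x .+ bz)                 # candidate state z(t)
    k = σ.(Wk * x .+ bk .+ Uk * h .+ ck)    # gate k(t)
    return tanh.((1 .- k) .* h .+ k .* z)   # h(t)
end

# Toy dimensions: in_dims = 3, out_dims = 2
x  = randn(3); h = zeros(2)
Wz = randn(2, 3); Wk = randn(2, 3); Uk = randn(2, 2)
bz = zeros(2); bk = zeros(2); ck = zeros(2)
h′ = star_step(x, h, Wz, Wk, Uk, bz, bk, ck)
```

Because the final `tanh` squashes the convex combination of the previous state and the candidate, each entry of the new hidden state lies in (-1, 1).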

Arguments

  • in_dims: Input Dimension
  • out_dims: Output (Hidden State) Dimension

Keyword arguments

  • use_bias: Flag to use bias in the computation. Default set to true.
  • train_state: Flag to set the initial hidden state as trainable. Default set to false.
  • init_bias: Initializer for input-to-hidden biases $\mathbf{b}_{ih}^{z}, \mathbf{b}_{ih}^{k}$. Must be a tuple containing 2 functions. If a single value is passed, it is copied into a 2-element tuple. If set to nothing, biases are initialized from a uniform distribution within [-bound, bound], where bound = inv(sqrt(out_dims)). The functions are applied in order: the first initializes $\mathbf{b}_{ih}^{z}$, the second $\mathbf{b}_{ih}^{k}$. Default set to nothing.
  • init_recurrent_bias: Initializer for hidden-to-hidden bias $\mathbf{b}_{hh}^{k}$. Must be a single function. If set to nothing, bias is initialized from a uniform distribution within [-bound, bound], where bound = inv(sqrt(out_dims)). Default set to nothing.
  • init_weight: Initializer for input-to-hidden weights $\mathbf{W}_{ih}^{z}, \mathbf{W}_{ih}^{k}$. Must be a tuple containing 2 functions. If a single value is passed, it is copied into a 2-element tuple. If set to nothing, weights are initialized from a uniform distribution within [-bound, bound], where bound = inv(sqrt(out_dims)). The functions are applied in order: the first initializes $\mathbf{W}_{ih}^{z}$, the second $\mathbf{W}_{ih}^{k}$. Default set to nothing.
  • init_recurrent_weight: Initializer for hidden-to-hidden weight $\mathbf{W}_{hh}^{k}$. Must be a single function. If set to nothing, weight is initialized from a uniform distribution within [-bound, bound], where bound = inv(sqrt(out_dims)). Default set to nothing.
  • init_state: Initializer for hidden state. Default set to zeros32.

Inputs

  • Case 1a: Only a single input x of shape (in_dims, batch_size), train_state is set to false - Creates a hidden state using init_state and proceeds to Case 2.
  • Case 1b: Only a single input x of shape (in_dims, batch_size), train_state is set to true - Repeats hidden_state from parameters to match the shape of x and proceeds to Case 2.
  • Case 2: A tuple (x, (h,)) is provided; the cell returns the output together with a tuple containing the updated hidden state.

Returns

  • Tuple containing

    • Output $h_{new}$ of shape (out_dims, batch_size)
    • Tuple containing new hidden state $h_{new}$
  • Updated model state
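The inputs and returns above follow the standard Lux.jl recurrent-cell calling convention. A minimal usage sketch, assuming STARCell is exported by LuxRecurrentLayers and that Lux is available:

```julia
using Lux, LuxRecurrentLayers, Random

rng = Random.default_rng()
cell = STARCell(3 => 5)          # in_dims = 3, out_dims = 5
ps, st = Lux.setup(rng, cell)

x = rand(Float32, 3, 4)          # (in_dims, batch_size)

# Case 1: a single input — the cell creates h(0) via init_state
(y, carry), st = cell(x, ps, st)

# Case 2: pass the carry explicitly for the next time step
(y2, carry), st = cell((x, carry), ps, st)
```

Both calls return the output of shape (out_dims, batch_size), the carry tuple holding the new hidden state, and the updated model state.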

Parameters

  • weight_ih: Input-to-hidden weights $\{ \mathbf{W}_{ih}^{z}, \mathbf{W}_{ih}^{k} \}$
  • weight_hh: Hidden-to-hidden weights $\{ \mathbf{W}_{hh}^{k} \}$
  • bias_ih: Input-to-hidden biases (not present if use_bias=false) $\{ \mathbf{b}_{ih}^{z}, \mathbf{b}_{ih}^{k} \}$
  • bias_hh: Hidden-to-hidden bias (not present if use_bias=false) $\{ \mathbf{b}_{hh}^{k} \}$
  • hidden_state: Initial hidden state vector (not present if train_state=false)

States

  • rng: Controls the randomness (if any) in the initial state generation