STARCell
LuxRecurrentLayers.STARCell — Type

STARCell(in_dims => out_dims;
    use_bias=true, train_state=false,
    init_bias=nothing, init_recurrent_bias=nothing,
    init_weight=nothing, init_recurrent_weight=nothing,
    init_state=zeros32)
Equations
\[\begin{aligned} \mathbf{z}(t) &= \tanh\left(\mathbf{W}_{ih}^{z} \mathbf{x}(t) + \mathbf{b}_{ih}^{z}\right), \\ \mathbf{k}(t) &= \sigma\left(\mathbf{W}_{ih}^{k} \mathbf{x}(t) + \mathbf{b}_{ih}^{k} + \mathbf{W}_{hh}^{k} \mathbf{h}(t-1) + \mathbf{b}_{hh}^{k}\right), \\ \mathbf{h}(t) &= \tanh\left((1 - \mathbf{k}(t)) \circ \mathbf{h}(t-1) + \mathbf{k}(t) \circ \mathbf{z}(t)\right). \end{aligned}\]
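For intuition, the update can be written out directly. The sketch below is an illustrative, standalone single-step implementation of the three equations above for one sample; it is not the library's internal code, and the names Wz, Wk, Whk and the bias vectors merely stand in for the documented parameters.

```julia
σ(x) = 1 / (1 + exp(-x))

# One STAR step for a single sample (illustrative only).
# Wz, Wk : input-to-hidden weights (out_dims × in_dims)
# Whk    : hidden-to-hidden weight (out_dims × out_dims)
# bz, bk : input-to-hidden biases;  bhk : hidden-to-hidden bias
function star_step(x, h, Wz, Wk, Whk, bz, bk, bhk)
    z = tanh.(Wz * x .+ bz)                    # candidate update z(t)
    k = σ.(Wk * x .+ bk .+ Whk * h .+ bhk)     # gate k(t)
    return tanh.((1 .- k) .* h .+ k .* z)      # new hidden state h(t)
end

in_dims, out_dims = 3, 5
x  = randn(Float32, in_dims)
h  = zeros(Float32, out_dims)
Wz = randn(Float32, out_dims, in_dims)
Wk = randn(Float32, out_dims, in_dims)
Whk = randn(Float32, out_dims, out_dims)
bz, bk, bhk = zeros(Float32, out_dims), zeros(Float32, out_dims), zeros(Float32, out_dims)
h_new = star_step(x, h, Wz, Wk, Whk, bz, bk, bhk)
```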
Arguments

- in_dims: Input Dimension
- out_dims: Output (Hidden State) Dimension
Keyword arguments

- use_bias: Flag to use bias in the computation. Default set to true.
- train_state: Flag to set the initial hidden state as trainable. Default set to false.
- init_bias: Initializer for input-to-hidden biases $\mathbf{b}_{ih}^{z}, \mathbf{b}_{ih}^{k}$. Must be a tuple containing 2 functions. If a single value is passed, it is copied into a 2-element tuple. If set to nothing, biases are initialized from a uniform distribution within [-bound, bound], where bound = inv(sqrt(out_dims)). The functions are applied in order: the first initializes $\mathbf{b}_{ih}^{z}$, the second $\mathbf{b}_{ih}^{k}$. Default set to nothing.
- init_recurrent_bias: Initializer for hidden-to-hidden bias $\mathbf{b}_{hh}^{k}$. Must be a single function. If set to nothing, the bias is initialized from a uniform distribution within [-bound, bound], where bound = inv(sqrt(out_dims)). Default set to nothing.
- init_weight: Initializer for input-to-hidden weights $\mathbf{W}_{ih}^{z}, \mathbf{W}_{ih}^{k}$. Must be a tuple containing 2 functions. If a single value is passed, it is copied into a 2-element tuple. If set to nothing, weights are initialized from a uniform distribution within [-bound, bound], where bound = inv(sqrt(out_dims)). The functions are applied in order: the first initializes $\mathbf{W}_{ih}^{z}$, the second $\mathbf{W}_{ih}^{k}$. Default set to nothing.
- init_recurrent_weight: Initializer for hidden-to-hidden weight $\mathbf{W}_{hh}^{k}$. Must be a single function. If set to nothing, the weight is initialized from a uniform distribution within [-bound, bound], where bound = inv(sqrt(out_dims)). Default set to nothing.
- init_state: Initializer for hidden state. Default set to zeros32.
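A minimal construction sketch, assuming STARCell is exported by LuxRecurrentLayers and that the glorot_uniform and zeros32 initializers re-exported by Lux are in scope:

```julia
using Lux, LuxRecurrentLayers

# Default construction: 3 input features, 5 hidden units.
cell = STARCell(3 => 5)

# Custom initialization: the 2-element tuple is applied in order to
# W_ih^z and W_ih^k; a single function initializes the recurrent weight W_hh^k.
cell_custom = STARCell(3 => 5;
    init_weight=(glorot_uniform, glorot_uniform),
    init_recurrent_weight=glorot_uniform,
    init_bias=zeros32,          # a single value is copied into a 2-element tuple
    train_state=true)
```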
Inputs

- Case 1a: Only a single input x of shape (in_dims, batch_size), train_state is set to false. Creates a hidden state using init_state and proceeds to Case 2.
- Case 1b: Only a single input x of shape (in_dims, batch_size), train_state is set to true. Repeats hidden_state from parameters to match the shape of x and proceeds to Case 2.
- Case 2: Tuple (x, (h, )) is provided, then the output and a tuple containing the updated hidden state is returned.
Returns

- Tuple containing
  - Output $h_{new}$ of shape (out_dims, batch_size)
  - Tuple containing new hidden state $h_{new}$
- Updated model state
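A forward-pass sketch, assuming the cell follows the standard Lux recurrent-cell calling convention (Lux.setup to initialize parameters and states, and the input cases described above):

```julia
using Lux, LuxRecurrentLayers, Random

rng = Random.default_rng()
cell = STARCell(3 => 5)
ps, st = Lux.setup(rng, cell)

x = rand(Float32, 3, 8)                 # (in_dims, batch_size)

# Case 1a: single input, the initial hidden state is created with init_state.
(y, (h,)), st = cell(x, ps, st)
size(y)                                 # (5, 8), i.e. (out_dims, batch_size)

# Case 2: pass the carry explicitly to continue the recurrence.
(y2, (h2,)), st = cell((x, (h,)), ps, st)
```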
Parameters

- weight_ih: Input-to-hidden weights $\{ \mathbf{W}_{ih}^{z}, \mathbf{W}_{ih}^{k} \}$
- weight_hh: Hidden-to-hidden weights $\{ \mathbf{W}_{hh}^{k} \}$
- bias_ih: Input-to-hidden biases (not present if use_bias=false) $\{ \mathbf{b}_{ih}^{z}, \mathbf{b}_{ih}^{k} \}$
- bias_hh: Hidden-to-hidden bias (not present if use_bias=false) $\{ \mathbf{b}_{hh}^{k} \}$
- hidden_state: Initial hidden state vector (not present if train_state=false)
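A sketch of inspecting the parameter collection. The field names follow the list above; the exact container layout returned by Lux.setup is an assumption of this example.

```julia
using Lux, LuxRecurrentLayers, Random

cell = STARCell(3 => 5)
ps, st = Lux.setup(Random.default_rng(), cell)

keys(ps)                        # expected: (:weight_ih, :weight_hh, :bias_ih, :bias_hh)

# With a trainable initial state the extra parameter appears:
cell_ts = STARCell(3 => 5; train_state=true)
ps_ts, _ = Lux.setup(Random.default_rng(), cell_ts)
haskey(ps_ts, :hidden_state)    # expected: true
```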
States

- rng: Controls the randomness (if any) in the initial state generation