NBRCell
LuxRecurrentLayers.NBRCell — Type

    NBRCell(in_dims => out_dims;
        use_bias=true, train_state=false, init_bias=nothing,
        init_recurrent_bias=nothing, init_weight=nothing,
        init_recurrent_weight=nothing, init_state=zeros32)
Recurrently neuromodulated bistable recurrent cell.
Equations
\[\begin{aligned} \mathbf{a}(t) &= 1 + \tanh\left(\mathbf{W}_{ih}^{a} \mathbf{x}(t) + \mathbf{b}_{ih}^a + \mathbf{W}_{hh}^{a} \circ \mathbf{h}(t-1)+ \mathbf{b}_{hh}^a \right) \\ \mathbf{c}(t) &= \sigma\left(\mathbf{W}_{ih}^{c} \mathbf{x}(t) + \mathbf{b}_{ih}^c + \mathbf{W}_{hh}^{c} \circ \mathbf{h}(t-1) + \mathbf{b}_{hh}^c \right)\\ \mathbf{h}(t) &= \mathbf{c}(t) \circ \mathbf{h}(t-1) + (1 - \mathbf{c}(t)) \circ \tanh\left(\mathbf{W}_{ih}^{h} \mathbf{x}(t) + \mathbf{b}_{ih}^h + \mathbf{a}(t) \circ \mathbf{h}(t-1)\right) \end{aligned}\]
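For concreteness, here is a minimal numeric sketch of one update step that reads the equations above literally: the input-to-hidden weights act as matrices, while the hidden-to-hidden weights are taken per unit and applied elementwise (the $\circ$ in the equations). All names are illustrative and do not correspond to the layer's parameter fields.

```julia
# Illustrative sketch of a single NBRCell step (hypothetical helper, not the
# layer's implementation). Wa, Wc, Wh are input-to-hidden matrices; wa, wc are
# per-unit recurrent weights applied elementwise; b* are the bias vectors.
sigmoid(z) = 1 / (1 + exp(-z))

function nbr_step(x, h, Wa, Wc, Wh, wa, wc, ba, bc, bh, bha, bhc)
    a = 1 .+ tanh.(Wa * x .+ ba .+ wa .* h .+ bha)               # a(t)
    c = sigmoid.(Wc * x .+ bc .+ wc .* h .+ bhc)                 # c(t)
    return c .* h .+ (1 .- c) .* tanh.(Wh * x .+ bh .+ a .* h)   # h(t)
end
```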
Arguments

  - `in_dims`: Input Dimension
  - `out_dims`: Output (Hidden State & Memory) Dimension

Keyword Arguments

  - `use_bias`: Flag to use bias in the computation. Default set to `true`.
  - `train_state`: Flag to set the initial hidden state as trainable. Default set to `false`.
  - `init_bias`: Initializer for the input to hidden biases $\mathbf{b}_{ih}^a, \mathbf{b}_{ih}^c, \mathbf{b}_{ih}^h$. Must be a tuple containing 3 functions, e.g., `(glorot_normal, kaiming_uniform, glorot_uniform)`. If a single function `fn` is provided, it is automatically expanded into a 3-element tuple `(fn, fn, fn)`. If set to `nothing`, the biases are initialized from a uniform distribution within `[-bound, bound]` where `bound = inv(sqrt(out_dims))`. Default is `nothing`.
  - `init_recurrent_bias`: Initializer for the hidden to hidden biases $\mathbf{b}_{hh}^a, \mathbf{b}_{hh}^c$. Must be a tuple containing 2 functions, e.g., `(glorot_normal, kaiming_uniform)`. If a single function `fn` is provided, it is automatically expanded into a 2-element tuple `(fn, fn)`. If set to `nothing`, the biases are initialized from a uniform distribution within `[-bound, bound]` where `bound = inv(sqrt(out_dims))`. Default is `nothing`.
  - `init_weight`: Initializer for the input to hidden weights $\mathbf{W}_{ih}^a, \mathbf{W}_{ih}^c, \mathbf{W}_{ih}^h$. Must be a tuple containing 3 functions, e.g., `(glorot_normal, kaiming_uniform, glorot_uniform)`. If a single function `fn` is provided, it is automatically expanded into a 3-element tuple `(fn, fn, fn)`. If set to `nothing`, the weights are initialized from a uniform distribution within `[-bound, bound]` where `bound = inv(sqrt(out_dims))`. Default is `nothing`.
  - `init_recurrent_weight`: Initializer for the hidden to hidden weights $\mathbf{W}_{hh}^a, \mathbf{W}_{hh}^c$. Must be a tuple containing 2 functions, e.g., `(glorot_normal, kaiming_uniform)`. If a single function `fn` is provided, it is automatically expanded into a 2-element tuple `(fn, fn)`. If set to `nothing`, the weights are initialized from a uniform distribution within `[-bound, bound]` where `bound = inv(sqrt(out_dims))`. Default is `nothing`.
  - `init_state`: Initializer for the hidden state. Default set to `zeros32`.
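As a hedged illustration of the initializer keywords (the specific initializer functions from WeightInitializers are chosen arbitrarily, and the dimensions are placeholders):

```julia
using LuxRecurrentLayers, WeightInitializers

NBRCell(3 => 5)                                    # all defaults
NBRCell(3 => 5; use_bias=false, train_state=true)

# A single function is expanded to a tuple of the required length,
NBRCell(3 => 5; init_weight=glorot_uniform)
# or one function can be given per weight block explicitly.
NBRCell(3 => 5;
    init_weight=(glorot_normal, kaiming_uniform, glorot_uniform),
    init_recurrent_weight=(glorot_normal, kaiming_uniform))
```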
Inputs

  - Case 1a: Only a single input `x` of shape `(in_dims, batch_size)`, `train_state` is set to `false` - Creates a hidden state using `init_state` and proceeds to Case 2.
  - Case 1b: Only a single input `x` of shape `(in_dims, batch_size)`, `train_state` is set to `true` - Repeats `hidden_state` from parameters to match the shape of `x` and proceeds to Case 2.
  - Case 2: Tuple `(x, (h, ))` is provided, then the output and a tuple containing the updated hidden state is returned.
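A minimal usage sketch of these cases, assuming the standard Lux recurrent-cell calling convention (`Lux.setup` followed by `cell(input, ps, st)`); the dimensions are arbitrary:

```julia
using Lux, LuxRecurrentLayers, Random

rng = Random.default_rng()
cell = NBRCell(3 => 5)
ps, st = Lux.setup(rng, cell)

x = rand(Float32, 3, 4)                  # (in_dims, batch_size)

# Case 1a (train_state=false): single input, hidden state created via init_state.
(y, carry), st = cell(x, ps, st)

# Case 2: pass the carry `(h,)` explicitly on subsequent steps.
(y, carry), st = cell((x, carry), ps, st)

size(y)  # (5, 4) == (out_dims, batch_size)
```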
Returns

  - Tuple containing

      - Output $h_{new}$ of shape `(out_dims, batch_size)`
      - Tuple containing new hidden state $h_{new}$
  - Updated model state
Parameters

  - `weight_ih`: Concatenated weights to map from input to the hidden state
    ``\{ \mathbf{W}_{ih}^a, \mathbf{W}_{ih}^c, \mathbf{W}_{ih}^h \}``.
    The initializers in `init_weight` are applied in the order they appear:
    the first function is used for $\mathbf{W}_{ih}^a$, the second for $\mathbf{W}_{ih}^c$,
    and the third for $\mathbf{W}_{ih}^h$.
  - `weight_hh`: Weights to map the hidden state to the hidden state
    ``\{ \mathbf{W}_{hh}^a, \mathbf{W}_{hh}^c \}``.
    The initializers in `init_recurrent_weight` are applied in the order they appear:
    the first function is used for $\mathbf{W}_{hh}^a$, and the second for $\mathbf{W}_{hh}^c$.
  - `bias_ih`: Bias vector for the input-hidden connection (not present if `use_bias=false`)
    ``\{ \mathbf{b}_{ih}^a, \mathbf{b}_{ih}^c, \mathbf{b}_{ih}^h \}``.
    The initializers in `init_bias` are applied in the order they appear:
    the first function is used for $\mathbf{b}_{ih}^a$, the second for $\mathbf{b}_{ih}^c$,
    and the third for $\mathbf{b}_{ih}^h$.
  - `bias_hh`: Bias vector for the hidden-hidden connection (not present if `use_bias=false`)
    ``\{ \mathbf{b}_{hh}^a, \mathbf{b}_{hh}^c \}``.
    The initializers in `init_recurrent_bias` are applied in the order they appear:
    the first function is used for $\mathbf{b}_{hh}^a$, and the second for $\mathbf{b}_{hh}^c$.
  - `hidden_state`: Initial hidden state vector (not present if `train_state=false`)
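To see which of these parameters are present for a given configuration, one can inspect the parameter NamedTuple returned by `Lux.setup`. A sketch under the assumptions above (the comments describe the expected fields, not guaranteed shapes or ordering):

```julia
using Lux, LuxRecurrentLayers, Random

cell = NBRCell(3 => 5)
ps, _ = Lux.setup(Random.default_rng(), cell)

keys(ps)             # expected: :weight_ih, :weight_hh, :bias_ih, :bias_hh
size(ps.weight_ih)   # the three input-to-hidden blocks concatenated

# With use_bias=false and train_state=true, the bias fields are absent and
# a trainable `hidden_state` is expected instead.
ps2, _ = Lux.setup(Random.default_rng(),
    NBRCell(3 => 5; use_bias=false, train_state=true))
keys(ps2)            # expected to include :hidden_state, without the bias fields
```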
States

  - `rng`: Controls the randomness (if any) in the initial state generation