torchrecurrent.JANET
- class torchrecurrent.JANET(input_size, hidden_size, num_layers=1, dropout=0.0, batch_first=False, **kwargs)
Multi-layer JANET (Just Another NETwork) recurrent neural network. [arXiv]
Each layer consists of a JANETCell, which updates the hidden and cell states according to:

\[\begin{split}\begin{aligned} s_t &= W_{ih}^{f} x_t + b_{ih}^{f} + W_{hh}^{f} h_{t-1} + b_{hh}^{f}, \\ \tilde{c}_t &= \tanh(W_{ih}^{c} x_t + b_{ih}^{c} + W_{hh}^{c} h_{t-1} + b_{hh}^{c}), \\ c_t &= \sigma(s_t) \circ c_{t-1} + (1 - \sigma(s_t - \beta)) \circ \tilde{c}_t, \\ h_t &= c_t \end{aligned}\end{split}\]

where \(h_t\) is the hidden state at time \(t\), \(c_t\) is the cell state at time \(t\), \(\sigma\) is the sigmoid function, \(\tanh\) is the hyperbolic tangent, and \(\circ\) is the Hadamard product. The parameter \(\beta\) shifts the threshold of the update gate.
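For reference, the update above can be written out directly in PyTorch. This is a minimal sketch of a single time step, not the library's implementation; the block ordering of the fused weights (forget-gate block first, candidate block second) is an assumption made here for illustration, and the parameter shapes follow the variables listed further below.

import torch

def janet_step(x_t, h_prev, c_prev, W_ih, b_ih, W_hh, b_hh, beta):
    # Fused projections: W_ih is (2*hidden_size, input_size) and
    # W_hh is (2*hidden_size, hidden_size); the rows stack the
    # forget-gate block and the candidate block (ordering assumed).
    gates = x_t @ W_ih.T + b_ih + h_prev @ W_hh.T + b_hh
    s_t, cand = gates.chunk(2, dim=-1)
    c_tilde = torch.tanh(cand)
    # c_t = sigma(s_t) * c_{t-1} + (1 - sigma(s_t - beta)) * c_tilde
    c_t = torch.sigmoid(s_t) * c_prev + (1 - torch.sigmoid(s_t - beta)) * c_tilde
    return c_t, c_t  # h_t = c_t in JANET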
In a multilayer JANET, the input \(x^{(l)}_t\) of the \(l\)-th layer (\(l \ge 2\)) is the hidden state \(h^{(l-1)}_t\) of the previous layer multiplied by dropout \(\delta^{(l-1)}_t\), where each \(\delta^{(l-1)}_t\) is a Bernoulli random variable which is 0 with probability dropout.
- Parameters:
input_size – The number of expected features in the input x.
hidden_size – The number of features in the hidden/cell states h and c.
num_layers – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two JANET layers, with the second receiving the outputs of the first. Default: 1
dropout – If non-zero, introduces a Dropout layer on the outputs of each layer except the last layer, with dropout probability equal to dropout. Default: 0
batch_first – If True, then the input and output tensors are provided as (batch, seq, feature) instead of (seq, batch, feature). Default: False
bias – If False, then the layer does not use the input-side bias b_ih. Default: True
recurrent_bias – If False, then the layer does not use the recurrent bias b_hh. Default: True
kernel_init – Initializer for W_ih. Default: torch.nn.init.xavier_uniform_()
recurrent_kernel_init – Initializer for W_hh. Default: torch.nn.init.xavier_uniform_()
bias_init – Initializer for b_ih. Default: torch.nn.init.zeros_()
recurrent_bias_init – Initializer for b_hh. Default: torch.nn.init.zeros_()
beta – Threshold shift \(\beta\) for the update gate. Default: 1.0
device – The desired device of the parameters.
dtype – The desired floating point type of the parameters.
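As a usage sketch of the keyword arguments above (assuming, per the defaults shown, that the initializer arguments accept any in-place torch.nn.init callable):

>>> import torch
>>> from torchrecurrent import JANET
>>> rnn = JANET(10, 20, num_layers=2,
...             kernel_init=torch.nn.init.orthogonal_,
...             recurrent_bias=False, beta=2.0)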
- Inputs: input, (h_0, c_0)
input: tensor of shape \((L, H_{in})\) for unbatched input, \((L, N, H_{in})\) when batch_first=False, or \((N, L, H_{in})\) when batch_first=True, containing the features of the input sequence. The input can also be a packed variable-length sequence. See torch.nn.utils.rnn.pack_padded_sequence() or torch.nn.utils.rnn.pack_sequence() for details.
h_0: tensor of shape \((\text{num_layers}, H_{out})\) for unbatched input or \((\text{num_layers}, N, H_{out})\) containing the initial hidden state for each element in the input sequence. Defaults to zeros if not provided.
c_0: tensor of shape \((\text{num_layers}, H_{out})\) for unbatched input or \((\text{num_layers}, N, H_{out})\) containing the initial cell state for each element in the input sequence. Defaults to zeros if not provided.
where:
\[\begin{split}\begin{aligned} N ={} & \text{batch size} \\ L ={} & \text{sequence length} \\ H_{in} ={} & \text{input\_size} \\ H_{out} ={} & \text{hidden\_size} \end{aligned}\end{split}\]
- Outputs: output, (h_n, c_n)
output: tensor of shape \((L, H_{out})\) for unbatched input, \((L, N, H_{out})\) when batch_first=False, or \((N, L, H_{out})\) when batch_first=True, containing the output features (h_t) from the last layer of the JANET, for each t. If a torch.nn.utils.rnn.PackedSequence has been given as the input, the output will also be a packed sequence.
h_n: tensor of shape \((\text{num_layers}, H_{out})\) for unbatched input or \((\text{num_layers}, N, H_{out})\) containing the final hidden state for each element in the sequence.
c_n: tensor of shape \((\text{num_layers}, H_{out})\) for unbatched input or \((\text{num_layers}, N, H_{out})\) containing the final cell state for each element in the sequence.
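Tying the shape conventions together, a quick sketch with batch_first=True and default (zero) initial states; the printed shapes follow directly from the tables above:

>>> rnn = JANET(10, 20, num_layers=2, batch_first=True)
>>> input = torch.randn(3, 5, 10)   # (N, L, H_in)
>>> output, (hn, cn) = rnn(input)   # h_0, c_0 default to zeros
>>> output.shape                    # (N, L, H_out)
torch.Size([3, 5, 20])
>>> hn.shape                        # (num_layers, N, H_out)
torch.Size([2, 3, 20])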
- Variables:
cells.{k}.weight_ih – the learnable input-hidden weights of the \(k\)-th layer, of shape (2*hidden_size, input_size) for k = 0. Otherwise, the shape is (2*hidden_size, hidden_size).
cells.{k}.weight_hh – the learnable hidden-hidden weights of the \(k\)-th layer, of shape (2*hidden_size, hidden_size).
cells.{k}.bias_ih – the learnable input-hidden biases of the \(k\)-th layer, of shape (2*hidden_size). Only present when bias=True.
cells.{k}.bias_hh – the learnable hidden-hidden biases of the \(k\)-th layer, of shape (2*hidden_size). Only present when recurrent_bias=True.
cells.{k}.beta – the learnable threshold shift of the \(k\)-th layer, a scalar parameter.
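These parameters are reachable through the standard nn.Module API; a sketch, with the shapes following from the variables listed above (hidden_size=20 gives 2*hidden_size=40):

>>> rnn = JANET(10, 20, num_layers=2)
>>> shapes = {n: tuple(p.shape) for n, p in rnn.named_parameters()}
>>> shapes["cells.0.weight_ih"]   # (2*hidden_size, input_size) for k = 0
(40, 10)
>>> shapes["cells.1.weight_ih"]   # deeper layers take hidden_size features
(40, 20)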
Note
All the weights and biases are initialized according to the provided initializers (kernel_init, recurrent_kernel_init, etc.). The threshold shift \(\beta\) is initialized from the given value of beta.
Note
The batch_first argument is ignored for unbatched inputs.
Examples:
>>> rnn = JANET(10, 20, num_layers=2, dropout=0.1, beta=0.5)
>>> input = torch.randn(5, 3, 10)  # (seq_len, batch, input_size)
>>> h0 = torch.zeros(2, 3, 20)     # (num_layers, batch, hidden_size)
>>> c0 = torch.zeros(2, 3, 20)     # (num_layers, batch, hidden_size)
>>> output, (hn, cn) = rnn(input, (h0, c0))
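As noted under Inputs, packed variable-length sequences are also accepted; a sketch using the torch.nn.utils.rnn helpers:

>>> from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
>>> rnn = JANET(10, 20)
>>> padded = torch.randn(5, 3, 10)     # (seq_len, batch, input_size)
>>> lengths = torch.tensor([5, 3, 2])  # per-sequence valid lengths
>>> packed = pack_padded_sequence(padded, lengths)
>>> packed_out, (hn, cn) = rnn(packed)
>>> output, out_lengths = pad_packed_sequence(packed_out)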
- __init__(input_size, hidden_size, num_layers=1, dropout=0.0, batch_first=False, **kwargs)
Initialize internal Module state, shared by both nn.Module and ScriptModule.
Methods
__init__(input_size, hidden_size[, ...]) – Initialize internal Module state, shared by both nn.Module and ScriptModule.
add_module(name, module) – Add a child module to the current module.
apply(fn) – Apply fn recursively to every submodule (as returned by .children()) as well as self.
bfloat16() – Casts all floating point parameters and buffers to bfloat16 datatype.
buffers([recurse]) – Return an iterator over module buffers.
children() – Return an iterator over immediate children modules.
compile(*args, **kwargs) – Compile this Module's forward using torch.compile().
cpu() – Move all model parameters and buffers to the CPU.
cuda([device]) – Move all model parameters and buffers to the GPU.
double() – Casts all floating point parameters and buffers to double datatype.
eval() – Set the module in evaluation mode.
extra_repr() – Return the extra representation of the module.
float() – Casts all floating point parameters and buffers to float datatype.
forward(inp[, state]) – Define the computation performed at every call.
get_buffer(target) – Return the buffer given by target if it exists, otherwise throw an error.
get_extra_state() – Return any extra state to include in the module's state_dict.
get_parameter(target) – Return the parameter given by target if it exists, otherwise throw an error.
get_submodule(target) – Return the submodule given by target if it exists, otherwise throw an error.
half() – Casts all floating point parameters and buffers to half datatype.
initialize_cells(cell_class, **kwargs) – Helper method to initialize cells for the derived recurrent layer class.
ipu([device]) – Move all model parameters and buffers to the IPU.
load_state_dict(state_dict[, strict, assign]) – Copy parameters and buffers from state_dict into this module and its descendants.
modules() – Return an iterator over all modules in the network.
mtia([device]) – Move all model parameters and buffers to the MTIA.
named_buffers([prefix, recurse, ...]) – Return an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
named_children() – Return an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
named_modules([memo, prefix, remove_duplicate]) – Return an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
named_parameters([prefix, recurse, ...]) – Return an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
parameters([recurse]) – Return an iterator over module parameters.
register_backward_hook(hook) – Register a backward hook on the module.
register_buffer(name, tensor[, persistent]) – Add a buffer to the module.
register_forward_hook(hook, *[, prepend, ...]) – Register a forward hook on the module.
register_forward_pre_hook(hook, *[, ...]) – Register a forward pre-hook on the module.
register_full_backward_hook(hook[, prepend]) – Register a backward hook on the module.
register_full_backward_pre_hook(hook[, prepend]) – Register a backward pre-hook on the module.
register_load_state_dict_post_hook(hook) – Register a post-hook to be run after module's load_state_dict() is called.
register_load_state_dict_pre_hook(hook) – Register a pre-hook to be run before module's load_state_dict() is called.
register_module(name, module) – Alias for add_module().
register_parameter(name, param) – Add a parameter to the module.
register_state_dict_post_hook(hook) – Register a post-hook for the state_dict() method.
register_state_dict_pre_hook(hook) – Register a pre-hook for the state_dict() method.
requires_grad_([requires_grad]) – Change if autograd should record operations on parameters in this module.
set_extra_state(state) – Set extra state contained in the loaded state_dict.
set_submodule(target, module[, strict]) – Set the submodule given by target if it exists, otherwise throw an error.
share_memory() – See torch.Tensor.share_memory_().
state_dict(*args[, destination, prefix, ...]) – Return a dictionary containing references to the whole state of the module.
to(*args, **kwargs) – Move and/or cast the parameters and buffers.
to_empty(*, device[, recurse]) – Move the parameters and buffers to the specified device without copying storage.
train([mode]) – Set the module in training mode.
type(dst_type) – Casts all parameters and buffers to dst_type.
xpu([device]) – Move all model parameters and buffers to the XPU.
zero_grad([set_to_none]) – Reset gradients of all model parameters.
Attributes
T_destination
call_super_init
dump_patches
input_size
hidden_size
bias
dropout
batch_first
training