torchrecurrent.MultiplicativeLSTM#

class torchrecurrent.MultiplicativeLSTM(input_size, hidden_size, num_layers=1, dropout=0.0, batch_first=False, **kwargs)[source]#

Multi-layer multiplicative long short-term memory network.

[arXiv]

Each layer consists of a MultiplicativeLSTMCell, which updates the hidden and cell states according to:

\[\begin{split}\begin{aligned} m_t &= (W_{ih}^m x_t + b_{ih}^m) \circ (W_{hh}^m h_{t-1} + b_{hh}^m), \\ \hat{h}_t &= W_{ih}^h x_t + b_{ih}^h + W_{mh}^h m_t + b_{mh}^h, \\ i_t &= \sigma(W_{ih}^i x_t + b_{ih}^i + W_{mh}^i m_t + b_{mh}^i), \\ f_t &= \sigma(W_{ih}^f x_t + b_{ih}^f + W_{mh}^f m_t + b_{mh}^f), \\ o_t &= \sigma(W_{ih}^o x_t + b_{ih}^o + W_{mh}^o m_t + b_{mh}^o), \\ c_t &= f_t \circ c_{t-1} + i_t \circ \tanh(\hat{h}_t), \\ h_t &= \tanh(c_t) \circ o_t \end{aligned}\end{split}\]

where \(h_t\) is the hidden state, \(c_t\) the cell state, \(\sigma\) is the sigmoid, and \(\circ\) the Hadamard product.
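
For intuition, the single-step update can be written out directly in plain PyTorch. The following is a minimal unbatched sketch, not the library's implementation; the fused gate ordering (m, then \(\hat{h}_t\), \(i_t\), \(f_t\), \(o_t\)) inside the input-hidden weights and the helper name mlstm_step are assumptions made for illustration:

>>> import torch
>>> def mlstm_step(x, h, c, W_ih, b_ih, W_hh, b_hh, W_mh, b_mh):
...     H = h.shape[-1]
...     # multiplicative state: Hadamard product of input and recurrent projections
...     m = (W_ih[:H] @ x + b_ih[:H]) * (W_hh @ h + b_hh)
...     # fused pre-activations for h_hat, i, f, o (4*hidden_size rows)
...     pre = W_ih[H:] @ x + b_ih[H:] + W_mh @ m + b_mh
...     h_hat, i, f, o = pre.chunk(4)
...     i, f, o = i.sigmoid(), f.sigmoid(), o.sigmoid()
...     c_new = f * c + i * torch.tanh(h_hat)
...     return torch.tanh(c_new) * o, c_new
>>> x, h, c = torch.randn(10), torch.zeros(20), torch.zeros(20)
>>> h1, c1 = mlstm_step(x, h, c,
...                     torch.randn(100, 10), torch.zeros(100),  # W_ih, b_ih
...                     torch.randn(20, 20), torch.zeros(20),    # W_hh, b_hh
...                     torch.randn(80, 20), torch.zeros(80))    # W_mh, b_mh
>>> h1.shape, c1.shape
(torch.Size([20]), torch.Size([20]))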

In a multilayer multiplicative LSTM, the input \(x^{(l)}_t\) of the \(l\)-th layer (\(l \ge 2\)) is the hidden state \(h^{(l-1)}_t\) of the previous layer multiplied by dropout \(\delta^{(l-1)}_t\), where each \(\delta^{(l-1)}_t\) is a Bernoulli random variable which is 0 with probability dropout.
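
Between layers this is ordinary dropout applied to each time step's hidden state. A hedged sketch of the effect (illustration only, not the library's internals; note that torch.nn.Dropout also rescales surviving entries by \(1/(1 - p)\) during training):

>>> import torch
>>> drop = torch.nn.Dropout(p=0.1)   # each entry is zeroed with probability 0.1
>>> h_layer1 = torch.randn(3, 20)    # layer 1 hidden state at step t (batch of 3)
>>> x_layer2 = drop(h_layer1)        # becomes layer 2's input at step t (training mode)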

Parameters:
  • input_size – The number of expected features in the input x.

  • hidden_size – The number of features in the hidden and cell states h, c.

  • num_layers – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two multiplicative LSTM layers, with the second receiving the outputs of the first. Default: 1

  • dropout – If non-zero, introduces a Dropout layer on the outputs of each layer except the last layer, with dropout probability equal to dropout. Default: 0

  • batch_first – If True, then the input and output tensors are provided as (batch, seq, feature) instead of (seq, batch, feature). Default: False

  • bias – If False, then the layer does not use input-side biases. Default: True

  • recurrent_bias – If False, then the layer does not use recurrent biases. Default: True

  • multiplicative_bias – If False, then the layer does not use multiplicative biases. Default: True

  • kernel_init – Initializer for W_{ih} (see the sketch after this parameter list). Default: torch.nn.init.xavier_uniform_

  • recurrent_kernel_init – Initializer for W_{hh}. Default: torch.nn.init.xavier_uniform_

  • multiplicative_kernel_init – Initializer for W_{mh}. Default: torch.nn.init.normal_

  • bias_init – Initializer for b_{ih} when bias=True. Default: torch.nn.init.zeros_

  • recurrent_bias_init – Initializer for b_{hh} when recurrent_bias=True. Default: torch.nn.init.zeros_

  • multiplicative_bias_init – Initializer for b_{mh} when multiplicative_bias=True. Default: torch.nn.init.zeros_

  • device – The desired device of parameters.

  • dtype – The desired floating point type of parameters.
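
The initializer keywords presumably accept any callable that fills a tensor in place, in the style of the torch.nn.init functions used as defaults (a hedged sketch; support for functools.partial-wrapped initializers is an assumption):

>>> import torch
>>> from functools import partial
>>> from torchrecurrent import MultiplicativeLSTM
>>> rnn = MultiplicativeLSTM(
...     10, 20,
...     kernel_init=torch.nn.init.orthogonal_,
...     multiplicative_kernel_init=partial(torch.nn.init.normal_, std=0.1),
... )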

Inputs: input, (h_0, c_0)
  • input: tensor of shape \((L, H_{in})\) for unbatched input, \((L, N, H_{in})\) when batch_first=False or \((N, L, H_{in})\) when batch_first=True containing the features of the input sequence. The input can also be a packed variable length sequence. See torch.nn.utils.rnn.pack_padded_sequence() or torch.nn.utils.rnn.pack_sequence() for details.

  • h_0: tensor of shape \((\text{num_layers}, H_{out})\) for unbatched input or \((\text{num_layers}, N, H_{out})\) containing the initial hidden state for each element in the input sequence. Defaults to zeros if not provided.

  • c_0: tensor of shape \((\text{num_layers}, H_{out})\) for unbatched input or \((\text{num_layers}, N, H_{out})\) containing the initial cell state for each element in the input sequence. Defaults to zeros if not provided.

where:

\[\begin{split}\begin{aligned} N ={} & \text{batch size} \\ L ={} & \text{sequence length} \\ H_{in} ={} & \text{input\_size} \\ H_{out} ={} & \text{hidden\_size} \end{aligned}\end{split}\]
Outputs: output, (h_n, c_n)
  • output: tensor of shape \((L, H_{out})\) for unbatched input, \((L, N, H_{out})\) when batch_first=False or \((N, L, H_{out})\) when batch_first=True containing the output features (h_t) from the last layer of the multiplicative LSTM, for each t. If a torch.nn.utils.rnn.PackedSequence has been given as the input, the output will also be a packed sequence.

  • h_n: tensor of shape \((\text{num_layers}, H_{out})\) for unbatched input or \((\text{num_layers}, N, H_{out})\) containing the final hidden state for each element in the sequence.

  • c_n: tensor of shape \((\text{num_layers}, H_{out})\) for unbatched input or \((\text{num_layers}, N, H_{out})\) containing the final cell state for each element in the sequence.
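
Assuming this layer follows the torch.nn.LSTM convention (an assumption, not stated above), the final time step of output coincides with the last layer's slice of h_n:

>>> import torch
>>> from torchrecurrent import MultiplicativeLSTM
>>> rnn = MultiplicativeLSTM(10, 20, num_layers=2)
>>> output, (hn, cn) = rnn(torch.randn(5, 3, 10))
>>> torch.allclose(output[-1], hn[-1])
True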

Variables:

cells.{k}.weight_ih

the learnable input-hidden weights of the \(k\)-th layer, of shape (5*hidden_size, input_size) for k = 0. Otherwise, the shape is (5*hidden_size, hidden_size).

cells.{k}.weight_hh

the learnable hidden-hidden weights of the \(k\)-th layer, of shape (hidden_size, hidden_size).

cells.{k}.weight_mh

the learnable multiplicative-hidden weights of the \(k\)-th layer, of shape (4*hidden_size, hidden_size).

cells.{k}.bias_ih

the learnable input-hidden biases of the \(k\)-th layer, of shape (5*hidden_size). Only present when bias=True.

cells.{k}.bias_hh

the learnable hidden-hidden biases of the \(k\)-th layer, of shape (hidden_size). Only present when recurrent_bias=True.

cells.{k}.bias_mh

the learnable multiplicative biases of the \(k\)-th layer, of shape (4*hidden_size). Only present when multiplicative_bias=True.
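
These shapes can be checked directly against named_parameters(), using the cells.{k} naming scheme documented above:

>>> from torchrecurrent import MultiplicativeLSTM
>>> rnn = MultiplicativeLSTM(10, 20, num_layers=2)
>>> params = dict(rnn.named_parameters())
>>> params["cells.0.weight_ih"].shape   # (5*hidden_size, input_size) for k = 0
torch.Size([100, 10])
>>> params["cells.1.weight_ih"].shape   # (5*hidden_size, hidden_size) otherwise
torch.Size([100, 20])
>>> params["cells.0.weight_mh"].shape   # (4*hidden_size, hidden_size)
torch.Size([80, 20])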

Note

All the weights and biases are initialized according to the provided initializers (kernel_init, recurrent_kernel_init, etc.).

Note

batch_first argument is ignored for unbatched inputs.
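
A minimal unbatched call illustrating the note above:

>>> import torch
>>> from torchrecurrent import MultiplicativeLSTM
>>> rnn = MultiplicativeLSTM(10, 20, num_layers=2)
>>> x = torch.randn(5, 10)        # (seq_len, input_size), no batch dimension
>>> out, (hn, cn) = rnn(x)        # batch_first has no effect here
>>> out.shape
torch.Size([5, 20])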

Examples:

>>> import torch
>>> from torchrecurrent import MultiplicativeLSTM
>>> rnn = MultiplicativeLSTM(10, 20, num_layers=2, dropout=0.1)
>>> input = torch.randn(5, 3, 10)   # (seq_len, batch, input_size)
>>> h0 = torch.zeros(2, 3, 20)      # (num_layers, batch, hidden_size)
>>> c0 = torch.zeros(2, 3, 20)      # (num_layers, batch, hidden_size)
>>> output, (hn, cn) = rnn(input, (h0, c0))
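
Packed variable-length input works the same way, continuing from the tensors above (a sketch using the standard torch.nn.utils.rnn helpers):

>>> from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
>>> lengths = torch.tensor([5, 3, 2])             # sorted, as enforce_sorted=True requires
>>> packed = pack_padded_sequence(input, lengths)
>>> packed_out, (hn, cn) = rnn(packed, (h0, c0))  # output is a PackedSequence
>>> output, out_lengths = pad_packed_sequence(packed_out)
>>> output.shape
torch.Size([5, 3, 20])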
__init__(input_size, hidden_size, num_layers=1, dropout=0.0, batch_first=False, **kwargs)[source]#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

Methods

__init__(input_size, hidden_size[, ...])

Initialize internal Module state, shared by both nn.Module and ScriptModule.

add_module(name, module)

Add a child module to the current module.

apply(fn)

Apply fn recursively to every submodule (as returned by .children()) as well as self.

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

buffers([recurse])

Return an iterator over module buffers.

children()

Return an iterator over immediate children modules.

compile(*args, **kwargs)

Compile this Module's forward using torch.compile().

cpu()

Move all model parameters and buffers to the CPU.

cuda([device])

Move all model parameters and buffers to the GPU.

double()

Casts all floating point parameters and buffers to double datatype.

eval()

Set the module in evaluation mode.

extra_repr()

Return the extra representation of the module.

float()

Casts all floating point parameters and buffers to float datatype.

forward(inp[, state])

Define the computation performed at every call.

get_buffer(target)

Return the buffer given by target if it exists, otherwise throw an error.

get_extra_state()

Return any extra state to include in the module's state_dict.

get_parameter(target)

Return the parameter given by target if it exists, otherwise throw an error.

get_submodule(target)

Return the submodule given by target if it exists, otherwise throw an error.

half()

Casts all floating point parameters and buffers to half datatype.

initialize_cells(cell_class, **kwargs)

Helper method to initialize cells for the derived recurrent layer class.

ipu([device])

Move all model parameters and buffers to the IPU.

load_state_dict(state_dict[, strict, assign])

Copy parameters and buffers from state_dict into this module and its descendants.

modules()

Return an iterator over all modules in the network.

mtia([device])

Move all model parameters and buffers to the MTIA.

named_buffers([prefix, recurse, ...])

Return an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

named_children()

Return an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

named_modules([memo, prefix, remove_duplicate])

Return an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

named_parameters([prefix, recurse, ...])

Return an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

parameters([recurse])

Return an iterator over module parameters.

register_backward_hook(hook)

Register a backward hook on the module.

register_buffer(name, tensor[, persistent])

Add a buffer to the module.

register_forward_hook(hook, *[, prepend, ...])

Register a forward hook on the module.

register_forward_pre_hook(hook, *[, ...])

Register a forward pre-hook on the module.

register_full_backward_hook(hook[, prepend])

Register a backward hook on the module.

register_full_backward_pre_hook(hook[, prepend])

Register a backward pre-hook on the module.

register_load_state_dict_post_hook(hook)

Register a post-hook to be run after module's load_state_dict() is called.

register_load_state_dict_pre_hook(hook)

Register a pre-hook to be run before module's load_state_dict() is called.

register_module(name, module)

Alias for add_module().

register_parameter(name, param)

Add a parameter to the module.

register_state_dict_post_hook(hook)

Register a post-hook for the state_dict() method.

register_state_dict_pre_hook(hook)

Register a pre-hook for the state_dict() method.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

set_extra_state(state)

Set extra state contained in the loaded state_dict.

set_submodule(target, module[, strict])

Set the submodule given by target if it exists, otherwise throw an error.

share_memory()

See torch.Tensor.share_memory_().

state_dict(*args[, destination, prefix, ...])

Return a dictionary containing references to the whole state of the module.

to(*args, **kwargs)

Move and/or cast the parameters and buffers.

to_empty(*, device[, recurse])

Move the parameters and buffers to the specified device without copying storage.

train([mode])

Set the module in training mode.

type(dst_type)

Casts all parameters and buffers to dst_type.

xpu([device])

Move all model parameters and buffers to the XPU.

zero_grad([set_to_none])

Reset gradients of all model parameters.

Attributes

T_destination

call_super_init

dump_patches

input_size

hidden_size

bias

dropout

batch_first

training