Options for RNN modules.
#include <rnn.h>
Public Member Functions

RNNOptions (int64_t input_size, int64_t hidden_size)

RNNOptions & tanh ()
    Sets the activation after linear operations to tanh.

RNNOptions & relu ()
    Sets the activation after linear operations to relu.

TORCH_ARG (int64_t, input_size)
    The number of features of a single sample in the input sequence x.

TORCH_ARG (int64_t, hidden_size)
    The number of features in the hidden state h.

TORCH_ARG (int64_t, layers)
    The number of recurrent layers (cells) to use.

TORCH_ARG (bool, with_bias)
    Whether a bias term should be added to all linear operations.

TORCH_ARG (double, dropout) = 0.0
    If non-zero, adds dropout with the given probability to the output of each RNN layer, except the final layer.

TORCH_ARG (bool, bidirectional)
    Whether to make the RNN bidirectional.

TORCH_ARG (bool, batch_first)
    If true, the input sequence should be provided as (batch, sequence, features).

TORCH_ARG (RNNActivation, activation)
    The activation to use after linear operations.
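The options object is typically built by chaining the setters above and then passed to an RNN module. The following is a minimal sketch, assuming a libtorch build whose RNNOptions matches this page (i.e. it still exposes layers, with_bias, and the tanh()/relu() helpers); the sizes and shapes are illustrative only.

```cpp
#include <torch/torch.h>

int main() {
  // Required sizes first, then the optional TORCH_ARG setters (all chainable).
  auto options = torch::nn::RNNOptions(/*input_size=*/10, /*hidden_size=*/20)
                     .layers(2)
                     .with_bias(true)
                     .bidirectional(false)
                     .tanh();  // activation after the linear operations

  torch::nn::RNN rnn(options);

  // With batch_first left at its default (false), the input layout is
  // (sequence, batch, features): here 5 time steps, batch of 3, 10 features.
  auto input = torch::randn({5, 3, 10});

  // The exact return type of forward() differs across libtorch versions,
  // so only the call itself is shown.
  auto out = rnn->forward(input);
  return 0;
}
```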
torch::nn::RNNOptions::TORCH_ARG (double, dropout) = 0.0

If non-zero, adds dropout with the given probability to the output of each RNN layer, except the final layer.
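Because the final layer's output is excluded, this setting only has a visible effect when more than one layer is stacked. A short sketch, under the same assumptions as the example above:

```cpp
// Three stacked layers: dropout with p = 0.5 is applied to the outputs of
// layers 1 and 2, but not to the output of the final (third) layer.
auto stacked = torch::nn::RNN(
    torch::nn::RNNOptions(10, 20).layers(3).dropout(0.5));
```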
torch::nn::RNNOptions::TORCH_ARG (bool, batch_first)

If true, the input sequence should be provided as (batch, sequence, features). If false (default), the expected layout is (sequence, batch, features).
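To make the two layouts concrete, here is a shape-only sketch (same assumptions as above) for a batch of 3 sequences of length 5 with input_size = 10:

```cpp
auto seq_major   = torch::randn({5, 3, 10});  // batch_first == false (default)
auto batch_major = torch::randn({3, 5, 10});  // batch_first == true

// A module configured with batch_first(true) expects the second layout.
torch::nn::RNN rnn_bf(torch::nn::RNNOptions(10, 20).batch_first(true));
auto out = rnn_bf->forward(batch_major);
```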