textformer.models.encoders
Pre-defined encoder architectures.
A package for already-implemented encoder models.
class textformer.models.encoders.BiGRUEncoder(n_input=128, n_hidden_enc=128, n_hidden_dec=128, n_embedding=128, dropout=0.5)
Bases: textformer.core.Encoder
A BiGRUEncoder is used to supply the encoding part of the attention-based Seq2Seq architecture.
__init__(self, n_input=128, n_hidden_enc=128, n_hidden_dec=128, n_embedding=128, dropout=0.5)
Initialization method.
- Parameters
n_input (int) – Number of input units.
n_hidden_enc (int) – Number of hidden units in the Encoder.
n_hidden_dec (int) – Number of hidden units in the Decoder.
n_embedding (int) – Number of embedding units.
dropout (float) – Amount of dropout to be applied.
forward(self, x)
Performs a forward pass over the architecture.
- Parameters
x (torch.Tensor) – Tensor containing the data.
- Returns
The hidden state values.
class textformer.models.encoders.ConvEncoder(n_input=128, n_hidden=128, n_embedding=128, n_layers=1, kernel_size=3, dropout=0.5, scale=0.5, max_length=100)
Bases: textformer.core.Encoder
A ConvEncoder is used to supply the encoding part of the Convolutional Seq2Seq architecture.
__init__(self, n_input=128, n_hidden=128, n_embedding=128, n_layers=1, kernel_size=3, dropout=0.5, scale=0.5, max_length=100)
Initialization method.
- Parameters
n_input (int) – Number of input units.
n_hidden (int) – Number of hidden units.
n_embedding (int) – Number of embedding units.
n_layers (int) – Number of convolutional layers.
kernel_size (int) – Size of the convolutional kernels.
dropout (float) – Amount of dropout to be applied.
scale (float) – Value for the residual learning.
max_length (int) – Maximum length of positional embeddings.
forward(self, x)
Performs a forward pass over the architecture.
- Parameters
x (torch.Tensor) – Tensor containing the data.
- Returns
The convolutions and output values.
class textformer.models.encoders.GRUEncoder(n_input=128, n_hidden=128, n_embedding=128, dropout=0.5)
Bases: textformer.core.Encoder
A GRUEncoder is used to supply the encoding part of the Seq2Seq architecture.
__init__(self, n_input=128, n_hidden=128, n_embedding=128, dropout=0.5)
Initialization method.
- Parameters
n_input (int) – Number of input units.
n_hidden (int) – Number of hidden units.
n_embedding (int) – Number of embedding units.
dropout (float) – Amount of dropout to be applied.
forward(self, x)
Performs a forward pass over the architecture.
- Parameters
x (torch.Tensor) – Tensor containing the data.
- Returns
The hidden state values.
class textformer.models.encoders.LSTMEncoder(n_input=128, n_hidden=128, n_embedding=128, n_layers=1, dropout=0.5)
Bases: textformer.core.Encoder
An LSTMEncoder is used to supply the encoding part of the Seq2Seq architecture.
__init__(self, n_input=128, n_hidden=128, n_embedding=128, n_layers=1, dropout=0.5)
Initialization method.
- Parameters
n_input (int) – Number of input units.
n_hidden (int) – Number of hidden units.
n_embedding (int) – Number of embedding units.
n_layers (int) – Number of RNN layers.
dropout (float) – Amount of dropout to be applied.
forward(self, x)
Performs a forward pass over the architecture.
- Parameters
x (torch.Tensor) – Tensor containing the data.
- Returns
The hidden state and cell values.
class textformer.models.encoders.SelfAttentionEncoder(n_input=128, n_hidden=128, n_forward=256, n_layers=1, n_heads=3, dropout=0.1, max_length=100)
Bases: textformer.core.Encoder
A SelfAttentionEncoder is used to supply the encoding part of the Transformer architecture.
__init__(self, n_input=128, n_hidden=128, n_forward=256, n_layers=1, n_heads=3, dropout=0.1, max_length=100)
Initialization method.
- Parameters
n_input (int) – Number of input units.
n_hidden (int) – Number of hidden units.
n_forward (int) – Number of feed forward units.
n_layers (int) – Number of attention layers.
n_heads (int) – Number of attention heads.
dropout (float) – Amount of dropout to be applied.
max_length (int) – Maximum length of positional embeddings.
forward(self, x, x_mask)
Performs a forward pass over the architecture.
- Parameters
x (torch.Tensor) – Tensor containing the data.
x_mask (torch.Tensor) – Tensor containing the mask to be applied over the data.
- Returns
The output values.