ml4gw.nn.autoencoder.convolutional
Classes
- ConvBlock
- ConvolutionalAutoencoder: Build a stack of convolutional autoencoder layer blocks.
- class ml4gw.nn.autoencoder.convolutional.ConvBlock(in_channels, encode_channels, kernel_size, stride=1, groups=1, activation=<class 'torch.nn.modules.activation.ReLU'>, norm=<class 'torch.nn.modules.batchnorm.BatchNorm1d'>, decode_channels=None, output_activation=None, skip_connection=None)
Bases: Autoencoder
- Parameters:
in_channels (int)
encode_channels (int)
kernel_size (int)
stride (int)
groups (int)
activation (Module)
norm (Callable[[...], Module])
decode_channels (int | None)
output_activation (Module | None)
skip_connection (SkipConnection | None)
- decode(X)
  - Return type: Tensor
  - Parameters: X (Tensor)
- encode(X)
  - Return type: Tensor
  - Parameters: X (Tensor)
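
A minimal usage sketch for a single ConvBlock, assuming 1D input tensors shaped (batch, in_channels, time); the channel counts, kernel size, stride, and sequence length below are illustrative choices, not values required by the library.

```python
import torch
from ml4gw.nn.autoencoder.convolutional import ConvBlock

# One encoder/decoder pair; the defaults use ReLU activations and BatchNorm1d.
# All shapes in the comments are assumptions for this sketch.
block = ConvBlock(
    in_channels=2,      # e.g. two detector channels
    encode_channels=8,  # channels produced by the encoder convolution
    kernel_size=7,
    stride=2,
)

x = torch.randn(4, 2, 1024)  # (batch, in_channels, time)
z = block.encode(x)          # downsampled latent; exact length depends on padding
y = block.decode(z)          # maps back toward the input shape
```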
- class ml4gw.nn.autoencoder.convolutional.ConvolutionalAutoencoder(in_channels, encode_channels, kernel_size, stride=1, groups=1, activation=<class 'torch.nn.modules.activation.ReLU'>, output_activation=None, norm=<class 'torch.nn.modules.batchnorm.BatchNorm1d'>, decode_channels=None, skip_connection=None)
Bases: Autoencoder
Build a stack of convolutional autoencoder layer blocks. The output of each decoder layer will match the shape of the input to its corresponding encoder layer, except for the last decoder, which can have an arbitrary number of channels specified by decode_channels. All layers also share the same activation except for the last decoder layer, which can have an arbitrary output_activation. A usage sketch follows the method listing below.
- Parameters:
in_channels (int)
encode_channels (Sequence[int])
kernel_size (int)
stride (int)
groups (int)
activation (Module)
output_activation (Module | None)
norm (Callable[[...], Module])
decode_channels (int | None)
skip_connection (SkipConnection | None)
- decode(*X, states=None, input_size=None)
  - Return type: Tensor
  - Parameters: input_size (int | None)
- forward(X)
  - Return type: Tensor
  - Parameters: X (Tensor)
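
A minimal end-to-end sketch for ConvolutionalAutoencoder, again assuming (batch, channels, time) inputs and that leaving decode_channels=None reconstructs the original in_channels; every concrete number below is illustrative only.

```python
import torch
from ml4gw.nn.autoencoder.convolutional import ConvolutionalAutoencoder

# Three encoder blocks widening 2 -> 8 -> 16 -> 32 channels. Each decoder
# block mirrors the input shape of its corresponding encoder block, and the
# final decoder maps back to the input channel count (assumed behavior when
# decode_channels is None).
model = ConvolutionalAutoencoder(
    in_channels=2,
    encode_channels=[8, 16, 32],
    kernel_size=7,
    stride=2,
)

x = torch.randn(4, 2, 1024)  # (batch, in_channels, time)
x_hat = model(x)             # forward pass: encode then decode

# Example reconstruction objective, assuming x_hat matches x's shape
# under these stride/kernel settings.
loss = torch.nn.functional.mse_loss(x_hat, x)
```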