ml4gw.nn.resnet.resnet_1d
In large part lifted from https://github.com/pytorch/vision/blob/main/torchvision/models/resnet.py, but with 1d convolutions, arbitrary kernel sizes, and a default norm layer better suited to most GW applications, where training-time batch statistics are entirely arbitrary because the data is simulated.
Functions
- conv1 -- Kernel-size 1 convolution
- convN -- 1d convolution with padding
Classes
- BasicBlock -- Defines the structure of the blocks used to build the ResNet
- Bottleneck -- Bottleneck blocks implement one extra convolution compared to basic blocks
- BottleneckResNet1D -- A version of ResNet that uses bottleneck blocks
- ResNet1D -- 1D ResNet architecture
- class ml4gw.nn.resnet.resnet_1d.BasicBlock(inplanes, planes, kernel_size=3, stride=1, downsample=None, groups=1, base_width=64, dilation=1, norm_layer=None)
Bases: Module
Defines the structure of the blocks used to build the ResNet
- Parameters:
inplanes (int)
planes (int)
kernel_size (int)
stride (int)
downsample (Module | None)
groups (int)
base_width (int)
dilation (int)
norm_layer (Callable[[...], Module] | None)
- expansion: int = 1
- forward(x)
- Return type:
Tensor
- Parameters:
x (Tensor)
- class ml4gw.nn.resnet.resnet_1d.Bottleneck(inplanes, planes, kernel_size=3, stride=1, downsample=None, groups=1, base_width=64, dilation=1, norm_layer=None)
Bases: Module
Bottleneck blocks implement one extra convolution compared to basic blocks. In these layers, the planes parameter is generally meant to downsize the number of feature maps first; these are then expanded back out to planes * Bottleneck.expansion feature maps at the output of the layer.
- Parameters:
inplanes (int)
planes (int)
kernel_size (int)
stride (int)
downsample (Module | None)
groups (int)
base_width (int)
dilation (int)
norm_layer (Callable[[int], Module] | None)
- expansion: int = 4
- forward(x)
- Return type:
Tensor
- Parameters:
x (Tensor)
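The relationship between planes and a block's output width can be sketched from the expansion values above (a minimal illustration; block_out_channels is a hypothetical helper, not part of ml4gw):

```python
# Expansion factors quoted from the class attributes documented above:
# BasicBlock.expansion = 1, Bottleneck.expansion = 4.
BASIC_EXPANSION = 1
BOTTLENECK_EXPANSION = 4

def block_out_channels(planes: int, expansion: int) -> int:
    """Feature maps produced at a block's output: planes * expansion."""
    return planes * expansion

# A Bottleneck with planes=64 narrows to 64 internal feature maps,
# then expands back out to 64 * 4 = 256 maps at its output.
print(block_out_channels(64, BOTTLENECK_EXPANSION))  # 256
# A BasicBlock with the same planes keeps its width unchanged.
print(block_out_channels(64, BASIC_EXPANSION))       # 64
```

This is why a downsample module is typically needed on the skip connection whenever inplanes differs from planes * expansion.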
- class ml4gw.nn.resnet.resnet_1d.BottleneckResNet1D(in_channels, layers, classes, kernel_size=3, zero_init_residual=False, groups=1, width_per_group=64, stride_type=None, norm_layer=None)
Bases: ResNet1D
A version of ResNet that uses bottleneck blocks
- Parameters:
in_channels (int)
layers (List[int])
classes (int)
kernel_size (int)
zero_init_residual (bool)
groups (int)
width_per_group (int)
stride_type (List[Literal['stride', 'dilation']] | None)
norm_layer (Callable[[int], Module] | None)
- block: alias of Bottleneck
- class ml4gw.nn.resnet.resnet_1d.ResNet1D(in_channels, layers, classes, kernel_size=3, zero_init_residual=False, groups=1, width_per_group=64, stride_type=None, norm_layer=None)
Bases: Module
1D ResNet architecture
Simple extension of ResNet to 1D convolutions with arbitrary kernel sizes to support the longer timeseries used in BBH detection.
- Parameters:
in_channels (int) -- The number of channels in the input tensor.
layers (List[int]) -- A list representing the number of residual blocks to include in each "layer" of the network. Total layers (e.g. 50 in ResNet50) is 2 + sum(layers) * factor, where factor is 2 for vanilla ResNet and 3 for BottleneckResNet.
kernel_size (int) -- The size of the convolutional kernel to use in all residual layers. _NOT_ the size of the input kernel to the network, which is determined at run-time.
zero_init_residual (bool) -- Flag indicating whether to initialize the weights of the batch-norm layer in each block to 0 so that residuals are initialized as identities. Can improve training results.
groups (int) -- Number of convolutional groups to use in all layers. Grouped convolutions induce local connections between feature maps at subsequent layers rather than global ones. Generally won't need this to be >1, and will raise an error if >1 when using vanilla ResNet.
width_per_group (int) -- Base width of each of the feature map groups, which is scaled up by the typical expansion factor at each layer of the network. Meaningless for vanilla ResNet.
stride_type (Optional[List[Literal["stride", "dilation"]]]) -- Whether to achieve downsampling on the time axis by strided or dilated convolutions for each layer. If left as None, strided convolutions will be used at each layer. Otherwise, stride_type should be one element shorter than layers and indicate either "stride" or "dilation" for each layer after the first.
classes (int)
norm_layer (Callable[[int], Module] | None)
- block: alias of BasicBlock
- forward(x)
- Return type:
Tensor
- Parameters:
x (Tensor)
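The depth formula quoted in the layers description can be checked directly (a sketch; the [3, 4, 6, 3] block counts are the standard ResNet50 configuration, assumed here purely for illustration):

```python
def total_layers(layers: list, factor: int) -> int:
    """Total depth per the docstring: 2 + sum(layers) * factor,
    where factor is 2 for vanilla ResNet1D and 3 for BottleneckResNet1D."""
    return 2 + sum(layers) * factor

# Standard ResNet50-style block counts with bottleneck blocks (factor=3):
print(total_layers([3, 4, 6, 3], factor=3))  # 50
# The same counts with basic blocks (factor=2) give a 34-layer network:
print(total_layers([3, 4, 6, 3], factor=2))  # 34
```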
- ml4gw.nn.resnet.resnet_1d.conv1(in_planes, out_planes, stride=1)
Kernel-size 1 convolution
- Return type:
Conv1d
- Parameters:
in_planes (int)
out_planes (int)
stride (int)
- ml4gw.nn.resnet.resnet_1d.convN(in_planes, out_planes, kernel_size=3, stride=1, groups=1, dilation=1)
1d convolution with padding
- Return type:
Conv1d
- Parameters:
in_planes (int)
out_planes (int)
kernel_size (int)
stride (int)
groups (int)
dilation (int)
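For convN, the padding needed to preserve sequence length at stride 1 follows the standard Conv1d output-length formula (a sketch under the assumption that convN pads to "same" length for odd kernels, as torchvision's conv3x3 does; same_padding and conv1d_out_len are illustrative helpers, not ml4gw functions):

```python
def same_padding(kernel_size: int, dilation: int = 1) -> int:
    # Length-preserving padding for odd kernel sizes at stride 1.
    return dilation * (kernel_size - 1) // 2

def conv1d_out_len(length: int, kernel_size: int, stride: int = 1,
                   padding: int = 0, dilation: int = 1) -> int:
    # Output-length formula from the torch.nn.Conv1d documentation.
    return (length + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1

pad = same_padding(kernel_size=7, dilation=2)  # 6
# At stride 1 the sequence length is preserved:
print(conv1d_out_len(4096, 7, padding=pad, dilation=2))            # 4096
# At stride 2 the time axis is halved, as used for downsampling layers:
print(conv1d_out_len(4096, 7, stride=2, padding=pad, dilation=2))  # 2048
```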