torch.nn
These are the basic building blocks for graphs:
- Containers
- Convolution Layers
- Pooling layers
- Padding Layers
- Non-linear Activations (weighted sum, nonlinearity)
- Non-linear Activations (other)
- Normalization Layers
- Recurrent Layers
- Transformer Layers
- Linear Layers
- Dropout Layers
- Sparse Layers
- Distance Functions
- Loss Functions
- Vision Layers
- Shuffle Layers
- DataParallel Layers (multi-GPU, distributed)
- Utilities
- Quantized Functions
- Lazy Modules Initialization
Parameter
| A kind of Tensor that is to be considered a module parameter. |
UninitializedParameter
| A parameter that is not initialized. |
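A minimal sketch of how Parameter registration works: assigning an nn.Parameter as a module attribute makes it visible to .parameters() and to optimizers, while buffers do not receive gradients. The ScaleShift module below is a made-up example, not part of torch.nn.

```python
import torch
import torch.nn as nn

class ScaleShift(nn.Module):
    """Toy module: tensors wrapped in nn.Parameter become learnable."""
    def __init__(self, num_features):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(num_features))   # learnable
        self.bias = nn.Parameter(torch.zeros(num_features))    # learnable
        self.register_buffer("calls", torch.zeros(1))          # state, not learnable

    def forward(self, x):
        self.calls += 1
        return x * self.weight + self.bias

m = ScaleShift(4)
print([name for name, _ in m.named_parameters()])  # ['weight', 'bias']
```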
Containers
Module
| Base class for all neural network modules. |
Sequential
| A sequential container. |
ModuleList
| Holds submodules in a list. |
ModuleDict
| Holds submodules in a dictionary. |
ParameterList
| Holds parameters in a list. |
ParameterDict
| Holds parameters in a dictionary. |
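A short sketch of how the containers compose: Sequential chains modules in order, while ModuleList and ModuleDict register submodules so their parameters are picked up by optimizers. MultiHead is a made-up module name used only for illustration.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 2),
)

class MultiHead(nn.Module):
    """Hypothetical example: one backbone, several named output heads."""
    def __init__(self):
        super().__init__()
        self.backbone = backbone
        self.heads = nn.ModuleDict({
            "cls": nn.Linear(2, 3),
            "reg": nn.Linear(2, 1),
        })

    def forward(self, x, head="cls"):
        return self.heads[head](self.backbone(x))

out = MultiHead()(torch.randn(4, 8))
print(out.shape)  # torch.Size([4, 3])
```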
Global Hooks For Module
register_module_forward_pre_hook
| Registers a forward pre-hook common to all modules. |
register_module_forward_hook
| Registers a global forward hook for all the modules. |
register_module_backward_hook
| Registers a backward hook common to all the modules. |
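A hedged sketch of the global-hook API, assuming these functions are importable from torch.nn.modules.module as the reference above lists them; the log_shapes hook itself is a made-up example.

```python
import torch
import torch.nn as nn
from torch.nn.modules.module import register_module_forward_hook

def log_shapes(module, inputs, output):
    # Called after every module's forward; only report leaf modules here.
    if len(list(module.children())) == 0:
        print(f"{module.__class__.__name__}: {tuple(output.shape)}")

handle = register_module_forward_hook(log_shapes)
net = nn.Sequential(nn.Linear(8, 4), nn.ReLU())
net(torch.randn(2, 8))
handle.remove()  # global hooks affect *all* modules, so remove them when done
```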
Convolution Layers
Conv1d
| Applies a 1D convolution over an input signal composed of several input planes. |
Conv2d
| Applies a 2D convolution over an input signal composed of several input planes. |
Conv3d
| Applies a 3D convolution over an input signal composed of several input planes. |
ConvTranspose1d
| Applies a 1D transposed convolution operator over an input image composed of several input planes. |
ConvTranspose2d
| Applies a 2D transposed convolution operator over an input image composed of several input planes. |
ConvTranspose3d
| Applies a 3D transposed convolution operator over an input image composed of several input planes. |
LazyConv1d
| A Conv1d module with lazy initialization of its in_channels argument, inferred from the input. |
LazyConv2d
| A Conv2d module with lazy initialization of its in_channels argument, inferred from the input. |
LazyConv3d
| A Conv3d module with lazy initialization of its in_channels argument, inferred from the input. |
LazyConvTranspose1d
| A ConvTranspose1d module with lazy initialization of its in_channels argument, inferred from the input. |
LazyConvTranspose2d
| A ConvTranspose2d module with lazy initialization of its in_channels argument, inferred from the input. |
LazyConvTranspose3d
| A ConvTranspose3d module with lazy initialization of its in_channels argument, inferred from the input. |
Unfold
| Extracts sliding local blocks from a batched input tensor. |
Fold
| Combines an array of sliding local blocks into a large containing tensor. |
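A quick usage sketch for the convolution modules; the layer sizes are arbitrary illustration values.

```python
import torch
import torch.nn as nn

# 3 input channels -> 16 output channels, 3x3 kernel; padding=1 keeps the
# spatial size unchanged at stride 1.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
x = torch.randn(8, 3, 32, 32)        # (batch, channels, height, width)
print(conv(x).shape)                  # torch.Size([8, 16, 32, 32])

# The transposed counterpart maps back to a larger spatial size.
deconv = nn.ConvTranspose2d(16, 3, kernel_size=2, stride=2)
print(deconv(conv(x)).shape)          # torch.Size([8, 3, 64, 64])
```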
Pooling layers
MaxPool1d
| Applies a 1D max pooling over an input signal composed of several input planes. |
MaxPool2d
| Applies a 2D max pooling over an input signal composed of several input planes. |
MaxPool3d
| Applies a 3D max pooling over an input signal composed of several input planes. |
MaxUnpool1d
| Computes a partial inverse of MaxPool1d. |
MaxUnpool2d
| Computes a partial inverse of MaxPool2d. |
MaxUnpool3d
| Computes a partial inverse of MaxPool3d. |
AvgPool1d
| Applies a 1D average pooling over an input signal composed of several input planes. |
AvgPool2d
| Applies a 2D average pooling over an input signal composed of several input planes. |
AvgPool3d
| Applies a 3D average pooling over an input signal composed of several input planes. |
FractionalMaxPool2d
| Applies a 2D fractional max pooling over an input signal composed of several input planes. |
LPPool1d
| Applies a 1D power-average pooling over an input signal composed of several input planes. |
LPPool2d
| Applies a 2D power-average pooling over an input signal composed of several input planes. |
AdaptiveMaxPool1d
| Applies a 1D adaptive max pooling over an input signal composed of several input planes. |
AdaptiveMaxPool2d
| Applies a 2D adaptive max pooling over an input signal composed of several input planes. |
AdaptiveMaxPool3d
| Applies a 3D adaptive max pooling over an input signal composed of several input planes. |
AdaptiveAvgPool1d
| Applies a 1D adaptive average pooling over an input signal composed of several input planes. |
AdaptiveAvgPool2d
| Applies a 2D adaptive average pooling over an input signal composed of several input planes. |
AdaptiveAvgPool3d
| Applies a 3D adaptive average pooling over an input signal composed of several input planes. |
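A short sketch contrasting fixed-kernel and adaptive pooling; shapes are illustration values.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 16, 32, 32)

pool = nn.MaxPool2d(kernel_size=2, stride=2)   # halves H and W
print(pool(x).shape)                            # torch.Size([1, 16, 16, 16])

# Adaptive pooling targets an output size instead of a kernel size, which is
# convenient before a classifier head regardless of the input resolution.
gap = nn.AdaptiveAvgPool2d(output_size=1)
print(gap(x).shape)                             # torch.Size([1, 16, 1, 1])
```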
Padding Layers
ReflectionPad1d
| Pads the input tensor using the reflection of the input boundary. |
ReflectionPad2d
| Pads the input tensor using the reflection of the input boundary. |
ReplicationPad1d
| Pads the input tensor using replication of the input boundary. |
ReplicationPad2d
| Pads the input tensor using replication of the input boundary. |
ReplicationPad3d
| Pads the input tensor using replication of the input boundary. |
ZeroPad2d
| Pads the input tensor boundaries with zero. |
ConstantPad1d
| Pads the input tensor boundaries with a constant value. |
ConstantPad2d
| Pads the input tensor boundaries with a constant value. |
ConstantPad3d
| Pads the input tensor boundaries with a constant value. |
Non-linear Activations (weighted sum, nonlinearity)
ELU
| Applies the Exponential Linear Unit (ELU) function, element-wise. |
Hardshrink
| Applies the hard shrinkage function element-wise. |
Hardsigmoid
| Applies the Hardsigmoid function element-wise. |
Hardtanh
| Applies the HardTanh function element-wise. |
Hardswish
| Applies the Hardswish function, element-wise, as described in the paper Searching for MobileNetV3. |
LeakyReLU
| Applies the LeakyReLU function element-wise: LeakyReLU(x) = max(0, x) + negative_slope * min(0, x). |
LogSigmoid
| Applies the LogSigmoid function element-wise: LogSigmoid(x) = log(1 / (1 + exp(-x))). |
MultiheadAttention
| Allows the model to jointly attend to information from different representation subspaces. |
PReLU
| Applies the PReLU function element-wise, with a learnable slope for negative inputs. |
ReLU
| Applies the rectified linear unit function element-wise: ReLU(x) = max(0, x). |
ReLU6
| Applies the element-wise function ReLU6(x) = min(max(0, x), 6). |
RReLU
| Applies the randomized leaky rectified linear unit function, element-wise, as described in the paper Empirical Evaluation of Rectified Activations in Convolutional Network. |
SELU
| Applies the SELU function element-wise. |
CELU
| Applies the CELU function element-wise. |
GELU
| Applies the Gaussian Error Linear Units (GELU) function. |
Sigmoid
| Applies the element-wise function Sigmoid(x) = 1 / (1 + exp(-x)). |
SiLU
| Applies the Sigmoid Linear Unit (SiLU) function, element-wise. |
Softplus
| Applies the Softplus function element-wise. |
Softshrink
| Applies the soft shrinkage function element-wise. |
Softsign
| Applies the element-wise function Softsign(x) = x / (1 + abs(x)). |
Tanh
| Applies the hyperbolic tangent (Tanh) function element-wise. |
Tanhshrink
| Applies the element-wise function Tanhshrink(x) = x - tanh(x). |
Threshold
| Thresholds each element of the input Tensor. |
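Activation modules are stateless wrappers around the corresponding element-wise functions (PReLU and a few others carry learnable parameters). A minimal sketch:

```python
import torch
import torch.nn as nn

x = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])

relu = nn.ReLU()
gelu = nn.GELU()
print(relu(x))   # tensor([0.0000, 0.0000, 0.0000, 0.5000, 2.0000])
print(gelu(x))   # smooth alternative to ReLU

# Parameter-free activations can be reused anywhere in a network.
mlp = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
```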
Non-linear Activations (other)
Softmin
| Applies the Softmin function to an n-dimensional input Tensor, rescaling it so that the elements of the n-dimensional output Tensor lie in the range [0, 1] and sum to 1. |
Softmax
| Applies the Softmax function to an n-dimensional input Tensor, rescaling it so that the elements of the n-dimensional output Tensor lie in the range [0, 1] and sum to 1. |
Softmax2d
| Applies SoftMax over features to each spatial location. |
LogSoftmax
| Applies the log(Softmax(x)) function to an n-dimensional input Tensor. |
AdaptiveLogSoftmaxWithLoss
| Efficient softmax approximation as described in Efficient softmax approximation for GPUs by Edouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, and Hervé Jégou. |
Normalization Layers
BatchNorm1d
| Applies Batch Normalization over a 2D or 3D input (a mini-batch of 1D inputs with optional additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. |
BatchNorm2d
| Applies Batch Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. |
BatchNorm3d
| Applies Batch Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. |
GroupNorm
| Applies Group Normalization over a mini-batch of inputs as described in the paper Group Normalization. |
SyncBatchNorm
| Applies Batch Normalization over an N-dimensional input (a mini-batch of [N-2]D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. |
InstanceNorm1d
| Applies Instance Normalization over a 3D input (a mini-batch of 1D inputs with optional additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization. |
InstanceNorm2d
| Applies Instance Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization. |
InstanceNorm3d
| Applies Instance Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization. |
LayerNorm
| Applies Layer Normalization over a mini-batch of inputs as described in the paper Layer Normalization. |
LocalResponseNorm
| Applies local response normalization over an input signal composed of several input planes, where channels occupy the second dimension. |
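A minimal sketch of the two most common normalization layers; the shapes are illustration values.

```python
import torch
import torch.nn as nn

x = torch.randn(8, 16, 10, 10)                    # (N, C, H, W)

bn = nn.BatchNorm2d(num_features=16)               # normalizes over N, H, W per channel
ln = nn.LayerNorm(normalized_shape=[16, 10, 10])   # normalizes over the last dims per sample

print(bn(x).shape, ln(x).shape)                    # both keep the input shape

bn.eval()  # in eval mode BatchNorm uses running statistics instead of batch statistics
```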
Recurrent Layers
RNN
| Applies a multi-layer Elman RNN with tanh or ReLU non-linearity to an input sequence. |
LSTM
| Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence. |
GRU
| Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence. |
RNNCell
| An Elman RNN cell with tanh or ReLU non-linearity. |
LSTMCell
| A long short-term memory (LSTM) cell. |
GRUCell
| A gated recurrent unit (GRU) cell. |
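A quick LSTM usage sketch; sizes are illustration values.

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2, batch_first=True)
x = torch.randn(4, 7, 10)          # (batch, seq_len, features)
output, (h_n, c_n) = lstm(x)
print(output.shape)                # torch.Size([4, 7, 20])  outputs for every time step
print(h_n.shape, c_n.shape)        # torch.Size([2, 4, 20])  final states per layer
```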
Transformer Layers
Transformer
| A transformer model. |
TransformerEncoder
| TransformerEncoder is a stack of N encoder layers. |
TransformerDecoder
| TransformerDecoder is a stack of N decoder layers. |
TransformerEncoderLayer
| TransformerEncoderLayer is made up of self-attention and a feedforward network. |
TransformerDecoderLayer
| TransformerDecoderLayer is made up of self-attention, multi-head attention and a feedforward network. |
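A small encoder-only sketch; d_model, nhead and the sequence layout (seq_len, batch, d_model) follow the module defaults, and the sizes are illustration values.

```python
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=32, nhead=4, dim_feedforward=64)
encoder = nn.TransformerEncoder(layer, num_layers=2)

src = torch.randn(10, 8, 32)   # (seq_len, batch, d_model)
print(encoder(src).shape)       # torch.Size([10, 8, 32])
```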
Linear Layers
Identity
| A placeholder identity operator that is argument-insensitive. |
Linear
| Applies a linear transformation to the incoming data: y = xA^T + b. |
Bilinear
| Applies a bilinear transformation to the incoming data: y = x1^T A x2 + b. |
LazyLinear
| A torch.nn.Linear module where in_features is inferred from the input. |
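A minimal sketch of Linear and Bilinear; feature sizes are illustration values.

```python
import torch
import torch.nn as nn

fc = nn.Linear(in_features=20, out_features=5)       # y = x A^T + b
x = torch.randn(3, 20)
print(fc(x).shape)                                    # torch.Size([3, 5])

bil = nn.Bilinear(in1_features=20, in2_features=30, out_features=5)  # y = x1^T A x2 + b
print(bil(x, torch.randn(3, 30)).shape)               # torch.Size([3, 5])
```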
Dropout Layers
Dropout
| During training, randomly zeroes some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. |
Dropout2d
| Randomly zero out entire channels (a channel is a 2D feature map, e.g., the j-th channel of the i-th sample in the batched input is a 2D tensor input[i, j]). |
Dropout3d
| Randomly zero out entire channels (a channel is a 3D feature map, e.g., the j-th channel of the i-th sample in the batched input is a 3D tensor input[i, j]). |
AlphaDropout
| Applies Alpha Dropout over the input. |
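Dropout only acts in training mode, which is why model.train() and model.eval() matter; a minimal sketch:

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(2, 4)

drop.train()
print(drop(x))   # roughly half the entries zeroed, survivors scaled by 1/(1-p)

drop.eval()
print(drop(x))   # identity at evaluation time
```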
Sparse Layers
Embedding
| A simple lookup table that stores embeddings of a fixed dictionary and size. |
EmbeddingBag
| Computes sums or means of 'bags' of embeddings, without instantiating the intermediate embeddings. |
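A short sketch of Embedding versus EmbeddingBag; vocabulary size, dimensions and the index values are illustration values.

```python
import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=1000, embedding_dim=16)
ids = torch.tensor([[1, 2, 4], [4, 3, 9]])          # (batch, seq_len)
print(emb(ids).shape)                                # torch.Size([2, 3, 16])

# EmbeddingBag reduces each "bag" of ids (here by mean) without materializing
# the per-id embeddings.
bag = nn.EmbeddingBag(1000, 16, mode="mean")
flat_ids = torch.tensor([1, 2, 4, 4, 3, 9])
offsets = torch.tensor([0, 3])                       # two bags of three ids each
print(bag(flat_ids, offsets).shape)                  # torch.Size([2, 16])
```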
Distance Functions
CosineSimilarity
| Returns cosine similarity between x1 and x2, computed along dim. |
PairwiseDistance
| Computes the batchwise pairwise distance between vectors v1 and v2 using the p-norm. |
Loss Functions
L1Loss
| Creates a criterion that measures the mean absolute error (MAE) between each element in the input x and target y. |
MSELoss
| Creates a criterion that measures the mean squared error (squared L2 norm) between each element in the input x and target y. |
CrossEntropyLoss
| This criterion combines LogSoftmax and NLLLoss in one single class. |
CTCLoss
| The Connectionist Temporal Classification loss. |
NLLLoss
| The negative log likelihood loss. |
PoissonNLLLoss
| Negative log likelihood loss with Poisson distribution of target. |
GaussianNLLLoss
| Gaussian negative log likelihood loss. |
KLDivLoss
| The Kullback-Leibler divergence loss measure. |
BCELoss
| Creates a criterion that measures the Binary Cross Entropy between the target and the output. |
BCEWithLogitsLoss
| This loss combines a Sigmoid layer and the BCELoss in one single class. |
MarginRankingLoss
| Creates a criterion that measures the loss given inputs x1 and x2 (two 1D mini-batch Tensors) and a label 1D mini-batch tensor y containing 1 or -1. |
HingeEmbeddingLoss
| Measures the loss given an input tensor x and a labels tensor y (containing 1 or -1). |
MultiLabelMarginLoss
| Creates a criterion that optimizes a multi-class multi-classification hinge loss (margin-based loss) between input x (a 2D mini-batch Tensor) and output y (a 2D Tensor of target class indices). |
SmoothL1Loss
| Creates a criterion that uses a squared term if the absolute element-wise error falls below beta and an L1 term otherwise. |
SoftMarginLoss
| Creates a criterion that optimizes a two-class classification logistic loss between input tensor x and target tensor y (containing 1 or -1). |
MultiLabelSoftMarginLoss
| Creates a criterion that optimizes a multi-label one-versus-all loss based on max-entropy, between input x and target y of size (N, C). |
CosineEmbeddingLoss
| Creates a criterion that measures the loss given input tensors x1 and x2 and a Tensor label y with values 1 or -1. |
MultiMarginLoss
| Creates a criterion that optimizes a multi-class classification hinge loss (margin-based loss) between input x (a 2D mini-batch Tensor) and output y (a 1D tensor of target class indices). |
TripletMarginLoss
| Creates a criterion that measures the triplet loss given input tensors a, p, n and a margin with a value greater than 0. |
TripletMarginWithDistanceLoss
| Creates a criterion that measures the triplet loss given input tensors a, p, and n (representing anchor, positive, and negative examples, respectively), and a nonnegative, real-valued function ("distance function") used to compute the relationship between the anchor and positive example ("positive distance") and the anchor and negative example ("negative distance"). |
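Loss modules are used like any other module: build the criterion, call it on predictions and targets, and backpropagate. A minimal sketch with CrossEntropyLoss (which expects raw logits and class indices); the random tensors stand in for a model's output and labels.

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()                       # expects logits, not probabilities

logits = torch.randn(8, 5, requires_grad=True)          # (batch, num_classes)
target = torch.randint(0, 5, (8,))                      # class indices, shape (batch,)

loss = criterion(logits, target)
loss.backward()                                          # gradients flow back into `logits`
print(loss.item())
```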
Vision Layers
PixelShuffle
| Rearranges elements in a tensor of shape (*, C x r^2, H, W) to a tensor of shape (*, C, H x r, W x r), where r is an upscale factor. |
PixelUnshuffle
| Reverses the PixelShuffle operation by rearranging elements in a tensor of shape (*, C, H x r, W x r) to a tensor of shape (*, C x r^2, H, W), where r is a downscale factor. |
Upsample
| Upsamples a given multi-channel 1D (temporal), 2D (spatial) or 3D (volumetric) data. |
UpsamplingNearest2d
| Applies a 2D nearest neighbor upsampling to an input signal composed of several input channels. |
UpsamplingBilinear2d
| Applies a 2D bilinear upsampling to an input signal composed of several input channels. |
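A short sketch of the two upsampling styles: PixelShuffle trades channels for spatial resolution, Upsample interpolates; sizes are illustration values.

```python
import torch
import torch.nn as nn

ps = nn.PixelShuffle(upscale_factor=2)
x = torch.randn(1, 12, 8, 8)                         # 12 = 3 * 2^2 channels
print(ps(x).shape)                                    # torch.Size([1, 3, 16, 16])

up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
print(up(torch.randn(1, 3, 8, 8)).shape)              # torch.Size([1, 3, 16, 16])
```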
Shuffle Layers
ChannelShuffle
| Divides the channels in a tensor of shape (*, C, H, W) into g groups and rearranges them, while keeping the original tensor shape. |
DataParallel Layers (multi-GPU, distributed)
DataParallel
| Implements data parallelism at the module level. |
DistributedDataParallel
| Implements distributed data parallelism that is based on the torch.distributed package at the module level. |
Utilities
From the torch.nn.utils module:
clip_grad_norm_
| Clips gradient norm of an iterable of parameters. |
clip_grad_value_
| Clips gradient of an iterable of parameters at specified value. |
parameters_to_vector
| Convert parameters to one vector. |
vector_to_parameters
| Convert one vector to the parameters. |
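Gradient clipping sits between backward() and the optimizer step; a minimal sketch with clip_grad_norm_ (the model, data and learning rate are illustration values):

```python
import torch
import torch.nn as nn
from torch.nn.utils import clip_grad_norm_

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = nn.MSELoss()(model(x), y)
loss.backward()

# Rescale gradients in place so their combined norm is at most max_norm,
# then step on the clipped gradients.
total_norm = clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
optimizer.zero_grad()
```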
From the torch.nn.utils.prune module:
prune.BasePruningMethod
| Abstract base class for creation of new pruning techniques. |
prune.PruningContainer
| Container holding a sequence of pruning methods for iterative pruning. |
prune.Identity
| Utility pruning method that does not prune any units but generates the pruning parametrization with a mask of ones. |
prune.RandomUnstructured
| Prune (currently unpruned) units in a tensor at random. |
prune.L1Unstructured
| Prune (currently unpruned) units in a tensor by zeroing out the ones with the lowest L1-norm. |
prune.RandomStructured
| Prune entire (currently unpruned) channels in a tensor at random. |
prune.LnStructured
| Prune entire (currently unpruned) channels in a tensor based on their Ln-norm. |
prune.identity
| Applies pruning reparametrization to the tensor corresponding to the parameter called name in module without actually pruning any units. |
prune.random_unstructured
| Prunes the tensor corresponding to the parameter called name in module by removing the specified amount of (currently unpruned) units selected at random. |
prune.l1_unstructured
| Prunes the tensor corresponding to the parameter called name in module by removing the specified amount of (currently unpruned) units with the lowest L1-norm. |
prune.random_structured
| Prunes the tensor corresponding to the parameter called name in module by removing the specified amount of (currently unpruned) channels along the specified dim, selected at random. |
prune.ln_structured
| Prunes the tensor corresponding to the parameter called name in module by removing the specified amount of (currently unpruned) channels along the specified dim with the lowest Ln-norm. |
prune.global_unstructured
| Globally prunes tensors corresponding to all parameters in parameters by applying the specified pruning_method. |
prune.custom_from_mask
| Prunes the tensor corresponding to the parameter called name in module by applying the pre-computed mask in mask. |
prune.remove
| Removes the pruning reparameterization from a module and the pruning method from the forward hook. |
prune.is_pruned
| Check whether module is pruned by looking for pruning pre-hooks in its modules. |
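A minimal pruning sketch, assuming the usual torch.nn.utils.prune workflow: prune, inspect the induced sparsity, then make it permanent with prune.remove.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(10, 10)

# Zero out the 30% of weights with the smallest L1 magnitude. This adds
# `weight_orig` and `weight_mask` and recomputes `weight` via a forward pre-hook.
prune.l1_unstructured(layer, name="weight", amount=0.3)
print(float((layer.weight == 0).float().mean()))   # ~0.3

prune.remove(layer, "weight")   # fold the mask in and drop the reparametrization
```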
weight_norm
| Applies weight normalization to a parameter in the given module. |
remove_weight_norm
| Removes the weight normalization reparameterization from a module. |
spectral_norm
| Applies spectral normalization to a parameter in the given module. |
remove_spectral_norm
| Removes the spectral normalization reparameterization from a module. |
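A short weight_norm sketch: the parameter is reparameterized into a magnitude (weight_g) and a direction (weight_v), which can later be folded back; the layer sizes are illustration values.

```python
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm, remove_weight_norm

layer = weight_norm(nn.Linear(20, 40), name="weight")
print(layer.weight_g.shape, layer.weight_v.shape)   # torch.Size([40, 1]) torch.Size([40, 20])

remove_weight_norm(layer)   # fold the two factors back into a plain `weight`
```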
Utility functions in other modules
nn.utils.rnn.PackedSequence
| Holds the data and list of batch_sizes of a packed sequence. |
nn.utils.rnn.pack_padded_sequence
| Packs a Tensor containing padded sequences of variable length. |
nn.utils.rnn.pad_packed_sequence
| Pads a packed batch of variable length sequences. |
nn.utils.rnn.pad_sequence
| Pad a list of variable length Tensors with padding_value. |
nn.utils.rnn.pack_sequence
| Packs a list of variable length Tensors. |
nn.Flatten
| Flattens a contiguous range of dims into a tensor. |
nn.Unflatten
| Unflattens a tensor dim expanding it to a desired shape. |
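A round-trip sketch for the variable-length sequence utilities: pad a ragged batch, pack it so the RNN skips the padding, then unpack. The sequences are already sorted by decreasing length, which the default enforce_sorted=True expects; the sizes are illustration values.

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence, pad_packed_sequence

seqs = [torch.randn(5, 8), torch.randn(3, 8), torch.randn(2, 8)]   # variable lengths
lengths = torch.tensor([5, 3, 2])

padded = pad_sequence(seqs)                       # (max_len, batch, features) = (5, 3, 8)
packed = pack_padded_sequence(padded, lengths)    # RNNs skip the padded positions

rnn = nn.GRU(input_size=8, hidden_size=16)
packed_out, h_n = rnn(packed)
out, out_lengths = pad_packed_sequence(packed_out)   # back to a padded tensor
print(out.shape, out_lengths)                        # torch.Size([5, 3, 16]) tensor([5, 3, 2])
```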
Quantized Functions
Quantization refers to techniques for performing computations and storing tensors at lower bitwidths than floating point precision. PyTorch supports both per-tensor and per-channel asymmetric linear quantization. To learn more about how to use quantized functions in PyTorch, please refer to the Quantization documentation.
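As a minimal entry point, a dynamic-quantization sketch using torch.quantization.quantize_dynamic: weights of the listed module types are stored as int8 and activations are quantized on the fly. Static quantization and quantization-aware training follow the workflow described in the Quantization docs; the model below is an illustration value.

```python
import torch
import torch.nn as nn

model_fp32 = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
model_int8 = torch.quantization.quantize_dynamic(
    model_fp32, {nn.Linear}, dtype=torch.qint8
)
print(model_int8(torch.randn(1, 128)).shape)   # torch.Size([1, 10])
```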
Lazy Modules Initialization
LazyModuleMixin
| A mixin for modules that lazily initialize parameters, also known as “lazy modules.” |
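A short sketch of the lazy-module lifecycle using LazyLinear: before the first forward pass the weight is an UninitializedParameter, and the first input materializes it with the inferred in_features.

```python
import torch
import torch.nn as nn

lazy = nn.LazyLinear(out_features=4)
print(isinstance(lazy.weight, nn.parameter.UninitializedParameter))  # True — shape unknown

lazy(torch.randn(2, 7))        # first forward infers in_features=7 and creates the weights
print(lazy.weight.shape)       # torch.Size([4, 7])
```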
© 2019 Torch Contributors
Licensed under the 3-clause BSD License.
https://pytorch.org/docs/1.8.0/nn.html