"Fossies" - the Fresh Open Source Software Archive

Member "pytorch-1.8.2/docs/source/nn.rst" (23 Jul 2021, 7598 Bytes) of package /linux/misc/pytorch-1.8.2.tar.gz:


As a special service "Fossies" has tried to format the requested source page into HTML format (assuming markdown format). Alternatively you can here view or download the uninterpreted source code file. A member file download can also be achieved by clicking within a package contents listing on the according byte size field. See also the last Fossies "Diffs" side-by-side code changes report for "nn.rst": 1.11.0_vs_1.12.0.

torch.nn

These are the basic building blocks for graphs:

Parameters

nn.parameter.Parameter nn.parameter.UninitializedParameter
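
A minimal sketch (class and attribute names are illustrative): assigning an nn.Parameter to a Module attribute registers it as a learnable tensor, so it shows up in .parameters() and receives gradients:

    import torch
    import torch.nn as nn

    class ScaleShift(nn.Module):
        def __init__(self, num_features):
            super().__init__()
            self.weight = nn.Parameter(torch.ones(num_features))   # learnable scale
            self.bias = nn.Parameter(torch.zeros(num_features))    # learnable shift

        def forward(self, x):
            return x * self.weight + self.bias

    m = ScaleShift(4)
    print([name for name, _ in m.named_parameters()])  # ['weight', 'bias']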

Containers

Module Sequential ModuleList ModuleDict ParameterList ParameterDict
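
An illustrative sketch of the two most common containers, the same small network written as an nn.Module subclass and as an nn.Sequential (layer sizes are arbitrary):

    import torch
    import torch.nn as nn

    class TinyNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.fc1 = nn.Linear(10, 20)
            self.fc2 = nn.Linear(20, 5)

        def forward(self, x):
            return self.fc2(torch.relu(self.fc1(x)))

    sequential_net = nn.Sequential(
        nn.Linear(10, 20),
        nn.ReLU(),
        nn.Linear(20, 5),
    )

    x = torch.randn(8, 10)           # batch of 8 samples
    print(TinyNet()(x).shape)        # torch.Size([8, 5])
    print(sequential_net(x).shape)   # torch.Size([8, 5])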

Global Hooks For Module

From the torch.nn.modules.module module

register_module_forward_pre_hook register_module_forward_hook register_module_backward_hook
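
A sketch of a global forward hook, which fires after the forward() of every Module in the process; the hook name and the logging it does here are illustrative:

    import torch
    import torch.nn as nn

    def log_shapes(module, inputs, output):
        # Called for every module's forward pass while the hook is registered.
        if isinstance(output, torch.Tensor):
            print(type(module).__name__, tuple(output.shape))

    handle = nn.modules.module.register_module_forward_hook(log_shapes)

    net = nn.Sequential(nn.Linear(10, 20), nn.ReLU())
    net(torch.randn(2, 10))

    handle.remove()  # global hooks should be removed when no longer needed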

Convolution Layers

nn.Conv1d nn.Conv2d nn.Conv3d nn.ConvTranspose1d nn.ConvTranspose2d nn.ConvTranspose3d nn.LazyConv1d nn.LazyConv2d nn.LazyConv3d nn.LazyConvTranspose1d nn.LazyConvTranspose2d nn.LazyConvTranspose3d nn.Unfold nn.Fold
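
A short usage sketch (channel counts and spatial sizes are illustrative): with a 3x3 kernel and padding=1, nn.Conv2d preserves the spatial size of an NCHW input:

    import torch
    import torch.nn as nn

    conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
    x = torch.randn(8, 3, 32, 32)    # (batch, channels, height, width)
    print(conv(x).shape)             # torch.Size([8, 16, 32, 32])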

Pooling layers

nn.MaxPool1d nn.MaxPool2d nn.MaxPool3d nn.MaxUnpool1d nn.MaxUnpool2d nn.MaxUnpool3d nn.AvgPool1d nn.AvgPool2d nn.AvgPool3d nn.FractionalMaxPool2d nn.LPPool1d nn.LPPool2d nn.AdaptiveMaxPool1d nn.AdaptiveMaxPool2d nn.AdaptiveMaxPool3d nn.AdaptiveAvgPool1d nn.AdaptiveAvgPool2d nn.AdaptiveAvgPool3d
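
A brief sketch contrasting fixed-window and adaptive pooling (shapes are illustrative):

    import torch
    import torch.nn as nn

    x = torch.randn(8, 16, 32, 32)
    # Max pooling with a 2x2 window halves the spatial size.
    print(nn.MaxPool2d(kernel_size=2)(x).shape)           # torch.Size([8, 16, 16, 16])
    # Adaptive average pooling produces a fixed output size for any input resolution.
    print(nn.AdaptiveAvgPool2d(output_size=1)(x).shape)   # torch.Size([8, 16, 1, 1])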

Padding Layers

nn.ReflectionPad1d nn.ReflectionPad2d nn.ReplicationPad1d nn.ReplicationPad2d nn.ReplicationPad3d nn.ZeroPad2d nn.ConstantPad1d nn.ConstantPad2d nn.ConstantPad3d

Non-linear Activations (weighted sum, nonlinearity)

nn.ELU nn.Hardshrink nn.Hardsigmoid nn.Hardtanh nn.Hardswish nn.LeakyReLU nn.LogSigmoid nn.MultiheadAttention nn.PReLU nn.ReLU nn.ReLU6 nn.RReLU nn.SELU nn.CELU nn.GELU nn.Sigmoid nn.SiLU nn.Softplus nn.Softshrink nn.Softsign nn.Tanh nn.Tanhshrink nn.Threshold

Non-linear Activations (other)

nn.Softmin nn.Softmax nn.Softmax2d nn.LogSoftmax nn.AdaptiveLogSoftmaxWithLoss

Normalization Layers

nn.BatchNorm1d nn.BatchNorm2d nn.BatchNorm3d nn.GroupNorm nn.SyncBatchNorm nn.InstanceNorm1d nn.InstanceNorm2d nn.InstanceNorm3d nn.LayerNorm nn.LocalResponseNorm
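
A small sketch (shapes are illustrative): nn.BatchNorm2d normalizes each channel over the batch and spatial dimensions, so num_features must match the channel dimension of the input:

    import torch
    import torch.nn as nn

    bn = nn.BatchNorm2d(num_features=16)
    x = torch.randn(8, 16, 32, 32)
    print(bn(x).shape)   # torch.Size([8, 16, 32, 32])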

Recurrent Layers

nn.RNNBase nn.RNN nn.LSTM nn.GRU nn.RNNCell nn.LSTMCell nn.GRUCell
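
A usage sketch of nn.LSTM with batch_first inputs (sizes are illustrative):

    import torch
    import torch.nn as nn

    lstm = nn.LSTM(input_size=10, hidden_size=20, batch_first=True)
    x = torch.randn(8, 15, 10)       # (batch, seq_len, input_size)
    output, (h_n, c_n) = lstm(x)
    print(output.shape, h_n.shape)   # torch.Size([8, 15, 20]) torch.Size([1, 8, 20])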

Transformer Layers

nn.Transformer nn.TransformerEncoder nn.TransformerDecoder nn.TransformerEncoderLayer nn.TransformerDecoderLayer
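
A sketch of stacking encoder layers with nn.TransformerEncoder; in this release the expected input layout is (seq_len, batch, d_model), and the sizes below are illustrative:

    import torch
    import torch.nn as nn

    encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
    encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)
    src = torch.randn(10, 32, 512)   # (seq_len, batch, d_model)
    print(encoder(src).shape)        # torch.Size([10, 32, 512])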

Linear Layers

nn.Identity nn.Linear nn.Bilinear nn.LazyLinear

Dropout Layers

nn.Dropout nn.Dropout2d nn.Dropout3d nn.AlphaDropout

Sparse Layers

nn.Embedding nn.EmbeddingBag
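
A short sketch of nn.Embedding mapping integer token ids to dense vectors (vocabulary size and dimensions are illustrative):

    import torch
    import torch.nn as nn

    embedding = nn.Embedding(num_embeddings=1000, embedding_dim=64)
    token_ids = torch.tensor([[1, 5, 7], [2, 0, 9]])   # (batch, seq_len)
    print(embedding(token_ids).shape)                  # torch.Size([2, 3, 64])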

Distance Functions

nn.CosineSimilarity nn.PairwiseDistance

Loss Functions

nn.L1Loss nn.MSELoss nn.CrossEntropyLoss nn.CTCLoss nn.NLLLoss nn.PoissonNLLLoss nn.GaussianNLLLoss nn.KLDivLoss nn.BCELoss nn.BCEWithLogitsLoss nn.MarginRankingLoss nn.HingeEmbeddingLoss nn.MultiLabelMarginLoss nn.SmoothL1Loss nn.SoftMarginLoss nn.MultiLabelSoftMarginLoss nn.CosineEmbeddingLoss nn.MultiMarginLoss nn.TripletMarginLoss nn.TripletMarginWithDistanceLoss
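
A usage sketch of nn.CrossEntropyLoss (batch size and class count are illustrative); it expects raw logits of shape (N, C) and integer class targets of shape (N,), and applies log-softmax internally:

    import torch
    import torch.nn as nn

    criterion = nn.CrossEntropyLoss()
    logits = torch.randn(8, 5, requires_grad=True)   # 8 samples, 5 classes
    targets = torch.randint(0, 5, (8,))              # class indices
    loss = criterion(logits, targets)
    loss.backward()
    print(loss.item())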

Vision Layers

nn.PixelShuffle nn.PixelUnshuffle nn.Upsample nn.UpsamplingNearest2d nn.UpsamplingBilinear2d

Shuffle Layers

nn.ChannelShuffle

DataParallel Layers (multi-GPU, distributed)

nn.DataParallel nn.parallel.DistributedDataParallel
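
A minimal sketch, assuming at least one CUDA device is available: nn.DataParallel splits the input batch across the visible GPUs and gathers the outputs on the default device. For multi-process or multi-machine training, nn.parallel.DistributedDataParallel is generally preferred:

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(10, 10), nn.ReLU()).cuda()  # requires CUDA
    model = nn.DataParallel(model)
    out = model(torch.randn(32, 10).cuda())
    print(out.shape)   # torch.Size([32, 10])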

Utilities

From the torch.nn.utils module

clip_grad_norm_ clip_grad_value_ parameters_to_vector vector_to_parameters
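
A sketch of gradient clipping with clip_grad_norm_, applied after backward() and before the optimizer step (model and threshold are illustrative):

    import torch
    import torch.nn as nn
    from torch.nn.utils import clip_grad_norm_

    model = nn.Linear(10, 1)
    loss = model(torch.randn(4, 10)).sum()
    loss.backward()
    # Rescale gradients so their total norm does not exceed max_norm.
    total_norm = clip_grad_norm_(model.parameters(), max_norm=1.0)
    print(float(total_norm))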

prune.BasePruningMethod prune.PruningContainer prune.Identity prune.RandomUnstructured prune.L1Unstructured prune.RandomStructured prune.LnStructured prune.CustomFromMask prune.identity prune.random_unstructured prune.l1_unstructured prune.random_structured prune.ln_structured prune.global_unstructured prune.custom_from_mask prune.remove prune.is_pruned weight_norm remove_weight_norm spectral_norm remove_spectral_norm
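
A sketch of L1 unstructured pruning on a single layer (layer size and pruning amount are illustrative):

    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    layer = nn.Linear(10, 10)
    # Zero out the 30% smallest-magnitude entries of 'weight'.
    prune.l1_unstructured(layer, name="weight", amount=0.3)
    print(hasattr(layer, "weight_mask"))   # True: mask + weight_orig are stored
    prune.remove(layer, "weight")          # bake the mask into 'weight' permanently
    print(float((layer.weight == 0).float().mean()))   # roughly 0.3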

Utility functions in other modules

nn.utils.rnn.PackedSequence nn.utils.rnn.pack_padded_sequence nn.utils.rnn.pad_packed_sequence nn.utils.rnn.pad_sequence nn.utils.rnn.pack_sequence
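
A sketch of the padding/packing helpers for feeding variable-length sequences to an RNN (sequence lengths and sizes are illustrative):

    import torch
    import torch.nn as nn
    from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence, pad_packed_sequence

    seqs = [torch.randn(5, 10), torch.randn(3, 10), torch.randn(2, 10)]
    lengths = torch.tensor([5, 3, 2])
    padded = pad_sequence(seqs, batch_first=True)                 # (3, 5, 10)
    packed = pack_padded_sequence(padded, lengths, batch_first=True)

    lstm = nn.LSTM(input_size=10, hidden_size=20, batch_first=True)
    packed_out, _ = lstm(packed)
    out, out_lengths = pad_packed_sequence(packed_out, batch_first=True)
    print(out.shape)   # torch.Size([3, 5, 20])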

nn.Flatten nn.Unflatten

Quantized Functions

Quantization refers to techniques for performing computations and storing tensors at lower bitwidths than floating point precision. PyTorch supports both per-tensor and per-channel asymmetric linear quantization. To learn more about how to use quantized functions in PyTorch, please refer to the Quantization documentation.
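
A brief sketch of dynamic quantization, assuming a CPU build with a quantization backend available; the model itself is illustrative:

    import torch
    import torch.nn as nn

    float_model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))
    # Weights of the Linear layers are stored as int8; activations are quantized
    # on the fly at inference time.
    quantized_model = torch.quantization.quantize_dynamic(
        float_model, {nn.Linear}, dtype=torch.qint8
    )
    print(quantized_model(torch.randn(1, 64)).shape)   # torch.Size([1, 10])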

Lazy Modules Initialization

nn.modules.lazy.LazyModuleMixin
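
A sketch of deferred initialization with nn.LazyLinear, which is built on LazyModuleMixin; its in_features is inferred on the first forward call (sizes are illustrative):

    import torch
    import torch.nn as nn

    layer = nn.LazyLinear(out_features=8)
    print(layer.weight)                    # UninitializedParameter before first use
    out = layer(torch.randn(4, 20))        # in_features inferred as 20
    print(layer.weight.shape, out.shape)   # torch.Size([8, 20]) torch.Size([4, 8])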