Member "pytorch-1.8.2/docs/source/name_inference.rst" (23 Jul 2021, 20728 Bytes) of package /linux/misc/pytorch-1.8.2.tar.gz:



Named Tensors operator coverage

Please read the Named Tensors documentation first for an introduction to named tensors.

This document is a reference for name inference, a process that defines how named tensors:

  1. use names to provide additional automatic runtime correctness checks
  2. propagate names from input tensors to output tensors

Below is a list of all operations that are supported with named tensors and their associated name inference rules.

If you don't see an operation listed here, but it would help your use case, please search the issue tracker to see whether an issue has already been filed and, if not, file one.

Warning

The named tensor API is experimental and subject to change.

Supported Operations
API Name inference rule
Tensor.abs, torch.abs keeps_input_names-doc
Tensor.abs_ keeps_input_names-doc
Tensor.acos, torch.acos keeps_input_names-doc
Tensor.acos_ keeps_input_names-doc
Tensor.add, torch.add unifies_names_from_inputs-doc
Tensor.add_ unifies_names_from_inputs-doc
Tensor.addmm, torch.addmm contracts_away_dims-doc
Tensor.addmm_ contracts_away_dims-doc
Tensor.addmv, torch.addmv contracts_away_dims-doc
Tensor.addmv_ contracts_away_dims-doc
Tensor.align_as See documentation
Tensor.align_to See documentation
Tensor.all, torch.all None
Tensor.any, torch.any None
Tensor.asin, torch.asin keeps_input_names-doc
Tensor.asin_ keeps_input_names-doc
Tensor.atan, torch.atan keeps_input_names-doc
Tensor.atan2, torch.atan2 unifies_names_from_inputs-doc
Tensor.atan2_ unifies_names_from_inputs-doc
Tensor.atan_ keeps_input_names-doc
Tensor.bernoulli, torch.bernoulli keeps_input_names-doc
Tensor.bernoulli_ None
Tensor.bfloat16 keeps_input_names-doc
Tensor.bitwise_not, torch.bitwise_not keeps_input_names-doc
Tensor.bitwise_not_ None
Tensor.bmm, torch.bmm contracts_away_dims-doc
Tensor.bool keeps_input_names-doc
Tensor.byte keeps_input_names-doc
torch.cat unifies_names_from_inputs-doc
Tensor.cauchy_ None
Tensor.ceil, torch.ceil keeps_input_names-doc
Tensor.ceil_ None
Tensor.char keeps_input_names-doc
Tensor.chunk, torch.chunk keeps_input_names-doc
Tensor.clamp, torch.clamp keeps_input_names-doc
Tensor.clamp_ None
Tensor.copy_ out_function_semantics-doc
Tensor.cos, torch.cos keeps_input_names-doc
Tensor.cos_ None
Tensor.cosh, torch.cosh keeps_input_names-doc
Tensor.cosh_ None
Tensor.acosh, torch.acosh keeps_input_names-doc
Tensor.acosh_ None
Tensor.cpu keeps_input_names-doc
Tensor.cuda keeps_input_names-doc
Tensor.cumprod, torch.cumprod keeps_input_names-doc
Tensor.cumsum, torch.cumsum keeps_input_names-doc
Tensor.data_ptr None
Tensor.deg2rad, torch.deg2rad keeps_input_names-doc
Tensor.deg2rad_ None
Tensor.detach, torch.detach keeps_input_names-doc
Tensor.detach_ None
Tensor.device, torch.device None
Tensor.digamma, torch.digamma keeps_input_names-doc
Tensor.digamma_ None
Tensor.dim None
Tensor.div, torch.div unifies_names_from_inputs-doc
Tensor.div_ unifies_names_from_inputs-doc
Tensor.dot, torch.dot None
Tensor.double keeps_input_names-doc
Tensor.element_size None
torch.empty factory-doc
torch.empty_like factory-doc
Tensor.eq, torch.eq unifies_names_from_inputs-doc
Tensor.erf, torch.erf keeps_input_names-doc
Tensor.erf_ None
Tensor.erfc, torch.erfc keeps_input_names-doc
Tensor.erfc_ None
Tensor.erfinv, torch.erfinv keeps_input_names-doc
Tensor.erfinv_ None
Tensor.exp, torch.exp keeps_input_names-doc
Tensor.exp_ None
Tensor.expand keeps_input_names-doc
Tensor.expm1, torch.expm1 keeps_input_names-doc
Tensor.expm1_ None
Tensor.exponential_ None
Tensor.fill_ None
Tensor.flatten, torch.flatten See documentation
Tensor.float keeps_input_names-doc
Tensor.floor, torch.floor keeps_input_names-doc
Tensor.floor_ None
Tensor.frac, torch.frac keeps_input_names-doc
Tensor.frac_ None
Tensor.ge, torch.ge unifies_names_from_inputs-doc
Tensor.get_device, torch.get_device None
Tensor.grad None
Tensor.gt, torch.gt unifies_names_from_inputs-doc
Tensor.half keeps_input_names-doc
Tensor.has_names See documentation
Tensor.index_fill, torch.index_fill keeps_input_names-doc
Tensor.index_fill_ None
Tensor.int keeps_input_names-doc
Tensor.is_contiguous None
Tensor.is_cuda None
Tensor.is_floating_point, torch.is_floating_point None
Tensor.is_leaf None
Tensor.is_pinned None
Tensor.is_shared None
Tensor.is_signed, torch.is_signed None
Tensor.is_sparse None
torch.is_tensor None
Tensor.item None
Tensor.kthvalue, torch.kthvalue removes_dimensions-doc
Tensor.le, torch.le unifies_names_from_inputs-doc
Tensor.log, torch.log keeps_input_names-doc
Tensor.log10, torch.log10 keeps_input_names-doc
Tensor.log10_ None
Tensor.log1p, torch.log1p keeps_input_names-doc
Tensor.log1p_ None
Tensor.log2, torch.log2 keeps_input_names-doc
Tensor.log2_ None
Tensor.log_ None
Tensor.log_normal_ None
Tensor.logical_not, torch.logical_not keeps_input_names-doc
Tensor.logical_not_ None
Tensor.logsumexp, torch.logsumexp removes_dimensions-doc
Tensor.long keeps_input_names-doc
Tensor.lt, torch.lt unifies_names_from_inputs-doc
torch.manual_seed None
Tensor.masked_fill, torch.masked_fill keeps_input_names-doc
Tensor.masked_fill_ None
Tensor.masked_select, torch.masked_select Aligns mask up to input and then unifies_names_from_input_tensors
Tensor.matmul, torch.matmul contracts_away_dims-doc
Tensor.mean, torch.mean removes_dimensions-doc
Tensor.median, torch.median removes_dimensions-doc
Tensor.nanmedian, torch.nanmedian removes_dimensions-doc
Tensor.mm, torch.mm contracts_away_dims-doc
Tensor.mode, torch.mode removes_dimensions-doc
Tensor.mul, torch.mul unifies_names_from_inputs-doc
Tensor.mul_ unifies_names_from_inputs-doc
Tensor.mv, torch.mv contracts_away_dims-doc
Tensor.names See documentation
Tensor.narrow, torch.narrow keeps_input_names-doc
Tensor.ndim None
Tensor.ndimension None
Tensor.ne, torch.ne unifies_names_from_inputs-doc
Tensor.neg, torch.neg keeps_input_names-doc
Tensor.neg_ None
torch.normal keeps_input_names-doc
Tensor.normal_ None
Tensor.numel, torch.numel None
torch.ones factory-doc
Tensor.pow, torch.pow unifies_names_from_inputs-doc
Tensor.pow_ None
Tensor.prod, torch.prod removes_dimensions-doc
Tensor.rad2deg, torch.rad2deg keeps_input_names-doc
Tensor.rad2deg_ None
torch.rand factory-doc
torch.rand_like factory-doc
torch.randn factory-doc
torch.randn_like factory-doc
Tensor.random_ None
Tensor.reciprocal, torch.reciprocal keeps_input_names-doc
Tensor.reciprocal_ None
Tensor.refine_names See documentation
Tensor.register_hook None
Tensor.rename See documentation
Tensor.rename_ See documentation
Tensor.requires_grad None
Tensor.requires_grad_ None
Tensor.resize_ Only allow resizes that do not change shape
Tensor.resize_as_ Only allow resizes that do not change shape
Tensor.round, torch.round keeps_input_names-doc
Tensor.round_ None
Tensor.rsqrt, torch.rsqrt keeps_input_names-doc
Tensor.rsqrt_ None
Tensor.select, torch.select removes_dimensions-doc
Tensor.short keeps_input_names-doc
Tensor.sigmoid, torch.sigmoid keeps_input_names-doc
Tensor.sigmoid_ None
Tensor.sign, torch.sign keeps_input_names-doc
Tensor.sign_ None
Tensor.sgn, torch.sgn keeps_input_names-doc
Tensor.sgn_ None
Tensor.sin, torch.sin keeps_input_names-doc
Tensor.sin_ None
Tensor.sinh, torch.sinh keeps_input_names-doc
Tensor.sinh_ None
Tensor.asinh, torch.asinh keeps_input_names-doc
Tensor.asinh_ None
Tensor.size None
Tensor.split, torch.split keeps_input_names-doc
Tensor.sqrt, torch.sqrt keeps_input_names-doc
Tensor.sqrt_ None
Tensor.squeeze, torch.squeeze removes_dimensions-doc
Tensor.std, torch.std removes_dimensions-doc
torch.std_mean removes_dimensions-doc
Tensor.stride None
Tensor.sub, torch.sub unifies_names_from_inputs-doc
Tensor.sub_ unifies_names_from_inputs-doc
Tensor.sum, torch.sum removes_dimensions-doc
Tensor.tan, torch.tan keeps_input_names-doc
Tensor.tan_ None
Tensor.tanh, torch.tanh keeps_input_names-doc
Tensor.tanh_ None
Tensor.atanh, torch.atanh keeps_input_names-doc
Tensor.atanh_ None
torch.tensor factory-doc
Tensor.to keeps_input_names-doc
Tensor.topk, torch.topk removes_dimensions-doc
Tensor.transpose, torch.transpose permutes_dimensions-doc
Tensor.trunc, torch.trunc keeps_input_names-doc
Tensor.trunc_ None
Tensor.type None
Tensor.type_as keeps_input_names-doc
Tensor.unbind, torch.unbind removes_dimensions-doc
Tensor.unflatten See documentation
Tensor.uniform_ None
Tensor.var, torch.var removes_dimensions-doc
torch.var_mean removes_dimensions-doc
Tensor.zero_ None
torch.zeros factory-doc

Keeps input names

All pointwise unary functions, as well as some other unary functions, follow this rule.

>>> x = torch.randn(3, 3, names=('N', 'C'))
>>> x.abs().names
('N', 'C')

Removes dimensions

All reduction ops, like Tensor.sum, remove dimensions by reducing over the desired dimensions. Other operations, like Tensor.select and Tensor.squeeze, remove dimensions.

Wherever one can pass an integer dimension index to an operator, one can also pass a dimension name. Functions that take lists of dimension indices can also take in a list of dimension names.

>>> x = torch.randn(1, 3, 3, 3, names=('N', 'C', 'H', 'W'))
>>> x.squeeze('N').names
('C', 'H', 'W')

>>> x = torch.randn(3, 3, 3, 3, names=('N', 'C', 'H', 'W'))
>>> x.sum(['N', 'C']).names
('H', 'W')

# Reduction ops with keepdim=True don't actually remove dimensions.
>>> x = torch.randn(3, 3, 3, 3, names=('N', 'C', 'H', 'W'))
>>> x.sum(['N', 'C'], keepdim=True).names
('N', 'C', 'H', 'W')

Unifies names from inputs

All binary arithmetic ops follow this rule. Operations that broadcast still broadcast positionally from the right to preserve compatibility with unnamed tensors. To perform explicit broadcasting by names, use Tensor.align_as.

For example,

# tensor: Tensor[   N, None]
# other:  Tensor[None,    C]
>>> tensor = torch.randn(3, 3, names=('N', None))
>>> other = torch.randn(3, 3, names=(None, 'C'))
>>> (tensor + other).names
('N', 'C')

Check names: the two tensors' names are matched up positionally from the right. unify(A, B) checks that a pair of names is compatible and returns the more specific one: two equal names unify to that name, a name unifies with None to the name, and two different names raise an error.

Finally, the output names are computed with [unify('N', None), unify(None, 'C')] = ['N', 'C'].

More examples:

# Dimensions don't match from the right:
# tensor: Tensor[N, C]
# other:  Tensor[   N]
>>> tensor = torch.randn(3, 3, names=('N', 'C'))
>>> other = torch.randn(3, names=('N',))
>>> (tensor + other).names
RuntimeError: Error when attempting to broadcast dims ['N', 'C'] and dims
['N']: dim 'C' and dim 'N' are at the same position from the right but do
not match.

# Dimensions aren't aligned when matching tensor.names[-1] and other.names[-1]:
# tensor: Tensor[N, None]
# other:  Tensor[      N]
>>> tensor = torch.randn(3, 3, names=('N', None))
>>> other = torch.randn(3, names=('N',))
>>> (tensor + other).names
RuntimeError: Misaligned dims when attempting to broadcast dims ['N'] and
dims ['N', None]: dim 'N' appears in a different position from the right
across both lists.

Note

In both of the last examples, it is possible to align the tensors by names and then perform the addition. Use Tensor.align_as to align tensors by name or Tensor.align_to to align tensors to a custom dimension ordering.
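As a minimal sketch of this, the first failing example above can be made to work by aligning other to tensor before adding; align_as inserts size-one dimensions for any of tensor's names that other is missing:

```python
import torch

tensor = torch.randn(3, 3, names=('N', 'C'))
other = torch.randn(3, names=('N',))

# Align `other` to `tensor`'s names: a size-1 'C' dim is inserted,
# so `aligned` has names ('N', 'C') and shape (3, 1).
aligned = other.align_as(tensor)

# The addition now broadcasts positionally as usual and the names unify.
result = tensor + aligned
print(result.names)   # ('N', 'C')
```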

Permutes dimensions

Some operations, like Tensor.t(), permute the order of dimensions. Dimension names are attached to individual dimensions so they get permuted as well.

An operator that takes a positional dimension index dim can also take a dimension name as dim.

>>> x = torch.randn(3, 3, names=('N', 'C'))
>>> x.transpose('N', 'C').names
('C', 'N')

Contracts away dims

Matrix multiply functions follow some variant of this. Let's go through torch.mm first and then generalize the rule for batch matrix multiplication.

For torch.mm(tensor, other):

>>> x = torch.randn(3, 3, names=('N', 'D'))
>>> y = torch.randn(3, 3, names=('in', 'out'))
>>> x.mm(y).names
('N', 'out')

Inherently, a matrix multiplication performs a dot product over two dimensions, collapsing them. When two tensors are matrix-multiplied, the contracted dimensions disappear and do not show up in the output tensor.

torch.mv and torch.dot work in a similar way: name inference does not check the input names and removes the dimensions that are involved in the dot product:

>>> x = torch.randn(3, 3, names=('N', 'D'))
>>> y = torch.randn(3, names=('something',))
>>> x.mv(y).names
('N',)

Now, let's take a look at torch.matmul(tensor, other). Assume that tensor.dim() >= 2 and other.dim() >= 2.

Examples:

# Batch matrix multiply of matrices Tensor['C', 'D'] and Tensor['E', 'F'].
# 'A', 'B' are batch dimensions.
>>> x = torch.randn(3, 3, 3, 3, names=('A', 'B', 'C', 'D'))
>>> y = torch.randn(3, 3, 3, names=('B', 'E', 'F'))
>>> torch.matmul(x, y).names
('A', 'B', 'C', 'F')

Finally, there are fused add versions of many matmul functions, e.g., addmm and addmv. These are treated as composing name inference for the matmul (e.g., mm) with name inference for add.
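As a sketch of this composition (the dimension names here are illustrative), addmm(bias, m1, m2) first applies the mm rule, contracting away the inner dimensions, and then unifies the result's names with the bias names from the right:

```python
import torch

m1 = torch.randn(3, 3, names=('N', 'D'))
m2 = torch.randn(3, 3, names=('in', 'out'))
bias = torch.randn(3, names=('out',))

# mm contracts away 'D' and 'in', giving ('N', 'out');
# add then unifies ('N', 'out') with ('out',) from the right.
print(torch.addmm(bias, m1, m2).names)   # ('N', 'out')
```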

Factory functions

Factory functions now take a new names argument that associates a name with each dimension.

>>> torch.zeros(2, 3, names=('N', 'C'))
tensor([[0., 0., 0.],
        [0., 0., 0.]], names=('N', 'C'))

out function and in-place variants

A tensor specified as an out= tensor has the following behavior:

  - If it has no named dimensions, then the names computed from name inference get propagated to it.
  - If it has any named dimensions, then the names computed from name inference must exactly match those existing names; otherwise, the operation errors.

All in-place methods modify inputs to have names equal to the computed names from name inference. For example,

>>> x = torch.randn(3, 3)
>>> y = torch.randn(3, 3, names=('N', 'C'))
>>> x.names
(None, None)

>>> x += y
>>> x.names
('N', 'C')
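The out= variants behave analogously. A minimal sketch, assuming an unnamed out= tensor: the names computed by name inference are propagated to it.

```python
import torch

x = torch.randn(3, 3, names=('N', 'C'))
out = torch.empty(3, 3)     # unnamed: names are (None, None)

torch.abs(x, out=out)
print(out.names)            # ('N', 'C'): computed names propagated to `out`
```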