"Fossies" - the Fresh Open Source Software Archive

Member "pytorch-1.8.2/docs/source/tensors.rst" (23 Jul 2021, 19872 Bytes) of package /linux/misc/pytorch-1.8.2.tar.gz:


As a special service "Fossies" has tried to format the requested source page into HTML format (assuming markdown format). Alternatively you can here view or download the uninterpreted source code file. A member file download can also be achieved by clicking within a package contents listing on the according byte size field. See also the last Fossies "Diffs" side-by-side code changes report for "tensors.rst": 1.11.0_vs_1.12.0.


torch.Tensor

A torch.Tensor is a multi-dimensional matrix containing elements of a single data type.

Torch defines 10 tensor types with CPU and GPU variants, which are as follows:

| Data type | dtype | CPU tensor | GPU tensor |
| --- | --- | --- | --- |
| 32-bit floating point | torch.float32 or torch.float | torch.FloatTensor | torch.cuda.FloatTensor |
| 64-bit floating point | torch.float64 or torch.double | torch.DoubleTensor | torch.cuda.DoubleTensor |
| 16-bit floating point [1] | torch.float16 or torch.half | torch.HalfTensor | torch.cuda.HalfTensor |
| 16-bit floating point [2] | torch.bfloat16 | torch.BFloat16Tensor | torch.cuda.BFloat16Tensor |
| 32-bit complex | torch.complex32 | | |
| 64-bit complex | torch.complex64 | | |
| 128-bit complex | torch.complex128 or torch.cdouble | | |
| 8-bit integer (unsigned) | torch.uint8 | torch.ByteTensor | torch.cuda.ByteTensor |
| 8-bit integer (signed) | torch.int8 | torch.CharTensor | torch.cuda.CharTensor |
| 16-bit integer (signed) | torch.int16 or torch.short | torch.ShortTensor | torch.cuda.ShortTensor |
| 32-bit integer (signed) | torch.int32 or torch.int | torch.IntTensor | torch.cuda.IntTensor |
| 64-bit integer (signed) | torch.int64 or torch.long | torch.LongTensor | torch.cuda.LongTensor |
| Boolean | torch.bool | torch.BoolTensor | torch.cuda.BoolTensor |

torch.Tensor is an alias for the default tensor type (torch.FloatTensor).
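
For example, the default dtype determines how Python floating-point values are inferred (a minimal illustration, assuming the library default of torch.float32 is still in effect):

>>> torch.tensor([1.2, 3]).dtype
torch.float32
>>> torch.set_default_dtype(torch.float64)
>>> torch.tensor([1.2, 3]).dtype
torch.float64
>>> torch.set_default_dtype(torch.float32)  # restore the default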

A tensor can be constructed from a Python list or sequence using the torch.tensor constructor:

>>> import torch
>>> import numpy as np
>>> torch.tensor([[1., -1.], [1., -1.]])
tensor([[ 1.0000, -1.0000],
        [ 1.0000, -1.0000]])
>>> torch.tensor(np.array([[1, 2, 3], [4, 5, 6]]))
tensor([[ 1,  2,  3],
        [ 4,  5,  6]])

Warning

torch.tensor() always copies data. If you already have a Tensor and just want to change its requires_grad flag, use torch.Tensor.requires_grad_() or torch.Tensor.detach() to avoid a copy. If you have a numpy array and want to avoid a copy, use torch.as_tensor().
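
A short illustration of the difference (assumes numpy is installed; torch.as_tensor shares memory with a CPU ndarray of a compatible dtype, while torch.tensor copies):

>>> a = np.array([1, 2, 3])
>>> t = torch.as_tensor(a)   # shares memory with a
>>> t[0] = -1
>>> a
array([-1,  2,  3])
>>> t2 = torch.tensor(a)     # always copies
>>> t2[0] = 99
>>> a                        # unchanged
array([-1,  2,  3])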

A tensor of a specific data type can be constructed by passing a torch.dtype and/or a torch.device to a constructor or tensor creation op:

>>> torch.zeros([2, 4], dtype=torch.int32)
tensor([[ 0,  0,  0,  0],
        [ 0,  0,  0,  0]], dtype=torch.int32)
>>> cuda0 = torch.device('cuda:0')
>>> torch.ones([2, 4], dtype=torch.float64, device=cuda0)
tensor([[ 1.0000,  1.0000,  1.0000,  1.0000],
        [ 1.0000,  1.0000,  1.0000,  1.0000]], dtype=torch.float64, device='cuda:0')

The contents of a tensor can be accessed and modified using Python's indexing and slicing notation:

>>> x = torch.tensor([[1, 2, 3], [4, 5, 6]])
>>> print(x[1][2])
tensor(6)
>>> x[0][1] = 8
>>> print(x)
tensor([[ 1,  8,  3],
        [ 4,  5,  6]])

Use torch.Tensor.item to get a Python number from a tensor containing a single value:

>>> x = torch.tensor([[1]])
>>> x
tensor([[ 1]])
>>> x.item()
1
>>> x = torch.tensor(2.5)
>>> x
tensor(2.5000)
>>> x.item()
2.5

A tensor can be created with requires_grad=True so that torch.autograd records operations on it for automatic differentiation.

>>> x = torch.tensor([[1., -1.], [1., 1.]], requires_grad=True)
>>> out = x.pow(2).sum()
>>> out.backward()
>>> x.grad
tensor([[ 2.0000, -2.0000],
        [ 2.0000,  2.0000]])

Each tensor has an associated torch.Storage, which holds its data. The tensor class also provides a multi-dimensional, strided view of a storage and defines numeric operations on it.
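
For example, slicing produces a view that reuses the original tensor's storage with a different offset and strides (a minimal illustration):

>>> x = torch.arange(6).reshape(2, 3)
>>> x.stride()                # elements to skip to advance each dimension
(3, 1)
>>> y = x[:, 1]               # a strided view into the same storage
>>> y.storage_offset(), y.stride()
(1, (3,))
>>> y.storage().data_ptr() == x.storage().data_ptr()
True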

Note

For more information on tensor views, see the tensor views documentation.

Note

For more information on the torch.dtype, torch.device, and torch.layout attributes of a torch.Tensor, see the tensor attributes documentation.

Note

Methods which mutate a tensor are marked with an underscore suffix. For example, torch.FloatTensor.abs_ computes the absolute value in-place and returns the modified tensor, while torch.FloatTensor.abs computes the result in a new tensor.
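
For example, abs returns a new tensor while abs_ modifies its input:

>>> x = torch.tensor([-1.5, 2.0])
>>> x.abs()     # returns a new tensor; x is unchanged
tensor([1.5000, 2.0000])
>>> x.abs_()    # mutates x in place and returns it
tensor([1.5000, 2.0000])
>>> x
tensor([1.5000, 2.0000])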

Note

To change an existing tensor's torch.device and/or torch.dtype, consider using the torch.Tensor.to() method on the tensor.
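
For example (the last line assumes a CUDA device is available):

>>> x = torch.ones(2, dtype=torch.int32)
>>> x.to(torch.float64)       # change dtype
tensor([1., 1.], dtype=torch.float64)
>>> x.to('cuda:0')            # change device
tensor([1, 1], device='cuda:0', dtype=torch.int32)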

Warning

The current implementation of torch.Tensor introduces per-tensor memory overhead, which can lead to unexpectedly high memory usage in applications with many tiny tensors. If this is your case, consider packing your data into one large structure.
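
A minimal sketch of the idea: a million float32 values stored as one tensor cost about 4 MB of payload plus a single fixed overhead, while a million scalar tensors pay that overhead a million times (sizes here are illustrative):

>>> big = torch.zeros(1_000_000)                          # one object, one overhead
>>> big.nelement() * big.element_size()                   # payload bytes only
4000000
>>> tiny = [torch.tensor(0.0) for _ in range(1_000_000)]  # per-tensor overhead dominates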

There are a few main ways to create a tensor, depending on your use case:

- To create a tensor with pre-existing data, use torch.tensor().
- To create a tensor with a specific size, use the torch.* creation ops (e.g. torch.zeros, torch.ones).
- To create a tensor with the same size (and similar type) as another tensor, use the torch.*_like creation ops.
- To create a tensor with a similar type but a different size, use the tensor.new_* creation ops listed below (a short example follows the method list).

Tensor creation methods (the tensor.new_* ops):

new_tensor, new_full, new_empty, new_ones, new_zeros

Attributes:

is_cuda, is_quantized, is_meta, device, grad, ndim, T, real, imag

Methods (a trailing underscore marks an in-place variant, as described in the note above):

abs, abs_, absolute, absolute_, acos, acos_, arccos, arccos_, add, add_, addbmm, addbmm_, addcdiv, addcdiv_, addcmul, addcmul_, addmm, addmm_, sspaddmm, addmv, addmv_, addr, addr_, allclose, amax, amin, angle, apply_, argmax, argmin, argsort, asin, asin_, arcsin, arcsin_, as_strided, atan, atan_, arctan, arctan_, atan2, atan2_, all, any, backward, baddbmm, baddbmm_, bernoulli, bernoulli_, bfloat16, bincount, bitwise_not, bitwise_not_, bitwise_and, bitwise_and_, bitwise_or, bitwise_or_, bitwise_xor, bitwise_xor_, bmm, bool, byte, broadcast_to, cauchy_, ceil, ceil_, char, cholesky, cholesky_inverse, cholesky_solve, chunk, clamp, clamp_, clip, clip_, clone, contiguous, copy_, conj, copysign, copysign_, cos, cos_, cosh, cosh_, count_nonzero, acosh, acosh_, arccosh, arccosh_, cpu, cross, cuda, logcumsumexp, cummax, cummin, cumprod, cumprod_, cumsum, cumsum_, data_ptr, deg2rad, dequantize, det, dense_dim, detach, detach_, diag, diag_embed, diagflat, diagonal, fill_diagonal_, fmax, fmin, diff, digamma, digamma_, dim, dist, div, div_, divide, divide_, dot, double, eig, element_size, eq, eq_, equal, erf, erf_, erfc, erfc_, erfinv, erfinv_, exp, exp_, expm1, expm1_, expand, expand_as, exponential_, fix, fix_, fill_, flatten, flip, fliplr, flipud, float, float_power, float_power_, floor, floor_, floor_divide, floor_divide_, fmod, fmod_, frac, frac_, gather, gcd, gcd_, ge, ge_, greater_equal, greater_equal_, geometric_, geqrf, ger, get_device, gt, gt_, greater, greater_, half, hardshrink, heaviside, histc, hypot, hypot_, i0, i0_, igamma, igamma_, igammac, igammac_, index_add_, index_add, index_copy_, index_copy, index_fill_, index_fill, index_put_, index_put, index_select, indices, inner, int, int_repr, inverse, isclose, isfinite, isinf, isposinf, isneginf, isnan, is_contiguous, is_complex, is_floating_point, is_leaf, is_pinned, is_set_to, is_shared, is_signed, is_sparse, istft, isreal, item, kthvalue, lcm, lcm_, ldexp, ldexp_, le, le_, less_equal, less_equal_, lerp, lerp_, lgamma, lgamma_, log, log_, logdet, log10, log10_, log1p, log1p_, log2, log2_, log_normal_, logaddexp, logaddexp2, logsumexp, logical_and, logical_and_, logical_not, logical_not_, logical_or, logical_or_, logical_xor, logical_xor_, logit, logit_, long, lstsq, lt, lt_, less, less_, lu, lu_solve, as_subclass, map_, masked_scatter_, masked_scatter, masked_fill_, masked_fill, masked_select, matmul, matrix_power, matrix_exp, max, maximum, mean, median, nanmedian, min, minimum, mm, smm, mode, movedim, moveaxis, msort, mul, mul_, multiply, multiply_, multinomial, mv, mvlgamma, mvlgamma_, nansum, narrow, narrow_copy, ndimension, nan_to_num, nan_to_num_, ne, ne_, not_equal, not_equal_, neg, neg_, negative, negative_, nelement, nextafter, nextafter_, nonzero, norm, normal_, numel, numpy, orgqr, ormqr, outer, permute, pin_memory, pinverse, polygamma, polygamma_, pow, pow_, prod, put_, qr, qscheme, quantile, nanquantile, q_scale, q_zero_point, q_per_channel_scales, q_per_channel_zero_points, q_per_channel_axis, rad2deg, random_, ravel, reciprocal, reciprocal_, record_stream, register_hook, remainder, remainder_, renorm, renorm_, repeat, repeat_interleave, requires_grad, requires_grad_, reshape, reshape_as, resize_, resize_as_, retain_grad, roll, rot90, round, round_, rsqrt, rsqrt_, scatter, scatter_, scatter_add, scatter_add_, select, set_, share_memory_, short, sigmoid, sigmoid_, sign, sign_, signbit, sgn, sgn_, sin, sin_, sinc, sinc_, sinh, sinh_, asinh, asinh_, arcsinh, arcsinh_, size, slogdet, solve, sort, split, sparse_mask, sparse_dim, sqrt, sqrt_, square, square_, squeeze, squeeze_, std, stft, storage, storage_offset, storage_type, stride, sub, sub_, subtract, subtract_, sum, sum_to_size, svd, swapaxes, swapdims, symeig, t, t_, tensor_split, tile, to, to_mkldnn, take, tan, tan_, tanh, tanh_, atanh, atanh_, arctanh, arctanh_, tolist, topk, to_sparse, trace, transpose, transpose_, triangular_solve, tril, tril_, triu, triu_, true_divide, true_divide_, trunc, trunc_, type, type_as, unbind, unfold, uniform_, unique, unique_consecutive, unsqueeze, unsqueeze_, values, var, vdot, view, view_as, where, xlogy, xlogy_, zero_
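
As an example of the tensor.new_* creation ops listed above: the new tensor inherits the dtype and device of the source tensor unless they are overridden (a small sketch):

>>> base = torch.zeros(2, dtype=torch.float64)
>>> base.new_ones(2, 3)                          # inherits torch.float64
tensor([[1., 1., 1.],
        [1., 1., 1.]], dtype=torch.float64)
>>> base.new_full((2,), 7.0, dtype=torch.int32)  # dtype overridden
tensor([7, 7], dtype=torch.int32)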


  1. Sometimes referred to as binary16: uses 1 sign, 5 exponent, and 10 significand bits. Useful when precision is important at the expense of range.

  2. Sometimes referred to as Brain Floating Point: uses 1 sign, 8 exponent, and 7 significand bits. Useful when range is important, since it has the same number of exponent bits as float32.
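
The trade-off between the two 16-bit formats is easy to observe directly (illustrative values; float16's largest finite value is about 65504):

>>> torch.tensor(70000.0, dtype=torch.float16)    # out of float16's range
tensor(inf, dtype=torch.float16)
>>> torch.tensor(70000.0, dtype=torch.bfloat16)   # in range, but coarsely rounded
tensor(70144., dtype=torch.bfloat16)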