
Member "pytorch-1.8.2/docs/source/torch.rst" (23 Jul 2021, 9497 Bytes) of package /linux/misc/pytorch-1.8.2.tar.gz:


torch

The torch package contains data structures for multi-dimensional tensors and defines mathematical operations over these tensors. Additionally, it provides many utilities for the efficient serialization of Tensors and arbitrary types, along with other useful utilities.

It has a CUDA counterpart that enables you to run tensor computations on an NVIDIA GPU with compute capability >= 3.0.
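A minimal sketch of the CPU/GPU split described above, assuming only that PyTorch is installed; the computation runs unchanged on whichever device is available:

```python
import torch

# A small tensor computation on CPU.
x = torch.ones(3)
y = x * 2

# Run the same computation on the GPU only if CUDA is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
z = (x.to(device) * 2).sum()
print(z.item())  # 6.0 on either device
```

Because ops have the same semantics on CPU and CUDA tensors, device placement is usually the only change needed.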


Tensors

is_tensor, is_storage, is_complex, is_floating_point, is_nonzero, set_default_dtype, get_default_dtype, set_default_tensor_type, numel, set_printoptions, set_flush_denormal

Creation Ops

Note

Random sampling creation ops are listed under Random sampling and include: torch.rand, torch.rand_like, torch.randn, torch.randn_like, torch.randint, torch.randint_like, and torch.randperm. You may also use torch.empty together with the in-place random sampling methods to create torch.Tensor objects with values sampled from a broader range of distributions.
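The torch.empty pattern mentioned in the note can be sketched as follows: allocate uninitialized storage, then fill it in place from a chosen distribution (the in-place samplers end with a trailing underscore):

```python
import torch

torch.manual_seed(0)  # make the draws reproducible

# torch.empty allocates uninitialized storage; the in-place methods
# then fill it from the requested distribution.
t = torch.empty(2, 3).uniform_(-1.0, 1.0)      # Uniform(-1, 1)
u = torch.empty(2, 3).exponential_(lambd=1.0)  # Exponential(rate=1)

assert t.min() >= -1.0 and t.max() <= 1.0
assert (u >= 0).all()
```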

tensor, sparse_coo_tensor, as_tensor, as_strided, from_numpy, zeros, zeros_like, ones, ones_like, arange, range, linspace, logspace, eye, empty, empty_like, empty_strided, full, full_like, quantize_per_tensor, quantize_per_channel, dequantize, complex, polar, heaviside

Indexing, Slicing, Joining, Mutating Ops

cat, chunk, column_stack, dstack, gather, hstack, index_select, masked_select, movedim, moveaxis, narrow, nonzero, reshape, row_stack, scatter, scatter_add, split, squeeze, stack, swapaxes, swapdims, t, take, tensor_split, tile, transpose, unbind, unsqueeze, vstack, where
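A short sketch of how a few of these ops compose; torch.cat and torch.chunk are inverses here, and unsqueeze adds a size-1 dimension:

```python
import torch

a = torch.arange(6).reshape(2, 3)   # tensor([[0, 1, 2], [3, 4, 5]])

# Joining and splitting undo each other in this case.
stacked = torch.cat([a, a], dim=0)          # shape (4, 3)
top, bottom = torch.chunk(stacked, 2, dim=0)

# unsqueeze inserts a size-1 dimension; squeeze would remove it again.
col = a[:, 1].unsqueeze(1)                  # shape (2, 1)

assert torch.equal(top, a) and torch.equal(bottom, a)
assert col.shape == (2, 1)
```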

Generators

Generator

Random sampling

seed, manual_seed, initial_seed, get_rng_state, set_rng_state
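The effect of seeding the global generator can be sketched in a few lines: reseeding with the same value reproduces the same draws:

```python
import torch

# Seeding the default generator makes random draws reproducible.
torch.manual_seed(42)
a = torch.rand(3)

torch.manual_seed(42)
b = torch.rand(3)

assert torch.equal(a, b)  # identical draws after reseeding
```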

torch.default_generator

bernoulli, multinomial, normal, poisson, rand, rand_like, randint, randint_like, randn, randn_like, randperm

In-place random sampling

There are a few more in-place random sampling functions defined on Tensors as well; by convention their names end with a trailing underscore (for example, torch.Tensor.uniform_ and torch.Tensor.normal_). Refer to their documentation for details.
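A minimal sketch of the in-place samplers: they mutate an existing tensor rather than allocating a new one, and each call overwrites the previous contents:

```python
import torch

torch.manual_seed(0)

# In-place samplers mutate the tensor they are called on;
# by convention their names end with an underscore.
t = torch.zeros(4)
t.normal_(mean=10.0, std=0.1)  # fill with N(10, 0.1^2) samples
t.random_(0, 5)                # overwrite with integers in [0, 5)

assert ((t >= 0) & (t < 5)).all()
```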

Quasi-random sampling

quasirandom.SobolEngine

Serialization

save, load
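A minimal round-trip through torch.save and torch.load; the same pattern works for dicts of tensors and model state_dicts:

```python
import os
import tempfile

import torch

x = torch.arange(4.0)

# Serialize a tensor to disk and read it back.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "x.pt")
    torch.save(x, path)
    y = torch.load(path)

assert torch.equal(x, y)
```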

Parallelism

get_num_threads, set_num_threads, get_num_interop_threads, set_num_interop_threads
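A small sketch of the thread-count getters and setters; intra-op threads parallelize a single operator, while inter-op threads run independent operators concurrently:

```python
import torch

# Query the current intra-op thread count, then set it explicitly.
n = torch.get_num_threads()
torch.set_num_threads(n)  # setting it back to the same value is a no-op

assert torch.get_num_threads() == n
```

Note that set_num_interop_threads must be called before any inter-op parallel work starts, so it is typically set once at program startup.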

Locally disabling gradient computation

The context managers torch.no_grad, torch.enable_grad, and torch.set_grad_enabled are helpful for locally disabling and enabling gradient computation. See locally-disable-grad for more details on their usage. These context managers are thread local, so they won't work if you send work to another thread using the threading module.

Examples:

>>> x = torch.zeros(1, requires_grad=True)
>>> with torch.no_grad():
...     y = x * 2
>>> y.requires_grad
False

>>> is_train = False
>>> with torch.set_grad_enabled(is_train):
...     y = x * 2
>>> y.requires_grad
False

>>> torch.set_grad_enabled(True)  # this can also be used as a function
>>> y = x * 2
>>> y.requires_grad
True

>>> torch.set_grad_enabled(False)
>>> y = x * 2
>>> y.requires_grad
False

no_grad, enable_grad, set_grad_enabled

Math operations

Pointwise Ops

abs, absolute, acos, arccos, acosh, arccosh, add, addcdiv, addcmul, angle, asin, arcsin, asinh, arcsinh, atan, arctan, atanh, arctanh, atan2, bitwise_not, bitwise_and, bitwise_or, bitwise_xor, ceil, clamp, clip, conj, copysign, cos, cosh, deg2rad, div, divide, digamma, erf, erfc, erfinv, exp, exp2, expm1, fake_quantize_per_channel_affine, fake_quantize_per_tensor_affine, fix, float_power, floor, floor_divide, fmod, frac, imag, ldexp, lerp, lgamma, log, log10, log1p, log2, logaddexp, logaddexp2, logical_and, logical_not, logical_or, logical_xor, logit, hypot, i0, igamma, igammac, mul, multiply, mvlgamma, nan_to_num, neg, negative, nextafter, polygamma, pow, rad2deg, real, reciprocal, remainder, round, rsqrt, sigmoid, sign, sgn, signbit, sin, sinc, sinh, sqrt, square, sub, subtract, tan, tanh, true_divide, trunc, xlogy
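A quick sketch of the pointwise pattern: these ops apply elementwise and broadcast scalars and compatible shapes:

```python
import torch

x = torch.tensor([-2.0, -0.5, 0.5, 2.0])

# Pointwise ops apply elementwise.
y = torch.clamp(x, min=-1.0, max=1.0)  # clip each entry to [-1, 1]
z = torch.abs(x) + 1.0                 # scalar broadcasts over the tensor

assert torch.equal(y, torch.tensor([-1.0, -0.5, 0.5, 1.0]))
assert torch.equal(z, torch.tensor([3.0, 1.5, 1.5, 3.0]))
```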

Reduction Ops

argmax, argmin, amax, amin, all, any, max, min, dist, logsumexp, mean, median, nanmedian, mode, norm, nansum, prod, quantile, nanquantile, std, std_mean, sum, unique, unique_consecutive, var, var_mean, count_nonzero
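The reduction ops follow a common convention, sketched below: with no dim argument they collapse the whole tensor to a scalar, and with dim=... they reduce along that axis only:

```python
import torch

m = torch.arange(6.0).reshape(2, 3)  # [[0, 1, 2], [3, 4, 5]]

total = m.sum()            # reduce everything -> scalar tensor(15.)
col_means = m.mean(dim=0)  # reduce over rows -> tensor([1.5, 2.5, 3.5])

assert total.item() == 15.0
assert torch.equal(col_means, torch.tensor([1.5, 2.5, 3.5]))
```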

Comparison Ops

allclose, argsort, eq, equal, ge, greater_equal, gt, greater, isclose, isfinite, isinf, isposinf, isneginf, isnan, isreal, kthvalue, le, less_equal, lt, less, maximum, minimum, fmax, fmin, ne, not_equal, sort, topk, msort
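A small sketch of two common ops from this group: topk returns the k largest values together with their indices, and sort is ascending by default:

```python
import torch

x = torch.tensor([3.0, 1.0, 4.0, 1.0, 5.0])

values, indices = torch.topk(x, k=2)  # two largest entries, largest first
sorted_x, order = torch.sort(x)       # ascending by default

assert torch.equal(values, torch.tensor([5.0, 4.0]))
assert torch.equal(indices, torch.tensor([4, 2]))
assert torch.equal(sorted_x, torch.tensor([1.0, 1.0, 3.0, 4.0, 5.0]))
```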

Spectral Ops

stft, istft, bartlett_window, blackman_window, hamming_window, hann_window, kaiser_window
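A minimal sketch of the window functions in this group, which taper a signal before an FFT (for example as the window= argument to torch.stft) to reduce spectral leakage:

```python
import torch

# A periodic Hann window of length 8, suitable for use with stft.
w = torch.hann_window(8, periodic=True)

assert w.shape == (8,)
assert w[0].item() == 0.0                # Hann window starts at zero
assert torch.isclose(w[1], w[7])         # symmetric about its peak
```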

Other Operations

atleast_1d, atleast_2d, atleast_3d, bincount, block_diag, broadcast_tensors, broadcast_to, broadcast_shapes, bucketize, cartesian_prod, cdist, clone, combinations, cross, cummax, cummin, cumprod, cumsum, diag, diag_embed, diagflat, diagonal, diff, einsum, flatten, flip, fliplr, flipud, kron, rot90, gcd, histc, meshgrid, lcm, logcumsumexp, ravel, renorm, repeat_interleave, roll, searchsorted, tensordot, trace, tril, tril_indices, triu, triu_indices, vander, view_as_real, view_as_complex
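One op from this group worth sketching is einsum, which expresses contractions by naming indices; 'ij,j->i' below is a matrix-vector product:

```python
import torch

a = torch.arange(6.0).reshape(2, 3)  # [[0, 1, 2], [3, 4, 5]]
b = torch.arange(3.0)                # [0, 1, 2]

# Repeated index j is summed over; free index i remains.
mv = torch.einsum("ij,j->i", a, b)

assert torch.equal(mv, a @ b)
assert torch.equal(mv, torch.tensor([5.0, 14.0]))
```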

BLAS and LAPACK Operations

addbmm, addmm, addmv, addr, baddbmm, bmm, chain_matmul, cholesky, cholesky_inverse, cholesky_solve, dot, eig, geqrf, ger, inner, inverse, det, logdet, slogdet, lstsq, lu, lu_solve, lu_unpack, matmul, matrix_power, matrix_rank, matrix_exp, mm, mv, orgqr, ormqr, outer, pinverse, qr, solve, svd, svd_lowrank, pca_lowrank, symeig, lobpcg, trapz, triangular_solve, vdot
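A brief sketch of the most common entry points here: matmul generalizes mm (strictly 2-D) and mv (matrix-vector), and additionally batches over leading dimensions:

```python
import torch

A = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
B = torch.eye(2)

C = torch.matmul(A, B)        # 2-D case: same as torch.mm(A, B)
v = torch.mv(A, torch.ones(2))  # matrix-vector product

assert torch.equal(C, A)                          # A @ I == A
assert torch.equal(v, torch.tensor([3.0, 7.0]))   # row sums of A
```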

Utilities

compiled_with_cxx11_abi, result_type, can_cast, promote_types, use_deterministic_algorithms, are_deterministic_algorithms_enabled, _assert
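The type-promotion helpers in this group can be sketched as follows: they answer "what dtype results from mixing these operands?" without performing any computation:

```python
import torch

# Mixing an integer and a floating dtype promotes to floating point.
assert torch.promote_types(torch.int32, torch.float32) == torch.float32

# A Python float combined with an integer tensor yields the default
# floating dtype (float32 unless changed via set_default_dtype).
assert torch.result_type(torch.tensor([1], dtype=torch.int64), 1.0) == torch.float32

# can_cast follows the promotion rules: float -> int is not allowed.
assert torch.can_cast(torch.float64, torch.int32) is False
```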