pytorch  1.8.2
About: PyTorch provides Tensor computation (like NumPy) with strong GPU acceleration and deep neural networks in Python built on a tape-based autograd system. Version 1.8.2 is an LTS (Long Term Support) release.

tensor.h File Reference
Include dependency graph for tensor.h: (graph omitted)
Files that directly or indirectly include this file: (graph omitted)


Classes

class  caffe2::Tensor
 The Tensor class holds a shared pointer to its implementation, TensorImpl, and redirects API calls to it; copying a Tensor therefore shares the same underlying implementation object (see the sketch after the class list). More...
 
class  caffe2::TensorPrinter
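
A minimal sketch (not part of this header) illustrating the shared-TensorImpl copy semantics described for caffe2::Tensor above. The include path, the at::dtype()/.device() option helpers, and the data()/mutable_data() accessors are assumptions based on the usual Caffe2 API.

    #include <iostream>
    #include "caffe2/core/tensor.h"   // assumed include path for this header

    int main() {
      // caffe2::empty (declared later in this file) builds a tensor from dims + options.
      caffe2::Tensor a =
          caffe2::empty({4}, at::dtype<float>().device(caffe2::CPU));
      a.mutable_data<float>()[0] = 1.0f;

      // Copying shares the underlying TensorImpl instead of cloning the data.
      caffe2::Tensor b = a;
      b.mutable_data<float>()[0] = 42.0f;

      // Both handles observe the write, because they point at the same TensorImpl.
      std::cout << a.data<float>()[0] << std::endl;  // prints 42
      return 0;
    }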
 

Namespaces

namespace  at
 Distribution kernels adapted from THRandom.cpp. The kernels try to follow the std::random distribution signatures. For instance, in ATen: auto gen = at::detail::createCPUGenerator(); at::uniform_real_distribution<double> uniform(0, 1); auto sample = uniform(gen.get());
 
namespace  caffe2
 Copyright (c) 2016-present, Facebook, Inc.
 

Typedefs

using caffe2::TensorCPU = Tensor
 
typedef TypeMeta(* caffe2::TypeCall) (const void *)
 
typedef vector< int64_t >(* caffe2::TensorInfoCall) (const void *, size_t *capacity, DeviceOption *device)
 

Functions

void caffe2::ReinitializeTensor (Tensor *t, at::IntArrayRef dims, at::TensorOptions options)
 Reinitialize a Tensor to the given dims and options if necessary; this does nothing if the Tensor already has the correct size and data type (see the usage sketch after the function list). More...
 
void caffe2::ReinitializeAndCopyFrom (Tensor *t, at::TensorOptions options, const Tensor &src, bool async)
 
TypeCall caffe2::GetTypeCallFunction (TypeIdentifier id)
 
void caffe2::RegisterTypeCallFunction (TypeIdentifier id, TypeCall c)
 
TensorInfoCall caffe2::GetTensorInfoFunction (TypeIdentifier id)
 
void caffe2::RegisterTensorInfoFunction (TypeIdentifier id, TensorInfoCall c)
 
void caffe2::TensorVectorResize (std::vector< Tensor > &tensors, int size, DeviceType type)
 
Tensor caffe2::empty (at::IntArrayRef dims, at::TensorOptions options)
 
template<typename T >
Tensor caffe2::TensorCPUFromValues (at::IntArrayRef dims, at::ArrayRef< T > values)
 Creates a CPU tensor and fills its contents with the given values (see the usage sketch after the function list). More...
 
vector< int64_t > caffe2::GetTensorInfo (const void *c, size_t *capacity, DeviceOption *device)
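
A hedged usage sketch for caffe2::ReinitializeTensor (see the brief above): the call is a no-op when the tensor already has the requested size and data type, and reinitializes it otherwise, which is the usual way to prepare operator outputs. The include path and the at::dtype()/.device() option helpers are assumptions.

    #include "caffe2/core/tensor.h"   // assumed include path for this header

    // Prepares `out` as a 16x32 float CPU tensor before writing into it.
    void PrepareOutput(caffe2::Tensor* out) {
      // Does nothing if *out is already a 16x32 float CPU tensor;
      // otherwise *out is reinitialized to match the dims and options.
      caffe2::ReinitializeTensor(
          out, {16, 32}, at::dtype<float>().device(caffe2::CPU));
      float* data = out->mutable_data<float>();
      // ... fill the 16 * 32 floats pointed to by `data` ...
      (void)data;
    }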
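
A similar hedged sketch for caffe2::TensorCPUFromValues<T>, which creates a CPU tensor of the given shape and fills it, in order, with the supplied values. The include path is an assumption; the braced lists convert to at::IntArrayRef and at::ArrayRef<float>.

    #include "caffe2/core/tensor.h"   // assumed include path for this header

    // Returns a 2x3 float CPU tensor filled with the six values below.
    caffe2::Tensor MakeConstant() {
      return caffe2::TensorCPUFromValues<float>(
          {2, 3}, {1.0f, 2.0f, 3.0f, 4.0f, 5.0f, 6.0f});
    }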
 

Variables

constexpr int caffe2::k_limit_default_ = 1000