
# Tensor Views

PyTorch allows a tensor to be a `View` of an existing tensor. A view tensor shares the same underlying data with its base tensor. Supporting `View` avoids explicit data copies, which allows fast and memory-efficient reshaping, slicing, and element-wise operations.

For example, to get a view of an existing tensor `t`, you can call `t.view(...)`.

```
>>> t = torch.rand(4, 4)
>>> b = t.view(2, 8)
>>> t.storage().data_ptr() == b.storage().data_ptr() # `t` and `b` share the same underlying data.
True
# Modifying view tensor changes base tensor as well.
>>> b[0][0] = 3.14
>>> t[0][0]
tensor(3.14)
```

Since views share underlying data with their base tensors, editing the data through a view is reflected in the base tensor as well.

Typically a PyTorch op returns a new tensor as output, e.g. `torch.Tensor.add`. But in the case of view ops, outputs are views of the input tensors, to avoid unnecessary data copies. No data movement occurs when creating a view; the view tensor just changes the way it interprets the same data. Taking a view of a contiguous tensor could potentially produce a non-contiguous tensor. Users should pay additional attention, since contiguity might have an implicit performance impact. `torch.Tensor.transpose` is a common example.

```
>>> base = torch.tensor([[0, 1],[2, 3]])
>>> base.is_contiguous()
True
>>> t = base.transpose(0, 1) # `t` is a view of `base`. No data movement happened here.
# View tensors might be non-contiguous.
>>> t.is_contiguous()
False
# To get a contiguous tensor, call `.contiguous()` to enforce
# copying data when `t` is not contiguous.
>>> c = t.contiguous()
```
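To see that `.contiguous()` copies data only when it has to, you can compare data pointers, the same check used earlier in this page. A quick sketch:

```python
import torch

base = torch.arange(6).reshape(2, 3)
t = base.transpose(0, 1)  # non-contiguous view of `base`

# On a non-contiguous tensor, `.contiguous()` allocates new storage.
c = t.contiguous()
assert c.is_contiguous()
assert c.data_ptr() != t.data_ptr()

# On an already-contiguous tensor, it returns the tensor itself: no copy.
same = base.contiguous()
assert same.data_ptr() == base.data_ptr()
```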

For reference, here’s a full list of view ops in PyTorch:

- Basic slicing and indexing ops, e.g. `tensor[0, 2:, 1:7:2]`, return a view of the base `tensor` (see note below)
- `torch.Tensor.as_strided`
- `torch.Tensor.detach`
- `torch.Tensor.diagonal`
- `torch.Tensor.expand`
- `torch.Tensor.expand_as`
- `torch.Tensor.movedim`
- `torch.Tensor.narrow`
- `torch.Tensor.permute`
- `torch.Tensor.select`
- `torch.Tensor.squeeze`
- `torch.Tensor.transpose`
- `torch.Tensor.t`
- `torch.Tensor.T`
- `torch.Tensor.real`
- `torch.Tensor.imag`
- `torch.Tensor.view_as_real`
- `torch.Tensor.unflatten`
- `torch.Tensor.unfold`
- `torch.Tensor.unsqueeze`
- `torch.Tensor.view`
- `torch.Tensor.view_as`
- `torch.Tensor.unbind`
- `torch.Tensor.split`
- `torch.Tensor.split_with_sizes`
- `torch.Tensor.swapaxes`
- `torch.Tensor.swapdims`
- `torch.Tensor.chunk`
- `torch.Tensor.indices` (sparse tensor only)
- `torch.Tensor.values` (sparse tensor only)
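You can verify that any of the ops above returns a view with the same storage-pointer check from the first example. A small sketch using a few of the listed ops:

```python
import torch

base = torch.arange(16).reshape(4, 4)

# Each of these ops returns a view sharing `base`'s underlying storage.
views = [base.diagonal(), base.select(0, 1), base.permute(1, 0),
         base.narrow(0, 1, 2), base.t()]
assert all(v.storage().data_ptr() == base.storage().data_ptr() for v in views)

# Modifying a view modifies the base tensor as well.
base.diagonal().fill_(-1)
assert int(base[2, 2]) == -1
```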

Note

When accessing the contents of a tensor via indexing, PyTorch follows NumPy behavior: basic indexing returns views, while advanced indexing returns a copy. Assignment via either basic or advanced indexing is performed in-place. See more examples in the NumPy indexing documentation.
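The distinction between basic and advanced indexing can be checked directly; a brief sketch:

```python
import torch

t = torch.zeros(3, 4)

# Basic indexing (ints and slices) returns a view.
view = t[1, 1:3]
assert view.storage().data_ptr() == t.storage().data_ptr()

# Advanced indexing (with an index list or tensor) returns a copy.
copy = t[[0, 2]]
assert copy.storage().data_ptr() != t.storage().data_ptr()

# Assignment via either form is in-place on `t`.
t[[0, 2]] = 1.0
assert float(t[0, 0]) == 1.0
```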

It's also worth mentioning a few ops with special behaviors:

- `torch.Tensor.reshape`, `torch.Tensor.reshape_as` and `torch.Tensor.flatten` can return either a view or a new tensor; user code shouldn't rely on whether it's a view or not.
- `torch.Tensor.contiguous` returns **itself** if the input tensor is already contiguous, otherwise it returns a new contiguous tensor by copying the data.
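Why `reshape` may or may not copy depends on the input's memory layout; a sketch illustrating both outcomes (the general rule is: don't depend on either):

```python
import torch

base = torch.arange(6).reshape(2, 3)

# On a contiguous tensor, `reshape` can return a view of the same storage...
r1 = base.reshape(3, 2)
assert r1.storage().data_ptr() == base.storage().data_ptr()

# ...but on a non-contiguous tensor (e.g. a transpose), it must copy.
r2 = base.t().reshape(6)
assert r2.storage().data_ptr() != base.storage().data_ptr()
```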

For a more detailed walk-through of PyTorch's internal implementation, please refer to ezyang's blog post about PyTorch Internals.