torch.Tensor
A torch.Tensor is a multi-dimensional matrix containing elements of a single data type.
Torch defines 10 tensor types with CPU and GPU variants, which are as follows:
Data type | dtype | CPU tensor | GPU tensor |
---|---|---|---|
32-bit floating point | torch.float32 or torch.float | torch.FloatTensor | torch.cuda.FloatTensor |
64-bit floating point | torch.float64 or torch.double | torch.DoubleTensor | torch.cuda.DoubleTensor |
16-bit floating point 1 | torch.float16 or torch.half | torch.HalfTensor | torch.cuda.HalfTensor |
16-bit floating point 2 | torch.bfloat16 | torch.BFloat16Tensor | torch.cuda.BFloat16Tensor |
32-bit complex | torch.complex32 | | |
64-bit complex | torch.complex64 or torch.cfloat | | |
128-bit complex | torch.complex128 or torch.cdouble | | |
8-bit integer (unsigned) | torch.uint8 | torch.ByteTensor | torch.cuda.ByteTensor |
8-bit integer (signed) | torch.int8 | torch.CharTensor | torch.cuda.CharTensor |
16-bit integer (signed) | torch.int16 or torch.short | torch.ShortTensor | torch.cuda.ShortTensor |
32-bit integer (signed) | torch.int32 or torch.int | torch.IntTensor | torch.cuda.IntTensor |
64-bit integer (signed) | torch.int64 or torch.long | torch.LongTensor | torch.cuda.LongTensor |
Boolean | torch.bool | torch.BoolTensor | torch.cuda.BoolTensor |
1 Sometimes referred to as binary16: uses 1 sign, 5 exponent, and 10 significand bits. Useful when precision is important at the expense of range.
2 Sometimes referred to as Brain Floating Point: uses 1 sign, 8 exponent, and 7 significand bits. Useful when range is important, since it has the same number of exponent bits as float32.
torch.Tensor is an alias for the default tensor type (torch.FloatTensor).
A tensor can be constructed from a Python list or sequence using the torch.tensor() constructor:
>>> torch.tensor([[1., -1.], [1., -1.]])
tensor([[ 1.0000, -1.0000],
        [ 1.0000, -1.0000]])
>>> torch.tensor(np.array([[1, 2, 3], [4, 5, 6]]))
tensor([[ 1,  2,  3],
        [ 4,  5,  6]])
Warning
torch.tensor() always copies data. If you have a Tensor data and just want to change its requires_grad flag, use requires_grad_() or detach() to avoid a copy. If you have a numpy array and want to avoid a copy, use torch.as_tensor().
A tensor of a specific data type can be constructed by passing a torch.dtype and/or a torch.device to a constructor or tensor creation op:
>>> torch.zeros([2, 4], dtype=torch.int32)
tensor([[ 0,  0,  0,  0],
        [ 0,  0,  0,  0]], dtype=torch.int32)
>>> cuda0 = torch.device('cuda:0')
>>> torch.ones([2, 4], dtype=torch.float64, device=cuda0)
tensor([[ 1.0000,  1.0000,  1.0000,  1.0000],
        [ 1.0000,  1.0000,  1.0000,  1.0000]], dtype=torch.float64, device='cuda:0')
The contents of a tensor can be accessed and modified using Python’s indexing and slicing notation:
>>> x = torch.tensor([[1, 2, 3], [4, 5, 6]])
>>> print(x[1][2])
tensor(6)
>>> x[0][1] = 8
>>> print(x)
tensor([[ 1,  8,  3],
        [ 4,  5,  6]])
Use torch.Tensor.item()
to get a Python number from a tensor containing a single value:
>>> x = torch.tensor([[1]])
>>> x
tensor([[ 1]])
>>> x.item()
1
>>> x = torch.tensor(2.5)
>>> x
tensor(2.5000)
>>> x.item()
2.5
A tensor can be created with requires_grad=True so that torch.autograd records operations on it for automatic differentiation.
>>> x = torch.tensor([[1., -1.], [1., 1.]], requires_grad=True)
>>> out = x.pow(2).sum()
>>> out.backward()
>>> x.grad
tensor([[ 2.0000, -2.0000],
        [ 2.0000,  2.0000]])
Each tensor has an associated torch.Storage
, which holds its data. The tensor class also provides a multi-dimensional, strided view of a storage and defines numeric operations on it.
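For instance, a minimal sketch of two tensors acting as strided views over the same storage:
>>> x = torch.arange(6).reshape(2, 3)
>>> x.stride()                      # steps through storage per dimension
(3, 1)
>>> y = x.t()                       # transpose: a different view, same storage
>>> y.stride()
(1, 3)
>>> x.data_ptr() == y.data_ptr()    # both views start at the same memory
True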
Note
For more information on tensor views, see Tensor Views.
Note
For more information on the torch.dtype
, torch.device
, and torch.layout
attributes of a torch.Tensor
, see Tensor Attributes.
Note
Methods which mutate a tensor are marked with an underscore suffix. For example, torch.FloatTensor.abs_()
computes the absolute value in-place and returns the modified tensor, while torch.FloatTensor.abs()
computes the result in a new tensor.
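For example:
>>> t = torch.tensor([-1., 2., -3.])
>>> t.abs()      # out-of-place: returns a new tensor, t is unchanged
tensor([1., 2., 3.])
>>> t.abs_()     # in-place: modifies t and returns it
tensor([1., 2., 3.])
>>> t
tensor([1., 2., 3.])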
Note
To change an existing tensor’s torch.device
and/or torch.dtype
, consider using the to() method on the tensor.
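A minimal sketch (the CUDA lines assume a GPU is available):
>>> t = torch.zeros(2, 2)
>>> t.to(torch.float64)                       # change dtype
>>> t.to(torch.device('cuda:0'))              # change device
>>> t.to(torch.device('cuda:0'), torch.half)  # change both at once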
Warning
The current implementation of torch.Tensor introduces memory overhead, which can lead to unexpectedly high memory usage in applications with many tiny tensors. If this is your case, consider using one large structure.
-
class torch.Tensor
-
There are a few main ways to create a tensor, depending on your use case.
- To create a tensor with pre-existing data, use torch.tensor().
- To create a tensor with a specific size, use torch.* tensor creation ops (see Creation Ops).
- To create a tensor with the same size (and similar types) as another tensor, use torch.*_like tensor creation ops (see Creation Ops).
- To create a tensor with a similar type but different size as another tensor, use tensor.new_* creation ops, as sketched below.
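A short sketch of these four approaches:
>>> torch.tensor([[1., 2.], [3., 4.]])      # from pre-existing data
>>> torch.zeros(2, 3)                        # with a specific size
>>> base = torch.ones(2, 3, dtype=torch.int32)
>>> torch.zeros_like(base)                   # same size and dtype as base
>>> base.new_zeros(5)                        # same dtype as base, different size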
-
new_tensor(data, dtype=None, device=None, requires_grad=False) → Tensor
-
Returns a new Tensor with
data
as the tensor data. By default, the returned Tensor has the sametorch.dtype
andtorch.device
as this tensor.Warning
new_tensor()
always copiesdata
. If you have a Tensordata
and want to avoid a copy, usetorch.Tensor.requires_grad_()
ortorch.Tensor.detach()
. If you have a numpy array and want to avoid a copy, usetorch.from_numpy()
.Warning
When data is a tensor
x
,new_tensor()
reads out ‘the data’ from whatever it is passed, and constructs a leaf variable. Thereforetensor.new_tensor(x)
is equivalent tox.clone().detach()
andtensor.new_tensor(x, requires_grad=True)
is equivalent tox.clone().detach().requires_grad_(True)
. The equivalents usingclone()
anddetach()
are recommended.- Parameters
-
-
data (array_like) – The returned Tensor copies
data
. -
dtype (
torch.dtype
, optional) – the desired type of returned tensor. Default: if None, sametorch.dtype
as this tensor. -
device (
torch.device
, optional) – the desired device of returned tensor. Default: if None, sametorch.device
as this tensor. -
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default:
False
.
Example:
>>> tensor = torch.ones((2,), dtype=torch.int8) >>> data = [[0, 1], [2, 3]] >>> tensor.new_tensor(data) tensor([[ 0, 1], [ 2, 3]], dtype=torch.int8)
-
new_full(size, fill_value, dtype=None, device=None, requires_grad=False) → Tensor
-
Returns a Tensor of size
size
filled withfill_value
. By default, the returned Tensor has the sametorch.dtype
andtorch.device
as this tensor.- Parameters
-
- fill_value (scalar) – the number to fill the output tensor with.
-
dtype (
torch.dtype
, optional) – the desired type of returned tensor. Default: if None, sametorch.dtype
as this tensor. -
device (
torch.device
, optional) – the desired device of returned tensor. Default: if None, sametorch.device
as this tensor. -
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default:
False
.
Example:
>>> tensor = torch.ones((2,), dtype=torch.float64) >>> tensor.new_full((3, 4), 3.141592) tensor([[ 3.1416, 3.1416, 3.1416, 3.1416], [ 3.1416, 3.1416, 3.1416, 3.1416], [ 3.1416, 3.1416, 3.1416, 3.1416]], dtype=torch.float64)
-
new_empty(size, dtype=None, device=None, requires_grad=False) → Tensor
-
Returns a Tensor of size
size
filled with uninitialized data. By default, the returned Tensor has the sametorch.dtype
andtorch.device
as this tensor.- Parameters
-
-
dtype (
torch.dtype
, optional) – the desired type of returned tensor. Default: if None, sametorch.dtype
as this tensor. -
device (
torch.device
, optional) – the desired device of returned tensor. Default: if None, sametorch.device
as this tensor. -
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default:
False
.
Example:
>>> tensor = torch.ones(()) >>> tensor.new_empty((2, 3)) tensor([[ 5.8182e-18, 4.5765e-41, -1.0545e+30], [ 3.0949e-41, 4.4842e-44, 0.0000e+00]])
-
new_ones(size, dtype=None, device=None, requires_grad=False) → Tensor
-
Returns a Tensor of size
size
filled with1
. By default, the returned Tensor has the sametorch.dtype
andtorch.device
as this tensor.- Parameters
-
-
size (int...) – a list, tuple, or
torch.Size
of integers defining the shape of the output tensor. -
dtype (
torch.dtype
, optional) – the desired type of returned tensor. Default: if None, sametorch.dtype
as this tensor. -
device (
torch.device
, optional) – the desired device of returned tensor. Default: if None, sametorch.device
as this tensor. -
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default:
False
.
Example:
>>> tensor = torch.tensor((), dtype=torch.int32) >>> tensor.new_ones((2, 3)) tensor([[ 1, 1, 1], [ 1, 1, 1]], dtype=torch.int32)
-
new_zeros(size, dtype=None, device=None, requires_grad=False) → Tensor
-
Returns a Tensor of size
size
filled with0
. By default, the returned Tensor has the sametorch.dtype
andtorch.device
as this tensor.- Parameters
-
-
size (int...) – a list, tuple, or
torch.Size
of integers defining the shape of the output tensor. -
dtype (
torch.dtype
, optional) – the desired type of returned tensor. Default: if None, sametorch.dtype
as this tensor. -
device (
torch.device
, optional) – the desired device of returned tensor. Default: if None, sametorch.device
as this tensor. -
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default:
False
.
Example:
>>> tensor = torch.tensor((), dtype=torch.float64) >>> tensor.new_zeros((2, 3)) tensor([[ 0., 0., 0.], [ 0., 0., 0.]], dtype=torch.float64)
-
is_cuda
-
Is
True
if the Tensor is stored on the GPU,False
otherwise.
-
is_quantized
-
Is
True
if the Tensor is quantized,False
otherwise.
-
is_meta
-
Is
True
if the Tensor is a meta tensor,False
otherwise. Meta tensors are like normal tensors, but they carry no data.
-
device
-
Is the
torch.device
where this Tensor is.
-
grad
-
This attribute is
None
by default and becomes a Tensor the first time a call tobackward()
computes gradients forself
. The attribute will then contain the gradients computed and future calls tobackward()
will accumulate (add) gradients into it.
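For example:
>>> x = torch.tensor([1., 2.], requires_grad=True)
>>> x.pow(2).sum().backward()
>>> x.grad
tensor([2., 4.])
>>> x.pow(2).sum().backward()   # a second backward accumulates into .grad
>>> x.grad
tensor([4., 8.])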
-
ndim
-
Alias for
dim()
-
T
-
Is this Tensor with its dimensions reversed.
If
n
is the number of dimensions inx
,x.T
is equivalent tox.permute(n-1, n-2, ..., 0)
.
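For example:
>>> x = torch.randn(2, 3, 4)
>>> x.T.shape                    # same as x.permute(2, 1, 0).shape
torch.Size([4, 3, 2])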
-
real
-
Returns a new tensor containing real values of the
self
tensor. The returned tensor andself
share the same underlying storage.Warning
real()
is only supported for tensors with complex dtypes.- Example::
-
>>> x=torch.randn(4, dtype=torch.cfloat) >>> x tensor([(0.3100+0.3553j), (-0.5445-0.7896j), (-1.6492-0.0633j), (-0.0638-0.8119j)]) >>> x.real tensor([ 0.3100, -0.5445, -1.6492, -0.0638])
-
imag
-
Returns a new tensor containing imaginary values of the
self
tensor. The returned tensor andself
share the same underlying storage.Warning
imag()
is only supported for tensors with complex dtypes.- Example::
-
>>> x=torch.randn(4, dtype=torch.cfloat) >>> x tensor([(0.3100+0.3553j), (-0.5445-0.7896j), (-1.6492-0.0633j), (-0.0638-0.8119j)]) >>> x.imag tensor([ 0.3553, -0.7896, -0.0633, -0.8119])
-
abs() → Tensor
-
See
torch.abs()
-
abs_() → Tensor
-
In-place version of
abs()
-
absolute() → Tensor
-
Alias for
abs()
-
absolute_() → Tensor
-
In-place version of
absolute()
Alias forabs_()
-
acos() → Tensor
-
See
torch.acos()
-
acos_() → Tensor
-
In-place version of
acos()
-
arccos() → Tensor
-
See
torch.arccos()
-
arccos_() → Tensor
-
In-place version of
arccos()
-
add(other, *, alpha=1) → Tensor
-
Add a scalar or tensor to
self
tensor. If bothalpha
andother
are specified, each element ofother
is scaled byalpha
before being used. When other is a tensor, the shape of other must be broadcastable with the shape of the underlying tensor.
See torch.add()
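For example:
>>> a = torch.tensor([1., 2., 3.])
>>> a.add(torch.tensor([10., 10., 10.]), alpha=0.5)   # a + 0.5 * other
tensor([6., 7., 8.])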
-
add_(other, *, alpha=1) → Tensor
-
In-place version of
add()
-
addbmm(batch1, batch2, *, beta=1, alpha=1) → Tensor
-
See
torch.addbmm()
-
addbmm_(batch1, batch2, *, beta=1, alpha=1) → Tensor
-
In-place version of
addbmm()
-
addcdiv(tensor1, tensor2, *, value=1) → Tensor
-
See
torch.addcdiv()
-
addcdiv_(tensor1, tensor2, *, value=1) → Tensor
-
In-place version of
addcdiv()
-
addcmul(tensor1, tensor2, *, value=1) → Tensor
-
See
torch.addcmul()
-
addcmul_(tensor1, tensor2, *, value=1) → Tensor
-
In-place version of
addcmul()
-
addmm(mat1, mat2, *, beta=1, alpha=1) → Tensor
-
See
torch.addmm()
-
addmm_(mat1, mat2, *, beta=1, alpha=1) → Tensor
-
In-place version of
addmm()
-
sspaddmm(mat1, mat2, *, beta=1, alpha=1) → Tensor
-
See
torch.sspaddmm()
-
addmv(mat, vec, *, beta=1, alpha=1) → Tensor
-
See
torch.addmv()
-
addmv_(mat, vec, *, beta=1, alpha=1) → Tensor
-
In-place version of
addmv()
-
addr(vec1, vec2, *, beta=1, alpha=1) → Tensor
-
See
torch.addr()
-
addr_(vec1, vec2, *, beta=1, alpha=1) → Tensor
-
In-place version of
addr()
-
allclose(other, rtol=1e-05, atol=1e-08, equal_nan=False) → Tensor
-
See
torch.allclose()
-
amax(dim=None, keepdim=False) → Tensor
-
See
torch.amax()
-
amin(dim=None, keepdim=False) → Tensor
-
See
torch.amin()
-
angle() → Tensor
-
See
torch.angle()
-
apply_(callable) → Tensor
-
Applies the function
callable
to each element in the tensor, replacing each element with the value returned bycallable
.Note
This function only works with CPU tensors and should not be used in code sections that require high performance.
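A minimal sketch:
>>> t = torch.tensor([1., 2., 3.])
>>> t.apply_(lambda v: v * v)   # element-by-element, CPU only
tensor([1., 4., 9.])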
-
argmax(dim=None, keepdim=False) → LongTensor
-
See
torch.argmax()
-
argmin(dim=None, keepdim=False) → LongTensor
-
See
torch.argmin()
-
argsort(dim=-1, descending=False) → LongTensor
-
See
torch.argsort()
-
asin() → Tensor
-
See
torch.asin()
-
asin_() → Tensor
-
In-place version of
asin()
-
arcsin() → Tensor
-
See
torch.arcsin()
-
arcsin_() → Tensor
-
In-place version of
arcsin()
-
as_strided(size, stride, storage_offset=0) → Tensor
-
atan() → Tensor
-
See
torch.atan()
-
atan_() → Tensor
-
In-place version of
atan()
-
arctan() → Tensor
-
See
torch.arctan()
-
arctan_() → Tensor
-
In-place version of
arctan()
-
atan2(other) → Tensor
-
See
torch.atan2()
-
atan2_(other) → Tensor
-
In-place version of
atan2()
-
all(dim=None, keepdim=False) → Tensor
-
See
torch.all()
-
any(dim=None, keepdim=False) → Tensor
-
See
torch.any()
-
backward(gradient=None, retain_graph=None, create_graph=False, inputs=None)
[source] -
Computes the gradient of current tensor w.r.t. graph leaves.
The graph is differentiated using the chain rule. If the tensor is non-scalar (i.e. its data has more than one element) and requires gradient, the function additionally requires specifying
gradient
. It should be a tensor of matching type and location, that contains the gradient of the differentiated function w.r.t.self
.This function accumulates gradients in the leaves - you might need to zero
.grad
attributes or set them toNone
before calling it. See Default gradient layouts for details on the memory layout of accumulated gradients.Note
If you run any forward ops, create
gradient
, and/or callbackward
in a user-specified CUDA stream context, see Stream semantics of backward passes.- Parameters
-
-
gradient (Tensor or None) – Gradient w.r.t. the tensor. If it is a tensor, it will be automatically converted to a Tensor that does not require grad unless
create_graph
is True. None values can be specified for scalar Tensors or ones that don’t require grad. If a None value would be acceptable then this argument is optional. -
retain_graph (bool, optional) – If
False
, the graph used to compute the grads will be freed. Note that in nearly all cases setting this option to True is not needed and often can be worked around in a much more efficient way. Defaults to the value ofcreate_graph
. -
create_graph (bool, optional) – If
True
, graph of the derivative will be constructed, allowing to compute higher order derivative products. Defaults toFalse
. -
inputs (sequence of Tensor) – Inputs w.r.t. which the gradient will be accumulated into
.grad
. All other Tensors will be ignored. If not provided, the gradient is accumulated into all the leaf Tensors that were used to compute this tensor. All the provided inputs must be leaf Tensors.
-
baddbmm(batch1, batch2, *, beta=1, alpha=1) → Tensor
-
See
torch.baddbmm()
-
baddbmm_(batch1, batch2, *, beta=1, alpha=1) → Tensor
-
In-place version of
baddbmm()
-
bernoulli(*, generator=None) → Tensor
-
Returns a result tensor where each element result[i] is independently sampled from Bernoulli(self[i]).
self
must have floating pointdtype
, and the result will have the samedtype
.
-
bernoulli_()
-
-
bernoulli_(p=0.5, *, generator=None) → Tensor
-
Fills each location of
self
with an independent sample from Bernoulli(p). self can have integral dtype.
-
bernoulli_(p_tensor, *, generator=None) → Tensor
-
p_tensor
should be a tensor containing probabilities to be used for drawing the binary random number. The i-th element of self tensor will be set to a value sampled from Bernoulli(p_tensor[i]). self can have integral dtype, but p_tensor must have floating point dtype.
See also
bernoulli()
andtorch.bernoulli()
-
-
bfloat16(memory_format=torch.preserve_format) → Tensor
-
self.bfloat16()
is equivalent toself.to(torch.bfloat16)
. Seeto()
.- Parameters
-
memory_format (
torch.memory_format
, optional) – the desired memory format of returned Tensor. Default:torch.preserve_format
.
-
bincount(weights=None, minlength=0) → Tensor
-
See
torch.bincount()
-
bitwise_not() → Tensor
-
bitwise_not_() → Tensor
-
In-place version of
bitwise_not()
-
bitwise_and() → Tensor
-
bitwise_and_() → Tensor
-
In-place version of
bitwise_and()
-
bitwise_or() → Tensor
-
bitwise_or_() → Tensor
-
In-place version of
bitwise_or()
-
bitwise_xor() → Tensor
-
bitwise_xor_() → Tensor
-
In-place version of
bitwise_xor()
-
bmm(batch2) → Tensor
-
See
torch.bmm()
-
bool(memory_format=torch.preserve_format) → Tensor
-
self.bool()
is equivalent toself.to(torch.bool)
. Seeto()
.- Parameters
-
memory_format (
torch.memory_format
, optional) – the desired memory format of returned Tensor. Default:torch.preserve_format
.
-
byte(memory_format=torch.preserve_format) → Tensor
-
self.byte()
is equivalent toself.to(torch.uint8)
. Seeto()
.- Parameters
-
memory_format (
torch.memory_format
, optional) – the desired memory format of returned Tensor. Default:torch.preserve_format
.
-
broadcast_to(shape) → Tensor
-
See
torch.broadcast_to()
.
-
cauchy_(median=0, sigma=1, *, generator=None) → Tensor
-
Fills the tensor with numbers drawn from the Cauchy distribution:
f(x) = \frac{1}{\pi} \frac{\sigma}{(x - \text{median})^2 + \sigma^2}
-
ceil() → Tensor
-
See
torch.ceil()
-
ceil_() → Tensor
-
In-place version of
ceil()
-
char(memory_format=torch.preserve_format) → Tensor
-
self.char()
is equivalent toself.to(torch.int8)
. Seeto()
.- Parameters
-
memory_format (
torch.memory_format
, optional) – the desired memory format of returned Tensor. Default:torch.preserve_format
.
-
cholesky(upper=False) → Tensor
-
See
torch.cholesky()
-
cholesky_inverse(upper=False) → Tensor
-
cholesky_solve(input2, upper=False) → Tensor
-
chunk(chunks, dim=0) → List of Tensors
-
See
torch.chunk()
-
clamp(min, max) → Tensor
-
See
torch.clamp()
-
clamp_(min, max) → Tensor
-
In-place version of
clamp()
-
clip(min, max) → Tensor
-
Alias for
clamp()
.
-
clip_(min, max) → Tensor
-
Alias for
clamp_()
.
-
clone(*, memory_format=torch.preserve_format) → Tensor
-
See
torch.clone()
-
contiguous(memory_format=torch.contiguous_format) → Tensor
-
Returns a contiguous in memory tensor containing the same data as
self
tensor. Ifself
tensor is already in the specified memory format, this function returns theself
tensor.- Parameters
-
memory_format (
torch.memory_format
, optional) – the desired memory format of returned Tensor. Default:torch.contiguous_format
.
-
copy_(src, non_blocking=False) → Tensor
-
Copies the elements from
src
intoself
tensor and returnsself
.The
src
tensor must be broadcastable with theself
tensor. It may be of a different data type or reside on a different device.
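For example:
>>> x = torch.zeros(2, 3)
>>> x.copy_(torch.tensor([1., 2., 3.]))   # src is broadcast over dim 0
tensor([[1., 2., 3.],
        [1., 2., 3.]])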
-
conj() → Tensor
-
See
torch.conj()
-
copysign(other) → Tensor
-
See
torch.copysign()
-
copysign_(other) → Tensor
-
In-place version of
copysign()
-
cos() → Tensor
-
See
torch.cos()
-
cos_() → Tensor
-
In-place version of
cos()
-
cosh() → Tensor
-
See
torch.cosh()
-
cosh_() → Tensor
-
In-place version of
cosh()
-
count_nonzero(dim=None) → Tensor
-
acosh() → Tensor
-
See
torch.acosh()
-
acosh_() → Tensor
-
In-place version of
acosh()
-
arccosh()
-
acosh() -> Tensor
See
torch.arccosh()
-
arccosh_()
-
acosh_() -> Tensor
In-place version of
arccosh()
-
cpu(memory_format=torch.preserve_format) → Tensor
-
Returns a copy of this object in CPU memory.
If this object is already in CPU memory and on the correct device, then no copy is performed and the original object is returned.
- Parameters
-
memory_format (
torch.memory_format
, optional) – the desired memory format of returned Tensor. Default:torch.preserve_format
.
-
cross(other, dim=-1) → Tensor
-
See
torch.cross()
-
cuda(device=None, non_blocking=False, memory_format=torch.preserve_format) → Tensor
-
Returns a copy of this object in CUDA memory.
If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.
- Parameters
-
-
device (
torch.device
) – The destination GPU device. Defaults to the current CUDA device. -
non_blocking (bool) – If
True
and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect. Default:False
. -
memory_format (
torch.memory_format
, optional) – the desired memory format of returned Tensor. Default:torch.preserve_format
.
-
logcumsumexp(dim) → Tensor
-
cummax(dim) -> (Tensor, Tensor)
-
See
torch.cummax()
-
cummin(dim) -> (Tensor, Tensor)
-
See
torch.cummin()
-
cumprod(dim, dtype=None) → Tensor
-
See
torch.cumprod()
-
cumprod_(dim, dtype=None) → Tensor
-
In-place version of
cumprod()
-
cumsum(dim, dtype=None) → Tensor
-
See
torch.cumsum()
-
cumsum_(dim, dtype=None) → Tensor
-
In-place version of
cumsum()
-
data_ptr() → int
-
Returns the address of the first element of
self
tensor.
-
deg2rad() → Tensor
-
See
torch.deg2rad()
-
dequantize() → Tensor
-
Given a quantized Tensor, dequantize it and return the dequantized float Tensor.
-
det() → Tensor
-
See
torch.det()
-
dense_dim() → int
-
Return the number of dense dimensions in a sparse tensor
self
.Warning
Throws an error if
self
is not a sparse tensor.See also
Tensor.sparse_dim()
and hybrid tensors.
-
detach()
-
Returns a new Tensor, detached from the current graph.
The result will never require gradient.
Note
Returned Tensor shares the same storage with the original one. In-place modifications on either of them will be seen, and may trigger errors in correctness checks. IMPORTANT NOTE: Previously, in-place size / stride / storage changes (such as
resize_
/resize_as_
/set_
/transpose_
) to the returned tensor also update the original tensor. Now, these in-place changes will not update the original tensor anymore, and will instead trigger an error. For sparse tensors: In-place indices / values changes (such aszero_
/copy_
/add_
) to the returned tensor will not update the original tensor anymore, and will instead trigger an error.
-
detach_()
-
Detaches the Tensor from the graph that created it, making it a leaf. Views cannot be detached in-place.
-
diag(diagonal=0) → Tensor
-
See
torch.diag()
-
diag_embed(offset=0, dim1=-2, dim2=-1) → Tensor
-
diagflat(offset=0) → Tensor
-
See
torch.diagflat()
-
diagonal(offset=0, dim1=0, dim2=1) → Tensor
-
See
torch.diagonal()
-
fill_diagonal_(fill_value, wrap=False) → Tensor
-
Fill the main diagonal of a tensor that has at least 2 dimensions. When dims > 2, all dimensions of the input must be of equal length. This function modifies the input tensor in-place, and returns the input tensor.
- Parameters
-
- fill_value (Scalar) – the fill value
- wrap (bool) – the diagonal ‘wrapped’ after N columns for tall matrices.
Example:
>>> a = torch.zeros(3, 3) >>> a.fill_diagonal_(5) tensor([[5., 0., 0.], [0., 5., 0.], [0., 0., 5.]]) >>> b = torch.zeros(7, 3) >>> b.fill_diagonal_(5) tensor([[5., 0., 0.], [0., 5., 0.], [0., 0., 5.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.], [0., 0., 0.]]) >>> c = torch.zeros(7, 3) >>> c.fill_diagonal_(5, wrap=True) tensor([[5., 0., 0.], [0., 5., 0.], [0., 0., 5.], [0., 0., 0.], [5., 0., 0.], [0., 5., 0.], [0., 0., 5.]])
-
fmax(other) → Tensor
-
See
torch.fmax()
-
fmin(other) → Tensor
-
See
torch.fmin()
-
diff(n=1, dim=-1, prepend=None, append=None) → Tensor
-
See
torch.diff()
-
digamma() → Tensor
-
See
torch.digamma()
-
digamma_() → Tensor
-
In-place version of
digamma()
-
dim() → int
-
Returns the number of dimensions of
self
tensor.
-
dist(other, p=2) → Tensor
-
See
torch.dist()
-
div(value, *, rounding_mode=None) → Tensor
-
See
torch.div()
-
div_(value, *, rounding_mode=None) → Tensor
-
In-place version of
div()
-
divide(value, *, rounding_mode=None) → Tensor
-
See
torch.divide()
-
divide_(value, *, rounding_mode=None) → Tensor
-
In-place version of
divide()
-
dot(other) → Tensor
-
See
torch.dot()
-
double(memory_format=torch.preserve_format) → Tensor
-
self.double()
is equivalent toself.to(torch.float64)
. Seeto()
.- Parameters
-
memory_format (
torch.memory_format
, optional) – the desired memory format of returned Tensor. Default:torch.preserve_format
.
-
eig(eigenvectors=False) -> (Tensor, Tensor)
-
See
torch.eig()
-
element_size() → int
-
Returns the size in bytes of an individual element.
Example:
>>> torch.tensor([]).element_size() 4 >>> torch.tensor([], dtype=torch.uint8).element_size() 1
-
eq(other) → Tensor
-
See
torch.eq()
-
eq_(other) → Tensor
-
In-place version of
eq()
-
equal(other) → bool
-
See
torch.equal()
-
erf() → Tensor
-
See
torch.erf()
-
erf_() → Tensor
-
In-place version of
erf()
-
erfc() → Tensor
-
See
torch.erfc()
-
erfc_() → Tensor
-
In-place version of
erfc()
-
erfinv() → Tensor
-
See
torch.erfinv()
-
erfinv_() → Tensor
-
In-place version of
erfinv()
-
exp() → Tensor
-
See
torch.exp()
-
exp_() → Tensor
-
In-place version of
exp()
-
expm1() → Tensor
-
See
torch.expm1()
-
expm1_() → Tensor
-
In-place version of
expm1()
-
expand(*sizes) → Tensor
-
Returns a new view of the
self
tensor with singleton dimensions expanded to a larger size.Passing -1 as the size for a dimension means not changing the size of that dimension.
Tensor can be also expanded to a larger number of dimensions, and the new ones will be appended at the front. For the new dimensions, the size cannot be set to -1.
Expanding a tensor does not allocate new memory, but only creates a new view on the existing tensor where a dimension of size one is expanded to a larger size by setting the
stride
to 0. Any dimension of size 1 can be expanded to an arbitrary value without allocating new memory.- Parameters
-
*sizes (torch.Size or int...) – the desired expanded size
Warning
More than one element of an expanded tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensors, please clone them first.
Example:
>>> x = torch.tensor([[1], [2], [3]]) >>> x.size() torch.Size([3, 1]) >>> x.expand(3, 4) tensor([[ 1, 1, 1, 1], [ 2, 2, 2, 2], [ 3, 3, 3, 3]]) >>> x.expand(-1, 4) # -1 means not changing the size of that dimension tensor([[ 1, 1, 1, 1], [ 2, 2, 2, 2], [ 3, 3, 3, 3]])
-
expand_as(other) → Tensor
-
Expand this tensor to the same size as
other
.self.expand_as(other)
is equivalent toself.expand(other.size())
.Please see
expand()
for more information aboutexpand
.- Parameters
-
other (
torch.Tensor
) – The result tensor has the same size asother
.
-
exponential_(lambd=1, *, generator=None) → Tensor
-
Fills
self
tensor with elements drawn from the exponential distribution:
f(x) = \lambda e^{-\lambda x}
-
fix() → Tensor
-
See
torch.fix()
.
-
fix_() → Tensor
-
In-place version of
fix()
-
fill_(value) → Tensor
-
Fills
self
tensor with the specified value.
-
flatten(input, start_dim=0, end_dim=-1) → Tensor
-
see
torch.flatten()
-
flip(dims) → Tensor
-
See
torch.flip()
-
fliplr() → Tensor
-
See
torch.fliplr()
-
flipud() → Tensor
-
See
torch.flipud()
-
float(memory_format=torch.preserve_format) → Tensor
-
self.float()
is equivalent toself.to(torch.float32)
. Seeto()
.- Parameters
-
memory_format (
torch.memory_format
, optional) – the desired memory format of returned Tensor. Default:torch.preserve_format
.
-
float_power(exponent) → Tensor
-
float_power_(exponent) → Tensor
-
In-place version of
float_power()
-
floor() → Tensor
-
See
torch.floor()
-
floor_() → Tensor
-
In-place version of
floor()
-
floor_divide(value) → Tensor
-
floor_divide_(value) → Tensor
-
In-place version of
floor_divide()
-
fmod(divisor) → Tensor
-
See
torch.fmod()
-
fmod_(divisor) → Tensor
-
In-place version of
fmod()
-
frac() → Tensor
-
See
torch.frac()
-
frac_() → Tensor
-
In-place version of
frac()
-
gather(dim, index) → Tensor
-
See
torch.gather()
-
gcd(other) → Tensor
-
See
torch.gcd()
-
gcd_(other) → Tensor
-
In-place version of
gcd()
-
ge(other) → Tensor
-
See
torch.ge()
.
-
ge_(other) → Tensor
-
In-place version of
ge()
.
-
greater_equal(other) → Tensor
-
greater_equal_(other) → Tensor
-
In-place version of
greater_equal()
.
-
geometric_(p, *, generator=None) → Tensor
-
Fills
self
tensor with elements drawn from the geometric distribution:
f(X = k) = (1 - p)^{k - 1} p, \quad k = 1, 2, \ldots
-
geqrf() -> (Tensor, Tensor)
-
See
torch.geqrf()
-
ger(vec2) → Tensor
-
See
torch.ger()
-
get_device() -> Device ordinal (Integer)
-
For CUDA tensors, this function returns the device ordinal of the GPU on which the tensor resides. For CPU tensors, an error is thrown.
Example:
>>> x = torch.randn(3, 4, 5, device='cuda:0') >>> x.get_device() 0 >>> x.cpu().get_device() # RuntimeError: get_device is not implemented for type torch.FloatTensor
-
gt(other) → Tensor
-
See
torch.gt()
.
-
gt_(other) → Tensor
-
In-place version of
gt()
.
-
greater(other) → Tensor
-
See
torch.greater()
.
-
greater_(other) → Tensor
-
In-place version of
greater()
.
-
half(memory_format=torch.preserve_format) → Tensor
-
self.half()
is equivalent toself.to(torch.float16)
. Seeto()
.- Parameters
-
memory_format (
torch.memory_format
, optional) – the desired memory format of returned Tensor. Default:torch.preserve_format
.
-
hardshrink(lambd=0.5) → Tensor
-
heaviside(values) → Tensor
-
histc(bins=100, min=0, max=0) → Tensor
-
See
torch.histc()
-
hypot(other) → Tensor
-
See
torch.hypot()
-
hypot_(other) → Tensor
-
In-place version of
hypot()
-
i0() → Tensor
-
See
torch.i0()
-
i0_() → Tensor
-
In-place version of
i0()
-
igamma(other) → Tensor
-
See
torch.igamma()
-
igamma_(other) → Tensor
-
In-place version of
igamma()
-
igammac(other) → Tensor
-
See
torch.igammac()
-
igammac_(other) → Tensor
-
In-place version of
igammac()
-
index_add_(dim, index, tensor) → Tensor
-
Accumulate the elements of
tensor
into theself
tensor by adding to the indices in the order given inindex
. For example, ifdim == 0
andindex[i] == j
, then thei
th row oftensor
is added to thej
th row ofself
.The
dim
th dimension oftensor
must have the same size as the length ofindex
(which must be a vector), and all other dimensions must matchself
, or an error will be raised.Note
This operation may behave nondeterministically when given tensors on a CUDA device. See Reproducibility for more information.
- Parameters
Example:
>>> x = torch.ones(5, 3) >>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float) >>> index = torch.tensor([0, 4, 2]) >>> x.index_add_(0, index, t) tensor([[ 2., 3., 4.], [ 1., 1., 1.], [ 8., 9., 10.], [ 1., 1., 1.], [ 5., 6., 7.]])
-
index_add(tensor1, dim, index, tensor2) → Tensor
-
Out-of-place version of
torch.Tensor.index_add_()
.tensor1
corresponds toself
intorch.Tensor.index_add_()
.
-
index_copy_(dim, index, tensor) → Tensor
-
Copies the elements of
tensor
into theself
tensor by selecting the indices in the order given inindex
. For example, ifdim == 0
andindex[i] == j
, then thei
th row oftensor
is copied to thej
th row ofself
.The
dim
th dimension oftensor
must have the same size as the length ofindex
(which must be a vector), and all other dimensions must matchself
, or an error will be raised.Note
If
index
contains duplicate entries, multiple elements fromtensor
will be copied to the same index ofself
. The result is nondeterministic since it depends on which copy occurs last.- Parameters
Example:
>>> x = torch.zeros(5, 3) >>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float) >>> index = torch.tensor([0, 4, 2]) >>> x.index_copy_(0, index, t) tensor([[ 1., 2., 3.], [ 0., 0., 0.], [ 7., 8., 9.], [ 0., 0., 0.], [ 4., 5., 6.]])
-
index_copy(tensor1, dim, index, tensor2) → Tensor
-
Out-of-place version of
torch.Tensor.index_copy_()
.tensor1
corresponds toself
intorch.Tensor.index_copy_()
.
-
index_fill_(dim, index, val) → Tensor
-
Fills the elements of the
self
tensor with valueval
by selecting the indices in the order given inindex
.- Parameters
- Example:
-
>>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float) >>> index = torch.tensor([0, 2]) >>> x.index_fill_(1, index, -1) tensor([[-1., 2., -1.], [-1., 5., -1.], [-1., 8., -1.]])
-
index_fill(tensor1, dim, index, value) → Tensor
-
Out-of-place version of
torch.Tensor.index_fill_()
.tensor1
corresponds toself
intorch.Tensor.index_fill_()
.
-
index_put_(indices, values, accumulate=False) → Tensor
-
Puts values from the tensor
values
into the tensorself
using the indices specified inindices
(which is a tuple of Tensors). The expressiontensor.index_put_(indices, values)
is equivalent totensor[indices] = values
. Returnsself
.If
accumulate
isTrue
, the elements invalues
are added toself
. If accumulate isFalse
, the behavior is undefined if indices contain duplicate elements.
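A minimal sketch:
>>> x = torch.zeros(3, 3)
>>> rows = torch.tensor([0, 2])
>>> cols = torch.tensor([1, 1])
>>> x.index_put_((rows, cols), torch.tensor([5., 7.]))
tensor([[0., 5., 0.],
        [0., 0., 0.],
        [0., 7., 0.]])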
-
index_put(tensor1, indices, values, accumulate=False) → Tensor
-
Out-place version of
index_put_()
.tensor1
corresponds toself
intorch.Tensor.index_put_()
.
-
index_select(dim, index) → Tensor
-
indices() → Tensor
-
Return the indices tensor of a sparse COO tensor.
Warning
Throws an error if
self
is not a sparse COO tensor.See also
Tensor.values()
.Note
This method can only be called on a coalesced sparse tensor. See
Tensor.coalesce()
for details.
-
inner(other) → Tensor
-
See
torch.inner()
.
-
int(memory_format=torch.preserve_format) → Tensor
-
self.int()
is equivalent toself.to(torch.int32)
. Seeto()
.- Parameters
-
memory_format (
torch.memory_format
, optional) – the desired memory format of returned Tensor. Default:torch.preserve_format
.
-
int_repr() → Tensor
-
Given a quantized Tensor,
self.int_repr()
returns a CPU Tensor with uint8_t as data type that stores the underlying uint8_t values of the given Tensor.
-
inverse() → Tensor
-
See
torch.inverse()
-
isclose(other, rtol=1e-05, atol=1e-08, equal_nan=False) → Tensor
-
See
torch.isclose()
-
isfinite() → Tensor
-
See
torch.isfinite()
-
isinf() → Tensor
-
See
torch.isinf()
-
isposinf() → Tensor
-
See
torch.isposinf()
-
isneginf() → Tensor
-
See
torch.isneginf()
-
isnan() → Tensor
-
See
torch.isnan()
-
is_contiguous(memory_format=torch.contiguous_format) → bool
-
Returns True if
self
tensor is contiguous in memory in the order specified by memory format.- Parameters
-
memory_format (
torch.memory_format
, optional) – Specifies memory allocation order. Default:torch.contiguous_format
.
-
is_complex() → bool
-
Returns True if the data type of
self
is a complex data type.
-
is_floating_point() → bool
-
Returns True if the data type of
self
is a floating point data type.
-
is_leaf
-
All Tensors that have
requires_grad
which isFalse
will be leaf Tensors by convention.For Tensors that have
requires_grad
which isTrue
, they will be leaf Tensors if they were created by the user. This means that they are not the result of an operation and sograd_fn
is None.Only leaf Tensors will have their
grad
populated during a call tobackward()
. To getgrad
populated for non-leaf Tensors, you can useretain_grad()
.Example:
>>> a = torch.rand(10, requires_grad=True) >>> a.is_leaf True >>> b = torch.rand(10, requires_grad=True).cuda() >>> b.is_leaf False # b was created by the operation that cast a cpu Tensor into a cuda Tensor >>> c = torch.rand(10, requires_grad=True) + 2 >>> c.is_leaf False # c was created by the addition operation >>> d = torch.rand(10).cuda() >>> d.is_leaf True # d does not require gradients and so has no operation creating it (that is tracked by the autograd engine) >>> e = torch.rand(10).cuda().requires_grad_() >>> e.is_leaf True # e requires gradients and has no operations creating it >>> f = torch.rand(10, requires_grad=True, device="cuda") >>> f.is_leaf True # f requires grad, has no operation creating it
-
is_pinned()
-
Returns true if this tensor resides in pinned memory.
-
is_set_to(tensor) → bool
-
Returns True if both tensors are pointing to the exact same memory (same storage, offset, size and stride).
-
is_shared()
-
Checks if tensor is in shared memory.
This is always
True
for CUDA tensors.
-
is_signed() → bool
-
Returns True if the data type of
self
is a signed data type.
-
is_sparse
-
Is
True
if the Tensor uses sparse storage layout,False
otherwise.
-
istft(n_fft, hop_length=None, win_length=None, window=None, center=True, normalized=False, onesided=None, length=None, return_complex=False)
[source] -
See
torch.istft()
-
isreal() → Tensor
-
See
torch.isreal()
-
item() → number
-
Returns the value of this tensor as a standard Python number. This only works for tensors with one element. For other cases, see
tolist()
.This operation is not differentiable.
Example:
>>> x = torch.tensor([1.0]) >>> x.item() 1.0
-
kthvalue(k, dim=None, keepdim=False) -> (Tensor, LongTensor)
-
See
torch.kthvalue()
-
lcm(other) → Tensor
-
See
torch.lcm()
-
lcm_(other) → Tensor
-
In-place version of
lcm()
-
ldexp(other) → Tensor
-
See
torch.ldexp()
-
ldexp_(other) → Tensor
-
In-place version of
ldexp()
-
le(other) → Tensor
-
See
torch.le()
.
-
le_(other) → Tensor
-
In-place version of
le()
.
-
less_equal(other) → Tensor
-
See
torch.less_equal()
.
-
less_equal_(other) → Tensor
-
In-place version of
less_equal()
.
-
lerp(end, weight) → Tensor
-
See
torch.lerp()
-
lerp_(end, weight) → Tensor
-
In-place version of
lerp()
-
lgamma() → Tensor
-
See
torch.lgamma()
-
lgamma_() → Tensor
-
In-place version of
lgamma()
-
log() → Tensor
-
See
torch.log()
-
log_() → Tensor
-
In-place version of
log()
-
logdet() → Tensor
-
See
torch.logdet()
-
log10() → Tensor
-
See
torch.log10()
-
log10_() → Tensor
-
In-place version of
log10()
-
log1p() → Tensor
-
See
torch.log1p()
-
log1p_() → Tensor
-
In-place version of
log1p()
-
log2() → Tensor
-
See
torch.log2()
-
log2_() → Tensor
-
In-place version of
log2()
-
log_normal_(mean=1, std=2, *, generator=None)
-
Fills
self
tensor with numbers sampled from the log-normal distribution parameterized by the given mean μ and standard deviation σ. Note that mean and std are the mean and standard deviation of the underlying normal distribution, and not of the returned distribution:
f(x) = \frac{1}{x \sigma \sqrt{2\pi}} e^{-\frac{(\ln x - \mu)^2}{2\sigma^2}}
-
logaddexp(other) → Tensor
-
logaddexp2(other) → Tensor
-
logsumexp(dim, keepdim=False) → Tensor
-
logical_and() → Tensor
-
logical_and_() → Tensor
-
In-place version of
logical_and()
-
logical_not() → Tensor
-
logical_not_() → Tensor
-
In-place version of
logical_not()
-
logical_or() → Tensor
-
logical_or_() → Tensor
-
In-place version of
logical_or()
-
logical_xor() → Tensor
-
logical_xor_() → Tensor
-
In-place version of
logical_xor()
-
logit() → Tensor
-
See
torch.logit()
-
logit_() → Tensor
-
In-place version of
logit()
-
long(memory_format=torch.preserve_format) → Tensor
-
self.long()
is equivalent toself.to(torch.int64)
. Seeto()
.- Parameters
-
memory_format (
torch.memory_format
, optional) – the desired memory format of returned Tensor. Default:torch.preserve_format
.
-
lstsq(A) -> (Tensor, Tensor)
-
See
torch.lstsq()
-
lt(other) → Tensor
-
See
torch.lt()
.
-
lt_(other) → Tensor
-
In-place version of
lt()
.
-
less()
-
lt(other) -> Tensor
See
torch.less()
.
-
less_(other) → Tensor
-
In-place version of
less()
.
-
lu(pivot=True, get_infos=False)
[source] -
See
torch.lu()
-
lu_solve(LU_data, LU_pivots) → Tensor
-
See
torch.lu_solve()
-
as_subclass(cls) → Tensor
-
Makes a
cls
instance with the same data pointer asself
. Changes in the output mirror changes inself
, and the output stays attached to the autograd graph.cls
must be a subclass ofTensor
.
-
map_(tensor, callable)
-
Applies
callable
for each element inself
tensor and the giventensor
and stores the results inself
tensor.self
tensor and the giventensor
must be broadcastable. The callable should have the signature: def callable(a, b) -> number
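A minimal sketch (CPU tensors only):
>>> a = torch.tensor([1., 2., 3.])
>>> b = torch.tensor([10., 20., 30.])
>>> a.map_(b, lambda x, y: x + y)
tensor([11., 22., 33.])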
-
masked_scatter_(mask, source)
-
Copies elements from
source
intoself
tensor at positions where themask
is True. The shape ofmask
must be broadcastable with the shape of the underlying tensor. The source should have at least as many elements as the number of ones in mask.
- Parameters
-
- mask (BoolTensor) – the boolean mask
- source (Tensor) – the tensor to copy from
Note
The
mask
operates on theself
tensor, not on the givensource
tensor.
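A minimal sketch (values from source are consumed in order):
>>> x = torch.zeros(2, 3)
>>> mask = torch.tensor([[True, False, True], [False, True, False]])
>>> source = torch.tensor([[1., 2., 3.], [4., 5., 6.]])
>>> x.masked_scatter_(mask, source)
tensor([[1., 0., 2.],
        [0., 3., 0.]])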
-
masked_scatter(mask, tensor) → Tensor
-
Out-of-place version of
torch.Tensor.masked_scatter_()
-
masked_fill_(mask, value)
-
Fills elements of
self
tensor withvalue
wheremask
is True. The shape ofmask
must be broadcastable with the shape of the underlying tensor.- Parameters
-
- mask (BoolTensor) – the boolean mask
- value (float) – the value to fill in with
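For example:
>>> x = torch.tensor([[1., 2.], [3., 4.]])
>>> mask = torch.tensor([[True, False], [False, True]])
>>> x.masked_fill_(mask, -1.0)
tensor([[-1.,  2.],
        [ 3., -1.]])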
-
masked_fill(mask, value) → Tensor
-
Out-of-place version of
torch.Tensor.masked_fill_()
-
masked_select(mask) → Tensor
-
matmul(tensor2) → Tensor
-
See
torch.matmul()
-
matrix_power(n) → Tensor
-
matrix_exp() → Tensor
-
max(dim=None, keepdim=False) -> Tensor or (Tensor, Tensor)
-
See
torch.max()
-
maximum(other) → Tensor
-
See
torch.maximum()
-
mean(dim=None, keepdim=False) -> Tensor or (Tensor, Tensor)
-
See
torch.mean()
-
median(dim=None, keepdim=False) -> (Tensor, LongTensor)
-
See
torch.median()
-
nanmedian(dim=None, keepdim=False) -> (Tensor, LongTensor)
-
min(dim=None, keepdim=False) -> Tensor or (Tensor, Tensor)
-
See
torch.min()
-
minimum(other) → Tensor
-
See
torch.minimum()
-
mm(mat2) → Tensor
-
See
torch.mm()
-
smm(mat) → Tensor
-
See
torch.smm()
-
mode(dim=None, keepdim=False) -> (Tensor, LongTensor)
-
See
torch.mode()
-
movedim(source, destination) → Tensor
-
See
torch.movedim()
-
moveaxis(source, destination) → Tensor
-
See
torch.moveaxis()
-
msort() → Tensor
-
See
torch.msort()
-
mul(value) → Tensor
-
See
torch.mul()
.
-
mul_(value) → Tensor
-
In-place version of
mul()
.
-
multiply(value) → Tensor
-
See
torch.multiply()
.
-
multiply_(value) → Tensor
-
In-place version of
multiply()
.
-
multinomial(num_samples, replacement=False, *, generator=None) → Tensor
-
mv(vec) → Tensor
-
See
torch.mv()
-
mvlgamma(p) → Tensor
-
See
torch.mvlgamma()
-
mvlgamma_(p) → Tensor
-
In-place version of
mvlgamma()
-
nansum(dim=None, keepdim=False, dtype=None) → Tensor
-
See
torch.nansum()
-
narrow(dimension, start, length) → Tensor
-
See
torch.narrow()
Example:
>>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) >>> x.narrow(0, 0, 2) tensor([[ 1, 2, 3], [ 4, 5, 6]]) >>> x.narrow(1, 1, 2) tensor([[ 2, 3], [ 5, 6], [ 8, 9]])
-
narrow_copy(dimension, start, length) → Tensor
-
Same as
Tensor.narrow()
except returning a copy rather than shared storage. This is primarily for sparse tensors, which do not have a shared-storage narrow method. Calling narrow_copy with dimension > self.sparse_dim() will return a copy with the relevant dense dimension narrowed, and self.shape updated accordingly.
-
ndimension() → int
-
Alias for
dim()
-
nan_to_num(nan=0.0, posinf=None, neginf=None) → Tensor
-
See
torch.nan_to_num()
.
-
nan_to_num_(nan=0.0, posinf=None, neginf=None) → Tensor
-
In-place version of
nan_to_num()
.
-
ne(other) → Tensor
-
See
torch.ne()
.
-
ne_(other) → Tensor
-
In-place version of
ne()
.
-
not_equal(other) → Tensor
-
See
torch.not_equal()
.
-
not_equal_(other) → Tensor
-
In-place version of
not_equal()
.
-
neg() → Tensor
-
See
torch.neg()
-
neg_() → Tensor
-
In-place version of
neg()
-
negative() → Tensor
-
See
torch.negative()
-
negative_() → Tensor
-
In-place version of
negative()
-
nelement() → int
-
Alias for
numel()
-
nextafter(other) → Tensor
-
nextafter_(other) → Tensor
-
In-place version of
nextafter()
-
nonzero() → LongTensor
-
See
torch.nonzero()
-
norm(p='fro', dim=None, keepdim=False, dtype=None)
[source] -
See
torch.norm()
-
normal_(mean=0, std=1, *, generator=None) → Tensor
-
Fills
self
tensor with elements sampled from the normal distribution parameterized by mean and std.
-
numel() → int
-
See
torch.numel()
-
numpy() → numpy.ndarray
-
Returns
self
tensor as a NumPyndarray
. This tensor and the returnedndarray
share the same underlying storage. Changes toself
tensor will be reflected in thendarray
and vice versa.
-
orgqr(input2) → Tensor
-
See
torch.orgqr()
-
ormqr(input2, input3, left=True, transpose=False) → Tensor
-
See
torch.ormqr()
-
outer(vec2) → Tensor
-
See
torch.outer()
.
-
permute(*dims) → Tensor
-
Returns a view of the original tensor with its dimensions permuted.
- Parameters
-
*dims (int...) – The desired ordering of dimensions
Example
>>> x = torch.randn(2, 3, 5) >>> x.size() torch.Size([2, 3, 5]) >>> x.permute(2, 0, 1).size() torch.Size([5, 2, 3])
-
pin_memory() → Tensor
-
Copies the tensor to pinned memory, if it’s not already pinned.
-
pinverse() → Tensor
-
See
torch.pinverse()
-
polygamma(n) → Tensor
-
polygamma_(n) → Tensor
-
In-place version of
polygamma()
-
pow(exponent) → Tensor
-
See
torch.pow()
-
pow_(exponent) → Tensor
-
In-place version of
pow()
-
prod(dim=None, keepdim=False, dtype=None) → Tensor
-
See
torch.prod()
-
put_(indices, tensor, accumulate=False) → Tensor
-
Copies the elements from
tensor
into the positions specified by indices. For the purpose of indexing, theself
tensor is treated as if it were a 1-D tensor.If
accumulate
isTrue
, the elements intensor
are added toself
. If accumulate isFalse
, the behavior is undefined if indices contain duplicate elements.- Parameters
Example:
>>> src = torch.tensor([[4, 3, 5], ... [6, 7, 8]]) >>> src.put_(torch.tensor([1, 3]), torch.tensor([9, 10])) tensor([[ 4, 9, 5], [ 10, 7, 8]])
-
qr(some=True) -> (Tensor, Tensor)
-
See
torch.qr()
-
qscheme() → torch.qscheme
-
Returns the quantization scheme of a given QTensor.
-
quantile(q, dim=None, keepdim=False) → Tensor
-
See
torch.quantile()
-
nanquantile(q, dim=None, keepdim=False) → Tensor
-
q_scale() → float
-
Given a Tensor quantized by linear(affine) quantization, returns the scale of the underlying quantizer().
-
q_zero_point() → int
-
Given a Tensor quantized by linear(affine) quantization, returns the zero_point of the underlying quantizer().
-
q_per_channel_scales() → Tensor
-
Given a Tensor quantized by linear (affine) per-channel quantization, returns a Tensor of scales of the underlying quantizer. It has the number of elements that matches the corresponding dimensions (from q_per_channel_axis) of the tensor.
-
q_per_channel_zero_points() → Tensor
-
Given a Tensor quantized by linear (affine) per-channel quantization, returns a tensor of zero_points of the underlying quantizer. It has the number of elements that matches the corresponding dimensions (from q_per_channel_axis) of the tensor.
-
q_per_channel_axis() → int
-
Given a Tensor quantized by linear (affine) per-channel quantization, returns the index of dimension on which per-channel quantization is applied.
-
rad2deg() → Tensor
-
See
torch.rad2deg()
-
random_(from=0, to=None, *, generator=None) → Tensor
-
Fills
self
tensor with numbers sampled from the discrete uniform distribution over[from, to - 1]
. If not specified, the values are usually only bounded byself
tensor’s data type. However, for floating point types, if unspecified, range will be[0, 2^mantissa]
to ensure that every value is representable. For example,torch.tensor(1, dtype=torch.double).random_()
will be uniform in[0, 2^53]
.
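A minimal sketch:
>>> t = torch.empty(4, dtype=torch.int64)
>>> t.random_(0, 10)     # uniform over the integers 0..9
>>> t.random_(2)         # uniform over {0, 1}
>>> t.random_()          # bounded only by the dtype's representable range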
-
ravel(input) → Tensor
-
see
torch.ravel()
-
reciprocal() → Tensor
-
reciprocal_() → Tensor
-
In-place version of
reciprocal()
-
record_stream(stream)
-
Ensures that the tensor memory is not reused for another tensor until all current work queued on
stream
are complete.Note
The caching allocator is aware of only the stream where a tensor was allocated. Due to the awareness, it already correctly manages the life cycle of tensors on only one stream. But if a tensor is used on a stream different from the stream of origin, the allocator might reuse the memory unexpectedly. Calling this method lets the allocator know which streams have used the tensor.
-
register_hook(hook)
[source] -
Registers a backward hook.
The hook will be called every time a gradient with respect to the Tensor is computed. The hook should have the following signature:
hook(grad) -> Tensor or None
The hook should not modify its argument, but it can optionally return a new gradient which will be used in place of
grad
.This function returns a handle with a method
handle.remove()
that removes the hook from the module.Example:
>>> v = torch.tensor([0., 0., 0.], requires_grad=True) >>> h = v.register_hook(lambda grad: grad * 2) # double the gradient >>> v.backward(torch.tensor([1., 2., 3.])) >>> v.grad 2 4 6 [torch.FloatTensor of size (3,)] >>> h.remove() # removes the hook
-
remainder(divisor) → Tensor
-
remainder_(divisor) → Tensor
-
In-place version of
remainder()
-
renorm(p, dim, maxnorm) → Tensor
-
See
torch.renorm()
-
renorm_(p, dim, maxnorm) → Tensor
-
In-place version of
renorm()
-
repeat(*sizes) → Tensor
-
Repeats this tensor along the specified dimensions.
Unlike
expand()
, this function copies the tensor’s data.Warning
repeat()
behaves differently from numpy.repeat, but is more similar to numpy.tile. For the operator similar tonumpy.repeat
, seetorch.repeat_interleave()
.- Parameters
-
sizes (torch.Size or int...) – The number of times to repeat this tensor along each dimension
Example:
>>> x = torch.tensor([1, 2, 3]) >>> x.repeat(4, 2) tensor([[ 1, 2, 3, 1, 2, 3], [ 1, 2, 3, 1, 2, 3], [ 1, 2, 3, 1, 2, 3], [ 1, 2, 3, 1, 2, 3]]) >>> x.repeat(4, 2, 1).size() torch.Size([4, 2, 3])
-
repeat_interleave(repeats, dim=None) → Tensor
-
requires_grad
-
Is
True
if gradients need to be computed for this Tensor,False
otherwise.
-
requires_grad_(requires_grad=True) → Tensor
-
Change if autograd should record operations on this tensor: sets this tensor’s
requires_grad
attribute in-place. Returns this tensor.requires_grad_()
’s main use case is to tell autograd to begin recording operations on a Tensortensor
. Iftensor
hasrequires_grad=False
(because it was obtained through a DataLoader, or required preprocessing or initialization),tensor.requires_grad_()
makes it so that autograd will begin to record operations ontensor
.- Parameters
-
requires_grad (bool) – If autograd should record operations on this tensor. Default:
True
.
Example:
>>> # Let's say we want to preprocess some saved weights and use >>> # the result as new weights. >>> saved_weights = [0.1, 0.2, 0.3, 0.25] >>> loaded_weights = torch.tensor(saved_weights) >>> weights = preprocess(loaded_weights) # some function >>> weights tensor([-0.5503, 0.4926, -2.1158, -0.8303]) >>> # Now, start to record operations done to weights >>> weights.requires_grad_() >>> out = weights.pow(2).sum() >>> out.backward() >>> weights.grad tensor([-1.1007, 0.9853, -4.2316, -1.6606])
-
reshape(*shape) → Tensor
-
Returns a tensor with the same data and number of elements as
self
but with the specified shape. This method returns a view ifshape
is compatible with the current shape. Seetorch.Tensor.view()
on when it is possible to return a view.See
torch.reshape()
- Parameters
-
shape (tuple of python:ints or int...) – the desired shape
-
reshape_as(other) → Tensor
-
Returns this tensor as the same shape as
other
.self.reshape_as(other)
is equivalent toself.reshape(other.sizes())
. This method returns a view ifother.sizes()
is compatible with the current shape. Seetorch.Tensor.view()
on when it is possible to return a view.Please see
reshape()
for more information aboutreshape
.- Parameters
-
other (
torch.Tensor
) – The result tensor has the same shape asother
.
-
resize_(*sizes, memory_format=torch.contiguous_format) → Tensor
-
Resizes
self
tensor to the specified size. If the number of elements is larger than the current storage size, then the underlying storage is resized to fit the new number of elements. If the number of elements is smaller, the underlying storage is not changed. Existing elements are preserved but any new memory is uninitialized.Warning
This is a low-level method. The storage is reinterpreted as C-contiguous, ignoring the current strides (unless the target size equals the current size, in which case the tensor is left unchanged). For most purposes, you will instead want to use
view()
, which checks for contiguity, orreshape()
, which copies data if needed. To change the size in-place with custom strides, seeset_()
.- Parameters
-
- sizes (torch.Size or int...) – the desired size
-
memory_format (
torch.memory_format
, optional) – the desired memory format of Tensor. Default:torch.contiguous_format
. Note that memory format ofself
is going to be unaffected ifself.size()
matchessizes
.
Example:
>>> x = torch.tensor([[1, 2], [3, 4], [5, 6]]) >>> x.resize_(2, 2) tensor([[ 1, 2], [ 3, 4]])
-
resize_as_(tensor, memory_format=torch.contiguous_format) → Tensor
-
Resizes the
self
tensor to be the same size as the specifiedtensor
. This is equivalent toself.resize_(tensor.size())
.- Parameters
-
memory_format (
torch.memory_format
, optional) – the desired memory format of Tensor. Default:torch.contiguous_format
. Note that memory format ofself
is going to be unaffected ifself.size()
matchestensor.size()
.
-
retain_grad()
[source] -
Enables .grad attribute for non-leaf Tensors.
-
roll(shifts, dims) → Tensor
-
See
torch.roll()
-
rot90(k, dims) → Tensor
-
See
torch.rot90()
-
round() → Tensor
-
See
torch.round()
-
round_() → Tensor
-
In-place version of
round()
-
rsqrt() → Tensor
-
See
torch.rsqrt()
-
rsqrt_() → Tensor
-
In-place version of
rsqrt()
-
scatter(dim, index, src) → Tensor
-
Out-of-place version of
torch.Tensor.scatter_()
-
scatter_(dim, index, src, reduce=None) → Tensor
-
Writes all values from the tensor
src
intoself
at the indices specified in theindex
tensor. For each value insrc
, its output index is specified by its index insrc
fordimension != dim
and by the corresponding value inindex
fordimension = dim
.For a 3-D tensor,
self
is updated as:self[index[i][j][k]][j][k] = src[i][j][k] # if dim == 0 self[i][index[i][j][k]][k] = src[i][j][k] # if dim == 1 self[i][j][index[i][j][k]] = src[i][j][k] # if dim == 2
This is the reverse operation of the manner described in
gather()
.self
,index
andsrc
(if it is a Tensor) should all have the same number of dimensions. It is also required thatindex.size(d) <= src.size(d)
for all dimensionsd
, and thatindex.size(d) <= self.size(d)
for all dimensionsd != dim
. Note thatindex
andsrc
do not broadcast.Moreover, as for
gather()
, the values ofindex
must be between0
andself.size(dim) - 1
inclusive.Warning
When indices are not unique, the behavior is non-deterministic (one of the values from
src
will be picked arbitrarily) and the gradient will be incorrect (it will be propagated to all locations in the source that correspond to the same index)!Note
The backward pass is implemented only for
src.shape == index.shape
.Additionally accepts an optional
reduce
argument that allows specification of an optional reduction operation, which is applied to all values in the tensorsrc
intoself
at the indices specified in the index
. For each value insrc
, the reduction operation is applied to an index inself
which is specified by its index insrc
fordimension != dim
and by the corresponding value inindex
fordimension = dim
.Given a 3-D tensor and reduction using the multiplication operation,
self
is updated as:self[index[i][j][k]][j][k] *= src[i][j][k] # if dim == 0 self[i][index[i][j][k]][k] *= src[i][j][k] # if dim == 1 self[i][j][index[i][j][k]] *= src[i][j][k] # if dim == 2
Reducing with the addition operation is the same as using
scatter_add_()
.- Parameters
-
- dim (int) – the axis along which to index
-
index (LongTensor) – the indices of elements to scatter, can be either empty or of the same dimensionality as
src
. When empty, the operation returnsself
unchanged. - src (Tensor or float) – the source element(s) to scatter.
-
reduce (str, optional) – reduction operation to apply, can be either
'add'
or'multiply'
.
Example:
>>> src = torch.arange(1, 11).reshape((2, 5)) >>> src tensor([[ 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10]]) >>> index = torch.tensor([[0, 1, 2, 0]]) >>> torch.zeros(3, 5, dtype=src.dtype).scatter_(0, index, src) tensor([[1, 0, 0, 4, 0], [0, 2, 0, 0, 0], [0, 0, 3, 0, 0]]) >>> index = torch.tensor([[0, 1, 2], [0, 1, 4]]) >>> torch.zeros(3, 5, dtype=src.dtype).scatter_(1, index, src) tensor([[1, 2, 3, 0, 0], [6, 7, 0, 0, 8], [0, 0, 0, 0, 0]]) >>> torch.full((2, 4), 2.).scatter_(1, torch.tensor([[2], [3]]), ... 1.23, reduce='multiply') tensor([[2.0000, 2.0000, 2.4600, 2.0000], [2.0000, 2.0000, 2.0000, 2.4600]]) >>> torch.full((2, 4), 2.).scatter_(1, torch.tensor([[2], [3]]), ... 1.23, reduce='add') tensor([[2.0000, 2.0000, 3.2300, 2.0000], [2.0000, 2.0000, 2.0000, 3.2300]])
-
scatter_add_(dim, index, src) → Tensor
-
Adds all values from the tensor src into self at the indices specified in the index tensor in a similar fashion as scatter_(). For each value in src, it is added to an index in self which is specified by its index in src for dimension != dim and by the corresponding value in index for dimension = dim.
For a 3-D tensor, self is updated as:
self[index[i][j][k]][j][k] += src[i][j][k]  # if dim == 0
self[i][index[i][j][k]][k] += src[i][j][k]  # if dim == 1
self[i][j][index[i][j][k]] += src[i][j][k]  # if dim == 2
self, index and src should have the same number of dimensions. It is also required that index.size(d) <= src.size(d) for all dimensions d, and that index.size(d) <= self.size(d) for all dimensions d != dim. Note that index and src do not broadcast.
Note
This operation may behave nondeterministically when given tensors on a CUDA device. See Reproducibility for more information.
Note
The backward pass is implemented only for src.shape == index.shape.
- Parameters
-
- dim (int) – the axis along which to index
- index (LongTensor) – the indices of elements to scatter and add, can be either empty or of the same dimensionality as src. When empty, the operation returns self unchanged.
- src (Tensor) – the source elements to scatter and add
Example:
>>> src = torch.ones((2, 5))
>>> index = torch.tensor([[0, 1, 2, 0, 0]])
>>> torch.zeros(3, 5, dtype=src.dtype).scatter_add_(0, index, src)
tensor([[1., 0., 0., 1., 1.],
        [0., 1., 0., 0., 0.],
        [0., 0., 1., 0., 0.]])
>>> index = torch.tensor([[0, 1, 2, 0, 0], [0, 1, 2, 2, 2]])
>>> torch.zeros(3, 5, dtype=src.dtype).scatter_add_(0, index, src)
tensor([[2., 0., 0., 1., 1.],
        [0., 2., 0., 0., 0.],
        [0., 0., 2., 1., 1.]])
-
scatter_add(dim, index, src) → Tensor
-
Out-of-place version of
torch.Tensor.scatter_add_()
-
select(dim, index) → Tensor
-
Slices the self tensor along the selected dimension at the given index. This function returns a view of the original tensor with the given dimension removed.
Note
select() is equivalent to slicing. For example, tensor.select(0, index) is equivalent to tensor[index] and tensor.select(2, index) is equivalent to tensor[:,:,index].
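The slicing equivalence can be checked directly; the following is a minimal illustrative sketch (not part of the original reference):
import torch

x = torch.arange(24).reshape(2, 3, 4)
a = x.select(1, 2)        # view with dimension 1 removed; shape (2, 4)
b = x[:, 2]               # equivalent slicing
print(torch.equal(a, b))  # True
print(a.shape)            # torch.Size([2, 4])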
-
set_(source=None, storage_offset=0, size=None, stride=None) → Tensor
-
Sets the underlying storage, size, and strides. If source is a tensor, self tensor will share the same storage and have the same size and strides as source. Changes to elements in one tensor will be reflected in the other.
If source is a Storage, the method sets the underlying storage, offset, size, and stride.
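A minimal sketch (not part of the original reference) of the storage sharing described above:
import torch

a = torch.zeros(3)
b = torch.tensor([1., 2., 3.])
a.set_(b)        # a now uses b's storage, size, and strides
b[0] = 42.
print(a)         # tensor([42.,  2.,  3.]) -- the change is visible through a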
-
share_memory_()
-
Moves the underlying storage to shared memory.
This is a no-op if the underlying storage is already in shared memory and for CUDA tensors. Tensors in shared memory cannot be resized.
-
short(memory_format=torch.preserve_format) → Tensor
-
self.short() is equivalent to self.to(torch.int16). See to().
- Parameters
-
memory_format (torch.memory_format, optional) – the desired memory format of returned Tensor. Default: torch.preserve_format.
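A minimal sketch (not part of the original reference) of the equivalence with to(torch.int16):
import torch

x = torch.tensor([1.7, -2.3, 3.0])
print(x.short())          # tensor([ 1, -2,  3], dtype=torch.int16)
print(x.to(torch.int16))  # same result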
-
sigmoid() → Tensor
-
See
torch.sigmoid()
-
sigmoid_() → Tensor
-
In-place version of
sigmoid()
-
sign() → Tensor
-
See
torch.sign()
-
sign_() → Tensor
-
In-place version of
sign()
-
signbit() → Tensor
-
See
torch.signbit()
-
sgn() → Tensor
-
See
torch.sgn()
-
sgn_() → Tensor
-
In-place version of
sgn()
-
sin() → Tensor
-
See
torch.sin()
-
sin_() → Tensor
-
In-place version of
sin()
-
sinc() → Tensor
-
See
torch.sinc()
-
sinc_() → Tensor
-
In-place version of
sinc()
-
sinh() → Tensor
-
See
torch.sinh()
-
sinh_() → Tensor
-
In-place version of
sinh()
-
asinh() → Tensor
-
See
torch.asinh()
-
asinh_() → Tensor
-
In-place version of
asinh()
-
arcsinh() → Tensor
-
See
torch.arcsinh()
-
arcsinh_() → Tensor
-
In-place version of
arcsinh()
-
size() → torch.Size
-
Returns the size of the self tensor. The returned value is a subclass of tuple.
Example:
>>> torch.empty(3, 4, 5).size()
torch.Size([3, 4, 5])
-
slogdet() -> (Tensor, Tensor)
-
See
torch.slogdet()
-
solve(A) → Tensor, Tensor
-
See
torch.solve()
-
sort(dim=-1, descending=False) -> (Tensor, LongTensor)
-
See
torch.sort()
-
split(split_size, dim=0)
[source] -
See
torch.split()
-
sparse_mask(mask) → Tensor
-
Returns a new sparse tensor with values from a strided tensor self filtered by the indices of the sparse tensor mask. The values of the mask sparse tensor are ignored. The self and mask tensors must have the same shape.
Note
The returned sparse tensor has the same indices as the sparse tensor mask, even when the corresponding values in self are zeros.
- Parameters
-
mask (Tensor) – a sparse tensor whose indices are used as a filter
Example:
>>> nse = 5
>>> dims = (5, 5, 2, 2)
>>> I = torch.cat([torch.randint(0, dims[0], size=(nse,)),
...                torch.randint(0, dims[1], size=(nse,))], 0).reshape(2, nse)
>>> V = torch.randn(nse, dims[2], dims[3])
>>> S = torch.sparse_coo_tensor(I, V, dims).coalesce()
>>> D = torch.randn(dims)
>>> D.sparse_mask(S)
tensor(indices=tensor([[0, 0, 0, 2],
                       [0, 1, 4, 3]]),
       values=tensor([[[ 1.6550,  0.2397],
                       [-0.1611, -0.0779]],
                      [[ 0.2326, -1.0558],
                       [ 1.4711,  1.9678]],
                      [[-0.5138, -0.0411],
                       [ 1.9417,  0.5158]],
                      [[ 0.0793,  0.0036],
                       [-0.2569, -0.1055]]]),
       size=(5, 5, 2, 2), nnz=4, layout=torch.sparse_coo)
-
sparse_dim() → int
-
Return the number of sparse dimensions in a sparse tensor self.
Warning
Throws an error if self is not a sparse tensor.
See also
Tensor.dense_dim() and hybrid tensors.
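A minimal sketch (not part of the original reference) with a hybrid sparse tensor, where sparse_dim() and dense_dim() together account for all dimensions:
import torch

i = torch.tensor([[0, 1]])                 # indices for 1 sparse dimension, 2 entries
v = torch.tensor([[1., 2.], [3., 4.]])     # each entry carries a dense 2-vector
s = torch.sparse_coo_tensor(i, v, (3, 2))  # hybrid: 1 sparse dim + 1 dense dim
print(s.sparse_dim())  # 1
print(s.dense_dim())   # 1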
-
sqrt() → Tensor
-
See
torch.sqrt()
-
sqrt_() → Tensor
-
In-place version of
sqrt()
-
square() → Tensor
-
See
torch.square()
-
square_() → Tensor
-
In-place version of
square()
-
squeeze(dim=None) → Tensor
-
See
torch.squeeze()
-
squeeze_(dim=None) → Tensor
-
In-place version of
squeeze()
-
std(dim=None, unbiased=True, keepdim=False) → Tensor
-
See
torch.std()
-
stft(n_fft, hop_length=None, win_length=None, window=None, center=True, pad_mode='reflect', normalized=False, onesided=None, return_complex=None)
[source] -
See
torch.stft()
Warning
This function changed signature at version 0.4.1. Calling with the previous signature may cause an error or return an incorrect result.
-
storage() → torch.Storage
-
Returns the underlying storage.
-
storage_offset() → int
-
Returns self tensor’s offset in the underlying storage in terms of number of storage elements (not bytes).
Example:
>>> x = torch.tensor([1, 2, 3, 4, 5])
>>> x.storage_offset()
0
>>> x[3:].storage_offset()
3
-
storage_type() → type
-
Returns the type of the underlying storage.
-
stride(dim) → tuple or int
-
Returns the stride of self tensor.
Stride is the jump necessary to go from one element to the next one in the specified dimension dim. A tuple of all strides is returned when no argument is passed in. Otherwise, an integer value is returned as the stride in the particular dimension dim.
- Parameters
-
dim (int, optional) – the desired dimension in which stride is required
Example:
>>> x = torch.tensor([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]])
>>> x.stride()
(5, 1)
>>> x.stride(0)
5
>>> x.stride(-1)
1
-
sub(other, *, alpha=1) → Tensor
-
See
torch.sub()
.
-
sub_(other, *, alpha=1) → Tensor
-
In-place version of
sub()
-
subtract(other, *, alpha=1) → Tensor
-
See
torch.subtract()
.
-
subtract_(other, *, alpha=1) → Tensor
-
In-place version of
subtract()
.
-
sum(dim=None, keepdim=False, dtype=None) → Tensor
-
See
torch.sum()
-
sum_to_size(*size) → Tensor
-
Sum this tensor to size. size must be broadcastable to this tensor's size.
- Parameters
-
size (int...) – a sequence of integers defining the shape of the output tensor.
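A minimal sketch (not part of the original reference): summing a (2, 3) tensor down to broadcastable shapes:
import torch

x = torch.ones(2, 3)
print(x.sum_to_size(1, 3))  # tensor([[2., 2., 2.]]) -- summed over dim 0, kept as size 1
print(x.sum_to_size(2, 1))  # tensor([[3.], [3.]])   -- summed over dim 1, kept as size 1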
-
svd(some=True, compute_uv=True) -> (Tensor, Tensor, Tensor)
-
See
torch.svd()
-
swapaxes(axis0, axis1) → Tensor
-
See
torch.swapaxes()
-
swapdims(dim0, dim1) → Tensor
-
See
torch.swapdims()
-
symeig(eigenvectors=False, upper=True) -> (Tensor, Tensor)
-
See
torch.symeig()
-
t() → Tensor
-
See
torch.t()
-
t_() → Tensor
-
In-place version of
t()
-
tensor_split(indices_or_sections, dim=0) → List of Tensors
-
See
torch.tensor_split()
-
tile(*reps) → Tensor
-
See
torch.tile()
-
to(*args, **kwargs) → Tensor
-
Performs Tensor dtype and/or device conversion. A torch.dtype and torch.device are inferred from the arguments of self.to(*args, **kwargs).
Note
If the self Tensor already has the correct torch.dtype and torch.device, then self is returned. Otherwise, the returned tensor is a copy of self with the desired torch.dtype and torch.device.
Here are the ways to call to
:-
to(dtype, non_blocking=False, copy=False, memory_format=torch.preserve_format) → Tensor
-
Returns a Tensor with the specified dtype
- Args:
-
memory_format (torch.memory_format, optional): the desired memory format of returned Tensor. Default: torch.preserve_format.
-
to(device=None, dtype=None, non_blocking=False, copy=False, memory_format=torch.preserve_format) → Tensor
-
Returns a Tensor with the specified device and (optional) dtype. If dtype is None it is inferred to be self.dtype. When non_blocking, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor. When copy is set, a new Tensor is created even when the Tensor already matches the desired conversion.
- Args:
-
memory_format (torch.memory_format, optional): the desired memory format of returned Tensor. Default: torch.preserve_format.
-
to(other, non_blocking=False, copy=False) → Tensor
-
Returns a Tensor with the same torch.dtype and torch.device as the Tensor other. When non_blocking, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor. When copy is set, a new Tensor is created even when the Tensor already matches the desired conversion.
Example:
>>> tensor = torch.randn(2, 2)  # Initially dtype=float32, device=cpu
>>> tensor.to(torch.float64)
tensor([[-0.5044,  0.0005],
        [ 0.3310, -0.0584]], dtype=torch.float64)
>>> cuda0 = torch.device('cuda:0')
>>> tensor.to(cuda0)
tensor([[-0.5044,  0.0005],
        [ 0.3310, -0.0584]], device='cuda:0')
>>> tensor.to(cuda0, dtype=torch.float64)
tensor([[-0.5044,  0.0005],
        [ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0')
>>> other = torch.randn((), dtype=torch.float64, device=cuda0)
>>> tensor.to(other, non_blocking=True)
tensor([[-0.5044,  0.0005],
        [ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0')
-
-
to_mkldnn() → Tensor
-
Returns a copy of the tensor in
torch.mkldnn
layout.
-
take(indices) → Tensor
-
See
torch.take()
-
tan() → Tensor
-
See
torch.tan()
-
tan_() → Tensor
-
In-place version of
tan()
-
tanh() → Tensor
-
See
torch.tanh()
-
tanh_() → Tensor
-
In-place version of
tanh()
-
atanh() → Tensor
-
See
torch.atanh()
-
atanh_() → Tensor
-
In-place version of
atanh()
-
arctanh() → Tensor
-
See
torch.arctanh()
-
arctanh_() → Tensor
-
In-place version of
arctanh()
-
tolist() → list or number
-
Returns the tensor as a (nested) list. For scalars, a standard Python number is returned, just like with item(). Tensors are automatically moved to the CPU first if necessary.
This operation is not differentiable.
Examples:
>>> a = torch.randn(2, 2)
>>> a.tolist()
[[0.012766935862600803, 0.5415473580360413],
 [-0.08909505605697632, 0.7729271650314331]]
>>> a[0,0].tolist()
0.012766935862600803
-
topk(k, dim=None, largest=True, sorted=True) -> (Tensor, LongTensor)
-
See
torch.topk()
-
to_sparse(sparseDims) → Tensor
-
Returns a sparse copy of the tensor. PyTorch supports sparse tensors in coordinate format.
- Parameters
-
sparseDims (int, optional) – the number of sparse dimensions to include in the new sparse tensor
Example:
>>> d = torch.tensor([[0, 0, 0], [9, 0, 10], [0, 0, 0]])
>>> d
tensor([[ 0,  0,  0],
        [ 9,  0, 10],
        [ 0,  0,  0]])
>>> d.to_sparse()
tensor(indices=tensor([[1, 1],
                       [0, 2]]),
       values=tensor([ 9, 10]),
       size=(3, 3), nnz=2, layout=torch.sparse_coo)
>>> d.to_sparse(1)
tensor(indices=tensor([[1]]),
       values=tensor([[ 9,  0, 10]]),
       size=(3, 3), nnz=1, layout=torch.sparse_coo)
-
trace() → Tensor
-
See
torch.trace()
-
transpose(dim0, dim1) → Tensor
-
See
torch.transpose()
-
transpose_(dim0, dim1) → Tensor
-
In-place version of
transpose()
-
triangular_solve(A, upper=True, transpose=False, unitriangular=False) -> (Tensor, Tensor)
-
See
torch.triangular_solve()
-
tril(k=0) → Tensor
-
See
torch.tril()
-
tril_(k=0) → Tensor
-
In-place version of
tril()
-
triu(k=0) → Tensor
-
See
torch.triu()
-
triu_(k=0) → Tensor
-
In-place version of
triu()
-
true_divide(value) → Tensor
-
See
torch.true_divide()
-
true_divide_(value) → Tensor
-
In-place version of
true_divide()
-
trunc() → Tensor
-
See
torch.trunc()
-
trunc_() → Tensor
-
In-place version of
trunc()
-
type(dtype=None, non_blocking=False, **kwargs) → str or Tensor
-
Returns the type if dtype is not provided, else casts this object to the specified type.
If this is already of the correct type, no copy is performed and the original object is returned.
- Parameters
-
- dtype (type or string) – The desired type
- non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.
- **kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.
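A minimal sketch (not part of the original reference) of calling type() with and without a dtype:
import torch

x = torch.zeros(2)                 # default dtype: float32
print(x.type())                    # 'torch.FloatTensor'
y = x.type(torch.int64)            # cast to a new LongTensor
print(y.type())                    # 'torch.LongTensor'
print(x.type(torch.float32) is x)  # True: already the right type, so no copy is made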
-
type_as(tensor) → Tensor
-
Returns this tensor cast to the type of the given tensor.
This is a no-op if the tensor is already of the correct type. This is equivalent to
self.type(tensor.type())
- Parameters
-
tensor (Tensor) – the tensor which has the desired type
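A minimal sketch (not part of the original reference):
import torch

x = torch.tensor([1, 2, 3])   # int64
t = torch.tensor([0.5])       # float32
y = x.type_as(t)              # same as x.type(t.type())
print(y.dtype)                # torch.float32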
-
unbind(dim=0) → seq
-
See
torch.unbind()
-
unfold(dimension, size, step) → Tensor
-
Returns a view of the original tensor which contains all slices of size size from self tensor in the dimension dimension.
Step between two slices is given by step.
If sizedim is the size of dimension dimension for self, the size of dimension dimension in the returned tensor will be (sizedim - size) / step + 1.
An additional dimension of size size is appended in the returned tensor.
- Parameters
-
- dimension (int) – dimension in which unfolding happens
- size (int) – the size of each slice that is unfolded
- step (int) – the step between each slice
Example:
>>> x = torch.arange(1., 8)
>>> x
tensor([ 1.,  2.,  3.,  4.,  5.,  6.,  7.])
>>> x.unfold(0, 2, 1)
tensor([[ 1.,  2.],
        [ 2.,  3.],
        [ 3.,  4.],
        [ 4.,  5.],
        [ 5.,  6.],
        [ 6.,  7.]])
>>> x.unfold(0, 2, 2)
tensor([[ 1.,  2.],
        [ 3.,  4.],
        [ 5.,  6.]])
-
uniform_(from=0, to=1) → Tensor
-
Fills self tensor with numbers sampled from the continuous uniform distribution:
P(x) = 1 / (to - from)
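A minimal sketch (not part of the original reference); the sampled values themselves will vary from run to run:
import torch

x = torch.empty(3)
x.uniform_(0, 10)   # fill in place with samples from U(0, 10)
print(x)            # e.g. tensor([7.3178, 0.8519, 4.4021]); values differ per run
print(((x >= 0) & (x < 10)).all())  # tensor(True)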
-
unique(sorted=True, return_inverse=False, return_counts=False, dim=None)
[source] -
Returns the unique elements of the input tensor.
See
torch.unique()
-
unique_consecutive(return_inverse=False, return_counts=False, dim=None)
[source] -
Eliminates all but the first element from every consecutive group of equivalent elements.
See
torch.unique_consecutive()
-
unsqueeze(dim) → Tensor
-
See
torch.unsqueeze()
-
unsqueeze_(dim) → Tensor
-
In-place version of
unsqueeze()
-
values() → Tensor
-
Return the values tensor of a sparse COO tensor.
Warning
Throws an error if self is not a sparse COO tensor.
See also
Tensor.indices().
Note
This method can only be called on a coalesced sparse tensor. See
Tensor.coalesce()
for details.
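A minimal sketch (not part of the original reference), using a coalesced COO tensor as required:
import torch

i = torch.tensor([[0, 1, 1],
                  [2, 0, 2]])
v = torch.tensor([3., 4., 5.])
s = torch.sparse_coo_tensor(i, v, (2, 3)).coalesce()  # values() needs a coalesced tensor
print(s.values())   # tensor([3., 4., 5.])
print(s.indices())  # tensor([[0, 1, 1],
                    #         [2, 0, 2]])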
-
var(dim=None, unbiased=True, keepdim=False) → Tensor
-
See
torch.var()
-
vdot(other) → Tensor
-
See
torch.vdot()
-
view(*shape) → Tensor
-
Returns a new tensor with the same data as the self tensor but of a different shape.
The returned tensor shares the same data and must have the same number of elements, but may have a different size. For a tensor to be viewed, the new view size must be compatible with its original size and stride, i.e., each new view dimension must either be a subspace of an original dimension, or only span across original dimensions d, d+1, ..., d+k that satisfy the following contiguity-like condition that, for all i = d, ..., d+k-1,
stride[i] = stride[i+1] * size[i+1]
Otherwise, it will not be possible to view self tensor as shape without copying it (e.g., via contiguous()). When it is unclear whether a view() can be performed, it is advisable to use reshape(), which returns a view if the shapes are compatible, and copies (equivalent to calling contiguous()) otherwise.
- Parameters
-
shape (torch.Size or int...) – the desired size
Example:
>>> x = torch.randn(4, 4)
>>> x.size()
torch.Size([4, 4])
>>> y = x.view(16)
>>> y.size()
torch.Size([16])
>>> z = x.view(-1, 8)  # the size -1 is inferred from other dimensions
>>> z.size()
torch.Size([2, 8])

>>> a = torch.randn(1, 2, 3, 4)
>>> a.size()
torch.Size([1, 2, 3, 4])
>>> b = a.transpose(1, 2)  # Swaps 2nd and 3rd dimension
>>> b.size()
torch.Size([1, 3, 2, 4])
>>> c = a.view(1, 3, 2, 4)  # Does not change tensor layout in memory
>>> c.size()
torch.Size([1, 3, 2, 4])
>>> torch.equal(b, c)
False
-
view(dtype) → Tensor
Returns a new tensor with the same data as the self tensor but of a different dtype. dtype must have the same number of bytes per element as self’s dtype.
Warning
This overload is not supported by TorchScript, and using it in a TorchScript program will cause undefined behavior.
- Parameters
-
dtype (torch.dtype) – the desired dtype
Example:
>>> x = torch.randn(4, 4)
>>> x
tensor([[ 0.9482, -0.0310,  1.4999, -0.5316],
        [-0.1520,  0.7472,  0.5617, -0.8649],
        [-2.4724, -0.0334, -0.2976, -0.8499],
        [-0.2109,  1.9913, -0.9607, -0.6123]])
>>> x.dtype
torch.float32

>>> y = x.view(torch.int32)
>>> y
tensor([[ 1064483442, -1124191867,  1069546515, -1089989247],
        [-1105482831,  1061112040,  1057999968, -1084397505],
        [-1071760287, -1123489973, -1097310419, -1084649136],
        [-1101533110,  1073668768, -1082790149, -1088634448]],
    dtype=torch.int32)
>>> y[0, 0] = 1000000000
>>> x
tensor([[ 0.0047, -0.0310,  1.4999, -0.5316],
        [-0.1520,  0.7472,  0.5617, -0.8649],
        [-2.4724, -0.0334, -0.2976, -0.8499],
        [-0.2109,  1.9913, -0.9607, -0.6123]])

>>> x.view(torch.int16)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: Viewing a tensor as a new dtype with a different number of bytes per element is not supported.
-
view_as(other) → Tensor
-
View this tensor as the same size as other. self.view_as(other) is equivalent to self.view(other.size()).
Please see view() for more information about view.
- Parameters
-
other (torch.Tensor) – The result tensor has the same size as other.
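A minimal sketch (not part of the original reference):
import torch

x = torch.arange(6)        # shape (6,)
other = torch.empty(2, 3)
y = x.view_as(other)       # same as x.view(2, 3)
print(y.shape)             # torch.Size([2, 3])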
-
where(condition, y) → Tensor
-
self.where(condition, y) is equivalent to torch.where(condition, self, y). See torch.where()
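A minimal sketch (not part of the original reference) of the equivalence with torch.where():
import torch

x = torch.tensor([1., -2., 3., -4.])
y = torch.zeros(4)
print(x.where(x > 0, y))         # tensor([1., 0., 3., 0.])
print(torch.where(x > 0, x, y))  # identical result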
-
xlogy(other) → Tensor
-
See
torch.xlogy()
-
xlogy_(other) → Tensor
-
In-place version of
xlogy()
-
zero_() → Tensor
-
Fills
self
tensor with zeros.
© 2019 Torch Contributors
Licensed under the 3-clause BSD License.
https://pytorch.org/docs/1.8.0/tensors.html