torch.Storage
A torch.Storage is a contiguous, one-dimensional array of a single data type.
Every torch.Tensor has a corresponding storage of the same data type.
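For example (a minimal sketch against the 1.8-era API documented on this page), a tensor's storage can be inspected directly, and tensors that view the same data share a single storage:

    import torch

    x = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
    s = x.storage()              # torch.FloatStorage with 4 elements

    print(s.size())              # 4
    print(s.dtype)               # torch.float32
    print(s.tolist())            # [1.0, 2.0, 3.0, 4.0]

    # A view reuses the same storage, so writing through the storage
    # is visible in both tensors.
    y = x.view(4)
    s.fill_(0.0)
    print(x)                     # all zeros
    print(y)                     # all zeros
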
class torch.FloatStorage(*args, **kwargs)

bfloat16()
Casts this storage to bfloat16 type

bool()
Casts this storage to bool type

byte()
Casts this storage to byte type

char()
Casts this storage to char type

clone()
Returns a copy of this storage

complex_double()
Casts this storage to complex double type

complex_float()
Casts this storage to complex float type

copy_()

cpu()
Returns a CPU copy of this storage if it’s not already on the CPU

cuda(device=None, non_blocking=False, **kwargs)
Returns a copy of this object in CUDA memory.
If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.
Parameters
- device (int) – The destination GPU id. Defaults to the current device.
- non_blocking (bool) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.
- **kwargs – For compatibility, may contain the key async in place of the non_blocking argument.
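A minimal sketch of cuda() together with pin_memory(), assuming the 1.8-era API and a CUDA-capable machine; the values are arbitrary:

    import torch

    s = torch.FloatStorage([1.0, 2.0, 3.0])

    if torch.cuda.is_available():
        # Copy to the current CUDA device (or pass an explicit GPU id).
        s_gpu = s.cuda()
        print(s_gpu.is_cuda)        # True

        # A storage that is already on the right device is returned as-is.
        print(s_gpu.cuda() is s_gpu)

        # Pinned host memory allows the host-to-device copy to be
        # asynchronous with respect to the host.
        s_pinned = s.pin_memory()
        s_async = s_pinned.cuda(non_blocking=True)
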
data_ptr()

device

double()
Casts this storage to double type

dtype

element_size()

fill_()

float()
Casts this storage to float type

static from_buffer()

static from_file(filename, shared=False, size=0) → Storage
If shared is True, then memory is shared between all processes. All changes are written to the file. If shared is False, then the changes on the storage do not affect the file. size is the number of elements in the storage. If shared is False, then the file must contain at least size * sizeof(Type) bytes (Type is the type of storage). If shared is True the file will be created if needed.
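A sketch of from_file(), using a hypothetical file name example_storage.bin:

    import torch

    n = 1024

    # shared=True memory-maps the file and creates it if it does not
    # exist, so writes to the storage land on disk.
    shared = torch.FloatStorage.from_file("example_storage.bin", shared=True, size=n)
    shared.fill_(1.0)                  # written through to the file

    # shared=False loads a private copy instead; the file must already
    # hold at least size * sizeof(float) = 4096 bytes, which it does
    # after the call above.
    private = torch.FloatStorage.from_file("example_storage.bin", shared=False, size=n)
    private.fill_(0.0)                 # does not modify the file
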
get_device()

half()
Casts this storage to half type

int()
Casts this storage to int type

is_cuda: bool = False

is_pinned()

is_sparse: bool = False

long()
Casts this storage to long type

new()

pin_memory()
Copies the storage to pinned memory, if it’s not already pinned.

resize_()

share_memory_()
Moves the storage to shared memory.
This is a no-op for storages already in shared memory and for CUDA storages, which do not need to be moved for sharing across processes. Storages in shared memory cannot be resized.
Returns: self
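A minimal sketch of sharing a storage between processes via share_memory_(), assuming torch.multiprocessing semantics as of 1.8: once the storage is in shared memory, the child's in-place write is visible to the parent because both processes map the same memory.

    import torch
    import torch.multiprocessing as mp

    def add_one(t):
        # Runs in the child process; mutates the shared storage in place.
        t += 1.0

    if __name__ == "__main__":
        x = torch.zeros(4)
        x.storage().share_memory_()     # returns self; now in shared memory

        p = mp.Process(target=add_one, args=(x,))
        p.start()
        p.join()
        print(x)                        # tensor([1., 1., 1., 1.])
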
short()
Casts this storage to short type

size()

tolist()
Returns a list containing the elements of this storage
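A short sketch of the casting helpers together with clone(), size(), element_size() and tolist():

    import torch

    s = torch.FloatStorage([1.5, 2.5, 3.5])

    print(s.tolist())            # [1.5, 2.5, 3.5]
    print(s.size())              # 3
    print(s.element_size())      # 4 bytes per float32 element

    # Each cast returns a new storage of the target type.
    print(s.int().tolist())      # [1, 2, 3]
    print(s.double().dtype)      # torch.float64

    # clone() copies; the original is untouched.
    c = s.clone()
    c.fill_(0.0)
    print(s.tolist())            # still [1.5, 2.5, 3.5]
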
type(dtype=None, non_blocking=False, **kwargs)
Returns the type if dtype is not provided, else casts this object to the specified type. If this is already of the correct type, no copy is performed and the original object is returned.
Parameters
- dtype (type or string) – The desired type
- non_blocking (bool) – If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.
- **kwargs – For compatibility, may contain the key async in place of the non_blocking argument. The async arg is deprecated.
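A sketch of type(), which returns the storage's type name when called without arguments and otherwise casts (1.8-era behaviour assumed):

    import torch

    s = torch.FloatStorage([1.0, 2.0])

    print(s.type())                          # 'torch.FloatStorage'

    # Cast by passing a type (or its string name).
    d = s.type(torch.DoubleStorage)
    print(d.dtype)                           # torch.float64

    # Casting to the type the storage already has returns the same object.
    print(s.type(torch.FloatStorage) is s)   # True
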
© 2019 Torch Contributors
Licensed under the 3-clause BSD License.
https://pytorch.org/docs/1.8.0/storage.html