torch.random
torch.random.fork_rng(devices=None, enabled=True, _caller='fork_rng', _devices_kw='devices')
[source]
Forks the RNG, so that when you return, the RNG is reset to the state that it was previously in.
Parameters
    devices (iterable of CUDA IDs) – CUDA devices for which to fork the RNG. CPU RNG state is always forked. By default, fork_rng() operates on all devices, but will emit a warning if your machine has a lot of devices, since this function will run very slowly in that case. If you explicitly specify devices, this warning will be suppressed.
    enabled (bool) – if False, the RNG is not forked. This is a convenience argument for easily disabling the context manager without having to delete it and unindent your Python code under it.
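A minimal sketch of how fork_rng() isolates random draws; the assertion assumes only the CPU generator is exercised inside the block:

    import torch

    torch.manual_seed(0)
    with torch.random.fork_rng():
        torch.rand(10)      # consumes random numbers inside the fork only
    after_fork = torch.rand(1)

    torch.manual_seed(0)
    no_fork = torch.rand(1)

    # The draws inside the forked block did not advance the outer RNG state.
    assert torch.equal(after_fork, no_fork)
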
torch.random.get_rng_state()
[source]
Returns the random number generator state as a torch.ByteTensor.
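For illustration, a short snippet inspecting the returned state; the exact tensor size is an implementation detail of the generator:

    import torch

    state = torch.random.get_rng_state()
    print(state.dtype)   # torch.uint8 (a torch.ByteTensor)
    print(state.shape)   # size depends on the generator's internal state
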
torch.random.initial_seed()
[source]
Returns the initial seed for generating random numbers as a Python long.
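A small example, assuming the default generator has just been seeded:

    import torch

    torch.random.manual_seed(1234)
    print(torch.random.initial_seed())   # 1234
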
torch.random.manual_seed(seed)
[source]
Sets the seed for generating random numbers. Returns a torch.Generator object.
Parameters
    seed (int) – The desired seed. Value must be within the inclusive range [-0x8000_0000_0000_0000, 0xffff_ffff_ffff_ffff]. Otherwise, a RuntimeError is raised. Negative inputs are remapped to positive values with the formula 0xffff_ffff_ffff_ffff + seed.
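A minimal reproducibility sketch using manual_seed():

    import torch

    gen = torch.random.manual_seed(42)   # returns the default torch.Generator
    a = torch.rand(3)

    torch.random.manual_seed(42)
    b = torch.rand(3)

    # Re-seeding with the same value replays the same sequence of draws.
    assert torch.equal(a, b)
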
torch.random.seed()
[source]
Sets the seed for generating random numbers to a non-deterministic random number. Returns a 64-bit number used to seed the RNG.
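A sketch of how the returned value can be logged and reused later with manual_seed() to reproduce a run:

    import torch

    s = torch.random.seed()          # non-deterministic 64-bit seed
    x = torch.rand(3)

    torch.random.manual_seed(s)      # replay the same stream from the logged seed
    y = torch.rand(3)
    assert torch.equal(x, y)
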
torch.random.set_rng_state(new_state)
[source]
Sets the random number generator state.
Parameters
    new_state (torch.ByteTensor) – The desired state
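A round-trip sketch pairing set_rng_state() with get_rng_state():

    import torch

    state = torch.random.get_rng_state()
    first = torch.rand(4)

    torch.random.set_rng_state(state)    # restore the saved CPU RNG state
    second = torch.rand(4)

    # Restoring the state reproduces the same draws.
    assert torch.equal(first, second)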