tf.distribute.CrossDeviceOps
Base class for cross-device reduction and broadcasting algorithms.
tf.distribute.CrossDeviceOps()
The main purpose of this class is to be passed to tf.distribute.MirroredStrategy in order to choose among different cross-device communication implementations. Prefer the methods of tf.distribute.Strategy over the methods of this class.
Implementations:
tf.distribute.ReductionToOneDevice
tf.distribute.NcclAllReduce
tf.distribute.HierarchicalCopyAllReduce
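As a minimal sketch of the intended usage (the choice of ReductionToOneDevice here is illustrative; any of the implementations above can be passed), a cross-device implementation is selected at strategy construction time:

```python
import tensorflow as tf

# Sketch: select a concrete cross-device implementation for MirroredStrategy.
# ReductionToOneDevice works on any machine, including CPU-only hosts;
# NcclAllReduce would require NVIDIA GPUs.
strategy = tf.distribute.MirroredStrategy(
    cross_device_ops=tf.distribute.ReductionToOneDevice())

# Reductions performed by the strategy now use the chosen implementation.
print(strategy.num_replicas_in_sync)
```
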
Methods
batch_reduce
batch_reduce(reduce_op, value_destination_pairs, options=None)
Reduce values to destinations in batches.
See tf.distribute.StrategyExtended.batch_reduce_to. This can only be called in the cross-replica context.
Args | |
---|---|
reduce_op | a tf.distribute.ReduceOp specifying how values should be combined. |
value_destination_pairs | a sequence of (value, destinations) pairs. See tf.distribute.CrossDeviceOps.reduce for descriptions. |
options | a tf.distribute.experimental.CommunicationOptions. See tf.distribute.experimental.CommunicationOptions for details. |

Returns | |
---|---|
A list of tf.Tensor or tf.distribute.DistributedValues, one per pair in value_destination_pairs. |

Raises | |
---|---|
ValueError | if value_destination_pairs is not an iterable of tuples of tf.distribute.DistributedValues and destinations. |
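A hedged sketch of calling batch_reduce directly (the device string and the use of plain tensors as single-replica values are illustrative; in TF 2.x, tensor values are normalized to per-replica values internally):

```python
import tensorflow as tf

ops = tf.distribute.ReductionToOneDevice()

# Each (value, destinations) pair is reduced independently; with plain
# tensors standing in for per-replica values, the "reduction" is over a
# single replica, so the values pass through to their destinations.
pairs = [
    (tf.constant([1.0, 2.0]), "/cpu:0"),
    (tf.constant([3.0]), "/cpu:0"),
]
results = ops.batch_reduce(tf.distribute.ReduceOp.SUM, pairs)
print(len(results))  # one result per (value, destinations) pair
```
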
batch_reduce_implementation
batch_reduce_implementation(reduce_op, value_destination_pairs, options)
Implementation of batch_reduce.
Subclass implementers should override this method rather than batch_reduce itself.
Args | |
---|---|
reduce_op | a tf.distribute.ReduceOp specifying how values should be combined. |
value_destination_pairs | a sequence of (value, destinations) pairs. See reduce for descriptions. |
options | a tf.distribute.experimental.CommunicationOptions. See tf.distribute.experimental.CommunicationOptions for details. |

Returns | |
---|---|
A list of tf.Tensor or tf.distribute.DistributedValues, one per pair in value_destination_pairs. |

Raises | |
---|---|
ValueError | if value_destination_pairs is not an iterable of tuples of tf.distribute.DistributedValues and destinations. |
broadcast
broadcast(tensor, destinations)
Broadcast tensor to destinations.
This can only be called in the cross-replica context.
Args | |
---|---|
tensor | a tf.Tensor-like object. The value to broadcast. |
destinations | a tf.distribute.DistributedValues, a tf.Variable, a tf.Tensor-like object, or a device string. It specifies the devices to broadcast to. Note that if it's a tf.Variable, the value is broadcast to the devices of that variable; this method doesn't update the variable. |

Returns | |
---|---|
A tf.Tensor or tf.distribute.DistributedValues. |
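A minimal sketch of broadcasting a value to a destination (the device string is illustrative; with a single-device destination the result simply mirrors the input on that device):

```python
import tensorflow as tf

ops = tf.distribute.ReductionToOneDevice()

# Broadcast a scalar to a destination device. The result is a mirrored
# value holding a copy of the tensor per destination device.
result = ops.broadcast(tf.constant(5.0), destinations="/cpu:0")
print(result)
```

Note that this places a copy of the value on the destination device; it does not assign to any variable.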
broadcast_implementation
broadcast_implementation(tensor, destinations)
Implementation of broadcast.
Args | |
---|---|
tensor | a tf.Tensor-like object. The value to broadcast. |
destinations | a tf.distribute.DistributedValues, a tf.Variable, a tf.Tensor-like object, or a device string. It specifies the devices to broadcast to. Note that if it's a tf.Variable, the value is broadcast to the devices of that variable; this method doesn't update the variable. |

Returns | |
---|---|
A tf.Tensor or tf.distribute.DistributedValues. |
reduce
reduce(reduce_op, per_replica_value, destinations, options=None)
Reduce per_replica_value to destinations.
See tf.distribute.StrategyExtended.reduce_to. This can only be called in the cross-replica context.
Args | |
---|---|
reduce_op | a tf.distribute.ReduceOp specifying how values should be combined. |
per_replica_value | a tf.distribute.DistributedValues, or a tf.Tensor-like object. |
destinations | a tf.distribute.DistributedValues, a tf.Variable, a tf.Tensor-like object, or a device string. It specifies the devices to reduce to. To perform an all-reduce, pass the same value as both per_replica_value and destinations. Note that if it's a tf.Variable, the value is reduced to the devices of that variable, and this method doesn't update the variable. |
options | a tf.distribute.experimental.CommunicationOptions. See tf.distribute.experimental.CommunicationOptions for details. |

Returns | |
---|---|
A tf.Tensor or tf.distribute.DistributedValues. |

Raises | |
---|---|
ValueError | if per_replica_value can't be converted to a tf.distribute.DistributedValues or if destinations is not a string, tf.Variable, or tf.distribute.DistributedValues. |
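A hedged sketch of calling reduce directly with a tensor-like value (device string illustrative; with one replica, MEAN leaves the values unchanged):

```python
import tensorflow as tf

ops = tf.distribute.ReductionToOneDevice()

# A plain tensor is accepted as a single-replica value; destinations is a
# device string here, so the reduced result lives on that device.
value = tf.constant([1.0, 2.0])
result = ops.reduce(tf.distribute.ReduceOp.MEAN, value, destinations="/cpu:0")
print(result)
```
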
reduce_implementation
reduce_implementation(reduce_op, per_replica_value, destinations, options)
Implementation of reduce.
Subclass implementers should override this method rather than reduce itself.
Args | |
---|---|
reduce_op | a tf.distribute.ReduceOp specifying how values should be combined. |
per_replica_value | a tf.distribute.DistributedValues, or a tf.Tensor-like object. |
destinations | a tf.distribute.DistributedValues, a tf.Variable, a tf.Tensor-like object, or a device string. It specifies the devices to reduce to. To perform an all-reduce, pass the same value as both per_replica_value and destinations. Note that if it's a tf.Variable, the value is reduced to the devices of that variable; this method doesn't update the variable. |
options | a tf.distribute.experimental.CommunicationOptions. See tf.distribute.experimental.CommunicationOptions for details. |

Returns | |
---|---|
A tf.Tensor or tf.distribute.DistributedValues. |

Raises | |
---|---|
ValueError | if per_replica_value can't be converted to a tf.distribute.DistributedValues or if destinations is not a string, tf.Variable, or tf.distribute.DistributedValues. |
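To illustrate the override points, here is a hypothetical subclass (the class name and the choice to delegate to ReductionToOneDevice are my own, not part of the API) that implements the three *_implementation hooks; the public reduce, batch_reduce, and broadcast methods handle validation and dispatch to these hooks:

```python
import tensorflow as tf


class DelegatingCrossDeviceOps(tf.distribute.CrossDeviceOps):
  """Hypothetical subclass: delegates every hook to ReductionToOneDevice."""

  def __init__(self):
    super().__init__()
    self._inner = tf.distribute.ReductionToOneDevice()

  def reduce_implementation(self, reduce_op, per_replica_value, destinations,
                            options):
    # A real subclass would implement its own communication here.
    return self._inner.reduce(reduce_op, per_replica_value, destinations,
                              options)

  def batch_reduce_implementation(self, reduce_op, value_destination_pairs,
                                  options):
    return self._inner.batch_reduce(reduce_op, value_destination_pairs,
                                    options)

  def broadcast_implementation(self, tensor, destinations):
    return self._inner.broadcast(tensor, destinations)


# Exercise the subclass through the public API (device string illustrative).
ops = DelegatingCrossDeviceOps()
result = ops.broadcast(tf.constant([1.0]), destinations="/cpu:0")
print(result)
```
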
© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r2.4/api_docs/python/tf/distribute/CrossDeviceOps