tf.distribute.experimental.CentralStorageStrategy
A one-machine strategy that puts all variables on a single device.
Inherits From: Strategy
```python
tf.distribute.experimental.CentralStorageStrategy(
    compute_devices=None, parameter_device=None
)
```
Variables are assigned to the local CPU or the only GPU. If there is more than one GPU, compute operations (other than variable update operations) will be replicated across all GPUs.
For example:

```python
strategy = tf.distribute.experimental.CentralStorageStrategy()
# Create a dataset
ds = tf.data.Dataset.range(5).batch(2)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(ds)

with strategy.scope():
  @tf.function
  def train_step(val):
    return val + 1

  # Iterate over the distributed dataset
  for x in dist_dataset:
    # process dataset elements
    strategy.run(train_step, args=(x,))
```
Attributes | |
---|---|
cluster_resolver | Returns the cluster resolver associated with this strategy. In general, when using a multi-worker strategy such as tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy, there is a tf.distribute.cluster_resolver.ClusterResolver associated with the strategy, and it is returned by this property. Strategies that intend to have an associated ClusterResolver must set the relevant attribute or override this property; otherwise, None is returned by default. Single-worker strategies usually do not have a ClusterResolver, and in those cases this property will return None. The ClusterResolver may be useful when the user needs to access information such as the cluster spec, task type or task id, as in the example below. For more information, please see the tf.distribute.cluster_resolver.ClusterResolver API docstring. |
extended | tf.distribute.StrategyExtended with additional methods. |
num_replicas_in_sync | Returns number of replicas over which gradients are aggregated. |

Example of inspecting the cluster resolver in a multi-worker setting:

```python
import json
import os

os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ["localhost:12345", "localhost:23456"],
        'ps': ["localhost:34567"]
    },
    'task': {'type': 'worker', 'index': 0}
})

# This implicitly uses TF_CONFIG for the cluster and current task info.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()

...

if strategy.cluster_resolver.task_type == 'worker':
  # Perform something that's only applicable on workers. Since we set this
  # as a worker above, this block will run on this particular instance.
  pass
elif strategy.cluster_resolver.task_type == 'ps':
  # Perform something that's only applicable on parameter servers. Since we
  # set this as a worker above, this block will not run on this particular
  # instance.
  pass
```
Methods
distribute_datasets_from_function
```python
distribute_datasets_from_function(
    dataset_fn, options=None
)
```
Distributes tf.data.Dataset instances created by calls to dataset_fn.

The argument dataset_fn that users pass in is an input function that has a tf.distribute.InputContext argument and returns a tf.data.Dataset instance. It is expected that the returned dataset from dataset_fn is already batched by per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. tf.distribute.Strategy.distribute_datasets_from_function does not batch or shard the tf.data.Dataset instance returned from the input function. dataset_fn will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the Dataset every step).
This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, tf.distribute.experimental_distribute_dataset does batching and sharding for you.) For example, where experimental_distribute_dataset is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in experimental_distribute_dataset). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed.
The dataset_fn should take a tf.distribute.InputContext instance where information about batching and input replication can be accessed.
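For instance, here is a minimal sketch of such an input function, assuming strategy is the CentralStorageStrategy created earlier on this page; the global batch size of 16 and the range(100) dataset are only placeholders:

```python
def dataset_fn(input_context):
  # Derive the per-replica batch size from an illustrative global batch size.
  global_batch_size = 16
  batch_size = input_context.get_per_replica_batch_size(global_batch_size)
  d = tf.data.Dataset.range(100)
  # Shard across input pipelines so each worker reads a distinct slice.
  d = d.shard(input_context.num_input_pipelines,
              input_context.input_pipeline_id)
  return d.batch(batch_size)

dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)
```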
You can use the element_spec property of the tf.distribute.DistributedDataset returned by this API to query the tf.TypeSpec of the elements returned by the iterator. This can be used to set the input_signature property of a tf.function. Follow tf.distribute.DistributedDataset.element_spec to see an example.
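As a hedged sketch, assuming dist_dataset was created as in the snippet above:

```python
@tf.function(input_signature=[dist_dataset.element_spec])
def distributed_step(per_replica_batch):
  def step_fn(batch):
    # Runs once per replica on that replica's slice of the input.
    return batch + 1
  return strategy.run(step_fn, args=(per_replica_batch,))

for batch in dist_dataset:
  distributed_step(batch)
```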
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args | |
---|---|
dataset_fn | A function taking a tf.distribute.InputContext instance and returning a tf.data.Dataset . |
options | tf.distribute.InputOptions used to control options on how this dataset is distributed. |
Returns | |
---|---|
A tf.distribute.DistributedDataset . |
experimental_distribute_dataset
```python
experimental_distribute_dataset(
    dataset, options=None
)
```
Distributes a tf.data.Dataset instance provided via dataset.
The returned dataset is a wrapped strategy dataset which creates a multidevice iterator under the hood. It prefetches the input data to the specified devices on the worker. The returned distributed dataset can be iterated over similarly to regular datasets.
Note: Currently, the user cannot add any more transformations to a distributed dataset.
For example:

```python
strategy = tf.distribute.experimental.CentralStorageStrategy()  # with 1 CPU and 1 GPU
dataset = tf.data.Dataset.range(10).batch(2)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

for x in dist_dataset:
  print(x)  # Prints PerReplica values [0, 1], [2, 3], ...
```
Args | |
---|---|
dataset | tf.data.Dataset to be prefetched to device. |
options | tf.distribute.InputOptions used to control options on how this dataset is distributed. |
Returns | |
---|---|
A "distributed Dataset " that the caller can iterate over. |
experimental_distribute_values_from_function
```python
experimental_distribute_values_from_function(
    value_fn
)
```
Generates tf.distribute.DistributedValues from value_fn.

This function is to generate tf.distribute.DistributedValues to pass into run, reduce, or other methods that take distributed values when not using datasets.
Args | |
---|---|
value_fn | The function to run to generate values. It is called for each replica with tf.distribute.ValueContext as the sole argument. It must return a Tensor or a type that can be converted to a Tensor. |
Returns | |
---|---|
A tf.distribute.DistributedValues containing a value for each replica. |
Example usage:
- Return constant value per replica:
```python
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
  return tf.constant(1.)
distributed_values = (
    strategy.experimental_distribute_values_from_function(value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
# (<tf.Tensor: shape=(), dtype=float32, numpy=1.0>,
#  <tf.Tensor: shape=(), dtype=float32, numpy=1.0>)
```
- Distribute values in array based on replica_id:
```python
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
array_value = np.array([3., 2., 1.])
def value_fn(ctx):
  return array_value[ctx.replica_id_in_sync_group]
distributed_values = (
    strategy.experimental_distribute_values_from_function(value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
# (3.0, 2.0)
```
- Specify values using num_replicas_in_sync:
```python
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
  return ctx.num_replicas_in_sync
distributed_values = (
    strategy.experimental_distribute_values_from_function(value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
# (2, 2)
```
- Place values on devices and distribute:

```python
strategy = tf.distribute.TPUStrategy()
worker_devices = strategy.extended.worker_devices
multiple_values = []
for i in range(strategy.num_replicas_in_sync):
  with tf.device(worker_devices[i]):
    multiple_values.append(tf.constant(1.0))

def value_fn(ctx):
  return multiple_values[ctx.replica_id_in_sync_group]

distributed_values = strategy.experimental_distribute_values_from_function(
    value_fn)
```
experimental_local_results
```python
experimental_local_results(value)
```
Returns the list of all local per-replica values contained in value.

In CentralStorageStrategy there is a single worker, so the value returned will be all the values on that worker.
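A minimal sketch, assuming a default CentralStorageStrategy; the length of the returned tuple depends on how many compute devices are available:

```python
strategy = tf.distribute.experimental.CentralStorageStrategy()

@tf.function
def step():
  return tf.constant(1.0)

per_replica = strategy.run(step)
local_values = strategy.experimental_local_results(per_replica)
# A tuple with one entry per compute replica, e.g. (<tf.Tensor ... 1.0>,)
# on a machine with a single compute device.
```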
Args | |
---|---|
value | A value returned by run() , extended.call_for_each_replica() , or a variable created in scope . |
Returns | |
---|---|
A tuple of values contained in value . If value represents a single value, this returns (value,). |
gather
```python
gather(value, axis)
```
Gather value across replicas along axis to the current device.

Given a tf.distribute.DistributedValues or tf.Tensor-like object value, this API gathers and concatenates value across replicas along the axis-th dimension. The result is copied to the "current" device, which would typically be the CPU of the worker on which the program is running. For tf.distribute.TPUStrategy, it is the first TPU host. For multi-client MultiWorkerMirroredStrategy, this is the CPU of each worker.

This API can only be called in the cross-replica context. For a counterpart in the replica context, see tf.distribute.ReplicaContext.all_gather.
Note: For all strategies except tf.distribute.TPUStrategy, the input value on different replicas must have the same rank, and their shapes must be the same in all dimensions except the axis-th dimension. In other words, their shapes cannot be different in a dimension d where d does not equal the axis argument. For example, given a tf.distribute.DistributedValues with component tensors of shape (1, 2, 3) and (1, 3, 3) on two replicas, you can call gather(..., axis=1, ...) on it, but not gather(..., axis=0, ...) or gather(..., axis=2, ...). However, for tf.distribute.TPUStrategy.gather, all tensors must have exactly the same rank and same shape.
Note: Given a tf.distribute.DistributedValues value, its component tensors must have a non-zero rank. Otherwise, consider using tf.expand_dims before gathering them.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"]) # A DistributedValues with component tensor of shape (2, 1) on each replica distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(tf.constant([[1], [2]]))) @tf.function def run(): return strategy.gather(distributed_values, axis=0) run() <tf.Tensor: shape=(4, 1), dtype=int32, numpy= array([[1], [2], [1], [2]], dtype=int32)>
Consider the following example for more combinations:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1", "GPU:2", "GPU:3"]) single_tensor = tf.reshape(tf.range(6), shape=(1,2,3)) distributed_values = strategy.experimental_distribute_values_from_function(lambda _: tf.identity(single_tensor)) @tf.function def run(axis): return strategy.gather(distributed_values, axis=axis) axis=0 run(axis) <tf.Tensor: shape=(4, 2, 3), dtype=int32, numpy= array([[[0, 1, 2], [3, 4, 5]], [[0, 1, 2], [3, 4, 5]], [[0, 1, 2], [3, 4, 5]], [[0, 1, 2], [3, 4, 5]]], dtype=int32)> axis=1 run(axis) <tf.Tensor: shape=(1, 8, 3), dtype=int32, numpy= array([[[0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5]]], dtype=int32)> axis=2 run(axis) <tf.Tensor: shape=(1, 2, 12), dtype=int32, numpy= array([[[0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2], [3, 4, 5, 3, 4, 5, 3, 4, 5, 3, 4, 5]]], dtype=int32)>
Args | |
---|---|
value | a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run , to be combined into a single tensor. It can also be a regular tensor when used with tf.distribute.OneDeviceStrategy or the default strategy. The tensors that constitute the DistributedValues can only be dense tensors with non-zero rank, NOT a tf.IndexedSlices . |
axis | 0-D int32 Tensor. Dimension along which to gather. Must be in the range [0, rank(value)). |
Returns | |
---|---|
A Tensor that's the concatenation of value across replicas along axis dimension. |
reduce
```python
reduce(reduce_op, value, axis)
```
Reduce value across replicas.

Given a per-replica value returned by run, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements. For example, if you have a global batch size of 8 and 2 replicas, values for examples [0, 1, 2, 3] will be on replica 0 and [4, 5, 6, 7] will be on replica 1. By default, reduce will just aggregate across replicas, returning [0+4, 1+5, 2+6, 3+7]. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient). More often you will want to aggregate across the global batch, which you can get by specifying the batch dimension as the axis, typically axis=0. In this case it would return a scalar 0+1+2+3+4+5+6+7.
If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify axis=0. If you specify tf.distribute.ReduceOp.MEAN, using axis=0 will use the correct denominator of 6. Contrast this with computing reduce_mean to get a scalar value on each replica, and this function to average those means, which will weigh some values 1/8 and others 1/4.
For example:

```python
strategy = tf.distribute.experimental.CentralStorageStrategy(
    compute_devices=['CPU:0', 'GPU:0'], parameter_device='CPU:0')
ds = tf.data.Dataset.range(10)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(ds)

with strategy.scope():
  @tf.function
  def train_step(val):
    # pass through
    return val

  # Iterate over the distributed dataset
  for x in dist_dataset:
    per_replica_result = strategy.run(train_step, args=(x,))

result = strategy.reduce(
    tf.distribute.ReduceOp.SUM, per_replica_result, axis=None).numpy()
# result: array([ 4,  6,  8, 10])

result = strategy.reduce(
    tf.distribute.ReduceOp.SUM, per_replica_result, axis=0).numpy()
# result: 28
```
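The partial-batch arithmetic above can also be made concrete. This is a hedged sketch (not from the original page) that assumes two hypothetical GPUs under MirroredStrategy and builds uneven per-replica values directly with experimental_distribute_values_from_function:

```python
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])

def value_fn(ctx):
  # An assumed uneven last batch of 6 elements: replica 0 gets 4 of them,
  # replica 1 gets the remaining 2.
  if ctx.replica_id_in_sync_group == 0:
    return tf.constant([0., 1., 2., 3.])
  return tf.constant([4., 5.])

per_replica = strategy.experimental_distribute_values_from_function(value_fn)
mean = strategy.reduce(tf.distribute.ReduceOp.MEAN, per_replica, axis=0)
# Expected: 2.5, i.e. (0+1+2+3+4+5) / 6 -- not the average of the
# per-replica means, which would weigh the two replicas equally.
```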
Args | |
---|---|
reduce_op | A tf.distribute.ReduceOp value specifying how values should be combined. |
value | A "per replica" value, e.g. returned by run to be combined into a single tensor. |
axis | Specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or None to only reduce across replicas (e.g. if the tensor has no batch dimension). |
Returns | |
---|---|
A Tensor . |
run
```python
run(fn, args=(), kwargs=None, options=None)
```
Run fn on each replica, with the given arguments.

In CentralStorageStrategy, fn is called on each of the compute replicas, with the provided "per replica" arguments specific to that device.
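A minimal sketch of a custom loop built on run; the dataset and the doubling step are placeholders:

```python
strategy = tf.distribute.experimental.CentralStorageStrategy()
dist_dataset = strategy.experimental_distribute_dataset(
    tf.data.Dataset.range(8).batch(4))

@tf.function
def train_step(batch):
  # Each compute replica receives its slice of the global batch.
  return batch * 2

for batch in dist_dataset:
  per_replica = strategy.run(train_step, args=(batch,))
  # Combine per-replica results with strategy.reduce, or inspect them with
  # strategy.experimental_local_results.
```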
Args | |
---|---|
fn | The function to run. The output must be a tf.nest of Tensor s. |
args | (Optional) Positional arguments to fn . |
kwargs | (Optional) Keyword arguments to fn . |
options | (Optional) An instance of tf.distribute.RunOptions specifying the options to run fn . |
Returns | |
---|---|
Return value from running fn . |
scope
```python
scope()
```
Context manager to make the strategy current and distribute variables.
This method returns a context manager, and is used as follows:
```python
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# Variable created inside scope:
with strategy.scope():
  mirrored_variable = tf.Variable(1.)
mirrored_variable
# MirroredVariable:{
#   0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>,
#   1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0>
# }
# Variable created outside scope:
regular_variable = tf.Variable(1.)
regular_variable
# <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
```
What happens when Strategy.scope is entered?

- strategy is installed in the global context as the "current" strategy. Inside this scope, tf.distribute.get_strategy() will now return this strategy. Outside this scope, it returns the default no-op strategy.
- Entering the scope also enters the "cross-replica context". See tf.distribute.StrategyExtended for an explanation on cross-replica and replica contexts.
- Variable creation inside scope is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like MirroredStrategy, TPUStrategy and MultiWorkerMirroredStrategy create variables replicated on each replica, whereas ParameterServerStrategy creates variables on the parameter servers. This is done using a custom tf.variable_creator_scope.
- In some strategies, a default device scope may also be entered: in MultiWorkerMirroredStrategy, a default device scope of "/CPU:0" is entered on each worker.
Note: Entering a scope does not automatically distribute a computation, except in the case of high level training frameworks like keras model.fit. If you're not using model.fit, you need to use the strategy.run API to explicitly distribute that computation. See an example in the custom training loop tutorial.
What should be in scope and what should be outside?

There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK).

- Anything that creates variables that should be distributed variables must be in strategy.scope. This can be either by directly putting it in scope, or relying on another API like strategy.run or model.fit to enter it for you. Any variable that is created outside scope will not be distributed and may have performance implications. Common things that create variables in TF: models, optimizers, metrics. These should always be created inside the scope. Another source of variable creation can be a checkpoint restore, when variables are created lazily. Note that any variable created inside a strategy captures the strategy information. So reading and writing to these variables outside the strategy.scope can also work seamlessly, without the user having to enter the scope.
- Some strategy APIs (such as strategy.run and strategy.reduce) which require to be in a strategy's scope enter the scope for you automatically, which means when using those APIs you don't need to enter the scope yourself.
- When a tf.keras.Model is created inside a strategy.scope, we capture this information. When high level training framework methods such as model.compile, model.fit, etc. are then called on this model, we automatically enter the scope, as well as use this strategy to distribute the training. See a detailed example in the distributed keras tutorial. Note that simply calling model(..) is not impacted - only high level training framework APIs are. model.compile, model.fit, model.evaluate, model.predict and model.save can all be called inside or outside the scope.
- The following can be either inside or outside the scope:
  - Creating the input datasets
  - Defining tf.functions that represent your training step
  - Saving APIs such as tf.saved_model.save. Loading creates variables, so that should go inside the scope if you want to train the model in a distributed way.
  - Checkpoint saving. As mentioned above, checkpoint.restore may sometimes need to be inside scope if it creates variables.
Returns | |
---|---|
A context manager. |
© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r2.4/api_docs/python/tf/distribute/experimental/CentralStorageStrategy