tf.contrib.mixed_precision.LossScaleManager
Abstract loss scale manager class.
Loss scale managers that implement a different strategy should subclass this class. Loss scaling is a process that:

1. Applies a multiplier to the loss before computing gradients, and
2. Applies the reciprocal of that multiplier to the gradients before they are applied to variables (see the sketch below).
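The following is a minimal sketch of these two steps done by hand, outside of any manager, using a toy one-variable model and a fixed scale of 128 chosen purely for illustration:

```python
import tensorflow as tf  # TensorFlow 1.x API

# Toy model: one variable, squared-error loss (illustrative only).
w = tf.Variable(2.0)
loss = tf.square(w - 1.0)

# Fixed loss scale; real managers may adjust this value over time.
loss_scale = tf.constant(128.0)

# 1) Apply the multiplier to the loss before computing gradients.
scaled_loss = loss * loss_scale

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
grads_and_vars = optimizer.compute_gradients(scaled_loss)

# 2) Apply the reciprocal of the multiplier to the gradients
#    before they are applied to variables.
unscaled_grads_and_vars = [
    (grad / loss_scale if grad is not None else None, var)
    for grad, var in grads_and_vars
]
train_op = optimizer.apply_gradients(unscaled_grads_and_vars)
```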
This class is used together with tf.contrib.mixed_precision.LossScaleOptimizer for mixed precision training (float32 variables and float16 ops) on Nvidia GPUs, in order to achieve the same model quality as single precision training, with the benefit of potentially higher throughput. See tf.contrib.mixed_precision.LossScaleOptimizer for more details.
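As a sketch of how a manager plugs into the optimizer wrapper (assuming the tf.contrib.mixed_precision API as it exists in TF 1.15; ExponentialUpdateLossScaleManager is one of the built-in subclasses, and the constructor values here are illustrative):

```python
import tensorflow as tf  # TensorFlow 1.x with tf.contrib available

# A manager that raises the scale after a run of finite-gradient steps
# and lowers it when inf/nan gradients are seen.
loss_scale_manager = tf.contrib.mixed_precision.ExponentialUpdateLossScaleManager(
    init_loss_scale=2 ** 15,    # illustrative starting scale
    incr_every_n_steps=1000)    # grow the scale after 1000 finite steps

# Wrap a standard optimizer; scaling/unscaling then happens internally.
optimizer = tf.train.MomentumOptimizer(learning_rate=0.01, momentum=0.9)
optimizer = tf.contrib.mixed_precision.LossScaleOptimizer(
    optimizer, loss_scale_manager)

# The wrapped optimizer is used like any other:
# train_op = optimizer.minimize(loss)
```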
Methods
get_loss_scale
@abc.abstractmethod
get_loss_scale()

Returns the loss scale as a scalar float32 tensor.
update_loss_scale
@abc.abstractmethod
update_loss_scale(finite_grads)

Updates the loss scale based on whether the gradients are finite in the current step.
Args

| Argument | Description |
|---|---|
| `finite_grads` | Boolean scalar tensor indicating whether all gradients are finite (i.e., not `inf` or `nan`). |

Returns

An op that, when executed, updates the loss scale. If eager execution is enabled, nothing is returned.
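A minimal subclass satisfying both abstract methods might look like the sketch below, which keeps a constant scale (the built-in FixedLossScaleManager provides this behavior; the class and import path here are written against the assumed TF 1.15 layout):

```python
import tensorflow as tf  # TensorFlow 1.x
from tensorflow.contrib.mixed_precision import LossScaleManager

class ConstantLossScaleManager(LossScaleManager):
  """Illustrative manager whose loss scale never changes."""

  def __init__(self, loss_scale):
    self._loss_scale = tf.convert_to_tensor(loss_scale, dtype=tf.float32)

  def get_loss_scale(self):
    # Return the loss scale as a scalar float32 tensor.
    return self._loss_scale

  def update_loss_scale(self, finite_grads):
    # A constant scale ignores whether the gradients were finite;
    # returning a no-op keeps the graph-mode contract of this method.
    del finite_grads  # unused
    return tf.no_op()
```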