tf.contrib.mixed_precision.FixedLossScaleManager
Loss scale manager with a fixed loss scale.
Inherits From: LossScaleManager
tf.contrib.mixed_precision.FixedLossScaleManager(loss_scale)
The loss scale is not updated for the lifetime of the class.
| Args | |
|---|---|
| `loss_scale` | A Python float. Its ideal value varies from model to model. A loss_scale that is too small may hurt model quality, while one that is too large may produce inf or nan. There is no single right loss_scale to apply; choosing a relatively large value does no harm as long as no nan or inf is encountered during training. |
| Raises | |
|---|---|
| `ValueError` | If `loss_scale` is less than 1. |
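For example, a minimal sketch of how the manager is typically consumed, assuming TensorFlow 1.15 in graph mode and the companion `tf.contrib.mixed_precision.LossScaleOptimizer` wrapper; the variable, toy loss, and learning rate below are placeholders for a real training setup:

```python
# A minimal sketch, assuming TensorFlow 1.15 in graph mode.
import tensorflow as tf

w = tf.Variable(1.0)
loss = tf.square(w - 3.0)  # toy loss standing in for a real model loss

# Fixed loss scale; values less than 1 raise ValueError.
manager = tf.contrib.mixed_precision.FixedLossScaleManager(loss_scale=128.0)

# LossScaleOptimizer scales the loss up before computing gradients and
# scales the gradients back down before applying them.
opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)
scaled_opt = tf.contrib.mixed_precision.LossScaleOptimizer(opt, manager)

train_op = scaled_opt.minimize(loss)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op)
```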
Methods
get_loss_scale
get_loss_scale()
Returns the loss scale as a scalar float32 tensor.
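A short sketch of evaluating the returned tensor, assuming TensorFlow 1.15 in graph mode; because this manager is fixed, it always evaluates to the value passed to the constructor:

```python
import tensorflow as tf  # TensorFlow 1.15, graph mode assumed

manager = tf.contrib.mixed_precision.FixedLossScaleManager(128.0)
scale = manager.get_loss_scale()  # scalar float32 tensor

with tf.Session() as sess:
    print(sess.run(scale))  # 128.0, since the scale is fixed
```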
update_loss_scale
update_loss_scale(finite_grads)
Updates the loss scale based on whether the gradients are finite in the current step.
| Args | |
|---|---|
| `finite_grads` | A scalar bool tensor indicating whether all gradients are finite (i.e., not inf or nan). |
| Returns |
|---|
| An op that, when executed, updates the loss scale. If eager execution is enabled, nothing is returned. |
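A sketch of building the `finite_grads` argument and running the returned op, assuming graph mode; the gradient tensors here are hypothetical, and because this manager is fixed the op leaves the scale unchanged:

```python
import tensorflow as tf  # TensorFlow 1.15, graph mode assumed

manager = tf.contrib.mixed_precision.FixedLossScaleManager(128.0)

# Hypothetical gradients, used only to build the finite_grads argument.
grads = [tf.constant([1.0, 2.0]), tf.constant([float("inf"), 3.0])]

# Scalar bool tensor: True iff every gradient element is finite.
finite_grads = tf.reduce_all(
    tf.stack([tf.reduce_all(tf.math.is_finite(g)) for g in grads]))

update_op = manager.update_loss_scale(finite_grads)
with tf.Session() as sess:
    sess.run(update_op)                        # scale is fixed, so no change
    print(sess.run(manager.get_loss_scale()))  # still 128.0
```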
© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/contrib/mixed_precision/FixedLossScaleManager