tf.contrib.layers.recompute_grad
Decorator that recomputes the function on the backwards pass.
`tf.contrib.layers.recompute_grad(fn, use_data_dep=_USE_DEFAULT, tupleize_grads=False)`
To use this function, you must use ResourceVariables (i.e. `variable_scope(name, use_resource=True)`), which are the default in Eager mode and when running on TPU.
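A minimal usage sketch, assuming TensorFlow 1.15 in graph mode (`tf.contrib` is not available in TF 2.x); the scope name and layer sizes are illustrative:

```python
import tensorflow as tf

@tf.contrib.layers.recompute_grad
def dense_block(x):
    # fn receives Tensors as positional arguments and returns Tensors;
    # it must not close over any external Tensors or Variables.
    for _ in range(4):
        x = tf.layers.dense(x, 128, activation=tf.nn.relu)
    return x

# ResourceVariables are required, so create the block's variables
# under a scope with use_resource=True.
with tf.variable_scope("block", use_resource=True):
    y = dense_block(tf.ones([8, 128]))
```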
| Args | |
|---|---|
| `fn` | A function that takes Tensors (all as positional arguments) and returns a tuple of Tensors. Note that `fn` should not close over any other Tensors or Variables. |
| `use_data_dep` | `bool`. If `True`, a dummy data dependency is used to force the recomputation to happen; if `False`, a control dependency is used. Defaults to `True` in an XLA context and `False` otherwise. XLA ignores control dependencies, so the data dependency is necessary there. |
| `tupleize_grads` | `bool`. If `True`, control dependencies are used to ensure that all gradients are produced before any are consumed by downstream ops. If `use_data_dep` is also `True`, a data dependency is used instead of a control dependency. |
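Since the flags are keyword arguments, they can be set by calling `recompute_grad` directly rather than using it as a bare decorator. A short sketch, assuming TF 1.15; `my_fn` and the scope name are illustrative placeholders:

```python
import tensorflow as tf

def my_fn(x):
    # Placeholder body; any function meeting the contract above works.
    return tf.nn.relu(x)

# Wrap explicitly instead of using the bare @decorator form, so the
# flags can be passed.
wrapped_fn = tf.contrib.layers.recompute_grad(
    my_fn,
    use_data_dep=True,     # dummy data dependency forces the recompute
    tupleize_grads=True)   # all grads produced before any are consumed

with tf.variable_scope("wrapped", use_resource=True):
    out = wrapped_fn(tf.ones([2, 8]))
```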
| Returns | |
|---|---|
| A wrapped version of `fn` that is identical to `fn` when called, but whose activations are discarded and recomputed on the backwards pass (i.e. on a call to `tf.gradients`). | |
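A brief sketch of the recompute being triggered, assuming TF 1.15 in graph mode; the scope name and layer sizes are illustrative:

```python
import tensorflow as tf

@tf.contrib.layers.recompute_grad
def block(x):
    return tf.layers.dense(x, 32, activation=tf.nn.relu)

with tf.variable_scope("demo", use_resource=True):
    y = block(tf.ones([4, 32]))

loss = tf.reduce_sum(y)
# Building the backward graph is what triggers the recompute: block's
# forward activations are not kept; they are re-derived inside the
# gradient computation.
grads = tf.gradients(loss, tf.trainable_variables())
```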
| Raises | |
|---|---|
| `ValueError` | If `fn` closes over any Tensors or Variables. |