tensorflow::ops::ResourceApplyAdamWithAmsgrad
#include <training_ops.h>
Update '*var' according to the Adam algorithm with the AMSGrad extension.
Summary
$$lr_t := \text{learning\_rate} \cdot \sqrt{1 - \beta_2^t} / (1 - \beta_1^t)$$
$$m_t := \beta_1 \cdot m_{t-1} + (1 - \beta_1) \cdot g$$
$$v_t := \beta_2 \cdot v_{t-1} + (1 - \beta_2) \cdot g \cdot g$$
$$\hat{v}_t := \max(\hat{v}_{t-1}, v_t)$$
$$\text{variable} := \text{variable} - lr_t \cdot m_t / (\sqrt{\hat{v}_t} + \epsilon)$$
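For intuition, here is a minimal plain-C++ sketch of one update step for a single scalar parameter, mirroring the equations above. The free function and its names are illustrative only, not part of the TensorFlow API:

```cpp
#include <algorithm>
#include <cmath>

// One AMSGrad step for a single scalar parameter. beta1_power and
// beta2_power are beta1^t and beta2^t for the current step t.
void amsgrad_step(float& var, float& m, float& v, float& vhat, float g,
                  float lr, float beta1, float beta2, float epsilon,
                  float beta1_power, float beta2_power) {
  const float lr_t = lr * std::sqrt(1.0f - beta2_power) / (1.0f - beta1_power);
  m = beta1 * m + (1.0f - beta1) * g;             // first-moment estimate
  v = beta2 * v + (1.0f - beta2) * g * g;         // second-moment estimate
  vhat = std::max(vhat, v);                       // AMSGrad: running max of v
  var -= lr_t * m / (std::sqrt(vhat) + epsilon);  // parameter update
}
```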
Arguments:
- scope: A Scope object
- var: Should be from a Variable().
- m: Should be from a Variable().
- v: Should be from a Variable().
- vhat: Should be from a Variable().
- beta1_power: Must be a scalar; holds beta1^t for the current step t.
- beta2_power: Must be a scalar; holds beta2^t for the current step t.
- lr: Scaling factor. Must be a scalar.
- beta1: Momentum factor. Must be a scalar.
- beta2: Momentum factor. Must be a scalar.
- epsilon: Ridge term. Must be a scalar.
- grad: The gradient.
Optional attributes (see Attrs):
- use_locking: If True, updating of the var, m, and v tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. Defaults to false.
Returns:
- the created Operation
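A minimal usage sketch follows, assuming a linked TensorFlow C++ build; the variable shapes, initial values, and hyperparameters are illustrative, not prescribed by the op. Passing `apply` as a run target relies on the implicit `operator ::tensorflow::Operation()` documented below.

```cpp
#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/cc/ops/training_ops.h"

using namespace tensorflow;
using namespace tensorflow::ops;

int main() {
  Scope scope = Scope::NewRootScope();

  // Resource handles for the trainable variable and the three optimizer slots.
  auto var  = VarHandleOp(scope, DT_FLOAT, TensorShape({2}));
  auto m    = VarHandleOp(scope, DT_FLOAT, TensorShape({2}));
  auto v    = VarHandleOp(scope, DT_FLOAT, TensorShape({2}));
  auto vhat = VarHandleOp(scope, DT_FLOAT, TensorShape({2}));

  // Variables must be initialized before the update op runs.
  auto init_var  = AssignVariableOp(scope, var,  Const(scope, {1.0f, 2.0f}));
  auto init_m    = AssignVariableOp(scope, m,    Const(scope, {0.0f, 0.0f}));
  auto init_v    = AssignVariableOp(scope, v,    Const(scope, {0.0f, 0.0f}));
  auto init_vhat = AssignVariableOp(scope, vhat, Const(scope, {0.0f, 0.0f}));

  // beta1_power and beta2_power hold beta1^t and beta2^t (t = 1 here).
  auto apply = ResourceApplyAdamWithAmsgrad(
      scope, var, m, v, vhat,
      /*beta1_power=*/Const(scope, 0.9f),
      /*beta2_power=*/Const(scope, 0.999f),
      /*lr=*/Const(scope, 0.001f),
      /*beta1=*/Const(scope, 0.9f),
      /*beta2=*/Const(scope, 0.999f),
      /*epsilon=*/Const(scope, 1e-8f),
      /*grad=*/Const(scope, {0.1f, -0.2f}),
      ResourceApplyAdamWithAmsgrad::UseLocking(true));

  ClientSession session(scope);
  TF_CHECK_OK(session.Run({}, {}, {init_var, init_m, init_v, init_vhat},
                          nullptr));
  TF_CHECK_OK(session.Run({}, {}, {apply}, nullptr));  // one optimizer step
  return 0;
}
```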
Constructors and Destructors
| Signature |
|---|
| `ResourceApplyAdamWithAmsgrad(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input m, ::tensorflow::Input v, ::tensorflow::Input vhat, ::tensorflow::Input beta1_power, ::tensorflow::Input beta2_power, ::tensorflow::Input lr, ::tensorflow::Input beta1, ::tensorflow::Input beta2, ::tensorflow::Input epsilon, ::tensorflow::Input grad)` |
| `ResourceApplyAdamWithAmsgrad(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input m, ::tensorflow::Input v, ::tensorflow::Input vhat, ::tensorflow::Input beta1_power, ::tensorflow::Input beta2_power, ::tensorflow::Input lr, ::tensorflow::Input beta1, ::tensorflow::Input beta2, ::tensorflow::Input epsilon, ::tensorflow::Input grad, const ResourceApplyAdamWithAmsgrad::Attrs & attrs)` |

Public attributes
| Name | Type |
|---|---|
| `operation` | `Operation` |

Public functions
| Signature |
|---|
| `operator::tensorflow::Operation() const` |

Public static functions
| Signature | Returns |
|---|---|
| `UseLocking(bool x)` | `Attrs` |

Structs
| Struct | Description |
|---|---|
| `tensorflow::ops::ResourceApplyAdamWithAmsgrad::Attrs` | Optional attribute setters for ResourceApplyAdamWithAmsgrad. |
Public attributes
operation
Operation operation
Public functions
ResourceApplyAdamWithAmsgrad
ResourceApplyAdamWithAmsgrad(
  const ::tensorflow::Scope & scope,
  ::tensorflow::Input var,
  ::tensorflow::Input m,
  ::tensorflow::Input v,
  ::tensorflow::Input vhat,
  ::tensorflow::Input beta1_power,
  ::tensorflow::Input beta2_power,
  ::tensorflow::Input lr,
  ::tensorflow::Input beta1,
  ::tensorflow::Input beta2,
  ::tensorflow::Input epsilon,
  ::tensorflow::Input grad
)
ResourceApplyAdamWithAmsgrad
ResourceApplyAdamWithAmsgrad(
  const ::tensorflow::Scope & scope,
  ::tensorflow::Input var,
  ::tensorflow::Input m,
  ::tensorflow::Input v,
  ::tensorflow::Input vhat,
  ::tensorflow::Input beta1_power,
  ::tensorflow::Input beta2_power,
  ::tensorflow::Input lr,
  ::tensorflow::Input beta1,
  ::tensorflow::Input beta2,
  ::tensorflow::Input epsilon,
  ::tensorflow::Input grad,
  const ResourceApplyAdamWithAmsgrad::Attrs & attrs
)
operator::tensorflow::Operation
operator::tensorflow::Operation() const
Public static functions
UseLocking
Attrs UseLocking(bool x)
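A short sketch of wiring the setter into the attrs-taking constructor; the scope and input tensors are assumed to already exist:

```cpp
// Sketch: build an Attrs with use_locking enabled, then pass it to the
// second constructor. scope, var, m, v, vhat, beta1_power, beta2_power,
// lr, beta1, beta2, epsilon, and grad are assumed to be defined already.
auto attrs = ResourceApplyAdamWithAmsgrad::UseLocking(true);
ResourceApplyAdamWithAmsgrad apply(scope, var, m, v, vhat, beta1_power,
                                   beta2_power, lr, beta1, beta2, epsilon,
                                   grad, attrs);
```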