tensorflow::ops::ApplyRMSProp
#include <training_ops.h>
Update '*var' according to the RMSProp algorithm.
Summary
Note that in the dense implementation of this algorithm, ms and mom will update even if the grad is zero, but in the sparse implementation, ms and mom will not update in iterations during which the grad is zero.
mean_square = decay * mean_square + (1-decay) * gradient ** 2
Delta = learning_rate * gradient / sqrt(mean_square + epsilon)

ms <- rho * ms_{t-1} + (1-rho) * grad * grad
mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon)
var <- var - mom
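To make the update concrete, here is a minimal scalar sketch of a single RMSProp step in plain C++. The hyperparameter values are illustrative assumptions, not defaults of the op:

```cpp
// Illustrative one-step RMSProp update on scalars (assumed values).
#include <cmath>
#include <cstdio>

int main() {
  float var = 1.0f, ms = 0.0f, mom = 0.0f;
  const float lr = 0.1f, rho = 0.9f, momentum = 0.0f, epsilon = 1e-10f;
  const float grad = 0.5f;

  // ms <- rho * ms + (1 - rho) * grad^2
  ms = rho * ms + (1.0f - rho) * grad * grad;
  // mom <- momentum * mom + lr * grad / sqrt(ms + epsilon)
  mom = momentum * mom + lr * grad / std::sqrt(ms + epsilon);
  // var <- var - mom
  var -= mom;

  // Prints ms=0.025, mom~=0.3162, var~=0.6838 for the values above.
  std::printf("ms=%f mom=%f var=%f\n", ms, mom, var);
  return 0;
}
```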
Arguments:
- scope: A Scope object
- var: Should be from a Variable().
- ms: Should be from a Variable().
- mom: Should be from a Variable().
- lr: Scaling factor. Must be a scalar.
- rho: Decay rate. Must be a scalar.
- momentum: Momentum. Must be a scalar.
- epsilon: Ridge term. Must be a scalar.
- grad: The gradient.
Optional attributes (see Attrs):
- use_locking: If True, updating of the var, ms, and mom tensors is protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
Returns:
- Output: Same as "var".
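Below is a hedged end-to-end sketch of how this op can be constructed and run with the C++ client API. The variable shapes, initial values, and hyperparameters are illustrative assumptions, and the surrounding Variable, Assign, Const, and ClientSession calls come from the standard C++ ops and client headers rather than from this page:

```cpp
// Usage sketch (assumptions: 1-D float parameters of shape {2}, arbitrary
// hyperparameter values). Builds the graph, initializes the variables,
// then applies one RMSProp step in place.
#include <iostream>
#include <vector>

#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/cc/ops/training_ops.h"

int main() {
  using namespace tensorflow;
  using namespace tensorflow::ops;

  Scope root = Scope::NewRootScope();

  // Parameter variable plus the two RMSProp accumulators.
  auto var = Variable(root, {2}, DT_FLOAT);
  auto ms  = Variable(root, {2}, DT_FLOAT);
  auto mom = Variable(root, {2}, DT_FLOAT);

  auto init_var = Assign(root, var, Const(root, {1.0f, 2.0f}));
  auto init_ms  = Assign(root, ms,  Const(root, {0.0f, 0.0f}));
  auto init_mom = Assign(root, mom, Const(root, {0.0f, 0.0f}));

  // Scalar hyperparameters and the gradient for this step.
  auto lr       = Const(root, 0.01f);
  auto rho      = Const(root, 0.9f);
  auto momentum = Const(root, 0.0f);
  auto epsilon  = Const(root, 1e-10f);
  auto grad     = Const(root, {0.5f, -0.5f});

  // In-place update; the op's output is the updated "var".
  auto update = ApplyRMSProp(root, var, ms, mom, lr, rho, momentum, epsilon, grad);

  ClientSession session(root);
  std::vector<Tensor> outputs;
  TF_CHECK_OK(session.Run({init_var, init_ms, init_mom}, &outputs));
  TF_CHECK_OK(session.Run({update}, &outputs));
  std::cout << "updated var: " << outputs[0].DebugString() << std::endl;
  return 0;
}
```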
Constructors and Destructors

| Constructor |
|---|
| ApplyRMSProp(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input ms, ::tensorflow::Input mom, ::tensorflow::Input lr, ::tensorflow::Input rho, ::tensorflow::Input momentum, ::tensorflow::Input epsilon, ::tensorflow::Input grad) |
| ApplyRMSProp(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input ms, ::tensorflow::Input mom, ::tensorflow::Input lr, ::tensorflow::Input rho, ::tensorflow::Input momentum, ::tensorflow::Input epsilon, ::tensorflow::Input grad, const ApplyRMSProp::Attrs & attrs) |
Public attributes

| Member | Type |
|---|---|
| operation | Operation |
| out | ::tensorflow::Output |
Public functions

| Function | Return type |
|---|---|
| node() const | ::tensorflow::Node * |
| operator::tensorflow::Input() const | |
| operator::tensorflow::Output() const | |
Public static functions

| Function | Return type |
|---|---|
| UseLocking(bool x) | Attrs |
Structs

| Struct | Description |
|---|---|
| tensorflow::ops::ApplyRMSProp::Attrs | Optional attribute setters for ApplyRMSProp. |
Public attributes
operation
Operation operation
out
::tensorflow::Output out
Public functions
ApplyRMSProp
ApplyRMSProp( const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input ms, ::tensorflow::Input mom, ::tensorflow::Input lr, ::tensorflow::Input rho, ::tensorflow::Input momentum, ::tensorflow::Input epsilon, ::tensorflow::Input grad )
ApplyRMSProp
ApplyRMSProp( const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input ms, ::tensorflow::Input mom, ::tensorflow::Input lr, ::tensorflow::Input rho, ::tensorflow::Input momentum, ::tensorflow::Input epsilon, ::tensorflow::Input grad, const ApplyRMSProp::Attrs & attrs )
node
::tensorflow::Node * node() const
operator::tensorflow::Input
operator::tensorflow::Input() const
operator::tensorflow::Output
operator::tensorflow::Output() const
Public static functions
UseLocking
Attrs UseLocking( bool x )
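For example, a short sketch (assuming the inputs from the usage example above are in scope) of supplying the optional attribute through this setter and the Attrs-taking constructor:

```cpp
// Hypothetical continuation of the earlier sketch: request locked updates.
auto update_locked = ApplyRMSProp(root, var, ms, mom, lr, rho, momentum,
                                  epsilon, grad,
                                  ApplyRMSProp::UseLocking(true));
```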