tf.nn.gelu
Compute the Gaussian Error Linear Unit (GELU) activation function.
tf.nn.gelu(features, approximate=False, name=None)
Gaussian error linear unit (GELU) computes x * P(X <= x), where X ~ N(0, 1); that is, the input is scaled by the standard normal cumulative distribution function evaluated at that input. The GELU nonlinearity weights inputs by their value, rather than gating them by their sign as ReLU does.
For example:
```python
>>> x = tf.constant([-3.0, -1.0, 0.0, 1.0, 3.0], dtype=tf.float32)
>>> y = tf.nn.gelu(x)
>>> y.numpy()
array([-0.00404951, -0.15865529,  0.        ,  0.8413447 ,  2.9959507 ],
      dtype=float32)
>>> y = tf.nn.gelu(x, approximate=True)
>>> y.numpy()
array([-0.00363752, -0.15880796,  0.        ,  0.841192  ,  2.9963627 ],
      dtype=float32)
```
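The `approximate` flag switches between the exact normal-CDF form and the widely used tanh-based approximation. The sketch below is not the library's implementation, just the two formulas written with plain TensorFlow ops; it should reproduce the outputs shown above.

```python
import math

import tensorflow as tf

def gelu_exact(x):
    # Exact form: x * P(X <= x) with X ~ N(0, 1), written via the error
    # function: Phi(x) = 0.5 * (1 + erf(x / sqrt(2))).
    return 0.5 * x * (1.0 + tf.math.erf(x / math.sqrt(2.0)))

def gelu_tanh_approx(x):
    # Tanh-based approximation (the form popularized by the GELU paper):
    # 0.5 * x * (1 + tanh(sqrt(2 / pi) * (x + 0.044715 * x**3))).
    return 0.5 * x * (1.0 + tf.tanh(
        math.sqrt(2.0 / math.pi) * (x + 0.044715 * tf.pow(x, 3))))

x = tf.constant([-3.0, -1.0, 0.0, 1.0, 3.0], dtype=tf.float32)
print(gelu_exact(x).numpy())        # should match tf.nn.gelu(x)
print(gelu_tanh_approx(x).numpy())  # should match tf.nn.gelu(x, approximate=True)
```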
| Args | |
|---|---|
| `features` | A `Tensor` representing preactivation values. |
| `approximate` | An optional `bool`. Defaults to `False`. Whether to enable approximation. |
| `name` | A name for the operation (optional). |
| Returns |
|---|
| A `Tensor` with the same type as `features`. |
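Because `tf.nn.gelu` is an ordinary callable that maps a `Tensor` to a `Tensor` of the same type, it can also be passed directly as an activation to Keras layers. A minimal sketch (the layer widths are arbitrary, chosen only for illustration):

```python
import tensorflow as tf

# Hypothetical two-layer model; tf.nn.gelu serves as the hidden-layer activation.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation=tf.nn.gelu, input_shape=(32,)),
    tf.keras.layers.Dense(10),
])
model.summary()
```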
References:
[Gaussian Error Linear Units (GELUs)](https://arxiv.org/abs/1606.08415) (Hendrycks & Gimpel, 2016).