tf.nn.selu
Computes scaled exponential linear: `scale * alpha * (exp(features) - 1)` if `features < 0`, `scale * features` otherwise.

```
tf.nn.selu(
    features, name=None
)
```
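A minimal usage sketch; the constants are fixed inside the op (`alpha ≈ 1.67326324`, `scale ≈ 1.05070099`):

```python
import tensorflow as tf

x = tf.constant([-1.0, 0.0, 1.0])
y = tf.nn.selu(x)

# Negative inputs follow scale * alpha * (exp(x) - 1);
# non-negative inputs are simply multiplied by scale.
print(y.numpy())  # approximately [-1.1113, 0.0, 1.0507]
```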
To be used together with `initializer = tf.variance_scaling_initializer(factor=1.0, mode='FAN_IN')`. For correct dropout, use `tf.contrib.nn.alpha_dropout`.
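`tf.contrib` was removed in TensorFlow 2.x. A minimal sketch of the equivalent setup there, assuming the built-in `lecun_normal` initializer (variance scaling with `scale=1.0`, `mode='fan_in'`) and `tf.keras.layers.AlphaDropout`:

```python
import tensorflow as tf

# Self-normalizing stack: SELU activations with lecun_normal
# initialization, plus AlphaDropout, which preserves the
# activations' mean and variance (unlike standard dropout).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='selu',
                          kernel_initializer='lecun_normal'),
    tf.keras.layers.AlphaDropout(0.1),
    tf.keras.layers.Dense(10),
])
```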
See [Self-Normalizing Neural Networks](https://arxiv.org/abs/1706.02515) (Klambauer et al., 2017).
| Args | |
|---|---|
| `features` | A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`. |
| `name` | A name for the operation (optional). |
| Returns |
|---|
| A `Tensor`. Has the same type as `features`. |