tf.keras.losses.KLD
Computes Kullback-Leibler divergence loss between y_true and y_pred.
tf.keras.losses.KLD(
    y_true, y_pred
)
  loss = sum(y_true * log(y_true / y_pred), axis=-1)
See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence
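For intuition, here is a small numeric sketch (the two distributions are invented for illustration) that evaluates the formula by hand and checks it against the function:

import numpy as np
import tensorflow as tf

# Two invented probability distributions for illustration.
y_true = np.array([[0.5, 0.5]])
y_pred = np.array([[0.9, 0.1]])

# Evaluate the formula directly: sum(y_true * log(y_true / y_pred), axis=-1).
manual = np.sum(y_true * np.log(y_true / y_pred), axis=-1)

loss = tf.keras.losses.KLD(y_true, y_pred)
print(loss.numpy(), manual)  # both approximately [0.5108]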
Standalone usage:

import numpy as np
import tensorflow as tf

y_true = np.random.randint(0, 2, size=(2, 3)).astype(np.float64)
y_pred = np.random.random(size=(2, 3))
loss = tf.keras.losses.kullback_leibler_divergence(y_true, y_pred)
assert loss.shape == (2,)  # the last axis of the (2, 3) inputs is summed away

# The implementation clips both inputs to [1e-7, 1] before computing the
# divergence, so apply the same clipping to reproduce the result by hand.
y_true = tf.keras.backend.clip(y_true, 1e-7, 1)
y_pred = tf.keras.backend.clip(y_pred, 1e-7, 1)
assert np.array_equal(
    loss.numpy(), np.sum(y_true * np.log(y_true / y_pred), axis=-1))
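Beyond standalone use, the loss is typically passed to Model.compile, either through this function's string alias or the tf.keras.losses.KLDivergence class wrapper. A minimal sketch, with an invented toy architecture:

import tensorflow as tf

# Toy classifier; the architecture and shapes are invented for illustration.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(3, activation='softmax'),  # outputs a distribution
])

# Either the string alias or the class form selects this loss.
model.compile(optimizer='adam', loss='kullback_leibler_divergence')
# model.compile(optimizer='adam', loss=tf.keras.losses.KLDivergence())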
  
| Args | |
|---|---|
| `y_true` | Tensor of true targets. |
| `y_pred` | Tensor of predicted targets. |

| Returns | |
|---|---|
| A `Tensor` with loss. | |
| Raises | |
|---|---|
| `TypeError` | If `y_true` cannot be cast to the dtype of `y_pred`. |
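The cast mentioned above also means integer targets are accepted and converted to the float dtype of `y_pred`; a small sketch with invented tensors:

import tensorflow as tf

# Integer one-hot targets; cast internally to y_pred's float32 dtype.
y_true = tf.constant([[1, 0, 0]])
y_pred = tf.constant([[0.8, 0.1, 0.1]])
print(tf.keras.losses.KLD(y_true, y_pred).numpy())  # approximately 0.2231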
© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r2.3/api_docs/python/tf/keras/losses/KLD