KLDivLoss

class torch.nn.KLDivLoss(size_average=None, reduce=None, reduction='mean', log_target=False)
[source]
The Kullback-Leibler divergence loss measure.

Kullback-Leibler divergence is a useful distance measure for continuous distributions, and is often useful when performing direct regression over the space of (discretely sampled) continuous output distributions.
As with NLLLoss, the input given is expected to contain log-probabilities and is not restricted to a 2D Tensor. The targets are interpreted as probabilities by default, but could be considered as log-probabilities with log_target set to True.

This criterion expects a target Tensor of the same size as the input Tensor.
The unreduced (i.e. with reduction set to 'none') loss can be described as:

    l(x, y) = L = \{ l_1, \dots, l_N \}, \qquad l_n = y_n \cdot (\log y_n - x_n)

where the index N spans all dimensions of input and L has the same shape as input. If reduction is not 'none' (default 'mean'), then:

    \ell(x, y) = \begin{cases} \operatorname{mean}(L) & \text{if reduction = 'mean'} \\ \operatorname{sum}(L) & \text{if reduction = 'sum'} \end{cases}

In default reduction mode 'mean', the losses are averaged for each minibatch over observations as well as over dimensions. 'batchmean' mode gives the correct KL divergence, where losses are averaged over the batch dimension only. 'mean' mode's behavior will be changed to match 'batchmean' in the next major release.
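A minimal sketch (random tensors with assumed shapes, for illustration only) of the pointwise 'none' loss described above:

    import torch
    import torch.nn.functional as F

    # input must hold log-probabilities; target holds probabilities by default
    input = F.log_softmax(torch.randn(3, 5), dim=1)
    target = F.softmax(torch.randn(3, 5), dim=1)

    # reduction='none' returns the pointwise terms y_n * (log y_n - x_n)
    pointwise = torch.nn.KLDivLoss(reduction='none')(input, target)
    print(pointwise.shape)  # torch.Size([3, 5]) -- same shape as input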
Parameters

- size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
- reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
- reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'batchmean' | 'sum' | 'mean'. 'none': no reduction will be applied. 'batchmean': the sum of the output will be divided by the batch size. 'sum': the output will be summed. 'mean': the output will be divided by the number of elements in the output. Default: 'mean'
- log_target (bool, optional) – Specifies whether target is passed in the log space. Default: False
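When the target is itself produced in log space (e.g. by log_softmax), it can be passed directly with log_target=True, avoiding an extra conversion to probabilities. A short sketch, with tensors chosen purely for illustration:

    import torch
    import torch.nn.functional as F

    input = F.log_softmax(torch.randn(3, 5), dim=1)
    log_target = F.log_softmax(torch.randn(3, 5), dim=1)

    # the target is already a log-probability, so set log_target=True
    loss = torch.nn.KLDivLoss(reduction='batchmean', log_target=True)(input, log_target)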
Note

size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction.

Note

reduction = 'mean' doesn't return the true KL divergence value; please use reduction = 'batchmean', which aligns with the mathematical definition of KL divergence. In the next major release, 'mean' will be changed to behave the same as 'batchmean'.
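The difference between the two reductions can be checked directly; a small sketch with random tensors (illustrative, not from the original docs):

    import torch
    import torch.nn.functional as F

    input = F.log_softmax(torch.randn(4, 10), dim=1)
    target = F.softmax(torch.randn(4, 10), dim=1)
    pointwise = torch.nn.KLDivLoss(reduction='none')(input, target)

    # 'mean' divides the summed loss by all 4 * 10 elements ...
    mean_loss = torch.nn.KLDivLoss(reduction='mean')(input, target)
    assert torch.allclose(mean_loss, pointwise.sum() / pointwise.numel())

    # ... while 'batchmean' divides by the batch size only
    batchmean_loss = torch.nn.KLDivLoss(reduction='batchmean')(input, target)
    assert torch.allclose(batchmean_loss, pointwise.sum() / input.size(0))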
Shape:
- Input: (N, *), where * means any number of additional dimensions
- Target: (N, *), same shape as the input
- Output: scalar by default. If reduction is 'none', then (N, *), the same shape as the input
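Since the input is not restricted to 2D, higher-dimensional tensors work as well; a brief sketch assuming a (batch, seq_len, vocab) layout:

    import torch
    import torch.nn.functional as F

    input = F.log_softmax(torch.randn(2, 7, 11), dim=-1)
    target = F.softmax(torch.randn(2, 7, 11), dim=-1)

    out = torch.nn.KLDivLoss(reduction='none')(input, target)
    print(out.shape)  # torch.Size([2, 7, 11]) -- same shape as the input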
© 2019 Torch Contributors
Licensed under the 3-clause BSD License.
https://pytorch.org/docs/1.8.0/generated/torch.nn.KLDivLoss.html