paddle.nn.functional.l1_loss(input: Tensor, label: Tensor, reduction: _ReduceMode = 'mean', name: str | None = None) → Tensor [source]

Computes the L1 Loss of Tensor input and label as follows.

If reduction is set to 'none', the loss is:

\[Out = \lvert input - label \rvert\]

If reduction is set to 'mean', the loss is:

\[Out = MEAN(\lvert input - label \rvert)\]

If reduction is set to 'sum', the loss is:

\[Out = SUM(\lvert input - label \rvert)\]
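
For intuition, the three reduction modes correspond to taking the elementwise absolute difference and then optionally averaging or summing it. A minimal sketch (not part of the official examples) using paddle.abs, paddle.mean and paddle.sum:

>>> import paddle

>>> input = paddle.to_tensor([[1.5, 0.8], [0.2, 1.3]])
>>> label = paddle.to_tensor([[1.7, 1.0], [0.4, 0.5]])

>>> elementwise = paddle.abs(input - label)   # equivalent to reduction='none'
>>> mean_loss = paddle.mean(elementwise)      # equivalent to reduction='mean'
>>> sum_loss = paddle.sum(elementwise)        # equivalent to reduction='sum'
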
Parameters
  • input (Tensor) – The input tensor. Its shape is [N, *], where N is the batch size and * means any number of additional dimensions. Its data type should be float32, float64, int32 or int64.

  • label (Tensor) – The label tensor. Its shape is [N, *], the same as input. Its data type should be float32, float64, int32 or int64.

  • reduction (str, optional) – Indicates the reduction to apply to the loss; the candidates are 'none' | 'mean' | 'sum'. If reduction is 'none', the unreduced loss is returned; if reduction is 'mean', the mean of the loss is returned; if reduction is 'sum', the summed loss is returned. Default is 'mean'.

  • name (str|None, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, the L1 Loss of Tensor input and label. If reduction is 'none', the shape of the output loss is [N, *], the same as input. If reduction is 'mean' or 'sum', the shape of the output loss is [].

Examples

>>> import paddle

>>> input = paddle.to_tensor([[1.5, 0.8], [0.2, 1.3]])
>>> label = paddle.to_tensor([[1.7, 1], [0.4, 0.5]])

>>> l1_loss = paddle.nn.functional.l1_loss(input, label)
>>> print(l1_loss)
Tensor(shape=[], dtype=float32, place=Place(cpu), stop_gradient=True,
        0.34999999)

>>> l1_loss = paddle.nn.functional.l1_loss(input, label, reduction='none')
>>> print(l1_loss)
Tensor(shape=[2, 2], dtype=float32, place=Place(cpu), stop_gradient=True,
        [[0.20000005, 0.19999999],
         [0.20000000, 0.79999995]])

>>> l1_loss = paddle.nn.functional.l1_loss(input, label, reduction='sum')
>>> print(l1_loss)
Tensor(shape=[], dtype=float32, place=Place(cpu), stop_gradient=True,
        1.39999998)
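
A typical training-style usage computes gradients through the loss. The sketch below assumes the usual dynamic-graph autograd workflow (a tensor created with stop_gradient=False, followed by backward()); it is an illustration, not part of the official examples:

>>> import paddle

>>> input = paddle.to_tensor([[1.5, 0.8], [0.2, 1.3]], stop_gradient=False)
>>> label = paddle.to_tensor([[1.7, 1.0], [0.4, 0.5]])

>>> loss = paddle.nn.functional.l1_loss(input, label)  # 'mean' reduction
>>> loss.backward()
>>> # For the 'mean' reduction the gradient of the loss w.r.t. input is
>>> # sign(input - label) / numel, i.e. each entry is +-0.25 here.
>>> print(input.grad)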