class paddle.nn.RReLU(lower: float = 0.125, upper: float = 0.3333333333333333, name: Optional[str] = None) [source]

RReLU activation layer.

Applies the randomized leaky rectified linear unit function to improve generalization performance, as described in the paper: Empirical Evaluation of Rectified Activations in Convolutional Network

During training, the layer randomly samples the negative slope for activation values, as described below:

\[RReLU(x)= \begin{cases} x, & \text{if } x \ge 0 \\ a \cdot x, & \text{otherwise} \end{cases}\]

where \(x\) is the input tensor and \(a\) is randomly sampled from a uniform distribution on the interval (\(lower\), \(upper\)).
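
As an illustration of the training-phase rule, here is a minimal NumPy sketch; the helper name rrelu_train is illustrative and not part of the Paddle API:

>>> import numpy as np

>>> def rrelu_train(x, lower=0.125, upper=1.0 / 3.0, rng=None):
...     """Training-phase RReLU: one slope a ~ U(lower, upper) per element."""
...     if rng is None:
...         rng = np.random.default_rng()
...     a = rng.uniform(lower, upper, size=x.shape)  # random negative slopes
...     return np.where(x >= 0, x, a * x)            # identity for x >= 0, a*x otherwise

>>> x = np.array([-2.0, 3.0, -4.0])
>>> out = rrelu_train(x, rng=np.random.default_rng(2023))  # negatives scaled by random a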

In the test phase, the negative slope takes the average value of \(lower\) and \(upper\):

\[RReLU(x)= \begin{cases} x, & \text{if } x \ge 0 \\ \frac{lower + upper}{2} \cdot x, & \text{otherwise} \end{cases}\]

where \(x\) is the input tensor and \(lower\) and \(upper\) are the bounds of the uniform distribution.
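
A minimal sketch of the test-phase behavior, using the documented RReLU layer together with the standard Layer.eval() switch; the commented values follow from the formula above rather than from captured output:

>>> import paddle

>>> x = paddle.to_tensor([-2.0, 3.0, -4.0], dtype='float32')
>>> rrelu = paddle.nn.RReLU(lower=0.1, upper=0.3)
>>> rrelu.eval()  # test phase: fixed slope (0.1 + 0.3) * 0.5 = 0.2
>>> out = rrelu(x)  # expected by the formula: [-0.40, 3.00, -0.80]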

Parameters
  • lower (float, optional) – The lower bound of uniform distribution. Default: 1.0/8.0.

  • upper (float, optional) – The upper bound of uniform distribution. Default: 1.0/3.0.

  • name (str|None, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Shape:
  • input: Tensor with any shape. Default dtype is float32.

  • output: Tensor with the same shape as input.

Examples

>>> import paddle
>>> paddle.seed(2023)

>>> input_tensor = paddle.to_tensor([[[[-2.0,  3.0, -4.0,  5.0],
...                                    [ 3.0, -4.0,  5.0, -6.0],
...                                    [-7.0, -8.0,  8.0,  9.0]],
...                                   [[ 1.0, -2.0, -3.0,  4.0],
...                                    [-5.0,  6.0,  7.0, -8.0],
...                                    [ 6.0,  7.0,  8.0,  9.0]]]], dtype='float32')
>>> rrelu_layer = paddle.nn.RReLU(0.1, 0.3)
>>> out = rrelu_layer(input_tensor)
>>> print(out)
Tensor(shape=[1, 2, 3, 4], dtype=float32, place=Place(cpu), stop_gradient=True,
[[[[-0.54633451,  3.        , -0.81611776,  5.        ],
   [ 3.        , -0.60768753,  5.        , -1.68630385],
   [-1.29360127, -1.45026064,  8.        ,  9.        ]],
  [[ 1.        , -0.58808362, -0.74662417,  4.        ],
   [-1.01785135,  6.        ,  7.        , -1.97268605],
   [ 6.        ,  7.        ,  8.        ,  9.        ]]]])
forward(x: Tensor) → Tensor

Defines the computation performed at every call. Should be overridden by all subclasses.

Parameters
  • *inputs (tuple) – unpacked tuple arguments

  • **kwargs (dict) – unpacked dict arguments
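
Calling the layer object invokes forward, so the two calls below are equivalent; this sketch reuses rrelu_layer and input_tensor from the example above:

>>> out1 = rrelu_layer(input_tensor)          # preferred: __call__ dispatches to forward
>>> out2 = rrelu_layer.forward(input_tensor)  # direct call, normally not needed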

extra_repr() → str

Extra representation of this layer; you can override this method to provide a custom representation for your own layer.
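
As an illustration, the extra representation shows up when the layer is printed; the exact string below is a sketch and may differ across Paddle versions:

>>> import paddle

>>> rrelu_layer = paddle.nn.RReLU(0.1, 0.3)
>>> print(rrelu_layer.extra_repr())  # e.g. 'lower=0.1, upper=0.3, ...'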