
class paddle.distribution.LogNormal(loc: _LognormalLoc, scale: _LognormalScale) [source]

The LogNormal distribution with location loc and scale parameters.

\[\begin{aligned}X &\sim Normal(\mu, \sigma)\\Y = \exp(X) &\sim LogNormal(\mu, \sigma)\end{aligned}\]

Because the LogNormal distribution is obtained by an exponential transformation of the Normal distribution, \(Normal(\mu, \sigma)\) is called the underlying distribution of \(LogNormal(\mu, \sigma)\).

Mathematical details

The probability density function (pdf) is

\[pdf(x; \mu, \sigma) = \frac{1}{\sigma x \sqrt{2\pi}} e^{-\frac{(\ln(x) - \mu)^2}{2\sigma^2}}\]

In the above equation:

  • \(loc = \mu\): the mean of the underlying Normal distribution.

  • \(scale = \sigma\): the standard deviation of the underlying Normal distribution.
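
As a quick sanity check of the pdf above, the density at a single point can be computed by hand and compared with probs. This is a minimal sketch, not part of the official examples; the two values agree only up to floating-point tolerance.

>>> import math
>>> import paddle
>>> from paddle.distribution import LogNormal

>>> mu, sigma, x = 0.0, 1.0, 0.8
>>> # pdf(x; mu, sigma) = 1 / (sigma * x * sqrt(2*pi)) * exp(-(ln(x) - mu)**2 / (2 * sigma**2))
>>> manual = 1.0 / (sigma * x * math.sqrt(2 * math.pi)) * math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2))
>>> dist = LogNormal(loc=mu, scale=sigma)
>>> print(manual, dist.probs(paddle.to_tensor([x], dtype="float32")).item())   # both close to 0.48641577 (see the Examples section below)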

Parameters
  • loc (int|float|complex|list|tuple|numpy.ndarray|Tensor) – The means of the underlying Normal distribution. The data type can be float32, float64, complex64 or complex128.

  • scale (int|float|list|tuple|numpy.ndarray|Tensor) – The standard deviations of the underlying Normal distribution.

Examples

>>> import paddle
>>> from paddle.distribution import LogNormal

>>> # Define a single scalar LogNormal distribution.
>>> dist = LogNormal(loc=0., scale=3.)
>>> # Define a batch of two scalar valued LogNormals.
>>> # The underlying Normal of the first has mean 1 and standard deviation 11; that of the second has mean 2 and standard deviation 22.
>>> dist = LogNormal(loc=[1., 2.], scale=[11., 22.])
>>> # Get 3 samples, returning a 3 x 2 tensor.
>>> dist.sample((3, ))

>>> # Define a batch of two scalar valued LogNormals.
>>> # Their underlying Normals share mean 1 but have different standard deviations.
>>> dist = LogNormal(loc=1., scale=[11., 22.])

>>> # Complete example
>>> value_tensor = paddle.to_tensor([0.8], dtype="float32")

>>> lognormal_a = LogNormal([0.], [1.])
>>> lognormal_b = LogNormal([0.5], [2.])
>>> sample = lognormal_a.sample((2, ))
>>> # sample is a random tensor drawn from the lognormal distribution, with shape [2, 1]
>>> entropy = lognormal_a.entropy()
>>> print(entropy)
Tensor(shape=[1], dtype=float32, place=Place(cpu), stop_gradient=True,
    [1.41893852])
>>> lp = lognormal_a.log_prob(value_tensor)
>>> print(lp)
Tensor(shape=[1], dtype=float32, place=Place(cpu), stop_gradient=True,
    [-0.72069150])
>>> p = lognormal_a.probs(value_tensor)
>>> print(p)
Tensor(shape=[1], dtype=float32, place=Place(cpu), stop_gradient=True,
    [0.48641577])
>>> kl = lognormal_a.kl_divergence(lognormal_b)
>>> print(kl)
Tensor(shape=[1], dtype=float32, place=Place(cpu), stop_gradient=True,
    [0.34939718])
property mean : Tensor

Mean of lognormal distribution.

Returns

mean value.

Return type

Tensor

property variance : Tensor

Variance of lognormal distribution.

Returns

variance value.

Return type

Tensor
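
For reference, the closed-form moments of a lognormal variable follow from the underlying Normal parameters: the mean is \(e^{\mu + \sigma^2/2}\) and the variance is \((e^{\sigma^2} - 1) e^{2\mu + \sigma^2}\). The sketch below is not part of the official examples; it assumes the mean and variance properties implement these closed forms.

>>> import math
>>> from paddle.distribution import LogNormal

>>> mu, sigma = 1.0, 0.5
>>> dist = LogNormal(loc=mu, scale=sigma)
>>> # Closed-form lognormal moments, written in terms of the underlying Normal(mu, sigma).
>>> mean_closed = math.exp(mu + sigma ** 2 / 2)
>>> variance_closed = (math.exp(sigma ** 2) - 1) * math.exp(2 * mu + sigma ** 2)
>>> print(mean_closed, dist.mean.item())
>>> print(variance_closed, dist.variance.item())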

entropy() → Tensor

Shannon entropy in nats.

The entropy is

\[entropy(\mu, \sigma) = 0.5 \log (2 \pi e \sigma^2) + \mu\]

In the above equation:

  • \(loc = \mu\): the mean of the underlying Normal distribution.

  • \(scale = \sigma\): the standard deviation of the underlying Normal distribution.
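
The formula can be checked against the value printed in the Examples section above: for the underlying Normal(0, 1) it reduces to \(0.5 \log(2 \pi e) \approx 1.4189\). A minimal sketch, not part of the official examples:

>>> import math
>>> from paddle.distribution import LogNormal

>>> mu, sigma = 0.0, 1.0
>>> # entropy = 0.5 * log(2 * pi * e * sigma**2) + mu
>>> manual = 0.5 * math.log(2 * math.pi * math.e * sigma ** 2) + mu
>>> dist = LogNormal(loc=mu, scale=sigma)
>>> print(manual, dist.entropy().item())   # both close to 1.41893852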

Returns

Shannon entropy of lognormal distribution.

Return type

Tensor

property batch_shape : Sequence[int]

Returns the batch shape of the distribution.

Returns

batch shape

Return type

Sequence[int]

property event_shape : Sequence[int]

Returns the event shape of the distribution.

Returns

event shape

Return type

Sequence[int]
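
To make the two shapes concrete: a batch of two scalar-valued lognormals has a batch dimension of size 2 and no event dimension. This is a small sketch, not part of the official examples; the exact printed representation of the shapes may differ.

>>> from paddle.distribution import LogNormal

>>> dist = LogNormal(loc=[1., 2.], scale=[11., 22.])
>>> print(dist.batch_shape)   # a batch of size 2
>>> print(dist.event_shape)   # empty, since each lognormal is univariate (scalar-valued)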

log_prob(value: Tensor) → Tensor

The log probability evaluated at value.

Parameters

value (Tensor) – The value to be evaluated.

Returns

The log probability.

Return type

Tensor
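
Like the density, the log probability can be cross-checked against the closed form \(\log pdf(x) = -\log(\sigma x \sqrt{2\pi}) - \frac{(\ln(x) - \mu)^2}{2\sigma^2}\); the result below should match the -0.72069150 printed in the Examples section above. A minimal sketch, not part of the official examples:

>>> import math
>>> import paddle
>>> from paddle.distribution import LogNormal

>>> mu, sigma, x = 0.0, 1.0, 0.8
>>> manual = -math.log(sigma * x * math.sqrt(2 * math.pi)) - (math.log(x) - mu) ** 2 / (2 * sigma ** 2)
>>> dist = LogNormal(loc=[mu], scale=[sigma])
>>> print(manual, dist.log_prob(paddle.to_tensor([x], dtype="float32")).item())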

prob(value: Tensor) → Tensor

Probability density/mass function evaluated at value.

Parameters

value (Tensor) – The value to be evaluated.

Returns

The probability.

Return type

Tensor

probs(value: Tensor) → Tensor

Probability density/mass function.

Parameters

value (Tensor) – The input tensor.

Returns

The probability. The data type is the same as value.

Return type

Tensor

rsample(shape: Sequence[int] = []) → Tensor

Reparameterized sample from TransformedDistribution.

Parameters

shape (Sequence[int], optional) – The sample shape. Defaults to [].

Returns

The sample result.

Return type

Tensor
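
Because rsample draws through a differentiable reparameterization, gradients can flow back to loc and scale. The sketch below is not from the official examples and assumes standard dynamic-graph autograd usage; it only checks that gradients are produced.

>>> import paddle
>>> from paddle.distribution import LogNormal

>>> loc = paddle.to_tensor([0.5], stop_gradient=False)
>>> scale = paddle.to_tensor([1.0], stop_gradient=False)
>>> dist = LogNormal(loc, scale)
>>> draws = dist.rsample((100, ))   # reparameterized draws keep the graph to loc and scale
>>> loss = draws.mean()
>>> loss.backward()
>>> print(loc.grad is not None, scale.grad is not None)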

sample(shape: Sequence[int] = []) → Tensor

Sample from TransformedDistribution.

Parameters

shape (Sequence[int], optional) – The sample shape. Defaults to [].

Returns

The sample result.

Return type

Tensor
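
The shape of the returned tensor is the sample shape followed by the distribution's batch shape and event shape. A small sketch, not part of the official examples:

>>> from paddle.distribution import LogNormal

>>> dist = LogNormal(loc=[1., 2.], scale=[11., 22.])   # batch of two scalar lognormals
>>> samples = dist.sample((4, 3))
>>> print(samples.shape)   # expected [4, 3, 2]: sample shape (4, 3), then batch shape [2], empty event shape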

kl_divergence(other: LogNormal) → Tensor [source]

The KL-divergence between two lognormal distributions.

The KL divergence is computed as

\[KL\_divergence(\mu_0, \sigma_0; \mu_1, \sigma_1) = 0.5 (ratio^2 + (\frac{diff}{\sigma_1})^2 - 1 - 2 \ln {ratio})\]
\[ratio = \frac{\sigma_0}{\sigma_1}\]
\[diff = \mu_1 - \mu_0\]

In the above equation:

  • \(loc = \mu_0\): the mean of the current underlying Normal distribution.

  • \(scale = \sigma_0\): the standard deviation of the current underlying Normal distribution.

  • \(loc = \mu_1\): the mean of the other underlying Normal distribution.

  • \(scale = \sigma_1\): the standard deviation of the other underlying Normal distribution.

  • \(ratio\): the ratio of the standard deviations.

  • \(diff\): the difference between the means.
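
Since both distributions are obtained by applying the same transform (exp) to their underlying Normals, this is exactly the KL divergence between those Normals. With \(\mu_0 = 0, \sigma_0 = 1, \mu_1 = 0.5, \sigma_1 = 2\) the formula gives about 0.3494, matching the 0.34939718 printed in the Examples section above. A minimal sketch, not part of the official examples:

>>> import math
>>> from paddle.distribution import LogNormal

>>> mu0, sigma0, mu1, sigma1 = 0.0, 1.0, 0.5, 2.0
>>> ratio = sigma0 / sigma1
>>> diff = mu1 - mu0
>>> manual = 0.5 * (ratio ** 2 + (diff / sigma1) ** 2 - 1 - 2 * math.log(ratio))
>>> lognormal_a = LogNormal([mu0], [sigma0])
>>> lognormal_b = LogNormal([mu1], [sigma1])
>>> print(manual, lognormal_a.kl_divergence(lognormal_b).item())   # both close to 0.34939718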

Parameters

other (LogNormal) – instance of LogNormal.

Returns

The KL divergence between the two lognormal distributions.

Return type

Tensor