- paddle.nn.functional.embedding ( x: Tensor, weight: Tensor, padding_idx: int | None = None, max_norm: float | None = None, norm_type: float = 2.0, sparse: bool = False, scale_grad_by_freq: bool = False, name: str | None = None ) → Tensor [source]
-
Used to look up the embedding vectors of the ids provided by x. The shape of the output Tensor is generated by appending the embedding size to the last dimension of the input Tensor's shape.
Note
Every id in x must satisfy \(0 <= id < weight.shape[0]\); otherwise the program will throw an exception and exit. For example:

x is a Tensor. padding_idx = -1
x.data = [[1, 3], [2, 4], [4, 127]]
x.shape = [3, 2]
weight.shape = [128, 16]

output is a Tensor:
out.shape = [3, 2, 16]
out.data = [[[0.129435295, 0.244512452, ..., 0.436322452],
             [0.345421456, 0.524563927, ..., 0.144534654]],
            [[0.345249859, 0.124939536, ..., 0.194353745],
             [0.945345345, 0.435394634, ..., 0.435345365]],
            [[0.945345345, 0.435394634, ..., 0.435345365],
             [0.0,         0.0,         ..., 0.0        ]]]  # padding data

Since the input padding_idx is less than 0, it is automatically converted to padding_idx = -1 + 128 = 127, so an all-zero vector is output wherever an id equals 127.
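The lookup and padding behavior described above can be sketched in plain NumPy. This is an illustrative re-implementation of the documented semantics, not Paddle's actual kernel:

```python
import numpy as np

def embedding_lookup(x, weight, padding_idx=None):
    """Illustrative NumPy sketch of the documented lookup semantics."""
    if padding_idx is not None and padding_idx < 0:
        # As in the Note: a negative padding_idx is converted to
        # weight.shape[0] + padding_idx.
        padding_idx = weight.shape[0] + padding_idx
    out = weight[x]  # gather rows; appends the embedding size as a new last dim
    if padding_idx is not None:
        out[x == padding_idx] = 0.0  # all-zero vectors for padding ids
    return out

x = np.array([[1, 3], [2, 4], [4, 127]])
weight = np.random.rand(128, 16).astype(np.float32)
out = embedding_lookup(x, weight, padding_idx=-1)
print(out.shape)        # (3, 2, 16)
print(out[2, 1].any())  # False: id 127 hit the converted padding_idx 127
```

Note that `weight[x]` uses advanced indexing, so `out` is a fresh array and zeroing the padding positions does not touch `weight` itself.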
- Parameters
-
x (Tensor) – A Tensor with type int32/int64, which contains the id information. The value of each input id should satisfy \(0 <= id < weight.shape[0]\).
weight (Tensor) – The lookup table parameter. A Tensor whose shape has two elements, which indicate the size of the embedding dictionary and the size of each embedding vector, respectively.
sparse (bool, optional) – The flag indicating whether to use sparse update. This parameter only affects the performance of the backward gradient update. It is recommended to set it to True because sparse updates are faster. However, some optimizers do not support sparse updates, such as api_paddle_optimizer_adadelta_Adadelta, api_paddle_optimizer_adamax_Adamax, and api_paddle_optimizer_lamb_Lamb. In these cases, sparse must be False. Default: False.
padding_idx (int|None, optional) – padding_idx needs to be in the interval [-weight.shape[0], weight.shape[0]). If \(padding\_idx < 0\), the \(padding\_idx\) will automatically be converted to \(weight.shape[0] + padding\_idx\). It will output all-zero padding data whenever the lookup encounters \(padding\_idx\) in an id, and the padding data will not be updated during training. If set to None, it has no effect on the output. Default: None.
max_norm (float, optional) – If provided, each looked-up embedding vector whose norm is larger than max_norm is renormalized to have norm max_norm. This updates the input embedding weight in place in dynamic graph mode. Default: None.
norm_type (float, optional) – The p of the p-norm to compute for the max_norm option. Default: 2.0.
scale_grad_by_freq (bool, optional) – Indicating whether to scale the gradients by the inverse frequency of the word ids in input x. Default: False.
name (str|None, optional) – For detailed information, please refer to Name. Usually name does not need to be set, and it is None by default.
- Returns
-
Tensor, the embedding Tensor mapped by x. The data type is the same as that of weight.
Examples
>>> import paddle
>>> import paddle.nn as nn

>>> x0 = paddle.arange(3, 6).reshape((3, 1)).astype(paddle.int64)
>>> w0 = paddle.full(shape=(10, 3), fill_value=2).astype(paddle.float32)

>>> x = paddle.to_tensor(x0, stop_gradient=False)
>>> print(x.numpy())
[[3]
 [4]
 [5]]
>>> print(x.shape)
[3, 1]

>>> w = paddle.to_tensor(w0, stop_gradient=False)
>>> print(w.numpy())
[[2. 2. 2.]
 [2. 2. 2.]
 [2. 2. 2.]
 [2. 2. 2.]
 [2. 2. 2.]
 [2. 2. 2.]
 [2. 2. 2.]
 [2. 2. 2.]
 [2. 2. 2.]
 [2. 2. 2.]]
>>> print(w.shape)
[10, 3]

>>> emb = nn.functional.embedding(
...     x=x, weight=w, sparse=True, name="embedding")
>>> print(emb.numpy())
[[[2. 2. 2.]]
 [[2. 2. 2.]]
 [[2. 2. 2.]]]
>>> print(emb.shape)
[3, 1, 3]
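How max_norm and norm_type interact can be sketched the same way. This is an assumption-level NumPy illustration of the parameter descriptions above (the renormalize-rows-over-the-threshold behavior), not Paddle's in-place update:

```python
import numpy as np

def renorm_rows(weight, ids, max_norm, norm_type=2.0):
    """Sketch: rows of `weight` that are looked up and whose p-norm
    exceeds max_norm are rescaled so their p-norm equals max_norm."""
    w = weight.copy()
    for i in np.unique(ids):
        norm = np.sum(np.abs(w[i]) ** norm_type) ** (1.0 / norm_type)
        if norm > max_norm:
            w[i] *= max_norm / norm
    return w

weight = np.full((4, 3), 2.0)  # every row has L2 norm 2*sqrt(3) ~ 3.46
w2 = renorm_rows(weight, np.array([0, 2]), max_norm=1.0)
print(np.linalg.norm(w2[0]))   # ~ 1.0  (looked up, renormalized)
print(np.linalg.norm(w2[1]))   # ~ 3.46 (not looked up, untouched)
```

With norm_type=2.0 this is the usual L2 renormalization; other values of p change which rows exceed the threshold and by how much they are scaled.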