
paddle.device.cuda.memory_allocated ( device: _CudaPlaceLike | None = None ) → int [source]

Return the current size of GPU memory allocated to tensors on the given device.

Note

The size of GPU memory allocated to a tensor is 256-byte aligned in Paddle, which may be larger than the memory the tensor actually needs. For instance, a float32 0-D Tensor with shape [] on the GPU takes up 256 bytes of memory, even though storing a single float32 value requires only 4 bytes.
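The alignment rule above can be illustrated with plain arithmetic. The helper below is a hypothetical sketch of the rounding Paddle's allocator performs internally, not part of the Paddle API:

```python
def aligned_alloc_size(nbytes: int, alignment: int = 256) -> int:
    """Round nbytes up to the next multiple of `alignment` (sketch of
    the 256-byte alignment described in the Note above)."""
    return ((nbytes + alignment - 1) // alignment) * alignment

# A float32 0-D tensor stores 4 bytes of data but occupies a full
# 256-byte block:
print(aligned_alloc_size(4))    # 256
# A float32 tensor with 100 elements (400 bytes) rounds up to two blocks:
print(aligned_alloc_size(400))  # 512
```

This is why `memory_allocated` can report more bytes than the sum of the raw element sizes of the live tensors.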

Parameters

device (paddle.CUDAPlace|int|str|None, optional) – The device, the ID of the device, or the string name of the device like ‘gpu:x’. If device is None, the current device is used. Default: None.

Returns

The current size of GPU memory allocated to tensors on the given device, in bytes.

Return type

int

Examples

>>> import paddle
>>> paddle.device.set_device('gpu')

>>> memory_allocated_size = paddle.device.cuda.memory_allocated(paddle.CUDAPlace(0))
>>> memory_allocated_size = paddle.device.cuda.memory_allocated(0)
>>> memory_allocated_size = paddle.device.cuda.memory_allocated("gpu:0")