
paddle.distributed.communication.stream.reduce(tensor: Tensor, dst: int = 0, op: _ReduceOp = 0, group: Group | None = None, sync_op: bool = True, use_calc_stream: bool = False) → task | None [source]

Perform a specific reduction (for example, sum or max) on a tensor across devices and send the result to the destination device.

Parameters
  • tensor (Tensor) – The input tensor on each rank. The result will overwrite this tensor after communication. Supported data types: float16, float32, float64, int32, int64, int8, uint8 or bool.

  • dst (int, optional) – Rank of the destination device. If none is given, use 0 as default.

  • op (ReduceOp.SUM|ReduceOp.MAX|ReduceOp.MIN|ReduceOp.PROD, optional) – The reduction operation to apply. If none is given, use ReduceOp.SUM as default (see the sketch after this list for a non-default op).

  • group (Group|None, optional) – The group to communicate in. If none is given, use the global group as default.

  • sync_op (bool, optional) – Indicates whether the communication is synchronous. If none is given, use True as default.

  • use_calc_stream (bool, optional) – Indicates whether the communication is done on the calculation stream. If none is given, use False as default. This option is designed for high-performance scenarios; do not turn it on unless you clearly understand what it does.
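
As an illustration of the op and group arguments, here is a minimal sketch. It assumes exactly two ranks (0 and 1) are running under dygraph mode; the explicit group and the MAX reduction are only illustrative, not required.

>>> import paddle
>>> import paddle.distributed as dist

>>> dist.init_parallel_env()
>>> # assumed: exactly two ranks, 0 and 1, form the group
>>> group = dist.new_group(ranks=[0, 1])
>>> # rank 0 holds [1], rank 1 holds [2]
>>> data = paddle.to_tensor([dist.get_rank() + 1])
>>> # MAX reduction instead of the default SUM; blocks until done because sync_op=True
>>> task = dist.stream.reduce(data, dst=0, op=dist.ReduceOp.MAX, group=group, sync_op=True)
>>> # on rank 0, data now holds [2], the maximum across both ranks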

Returns

Return a task object.

Warning

This API only supports dygraph mode for now.

Examples

>>> import paddle
>>> import paddle.distributed as dist

>>> dist.init_parallel_env()
>>> local_rank = dist.get_rank()
>>> if local_rank == 0:
...     data = paddle.to_tensor([[4, 5, 6], [4, 5, 6]])
... else:
...     data = paddle.to_tensor([[1, 2, 3], [1, 2, 3]])
>>> task = dist.stream.reduce(data, dst=0, sync_op=False)
>>> task.wait()  # type: ignore[union-attr]
>>> out = data.numpy()
>>> print(out)
>>> # [[5, 7, 9], [5, 7, 9]] (2 GPUs, out for rank 0)
>>> # [[1, 2, 3], [1, 2, 3]] (2 GPUs, out for rank 1)
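
The examples above assume two GPUs. Such a snippet is typically saved to a script (say demo.py, a file name used here only for illustration) and started with the paddle.distributed.launch module, for example:

python -m paddle.distributed.launch --gpus=0,1 demo.py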