class paddle.distributed.ColWiseParallel(gather_output: bool = False)

Column-wise parallel plan for a model-parallel (mp) configuration. It splits the layer's weight along the second dimension and its bias along the first dimension. This API is designed for paddle.nn.Linear and paddle.nn.Embedding. If any other paddle.nn.Layer instance is passed, the plan will still attempt to split layer.weight and layer.bias where they exist; a single-process sketch of the split dimensions follows the note below.

Note

  1. layer.weight should have two dims.

  2. layer.bias should have one dim.
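
To make the split dimensions concrete, here is a minimal single-process sketch using plain paddle.split; it only illustrates the resulting shard shapes and is not the distributed mechanism itself:

>>> import paddle
>>> linear = paddle.nn.Linear(8, 8)
>>> # Linear stores weight as [in_features, out_features] = [8, 8]
>>> # and bias as [out_features] = [8].
>>> # Column-wise parallel shards weight on dim 1 and bias on dim 0;
>>> # with 2 model-parallel ranks each shard would look like:
>>> w_shards = paddle.split(linear.weight, num_or_sections=2, axis=1)
>>> b_shards = paddle.split(linear.bias, num_or_sections=2, axis=0)
>>> print(w_shards[0].shape, b_shards[0].shape)
[8, 4] [4]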

Parameters

gather_output (bool) – Whether to gather the output, converting it from a local tensor to a global tensor. Gathering the local tensors into a global one triggers an extra communication op. Defaults to False, which keeps the output as a local tensor.
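
For instance, the two behaviors are chosen at construction time via the documented parameter:

>>> import paddle.distributed as dist
>>> # Default: each rank keeps its local (sharded) output.
>>> plan_local = dist.ColWiseParallel()
>>> # Gather shards into a global tensor (one extra communication).
>>> plan_gathered = dist.ColWiseParallel(gather_output=True)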

Examples

>>> import paddle
>>> import paddle.distributed as dist

>>> class MLP(paddle.nn.Layer):
...     def __init__(self):
...         super().__init__()
...         self.fc1 = paddle.nn.Linear(8, 8)
...         self.fc2 = paddle.nn.Linear(8, 8)
...
...     def forward(self, input):
...         return self.fc2(self.fc1(input))

>>> 
>>> layer = MLP()
>>> mp_config = {
...     'fc1': dist.ColWiseParallel()
... }
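
The example stops after building mp_config. As a hedged continuation, a plan like this is normally handed to a parallelization entry point; the dist.parallelize call and config layout sketched below are assumptions based on the wider paddle.distributed API, not something documented on this page:

>>> # Hedged sketch (assumed API; verify against the
>>> # paddle.distributed.parallelize documentation). Under a multi-GPU
>>> # launch, the plan would be applied roughly as:
>>> #
>>> #     layer = dist.parallelize(
>>> #         layer,
>>> #         config={'mp_config': {'parallelize_plan': mp_config}},
>>> #     )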