[AutoParallel] Support view mechanism in auto parallel dygraph mode. #59401
Conversation
Your PR has been submitted successfully. Thank you for contributing to the open-source project!
A review comment quoted this fragment of the reshape codegen template:

```python
RESHAPE_CALCULATE_LOCAL_SHAPE_TEMPLATE = """
    std::vector<int64_t> local_shape;
    for (size_t i = 0; i < shape.GetData().size(); i++) {
      auto out_dist_attr = PADDLE_GET_CONST(phi::distributed::TensorDistAttr, spmd_info.second[0]);
```
Suggestion: `auto&`
Done, thx~
LGTM
PR types
Others
PR changes
Others
Description
Pcard-73145
This PR supports the view mechanism for semi-auto parallelism in dygraph mode. Paddle currently has two kinds of view mechanism: stride_kernel, and input/output sharing device memory. Adapting stride_kernel to semi-auto dygraph brings no performance benefit, so that mechanism is not adopted here. What this PR adapts is the view mechanism in which an operator's input and output share device memory, e.g. reshape and batch_norm; for example, legacy_ops.yaml marks reshape with view : (x -> out).
The view mechanism differs from inplace. With inplace, the input and output are in essence the same paddle::Tensor: when the output is changed (not limited to changes of the memory contents), the input changes correspondingly. With the view mechanism, the input and output are different paddle::Tensor objects, and the internal data structures (DistTensor, DenseTensor) are different as well; only the holder of the DenseTensor points to the same block of device memory. Operators like reshape and flatten do not need to actually modify the input's memory: it is enough to set the attributes of the output's MetaTensor (shape, stride, offset, dtype, etc.) to change how the underlying data is accessed. The memory sharing of the view mechanism is implemented through ShareBufferWith, and ShareInplaceVersionCounterWith is used alongside it to guarantee the correctness of gradient computation.
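As a minimal sketch (assuming the phi::DenseTensor API; the helper function and its name are illustrative, not code from this PR), a view output can be wired up like this:

```cpp
#include "paddle/phi/core/dense_tensor.h"

// Hypothetical helper, for illustration only: turn `out` into a view of `in`.
void MakeViewOutput(const phi::DenseTensor& in, phi::DenseTensor* out) {
  // Share the holder: `out` now addresses the same device memory as `in`,
  // while remaining a distinct DenseTensor with its own metadata.
  out->ShareBufferWith(in);
  // Share the inplace version counter so autograd can detect writes through
  // either tensor and keep gradient computation correct.
  out->ShareInplaceVersionCounterWith(in);
}
```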
One problem in semi-auto dygraph, however, is that the input paddle::Tensor may go through a reshard and produce a new temporary DistTensor that does not share memory with the original input, which can make the view mechanism behave in unexpected ways. For example, batch_norm currently uses the fallback InferSPMD, so all inputs are resharded to replicated. If the inputs mean and variance are sharded, the outputs mean_out and variance_out actually share memory with the temporary replicated DistTensors replicated_mean and replicated_variance, so that the view mechanism stays correct under semi-auto parallelism.
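To make this aliasing behavior concrete, here is a self-contained toy model (plain C++, no Paddle APIs; all names are illustrative): holders are modeled as shared_ptr buffers, reshard allocates a fresh buffer, and the view output therefore aliases the temporary rather than the original input.

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Toy stand-in for a DenseTensor: metadata omitted, only the memory holder.
struct ToyTensor {
  std::shared_ptr<std::vector<float>> holder;
};

// Toy "reshard": like shard -> replicated, it produces a new buffer.
ToyTensor Reshard(const ToyTensor& in) {
  return ToyTensor{std::make_shared<std::vector<float>>(*in.holder)};
}

// Toy "view op": the output shares the input's holder (the ShareBufferWith idea).
ToyTensor ViewOf(const ToyTensor& in) { return ToyTensor{in.holder}; }

int main() {
  ToyTensor mean{std::make_shared<std::vector<float>>(4, 0.0f)};
  ToyTensor replicated_mean = Reshard(mean);     // temporary replicated tensor
  ToyTensor mean_out = ViewOf(replicated_mean);  // view of the temporary

  // mean_out aliases the temporary, not the original (sharded) input.
  assert(mean_out.holder == replicated_mean.holder);
  assert(mean_out.holder != mean.holder);
  return 0;
}
```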
Beyond that, the auto-parallel code generation treats the reshape op specially. reshape's shape argument is the global shape; the correct local_shape has to be derived from ReshapeInferSPMD's sharding propagation rules, otherwise the kernel cannot be run with the local DenseTensor and the global shape. Note that splits that do not divide evenly still need special handling; this PR does not yet cover that case. Example reshape code (see the hedged reconstruction below):
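The generated snippet itself is not included above; the following is a hedged reconstruction extrapolated from the RESHAPE_CALCULATE_LOCAL_SHAPE_TEMPLATE fragment quoted in the review (everything past the quoted lines, including the dims_mapping/process_mesh usage, is an assumption about the generated code, not a verbatim copy):

```cpp
// Derive local_shape from the global shape and the output dist attr
// produced by ReshapeInferSPMD. Uneven splits are not handled here.
std::vector<int64_t> local_shape;
for (size_t i = 0; i < shape.GetData().size(); i++) {
  auto& out_dist_attr = PADDLE_GET_CONST(
      phi::distributed::TensorDistAttr, spmd_info.second[0]);
  int64_t shape_i = shape.GetData()[i];
  // dims_mapping()[i] is the mesh dim this tensor dim is sharded on; -1
  // means the dim is replicated and the local extent equals the global one.
  int64_t mesh_dim = out_dist_attr.dims_mapping()[i];
  if (mesh_dim != -1) {
    shape_i /= out_dist_attr.process_mesh().dim_size(mesh_dim);
  }
  local_shape.push_back(shape_i);
}
```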