
[AutoParallel] Support view mechanism in auto parallel dygraph mode. #59401

Merged
merged 13 commits into PaddlePaddle:develop on Nov 30, 2023

Conversation

@GhostScreaming (Contributor) commented Nov 27, 2023

PR types

Others

PR changes

Others

Description

Pcard-73145

Support the view mechanism for semi-automatic parallelism in dygraph mode. Paddle currently has two kinds of view mechanisms: stride_kernel, and sharing device memory between input and output. Adapting stride_kernel for semi-auto dygraph yields no performance benefit, so that mechanism is not adopted. This PR adapts the view mechanism in which an operator's input and output share device memory, as in `reshape` and `batch_norm`. For example, `legacy_ops.yaml` specifies `view: (x -> out)` for `reshape`.

The view mechanism differs from inplace. With inplace, the input and output are essentially the same `paddle::Tensor`: changing the output (not limited to modifying its memory contents) changes the input correspondingly. With the view mechanism, the input and output are different `paddle::Tensor`s, and the internal data structures (`DistTensor` and `DenseTensor`) are also different; only the `DenseTensor` holders point to the same block of device memory. Operators such as `reshape` and `flatten` do not need to actually modify the input's memory; they only need to set the attributes of the output's MetaTensor (such as `shape`, `stride`, `offset`, and `dtype`) to change how the data in memory is accessed.
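As a minimal sketch of this distinction (include paths are assumed; the function name and the flatten-style shape are illustrative, not taken from the generated code):

```cpp
#include "paddle/phi/core/dense_tensor.h"

// View: `out` is a distinct DenseTensor with its own meta (shape, strides,
// offset, dtype), but its holder aliases x's allocation, so no data is copied.
void MakeFlattenView(const phi::DenseTensor& x, phi::DenseTensor* out) {
  out->ShareBufferWith(x);                   // same underlying allocation
  out->ShareInplaceVersionCounterWith(x);    // keep inplace-version checks consistent
  out->Resize(phi::make_ddim({x.numel()}));  // flatten-style view: only meta changes
}

// Inplace, by contrast, involves no second tensor at all: the op writes its
// result back into the very same paddle::Tensor object that was passed in.
```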

Memory sharing for the view mechanism is implemented with `ShareBufferWith`, and `ShareInplaceVersionCounterWith` is used alongside it to guarantee the correctness of gradient computation. One problem in semi-auto dygraph mode, however, is that the input `paddle::Tensor` may go through a reshard, producing a new temporary `DistTensor` that does not share memory with the original input, which can make the view mechanism behave contrary to expectations. For example, `batch_norm` currently uses the fallback `InferSPMD`, so all inputs are resharded to replicated. If the inputs `mean` and `variance` are sharded, the outputs `mean_out` and `variance_out` actually share memory with the temporary replicated `DistTensor`s `replicated_mean` and `replicated_variance`, which keeps the view mechanism correct under semi-auto dygraph mode.
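A hedged sketch of the `batch_norm` case above (the function and parameter names are illustrative; the temporary replicated tensor stands in for the `replicated_mean` produced by the reshard and is passed in rather than created here):

```cpp
#include "paddle/phi/core/dense_tensor.h"

// After the fallback InferSPMD reshards a sharded `mean` to replicated, the
// dense kernel reads and writes the temporary replicated tensor. The view
// output therefore has to alias that temporary tensor, not the original
// sharded input, or the kernel's result would not be visible through it.
void AliasViewOutputAfterReshard(const phi::DenseTensor& replicated_mean,
                                 phi::DenseTensor* dense_mean_out) {
  dense_mean_out->ShareBufferWith(replicated_mean);
  dense_mean_out->ShareInplaceVersionCounterWith(replicated_mean);
}
```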

In addition, the auto parallel code generation handles the `reshape` operator specially. The `shape` argument of `reshape` is the global shape; the correct local shape has to be derived according to the sharding inference rule of `ReshapeInferSPMD`, otherwise the kernel cannot be executed with the local dense tensor and the global shape. Besides, the case where a sharded dimension is not evenly divisible needs special handling; this PR does not handle that case yet.

Example generated `reshape` code:

....

      // 6. PrepareData (DataTransform & Prepare Dense Input)
      dist_input_x = PrepareDataForDistTensor(dist_input_x, GetKernelInputArgDef(kernel.InputAt(0), kernel_backend), {}, kernel_result.is_stride_kernel);
      auto input_x = &dist_input_x->value();

      // dense_out_0 is a view output; it shares memory with the input.
      // If the input has been resharded, dense_out_0 may refer to
      // different memory than the original input.
      dense_out_0->ShareBufferWith(*input_x);
      dense_out_0->ShareInplaceVersionCounterWith(*input_x);

...

      // 8. Infer Local DenseTensor Meta
      phi::MetaTensor meta_dense_out_0(dense_out_0);
      phi::MetaTensor meta_dense_out_1(dense_out_1);
      // `shape` holds the global shape; derive the local shape from the
      // output dims_mapping inferred by ReshapeInferSPMD.
      std::vector<int64_t> local_shape;
      for (size_t i = 0; i < shape.GetData().size(); i++) {
        auto out_dist_attr = PADDLE_GET_CONST(phi::distributed::TensorDistAttr, spmd_info.second[0]);
        int64_t mesh_axis = out_dist_attr.dims_mapping()[i];
        if (mesh_axis >= 0) {
          // Dim i is sharded along mesh axis `mesh_axis`; divide by the
          // mesh size on that axis.
          int64_t mesh_dim = out_dist_attr.process_mesh().shape()[mesh_axis];
          local_shape.push_back(shape.GetData()[i] / mesh_dim);
        } else {
          // Dim i is replicated; keep the global extent.
          local_shape.push_back(shape.GetData()[i]);
        }
      }
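For reference, the local-shape rule above can be checked in isolation with a small standalone helper (an illustration, not Paddle code): a dimension sharded on mesh axis `m` is divided by the mesh size along `m`, and replicated dimensions keep their global extent.

```cpp
#include <cstdint>
#include <vector>

// Standalone illustration of the local_shape computation used above.
std::vector<int64_t> LocalShape(const std::vector<int64_t>& global_shape,
                                const std::vector<int64_t>& dims_mapping,
                                const std::vector<int64_t>& mesh_shape) {
  std::vector<int64_t> local_shape;
  for (size_t i = 0; i < global_shape.size(); ++i) {
    int64_t mesh_axis = dims_mapping[i];
    if (mesh_axis >= 0) {
      // Sharded dim: assumes even divisibility (the uneven case is not
      // handled by this PR, as noted in the description).
      local_shape.push_back(global_shape[i] / mesh_shape[mesh_axis]);
    } else {
      // Replicated dim: keep the global extent.
      local_shape.push_back(global_shape[i]);
    }
  }
  return local_shape;
}

// Example: LocalShape({8, 1024}, {0, -1}, {4}) returns {2, 1024}.
```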


paddle-bot bot commented Nov 27, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the result of CI firstly. See Paddle CI Manual for details.

RESHAPE_CALCULATE_LOCAL_SHAPE_TEMPLATE = """
std::vector<int64_t> local_shape;
for (size_t i = 0; i < shape.GetData().size(); i++) {
auto out_dist_attr = PADDLE_GET_CONST(phi::distributed::TensorDistAttr, spmd_info.second[0]);
Contributor commented:

auto&

@GhostScreaming (Contributor Author) replied:

Done, thx~

wanghuancoder previously approved these changes Nov 28, 2023

@wanghuancoder (Contributor) left a comment:

LGTM

chenwhql previously approved these changes Nov 28, 2023
@GhostScreaming merged commit cd45edf into PaddlePaddle:develop on Nov 30, 2023