add backward gradient computation for op argsort #22203
Merged: FlyingQianMM merged 2 commits into PaddlePaddle:develop from FlyingQianMM:argsort_add_grad_new on Jan 10, 2020
Conversation
wawltor approved these changes on Jan 10, 2020.
Xreki added a commit to Xreki/Paddle that referenced this pull request on Jan 13, 2020:
* Add the first implementation of fusion_group op PaddlePaddle#19621 (#3)
* Add the dynamic load of nvrtc, and support runtime compiling of CUDA kernels using nvrtc. test=develop
* Call the CUDA driver API to launch the kernel compiled by nvrtc. test=develop
* Disable for Mac and Windows. test=develop
* Refine the code to support manually specified num_threads and workload_per_thread. test=develop
* Refine the CUDA kernel to support large dims. test=develop
* Add DeviceCodePool to manage all device codes.
* Add the first implementation of fusion_group op.
* Add a unit test for fusion_group op.
* Add a check of the result.
* Add a check of nvrtc in the unit test. test=develop
* Add comments to explain the inputs, outputs and features of fusion_group op. test=develop
* Disable fusion_group op for Mac and Windows. test=develop
* Make the compiling of device code return a status instead of hanging. test=develop
* Check whether the CUDA driver library is present, and do not core dump when failing to call the CUDA driver API.
* Unify fusion_group_op's input and output names. test=develop
* Check the CUDA driver library in the unit test. test=develop
* Enable generating code for a given subgraph. PaddlePaddle#21126 (#4)
* Support sorting the subgraph.
* Remove the rearrangement of expressions because we use the sorted subgraph directly.
* Enable generating code for a subgraph composed of grad ops.
* Use expression information to check the accuracy in the unit test.
* Separate load and store from computation expressions. test=develop
* Improve the loading statements in generated code. test=develop
* Remove unused arguments from the formal list. test=develop
* Enable the detection of subgraphs of grad ops.
* Generate code for the detected subgraph in fusion_group_pass.
* Add an option in BuildStrategy to enable fusion_group_pass and add a unit test. test=develop
* Fix a bug when checking whether the shapes of all inputs are the same.
* Add debug information.
* Move subgraph_detector from inference/analysis to the common framework/ir directory. (#5) test=develop
* Call subgraph_detector in the fusion_group pass. test=develop
* Disable fusion_group when WITH_GPU is OFF. test=develop
* Refine all PADDLE_ENFORCE messages. test=develop
* Fix the case where some inputs are not defined in grad ops, and set op_role for the fused op. test=develop
* Add backward gradient computation for op argsort (PaddlePaddle#22203); use pre-commit. test=develop
* Fix the bug of profile update (PaddlePaddle#22207). test=develop
* Add NotImplementedError for multiple optimizers used on multiple places (PaddlePaddle#22181); assert the error only if num_devices > 1; set test_optimizer_in_control_flow in CMakeLists for multi-GPU use. test=develop
* Support running ResNet via a fluid-lite subgraph (PaddlePaddle#22191): added a unit test for running ResNet in fluid-lite subgraph mode, and updated the git commit id of the Lite dependency. test=develop
* Fix a bug in test_dygraph_mnist_fp16.py (PaddlePaddle#22222). test=develop
* Check dygraph weight names (PaddlePaddle#22140): add a parameter check; change the parameter name checker in the dygraph guard; fix a test layers error; revert some code to develop; fix example and comment errors. test=develop
* Only import the used test cases and functions (PaddlePaddle#22208).

Co-authored-by: FlyingQianMM <245467267@qq.com>
Co-authored-by: wangchaochaohu <wangchao66@baidu.com>
Co-authored-by: liym27 <33742067+liym27@users.noreply.github.com>
Co-authored-by: Wilber <jiweibo1028@outlook.com>
Co-authored-by: zhongpu <2013000149@qq.com>
Co-authored-by: hong <43953930+phlrain@users.noreply.github.com>
Co-authored-by: Zhang Ting <709968123@qq.com>
FlyingQianMM added a commit that referenced this pull request on Jan 14, 2020:
… test=release/1.7 (#22233)
The argsort op does not implement backward gradient computation, so this PR adds it.
test=develop
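The gradient rule being added is the standard one for a sort: each element of the sorted output is a copy of one input element, so the upstream gradient is scattered back to the original positions through the saved indices. Below is a minimal NumPy sketch of that rule, not Paddle's actual CPU/CUDA kernels; the function names are illustrative:

```python
import numpy as np

def argsort_forward(x, axis=-1):
    """Forward: return sorted values and the indices that produced them."""
    indices = np.argsort(x, axis=axis, kind="stable")
    out = np.take_along_axis(x, indices, axis=axis)
    return out, indices

def argsort_backward(grad_out, indices, axis=-1):
    """Backward: since out[j] = x[indices[j]] along `axis`,
    the gradient w.r.t. x is grad_x[indices[j]] = grad_out[j].
    Indices form a permutation along the axis, so plain
    assignment (no accumulation) is sufficient."""
    grad_x = np.zeros_like(grad_out)
    np.put_along_axis(grad_x, indices, grad_out, axis=axis)
    return grad_x
```

For example, sorting `[3, 1, 2]` yields indices `[1, 2, 0]`, so an upstream gradient `[10, 20, 30]` scatters back to `[30, 10, 20]`: the gradient of each sorted slot flows to the input position it was copied from.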