forked from NVIDIA/apex
Support all the softmax extensions and cherry-pick transformer-related commits #101
Open
hubertlu-tw wants to merge 25 commits into master from dev/hubertlu/run_transformer
Conversation
* new kernel
* added the unit tests
* clean up unittest
* use float
* more clean up
* remove the long seq test case
Signed-off-by: Yi Dong <yidong@nvidia.com>
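For context, the softmax extensions referenced throughout this PR fuse a scaled (and optionally masked) softmax into a single kernel. The following is a plain-PyTorch reference of that computation, written only to illustrate the semantics; it is not the fused kernel's actual API.

```python
# Plain-PyTorch reference for the scaled masked softmax semantics that the
# fused CUDA/HIP kernels implement in one pass. Shapes follow the usual
# Megatron attention-score layout [batches, heads, query_len, key_len];
# the mask marks positions that must NOT be attended to.
from typing import Optional

import torch


def scaled_masked_softmax_reference(scores: torch.Tensor,
                                    mask: Optional[torch.Tensor],
                                    scale: float) -> torch.Tensor:
    scores = scores * scale
    if mask is not None:
        # Common Megatron convention: fill masked positions with a large
        # negative value before the softmax.
        scores = scores.masked_fill(mask.bool(), -10000.0)
    return torch.nn.functional.softmax(scores, dim=-1)
```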
… (NVIDIA#1448)
* less mem consumption by fused generic softmax tests ran with RTX 3070 Ti
* Deduplicate qlen of 1234
Signed-off-by: Masaki Kozuki <mkozuki@nvidia.com>
… (NVIDIA#1451)
* Use xmlrunner.XMLTestRunner accordingly. TODO: - [x] Remove `subTest` because it's not compatible with the current way of running L0 tests
* use `torch.testing` more to enable xmlrunner
* Remove `subTest` for xmlrunner
* removing subTest
* not depend on an env var
* fix syntax errors
* open with `"wb"`
* xml file per dir
* remove comment-out
* Refactor `TestTransformer`: define member methods (#5)
* setUpClass to define `test_` methods
* manually define
* add a missing test
* remove print
* remove ext
Signed-off-by: Masaki Kozuki <mkozuki@nvidia.com>
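As a rough illustration of the xmlrunner-based flow described in that commit, the pattern is to discover a test directory and write one XML report for it, with the report file opened in `"wb"` mode. The directory and file names below are hypothetical placeholders, not the ones used by the actual test scripts.

```python
# Rough illustration only: run one test directory with xmlrunner and write a
# single XML report for it, opened with "wb" as in the commit message above.
import unittest

import xmlrunner  # from the unittest-xml-reporting package

suite = unittest.defaultTestLoader.discover("tests/L0/run_transformer")
with open("run_transformer_results.xml", "wb") as report:
    xmlrunner.XMLTestRunner(output=report, verbosity=2).run(suite)
```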
… to use `torch.testing.assert_close` instead of `numpy.testing.assert_allclose`; the former uses slightly looser default tolerances.
Signed-off-by: Masaki Kozuki <mkozuki@nvidia.com>
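For reference, the assertion switch amounts to something like the following; the tensors are stand-ins, not actual test fixtures.

```python
# Stand-in tensors illustrating the switch from numpy.testing.assert_allclose
# to torch.testing.assert_close; the latter compares tensors directly and
# picks dtype-dependent default tolerances.
import numpy
import torch

actual = torch.randn(4, 8)
expected = actual.clone()

# Old style: move to NumPy and compare with explicit tolerances.
numpy.testing.assert_allclose(actual.numpy(), expected.numpy(), rtol=1e-7, atol=0)

# New style: compare the tensors directly.
torch.testing.assert_close(actual, expected)
```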
* apex.amp migration to torch.cuda.amp
* add autocast tests
* split with and without autocast
Signed-off-by: Masaki Kozuki <mkozuki@nvidia.com>
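A minimal sketch of the torch.cuda.amp training pattern the tests migrate to; the model, optimizer, and data below are placeholders, not the actual test fixtures.

```python
# Placeholder model/optimizer/data showing the torch.cuda.amp pattern that
# replaces apex.amp: autocast for the forward pass, GradScaler for the
# backward pass and optimizer step.
import torch

model = torch.nn.Linear(16, 16).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()
inputs = torch.randn(8, 16, device="cuda")

with torch.cuda.amp.autocast():
    loss = model(inputs).float().sum()

scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
optimizer.zero_grad()
```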
* Label smoothing in vocab parallel cross entropy
* Fix context saving
* Remove .item() calls
* Update tests
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
Signed-off-by: Masaki Kozuki <mkozuki@nvidia.com>
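The vocab-parallel cross entropy shards the vocabulary across tensor-parallel ranks; the following is a single-process reference for the label-smoothing math only, in one common formulation, and not the parallel kernel itself.

```python
# Single-process reference for label-smoothed cross entropy (one common
# formulation): blend the NLL of the target with a uniform distribution over
# the vocabulary. The vocab-parallel version additionally shards the vocab
# dimension across tensor-parallel ranks.
import torch
import torch.nn.functional as F


def label_smoothed_cross_entropy(logits: torch.Tensor,
                                 target: torch.Tensor,
                                 smoothing: float) -> torch.Tensor:
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(dim=-1, index=target.unsqueeze(-1)).squeeze(-1)
    uniform = -log_probs.mean(dim=-1)
    return ((1.0 - smoothing) * nll + smoothing * uniform).mean()
```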
…atron pipeline parallelism (NVIDIA#1475)
* Refactor how dist Adam handles overlapped grad sync: each grad bucket independently keeps track of grads that have been generated. Add helper function to create callback functions. Change default param arg in grad norm functions to None. Perform communication for checkpointing in main stream to avoid memory pool overheads.
* Support Megatron pipeline parallelism with async grad reduction: enables async grad reduction in first pipeline stage during last backward pass, and disables async grad reduction in all other pipeline stages.
* Review suggestions from crcrpar: add unit test for pipeline parallelism with custom sync context. Style tweaks.
* Use unittest assert functions in pipeline parallelism test (review suggestion from crcrpar)
* Optionally disable stream synchronization after batched p2p communication
* Add test cases with `sync_batch_comm=False` only when pytorch/pytorch#82450 is included in pytorch
* utilize existing test methods
* consistent naming
* silly boy, to skip the sync, set False
* cosmetic
* Test with async pipelining w/o sync after batch_isend_irecv
* again, set sync_batch_comm to False
* Remove `torch.testing._internal.common_cuda`
Signed-off-by: Masaki Kozuki <mkozuki@nvidia.com>
Co-authored-by: Sangkug Lym <slym@nvidia.com>
Co-authored-by: Aidyn-A <Aidyn-A@users.noreply.github.com>
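Conceptually, `sync_batch_comm=False` drops the extra device-wide synchronization and relies only on the per-request waits. The following schematic shows batched p2p communication with torch.distributed in that style; it is not the apex code path, and the ranks and tensors are illustrative.

```python
# Schematic of batched p2p communication where only the returned requests are
# waited on; the extra torch.cuda.synchronize() is what sync_batch_comm=False
# skips once pytorch/pytorch#82450 is available. Assumes an initialized NCCL
# process group; ranks and tensors are illustrative.
import torch
import torch.distributed as dist


def exchange(send_tensor, recv_tensor, send_to, recv_from, sync_batch_comm=True):
    ops = [
        dist.P2POp(dist.isend, send_tensor, send_to),
        dist.P2POp(dist.irecv, recv_tensor, recv_from),
    ]
    for req in dist.batch_isend_irecv(ops):
        req.wait()
    if sync_batch_comm:
        torch.cuda.synchronize()
```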
…lelism (NVIDIA#1514)
* Add option to use no_sync context with interleaved pipeline parallelism
* Add unit test for no_sync context with interleaved pipeline parallelism
* Debug no_sync context support in interleaved pipeline parallelism
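The no_sync idea is the same one torch.nn.parallel.DistributedDataParallel exposes: suppress the gradient all-reduce for intermediate micro-batches and only reduce on the last one. The sketch below is a generic DDP illustration of that pattern, not the apex interleaved-schedule implementation.

```python
# Generic no_sync pattern with DDP: gradients accumulate locally for all but
# the last micro-batch and are all-reduced only on the final backward pass.
# apex applies the same idea inside its interleaved pipeline-parallel schedule.
# Model, batches, and optimizer are placeholders and assume an initialized
# process group.
import contextlib

from torch.nn.parallel import DistributedDataParallel as DDP


def accumulate_and_step(model: DDP, micro_batches, optimizer):
    for i, batch in enumerate(micro_batches):
        last = i == len(micro_batches) - 1
        ctx = contextlib.nullcontext() if last else model.no_sync()
        with ctx:
            loss = model(batch).sum()
            loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```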
…nstead of torch_ucc (NVIDIA#1495)
* update HAS_TORCH_UCC to TORCH_UCC
* add comments for failing tests
* move HAS_UCC to _ucc_utils.py
* whitespace
* small changes
* newline
* updated list of failing tests
* update failing tests list
Signed-off-by: Masaki Kozuki <mkozuki@nvidia.com>
* Update megatron fused softmax follow megatron-lm
* Add mask=None support in scaled_masked_softmax
* Update setup.py for scaled_softmax_cuda
* Add tests for fused_scale_softmax (mask=None)
* Assert grad equal in fused softmax test
* Revert "Assert grad equal in fused softmax test"
Signed-off-by: Yu Yao <yuya@nvidia.com>
Co-authored-by: Yu Yao <yuya@nvidia.com>
…)
* working test_bert_minimal.py
* remove some debugging statements
* working test_gpt_minimal.py
* test_dynamic_batchsize.py having issues with torch.backends.cudnn.allow_tf32
* working test_dynamic_batchsize.py
* refactor test_bert_minimal.py, need to investigate rng of MANUAL_SEED for nccl only pipeline with virtual_pipeline_model_parallel_size = 2
* add test_bert_minimal_alt.py for visibility
* update test_gpt_minimal.py
* lint
* update loss cutoff for bert test
* split with / without interleaving tests for bert
* use skipTest
* remove ONCE
* add ignore_unknown_args=True
* remove old testing files
* add num_devices logic to override_args
Signed-off-by: Masaki Kozuki <mkozuki@nvidia.com>
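For the torch.backends.cudnn.allow_tf32 issue mentioned in the test_dynamic_batchsize.py bullet above, the relevant switches are PyTorch's standard TF32 flags:

```python
# Standard PyTorch TF32 switches; disabling them forces full-precision float32
# matmuls and convolutions on Ampere-class (and newer) GPUs, which tightens
# numerical comparisons in tests.
import torch

torch.backends.cuda.matmul.allow_tf32 = False
torch.backends.cudnn.allow_tf32 = False
```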
To run the extension unit tests and the transformer unit tests, use the commands below.
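The exact commands depend on this fork's test layout; a minimal sketch, assuming the upstream apex layout where the transformer and fused-softmax tests live under tests/L0/run_transformer (paths may differ here), is:

```python
# Minimal sketch for discovering and running the unit tests with unittest.
# The directory below assumes the upstream apex layout; adjust it for this fork.
import unittest

suite = unittest.defaultTestLoader.discover("tests/L0/run_transformer",
                                             pattern="test_*.py")
unittest.TextTestRunner(verbosity=2).run(suite)
```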
Ran 120 tests in 506.928s
FAILED (errors=7, skipped=55)
TODO: Prepare an IFU PR and investigate the failing tests so that they can be skipped on ROCm.