Merge LoCo with Zero++ #6730
Conversation
@microsoft-github-policy-service agree
@XingyuXie thx for this effort.
Overall looks good to me. Just left a few comments.
As requested by @GuanhuaWang, we added the unit tests.
Thx @XingyuXie for the PR updates on the unit tests. Overall, it looks good to me.
Integration of LoCo Method into ZeRO++
Overview
This PR integrates the LoCo method, as outlined in this paper, into the ZeRO++ framework of DeepSpeed. The key enhancement is applying error-feedback compensation to the 4-bit gradients before communication: the quantization error from each step is stored locally and added back to the gradient at the next step, so compression error does not accumulate across steps. This approach improves pre-training loss without additional time overhead, though it requires extra GPU memory for the error buffers; the extent of the increase depends on model size and training configuration.
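To make the error-feedback step concrete, here is a minimal, self-contained PyTorch sketch of the idea. It is illustrative only, not the fused path in this PR: the names `LocoStyleCompensator` and `quantize_int4_symmetric` are hypothetical, and the real ZeRO++ quantized-gradient (qgZ) kernels are block-wise and fused with the communication collectives.

```python
import torch

def quantize_int4_symmetric(t: torch.Tensor):
    """Toy per-tensor symmetric 4-bit quantization: int codes in [-7, 7] plus a scale."""
    scale = t.abs().max().clamp(min=1e-8) / 7.0
    q = torch.clamp(torch.round(t / scale), -7, 7)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q * scale

class LocoStyleCompensator:
    """Hypothetical helper keeping one error-feedback buffer per gradient tensor."""

    def __init__(self):
        self.err = {}

    def compress(self, name: str, grad: torch.Tensor):
        # Add the residual error carried over from the previous step.
        buf = self.err.get(name, torch.zeros_like(grad))
        corrected = grad + buf
        # Quantize the compensated gradient to 4 bits for communication.
        q, scale = quantize_int4_symmetric(corrected)
        # Store what the quantizer lost; it is re-added at the next step.
        self.err[name] = corrected - dequantize(q, scale)
        return q, scale

# Usage: compress a gradient, then communicate `q` (4-bit codes) and `scale`.
comp = LocoStyleCompensator()
grad = torch.randn(1024)
q, scale = comp.compress("layer0.weight", grad)
recovered = dequantize(q, scale)  # what receiving ranks would reconstruct
```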
Experimental Results
We conducted pre-training experiments using the Llama2 architecture, varying the number of layers and hidden size. The training data was sampled from RedPajama-V2.
Findings:
Notably, even a smaller pre-training loss gap in larger models can translate into meaningful gains on downstream tasks.
Example Script
For reference, the run.sh script used for the 8B-parameter, 5B-token experiment is attached. The experiment was conducted on the DeepSpeed-Megatron platform.
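For readers who want to try ZeRO++ without the Megatron stack, below is a minimal sketch of a DeepSpeed setup that enables the quantized-gradient (qgZ) path this PR extends. The `zero_optimization` flags shown are the standard, documented ZeRO++ options; the model, batch size, and learning rate are placeholder values, and any LoCo-specific knob introduced by this PR is intentionally omitted, since its name is defined by the PR itself.

```python
import torch
import deepspeed

model = torch.nn.Linear(512, 512)  # stand-in model for illustration

ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "bf16": {"enabled": True},
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-4}},
    "zero_optimization": {
        "stage": 3,
        "zero_quantized_weights": True,    # qwZ: quantized weight all-gather
        "zero_hpz_partition_size": 8,      # hpZ: secondary partition within a node
        "zero_quantized_gradients": True,  # qgZ: 4-bit gradient communication,
                                           # the path LoCo's error feedback extends
    },
}

# Launch with the `deepspeed` CLI so the distributed environment is initialized.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```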
Acknowledgments
Special thanks to @GuanhuaWang for ongoing communication and guidance throughout this work.
We appreciate your consideration of this PR and welcome any feedback or questions!