Add AdEMAMix optimizer #1360
Conversation
```python
# For parity with the bnb implementation we combine both fast
# and slow EMA stats into one stacked tensor.
state["m1_m2"] = p.new_zeros((2, *p.size()))
```
This is done for compatibility with the existing test suite; most other implementations keep two separate buffers here.
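As a hedged illustration of how such a stacked buffer can be consumed (the names mirror the snippet above; the slicing itself is an assumption about the surrounding update code, not a quote from this PR):

```python
# Sketch: row views into the stacked (2, *p.size()) buffer give the
# fast and slow EMA states without extra copies or allocations.
m1_m2 = state["m1_m2"]
m1 = m1_m2[0]  # fast EMA, analogous to Adam's exp_avg
m2 = m1_m2[1]  # slow EMA, specific to AdEMAMix
```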
```cuda
// AdEMAMix has an additional state buffer, which we packed
// into state1. We need thread-local storage here for these.
// TODO: Mark with [[maybe_unused]] after upgrade to min compiler.
float s3_vals[NUM_PER_THREAD];
```
There are a few extra memory allocations like this to support AdEMAMix. I haven't confirmed whether the compiler optimizes these out for instantiations with OPTIMIZER=ADAM, but even if it doesn't, the overhead is small.
This looks all good to me.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
* Add AdEMAMix optimizer
* Add PagedAdEMAMix32bit, AdEMAMix32bit
* Add PagedAdEMAMix32bit, AdEMAMix32bit
* AdEMAMix: add support for alpha/beta3 scheduling
* Update paged AdEMAMix
Adds support for the AdEMAMix optimizer described here: https://arxiv.org/abs/2409.03137
Includes blockwise 8-bit and 32-bit versions, each supporting paged operation.
AdEMAMix is a modification of Adam that introduces an additional, slower-moving EMA of the gradients. The paper reports that AdEMAMix forgets training data at a slower pace and can reach a loss similar to AdamW's with significantly less training data.
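For readers unfamiliar with the method, here is a minimal single-tensor sketch of the update rule as described in the paper. The function name, state layout, and hyperparameter defaults are illustrative assumptions, not this PR's API:

```python
# Sketch of one AdEMAMix step (https://arxiv.org/abs/2409.03137).
# Relative to Adam: a second, slow EMA m2 (decay beta3) is added to
# the numerator, scaled by alpha. Defaults here are illustrative.
import torch

def init_state(p):
    return {"step": 0,
            "m1": torch.zeros_like(p),   # fast EMA (as in Adam)
            "m2": torch.zeros_like(p),   # slow EMA (new in AdEMAMix)
            "nu": torch.zeros_like(p)}   # second moment

def ademamix_step(p, grad, state, lr=1e-3, betas=(0.9, 0.999, 0.9999),
                  alpha=5.0, eps=1e-8, weight_decay=0.0):
    beta1, beta2, beta3 = betas
    state["step"] += 1
    t = state["step"]

    m1, m2, nu = state["m1"], state["m2"], state["nu"]
    m1.mul_(beta1).add_(grad, alpha=1 - beta1)            # fast EMA
    m2.mul_(beta3).add_(grad, alpha=1 - beta3)            # slow EMA
    nu.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)  # second moment

    # Bias correction applies to the fast EMA and second moment only;
    # the slow EMA enters the numerator uncorrected, scaled by alpha.
    m1_hat = m1 / (1 - beta1 ** t)
    nu_hat = nu / (1 - beta2 ** t)

    if weight_decay != 0.0:
        p.mul_(1 - lr * weight_decay)                     # decoupled decay
    p.addcdiv_(m1_hat + alpha * m2, nu_hat.sqrt().add_(eps), value=-lr)
```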
TODO: Implement scheduler for alpha/beta3
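For completeness, a hedged usage sketch. `PagedAdEMAMix32bit` is named in the commit list above, but the constructor arguments are assumed to follow the usual bitsandbytes optimizer conventions:

```python
# Hypothetical usage; class name taken from the commit messages above,
# constructor signature assumed to mirror other bnb.optim optimizers.
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(128, 128).cuda()
opt = bnb.optim.PagedAdEMAMix32bit(model.parameters(), lr=1e-3)

out = model(torch.randn(4, 128, device="cuda"))
out.pow(2).mean().backward()
opt.step()
opt.zero_grad()
```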