Add codestral mamba2 #32080

Status: Merged (68 commits, Aug 6, 2024)

Commits
100f054  add new model like (molbap, Jul 16, 2024)
4df8fd5  draft cuda forward - mismatched keys (sharding on conv1) (molbap, Jul 16, 2024)
eaf921f  match keys successfully (molbap, Jul 17, 2024)
299071f  fix split (molbap, Jul 17, 2024)
8c61fb2  get generation/forward running (wrong gens, norm?) (molbap, Jul 17, 2024)
2101c98  :update (ArthurZucker, Jul 17, 2024)
c1a4de7  some refactoring (ArthurZucker, Jul 17, 2024)
89c5422  fixes (ArthurZucker, Jul 17, 2024)
6570bed  works up until copy to cache (ArthurZucker, Jul 17, 2024)
41eb3ed  fix (ArthurZucker, Jul 17, 2024)
e330d94  update (ArthurZucker, Jul 17, 2024)
d60f1df  NON WORKING VERSION (ArthurZucker, Jul 17, 2024)
cd28689  version that work? (ArthurZucker, Jul 18, 2024)
8c6794f  nit (ArthurZucker, Jul 18, 2024)
c0b2f47  fix config (molbap, Jul 18, 2024)
80626b3  fix conversion script (molbap, Jul 18, 2024)
b2718c1  working cuda forward (molbap, Jul 18, 2024)
23db9b7  fix merge conflict (molbap, Jul 18, 2024)
13ab6fc  nit (ArthurZucker, Jul 18, 2024)
fb2186e  update (ArthurZucker, Jul 18, 2024)
22e9c5b  Merge branch 'add_codestral_mamba2' of github.com:huggingface/new-mod… (molbap, Jul 18, 2024)
490e79e  simplifcation (ArthurZucker, Jul 18, 2024)
cc90dba  make mamba slow simple work (ArthurZucker, Jul 18, 2024)
48084e9  no einops (ArthurZucker, Jul 18, 2024)
be65a7c  todo (ArthurZucker, Jul 18, 2024)
32b6017  fix style (molbap, Jul 18, 2024)
266a87d  no einops (ArthurZucker, Jul 18, 2024)
0cd4ecb  update fix no einsum (ArthurZucker, Jul 18, 2024)
ab4b7e5  nit (ArthurZucker, Jul 18, 2024)
bf5464f  Merge branch 'add_codestral_mamba2' of github.com:huggingface/new-mod… (molbap, Jul 19, 2024)
951359c  Merge branch 'add_codestral_mamba2' of github.com:huggingface/new-mod… (molbap, Jul 19, 2024)
abd9c5f  remove einops (molbap, Jul 19, 2024)
1befaa2  bug: scan_output differs strongly (molbap, Jul 19, 2024)
e60ea8c  add rms norm option (molbap, Jul 25, 2024)
b7ce3b1  fix fast + slow generation with and w/o cache :heavy_check_mark: (molbap, Jul 25, 2024)
7e14814  draft integration tests (molbap, Jul 25, 2024)
43e6989  remove a big chunk of the einsum (molbap, Jul 27, 2024)
394ae99  fix slow, fast generations, without any einsum (molbap, Jul 30, 2024)
b18e28c  fix copies (molbap, Jul 30, 2024)
0fce131  fix structure (molbap, Jul 30, 2024)
d80c2ce  fix up modeling and tests (molbap, Jul 31, 2024)
7648852  fix tests (molbap, Aug 1, 2024)
d0550ab  Merge branch 'main' into add_codestral_mamba2 (molbap, Aug 1, 2024)
7522ba9  clamping is indeed worse (molbap, Aug 1, 2024)
ed238b6  recover mamba2 cache test (molbap, Aug 1, 2024)
f75df9d  fix copies (molbap, Aug 1, 2024)
ecbd2e6  no cache position (yet) (molbap, Aug 1, 2024)
bd07f46  fix tf tests (molbap, Aug 1, 2024)
d06ae45  fix matmul for generate (molbap, Aug 2, 2024)
f8fa2d4  fixup (molbap, Aug 2, 2024)
e580482  skip cache tests for now (molbap, Aug 2, 2024)
5311fc3  [run-slow]mamba2 (molbap, Aug 2, 2024)
ec56cbe  tune out hidden states for padding (molbap, Aug 2, 2024)
803cbe7  test batched generation (molbap, Aug 2, 2024)
bcc76d3  propagate attention mask changes (molbap, Aug 2, 2024)
798ff1e  fix past length (molbap, Aug 5, 2024)
b295112  fix integration test (molbap, Aug 5, 2024)
fccd533  style (molbap, Aug 5, 2024)
cbd1622  address comments (molbap, Aug 6, 2024)
af58188  update readme (molbap, Aug 6, 2024)
fce50da  add mamba2 version check (molbap, Aug 6, 2024)
2dc979b  fix tests (molbap, Aug 6, 2024)
ce9d8fe  [run-slow]mamba2 (molbap, Aug 6, 2024)
c38647a  skip edge tests (molbap, Aug 6, 2024)
e068ba6  [run-slow]mamba2 (molbap, Aug 6, 2024)
0fac4dc  last fixup (molbap, Aug 6, 2024)
cce32fd  [run-slow]mamba2 (molbap, Aug 6, 2024)
7052786  update README (molbap, Aug 6, 2024)
2 changes: 2 additions & 0 deletions docs/source/en/_toctree.yml
@@ -436,6 +436,8 @@
        title: MADLAD-400
      - local: model_doc/mamba
        title: Mamba
      - local: model_doc/mamba2
        title: mamba2
      - local: model_doc/marian
        title: MarianMT
      - local: model_doc/markuplm
1 change: 1 addition & 0 deletions docs/source/en/index.md
@@ -194,6 +194,7 @@ Flax), PyTorch, and/or TensorFlow.
| [M2M100](model_doc/m2m_100) | ✅ | ❌ | ❌ |
| [MADLAD-400](model_doc/madlad-400) | ✅ | ✅ | ✅ |
| [Mamba](model_doc/mamba) | ✅ | ❌ | ❌ |
| [mamba2](model_doc/mamba2) | ✅ | ❌ | ❌ |
| [Marian](model_doc/marian) | ✅ | ✅ | ✅ |
| [MarkupLM](model_doc/markuplm) | ✅ | ❌ | ❌ |
| [Mask2Former](model_doc/mask2former) | ✅ | ❌ | ❌ |
106 changes: 106 additions & 0 deletions docs/source/en/model_doc/mamba2.md
@@ -0,0 +1,106 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Mamba 2

## Overview

The Mamba2 model was proposed in [Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality](https://arxiv.org/abs/2405.21060) by Tri Dao and Albert Gu. It is a State Space Model similar to Mamba 1, with better performance and a simplified architecture.


The abstract from the paper is the following:

*While Transformers have been the main architecture behind deep learning's success in language modeling, state-space models (SSMs) such as Mamba have recently been shown to match or outperform Transformers at small to medium scale. We show that these families of models are actually quite closely related, and develop a rich framework of theoretical connections between SSMs and variants of attention, connected through various decompositions of a well-studied class of structured semiseparable matrices. Our state space duality (SSD) framework allows us to design a new architecture (Mamba-2) whose core layer is a refinement of Mamba's selective SSM that is 2-8X faster, while continuing to be competitive with Transformers on language modeling.*

Tips:

- This version should support all implementations of Mamba 2, and in particular [Mamba-2 codestral](https://huggingface.co/mistralai/Mamba-Codestral-7B-v0.1) from Mistral AI. Mamba-2 codestral was released with a number of `groups` equal to 8, which can be thought of intuitively as similar to the number of kv heads in an attention-based model (see the sketch after this list).
- This model has two different forward passes, `torch_forward` and `cuda_kernels_forward`. The latter uses the original CUDA kernels if they are found in your environment, but it is slower on the prefill, i.e. it requires a "warmup run" due to high CPU overhead; see [here](https://github.com/state-spaces/mamba/issues/389#issuecomment-2171755306) and [also here](https://github.com/state-spaces/mamba/issues/355#issuecomment-2147597457). Without compilation, the `torch_forward` implementation is faster by a factor of 3 to 4.
- There are no positional embeddings in this model, but there is an `attention_mask` and specific logic to mask out hidden states in two places in the case of batched generation; see [here](https://github.com/state-spaces/mamba/issues/66#issuecomment-1863563829) as well. Because of this, in addition to the reimplementation of the mamba2 kernels, batched generation and cached generation are expected to show slight discrepancies.
- The results given by the CUDA kernels and the torch forward are also expected to differ slightly. The SSM algorithm relies heavily on tensor contractions, which have matmul equivalents, but the order of operations is slightly different, making the difference larger at smaller precisions.
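As a concrete illustration of the grouping, here is a minimal sketch that instantiates a toy model. The small sizes below are illustrative assumptions, not the codestral checkpoint's values:

```python
from transformers import Mamba2Config, Mamba2ForCausalLM

# Hypothetical toy sizes, for illustration only; the codestral checkpoint is far larger.
# `n_groups` plays a role loosely analogous to the number of kv heads in attention models.
config = Mamba2Config(
    num_hidden_layers=2,
    hidden_size=256,  # with expand=2, this gives an intermediate size of 512
    num_heads=8,      # num_heads * head_dim should match the intermediate size
    head_dim=64,
    n_groups=8,
)
model = Mamba2ForCausalLM(config)

# No flag selects the forward path at call time: `cuda_kernels_forward` is used when
# the original kernels are installed, with a fallback to `torch_forward` otherwise.
print(model.config.n_groups)
```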
**Review comment (Contributor):** Small nit: Can we add left padding to the tips so people avoid using right padding (which likely doesn't work as expected) 👀

**Reply (Contributor Author):** It's activated by default, but you're right, better to mention it here!

Another note: the zeroing out of hidden states corresponding to padding tokens is done in two places and has mostly been tested with left padding. Right padding will propagate noise down the line and is not guaranteed to yield satisfactory results. Setting `tokenizer.padding_side = "left"` ensures you are using the correct padding side.
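As a minimal sketch of left-padded batched generation, reusing the checkpoint and revision from the usage section below (not an official snippet from this PR):

```python
from transformers import AutoTokenizer, Mamba2ForCausalLM

model_id = 'mistralai/Mamba-Codestral-7B-v0.1'
tokenizer = AutoTokenizer.from_pretrained(model_id, revision='refs/pr/9', from_slow=True, legacy=False)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"  # right padding would propagate noise through the SSM states

model = Mamba2ForCausalLM.from_pretrained(model_id, revision='refs/pr/9')
inputs = tokenizer(["Hey how are you doing?", "Why is the sky blue?"], padding=True, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```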

This model was contributed by [Molbap](https://huggingface.co/Molbap), with tremendous help from [Anton Vlasjuk](https://github.com/vasqu).
**Review comment (Collaborator):** cc @vasqu thanks for your input in this PR! 🥳

**Reply (Contributor Author):** yes, thanks a lot @vasqu, glad to have had your insights along the way, it was incredibly helpful!

The original code can be found [here](https://github.com/state-spaces/mamba).


## Usage

### A simple generation example
```python
from transformers import Mamba2ForCausalLM, AutoTokenizer

model_id = 'mistralai/Mamba-Codestral-7B-v0.1'
tokenizer = AutoTokenizer.from_pretrained(model_id, revision='refs/pr/9', from_slow=True, legacy=False)
model = Mamba2ForCausalLM.from_pretrained(model_id, revision='refs/pr/9')
input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"]

out = model.generate(input_ids, max_new_tokens=10)
print(tokenizer.batch_decode(out))
```
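Note that `revision='refs/pr/9'` points to an open pull request on the Hub model repository; once its content is merged into the repository's main branch, the `revision` argument can be dropped.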

Here's a draft script for finetuning:
```python
from datasets import load_dataset
from trl import SFTTrainer
from peft import LoraConfig
from transformers import AutoTokenizer, Mamba2ForCausalLM, TrainingArguments

model_id = 'mistralai/Mamba-Codestral-7B-v0.1'
tokenizer = AutoTokenizer.from_pretrained(model_id, revision='refs/pr/9', from_slow=True, legacy=False)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"  # enforce left padding

model = Mamba2ForCausalLM.from_pretrained(model_id, revision='refs/pr/9')
dataset = load_dataset("Abirate/english_quotes", split="train")
# Without CUDA kernels, a batch size of 2 occupies one 80GB device,
# but precision can be reduced. Experiments and trials welcome!
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=2,
    logging_dir='./logs',
    logging_steps=10,
    learning_rate=2e-3,
)
lora_config = LoraConfig(
    r=8,
    target_modules=["embeddings", "in_proj", "out_proj"],
    task_type="CAUSAL_LM",
    bias="none",
)
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=training_args,
    peft_config=lora_config,
    train_dataset=dataset,
    dataset_text_field="quote",
)
trainer.train()
```
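The LoRA targets above adapt only dense layers: the token `embeddings` plus each mixer's `in_proj` and `out_proj` linear projections, leaving the SSM-specific parameters and convolutions frozen. This is a starting point rather than a tuned recipe; other target modules and ranks are worth experimenting with.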


## Mamba2Config

[[autodoc]] Mamba2Config

## Mamba2Model

[[autodoc]] Mamba2Model
- forward

## Mamba2ForCausalLM

[[autodoc]] Mamba2ForCausalLM
- forward
14 changes: 14 additions & 0 deletions src/transformers/__init__.py
@@ -544,6 +544,7 @@
    ],
    "models.m2m_100": ["M2M100Config"],
    "models.mamba": ["MambaConfig"],
    "models.mamba2": ["Mamba2Config"],
    "models.marian": ["MarianConfig"],
    "models.markuplm": [
        "MarkupLMConfig",

@@ -2545,6 +2546,13 @@
            "MambaPreTrainedModel",
        ]
    )
    _import_structure["models.mamba2"].extend(
        [
            "Mamba2ForCausalLM",
            "Mamba2Model",
            "Mamba2PreTrainedModel",
        ]
    )
    _import_structure["models.marian"].extend(["MarianForCausalLM", "MarianModel", "MarianMTModel"])
    _import_structure["models.markuplm"].extend(
        [

@@ -5225,6 +5233,7 @@
    )
    from .models.m2m_100 import M2M100Config
    from .models.mamba import MambaConfig
    from .models.mamba2 import Mamba2Config
    from .models.marian import MarianConfig
    from .models.markuplm import (
        MarkupLMConfig,

@@ -7026,6 +7035,11 @@
        MambaModel,
        MambaPreTrainedModel,
    )
    from .models.mamba2 import (
        Mamba2ForCausalLM,
        Mamba2Model,
        Mamba2PreTrainedModel,
    )
    from .models.marian import MarianForCausalLM, MarianModel, MarianMTModel
    from .models.markuplm import (
        MarkupLMForQuestionAnswering,
1 change: 1 addition & 0 deletions src/transformers/models/__init__.py
@@ -135,6 +135,7 @@
    lxmert,
    m2m_100,
    mamba,
    mamba2,
    marian,
    markuplm,
    mask2former,
2 changes: 2 additions & 0 deletions src/transformers/models/auto/configuration_auto.py
@@ -152,6 +152,7 @@
        ("lxmert", "LxmertConfig"),
        ("m2m_100", "M2M100Config"),
        ("mamba", "MambaConfig"),
        ("mamba2", "Mamba2Config"),
        ("marian", "MarianConfig"),
        ("markuplm", "MarkupLMConfig"),
        ("mask2former", "Mask2FormerConfig"),

@@ -439,6 +440,7 @@
        ("m2m_100", "M2M100"),
        ("madlad-400", "MADLAD-400"),
        ("mamba", "Mamba"),
        ("mamba2", "mamba2"),
        ("marian", "Marian"),
        ("markuplm", "MarkupLM"),
        ("mask2former", "Mask2Former"),
4 changes: 4 additions & 0 deletions src/transformers/models/auto/modeling_auto.py
@@ -144,6 +144,7 @@
        ("lxmert", "LxmertModel"),
        ("m2m_100", "M2M100Model"),
        ("mamba", "MambaModel"),
        ("mamba2", "Mamba2Model"),
        ("marian", "MarianModel"),
        ("markuplm", "MarkupLMModel"),
        ("mask2former", "Mask2FormerModel"),

@@ -309,6 +310,7 @@
        ("luke", "LukeForMaskedLM"),
        ("lxmert", "LxmertForPreTraining"),
        ("mamba", "MambaForCausalLM"),
        ("mamba2", "Mamba2ForCausalLM"),
        ("mega", "MegaForMaskedLM"),
        ("megatron-bert", "MegatronBertForPreTraining"),
        ("mobilebert", "MobileBertForPreTraining"),

@@ -393,6 +395,7 @@
        ("luke", "LukeForMaskedLM"),
        ("m2m_100", "M2M100ForConditionalGeneration"),
        ("mamba", "MambaForCausalLM"),
        ("mamba2", "Mamba2ForCausalLM"),
        ("marian", "MarianMTModel"),
        ("mega", "MegaForMaskedLM"),
        ("megatron-bert", "MegatronBertForCausalLM"),

@@ -471,6 +474,7 @@
        ("jetmoe", "JetMoeForCausalLM"),
        ("llama", "LlamaForCausalLM"),
        ("mamba", "MambaForCausalLM"),
        ("mamba2", "Mamba2ForCausalLM"),
        ("marian", "MarianForCausalLM"),
        ("mbart", "MBartForCausalLM"),
        ("mega", "MegaForCausalLM"),
1 change: 1 addition & 0 deletions src/transformers/models/auto/tokenization_auto.py
@@ -270,6 +270,7 @@
        ("lxmert", ("LxmertTokenizer", "LxmertTokenizerFast" if is_tokenizers_available() else None)),
        ("m2m_100", ("M2M100Tokenizer" if is_sentencepiece_available() else None, None)),
        ("mamba", (None, "GPTNeoXTokenizerFast" if is_tokenizers_available() else None)),
        ("mamba2", (None, "GPTNeoXTokenizerFast" if is_tokenizers_available() else None)),
        ("marian", ("MarianTokenizer" if is_sentencepiece_available() else None, None)),
        (
            "mbart",
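Since mamba2 is registered in each of the auto mappings above, checkpoints can also be loaded through the generic auto classes. A minimal sketch, reusing the model id and revision from the docs above:

```python
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

# The "mamba2" model_type in the checkpoint's config resolves through the mappings
# registered above to Mamba2Config, Mamba2ForCausalLM and GPTNeoXTokenizerFast.
model_id = "mistralai/Mamba-Codestral-7B-v0.1"
config = AutoConfig.from_pretrained(model_id, revision="refs/pr/9")
tokenizer = AutoTokenizer.from_pretrained(model_id, revision="refs/pr/9", from_slow=True, legacy=False)
model = AutoModelForCausalLM.from_pretrained(model_id, revision="refs/pr/9")
```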
58 changes: 58 additions & 0 deletions src/transformers/models/mamba2/__init__.py
@@ -0,0 +1,58 @@
# Copyright 2024 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from typing import TYPE_CHECKING

from ...utils import (
    OptionalDependencyNotAvailable,
    _LazyModule,
    is_torch_available,
)


_import_structure = {
    "configuration_mamba2": ["Mamba2Config", "Mamba2OnnxConfig"],
}

try:
    if not is_torch_available():
        raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
    pass
else:
    _import_structure["modeling_mamba2"] = [
        "Mamba2ForCausalLM",
        "Mamba2Model",
        "Mamba2PreTrainedModel",
    ]


if TYPE_CHECKING:
    from .configuration_mamba2 import Mamba2Config, Mamba2OnnxConfig

    try:
        if not is_torch_available():
            raise OptionalDependencyNotAvailable()
    except OptionalDependencyNotAvailable:
        pass
    else:
        from .modeling_mamba2 import (
            Mamba2ForCausalLM,
            Mamba2Model,
            Mamba2PreTrainedModel,
        )
else:
    import sys

    sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
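For context, this is the standard transformers lazy-import pattern: the `_import_structure` dict is handed to `_LazyModule`, and attributes are only resolved on first access, so the torch-dependent modeling classes are not imported until needed. A small sketch of the effect:

```python
# Importing the config does not pull in torch; the modeling classes are only
# resolved (and torch imported) when they are first accessed.
from transformers.models.mamba2 import Mamba2Config

config = Mamba2Config()  # plain Python object, usable in a torch-free environment
print(config.n_groups)
```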