kaggle :: GPU P100 :: TypeError: LoraLayer_update_layer() got an unexpected keyword argument 'use_dora' #201
Just encountered the same error on Colab. Seems to be a new issue.
Just downgrade HF PEFT with `!pip install --force-reinstall --no-cache-dir peft==0.8.2`
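As a stopgap before the fix landed, pinning PEFT to a pre-DoRA release sidesteps the new keyword. A minimal sketch of a version guard (the assumption that the `use_dora` argument first appeared in the 0.9.x line is mine, inferred from the comment about the DoRA branch being merged recently):

```python
from importlib.metadata import PackageNotFoundError, version

def peft_predates_dora(ver: str) -> bool:
    """True if this peft version string (e.g. '0.8.2') predates the
    0.9.x releases assumed to have introduced `use_dora`."""
    major, minor = (int(p) for p in ver.split(".")[:2])
    return (major, minor) < (0, 9)

try:
    installed = version("peft")  # None if peft is not installed
except PackageNotFoundError:
    installed = None

print(peft_predates_dora("0.8.2"))  # → True
```

A check like this could warn users to downgrade (or upgrade Unsloth) before patching runs.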
Oh my, I will get this fixed ASAP
Yeah, it's because HuggingFace just merged their DoRA branch to main in the last few days. Probably that new argument is slipping through.
It would be great if we could integrate PEFT internally in Unsloth to prevent these breaking changes from external packages.
Thanks @RonanKMcGovern for sending me here. Let's set up CI using PEFT and Unsloth main to prevent this in the future. Do you want to set it up on your side, or should we look into adding it to PEFT? Regarding this specific error, if possible, add
@BenjaminBossan Should be fine in the future hopefully - I rewrote the code to use
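The comment above is truncated, so the exact rewrite isn't visible here. One defensive pattern for this class of bug (a hypothetical sketch, not necessarily Unsloth's actual fix) is to filter keyword arguments against the wrapped function's signature, so a new upstream keyword like `use_dora` is dropped instead of raising `TypeError`:

```python
import inspect

def call_with_supported_kwargs(func, *args, **kwargs):
    """Call `func`, dropping any keyword arguments its signature does
    not accept, so new upstream keywords (like `use_dora`) don't crash
    an older wrapper that predates them."""
    params = inspect.signature(func).parameters
    accepts_var_kw = any(
        p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()
    )
    if not accepts_var_kw:
        kwargs = {k: v for k, v in kwargs.items() if k in params}
    return func(*args, **kwargs)

# Stand-in for a pre-DoRA update_layer that has no `use_dora` parameter:
def legacy_update_layer(adapter_name, r, lora_alpha):
    return (adapter_name, r, lora_alpha)

# `use_dora` is silently dropped instead of raising TypeError:
print(call_with_supported_kwargs(legacy_update_layer, "default", 16, 16, use_dora=False))
# → ('default', 16, 16)
```

The trade-off is that silently dropping a keyword can hide a feature the caller expected (here, DoRA would simply not be applied), so a warning on dropped keys may be preferable in practice.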
Doing some tests on my end and will push it ASAP!! Sorry everyone for the issue and also thanks for notifying me!
@DeanChugall @dsbyprateekg @Jonaskouwenhoven Again sorry - just fixed it!! On Kaggle / Colab, a reinstall of Unsloth will have to take place - no need to disconnect - just press restart and run all. For local machines: Again sorry and also thanks for notifying me!!
@danielhanchen Thanks a lot for the quick response and the fix.
Can you please check and help me to resolve this as well?
@dsbyprateekg That's a weird bug - do u have a more complete error trace - ie are u just using our notebook?
It's my bad, I forgot to attach the logs.
@dsbyprateekg Is ur Kaggle instance connected to the internet?
Yes.
Hmm, weird bug indeed
@dsbyprateekg Oh try
@DeanChugall Thanks again! It solved my issue and I am able to proceed.
@dsbyprateekg Oh the datasets issue is fine as well? Also I'll reopen this temporarily for people who might have the same issue!! I'll close this in a few days :)
@danielhanchen Yes, the datasets issue was also resolved. But now I am facing another error while running the training command `trainer = SFTTrainer(`. Logs are attached.
so the issue is resolved once I commented the line

The next error is with command

File /opt/conda/lib/python3.10/site-packages/trl/trainer/sft_trainer.py:331, in SFTTrainer.train(self, *args, **kwargs)
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1624, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
File :272, in _fast_inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
File /opt/conda/lib/python3.10/site-packages/transformers/trainer_callback.py:370, in CallbackHandler.on_train_begin(self, args, state, control)
File /opt/conda/lib/python3.10/site-packages/transformers/trainer_callback.py:414, in CallbackHandler.call_event(self, event, args, state, control, **kwargs)
File /opt/conda/lib/python3.10/site-packages/transformers/integrations/integration_utils.py:767, in WandbCallback.on_train_begin(self, args, state, control, model, **kwargs)
File /opt/conda/lib/python3.10/site-packages/transformers/integrations/integration_utils.py:740, in WandbCallback.setup(self, args, state, model, **kwargs)
File /opt/conda/lib/python3.10/site-packages/wandb/sdk/wandb_init.py:1195, in init(job_type, dir, config, project, entity, reinit, tags, group, name, notes, magic, config_exclude_keys, config_include_keys, anonymous, mode, allow_val_change, resume, force, tensorboard, sync_tensorboard, monitor_gym, save_code, id, settings)
File /opt/conda/lib/python3.10/site-packages/wandb/sdk/wandb_init.py:1172, in init(job_type, dir, config, project, entity, reinit, tags, group, name, notes, magic, config_exclude_keys, config_include_keys, anonymous, mode, allow_val_change, resume, force, tensorboard, sync_tensorboard, monitor_gym, save_code, id, settings)
File /opt/conda/lib/python3.10/site-packages/wandb/sdk/wandb_init.py:306, in _WandbInit.setup(self, kwargs)
File /opt/conda/lib/python3.10/site-packages/wandb/sdk/wandb_login.py:317, in _login(anonymous, key, relogin, host, force, timeout, _backend, _silent, _disable_warning, _entity)
File /opt/conda/lib/python3.10/site-packages/wandb/sdk/wandb_login.py:247, in _WandbLogin.prompt_api_key(self)

UsageError: api_key not configured (no-tty). call wandb.login(key=[your_api_key])
@dsbyprateekg On wandb:

import os
os.environ["WANDB_DISABLED"] = "true"

then for TrainingArguments:

seed = 3407,
output_dir = "outputs",
report_to = "none",
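For reference, the two workarounds suggested above can be sketched in one runnable snippet; `training_kwargs` is an illustrative dict of the arguments that would be passed to `transformers.TrainingArguments`, not a complete configuration:

```python
import os

# Setting WANDB_DISABLED before training starts makes the transformers
# WandbCallback skip wandb entirely, so the API-key prompt that raised
# the UsageError above is never triggered.
os.environ["WANDB_DISABLED"] = "true"

# Alternatively, report_to="none" in TrainingArguments disables all
# logging integrations; sketch of the relevant kwargs only:
training_kwargs = dict(
    seed=3407,
    output_dir="outputs",
    report_to="none",
)

print(os.environ["WANDB_DISABLED"], training_kwargs["report_to"])
# → true none
```

The environment variable must be set before the trainer is constructed, which is why it goes at the top of the notebook rather than next to the training call.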
@danielhanchen I have added my wandb login but now I am facing Please check the logs and see if you find something wrong here.
@dsbyprateekg Oh on the topic of Kaggle - would the Mistral notebook we have help? https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook I tested that rigorously, so hopefully that one doesn't have any issues
Hi,
I am trying to run Alpaca + Gemma 7b full example.ipynb in the Kaggle environment and am getting the following error-
while running the below code-
Installed library versions are: langchain-0.1.9, langchain-community-0.0.24, langchain-core-0.1.27, sentence-transformers-2.4.0
Please have a look at this issue.