
convert_module in BitsandbytesPrecision is called before configure_model #18936

Closed
lucadiliello opened this issue Nov 3, 2023 · 2 comments · Fixed by #19060 or #19061
Labels
bug (Something isn't working) · precision: bnb (Bitsandbytes quantization) · ver: 2.1.x · ver: 2.2.x
Milestone
2.1.x

Comments

@lucadiliello
Contributor

lucadiliello commented Nov 3, 2023

Bug description

BitsandbytesPrecision.convert_module is called before LightningModule.configure_model. Because the actual model has not been instantiated yet at that point, no Linear layer is found and the error at https://github.com/Lightning-AI/lightning/blob/f5f4d0a26471400975fdb6ea59337eaf5c51b62f/src/lightning/fabric/plugins/precision/bitsandbytes.py#L102 is raised.
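
A minimal sketch of the pattern that triggers this (not taken verbatim from my setup; the module, layer sizes, and data below are illustrative): the layers are only created in configure_model, but the plugin's convert_module already runs during Strategy.connect.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import lightning as L
from lightning.pytorch.plugins import BitsandbytesPrecision


class BoringModel(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.model = None  # layers are created later, in configure_model()

    def configure_model(self):
        # Deferred instantiation: the Linear layer only exists after this hook runs
        if self.model is None:
            self.model = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        (x,) = batch
        return self.model(x).sum()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


train_loader = DataLoader(TensorDataset(torch.randn(8, 32)), batch_size=4)
trainer = L.Trainer(
    accelerator="cuda",
    devices=1,
    plugins=BitsandbytesPrecision(mode="nf4"),
    max_steps=1,
)
# Raises the TypeError shown below: convert_module() runs inside Strategy.connect(),
# before configure_model(), so the plugin sees a model without any Linear layers.
trainer.fit(BoringModel(), train_dataloaders=train_loader)
```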

What version are you seeing the problem on?

v2.1, master

How to reproduce the bug

No response

Error messages and logs

Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/nlp/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/ubuntu/anaconda3/envs/nlp/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/ubuntu/development/Transformers-Framework/transformers_framework/__main__.py", line 277, in <module>
    main(args)
  File "/home/ubuntu/development/Transformers-Framework/transformers_framework/__main__.py", line 171, in main
    trainer.fit(model, datamodule=datamodule, ckpt_path=hyperparameters.ckpt_path)
  File "/home/ubuntu/anaconda3/envs/nlp/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 545, in fit
    call._call_and_handle_interrupt(
  File "/home/ubuntu/anaconda3/envs/nlp/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py", line 43, in _call_and_handle_interrupt
    return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
  File "/home/ubuntu/anaconda3/envs/nlp/lib/python3.10/site-packages/lightning/pytorch/strategies/launchers/subprocess_script.py", line 102, in launch
    return function(*args, **kwargs)
  File "/home/ubuntu/anaconda3/envs/nlp/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 581, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/home/ubuntu/anaconda3/envs/nlp/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 933, in _run
    self.strategy.connect(model)
  File "/home/ubuntu/anaconda3/envs/nlp/lib/python3.10/site-packages/lightning/pytorch/strategies/strategy.py", line 113, in connect
    model = cast(pl.LightningModule, self.precision_plugin.convert_module(model))
  File "/home/ubuntu/anaconda3/envs/nlp/lib/python3.10/site-packages/lightning/fabric/plugins/precision/bitsandbytes.py", line 102, in convert_module
    raise TypeError(
TypeError: You are using the bitsandbytes precision plugin, but your model has no Linear layers. This plugin won't work for your model.

Environment

Current environment
  • CUDA:
    - GPU:
    - Tesla V100-SXM2-16GB
    - Tesla V100-SXM2-16GB
    - Tesla V100-SXM2-16GB
    - Tesla V100-SXM2-16GB
    - Tesla V100-SXM2-16GB
    - Tesla V100-SXM2-16GB
    - Tesla V100-SXM2-16GB
    - Tesla V100-SXM2-16GB
    - available: True
    - version: 12.1
  • Lightning:
    - bleurt-pytorch: 0.0.1
    - lightning: 2.1.0
    - lightning-utilities: 0.9.0
    - pytorch-lightning: 2.1.0
    - torch: 2.1.0+cu121
    - torchmetrics: 1.2.0
    - torchvision: 0.16.0
  • Packages:
    - accelerate: 0.23.0
    - aiohttp: 3.8.6
    - aiosignal: 1.3.1
    - anykeystore: 0.2
    - apex: 0.9.10.dev0
    - async-timeout: 4.0.3
    - attrs: 23.1.0
    - bitsandbytes: 0.41.1
    - bleurt-pytorch: 0.0.1
    - blingfire: 0.1.8
    - boto3: 1.28.69
    - botocore: 1.31.69
    - certifi: 2023.7.22
    - charset-normalizer: 3.3.1
    - click: 8.1.7
    - contourpy: 1.1.1
    - cryptacular: 1.6.2
    - cycler: 0.12.1
    - datasets: 2.14.6
    - deepspeed: 0.11.1
    - defusedxml: 0.7.1
    - dill: 0.3.7
    - einops: 0.7.0
    - filelock: 3.12.4
    - flash-attn: 2.0.4
    - fonttools: 4.43.1
    - frozenlist: 1.4.0
    - fsspec: 2023.10.0
    - ftfy: 6.1.1
    - greenlet: 3.0.1
    - hjson: 3.1.0
    - huggingface-hub: 0.17.3
    - hupper: 1.12
    - idna: 3.4
    - jinja2: 3.1.2
    - jmespath: 1.0.1
    - joblib: 1.3.2
    - kiwisolver: 1.4.5
    - lightning: 2.1.0
    - lightning-utilities: 0.9.0
    - llvmlite: 0.41.1
    - markdown-it-py: 3.0.0
    - markupsafe: 2.1.3
    - matplotlib: 3.8.0
    - mdurl: 0.1.2
    - mpmath: 1.3.0
    - multidict: 6.0.4
    - multiprocess: 0.70.15
    - networkx: 3.2
    - ninja: 1.11.1.1
    - nltk: 3.8.1
    - numba: 0.58.1
    - numpy: 1.26.1
    - oauthlib: 3.2.2
    - packaging: 23.2
    - pandas: 2.1.1
    - pastedeploy: 3.0.1
    - pbkdf2: 1.3
    - pillow: 10.1.0
    - pip: 23.3
    - plaster: 1.1.2
    - plaster-pastedeploy: 1.0.1
    - protobuf: 3.20.3
    - psutil: 5.9.6
    - py-cpuinfo: 9.0.0
    - pyarrow: 13.0.0
    - pydantic: 1.10.13
    - pygments: 2.16.1
    - pyparsing: 3.1.1
    - pyramid: 2.0.2
    - pyramid-mailer: 0.15.1
    - python-dateutil: 2.8.2
    - python3-openid: 3.2.0
    - pytorch-lightning: 2.1.0
    - pytz: 2023.3.post1
    - pyyaml: 6.0.1
    - regex: 2023.10.3
    - repoze.sendmail: 4.4.1
    - requests: 2.31.0
    - requests-oauthlib: 1.3.1
    - rich: 13.6.0
    - s3transfer: 0.7.0
    - safetensors: 0.4.0
    - scikit-learn: 1.3.2
    - scipy: 1.11.3
    - sentence-transformers: 2.2.2
    - sentencepiece: 0.1.99
    - setuptools: 68.0.0
    - six: 1.16.0
    - sqlalchemy: 2.0.23
    - sympy: 1.12
    - tensorboardx: 2.6.2.2
    - threadpoolctl: 3.2.0
    - timm: 0.9.8
    - tokenizers: 0.14.1
    - torch: 2.1.0+cu121
    - torchmetrics: 1.2.0
    - torchvision: 0.16.0
    - tqdm: 4.66.1
    - transaction: 3.1.0
    - transformer-engine: 0.13.0+8eae4ce
    - transformers: 4.34.1
    - translationstring: 1.4
    - triton: 2.1.0
    - typing-extensions: 4.8.0
    - tzdata: 2023.3
    - urllib3: 2.0.7
    - velruse: 1.1.1
    - venusian: 3.0.0
    - wcwidth: 0.2.9
    - webob: 1.8.7
    - wheel: 0.41.2
    - wtforms: 3.1.1
    - wtforms-recaptcha: 0.3.2
    - xformers: 0.0.22.post7
    - xxhash: 3.4.1
    - yarl: 1.9.2
    - zope.deprecation: 5.0
    - zope.interface: 6.1
    - zope.sqlalchemy: 3.1
  • System:
    - OS: Linux
    - architecture:
    - 64bit
    - ELF
    - processor: x86_64
    - python: 3.10.12
    - release: 6.2.0-1014-aws
    - version: #14~22.04.1-Ubuntu SMP Thu Oct 5 22:43:45 UTC 2023

More info

No response

cc @carmocca @awaelchli

@lucadiliello added the bug and needs triage labels on Nov 3, 2023
@awaelchli
Contributor

Thanks for opening this issue, Luca.
Since configure_model is general purpose, yes, we should move the convert call to a later point, after configure_model() has run. If you implemented configure_model for use with FSDP, note that this combination is not yet supported; the main work item to enable it is #18679.
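
A possible interim workaround (not part of the planned fix, and only viable if the module does not rely on deferred/meta-device instantiation) is to build the submodules eagerly in __init__, so that convert_module already finds Linear layers during Strategy.connect. A sketch:

```python
import torch
import lightning as L


class MyModel(L.LightningModule):  # hypothetical example module
    def __init__(self):
        super().__init__()
        # Build the layers here instead of in configure_model(), so they exist
        # when BitsandbytesPrecision.convert_module() runs during Strategy.connect().
        self.model = torch.nn.Linear(32, 2)

    # configure_model() can then be left out entirely. This sidesteps the ordering
    # bug, but gives up deferred instantiation, which is the main reason to use
    # configure_model() with FSDP in the first place.
```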

@awaelchli removed the needs triage label on Nov 4, 2023
@awaelchli added this to the 2.1.x milestone on Nov 4, 2023
@awaelchli added the precision: bnb label on Nov 4, 2023
@carmocca
Contributor

carmocca commented Nov 6, 2023

@awaelchli Would you move it from Strategy.connect to Strategy.setup?

Also, I wonder if we should move LightningModule.configure_model beside LightningModule.setup (before or after) to support loading with restore_checkpoint_after_setup == True: https://github.com/Lightning-AI/lightning/blob/master/src/lightning/pytorch/trainer/trainer.py#L949-L957. Do you remember if there's any limitation that prevents this? This was originally changed in #7652.
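
For readers following along, a rough schematic of the ordering under discussion (illustrative only, not the actual Trainer source):

```python
# Current flow (2.1, per the traceback above):
#   trainer._run(model)
#       strategy.connect(model)        # precision_plugin.convert_module(model) runs here
#       ... setup hooks ...
#       configure_model()              # Linear layers are created only now -> too late for bnb
#
# Direction discussed here:
#   trainer._run(model)
#       strategy.connect(model)        # no conversion yet
#       configure_model()              # possibly moved next to LightningModule.setup()
#       strategy.setup(trainer)        # precision_plugin.convert_module(model) would run here
#       ... checkpoint restore ...     # keeps restore_checkpoint_after_setup == True working
```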
