Help Needed: Implementing QLoRA on AutoGPTQ Model - Unsupported Module Error #1858

I resolved the issue by trying two different approaches instead of quantizing with AutoGPTQ and then applying LoRA directly.

Approach 1: I applied 4-bit quantization using the bitsandbytes library and then added the LoRA adapters with PEFT (first sketch below).
Approach 2: I loaded the GPTQ model via Transformers with a GPTQConfig and then added the LoRA adapters with PEFT (second sketch below).
Following the advice given, loading the quantized model through Transformers before applying LoRA resolved the compatibility issues.
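
A minimal sketch of Approach 1, assuming a recent transformers, peft, and bitsandbytes stack; the model name and LoRA hyperparameters are placeholders I picked for illustration, not values from this discussion:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Hypothetical base checkpoint; substitute the model you are fine-tuning.
model_id = "meta-llama/Llama-2-7b-hf"

# 4-bit NF4 quantization via bitsandbytes.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Casts norms/embeddings to fp32 and prepares the quantized model for training.
model = prepare_model_for_kbit_training(model)

# Illustrative LoRA settings; tune r, alpha, and target modules for your model.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```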
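
And a sketch of Approach 2, assuming the optimum and auto-gptq packages are installed and the checkpoint is already GPTQ-quantized. The model name is a placeholder, and `use_exllama=False` is the argument in recent Transformers releases (older versions used `disable_exllama=True` instead):

```python
from transformers import AutoModelForCausalLM, GPTQConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Hypothetical GPTQ checkpoint; substitute the quantized model you are using.
model_id = "TheBloke/Llama-2-7B-GPTQ"

# The checkpoint is already quantized; GPTQConfig here only sets runtime options.
# The exllama kernels do not support training, so disable them for LoRA.
gptq_config = GPTQConfig(bits=4, use_exllama=False)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=gptq_config,
    device_map="auto",
)

model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

The key point in both cases is that the quantized model is loaded through the Transformers integration rather than directly through AutoGPTQ, so PEFT sees supported module types when injecting the LoRA layers.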
