
Converting Compressed LLaMA2 Model to Hugging Face-Compatible Format #12

Open
codeit1792 opened this issue Jul 18, 2024 · 1 comment
@codeit1792


Description

We have successfully compressed a LLaMA2 model from 6.7 billion down to 4.4 billion parameters. However, I am running into issues when converting the compressed model to a Hugging Face-compatible format. Specifically, after saving with model.save_pretrained(output_dir) and tokenizer.save_pretrained(output_dir), the reloaded model reverts to the original 6.7 billion parameters and its output degrades into incoherent text.

Steps to Reproduce

  1. Compress a LLaMA2 model to 4.4 billion parameters.

  2. Use the following code to save the model:

    import torch
    from transformers import AutoTokenizer

    def save_compressed_model(model, tokenizer, output_dir):
        # Save the model and tokenizer using Hugging Face's save_pretrained method
        model.save_pretrained(output_dir, safe_serialization=True)
        tokenizer.save_pretrained(output_dir)

    # Load the compressed model; the checkpoint is a pickled nn.Module,
    # so torch.load restores the modified (compressed) module structure directly
    model_path = "path_to_your_compressed_model"
    tokenizer_path = "path_to_your_tokenizer"
    output_dir = "path_to_output_directory"

    model = torch.load(model_path)
    tokenizer = AutoTokenizer.from_pretrained(tokenizer_path)

    # Save the model and tokenizer
    save_compressed_model(model, tokenizer, output_dir)
  3. Attempt to load and use the model from the output directory (see the check sketched below).
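
For diagnosis, a parameter-count check along these lines makes the reversion visible (a sketch using the same placeholder paths as above; my working assumption is that from_pretrained rebuilds the architecture from the saved config.json, which still describes the uncompressed model):

    import torch
    from transformers import AutoModelForCausalLM

    def count_params(model):
        # Total number of parameters across the module tree
        return sum(p.numel() for p in model.parameters())

    # The pickled checkpoint preserves the compressed module structure
    compressed = torch.load("path_to_your_compressed_model")
    print(f"compressed: {count_params(compressed) / 1e9:.1f}B")  # ~4.4B

    # from_pretrained re-instantiates the architecture from config.json in the
    # output directory, then loads whatever weights match by name and shape
    reloaded = AutoModelForCausalLM.from_pretrained("path_to_output_directory")
    print(f"reloaded: {count_params(reloaded) / 1e9:.1f}B")  # ~6.7B observed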

Observed Behavior

  • The reloaded model reverts to the original 6.7 billion parameters.
  • Its output degrades into random, incoherent text.

Expected Behavior

  • The model should retain its compressed state with 4.4 billion parameters.
  • The model output should remain coherent and consistent with the compressed model's performance.

Additional Context

I have also attempted to convert the model to GGUF format, but encountered similar issues. Any guidance on correctly converting and saving the compressed model for Hugging Face would be greatly appreciated.
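
One route I am considering is exporting the raw state dict with safetensors directly, bypassing save_pretrained entirely (untested sketch; the output file name is just an example):

    import torch
    from safetensors.torch import save_file

    # Export the raw weights of the compressed model; cloning breaks any
    # shared storage (e.g. tied embeddings), which safetensors rejects
    model = torch.load("path_to_your_compressed_model")
    state = {k: v.detach().clone().contiguous() for k, v in model.state_dict().items()}
    save_file(state, "compressed_model.safetensors")

Even then, loading this file back would still require rebuilding the compressed module structure first, since a stock LLaMA architecture will not have matching tensor shapes.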

Thank you for your assistance!

@codeit1792 (Author)

Just to simplify: we are able to compress and use the SVD-LLM models. However, we are unable to convert them to Hugging Face formats such as safetensors or GGUF; every conversion attempt has produced a distorted or modified model. Can you please help us figure this out?
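
The only round trip I would expect to preserve the compressed weights is one that rebuilds the modified module structure first and then restores the exported state dict, roughly like this (untested sketch; compressed_model.safetensors is the example file from the export above):

    import torch
    from safetensors.torch import load_file

    # Rebuild the compressed module structure from the pickled checkpoint,
    # then restore the exported weights into it by parameter name
    model = torch.load("path_to_your_compressed_model")
    state = load_file("compressed_model.safetensors")
    missing, unexpected = model.load_state_dict(state, strict=False)
    print("missing keys:", missing)
    print("unexpected keys:", unexpected)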
