Prerequisites

Please answer the following questions for yourself before submitting an issue.

Expected Behavior

I was trying to convert the google/flan-t5-large model to GGUF format using this colab. I am importing the model this way:
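Roughly like this, a minimal sketch assuming huggingface_hub's snapshot_download (the exact download cell may differ):

```python
from huggingface_hub import snapshot_download

# Pull the google/flan-t5-large checkpoint into the models/ directory
# that convert.py is pointed at afterwards. Sketch only: assumes
# huggingface_hub is available in the Colab runtime.
snapshot_download(
    repo_id="google/flan-t5-large",
    local_dir="models",
)
```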
Current Behavior

I know that the current convert.py execution fails because this type of model isn't supported.
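The conversion step is essentially the stock convert.py invocation (a sketch; the exact cell in the notebook is an assumption):

```
# Colab cell: run the converter against the downloaded directory
!python convert.py models/
```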
Current error:
```
Loading model file models/pytorch_model.bin
Traceback (most recent call last):
  File "/content/llama.cpp/convert.py", line 1208, in <module>
    main()
  File "/content/llama.cpp/convert.py", line 1157, in main
    params = Params.load(model_plus)
  File "/content/llama.cpp/convert.py", line 288, in load
    params = Params.loadHFTransformerJson(model_plus.model, hf_config_path)
  File "/content/llama.cpp/convert.py", line 203, in loadHFTransformerJson
    n_embd = config["hidden_size"]
KeyError: 'hidden_size'
```
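For context, the KeyError seems to come from T5-style checkpoints naming their config fields differently from the LLaMA-style configs that convert.py expects. A quick check of the downloaded config.json shows this (a minimal sketch; the path is assumed to be models/config.json):

```python
import json

# T5-style configs store the embedding width under "d_model" rather than
# "hidden_size", and report model_type "t5", which convert.py does not handle.
# The path below is an assumption based on where the checkpoint was downloaded.
with open("models/config.json") as f:
    config = json.load(f)

print("hidden_size" in config)   # False for flan-t5-large
print(config.get("d_model"))     # 1024 for flan-t5-large
print(config.get("model_type"))  # "t5"
```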
Environment and Context

I am currently running all of this in a Google Colab notebook.

SDK version, e.g. for Linux:
```
Python 3.10.12
GNU Make 4.3
Built for x86_64-pc-linux-gnu
g++ (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
```
I would appreciate help with this conversion. Can someone please suggest a method to convert this FLAN-T5 model to GGUF?