I followed the instructions to create a new model repo and added the required files via Git. When I test the uploaded model via the HF sandbox, I get the following error:
Loading umm-maybe/StackStar_Santa requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option trust_remote_code=True to remove this error.
It's unclear which configuration file it's referring to, but I did notice the config.json references the parent model (santacoder) instead of mine, so I changed that. I also executed configuration_gpt2_mq.py, which does nothing on its own. There's no trust_remote_code option in either of these files; from what I understand it's an option when running local inference using AutoModelForCausalLM.from_pretrained. It's not clear how to set this option for online inference via the HuggingFace Hub.
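For reference, this is how I understand the flag is used for local inference (a minimal sketch, assuming the standard transformers loading API; the repo ID is my model):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Local inference: trust_remote_code=True tells transformers it may run the
# custom configuration/model code (e.g. configuration_gpt2_mq.py) shipped in the repo.
tokenizer = AutoTokenizer.from_pretrained("umm-maybe/StackStar_Santa", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("umm-maybe/StackStar_Santa", trust_remote_code=True)
```

What I can't find is the equivalent of this flag for the hosted inference widget.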
Hi, thank you for the response. Let me clarify that I want to run inference using the Hugging Face Accelerated CPU API, not locally. I can't find the equivalent place to set this option... unless you're saying to set this before running model.push_to_hub()?
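If that is what's meant, I assume the flow would look roughly like this (just a sketch of my reading of the suggestion; I'm not sure whether re-pushing a locally loaded copy actually changes anything for the hosted API):

```python
from transformers import AutoModelForCausalLM

# Load the model locally, opting in to the custom code in the repo...
model = AutoModelForCausalLM.from_pretrained(
    "umm-maybe/StackStar_Santa", trust_remote_code=True
)

# ...then push it back to the Hub. (Same repo name assumed here.)
model.push_to_hub("umm-maybe/StackStar_Santa")
```

Is that the intended workflow, or is there a setting on the Hub side instead?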