[MODEL] Add Telechat2 (China Telecom) #1106
Conversation
@1096125073 Do you have an HF model URL for this model? We need to download the model for CI testing.
Yes, you can try this one.
@1096125073 Thank you for the PR. I will have @LRL-ModelCloud take over the PR changes from here. He will fix this PR so it can run correctly in GPTQModel with CI tests. Right now, this code will not run since GPTQModel uses a different structure for defining models.
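For context, model support in GPTQModel is declared through a small definition class rather than a patched modeling file. The sketch below follows the pattern used by other model definitions in the repository; the TeleChat2-specific module paths are assumptions taken from the upstream modeling code, not the merged PR.

```python
# Hypothetical sketch of a GPTQModel model definition for TeleChat2.
# Attribute names follow the pattern of existing definitions (e.g. llama.py);
# the TeleChat2 module paths below are assumptions, not the final code.
from gptqmodel.models.base import BaseGPTQModel


class TeleChat2GPTQ(BaseGPTQModel):
    # non-repeating modules that stay outside the quantized decoder layers
    base_modules = ["transformer.word_embeddings", "transformer.ln_f"]

    # node in the model tree that holds the repeating decoder layers
    layers_node = "transformer.h"
    layer_type = "TelechatBlock"

    # linear sub-modules inside each layer, grouped in quantization order
    layer_modules = [
        ["self_attention.query", "self_attention.key_value"],
        ["self_attention.dense"],
        ["mlp.gate_proj", "mlp.up_proj"],
        ["mlp.down_proj"],
    ]
```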
Sorry, I just discovered that the properties of this library are different from AutoGPTQ's, and I have updated the code accordingly.
No problem. We will fix this. But please stop force-pushing so we can fix this PR.
@1096125073 We are testing.
https://huggingface.co/Tele-AI/TeleChat2-7B/blob/main/modeling_telechat2.py#L186 The assert code here is strange. Why is it forcing (asserting) this?
Sorry, this is being handled by another team, but we are currently organizing the code and submitting a PR to the Transformers library. I believe this will be resolved soon.
@1096125073 We are running into dtype issues. Does the 7B model run as float16?
Yes, the 7B model runs as float16.
Add support for the TeleChat model (vLLM now supports the TeleChat model).
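As a usage note, once the model definition is registered, quantizing TeleChat2 would follow the same flow as any other supported model. The snippet below is a minimal sketch assuming GPTQModel's load/quantize/save workflow and the Tele-AI/TeleChat2-7B checkpoint mentioned above; the calibration text is a placeholder, not a recommended dataset.

```python
# Minimal sketch of quantizing TeleChat2 with GPTQModel once supported.
# Assumes the GPTQModel.load / quantize / save workflow; the calibration
# data here is a tiny placeholder rather than a real calibration set.
from gptqmodel import GPTQModel, QuantizeConfig

model_id = "Tele-AI/TeleChat2-7B"
quant_config = QuantizeConfig(bits=4, group_size=128)

calibration = [
    "TeleChat2 is a large language model released by China Telecom.",
    "GPTQ quantizes linear layers to low bit-width while preserving accuracy.",
]

# trust_remote_code is assumed to be forwarded to transformers for the
# custom TeleChat2 modeling code hosted on the Hugging Face Hub.
model = GPTQModel.load(model_id, quant_config, trust_remote_code=True)
model.quantize(calibration)
model.save("TeleChat2-7B-gptq-4bit")
```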