
Fix missing read of moe_ffn weights from converted tm model #2698

Merged — 4 commits merged into InternLM:main on Nov 4, 2024

Conversation

lvhan028 (Collaborator) commented on Nov 1, 2024

Fixes #2689
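
For context, the change addresses the converted TurboMind model's per-expert MoE FFN tensors being skipped during weight reading. Below is a minimal, hypothetical sketch of what "reading the moe_ffn weights" of a converted model entails; the tensor naming (`layers.{layer}.moe_ffn.experts.{expert}.{w1,w2,w3}.weight`), the numpy-based file reading, and the float16 dtype are assumptions for illustration, not lmdeploy's actual loader or on-disk layout.

```python
import os

import numpy as np


def read_moe_ffn_weights(model_dir: str, layer: int, num_experts: int) -> dict:
    """Read the per-expert MoE FFN tensors of one layer from a converted model.

    Every tensor name, the w1/w2/w3 projection naming, and the dtype below are
    assumptions for illustration only.
    """
    weights = {}
    for expert in range(num_experts):
        for proj in ("w1", "w2", "w3"):  # gate / down / up projections
            name = f"layers.{layer}.moe_ffn.experts.{expert}.{proj}.weight"
            path = os.path.join(model_dir, name)
            if not os.path.exists(path):
                # A loader that silently skipped this step would instead fail
                # later, at inference time, with missing parameters.
                raise RuntimeError(f"missing MoE FFN tensor: {name}")
            weights[name] = np.fromfile(path, dtype=np.float16)
    return weights
```

In this sketch, omitting the per-expert loop leaves the experts' projections unpopulated, which is consistent with the linked issue's RuntimeError when chatting with a converted mixtral-8x7b model.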

@lvhan028 lvhan028 merged commit 5f577c2 into InternLM:main Nov 4, 2024
9 checks passed
lvhan028 added a commit that referenced this pull request on Nov 5, 2024
* miss to read moe_ffn weights
* fix linting
* fix linting
* fix linting
AllentDan pushed a commit to AllentDan/lmdeploy that referenced this pull request on Nov 13, 2024
* miss to read moe_ffn weights
* fix linting
* fix linting
* fix linting
Successfully merging this pull request may close the following issue:

#2689 — [Bug] chat with converted mixtral-8x7b model, raise RuntimeError
3 participants