Detailed documentation about model parallelism #214

Open
ZSL98 opened this issue Oct 28, 2024 · 0 comments

ZSL98 commented Oct 28, 2024

Hi, I am trying to use the model parallelism feature, but I find the documentation really unclear. /doc/parallelism/README.md says the code is in the adapter, but where exactly is that code? Or could you please answer my question below?
Thank you very much!

For example, when world_size and the global number of experts are both 8, the basic expert parallelism setup looks like this:

        fastermoe = FMoETransformerMLP(num_expert=1,
                                       d_model=config.hidden_size,
                                       d_hidden=config.ffn_hidden_size,
                                       world_size=8,
                                       top_k=2)
        y = fastermoe(x)
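
For context, I launch this with one process per GPU (e.g. torchrun with 8 processes) and initialize the process group before building the layer, roughly like this:

        import torch
        import torch.distributed as dist

        # One process per GPU; with world_size=8 and num_expert=1,
        # each of the 8 ranks hosts exactly one expert.
        dist.init_process_group(backend="nccl")
        torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())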

Now I want to change this to TP=8, meaning each expert is split into 8 slices. What is the easiest way to achieve this? I only need the forward pass. And what about TP=4?
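
From skimming fmoe/layers.py, my best guess is that the slice_group argument of FMoE is the relevant knob, though I am not sure it splits the expert weights themselves (the docstring suggests workers in the group hold the same copy of the input). Is something like the following sketch the intended usage for TP=4, i.e. two groups of 4 ranks each? I am also assuming here that FMoETransformerMLP forwards extra keyword arguments down to FMoE:

        import torch.distributed as dist
        from fmoe import FMoETransformerMLP

        # My guess, not verified: split the 8 ranks into two groups,
        # [0..3] and [4..7]; every rank must create both groups.
        groups = [dist.new_group(ranks=[0, 1, 2, 3]),
                  dist.new_group(ranks=[4, 5, 6, 7])]
        my_group = groups[dist.get_rank() // 4]

        fastermoe = FMoETransformerMLP(num_expert=1,
                                       d_model=config.hidden_size,
                                       d_hidden=config.ffn_hidden_size,
                                       world_size=8,
                                       slice_group=my_group,  # assumption: kwarg reaches FMoE
                                       top_k=2)
        y = fastermoe(x)

For TP=8, I would presumably pass a single group containing all 8 ranks instead. Is that right?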
