
Development Roadmap (2024 Q3) #634

Closed
15 of 29 tasks
Ying1123 opened this issue Jul 17, 2024 · 19 comments
Comments

@Ying1123 (Member)

Ying1123 commented Jul 17, 2024

Here is the development roadmap for 2024 Q3. Contributions and feedback are welcome.

- Server API
- Performance
- Parallelism
- Quantization
- Observability
- Model Coverage
- Hardware Coverage
- Language API
- LoRA Support
- Usage examples
- Others

Ying1123 pinned this issue on Jul 17, 2024
@zhyncs (Member)

zhyncs commented Jul 17, 2024

> Support W8A4 quantization with fp8 activation and int4 weight.

typo: W8A4 -> W4A8

@Ying1123 (Member, Author)

> Support W8A4 quantization with fp8 activation and int4 weight.
>
> typo: W8A4 -> W4A8

Thanks! Changed.
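For readers unfamiliar with the notation, below is a minimal, illustrative sketch of the weight side of W4A8: per-output-channel symmetric int4 quantization of a weight matrix, with the matmul done after dequantization. It is a toy in plain PyTorch with invented shapes, not SGLang's kernel; a real W4A8 path also quantizes activations to fp8 and fuses dequantization into the GEMM.

```python
import torch

def quantize_int4(w: torch.Tensor):
    # Per-output-channel symmetric quantization into the int4 range [-8, 7].
    scale = w.abs().amax(dim=1, keepdim=True) / 7.0
    q = torch.clamp(torch.round(w / scale), -8, 7).to(torch.int8)  # packed two-per-byte in practice
    return q, scale

w = torch.randn(16, 32)   # weight [out_features, in_features]
x = torch.randn(4, 32)    # activations [batch, in_features]

q, scale = quantize_int4(w)
w_deq = q.to(torch.float32) * scale   # dequantize; fp8 activation quantization omitted here

y = x @ w_deq.t()
print((y - x @ w.t()).abs().max())    # quantization error of the toy scheme
```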

@LinqingZhong

May I ask whether there is an example of using llava-next-interleave with multiple images?

@anatoli26

I guess ROCm support is under Hardware Coverage - AMD support. Any ETA for this?

@usaxena-asapp

Hey @Ying1123 - are you okay with open source contributions from developers outside the core team? Looking to find more places I can contribute and I'm excited about SGLang. Just wondering.

@Ying1123 (Member, Author)

> Hey @Ying1123 - are you okay with open source contributions from developers outside the core team? Looking to find more places I can contribute and I'm excited about SGLang. Just wondering.

Hi @usaxena-asapp, definitely! There is no strict definition of a "core team"; I'm just a volunteer who coordinates. If you contribute a lot, you are a core member! Let me know if you need any help from people with experience. My suggestion is to start with small issues and PRs and to join discussions. If you want to take on something big, start with a simple proposal to invite collaboration from the community.

@Ying1123 (Member, Author)

> I guess ROCm support is under Hardware Coverage - AMD support. Any ETA for this?

Hi @anatoli26, thanks for the question. We list it in the roadmap, but we might just start with some basic tests; optimizations will depend on how many people and resources we can get.

@anatoli26

> We list it in the roadmap, but we might just start with some basic tests; optimizations will depend on how many people and resources we can get.

Have you tried talking to AMD about hardware samples (e.g. a pair of W7900s) and software collaboration? They are trying hard to reach parity with NVIDIA on the software stack: see "AMD is Becoming a Software Company. Here's the Plan." The author of that article has good connections with AMD people; maybe you could write to him (W1zzard, credited under the title) to ask for contacts at AMD responsible for relations with FOSS projects.

@ghchris2021

I don't know if there is interest in broadening what "Hardware Coverage" covers, but in case it raises ideas to consider in the future:

You mention CPU support and AMD support, but there are higher-level frameworks that may considerably help with supporting different hardware backends (CPU, GPU), so you don't necessarily have to put as much work into any one specific backend; they may largely solve running on more than one target for the same effort. For instance, OpenCL, SYCL, Vulkan compute, and perhaps OpenACC are reasonably portable parallel computing frameworks, and each supports at least a couple of CPU and GPU targets.

IIRC OpenCL can run on NVIDIA, AMD, and Intel GPUs, as well as on Intel, AMD, and I think some ARM CPUs.

IIRC SYCL runs on Intel GPUs, Intel/AMD CPUs, and I believe also NVIDIA GPUs. It may run on AMD GPUs, but I'm not sure about that.

There are also higher-level frameworks and implementations that encapsulate these open standards. For example, https://github.com/AdaptiveCpp/AdaptiveCpp targets SYCL and also provides C++ std:: parallelism programming models. POCL, Rusticl, and several vendor (Intel, AMD, NVIDIA, ...) development packages provide functional, compatible OpenCL support on particular platforms.

Besides the NVIDIA and AMD GPUs, Intel has generations of data center, enterprise, business, and consumer grade GPUs that are strong in their capabilities, with the same tooling and documentation across the product line for SYCL, oneAPI, OpenVINO, DPC++, libraries like oneDNN, and so on, covering both their GPU families and their CPUs.

There are also Vulkan wrappers and higher-level middleware that hide the details of Vulkan compute programming and expose easier developer interfaces for general parallel compute and math/matrix/NN work. IIRC all major GPUs (NVIDIA, AMD, Intel) have Vulkan-compatible runtimes and development options, as do several ARM SoC GPUs, so Vulkan as a middleware layer could support numerous platforms with a single effort targeting the memory, NN, and linear algebra calculations that can be accelerated.

So I'm just suggesting reaching for tools that support multiple standards-based platforms, if that eases your work and also broadens or accelerates support for more platforms.
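To make the portability argument concrete, below is a small pyopencl vector-add sketch: the same kernel source runs unmodified on NVIDIA, AMD, and Intel GPUs, and on CPUs through runtimes such as POCL. It is only an illustration of what an OpenCL-level backend buys in terms of hardware coverage, not SGLang code.

```python
import numpy as np
import pyopencl as cl

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

ctx = cl.create_some_context()          # picks whatever OpenCL device is available
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

a_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_g = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg = cl.Program(ctx, """
__kernel void add(__global const float *a, __global const float *b, __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

prg.add(queue, a.shape, None, a_g, b_g, out_g)
out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_g)
print(np.allclose(out, a + b))
```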

@CSEEduanyu

I noticed that speculative decoding has been implemented in the branch at https://github.com/sgl-project/sglang/pull/270/commits. Why was that PR closed? How long will it take to support speculative decoding? Thank you for your reply.
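For context on the mechanism being asked about, here is a toy sketch of greedy speculative decoding: a cheap draft model proposes k tokens, the target model verifies them and accepts the longest matching prefix, then contributes one token of its own. The two "models" are dummy deterministic functions invented for illustration; this is not the implementation in PR #270.

```python
def draft_next(token: int) -> int:
    # Dummy stand-in for a small draft model (greedy next-token prediction).
    return (token * 3 + 1) % 100

def target_next(token: int) -> int:
    # Dummy stand-in for the large target model; it mostly agrees with the draft.
    return (token * 3 + 1) % 100 if token % 7 else (token + 5) % 100

def speculative_decode(prompt_token: int, length: int, k: int = 4) -> list[int]:
    out = [prompt_token]
    while len(out) < length:
        # 1) The draft model proposes k tokens autoregressively.
        proposals, t = [], out[-1]
        for _ in range(k):
            t = draft_next(t)
            proposals.append(t)
        # 2) The target model verifies all k positions (one batched forward pass in practice).
        t = out[-1]
        for tok in proposals:
            if target_next(t) != tok:
                break
            out.append(tok)
            t = tok
        # 3) The target model always contributes one token: the correction or a bonus token.
        out.append(target_next(t))
    return out[:length]

print(speculative_decode(1, 20))
```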

@TimDettmers

This is an awesome project! Thank you for this. @Ying1123 I am interested in using SGLang for multi-LoRA deployments for a project. The alternative is currently vLLM, but I like SGLang better. I am curious about the current state and timeline for supporting S-LoRA-like deployment.

@Ying1123 (Member, Author)

> This is an awesome project! Thank you for this. @Ying1123 I am interested in using SGLang for multi-LoRA deployments for a project. The alternative is currently vLLM, but I like SGLang better. I am curious about the current state and timeline for supporting S-LoRA-like deployment.

Hi @TimDettmers, happy to hear from you! I got the same request from another SGLang user, so I am actively working on the multi-LoRA module, which is expected to have a first runnable version in a week. You are welcome to join our Slack and send me your sample script!
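For anyone curious what an S-LoRA-style batched path looks like, here is a minimal sketch: every request in a batch selects its own adapter, and the low-rank update is applied as a gathered matmul on top of the shared base weight. Shapes and names are invented for illustration and are not SGLang's actual API.

```python
import torch

hidden, rank, n_adapters, batch = 64, 8, 3, 5
W = torch.randn(hidden, hidden)                   # shared base weight
A = torch.randn(n_adapters, hidden, rank) * 0.01  # per-adapter LoRA A matrices
B = torch.randn(n_adapters, rank, hidden) * 0.01  # per-adapter LoRA B matrices

x = torch.randn(batch, hidden)                    # one token's hidden state per request
adapter_id = torch.tensor([0, 2, 1, 0, 2])        # which adapter each request uses

base = x @ W
# Gather each request's adapter and apply the low-rank update x @ A_i @ B_i.
delta = torch.einsum('bh,bhr,bro->bo', x, A[adapter_id], B[adapter_id])
y = base + delta
print(y.shape)  # torch.Size([5, 64])
```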

@hxer7963 (Contributor)

hxer7963 commented Sep 7, 2024

Is there no plan to support W8A8 quantization?

@brotherchen

Can the current excellent performance (compared to vLLM) be attributed to strong engineering (such as using multiple processes to reduce CPU overhead) and more efficient scheduling strategies? I would also like to know whether support for pipeline parallelism is being considered.

@merrymercy (Contributor)

@hxer7963 fp8 W8A8 quantization is supported.

@brotherchen yes. Pipeline parallelism will be in the Q4 plan.
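As a rough illustration of what fp8 W8A8 means (and not of SGLang's actual kernels), the sketch below casts both weights and activations to float8 e4m3 with per-tensor scales and dequantizes around the matmul; it assumes a PyTorch version recent enough to have the torch.float8_e4m3fn dtype.

```python
import torch

def to_fp8(t: torch.Tensor):
    # Per-tensor scaling so the largest magnitude maps near the e4m3 max finite value (448).
    scale = t.abs().max().clamp(min=1e-8) / 448.0
    return (t / scale).to(torch.float8_e4m3fn), scale

w = torch.randn(64, 64)   # weight
x = torch.randn(8, 64)    # activations

w_q, w_s = to_fp8(w)
x_q, x_s = to_fp8(x)

# Dequantize for clarity; a real W8A8 kernel keeps the matmul itself in fp8.
y = (x_q.to(torch.float32) * x_s) @ (w_q.to(torch.float32) * w_s).t()
print((y - x @ w.t()).abs().max())   # quantization error of the toy scheme
```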

@CedricHwong

I am planning to use SGLang on Intel Gaudi 2, but I have not tried it yet. What is the current level of support?

@mingfeima

> I am planning to use SGLang on Intel Gaudi 2, but I have not tried it yet. What is the current level of support?

@xinyu-intel we don't have a binding for Gaudi 2 yet, right?

@xinyu-intel

@CedricHwong @mingfeima Hi, glad to see the requests for SGLang on Intel Gaudi. It is not implemented yet, and we are evaluating the feasibility.

zhyncs unpinned this issue on Oct 11, 2024
@zhyncs (Member)

zhyncs commented Nov 1, 2024

Most of the items in this 2024 Q3 roadmap have been completed, and the unfinished parts have been migrated to the 2024 Q4 roadmap. This issue is now closed. For the latest roadmap, please follow #1487.

zhyncs closed this as completed on Nov 1, 2024