Checklist
1. I have searched related issues but cannot get the expected help.
2. The bug has not been fixed in the latest version.
3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
Describe the bug
After pulling the latest sglang source code, I am unable to run it.
Reproduction
python3 -m sglang.launch_server --model DeepSeek-V3 --tp 8 --trust-remote-code
WARNING 12-26 13:00:41 rocm.py:17] fork method is not supported by ROCm. VLLM_WORKER_MULTIPROC_METHOD is overridden to spawn instead.
Traceback (most recent call last):
  File "/opt/conda/envs/py_3.9/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/conda/envs/py_3.9/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/sgl-workspace/sglang/python/sglang/launch_server.py", line 6, in <module>
    from sglang.srt.server import launch_server
  File "/sgl-workspace/sglang/python/sglang/srt/server.py", line 47, in <module>
    from sglang.srt.managers.data_parallel_controller import (
  File "/sgl-workspace/sglang/python/sglang/srt/managers/data_parallel_controller.py", line 25, in <module>
    from sglang.srt.managers.io_struct import (
  File "/sgl-workspace/sglang/python/sglang/srt/managers/io_struct.py", line 24, in <module>
    from sglang.srt.managers.schedule_batch import BaseFinishReason
  File "/sgl-workspace/sglang/python/sglang/srt/managers/schedule_batch.py", line 40, in <module>
    from sglang.srt.configs.model_config import ModelConfig
  File "/sgl-workspace/sglang/python/sglang/srt/configs/model_config.py", line 24, in <module>
    from sglang.srt.layers.quantization import QUANTIZATION_METHODS
  File "/sgl-workspace/sglang/python/sglang/srt/layers/quantization/__init__.py", line 25, in <module>
    from sglang.srt.layers.quantization.fp8 import Fp8Config
  File "/sgl-workspace/sglang/python/sglang/srt/layers/quantization/fp8.py", line 31, in <module>
    from sglang.srt.layers.moe.fused_moe_triton.fused_moe import padding_size
  File "/sgl-workspace/sglang/python/sglang/srt/layers/moe/fused_moe_triton/__init__.py", line 4, in <module>
    import sglang.srt.layers.moe.fused_moe_triton.fused_moe  # noqa
  File "/sgl-workspace/sglang/python/sglang/srt/layers/moe/fused_moe_triton/fused_moe.py", line 14, in <module>
    from sgl_kernel import moe_align_block_size as sgl_moe_align_block_size
ModuleNotFoundError: No module named 'sgl_kernel'
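The traceback above ends in a plain import failure, so the question is simply whether the sgl_kernel package is installed in the active environment. A minimal sketch for checking that (a hypothetical helper, not part of sglang) could look like this:

```python
import importlib.util


def has_module(name: str) -> bool:
    """Return True if `name` is importable in the current environment."""
    return importlib.util.find_spec(name) is not None


if __name__ == "__main__":
    # On the environment from this report this prints False, which
    # matches the ModuleNotFoundError raised during server launch.
    print("sgl_kernel importable:", has_module("sgl_kernel"))
```

Running this before `sglang.launch_server` makes it easy to tell whether the failure is an environment problem (package missing) or a packaging problem in the source checkout.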
Environment
python3 -m sglang.check_env
WARNING 12-26 13:04:01 rocm.py:17] fork method is not supported by ROCm. VLLM_WORKER_MULTIPROC_METHOD is overridden to spawn instead.
Python: 3.9.19 (main, May 6 2024, 19:43:03) [GCC 11.2.0]
ROCM available: True
GPU 0,1,2,3,4,5,6,7: AMD Instinct MI300X
GPU 0,1,2,3,4,5,6,7 Compute Capability: 9.4
ROCM_HOME: /opt/rocm
HIPCC: HIP version: 6.2.41133-dd7f95766
ROCM Driver Version: 6.8.5
PyTorch: 2.5.0a0+gitcedc116
sglang: 0.4.1
flashinfer: Module Not Found
triton: 3.0.0
transformers: 4.45.2
torchao: 0.7.0
numpy: 1.26.4
aiohttp: 3.10.10
fastapi: 0.115.2
hf_transfer: 0.1.8
huggingface_hub: 0.26.1
interegular: 0.3.3
modelscope: 1.21.0
orjson: 3.10.12
packaging: 24.1
psutil: 6.1.0
pydantic: 2.9.2
multipart: 0.0.20
zmq: 26.2.0
uvicorn: 0.32.0
uvloop: 0.21.0
vllm: 0.6.3.dev13+g16583707.d20241022
openai: 1.58.1
anthropic: 0.42.0
decord: 0.6.0
AMD Topology:
============================ ROCm System Management Interface ============================
=============================== Link Type between two GPUs ===============================
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7
GPU0 0 XGMI XGMI XGMI XGMI XGMI XGMI XGMI
GPU1 XGMI 0 XGMI XGMI XGMI XGMI XGMI XGMI
GPU2 XGMI XGMI 0 XGMI XGMI XGMI XGMI XGMI
GPU3 XGMI XGMI XGMI 0 XGMI XGMI XGMI XGMI
GPU4 XGMI XGMI XGMI XGMI 0 XGMI XGMI XGMI
GPU5 XGMI XGMI XGMI XGMI XGMI 0 XGMI XGMI
GPU6 XGMI XGMI XGMI XGMI XGMI XGMI 0 XGMI
GPU7 XGMI XGMI XGMI XGMI XGMI XGMI XGMI 0
================================== End of ROCm SMI Log ===================================
ulimit soft: 1048576