[Bug] Running Qwen2-VL-2B on two GPUs: one GPU jumps to full load right after startup, before any request is made #2582

Closed
3 tasks done
jianliao opened this issue Oct 11, 2024 · 10 comments

Comments


jianliao commented Oct 11, 2024

Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the issue you submit lacks environment info and a minimal reproducible demo, it will be hard for us to reproduce and resolve it, which reduces the likelihood of receiving feedback.

Describe the bug

When running Qwen/Qwen2-VL-2B-Instruct on two GPUs, one GPU jumps to full load right after startup, before any request has been made.

Reproduction

lmdeploy serve api_server --backend pytorch --tp 2 Qwen/Qwen2-VL-2B-Instruct
[screenshot]
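
To quantify the symptom before any traffic is sent, the utilization can be polled through NVML (an illustrative helper, not part of the original report; assumes the nvidia-ml-py package is installed):

```python
import time
import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
handles = [pynvml.nvmlDeviceGetHandleByIndex(i) for i in range(2)]
for _ in range(10):
    # .gpu is the percentage of time kernels were executing over the last sample
    util = [pynvml.nvmlDeviceGetUtilizationRates(h).gpu for h in handles]
    print("GPU utilization (%):", util)
    time.sleep(1)
pynvml.nvmlShutdown()
```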

Environment

sys.platform: linux
Python: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 15:12:24) [GCC 11.2.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0,1: NVIDIA GeForce RTX 3090
CUDA_HOME: /home/jliao/miniconda3/envs/lmdeploy
NVCC: Cuda compilation tools, release 12.6, V12.6.20
GCC: gcc (Ubuntu 13.2.0-23ubuntu4) 13.2.0
PyTorch: 2.3.1+cu121
PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.3.6 (Git Hash 86e6af5974177e513fd3fee58425e1063e7f1361)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 12.1
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  - CuDNN 8.9.2
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.1, CUDNN_VERSION=8.9.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.3.1, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF, 

TorchVision: 0.18.1+cu121
LMDeploy: 0.6.1+
transformers: 4.45.1
gradio: Not Found
fastapi: 0.115.0
pydantic: 2.9.2
triton: 2.3.1
NVIDIA Topology: 
	GPU0	GPU1	CPU Affinity	NUMA Affinity	GPU NUMA ID
GPU0	 X 	PHB	0-31	0		N/A
GPU1	PHB	 X 	0-31	0		N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

Error traceback

N/A
@wangaocheng

I ran into an even stranger problem. I installed lmdeploy from the official Docker image openmmlab/lmdeploy:latest, and when I ran lmdeploy serve api_server Qwen/Qwen2-VL-7B-Instruct --server-port 6001 --tp 2, Docker immediately popped up an error window.

[screenshot: Snipaste_2024-10-12_01-56-53]

@jianliao (Author)

@wangaocheng I saw that issue. I'd suggest either trying the 2B model first, or skipping Docker and running directly on the host system to see whether you can get more detailed error output.

@wangaocheng

Triton can't be installed on Windows; startup fails with a "test failed!" error.

@grimoire (Collaborator)

That's expected: it's an NCCL kernel waiting for synchronization.

@jianliao (Author)

@grimoire Is this a limitation of the current PyTorch backend? Is there any way to work around it?

@grimoire (Collaborator)

An idle-spinning NCCL kernel should add essentially no overhead, and it doesn't block kernels running on other streams; it just makes the reported utilization look a bit higher. It shouldn't be a real problem.
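
A minimal standalone sketch of the behavior described here (illustrative, not from the thread; assumes a host with two CUDA GPUs and NCCL): once the communicator is warmed up, a rank whose peer has not yet joined a collective still launches its NCCL kernel, which busy-polls on the device, so nvidia-smi reports roughly 100% utilization even though no real work is running.

```python
import os
import time
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank: int, world_size: int):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)
    x = torch.ones(1 << 20, device=f"cuda:{rank}")

    dist.all_reduce(x)            # warm-up: builds the NCCL communicator
    torch.cuda.synchronize()

    if rank != 0:
        time.sleep(30)            # watch nvidia-smi: GPU 0 reads ~100% here
    dist.all_reduce(x)            # rank 0's kernel spin-waits for rank 1
    torch.cuda.synchronize()
    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(worker, args=(2,), nprocs=2)
```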


jianliao commented Oct 16, 2024

> An idle-spinning NCCL kernel should add essentially no overhead, and it doesn't block kernels running on other streams; it just makes the reported utilization look a bit higher. It shouldn't be a real problem.

It's not just "a bit higher": the utilization stays pinned at 100%...

@grimoire (Collaborator)

In theory a waiting kernel uses very few CUDA cores, so even at 100% reported utilization, kernels on other streams can still run at the same time. #2607 (review) adds a barrier there, so it should no longer show 100%.
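
For reference, a hedged sketch of the general idea behind such a barrier (illustrative only, not the actual #2607 change): synchronize the ranks on the CPU first, for example over a gloo group, so that the NCCL kernel is only launched once every rank is ready and never has to spin on the device for long.

```python
import torch.distributed as dist

# Illustrative helper, not lmdeploy's actual code: a gloo group blocks
# on the host (CPU), leaving the GPU idle while it waits.
cpu_group = dist.new_group(backend="gloo")

def barriered_all_reduce(tensor):
    dist.barrier(group=cpu_group)  # host-side wait; GPU stays idle
    dist.all_reduce(tensor)        # NCCL kernel launches with peers ready
```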


github-actions bot commented Nov 1, 2024

This issue is marked as stale because it has been marked as invalid or awaiting response for 7 days without any further response. It will be closed in 5 days if the stale label is not removed or if there is no further response.

github-actions bot added the Stale label Nov 1, 2024

github-actions bot commented Nov 7, 2024

This issue is closed because it has been stale for 5 days. Please open a new issue if you have similar issues or you have any new updates now.

github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Nov 7, 2024