- The error information indicates that the following code failed in your environment:

      import torch
      device_name = torch.cuda.get_device_name(0).lower()

  Please check whether it can run there.
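  To make that check self-contained, here is a minimal diagnostic sketch; the prints and the is_available() guard are additions for illustration, not part of lmdeploy:

  ```python
  # Quick CUDA visibility check from inside the container, assuming a
  # CUDA-enabled PyTorch build is installed (as in the lmdeploy image).
  import torch

  print("torch version:", torch.__version__)
  print("CUDA available:", torch.cuda.is_available())

  if torch.cuda.is_available():
      # This is the call the error points at; it raises if no CUDA device
      # is visible to PyTorch, even when nvidia-smi works.
      device_name = torch.cuda.get_device_name(0).lower()
      print("device 0:", device_name)
  else:
      print("PyTorch cannot see a CUDA device in this environment.")
  ```

  If is_available() prints False inside the container while nvidia-smi works, the problem usually lies in how the container runtime exposes the GPU rather than in the model server itself.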
- Hi, getting this error when running in Docker with the latest openmmlab/lmdeploy container (sha256:3020bcad368e74a4b7e346c5fc991ed538e0776a30740819ab03cb6e7a55e728). Other than the newer CUDA in the container (I'm running Debian, so 12.2 is the latest in the repo), everything lines up and should operate fine. Running ollama with the same Docker configuration works just fine. I've of course tried the tried-and-true reboot, no luck.

  Inside the container, nvidia-smi:
  Outside the container, nvidia-smi (on the host):
  Inside the container, lmdeploy check_env:
  Inside the container, lmdeploy serve api_server Qwen/Qwen2.5-Coder-1.5B:
  docker-compose entry:
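  The actual compose entry is not reproduced above. As a point of reference only, a minimal service definition that exposes the GPU to this container might look roughly like the sketch below; the image tag, port mapping, and model name are assumptions for illustration, not the configuration used here:

  ```yaml
  # Illustrative sketch only -- not the actual docker-compose entry from this report.
  services:
    lmdeploy:
      image: openmmlab/lmdeploy:latest          # tag assumed for illustration
      command: lmdeploy serve api_server Qwen/Qwen2.5-Coder-1.5B
      ports:
        - "23333:23333"                         # api_server's default port
      deploy:
        resources:
          reservations:
            devices:
              - driver: nvidia
                count: all
                capabilities: [gpu]
  ```

  The deploy.resources.reservations.devices block is the Compose-level counterpart of docker run --gpus all. Since nvidia-smi already works inside the container here, the GPU is being passed through; a sketch like this is mainly useful for comparing against the working ollama setup to spot differences.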