
Can't find CUDA GPU #29

Open
lanchongdashygo opened this issue Jul 3, 2023 · 2 comments
Comments

@lanchongdashygo

Hello, may I ask which CUDA version is appropriate? The error message is below. When running demo_r.py I find that CUDA is unavailable, but demo.py runs through successfully. My current environment is pytorch11.7, CUDA11.3, 3090 24GB.
```
Loading checkpoint shards:   0%|          | 0/8 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "./demo_r.py", line 186, in <module>
    main()
  File "./demo_r.py", line 111, in main
    model = model_class.from_pretrained(args.model_path, device_map = device_map).half()
  File "/root/miniconda3/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2478, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/root/miniconda3/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2794, in _load_pretrained_model
    new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
  File "/root/miniconda3/lib/python3.8/site-packages/transformers/modeling_utils.py", line 663, in _load_state_dict_into_meta_model
    set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/accelerate/utils/modeling.py", line 149, in set_module_tensor_to_device
    new_value = value.to(device)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/cuda/__init__.py", line 229, in _lazy_init
    torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available
```

@LiuHC0428
Owner

We are currently using CUDA 11.2 with a 3090 24GB.

@BlakcPink

```
CUDA_VISIBLE_DEVICES=0 python ./demo.py
```
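The command above pins the process to GPU index 0. If `CUDA_VISIBLE_DEVICES` is set to an empty string (or to indices that don't exist on the machine), the CUDA runtime sees no devices and PyTorch raises exactly the `RuntimeError: No CUDA GPUs are available` shown in the traceback. A minimal sketch of that filtering behavior, assuming a hypothetical helper `visible_gpu_indices` (not part of this repo), can help verify what a process will see before launching the demo:

```python
import os

def visible_gpu_indices(env=None):
    """Parse CUDA_VISIBLE_DEVICES the way the CUDA runtime does:
    unset  -> all GPUs visible (returns None)
    ""     -> no GPUs visible (the likely cause of the error above)
    "0,1"  -> only those device indices are visible, in that order
    """
    env = os.environ if env is None else env
    raw = env.get("CUDA_VISIBLE_DEVICES")
    if raw is None:
        return None
    raw = raw.strip()
    if raw == "":
        return []
    return [tok.strip() for tok in raw.split(",") if tok.strip()]

# An empty value hides every GPU; "0" exposes only the first one.
print(visible_gpu_indices({"CUDA_VISIBLE_DEVICES": ""}))   # -> []
print(visible_gpu_indices({"CUDA_VISIBLE_DEVICES": "0"}))  # -> ['0']
```

If this returns `[]` in your shell environment, unsetting the variable or exporting `CUDA_VISIBLE_DEVICES=0` before running `demo_r.py` should make `torch.cuda.is_available()` return `True` again (assuming the driver and the PyTorch CUDA build match).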
