Hello, may I ask which CUDA version is recommended? The error message is shown below. When running demo_r.py, CUDA is reported as unavailable, yet demo.py runs successfully. Current environment: PyTorch 11.7, CUDA 11.3, 3090 24GB.
Loading checkpoint shards: 0%| | 0/8 [00:01<?, ?it/s]
Traceback (most recent call last):
File "./demo_r.py", line 186, in
main()
File "./demo_r.py", line 111, in main
model = model_class.from_pretrained(args.model_path, device_map = device_map).half()
File "/root/miniconda3/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2478, in from_pretrained
) = cls._load_pretrained_model(
File "/root/miniconda3/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2794, in _load_pretrained_model
new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
File "/root/miniconda3/lib/python3.8/site-packages/transformers/modeling_utils.py", line 663, in _load_state_dict_into_meta_model
set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
File "/root/miniconda3/lib/python3.8/site-packages/accelerate/utils/modeling.py", line 149, in set_module_tensor_to_device
new_value = value.to(device)
File "/root/miniconda3/lib/python3.8/site-packages/torch/cuda/init.py", line 229, in _lazy_init
torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available
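
For reference, a minimal diagnostic sketch (not part of demo_r.py; a generic check under the assumption that it is run with the same Python environment and launch command as demo_r.py) that prints whether this PyTorch build can see the GPU and whether CUDA_VISIBLE_DEVICES is masking it:

```python
# Generic CUDA visibility check: run in the same environment/launch command
# that produced the "No CUDA GPUs are available" error.
import os
import torch

print("torch version:", torch.__version__)          # installed PyTorch version
print("torch built with CUDA:", torch.version.cuda) # CUDA version the wheel was built against
print("CUDA available:", torch.cuda.is_available()) # False reproduces the failure condition
print("CUDA_VISIBLE_DEVICES:", os.environ.get("CUDA_VISIBLE_DEVICES"))  # empty string hides all GPUs
if torch.cuda.is_available():
    print("device count:", torch.cuda.device_count())
    print("device name:", torch.cuda.get_device_name(0))
```

If this prints `CUDA available: False` only when launched the way demo_r.py is launched, the issue is likely in the environment (e.g. a CUDA_VISIBLE_DEVICES setting or a CPU-only build) rather than in the script itself.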