How to specify a single GPU #20
Comments
Let me fix it; the cause was using `device=auto`.
8c82cda done
Got it, thanks.
I tested this and it still doesn't work.
The behavior is that loading does follow the specified device, but during prediction all GPUs participate in the computation.
I looked into the code here, and it doesn't seem related to the problem I described above. I'm not sure how to debug it further 😓
`export CUDA_VISIBLE_DEVICES=0` makes only one GPU visible; `export CUDA_VISIBLE_DEVICES=0,1,2` makes multiple GPUs visible. This is the officially recommended approach.
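The recommendation above can also be applied inside a script rather than in the shell. A minimal sketch, assuming the variable is set before any CUDA-aware library (e.g. `torch`) is imported:

```python
import os

# Make only GPU 0 visible to CUDA. This must run BEFORE any CUDA-using
# library (torch, tensorflow, transformers) is imported; otherwise the
# framework has already enumerated all devices and the setting is ignored.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# The framework will then see exactly one device, renumbered as cuda:0.
visible = os.environ["CUDA_VISIBLE_DEVICES"].split(",")
print(len(visible))  # → 1
```

The renumbering is worth noting: with `CUDA_VISIBLE_DEVICES=2`, the single visible GPU still appears to the framework as `cuda:0`.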
I'd like to ask: when training a T5 model, how can I use multiple GPUs?
T5 does not currently support multi-GPU training.
I tested this approach and it doesn't work.
OK, I'll add the device setting for LoRA.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
textgen/examples/chatglm/training_chatglm_adgen_demo.py
Line 90 in 54f90b2
When there are multiple GPUs, all of them are loaded by default. I have tried
`os.environ["CUDA_VISIBLE_DEVICES"] = '0'`
and
`args={'use_lora': True, 'output_dir': args.output_dir, "max_length": args.max_length, 'n_gpu': 0},`
as well as
`cuda_device = 0`
but none of them work. Could you advise how to configure training or prediction to use a single GPU?
2023-04-17 02:18:32.967 | DEBUG | chatglm.chatglm_model:init:92 - Device: cuda:0
The printed output is shown above, yet multiple GPUs are still loaded.
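A likely explanation for this symptom is ordering: if `os.environ["CUDA_VISIBLE_DEVICES"]` is assigned after the framework has already been imported and has enumerated the GPUs, the assignment has no effect. The sketch below simulates that pitfall with a hypothetical stand-in module (`framework`, invented here for illustration) that snapshots the variable once, the way a CUDA runtime does at initialization:

```python
import os
import sys
import types

# Hypothetical stand-in for a CUDA framework: it reads
# CUDA_VISIBLE_DEVICES exactly once, at import/initialization time.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2"
framework = types.ModuleType("framework")
framework.visible_devices = os.environ["CUDA_VISIBLE_DEVICES"]
sys.modules["framework"] = framework

# Changing the variable afterwards does not change what the framework saw:
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
print(sys.modules["framework"].visible_devices)  # → 0,1,2
```

This is why setting the variable in the shell (`export CUDA_VISIBLE_DEVICES=0`) before launching Python, or at the very top of the script before any framework import, is the reliable approach.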