TransE on FB15K: link prediction runs out of GPU memory #10
Hi, the problem is in evaluation.py: in the trans_load_data function of the LinkPredictionEvaluator class you can set the batch_size used during evaluation.
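To illustrate why a smaller evaluation batch helps: TransE's ranking step broadcasts (h + r) against every candidate entity at once, so scoring candidates in chunks bounds the peak tensor size. The sketch below is a hypothetical stand-alone scorer, not muKG's actual trans_load_data API; the function name and shapes are assumptions.

```python
import torch

def transe_scores_chunked(h, r, ent_embeds, chunk_size=1024):
    """Score (h + r) against all candidate tails in chunks to bound peak memory.

    h, r:       [batch, dim] head and relation embeddings.
    ent_embeds: [num_entities, dim] candidate tail embeddings.
    Returns L2-distance scores of shape [batch, num_entities].
    """
    query = h + r  # [batch, dim]
    chunks = []
    for start in range(0, ent_embeds.size(0), chunk_size):
        cand = ent_embeds[start:start + chunk_size]       # [c, dim]
        # Broadcast: [batch, 1, dim] - [1, c, dim] -> [batch, c, dim].
        # Peak memory is now batch * chunk_size * dim instead of
        # batch * num_entities * dim.
        diff = query.unsqueeze(1) - cand.unsqueeze(0)
        chunks.append(diff.norm(p=2, dim=-1))             # [batch, c]
    return torch.cat(chunks, dim=1)
```

The results are identical to the full broadcast; only the size of the largest intermediate tensor changes, which is why shrinking the evaluation batch (or chunking candidates) resolves the OOM without affecting metrics.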
OK, thanks — I'll try dividing it by 500. One more question: for multi-source link prediction on DB15K, are the shared IDs assigned over all the pre-aligned entities given in the files (4500 + 10500)?
Yes, shared IDs are assigned over the 15,000 seed aligned entity pairs given in the EN-ZN dataset.
Thanks for the reply! Train/validation/test splits are usually 6:2:2 or 8:1:1, but this multi-source link prediction setup uses 18:1:1. Was that ratio chosen to boost performance? Put differently, would results improve further under a conventional split? Have you run experiments on this? Thanks in advance!
Hi there,
Hello, I'm running link prediction with TransE on the FB15K dataset. Evaluation after 100 epochs fails with an out-of-memory error; I halved batch_size to 50, but it still overflows. The GPU is a 2080Ti with 12 GB of memory. How can I fix this?
#####################################################
epoch 99, avg. triple loss: 0.0121, cost time: 176.2252s
E:\LP_code\muKG-main\src
E:\LP_code\muKG-main\src
0%| | 0/50 [00:00<?, ?it/s]epoch 100, avg. triple loss: 0.0123, cost time: 174.0753s
0%| | 0/50 [00:01<?, ?it/s]
Traceback (most recent call last):
File "E:/LP_code/muKG-main/src/py/main.py", line 41, in
model.run()
File "E:\LP_code\muKG-main\src\py\model\general_models.py", line 217, in run
self.model.run()
File "E:\LP_code\muKG-main\src\torch\kge_models\kge_trainer.py", line 165, in run
flag = self.valid.print_results()
File "E:\LP_code\muKG-main\src\py\evaluation\evaluation.py", line 392, in print_results
self.evaluate()
File "E:\LP_code\muKG-main\src\py\evaluation\evaluation.py", line 306, in evaluate
score = self.fomulate(self.model.get_score(candidates, r_embeds, t_embeds))
File "E:\LP_code\muKG-main\src\torch\kge_models\basic_model.py", line 165, in get_score
return self.calc(h, r, t)
File "E:\LP_code\muKG-main\src\torch\kge_models\TransE.py", line 39, in calc
score = (h + r) - t
RuntimeError: CUDA out of memory. Tried to allocate 5.57 GiB (GPU 0; 12.00 GiB total capacity; 5.59 GiB already allocated; 4.92 GiB free; 5.61 GiB reserved in total by PyTorch)
Process finished with exit code 1
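The failed allocation is the broadcast intermediate of `score = (h + r) - t`: during ranking it has shape [batch, num_candidates, dim] in float32. A quick back-of-the-envelope check, with illustrative numbers only (FB15K has roughly 15k entities; the evaluation batch size and embedding dimension below are assumptions, and the actual config may differ even though these happen to land near the 5.57 GiB in the traceback):

```python
def broadcast_bytes(batch, num_candidates, dim, bytes_per_float=4):
    # float32 intermediate of shape [batch, num_candidates, dim]
    return batch * num_candidates * dim * bytes_per_float

# Assumed numbers: 1000 test triples per evaluation batch,
# 14951 FB15K entities as candidates, 100-dim embeddings.
gib = broadcast_bytes(1000, 14951, 100) / 2**30
print(round(gib, 2))  # prints 5.57
```

This is why lowering the *evaluation* batch size (separate from the training batch size) or chunking the candidate entities is the standard fix: the tensor shrinks linearly with either factor.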