Predicting directly with CDial-GPT2_LCCC-base. I had to modify the prediction part of the code, otherwise it wouldn't run:

```python
output = model(input_ids, token_type_ids=token_type_ids)
logits = output.logits
logits = logits[0, -1, :] / args.temperature
```

No matter what the input is, the result is always the following:

```
[12997, 7635, 12997, 7635, 12997, 12997, 12997, 12997, 7635, 12997, 7635, 12997, 7635, 12997, 7635, 12997, 7635, 12997, 7635, 12997, 12997, 12997, 7635, 7635, 12997, 7635, 12997, 7635, 7635, 12997]
囌 僦 囌 僦 囌 囌 囌 囌 僦 囌 僦 囌 僦 囌 僦 囌 僦 囌 僦 囌 囌 囌 僦 僦 囌 僦 囌 僦 僦 囌
```
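For reference, the reason the original forward call wouldn't run is most likely the transformers API change: up to 3.x the forward pass returned a plain tuple, while from 4.x onward it returns an output object whose logits are an attribute. A minimal sketch of the difference, using the stock `gpt2` checkpoint as a stand-in (the CDial-GPT2_LCCC-base weights follow the same access pattern):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Stand-in checkpoint for illustration only; the real script loads the
# CDial-GPT2_LCCC-base weights instead.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("hello", return_tensors="pt").input_ids
with torch.no_grad():
    output = model(input_ids)

# transformers >= 4.x: the forward pass returns an output object,
# so the old `logits, *_ = model(...)` no longer unpacks the tensor.
logits = output.logits
print(logits.shape)  # torch.Size([1, seq_len, vocab_size])
```

Why the modified call then samples only these two tokens is a separate question; one possibility is that the installed transformers version differs from the one the repo was written against.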
Same problem here, also wondering about this.
Comment out this line:

```python
logits, *_ = model(input_ids, token_type_ids=token_type_ids)
```

and replace it with:

```python
output = model(input_ids, token_type_ids=token_type_ids)
logits = output.logits
logits = logits[0, -1, :] / args.temperature
logits = top_filtering(logits, top_k=args.top_k, top_p=args.top_p)
probs = F.softmax(logits, dim=-1)
```

With that change, the model produces normal dialogue.
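For context, here is a sketch of where that snippet sits in the per-step sampling loop, assuming the names used in the repo's interact.py (`args.temperature`, `args.top_k`, `args.top_p`, `args.no_sample`, and the `top_filtering` helper); the greedy-vs-multinomial choice follows the usual top-k/top-p sampling pattern rather than being a verbatim copy of the repo's code:

```python
import torch
import torch.nn.functional as F

def sample_next_token(model, input_ids, token_type_ids, args, top_filtering):
    # Forward pass; read logits as an attribute (transformers >= 4.x).
    output = model(input_ids, token_type_ids=token_type_ids)
    logits = output.logits

    # Logits for the last position, scaled by temperature.
    logits = logits[0, -1, :] / args.temperature

    # Keep only the top-k / nucleus (top-p) candidates.
    logits = top_filtering(logits, top_k=args.top_k, top_p=args.top_p)
    probs = F.softmax(logits, dim=-1)

    # Greedy pick when sampling is disabled, otherwise sample one token id.
    return torch.topk(probs, 1)[1] if args.no_sample else torch.multinomial(probs, 1)
```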
I tried this in interact.py and it still doesn't work; the output is still garbled. Is there anything else that needs to be changed?
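If the change above is in place and interact.py still emits the same repeating tokens, it may be worth first checking the installed transformers version against the one the repo's requirements expect (a guess at the cause, not a confirmed one):

```python
import transformers
print(transformers.__version__)
```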