【PaddlePaddle Hackathon 4】No.1:为 Paddle 新增 finfo API #50765
Comments
Hello, could you please provide a minimal reproduction code so we can investigate the problem?
Does Paddle provide header files? If it doesn't, there is no way to implement this on your side, short of embedding the code into Paddle's source. I implemented finfo in this file to obtain some properties of Paddle's floating-point types; you can try running this PR: https://github.com/PaddlePaddle/Paddle/pull/50767/files#diff-ddc1bfe863ba0a2599ffb7e8aeb0880016cccc318d8bcf9e0f252268dde08a05
I have just updated the PR code. Submitting the PR with the float16 issue would cause the review to fail, so I removed the float16 test cases from the PR.
You can debug this as follows:
1) Check whether the Python-level paddle.float16 is bound to the C++ data type framework::proto::VarType::FP16.
2) Add some logging to see whether the float16 branch of the switch case is actually reached.
3) If the float16 branch is reached, check whether the eps and resolution code is correct; you can cross-reference numpy/torch/tensorflow.
Paddle/python/paddle/framework/dtype.py, lines 22 to 36 at commit db170b2:
In dtype.py, float16 is indeed bound to FP16.
And the switch case in pybind.cc is correct; I have also verified with Python code that the value is assigned correctly.
Closed because the following PR was merged:
Please ask your question
Paddle's implementation of the float16 type is not consistent with torch's float16. Using

```cpp
case framework::proto::VarType::FP16:
  bits = 16;
  eps = std::numeric_limits<float16>::epsilon();
  max = std::numeric_limits<float16>::max();
  min = std::numeric_limits<float16>::lowest();
  tiny = std::numeric_limits<float16>::min();
  resolution = std::pow(10, -std::numeric_limits<float16>::digits10);
  dtype = "float16";
  break;
```

the eps and resolution obtained with the code above do not match PyTorch's.
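For comparison, the values PyTorch reports for torch.float16 can be derived from the IEEE 754 binary16 layout alone (a sketch; the constants below follow from the format, not from any Paddle or torch API):

```python
# IEEE 754 binary16: 1 sign bit, 5 exponent bits, 10 fraction bits.
FRACTION_BITS = 10
DECIMAL_DIGITS = 3  # digits10 for binary16

eps = 2.0 ** -FRACTION_BITS           # 0.0009765625, the expected finfo eps
resolution = 10.0 ** -DECIMAL_DIGITS  # 0.001, the expected finfo resolution
max_val = (2 - 2.0 ** -FRACTION_BITS) * 2.0 ** 15  # 65504.0
tiny = 2.0 ** -14                     # smallest positive normal number

print(eps, resolution, max_val, tiny)
```

Any C++ branch returning something else for FP16 (for example eps == 0, which happens when std::numeric_limits is not specialized for the custom float16 type) indicates the traits, not the switch dispatch, are at fault.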