Size layer #5021
base: master
Conversation
Codecov Report
Attention: Patch coverage is …
Additional details and impacted files

@@            Coverage Diff             @@
##           master    #5021       +/-   ##
===========================================
- Coverage   94.72%   89.47%     -5.26%
===========================================
  Files         765      307       -458
  Lines      229654    89796    -139858
===========================================
- Hits       217551    80344    -137207
+ Misses      12103     9452      -2651

☔ View full report in Codecov by Sentry.
tools/onnx/onnx2ncnn.cpp.orig accidentally committed?
done.
yeah, add it to the operators doc
done as well.
print the additional dimension in the test, which was forgotten.
Co-authored-by: nihui <shuizhuyuanluo@126.com>
thanks. I forgot to print the additional dimension in the test.
src/layer/size.cpp
Outdated

if (top_blob.empty())
    return -100;

top_blob[0] = bottom_blob.w * bottom_blob.h * bottom_blob.d * bottom_blob.c;
onnx Size outputs an integer tensor.
The code implemented here for ncnn Size will return a float tensor.
For very large numbers, there is a loss of precision when using float.
Will this cause difficulty in subsequent layer processing? For example, performing addition, subtraction, or modulo operations on the result of Size.
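A minimal standalone check of the precision concern (illustrative, not from the PR): a float has a 24-bit significand, so element counts above 2^24 = 16777216 are no longer all exactly representable.

#include <cstdio>

int main()
{
    // 2^24 is exactly representable in float
    float exact = 16777216.f;
    // 2^24 + 1 is not; this literal rounds back down to 2^24
    float lossy = 16777217.f;
    printf("equal=%d\n", exact == lossy); // prints equal=1
    return 0;
}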
for things like RMS norm etc., it might have a little impact. but the issue is that the batch size (which ncnn does not have by itself) will show up here. other than divisions for normalization, I don't think it has other impacts. but if necessary (which is better), I might change the types from float to int. should anything else be done to make it compatible with integers?
I'm worried that your original intention was to return an integer, but you accidentally returned a float tensor. Now it seems my worry was unnecessary. :D
Regarding batch, the process of converting the model to ncnn requires that the batch is always 1. There may be special cases, but those will also be solved in the converter, for example by interpreting nchw as cdhw in ncnn as a workaround. The layer itself doesn't care about it.
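A sketch of the nchw-to-cdhw reinterpretation mentioned above (my illustration, not converter code from the PR; it assumes a recent ncnn with 4D Mat support, whose constructor takes (w, h, d, c)):

#include "mat.h" // ncnn

// hypothetical helper: store an ONNX NCHW tensor with N > 1 as a 4D ncnn::Mat,
// mapping ONNX N to ncnn c and ONNX C to ncnn d
ncnn::Mat from_nchw(int n, int c, int h, int w)
{
    return ncnn::Mat(w, h, /*d=*/c, /*c=*/n);
}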
how would I assign it as an integer? the types are all ints, and the multiplication will return an integer.
@nihui
// access by channel / depth
int* p = mat.channel(i);
int* p = mat.depth(i);
// access by row
int* p = mat.row<int>(i);
// access by element
int* p = mat;
// set the value at index (1,2,3,4) to 567
mat.channel(1).depth(2).row<int>(3)[4] = 567;
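Putting that together for this layer, a minimal sketch of how the forward pass could store the count as a 32-bit int instead of a float (my reading of the suggestion, not committed code from the PR; Mat::create(w, elemsize, allocator) and the templated pointer conversion are existing ncnn Mat API):

top_blob.create(1, 4u, opt.blob_allocator); // one element, 4-byte elemsize
if (top_blob.empty())
    return -100;

// reinterpret the raw storage as int and write the element count
int* outptr = top_blob;
outptr[0] = bottom_blob.w * bottom_blob.h * bottom_blob.d * bottom_blob.c;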
this is the Size layer, equivalent to the Size operator in onnx