2D downsampling of uint8 data inefficient #737
Comments
It's surprising to me that performance matters here. I thought interpolation would be mostly bandwidth-bound.
Just checked: downloading a 4k x 4k uint8 JPEG patch takes 100-150 ms, which is similar to the current downsampling time.
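(A back-of-the-envelope check of that number, assuming the decoded 4k x 4k uint8 patch is what moves through memory:)

```python
# 4096 x 4096 uint8 decodes to 16 MiB; at 100-150 ms per patch that is:
decoded_bytes = 4096 * 4096  # one uint8 channel
for ms in (100, 150):
    mib_per_s = decoded_bytes / (ms / 1000) / 2**20
    print(f"{ms} ms -> {mib_per_s:.0f} MiB/s")  # ~107-160 MiB/s effective throughput
```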
Wow, that's a crazy fast download! But doesn't that mean there's basically no inefficiency if we use pipelining? Then again, maybe it doesn't matter and we should just swap in tinybrain for the default torch behavior. It's not a hard fix.
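A minimal sketch of what that swap could look like (illustrative only, not the actual zetta_utils code; the shapes and the `mode="area"` choice are assumptions):

```python
import numpy as np
import torch
import torch.nn.functional as F
import tinybrain

img = np.random.randint(0, 256, (4096, 4096, 1), dtype=np.uint8)  # XYZ uint8 patch

# Default torch behavior: needs an NC... layout and a float dtype,
# so the uint8 data is copied and blown up 4x before pooling.
t = torch.from_numpy(img.copy()).permute(2, 0, 1).unsqueeze(0).float()  # (1, 1, 4096, 4096)
small_torch = F.interpolate(t, scale_factor=0.5, mode="area")

# tinybrain: 2x2 average pooling directly on the uint8 array, no dtype change.
small_tb = tinybrain.downsample_with_averaging(img, factor=(2, 2, 1), num_mips=1)[0]
```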
@nkemnitz we've been using tinybrain for segmentation for a while now. Should this be closed? (See zetta_utils/zetta_utils/tensor_ops/common.py, lines 391 to 405 at commit 1254139.)
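(For reference, the segmentation path presumably relies on tinybrain's mode-pooling downsampler; a rough illustration of that API, not the contents of common.py:)

```python
import numpy as np
import tinybrain

seg = np.random.randint(0, 1000, (1024, 1024, 1), dtype=np.uint32)  # label volume
# Mode pooling preserves label identity; averaging would invent new label IDs.
seg_mip1 = tinybrain.downsample_segmentation(seg, factor=(2, 2, 1), num_mips=1)[0]
```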
Still relevant for …
All our tensors are passed to torch as NCXYZ and converted to float32. That's not just an extra copy; it's also 4x more memory.
Another thing to consider: CloudVolume data is already in Fortran order, which is the layout tinybrain expects.
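To put numbers on both points (sizes assumed for illustration):

```python
import numpy as np

img = np.zeros((1, 1, 4096, 4096, 1), dtype=np.uint8)  # NCXYZ, uint8
print(img.nbytes // 2**20)                             # 16 MiB
print(img.astype(np.float32).nbytes // 2**20)          # 64 MiB: the 4x blowup

# CloudVolume hands back Fortran-ordered arrays, which is the layout
# tinybrain expects, so no transpose/copy is needed on that path.
xyz = np.asfortranarray(img[0, 0])                     # (4096, 4096, 1)
print(xyz.flags["F_CONTIGUOUS"])                       # True
```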