Test with known camera pose #10
I just added the four test scenes from Figure 9 (airplants, pond, fern, t-rex) to the Google Drive supplement; you can find them there now.

Here's an explanation of the pose format. The pose matrix is a 3x4 camera-to-world affine transform concatenated with a 3x1 column [image height, image width, focal length] along axis=1.

The rotation (first 3x3 block in the camera-to-world transform) is stored in a somewhat unusual order, which is why there are the transposes. From the point of view of the camera, the three axes are [down, right, backwards].

So the steps to reproduce this should be (if you have a set of 3x4 poses for your images, plus focal lengths and close/far depth bounds; see the sketch after this list):

1. Make sure your poses are camera-to-world (not world-to-camera) transforms.
2. Permute the columns of each rotation so they are in the [down, right, backwards] order described above.
3. Concatenate each 3x4 pose with its 3x1 [height, width, focal] column to get a 3x5 matrix.
4. Flatten each 3x5 matrix and append the near and far depth bounds, giving 17 numbers per image.
5. Stack all N rows into an Nx17 array and save it as poses_bounds.npy.
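A minimal NumPy sketch of that packing (my reconstruction from the description above, not code from this repo; the function name is mine, and the assumption that your input rotations use a [right, up, backwards] column order is mine too, so adjust the permutation to your own convention):

```python
import numpy as np

def build_poses_bounds(c2w_list, hwf_list, bounds_list):
    """Pack camera-to-world poses into the Nx17 poses_bounds.npy layout.

    c2w_list:    list of 3x4 camera-to-world matrices, rotation columns
                 assumed to be in [right, up, backwards] order.
    hwf_list:    list of (height, width, focal_px) tuples.
    bounds_list: list of (near, far) scene depth bounds per image.
    """
    rows = []
    for c2w, (h, w, f), (near, far) in zip(c2w_list, hwf_list, bounds_list):
        R, t = c2w[:, :3], c2w[:, 3:4]
        # Permute rotation columns from [right, up, backwards]
        # to the [down, right, backwards] order described above.
        R = np.stack([-R[:, 1], R[:, 0], R[:, 2]], axis=1)
        # 3x5 = 3x4 pose concatenated with the [h, w, f] column.
        pose = np.concatenate([R, t, np.array([[h], [w], [f]])], axis=1)
        rows.append(np.concatenate([pose.ravel(), [near, far]]))  # 17 values
    return np.stack(rows)  # Nx17

# np.save('poses_bounds.npy', build_poses_bounds(poses, hwfs, bds))
```

The Nx17 layout is exactly what the loading code reshapes back into a 3x5 pose and 2 depth bounds per image.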
Hopefully that helps explain my pose processing after COLMAP. Let me know if you have any more questions.
That's right, 90 degrees clockwise.
"Disparity" is a bit of an overloaded term that can also mean inverse depth. The reprojection math correctly accounts for focal length as well (such as here). |
Focal length is in pixels, to fit with the equation you mentioned earlier. This is what is output by COLMAP and other camera calibration code when estimating intrinsics for a pinhole camera model.
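The referenced equation didn't survive above, but the standard pinhole projection shows why the units are pixels:

$$ u = f_x \frac{X}{Z} + c_x, \qquad v = f_y \frac{Y}{Z} + c_y $$

Here $(u, v)$ and $(c_x, c_y)$ are in pixels while $X/Z$ and $Y/Z$ are dimensionless ratios of camera-frame coordinates, so $f_x$ and $f_y$ must be in pixels for the units to work out.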
I've only worked with extrinsic/intrinsic camera poses and corresponding depth ranges. From that webpage, I can't tell what units the disparity ranges are in -- but if you manage to convert those back to real-world depths and know the spacing between cameras, you should be able to reconstruct the right pose matrices, I think.
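A sketch of that conversion (assuming the ranges are standard stereo disparities in pixels; the numbers and names below are made up for illustration):

```python
def disparity_to_depth(disparity_px, focal_px, baseline):
    """depth = focal * baseline / disparity (standard stereo relation)."""
    return focal_px * baseline / disparity_px

# Example with made-up numbers: 800px focal length, 10cm camera spacing.
focal_px, baseline = 800.0, 0.10
near = disparity_to_depth(40.0, focal_px, baseline)  # largest disparity -> closest depth
far = disparity_to_depth(2.0, focal_px, baseline)    # smallest disparity -> farthest depth
```

Note that the bounds flip order: the largest disparity corresponds to the nearest scene content.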
This is a subtle point. We output 1 channel for opacity (put through a sigmoid to get in [0,1]). To get the blending weights, we take the other 4 channels, append an all-zero channel, then pass through a softmax to get 5 numbers that sum to one. (You could just output 5 channels and softmax, but this makes the function bijective. It probably does not make much difference in practice.)
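A minimal NumPy sketch of that output head (my reconstruction of the description, not the repo's code):

```python
import numpy as np

def mpi_head(raw):
    """raw: (..., 5) network output.
    Returns an opacity in [0,1] and 5 blending weights that sum to 1."""
    opacity = 1.0 / (1.0 + np.exp(-raw[..., :1]))  # sigmoid on channel 0
    # Append a fixed all-zero logit to the other 4 channels,
    # then take a numerically stable softmax over the 5 logits.
    logits = np.concatenate([raw[..., 1:], np.zeros_like(raw[..., :1])], axis=-1)
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights = e / e.sum(axis=-1, keepdims=True)
    return opacity, weights
```

Pinning one logit to zero removes the softmax's shift invariance, which is what makes the 4-channel version bijective.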
Can someone explain: if I want to extract only the bounds (bds.npy) for every image I have, how can I do this, and what exactly do I need?
Hi,
Thank you very much for your excellent work and your open-source code!
I have followed your tutorial and got really amazing synthesis results. However, when I test with some other light field data, it seems that COLMAP can't work correctly and some errors occur. To avoid the problems caused by COLMAP, I want to skip the img2poses step and feed camera poses directly to the following step. Is there any way to do that? (I found that in your code you do some processing, like transposes, on the estimated poses, but there are not many comments explaining it. Could you please explain the camera pose processing after img2poses?)
As for the other test data in your paper, I'm very interested in their output, but I didn't find a download link. Are these data available to the public?
Thank you very much for your attention.
Yours sincerely,
Jinglei