Currently, COLMAP generates extrinsics for each output frame without testing whether they are suitable for NeRF rendering. The NeRF worker only supports 360-degree captures in which the camera faces inward toward the subject in every input frame. Moreover, camera rays are only cast within a limited bounding box when rendering the NeRF, so it is necessary to verify that the extrinsics' centering and scaling appropriately capture the object.
To tackle this issue, the first step is to create a standalone function in the COLMAP folder that reports the percentage of camera rays that intersect the NeRF bounding box. This function will take the transforms_data.json output from COLMAP, read the intrinsic and extrinsic matrices, project rays for each frame into the scene, and count and report how many rays intersect the bounding box. A corresponding visualization of this process using matplotlib will be part of this feature and will closely resemble the extrinsic visualization in to_cam.py.
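As a starting point, the ray-generation half of this function might look like the sketch below. The field names (`camera_angle_x`, `frames`, `transform_matrix`) are assumptions based on the common NeRF `transforms.json` convention, and the OpenGL-style camera (-z forward) matches what NeRF tooling typically expects; the actual transforms_data.json schema should be checked against to_cam.py.

```python
# Sketch: derive per-frame camera rays from a NeRF-style transforms JSON.
# Assumed schema: {"camera_angle_x": float, "frames": [{"transform_matrix": 4x4}]}.
import json
import numpy as np

def load_transforms(path):
    """Load the transforms JSON produced by the COLMAP stage."""
    with open(path) as f:
        return json.load(f)

def load_frame_rays(transforms, width=100, height=100, stride=25):
    """Yield (origin, unit_directions) for a sparse grid of rays per frame."""
    # Focal length recovered from the horizontal field of view.
    focal = 0.5 * width / np.tan(0.5 * transforms["camera_angle_x"])
    # Sparse pixel grid so the check stays cheap.
    i, j = np.meshgrid(np.arange(0, width, stride),
                       np.arange(0, height, stride), indexing="xy")
    # Camera-space directions, OpenGL convention (-z is the view direction).
    dirs = np.stack([(i - width / 2) / focal,
                     -(j - height / 2) / focal,
                     -np.ones_like(i, dtype=float)], axis=-1)
    for frame in transforms["frames"]:
        c2w = np.array(frame["transform_matrix"])  # camera-to-world extrinsic
        world_dirs = dirs.reshape(-1, 3) @ c2w[:3, :3].T
        origin = c2w[:3, 3]
        yield origin, world_dirs / np.linalg.norm(world_dirs, axis=-1, keepdims=True)
```

A full-resolution grid is unnecessary here; a coarse stride gives a stable hit percentage at a fraction of the cost.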
Some helpful steps breaking this down further:
Step 1: Create a standalone function
Step 2: Read Intrinsic and Extrinsic Matrices (similar to to_cam.py code)
Step 3: Project Rays and Count Intersections
Step 4: Report the Results
Step 5: Create a Corresponding Visualization
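Steps 3 and 4 can be sketched with the classic slab test for ray/axis-aligned-box intersection. The unit cube bounds below are a placeholder assumption; substitute the NeRF worker's actual render bounds.

```python
# Sketch of Steps 3-4: count rays hitting an axis-aligned NeRF bounding
# box (slab test), then report the hit percentage.
import numpy as np

def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: True if the ray origin + t*direction (t >= 0) meets the box."""
    # Guard against division by zero for axis-aligned ray directions.
    inv = 1.0 / np.where(direction == 0, 1e-12, direction)
    t0 = (box_min - origin) * inv
    t1 = (box_max - origin) * inv
    t_near = np.max(np.minimum(t0, t1))  # latest entry across the three slabs
    t_far = np.min(np.maximum(t0, t1))   # earliest exit
    return t_far >= max(t_near, 0.0)

def report_hit_percentage(origins, directions, box_min, box_max):
    """Count intersecting rays and report the percentage (Step 4)."""
    hits = sum(bool(ray_hits_aabb(o, d, box_min, box_max))
               for o, d in zip(origins, directions))
    pct = 100.0 * hits / len(origins)
    print(f"{hits}/{len(origins)} rays intersect the box ({pct:.1f}%)")
    return pct
```

For Step 5, the same origins and directions can be drawn as short 3D line segments alongside the box edges with matplotlib's `Axes3D`, mirroring the extrinsic visualization in to_cam.py.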