❓ How to set .yaml and .pth files correctly?

Hi @bottler,

My question has two parts:

How to set the .yaml config file?

There are some scene-related items in the .yaml config file that I'd like to know how you set, such as:
- min_depth and max_depth (how do you decide where rays should start and stop?)
- up (is this the camera's up vector? In Lego it's (0, 0, 1) and in Fern it's (0, 1, 0). Why is that?)
- scene_center (how do you know where the scene's center is?)
- trajectory_type and trajectory_scale (is figure_eight meant for light-field-style captures?)
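For concreteness, here is a minimal sketch of how I currently imagine such a scene section looking; every value below is an illustrative guess on my part, not one taken from any released config:

```yaml
# Hypothetical values for illustration only -- not from a real config.
min_depth: 0.1        # distance along each ray where sampling starts
max_depth: 10.0       # distance along each ray where sampling stops
up: [0.0, 0.0, 1.0]   # world up vector (Lego-style); Fern would presumably use [0.0, 1.0, 0.0]
scene_center: [0.0, 0.0, 0.0]   # point the evaluation trajectory is centered on
trajectory_type: figure_eight   # shape of the rendered camera path
trajectory_scale: 0.2           # size of that path relative to the scene
```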
How to set the .pth camera file?

Here you explained the .pth file, but by following your instructions, I couldn't train NeRF correctly. Here is what I mean:
As a test, I wanted to convert LLFF's fern camera settings to PyTorch3D camera settings. bmild explains here how its .npy file works. For the camera positions I got:
These are your camera positions:
These are my camera positions (based on the LLFF camera poses):
The x-y orientation of the points is the same, but in your case they are packed more tightly together, and the z values of the cameras don't correspond at all. I understand that the more tightly packed the cameras are, the more their rays overlap in the scene. So my question is: how do you set this? Is it a single transformation you apply to the LLFF camera poses? My camera rotations also differ slightly from yours: mine look like they are diverging, while yours are converging.
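To make the comparison concrete, here is a NumPy-only sketch of what I tried: the axis reorder bmild describes for LLFF poses (rotation columns stored as [down, right, backwards], camera-to-world), followed by a normalization step that is purely my guess at why your cameras look more tightly packed. Note that PyTorch3D's own camera convention (row-vector world-to-view R, T with +X left, +Y up) would still need a further flip on top of this, so this is not a drop-in converter, just the part I believe I understood:

```python
import numpy as np

def llff_to_w2c(pose_3x5):
    """Convert one LLFF pose row (3x5: [R | t | hwf]) to a 4x4
    world-to-camera matrix.

    Assumption: LLFF stores a camera-to-world transform whose rotation
    columns are ordered [down, right, backwards]; we reorder them to
    [right, up, backwards] and then invert the whole transform.
    """
    c2w = pose_3x5[:, :4]  # drop the [height, width, focal] column
    # Reorder axes: [down, right, back] -> [right, up, back].
    c2w = np.concatenate(
        [c2w[:, 1:2], -c2w[:, 0:1], c2w[:, 2:3], c2w[:, 3:4]], axis=1
    )
    # Promote to 4x4 homogeneous form and invert to get world-to-camera.
    bottom = np.array([[0.0, 0.0, 0.0, 1.0]])
    c2w = np.concatenate([c2w, bottom], axis=0)
    return np.linalg.inv(c2w)

def normalize_positions(centers, target_radius=1.0):
    """Center camera positions on their mean and scale them so the
    farthest camera sits at target_radius from the center.

    This is only a guess at the 'single transformation' I am asking
    about -- some normalization of this kind seems necessary to make
    the cameras as packed as in your .pth file.
    """
    centers = np.asarray(centers, dtype=np.float64)
    mean = centers.mean(axis=0)
    scale = target_radius / np.max(np.linalg.norm(centers - mean, axis=1))
    return (centers - mean) * scale, mean, scale
```

Is something along these lines what happens, or is the transformation derived differently?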
Do you also take the LLFF data and convert it, or did you derive the camera positions directly from the images?
Thank you.