in NeRF, how to set the config (.yaml file) and camera setting (.pth file) correctly and optimally? #1380

Closed
BlackPi3 opened this issue Nov 11, 2022 · 1 comment

BlackPi3 commented Nov 11, 2022

❓ How to set .yaml and .pth files correctly?

Hi @bottler,

My question has two parts:

  1. How to set the .yaml config file?
    There are some scene-related items in the .yaml config file that I'd like to know how you set, such as (the first sketch after this list shows these fields):
  • min_depth and max_depth (how do you decide where along the rays sampling should start and stop?)
  • up (is it the camera's up vector? In Lego it's 0,0,1 and in Fern it's 0,1,0; why is that?)
  • scene_center (how do you know where the scene's center is?)
  • trajectory_type and trajectory_scale (is figure_eight meant for light-field-style captures?)
  2. How to set the .pth camera setting?
    Here you explained the .pth file, but following those instructions I couldn't train NeRF correctly. Here is what I mean:
    As a test, I wanted to convert LLFF's fern camera settings to PyTorch3D camera settings; bmild explains here how the .npy file works. These are the camera positions I ended up with:
    [image: camera positions from your .pth file]
    [image: my camera positions, derived from the LLFF poses]
    The x-y layout of the points is the same, but in your case they are packed more tightly, while the z values of the cameras barely correspond. I understand that the more tightly packed the cameras are, the more their rays overlap in the scene, so how do you set this? Is it a single transformation you apply to the LLFF camera poses? My camera rotations also differ slightly from yours: mine diverge while yours converge.
    Do you also take the LLFF data and convert it, or did you derive the camera positions from the images originally? (The second sketch after this list shows the conversion I attempted.)
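For reference, this is how I currently understand the fields from item 1, written as plain Python data. The values are my guesses for a Blender-style scene such as Lego, not values copied from your configs:

```python
# Illustrative only: the scene-dependent knobs asked about in item 1,
# with values assumed for a Blender-style scene (not copied from the repo).
raysampler = {
    "min_depth": 2.0,  # distance along each ray where point sampling starts
    "max_depth": 6.0,  # distance where sampling stops; together they should bracket the scene
}
test_trajectory = {
    "trajectory_type": "circular",    # e.g. "figure_eight" for forward-facing captures
    "trajectory_scale": 0.2,          # overall size of the generated test trajectory
    "up": [0.0, 0.0, 1.0],            # world up axis: +Z in Blender scenes, +Y in LLFF scenes
    "scene_center": [0.0, 0.0, 0.0],  # point the generated test cameras look at
}
```

My working assumption is that min_depth/max_depth must bracket all scene geometry as seen from every training camera, and that up/scene_center simply describe the world frame of the capture, which would explain why Blender (+Z up) and LLFF (+Y up) differ; please correct me if that's wrong.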

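And this is the conversion I attempted, as a minimal sketch. It assumes LLFF's documented pose layout (3x5 [R|t|hwf] blocks in poses_bounds.npy, with rotation columns ordered [down, right, back]) and uses PyTorch3D's cameras_from_opencv_projection helper; the key names in the dict I save at the end are my own naming, not necessarily the schema your loader expects:

```python
# Minimal sketch: LLFF poses_bounds.npy -> PyTorch3D PerspectiveCameras -> .pth.
# Assumes LLFF's documented pose layout; the saved dict's schema is my guess.
import numpy as np
import torch
from pytorch3d.utils import cameras_from_opencv_projection

poses_arr = np.load("poses_bounds.npy")      # (N, 17)
poses = poses_arr[:, :-2].reshape(-1, 3, 5)  # (N, 3, 5) = [R | t | hwf]
bounds = poses_arr[:, -2:]                   # (N, 2) near/far per camera

# LLFF stores camera-to-world rotations with columns [down, right, back];
# reorder them to [right, up, back] (the OpenGL/NeRF convention), following
# the reordering done in bmild's load_llff.py.
c2w = np.concatenate([poses[:, :, 1:2], -poses[:, :, 0:1], poses[:, :, 2:4]], axis=2)
hwf = poses[:, :, 4]                         # (N, 3): height, width, focal length

# OpenGL (camera looks down -z) -> OpenCV (looks down +z): negate the y and z
# axes, then invert the camera-to-world pose to get world-to-camera R, t.
R_c2w = c2w[:, :, :3] @ np.diag([1.0, -1.0, -1.0])
t_c2w = c2w[:, :, 3]
R_w2c = R_c2w.transpose(0, 2, 1)
t_w2c = -np.einsum("nij,nj->ni", R_w2c, t_c2w)

# Pinhole intrinsics from the hwf column (principal point at the image center).
h, w, f = hwf[:, 0], hwf[:, 1], hwf[:, 2]
K = np.zeros((len(poses), 3, 3))
K[:, 0, 0] = f
K[:, 1, 1] = f
K[:, 0, 2] = w / 2.0
K[:, 1, 2] = h / 2.0
K[:, 2, 2] = 1.0

cameras = cameras_from_opencv_projection(
    R=torch.from_numpy(R_w2c).float(),
    tvec=torch.from_numpy(t_w2c).float(),
    camera_matrix=torch.from_numpy(K).float(),
    image_size=torch.from_numpy(np.stack([h, w], axis=1)).float(),
)

# Persist for inspection; these key names are illustrative, not necessarily
# what the projects/nerf dataloader reads.
torch.save({"cameras": cameras, "near_far": torch.from_numpy(bounds)}, "fern_cameras.pth")
```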
Thank you.

davnov134 (Contributor) commented

Hi, the Implicitron yaml files contain settings for Blender scenes. Please follow the Implicitron README to run our NeRF implementation on the Blender Synthetic scenes (the README describes both training and visualisation):
https://github.com/facebookresearch/pytorch3d/tree/main/projects/implicitron_trainer#nerf
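If it helps while following the README, a quick way to see exactly which scene-specific values a shipped config sets is to load it with OmegaConf (the configuration library these projects build on); the path below assumes you are at the root of a pytorch3d checkout:

```python
# Print the resolved contents of a shipped config for inspection.
from omegaconf import OmegaConf

cfg = OmegaConf.load("projects/nerf/configs/lego.yaml")
print(OmegaConf.to_yaml(cfg))
```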

bottler closed this as completed Aug 30, 2024