Neural_Radiance_Fields
+- configs
+- data
+- images
+- outputs
+- README.md
+- report
+- ta_images
+- __init__.py
+- checkpoints.pth
+- data_utils.py
+- dataset.py
+- environment.yml
+- implicit.py
+- main.py
+- ray_utils.py
+- render_functions.py
+- renderer.py
+- sampler.py
- Download and extract the files.
- Make sure you meet all the requirements listed at https://github.com/848f-3DVision/assignment2/tree/main, or recreate the environment from environment.yml:
conda env create -f environment.yml
conda activate l3d
- The data folder contains all the data needed by the code. Uncompress lego.png.zip inside the data folder.
- The images folder contains all the images/gifs generated by running the code.
- All the instructions needed to run the code are given in this README.md.
- The report folder contains the HTML file that leads to the webpage.
After making changes to get_pixels_from_image and get_rays_from_pixels in ray_utils.py (a sketch of both is given after the table below), run the code:
python main.py --config-name=box
The code renders the xy_grid and the ray bundle, which are saved as xygrid.png and raybundle.png respectively in the images folder.
Grid | Rays |
---|---|
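For reference, a minimal sketch of how these two helpers could look. The signatures, tensor shapes, and the camera interface (unproject_points with from_ndc, get_camera_center) are assumptions modeled on PyTorch3D-style cameras and may not match the starter code exactly:

```python
import torch

def get_pixels_from_image(image_size, camera=None):
    # Build an (H*W, 2) grid of pixel coordinates in NDC space, i.e. in [-1, 1].
    W, H = image_size
    xs = torch.linspace(-1.0, 1.0, W)
    ys = torch.linspace(-1.0, 1.0, H)
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack([grid_x, grid_y], dim=-1).reshape(-1, 2)

def get_rays_from_pixels(xy_grid, camera):
    # Unproject NDC pixel coordinates (at depth 1) to world space and form rays
    # from the camera center through those points.
    n = xy_grid.shape[0]
    ndc_points = torch.cat([xy_grid, torch.ones(n, 1)], dim=-1)
    world_points = camera.unproject_points(ndc_points, from_ndc=True)  # assumed camera API
    origins = camera.get_camera_center().expand(n, -1)                 # assumed camera API
    directions = torch.nn.functional.normalize(world_points - origins, dim=-1)
    return origins, directions
```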
After making changes in StratifiedSampler in sampler.py (a sketch follows the table below), run the code:
python main.py --config-name=box
The code renders the sample points, which are saved as sample_points.png in the images folder.
Sampled Points |
---|
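A minimal sketch of what the stratified sampler computes, assuming the ray bundle exposes origins and directions tensors and the config provides min_depth, max_depth, and n_pts_per_ray (the names and signature are illustrative, not necessarily those used in sampler.py):

```python
import torch

def stratified_sample(origins, directions, min_depth, max_depth, n_pts_per_ray):
    """One uniform sample per equal-width depth bin along each ray (stratified)."""
    n_rays = origins.shape[0]
    bin_edges = torch.linspace(min_depth, max_depth, n_pts_per_ray + 1,
                               device=origins.device)
    lower, upper = bin_edges[:-1], bin_edges[1:]
    # Jitter each sample uniformly within its depth bin.
    u = torch.rand(n_rays, n_pts_per_ray, device=origins.device)
    depths = lower + (upper - lower) * u                     # (n_rays, n_pts_per_ray)
    # A sample point is origin + depth * direction.
    points = origins[:, None, :] + depths[..., None] * directions[:, None, :]
    return points, depths
```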
After making the necessary changes in VolumeRenderer._compute_weights, VolumeRenderer._aggregate, and VolumeRenderer.forward (a sketch of the weight computation and aggregation follows the table below), run the code:
python main.py --config-name=box
The code renders the box volume defined in configs/box.yaml; the renderings are saved as part_1.gif and the depth map as depth.png in the images folder.
Rendered Box with color | Depth Image |
---|---|
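A minimal sketch of the two pieces the renderer needs: per-sample compositing weights w_i = T_i * (1 - exp(-sigma_i * delta_i)) and the weighted aggregation of per-sample features. The tensor shapes and function signatures are assumptions and may differ from the methods in renderer.py:

```python
import torch

def compute_weights(deltas, densities, eps=1e-10):
    """Compositing weights w_i = T_i * (1 - exp(-sigma_i * delta_i)).

    deltas:    (n_rays, n_samples, 1) distances between consecutive samples
    densities: (n_rays, n_samples, 1) predicted volume densities sigma
    """
    alphas = 1.0 - torch.exp(-densities * deltas)
    # Transmittance T_i: probability the ray reaches sample i unabsorbed.
    transmittance = torch.cumprod(1.0 - alphas + eps, dim=1)
    # Shift so that T_0 = 1 (nothing lies in front of the first sample).
    transmittance = torch.cat(
        [torch.ones_like(transmittance[:, :1]), transmittance[:, :-1]], dim=1
    )
    return transmittance * alphas

def aggregate(weights, features):
    """Weighted sum of per-sample features (e.g. RGB or depth) along each ray."""
    return torch.sum(weights * features, dim=1)
```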
Implement the get_random_pixels_from_image method in ray_utils.py, and change the loss, initially set to None, to MSELoss. MSELoss computes the mean squared error between the predicted colors and the ground-truth colors rgb_gt (a sketch of both changes follows the table below).
Train the model by running the code:
python main.py --config-name=train_box
The code renders a spiral sequence of the optimized volume in images/part_2.gif. The optimized box parameters are:
Center = (0.25, 0.25, 0.00)
Length of sides = (2.00, 1.50, 1.50)
Optimized Volume |
---|
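A minimal sketch of the random pixel sampler and the MSE loss. The signature is illustrative; the real get_random_pixels_from_image may return additional values (e.g. the matching ground-truth colors), and the predicted_rgb name here is a stand-in for the renderer output:

```python
import torch
import torch.nn.functional as F

def get_random_pixels_from_image(n_pixels, image_size, camera=None):
    """Sample n_pixels random pixel locations in NDC space, i.e. in [-1, 1]^2."""
    return torch.rand(n_pixels, 2) * 2.0 - 1.0

# In the training loop, the loss that was initially None becomes a mean squared
# error between rendered colors and ground-truth colors at the sampled pixels:
predicted_rgb = torch.rand(1024, 3)  # stand-in for the renderer output
rgb_gt = torch.rand(1024, 3)         # stand-in for ground-truth pixel colors
loss = F.mse_loss(predicted_rgb, rgb_gt)
```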
To train a NeRF on the lego bulldozer dataset, run the code:
python main.py --config-name=nerf_lego
This creates a NeRF with the NeuralRadianceField class in implicit.py and uses it as the implicit_fn in VolumeRenderer.
The NeRF is trained for 250 epochs on 128x128 images.
To change the model parameters, image_size, etc., edit the configs/nerf_lego.yaml file.
The hyperparameters were kept the same, and harmonic embeddings were included in the architecture (a sketch of the embedding follows the list below):
n_layers_xyz: 6
n_hidden_neurons_xyz: 128
n_hidden_neurons_dir: 64
append_xyz: [3]
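A minimal sketch of a harmonic (positional) embedding of the kind referenced above: each input coordinate is mapped to sines and cosines at exponentially growing frequencies before being fed to the MLP. The class name and default are illustrative, not necessarily those used in implicit.py:

```python
import torch

class HarmonicEmbedding(torch.nn.Module):
    """Map each coordinate x to [sin(2^k * x), cos(2^k * x)] for k = 0..n-1."""

    def __init__(self, n_harmonic_functions=6):
        super().__init__()
        self.register_buffer("frequencies",
                             2.0 ** torch.arange(n_harmonic_functions))

    def forward(self, x):
        # x: (..., dim) -> (..., dim * 2 * n_harmonic_functions)
        embed = (x[..., None] * self.frequencies).reshape(*x.shape[:-1], -1)
        return torch.cat([embed.sin(), embed.cos()], dim=-1)
```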
Optimized NeRF |
---|
The HTML code for the webpage is stored in the report folder along with the images/gifs. Clicking on the webpage.md.html file will take you directly to the webpage.