> [!IMPORTANT]
> This repository is a fork of the original LearnableEarthParser repo. For more information, check out the original repository. Special thanks to @romainloiseau for his great work!
This fork adds the specific changes detailed below and is designed to work in combination with this special version of EarthParserDataset and the 3D Lidar scenes of genBoxes.
For reproducibility, you can set up and use the following environment for LearnableEarthParser:

```
conda env create -f environment_work.yml
conda activate lep-original
```
Due to the characteristics of Lidar sampling, especially in the context of autonomous driving, the scenes can often be highly occluded. Therefore, to obtain meaningful prototypes, the strategy explored here consists of applying different masking techniques in one crucial part of the reconstruction loss: `compute_l_PX`.

The masking strategies are implemented in `learnableearthparser/model/ours.py`. The main changes of this fork are:
- New curriculum options.
- Added logging of new numerical values to Tensorboard.
- Added visualisation of Lidar position and masking in Tensorboard.
- Option to force ground-truth positions of prototypes during training.
- New prototype shapes.
- First mesh prototypes implementation.
- Video rendering script to show the training process. For this script, a dedicated conda environment is required:

  ```
  conda env create -f env_render_video.yml
  conda activate render
  ```

- New config files for different experiment setups. For example, see `configs/experiment/5complex_light_boxProto_velod_allScene_SINGLE.yaml`, which makes use of the new options listed above.