Training on HoliCity V1 with Mask R-CNN (Detectron2).
Download the HoliCityV1 dataset from the homepage holicity.io,
which includes the split-v1, image, and plane archives.
Unzip them into the folder dataset/
and reorganize it as follows (the clean-filelist.txt already exists in the folder dataset/):
dataset/
    image/
        2008-07/
        2008-09/
        ...
    plane/
        2008-07/
        2008-09/
        ...
    split/
        v1/
            clean-filelist.txt
            filelist.txt
            train-middlesplit.txt
            test-middlesplit.txt
            valid-middlesplit.txt
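Before training, the reorganized layout can be sanity-checked with a small script. This is a minimal sketch: the expected entries are taken directly from the tree above, and the helper name is hypothetical.

```python
from pathlib import Path

# Expected entries after unzipping, taken from the directory tree above.
EXPECTED = [
    "image",
    "plane",
    "split/v1/clean-filelist.txt",
    "split/v1/filelist.txt",
    "split/v1/train-middlesplit.txt",
    "split/v1/test-middlesplit.txt",
    "split/v1/valid-middlesplit.txt",
]

def missing_entries(root="dataset"):
    """Return the expected files/folders that are absent under root."""
    root = Path(root)
    return [entry for entry in EXPECTED if not (root / entry).exists()]

if __name__ == "__main__":
    missing = missing_entries()
    if missing:
        print("Missing entries:", *missing, sep="\n  ")
    else:
        print("dataset/ layout looks complete.")
```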
You can download our reference pre-trained models from Google Drive.
Those models were trained with HoliCity/init.py
for 100k iterations.
The default batch size assumes you have a graphics card with 8GB of video memory, e.g., a GTX 1080Ti or RTX 2080Ti. You may reduce the batch size if you have less video memory.
CUDA_VISIBLE_DEVICES=0 python main.py -s train -m HoliCityV1
It will first build the train (HoliCityV1_train_coco_format.json) and valid (HoliCityV1_valid_coco_format.json) JSON files
in the folder data/HoliCityV1_v1/
(this takes about 1.5 and 0.5 hours, respectively).
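Since the generated files use COCO-style naming, they can presumably be inspected with plain json once built. A minimal sketch, assuming the standard COCO annotation layout (top-level "images", "annotations", and "categories" lists); the helper name is hypothetical.

```python
import json

def coco_summary(path):
    """Count the entries under each top-level key of a COCO-format file."""
    with open(path) as f:
        coco = json.load(f)
    return {
        "images": len(coco.get("images", [])),
        "annotations": len(coco.get("annotations", [])),
        "categories": len(coco.get("categories", [])),
    }

# Example (after the training split has been built):
# coco_summary("data/HoliCityV1_v1/HoliCityV1_train_coco_format.json")
```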
To test the pretrained Mask R-CNN above on your own images, you need to edit HoliCity/init.py:
self.ckpt = "/the/checkpointfile/you/trained/"
def predict(self):
    self.img_dirs_list = [the paths list of your own images]
and execute
CUDA_VISIBLE_DEVICES=0 python main.py -s predict -m HoliCityV1
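One way to populate self.img_dirs_list above is to glob your own image folders. This is only a sketch: the function name and extensions are placeholders, and the exact structure init.py expects (file paths vs. directory paths) should be checked against that file.

```python
from glob import glob

def collect_images(dirs, exts=("jpg", "png")):
    """Gather a sorted list of image file paths from the given directories."""
    paths = []
    for d in dirs:
        for ext in exts:
            paths.extend(glob(f"{d}/*.{ext}"))
    return sorted(paths)

# Hypothetical usage inside HoliCity/init.py:
# self.img_dirs_list = collect_images(["/path/to/my/images"])
```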
This project is licensed under the MIT License - see the LICENSE.md file for details.