Hello mmdetection3d team,

I want to train FCOS3D on the nuScenes dataset, but the full dataset is really huge and it is hard for me to download all of it. Therefore, I trained FCOS3D on the nuScenes v1.0-mini data. These are my steps:

mmdetection3d version: release v0.17.1

1. Download the data from the nuScenes website and unzip v1.0-mini.tgz.
2. Rename the v1.0-mini folder inside v1.0-mini to v1.0-trainval.
3. Symlink the dataset: ln -s v1.0-mini mmdetection3d/data
4. Modify tools/create_data.py to skip the test-split preparation (v1.0-mini has no test split):

    elif args.dataset == 'nuscenes' and args.version != 'v1.0-mini':
        train_version = f'{args.version}-trainval'
        nuscenes_data_prep(
            root_path=args.root_path,
            info_prefix=args.extra_tag,
            version=train_version,
            dataset_name='NuScenesDataset',
            out_dir=args.out_dir,
            max_sweeps=args.max_sweeps)
        # Test-split preparation commented out, since v1.0-mini ships no test set:
        # test_version = f'{args.version}-test'
        # nuscenes_data_prep(
        #     root_path=args.root_path,
        #     info_prefix=args.extra_tag,
        #     version=test_version,
        #     dataset_name='NuScenesDataset',
        #     out_dir=args.out_dir,
        #     max_sweeps=args.max_sweeps)

5. Create the data with: python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes --workers 20
The created folder looks as follows:
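Since the folder screenshot did not survive here, a quick sanity check of the generated files can be done with a short script like the one below; the file names are the defaults produced by the command above, and the 'infos'/'metadata' keys are an assumption based on the v0.17.x nuScenes converter:

    import pickle

    # Info files written by tools/create_data.py for nuScenes
    # (names follow --extra-tag nuscenes, as in the command above).
    for split in ('train', 'val'):
        path = f'data/nuscenes/nuscenes_infos_{split}.pkl'
        with open(path, 'rb') as f:
            data = pickle.load(f)
        # The v0.17.x converter stores a dict with 'infos' and 'metadata';
        # 'metadata' records the dataset version used for generation.
        print(split, len(data['infos']), data['metadata'])

If the infos were generated from the renamed mini data, the train split should contain only a few hundred samples, which already hints at how little data the model sees.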
The trained model:
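(The exact training command was not shown; judging from the work_dirs path, it was presumably the standard entry point with the corresponding config, i.e. something like

    python tools/train.py configs/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d.py

possibly followed by fine-tuning with the _finetune config.)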
Test the model: python demo/mono_det_demo.py demo/data/nuscenes/n015-2018-07-24-11-22-45+0800__CAM_BACK__1532402927637525.jpg demo/data/nuscenes/n015-2018-07-24-11-22-45+0800__CAM_BACK__1532402927637525_mono3d.coco.json configs/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune.py work_dirs/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d/latest.pth
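For reference, the same check can be run from Python; this is a minimal sketch assuming the v0.17.x demo API (init_model / inference_mono_3d_detector, the calls used by demo/mono_det_demo.py):

    from mmdet3d.apis import inference_mono_3d_detector, init_model

    config = 'configs/fcos3d/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d_finetune.py'
    checkpoint = 'work_dirs/fcos3d_r101_caffe_fpn_gn-head_dcn_2x8_1x_nus-mono3d/latest.pth'
    image = 'demo/data/nuscenes/n015-2018-07-24-11-22-45+0800__CAM_BACK__1532402927637525.jpg'
    ann = 'demo/data/nuscenes/n015-2018-07-24-11-22-45+0800__CAM_BACK__1532402927637525_mono3d.coco.json'

    # Build the model from config + checkpoint, then run monocular 3D
    # detection on the single image described by the COCO-style ann file.
    model = init_model(config, checkpoint, device='cuda:0')
    result, data = inference_mono_3d_detector(model, image, ann)
    print(result)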
The result is shown below:
Compared with the result of the demo, this result is really bad. I'm wondering whether the bad result is caused by training on the mini dataset?
There is only one minor problem in your implementation: you can set --version to v1.0-mini to generate infos only for that mini-data version, but I guess this point won't affect your results.
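For reference, the data-preparation command with that flag would be (same paths as in the original report):

    python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes --version v1.0-mini

With --version v1.0-mini there should be no need to rename the folder to v1.0-trainval or to edit tools/create_data.py, since (at least in v0.17.x) the script already has a dedicated branch for the mini split.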
I did not try to train the model with the mini version, but I guess it is possible that such limited data results in this bad performance. Someone tried with 1/10 of the full dataset before, and there was also a serious performance degradation. If you are interested in training models on a small dataset, you can try KITTI; the PRs #964 and #1014 for FCOS3D++ are ready to be merged into v1.0.0.dev0. The only real difference is the evaluation metric, so the final expected results can look a little different (KITTI evaluates only the most accurate predictions, while nuScenes expects more comprehensive ones).
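If you do try KITTI, the data preparation is analogous; the following command mirrors the mmdetection3d data-preparation docs (adjust the paths to your layout):

    python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti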