Running With Docker
Follow this section to set up the Docker image, GPU drivers, and GPU access from inside Docker. At the end of this section you should be able to build all C++ code successfully.
- Install Docker using the steps here for Ubuntu 16.04.
- Install the NVIDIA Docker toolkit for GPU access. Follow the steps here (for Docker version > 19) or here (for Docker version < 19), depending on your Docker version.
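For Docker >= 19.03 the toolkit install typically looks like the following. This is a minimal sketch of NVIDIA's documented flow, assuming an Ubuntu host; see the links above for the authoritative steps:

    # Minimal sketch, assuming Docker >= 19.03 on Ubuntu (NVIDIA's documented flow).
    distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
    curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
    curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
        sudo tee /etc/apt/sources.list.d/nvidia-docker.list
    sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
    sudo systemctl restart docker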
- Upgrade your NVIDIA drivers to 440.33, which are compatible with the CUDA 10.2 used by the code.
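You can confirm the driver and CUDA versions after upgrading:

    # The header of the output should report driver 440.33 (or newer) and CUDA 10.2.
    nvidia-smi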
- Pull the latest version of the Docker image:

      docker pull thecatalyst25/perch_debug:5.0
- Clone this repo, which contains the code, locally (skip this step if you already have it cloned):

      git clone https://github.com/SBPL-Cruz/perception
- Clone the fast_gicp repo in the perception folder, along with its submodules:

      cd perception
      git clone https://github.com/SBPL-Cruz/fast_gicp -b gicp_cuda
      cd fast_gicp
      git submodule update --init --recursive
- Make empty directories for storing Python script outputs, PERCH outputs, datasets, and any trained segmentation models:

      mkdir perch_output
      mkdir model_output
      mkdir trained_models
      mkdir datasets
- Run Docker, mounting the required folders (replace the paths inside <> with absolute paths). Depending on your NVIDIA toolkit you may have to use --gpus all instead of --runtime nvidia:

      docker run --runtime nvidia \
          -it --net=host \
          -v <local path to directory containing trained_models>:/data/models \
          -v <local path to directory containing your datasets>:/data/YCB_Video_Dataset \
          -v <local path to directory perch_output>:/data/perch_output \
          -v <local path to directory model_output>:/data/model_outputs \
          -v <local path to cloned "perception" repo>:/ros_python3_ws/src/perception \
          thecatalyst25/perch_debug:5.0
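Before building, you can sanity-check GPU access from inside the container (a quick check, not one of the original steps):

    # Should print the same driver/CUDA header as on the host; if it fails,
    # revisit the NVIDIA toolkit install above or try --gpus all.
    docker run --runtime nvidia -it thecatalyst25/perch_debug:5.0 nvidia-smi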
- Build the workspace (from inside the Docker shell). The workspace needs to be rebuilt every time the Docker image is run:

      source /opt/ros/kinetic/setup.bash
      cd ros_python3_ws/
      catkin init
      catkin build object_recognition_node
- Build time of the sbpl_perception package: 3 minutes 40 seconds
- Build time of the object_recognition_node package: 2 minutes and 37.6 seconds
- To visualize the poses in RVIZ, make sure ROS Kinetic is also installed outside Docker and rviz is launched outside Docker with the perception/sbpl_perception/src/scripts/tools/fat_dataset/rviz.rviz config file opened. To let RVIZ get the models for pose markers, make a symbolic link outside Docker like so:

      ln -s <local path to datasets directory> /data
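For example, RVIZ can be launched directly with that config (a sketch; assumes ROS Kinetic is sourced on the host):

    # Outside Docker: source ROS and open RVIZ with the provided config file.
    source /opt/ros/kinetic/setup.bash
    rviz -d <local path to cloned "perception" repo>/sbpl_perception/src/scripts/tools/fat_dataset/rviz.rviz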
- Download the YCB Video dataset to your local datasets folder by following the comments here.
- Download the YCB Video object models from this link, then place the downloaded models folder into the YCB_Video_Dataset folder.
- Download the annotations in COCO format from here and put them in the YCB_Video_Dataset folder from the previous step. The expected layout is sketched below.
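After these steps the local datasets folder should look roughly like this (a sketch based on the steps above; the annotation file names depend on the download):

    datasets/
    └── YCB_Video_Dataset/       # YCB Video dataset
        ├── models/              # object models from the link above
        └── <COCO annotations>   # annotation files from the previous step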
- Download the trained MaskRCNN model from here and put it into your local trained_models folder.
- Run the Docker image and build the code.
- Run the code from inside Docker:

      roscore &  # skip this if roscore is running outside
      source /ros_python3_ws/devel/setup.bash
      Xvfb :5 -screen 0 800x600x24 &
      export DISPLAY=:5
      cd /ros_python3_ws/src/perception/sbpl_perception/src/scripts/tools/fat_dataset
      python fat_pose_image.py --config config_docker.yaml
- You can check the output in the perch_output folder created earlier or in RVIZ (note that the fixed frame in RVIZ should be set to camera for this experiment).
- Config files that can be modified as per requirement:

      # Contains settings related to the segmentation model and annotation file being used
      <local path to perception repo>/sbpl_perception/src/scripts/tools/config_docker.yaml
      # Contains parameter settings related to the PERCH 2.0 code
      <local path to perception repo>/sbpl_perception/config/pr3_env_config.yaml
- Download the Jenga clutter dataset and annotations from this link. It contains the original captured images as well as the cropped images used for running pose estimation. Put it into your local datasets folder.
- Run the Docker image and build the code.
- Run the code from inside Docker:

      roscore &  # skip this if roscore is running outside
      source /ros_python3_ws/devel/setup.bash
      Xvfb :5 -screen 0 800x600x24 &
      export DISPLAY=:5
      cd /ros_python3_ws/src/perception/sbpl_perception/src/scripts/tools/fat_dataset
      python fat_pose_image.py --config config_jenga_docker.yaml
- You can check the output in the perch_output folder created earlier or in RVIZ (note that the fixed frame in RVIZ should be set to camera for this experiment).
- Config files that can be modified as per requirement:

      # Contains settings related to the annotation file and camera viewpoint being used
      <local path to perception repo>/sbpl_perception/src/scripts/tools/config_jenga_docker.yaml
      # Contains parameter settings related to the PERCH 2.0 code
      <local path to perception repo>/sbpl_perception/config/pr3_jenga_env_config.yaml
- Download the SameShape dataset and annotations from this link and place it in your local datasets folder.
- Run the Docker image and build the code.
- Run the code from inside Docker:

      roscore &  # skip this if roscore is running outside
      source /ros_python3_ws/devel/setup.bash
      Xvfb :5 -screen 0 800x600x24 &
      export DISPLAY=:5
      cd /ros_python3_ws/src/perception/sbpl_perception/src/scripts/tools/fat_dataset
      python fat_pose_image.py --config config_crate_dataset_docker.yaml
- You can check the output in the perch_output folder created earlier or in RVIZ (note that the fixed frame in RVIZ should be set to table for this experiment).
- Config files that can be modified as per requirement:

      # Contains settings related to the device type used (cpu, gpu)
      <local path to perception repo>/sbpl_perception/src/scripts/tools/config_crate_dataset_docker.yaml
      # Contains parameter settings related to the PERCH 2.0 code (GPU)
      <local path to perception repo>/sbpl_perception/config/roman_gpu_env_config.yaml
      # Contains parameter settings related to the PERCH code or BF-ICP baseline (CPU)
      <local path to perception repo>/sbpl_perception/config/roman_env_config.yaml
- Download the Conveyor dataset and annotations from this link and place it in your local datasets folder.
- Running this dataset requires the YCB Video object models. If you don't have the YCB Video Dataset already downloaded, follow these steps to get the models:
  - Create a folder YCB_Video_Dataset in your local datasets folder.
  - Download the YCB Video object models from this link, then place the downloaded models folder into the YCB_Video_Dataset folder.
- Run the Docker image and build the code.
- Run the code from inside Docker:

      roscore &  # skip this if roscore is running outside
      source /ros_python3_ws/devel/setup.bash
      Xvfb :5 -screen 0 800x600x24 &
      export DISPLAY=:5
      cd /ros_python3_ws/src/perception/sbpl_perception/src/scripts/tools/fat_dataset
      python fat_pose_image.py --config config_conveyor_docker.yaml
- You can check the output in the perch_output folder created earlier or in RVIZ (note that the fixed frame in RVIZ should be set to camera for this experiment).
- Config files that can be modified as per requirement:

      # Contains settings related to the device type used (cpu, gpu)
      <local path to perception repo>/sbpl_perception/src/scripts/tools/config_conveyor_docker.yaml
      # Contains parameter settings related to the PERCH 2.0 code (GPU)
      <local path to perception repo>/sbpl_perception/config/pr2_gpu_conv_env_config.yaml
      # Contains parameter settings related to the PERCH code or BF-ICP baseline (CPU)
      <local path to perception repo>/sbpl_perception/config/pr2_conv_env_config.yaml
- Download the SameShape dataset and annotations from this link and place it in your local datasets folder.
- Run the Docker image and build the code.
- Run the code from inside Docker:

      roscore &  # skip this if roscore is running outside
      source /ros_python3_ws/devel/setup.bash
      Xvfb :5 -screen 0 800x600x24 &
      export DISPLAY=:5
      cd /ros_python3_ws/src/perception/sbpl_perception/src/scripts/tools/fat_dataset
      python fat_pose_image.py --config config_can_dataset_docker.yaml
- You can check the output in the perch_output folder created earlier or in RVIZ (note that the fixed frame in RVIZ should be set to table for this experiment).
- Config files that can be modified as per requirement:

      # Contains settings related to the image dir, model dir, etc.
      <local path to perception repo>/sbpl_perception/src/scripts/tools/config_can_dataset_docker.yaml
      # Contains parameter settings related to the PERCH 2.0 code (GPU)
      <local path to perception repo>/sbpl_perception/config/pr2_gpu_env_config.yaml
This is useful for Singularity since there are no write permissions inside the image, so the workspace can't be built inside it:
- Build a Singularity image from the Docker image:

      singularity build --fix-perms $SCRATCH/perch_debug.simg docker://thecatalyst25/perch_debug:3.0
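As a quick check (not one of the original steps), you can verify GPU passthrough in the built image on a host with NVIDIA drivers installed:

    # --nv passes the host's NVIDIA driver libraries into the container.
    singularity exec --nv $SCRATCH/perch_debug.simg nvidia-smi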
- Clone the packages into a workspace outside the image:

      git clone https://github.com/SBPL-Cruz/ros_python3_ws
      cd ros_python3_ws/src
      git clone https://github.com/SBPL-Cruz/improved-mha-planner -b renamed
      git clone https://github.com/SBPL-Cruz/sbpl_utils.git -b renamed
      git clone https://github.com/SBPL-Cruz/perception -b gpu_icp_ycb
- Enter the debug image shell after mounting the above directories, then go to the workspace folder:

      singularity shell --nv \
          -B /pylon5/ir5fq3p/likhache/aditya/datasets/SameShape/:/data/SameShape \
          -B /pylon5/ir5fq3p/likhache/aditya/datasets/roman/:/data/roman \
          -B /pylon5/ir5fq3p/likhache/aditya/datasets/YCB_Video_Dataset:/data/YCB_Video_Dataset \
          -B /pylon5/ir5fq3p/likhache/aditya/fb_mask_rcnn/maskrcnn-benchmark/trained_models:/data/models \
          -B /pylon5/ir5fq3p/likhache/aditya/perch/perch_output:/data/perch_output \
          -B /pylon5/ir5fq3p/likhache/aditya/perch/model_output:/data/model_outputs \
          -B /pylon5/ir5fq3p/likhache/aditya/perch/ros_python3_ws:/data/ros_ws \
          -B /pylon5/ir5fq3p/likhache/aditya/perch/ros_python3_ws/src/perception/sbpl_perception/src/scripts/tools/fat_dataset/temp:/local \
          $SCRATCH/perch_debug.simg
      cd /data/ros_ws/
- Build the workspace (since the workspace is outside the image, it doesn't need to be rebuilt every time the container is started):

      export CPLUS_INCLUDE_PATH=/usr/include/python3.6m/:$CPLUS_INCLUDE_PATH
      source /opt/ros/kinetic/setup.bash
      catkin init
      catkin build sbpl_perception
- Run the code:

      source /opt/ros/kinetic/setup.bash
      roscore &
      source /data/ros_ws/devel/setup.bash
      cd /data/ros_ws/src/perception/sbpl_perception/src/scripts/tools/fat_dataset
      python fat_pose_image.py --config config_docker.yaml
To build the Docker image yourself instead of pulling it, clone https://github.com/SBPL-Cruz/perception and build from its docker folder:

    cd docker
    docker build -t perch .
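The locally built image can then be substituted for the prebuilt one in the docker run command above (a sketch; the perch tag comes from the build command):

    # Same flags as the earlier docker run, mounting the same folders, but with the local tag.
    docker run --runtime nvidia -it --net=host perch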