HMS Robot Development Platform (beta)
Sample packages to kick-start robot development.
An undergraduate researcher project in the Human Machine Systems Lab at Korea University.
- HW : Jetson Nano
- OS + frameworks : Ubuntu 20.04 + ROS 2 Foxy (download from here)
- swap memory : follow here or run the commands below
```bash
cd ~
git clone https://github.com/JetsonHacksNano/installSwapfile
cd installSwapfile
./installSwapfile.sh
sudo reboot
```
- CUDA : CUDA Toolkit 10.0 + cuDNN 7 + GCC 7 linking
The latest CUDA configuration for the Jetson Nano as of November 2022!
- install CUDA
```bash
sudo apt install -y cuda-core-10-0 \
    cuda-cublas-dev-10-0 \
    cuda-cudart-dev-10-0 \
    cuda-libraries-dev-10-0 \
    cuda-toolkit-10-0
```
- install cuDNN
```bash
sudo apt install libcudnn7-dev
```
- export CUDA paths
```bash
export PATH=/usr/local/cuda-10.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
```
- link gcc/g++ versions (required for the OpenCV build)
```bash
sudo apt install gcc-7 g++-7
sudo ln -s /usr/bin/gcc-7 /usr/local/cuda-10.0/bin/gcc
sudo ln -s /usr/bin/g++-7 /usr/local/cuda-10.0/bin/g++
```
- Sensors
- camera : Intel RealSense D455
- SDK install : follow here and here (build from source)
- WARNING : run cmake with the Python bindings, CUDA, and the RSUSB backend enabled!
```bash
cmake ../ -DBUILD_PYTHON_BINDINGS:bool=true -DFORCE_RSUSB_BACKEND=true -DBUILD_WITH_CUDA=true
```
- Many GitHub issues report trouble installing librealsense2 with Python bindings on arm64 under Ubuntu 20.04, especially on the Jetson Nano, since NVIDIA has not yet released an official Ubuntu 20.04 image for it. The cmake flags above are the trick that worked here; a quick way to verify the binding is sketched below.
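A minimal sketch to confirm the `pyrealsense2` binding built above can stream depth frames from the D455 (the 640x480 @ 30 fps stream settings are arbitrary example values):
```python
# Quick check that the pyrealsense2 binding is importable and streaming.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# Example stream settings; adjust to your needs.
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)

pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    print("depth frame received:", depth.get_width(), "x", depth.get_height())
finally:
    pipeline.stop()
```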
- lidar : rplidar_s1
- SDK install : follow here
- Major Python dependencies : mediapipe, tensorflow 2.4, gTTS, SpeechRecognition, playsound
If it is your first time running rosdep:
```bash
rosdep init
rosdep update
```
Then run:
```bash
cd ~/your_ws
rosdep install --from-paths src
```
- hrdp_sensors_beta : responsible for acquiring, processing, filtering, and logging all the sensor data
- Currently supports only the RGB & depth camera
Multiple publishers, one for each connected sensor (see the sketch below)
Open a new terminal and run:
```bash
ros2 run hrdp_sensors_beta sensors
```
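As a rough sketch of the per-sensor publisher pattern this node follows (the topic name `camera/color/image_raw` and the 30 Hz rate are illustrative assumptions, not the package's actual interface):
```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image

class CameraPublisher(Node):
    """Illustrative per-sensor publisher following the hrdp_sensors_beta pattern."""
    def __init__(self):
        super().__init__('camera_publisher')
        # Hypothetical topic name; check the package source for the real one.
        self.pub = self.create_publisher(Image, 'camera/color/image_raw', 10)
        self.timer = self.create_timer(1.0 / 30.0, self.tick)

    def tick(self):
        msg = Image()  # the real node fills this from the RealSense frame
        msg.header.stamp = self.get_clock().now().to_msg()
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(CameraPublisher())

if __name__ == '__main__':
    main()
```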
- hrdp_perception_beta contains : face_detection, pose_detection, 3d_sneakers_objectron
- With TensorFlow 2.4+ you can add personalized detection functions using your own customized TF/TFLite models. In this repository, however, the detection models run through MediaPipe, so a full TensorFlow install is not required!
Subscribed to camera + Face detection model + Detection call service
Open a new terminal and run:
```bash
ros2 run hrdp_perception_beta face_detection
```
Then, to request the service:
```bash
ros2 service call /hrdp_perception_beta/face_detection example_interfaces/srv/SetBool "{data: true}"
```
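The SetBool service simply toggles detection on and off. A minimal sketch of that pattern (the camera topic name is an assumption; the service name matches the call above):
```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from example_interfaces.srv import SetBool

class FaceDetection(Node):
    """Sketch of a detection node gated by a SetBool service."""
    def __init__(self):
        super().__init__('face_detection')
        self.enabled = False
        # Hypothetical camera topic; check hrdp_sensors_beta for the real one.
        self.create_subscription(Image, 'camera/color/image_raw', self.on_image, 10)
        self.create_service(SetBool, 'hrdp_perception_beta/face_detection', self.on_toggle)

    def on_toggle(self, request, response):
        self.enabled = request.data
        response.success = True
        response.message = 'detection enabled' if self.enabled else 'detection disabled'
        return response

    def on_image(self, msg):
        if not self.enabled:
            return
        # The real node runs the MediaPipe face detection model on msg here.
        self.get_logger().info('frame received, running detection')

def main():
    rclpy.init()
    rclpy.spin(FaceDetection())

if __name__ == '__main__':
    main()
```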
Subscribed to camera + MobileNet + Sneakers objectron model + MultiArray publishers
WARNING : you need the specific .tflite models for the objectron node.
You can download all the files from here (or from older branches of the mediapipe repo if they have been deprecated) and add them to : {your_python_dist-packages}/mediapipe/modules/objectron
Then, open a new terminal and run:
```bash
ros2 run hrdp_perception_beta sneakers_objectron
```
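MediaPipe's Objectron solution exposes the per-object pose directly, which is what the MultiArray publishers carry. A minimal, non-ROS sketch of the model call (camera index 0 is an assumption):
```python
import cv2
import mediapipe as mp

# 'Shoe' selects the sneakers objectron model installed above.
objectron = mp.solutions.objectron.Objectron(
    static_image_mode=False, max_num_objects=2, model_name='Shoe')

cap = cv2.VideoCapture(0)  # assumed camera index
ok, frame = cap.read()
if ok:
    results = objectron.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.detected_objects:
        for obj in results.detected_objects:
            # 3x3 rotation matrix and 3-vector translation, which the node
            # flattens into its MultiArray messages.
            print(obj.rotation, obj.translation)
cap.release()
```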
- Currently supported actuator : Dynamixel
Keyboard input interface + Twist publisher
Open a new terminal and run:
```bash
ros2 run hrdp_actuators_beta keyboard_control
```
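A minimal sketch of the keyboard-to-Twist pattern (the key bindings and the `cmd_vel` topic are assumptions, not the node's actual interface):
```python
import sys, termios, tty
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

def read_key():
    # Raw single-keystroke read from stdin (POSIX terminals).
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    try:
        tty.setraw(fd)
        return sys.stdin.read(1)
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)

def main():
    rclpy.init()
    node = Node('keyboard_control')
    pub = node.create_publisher(Twist, 'cmd_vel', 10)  # assumed topic
    # Hypothetical bindings: w/s drive, a/d turn, q quits.
    bindings = {'w': (0.1, 0.0), 's': (-0.1, 0.0), 'a': (0.0, 0.5), 'd': (0.0, -0.5)}
    while True:
        key = read_key()
        if key == 'q':
            break
        linear, angular = bindings.get(key, (0.0, 0.0))
        msg = Twist()
        msg.linear.x = linear
        msg.angular.z = angular
        pub.publish(msg)
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```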
Subscribed to user_voice + String publisher
Open a new terminal and run:
```bash
ros2 run hrdp_actuators_beta voice_control
```
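A sketch of the relay pattern this node follows: subscribe to the recognized text on `user_voice`, match it against command words, and republish (the output topic and command vocabulary are assumptions):
```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class VoiceControl(Node):
    """Sketch: map recognized speech to actuator command strings."""
    def __init__(self):
        super().__init__('voice_control')
        self.create_subscription(String, 'user_voice', self.on_voice, 10)
        # Hypothetical output topic for the Dynamixel driver.
        self.pub = self.create_publisher(String, 'actuator_command', 10)

    def on_voice(self, msg):
        text = msg.data.lower()
        for word in ('forward', 'back', 'left', 'right', 'stop'):
            if word in text:
                self.pub.publish(String(data=word))
                return

def main():
    rclpy.init()
    rclpy.spin(VoiceControl())

if __name__ == '__main__':
    main()
```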
- hrdp_human_interface_beta : the ears and mouth of the robot.
- Simple AI-speaker functions : clock, timer, joke
- Before running any nodes, please run `organs/microphone_connection_check.py` and set the `DEVICE_INDEX` variable correctly!
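If you don't have the repository script handy, the same check can be done with SpeechRecognition directly; pick your microphone's index from the printed list:
```python
# Lists audio input devices so DEVICE_INDEX can be set correctly.
import speech_recognition as sr

for index, name in enumerate(sr.Microphone.list_microphone_names()):
    print(index, name)
```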
Microphone connection + STT (speech-to-text) + String publisher
Open a new terminal and run:
```bash
ros2 run hrdp_human_interface_beta user_voice_listener
```
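A minimal sketch of the listen-and-publish loop using SpeechRecognition (`DEVICE_INDEX` comes from the check above; the `user_voice` topic matches the subscriber earlier; `recognize_google` is one possible backend, not necessarily the one the node uses):
```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import String
import speech_recognition as sr

DEVICE_INDEX = 0  # set this from the microphone check above

def main():
    rclpy.init()
    node = Node('user_voice_listener')
    pub = node.create_publisher(String, 'user_voice', 10)
    recognizer = sr.Recognizer()
    with sr.Microphone(device_index=DEVICE_INDEX) as source:
        recognizer.adjust_for_ambient_noise(source)
        while rclpy.ok():
            audio = recognizer.listen(source)
            try:
                text = recognizer.recognize_google(audio)
            except sr.UnknownValueError:
                continue  # speech was unintelligible; keep listening
            pub.publish(String(data=text))
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```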
- hrdp_launch : integrated launch files that start multiple nodes from the separate packages above.
Camera publisher + camera subscriber + sneakers_objectron model + Rotation publisher + Translation publisher
```bash
ros2 launch hrdp_launch hrdp_sneakers_objectron_launch.xml
```
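The repository ships this launch file as XML; for reference, an equivalent Python launch description combining nodes from the separate packages might look like this sketch (package and executable names taken from the run commands above):
```python
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    # Starts the sensor and perception nodes that were run individually above.
    return LaunchDescription([
        Node(package='hrdp_sensors_beta', executable='sensors'),
        Node(package='hrdp_perception_beta', executable='sneakers_objectron'),
    ])
```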