ROS workspace for the Wayne Robotics team's IGVC '21 robot, Veronica. The repo has been re-organized into a workspace that contains all the core packages in accordance with the current software architecture here. The previous vision package has also been merged into this repo.
Sub-modules | About |
---|---|
Alexa | The robot's Alexa layer that runs roslaunch files from voice commands using Alexa skills and Flask-Ask (see the sketch below this table). |
GUI | Simple GUI to launch different modes. Uses Tkinter/Qt in Python. |
Motor Driver | The node on the Arduino that listens to command velocities and translates them to differential-drive PWM output. |
Status Indicator LED Strip | The RGB LED strip that shows the robot's status / diagnostics codes. |
IGVC Solids | Contains the CAD files of the 3D-printed cases, machined uprights, and other parts. |
Vision Master and Vision | Initial vision workspaces that have been merged into this base repo. |
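As a rough illustration of how the Alexa layer can run roslaunch files from voice commands, the sketch below wires a Flask-Ask intent to the roslaunch Python API. The intent name, handler, and launch-file path are hypothetical placeholders, not the Alexa submodule's actual code.

```python
# Hypothetical sketch: mapping an Alexa voice intent to a roslaunch file.
# Intent name and launch-file path are placeholders, not the real Alexa submodule.
import roslaunch
from flask import Flask
from flask_ask import Ask, statement

app = Flask(__name__)
ask = Ask(app, '/')

@ask.intent('StartCompetitionModeIntent')  # placeholder intent name
def start_competition_mode():
    # Standard roslaunch Python API: generate a UUID, configure logging,
    # then start the launch file as a child process.
    uuid = roslaunch.rlutil.get_or_generate_uuid(None, False)
    roslaunch.configure_logging(uuid)
    launch = roslaunch.parent.ROSLaunchParent(
        uuid, ['/path/to/competition.launch'])  # placeholder path
    launch.start()
    return statement('Starting competition mode.')

if __name__ == '__main__':
    app.run()
```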
Software | Version |
---|---|
Ubuntu | 18.04 |
ROS | Melodic |
OpenCV | 3.4.1 |
Python | 3.6 |
- GitHub recently renamed the default branch from 'master' to 'main', so make sure to commit to the correct branch.
- Add a .gitignore file that ignores everything except /src and /media.
- The weight files for the neural networks are large (~500 MB for YOLOv3) and exceed GitHub's file-size limit for regular code commits. Install Git Large File Storage (Git LFS) from here.
Simply clone and build as follows:
```bash
git clone https://github.com/waynerobotics/veronica.git && cd veronica/
catkin_make
```
For the full high-level overview of each mode, go to the wiki.
The main competition software flow should align with the diagram below. In competition mode, the robot is initialized for a full course run and has access to all of its major functions (i.e., lane keeping, waypoint navigation, and obstacle avoidance).
The current version uses OpenCV and a histogram-based approach to detect lane lines. The image below visualizes the preceding steps. The equations of the left and right lanes are fitted, and the vehicle's relative position is calculated along with the radii of curvature.
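A minimal sketch of the histogram step and the curvature computation is shown below, assuming a thresholded, bird's-eye-view binary image as input; the function names are illustrative and not the actual node's API.

```python
# Hedged sketch of histogram-based lane-base detection and radius of curvature,
# assuming a binary bird's-eye-view image (names are illustrative placeholders).
import numpy as np

def lane_base_positions(binary_warped):
    """Return the x pixel positions where the left and right lanes start."""
    # Column-wise sum over the bottom half of the image; the strongest peak
    # in each half approximates the corresponding lane-line base position.
    histogram = np.sum(binary_warped[binary_warped.shape[0] // 2:, :], axis=0)
    midpoint = histogram.shape[0] // 2
    left_base = int(np.argmax(histogram[:midpoint]))
    right_base = int(np.argmax(histogram[midpoint:])) + midpoint
    return left_base, right_base

def fit_lane(ys, xs):
    """Fit a second-order polynomial x = a*y^2 + b*y + c to lane pixels."""
    return np.polyfit(ys, xs, 2)

def curvature_radius(fit, y_eval):
    """Radius of curvature of the fitted polynomial evaluated at y_eval."""
    a, b, _ = fit
    return (1 + (2 * a * y_eval + b) ** 2) ** 1.5 / abs(2 * a)
```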
After the lanes are detected, they are converted to a sensor_msgs/LaserScan using lane_laser_scan.py and merged using ira_laser_tools. The output is a merged scan that includes both lanes and other obstacles in one LaserScan message, as shown in the image below.
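A minimal sketch of the lane-to-LaserScan idea follows; the real conversion lives in lane_laser_scan.py, and the frame, topic name, and range limits here are assumptions. The merging step is handled by ira_laser_tools, whose laserscan_multi_merger node combines the lane scan and the LiDAR scan into a single LaserScan.

```python
# Hedged sketch: publish detected lane points (x, y) in the robot frame as a
# sensor_msgs/LaserScan. Topic name, frame, and range limits are assumptions.
import math
import rospy
from sensor_msgs.msg import LaserScan

def points_to_scan(points, frame_id='base_link',
                   angle_min=-math.pi / 2, angle_max=math.pi / 2,
                   angle_increment=math.radians(1.0)):
    """Convert (x, y) lane points into a LaserScan message."""
    n_beams = int(round((angle_max - angle_min) / angle_increment)) + 1
    scan = LaserScan()
    scan.header.stamp = rospy.Time.now()
    scan.header.frame_id = frame_id
    scan.angle_min = angle_min
    scan.angle_max = angle_max
    scan.angle_increment = angle_increment
    scan.range_min = 0.1
    scan.range_max = 30.0
    scan.ranges = [float('inf')] * n_beams  # 'inf' marks beams with no return
    for x, y in points:
        r = math.hypot(x, y)
        theta = math.atan2(y, x)
        if angle_min <= theta <= angle_max:
            i = int((theta - angle_min) / angle_increment)
            scan.ranges[i] = min(scan.ranges[i], r)  # keep the closest hit per beam
    return scan

if __name__ == '__main__':
    rospy.init_node('lane_laser_scan_sketch')
    pub = rospy.Publisher('lane_scan', LaserScan, queue_size=1)  # assumed topic
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        pub.publish(points_to_scan([(2.0, 0.6), (2.0, -0.6)]))  # dummy lane points
        rate.sleep()
```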