The main objective is to perform tasks such as line detection, hitting a buoy, and dropping markers underwater, using input data from sensors such as an IMU, monocular cameras, and a pressure sensor. To manage the complexity of the problem, we break it down into levels of abstraction with the help of ROS stacks.
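As a rough illustration of how this sensor data enters the system, the sketch below shows a minimal rospy node subscribing to an IMU topic and a pressure topic. The topic names and the depth-conversion constants are placeholders chosen for illustration, not the actual interfaces used in this repository.

```python
#!/usr/bin/env python
# Minimal sketch of a node that listens to the IMU and the pressure sensor.
# Topic names and conversion constants are assumptions, not this repo's API.
import rospy
from sensor_msgs.msg import Imu, FluidPressure

def imu_cb(msg):
    # Orientation quaternion reported by the IMU, used downstream for alignment.
    rospy.loginfo("orientation (quaternion): %.3f %.3f %.3f %.3f",
                  msg.orientation.x, msg.orientation.y,
                  msg.orientation.z, msg.orientation.w)

def pressure_cb(msg):
    # Rough depth from absolute pressure: (P - P_atm) / (rho * g).
    depth = (msg.fluid_pressure - 101325.0) / 9806.65
    rospy.loginfo("approximate depth: %.2f m", depth)

if __name__ == "__main__":
    rospy.init_node("sensor_listener_sketch")
    rospy.Subscriber("/imu/data", Imu, imu_cb)                 # assumed topic
    rospy.Subscriber("/pressure", FluidPressure, pressure_cb)  # assumed topic
    rospy.spin()
```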
S. No. | Operating System | ROS Version | Build Status |
---|---|---|---|
1. | Ubuntu 14.04 LTS | Indigo Igloo | |
We explain the tasks at hand in the following sub-sections.
This is the first task that needs to be handled: the robot must follow lines to guide itself through the arena. To do this, it should be able to position itself over the center of the line beneath it and align its heading with the line, which requires precise motion control to stabilize the robot and keep it centered and aligned.
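One possible way to derive the centering correction is sketched below: threshold the bottom-camera image, take the centroid of the line pixels, and publish a proportional sway command. The topic names, HSV bounds, and gain are assumptions made for illustration, not values from this codebase.

```python
#!/usr/bin/env python
# Sketch: estimate how far the marker line is from the image centre and issue
# a proportional sway correction. All names and constants are placeholders.
import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from geometry_msgs.msg import Twist

bridge = CvBridge()
pub = None

def image_cb(msg):
    frame = bridge.imgmsg_to_cv2(msg, "bgr8")
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (5, 100, 100), (25, 255, 255))  # orange-ish line
    m = cv2.moments(mask)
    if m["m00"] < 1e3:
        return                        # line not visible in this frame
    cx = m["m10"] / m["m00"]          # x coordinate of the line's centroid
    err = cx - frame.shape[1] / 2.0   # lateral error from the image centre
    cmd = Twist()
    cmd.linear.y = -0.002 * err       # proportional correction (gain assumed)
    pub.publish(cmd)                  # yaw alignment would be handled analogously

if __name__ == "__main__":
    rospy.init_node("line_follow_sketch")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)        # assumed topic
    rospy.Subscriber("/bottom_camera/image_raw", Image, image_cb)  # assumed topic
    rospy.spin()
```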
The task is to hit a circular buoy located at a fixed depth below the surface of the water. The challenges are to localize the robot with respect to the center of the buoy using camera data and to move the robot steadily despite rapidly fluctuating camera data. To make the system more robust, we need to suppress erratic motion caused by noisy and occasionally false detections from the camera, and to handle the robot's movement when the buoy is out of the camera frame.
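A minimal sketch of one way to tame the noisy detections is shown below: smooth the detected buoy center with an exponential filter, reject implausible jumps as false positives, and report a dropout when the buoy has been missing for too long so a search behaviour can take over. The thresholds and smoothing factor are illustrative assumptions.

```python
# Sketch: smoothing noisy buoy-centre detections and handling dropout.
# The jump threshold, smoothing factor and miss limit are assumptions.
class BuoyTracker(object):
    def __init__(self, alpha=0.3, max_jump=80, max_missed=15):
        self.alpha = alpha          # exponential-smoothing factor
        self.max_jump = max_jump    # pixels; larger jumps treated as false detections
        self.max_missed = max_missed
        self.estimate = None
        self.missed = 0

    def update(self, detection):
        """detection: (x, y) pixel centre from the vision node, or None."""
        if detection is None:
            self.missed += 1
        elif self.estimate is None:
            self.estimate, self.missed = detection, 0
        else:
            dx = detection[0] - self.estimate[0]
            dy = detection[1] - self.estimate[1]
            if dx * dx + dy * dy > self.max_jump ** 2:
                # Implausible jump: likely a false positive, ignore it.
                self.missed += 1
            else:
                self.estimate = (self.estimate[0] + self.alpha * dx,
                                 self.estimate[1] + self.alpha * dy)
                self.missed = 0
        # If the buoy has been lost for too long, signal a search behaviour.
        return None if self.missed > self.max_missed else self.estimate
```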
The task is to pass the robot through an L-shaped gate, within the region defined by the competition rules, without hitting its ends. This requires locating the gate's orientation (distance and angle) and its center relative to the robot's position, and then performing the required manoeuvre.
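The sketch below shows one simple way to recover the gate's bearing and an approximate distance, assuming the pixel positions of its two vertical members have been detected, using a pinhole-camera approximation. The focal length, image width, and gate width used here are assumptions, not values from the rules or from this repository.

```python
# Sketch: estimate the gate's bearing and rough distance from the pixel
# positions of its two detected members. All constants are placeholders.
import math

FOCAL_PX = 600.0     # assumed horizontal focal length in pixels
GATE_WIDTH_M = 3.0   # assumed physical spacing between the gate members
IMAGE_WIDTH = 640    # assumed image width in pixels

def gate_pose(left_x, right_x):
    """left_x, right_x: pixel columns of the two detected gate members."""
    centre_px = 0.5 * (left_x + right_x)
    # Bearing of the gate centre relative to the camera axis.
    bearing = math.atan2(centre_px - IMAGE_WIDTH / 2.0, FOCAL_PX)
    # Pinhole approximation: apparent width shrinks linearly with distance.
    distance = GATE_WIDTH_M * FOCAL_PX / max(right_x - left_x, 1.0)
    return bearing, distance
```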
The task at hand is to shoot a torpedo from a launcher when the robot is at a certain distance from two differently colored cupids, such that the torpedo passes through the center of the targets.
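As an illustration, the firing decision could be gated on alignment and range as sketched below; the alignment tolerance and distance window are placeholder values, not competition parameters.

```python
# Sketch: decide when to fire the torpedo. Tolerances are placeholder values.
def ready_to_fire(target_err_px, distance_m,
                  max_err_px=10, min_dist_m=0.8, max_dist_m=1.5):
    """target_err_px: (x, y) pixel offset of the target centre from the launcher axis.
    distance_m: estimated range to the target."""
    aligned = (abs(target_err_px[0]) < max_err_px and
               abs(target_err_px[1]) < max_err_px)
    in_range = min_dist_m <= distance_m <= max_dist_m
    return aligned and in_range
```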
The task at hand is gargantuan and needs to be abstracted out into different modules, and ROS is well suited to this kind of abstraction. For more information on the different layers present in the code, refer to the following documentation (a simplified sketch of how the layers fit together follows this list):
- debug_layer
- master_layer: coordinates the tasks to be performed
- task_handler_layer: implements the individual tasks
- motion_library_layer: handles individual calibrated motions
- hardware_layer
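The snippet below is a highly simplified, hypothetical sketch of how these layers relate to one another; all class and method names are placeholders, and the real interfaces are documented in the packages linked above.

```python
# Hypothetical sketch of the layering; names below are placeholders only.
class MotionLibrary(object):                  # motion_library_layer
    def set_depth(self, metres):
        pass                                  # calibrated heave motion
    def move_forward(self, metres):
        pass                                  # calibrated surge motion

class LineTask(object):                       # task_handler_layer
    def __init__(self, motion):
        self.motion = motion
    def run(self):
        self.motion.move_forward(1.0)         # follow the line (simplified)

class Master(object):                         # master_layer
    def __init__(self, tasks):
        self.tasks = tasks
    def run_mission(self):
        for task in self.tasks:               # run tasks in the planned order
            task.run()

if __name__ == "__main__":
    Master([LineTask(MotionLibrary())]).run_mission()
```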
- Create a catkin workspace following the guidelines given here
```bash
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
catkin_init_workspace
cd ~/catkin_ws/
catkin_make
```
- Clone this repository to your catkin workspace
```bash
cd ~/catkin_ws/src
git clone https://github.com/AUV-IITK/auv2016.git
```
- Run the build.sh script
```bash
~/catkin_ws/src/auv2016/build.sh
```
To get started with contributing to this repository, look out for open issues here. Kindly read the Developer's Guide before sending a pull request! :)