Project specifications
We initially decided to do recognition and tracking of ground objects using OpenCV, with a single camera embedded on the drone. However, this might be too difficult to achieve, so we have decided to go down the route of 3D rendering of the environment using stereo vision from two cameras, still using OpenCV. We have also replaced the drone with a hexapod robot.
We are planning to run preliminary tests with webcams on a computer: two cameras rigidly attached to each other and connected to a laptop, using OpenCV and PCL for rendering.
After this we will attach the cameras to a robot and test.
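For the preliminary tests, a minimal capture sketch along these lines could be used to grab a near-synchronized pair of frames from the two webcams (the camera indices 0 and 1 and the output filenames are assumptions, not part of the specification):

```cpp
// Minimal sketch: grab one frame pair from two rigidly mounted webcams for later stereo processing.
// Device indices 0 and 1 and the output filenames are assumptions.
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture leftCam(0), rightCam(1);
    if (!leftCam.isOpened() || !rightCam.isOpened()) {
        std::cerr << "Could not open both cameras" << std::endl;
        return 1;
    }

    cv::Mat left, right;
    // grab() both first, then retrieve(), to keep the two frames as close in time as possible
    leftCam.grab();
    rightCam.grab();
    leftCam.retrieve(left);
    rightCam.retrieve(right);

    cv::imwrite("left.png", left);
    cv::imwrite("right.png", right);
    return 0;
}
```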
- Affordable 3D point mapping.
- Autonomous 3D mapping of the surrounding environment using stereo vision.
- Generic framework
To our knowledge this has not been done before, so further investigation will follow. We are aiming to make the project open source so that we can reuse other open source components and attract external contributors (after the MSc).
- OpenCV is an open source computer vision library for manipulating video streams. In our case we will use it to capture images from the different cameras and turn the stereo pairs into a point cloud.
- PCL (http://pointclouds.org/), another open source project, can be used to render the 3D scene from the point cloud obtained with OpenCV, as shown in the sketch after this list.
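A minimal sketch of this OpenCV-to-PCL hand-off, assuming the stereo pair has already been rectified and that the reprojection matrix Q comes from a prior calibration step (the identity matrix below is only a placeholder, as are the input filenames and the block-matching parameters):

```cpp
// Sketch: disparity from a rectified stereo pair, reprojection to 3D, and rendering with PCL.
#include <opencv2/opencv.hpp>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/visualization/pcl_visualizer.h>
#include <cmath>

int main() {
    cv::Mat left = cv::imread("left.png", cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);

    // Block-matching disparity; the parameters are rough defaults to be tuned per rig.
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 21);
    cv::Mat disparity16, disparity;
    bm->compute(left, right, disparity16);
    disparity16.convertTo(disparity, CV_32F, 1.0 / 16.0);  // StereoBM outputs fixed-point values

    // Q would normally come from cv::stereoRectify; the identity matrix is only a placeholder.
    cv::Mat Q = cv::Mat::eye(4, 4, CV_32F);
    cv::Mat points3d;
    cv::reprojectImageTo3D(disparity, points3d, Q, true);

    // Copy the OpenCV 3D image into a PCL cloud so it can be rendered.
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    for (int y = 0; y < points3d.rows; ++y) {
        for (int x = 0; x < points3d.cols; ++x) {
            cv::Vec3f p = points3d.at<cv::Vec3f>(y, x);
            // Skip points marked invalid by reprojectImageTo3D (assigned a very large Z).
            if (std::isfinite(p[2]) && std::fabs(p[2]) < 1e4f) {
                cloud->push_back(pcl::PointXYZ(p[0], p[1], p[2]));
            }
        }
    }

    pcl::visualization::PCLVisualizer viewer("stereo cloud");
    viewer.addPointCloud<pcl::PointXYZ>(cloud, "cloud");
    viewer.spin();
    return 0;
}
```

In practice the reprojection matrix, the block-matching parameters, and the invalid-point filtering would all need to be tuned against the actual camera rig and its calibration data.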