Rainlabuw/ML-based-optimal-control

Machine Learning based Perception for Optimal Motion Planning and Control

Niyousha Rahimi, RAIN Lab University of Washington
View Demo · Report Bug

Table of Contents
  1. About The Project
  2. Requirements
  3. Getting Started with Unreal Engine
  4. Main Project
  5. Usage
  6. Roadmap
  7. Contact
  8. Acknowledgements

About The Project

In this work, a machine-learning-based perception module is developed using Mask-RCNN (and Bayesian neural networks) with RGB-D images to estimate the positions of objects in an Unreal Engine environment. Two methods are offered for autonomous navigation:

  • Sampling-based approaches such as RRT^* and A*
  • Stochastic optimal control: successive convexification for path planning
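As a toy illustration of the search-based planners mentioned above, here is a minimal A* on a 2-D occupancy grid. The grid, start, and goal are made-up values; the project's actual planner runs on the airport occupancy map described later in this README.

```python
# Minimal A* search on a 2-D occupancy grid (0 = free, 1 = occupied).
# A sketch only -- the grid, start, and goal below are toy values.
import heapq

def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]          # (f-score, cell) priority queue
    g = {start: 0}                   # cost-so-far per cell
    parent = {}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:              # reconstruct path by walking parents
            path = [cur]
            while cur in parent:
                cur = parent[cur]
                path.append(cur)
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[cur] + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    parent[(nr, nc)] = cur
                    h = abs(nr - goal[0]) + abs(nc - goal[1])  # Manhattan heuristic
                    heapq.heappush(open_set, (ng + h, (nr, nc)))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))  # detours around the blocked middle row
```

RRT^* differs in that it samples the continuous space rather than searching a fixed grid, but the replanning loop around either planner is the same.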

It should be mentioned that some parts of the project are still under development.

Requirements

Main requirements are as follows:

GPU and processor I used:

Intel(R) Core(TM) i7-8850H CPU @ 2.60 GHz, 2592 Mhz, 6 Core(s), 12 Logical Processor(s)

Nvidia Quadro P2000

Getting Started with Unreal Engine

Please download and install Unreal Engine. I have created an Unreal Engine environment of an airport. This environment can be downloaded from here:

Make sure to load AirportShowcase and hit play before running any code.

AirportShowcase

Building the initial map

There are three frames of reference we need to consider:

  1. The Unreal Engine coordinate frame
  2. The moving vehicle's coordinate frame (the origin of AirSim's coordinate frame is placed at the position of the camera when the simulation starts)
  3. The map's coordinate frame

The following figure demonstrates these coordinate frames and their origins:
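As a sketch of moving between frames 2 and 3, the snippet below converts a point expressed in the vehicle/AirSim frame into map indices. The map origin and cell size here are hypothetical placeholders, not values taken from the project.

```python
import numpy as np

# Hypothetical calibration: where the vehicle frame's origin sits in the
# map, and how many meters one map cell covers. Replace with real values.
MAP_ORIGIN = np.array([300.0, 50.0])   # vehicle-frame origin, in map cells
CELL_SIZE = 1.0                        # meters per map cell

def vehicle_to_map(p_vehicle):
    """Convert an (x, y) point in the vehicle/AirSim frame to map indices."""
    return np.round(MAP_ORIGIN + np.asarray(p_vehicle[:2]) / CELL_SIZE).astype(int)

cell = vehicle_to_map((10.0, -3.0))  # 10 m forward, 3 m left of the start pose
```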

Code is provided in map.py for building an initial occupancy map.
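A minimal sketch of what an occupancy-map builder like map.py might do: rasterize known obstacle points into a binary grid. The grid shape, resolution, and sample points below are assumptions for illustration, not the project's actual parameters.

```python
import numpy as np

# Hypothetical map parameters -- adjust to the real environment.
GRID_SHAPE = (400, 300)   # grid cells
CELL_SIZE = 1.0           # meters per cell

def build_occupancy_map(obstacle_points):
    """Mark each (x, y) obstacle point as occupied (1) in a fresh grid."""
    grid = np.zeros(GRID_SHAPE, dtype=np.uint8)
    for x, y in obstacle_points:
        i, j = int(x / CELL_SIZE), int(y / CELL_SIZE)
        if 0 <= i < GRID_SHAPE[0] and 0 <= j < GRID_SHAPE[1]:
            grid[i, j] = 1
    return grid

occ = build_occupancy_map([(120.0, 80.0), (121.0, 80.5)])
```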

Main project

The main project is carried out in car_sim.py.

Please update root-directory in Mask_RCNN.py and car_sim.py.

Given the initial occupancy map, RRT^* is used to generate a path from the start point (300, 50) to the end point (25, 250). When the vehicle reaches the vicinity of the unknown obstacle, it starts processing images taken of the scene. Mask-RCNN is used to detect the obstacle in the image, and the depth map is then used to determine the position of the unknown obstacle. The map is updated using this information, a new path is generated, and the vehicle follows the new path to the goal location.
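The depth-based localization step can be sketched as a pinhole back-projection: take a pixel inside the Mask-RCNN detection (e.g. the mask centroid), look up its depth, and recover a camera-frame 3-D position. The camera intrinsics and example values below are hypothetical, not the project's calibration.

```python
import numpy as np

# Hypothetical pinhole intrinsics for a 640x480 camera -- placeholders only.
FX = FY = 320.0          # focal lengths, in pixels
CX, CY = 320.0, 240.0    # principal point

def pixel_to_camera(u, v, depth):
    """Back-project pixel (u, v) at the given depth (m) into the camera frame."""
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    return np.array([x, y, depth])

# e.g. a detection centered at pixel (400, 260) with 8 m of depth
p = pixel_to_camera(400, 260, 8.0)
```

The resulting camera-frame point would then be transformed into the map frame (see the coordinate frames above) before marking the obstacle in the occupancy map and replanning.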

Usage

Here's a demo of the simulation:

View Demo

Roadmap

Please look out for these updates in the coming days:

  1. Debugging successive convexification for path planning.
  2. Using a Bayesian NN to predict the accuracy of the estimates.

Contact

Niyousha Rahimi - nrahimi@uw.edu

RAIN Lab, University of Washington
