
# EVDodgeNet: Deep Dynamic Obstacle Dodging with Event Cameras

EVDodgeNet by the Perception & Robotics Group at the Department of Computer Science, University of Maryland, College Park, and the Robotics & Perception Group at the Department of Informatics, University of Zurich & ETH Zurich.

## Code

Please follow our Wiki for dataset downloads, instructions for running the code, and steps to create a custom dataset. In case you don't understand something or run into issues, please open a GitHub issue and we'll resolve it as soon as possible. Although we haven't released our hardware setup details, a guide to the hardware setup used in our lab can be found here.

Check out our YouTube video, which depicts the proposed framework of our bio-inspired perceptual design for quadrotors.

## EVDodgeNet: Deep Dynamic Obstacle Dodging with Event Cameras

Dynamic obstacle avoidance on quadrotors requires low latency. A class of sensors particularly suited to such scenarios is the event camera. In this paper, we present a deep-learning-based solution for dodging multiple dynamic obstacles on a quadrotor with a single event camera and onboard computation. Our approach uses a series of shallow neural networks to estimate both the ego-motion and the motion of independently moving objects. The networks are trained in simulation and transfer directly to the real world without any fine-tuning or retraining.
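To make the "shallow network on event frames" idea concrete, here is a minimal, purely illustrative PyTorch sketch of a small CNN that regresses a motion estimate from an event frame. The two-channel event-frame encoding, layer sizes, output parameterization, and the name `ShallowEgoMotionNet` are all assumptions for illustration, not the networks released in this repository.

```python
# Illustrative only: a shallow CNN regressing a motion estimate from an event frame.
# Channel counts, input encoding, and the 6-DoF output are assumptions, not EVDodgeNet's code.
import torch
import torch.nn as nn

class ShallowEgoMotionNet(nn.Module):
    def __init__(self, in_channels=2, out_dim=6):
        # in_channels=2: e.g. per-pixel positive/negative event counts (an assumed encoding)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling keeps the network shallow and small
        )
        self.regressor = nn.Linear(64, out_dim)

    def forward(self, event_frame):
        x = self.features(event_frame)
        return self.regressor(x.flatten(1))

# Usage: one 2-channel 180x240 event frame (DAVIS-like resolution).
motion = ShallowEgoMotionNet()(torch.zeros(1, 2, 180, 240))
```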

We successfully evaluate and demonstrate the proposed approach in numerous real-world experiments with obstacles of different shapes and sizes, achieving an overall success rate of 70%, including objects of unknown shape and a low-light testing scenario. To our knowledge, this is the first deep-learning-based solution to the problem of dynamic obstacle avoidance using event cameras on a quadrotor. Finally, we also extend our work to the pursuit task by merely reversing the control policy, demonstrating that our navigation stack can cater to different scenarios.
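The pursuit extension mentioned above amounts to a sign flip on the evasive command. A hedged sketch of that idea follows; the function names and the velocity representation are hypothetical and not our released control code.

```python
# Illustrative sketch: pursuit as the reversal of an evasive (dodge) command.
import numpy as np

def evasive_velocity(obstacle_direction, speed=1.0):
    """Hypothetical dodge policy: command a velocity away from the approaching object."""
    direction = obstacle_direction / np.linalg.norm(obstacle_direction)
    return -speed * direction

def pursuit_velocity(obstacle_direction, speed=1.0):
    """Pursuit is obtained by simply reversing the dodge command."""
    return -evasive_velocity(obstacle_direction, speed)

# Example: an object approaching from the +x direction.
print(evasive_velocity(np.array([1.0, 0.0, 0.0])))   # dodge: move along -x
print(pursuit_velocity(np.array([1.0, 0.0, 0.0])))   # pursue: move along +x
```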

## Publication

If you find our work useful, please cite us as follows:

```bibtex
@inproceedings{Sanket2019EVDodgeEA,
  title={EVDodgeNet: Deep Dynamic Obstacle Dodging with event cameras},
  author={Nitin J. Sanket and Chethan M. Parameshwara and Chahat Deep Singh and Ashwin V. Kuruttukulam and Cornelia Fermuller and Davide Scaramuzza and Yiannis Aloimonos},
  year={2019}
}
```

## Maintainers

## What will not be released?

We do not plan to release the code for EVSegFlowNet described in the paper, the hardware setup tutorial (since the Intel Aero quadrotor is no longer sold), or the code for the control policy. Please do not email us asking about any of the aforementioned items.

## Acknowledgements

The code for EVHomographyNet is based on the code from Unsupervised Deep Homography.
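Unsupervised Deep Homography, and deep homography networks built on it, predict a homography through a 4-point corner-displacement parameterization rather than regressing the 3x3 matrix directly. The sketch below shows one common way to turn such predicted corner displacements into a homography with OpenCV; the corner layout and the helper name are illustrative assumptions, not code from this repository.

```python
# Illustrative sketch: converting 4-point corner displacements into a 3x3 homography.
import cv2
import numpy as np

def four_point_to_homography(corners, displacements):
    """Map four patch corners plus predicted displacements to a homography.

    `corners` is a 4x2 array of patch corners; `displacements` is the 4x2 shift
    a 4-point-parameterized network would regress. Hypothetical helper, for illustration.
    """
    src = np.asarray(corners, dtype=np.float32)
    dst = src + np.asarray(displacements, dtype=np.float32)
    return cv2.getPerspectiveTransform(src, dst)

# Example: a 128x128 patch whose corners shift by a few pixels.
H = four_point_to_homography(
    [[0, 0], [127, 0], [127, 127], [0, 127]],
    [[2, -1], [0, 3], [-1, 1], [1, 0]],
)
```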

## License

Copyright (c) 2019 Perception and Robotics Group (PRG)