This repository is a fork of Microsoft's AirSim platform. It is currently being optimized for training obstacle-avoidance models for drones and vehicles, as well as for experimentation with Capsule Convolutional Neural Networks (CCNNs). The end goal of this project is a platform for easily testing swarm-based, collective-sampling state-estimation networks and communication patterns.
- Design an easy-to-use testing harness for verifying state-estimation, collective agent networks.
- Design, test, and implement a software-in-the-loop training process for obstacle-avoidance models based on image and distance data (stereo cameras and lidar/radar).
- Research Capsule Convolutional Neural Networks for use in obstacle avoidance and adversarial agent detection.
- Design and implement a distributed deep reinforcement learning platform for drones, using either Microsoft Azure or AWS to train complex control modules and SLAM components.
For more details, see the precompiled binaries documentation.
This is the basic documentation; view our detailed documentation for all aspects of AirSim. More detailed documentation of the changes made, and of how to use this platform, will be written during and upon completion of this project.
There are two ways you can generate training data from AirSim for deep learning. The easiest way is to simply press the record button in the lower right corner. This will start writing pose and images for each frame. The data logging code is pretty simple and you can modify it to your heart's content.
A better way to generate training data exactly the way you want is by accessing the APIs. This allows you to be in full control of how, what, where and when you want to log data.
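As a minimal sketch of such a custom logger, the snippet below writes one tab-separated row per frame (timestamp, position, orientation quaternion, image filename), loosely following the layout of AirSim's recording output. The field layout and names here are illustrative; in a real session each frame would be filled from the simulator API (for example, the pose from `client.simGetVehiclePose()`).

```python
import csv
import io

def write_log(frames):
    """Write (timestamp_ms, position, quaternion, image_file) tuples
    as tab-separated rows, one per frame, with a header line."""
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter="\t")
    writer.writerow(["TimeStamp", "POS_X", "POS_Y", "POS_Z",
                     "Q_W", "Q_X", "Q_Y", "Q_Z", "ImageFile"])
    for timestamp_ms, position, quaternion, image_file in frames:
        writer.writerow([timestamp_ms, *position, *quaternion, image_file])
    return buf.getvalue()

# Illustrative usage: in practice, each frame's pose and image would be
# pulled from the simulator inside your capture loop.
log = write_log([(1000, (0.0, 0.0, -2.0), (1.0, 0.0, 0.0, 0.0), "img_0.png")])
print(log)
```

Because you own the loop, you decide how often to sample, which sensors to query, and where the rows go (local file, database, or a cloud bucket).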
Yet another way to use AirSim is the so-called "Computer Vision" mode. In this mode, you don't have vehicles or physics. You can use the keyboard to move around the scene, or use APIs to position available cameras in any arbitrary pose, and collect images such as depth, disparity, surface normals or object segmentation.
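Computer Vision mode is selected through the `SimMode` entry in AirSim's `settings.json`, for example:

```json
{
  "SettingsVersion": 1.2,
  "SimMode": "ComputerVision"
}
```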
Press F10 to see various options available for weather effects. You can also control the weather using APIs. Press F1 to see other options available.
More technical details are available in the AirSim paper (FSR 2017 Conference). Please cite it as:
@inproceedings{airsim2017fsr,
author = {Shital Shah and Debadeepta Dey and Chris Lovett and Ashish Kapoor},
title = {AirSim: High-Fidelity Visual and Physical Simulation for Autonomous Vehicles},
year = {2017},
booktitle = {Field and Service Robotics},
eprint = {arXiv:1705.05065},
url = {https://arxiv.org/abs/1705.05065}
}
Please take a look at open issues if you are looking for areas to contribute to.
We are maintaining a list of a few projects, people and groups that we are aware of. If you would like to be featured in this list please make a request here.
Join our GitHub Discussions group to stay up to date or ask any questions.
We also have an AirSim group on Facebook.
- Python wrapper for Open AI gym interfaces.
- Python wrapper for Event camera simulation
- Voxel grid construction
- Programmable camera distortion
- Wind simulation
- Azure development environment with documentation
- ROS wrapper for multirotor and car.
For a complete list of changes, view our Changelog.
If you run into problems, check the FAQ and feel free to post issues in the AirSim repository.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
This project is released under the MIT License. Please review the License file for more details.