Would AirSim ever become a viable RL simulator? #61
HarrySoteriou asked this question in Q&A
-
Hi Harry, I'm a master's student doing research on multi-agent reinforcement learning with drones (indoor target search, to be specific). Finding a simulator that can do reinforcement learning while also being compatible with ROS and PX4 has been a wild ride so far. I'm curious: one year later, have you been successful in completing your project elsewhere?
-
I have been working on an autonomous landing project using computer vision and deep RL for almost two years. I chose AirSim for its Python API, superior documentation, photo-realistic graphics, diverse sensor suite, and OpenAI Gym integration. But I ran into some major problems that have led me to abandon the simulator for good:
The main one concerns the behavior of movement commands with and without the .join() call. Supposedly, .join() ensures that no new movement is issued until the current action completes, so that a consistent mapping from state s -> action a -> state s' can be learned. Yet in my evaluation runs, a trained policy could only achieve 100% landing success when I removed the .join() call, which is irrational: a new action is then taken as soon as a new observation is recorded, and no consistent mapping is being used.
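For reference, here is a minimal sketch of the two call patterns being compared, using AirSim's standard multirotor client; the coordinates and velocity are placeholder values, not from the original post:

```python
import airsim

# Connect to the AirSim multirotor simulation.
client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)

# Blocking call: .join() waits until the maneuver finishes, so the next
# observation is the state s' actually produced by action a.
client.moveToPositionAsync(0, 0, -5, 2).join()
s_prime = client.getMultirotorState()

# Non-blocking call: without .join(), the method returns immediately and
# the drone is still mid-maneuver when the next observation is taken.
client.moveToPositionAsync(5, 0, -5, 2)
s_mid_flight = client.getMultirotorState()
```

With the blocking version, each RL step observes the outcome of a completed action; without it, observations are sampled mid-flight, so the transition the agent sees does not correspond cleanly to the action it just took.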
Are there any examples in the AirSim project where RL policies have converged? Or does my 5th point stand, and should the AirSim simulator not be used for RL research purposes?