I need to include different robots (e.g., different quadrupeds, drones, or a mix of both) in a direct RL environment to experiment with training a single policy on different robots (for example, a different robot in each environment), similar to this paper.
How would this be possible? Moreover, is multi-agent training now possible in IsaacLab?
Thanks a lot!
I'm not sure exactly how they did it, but based on what I see above, they made one environment containing all possible robots and cloned it into separate environments. This is fairly simple to do with the scene: add the robots to the interactive scene with an offset in their initial positions.
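A minimal sketch of that idea, assuming Isaac Lab's `InteractiveSceneCfg`/`configclass` API and the `ANYMAL_C_CFG`/`CRAZYFLIE_CFG` asset presets (class name, offsets, and prim names here are illustrative, not from the paper):

```python
# Hypothetical config fragment: one cloned environment containing two
# different robots, separated by an offset in their initial positions.
from omni.isaac.lab.scene import InteractiveSceneCfg
from omni.isaac.lab.utils import configclass
from omni.isaac.lab_assets import ANYMAL_C_CFG, CRAZYFLIE_CFG  # example presets


@configclass
class MixedRobotSceneCfg(InteractiveSceneCfg):
    """Scene with a quadruped and a drone in every cloned environment."""

    # Quadruped at each environment's origin.
    quadruped = ANYMAL_C_CFG.replace(prim_path="{ENV_REGEX_NS}/Quadruped")
    # Drone offset 2 m along +x so the assets do not overlap.
    drone = CRAZYFLIE_CFG.replace(prim_path="{ENV_REGEX_NS}/Drone")
    drone.init_state.pos = (2.0, 0.0, 0.5)
```

The `{ENV_REGEX_NS}` placeholder is what lets the scene clone both assets into every environment; only the per-asset `init_state` offsets keep them apart.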
We are reviewing #221, and with that you should be able to get a group of observations and rewards (which I think is what multi-agent training ultimately needs). Plugging it into an RL training library is something we have wanted to do, but its priority is currently unclear.
@Mayankm96
Building on the question of multi-agent training: as you mentioned, it's easier to make one environment with all possible robots and then clone it. However, having all the robots in every environment might cause a significant slowdown in the simulation.
Is it possible to either disable assets in specific environments or delete them? I am thinking of splitting the environments by robot type, so that each environment contains only one type of robot, and then handling the observation and action buffers in the DirectRLEnv. Please let me know if there is a more optimized way to achieve a multi-agent training setup; any help with this question would be much appreciated.
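The buffer-handling part of that idea can be sketched in plain NumPy (robot types, dimension values, and the zero-padding scheme below are all hypothetical, not an Isaac Lab API): assign one robot type per environment, zero-pad each type's observations to a common width, and slice the shared action buffer back out per type.

```python
import numpy as np

NUM_ENVS = 8
# Hypothetical assignment: alternate robot types (0 = quadruped, 1 = drone).
robot_type = np.array([0, 1] * (NUM_ENVS // 2))

OBS_DIMS = {0: 48, 1: 17}  # per-type observation sizes (illustrative)
ACT_DIMS = {0: 12, 1: 4}   # per-type action sizes (illustrative)
MAX_OBS = max(OBS_DIMS.values())
MAX_ACT = max(ACT_DIMS.values())


def pad_observations(raw_obs_by_type):
    """Zero-pad each type's observations into one (NUM_ENVS, MAX_OBS) buffer."""
    obs_buf = np.zeros((NUM_ENVS, MAX_OBS), dtype=np.float32)
    for rtype, obs in raw_obs_by_type.items():
        env_ids = np.nonzero(robot_type == rtype)[0]
        obs_buf[env_ids, : OBS_DIMS[rtype]] = obs
    return obs_buf


def split_actions(action_buf):
    """Slice the padded (NUM_ENVS, MAX_ACT) action buffer out per robot type."""
    return {
        rtype: action_buf[robot_type == rtype, : ACT_DIMS[rtype]]
        for rtype in ACT_DIMS
    }
```

Inside a DirectRLEnv subclass, `pad_observations` would feed the shared policy and `split_actions` would route the policy output back to each robot type's actuators; the padding wastes some buffer width but keeps a single fixed-size interface for the RL library.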
Hi. @Mayankm96
I have a similar question; however, I would like to insert two different robots and run them with two different policies. Is this possible now with IsaacLab? Thanks for any hints and suggestions.
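One way to run two policies side by side, sketched with NumPy and stand-in policy functions (the env-to-policy assignment and the policies themselves are hypothetical placeholders for trained networks), is to mask the environment batch by policy id and let each policy act only on its own rows:

```python
import numpy as np

NUM_ENVS = 6
# Hypothetical split: first half of the envs use policy A, second half policy B.
policy_of_env = np.array([0, 0, 0, 1, 1, 1])


def policy_a(obs):
    """Stand-in for, e.g., a trained quadruped policy."""
    return obs * 0.5


def policy_b(obs):
    """Stand-in for, e.g., a trained drone policy."""
    return -obs


def compute_actions(obs):
    """Route each environment's observation to its assigned policy."""
    actions = np.empty_like(obs)
    for pid, policy in enumerate((policy_a, policy_b)):
        mask = policy_of_env == pid
        actions[mask] = policy(obs[mask])
    return actions
```

With real networks, each masked sub-batch would be a single forward pass, so the per-step cost is one inference call per policy rather than per environment.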