
How to include different robots of different DOFs in the scene? #657

Open
mohamedamr13 opened this issue Jul 8, 2024 · 3 comments
Labels
question Further information is requested

Comments

@mohamedamr13

Question

I need to include different robots (e.g., different quadrupeds, drones, or a mix of both) in a direct RL environment to experiment with training a single policy on different robots (for example, a different robot in each environment), similar to this paper.

How would this be possible? Moreover, is multi-agent training now possible in Isaac Lab?
Thanks a lot!

[image]

@Mayankm96
Contributor

I'm not sure how exactly they did it, but based on what I see above, they made one environment with all possible robots and cloned it into separate environments. This is fairly simple to do by placing all the robots in the interactive scene with an offset in their initial positions.
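A minimal sketch of that pattern, assuming the `omni.isaac.lab` API of that period and its pre-defined asset configs (`ANYMAL_C_CFG`, `CRAZYFLIE_CFG`); import paths and config names may differ in your Isaac Lab version:

```python
# Sketch: one interactive scene holding two different robots, cloned across all
# environments. Assumes omni.isaac.lab and its bundled asset configs; adjust
# import paths and config names to your Isaac Lab version.
from omni.isaac.lab.assets import ArticulationCfg
from omni.isaac.lab.scene import InteractiveSceneCfg
from omni.isaac.lab.utils import configclass
from omni.isaac.lab_assets import ANYMAL_C_CFG, CRAZYFLIE_CFG  # quadruped + drone


@configclass
class MultiRobotSceneCfg(InteractiveSceneCfg):
    """Every environment gets a quadruped and a drone, offset so they don't collide."""

    # quadruped placed to one side of the environment origin
    quadruped: ArticulationCfg = ANYMAL_C_CFG.replace(
        prim_path="{ENV_REGEX_NS}/Quadruped",
        init_state=ANYMAL_C_CFG.init_state.replace(pos=(0.0, -1.5, 0.6)),
    )
    # drone offset to the other side, at hovering height
    drone: ArticulationCfg = CRAZYFLIE_CFG.replace(
        prim_path="{ENV_REGEX_NS}/Drone",
        init_state=CRAZYFLIE_CFG.init_state.replace(pos=(0.0, 1.5, 1.0)),
    )
```

The `{ENV_REGEX_NS}` placeholder is what lets the scene clone both robots into every environment, and the per-robot `init_state` offsets keep them from spawning on top of each other.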

We are reviewing #221, and with that, you should be able to get a group of observations and rewards (which I think is what multi-agent training needs in the end). Plugging it into a library for RL training is something we have wanted to do, but its priority is currently unclear.

@Mayankm96 added the question label on Jul 8, 2024
@Ashutosh781

@Mayankm96
Building on the question of multi-agent training: as you mentioned, it's easier to make one environment with all possible robots and then clone it. However, having all the robots in every environment might cause a significant slowdown in the simulation.

Is it possible to either disable assets in some specific environments or delete them? I am thinking of splitting the environments by robot type, so that there is only one type of robot active in each environment, and then handling the observation and action buffers in the DirectRLEnv. Please let me know if there is another, more optimized way to achieve a multi-agent training setup; any help with this question would be much appreciated.
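A hypothetical sketch of that buffer-splitting idea (not an official API; all helper names are illustrative): assign each environment one active robot type and route zero-padded action/observation buffers through per-type environment-index masks.

```python
# Hypothetical sketch (not an official API): each environment holds exactly one
# active robot type; padded action/observation buffers are routed through
# per-type environment-index masks. Helper names are illustrative.
import torch

from omni.isaac.lab.envs import DirectRLEnv


class MixedRobotEnv(DirectRLEnv):
    """Half the environments drive a quadruped, the other half a drone.
    `self.quadruped` / `self.drone` are Articulation handles created in
    `_setup_scene()` (omitted here), as are the remaining abstract methods."""

    def _setup_indices(self):
        all_ids = torch.arange(self.num_envs, device=self.device)
        self.quad_ids = all_ids[: self.num_envs // 2]
        self.drone_ids = all_ids[self.num_envs // 2 :]

    def _apply_action(self):
        # actions are zero-padded to the largest action dimension; each robot
        # type reads only the slice it needs
        quad_actions = self.actions[self.quad_ids, :12]   # e.g., 12 joint targets
        drone_actions = self.actions[self.drone_ids, :4]  # e.g., 4 rotor commands
        self.quadruped.set_joint_position_target(quad_actions, env_ids=self.quad_ids)
        # apply drone_actions through the drone's actuation path (e.g., external
        # forces/torques), restricted to self.drone_ids

    def _get_observations(self) -> dict:
        # zero-pad per-robot observations into one shared buffer
        obs = torch.zeros(self.num_envs, 48, device=self.device)  # 48 = max obs dim here
        obs[self.quad_ids, :48] = self._quadruped_obs()   # hypothetical helpers that
        obs[self.drone_ids, :13] = self._drone_obs()      # compute per-type observations
        return {"policy": obs}
```

Note that both articulations still exist in every cloned environment; only the envs in each index set receive commands, so the inactive robot is simply left idle (deleting assets per environment after cloning isn't supported, as far as I know).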

@hutslib

hutslib commented Aug 2, 2024

Hi @Mayankm96,
Similar question; however, I would like to insert two different robots and run them with two different policies. Is this possible now with Isaac Lab? Thanks for any hints and suggestions!
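For what it's worth, one hypothetical way to do this at inference time, assuming both robots share one interactive scene and you have two trained policies exported as TorchScript (the file names and observation helpers below are illustrative):

```python
import torch

# `sim`, `scene`, and `simulation_app` come from the usual Isaac Lab boilerplate
# (AppLauncher, SimulationContext, InteractiveScene) and are omitted here.

policy_a = torch.jit.load("quadruped_policy.pt")  # assumption: TorchScript-exported policies
policy_b = torch.jit.load("drone_policy.pt")

while simulation_app.is_running():
    with torch.inference_mode():
        act_a = policy_a(get_quadruped_obs(scene))  # hypothetical per-robot obs helpers
        act_b = policy_b(get_drone_obs(scene))
    scene["quadruped"].set_joint_position_target(act_a)
    # route act_b through the drone's actuation interface (e.g., forces/torques)
    scene.write_data_to_sim()
    sim.step()
    scene.update(sim.get_physics_dt())
```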
