Issues with rllib_training Example #146
-
I am currently working on integrating the Ray RLlib PPO algorithm into the Basilisk environment as outlined in the `rllib_training` example, but the training setup fails with errors.
Could anyone provide a working example that properly sets up the training without these errors? Any help would be greatly appreciated.
-
Ran into this recently: a Keras update and an RLlib update are incompatible with each other. I recommend the fix described here, which is to drop your TensorFlow install back by one version: https://github.com/ray-project/ray/issues/45050
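As a rough sketch of why the downgrade helps: newer TensorFlow releases ship Keras 3 by default, which breaks the legacy Keras 2 API that older RLlib code expects. The exact version boundary below (2.16) is an assumption based on the linked issue; check it against your RLlib version before pinning (e.g. `pip install "tensorflow==2.15.*"`).

```python
# Pre-flight check before launching RLlib training: warn if the installed
# TensorFlow version is past the Keras 3 switchover.
# NOTE: the (2, 15) cutoff is an assumption taken from the linked Ray issue,
# where TF 2.16+ (Keras 3 by default) is reported incompatible.

def tf_version_ok(version_string: str, max_minor: int = 15) -> bool:
    """Return True if this TF version predates the Keras 3 default (assumed 2.16)."""
    major, minor = (int(part) for part in version_string.split(".")[:2])
    return (major, minor) <= (2, max_minor)

if __name__ == "__main__":
    print(tf_version_ok("2.15.1"))  # True: still ships legacy Keras 2
    print(tf_version_ok("2.16.0"))  # False: Keras 3, triggers the reported errors
```

If the check fails, pinning TensorFlow one minor version back (as the linked issue suggests) is the workaround until RLlib supports Keras 3.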