Unable to use RL algorithms with continuous action space #49

Open
AizazSharif opened this issue Aug 23, 2021 · 13 comments

Comments

@AizazSharif

Hi @praveen-palanisamy

I have been working on macad-gym successfully over the past few months using PPO and many other algorithms. Now I am trying to use DDPG using RLlib which requires continuous action space.

I have changed the boolean "discrete_actions": False within the environment config, but it's still an issue since the policy function is passing Discrete(9), and I do not know the alternative for a continuous action space.
[Screenshot from 2021-08-23 19-44-09]
[Screenshot from 2021-08-23 19-44-23]

I also followed the guide mentioned here, but now it's giving me the following error.
error.txt

Any help in this regard would be appreciated.
Thanks.

@praveen-palanisamy
Owner

Hi @AizazSharif ,
Good to hear about your continued interest and experiments on top of macad-gym.
You did the right thing w.r.t. macad-gym, i.e., setting "discrete_actions": False to make the environment use a continuous action space. Now, w.r.t. the agent's policy, the policy network needs to generate continuous-valued actions of the appropriate shape.
For example, you would create a PPO/DDPG policy whose policy network output is shaped like Box(2) instead of Discrete(9).
Here, Box(2) refers to two continuous-valued outputs (one for steering, another for throttle).
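As a minimal sketch (the exact config nesting and the Box bounds below are assumptions for illustration; they depend on your macad-gym version), the continuous setup could look like this:

```python
from gym.spaces import Box
import numpy as np

# macad-gym env config: switch to continuous actions
# (the exact nesting of this dict depends on your scenario config).
env_config = {
    "discrete_actions": False,  # (steer, throttle) instead of Discrete(9)
}

# Per-agent action space used when building the PPO/DDPG policy.
# The [-1, 1] bounds are an assumption for illustration; use whatever
# ranges your macad-gym version expects for steering and throttle.
action_space = Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)

# The observation space stays whatever the env already provides,
# e.g. an 84x84x3 camera image.
obs_space = Box(low=0.0, high=255.0, shape=(84, 84, 3), dtype=np.float32)
```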

From the error logs, it looks like the DDPG critic network's concat operation is failing to concatenate tensors of different rank: ValueError: Shape must be rank 4 but is rank 2 for 'car1/critic/concat' (op: 'ConcatV2') with input shapes: [?,84,84,3], [?,8]
This operation is defined in RLlib's DDPG (ddpg_policy.py), which you need to configure so that it generates actions of the appropriate shape and range (using the example above).
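To make the rank mismatch concrete, here is a small plain-Keras sketch (not RLlib's actual ddpg_policy.py code): the image observation has to be encoded down to a rank-2 feature tensor before it can be concatenated with the rank-2 action tensor.

```python
import numpy as np
import tensorflow as tf

obs = tf.keras.Input(shape=(84, 84, 3))  # rank-4 batch of camera images: [?, 84, 84, 3]
act = tf.keras.Input(shape=(2,))         # rank-2 batch of actions: [?, 2] here
                                          # (same issue whatever the action width, e.g. [?, 8] in the log)

# tf.concat([obs, act], axis=-1) would fail just like the log above,
# because the two tensors have different ranks.

# Encode the image to a rank-2 feature vector first, then concatenate:
x = tf.keras.layers.Conv2D(16, 8, strides=4, activation="relu")(obs)
x = tf.keras.layers.Conv2D(32, 4, strides=2, activation="relu")(x)
x = tf.keras.layers.Flatten()(x)                        # [?, num_features]
q_in = tf.keras.layers.Concatenate(axis=-1)([x, act])   # [?, num_features + 2]
q_out = tf.keras.layers.Dense(1)(q_in)                  # Q(s, a)

critic = tf.keras.Model(inputs=[obs, act], outputs=q_out)
critic.summary()
```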
Hope that helps.

@AizazSharif
Author

Thanks for the reply @praveen-palanisamy. I will look into it and let you know.

@AizazSharif
Author

AizazSharif commented Aug 30, 2021

I also wanted to ask whether it is possible to have one agent with discrete and another with continuous actions in the same driving scenario? @praveen-palanisamy
As an example, one car is trained using PPO and another using DDPG.

@praveen-palanisamy
Owner

Hi @AizazSharif ,
Missed your new question until now. Yes, you can use different algorithms per agent/car. The RLlib example agents in the MACAD-Agents repository are a good starting point for a multi-agent autonomous driving setting.
You can refer to this sample for a generic PPO/DQN example using RLlib.
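A rough sketch of how such a mixed setup could be wired up (assuming a Ray ~0.8.x-style multiagent config; the policy class import paths, agent IDs, and action spaces below are illustrative assumptions, not code from the linked examples):

```python
from gym.spaces import Box, Discrete
import numpy as np
# Import paths for policy classes vary across Ray versions; these match
# Ray ~0.8.x and are an assumption -- adjust to your installed version.
from ray.rllib.agents.ppo.ppo_tf_policy import PPOTFPolicy
from ray.rllib.agents.ddpg.ddpg_policy import DDPGTFPolicy

obs_space = Box(0.0, 255.0, shape=(84, 84, 3), dtype=np.float32)

# One discrete-action policy (PPO) and one continuous-action policy (DDPG),
# mapped to different cars. Agent IDs "car1"/"car2" are illustrative.
policies = {
    "car1_ppo": (PPOTFPolicy, obs_space, Discrete(9), {}),
    "car2_ddpg": (DDPGTFPolicy, obs_space,
                  Box(-1.0, 1.0, shape=(2,), dtype=np.float32), {}),
}

multiagent = {
    "policies": policies,
    "policy_mapping_fn": lambda agent_id: ("car1_ppo" if agent_id == "car1"
                                           else "car2_ddpg"),
}

# Each algorithm's trainer sees both policies but only trains its own,
# mirroring RLlib's multi_agent_two_trainers example.
ppo_multiagent = dict(multiagent, policies_to_train=["car1_ppo"])
ddpg_multiagent = dict(multiagent, policies_to_train=["car2_ddpg"])
```

With that split, a PPO trainer and a DDPG trainer can alternate train() calls on the same environment while each optimizes only its own agent's policy.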

@AizazSharif
Author

Hi @praveen-palanisamy
Thanks for the reply. I have looked at these examples, but the agents in each environment all share the same type of action space. I couldn't find any example implementation where both discrete and continuous agents run together in a multi-agent setting.

@SExpert12


Hi,
How many agents have you tried, and which algorithms did you use?

@AizazSharif
Author

Hi @SExpert12

I used four discrete-action algorithms (PPO, DQN, A3C, IMPALA) and two continuous-action algorithms (TD3, DDPG).
This issue was resolved over time, and I was able to publish a paper on it.

I have usually tried two to three agents per scenario in my experiments.

@SExpert12

Hi,
Can you please share your code for the continuous-action algorithm? I have been trying to write code for it, but so far without luck.

Thanks for the reply.

@AizazSharif
Author

Hi @SExpert12

Here is the training code for two agents learning independently in a three-way scenario.
https://github.com/T3AS/MAD-ARL/blob/main/examples/step_1_training_victims.py

Testing of the trained policies above can be found in the following script.
https://github.com/T3AS/MAD-ARL/blob/main/examples/step_2_testing_victims.py

@SExpert12

Thanks.
I have gone through all the files, but I couldn't find where you made the change for continuous actions. What I observed is that discrete_actions is set to True.
Can you tell me in which file you made this modification?

@AizazSharif
Author

Sorry @SExpert12, the links I shared were examples of multi-agent settings.

You can find a continuous agent example in the following link.
https://github.com/T3AS/Benchmarking-QRS-2022/blob/master/examples/Training_QRS/DDPG/Train_DDPG_roundabout.py (line 447)

The agent here is trained in the presence of other NPC agents using a continuous action space.

https://github.com/T3AS/Benchmarking-QRS-2022/blob/7e6a40dc6480c384d4ce4ceb4ca333808a9e6ed0/src/macad_gym/envs/intersection/DDPG_roundabout_train.py (line 20)

Let me know if you have more questions.

@SExpert12

Okay.
Thanks.
Let me try and I will get back to you.

@SExpert12

Hi,
I am getting this error when I run this file:

https://github.com/T3AS/MAD-ARL/blob/main/examples/step_1_training_victims.py

2024-07-08 09:21:32.363670: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/ryzen/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/cv2/../../lib64:
2024-07-08 09:21:32.363784: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/ryzen/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/cv2/../../lib64:
2024-07-08 09:21:32.363797: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
/home/ryzen/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/gym/utils/passive_env_checker.py:21: UserWarning: WARN: It seems a Box observation space is an image but the dtype is not np.uint8, actual type: float32. If the Box observation space is not an image, we recommend flattening the observation to have only a 1D vector.
f"It seems a Box observation space is an image but the dtype is not np.uint8, actual type: {observation_space.dtype}. "
2024-07-08 09:21:33,091 INFO resource_spec.py:212 -- Starting Ray with 14.89 GiB memory available for workers and up to 9.31 GiB for objects. You can adjust these settings with ray.init(memory=, object_store_memory=).
2024-07-08 09:21:33,460 INFO services.py:1148 -- View the Ray dashboard at localhost:8265
2024-07-08 09:21:33,591 WARNING sample.py:27 -- DeprecationWarning: wrapping <function at 0x7f13735b39d8> with tune.function() is no longer needed
== Status ==
Memory usage on this node: 5.1/31.2 GiB
Using FIFO scheduling algorithm.
Resources requested: 2/16 CPUs, 0/1 GPUs, 0.0/14.89 GiB heap, 0.0/6.4 GiB objects
Result logdir: /home/ray_results/MA-Inde-PPO-SSUI3CCARLA
Number of trials: 1 (1 RUNNING)
+--------------------------------------------+----------+-------+
| Trial name | status | loc |
|--------------------------------------------+----------+-------|
| PPO_HomoNcomIndePOIntrxMASS3CTWN3-v0_00000 | RUNNING | |
+--------------------------------------------+----------+-------+

2024-07-08 09:21:35,507 ERROR trial_runner.py:521 -- Trial PPO_HomoNcomIndePOIntrxMASS3CTWN3-v0_00000: Error processing event.
Traceback (most recent call last):
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/ray/tune/trial_runner.py", line 467, in _process_trial
result = self.trial_executor.fetch_result(trial)
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/ray/tune/ray_trial_executor.py", line 381, in fetch_result
result = ray.get(trial_future[0], DEFAULT_GET_TIMEOUT)
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/ray/worker.py", line 1513, in get
raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(NotImplementedError): ray::PPO.__init__() (pid=119232, ip=192.168.15.93)
File "python/ray/_raylet.pyx", line 414, in ray._raylet.execute_task
File "python/ray/_raylet.pyx", line 449, in ray._raylet.execute_task
File "python/ray/_raylet.pyx", line 450, in ray._raylet.execute_task
File "python/ray/_raylet.pyx", line 452, in ray._raylet.execute_task
File "python/ray/_raylet.pyx", line 407, in ray._raylet.execute_task.function_executor
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/ray/rllib/agents/trainer_template.py", line 90, in init
Trainer.init(self, config, env, logger_creator)
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/ray/rllib/agents/trainer.py", line 455, in init
super().init(config, logger_creator)
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/ray/tune/trainable.py", line 174, in init
self._setup(copy.deepcopy(self.config))
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/ray/rllib/agents/trainer.py", line 596, in _setup
self._init(self.config, self.env_creator)
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/ray/rllib/agents/trainer_template.py", line 117, in _init
self.config["num_workers"])
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/ray/rllib/agents/trainer.py", line 667, in _make_workers
logdir=self.logdir)
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/ray/rllib/evaluation/worker_set.py", line 62, in init
RolloutWorker, env_creator, policy, 0, self._local_config)
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/ray/rllib/evaluation/worker_set.py", line 272, in _make_worker
_fake_sampler=config.get("_fake_sampler", False))
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py", line 360, in init
self._build_policy_map(policy_dict, policy_config)
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py", line 842, in _build_policy_map
policy_map[name] = cls(obs_space, act_space, merged_conf)
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/ray/rllib/policy/tf_policy_template.py", line 144, in init
obs_include_prev_action_reward=obs_include_prev_action_reward)
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/ray/rllib/policy/dynamic_tf_policy.py", line 188, in init
explore=explore)
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/ray/rllib/utils/exploration/stochastic_sampling.py", line 71, in get_exploration_action
return self._get_tf_exploration_action_op(action_dist, explore)
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/ray/rllib/utils/exploration/stochastic_sampling.py", line 92, in _get_tf_exploration_action_op
false_fn=logp_false_fn)
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 1235, in cond
orig_res_f, res_f = context_f.BuildCondBranch(false_fn)
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 1061, in BuildCondBranch
original_result = fn()
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/ray/rllib/utils/exploration/stochastic_sampling.py", line 87, in logp_false_fn
return tf.zeros(shape=(batch_size, ), dtype=tf.float32)
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/tensorflow_core/python/ops/array_ops.py", line 2434, in zeros
output = _constant_if_small(zero, shape, dtype, name)
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/tensorflow_core/python/ops/array_ops.py", line 2391, in _constant_if_small
if np.prod(shape) < 1000:
File "<array_function internals>", line 6, in prod
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 3052, in prod
keepdims=keepdims, initial=initial, where=where)
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 86, in _wrapreduction
return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 728, in array
" array.".format(self.name))
NotImplementedError: Cannot convert a symbolic Tensor (car1/cond_1/strided_slice:0) to a numpy array.
== Status ==
Memory usage on this node: 5.7/31.2 GiB
Using FIFO scheduling algorithm.
Resources requested: 0/16 CPUs, 0/1 GPUs, 0.0/14.89 GiB heap, 0.0/6.4 GiB objects
Result logdir: /home/ryzen/ray_results/MA-Inde-PPO-SSUI3CCARLA
Number of trials: 1 (1 ERROR)
+--------------------------------------------+----------+-------+
| Trial name | status | loc |
|--------------------------------------------+----------+-------|
| PPO_HomoNcomIndePOIntrxMASS3CTWN3-v0_00000 | ERROR | |
+--------------------------------------------+----------+-------+
Number of errored trials: 1
+--------------------------------------------+--------------+------------------------------------------------------------------------------------------------------------------------------+
| Trial name | # failures | error file |
|--------------------------------------------+--------------+------------------------------------------------------------------------------------------------------------------------------|
| PPO_HomoNcomIndePOIntrxMASS3CTWN3-v0_00000 | 1 | /home/ryzen/ray_results/MA-Inde-PPO-SSUI3CCARLA/PPO_HomoNcomIndePOIntrxMASS3CTWN3-v0_0_2024-07-08_09-21-33levs1l5v/error.txt |
+--------------------------------------------+--------------+------------------------------------------------------------------------------------------------------------------------------+

Traceback (most recent call last):
File "step_1_training_victims.py", line 459, in
"checkpoint_at_end": True,
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/ray/tune/tune.py", line 406, in run_experiments
return_trials=True)
File "/home/miniconda3/envs/MAD-ARL/lib/python3.7/site-packages/ray/tune/tune.py", line 342, in run
raise TuneError("Trials did not complete", incomplete_trials)
ray.tune.error.TuneError: ('Trials did not complete', [PPO_HomoNcomIndePOIntrxMASS3CTWN3-v0_00000])
Killing live carla processes set()

How to solve this now?
