
Modification of observations/actor states #24

Closed
Neel1302 opened this issue Aug 2, 2020 · 12 comments


Neel1302 commented Aug 2, 2020

Hi @praveen-palanisamy,

Now that I have macad-gym set up, I am planning to set up an environment whose states are the global positions (x, y) of the actors (cars or pedestrians) and their velocities (I think CARLA doesn't provide actor velocity directly, but I can estimate it from positions over a past time window). In a practical setting, this could come from on-board GPS, for instance.
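To make the finite-difference idea concrete, here is a minimal sketch (illustrative only; the function name and window layout are made up, and, as it turns out further down the thread, CARLA's get_velocity() can provide this directly):

import numpy as np

def estimate_velocity(positions, timestamps):
    """Average (v_x, v_y) over a short window of past (x, y) positions.

    positions: array-like of shape (N, 2) with the last N locations.
    timestamps: array-like of shape (N,) with the matching simulation times.
    """
    positions = np.asarray(positions, dtype=float)
    timestamps = np.asarray(timestamps, dtype=float)
    dt = timestamps[-1] - timestamps[0]
    if dt <= 0:
        return np.zeros(2)
    return (positions[-1] - positions[0]) / dt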

I understand you used the image itself as the state/observation. Do you have any recommendations on how to modify the observation space? (This could be a potential enhancement feature.)

Thanks,
Neel

@praveen-palanisamy
Owner

Hi @Neel1302,

Cool!

This should be doable by creating a new environment that provides the positions as the observation space instead of the images.

Because MACAD-Gym started out as a platform for deep RL, image observation spaces were the main focus, but support for lower-dimensional observations like positions exists. The read_observations method in the VehicleManager class provides the information you need (including the forward speed/velocity):

py_measurements = {
    "x": cur.get_location().x,
    "y": cur.get_location().y,
    "pitch": cur.get_transform().rotation.pitch,
    "yaw": cur.get_transform().rotation.yaw,
    "roll": cur.get_transform().rotation.roll,
    "forward_speed": cur.get_velocity().x,
    "distance_to_goal": distance_to_goal,
    "distance_to_goal_euclidean": distance_to_goal_euclidean,
    "collision_vehicles": collision_vehicles,
    "collision_pedestrians": collision_pedestrians,
    "collision_other": collision_other,
    "intersection_offroad": intersection_offroad,
    "intersection_otherlane": intersection_otherlane,
    "map": self._config["server_map"],
    "current_scenario": self._scenario,
    "next_command": next_command,
    "previous_actions": self.previous_actions,
    "previous_rewards": self.previous_rewards,
}

Creating a new Gym-compatible RL environment using MACAD-Gym should be as easy as generating a JSON/Dict config. For example, the HomoNcomIndePOIntrxMASS3CTWN3-v0 environment is implemented in this file.
Another example for creating new environments with discrete actions is provided in the wiki here.
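For reference, a minimal usage sketch based on the README, showing how an existing environment is instantiated by name; a new environment defined through your own config dict would be registered and used the same way (the rollout details are elided here):

import gym
import macad_gym  # importing macad_gym registers its environments with Gym

env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")

obs = env.reset()  # multi-agent: a dict of per-actor observations keyed by actor_id
# ... step the environment with a dict of per-actor actions ...
env.close()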


Neel1302 commented Aug 2, 2020

Thanks @praveen-palanisamy!

I see. I created a new environment (changed the SUI and modified the actors) based on the wiki. I also disabled discrete_actions, as I mainly want to work with pedestrian agents (Homogeneous). I will continue working on changing the observation space to include the measurements I need.

I would appreciate it if you could leave this thread open for a bit for follow-ups, as I will be working on this (daily) for about a week.

Thanks,
Neel

@praveen-palanisamy
Owner

Sounds good!
Sure. Let's keep this issue open until you have it working.


Neel1302 commented Aug 3, 2020

Thanks!

Just noting two quick, minor typos that I spotted in the README:

In the Getting Started section, Option 2 for developers:

Create a new conda env named "macad-gym" and install the required packages: conda env create -f conda_env.y~~a~~ml

Activate the macad-gym conda python env: source activate ~~carla-gym~~ macad-gym

Also, one quick question: after the example code runs (i.e. all episodes are done), I get a segmentation fault (core dumped). Did you observe this as well?

@praveen-palanisamy
Owner

Thanks for spotting and sharing the typos! I have fixed one (carla-gym -> macad-gym). What is the other one that you spotted?

Ah! Thank you for reporting this. I missed adding the env.close() method call to the example agent. I have fixed that in #25. You can pull and try again, or add env.close() as the last line of the basic_agent.py example script.
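A defensive variant of the same fix (not from the repo, just a common pattern) is to wrap the rollout in try/finally so the cleanup runs even if the loop raises:

import gym
import macad_gym  # registers the MACAD-Gym environments on import

env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")
try:
    obs = env.reset()
    # ... run the episode loop here ...
finally:
    # Releases the CARLA client/server resources even on errors, which avoids
    # the segmentation fault seen at interpreter exit.
    env.close()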


Neel1302 commented Aug 3, 2020

The other typo was in: conda env create -f conda_env.yaml. This should be: conda env create -f conda_env.yml. I think the strikethrough on the "a" in my previous comment wasn't quite noticeable since it was on only one letter.

Since conda expects a .yml file to create a new environment, it threw an error when I pasted the command with .yaml.

And indeed, adding env.close() as you suggested solved the segmentation fault!

Thanks,
Neel

@praveen-palanisamy
Owner

Oh I see it now! Yeah, the strike-through on the letter "a" seems camouflaged :).

Conda should work fine with an environment file whose extension is .yml or .yaml. It's just that there's only conda_env.yml in this repo, so conda would have complained about not finding any such file. It's good that you reported the typo!

Cool.


Neel1302 commented Aug 3, 2020

Oh I see, makes sense. One other quick correction in:

elif collided_type == 'Pedestrian':
    self.collision_pedestrians += 1

Should be:

    elif collided_type == 'Walker': # Actor type should be Walker instead of Pedestrian
        self.collision_pedestrians += 1

I checked this, and it seems the actor type from the collision sensor event (i.e. event.other_actor) for pedestrians yields 'Walker' rather than 'Pedestrian'. After I made this change (and set early_terminate_on_collision to True), the episode terminates upon collision with other pedestrians, which can help cut down training time.
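For context, here is a minimal, illustrative stand-in for that check (assuming, as the snippet above suggests, that the collided type comes from the Python class name of event.other_actor; in the CARLA Python API pedestrian actors are instances of carla.Walker):

def classify_collision(event, counters):
    """Tally what the ego actor collided with.

    event: a carla.CollisionEvent from the collision sensor callback.
    counters: dict with 'vehicles', 'pedestrians' and 'other' keys
              (a simplified stand-in for the sensor class's attributes).
    """
    # type(event.other_actor).__name__ is the CARLA actor class name,
    # which is 'Walker' (not 'Pedestrian') for pedestrians.
    collided_type = type(event.other_actor).__name__
    if collided_type == "Vehicle":
        counters["vehicles"] += 1
    elif collided_type == "Walker":
        counters["pedestrians"] += 1
    else:
        counters["other"] += 1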

@praveen-palanisamy
Owner

Good catch! Yes, the Actor class name should be Walker for the pedestrian actors as per the latest CARLA code/docs. I'm not sure if it changed between CARLA versions, but since this also works with 0.9.4 as per your report, I have made the change.

@praveen-palanisamy
Owner

Hey @Neel1302, were you able to create the environment with the custom observation space (position, velocity) for your needs?


Neel1302 commented Aug 8, 2020

Hi @praveen-palanisamy,

Yes, indeed! Thanks for your help throughout this process. To use the custom observation space, I created a function that replaces _encode_obs in _reset and _step:

if cam.image is None:
    print("callback_count:", actor_id, ":", cam.callback_count)
image = preprocess_image(cam.image, actor_config)
obs = self._encode_obs(actor_id, image, py_measurement)
self._obs_dict[actor_id] = obs

return (
    self._encode_obs(actor_id, image, py_measurements),
    reward,
    done,
    py_measurements,
)

with:

    def _fetch_loc_vel_obs(self, actor_id, py_measurements):
        """Encode measurements into an obs based on the state-space config.

        Args:
            actor_id (str): Actor identifier
            py_measurements (dict): measurement dict for this actor

        Returns:
            obs (np.ndarray): location and velocity observation for the actor
        """
        if not self._actor_configs[actor_id]["send_measurements"]:
            x = py_measurements["x"]
            y = py_measurements["y"]
            heading = py_measurements["yaw"]  # currently unused
            v_x = py_measurements["velocity"].x
            v_y = py_measurements["velocity"].y
            state = np.array([x, y, v_x, v_y])
            return state
        else:
            print("ERROR: set send_measurements flag to False (a TODO item)")
The function builds obs from the relevant measurements in py_measurements (i.e. 'x', 'y', 'yaw'). I also changed the 'forward_speed' key ('forward_speed': self._actors[actor_id].get_velocity().x ---> 'velocity': self._actors[actor_id].get_velocity()) so I can get the full velocity vector.
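For completeness, the matching Gym observation space for this 4-dimensional state could be declared roughly as follows (the bounds are placeholders, not values from the repo):

import numpy as np
from gym.spaces import Box

# Placeholder bounds for the [x, y, v_x, v_y] state; in practice these would
# come from the map extents and a reasonable speed range for the actors.
loc_vel_observation_space = Box(
    low=np.array([-1000.0, -1000.0, -50.0, -50.0], dtype=np.float32),
    high=np.array([1000.0, 1000.0, 50.0, 50.0], dtype=np.float32),
    dtype=np.float32,
)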

I was then able to import these into my training file to train a custom agent. Although the training has not yet concluded, I think macad-gym has been great for interfacing with CARLA, especially since it supports multiple agents.

I am closing this issue. I can post a link to my fork if someone else needs these measurements as observations rather than images, or feel free to include this function as an addition if you think it would help others.

Thanks again @praveen-palanisamy,
Neel

Neel1302 closed this as completed Aug 8, 2020
@praveen-palanisamy
Owner

That's nice! Glad to hear that you created the custom environment you needed.

Feel free to submit a PR. Ideally, the observation type (images or measurements only or both) should be made a configuration parameter so that creating such a custom environment requires only a configuration file change.
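A rough illustration of that idea (the "obs_space_type" parameter and the helper below are hypothetical, not an existing MACAD-Gym option):

# Hypothetical per-actor configuration switch, shown only to illustrate the idea.
actor_config = {
    "obs_space_type": "measurements",  # "image", "measurements", or "both"
    "send_measurements": True,
}

def encode_obs(actor_config, image, py_measurements):
    """Dispatch to the appropriate encoding based on the hypothetical switch."""
    if actor_config["obs_space_type"] == "measurements":
        return [py_measurements[k] for k in ("x", "y", "yaw")]
    if actor_config["obs_space_type"] == "image":
        return image
    return {"image": image, "measurements": py_measurements}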

Good luck with your work, and feel free to reach out.
