Commit: Merge 0.4.15 version changes to master

Gamenot committed Mar 21, 2021
1 parent fc2b3cc commit 6cb8ffd
Showing 139 changed files with 6,020 additions and 1,462 deletions.
12 changes: 3 additions & 9 deletions .github/workflows/ci-ultra-tests.yml
@@ -34,11 +34,6 @@ jobs:
. .venv/bin/activate
scl scenario build-all ultra/scenarios/pool
pytest -v ./tests/
- - name: Run Header test
-   run : |
-     cd ultra
-     ./header_test.sh
test-package-via-setup:
runs-on: ubuntu-18.04
if: github.event_name == 'push' || github.event.pull_request.head.repo.full_name != github.repository
@@ -60,7 +55,7 @@ jobs:
cd ultra
python3.7 -m venv .venv
. .venv/bin/activate
- pip install --upgrade --upgrade-strategy eager pip
+ pip install --upgrade pip
pip install --upgrade -e .
pip install --upgrade numpy
- name: Run test
@@ -69,7 +64,6 @@ jobs:
. .venv/bin/activate
scl scenario build-all ultra/scenarios/pool
pytest -v ./tests/test_ultra_package.py
test-package-via-wheel:
runs-on: ubuntu-18.04
if: github.event_name == 'push' || github.event.pull_request.head.repo.full_name != github.repository
@@ -93,6 +87,7 @@ jobs:
. .venv/bin/activate
pip install --upgrade --upgrade-strategy eager pip
pip install --upgrade --upgrade-strategy eager wheel
pip install --upgrade --upgrade-strategy eager -e .
python setup.py bdist_wheel
cd dist
pip install $(ls . | grep ultra)
@@ -104,7 +99,6 @@ jobs:
. .venv/bin/activate
scl scenario build-all ultra/scenarios/pool
pytest -v ./tests/test_ultra_package.py
# test-package-via-pypi:
# runs-on: ubuntu-18.04
# if: github.event_name == 'push' || github.event.pull_request.head.repo.full_name != github.repository
@@ -132,4 +126,4 @@ jobs:
# cd ultra
# . .venv/bin/activate
# scl scenario build-all ultra/scenarios/pool
- # pytest -v ./tests/test_ultra_package.py
\ No newline at end of file
+ # pytest -v ./tests/test_ultra_package.py
19 changes: 19 additions & 0 deletions .readthedocs.yaml
@@ -0,0 +1,19 @@
# .readthedocs.yaml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Build documentation in the docs/ directory with Sphinx
sphinx:
configuration: docs/conf.py

# Optionally set the version of Python and requirements required to build your docs
python:
version: 3.7
install:
- method: pip
path: .
extra_requirements:
- dev
51 changes: 51 additions & 0 deletions CHANGELOG.md
@@ -0,0 +1,51 @@
# Change Log
All notable changes to this project will be documented in this file.

This changelog adheres to the format given at [keepachangelog](https://keepachangelog.com/en/1.0.0/)
and should maintain [semantic versioning](https://semver.org).

All text added must be human readable.

Copying and pasting the git commit messages is __NOT__ enough.

## [Unreleased]

## [0.4.15] - 2021-03-18
### Added
- This CHANGELOG, to help keep track of changes in the SMARTS project that can otherwise easily get lost.
- Hosted documentation on `readthedocs`, and pointed to the SMARTS paper and useful parts of the documentation in the README.
- Running imitation learning now creates a cached history_mission.pkl file in the scenario folder that stores
  the missions for all agents.
- Added ijson as a dependency.
- Added cached_property as a dependency.
### Changed
- Lowered CPU cost of waypoint generation. This will result in a small increase in memory usage.
- Set the number of processes used in `make test` to ignore 2 CPUs if possible.
- Use the dummy OpEn agent (open-agent version 0.0.0) for all examples.
- Improved performance by removing unused traffic light functionality.
- Limit the memory use of traffic histories by incrementally loading the traffic history file with a worker process.
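
  (The incremental-loading change above pairs with the new `ijson` dependency; below is a minimal sketch of the streaming idea, assuming a hypothetical JSON-array history file — not the actual SMARTS loader.)

  ```python
  import ijson

  def stream_history_records(path):
      """Yield records one at a time rather than json.load()-ing the
      whole traffic history file into memory at once."""
      with open(path, "rb") as f:
          # "item" matches each element of a top-level JSON array;
          # ijson consumes the file lazily, so memory stays bounded.
          for record in ijson.items(f, "item"):
              yield record
  ```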
### Fixed
- In order to avoid precision issues in our coordinates with big floating point numbers,
  we now initially shift road networks (maps) that are offset back to the origin
  using [netconvert](https://sumo.dlr.de/docs/netconvert.html), and adapt Sumo vehicle
  positions to take this into account so that Sumo can continue using the original
  coordinate system (see the sketch after this list). See Issue #325.
- Cleanly close down the traffic history provider thread. See PR #665.
- Improved the disposal of a SMARTS instance. See issue #378.
- Envision now resumes from current frame after un-pausing.
- Skipped generation of cut-in waypoints if they are further off-road than SMARTS currently supports, to avoid a process crash.
- Fixed envision error 15 by cleanly shutting down the envision worker process.
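
(A sketch of the origin-shift bookkeeping referenced in the netconvert fix above: SMARTS works in shifted coordinates while Sumo keeps the originals. The offset value is invented for illustration.)

```python
# Hypothetical netconvert offset; the real value is reported per map.
NET_OFFSET_X, NET_OFFSET_Y = -5200.0, -310.0

def to_smarts_frame(x, y):
    # Shift map coordinates back toward the origin so values stay
    # small and floating-point precision is preserved.
    return x + NET_OFFSET_X, y + NET_OFFSET_Y

def to_sumo_frame(x, y):
    # Undo the shift for Sumo, which keeps the original frame.
    return x - NET_OFFSET_X, y - NET_OFFSET_Y
```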

## [Format] - 2021-03-12
### Added
- Describe any new features that have been added since the last version was released.
### Changed
- Note any changes to the software's existing functionality.
### Deprecated
- Note any features that were once stable but are now deprecated and will soon be removed.
### Fixed
- List any bugs or errors that have been fixed in a change.
### Removed
- Note any features that have been deleted and removed from the software.
### Security
- Invite users to upgrade and avoid fixed software vulnerabilities.
2 changes: 1 addition & 1 deletion Makefile
@@ -7,7 +7,7 @@ test: build-all-scenarios
--doctest-modules \
--forked \
--dist=loadscope \
- -n `nproc --ignore 1` \
+ -n `nproc --ignore 2` \
./envision ./smarts/contrib ./smarts/core ./smarts/env ./smarts/sstudio ./tests \
--ignore=./smarts/core/tests/test_smarts_memory_growth.py \
--ignore=./smarts/env/tests/test_benchmark.py \
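
(For reference: `nproc --ignore 2` prints the processor count minus two, never going below one. A rough Python equivalent, as an illustration only:)

```python
import multiprocessing

def test_worker_count(ignore: int = 2) -> int:
    # Mirrors `nproc --ignore N`: leave N CPUs free for other work,
    # but always report at least one worker.
    return max(1, multiprocessing.cpu_count() - ignore)
```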
6 changes: 6 additions & 0 deletions README.md
@@ -3,6 +3,8 @@

SMARTS (Scalable Multi-Agent RL Training School) is a simulation platform for reinforcement learning and multi-agent research on autonomous driving. Its focus is on realistic and diverse interactions. It is part of the [XingTian](https://github.com/huawei-noah/xingtian/) suite of RL platforms from Huawei Noah's Ark Lab.

Check out the paper at [SMARTS: Scalable Multi-Agent Reinforcement Learning Training School for Autonomous Driving](https://arxiv.org/abs/2010.09776) for background on some of the project goals.

![](docs/_static/smarts_envision.gif)

## Multi-Agent experiment as simple as...
@@ -107,6 +109,10 @@ Several example scripts are provided under [`SMARTS/examples`](./examples), as w
# ...
```

## Documentation

Documentation is available at [smarts.readthedocs.io](https://smarts.readthedocs.io/en/latest).

## CLI tool

SMARTS provides a command-line tool to interact with scenario studio and Envision.
1 change: 1 addition & 0 deletions cli/studio.py
@@ -145,6 +145,7 @@ def _clean(scenario):
"*.rou.alt.xml",
"social_agents/*",
"traffic/*",
"history_mission.pkl",
]
p = Path(scenario)
for file_name in to_be_removed:
2 changes: 1 addition & 1 deletion docs/conf.py
@@ -23,7 +23,7 @@
author = "Huawei Noah's Ark Lab."

# The full version, including alpha/beta/rc tags
release = "0.3.6"
release = "0.4.15"


# -- General configuration ---------------------------------------------------
2 changes: 1 addition & 1 deletion docs/quickstart.rst
@@ -73,7 +73,7 @@ This is done by implementing the :class:`smarts.core.agent.Agent` interface:
)
return traj
- Here we are implementing a simple lane following agent using the BezierMotionPlanner. The `obs` argument to `ExampleAgent.act()` will contain the observations specified in the `AgentInterface` above, and it's expected that the return value of the `act` method matches the `ActipnSpaceType` chosen as well. (This constraint is relaxed when adapters are introduced.)
+ Here we are implementing a simple lane following agent using the BezierMotionPlanner. The `obs` argument to `ExampleAgent.act()` will contain the observations specified in the `AgentInterface` above, and it's expected that the return value of the `act` method matches the `ActionSpaceType` chosen as well. (This constraint is relaxed when adapters are introduced.)
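
(To make that contract concrete, here is a minimal hedged sketch of an `Agent` whose `act()` return value matches a Lane action space, using only names that appear elsewhere in this diff; the quickstart's actual BezierMotionPlanner code is not reproduced.)

```python
from smarts.core.agent import Agent, AgentSpec
from smarts.core.agent_interface import AgentInterface, AgentType

class KeepLaneAgent(Agent):
    def act(self, obs):
        # With the Laner interface, the expected action is one of the
        # lane-action strings; trajectory-based spaces return arrays.
        return "keep_lane"

agent_spec = AgentSpec(
    interface=AgentInterface.from_type(AgentType.Laner, max_episode_steps=1000),
    agent_builder=KeepLaneAgent,
)
agent = agent_spec.build_agent()
```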


AgentSpec :class:`smarts.core.agent.AgentSpec`
12 changes: 6 additions & 6 deletions docs/sim/agent.rst
@@ -3,7 +3,7 @@
How to build an Agent
======================

SMARTS provides users the ability to custom their agents. :class:`smarts.core.agent.AgentSpec` has the following fields:
SMARTS provides users the ability to customize their agents. :class:`smarts.core.agent.AgentSpec` has the following fields:

.. code-block:: python
Expand Down Expand Up @@ -31,7 +31,7 @@ An example of how to create an `Agent` instance is shown below.
agent = agent_spec.build_agent()
- We will further explain the fields of `Agent` class later on this page. You can also read the source code at :class:`smarts.env.agent`.
+ We will further explain the fields of the `Agent` class later on this page. You can also read the source code at :class:`smarts.env.agent`.

==============
AgentInterface
Expand Down Expand Up @@ -64,15 +64,15 @@ SMARTS provide some interface types, and the differences between them is shown i
| debug | **T** | **T** | **T** | **T** |
`max_episode_steps` controls the max running steps allowed for the agent in an episode. The default `None` setting means agents have no such limit.
- You can move max_episode_steps control authority to RLlib with their config option `horizon`, but lose the ability to customize
+ You can move `max_episode_steps` control authority to RLlib with their config option `horizon`, but lose the ability to customize
different max_episode_len for each agent.

`action` controls the agent action type used. There are three `ActionSpaceType`: ActionSpaceType.Continuous, ActionSpaceType.Lane
and ActionSpaceType.ActuatorDynamic.

- `ActionSpaceType.Continuous`: continuous action space with throttle, brake, absolute steering angle.
- `ActionSpaceType.ActuatorDynamic`: continuous action space with throttle, brake, steering rate. Steering rate means the amount of steering angle change *per second* (either positive or negative) to be applied to the current steering angle.
- - `ActionSpaceType.Lane`: discrete lane action space of strings including "keep_lane", "slow_down", "change_lane_left", "change_lane_right". (WARNING: This is the case in the current version 0.3.2b, but a newer version will soon be released. In this newer version, the action space will no longer being strings, but will be a tuple of an integer for `lane_change` and a float for `target_speed`.)
+ - `ActionSpaceType.Lane`: discrete lane action space of strings including "keep_lane", "slow_down", "change_lane_left", "change_lane_right". (WARNING: This is the case in the current version 0.3.2b, but a newer version will soon be released. In this newer version, the action space will no longer consist of strings, but will be a tuple of an integer for `lane_change` and a float for `target_speed`.)
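
(A hedged sketch of how the chosen `ActionSpaceType` shapes what `act()` must return; the `ActionSpaceType` import path follows SMARTS convention but is an assumption here, and the tuple values are purely illustrative.)

```python
from smarts.core.agent import Agent
from smarts.core.agent_interface import AgentInterface
from smarts.core.controllers import ActionSpaceType

interface = AgentInterface(
    max_episode_steps=1000,
    waypoints=True,
    action=ActionSpaceType.ActuatorDynamic,
)

class SteeringRateAgent(Agent):
    def act(self, obs):
        # (throttle [0, 1], brake [0, 1], steering_rate in radians/s)
        return (0.3, 0.0, 0.1)
```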

For other observation options, see :ref:`observations` for details.

Expand Down Expand Up @@ -112,10 +112,10 @@ For further customization, you can try:
action=ActionSpaceType.Continuous,
)
- refer to :class:`smarts/core/agent_interface` for more details.
+ Refer to :class:`smarts/core/agent_interface` for more details.


- IMPORTANT: The generation of DrivableAreaGridMap(`drivable_area_grid_map=True`), OGM (`ogm=True`) and RGB (`rgb=True`) images may significantly slow down the environment `step()`. If your model does not consume such observations, we recommend that you set them to `False`.
+ IMPORTANT: The generation of DrivableAreaGridMap (`drivable_area_grid_map=True`), OGM (`ogm=True`), and/or RGB (`rgb=True`) images may significantly slow down the environment `step()`. If your model does not consume such observations, we recommend that you set them to `False`.

IMPORTANT: Depending on how your agent model is set up, `ActionSpaceType.ActuatorDynamic` might allow the agent to learn faster than `ActionSpaceType.Continuous` simply because learning to correct steering could be simpler than learning a mapping to all the absolute steering angle values. But, again, it also depends on the design of your agent model.

2 changes: 1 addition & 1 deletion docs/sim/observations.rst
@@ -82,4 +82,4 @@ Actions

* `ActionSpaceType.Continuous`: continuous action space with throttle, brake, absolute steering angle. It is a tuple of `throttle` [0, 1], `brake` [0, 1], and `steering` [-1, 1].
* `ActionSpaceType.ActuatorDynamic`: continuous action space with throttle, brake, steering rate. Steering rate means the amount of steering angle change *per second* (either positive or negative) to be applied to the current steering angle. It is also a tuple of `throttle` [0, 1], `brake` [0, 1], and `steering_rate`, where steering rate is in number of radians per second.
- * `ActionSpaceType.Lane`: discrete lane action space of *strings* including "keep_lane", "slow_down", "change_lane_left", "change_lane_right" as of version 0.3.2b, but a newer version will soon be released. In this newer version, the action space will no longer being strings, but will be a tuple of an integer for `lane_change` and a float for `target_speed`.
+ * `ActionSpaceType.Lane`: discrete lane action space of *strings* including "keep_lane", "slow_down", "change_lane_left", "change_lane_right" as of version 0.3.2b, but a newer version will soon be released. In this newer version, the action space will no longer consist of strings, but will be a tuple of an integer for `lane_change` and a float for `target_speed`.
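
(A matching sketch for the Continuous space described above — the tuple values are illustrative only:)

```python
from smarts.core.agent import Agent

class GentleDriverAgent(Agent):
    def act(self, obs):
        # (throttle [0, 1], brake [0, 1], absolute steering [-1, 1])
        return (0.2, 0.0, 0.0)
```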
2 changes: 1 addition & 1 deletion docs/sim/rllib_in_smarts.rst
@@ -31,7 +31,7 @@ SMARTS RLlib Tips
Resume or continue training
---------------------------

- If you want to continue an aborted experiemnt. you can set `resume=True` in `tune.run`. But note that`resume=True` will continue to use the same configuration as was set in the original experiment.
+ If you want to continue an aborted experiment, you can set `resume=True` in `tune.run`. Note that `resume=True` will continue to use the same configuration as was set in the original experiment.
To make changes to a started experiment, you can edit the latest experiment file in `~/ray_results/rllib_example`.

Or if you want to start a new experiment but train from an existing checkpoint, you can set `restore=checkpoint_path` in `tune.run`.
9 changes: 7 additions & 2 deletions envision/client.py
@@ -213,6 +213,9 @@ def run_socket(endpoint, wait_between_retries):
if not connection_established:
self._log.info(f"Attempt {tries} to connect to Envision.")
else:
+ # No information left to send, connection is likely done
+ if state_queue.empty():
+     break
# When connection lost, retry again every 3 seconds
wait_between_retries = 3
self._log.info(
@@ -237,14 +240,16 @@ def _send_raw(self, state: str):
self._state_queue.put(state)

def teardown(self):
- if not self._headless:
+ if not self._headless and self._state_queue:
self._state_queue.put(Client.QueueDone())
self._process.join(timeout=3)
self._process = None
self._state_queue.close()
self._state_queue = None

- if self._logging_process:
+ if self._logging_process and self._logging_queue:
self._logging_queue.put(Client.QueueDone())
self._logging_process.join(timeout=3)
self._logging_process = None
self._logging_queue.close()
self._logging_queue = None
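
(The guards added above make `teardown()` safe to call after the handles have been cleared. A distilled, self-contained sketch of that pattern — names simplified, not the Envision code itself:)

```python
import multiprocessing

def _drain(queue):
    while queue.get() is not None:  # run until the sentinel arrives
        pass

class Worker:
    def __init__(self):
        self._queue = multiprocessing.Queue()
        self._process = multiprocessing.Process(target=_drain, args=(self._queue,))
        self._process.start()

    def teardown(self):
        # Idempotent: check each handle before use, then clear it,
        # mirroring the `and self._state_queue` style guards above.
        if self._process is not None and self._queue is not None:
            self._queue.put(None)  # sentinel, like Client.QueueDone()
            self._process.join(timeout=3)
            self._process = None
            self._queue.close()
            self._queue = None
```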
4 changes: 2 additions & 2 deletions envision/web/dist/main.js

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion envision/web/dist/main.js.map

Large diffs are not rendered by default.

3 changes: 2 additions & 1 deletion envision/web/src/client.js
@@ -161,7 +161,8 @@
continue;
}

- let item = this._stateQueues[simulationId].pop();
+ // Removes the oldest element
+ let item = this._stateQueues[simulationId].shift();
let elapsed_times = [
item.current_elapsed_time,
item.total_elapsed_time,
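(The one-line change above swaps LIFO `pop()` for FIFO `shift()`, so frames render in arrival order rather than newest-first. The same distinction in Python terms:)

```python
from collections import deque

frames = deque(["frame1", "frame2", "frame3"])
frames.pop()      # -> "frame3": newest first, like the old JS pop()
frames.popleft()  # -> "frame1": oldest first, like the new JS shift()
```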
2 changes: 1 addition & 1 deletion envision/web/src/components/app.js
@@ -79,7 +79,7 @@ function App({ client }) {
// checks if there is new simulation running every 3 seconds.
const interval = setInterval(fetchRunningSim, 3000);
return () => clearInterval(interval);
- }, []);
+ }, [matchedSimulationId]);

async function onStartRecording() {
recorderRef.current = new RecordRTCPromisesHandler(
2 changes: 1 addition & 1 deletion envision/web/src/components/control_panel.js
@@ -65,7 +65,7 @@ const treeData = [
],
},
{
title: "Inclucdes Social Agents",
title: "Includes Social Agents",
key: agentModes.socialObs,
},
],
33 changes: 28 additions & 5 deletions envision/web/src/components/simulation.js
@@ -149,19 +149,42 @@ export default function Simulation({
setScene(scene_);
};

const sleep = (milliseconds) => {
return new Promise((resolve) => setTimeout(resolve, milliseconds));
};

// State subscription
useEffect(() => {
let stopPolling = false;
(async () => {
const msInSec = 1000;
const it = client.worldstate(simulationId);
let wstate_and_time = await it.next();
while (!wstate_and_time.done && playing) {
let prevElapsedTime = null;
let waitStartTime = null;
let wstate_and_time;
if (playing) wstate_and_time = await it.next();
while (!stopPolling && playing && !wstate_and_time.done) {
let wstate, elapsed_times;
[wstate, elapsed_times] = wstate_and_time.value;
if (!stopPolling) {
setWorldState(wstate);
onElapsedTimesChanged(...elapsed_times);
const currentTime = elapsed_times[0];
if (prevElapsedTime == null) {
// default: wait 50ms before playing the next frame
await sleep(50);
} else {
// msInSec*(currentTime-prevElapsedTime) is the time difference between
// current frame and previous frame
// Since we could have waited (Date.now() - waitStartTime) to get the current frame,
// we deduct this amount from the time we will be waiting
await sleep(
msInSec * (currentTime - prevElapsedTime) -
(Date.now() - waitStartTime)
);
}
prevElapsedTime = currentTime;

setWorldState(wstate);
onElapsedTimesChanged(...elapsed_times);
waitStartTime = Date.now();
wstate_and_time = await it.next();
}
})();
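
(The pacing logic above boils down to: sleep for the simulated inter-frame gap, minus however long fetching the next frame already took, with a fixed 50 ms wait before the first frame. A hedged Python rendering of the same loop; `frames` and `render` are hypothetical stand-ins for the worldstate iterator and the scene update:)

```python
import time

def play(frames, render, default_wait=0.050):
    """frames yields (state, elapsed_sim_time) pairs."""
    prev_time = None
    fetch_start = None
    for state, sim_time in frames:
        if prev_time is None:
            time.sleep(default_wait)  # first frame: fixed 50 ms delay
        else:
            # Gap between frames in sim time, minus the real time we
            # already spent waiting on the network for this frame.
            wait = (sim_time - prev_time) - (time.monotonic() - fetch_start)
            if wait > 0:
                time.sleep(wait)
        prev_time = sim_time
        render(state)
        fetch_start = time.monotonic()
```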
15 changes: 11 additions & 4 deletions examples/history_vehicles_replacement_for_imitation_learning.py
@@ -1,12 +1,14 @@
import logging
+ from dataclasses import replace

from envision.client import Client as Envision
from examples import default_argument_parser
from smarts.core.agent import Agent, AgentSpec
from smarts.core.agent_interface import AgentInterface, AgentType
- from smarts.core.scenario import Scenario
+ from smarts.core.scenario import Mission, Scenario
from smarts.core.smarts import SMARTS
from smarts.core.sumo_traffic_simulation import SumoTrafficSimulation
+ from smarts.core.traffic_history_provider import TrafficHistoryProvider

logging.basicConfig(level=logging.INFO)

@@ -23,14 +25,11 @@ def main(scenarios, headless, seed):
traffic_sim=SumoTrafficSimulation(headless=True, auto_start=True),
envision=Envision(),
)

for _ in scenarios:
scenario = next(scenarios_iterator)
agent_missions = scenario.discover_missions_of_traffic_histories()

for agent_id, mission in agent_missions.items():
- scenario.set_ego_missions({agent_id: mission})

agent_spec = AgentSpec(
interface=AgentInterface.from_type(
AgentType.Laner, max_episode_steps=None
@@ -40,7 +39,15 @@
agent = agent_spec.build_agent()

smarts.switch_ego_agent({agent_id: agent_spec.interface})
+ # required: get traffic_history_provider and set time offset
+ traffic_history_provider = smarts.get_provider_by_type(
+     TrafficHistoryProvider
+ )
+ assert traffic_history_provider
+ traffic_history_provider.set_start_time(mission.start_time)
+
+ modified_mission = replace(mission, start_time=0.0)
+ scenario.set_ego_missions({agent_id: modified_mission})
observations = smarts.reset(scenario)

dones = {agent_id: False}
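(The example above shifts the traffic-history provider to the mission's recorded start time, then hands SMARTS a copy of the mission re-based to t=0. `dataclasses.replace` builds that copy without mutating the original; here is a standalone illustration with an invented stand-in for `Mission`:)

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Mission:  # stand-in; the real class is smarts.core.scenario.Mission
    route: str
    start_time: float

mission = Mission(route="edge-12", start_time=87.3)
rebased = replace(mission, start_time=0.0)  # new object; original untouched
assert mission.start_time == 87.3 and rebased.start_time == 0.0
```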