Commit
Update docs to reflect new package installation workflow. (#3362)
- Fix old material name references.
- Update outdated code comments.
surfnerd authored Feb 5, 2020
1 parent 40d8f39 commit b3755a5
Showing 7 changed files with 28 additions and 16 deletions.
4 changes: 2 additions & 2 deletions .gitignore
@@ -6,10 +6,10 @@
/envs

# Environment logfile
-*UnitySDK.log
+*Project.log

# Visual Studio 2015 cache directory
-/UnitySDK/.vs/
+/Project/.vs/

# Autogenerated VS/MD/Consulo solution and project files
/com.unity.ml-agentsExportedObj/
30 changes: 20 additions & 10 deletions docs/Installation.md
@@ -39,22 +39,32 @@ The `Project` subdirectory contains many [example environments](Learning-Environ
to help you get started.

### Package Installation
+The ML-Agents C# SDK is transitioning to a Unity Package. While we are working on getting it into the
+official packages list, you can add the `com.unity.ml-agents` package to your project by
+navigating to the menu `Window` -> `Package Manager`. In the Package Manager window, click
+the `+` button.

-If you intend to copy the `com.unity.ml-agents` folder into your project, ensure that
-you have the [Barracuda preview package](https://docs.unity3d.com/Packages/com.unity.barracuda@0.3/manual/index.html) installed.

+<p align="center">
+  <img src="images/unity_package_manager_window.png"
+       alt="Unity Package Manager window"
+       width="500" border="10" />
+</p>

+**NOTE:** In Unity 2018.4, the `+` button is at the bottom right of the packages list; in Unity 2019.3, it is at the top left.

-To install the Barracuda package in later versions of Unity, open the Package
-Manager window via the menu `Window` -> `Package Manager`. Click the
-`Advanced` dropdown menu to the left of the search bar and make sure "Show Preview Packages"
-is checked. Search for or select the `Barracuda` package and install the latest version.

+Select `Add package from disk...`, navigate into the
+`com.unity.ml-agents` folder, and select the `package.json` file.

<p align="center">
-  <img src="images/barracuda-package.png"
-       alt="Barracuda Package Manager"
-       width="710" border="10"
-       height="569" />
+  <img src="images/unity_package_json.png"
+       alt="Add package from disk"
+       width="500" border="10" />
</p>
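
For reference, `Add package from disk...` records a local `file:` entry in your project's `Packages/manifest.json`, which you can also edit by hand. A minimal sketch of the resulting dependency follows; the relative path is an assumption that depends on where you cloned the repository (Unity resolves it relative to the `Packages` folder):

```json
{
  "dependencies": {
    "com.unity.ml-agents": "file:../../ml-agents/com.unity.ml-agents"
  }
}
```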

+If you are going to follow the examples from our documentation, you can open the `Project`
+folder in Unity and start tinkering immediately.


The `ml-agents` subdirectory contains a Python package which provides deep reinforcement
learning trainers to use with Unity environments.

6 changes: 3 additions & 3 deletions docs/Learning-Environment-Create-New.md
@@ -68,7 +68,7 @@ agent to seek, and a Sphere to represent the Agent itself.
3. Select the Floor Plane to view its properties in the Inspector window.
4. Set Transform to Position = (0, 0, 0), Rotation = (0, 0, 0), Scale = (1, 1, 1).
5. On the Plane's Mesh Renderer, expand the Materials property and change the
-   default-material to *LightGridFloorSquare* (or any suitable material of your choice).
+   default-material to *GridMatFloor* (or any suitable material of your choice).
(To set a new material, click the small circle icon next to the current material
name. This opens the **Object Picker** dialog so that you can choose a
@@ -83,7 +83,7 @@ different material from the list of all materials currently in the project.)
3. Select the Target Cube to view its properties in the Inspector window.
4. Set Transform to Position = (3, 0.5, 3), Rotation = (0, 0, 0), Scale = (1, 1, 1).
5. On the Cube's Mesh Renderer, expand the Materials property and change the
-   default-material to *Block*.
+   default-material to *AgentBlue*.
![The Target Cube in the Inspector window](images/mlagents-NewTutBlock.png)
@@ -94,7 +94,7 @@ different material from the list of all materials currently in the project.)
3. Select the RollerAgent Sphere to view its properties in the Inspector window.
4. Set Transform to Position = (0, 0.5, 0), Rotation = (0, 0, 0), Scale = (1, 1, 1).
5. On the Sphere's Mesh Renderer, expand the Materials property and change the
-   default-material to *CheckerSquare*.
+   default-material to *Checkers_Ball*.
6. Click **Add Component**.
7. Add the Physics/Rigidbody component to the Sphere.
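
If you prefer to script this setup instead of clicking through the Editor, a rough equivalent using only standard Unity APIs is sketched below. Object names and positions follow the steps above; materials are still assigned via the Inspector as described, and `CreatePrimitive` already yields the default Rotation = (0, 0, 0) and Scale = (1, 1, 1):

```csharp
using UnityEngine;

public class RollerSceneBuilder : MonoBehaviour
{
    void Start()
    {
        // Floor: a plane at the origin.
        var floor = GameObject.CreatePrimitive(PrimitiveType.Plane);
        floor.name = "Floor";
        floor.transform.position = Vector3.zero;

        // Target: a cube offset from the agent's start position.
        var target = GameObject.CreatePrimitive(PrimitiveType.Cube);
        target.name = "Target";
        target.transform.position = new Vector3(3f, 0.5f, 3f);

        // Agent: a sphere with a Rigidbody so physics forces can move it.
        var agent = GameObject.CreatePrimitive(PrimitiveType.Sphere);
        agent.name = "RollerAgent";
        agent.transform.position = new Vector3(0f, 0.5f, 0f);
        agent.AddComponent<Rigidbody>();
    }
}
```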
2 changes: 2 additions & 0 deletions docs/Migrating.md
@@ -11,6 +11,7 @@ The versions can be found in
## Migrating from 0.13 to latest

### Important changes
+* The `UnitySDK` folder has been split into a Unity Package (`com.unity.ml-agents`) and an examples project (`Project`). Please follow the [Installation Guide](Installation.md) to get up and running with this new repo structure.
* Several changes were made to how agents are reset and marked as done:
  * Calling `Done()` on the Agent will now reset it immediately and call the `AgentReset` virtual method. (This is to simplify the previous logic, in which the Agent had to wait for the next `EnvironmentStep` to reset; see the C# sketch after this list.)
  * The "Reset on Done" setting in AgentParameters was removed; this is now effectively always true. The `AgentOnDone` virtual method on the Agent has been removed.
@@ -31,6 +32,7 @@ The versions can be found in
* RayPerceptionSensor was inconsistent in how it handled scale on the Agent's transform. It now scales the ray length and sphere size for casting as the transform's scale changes.
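
A hedged C# sketch of the new reset flow (referenced from the `Done()` item above). It assumes the `MLAgents` namespace and the `AgentReset`/`AgentAction`/`Done` API of the package at this release; the class name, `target` field, and reward value are illustrative. It also shows the reward-before-`Done()` ordering called out under Steps to Migrate:

```csharp
using MLAgents;
using UnityEngine;

public class RollerAgent : Agent
{
    public Transform target;  // hypothetical target, assigned in the Inspector

    // Done() now resets the Agent immediately by invoking this method;
    // the Agent no longer waits for the next EnvironmentStep.
    public override void AgentReset()
    {
        transform.position = new Vector3(0f, 0.5f, 0f);
    }

    public override void AgentAction(float[] vectorAction)
    {
        // ... apply vectorAction to move the agent ...

        if (Vector3.Distance(transform.position, target.position) < 1.5f)
        {
            // Assign the final reward *before* calling Done(); because
            // Done() now resets immediately, the order matters.
            SetReward(1.0f);
            Done();
        }
    }
}
```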

### Steps to Migrate
+* Follow the instructions on how to install the `com.unity.ml-agents` package into your project in the [Installation Guide](Installation.md).
* If your Agent implemented `AgentOnDone` and did not have the checkbox `Reset On Done` checked in the inspector, you must call the code that was in `AgentOnDone` manually.
* If you give your Agent a reward or penalty at the end of an episode (e.g. for reaching a goal or falling off of a platform), make sure you call `AddReward()` or `SetReward()` *before* calling `Done()`. Previously, the order didn't matter.
* If you were not using `On Demand Decision` for your Agent, you **must** add a `DecisionRequester` component to your Agent GameObject and set its `Decision Period` field to the old `Decision Period` of the Agent.
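
If you have many Agents to migrate, the component can also be added from code. A sketch, assuming the `DecisionRequester` component and its public `DecisionPeriod` field as shipped with this release (the value `5` is a placeholder for your Agent's old `Decision Period`):

```csharp
using MLAgents;
using UnityEngine;

public static class MigrationHelper
{
    // Attach a DecisionRequester that matches an Agent's old
    // "Decision Period" setting (5 is a placeholder value).
    public static void AddDecisionRequester(GameObject agentObject)
    {
        var requester = agentObject.AddComponent<DecisionRequester>();
        requester.DecisionPeriod = 5;
    }
}
```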
Binary file added docs/images/unity_package_json.png
Binary file added docs/images/unity_package_manager_window.png
2 changes: 1 addition & 1 deletion ml-agents/tests/yamato/yamato_utils.py
@@ -25,7 +25,7 @@ def get_base_path():

def run_standalone_build(base_path: str, verbose: bool = False) -> int:
"""
-    Run BuildStandalonePlayerOSX test to produce a player at UnitySDK/testPlayer
+    Run BuildStandalonePlayerOSX test to produce a player at Project/testPlayer
:param base_path:
:return:
"""
