
Image nav task #333

Merged: 17 commits merged into facebookresearch:master on Apr 21, 2020

Conversation

thibautlavril
Contributor

@thibautlavril thibautlavril commented Mar 13, 2020

Motivation and Context

Adding a new task: Image goal Navigation.

(image: image_nav_spec)

How Has This Been Tested

  • Ran the new environment in an IPython notebook and visualized the results.
  • Unit test on the sensor.

Types of changes

  • New feature (non-breaking change which adds functionality)

Checklist

  • My code follows the code style of this project.
  • My change requires a change to the documentation.
  • I have updated the documentation accordingly.
  • I have read the CONTRIBUTING document.
  • I have completed my CLA (see CONTRIBUTING)
  • I have added tests to cover my changes.
  • All new and existing tests passed.

@facebook-github-bot facebook-github-bot added the CLA Signed Do not delete this pull request or issue due to inactivity. label Mar 13, 2020
@codecov

codecov bot commented Mar 13, 2020

Codecov Report

Merging #333 into master will increase coverage by 0.15%.
The diff coverage is 96.82%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #333      +/-   ##
==========================================
+ Coverage   77.36%   77.51%   +0.14%     
==========================================
  Files         108      108              
  Lines        7444     7507      +63     
==========================================
+ Hits         5759     5819      +60     
- Misses       1685     1688       +3     
Impacted Files Coverage Δ
test/test_sensors.py 96.04% <96.42%> (+0.04%) ⬆️
habitat/tasks/nav/nav.py 94.57% <96.96%> (-0.05%) ⬇️
habitat/config/default.py 100.00% <100.00%> (ø)
habitat_baselines/common/environments.py 97.72% <0.00%> (-2.28%) ⬇️
habitat/utils/visualizations/utils.py 78.94% <0.00%> (+1.05%) ⬆️

Continue to review full report at Codecov.

Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 0cd6410...7f6961a. Read the comment docs.

@dhruvbatra
Contributor

It will be great to have ImageGoalNav in Habitat. Thanks for working on this!

@mathfac
Contributor

mathfac commented Mar 16, 2020

Looks great! I would add a sensor test that checks the size of the sensor output and checks that it doesn't change until the episode is switched.
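
A rough sketch of what such a test could look like (illustrative only, not the test that was eventually added; the config path, data availability, and string-action interface are assumptions):

import numpy as np

import habitat


def test_imagegoal_sensor_shape_and_constancy():
    # Assumed location of the all-sensors test config; requires the habitat
    # test scenes/episodes to be downloaded.
    config = habitat.get_config("configs/test/habitat_all_sensors_test.yaml")
    config.defrost()
    config.TASK.SENSORS = ["IMAGEGOAL_SENSOR"]
    config.freeze()

    env = habitat.Env(config=config)
    obs = env.reset()
    goal = obs["imagegoal"]

    # The goal image should match the RGB sensor resolution.
    assert goal.shape == (
        config.SIMULATOR.RGB_SENSOR.HEIGHT,
        config.SIMULATOR.RGB_SENSOR.WIDTH,
        3,
    )

    # Within one episode the goal image should not change.
    for _ in range(5):
        obs = env.step("MOVE_FORWARD")  # string actions assume the default POSSIBLE_ACTIONS
        assert np.allclose(obs["imagegoal"], goal)

    env.close()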


@mathfac mathfac left a comment


I really like the way you addressed the missing rotation in the PointNav dataset. The PR is close to landing; the only thing missing is a test. I would add a test similar to test_get_observations_at: enable the added sensor and check that its value is the same as the observation at the goal position.
Also, you can add the new sensor to habitat_all_sensors_test.yaml to get some simple tests for free.
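
For reference, a hedged sketch of that comparison (in the spirit of test_get_observations_at; the goal_rotation_for helper is hypothetical and only works if it reproduces exactly the rotation the sensor itself samples):

import numpy as np


def goal_rotation_for(episode):
    # Illustrative stand-in: derive a deterministic rotation from the episode
    # id, which is what the sensor is assumed to do internally.
    rng = np.random.RandomState(abs(hash(episode.episode_id)) % (2 ** 32))
    angle = rng.uniform(0, 2 * np.pi)
    return [0, np.sin(angle / 2), 0, np.cos(angle / 2)]


def check_imagegoal_matches_goal_view(env):
    obs = env.reset()
    goal = env.current_episode.goals[0]
    obs_at_goal = env.sim.get_observations_at(
        position=goal.position,
        rotation=goal_rotation_for(env.current_episode),
        keep_agent_at_new_pose=False,
    )
    assert np.allclose(obs["imagegoal"], obs_at_goal["rgb"])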

r"""Sensor for ImageGoal observations which are used in ImageGoal Navigation.

RGBSensor needs to be one of the Simulator sensors.
Currently the task is using pointnav Dataset.

`Currently the task is using pointnav Dataset.` relates to the task, not the sensor. It looks like it may work similarly even with a dedicated dataset.

habitat/tasks/nav/nav.py (resolved)
if episode.episode_id == self._current_episode_id:
return self._current_image_goal

self._current_episode_id = episode.episode_id

Maybe let's move the image goal evaluation to a separate internal method, _get_episode_image_goal or get_pointnav_episode_image_goal. Then it will be easier to update the code once the dataset has a fixed rotation as well.
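
A rough sketch of what that separation could look like (illustrative, not the merged implementation; the rotation sampling, attribute names, and an existing `import numpy as np` in the module are assumptions):

    def _get_pointnav_episode_image_goal(self, episode):
        goal_position = np.array(episode.goals[0].position, dtype=np.float32)
        # PointNav episodes carry no goal rotation, so sample one
        # deterministically from the episode id to keep the goal image stable.
        rng = np.random.RandomState(abs(hash(episode.episode_id)) % (2 ** 32))
        angle = rng.uniform(0, 2 * np.pi)
        goal_rotation = [0, np.sin(angle / 2), 0, np.cos(angle / 2)]
        goal_observation = self._sim.get_observations_at(
            position=goal_position.tolist(), rotation=goal_rotation
        )
        return goal_observation["rgb"]

    def get_observation(self, *args, observations, episode, **kwargs):
        # Cache per episode so repeated calls reuse the rendered goal image.
        if episode.episode_id == self._current_episode_id:
            return self._current_image_goal
        self._current_episode_id = episode.episode_id
        self._current_image_goal = self._get_pointnav_episode_image_goal(episode)
        return self._current_image_goal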

@mathfac
Contributor

mathfac commented Mar 20, 2020

@thibautlavril, to fix the lint error, make sure you have pre-commit installed (pip install pre-commit && pre-commit install) and re-add/reset the files in the commit so the checks run.

@drothermel

@mathfac, I added a basic test_imagegoal_sensor (in the vein of test_pointgoal_sensor), but have some questions about your other suggested tests:

I would add a sensor test that checks the size of the sensor output and checks that it doesn't change until the episode is switched.

What about switching an episode (at least in the way the tests are constructed now) would cause the size of the output to change? If nothing, then shouldn't testing that the output values are constant across the episode (as the added test does) be sufficient?

Also, you can add the new sensor to habitat_all_sensors_test.yaml to get some simple tests for free.

Initially I thought you meant I could just add IMAGEGOAL_SENSOR to the task sensor list in the config, but when I do that some tests fail because the agent isn't provided with an RGB_SENSOR (eg in test/test_baseline_agents.py, test_ppo_agents). Did you mean something else?

I would add a test similar to test_get_observations_at: enable the added sensor and check that its value is the same as the observation at the goal position.

The observation is only going to match at the goal position if the angle of the agent is the same as the (randomly chosen and, afaik, inaccessible) angle at which the goal observation is taken. Do you see a way around this?

@thibautlavril thibautlavril marked this pull request as ready for review March 24, 2020 20:00
@dhruvbatra
Contributor

Hi @thibautlavril and @drothermel -- thanks for contributing this task, it's a valuable addition.

Would you mind adding a visual (video or gif) showing an example episode (or shortest path)?
See e.g.
-- ObjectNav visuals here: #284
-- VLN visuals here: https://twitter.com/ai_habitat/status/1217117883517419520

Your visuals will serve as a reference for future users and also provide us with material to explain your contribution.

@mathfac mathfac left a comment


Just noticed the update. Thank you for the thorough follow-up on all the comments. I added answers in the code regarding the points you mentioned.

Initially I thought you meant I could just add IMAGEGOAL_SENSOR to the task sensor list in the config, but when I do that some tests fail because the agent isn't provided with an RGB_SENSOR (eg in test/test_baseline_agents.py, test_ppo_agents). Did you mean something else?

Yes, that's what I meant. If you didn't remove RGB_SENSOR from the config, then that's strange. Let's just skip this step.

The observation is only going to match at the goal position if the angle of the agent is the same as the (randomly chosen and, afaik, inaccessible) angle at which the goal observation is taken. Do you see a way around this?

A workaround is to replicate deriving the goal rotation from the episode_id, but that makes the test too brittle. The test you provided looks great.

I created the task visual @dhruvbatra mentioned, feel free to use it:
(image: image_nav_spec)

Approving; I left minor comments and it should be good to merge.

habitat/tasks/nav/nav.py (outdated, resolved)
habitat/tasks/nav/nav.py (outdated, resolved)
test/test_sensors.py (resolved)
test/test_sensors.py (resolved)
@mathfac
Contributor

mathfac commented Apr 16, 2020

@thibautlavril, I tried to commit to your PR, but I don't have sufficient permissions. Let's add links to the new task in the tasks table and add configs for the different datasets:

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
modified: README.md
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
@ README.md:166 @ Download the Habitat related Gibson dataset following the instructions [here](ht
| Object goal navigation | MatterPort3D | [objectnav_mp3d_v1.zip](https://dl.fbaipublicfiles.com/habitat/data/datasets/objectnav/m3d/v1/objectnav_mp3d_v1.zip) | `data/datasets/objectnav/mp3d/v1/` | [`datasets/objectnav/mp3d.yaml`](configs/datasets/objectnav/mp3d.yaml) | 170 MB |
| [Embodied Question Answering](https://embodiedqa.org/) | MatterPort3D | [eqa_mp3d_v1.zip](https://dl.fbaipublicfiles.com/habitat/data/datasets/eqa/mp3d/v1/eqa_mp3d_v1.zip) | `data/datasets/eqa/mp3d/v1/` | [`datasets/eqa/mp3d.yaml`](configs/datasets/eqa/mp3d.yaml) | 44 MB |
| [Visual Language Navigation](https://bringmeaspoon.org/) | MatterPort3D | [vln_r2r_mp3d_v1.zip](https://dl.fbaipublicfiles.com/habitat/data/datasets/vln/mp3d/r2r/v1/vln_r2r_mp3d_v1.zip) | `data/datasets/vln/mp3d/r2r/v1` | [`datasets/vln/mp3d_r2r.yaml`](configs/datasets/vln/mp3d_r2r.yaml) | 2.7 MB |
| [Image goal navigation](https://github.com/facebookresearch/habitat-api/pull/333) | Gibson | [pointnav_gibson_v1.zip](https://dl.fbaipublicfiles.com/habitat/data/datasets/pointnav/gibson/v1/pointnav_gibson_v1.zip) | `data/datasets/pointnav/gibson/v1/` | [`datasets/imagenav/gibson.yaml`](configs/datasets/imagenav/gibson.yaml) | 385 MB |
| [Image goal navigation](https://github.com/facebookresearch/habitat-api/pull/333) | MatterPort3D | [pointnav_mp3d_v1.zip](https://dl.fbaipublicfiles.com/habitat/data/datasets/pointnav/mp3d/v1/pointnav_mp3d_v1.zip) | `data/datasets/pointnav/mp3d/v1/` | [`datasets/imagenav/mp3d.yaml`](configs/datasets/imagenav/mp3d.yaml) | 400 MB |


To use an episode dataset, provide the related config to the Env in [the example](#example) or use the config for [RL agent training](habitat_baselines/README.md#reinforcement-learning-rl).

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
added: configs/datasets/imagenav/gibson.yaml
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
@ configs/datasets/imagenav/gibson.yaml:4 @
ENVIRONMENT:
  MAX_EPISODE_STEPS: 500
SIMULATOR:
  AGENT_0:
    SENSORS: ['RGB_SENSOR']
  HABITAT_SIM_V0:
    GPU_DEVICE_ID: 0
  RGB_SENSOR:
    WIDTH: 256
    HEIGHT: 256
  DEPTH_SENSOR:
    WIDTH: 256
    HEIGHT: 256
TASK:
  TYPE: Nav-v0
  SUCCESS_DISTANCE: 0.2

  SENSORS: ['IMAGEGOAL_SENSOR']
  GOAL_SENSOR_UUID: imagegoal

  MEASUREMENTS: ['DISTANCE_TO_GOAL', 'SPL']
  SPL:
    TYPE: SPL
    SUCCESS_DISTANCE: 0.2

DATASET:
  TYPE: PointNav-v1
  SPLIT: train
  DATA_PATH: data/datasets/pointnav/gibson/v1/{split}/{split}.json.gz
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
added: configs/datasets/imagenav/mp3d.yaml
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
@ configs/datasets/imagenav/mp3d.yaml:4 @
ENVIRONMENT:
  MAX_EPISODE_STEPS: 500
SIMULATOR:
  AGENT_0:
    SENSORS: ['RGB_SENSOR']
  HABITAT_SIM_V0:
    GPU_DEVICE_ID: 0
  RGB_SENSOR:
    WIDTH: 256
    HEIGHT: 256
  DEPTH_SENSOR:
    WIDTH: 256
    HEIGHT: 256
TASK:
  TYPE: Nav-v0
  SUCCESS_DISTANCE: 0.2

  SENSORS: ['IMAGEGOAL_SENSOR']
  GOAL_SENSOR_UUID: imagegoal

  MEASUREMENTS: ['DISTANCE_TO_GOAL', 'SPL']
  SPL:
    TYPE: SPL
    SUCCESS_DISTANCE: 0.2

DATASET:
  TYPE: PointNav-v1
  SPLIT: train
  DATA_PATH: data/datasets/pointnav/mp3d/v1/{split}/{split}.json.gz

Otherwise the PR is good to be merged.
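
For completeness, a minimal sketch of how one of the imagenav configs above could be used (assuming the Gibson PointNav episodes and scenes are downloaded to the paths in the table; this is not taken from the PR itself):

import habitat

config = habitat.get_config("configs/datasets/imagenav/gibson.yaml")
env = habitat.Env(config=config)

observations = env.reset()
# The goal image comes from IMAGEGOAL_SENSOR (uuid "imagegoal") and has the
# RGB sensor resolution, e.g. (256, 256, 3) with the config above.
print(observations["imagegoal"].shape)
print(observations["rgb"].shape)

env.close()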

@mathfac
Contributor

mathfac commented Apr 21, 2020

Will send those changes separately. Merging.

@mathfac mathfac merged commit 3e3181e into facebookresearch:master Apr 21, 2020
dhruvbatra pushed a commit that referenced this pull request May 10, 2020
* Adds navmesh recomputation without scene file reload

* Added recomputeNavMesh function to Simulator in C++ and exposed as recompute_navmesh in python.

* Added recomputeNavMesh function to viewer demo in C++

* NavMeshSettings object exposed to python.

* Added python CI test for navmesh recomputation: test_recompute_navmesh
dannymcy pushed a commit to dannymcy/habitat-lab that referenced this pull request Jul 8, 2024
Adding a new task: Image goal Navigation.