Image nav task #333
Conversation
Codecov Report
```diff
@@            Coverage Diff             @@
##           master     #333      +/-   ##
==========================================
+ Coverage   77.36%   77.51%   +0.14%
==========================================
  Files         108      108
  Lines        7444     7507      +63
==========================================
+ Hits         5759     5819      +60
- Misses       1685     1688       +3
==========================================
```
Continue to review full report at Codecov.
It will be great to have ImageGoalNav in Habitat. Thanks for working on this!
Looks great! I would add a sensor test that checks the size of the sensor output and verifies that it doesn't change until the episode is switched.
I really like the way you addressed the missing rotation from the PointNav dataset. The PR is close to landing; the only thing missing is a test. I would add a test similar to test_get_observations_at: enable the added sensor and check that its value is the same as the observation at the goal position.
Also, you can add the new sensor to habitat_all_sensors_test.yaml to get some simple tests for free.
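A minimal sketch of that suggested test, in the vein of test_get_observations_at; the config path, sensor uuid, and identity rotation are assumptions for illustration, not the final merged test:

```python
import habitat


def test_imagegoal_sensor_matches_goal_view():
    # Illustrative config path; a real test would reuse the repo's test
    # config (e.g. habitat_all_sensors_test.yaml, as suggested above).
    config = habitat.get_config("configs/tasks/pointnav_rgbd.yaml")
    config.defrost()
    config.TASK.SENSORS = ["IMAGEGOAL_SENSOR"]
    config.freeze()
    env = habitat.Env(config=config)
    obs = env.reset()
    goal = env.current_episode.goals[0]
    # Render an observation directly at the goal position. An exact pixel
    # match additionally requires using the same rotation the sensor used,
    # which is the caveat raised later in this thread.
    obs_at_goal = env.sim.get_observations_at(
        position=goal.position, rotation=[0.0, 0.0, 0.0, 1.0]
    )
    assert obs["imagegoal"].shape == obs_at_goal["rgb"].shape
    env.close()
```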
habitat/tasks/nav/nav.py
Outdated
r"""Sensor for ImageGoal observations which are used in ImageGoal Navigation. | ||
|
||
RGBSensor needs to be one of the Simulator sensors. | ||
Currently the task is using pointnav Dataset. |
> Currently the task is using pointnav Dataset.

This relates to the task, not the sensor. It looks like it may work similarly even with a dedicated dataset.
habitat/tasks/nav/nav.py
Outdated
```python
if episode.episode_id == self._current_episode_id:
    return self._current_image_goal

self._current_episode_id = episode.episode_id
```
Maybe let's move the image goal evaluation into a separate internal method, _get_episode_image_goal or get_pointnav_episode_image_goal. Then it will be easier to update the code once the dataset carries a fixed rotation as well.
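A sketch of what that refactor could look like, assuming the goal rotation is derived deterministically from episode_id (the seeding scheme and helper body are assumptions pieced together from this thread, not necessarily the merged code):

```python
import numpy as np


def _get_pointnav_episode_image_goal(self, episode):
    goal_position = np.array(episode.goals[0].position, dtype=np.float32)
    # PointNav episodes carry no goal rotation, so derive one
    # deterministically from the episode id to keep it reproducible.
    seed = abs(hash(episode.episode_id)) % (2 ** 32)
    rng = np.random.RandomState(seed)
    angle = rng.uniform(0, 2 * np.pi)
    goal_rotation = [0, np.sin(angle / 2), 0, np.cos(angle / 2)]
    goal_observation = self._sim.get_observations_at(
        position=goal_position.tolist(), rotation=goal_rotation
    )
    return goal_observation["rgb"]


def get_observation(self, *args, episode, **kwargs):
    # Cache the rendered goal image per episode (see the snippet above).
    if episode.episode_id == self._current_episode_id:
        return self._current_image_goal
    self._current_episode_id = episode.episode_id
    self._current_image_goal = self._get_pointnav_episode_image_goal(episode)
    return self._current_image_goal
```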
@thibautlavril, to fix the lint error, make sure you installed
@mathfac, I added a basic test_imagegoal_sensor (in the vein of test_pointgoal_sensor), but I have some questions about your other suggested tests:

1. What about switching an episode (at least in the way tests are constructed now) would cause the size of the output to change? If nothing, then testing that the output values are constant across the episode (as the added test does) should be sufficient?
2. Initially I thought you meant I could just add IMAGEGOAL_SENSOR to the task sensor list in the config, but when I do that some tests fail because the agent isn't provided with an RGB_SENSOR (e.g. in test/test_baseline_agents.py, test_ppo_agents). Did you mean something else?
3. The observation is only going to match at the goal position if the angle of the agent is the same as the (randomly chosen and, as far as I know, inaccessible) angle at which the goal observation is taken. Do you see a way around this?
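A sketch of the within-episode constancy check described in the first question (the config path and action format are assumptions):

```python
import numpy as np
import habitat


def test_imagegoal_sensor_constant_within_episode():
    config = habitat.get_config("configs/tasks/pointnav_rgbd.yaml")
    config.defrost()
    config.TASK.SENSORS = ["IMAGEGOAL_SENSOR"]
    config.freeze()
    env = habitat.Env(config=config)
    obs = env.reset()
    goal_image = obs["imagegoal"].copy()
    for _ in range(5):
        obs = env.step(action={"action": "MOVE_FORWARD"})
        # The goal image should not change while the episode is running.
        assert np.array_equal(obs["imagegoal"], goal_image)
        if env.episode_over:
            break
    env.close()
```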
Hi @thibautlavril and @drothermel -- thanks for contributing this task, it's a valuable addition. Would you mind adding a visual (video or gif) showing an example episode (or shortest path)? Your visuals will serve as a reference for future users and also provide us materials to explain your contributions.
Just noticed the update. Thank you for the thorough follow-up on all the comments. I added answers in the code regarding the points you mentioned.

> Initially I thought you meant I could just add IMAGEGOAL_SENSOR to the task sensor list in the config, but when I do that some tests fail because the agent isn't provided with an RGB_SENSOR (e.g. in test/test_baseline_agents.py, test_ppo_agents). Did you mean something else?

Yes, that's what I meant. If you didn't remove RGB_SENSOR from the config, then it's strange. Let's just skip this step.

> The observation is only going to match at the goal position if the angle of the agent is the same as the (randomly chosen and, as far as I know, inaccessible) angle at which the goal observation is taken. Do you see a way around this?

A workaround is to replicate deriving the goal rotation from episode_id, but that makes the test too stiff. The test you provided looks great.
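For reference, that brittle workaround would look roughly like this (the helper name is hypothetical; it duplicates the sensor's internal rotation derivation, which is exactly why it was deemed too stiff):

```python
import numpy as np


def check_imagegoal_matches_goal_view(env, obs):
    """Brittle check: re-derive the goal rotation the same way the sensor does.

    Any change to the sensor's seeding scheme would silently break this test.
    """
    episode = env.current_episode
    seed = abs(hash(episode.episode_id)) % (2 ** 32)
    angle = np.random.RandomState(seed).uniform(0, 2 * np.pi)
    goal_rotation = [0, np.sin(angle / 2), 0, np.cos(angle / 2)]
    obs_at_goal = env.sim.get_observations_at(
        position=episode.goals[0].position, rotation=goal_rotation
    )
    assert np.array_equal(obs["imagegoal"], obs_at_goal["rgb"])
```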
I created the task visual @dhruvbatra mentioned, feel free to use it.
Approving; I left minor comments, and it should be good to merge.
@thibautlavril, I tried to commit to your PR but don't have sufficient permissions. Let's add links to the new task in the tasks table and add configs for the different datasets. Otherwise the PR is good to be merged.
Will send those changes separately. Merging.
Adds navmesh recomputation without scene file reload
* Added recomputeNavMesh function to Simulator in C++ and exposed it as recompute_navmesh in Python.
* Added recomputeNavMesh function to the viewer demo in C++.
* NavMeshSettings object exposed to Python.
* Added a Python CI test for navmesh recomputation: test_recompute_navmesh.
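A minimal usage sketch of the API described in that commit, assuming `sim` is an already-constructed habitat_sim.Simulator (the agent_radius override is illustrative):

```python
import habitat_sim


def rebuild_navmesh(sim: habitat_sim.Simulator) -> None:
    navmesh_settings = habitat_sim.NavMeshSettings()
    navmesh_settings.set_defaults()
    # Illustrative override: rebuild the navmesh for a smaller agent.
    navmesh_settings.agent_radius = 0.2
    # Recomputes in-place, without reloading the scene file.
    sim.recompute_navmesh(sim.pathfinder, navmesh_settings)
    assert sim.pathfinder.is_loaded
```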
Adding a new task: ImageGoal Navigation.
Motivation and Context
Adding a new task: ImageGoal Navigation.
How Has This Been Tested
Types of changes
Checklist