GPU-to-CPU batch renderer API to render color and depth sensors into images. #2044
Conversation
The branch was updated from 84d7d5d to dfc6e05.
I left some comments. I'm not super-familiar with this code. Maybe @mosra can take a final look and give a green check.
I may have missed something (I'm not that familiar with the batch renderer), but to me this looks good to go.
Motivation and Context
This changes the replay renderer API to support rendering both color and depth sensor observations into images.
Semantic image views are expected to be added to the API at a later time.
Accompanying lab PR: facebookresearch/habitat-lab#1408
Prior to this change, this API could only produce color images.
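To make the before/after change concrete, here is a minimal sketch of how a caller might hand pre-allocated CPU-side buffers to the renderer for both sensor types. The buffer shapes, dtypes, and the `renderer.render(...)` calls shown in the comments are assumptions for illustration only, not the exact names or signatures introduced by this PR.

```python
import numpy as np

# Illustrative sketch only: the renderer object and render() signature in the
# comments below are assumptions, not the verified habitat-sim API.

num_envs, height, width = 4, 480, 640

# Pre-allocated CPU-side output buffers, one per environment.
color_buffers = [
    np.empty((height, width, 4), dtype=np.uint8) for _ in range(num_envs)
]
depth_buffers = [
    np.empty((height, width), dtype=np.float32) for _ in range(num_envs)
]

# Before (color only), hypothetically:
#   renderer.render(color_buffers)
# After (color and depth together), hypothetically:
#   renderer.render(color_buffers, depth_buffers)
```

One reason an API like this takes caller-provided buffers is to keep the GPU-to-CPU copy explicit and avoid per-frame allocations, though the exact design rationale here is not spelled out in the PR description.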
How Has This Been Tested
On Habitat-Lab and CI.