Multi-mesh, multi-texture and multi-file support in the batch renderer #1874
Conversation
Force-pushed 8046b1d to 6bb02c3
I posted some questions and comments, but overall LGTM!
src/esp/gfx_batch/Renderer.cpp (Outdated)
@@ -240,89 +291,131 @@ void Renderer::addFile(const Cr::Containers::StringView filename,
  // for glb, bps and ply?)
  CORRADE_INTERNAL_ASSERT_OUTPUT(importer->openFile(filename));
Is this assert checking if the import succeeded? And catching a malformed glTF file?

So this is a runtime check? Is a Corrade assert the right way to catch this? I'm used to thinking of asserts as checks on code correctness, which can be compiled out in release builds. See also Habitat's `ESP_CHECK` for reference.
Yeah, not great at all, and I have to fix it as it asserts on me quite often :D Should be something more graceful.

I'm thinking returning a `bool` here and then asserting / `ESP_CHECK()`ing when you forget to check the failure and attempt to add a non-existent name (from that failed-to-load file, for example).
Done, the function now returns a `bool` and all non-programmer errors (i.e., caused by corrupted files or unexpected data) are now runtime failures instead of assertions.

Note that I'm not using Habitat's logging APIs in order to avoid linking to `core`, based on our discussion in the previous PR.
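For readers skimming the thread, here is a minimal self-contained sketch of the error-handling split agreed on above. It is not the actual `esp::gfx_batch` code; the class and members are hypothetical stand-ins. Data errors (missing or corrupted file) surface as a `bool` the caller must check, while using a name that never got registered stays a programmer error caught by an assert (the role `ESP_CHECK()` plays in Habitat).

```cpp
#include <cassert>
#include <fstream>
#include <iostream>
#include <string>
#include <unordered_map>

// Hypothetical stand-in for the renderer, illustrating the convention only.
class BatchRenderer {
 public:
  // Returns false on non-programmer errors (missing / corrupted file) instead
  // of asserting, so callers can recover gracefully at runtime.
  bool addFile(const std::string& filename) {
    std::ifstream in{filename, std::ios::binary};
    if(!in) return false;
    // ... parse the contents; malformed data would also return false ...
    names_.emplace(filename, nextId_++);
    return true;
  }

  // Using an unknown name is a programmer error (e.g. the failed addFile()
  // return value was ignored), so an assert is appropriate here.
  int addMeshHierarchy(const std::string& name) {
    auto found = names_.find(name);
    assert(found != names_.end() && "addMeshHierarchy(): unknown name");
    return found->second;
  }

 private:
  std::unordered_map<std::string, int> names_;
  int nextId_ = 0;
};

int main() {
  BatchRenderer renderer;
  if(!renderer.addFile("composite.gltf")) {  // runtime failure, check it
    std::cerr << "Can't open composite.gltf\n";
    return 1;
  }
  renderer.addMeshHierarchy("composite.gltf");
}
```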
src/esp/gfx_batch/Renderer.cpp (Outdated)
for (Mn::UnsignedLong root : scene->childrenFor(-1)) {
  Cr::Containers::Array<Mn::UnsignedLong> children =
      scene->childrenFor(root);

  Cr::Containers::String name = importer->objectName(root);
  CORRADE_ASSERT(name,
      "Renderer::addFile(): node" << root << "has no name", );
  state_->meshViewRangeForName.insert(
      {name, {offset, offset + Mn::UnsignedInt(children.size())}});
  CORRADE_ASSERT_OUTPUT(
This seems like a runtime check that might fail due to bad data. Same question here about runtime check versus assert.
Hm, for this one I actually got an idea -- would it make sense that if you `addFile()` a file that contains identifiers/names already present from files added earlier, it replaces them? Instead of failing/asserting.

Let's say, for iterative workflows where you have some simulation running already and just need to figure out the best scale/color/whatever for a newly added object. Or for lights. There it might make sense that you repeatedly `addFile()` and `addMeshHierarchy()` until you're happy. The consequence is of course that you gradually fill GPU memory with unused meshes, but that might be an okay tradeoff compared to having to suffer the delay of shutting down and restarting everything. Or it could be an opt-in flag in the config, to allow this, and be strict by default.

Thoughts?
Yeah, hard to guess what users want here, but opt-in flag with strict by default sounds good.
Done as well, this is now a runtime error and the function will return `false` in that case.

To avoid too many unused features creeping in, I didn't add the opt-in flag to replace existing names. There's a TODO, and I will implement it once an actual use case appears.
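As a side note, here is a rough sketch of what the postponed opt-in could look like, with the flag name borrowed from the TODO list in the PR description (`RendererFileFlag::ReplaceTemplates`): strict behavior stays the default and replacement has to be requested explicitly. This is an illustration only, not code from this PR.

```cpp
#include <cstdio>
#include <string>
#include <unordered_map>

// Hypothetical flag; the real name/shape is still open (see the TODO list).
enum class FileFlag { None, ReplaceTemplates };

struct TemplateRegistry {
  std::unordered_map<std::string, int> templates;

  // Strict by default: a duplicate name makes the call fail. With the opt-in
  // ReplaceTemplates flag the existing entry is overwritten instead.
  bool add(const std::string& name, int meshViewRange,
           FileFlag flag = FileFlag::None) {
    if(flag == FileFlag::ReplaceTemplates) {
      templates[name] = meshViewRange;
      return true;
    }
    return templates.emplace(name, meshViewRange).second;
  }
};

int main() {
  TemplateRegistry registry;
  bool first = registry.add("chair", 0);      // true
  bool duplicate = registry.add("chair", 1);  // false: strict default
  bool replaced =
      registry.add("chair", 1, FileFlag::ReplaceTemplates);  // true: opt-in
  std::printf("%d %d %d\n", first, duplicate, replaced);
}
```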
        .slice(&DrawCommand::indexOffsetInBytes));
    scene.textureTransformationUniform);

  /* Submit all draw batches */
When we talk about a "batch renderer", we generally mean batching things (meshes) into fewer draw calls. I guess I always assumed we'd do batching across scenes. E.g. all the usage of `ycb_composite.gltf`, namely all instances of all YCB objects, across all scenes, could be one draw call. But it looks like we don't do any batching across scenes here, correct? Why not?
Across all scenes, not really, due to scissoring -- otherwise objects from one scene could leak into the other. (BPS did the same, IIRC.)

But I have a bunch of ideas that could circumvent this limitation (custom clip planes, multi-view / multi-viewport rendering), just need to try them out ;)
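To make the scissoring constraint concrete, here is a loose standalone sketch (plain C++, not the renderer's actual draw submission): draws can only be merged into one multidraw call if they share the same scissor state, and since every scene gets its own scissor rectangle, identical meshes used by different scenes still end up in separate batches.

```cpp
#include <cstdio>
#include <map>
#include <vector>

struct Draw { int sceneId; int meshId; };  // simplified draw command

int main() {
  // Two scenes instancing the same two meshes.
  std::vector<Draw> draws{{0, 7}, {1, 7}, {0, 3}, {1, 3}};

  // A multidraw call can only cover commands sharing one scissor rectangle,
  // i.e. one scene tile, so batching stops at scene boundaries even though
  // the meshes are identical across scenes.
  std::map<int, std::vector<Draw>> batchesPerScissor;
  for(const Draw& draw : draws) batchesPerScissor[draw.sceneId].push_back(draw);

  for(const auto& [scene, batch] : batchesPerScissor)
    std::printf("scene %d: one multidraw with %zu commands\n", scene,
                batch.size());
}
```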
Last push contains all changes that were needed to support gfx-replay from raw unoptimized glTF files (what I showed on Slack last Friday) -- in particular files that are treated as a whole (instead of as composite), texture-less materials and scene-less files (such as STLs). Next I'm going to resolve the above comments and write some initial documentation for the interface, because it now really needs documenting. Further feedback welcome, although with the high-level docs missing the code is probably too cryptic :) Keeping the PR as a draft for now as it contains a dirty attempt to fix CodeCov reports.
Force-pushed 3279e98 to 1cdebce
@Skylion007 I'm getting this Clang Tidy complaint:

Do you think it's useful? Because I don't, at all, and so I think I'd just disable this check altogether. Being forced to write …
Force-pushed ef78fc5 to fd99dc9
It's useful in PyBind11 for finding C "bool"s that actually should have error handling on the CPython API. Probably not as useful in Habitat-Sim.
Force-pushed fd99dc9 to 0c325ed
Last commit contains high-level docs for the batch renderer, mainly interesting for @0mdc. You can read the documentation source here; rendered output, including links to relevant Magnum APIs and colored console output, will appear here once merged. Oh, and Clang-Format is making a mess of the doc blocks, with line wraps done in the middle of Doxygen …

Am I allowed to disable that check? :)
@mosra Yeah, feel free to.
@mosra You can set a comment pragma in the clang-format settings. If it sees a comment with a special token in it, it won't format it.
Ah, nice. So would it be fine if I add …
Force-pushed 025b32a to ab361b9
Hm, the comment pragma fixed it for most of the cases but not all. Strange, is some line wrapping excluded from the pragmas? (The code block containing the suggested clang-format change contains …
It gets fully overwritten each frame, so there's no need to keep it around. The single-upload TODO still holds, although I'm not sure about the benefits anymore.
This significantly reduces the amount of data passed to the GPU each frame, at the cost of more complex processing of the node transformation hierarchy. But that will eventually get moved to Magnum itself and heavily SIMDified.
This also prepares for additional operations that are too expensive to do after every add, like sorting by mesh ID.
Using a *.gltf file for self-documenting purposes, but bundling the KTX image inside the *.bin to reduce the number of test files used.
Gradually regrouping / redoing the test files to avoid having too many of them.
And make them available under their filename, instead of using node names.
Currently just for vertex colors, but makes room for further expansion.
Yay, wouldn't have discovered this bug without all those regression tests.
Importing the first mesh from these and assigning a default material to it.
It's an often-used pattern, actually.
Force-pushed ab361b9 to 6b76543
Just so we're sure they're implemented.
Force-pushed 6b76543 to 3a61032
Using Magnum's error output facilities instead of Habitat's logging to avoid dependency on the core library, as discussed in the previous PR.
Force-pushed 1a4a145 to cc1d29f
Assuming the CIs don't start failing again, this PR is done from my side, with pending TODOs either implemented or postponed (see the list in the PR description). Awaiting your feedback on the high-level documentation. In case you don't have any, I propose merging this so the gfx-replay integration can be branched off these changes.
For future reference, here is a more aggressive hack: https://stackoverflow.com/a/51913607/2444240
I ended up using the most aggressive variant in certain places, …
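Presumably the "most aggressive variant" means switching the formatter off entirely for a region (an assumption on the editor's part, not quoted from the linked answer); with clang-format that is done with off/on marker comments around the block that should keep its manual layout:

```cpp
// clang-format off
/**
 * @brief Add a file to the renderer
 *
 * A long Doxygen block with deliberate line breaks, @ref cross-references and
 * tables that clang-format would otherwise re-wrap in the middle of a command.
 */
// clang-format on
bool addFile(const char* filename);
```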
Motivation and Context
With all `GltfSceneConverter` features needed by this code now finalized and upstreamed, I can finally publish this update. The glTF conversion being fully upstreamed also means that generating test data, as well as writing code that processes several files into a single batch-optimized composite glTF file, is possible directly using Habitat, without having to pull from some arbitrary Magnum branch.

What the batch renderer does compared to the initial prototype from #1798 from the end of June:
This ultimately could mean that one can now feed the plain old unoptimized single-mesh glTFs and it will work, making this thing slightly closer to being a replacement for the current renderer, including support for slow and inefficient rendering if desired. Which is what I want to confirm next, by attempting to load a scene replay JSON, and so the PR is a draft. TODOs for this PR so far:

- ~~has been procrastinated away~~ is not implemented yet. I'm sure it will be nothing complicated to add, but for now I'd like to fix it just by setting the object scale to 0.
- `GfxBatchRendererTest` output rather noisy. Those are useful in general to avoid data loss, but not in this particular case. Mostly fixed in the engine, the one remaining warning has to wait.
- `addFile()` returning `bool` instead of assertion if a file can't be opened or any other non-programmer error happens
- `RendererFileFlag::ReplaceTemplates` (name??) to overwrite mesh hierarchy templates of the same name when adding a new file instead of failing (postponed, will add once/if actually needed)
- and the `name` parameter of `addFile()` used as a prefix, to allow handling multiple files with the conflicting names (postponed, will add once/if actually needed)
- Plus a subsequent ability to reuse the queried numeric name identifier in `addMeshHierarchy()` without having to do a string map lookup again (postponed, will add once/if actually needed)

How Has This Been Tested
With sweat and tears. The original test setup remains, it's just extended with a few variants of the input glTF file -- one that has multiple meshes instead of one, one that has multiple 2D textures instead of one 2D array, and the data being split across two files. That's where the majority of the changes comes from -- the glTF files and the code needed to generate them. The actually useful code is just about 1000 lines in `Renderer.cpp`, with another 1000 for documentation in the headers, and then some more tens of thousands in Magnum itself.