At the moment the code does lots of extra work for each light that has changed. We should cache the extents and transformations for each shadow-casting object, and optionally implement an acceleration structure to make traversals quicker.
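The cached-extents idea could look something like the following: compute an object's bounds once, then hand back the cached value until the object is explicitly marked dirty. This is only a minimal sketch; the names (`ShadowCasterCache`, `Bounds`) are illustrative and not from the codebase.

```cpp
#include <array>
#include <cassert>
#include <string>
#include <unordered_map>

// Hypothetical per-object cache: the expensive bounds computation runs only
// when an object has been marked dirty, not on every light update.
struct Bounds {
    std::array<double, 3> min{0, 0, 0};
    std::array<double, 3> max{0, 0, 0};
};

class ShadowCasterCache {
public:
    // Called when an object's geometry or transform changes.
    void SetDirty(const std::string& id) { _entries[id].dirty = true; }

    // Returns cached bounds, invoking the compute callback only when dirty.
    template <typename ComputeFn>
    const Bounds& GetBounds(const std::string& id, ComputeFn&& compute) {
        auto& e = _entries[id];
        if (e.dirty) {
            e.bounds = compute();  // expensive path, taken once per change
            e.dirty = false;
        }
        return e.bounds;
    }

private:
    struct CachedEntry {
        Bounds bounds;
        bool dirty = true;  // new entries must be computed once
    };
    std::unordered_map<std::string, CachedEntry> _entries;
};
```

A light update would then iterate cached entries instead of recomputing each mesh's extents; an acceleration structure could later be layered over the same cache.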
Note, this is not critical for the prototype.
For comparison, the frustum fitting takes 0.02 ms for 5 meshes, and 3 ms (absolute worst case, when all the meshes are inside the frustum) for 1000 meshes. This cost only occurs when the light is moved or its parameters change, so you would probably need a few thousand meshes before the frame rate dips below 60 fps.
However, it is still worth optimizing, as the per-light work is really expensive right now. There are a few challenges associated with this, though.
First of all, precalculating certain values for the meshes coming from Maya is relatively cheap, since we can directly track what changed and precache certain calculations. But since we want to allow feeding the render index from multiple delegates, we have to go through the render delegate to list all the meshes and access their values.
@sirpalee was thinking about providing some base utilities in HdMayaDelegate (the class all custom scene delegates need to inherit from) for creating/deleting rprims and dirtying them, so we can explicitly track when an rprim has changed and update its bounding box in our cache. That way we can get rid of the most expensive operations, like the constant interaction with the HdRenderIndex and the Maya base classes.
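A rough sketch of what those base utilities might look like: subclasses go through helper methods instead of touching the render index directly, so the delegate always knows exactly which rprims changed. Everything here is hypothetical (the `RenderIndexLike` interface stands in for HdRenderIndex, and the method names are illustrative).

```cpp
#include <cassert>
#include <functional>
#include <set>
#include <string>

// Stand-in for the render index; in the real plugin these would be calls
// into HdRenderIndex rather than std::function members.
struct RenderIndexLike {
    std::function<void(const std::string&)> insertRprim;
    std::function<void(const std::string&)> removeRprim;
    std::function<void(const std::string&)> markDirty;
};

class DelegateBase {
public:
    explicit DelegateBase(RenderIndexLike index) : _index(std::move(index)) {}

    void InsertRprim(const std::string& id) {
        _index.insertRprim(id);
        _dirtyIds.insert(id);  // new prims need their bbox computed once
    }

    void RemoveRprim(const std::string& id) {
        _index.removeRprim(id);
        _dirtyIds.erase(id);   // drop any stale cache entry
    }

    void MarkRprimDirty(const std::string& id) {
        _index.markDirty(id);
        _dirtyIds.insert(id);  // remember it for the next bbox-cache update
    }

    // The shadow code can consume exactly the set of changed prims,
    // without walking the whole render index or the Maya base classes.
    std::set<std::string> TakeDirtyIds() {
        std::set<std::string> out;
        out.swap(_dirtyIds);
        return out;
    }

private:
    RenderIndexLike _index;
    std::set<std::string> _dirtyIds;
};
```

The key point is that dirty tracking becomes a side effect of the utilities every delegate already has to call, so no extra bookkeeping is required from subclasses.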
Secondly, we need to create a bbox cache (similar to UsdGeomBBoxCache) that allows quick and efficient traversal of all the bounding boxes and keeps most of them in memory, as creating these USD classes is quite expensive.
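One possible shape for such a cache, sketched without any USD types: boxes live in one flat array so a light update can walk them in a tight loop, with a path-to-index map for updates. The names (`BBoxCache`, `AABB`) are illustrative assumptions, not existing API.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <unordered_map>
#include <vector>

// Minimal stand-in for a world-space bounding box (GfBBox3d in USD).
struct AABB {
    double min[3]{0, 0, 0};
    double max[3]{0, 0, 0};
};

class BBoxCache {
public:
    // Insert or overwrite the cached world-space box for a prim path.
    void Set(const std::string& path, const AABB& box) {
        auto it = _indexOf.find(path);
        if (it == _indexOf.end()) {
            _indexOf.emplace(path, _boxes.size());
            _boxes.push_back(box);
        } else {
            _boxes[it->second] = box;
        }
    }

    // Tight loop over contiguous storage; this is the traversal a shadow
    // frustum fit would run for every light update.
    template <typename Fn>
    void ForEach(Fn&& fn) const {
        for (const AABB& box : _boxes) fn(box);
    }

    std::size_t Size() const { return _boxes.size(); }

private:
    std::unordered_map<std::string, std::size_t> _indexOf;
    std::vector<AABB> _boxes;  // contiguous, cache-friendly traversal
};
```

Keeping the boxes resident avoids re-creating USD objects per light update, and the contiguous layout keeps the worst-case traversal (every mesh in the frustum) cheap.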