
Rewrite buffer management #4293

Merged: 42 commits merged into master from memory-rework-pt2 on Sep 26, 2024

Conversation


@doitsujin (Owner) commented on Sep 25, 2024

Part 2 of the large rework; builds on (and includes) #4280.

Things missing before this can be merged:

  • Add back support for sparse buffers
  • Add back support for imported buffers (11on12)

D3D11_MAP_WRITE_DISCARD

Per-draw data in D3D11, such as material parameters, transforms, etc., is passed to the GPU via so-called constant buffers, which are really just small regions of memory that shaders can read from really fast (on Nvidia, anyway).

The obvious problem here is that an app doing 10000 draws per frame also needs 10000 tiny memory regions to actually write all this data to. Any sane person these days would just allocate one large buffer (say, a couple of MB), use a linear allocator, and bind that buffer with the correct offset and size. Minimal CPU cost, minimal API calls, and since none of that data is needed after the current frame anyway, we don't really care what happens to it afterwards.
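
As a rough sketch of that approach (a hypothetical `LinearConstantAllocator` helper, not DXVK code), the allocator really only has to bump an offset per draw:

```cpp
#include <cstddef>
#include <cstring>

// Hypothetical sketch of the "one big buffer + linear allocator" idea.
// The buffer is persistently mapped; offsets are aligned to 256 bytes,
// the usual constant buffer offset alignment.
struct LinearConstantAllocator {
  static constexpr size_t Alignment = 256;

  char*  data     = nullptr;  // mapped pointer to the large buffer
  size_t capacity = 0;        // total buffer size, e.g. a few MiB
  size_t offset   = 0;        // bump pointer, reset once per frame

  // Copies per-draw constants into the buffer and returns the offset at
  // which to bind it, or size_t(-1) if the buffer is exhausted.
  size_t push(const void* src, size_t size) {
    size_t aligned = (offset + Alignment - 1) & ~(Alignment - 1);
    if (aligned + size > capacity)
      return size_t(-1);
    std::memcpy(data + aligned, src, size);
    offset = aligned + size;
    return aligned;
  }

  void reset() { offset = 0; }  // call once the GPU is done with the frame
};
```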

But Microsoft had other ideas.

There's no offset in SetConstantBuffers. To be clear, the above approach can be implemented with D3D11.1, which added the SetConstantBuffers1 functions, but since that requires Windows 8, which had a user base of roughly nobody, games never really adopted the feature. Some did, but it's certainly not common.
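
For reference, the D3D11.1 path looks roughly like this (a sketch with a hypothetical `BindConstantRange` helper, not code from this PR):

```cpp
#include <d3d11_1.h>

// Hypothetical helper showing the D3D11.1 path: bind a sub-range of one large
// constant buffer. Offsets and sizes are specified in shader constants
// (16 bytes each) and must be multiples of 16 constants, i.e. 256 bytes.
void BindConstantRange(ID3D11DeviceContext1* ctx, ID3D11Buffer* buffer,
                       UINT byteOffset, UINT byteSize) {
  UINT firstConstant = byteOffset / 16;  // byteOffset must be 256-byte aligned
  UINT numConstants  = byteSize   / 16;  // byteSize must be a multiple of 256
  ctx->VSSetConstantBuffers1(0, 1, &buffer, &firstConstant, &numConstants);
}
```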

Instead, games update their constant buffers before every draw via Map(..., D3D11_MAP_WRITE_DISCARD). This essentially just allocates some new backing storage for the buffer that is used in subsequent rendering commands, and recycles the previous backing storage once the GPU is done using it.
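
On the application side, the pattern looks roughly like this (a sketch with a hypothetical `UpdateAndDraw` helper, assuming `cb` was created with D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE):

```cpp
#include <cstring>
#include <d3d11.h>

// Sketch of the per-draw update pattern: map with DISCARD, write the new
// constants, unmap, bind, draw.
void UpdateAndDraw(ID3D11DeviceContext* ctx, ID3D11Buffer* cb,
                   const void* constants, size_t size, UINT vertexCount) {
  D3D11_MAPPED_SUBRESOURCE mapped = {};
  // DISCARD hands the app fresh backing storage for the buffer; the previous
  // storage stays alive until the GPU has finished reading from it.
  if (SUCCEEDED(ctx->Map(cb, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped))) {
    std::memcpy(mapped.pData, constants, size);
    ctx->Unmap(cb, 0);
  }

  ctx->VSSetConstantBuffers(0, 1, &cb);
  ctx->Draw(vertexCount, 0);
}
```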

The problems

The current DXVK implementation ties memory allocations for DISCARDed buffers to the buffer itself and essentially stores these slices in an array (see the sketch after this list). This way, DISCARD is fast since we only need to pop the last element off the array, and the following two use cases work well:

  • Game creates one constant buffer per object and only updates it once per frame (or per render pass). We will only end up with a handful of slices per buffer and not waste that much when the app doesn't render that object for a while.
  • Game creates one constant buffer per size (e.g. 256B, 512B, 1kiB) and updates the same buffer for every draw. We will end up with a very large number of slices per buffer, but since those buffers are used for everything it's actually rather efficient.
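
The per-buffer slice array works roughly like this (a sketch with hypothetical names, not the actual DXVK code); the key point is that the pool only ever grows:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch of the old per-buffer slice pool. DISCARD pops a free
// slice off the array; slices go back to the pool once the GPU is done with
// them, and the pool never shrinks.
struct BufferSlice {
  uint64_t offset = 0;
  uint64_t size   = 0;
};

struct BufferWithSlices {
  std::vector<BufferSlice> freeSlices;  // slices not currently in use

  BufferSlice discard() {
    if (freeSlices.empty())
      return allocateNewSlice();        // grows the pool permanently
    BufferSlice slice = freeSlices.back();
    freeSlices.pop_back();
    return slice;
  }

  void recycle(BufferSlice slice) {     // called once the GPU is done
    freeSlices.push_back(slice);
  }

private:
  BufferSlice allocateNewSlice() {
    // Stubbed out; the real code would ask the memory allocator for a new slice.
    return BufferSlice{};
  }
};
```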

The problems start when a game (such as Shadow of the Tomb Raider) does some mishmash between the two:

[Screenshot: Bildschirmfoto-667]
Those 256B constant buffers don't look so innocent anymore

Another issue with the old DXVK implementation is that it's broken with Deferred Contexts, but I don't want to go into details.

The solution

Because lifetimes of D3D11 resources and even of the DISCARDed memory allocations are unpredictable, things like a per-context linear allocator are just not a viable option. The only real way to solve this problem seems to be to return memory allocations to the system (in this case, to the global allocator) as soon as possible, instead of keeping them tied to the buffer object or trying anything clever. This way, a 256 byte buffer will only ever actually hold on to 256 bytes of memory, and any memory in flight will be freed once the GPU is done using it.

This is what most of this PR does, and a lot of refactoring was necessary to make this work:

  • Reference-counted memory allocations (see the sketch after this list). Yeah, we didn't have those because it seemed crazy to ref-count individual sub-kilobyte areas of GPU memory, but it turned out to be both necessary and not actually that bad. This also fixes the deferred context issue I mentioned above.
  • Previously, the buffer and buffer view classes managed all Vulkan objects, but since we also need to keep those around until the GPU is done, these are now part of the same classes that manage memory allocations.
  • A very fast memory allocator that can service small allocations in fractions of a microsecond. This is what Rewrite memory allocator #4280 is about.
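
A minimal sketch of what ref-counting an allocation looks like, with hypothetical names (`Allocation`, `acquire`, `release`, `returnToAllocator`) rather than the actual DXVK classes:

```cpp
#include <atomic>
#include <cstdint>

// Minimal sketch of a ref-counted memory allocation. The buffer object, views
// and in-flight command lists all hold a reference; the memory goes back to
// the global allocator as soon as the last reference is dropped, i.e. once
// both the app and the GPU are done with it.
class Allocation {
public:
  void acquire() {
    m_refCount.fetch_add(1, std::memory_order_relaxed);
  }

  void release() {
    if (m_refCount.fetch_sub(1, std::memory_order_acq_rel) == 1)
      returnToAllocator();
  }

private:
  std::atomic<uint32_t> m_refCount { 1u };
  uint64_t              m_offset = 0;  // suballocation offset within a chunk
  uint64_t              m_size   = 0;  // suballocation size in bytes

  void returnToAllocator() {
    // In the real implementation this hands the (offset, size) range back to
    // the global allocator; stubbed out in this sketch.
  }
};
```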

This completely fixes Shadow of the Tomb Raider, and leads to slightly improved memory usage in a number of other games.

[Screenshot: Bildschirmfoto-668]
That's better

The allocation cache

Of course, the downside with all of the above is that we're calling into the allocator at least once per draw. And while our new allocator is fast, it's not as fast as retrieving an element from an array, especially since it also has multithreading to worry about.

The following things can all happen at the same time on different threads and hit the global allocator lock:

  • Allocating memory during DISCARD, obviously.
  • Doing DISCARD on a deferred context.
  • Freeing memory that the GPU no longer needs. This happens on one of DXVK's internal workers.
  • Freeing memory that the app no longer needs and is not being used by the GPU.
  • Another DXVK worker allocating scratch memory for certain commands.
  • The app creating new resources on one or more background threads.

In other words, lots of lock contention brought my SotTR test scene from ~60 FPS all the way down to an unstable 50. This was entirely expected, but had to be fixed.

The way we deal with this now is to have a two-way cache for small buffer allocations. Whenever we free a small allocation, we put it in a list of up to 256 kiB, store that list in an array, and when a D3D11 context needs another allocation of that size, we just give it the entire list of allocations at once, so that it can handle hundreds of subsequent DISCARDs without invoking the allocator even once.
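
A rough sketch of such a cache, with hypothetical names (`AllocationCache`, `FreeList`, `returnAllocation`, `takeList`) and arbitrary constants, not the actual DXVK implementation:

```cpp
#include <array>
#include <mutex>
#include <vector>

class Allocation;

// Freed small allocations are batched into lists per size class; a context
// grabs a whole list at once and can then serve many DISCARDs locally
// without touching the global allocator lock.
struct FreeList {
  std::vector<Allocation*> items;
};

class AllocationCache {
public:
  // Called when a small allocation is freed: append it to the pending list
  // for its size class and publish the list once it is full.
  void returnAllocation(uint32_t sizeClass, Allocation* alloc) {
    std::lock_guard<std::mutex> lock(m_mutex);
    auto& pending = m_pending[sizeClass];
    pending.items.push_back(alloc);
    if (pending.items.size() >= MaxListSize) {
      m_lists[sizeClass].push_back(std::move(pending));
      pending = FreeList();
    }
  }

  // Called by a D3D11 context that ran out of cached allocations: hand over
  // an entire list so subsequent DISCARDs stay lock-free.
  bool takeList(uint32_t sizeClass, FreeList& out) {
    std::lock_guard<std::mutex> lock(m_mutex);
    auto& lists = m_lists[sizeClass];
    if (lists.empty())
      return false;
    out = std::move(lists.back());
    lists.pop_back();
    return true;
  }

private:
  static constexpr size_t MaxListSize    = 64;  // arbitrary for this sketch
  static constexpr size_t NumSizeClasses = 8;   // small sizes only

  std::mutex m_mutex;
  std::array<FreeList, NumSizeClasses>              m_pending;
  std::array<std::vector<FreeList>, NumSizeClasses> m_lists;
};
```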

This completely fixes the performance regression, while keeping memory overhead under control as the cache is of a fixed size (~20 MiB on 64-Bit + ~2.5 MiB per D3D11 context).

[Screenshot: Bildschirmfoto-669]
Slightly higher memory usage again, but also back to old performance levels

doitsujin force-pushed the memory-rework-pt2 branch 5 times, most recently from 9aab4ec to edefd9e on September 26, 2024 at 06:58
doitsujin force-pushed the memory-rework-pt2 branch 2 times, most recently from 6d3e600 to 6ec91e8 on September 26, 2024 at 09:27
doitsujin force-pushed the memory-rework-pt1 branch 2 times, most recently from 9c3886b to fc86067 on September 26, 2024 at 10:46
doitsujin marked this pull request as ready for review on September 26, 2024 at 11:14
doitsujin changed the base branch from memory-rework-pt1 to master on September 26, 2024 at 11:43
  • Necessary for actual resource refactors. We want view objects to use the resource's reference count whenever possible.
  • Temporary solution that hits the allocator on every single invalidation, which isn't great but will do for now.
  • No longer necessary as they have the same lifetime as the parent buffer now. Only track the buffers themselves.
  • Allows refilling local caches in constant time.
  • This makes the entire cache available to all allocation sizes rather than having fixed-size pools for every allocation size. Improves hit rate in games that primarily use one constant buffer size.
  • Reduces ref counting overhead on the CS thread a bit.
  • And replace the old sparse thing.
  • Uses the new allocator directly.
  • Uses DxvkResourceAllocation to manage image backing storage, which will allow invalidating images in the future.
  • Reduces ref counting overhead again and matches previous behaviour.

  • We should probably do something about the possible case of deferred context execution with MAP_WRITE_DISCARD followed by MAP_WRITE_NO_OVERWRITE on the immediate context, but we haven't seen a game rely on this yet.
  • ... but keep the SingleUse option as-is anyway because games do not release their command lists after submission and end up wasting massive amounts of memory.
doitsujin merged commit 5e5c283 into master on Sep 26, 2024
8 checks passed
doitsujin deleted the memory-rework-pt2 branch on October 23, 2024