Rewrite buffer management #4293
Merged
doitsujin force-pushed the memory-rework-pt2 branch 5 times, most recently from 9aab4ec to edefd9e on September 26, 2024 06:58
doitsujin force-pushed the memory-rework-pt1 branch from 6247d31 to c5d1286 on September 26, 2024 07:16
doitsujin force-pushed the memory-rework-pt2 branch from edefd9e to f58ec14 on September 26, 2024 07:17
doitsujin force-pushed the memory-rework-pt1 branch from c5d1286 to ad20138 on September 26, 2024 08:41
doitsujin force-pushed the memory-rework-pt2 branch 2 times, most recently from 6d3e600 to 6ec91e8 on September 26, 2024 09:27
doitsujin force-pushed the memory-rework-pt1 branch 2 times, most recently from 9c3886b to fc86067 on September 26, 2024 10:46
doitsujin force-pushed the memory-rework-pt2 branch from 6ec91e8 to 24f399e on September 26, 2024 10:57
Basically lets us deal with objects that manage their own destruction, which ideally should be all of them at some point. Also adds some missing comparison operators.
For now, this is merely a wrapper around the existing buffer slice struct in order to allow easier refactoring.
Changes DxvkMemory to be nothing more than a wrapper.
Fallback allocations are a thing.
Necessary for actual resource refactors. We want view objects to use the resource's reference count whenever possible.
Temporary solution that hits the allocator on every single invalidation, which isn't great but will do for now.
No longer necessary as they have the same lifetime as the parent buffer now. Only track the buffers themselves.
Allows refilling local caches in constant time.
This makes the entire cache available to all allocation sizes rather than having fixed-size pools for every allocation size. Improves hit rate in games that primarily use one constant buffer size.
doitsujin force-pushed the memory-rework-pt2 branch from 24f399e to 892ba3c on September 26, 2024 11:44
Reduces ref counting overhead on the CS thread a bit.
And replace the old sparse thing.
Uses the new allocator directly.
Uses DxvkResourceAllocation to manage image backing storage, which will allow invalidating images in the future.
Reduces ref counting overhead again and matches previous behaviour. We should probably do something about the possible case of deferred context execution with MAP_WRITE_DISCARD followed by MAP_WRITE_NO_OVERWRITE on the immediate context, but we haven't seen a game rely on this yet.
doitsujin force-pushed the memory-rework-pt2 branch from 892ba3c to 58025d5 on September 26, 2024 12:25
... but keep the SingleUse option as-is anyway because games do not release their command lists after submission and end up wasting massive amounts of memory.
Part 2 of the large rework, builds on (and includes) #4280.
Things missing before this can be merged:
D3D11_MAP_WRITE_DISCARD
Per-draw data in D3D11, such as material parameters, transforms, etc., is passed to the GPU via so-called constant buffers, which are really just small regions of memory that shaders can read from really fast (on Nvidia, anyway).
The obvious problem here is that an app doing 10000 draws per frame also needs 10000 tiny memory regions to actually write all this data to. Any sane person these days would just allocate a large buffer (say, a couple of MB), use a linear allocator and just bind that buffer with the correct offset and size. Minimal CPU cost, minimal API calls, and none of that data is needed after the current frame anyway so we don't really care what happens.
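For illustration, that approach boils down to something like this minimal bump allocator (a hypothetical sketch, not anything in DXVK): it hands out 256-byte-aligned offsets into one big per-frame buffer and gets reset once the GPU is done with the frame.

```cpp
#include <cstddef>
#include <cstdint>

// Minimal per-frame bump allocator over one large GPU buffer (sketch).
// Offsets are aligned to 256 bytes, the usual constant buffer alignment.
class LinearCbAllocator {
public:
  explicit LinearCbAllocator(size_t capacity)
  : m_capacity(capacity) { }

  // Returns an offset into the big buffer, or SIZE_MAX if the frame budget is exhausted.
  size_t alloc(size_t size) {
    size_t offset = (m_offset + 255u) & ~size_t(255u);
    if (offset + size > m_capacity)
      return SIZE_MAX;
    m_offset = offset + size;
    return offset;
  }

  // Called once per frame, after the GPU has consumed the previous frame's data.
  void reset() { m_offset = 0; }

private:
  size_t m_capacity = 0;
  size_t m_offset   = 0;
};
```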
But Microsoft had other ideas.
There's no offset in SetConstantBuffers. To be clear, the above approach can be implemented with D3D11.1, which added the SetConstantBuffers1 functions, but since that requires Windows 8, which had a user base of roughly nobody, games never really adopted this feature. Some did, but it's certainly not common.
Instead, games update their constant buffers before every draw via Map(..., D3D11_MAP_WRITE_DISCARD). This essentially just allocates some new backing storage for the buffer that is used in subsequent rendering commands, and recycles the previous backing storage once the GPU is done using it.
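The per-draw pattern then looks something like this (sketch; the buffer is assumed to be an ordinary D3D11_USAGE_DYNAMIC constant buffer, and the data pointer stands in for whatever per-draw constants the game wants to upload):

```cpp
#include <d3d11.h>
#include <cstring>

// Sketch of the common per-draw pattern: orphan the old backing storage and
// write fresh constants into the newly allocated one.
void UpdateConstants(ID3D11DeviceContext* ctx, ID3D11Buffer* cbuffer,
                     const void* data, size_t size) {
  D3D11_MAPPED_SUBRESOURCE mapped = { };
  if (SUCCEEDED(ctx->Map(cbuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped))) {
    std::memcpy(mapped.pData, data, size);
    ctx->Unmap(cbuffer, 0);
  }
  ctx->VSSetConstantBuffers(0, 1, &cbuffer);
}
```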
The problems

The current DXVK implementation ties memory allocations for DISCARDed buffers to the buffer itself and essentially stores these slices in an array. This way, DISCARD is fast since we only need to pop the last element off the array, and the following two use cases work well:

The problems start when a game (such as Shadow of the Tomb Raider) does some mishmash between the two:
Those 256B constant buffers don't look so innocent anymore
Another issue with the old DXVK implementation is that it's broken with Deferred Contexts, but I don't want to go into details.
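For reference, the per-buffer recycling described above boils down to roughly the following sketch (simplified, hypothetical types, not DXVK's actual code): slices only ever go back to the buffer that owns them, never to the system, so a nominally tiny buffer can end up sitting on a lot of memory.

```cpp
#include <memory>
#include <vector>

// Sketch of the old per-buffer scheme: every buffer keeps a stack of slices it
// has ever used, and DISCARD just pops the most recently recycled one.
struct Slice { /* memory handle, offset, size, GPU tracking ... */ };

struct OldBuffer {
  std::vector<std::shared_ptr<Slice>> freeSlices;

  std::shared_ptr<Slice> discard() {
    if (!freeSlices.empty()) {
      auto slice = freeSlices.back();   // O(1) fast path
      freeSlices.pop_back();
      return slice;
    }
    return std::make_shared<Slice>();   // grow: allocate yet another slice
  }

  void recycle(std::shared_ptr<Slice> slice) {
    freeSlices.push_back(std::move(slice));  // slice stays tied to this buffer
  }
};
```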
The solution
Because lifetimes of D3D11 resources and even the DISCARDed memory allocations are unpredictable, things like a per-context linear allocator are just not a viable option. The only real way to solve this problem seems to be to return memory allocations to the system (in this case, to the global allocator) as soon as possible instead of keeping them tied to the buffer object or trying anything clever. This way, a 256 byte buffer will only ever actually hold on to 256 bytes of memory, and any memory in flight will be freed once the GPU is done using it.
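Conceptually, and with entirely hypothetical names, the new lifetime model is something like this sketch: the backing storage is ref-counted and returns itself to the global allocator as soon as the last user lets go of it, whether that user is the buffer object or a command list still in flight.

```cpp
#include <memory>

// Hypothetical sketch: buffer backing storage is a ref-counted allocation that
// hands its memory back to the global allocator when the last reference drops.
struct Allocation {
  // ... memory handle, offset, size ...
  ~Allocation() { /* return memory to the global allocator here */ }
};

struct Buffer {
  std::shared_ptr<Allocation> storage;

  // DISCARD: attach fresh storage to the buffer. The previous allocation stays
  // alive only through the command lists that still reference it, and is freed
  // (returned to the global allocator) the moment the GPU is done with it.
  void discard(std::shared_ptr<Allocation> fresh) {
    storage = std::move(fresh);
  }
};
```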
This is what most of this PR does, and a lot of refactoring was necessary to make this work:

This completely fixes Shadow of the Tomb Raider, and leads to slightly improved memory usage in a number of other games.
That's better
The allocation cache
Of course, the downside with all of the above is that we're calling into the allocator at least once per draw. And while our new allocator is fast, it's not as fast as retrieving an element from an array, especially since it also has multithreading to worry about.
The following things can all happen at the same time on different threads and hit the global allocator lock:
- DISCARD, obviously.
- DISCARD on a deferred context.

In other words, lots of lock contention brought my SotTR test scene from ~60 FPS all the way down to an unstable 50. This was entirely expected, but had to be fixed.
The way we deal with this now is to have a two-way cache for small buffer allocations. Basically, whenever we free a small allocation now, we put it in a list of up to 256 kiB, store that list in an array, and when a D3D11 context needs another allocation of that size, we just give it the entire list of allocations at once, so that it can handle hundreds of subsequent DISCARDs without invoking the allocator even once.

This completely fixes the performance regression, while keeping memory overhead under control as the cache is of a fixed size (~20 MiB on 64-bit + ~2.5 MiB per D3D11 context).
Slightly higher memory usage again, but also back to old performance levels
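To illustrate the idea with a rough sketch (hypothetical structures, not the PR's actual code): freed small allocations are batched into lists, whole lists get handed to a context at once, and only a dry local list ever touches the global lock.

```cpp
#include <memory>
#include <mutex>
#include <vector>

struct Allocation { /* ref-counted backing storage, as in the sketch above */ };

// One batch of freed allocations of a given size class.
using Batch = std::vector<std::shared_ptr<Allocation>>;

// Hypothetical global side of the cache: stores whole batches under one lock,
// with a fixed upper bound so cached memory stays under control.
class GlobalCache {
public:
  void putBatch(Batch batch) {
    std::lock_guard<std::mutex> lock(m_mutex);
    if (m_batches.size() < MaxBatches)
      m_batches.push_back(std::move(batch));
  }

  bool getBatch(Batch& batch) {
    std::lock_guard<std::mutex> lock(m_mutex);
    if (m_batches.empty())
      return false;
    batch = std::move(m_batches.back());  // hand out an entire list at once
    m_batches.pop_back();
    return true;
  }

private:
  static constexpr size_t MaxBatches = 64;
  std::mutex         m_mutex;
  std::vector<Batch> m_batches;
};

// Hypothetical per-context side: serves hundreds of DISCARDs from the local
// batch and only takes the global lock when the batch runs dry or fills up.
class ContextCache {
public:
  explicit ContextCache(GlobalCache& global) : m_global(global) { }

  std::shared_ptr<Allocation> allocate() {
    if (m_local.empty() && !m_global.getBatch(m_local))
      return nullptr;                     // fall back to the real allocator
    auto result = std::move(m_local.back());
    m_local.pop_back();
    return result;
  }

  void free(std::shared_ptr<Allocation> alloc) {
    m_local.push_back(std::move(alloc));
    if (m_local.size() >= BatchSize) {    // list is full, hand it to the global cache
      m_global.putBatch(std::move(m_local));
      m_local = Batch();
    }
  }

private:
  static constexpr size_t BatchSize = 64;
  GlobalCache& m_global;
  Batch        m_local;
};
```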