perf: Move datareader caches down to internal connection #499
Conversation
To port this to netfx you'd need to port #328 first, and that's a difficult job because of the complexity in the async paths.
@Wraith2 How does this behave when MARS is enabled and there are interleaved data readers? Offhand, it looks like they would pick up each others' cached snapshot and context objects. Maybe I'm missing something, though.
They'd work as you'd expect: one will pick up the cached instance, and the other will find there isn't one and create its own. The first to finish will return the cleared instance, and then the second will let its instance go to GC. The only way around that would be to put the cache on the SNI handle, but that isn't exposed as far up as the reader, so it'd need some virtual plumbing. Whatever instance ends up in the cache is not linked to a particular reader; it's a blank object with no remaining state, and it's just existing that saves the perf. If anyone is worried about performance they won't be using MARS; MARS sucks for perf. This will be no worse than current in all cases, and better in non-MARS heavy re-use scenarios.
@cheenamalhotra this one is pretty simple and should be able to get into 2.0 GA.
I don't think the snapshot is unlinked from a particular reader; it contains all this info: SqlClient/src/Microsoft.Data.SqlClient/netcore/src/Microsoft/Data/SqlClient/SqlDataReader.cs Lines 5391 to 5415 in fc6aece
Metadata is mainly going to be shared. What if one reader creates a snapshot while another reader accesses it, with a different table being read whose metadata doesn't match? Wouldn't that conflict lead to wrong data conversions? Can you write tests to cover this specifically?
I'm not sure I follow. The interlocks are there to ensure that no two readers can take the cached instance at one time. If two readers need it (MARS, presumably) then one will have to create its own snapshot and use it. Those haven't changed. All that's different is the location of the cache, so that it'll stay alive with the internal connection rather than being discarded with the reader or connection.
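The handoff described above can be sketched generically. This Python sketch uses purely hypothetical names, and a lock stands in for the interlocked operations used in the real C# code; it shows how at most one reader gets the cached instance while an interleaved reader allocates its own, and how only one cleared instance survives back into the cache:

```python
import threading

class SingleSlotCache:
    """One-item cache: at most one caller can take the cached instance;
    everyone else allocates fresh (illustrative sketch only)."""

    def __init__(self):
        self._slot = None
        self._lock = threading.Lock()  # stands in for an atomic exchange

    def take(self):
        # atomically remove whatever is in the slot (may be None)
        with self._lock:
            obj, self._slot = self._slot, None
        return obj

    def put(self, obj):
        obj.clear()  # only blank, state-free objects go back in
        with self._lock:
            if self._slot is None:
                self._slot = obj  # cache it for the next reader
            # else: drop it and let it be garbage collected

class Snapshot:
    """Stand-in for the cached async-state object."""
    def __init__(self):
        self.state = {}
    def clear(self):
        self.state.clear()

cache = SingleSlotCache()
a = cache.take() or Snapshot()  # first reader: cache empty, allocates
b = cache.take() or Snapshot()  # interleaved reader: allocates its own
cache.put(a)                    # first to finish refills the slot
cache.put(b)                    # slot occupied, so b is discarded
c = cache.take()                # the next reader reuses a
assert c is a
```

Whichever reader finishes first repopulates the slot; the loser's instance simply dies, which matches the "no worse than current" claim above.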
I've been debugging this and use of
Could you elaborate a bit with a use-case where you think a cached snapshot will be re-used outside the scope of Unless you're proposing to save object creation time taken in |
Each time the user starts an async operation on the data reader (or on a command which internally uses a reader, like ExecuteScalar), a snapshot is used to keep track of the state from the "safe" start point until another safe point is reached and the reader is updated. The primary example of this is reading binary data. You start in a consistent state and the user requests an async read of a column that hasn't been seen yet. The reader snaps, then starts requesting data and keeps on requesting data until it reaches the beginning of the column that it wants; by processing the incoming data it changes the state of the reader into an inconsistent state. For the duration of the async read the user can observe that the internal state of the reader is unexpected by issuing a sync operation, and it's possible that the sync operation can interfere with the async one. When the async operation has moved to the column it wants to read, it can set the reader state to the current async state, resyncing the state, and then proceed to start reading the requested data.

The snapshot in the reader and the snapshot in the parser implement this isolation of the async operation from the sync state. Each time you do anything async you need async state to be kept somewhere. The pattern I showed in the original code is a common one where you create a connection, command, and reader and then discard them all. If we keep the cached reader snapshot on the reader, it's only available for that instance of the reader and is discarded with the reader. If we move it down as I suggest, then each time an internal connection is linked to a connection and a reader is created, the state for the async snapshot may already be present on the internal connection, so it won't be created and can be recycled. Why is it useful? Because the common pattern is wasteful if you are doing many small async reads: you'll never use the cached instance.
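The snapshot mechanism described above amounts to a capture, run-forward, restore-or-commit pattern. A minimal Python sketch with purely illustrative names (the real TdsParser state is far richer, tracking packet buffers, offsets, and row state):

```python
import copy

class Parser:
    """Toy parser state; names here are illustrative only."""
    def __init__(self):
        self.position = 0

    def snapshot(self):
        # capture a consistent "safe point" to return to later
        return copy.deepcopy(self.__dict__)

    def restore(self, snap):
        # roll back as if the later packets had never been seen
        self.__dict__.update(copy.deepcopy(snap))

    def consume_packet(self, n):
        # speculative progress mutates the live state
        self.position += n

p = Parser()
safe = p.snapshot()    # async operation begins at a safe point
p.consume_packet(10)   # the parser runs forward on incoming packets
p.consume_packet(5)
assert p.position == 15
p.restore(safe)        # not at the wanted column yet: rewind
assert p.position == 0
```

On completion the async path can instead keep the advanced state (commit), which corresponds to "set the reader state to the current async state" above.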
The worst case scenario for this is quickly trying to read a single row, creating a connection, command, and reader each time. Sadly this sort of thing is very common, because TVPs are a pain to use correctly, require database type support, and aren't universal in other providers, so they aren't used in wrapping libraries like Dapper and EF; developers also think in loops, not sets, far too often. This change will reduce memory usage for cases where users are doing small, frequent async operations using readers. It does this by putting the state on something that is reused frequently, so the cached objects build up over the internal connection pool's lifetime.
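The memory effect can be illustrated with a toy model: if the snapshot cache lives on the short-lived reader, every iteration allocates; if it lives on the pooled internal connection, only the first iteration does. A hedged Python sketch, all names hypothetical:

```python
class Snapshot:
    """Stand-in for the cached async-state object."""
    count = 0  # how many instances have been allocated in total
    def __init__(self):
        Snapshot.count += 1

class InternalConnection:
    """Pooled object that outlives individual readers."""
    def __init__(self):
        self.cached_snapshot = None

def read_async(conn, cache_on_connection):
    """One short-lived reader doing one async read (sketch only)."""
    snap = conn.cached_snapshot if cache_on_connection else None
    if snap is None:
        snap = Snapshot()  # allocate when there is no cache hit
    # ... the async read runs; the snapshot is cleared afterwards ...
    if cache_on_connection:
        conn.cached_snapshot = snap  # the cache outlives the reader

pooled = InternalConnection()  # one pooled internal connection

# reader-level cache: the snapshot dies with each reader
Snapshot.count = 0
for _ in range(100):
    read_async(pooled, cache_on_connection=False)
print(Snapshot.count)  # 100 allocations

# connection-level cache (this PR): first reader allocates, rest reuse
Snapshot.count = 0
for _ in range(100):
    read_async(pooled, cache_on_connection=True)
print(Snapshot.count)  # 1 allocation
```

This is the "loops, not sets" pattern above: per-row round trips never reuse a reader, so only a cache that survives in the pool ever pays off.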
Appreciate the explanation @Wraith2 :)
I also noticed this is not even implemented in NetFx, so porting this PR isn't possible; maybe we can consider adding support for this cache in NetFx sometime in the future?
But I see it now :) Nvm, we'll take care of both.
Ultimately, combining the two codebases should result in all improvements in netcore being brought into netfx. I think TdsParser, the state object, and the reader will probably be at the end of that effort, but I hope we do get there.
@Wraith2 am I reading this correctly, that SqlClient supports a sync operation on a given reader instance while an async read is still in progress? If so, that's quite surprising...
Again, I'm probably lacking lots of context, and I'm also assuming this isn't just about supporting concurrent reader operations (as above). If so, then FWIW this is exactly what async/await takes care of for you: all local function state (the state machine) is automatically lifted to the heap when the method yields asynchronously. In modern asynchronous programming you shouldn't need to manually manage async state in most typical cases.
It's messy. Officially no, and there are checks in place that throw on direct calls of user methods in most places, but because the internal API itself is all sync there's no real distinction between sync and async calls in some respects, and you can end up in situations where async operations are forced into sync mode for various reasons. You could end up calling a public method on the reader that you might not expect to cause a read operation, and it might not be protected. By async state I'm referring to both the function state, which would be hoisted by the compiler, and what is effectively a speculative copy of some of the parser state. Imagine taking a snapshot and then running the parser forward from that state using incoming packets, but then being able to just swap in the old state to return to the previous point as if the new packets hadn't been seen. The problem I have with this approach is that speculation is always capable of mutating local state, so you can't speculate without side effects. TryReadInt32, for example, can (not always) mutate local state if the read crosses a packet boundary, and it'll directly force a network read if it needs to read from a second packet. So calling a simple method to read an integer may mutate state and may unavoidably cause and wait for IO. I have toyed with the idea of trying to rewrite some parts of this with pipelines and spans, but the scope of that project deterred me.
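The TryReadInt32 hazard described here, where a read straddling a packet boundary forces IO and mutates reader state as a side effect, can be sketched as follows (Python, with illustrative names only; the real code is C# inside TdsParser):

```python
import struct

class PacketReader:
    """Toy model of a buffered packet reader (hypothetical names)."""
    def __init__(self, packets):
        self._incoming = list(packets)  # stands in for the network
        self._buf = b""
        self._offset = 0
        self.network_reads = 0

    def _fetch_packet(self):
        # observable side effect: a (potentially blocking) network read
        self.network_reads += 1
        self._buf = self._buf[self._offset:] + self._incoming.pop(0)
        self._offset = 0

    def read_int32(self):
        # if the 4-byte value straddles a packet boundary, we must
        # fetch more data, mutating buffer state along the way
        while len(self._buf) - self._offset < 4:
            self._fetch_packet()
        value = struct.unpack_from("<i", self._buf, self._offset)[0]
        self._offset += 4  # state is mutated even on the happy path
        return value

# the int32 value 7 arrives split across two 2-byte packets
r = PacketReader([b"\x07\x00", b"\x00\x00"])
assert r.read_int32() == 7
assert r.network_reads == 2  # a "simple" read forced two fetches
```

This is why speculation can't be side-effect free: even a trivial integer read may change buffer offsets and trigger IO, which is exactly what the snapshot exists to hide from the stable state.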
Can you give an example for this? Off the top of my head, I can't think of any operation on DbDataReader that users should expect to be invokable while any other async operation is still in progress... I'm guessing this may be a MARS-related thing, though I'd expect that there as well it would be the application's responsibility to never concurrently use the connection. Stepping back, there seems to be a large amount of complexity here to support something which probably shouldn't be supported (concurrent usage of reader instances)... I'm sure there's a history/context here that I'm missing though.
I don't think it's there to support concurrent access. I think it's there because the sync and async paths are so separate and yet use the same sync-based primitives. I've honestly never tried doing sync calls while async is in progress and wouldn't expect it to be allowed. Having worked through rationalizing the async calls, I recognise that it is working hard to isolate current stable state from in-progress async operations.
OK, thanks for the explanations @Wraith2. Sounds like a very complicated state of affairs for sure; I'm still not sure what exactly it's there to support. I'm not aware of any need in other drivers to "snapshot" state at the beginning of async operations, present that snapshotted state while the operation is in progress, and so on. But I'm out of my depth here.
A standard pattern of using a data reader is to wrap the connection, command, and reader in using blocks, which means that the reader object is used only once. We added cached instances of backing objects for commonly used functions like `IsDBNull` and `Read`, but these are only useful if you re-use the reader or call the same function more than once.

This PR takes the cache location for the common async state operations and moves it from the `DataReader` down to the internal connection. By doing this, if users frequently do async operations, eventually the majority of pooled internal connections will have cached async state objects, which will further reduce memory churn.
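The using-block snippet referenced in the description did not survive this transcript. As an analogue only, with Python's sqlite3 standing in for SqlConnection/SqlCommand/SqlDataReader, the create-use-discard shape looks like:

```python
import sqlite3

# each object lives for exactly one call; any cache stored on the
# cursor (reader) can never pay off, while a cache on the pooled
# connection would survive for the next call
with sqlite3.connect(":memory:") as conn:
    cur = conn.cursor()
    cur.execute("SELECT 1")
    value = cur.fetchone()[0]
    cur.close()
print(value)  # 1
```

The original C# version nests `using` blocks the same way, which is why moving the cache from the reader down to the pooled internal connection is what makes reuse possible.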