Add and use TryReadChars method #1544
Conversation
I have not run any benchmarks with it yet, but it seems like a reasonable change. It should speed up both sync and async paths. It also looks like you could replace usages of TryReadPlpUnicodeChars with TryReadStringWithEncoding (passing Encoding.Unicode as the encoding parameter and doubling the string size) if the result will be converted to a string anyway. This is the case inside TryReadSqlStringValue, for example. This way you can avoid the case of "partially read characters" altogether. It looks like TryReadPlpUnicodeChars is actually only needed for SqlDataReader.GetChars.

That said, this change cannot improve the async path much, because as I mentioned in #593 (comment), its time complexity is quadratic. No matter how fast you make the copying itself, all packets in the replay buffer will be copied into the character buffer again and again for each packet received. The optimal solution would copy each packet once instead. See https://github.com/panoskj/SqlClient/tree/wip/issue/593 for a proof of concept: async runs about as fast as sync does, even for very large columns, e.g. 50MB. Note that I haven't reviewed the last commit much; it wasn't meant to be uploaded yet.

In conclusion, perhaps we should merge the netcore and netfx files before applying a dozen new patches to them. But except for the pending merge, I agree with this change.
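To make the quadratic-copy point concrete, here is a minimal sketch of the cost model being described, assuming the async path restarts the partial read from the replay buffer each time a new packet arrives. The replay buffer and packet handling here are invented stand-ins for illustration, not the real TdsParserStateObject internals:

```csharp
using System;
using System.Collections.Generic;

class ReplaySketch
{
    static void Main()
    {
        var replayBuffer = new List<byte[]>();
        long bytesCopied = 0;
        const int packetCount = 100;
        const int packetSize = 8000;

        for (int i = 0; i < packetCount; i++)
        {
            // A new packet arrives and is appended to the replay buffer.
            replayBuffer.Add(new byte[packetSize]);

            // The async path restarts the partial read, so every packet
            // already buffered is copied into the output buffer again.
            foreach (byte[] packet in replayBuffer)
            {
                bytesCopied += packet.Length; // stand-in for the real copy
            }
        }

        // Total copying is O(n^2): roughly n*(n+1)/2 packet copies for n packets.
        Console.WriteLine($"{bytesCopied:N0} bytes copied to deliver {(long)packetCount * packetSize:N0} bytes");
    }
}
```

Running this shows about 40 MB of copying to deliver 800 KB of data, which is why speeding up the copy itself cannot fix the async path; the one-copy-per-packet approach in the proof of concept linked above avoids the blow-up entirely.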
Possibly. Identifying whether the behaviour is the same can be time-consuming.
We both have branches which can linearize that, just waiting on the tree merges.
I'd love to. Tree merging is going slowly, though. If I can get a small PR through to improve performance I'll try it.
Nevertheless, this insight may be worth it, given the context of optimizing the async path later. But it is completely optional right now.
It looks like this doesn't work with some of the bulkcopy tests.
I updated without making any material change and the CI is mostly clean. The only failures are in AE, and while they do look related I have no way to replicate that environment to debug them.
Codecov Report

Base: 70.86% // Head: 69.56% // Decreases project coverage by -1.31%

```diff
@@            Coverage Diff             @@
##             main    #1544      +/-   ##
==========================================
- Coverage   70.86%   69.56%   -1.31%
==========================================
  Files         292      292
  Lines       61750    61768      +18
==========================================
- Hits        43759    42967     -792
- Misses      17991    18801     +810
```
@Wraith2 LGTM, do you have any plans to port it to netfx too?
PR #1520 is a prerequisite for this change.
Prerequisites are met...
I see you haven't modified any existing code inside TdsParserStateObject - you only added a new method. This means you can now update this PR to add the new method to the merged file (src\Microsoft\Data\SqlClient\TdsParserStateObject.cs) instead of the corefx version. Of course, to do this you will have to merge the main branch into this one (to get #1520's changes).

Moreover, according to this comment, there may already be someone working on merging TdsParser, but there is no PR yet (they are waiting for #1844, which will help them a lot). You could port the changes to TryReadPlpUnicodeCharsChunk to netfx in order to help them. Alternatively, you could move TryReadPlpUnicodeCharsChunk into a new common file (merge just this one method and leave the rest).

That being said, I haven't checked whether TryReadPlpUnicodeCharsChunk is the same in corefx and netfx - if there are significant differences, porting/merging may be too hard. But I suspect they are identical.
Force-pushed from d920c94 to a406d3d
Could someone from the MS side check the CI machines? The enclave steps are either timing out or just plain not working.
Force-pushed from a406d3d to fa4965c
Force-pushed from fa4965c to 2ad4391
Fixed, quick, merge while it's green!
Let's get this in sooner rather than later so it gets some miles on it during preview.
Good idea. @panoskj do you have any benches you want to try with it, since you found the original issue?
In #593 (comment) @panoskj pointed out that there was a path in the example reproduction which was copying characters from a buffer one at a time in a loop and that this was inefficient. I've done a little digging and this is an attempt to speed it up. Note that not all strings are copied this way.
Unfortunately the fix isn't as simple as copying directly from the byte[]. We need to make sure that when we reach a packet boundary we call a function capable of fetching the next packet. We also need to be aware that we could end up with a partial character in the input buffer, which needs special handling. The strategy I've settled on for this PR is to identify how much can be copied in bulk from the current packet, then pick up any remainder, or simply force a new packet, by calling TryReadChar() as the code currently does. This should result in most data being read in large blocks with single chars picked up at existing boundaries, which should be faster if not the fastest possible.
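To illustrate, here is a minimal sketch of that strategy. The fields and helpers (_inBuff, _inBytesUsed, _inBytesRead, TryReadChar) are simplified stand-ins modelled loosely on the parser's state object, not the PR's actual code:

```csharp
using System;

// Rough sketch of the bulk-copy strategy described above; assumes UTF-16
// (2 bytes per char) data sitting in a simple packet buffer.
class TryReadCharsSketch
{
    private byte[] _inBuff = new byte[8000]; // current packet's bytes
    private int _inBytesUsed;                // read position within the packet
    private int _inBytesRead;                // count of valid bytes in the packet

    public bool TryReadChars(char[] chars, int charsOffset, int charsCount)
    {
        while (charsCount > 0)
        {
            // Whole UTF-16 code units left in this packet (2 bytes each);
            // a trailing odd byte is a partially received character.
            int wholeChars = Math.Min(charsCount, (_inBytesRead - _inBytesUsed) / 2);

            if (wholeChars > 0)
            {
                // Fast path: block-copy whole characters out of the packet.
                Buffer.BlockCopy(_inBuff, _inBytesUsed, chars, charsOffset * 2, wholeChars * 2);
                _inBytesUsed += wholeChars * 2;
                charsOffset += wholeChars;
                charsCount -= wholeChars;
            }
            else if (TryReadChar(out chars[charsOffset]))
            {
                // Slow path: handles a partial character and fetches the
                // next packet when the current one is exhausted.
                charsOffset++;
                charsCount--;
            }
            else
            {
                return false; // not enough data yet (async) or read failed
            }
        }
        return true;
    }

    // Stand-in for the existing per-character reader.
    private bool TryReadChar(out char value) { value = '\0'; return false; }
}
```

The appeal of this shape is that the slow path remains the only place that has to reason about partial characters and packet fetches, so the bulk path never touches packet boundaries at all.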
/cc @panoskj could you take a look and see if you agree with this and possibly run it through your benches/profiler to see if it improves the numbers you saw?