[3.11] gh-93382: Cache result of PyCode_GetCode in codeobject (GH-93383) #93493
Conversation
gh-93382: Cache result of PyCode_GetCode in codeobject (GH-93383)
Co-authored-by: Kumar Aditya <59607654+kumaraditya303@users.noreply.github.com>
Co-authored-by: Dennis Sweeney <36520290+sweeneyde@users.noreply.github.com>
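For context, PyCode_GetCode() (new in the Python 3.11 C API) materializes the co_code bytes object on demand rather than keeping it around, so callers that read it repeatedly, such as tracers and profilers, pay that cost on every access unless the result is cached. The minimal embedding sketch below illustrates the observable difference; demo_code_identity and the compiled snippet are ours for illustration, and whether the two calls return the same object depends on whether this caching change is present in the interpreter you build against.

```c
/* Sketch only: shows what caching the PyCode_GetCode() result changes
 * from the caller's point of view. Requires Python 3.11+ headers. */
#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include <stdio.h>

static void
demo_code_identity(PyCodeObject *co)
{
    /* Each call asks the interpreter for the co_code bytes object. */
    PyObject *first = PyCode_GetCode(co);   /* new reference */
    PyObject *second = PyCode_GetCode(co);  /* new reference */
    if (first != NULL && second != NULL) {
        /* Same contents either way; whether they are the same object
         * depends on whether the result is cached on the code object. */
        printf("same object: %s\n", first == second ? "yes" : "no");
    }
    Py_XDECREF(first);
    Py_XDECREF(second);
}

int
main(void)
{
    Py_Initialize();
    PyObject *code = Py_CompileString("x = 1\n", "<demo>", Py_file_input);
    if (code == NULL) {
        PyErr_Print();
    }
    else {
        demo_code_identity((PyCodeObject *)code);
        Py_DECREF(code);
    }
    return Py_FinalizeEx() < 0 ? 1 : 0;
}
```

Built against 3.11 headers (e.g. something like cc demo.c $(python3.11-config --cflags --embed --ldflags)), it prints "no" on an interpreter without the cache and "yes" once the result is cached; either way the caller owns a new reference and must release it.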
@pablogsal and @nedbat This PR improves coverage performance on 3.11 by 5-7%, measured with Ned's benchmarks.
An observation: bm_spectral_norm improved less than bm_sudoku, because the size of its …
Unfortunately, this means that whatever is making Python 3.11 slower with coverage is not only this :( Great investigation @Fidget-Spinner, and thanks for working on this. I will try to review ASAP, but it would be great if @markshannon @iritkatriel @brandtbucher or @ericsnowcurrently can take a look.
I'm on mobile right now (probably can't get to a computer today), but here are a few thoughts based on a first look:
I also ran the benchmarks using #93493:
It's a slight improvement, but it isn't solving the problem.
Yeah, I'm aware. I sent this PR in because a 10% improvement on macrobenchmarks is still something! Even cProfile slowed down by 60% when profiling code in 3.11. My hunch is that accessing the full …
Hm, it looks like in some cases this actually makes things a bit slower (perhaps due to the extra memory consumption)? Maybe we should pause this PR until @markshannon has finished reworking the line number calculations, which seem to be the bulk of the issue at this point. Or at least see if it slows down pyperformance at all before merging?
When I benchmarked on the main/3.12 branch, it didn't make anything slower #93383 (comment). However, I agree on waiting for a while. When benchmarking with Ned's benchmarks, I saw a slowdown in …
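The change under discussion boils down to a compute-once, hand-out-new-references pattern. As a rough illustration only (CodeView and cached_code are hypothetical names for this sketch, not the struct members the patch actually touches), a caller-side cache could look like this:

```c
/* Rough sketch of the compute-once idiom under discussion.
 * Requires Python 3.11+ headers; names here are illustrative. */
#define PY_SSIZE_T_CLEAN
#include <Python.h>

typedef struct {
    PyCodeObject *co;
    PyObject *cached_code;      /* NULL until the first request */
} CodeView;

static PyObject *
codeview_get_code(CodeView *view)
{
    if (view->cached_code == NULL) {
        /* Expensive path: PyCode_GetCode() materializes the bytes. */
        view->cached_code = PyCode_GetCode(view->co);
        if (view->cached_code == NULL) {
            return NULL;        /* propagate the error */
        }
    }
    /* Cheap path afterwards: hand out a new reference to the cache. */
    return Py_NewRef(view->cached_code);
}

static void
codeview_clear(CodeView *view)
{
    /* The owner must drop the cached reference when done. */
    Py_CLEAR(view->cached_code);
}
```

The PR itself keeps the cached bytes on the code object and releases them when the code object is deallocated; the point of the sketch is only that the expensive materialization happens at most once per code object.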
Misc/NEWS.d/next/Core and Builtins/2022-05-31-16-36-30.gh-issue-93382.Jf6gAj.rst
When you're done making the requested changes, leave the comment: I have made the requested changes; please review again.
Co-Authored-By: Kumar Aditya <59607654+kumaraditya303@users.noreply.github.com>
I have made the requested changes; please review again, Mark.
Thanks for making the requested changes! @markshannon: please review the changes made to this pull request.
Looks good.
LGTM
(cherry picked from commit d52ffc1)
PyCode_GetCode could be faster #93382