PERF: regression from 0.10.1 #3089
I think that the multi-level reindexing ones are possibly related to the …
bisected?
vbench was meant exactly for that, but it hasn't been updated in a while.
can we run it on each commit and get the graph?
That's its reason for being. I just added test_perf a few months ago for the devs.
agreed...how does it update, though?
I think wes goes into the basement and presses a button.
we're getting too chatty on the issue tracker, we need an irc channel. or campfire.
FWIW, I lurk and learn, but I'm sure others hate the noise just as much as I find it helpful. A dev list might be preferable to IRC unless you can commit resources to a bot to archive everything in a publicly searchable way. I don't know how public campfire is.
We're definitely pushing the boundaries of S/N, that's not cool.
fyi...i think i found this regression (my issue!)...will issue a fix in a bit
I could set up a hipchat for pandas devs if you like, but that wouldn't be public.
#3093 fixes indexing_boolean_data_frame_rows
@wesm, punting on the chat question. would it be possible to get a daily vbench going? we're discovering perf regressions by accident...
hmm, well, i have kept all my old vb_suite logs (including several for testing #2819, #2867 against their immediate base commits) and I can't find any entries from where …
pretty easy to break perf; thxs for checking
one thing to note is that i'm on 32-bit, that might make a difference...
@y-p is there a way to 'save' a vb_suite db for a particular commit, so that subsequent runs of test_perf can just refer to it rather than rebuilding it? (and if not, maybe add an option to test_perf?) or maybe just save the build (rather than the db), but that would still save the get/build time
vb_suite has a persistent db, test_perf purges the db between runs. I'll add an option
@stephenwlin
@y-p thxs
I remember that thread. "is memmove slower than memcpy".
hmm...well, that's weird, i also have a vb_suite diff that's …
@jreback can you give me info about the machine you're using to test this (cython version, gcc version, etc.)? this commit literally does nothing but change an explicit loop into a single …
As you posted elsewhere, the x64 case is probably using SIMD / compiler optimizations
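For readers following along, here is a minimal C sketch of the kind of change being discussed: an explicit element-by-element copy loop replaced by a single library call. The function names and the int64 element type are assumptions for illustration, not the actual pandas/Cython code.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Before: an explicit copy loop. Whether the compiler vectorizes this
   depends on the target and flags; 32-bit builds often stay scalar. */
static void copy_loop(int64_t *dst, const int64_t *src, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i];
}

/* After: a single call. libc typically dispatches to a SIMD-optimized
   routine, which may explain why x64 builds see no regression. */
static void copy_call(int64_t *dst, const int64_t *src, size_t n)
{
    memcpy(dst, src, n * sizeof(int64_t));
}
```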
@stephenwlin did a great writeup for those who are interested: http://mail.python.org/pipermail/pandas-dev/2013-March/000008.html going to close this issue for now, created #3114 to mark that we need a method for creating various …
@stephenwlin, that was a formidable piece.
Found out it had to do with the fact that the width of the array to be copied is not a compile-time constant. I posted a question on StackOverflow about this: http://stackoverflow.com/questions/15554565/forcing-gcc-to-perform-loop-unswitching-of-memcpy-runtime-size-checks
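To make the "width is not a compile-time constant" point concrete, here is a hedged C sketch of manual loop unswitching around memcpy, along the lines of the StackOverflow question: when the size is a literal, gcc can inline the copy as a few fixed-size moves; when it is a runtime variable inside a loop, checking for common widths once, outside the loop, recovers the constant-size case on each branch. All names and the chosen widths are illustrative.

```c
#include <stddef.h>
#include <string.h>

/* Copy `nrows` rows of `stride` bytes each. `stride` is only known at
   runtime, so a plain memcpy(dst, src, stride) in the loop cannot be
   inlined to fixed-size moves. Hoisting the width check out of the loop
   ("unswitching" it by hand) gives each branch a compile-time size. */
static void copy_rows(char *dst, const char *src, size_t nrows, size_t stride)
{
    if (stride == 8) {              /* common case, e.g. one int64 column */
        for (size_t i = 0; i < nrows; i++)
            memcpy(dst + i * 8, src + i * 8, 8);       /* constant size: inlined */
    } else if (stride == 16) {
        for (size_t i = 0; i < nrows; i++)
            memcpy(dst + i * 16, src + i * 16, 16);
    } else {
        for (size_t i = 0; i < nrows; i++)
            memcpy(dst + i * stride, src + i * stride, stride);  /* general call */
    }
}
```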
@stephenwlin you are deep into the C code! (that's a good thing!)
@jreback, can you vbench #3130? I went the magic-number route (for now); 128 bytes ended up slowing another test (didn't look into the particulars of it, though), so I'm going with 64 bytes for now. EDIT: apparently I got some things mixed up here; #3130 as tested and submitted has a 256-byte limit and it doesn't have any downside, so I'm going with that
testing now...i'll post on that thread
@jreback actually, i got my commits mixed up; turns out this was 256 bytes (it's the largest limit that doesn't have negative impacts on other tests, which is what we want)...I amended the comment and the issue, but the commit was good (I verified the hash from my vb_suite log, just in case)
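A rough sketch of what a byte-limit cutoff like the one described can look like; the 256-byte figure comes from the comments above, while the function name and the exact small-copy strategy are assumptions for illustration, not the code in #3130.

```c
#include <stddef.h>
#include <string.h>

/* Largest cutoff reported above as having no negative impact on the
   other benchmarks. */
#define SMALL_COPY_LIMIT 256

static void copy_dispatch(char *dst, const char *src, size_t nbytes)
{
    if (nbytes <= SMALL_COPY_LIMIT) {
        /* small widths: a simple loop avoids per-call dispatch overhead
           (assumed strategy, for illustration only) */
        for (size_t i = 0; i < nbytes; i++)
            dst[i] = src[i];
    } else {
        /* large widths: hand off to the optimized library routine */
        memcpy(dst, src, nbytes);
    }
}
```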
f6ac8c0 to 05a737d
[vbench results posted under the headings "Bad news", "Good news", "???", and "Fixed"; the result tables were not captured]