test(python): Reorganize benchmark test folder #6695
Conversation
py-polars/tests/unit/test_series.py
```python
def test_mean_overflow() -> None:
    arr = np.array([255] * (1 << 17), dtype="int16")
    assert arr.mean() == 255.0
```
This doesn't test anything Polars-related, so I removed it.
It does! It was testing that computing the mean doesn't overflow. A naive sum/count would overflow the i16.
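A minimal NumPy sketch of the failure mode being guarded against (illustration only, not the Polars kernel):

```python
import numpy as np

arr = np.array([255] * (1 << 17), dtype="int16")

# The true sum is 255 * 2**17 = 33_423_360, far beyond the int16
# maximum of 32_767. Forcing the accumulator to stay int16 wraps the
# running sum modulo 2**16 -- here to exactly 0 -- so a naive
# sum/count mean comes out wrong.
wrapped = arr.sum(dtype="int16")  # overflows silently -> 0
widened = arr.sum(dtype="int64")  # widened accumulator -> 33_423_360

print(wrapped / arr.size)  # 0.0   (naive, overflowed)
print(widened / arr.size)  # 255.0 (correct)
```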
Hmm, but it's just numpy code though? 🤔 We don't rely on this to calculate our mean, right?
Reverted this removal, but I still don't understand why this test is in there!
The slow tests were explicitly run in the benchmark folder because Polars is built in release mode there, so we can test really slow code.
Ah interesting, learned a lot from your comments 😄 I'm gonna revert a lot of this stuff and add a bit of documentation in that folder on what's going on there.

Yeah, maybe most ideal would be having those loose files also converted to pytest functions and running those "release-build" tests in CI only once the benchmark has finished. That same binary is then used for those tests.
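A rough sketch of what that could look like — the `benchmark` marker name matches this PR, but the conftest hook and the test itself are hypothetical:

```python
# conftest.py (hypothetical) -- register the marker so pytest doesn't warn
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "benchmark: slow tests that run against the release build"
    )


# test_groupby_bench.py (hypothetical) -- a former loose script, now a
# marked pytest function that CI selects with `pytest -m benchmark`
import pytest


@pytest.mark.benchmark
def test_heavy_groupby() -> None:
    ...
```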
(force-pushed from c2fbc7b to d52fc8b)
(force-pushed from d6dae24 to 85a950a)
(force-pushed from 85a950a to 8c4ce49)
Done, and documented in the new Test Suite README 😄
Nice cleanup. Especially formalizing the release tests is a good one.
I hope we can add the full TPCH benchmarks in the future.
```diff
@@ -0,0 +1,148 @@
"""
```
Nice, that's cleaner indeed. :)
Really nice.
Related to #6364

Changes:
- Converted the loose benchmark files to `pytest` tests and marked them as `benchmark` tests. These are run in the `benchmark` workflow, and their run durations are reported in the logs.
- Renamed the `db-benchmark` folder to `benchmark` and added some documentation to the benchmark script.
- Renamed `docs/run_doc_examples.py` to `docs/run_doctest.py`.
- Added a `README.md` to the `py-polars/tests` folder, and linked to it in the `CONTRIBUTING` guide. Feels good to make a bunch of this knowledge explicit to future contributors!
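As a usage sketch, assuming only the `benchmark` marker introduced here (the flags below are standard pytest options, not necessarily the project's exact CI invocation):

```python
import pytest

# Everyday development run: deselect the slow release-build benchmarks.
pytest.main(["-m", "not benchmark"])

# Benchmark workflow: run only the marked tests and report each test's
# duration, which is how durations end up in the CI logs.
pytest.main(["-m", "benchmark", "--durations=0"])
```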