
Fix ruff comprehensions and performance issues #490

Merged: 3 commits into readthedocs:main on Nov 25, 2024

Conversation

cclauss (Contributor) commented Oct 12, 2024

% ruff check --select=C4,PERF --statistics .

3	C408   	[*] unnecessary-collection-call
2	PERF401	[ ] manual-list-comprehension
1	C400   	[*] unnecessary-generator-list
1	PERF203	[ ] try-except-in-loop
1	PERF403	[ ] manual-dict-comprehension
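To illustrate what the other flagged rules look like in practice, here is a minimal sketch (hypothetical code, not the PR's actual diff) showing the flagged form and the preferred form for C408, C400, and PERF403:

```python
# C408 unnecessary-collection-call: prefer a literal over a constructor call.
config = dict(name="demo", retries=3)    # flagged
config = {"name": "demo", "retries": 3}  # preferred

# C400 unnecessary-generator-list: use a list comprehension directly.
squares = list(x * x for x in range(5))  # flagged
squares = [x * x for x in range(5)]      # preferred

# PERF403 manual-dict-comprehension: build dicts with a comprehension.
pairs = [("a", 1), ("b", 2)]
lookup = {}
for key, value in pairs:  # flagged form
    lookup[key] = value
lookup = {key: value for key, value in pairs}  # preferred
```

The `[*]` marker in the statistics above indicates rules that ruff can fix automatically with `--fix`; the `[ ]` rules require manual edits like those in this PR.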

% ruff rule PERF401

manual-list-comprehension (PERF401)

Derived from the Perflint linter.

What it does

Checks for for loops that can be replaced by a list comprehension.

Why is this bad?

When creating a transformed list from an existing list using a for-loop,
prefer a list comprehension. List comprehensions are more readable and
more performant.

Using the below as an example, the list comprehension is ~10% faster on
Python 3.11, and ~25% faster on Python 3.10.

Note that, as with all perflint rules, this is only intended as a
micro-optimization, and will have a negligible impact on performance in
most cases.

Example

original = list(range(10000))
filtered = []
for i in original:
    if i % 2:
        filtered.append(i)

Use instead:

original = list(range(10000))
filtered = [x for x in original if x % 2]

If you're appending to an existing list, use the extend method instead:

original = list(range(10000))
filtered = []  # the existing list being appended to
filtered.extend(x for x in original if x % 2)
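The speedup claimed above can be checked with a quick `timeit` sketch (exact numbers vary by machine and Python version, so none are asserted here):

```python
import timeit

setup = "original = list(range(10000))"
loop_stmt = """
filtered = []
for i in original:
    if i % 2:
        filtered.append(i)
"""
comp_stmt = "filtered = [x for x in original if x % 2]"

# Sanity check: both forms produce the same list.
original = list(range(10000))
filtered_loop = []
for i in original:
    if i % 2:
        filtered_loop.append(i)
assert filtered_loop == [x for x in original if x % 2]

t_loop = timeit.timeit(loop_stmt, setup=setup, number=200)
t_comp = timeit.timeit(comp_stmt, setup=setup, number=200)
print(f"loop: {t_loop:.4f}s  comprehension: {t_comp:.4f}s")
```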

@@ -7,6 +7,7 @@ on:
jobs:
test:
strategy:
fail-fast: false
Collaborator
Please can we leave this on. It seems wasteful to run everything if something has failed.

cclauss (Contributor, Author)
Wasteful of compute time or developer time? fail-fast: false saves developers time because they see all failing jobs in a single run, can understand the root cause faster, and fix it more efficiently. If this helps them make fewer commits, then it also reduces compute time.

fail-fast: true is a good default for GitHub because it reduces compute time; however, it forces the developer to go through multiple commits when debugging multiple problems.
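For reference, the setting under discussion lives in the workflow's strategy block; a minimal sketch (the matrix values are illustrative, not taken from this repository's workflow):

```yaml
jobs:
  test:
    strategy:
      fail-fast: false  # keep running remaining matrix jobs after one fails
      matrix:
        python-version: ["3.10", "3.11", "3.12", "3.13"]
```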

if isinstance(child, astroid.ClassDef):
    classes.append(child)
return classes
return [
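The diff excerpt above replaces an append loop with a multi-line list comprehension (PERF401). A self-contained sketch of the two equivalent forms, using stand-in classes in place of the real astroid types (all names here are hypothetical):

```python
class ClassDef:  # stand-in for astroid.ClassDef
    pass

class Other:  # stand-in for any non-class node
    pass

class Node:  # stand-in for an astroid node with children
    def __init__(self, children):
        self._children = children

    def get_children(self):
        return iter(self._children)

def get_classes_loop(node):
    # Before: explicit loop with append.
    classes = []
    for child in node.get_children():
        if isinstance(child, ClassDef):
            classes.append(child)
    return classes

def get_classes_comp(node):
    # After: the equivalent multi-line list comprehension.
    return [
        child
        for child in node.get_children()
        if isinstance(child, ClassDef)
    ]
```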
Collaborator
I'm not a fan of multi-line list comprehensions. I think we sacrifice readability for a minimal performance benefit. Please could this stay as-is.

@@ -0,0 +1 @@
Fix ruff comprehensions and performance issues, limit new version of sphinx, add Py312, Py313 to trove classifiers
Collaborator
The sphinx issue has been addressed and the classifiers have been added since this was written. Please could you correct these.

@cclauss cclauss force-pushed the fix-ruff-compressions-and-perf branch from 3536d95 to ee2f8ea Compare October 26, 2024 05:23
@AWhetter AWhetter merged commit 4e2d9ae into readthedocs:main Nov 25, 2024
18 checks passed
@cclauss cclauss deleted the fix-ruff-compressions-and-perf branch November 25, 2024 05:30
2 participants