
Possible typo in kernels.py:sum_reduction function #54

Closed
oleksandr-pavlyk opened this issue Nov 15, 2022 · 3 comments · Fixed by #72

Comments

@oleksandr-pavlyk
Contributor

https://github.com/soda-inria/sklearn-numba-dpex/blob/main/sklearn_numba_dpex/common/kernels.py#L221-L225

The function reads:

def sum_reduction(summands):
    for kernel, result in kernels_and_empty_tensors_pairs:
        kernel(summands, result)
        summands = result
    return result

The returned variable result is referenced outside the scope in which it is defined (it is a local variable bound inside the loop body).

I believe the last line should be return summands instead.
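
For illustration, a minimal sketch of the suggested change (same body as the quoted function; only the returned name differs):

def sum_reduction(summands):
    for kernel, result in kernels_and_empty_tensors_pairs:
        kernel(summands, result)
        # the partial result becomes the input of the next kernel
        summands = result
    return summands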

@fcharras
Collaborator

I can change this in a PR if it's more readable, but in fact, in Python, variables defined in a loop live in the same scope as the other local variables.

E.g. you can do:

for i in range(10):
    pass
print(i)

The output will be 9.

BTW we just merged unit tests for the reduction kernels and added a reduction for 2d tensors on axis 1 😁

(See #46 for some thinking on using custom kernels vs dpnp functions)

@oleksandr-pavlyk
Contributor Author

Thanks for highlighting #46!

I think return summands would indeed read better, but it's just a suggestion.

@ogrisel
Collaborator

ogrisel commented Dec 6, 2022

No strong opinion; the only behavioral difference is when the list of kernels is empty: return result would then raise an UnboundLocalError, while return summands would return the input unchanged.
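
For concreteness, a minimal sketch of that edge case (the function names and the explicit pairs argument here are hypothetical, for illustration only):

def reduce_returning_result(pairs, summands):
    for kernel, result in pairs:
        kernel(summands, result)
        summands = result
    return result  # never bound if pairs is empty

def reduce_returning_summands(pairs, summands):
    for kernel, result in pairs:
        kernel(summands, result)
        summands = result
    return summands  # falls back to the original input if pairs is empty

reduce_returning_summands([], "input")  # returns "input"
reduce_returning_result([], "input")    # raises UnboundLocalError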
