Handled errors during validation via error_score parameter #82
base: master
Conversation
It is OK for me. You could rebase on the master branch to merge these changes.
For the sake of consistency, it could be good to update the other evaluations as well.
I think there were some issues with CrossSubjectEvaluation, but I can check again and update this PR if possible.
My newest commit should contain the necessary changes; however, I have not yet tested this, as I currently do not use any of the other evaluations. I do not know when I will be able to check this on my ERP benchmark.
Did you have the time to check your code? You could add some tests to verify that it is working correctly.
When using error_score=np.nan in sklearn, the result is an np.nan value whenever an error occurs during transformations or model fitting. For my personal use case, I would also like np.nan values when an error occurs in the validation itself. I have a dataset for which (due to early stopping) the AUC sometimes cannot be calculated for 5 folds. In current moabb, this error is raised and stops the benchmark. So far I have only implemented it for WithinSessionEvaluation, as I don't use the other ones.
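To make the distinction concrete, here is a small sketch (not the actual moabb code) of the two cases the description contrasts: sklearn's error_score covers errors raised during fit, while errors raised by the scorer itself need separate handling. The SometimesFailing class and the nan_on_scoring_error wrapper are hypothetical illustrations, not part of moabb or sklearn.

```python
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.model_selection import cross_val_score

class SometimesFailing(BaseEstimator, ClassifierMixin):
    """Toy classifier whose fit raises on every other call,
    simulating a pipeline that fails in some folds only."""
    n_fits = 0  # class-level counter: cross_val_score clones the estimator per fold

    def fit(self, X, y):
        type(self).n_fits += 1
        if type(self).n_fits % 2 == 1:
            raise ValueError("simulated fitting error")
        self.classes_ = np.unique(y)
        return self

    def predict(self, X):
        # Always predict class 0; with balanced folds this scores 0.5 accuracy.
        return np.zeros(len(X), dtype=int)

X = np.random.rand(20, 3)
y = np.array([0, 1] * 10)

# error_score=np.nan: folds whose *fit* fails get a NaN score instead
# of aborting the whole cross-validation (sklearn emits FitFailedWarning).
scores = cross_val_score(SometimesFailing(), X, y, cv=5, error_score=np.nan)
print(scores)

# Errors raised while *scoring* (e.g. AUC undefined for a fold) are not
# covered by error_score; a hypothetical wrapper handling that case:
def nan_on_scoring_error(scoring_fn):
    def wrapped(estimator, X, y):
        try:
            return scoring_fn(estimator, X, y)
        except Exception:
            return np.nan
    return wrapped
```

With sequential execution the odd-numbered fits fail, so the printed array alternates NaN and valid fold scores rather than raising.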