Implemented proper work with multiple threads #1361
Conversation
05c32e8 to 5460384 (force-push)
ed4ccb1 to 8907d32 (force-push)
In the next PRs, please don't mix implementing a common feature with a bug fix. But for now we can leave it as is.
/intelci: run
/intelci: run
7406562 to 1530a4a (force-push)
/intelci: run
1 similar comment
/intelci: run
1530a4a to d7ed459 (force-push)
@olegkkruglov please rebase your branches and run intelci
/intelci: run
/intelci: run
Any additional tests/examples for this functionality? Or is it covered by existing tests?
You can use this reproducer for the test (note: the original snippet was missing the `ExtraTreesClassifier` import):

```python
import numpy as np
from sklearnex import patch_sklearn, config_context
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, ExtraTreesClassifier

patch_sklearn()
from sklearn.svm import SVC

X, y = make_classification(
    n_samples=1000,
    n_features=4,
    n_informative=2,
    n_redundant=0,
    random_state=0,
    shuffle=False,
)

with config_context(target_offload="gpu"):
    ExtraTreesClassifier(max_depth=2, random_state=0).fit(X, y)
    # decision_function
    ensemble = BaggingClassifier(
        SVC(decision_function_shape="ovr"), n_jobs=3, random_state=0
    ).fit(X, y)
```

The result is:
On master it falls back to CPU, so your branch works correctly.
Perhaps adding something similar with the allow_fallback_to_host=False flag added to config_context would be a good test of whether things are working properly?
Good point, we can update the logger for that as well.
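As background for why this needs an explicit test: scikit-learn-style configuration is typically thread-local, so a config_context entered in the main thread is not automatically visible inside worker threads unless the library explicitly propagates it. A minimal stdlib-only sketch (hypothetical `set_config`/`get_config` names, not the actual sklearnex implementation):

```python
import threading

# Sketch: a thread-local "config" analogous in spirit to
# scikit-learn's get_config/set_config (names are illustrative).
_config = threading.local()

def set_config(**kwargs):
    current = getattr(_config, "values", {})
    _config.values = {**current, **kwargs}

def get_config():
    return dict(getattr(_config, "values", {}))

seen_in_worker = []

def worker():
    # A freshly started thread sees an empty config unless the parent
    # thread's settings are explicitly passed in and re-applied here.
    seen_in_worker.append(get_config())

set_config(target_offload="gpu")
t = threading.Thread(target=worker)
t.start()
t.join()

print(get_config())       # the main thread sees the setting
print(seen_in_worker[0])  # the worker thread sees an empty config
```

This is exactly the situation a test with n_jobs > 1 inside config_context would exercise: without explicit propagation, the workers would silently run with default settings.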
@Mergifyio rebase
❌ Base branch update has failed. Git reported the following error:
/intelci: run
sklearnex/tests/test_parallel.py (Outdated)

```python
def test_config_context_in_parallel():
    x, y = make_classification(random_state=42)
    try:
        with config_context(target_offload="gpu"):
```
Suggested change:

```diff
-        with config_context(target_offload="gpu"):
+        with config_context(target_offload="gpu", allow_fallback_to_host=False):
```
Actually, maybe this modification isn't necessary, but I am a bit confused by the test: when would dpctl be available but no GPU? Thanks for adding the test.
dpctl is not only for GPU devices. For example, the instances used by the Azure Pipelines CI have dpctl installed without a GPU.
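A small helper along these lines makes that skip condition explicit in a test. This is a sketch: it assumes dpctl exposes a `has_gpu_devices()` helper, and it degrades to False when dpctl is not installed at all.

```python
import importlib.util

def gpu_device_available():
    """Return True only if dpctl is installed AND a GPU device is present.

    dpctl being importable is not enough: CI runners (e.g. the Azure
    Pipelines instances mentioned above) can have dpctl without any GPU.
    """
    if importlib.util.find_spec("dpctl") is None:
        return False
    import dpctl  # imported lazily so the helper works without dpctl

    # Assumption: dpctl exposes has_gpu_devices(); adjust if the API differs.
    return bool(dpctl.has_gpu_devices())

print(gpu_device_available())
```

A test could then distinguish "dpctl missing" (skip entirely) from "dpctl present, no GPU" (expect CPU fallback or an error, depending on allow_fallback_to_host).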
/intelci: run
2 similar comments
/intelci: run
/intelci: run
Please attach the GPU CI job as well.
http://intel-ci.intel.com/ee51704d-b860-f1a2-a5d9-a4bf010d0e2e
@Alexsandruss, should the example be updated? https://github.com/intel/scikit-learn-intelex/blob/master/examples/sklearnex/n_jobs.py
It will be updated in the next PR with the n_jobs parameter update.
```
@@ -54,7 +54,9 @@
pytest --verbose --pyargs ${daal4py_dir}/daal4py/sklearn
return_code=$(($return_code + $?))

echo "Pytest of sklearnex running ..."
pytest --verbose --pyargs ${daal4py_dir}/sklearnex
# TODO: investigate why test_monkeypatch.py might cause failures of other tests
```
Do we have a proper tracker for this issue?
Changes proposed in this pull request: