
pytest failures and Lint Errors #502

Closed
zhenyuz0500 opened this issue Apr 29, 2022 · 4 comments
Labels
bug Something isn't working

Comments


zhenyuz0500 commented Apr 29, 2022

Describe the bug
Recent builds create pytest failures and lint errors that need diagnosis.

Lint errors occurred in PRs #492 and #496.

To Reproduce
pytest -vs tests/ --cov causalml/

Screenshots
Error logs:

% pytest -vs tests/ --cov causalml/
==================================================================================================== test session starts =====================================================================================================
platform darwin -- Python 3.8.8, pytest-6.2.3, py-1.10.0, pluggy-0.13.1 -- /Users/zhenyuzhao-zz/opt/anaconda3/bin/python
cachedir: .pytest_cache
rootdir: /Users/zhenyuzhao-zz/Documents/Programming/git_repo/causalml
plugins: anyio-2.2.0, cov-3.0.0
collected 64 items

tests/test_cevae.py::test_CEVAE
PASSED
tests/test_counterfactual_unit_selection.py::test_counterfactual_unit_selection PASSED
tests/test_datasets.py::test_get_synthetic_preds[simulate_nuisance_and_easy_treatment] PASSED
tests/test_datasets.py::test_get_synthetic_preds[simulate_hidden_confounder] PASSED
tests/test_datasets.py::test_get_synthetic_preds[simulate_randomized_trial] PASSED
tests/test_datasets.py::test_get_synthetic_summary Abs % Error of ATE MSE KL Divergence
Actuals 0.000000 0.000000 0.000000
S Learner (LR) 0.581879 0.125334 3.828739
T Learner (XGB) 0.323199 1.186263 1.424861
PASSED
tests/test_datasets.py::test_get_synthetic_preds_holdout PASSED
tests/test_datasets.py::test_get_synthetic_summary_holdout ( Abs % Error of ATE MSE KL Divergence
Actuals 0.000000 0.000000 0.000000
S Learner (LR) 0.359446 0.072330 4.033648
S Learner (XGB) 0.041486 0.319411 0.824989
T Learner (LR) 0.358963 0.037750 0.440597
T Learner (XGB) 0.085843 1.257743 1.500363
X Learner (LR) 0.358963 0.037750 0.440597
X Learner (XGB) 0.081450 0.504336 1.116033
R Learner (LR) 0.327808 0.044548 0.408275
R Learner (XGB) 0.112043 4.740827 2.079625, Abs % Error of ATE MSE KL Divergence
Actuals 0.000000 0.000000 0.000000
S Learner (LR) 0.401601 0.080840 3.944126
S Learner (XGB) 0.073994 0.283661 0.948965
T Learner (LR) 0.353086 0.033973 0.695373
T Learner (XGB) 0.090676 0.652876 1.350948
X Learner (LR) 0.353086 0.033973 0.695373
X Learner (XGB) 0.020359 0.332149 1.097056
R Learner (LR) 0.299843 0.037296 0.616230
R Learner (XGB) 0.164259 1.830500 1.492768)
PASSED
tests/test_datasets.py::test_get_synthetic_auuc Learner cum_gain_auuc
0 Actuals 3082.158899
2 T Learner (XGB) 2630.595869
3 Random 2490.139546
1 S Learner (LR) 2463.126242
PASSED
tests/test_features.py::test_load_data PASSED
tests/test_features.py::test_LabelEncoder PASSED
tests/test_features.py::test_OneHotEncoder PASSED
PASSED  (tqdm progress bar trimmed)
tests/test_match.py::test_nearest_neighbor_match_by_group PASSED
tests/test_match.py::test_match_optimizer PASSED
tests/test_meta_learners.py::test_synthetic_data PASSED
tests/test_meta_learners.py::test_BaseSLearner PASSED
PASSED  (tqdm progress bar trimmed)
tests/test_meta_learners.py::test_LRSRegressor PASSED
(12 more tqdm progress bars trimmed; each of the 12 corresponding tests PASSED)
tests/test_meta_learners.py::test_TMLELearner PASSED
tests/test_meta_learners.py::test_BaseSClassifier [01:42:02] WARNING: /opt/concourse/worker/volumes/live/7a2b9f41-3287-451b-6691-43e9a6c0910f/volume/xgboost-split_1619728204606/work/src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
PASSED
tests/test_meta_learners.py::test_BaseTClassifier PASSED
tests/test_meta_learners.py::test_BaseXClassifier [01:42:03] WARNING: /opt/concourse/worker/volumes/live/7a2b9f41-3287-451b-6691-43e9a6c0910f/volume/xgboost-split_1619728204606/work/src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
(same XGBoost eval_metric warning repeated 3 more times)
PASSED
tests/test_meta_learners.py::test_BaseRClassifier /Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/xgboost/sklearn.py:888: UserWarning: The use of label encoder in XGBClassifier is deprecated and will be removed in a future release. To remove this warning, do the following: 1) Pass option use_label_encoder=False when constructing XGBClassifier object; and 2) Encode your labels (y) as integers starting with 0, i.e. 0, 1, 2, ..., [num_class - 1].
warnings.warn(label_encoder_deprecation_msg, UserWarning)
(same XGBClassifier label-encoder deprecation warning repeated 4 more times)
[01:42:07] WARNING: /opt/concourse/worker/volumes/live/7a2b9f41-3287-451b-6691-43e9a6c0910f/volume/xgboost-split_1619728204606/work/src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
(same XGBoost eval_metric warning repeated 4 more times)
PASSED
tests/test_meta_learners.py::test_BaseRClassifier_with_sample_weights /Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/xgboost/sklearn.py:888: UserWarning: The use of label encoder in XGBClassifier is deprecated and will be removed in a future release. To remove this warning, do the following: 1) Pass option use_label_encoder=False when constructing XGBClassifier object; and 2) Encode your labels (y) as integers starting with 0, i.e. 0, 1, 2, ..., [num_class - 1].
warnings.warn(label_encoder_deprecation_msg, UserWarning)
(same XGBClassifier label-encoder deprecation warning repeated 4 more times)
[01:42:10] WARNING: /opt/concourse/worker/volumes/live/7a2b9f41-3287-451b-6691-43e9a6c0910f/volume/xgboost-split_1619728204606/work/src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
(same XGBoost eval_metric warning repeated 4 more times)
[0] validation_0-auc:2.75329
[1] validation_0-auc:-4.78198
[2] validation_0-auc:-3.52732
[3] validation_0-auc:-5.61398
[4] validation_0-auc:-4.07529
[5] validation_0-auc:-4.58928
[6] validation_0-auc:-3.94142
[7] validation_0-auc:-3.24399
[8] validation_0-auc:-6.07570
[9] validation_0-auc:-8.59653
[10] validation_0-auc:-7.84374
[11] validation_0-auc:-6.41278
[12] validation_0-auc:-4.45992
[13] validation_0-auc:-5.09416
[14] validation_0-auc:-5.12351
[15] validation_0-auc:-3.64573
[16] validation_0-auc:-3.01243
[17] validation_0-auc:-2.80259
[18] validation_0-auc:-1.67346
[19] validation_0-auc:-2.18767
[20] validation_0-auc:-2.77537
[21] validation_0-auc:-2.62919
[22] validation_0-auc:-2.49174
[23] validation_0-auc:-1.50853
[24] validation_0-auc:-0.98859
[25] validation_0-auc:-0.79339
[26] validation_0-auc:-0.52949
[27] validation_0-auc:-0.60107
[28] validation_0-auc:0.03568
[29] validation_0-auc:0.38625
PASSED
tests/test_meta_learners.py::test_pandas_input PASSED
PASSED  (tqdm progress bar trimmed)
tests/test_propensity.py::test_logistic_regression_propensity_model PASSED
tests/test_propensity.py::test_logistic_regression_propensity_model_model_kwargs PASSED
tests/test_propensity.py::test_elasticnet_propensity_model PASSED
tests/test_propensity.py::test_gradientboosted_propensity_model [01:42:28] WARNING: /opt/concourse/worker/volumes/live/7a2b9f41-3287-451b-6691-43e9a6c0910f/volume/xgboost-split_1619728204606/work/src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
PASSED
tests/test_propensity.py::test_gradientboosted_propensity_model_earlystopping [01:42:28] WARNING: /opt/concourse/worker/volumes/live/7a2b9f41-3287-451b-6691-43e9a6c0910f/volume/xgboost-split_1619728204606/work/src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
[0] validation_0-logloss:0.68365
[1] validation_0-logloss:0.67243
[2] validation_0-logloss:0.64779
[3] validation_0-logloss:0.62546
[4] validation_0-logloss:0.60664
[5] validation_0-logloss:0.58847
[6] validation_0-logloss:0.57660
[7] validation_0-logloss:0.57160
[8] validation_0-logloss:0.56192
[9] validation_0-logloss:0.55328
[10] validation_0-logloss:0.54668
[11] validation_0-logloss:0.54318
[12] validation_0-logloss:0.54001
[13] validation_0-logloss:0.53335
[14] validation_0-logloss:0.53029
[15] validation_0-logloss:0.52541
[16] validation_0-logloss:0.52656
[17] validation_0-logloss:0.52321
[18] validation_0-logloss:0.52327
[19] validation_0-logloss:0.52239
[20] validation_0-logloss:0.52392
[21] validation_0-logloss:0.52404
[22] validation_0-logloss:0.52376
[23] validation_0-logloss:0.52374
[24] validation_0-logloss:0.52289
[25] validation_0-logloss:0.52229
[26] validation_0-logloss:0.52028
[27] validation_0-logloss:0.52110
[28] validation_0-logloss:0.52270
[29] validation_0-logloss:0.52229
[30] validation_0-logloss:0.52084
[31] validation_0-logloss:0.52308
[32] validation_0-logloss:0.52387
[33] validation_0-logloss:0.52583
[34] validation_0-logloss:0.52824
[35] validation_0-logloss:0.53039
[36] validation_0-logloss:0.52990
PASSED
tests/test_sensitivity.py::test_Sensitivity Method ATE New ATE New ATE LB New ATE UB
0 Placebo Treatment 0.680042 -0.009359 -0.022652 0.003934
0 Random Cause 0.680042 0.680049 0.667253 0.692846
0 Subset Data(sample size @0.5) 0.680042 0.682025 0.663973 0.700077
0 Random Replace 0.680042 0.678623 0.665698 0.691548
0 Selection Bias (alpha@-0.80626, with r-sqaure:... 0.680042 1.353547 1.34094 1.366155
0 Selection Bias (alpha@-0.645, with r-sqaure:0.... 0.680042 1.218846 1.206217 1.231475
0 Selection Bias (alpha@-0.48375, with r-sqaure:... 0.680042 1.084145 1.071487 1.096803
0 Selection Bias (alpha@-0.3225, with r-sqaure:0... 0.680042 0.949444 0.936748 0.96214
0 Selection Bias (alpha@-0.16125, with r-sqaure:... 0.680042 0.814743 0.802001 0.827485
0 Selection Bias (alpha@0.0, with r-sqaure:0.0 0.680042 0.680042 0.667245 0.692838
0 Selection Bias (alpha@0.16125, with r-sqaure:0... 0.680042 0.545341 0.532482 0.558199
0 Selection Bias (alpha@0.3225, with r-sqaure:0.... 0.680042 0.41064 0.397711 0.423568
0 Selection Bias (alpha@0.48375, with r-sqaure:0... 0.680042 0.275939 0.262933 0.288944
0 Selection Bias (alpha@0.645, with r-sqaure:0.0... 0.680042 0.141237 0.128146 0.154329
0 Selection Bias (alpha@0.80626, with r-sqaure:0... 0.680042 0.006536 -0.006648 0.01972
PASSED
tests/test_sensitivity.py::test_SensitivityPlaceboTreatment Method ATE New ATE New ATE LB New ATE UB
0 Random Cause 0.678358 -0.004009 -0.017335 0.009316
PASSED
tests/test_sensitivity.py::test_SensitivityRandomCause Method ATE New ATE New ATE LB New ATE UB
0 Random Cause 0.674445 0.674436 0.661632 0.68724
PASSED
tests/test_sensitivity.py::test_SensitivityRandomReplace Method ATE New ATE New ATE LB New ATE UB
0 Random Replace 0.68259 0.809523 0.796646 0.822401
PASSED
tests/test_sensitivity.py::test_SensitivitySelectionBias alpha rsqs New ATE New ATE LB New ATE UB
0 -0.800120 0.107832 0.660142 0.647300 0.672984
0 -0.640096 0.072265 0.662217 0.649428 0.675006
0 -0.480072 0.042282 0.664292 0.651535 0.677050
0 -0.320048 0.019399 0.666367 0.653619 0.679115
0 -0.160024 0.004965 0.668442 0.655683 0.681202
0 0.000000 0.000000 0.670517 0.657724 0.683311
0 0.160024 0.005063 0.672592 0.659744 0.685441
0 0.320048 0.020160 0.674667 0.661743 0.687592
0 0.480072 0.044736 0.676742 0.663720 0.689765
0 0.640096 0.077731 0.678818 0.665678 0.691957
0 0.800120 0.117713 0.680893 0.667615 0.694170 feature partial_rsqs
0 feature_0 -0.065992
1 feature_1 -0.067405
2 feature_2 -0.000011
3 feature_3 -0.000623
4 feature_4 -0.000643
5 feature_5 -0.000026
PASSED
tests/test_sensitivity.py::test_one_sided PASSED
tests/test_sensitivity.py::test_alignment PASSED
tests/test_sensitivity.py::test_one_sided_att PASSED
tests/test_sensitivity.py::test_alignment_att PASSED
tests/test_uplift_trees.py::test_make_uplift_classification PASSED
tests/test_uplift_trees.py::test_UpliftRandomForestClassifier[threads-loky] FAILED
tests/test_uplift_trees.py::test_UpliftRandomForestClassifier[threads-threading] FAILED
tests/test_uplift_trees.py::test_UpliftRandomForestClassifier[threads-multiprocessing] FAILED
tests/test_uplift_trees.py::test_UpliftRandomForestClassifier[processes-loky] FAILED
tests/test_uplift_trees.py::test_UpliftRandomForestClassifier[processes-threading] FAILED
tests/test_uplift_trees.py::test_UpliftRandomForestClassifier[processes-multiprocessing] FAILED
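For reference, the six FAILED IDs are exactly the cross-product of the two `@pytest.mark.parametrize` decorators on `test_UpliftRandomForestClassifier` (`joblib_prefer` × `backend`); a quick sketch confirming the ID set pytest generates:

```python
import itertools

# pytest composes parametrize IDs as "<joblib_prefer>-<backend>", yielding
# the six failing cases listed in the log above.
backends = ["loky", "threading", "multiprocessing"]
joblib_prefers = ["threads", "processes"]
ids = [f"{p}-{b}" for p, b in itertools.product(joblib_prefers, backends)]
print(ids)
```

This means every backend/prefer combination fails, not a single backend in isolation.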
tests/test_uplift_trees.py::test_UpliftTreeClassifier PASSED
tests/test_uplift_trees.py::test_UpliftTreeClassifier_feature_importance PASSED
tests/test_utils.py::test_weighted_variance PASSED
tests/test_value_optimization.py::test_counterfactual_value_optimization PASSED

================================================================================================================================== FAILURES ==================================================================================================================================
______________________________________________________________________________________________________________ test_UpliftRandomForestClassifier[threads-loky] _______________________________________________________________________________________________________________
joblib.externals.loky.process_executor._RemoteTraceback:
"""
Traceback (most recent call last):
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/loky/backend/queues.py", line 153, in feed
obj = dumps(obj, reducers=reducers)
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/loky/backend/reduction.py", line 271, in dumps
dump(obj, buf, reducers=reducers, protocol=protocol)
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/loky/backend/reduction.py", line 264, in dump
_LokyPickler(file, reducers=reducers, protocol=protocol).dump(obj)
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/cloudpickle/cloudpickle_fast.py", line 563, in dump
return Pickler.dump(self, obj)
_pickle.PicklingError: Can't pickle <cyfunction UpliftRandomForestClassifier.bootstrap at 0x7fd5fff8b5f0>: attribute lookup bootstrap on causalml.inference.tree.uplift failed
"""

The above exception was the direct cause of the following exception:

generate_classification_data = <function generate_classification_data.<locals>._generate_data at 0x7fd5d1eab430>, backend = 'loky', joblib_prefer = 'threads'

@pytest.mark.parametrize("backend", ["loky", "threading", "multiprocessing"])
@pytest.mark.parametrize("joblib_prefer", ["threads", "processes"])
def test_UpliftRandomForestClassifier(
    generate_classification_data, backend, joblib_prefer
):
    df, x_names = generate_classification_data()
    df_train, df_test = train_test_split(df, test_size=0.2, random_state=RANDOM_SEED)

    with parallel_backend(backend):
        # Train the UpLift Random Forest classifier
        uplift_model = UpliftRandomForestClassifier(
            min_samples_leaf=50,
            control_name=TREATMENT_NAMES[0],
            random_state=RANDOM_SEED,
            joblib_prefer=joblib_prefer,
        )
>       uplift_model.fit(
            df_train[x_names].values,
            treatment=df_train["treatment_group_key"].values,
            y=df_train[CONVERSION].values,
        )

tests/test_uplift_trees.py:37:


causalml/inference/tree/uplift.pyx:1331: in causalml.inference.tree.uplift.UpliftRandomForestClassifier.fit
(delayed(self.bootstrap)(X, treatment, y, tree) for tree in self.uplift_forest)
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/parallel.py:1054: in __call__
self.retrieve()
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/parallel.py:933: in retrieve
self._output.extend(job.get(timeout=self.timeout))
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/_parallel_backends.py:542: in wrap_future_result
return future.result(timeout=timeout)
../../../../opt/anaconda3/lib/python3.8/concurrent/futures/_base.py:439: in result
return self.__get_result()


self = <Future at 0x7fd5d180afa0 state=finished raised PicklingError>

def __get_result(self):
    if self._exception:
      raise self._exception

E _pickle.PicklingError: Could not pickle the task to send it to the workers.

../../../../opt/anaconda3/lib/python3.8/concurrent/futures/_base.py:388: PicklingError
____________________________________________________________________________________________________________ test_UpliftRandomForestClassifier[threads-threading] ____________________________________________________________________________________________________________
joblib.externals.loky.process_executor._RemoteTraceback:
"""
Traceback (most recent call last):
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/loky/backend/queues.py", line 153, in feed
obj = dumps(obj, reducers=reducers)
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/loky/backend/reduction.py", line 271, in dumps
dump(obj, buf, reducers=reducers, protocol=protocol)
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/loky/backend/reduction.py", line 264, in dump
_LokyPickler(file, reducers=reducers, protocol=protocol).dump(obj)
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/cloudpickle/cloudpickle_fast.py", line 563, in dump
return Pickler.dump(self, obj)
_pickle.PicklingError: Can't pickle <cyfunction UpliftTreeClassifier.evaluate_KL at 0x7fd5fff7aba0>: attribute lookup evaluate_KL on causalml.inference.tree.uplift failed
"""

The above exception was the direct cause of the following exception:

generate_classification_data = <function generate_classification_data.<locals>._generate_data at 0x7fd5d1eab430>, backend = 'threading', joblib_prefer = 'threads'

@pytest.mark.parametrize("backend", ["loky", "threading", "multiprocessing"])
@pytest.mark.parametrize("joblib_prefer", ["threads", "processes"])
def test_UpliftRandomForestClassifier(
    generate_classification_data, backend, joblib_prefer
):
    df, x_names = generate_classification_data()
    df_train, df_test = train_test_split(df, test_size=0.2, random_state=RANDOM_SEED)

    with parallel_backend(backend):
        # Train the UpLift Random Forest classifier
        uplift_model = UpliftRandomForestClassifier(
            min_samples_leaf=50,
            control_name=TREATMENT_NAMES[0],
            random_state=RANDOM_SEED,
            joblib_prefer=joblib_prefer,
        )

        uplift_model.fit(
            df_train[x_names].values,
            treatment=df_train["treatment_group_key"].values,
            y=df_train[CONVERSION].values,
        )

        predictions = {}
        predictions["single"] = uplift_model.predict(df_test[x_names].values)
        with parallel_backend("loky", n_jobs=2):
          predictions["loky_2"] = uplift_model.predict(df_test[x_names].values)

tests/test_uplift_trees.py:46:


../../../../opt/anaconda3/lib/python3.8/site-packages/sklearn/utils/_testing.py:308: in wrapper
return fn(*args, **kwargs)
causalml/inference/tree/uplift.pyx:1379: in causalml.inference.tree.uplift.UpliftRandomForestClassifier.predict
(delayed(tree.predict)(X=X) for tree in self.uplift_forest)
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/parallel.py:1054: in __call__
self.retrieve()
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/parallel.py:933: in retrieve
self._output.extend(job.get(timeout=self.timeout))
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/_parallel_backends.py:542: in wrap_future_result
return future.result(timeout=timeout)
../../../../opt/anaconda3/lib/python3.8/concurrent/futures/_base.py:439: in result
return self.__get_result()


self = <Future at 0x7fd5d18ab7c0 state=finished raised PicklingError>

def __get_result(self):
    if self._exception:
      raise self._exception

E _pickle.PicklingError: Could not pickle the task to send it to the workers.

../../../../opt/anaconda3/lib/python3.8/concurrent/futures/_base.py:388: PicklingError
_________________________________________________________________________________________________________ test_UpliftRandomForestClassifier[threads-multiprocessing] _________________________________________________________________________________________________________

generate_classification_data = <function generate_classification_data.<locals>._generate_data at 0x7fd5d1eab430>, backend = 'multiprocessing', joblib_prefer = 'threads'

@pytest.mark.parametrize("backend", ["loky", "threading", "multiprocessing"])
@pytest.mark.parametrize("joblib_prefer", ["threads", "processes"])
def test_UpliftRandomForestClassifier(
    generate_classification_data, backend, joblib_prefer
):
    df, x_names = generate_classification_data()
    df_train, df_test = train_test_split(df, test_size=0.2, random_state=RANDOM_SEED)

    with parallel_backend(backend):
        # Train the UpLift Random Forest classifier
        uplift_model = UpliftRandomForestClassifier(
            min_samples_leaf=50,
            control_name=TREATMENT_NAMES[0],
            random_state=RANDOM_SEED,
            joblib_prefer=joblib_prefer,
        )
        uplift_model.fit(
            df_train[x_names].values,
            treatment=df_train["treatment_group_key"].values,
            y=df_train[CONVERSION].values,
        )

tests/test_uplift_trees.py:37:


causalml/inference/tree/uplift.pyx:1331: in causalml.inference.tree.uplift.UpliftRandomForestClassifier.fit
(delayed(self.bootstrap)(X, treatment, y, tree) for tree in self.uplift_forest)
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/parallel.py:1054: in __call__
self.retrieve()
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/parallel.py:933: in retrieve
self._output.extend(job.get(timeout=self.timeout))
../../../../opt/anaconda3/lib/python3.8/multiprocessing/pool.py:771: in get
raise self._value
../../../../opt/anaconda3/lib/python3.8/multiprocessing/pool.py:537: in _handle_tasks
put(task)


obj = (140, 0, <joblib._parallel_backends.SafeFunction object at 0x7fd5d196bbb0>, (), {})

def send(obj):
    buffer = BytesIO()
    CustomizablePickler(buffer, self._reducers).dump(obj)

E _pickle.PicklingError: Can't pickle <cyfunction UpliftRandomForestClassifier.bootstrap at 0x7fd5fff8b5f0>: attribute lookup bootstrap on causalml.inference.tree.uplift failed

../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/pool.py:156: PicklingError
_____________________________________________________________________________________________________________ test_UpliftRandomForestClassifier[processes-loky] ______________________________________________________________________________________________________________
joblib.externals.loky.process_executor._RemoteTraceback:
"""
Traceback (most recent call last):
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/loky/backend/queues.py", line 153, in feed
obj = dumps(obj, reducers=reducers)
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/loky/backend/reduction.py", line 271, in dumps
dump(obj, buf, reducers=reducers, protocol=protocol)
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/loky/backend/reduction.py", line 264, in dump
_LokyPickler(file, reducers=reducers, protocol=protocol).dump(obj)
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/cloudpickle/cloudpickle_fast.py", line 563, in dump
return Pickler.dump(self, obj)
_pickle.PicklingError: Can't pickle <cyfunction UpliftRandomForestClassifier.bootstrap at 0x7fd5fff8b5f0>: attribute lookup bootstrap on causalml.inference.tree.uplift failed
"""

The above exception was the direct cause of the following exception:

generate_classification_data = <function generate_classification_data.<locals>._generate_data at 0x7fd5d1eab430>, backend = 'loky', joblib_prefer = 'processes'

@pytest.mark.parametrize("backend", ["loky", "threading", "multiprocessing"])
@pytest.mark.parametrize("joblib_prefer", ["threads", "processes"])
def test_UpliftRandomForestClassifier(
    generate_classification_data, backend, joblib_prefer
):
    df, x_names = generate_classification_data()
    df_train, df_test = train_test_split(df, test_size=0.2, random_state=RANDOM_SEED)

    with parallel_backend(backend):
        # Train the UpLift Random Forest classifier
        uplift_model = UpliftRandomForestClassifier(
            min_samples_leaf=50,
            control_name=TREATMENT_NAMES[0],
            random_state=RANDOM_SEED,
            joblib_prefer=joblib_prefer,
        )
        uplift_model.fit(
            df_train[x_names].values,
            treatment=df_train["treatment_group_key"].values,
            y=df_train[CONVERSION].values,
        )

tests/test_uplift_trees.py:37:


causalml/inference/tree/uplift.pyx:1331: in causalml.inference.tree.uplift.UpliftRandomForestClassifier.fit
(delayed(self.bootstrap)(X, treatment, y, tree) for tree in self.uplift_forest)
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/parallel.py:1054: in __call__
self.retrieve()
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/parallel.py:933: in retrieve
self._output.extend(job.get(timeout=self.timeout))
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/_parallel_backends.py:542: in wrap_future_result
return future.result(timeout=timeout)
../../../../opt/anaconda3/lib/python3.8/concurrent/futures/_base.py:439: in result
return self.__get_result()


self = <Future at 0x7fd5d1898a00 state=finished raised PicklingError>

def __get_result(self):
    if self._exception:
      raise self._exception

E _pickle.PicklingError: Could not pickle the task to send it to the workers.

../../../../opt/anaconda3/lib/python3.8/concurrent/futures/_base.py:388: PicklingError
___________________________________________________________________________________________________________ test_UpliftRandomForestClassifier[processes-threading] ___________________________________________________________________________________________________________
joblib.externals.loky.process_executor._RemoteTraceback:
"""
Traceback (most recent call last):
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/loky/backend/queues.py", line 153, in feed
obj = dumps(obj, reducers=reducers)
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/loky/backend/reduction.py", line 271, in dumps
dump(obj, buf, reducers=reducers, protocol=protocol)
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/loky/backend/reduction.py", line 264, in dump
_LokyPickler(file, reducers=reducers, protocol=protocol).dump(obj)
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/cloudpickle/cloudpickle_fast.py", line 563, in dump
return Pickler.dump(self, obj)
_pickle.PicklingError: Can't pickle <cyfunction UpliftTreeClassifier.evaluate_KL at 0x7fd5fff7aba0>: attribute lookup evaluate_KL on causalml.inference.tree.uplift failed
"""

The above exception was the direct cause of the following exception:

generate_classification_data = <function generate_classification_data.<locals>._generate_data at 0x7fd5d1eab430>, backend = 'threading', joblib_prefer = 'processes'

@pytest.mark.parametrize("backend", ["loky", "threading", "multiprocessing"])
@pytest.mark.parametrize("joblib_prefer", ["threads", "processes"])
def test_UpliftRandomForestClassifier(
    generate_classification_data, backend, joblib_prefer
):
    df, x_names = generate_classification_data()
    df_train, df_test = train_test_split(df, test_size=0.2, random_state=RANDOM_SEED)

    with parallel_backend(backend):
        # Train the UpLift Random Forest classifier
        uplift_model = UpliftRandomForestClassifier(
            min_samples_leaf=50,
            control_name=TREATMENT_NAMES[0],
            random_state=RANDOM_SEED,
            joblib_prefer=joblib_prefer,
        )

        uplift_model.fit(
            df_train[x_names].values,
            treatment=df_train["treatment_group_key"].values,
            y=df_train[CONVERSION].values,
        )

        predictions = {}
        predictions["single"] = uplift_model.predict(df_test[x_names].values)
        with parallel_backend("loky", n_jobs=2):
          predictions["loky_2"] = uplift_model.predict(df_test[x_names].values)

tests/test_uplift_trees.py:46:


../../../../opt/anaconda3/lib/python3.8/site-packages/sklearn/utils/_testing.py:308: in wrapper
return fn(*args, **kwargs)
causalml/inference/tree/uplift.pyx:1379: in causalml.inference.tree.uplift.UpliftRandomForestClassifier.predict
(delayed(tree.predict)(X=X) for tree in self.uplift_forest)
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/parallel.py:1054: in __call__
self.retrieve()
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/parallel.py:933: in retrieve
self._output.extend(job.get(timeout=self.timeout))
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/_parallel_backends.py:542: in wrap_future_result
return future.result(timeout=timeout)
../../../../opt/anaconda3/lib/python3.8/concurrent/futures/_base.py:439: in result
return self.__get_result()


self = <Future at 0x7fd5d1e8d6d0 state=finished raised PicklingError>

def __get_result(self):
    if self._exception:
      raise self._exception

E _pickle.PicklingError: Could not pickle the task to send it to the workers.

../../../../opt/anaconda3/lib/python3.8/concurrent/futures/_base.py:388: PicklingError
________________________________________________________________________________________________________ test_UpliftRandomForestClassifier[processes-multiprocessing] ________________________________________________________________________________________________________

generate_classification_data = <function generate_classification_data.<locals>._generate_data at 0x7fd5d1eab430>, backend = 'multiprocessing', joblib_prefer = 'processes'

@pytest.mark.parametrize("backend", ["loky", "threading", "multiprocessing"])
@pytest.mark.parametrize("joblib_prefer", ["threads", "processes"])
def test_UpliftRandomForestClassifier(
    generate_classification_data, backend, joblib_prefer
):
    df, x_names = generate_classification_data()
    df_train, df_test = train_test_split(df, test_size=0.2, random_state=RANDOM_SEED)

    with parallel_backend(backend):
        # Train the UpLift Random Forest classifier
        uplift_model = UpliftRandomForestClassifier(
            min_samples_leaf=50,
            control_name=TREATMENT_NAMES[0],
            random_state=RANDOM_SEED,
            joblib_prefer=joblib_prefer,
        )
        uplift_model.fit(
            df_train[x_names].values,
            treatment=df_train["treatment_group_key"].values,
            y=df_train[CONVERSION].values,
        )

tests/test_uplift_trees.py:37:


causalml/inference/tree/uplift.pyx:1331: in causalml.inference.tree.uplift.UpliftRandomForestClassifier.fit
(delayed(self.bootstrap)(X, treatment, y, tree) for tree in self.uplift_forest)
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/parallel.py:1054: in __call__
self.retrieve()
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/parallel.py:933: in retrieve
self._output.extend(job.get(timeout=self.timeout))
../../../../opt/anaconda3/lib/python3.8/multiprocessing/pool.py:771: in get
raise self._value
../../../../opt/anaconda3/lib/python3.8/multiprocessing/pool.py:537: in _handle_tasks
put(task)


obj = (170, 0, <joblib._parallel_backends.SafeFunction object at 0x7fd5d1955d30>, (), {})

def send(obj):
    buffer = BytesIO()
    CustomizablePickler(buffer, self._reducers).dump(obj)

E _pickle.PicklingError: Can't pickle <cyfunction UpliftRandomForestClassifier.bootstrap at 0x7fd5fff8b5f0>: attribute lookup bootstrap on causalml.inference.tree.uplift failed

../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/pool.py:156: PicklingError
============================================================================================================================== warnings summary ==============================================================================================================================
../../../../opt/anaconda3/lib/python3.8/site-packages/scipy/fft/__init__.py:97
The module numpy.dual is deprecated. Instead of using dual, use the functions directly from numpy or scipy.

../../../../opt/anaconda3/lib/python3.8/site-packages/scipy/special/orthogonal.py:81: 2 warnings
tests/test_datasets.py: 16 warnings
tests/test_ivlearner.py: 8 warnings
tests/test_meta_learners.py: 48 warnings
`np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations

../../../../opt/anaconda3/lib/python3.8/site-packages/scipy/io/matlab/mio5.py:98
`np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations

../../../../opt/anaconda3/lib/python3.8/site-packages/patsy/constraint.py:13
Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working

tests/test_cevae.py::test_CEVAE
tests/test_cevae.py::test_CEVAE
tests/test_cevae.py::test_CEVAE
To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).

tests/test_counterfactual_unit_selection.py: 1032 warnings
tests/test_meta_learners.py: 209 warnings
tests/test_value_optimization.py: 305 warnings
tostring() is deprecated. Use tobytes() instead.

tests/test_datasets.py::test_get_synthetic_preds[simulate_hidden_confounder]
tests/test_datasets.py::test_get_synthetic_preds[simulate_hidden_confounder]
tests/test_datasets.py::test_get_synthetic_preds[simulate_hidden_confounder]
tests/test_datasets.py::test_get_synthetic_preds[simulate_hidden_confounder]
invalid value encountered in true_divide

tests/test_features.py::test_load_data
tests/test_features.py::test_load_data
tests/test_features.py::test_load_data
`np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations

tests/test_features.py::test_LabelEncoder
tests/test_features.py::test_LabelEncoder

A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy

tests/test_meta_learners.py::test_BaseSClassifier
tests/test_meta_learners.py::test_BaseXClassifier
tests/test_meta_learners.py::test_BaseXClassifier
tests/test_propensity.py::test_gradientboosted_propensity_model
tests/test_propensity.py::test_gradientboosted_propensity_model_earlystopping
The use of label encoder in XGBClassifier is deprecated and will be removed in a future release. To remove this warning, do the following: 1) Pass option use_label_encoder=False when constructing XGBClassifier object; and 2) Encode your labels (y) as integers starting with 0, i.e. 0, 1, 2, ..., [num_class - 1].

tests/test_meta_learners.py::test_BaseRClassifier
tests/test_meta_learners.py::test_BaseRClassifier_with_sample_weights
inspect.getargspec() is deprecated since Python 3.0, use inspect.signature() or inspect.getfullargspec()

-- Docs: https://docs.pytest.org/en/stable/warnings.html

---------- coverage: platform darwin, python 3.8.8-final-0 -----------
Name                                      Stmts   Miss  Cover
-------------------------------------------------------------
causalml/__init__.py                          3      0   100%
causalml/dataset/__init__.py                 14      0   100%
causalml/dataset/classification.py           73     20    73%
causalml/dataset/regression.py               54      0   100%
causalml/dataset/synthetic.py               244    103    58%
causalml/feature_selection/__init__.py        1      1     0%
causalml/feature_selection/filters.py       153    153     0%
causalml/features.py                         85     10    88%
causalml/inference/__init__.py                0      0   100%
causalml/inference/iv/__init__.py             2      0   100%
causalml/inference/iv/drivlearner.py        258     58    78%
causalml/inference/iv/iv_regression.py       17      8    53%
causalml/inference/meta/__init__.py           6      0   100%
causalml/inference/meta/base.py              71     17    76%
causalml/inference/meta/drlearner.py        195     41    79%
causalml/inference/meta/explainer.py        101     80    21%
causalml/inference/meta/rlearner.py         251     35    86%
causalml/inference/meta/slearner.py         178     27    85%
causalml/inference/meta/tlearner.py         170     28    84%
causalml/inference/meta/tmle.py              98     22    78%
causalml/inference/meta/utils.py             49     10    80%
causalml/inference/meta/xlearner.py         252     44    83%
causalml/inference/nn/__init__.py             1      0   100%
causalml/inference/nn/cevae.py               29      3    90%
causalml/inference/tf/__init__.py             1      1     0%
causalml/inference/tf/dragonnet.py           64     64     0%
causalml/inference/tf/utils.py               52     52     0%
causalml/inference/tree/__init__.py           4      0   100%
causalml/inference/tree/plot.py             100     95     5%
causalml/inference/tree/utils.py             43     36    16%
causalml/match.py                           185     40    78%
causalml/metrics/__init__.py                  7      0   100%
causalml/metrics/classification.py           11      4    64%
causalml/metrics/const.py                     1      0   100%
causalml/metrics/regression.py               41      3    93%
causalml/metrics/sensitivity.py             257     50    81%
causalml/metrics/visualize.py               299    237    21%
causalml/optimize/__init__.py                 5      0   100%
causalml/optimize/pns.py                     23     20    13%
causalml/optimize/policylearner.py           54     39    28%
causalml/optimize/unit_selection.py          90     45    50%
causalml/optimize/utils.py                   26      4    85%
causalml/optimize/value_optimization.py      30      3    90%
causalml/propensity.py                       74      4    95%
-------------------------------------------------------------
TOTAL                                      3672   1357    63%

========================================================================================================================== short test summary info ===========================================================================================================================
FAILED tests/test_uplift_trees.py::test_UpliftRandomForestClassifier[threads-loky] - _pickle.PicklingError: Could not pickle the task to send it to the workers.
FAILED tests/test_uplift_trees.py::test_UpliftRandomForestClassifier[threads-threading] - _pickle.PicklingError: Could not pickle the task to send it to the workers.
FAILED tests/test_uplift_trees.py::test_UpliftRandomForestClassifier[threads-multiprocessing] - _pickle.PicklingError: Can't pickle <cyfunction UpliftRandomForestClassifier.bootstrap at 0x7fd5fff8b5f0>: attribute lookup bootstrap on causalml.inference.tree.uplift failed
FAILED tests/test_uplift_trees.py::test_UpliftRandomForestClassifier[processes-loky] - _pickle.PicklingError: Could not pickle the task to send it to the workers.
FAILED tests/test_uplift_trees.py::test_UpliftRandomForestClassifier[processes-threading] - _pickle.PicklingError: Could not pickle the task to send it to the workers.
FAILED tests/test_uplift_trees.py::test_UpliftRandomForestClassifier[processes-multiprocessing] - _pickle.PicklingError: Can't pickle <cyfunction UpliftRandomForestClassifier.bootstrap at 0x7fd5fff8b5f0>: attribute lookup bootstrap on causalml.inference.tree.uplift f...
========================================================================================================== 6 failed, 58 passed, 1642 warnings in 571.89s (0:09:31) ===========================================================================================================
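The `Can't pickle <cyfunction ...>: attribute lookup ... failed` errors above are the generic pickle failure for callables that cannot be re-found by attribute lookup on their module, which is why only the process-based joblib backends trip over them. A minimal pure-Python sketch of the same failure mode (illustrative only; no causalml code involved):

```python
import pickle

def make_local():
    # A function defined inside another function cannot be located by
    # attribute lookup on its module, so pickle refuses to serialize it --
    # the same class of failure as the cyfunction errors in the traceback.
    def local_fn(x):
        return x
    return local_fn

try:
    pickle.dumps(make_local())
    pickling_failed = False
except (pickle.PicklingError, AttributeError):
    # CPython's C pickler raises AttributeError ("Can't pickle local
    # object ..."); other code paths raise PicklingError as seen above.
    pickling_failed = True

print(pickling_failed)  # True
```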
Environment (please complete the following information):

  • OS: [e.g. macOS, Windows, Ubuntu]
  • Python Version: [e.g. 3.6, 3.7]
  • Versions of Major Dependencies (pandas, scikit-learn, cython): [e.g. pandas==0.25, scikit-learn==0.22, cython==0.28]
@zhenyuz0500 zhenyuz0500 added the bug Something isn't working label Apr 29, 2022
@jeongyoonlee
Copy link
Collaborator

Hi @zhenyuz0500, this error message comes from `UpliftRandomForestClassifier`, which requires its Cython code to be compiled. Please build with the `build_ext --inplace` option and run the tests again:

python setup.py build_ext --inplace
pytest -sv tests --cov causalml
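After the build, one quick sanity check (a sketch; `is_compiled_extension` is a hypothetical helper, and the module path is taken from the traceback above) is to confirm the extension resolves to a compiled binary rather than a stray source file:

```python
import importlib.util

def is_compiled_extension(module_name: str) -> bool:
    """Return True if module_name resolves to a compiled .so/.pyd file."""
    spec = importlib.util.find_spec(module_name)
    if spec is None or spec.origin is None:
        return False
    return spec.origin.endswith((".so", ".pyd"))

# A pure-Python stdlib module resolves to a .py file:
print(is_compiled_extension("json"))  # False
# In this repo's environment one would check, expecting True after the build:
# is_compiled_extension("causalml.inference.tree.uplift")
```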

@paullo0106
Copy link
Collaborator

Noticed the recent lint errors too; we'll need to take a look. See the latest master run, for example: https://github.com/uber/causalml/runs/6236927941?check_suite_focus=true

@zhenyuz0500
Copy link
Collaborator Author

zhenyuz0500 commented May 2, 2022

It seems the errors are the same:

(base) zhenyuzhao-zz@MB0 causalml % pip install Cython

Requirement already satisfied: Cython in /Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages (0.29.23)

(base) zhenyuzhao-zz@MB0 causalml % python setup.py build_ext --inplace
running build_ext

copying build/lib.macosx-10.9-x86_64-3.8/causalml/inference/tree/causaltree.cpython-38-darwin.so -> causalml/inference/tree
copying build/lib.macosx-10.9-x86_64-3.8/causalml/inference/tree/uplift.cpython-38-darwin.so -> causalml/inference/tree
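The `copying` lines above mean `build_ext` reused artifacts cached under `build/` rather than recompiling. If failures were to persist after such a build, one check worth making (hypothetical helper; file names assumed) is whether the cached binaries predate their `.pyx` sources, in which case removing `build/` and rebuilding forces a real recompile:

```python
import os
import tempfile
from pathlib import Path

def is_stale(pyx: Path, so: Path) -> bool:
    """True if the compiled artifact is missing or older than its source."""
    return (not so.exists()) or so.stat().st_mtime < pyx.stat().st_mtime

# Demo with throwaway files standing in for uplift.pyx / uplift.*.so:
tmp = Path(tempfile.mkdtemp())
pyx, so = tmp / "uplift.pyx", tmp / "uplift.so"
pyx.write_text("# cython source")
so.write_text("binary placeholder")
os.utime(pyx, (100, 100))   # source modified at t=100
os.utime(so, (200, 200))    # binary built later, at t=200
print(is_stale(pyx, so))    # False -> build_ext may legitimately skip it
os.utime(so, (50, 50))      # binary now predates the source
print(is_stale(pyx, so))    # True  -> a real rebuild is needed
```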

(base) zhenyuzhao-zz@MB0 causalml % pytest -sv tests --cov causalml
============================================================================================================================ test session starts =============================================================================================================================
platform darwin -- Python 3.8.8, pytest-6.2.3, py-1.10.0, pluggy-0.13.1 -- /Users/zhenyuzhao-zz/opt/anaconda3/bin/python
cachedir: .pytest_cache
rootdir: /Users/zhenyuzhao-zz/Documents/Programming/git_repo/causalml
plugins: anyio-2.2.0, cov-3.0.0
collected 64 items

tests/test_cevae.py::test_CEVAE PASSED
tests/test_counterfactual_unit_selection.py::test_counterfactual_unit_selection PASSED
tests/test_datasets.py::test_get_synthetic_preds[simulate_nuisance_and_easy_treatment] PASSED
tests/test_datasets.py::test_get_synthetic_preds[simulate_hidden_confounder] PASSED
tests/test_datasets.py::test_get_synthetic_preds[simulate_randomized_trial] PASSED
tests/test_datasets.py::test_get_synthetic_summary Abs % Error of ATE MSE KL Divergence
Actuals 0.000000 0.000000 0.000000
S Learner (LR) 0.581879 0.125334 3.828739
T Learner (XGB) 0.323199 1.186263 1.424861
PASSED
tests/test_datasets.py::test_get_synthetic_preds_holdout PASSED
tests/test_datasets.py::test_get_synthetic_summary_holdout ( Abs % Error of ATE MSE KL Divergence
Actuals 0.000000 0.000000 0.000000
S Learner (LR) 0.359446 0.072330 4.033648
S Learner (XGB) 0.041486 0.319411 0.824989
T Learner (LR) 0.358963 0.037750 0.440597
T Learner (XGB) 0.085843 1.257743 1.500363
X Learner (LR) 0.358963 0.037750 0.440597
X Learner (XGB) 0.081450 0.504336 1.116033
R Learner (LR) 0.327808 0.044548 0.408275
R Learner (XGB) 0.112043 4.740827 2.079625, Abs % Error of ATE MSE KL Divergence
Actuals 0.000000 0.000000 0.000000
S Learner (LR) 0.401601 0.080840 3.944126
S Learner (XGB) 0.073994 0.283661 0.948965
T Learner (LR) 0.353086 0.033973 0.695373
T Learner (XGB) 0.090676 0.652876 1.350948
X Learner (LR) 0.353086 0.033973 0.695373
X Learner (XGB) 0.020359 0.332149 1.097056
R Learner (LR) 0.299843 0.037296 0.616230
R Learner (XGB) 0.164259 1.830500 1.492768)
PASSED
tests/test_datasets.py::test_get_synthetic_auuc Learner cum_gain_auuc
0 Actuals 3082.158899
2 T Learner (XGB) 2630.595869
3 Random 2490.139546
1 S Learner (LR) 2463.126242
PASSED
tests/test_features.py::test_load_data PASSED
tests/test_features.py::test_LabelEncoder PASSED
tests/test_features.py::test_OneHotEncoder PASSED
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:18<00:00, 1.86s/it]
PASSED
tests/test_match.py::test_nearest_neighbor_match_by_group PASSED
tests/test_match.py::test_match_optimizer PASSED
tests/test_meta_learners.py::test_synthetic_data PASSED
tests/test_meta_learners.py::test_BaseSLearner PASSED
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:06<00:00, 1.65it/s]
PASSED
tests/test_meta_learners.py::test_LRSRegressor PASSED
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:09<00:00, 1.10it/s]
PASSED
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:08<00:00, 1.16it/s]
PASSED
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:35<00:00, 3.56s/it]
PASSED
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:08<00:00, 1.24it/s]
PASSED
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:15<00:00, 1.58s/it]
PASSED
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:16<00:00, 1.67s/it]
PASSED
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:16<00:00, 1.64s/it]
PASSED
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:17<00:00, 1.79s/it]
PASSED
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:34<00:00, 3.49s/it]
PASSED
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:34<00:00, 3.41s/it]
PASSED
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:38<00:00, 3.88s/it]
PASSED
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:36<00:00, 3.70s/it]
PASSED
tests/test_meta_learners.py::test_TMLELearner PASSED
tests/test_meta_learners.py::test_BaseSClassifier [10:59:28] WARNING: /opt/concourse/worker/volumes/live/7a2b9f41-3287-451b-6691-43e9a6c0910f/volume/xgboost-split_1619728204606/work/src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
PASSED
tests/test_meta_learners.py::test_BaseTClassifier PASSED
tests/test_meta_learners.py::test_BaseXClassifier [10:59:30] WARNING: /opt/concourse/worker/volumes/live/7a2b9f41-3287-451b-6691-43e9a6c0910f/volume/xgboost-split_1619728204606/work/src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
[10:59:31] WARNING: /opt/concourse/worker/volumes/live/7a2b9f41-3287-451b-6691-43e9a6c0910f/volume/xgboost-split_1619728204606/work/src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
[10:59:34] WARNING: /opt/concourse/worker/volumes/live/7a2b9f41-3287-451b-6691-43e9a6c0910f/volume/xgboost-split_1619728204606/work/src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
[10:59:34] WARNING: /opt/concourse/worker/volumes/live/7a2b9f41-3287-451b-6691-43e9a6c0910f/volume/xgboost-split_1619728204606/work/src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
PASSED
tests/test_meta_learners.py::test_BaseRClassifier /Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/xgboost/sklearn.py:888: UserWarning: The use of label encoder in XGBClassifier is deprecated and will be removed in a future release. To remove this warning, do the following: 1) Pass option use_label_encoder=False when constructing XGBClassifier object; and 2) Encode your labels (y) as integers starting with 0, i.e. 0, 1, 2, ..., [num_class - 1].
warnings.warn(label_encoder_deprecation_msg, UserWarning)
[10:59:37] WARNING: /opt/concourse/worker/volumes/live/7a2b9f41-3287-451b-6691-43e9a6c0910f/volume/xgboost-split_1619728204606/work/src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
PASSED
tests/test_meta_learners.py::test_BaseRClassifier_with_sample_weights /Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/xgboost/sklearn.py:888: UserWarning: The use of label encoder in XGBClassifier is deprecated and will be removed in a future release. To remove this warning, do the following: 1) Pass option use_label_encoder=False when constructing XGBClassifier object; and 2) Encode your labels (y) as integers starting with 0, i.e. 0, 1, 2, ..., [num_class - 1].
warnings.warn(label_encoder_deprecation_msg, UserWarning)
[10:59:42] WARNING: /opt/concourse/worker/volumes/live/7a2b9f41-3287-451b-6691-43e9a6c0910f/volume/xgboost-split_1619728204606/work/src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
[0] validation_0-auc:2.75329
[1] validation_0-auc:-4.78198
[2] validation_0-auc:-3.52732
[3] validation_0-auc:-5.61398
[4] validation_0-auc:-4.07529
[5] validation_0-auc:-4.58928
[6] validation_0-auc:-3.94142
[7] validation_0-auc:-3.24399
[8] validation_0-auc:-6.07570
[9] validation_0-auc:-8.59653
[10] validation_0-auc:-7.84374
[11] validation_0-auc:-6.41278
[12] validation_0-auc:-4.45992
[13] validation_0-auc:-5.09416
[14] validation_0-auc:-5.12351
[15] validation_0-auc:-3.64573
[16] validation_0-auc:-3.01243
[17] validation_0-auc:-2.80259
[18] validation_0-auc:-1.67346
[19] validation_0-auc:-2.18767
[20] validation_0-auc:-2.77537
[21] validation_0-auc:-2.62919
[22] validation_0-auc:-2.49174
[23] validation_0-auc:-1.50853
[24] validation_0-auc:-0.98859
[25] validation_0-auc:-0.79339
[26] validation_0-auc:-0.52949
[27] validation_0-auc:-0.60107
[28] validation_0-auc:0.03568
[29] validation_0-auc:0.38625
[30] validation_0-auc:-0.41416
PASSED
tests/test_meta_learners.py::test_pandas_input PASSED
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:24<00:00, 2.43s/it]
PASSED
tests/test_propensity.py::test_logistic_regression_propensity_model PASSED
tests/test_propensity.py::test_logistic_regression_propensity_model_model_kwargs PASSED
tests/test_propensity.py::test_elasticnet_propensity_model PASSED
tests/test_propensity.py::test_gradientboosted_propensity_model [11:00:19] WARNING: /opt/concourse/worker/volumes/live/7a2b9f41-3287-451b-6691-43e9a6c0910f/volume/xgboost-split_1619728204606/work/src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
PASSED
tests/test_propensity.py::test_gradientboosted_propensity_model_earlystopping [11:00:20] WARNING: /opt/concourse/worker/volumes/live/7a2b9f41-3287-451b-6691-43e9a6c0910f/volume/xgboost-split_1619728204606/work/src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
[0] validation_0-logloss:0.68365
[1] validation_0-logloss:0.67243
[2] validation_0-logloss:0.64779
[3] validation_0-logloss:0.62546
[4] validation_0-logloss:0.60664
[5] validation_0-logloss:0.58847
[6] validation_0-logloss:0.57660
[7] validation_0-logloss:0.57160
[8] validation_0-logloss:0.56192
[9] validation_0-logloss:0.55328
[10] validation_0-logloss:0.54668
[11] validation_0-logloss:0.54318
[12] validation_0-logloss:0.54001
[13] validation_0-logloss:0.53335
[14] validation_0-logloss:0.53029
[15] validation_0-logloss:0.52541
[16] validation_0-logloss:0.52656
[17] validation_0-logloss:0.52321
[18] validation_0-logloss:0.52327
[19] validation_0-logloss:0.52239
[20] validation_0-logloss:0.52392
[21] validation_0-logloss:0.52404
[22] validation_0-logloss:0.52376
[23] validation_0-logloss:0.52374
[24] validation_0-logloss:0.52289
[25] validation_0-logloss:0.52229
[26] validation_0-logloss:0.52028
[27] validation_0-logloss:0.52110
[28] validation_0-logloss:0.52270
[29] validation_0-logloss:0.52229
[30] validation_0-logloss:0.52084
[31] validation_0-logloss:0.52308
[32] validation_0-logloss:0.52387
[33] validation_0-logloss:0.52583
[34] validation_0-logloss:0.52824
[35] validation_0-logloss:0.53039
PASSED
tests/test_sensitivity.py::test_Sensitivity Method ATE New ATE New ATE LB New ATE UB
0 Placebo Treatment 0.680042 -0.009359 -0.022652 0.003934
0 Random Cause 0.680042 0.680049 0.667253 0.692846
0 Subset Data(sample size @0.5) 0.680042 0.682025 0.663973 0.700077
0 Random Replace 0.680042 0.678623 0.665698 0.691548
0 Selection Bias (alpha@-0.80626, with r-sqaure:... 0.680042 1.353547 1.34094 1.366155
0 Selection Bias (alpha@-0.645, with r-sqaure:0.... 0.680042 1.218846 1.206217 1.231475
0 Selection Bias (alpha@-0.48375, with r-sqaure:... 0.680042 1.084145 1.071487 1.096803
0 Selection Bias (alpha@-0.3225, with r-sqaure:0... 0.680042 0.949444 0.936748 0.96214
0 Selection Bias (alpha@-0.16125, with r-sqaure:... 0.680042 0.814743 0.802001 0.827485
0 Selection Bias (alpha@0.0, with r-sqaure:0.0 0.680042 0.680042 0.667245 0.692838
0 Selection Bias (alpha@0.16125, with r-sqaure:0... 0.680042 0.545341 0.532482 0.558199
0 Selection Bias (alpha@0.3225, with r-sqaure:0.... 0.680042 0.41064 0.397711 0.423568
0 Selection Bias (alpha@0.48375, with r-sqaure:0... 0.680042 0.275939 0.262933 0.288944
0 Selection Bias (alpha@0.645, with r-sqaure:0.0... 0.680042 0.141237 0.128146 0.154329
0 Selection Bias (alpha@0.80626, with r-sqaure:0... 0.680042 0.006536 -0.006648 0.01972
PASSED
tests/test_sensitivity.py::test_SensitivityPlaceboTreatment Method ATE New ATE New ATE LB New ATE UB
0 Random Cause 0.678358 -0.004009 -0.017335 0.009316
PASSED
tests/test_sensitivity.py::test_SensitivityRandomCause Method ATE New ATE New ATE LB New ATE UB
0 Random Cause 0.674445 0.674436 0.661632 0.68724
PASSED
tests/test_sensitivity.py::test_SensitivityRandomReplace Method ATE New ATE New ATE LB New ATE UB
0 Random Replace 0.68259 0.809523 0.796646 0.822401
PASSED
tests/test_sensitivity.py::test_SensitivitySelectionBias alpha rsqs New ATE New ATE LB New ATE UB
0 -0.800120 0.107832 0.660142 0.647300 0.672984
0 -0.640096 0.072265 0.662217 0.649428 0.675006
0 -0.480072 0.042282 0.664292 0.651535 0.677050
0 -0.320048 0.019399 0.666367 0.653619 0.679115
0 -0.160024 0.004965 0.668442 0.655683 0.681202
0 0.000000 0.000000 0.670517 0.657724 0.683311
0 0.160024 0.005063 0.672592 0.659744 0.685441
0 0.320048 0.020160 0.674667 0.661743 0.687592
0 0.480072 0.044736 0.676742 0.663720 0.689765
0 0.640096 0.077731 0.678818 0.665678 0.691957
0 0.800120 0.117713 0.680893 0.667615 0.694170 feature partial_rsqs
0 feature_0 -0.065992
1 feature_1 -0.067405
2 feature_2 -0.000011
3 feature_3 -0.000623
4 feature_4 -0.000643
5 feature_5 -0.000026
PASSED
tests/test_sensitivity.py::test_one_sided PASSED
tests/test_sensitivity.py::test_alignment PASSED
tests/test_sensitivity.py::test_one_sided_att PASSED
tests/test_sensitivity.py::test_alignment_att PASSED
tests/test_uplift_trees.py::test_make_uplift_classification PASSED
tests/test_uplift_trees.py::test_UpliftRandomForestClassifier[threads-loky] FAILED
tests/test_uplift_trees.py::test_UpliftRandomForestClassifier[threads-threading] FAILED
tests/test_uplift_trees.py::test_UpliftRandomForestClassifier[threads-multiprocessing] FAILED
tests/test_uplift_trees.py::test_UpliftRandomForestClassifier[processes-loky] FAILED
tests/test_uplift_trees.py::test_UpliftRandomForestClassifier[processes-threading] FAILED
tests/test_uplift_trees.py::test_UpliftRandomForestClassifier[processes-multiprocessing] FAILED
tests/test_uplift_trees.py::test_UpliftTreeClassifier PASSED
tests/test_uplift_trees.py::test_UpliftTreeClassifier_feature_importance PASSED
tests/test_utils.py::test_weighted_variance PASSED
tests/test_value_optimization.py::test_counterfactual_value_optimization PASSED
Couldn't parse Python file '/Users/zhenyuzhao-zz/Documents/Programming/git_repo/causalml/causalml/feature_selection/filters.py' (couldnt-parse)

================================================================================================================================== FAILURES ==================================================================================================================================
______________________________________________________________________________________________________________ test_UpliftRandomForestClassifier[threads-loky] _______________________________________________________________________________________________________________
joblib.externals.loky.process_executor._RemoteTraceback:
"""
Traceback (most recent call last):
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/loky/backend/queues.py", line 153, in feed
obj = dumps(obj, reducers=reducers)
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/loky/backend/reduction.py", line 271, in dumps
dump(obj, buf, reducers=reducers, protocol=protocol)
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/loky/backend/reduction.py", line 264, in dump
_LokyPickler(file, reducers=reducers, protocol=protocol).dump(obj)
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/cloudpickle/cloudpickle_fast.py", line 563, in dump
return Pickler.dump(self, obj)
_pickle.PicklingError: Can't pickle <cyfunction UpliftRandomForestClassifier.bootstrap at 0x7fa378fb05f0>: attribute lookup bootstrap on causalml.inference.tree.uplift failed
"""

The above exception was the direct cause of the following exception:

generate_classification_data = <function generate_classification_data.<locals>._generate_data at 0x7fa34a1ba430>, backend = 'loky', joblib_prefer = 'threads'

@pytest.mark.parametrize("backend", ["loky", "threading", "multiprocessing"])
@pytest.mark.parametrize("joblib_prefer", ["threads", "processes"])
def test_UpliftRandomForestClassifier(
    generate_classification_data, backend, joblib_prefer
):
    df, x_names = generate_classification_data()
    df_train, df_test = train_test_split(df, test_size=0.2, random_state=RANDOM_SEED)

    with parallel_backend(backend):
        # Train the UpLift Random Forest classifier
        uplift_model = UpliftRandomForestClassifier(
            min_samples_leaf=50,
            control_name=TREATMENT_NAMES[0],
            random_state=RANDOM_SEED,
            joblib_prefer=joblib_prefer,
        )
>       uplift_model.fit(
            df_train[x_names].values,
            treatment=df_train["treatment_group_key"].values,
            y=df_train[CONVERSION].values,
        )

tests/test_uplift_trees.py:37:


causalml/inference/tree/uplift.pyx:1331: in causalml.inference.tree.uplift.UpliftRandomForestClassifier.fit
(delayed(self.bootstrap)(X, treatment, y, tree) for tree in self.uplift_forest)
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/parallel.py:1054: in __call__
self.retrieve()
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/parallel.py:933: in retrieve
self._output.extend(job.get(timeout=self.timeout))
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/_parallel_backends.py:542: in wrap_future_result
return future.result(timeout=timeout)
../../../../opt/anaconda3/lib/python3.8/concurrent/futures/_base.py:439: in result
return self.__get_result()


self = <Future at 0x7fa34a12a040 state=finished raised PicklingError>

def __get_result(self):
    if self._exception:
>       raise self._exception

E _pickle.PicklingError: Could not pickle the task to send it to the workers.

../../../../opt/anaconda3/lib/python3.8/concurrent/futures/_base.py:388: PicklingError
____________________________________________________________________________________________________________ test_UpliftRandomForestClassifier[threads-threading] ____________________________________________________________________________________________________________
joblib.externals.loky.process_executor._RemoteTraceback:
"""
Traceback (most recent call last):
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/loky/backend/queues.py", line 153, in feed
obj = dumps(obj, reducers=reducers)
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/loky/backend/reduction.py", line 271, in dumps
dump(obj, buf, reducers=reducers, protocol=protocol)
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/loky/backend/reduction.py", line 264, in dump
_LokyPickler(file, reducers=reducers, protocol=protocol).dump(obj)
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/cloudpickle/cloudpickle_fast.py", line 563, in dump
return Pickler.dump(self, obj)
_pickle.PicklingError: Can't pickle <cyfunction UpliftTreeClassifier.evaluate_KL at 0x7fa378fa0ba0>: attribute lookup evaluate_KL on causalml.inference.tree.uplift failed
"""

The above exception was the direct cause of the following exception:

generate_classification_data = <function generate_classification_data.<locals>._generate_data at 0x7fa34a1ba430>, backend = 'threading', joblib_prefer = 'threads'

@pytest.mark.parametrize("backend", ["loky", "threading", "multiprocessing"])
@pytest.mark.parametrize("joblib_prefer", ["threads", "processes"])
def test_UpliftRandomForestClassifier(
    generate_classification_data, backend, joblib_prefer
):
    df, x_names = generate_classification_data()
    df_train, df_test = train_test_split(df, test_size=0.2, random_state=RANDOM_SEED)

    with parallel_backend(backend):
        # Train the UpLift Random Forest classifier
        uplift_model = UpliftRandomForestClassifier(
            min_samples_leaf=50,
            control_name=TREATMENT_NAMES[0],
            random_state=RANDOM_SEED,
            joblib_prefer=joblib_prefer,
        )

        uplift_model.fit(
            df_train[x_names].values,
            treatment=df_train["treatment_group_key"].values,
            y=df_train[CONVERSION].values,
        )

        predictions = {}
        predictions["single"] = uplift_model.predict(df_test[x_names].values)
        with parallel_backend("loky", n_jobs=2):
>           predictions["loky_2"] = uplift_model.predict(df_test[x_names].values)

tests/test_uplift_trees.py:46:


../../../../opt/anaconda3/lib/python3.8/site-packages/sklearn/utils/_testing.py:308: in wrapper
return fn(*args, **kwargs)
causalml/inference/tree/uplift.pyx:1379: in causalml.inference.tree.uplift.UpliftRandomForestClassifier.predict
(delayed(tree.predict)(X=X) for tree in self.uplift_forest)
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/parallel.py:1054: in __call__
self.retrieve()
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/parallel.py:933: in retrieve
self._output.extend(job.get(timeout=self.timeout))
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/_parallel_backends.py:542: in wrap_future_result
return future.result(timeout=timeout)
../../../../opt/anaconda3/lib/python3.8/concurrent/futures/_base.py:439: in result
return self.__get_result()


self = <Future at 0x7fa34c09beb0 state=finished raised PicklingError>

def __get_result(self):
    if self._exception:
>       raise self._exception

E _pickle.PicklingError: Could not pickle the task to send it to the workers.

../../../../opt/anaconda3/lib/python3.8/concurrent/futures/_base.py:388: PicklingError
_________________________________________________________________________________________________________ test_UpliftRandomForestClassifier[threads-multiprocessing] _________________________________________________________________________________________________________

generate_classification_data = <function generate_classification_data.<locals>._generate_data at 0x7fa34a1ba430>, backend = 'multiprocessing', joblib_prefer = 'threads'

@pytest.mark.parametrize("backend", ["loky", "threading", "multiprocessing"])
@pytest.mark.parametrize("joblib_prefer", ["threads", "processes"])
def test_UpliftRandomForestClassifier(
    generate_classification_data, backend, joblib_prefer
):
    df, x_names = generate_classification_data()
    df_train, df_test = train_test_split(df, test_size=0.2, random_state=RANDOM_SEED)

    with parallel_backend(backend):
        # Train the UpLift Random Forest classifier
        uplift_model = UpliftRandomForestClassifier(
            min_samples_leaf=50,
            control_name=TREATMENT_NAMES[0],
            random_state=RANDOM_SEED,
            joblib_prefer=joblib_prefer,
        )
>       uplift_model.fit(
            df_train[x_names].values,
            treatment=df_train["treatment_group_key"].values,
            y=df_train[CONVERSION].values,
        )

tests/test_uplift_trees.py:37:


causalml/inference/tree/uplift.pyx:1331: in causalml.inference.tree.uplift.UpliftRandomForestClassifier.fit
(delayed(self.bootstrap)(X, treatment, y, tree) for tree in self.uplift_forest)
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/parallel.py:1054: in __call__
self.retrieve()
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/parallel.py:933: in retrieve
self._output.extend(job.get(timeout=self.timeout))
../../../../opt/anaconda3/lib/python3.8/multiprocessing/pool.py:771: in get
raise self._value
../../../../opt/anaconda3/lib/python3.8/multiprocessing/pool.py:537: in _handle_tasks
put(task)


obj = (140, 0, <joblib._parallel_backends.SafeFunction object at 0x7fa34debefd0>, (), {})

def send(obj):
    buffer = BytesIO()
>   CustomizablePickler(buffer, self._reducers).dump(obj)

E _pickle.PicklingError: Can't pickle <cyfunction UpliftRandomForestClassifier.bootstrap at 0x7fa378fb05f0>: attribute lookup bootstrap on causalml.inference.tree.uplift failed

../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/pool.py:156: PicklingError
_____________________________________________________________________________________________________________ test_UpliftRandomForestClassifier[processes-loky] ______________________________________________________________________________________________________________
joblib.externals.loky.process_executor._RemoteTraceback:
"""
Traceback (most recent call last):
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/loky/backend/queues.py", line 153, in feed
obj = dumps(obj, reducers=reducers)
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/loky/backend/reduction.py", line 271, in dumps
dump(obj, buf, reducers=reducers, protocol=protocol)
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/loky/backend/reduction.py", line 264, in dump
_LokyPickler(file, reducers=reducers, protocol=protocol).dump(obj)
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/cloudpickle/cloudpickle_fast.py", line 563, in dump
return Pickler.dump(self, obj)
_pickle.PicklingError: Can't pickle <cyfunction UpliftRandomForestClassifier.bootstrap at 0x7fa378fb05f0>: attribute lookup bootstrap on causalml.inference.tree.uplift failed
"""

The above exception was the direct cause of the following exception:

generate_classification_data = <function generate_classification_data.<locals>._generate_data at 0x7fa34a1ba430>, backend = 'loky', joblib_prefer = 'processes'

@pytest.mark.parametrize("backend", ["loky", "threading", "multiprocessing"])
@pytest.mark.parametrize("joblib_prefer", ["threads", "processes"])
def test_UpliftRandomForestClassifier(
    generate_classification_data, backend, joblib_prefer
):
    df, x_names = generate_classification_data()
    df_train, df_test = train_test_split(df, test_size=0.2, random_state=RANDOM_SEED)

    with parallel_backend(backend):
        # Train the UpLift Random Forest classifier
        uplift_model = UpliftRandomForestClassifier(
            min_samples_leaf=50,
            control_name=TREATMENT_NAMES[0],
            random_state=RANDOM_SEED,
            joblib_prefer=joblib_prefer,
        )
>       uplift_model.fit(
            df_train[x_names].values,
            treatment=df_train["treatment_group_key"].values,
            y=df_train[CONVERSION].values,
        )

tests/test_uplift_trees.py:37:


causalml/inference/tree/uplift.pyx:1331: in causalml.inference.tree.uplift.UpliftRandomForestClassifier.fit
(delayed(self.bootstrap)(X, treatment, y, tree) for tree in self.uplift_forest)
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/parallel.py:1054: in __call__
self.retrieve()
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/parallel.py:933: in retrieve
self._output.extend(job.get(timeout=self.timeout))
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/_parallel_backends.py:542: in wrap_future_result
return future.result(timeout=timeout)
../../../../opt/anaconda3/lib/python3.8/concurrent/futures/_base.py:432: in result
return self.__get_result()


self = <Future at 0x7fa34a0ed3a0 state=finished raised PicklingError>

def __get_result(self):
    if self._exception:
>       raise self._exception

E _pickle.PicklingError: Could not pickle the task to send it to the workers.

../../../../opt/anaconda3/lib/python3.8/concurrent/futures/_base.py:388: PicklingError
___________________________________________________________________________________________________________ test_UpliftRandomForestClassifier[processes-threading] ___________________________________________________________________________________________________________
joblib.externals.loky.process_executor._RemoteTraceback:
"""
Traceback (most recent call last):
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/loky/backend/queues.py", line 153, in feed
obj = dumps(obj, reducers=reducers)
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/loky/backend/reduction.py", line 271, in dumps
dump(obj, buf, reducers=reducers, protocol=protocol)
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/loky/backend/reduction.py", line 264, in dump
_LokyPickler(file, reducers=reducers, protocol=protocol).dump(obj)
File "/Users/zhenyuzhao-zz/opt/anaconda3/lib/python3.8/site-packages/joblib/externals/cloudpickle/cloudpickle_fast.py", line 563, in dump
return Pickler.dump(self, obj)
_pickle.PicklingError: Can't pickle <cyfunction UpliftTreeClassifier.evaluate_KL at 0x7fa378fa0ba0>: attribute lookup evaluate_KL on causalml.inference.tree.uplift failed
"""

The above exception was the direct cause of the following exception:

generate_classification_data = <function generate_classification_data.<locals>._generate_data at 0x7fa34a1ba430>, backend = 'threading', joblib_prefer = 'processes'

@pytest.mark.parametrize("backend", ["loky", "threading", "multiprocessing"])
@pytest.mark.parametrize("joblib_prefer", ["threads", "processes"])
def test_UpliftRandomForestClassifier(
    generate_classification_data, backend, joblib_prefer
):
    df, x_names = generate_classification_data()
    df_train, df_test = train_test_split(df, test_size=0.2, random_state=RANDOM_SEED)

    with parallel_backend(backend):
        # Train the UpLift Random Forest classifier
        uplift_model = UpliftRandomForestClassifier(
            min_samples_leaf=50,
            control_name=TREATMENT_NAMES[0],
            random_state=RANDOM_SEED,
            joblib_prefer=joblib_prefer,
        )

        uplift_model.fit(
            df_train[x_names].values,
            treatment=df_train["treatment_group_key"].values,
            y=df_train[CONVERSION].values,
        )

        predictions = {}
        predictions["single"] = uplift_model.predict(df_test[x_names].values)
        with parallel_backend("loky", n_jobs=2):
>           predictions["loky_2"] = uplift_model.predict(df_test[x_names].values)

tests/test_uplift_trees.py:46:


../../../../opt/anaconda3/lib/python3.8/site-packages/sklearn/utils/_testing.py:308: in wrapper
return fn(*args, **kwargs)
causalml/inference/tree/uplift.pyx:1379: in causalml.inference.tree.uplift.UpliftRandomForestClassifier.predict
(delayed(tree.predict)(X=X) for tree in self.uplift_forest)
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/parallel.py:1054: in __call__
self.retrieve()
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/parallel.py:933: in retrieve
self._output.extend(job.get(timeout=self.timeout))
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/_parallel_backends.py:542: in wrap_future_result
return future.result(timeout=timeout)
../../../../opt/anaconda3/lib/python3.8/concurrent/futures/_base.py:439: in result
return self.__get_result()


self = <Future at 0x7fa34a091b50 state=finished raised PicklingError>

def __get_result(self):
    if self._exception:
>       raise self._exception

E _pickle.PicklingError: Could not pickle the task to send it to the workers.

../../../../opt/anaconda3/lib/python3.8/concurrent/futures/_base.py:388: PicklingError
________________________________________________________________________________________________________ test_UpliftRandomForestClassifier[processes-multiprocessing] ________________________________________________________________________________________________________

generate_classification_data = <function generate_classification_data.<locals>._generate_data at 0x7fa34a1ba430>, backend = 'multiprocessing', joblib_prefer = 'processes'

@pytest.mark.parametrize("backend", ["loky", "threading", "multiprocessing"])
@pytest.mark.parametrize("joblib_prefer", ["threads", "processes"])
def test_UpliftRandomForestClassifier(
    generate_classification_data, backend, joblib_prefer
):
    df, x_names = generate_classification_data()
    df_train, df_test = train_test_split(df, test_size=0.2, random_state=RANDOM_SEED)

    with parallel_backend(backend):
        # Train the UpLift Random Forest classifier
        uplift_model = UpliftRandomForestClassifier(
            min_samples_leaf=50,
            control_name=TREATMENT_NAMES[0],
            random_state=RANDOM_SEED,
            joblib_prefer=joblib_prefer,
        )
      uplift_model.fit(
            df_train[x_names].values,
            treatment=df_train["treatment_group_key"].values,
            y=df_train[CONVERSION].values,
        )

tests/test_uplift_trees.py:37:


causalml/inference/tree/uplift.pyx:1331: in causalml.inference.tree.uplift.UpliftRandomForestClassifier.fit
(delayed(self.bootstrap)(X, treatment, y, tree) for tree in self.uplift_forest)
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/parallel.py:1054: in __call__
self.retrieve()
../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/parallel.py:933: in retrieve
self._output.extend(job.get(timeout=self.timeout))
../../../../opt/anaconda3/lib/python3.8/multiprocessing/pool.py:771: in get
raise self._value
../../../../opt/anaconda3/lib/python3.8/multiprocessing/pool.py:537: in _handle_tasks
put(task)


obj = (170, 0, <joblib._parallel_backends.SafeFunction object at 0x7fa34f7ad7f0>, (), {})

def send(obj):
    buffer = BytesIO()
  CustomizablePickler(buffer, self._reducers).dump(obj)

E _pickle.PicklingError: Can't pickle <cyfunction UpliftRandomForestClassifier.bootstrap at 0x7fa378fb05f0>: attribute lookup bootstrap on causalml.inference.tree.uplift failed

../../../../opt/anaconda3/lib/python3.8/site-packages/joblib/pool.py:156: PicklingError
============================================================================================================================== warnings summary ==============================================================================================================================
../../../../opt/anaconda3/lib/python3.8/site-packages/scipy/fft/__init__.py:97
The module numpy.dual is deprecated. Instead of using dual, use the functions directly from numpy or scipy.

../../../../opt/anaconda3/lib/python3.8/site-packages/scipy/special/orthogonal.py:81: 2 warnings
tests/test_datasets.py: 16 warnings
tests/test_ivlearner.py: 8 warnings
tests/test_meta_learners.py: 48 warnings
np.int is a deprecated alias for the builtin int. To silence this warning, use int by itself. Doing this will not modify any behavior and is safe. When replacing np.int, you may wish to use e.g. np.int64 or np.int32 to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations

../../../../opt/anaconda3/lib/python3.8/site-packages/scipy/io/matlab/mio5.py:98
np.bool is a deprecated alias for the builtin bool. To silence this warning, use bool by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use np.bool_ here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations

../../../../opt/anaconda3/lib/python3.8/site-packages/patsy/constraint.py:13
Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working

tests/test_cevae.py::test_CEVAE
tests/test_cevae.py::test_CEVAE
tests/test_cevae.py::test_CEVAE
To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).

tests/test_counterfactual_unit_selection.py: 1032 warnings
tests/test_meta_learners.py: 209 warnings
tests/test_value_optimization.py: 305 warnings
tostring() is deprecated. Use tobytes() instead.

tests/test_datasets.py::test_get_synthetic_preds[simulate_hidden_confounder]
tests/test_datasets.py::test_get_synthetic_preds[simulate_hidden_confounder]
tests/test_datasets.py::test_get_synthetic_preds[simulate_hidden_confounder]
tests/test_datasets.py::test_get_synthetic_preds[simulate_hidden_confounder]
invalid value encountered in true_divide

tests/test_features.py::test_load_data
tests/test_features.py::test_load_data
tests/test_features.py::test_load_data
np.object is a deprecated alias for the builtin object. To silence this warning, use object by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations

tests/test_features.py::test_LabelEncoder
tests/test_features.py::test_LabelEncoder

A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy

tests/test_meta_learners.py::test_BaseSClassifier
tests/test_meta_learners.py::test_BaseXClassifier
tests/test_meta_learners.py::test_BaseXClassifier
tests/test_propensity.py::test_gradientboosted_propensity_model
tests/test_propensity.py::test_gradientboosted_propensity_model_earlystopping
The use of label encoder in XGBClassifier is deprecated and will be removed in a future release. To remove this warning, do the following: 1) Pass option use_label_encoder=False when constructing XGBClassifier object; and 2) Encode your labels (y) as integers starting with 0, i.e. 0, 1, 2, ..., [num_class - 1].

tests/test_meta_learners.py::test_BaseRClassifier
tests/test_meta_learners.py::test_BaseRClassifier_with_sample_weights
inspect.getargspec() is deprecated since Python 3.0, use inspect.signature() or inspect.getfullargspec()

-- Docs: https://docs.pytest.org/en/stable/warnings.html

---------- coverage: platform darwin, python 3.8.8-final-0 -----------
Name Stmts Miss Cover

causalml/__init__.py 3 0 100%
causalml/dataset/__init__.py 14 0 100%
causalml/dataset/classification.py 73 20 73%
causalml/dataset/regression.py 54 0 100%
causalml/dataset/synthetic.py 244 103 58%
causalml/feature_selection/__init__.py 1 1 0%
causalml/features.py 85 10 88%
causalml/inference/__init__.py 0 0 100%
causalml/inference/iv/__init__.py 2 0 100%
causalml/inference/iv/drivlearner.py 258 58 78%
causalml/inference/iv/iv_regression.py 17 8 53%
causalml/inference/meta/__init__.py 6 0 100%
causalml/inference/meta/base.py 71 17 76%
causalml/inference/meta/drlearner.py 195 41 79%
causalml/inference/meta/explainer.py 101 80 21%
causalml/inference/meta/rlearner.py 251 35 86%
causalml/inference/meta/slearner.py 178 27 85%
causalml/inference/meta/tlearner.py 170 28 84%
causalml/inference/meta/tmle.py 98 22 78%
causalml/inference/meta/utils.py 49 10 80%
causalml/inference/meta/xlearner.py 252 44 83%
causalml/inference/nn/__init__.py 1 0 100%
causalml/inference/nn/cevae.py 29 3 90%
causalml/inference/tf/__init__.py 1 1 0%
causalml/inference/tf/dragonnet.py 64 64 0%
causalml/inference/tf/utils.py 52 52 0%
causalml/inference/tree/__init__.py 4 0 100%
causalml/inference/tree/plot.py 100 95 5%
causalml/inference/tree/utils.py 43 36 16%
causalml/match.py 185 40 78%
causalml/metrics/__init__.py 7 0 100%
causalml/metrics/classification.py 11 4 64%
causalml/metrics/const.py 1 0 100%
causalml/metrics/regression.py 41 3 93%
causalml/metrics/sensitivity.py 257 50 81%
causalml/metrics/visualize.py 299 237 21%
causalml/optimize/__init__.py 5 0 100%
causalml/optimize/pns.py 23 20 13%
causalml/optimize/policylearner.py 54 39 28%
causalml/optimize/unit_selection.py 90 45 50%
causalml/optimize/utils.py 26 4 85%
causalml/optimize/value_optimization.py 30 3 90%
causalml/propensity.py 74 4 95%

TOTAL 3519 1204 66%

========================================================================================================================== short test summary info ===========================================================================================================================
FAILED tests/test_uplift_trees.py::test_UpliftRandomForestClassifier[threads-loky] - _pickle.PicklingError: Could not pickle the task to send it to the workers.
FAILED tests/test_uplift_trees.py::test_UpliftRandomForestClassifier[threads-threading] - _pickle.PicklingError: Could not pickle the task to send it to the workers.
FAILED tests/test_uplift_trees.py::test_UpliftRandomForestClassifier[threads-multiprocessing] - _pickle.PicklingError: Can't pickle <cyfunction UpliftRandomForestClassifier.bootstrap at 0x7fa378fb05f0>: attribute lookup bootstrap on causalml.inference.tree.uplift failed
FAILED tests/test_uplift_trees.py::test_UpliftRandomForestClassifier[processes-loky] - _pickle.PicklingError: Could not pickle the task to send it to the workers.
FAILED tests/test_uplift_trees.py::test_UpliftRandomForestClassifier[processes-threading] - _pickle.PicklingError: Could not pickle the task to send it to the workers.
FAILED tests/test_uplift_trees.py::test_UpliftRandomForestClassifier[processes-multiprocessing] - _pickle.PicklingError: Can't pickle <cyfunction UpliftRandomForestClassifier.bootstrap at 0x7fa378fb05f0>: attribute lookup bootstrap on causalml.inference.tree.uplift f...
========================================================================================================== 6 failed, 58 passed, 1642 warnings in 737.46s (0:12:17) ===========================================================================================================
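For context, the `attribute lookup bootstrap on causalml.inference.tree.uplift failed` error is pickle's standard failure mode for callables that are not importable by name from their module, which is how Cython `cyfunction` methods behave by default. A minimal sketch of the same failure mode in plain Python (not the Cython code from the traceback):

```python
import pickle

def make_worker():
    # A nested function is not importable by name from its module,
    # so pickle cannot serialize it -- the same failure mode the
    # traceback above shows for the Cython-defined bootstrap method.
    def worker(x):
        return x * 2
    return worker

# The module-level factory itself pickles fine (it is found by name).
pickle.dumps(make_worker)

try:
    pickle.dumps(make_worker())
    pickling_failed = False
except (pickle.PicklingError, AttributeError):
    pickling_failed = True

print(pickling_failed)  # True: non-module-level callables are unpicklable
```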

@paullo0106 (Collaborator)

Seems similar to #461, which I ran into on my previous MacBook, but for some reason it doesn't happen for me at the moment
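One thing worth noting while diagnosing this: thread-based execution runs tasks in-process and does not need to pickle the callable, unlike the process-based loky/multiprocessing backends (the fact that the log shows failures even under joblib's threading backend suggests something environment-specific). A minimal sketch using the stdlib `ThreadPoolExecutor` as an analogue, with an illustrative lambda that would be unpicklable for a process pool:

```python
from concurrent.futures import ThreadPoolExecutor

# Threads share memory within one process and never pickle the task,
# so even unpicklable callables (lambdas, nested functions) run fine
# here, while a ProcessPoolExecutor would raise a pickling error.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(lambda x: x * x, range(4)))

print(results)  # [0, 1, 4, 9]
```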
