diff --git a/README.md b/README.md index dadc3a7..3ce789f 100644 --- a/README.md +++ b/README.md @@ -22,7 +22,7 @@ ______________________________________________________________________ This repository consists of two key pieces: -1. Construction and efficient loading of tabular (flat, non-longitudinal) summary features describing patient records in MEDS over arbitrary time windows (e.g. 1 year, 6 months, etc.), which go backwards in time from a given index date. +1. Construction and efficient loading of tabular (flat, non-longitudinal) summary features describing patient records in MEDS over arbitrary time windows (e.g. 1 year, 6 months, etc.), which go backward in time from a given index date. 2. Running a basic XGBoost AutoML pipeline over these tabular features to predict arbitrary binary classification or regression downstream tasks defined over these datasets. The "AutoML" part of this is not particularly advanced -- what is more advanced is the efficient construction, storage, and loading of tabular features for the candidate AutoML models, enabling a far more extensive search over a much larger total number of features than prior systems. ## Quick Start @@ -44,8 +44,8 @@ pip install . ## Scripts and Examples -For an end to end example over MIMIC-IV, see the [MIMIC-IV companion repository](https://github.com/mmcdermott/MEDS_TAB_MIMIC_IV). -For an end to end example over Philips eICU, see the [eICU companion repository](https://github.com/mmcdermott/MEDS_TAB_EICU). +For an end-to-end example over MIMIC-IV, see the [MIMIC-IV companion repository](https://github.com/mmcdermott/MEDS_TAB_MIMIC_IV). +For an end-to-end example over Philips eICU, see the [eICU companion repository](https://github.com/mmcdermott/MEDS_TAB_EICU). See [`/tests/test_integration.py`](https://github.com/mmcdermott/MEDS_Tabular_AutoML/blob/main/tests/test_integration.py) for a local example of the end-to-end pipeline being run on synthetic data. 
This script is a functional test that is also run with `pytest` to verify the correctness of the algorithm. @@ -73,7 +73,7 @@ By following these steps, you can seamlessly transform your dataset, define nece ## Core CLI Scripts Overview -1. **`meds-tab-describe`**: This command processes MEDS data shards to compute the frequencies of different code-types. It differentiates codes into the following categories: +1. **`meds-tab-describe`**: This command processes MEDS data shards to compute the frequencies of different code types. It differentiates codes into the following categories: - time-series codes (codes with timestamps) - time-series numerical values (codes with timestamps and numerical values) @@ -94,9 +94,9 @@ By following these steps, you can seamlessly transform your dataset, define nece tabularization.aggs=[static/present,static/first,code/count,value/count,value/sum,value/sum_sqd,value/min,value/max]" ``` - - For the exhuastive examples of value aggregations, see [`/src/MEDS_tabular_automl/utils.py`](https://github.com/mmcdermott/MEDS_Tabular_AutoML/blob/main/src/MEDS_tabular_automl/utils.py#L24) + - For the exhaustive examples of value aggregations, see [`/src/MEDS_tabular_automl/utils.py`](https://github.com/mmcdermott/MEDS_Tabular_AutoML/blob/main/src/MEDS_tabular_automl/utils.py#L24) -3. **`meds-tab-tabularize-time-series`**: Iterates through combinations of a shard, `window_size`, and `aggregation` to generate feature vectors that aggregate patient data for each unique `patient_id` x `timestamp`. This stage (and the previous stage) use sparse matrix formats to efficiently handle the computational and storage demands of rolling window calculations on large datasets. We support parallelization through Hydra's [`--multirun`](https://hydra.cc/docs/intro/#multirun) flag and the [`joblib` launcher](https://hydra.cc/docs/plugins/joblib_launcher/#internaldocs-banner). +3. 
**`meds-tab-tabularize-time-series`**: Iterates through combinations of a shard, `window_size`, and `aggregation` to generate feature vectors that aggregate patient data for each unique `patient_id` x `timestamp`. This stage (and the previous stage) uses sparse matrix formats to efficiently handle the computational and storage demands of rolling window calculations on large datasets. We support parallelization through Hydra's [`--multirun`](https://hydra.cc/docs/intro/#multirun) flag and the [`joblib` launcher](https://hydra.cc/docs/plugins/joblib_launcher/#internaldocs-banner). **Example: Aggregate time-series data** on features across different `window_sizes` @@ -124,7 +124,7 @@ By following these steps, you can seamlessly transform your dataset, define nece tabularization.aggs=[static/present,static/first,code/count,value/count,value/sum,value/sum_sqd,value/min,value/max] ``` -5. **`meds-tab-xgboost`**: Trains an XGBoost model using user-specified parameters. Permutations of `window_sizes` and `aggs` can be generated using `generate-permutations` command (See the section below for descriptions). +5. **`meds-tab-xgboost`**: Trains an XGBoost model using user-specified parameters. Combinations of `window_sizes` and `aggs` can be generated using the `generate-subsets` command (see the section below for descriptions). 
```console meds-tab-xgboost --multirun \ @@ -132,26 +132,26 @@ By following these steps, you can seamlessly transform your dataset, define nece task_name=$TASK \ output_dir="output_directory" \ tabularization.min_code_inclusion_frequency=10 \ - tabularization.window_sizes=$(generate-permutations [1d,30d,365d,full]) \ + tabularization.window_sizes=$(generate-subsets [1d,30d,365d,full]) \ do_overwrite=False \ - tabularization.aggs=$(generate-permutations [static/present,static/first,code/count,value/count,value/sum,value/sum_sqd,value/min,value/max]) + tabularization.aggs=$(generate-subsets [static/present,static/first,code/count,value/count,value/sum,value/sum_sqd,value/min,value/max]) ``` ## Additional CLI Scripts -1. **`generate-permutations`**: Generates and prints a sorted list of all permutations from a comma separated input. This is provided for the convenience of sweeping over all possible combinations of window sizes and aggregations. +1. **`generate-subsets`**: Generates and prints a sorted list of all non-empty subsets from a comma-separated input. This is provided for the convenience of sweeping over all possible combinations of window sizes and aggregations. - For example you can directly call **`generate-permutations`** in the command line: + For example, you can directly call **`generate-subsets`** in the command line: ```console - generate-permutations [2,3,4] \ + generate-subsets [2,3,4] \ [2], [2, 3], [2, 3, 4], [2, 4], [3], [3, 4], [4] ``` This could be used in the command line in concert with other calls. 
For example, the following call: ```console - meds-tab-xgboost --multirun tabularization.window_sizes=$(generate-permutations [1d,2d,7d,full]) + meds-tab-xgboost --multirun tabularization.window_sizes=$(generate-subsets [1d,2d,7d,full]) ``` would resolve to: @@ -298,7 +298,7 @@ Now that we have generated tabular features for all the events in our dataset, w - **Row Selection Based on Tasks**: Only the data rows that are relevant to the specific tasks are selected and cached. This reduces the memory footprint and speeds up the training process. - **Use of Sparse Matrices for Efficient Storage**: Sparse matrices are again employed here to store the selected data efficiently, ensuring that only non-zero data points are kept in memory, thus optimizing both storage and retrieval times. -The file structure for the cached data mirrors that of the tabular data, also consisting of `.npz` files, where users must specify the directory that stores labels. Labels follow the same shard filestructure as the input meds data from step (1), and the label parquets need `patient_id`, `timestamp`, and `label` columns. +The file structure for the cached data mirrors that of the tabular data, also consisting of `.npz` files, where users must specify the directory that stores labels. Labels follow the same shard file structure as the input MEDS data from step (1), and the label parquets need `patient_id`, `timestamp`, and `label` columns. ## 4. XGBoost Training @@ -308,7 +308,7 @@ The final stage uses the processed and cached data to train an XGBoost model. Th - **Iterator for Data Loading**: Custom iterators are designed to load sparse matrices efficiently into the XGBoost training process, which can handle sparse inputs natively, thus maintaining high computational efficiency. - **Training and Validation**: The model is trained using the tabular data, with evaluation steps that include early stopping to prevent overfitting and tuning of hyperparameters based on validation performance. 
-- **Hyperaparameter Tuning**: We use [optuna](https://optuna.org/) to tune over XGBoost model pramters, aggregations, window sizes, and the minimimum code inclusion frequency. +- **Hyperparameter Tuning**: We use [optuna](https://optuna.org/) to tune over XGBoost model parameters, aggregations, window sizes, and the minimum code inclusion frequency. ______________________________________________________________________ @@ -331,15 +331,15 @@ The benchmarking tests were conducted using the following hardware and software ### MEDS-Tab Tabularization Technique -Tabularization of time-series data, as depecited above, is commonly used in several past works. The only two libraries to our knowledge that provide a full tabularization pipeline are `tsfresh` and `catabra`. `catabra` also offers a slower but more memory efficient version of their method which we denote `catabra-mem`. Other libraries either provide only rolling window functionalities (`featuretools`) or just pivoting operations (`Temporai`/`Clairvoyance`, `sktime`, `AutoTS`). We provide a significantly faster and more memory efficient method. Our findings show that on the MIMIC-IV and eICU medical datasets we significantly outperform both above-mentioned methods that provide similar functionalities with MEDS-Tab. While `catabra` and `tsfresh` could not even run within a budget of 10 minutes on as low as 10 patient's data for eICU, our method scales to process hundreds of patients with low memory usage under the same time budget. We present the results below. +Tabularization of time-series data, as depicted above, is commonly used in several past works. The only two libraries to our knowledge that provide a full tabularization pipeline are `tsfresh` and `catabra`. `catabra` also offers a slower but more memory-efficient version of their method which we denote `catabra-mem`. 
Other libraries either provide only rolling window functionalities (`featuretools`) or just pivoting operations (`Temporai`/`Clairvoyance`, `sktime`, `AutoTS`). We provide a significantly faster and more memory-efficient method. Our findings show that on the MIMIC-IV and eICU medical datasets, we significantly outperform both above-mentioned methods that provide similar functionalities with MEDS-Tab. While `catabra` and `tsfresh` could not even run within a budget of 10 minutes on as few as 10 patients' data for eICU, our method scales to process hundreds of patients with low memory usage under the same time budget. We present the results below. ## 2. Comparative Performance Analysis -The tables below detail computational resource utilization across two datasets and various patient scales, emphasizing the better performance of MEDS-Tab in all of the scenarios. The tables are organized by dataset and number of patients. For the analysis, the full window sizes and the aggregation method code_count were used. Additionally, we use a budget of 10 minutes for running our tests given that for such small number of patients (10, 100, and 500 patients) data should be processed quickly. Note that `catabra-mem` is omitted from the tables as it never completed within the 10 minute budget. +The tables below detail computational resource utilization across two datasets and various patient scales, emphasizing the superior performance of MEDS-Tab in all scenarios. The tables are organized by dataset and number of patients. For the analysis, the full window sizes and the aggregation method code_count were used. Additionally, we use a budget of 10 minutes for running our tests given that for such a small number of patients (10, 100, and 500 patients) data should be processed quickly. Note that `catabra-mem` is omitted from the tables as it never completed within the 10-minute budget. ### eICU Dataset -The only method that was able to tabularize eICU data was MEDS-Tab. 
We ran our method with both 100 and 500 patients, resulting in an increment by three times in the number of codes. MEDS-Tab gave efficient results in terms of both time and memory usage. +The only method that was able to tabularize eICU data was MEDS-Tab. We ran our method with both 100 and 500 patients, resulting in a threefold increase in the number of codes. MEDS-Tab gave efficient results in terms of both time and memory usage. a) 100 Patients @@ -419,7 +419,7 @@ meds-tab-xgboost do_overwrite=False \ ``` -This uses the defaults minimum code inclusion frequency, window sizes, and aggregations from the `launch_xgboost.yaml`: +This uses the default minimum code inclusion frequency, window sizes, and aggregations from the `launch_xgboost.yaml`: ```yaml allowed_codes: # allows all codes that meet min code inclusion frequency @@ -486,9 +486,9 @@ meds-tab-xgboost --multirun \ MEDS_cohort_dir="path_to_data" \ task_name=$TASK \ output_dir="output_directory" \ - tabularization.window_sizes=$(generate-permutations [1d,30d,365d,full]) \ + tabularization.window_sizes=$(generate-subsets [1d,30d,365d,full]) \ do_overwrite=False \ - tabularization.aggs=$(generate-permutations [static/present,code/count,value/count,value/sum,value/sum_sqd,value/min,value/max]) + tabularization.aggs=$(generate-subsets [static/present,code/count,value/count,value/sum,value/sum_sqd,value/min,value/max]) ``` The model parameters were set to: @@ -541,7 +541,7 @@ For a complete example on MIMIC-IV and for all of our config files, see the [MIM #### 2.2 XGBoost Optimal Found Model Parameters -Additionally, the model parameters from the highest performing run are reported below. +Additionally, the model parameters from the highest-performing run are reported below. 
| Task | Index Timestamp | Eta | Lambda | Alpha | Subsample | Minimum Child Weight | Number of Boosting Rounds | Early Stopping Rounds | Max Tree Depth | | ------------------------------- | ----------------- | ----- | ------ | ----- | --------- | -------------------- | ------------------------- | --------------------- | -------------- | @@ -563,7 +563,7 @@ Additionally, the model parameters from the highest performing run are reported The eICU sweep was conducted equivalently to the MIMIC-IV sweep. Please refer to the MIMIC-IV Sweep subsection above for details on the commands and sweep parameters. -For more details about eICU specific task generation and running, see the [eICU companion repository](https://github.com/mmcdermott/MEDS_TAB_EICU). +For more details about eICU-specific task generation and running, see the [eICU companion repository](https://github.com/mmcdermott/MEDS_TAB_EICU). #### 1. XGBoost Performance on eICU diff --git a/docs/source/overview.md b/docs/source/overview.md index 5132c11..af596e6 100644 --- a/docs/source/overview.md +++ b/docs/source/overview.md @@ -2,7 +2,7 @@ This repository consists of two key pieces: -1. Construction and efficient loading of tabular (flat, non-longitudinal) summary features describing patient records in MEDS over arbitrary time windows (e.g. 1 year, 6 months, etc.), which go backwards in time from a given index date. +1. Construction and efficient loading of tabular (flat, non-longitudinal) summary features describing patient records in MEDS over arbitrary time windows (e.g. 1 year, 6 months, etc.), which go backward in time from a given index date. 2. Running a basic XGBoost AutoML pipeline over these tabular features to predict arbitrary binary classification or regression downstream tasks defined over these datasets. 
The "AutoML" part of this is not particularly advanced -- what is more advanced is the efficient construction, storage, and loading of tabular features for the candidate AutoML models, enabling a far more extensive search over a much larger total number of features than prior systems. ## Quick Start @@ -24,14 +24,14 @@ pip install . ## Scripts and Examples -For an end to end example over MIMIC-IV, see the [MIMIC-IV companion repository](https://github.com/mmcdermott/MEDS_TAB_MIMIC_IV). -For an end to end example over Philips eICU, see the [eICU companion repository](https://github.com/mmcdermott/MEDS_TAB_EICU). +For an end-to-end example over MIMIC-IV, see the [MIMIC-IV companion repository](https://github.com/mmcdermott/MEDS_TAB_MIMIC_IV). +For an end-to-end example over Philips eICU, see the [eICU companion repository](https://github.com/mmcdermott/MEDS_TAB_EICU). See [`/tests/test_integration.py`](https://github.com/mmcdermott/MEDS_Tabular_AutoML/blob/main/tests/test_integration.py) for a local example of the end-to-end pipeline being run on synthetic data. This script is a functional test that is also run with `pytest` to verify the correctness of the algorithm. ## Core CLI Scripts Overview -1. **`meds-tab-describe`**: This command processes MEDS data shards to compute the frequencies of different code-types. It differentiates codes into the following categories: +1. **`meds-tab-describe`**: This command processes MEDS data shards to compute the frequencies of different code types. 
It differentiates codes into the following categories: - time-series codes (codes with timestamps) - time-series numerical values (codes with timestamps and numerical values) @@ -52,9 +52,9 @@ See [`/tests/test_integration.py`](https://github.com/mmcdermott/MEDS_Tabular_Au tabularization.aggs=[static/present,static/first,code/count,value/count,value/sum,value/sum_sqd,value/min,value/max]" ``` - - For the exhuastive examples of value aggregations, see [`/src/MEDS_tabular_automl/utils.py`](https://github.com/mmcdermott/MEDS_Tabular_AutoML/blob/main/src/MEDS_tabular_automl/utils.py#L24) + - For the exhaustive examples of value aggregations, see [`/src/MEDS_tabular_automl/utils.py`](https://github.com/mmcdermott/MEDS_Tabular_AutoML/blob/main/src/MEDS_tabular_automl/utils.py#L24) -3. **`meds-tab-tabularize-time-series`**: Iterates through combinations of a shard, `window_size`, and `aggregation` to generate feature vectors that aggregate patient data for each unique `patient_id` x `timestamp`. This stage (and the previous stage) use sparse matrix formats to efficiently handle the computational and storage demands of rolling window calculations on large datasets. We support parallelization through Hydra's [`--multirun`](https://hydra.cc/docs/intro/#multirun) flag and the [`joblib` launcher](https://hydra.cc/docs/plugins/joblib_launcher/#internaldocs-banner). +3. **`meds-tab-tabularize-time-series`**: Iterates through combinations of a shard, `window_size`, and `aggregation` to generate feature vectors that aggregate patient data for each unique `patient_id` x `timestamp`. This stage (and the previous stage) uses sparse matrix formats to efficiently handle the computational and storage demands of rolling window calculations on large datasets. We support parallelization through Hydra's [`--multirun`](https://hydra.cc/docs/intro/#multirun) flag and the [`joblib` launcher](https://hydra.cc/docs/plugins/joblib_launcher/#internaldocs-banner). 
**Example: Aggregate time-series data** on features across different `window_sizes` @@ -82,7 +82,7 @@ See [`/tests/test_integration.py`](https://github.com/mmcdermott/MEDS_Tabular_Au tabularization.aggs=[static/present,static/first,code/count,value/count,value/sum,value/sum_sqd,value/min,value/max] ``` -5. **`meds-tab-xgboost`**: Trains an XGBoost model using user-specified parameters. Permutations of `window_sizes` and `aggs` can be generated using `generate-permutations` command (See the section below for descriptions). +5. **`meds-tab-xgboost`**: Trains an XGBoost model using user-specified parameters. Combinations of `window_sizes` and `aggs` can be generated using the `generate-subsets` command (see the section below for descriptions). ```console meds-tab-xgboost --multirun \ @@ -90,26 +90,26 @@ See [`/tests/test_integration.py`](https://github.com/mmcdermott/MEDS_Tabular_Au task_name=$TASK \ output_dir="output_directory" \ tabularization.min_code_inclusion_frequency=10 \ - tabularization.window_sizes=$(generate-permutations [1d,30d,365d,full]) \ + tabularization.window_sizes=$(generate-subsets [1d,30d,365d,full]) \ do_overwrite=False \ - tabularization.aggs=$(generate-permutations [static/present,static/first,code/count,value/count,value/sum,value/sum_sqd,value/min,value/max]) + tabularization.aggs=$(generate-subsets [static/present,static/first,code/count,value/count,value/sum,value/sum_sqd,value/min,value/max]) ``` ## Additional CLI Scripts -1. **`generate-permutations`**: Generates and prints a sorted list of all permutations from a comma separated input. This is provided for the convenience of sweeping over all possible combinations of window sizes and aggregations. +1. **`generate-subsets`**: Generates and prints a sorted list of all non-empty subsets from a comma-separated input. This is provided for the convenience of sweeping over all possible combinations of window sizes and aggregations. 
- For example you can directly call **`generate-permutations`** in the command line: + For example, you can directly call **`generate-subsets`** in the command line: ```console - generate-permutations [2,3,4] \ + generate-subsets [2,3,4] \ [2], [2, 3], [2, 3, 4], [2, 4], [3], [3, 4], [4] ``` This could be used in the command line in concert with other calls. For example, the following call: ```console - meds-tab-xgboost --multirun tabularization.window_sizes=$(generate-permutations [1d,2d,7d,full]) + meds-tab-xgboost --multirun tabularization.window_sizes=$(generate-subsets [1d,2d,7d,full]) ``` would resolve to: diff --git a/pyproject.toml b/pyproject.toml index 1dd0be7..2fe30be 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,6 +1,6 @@ [project] name = "meds-tab" -version = "0.0.1" +version = "0.0.2" authors = [ { name="Matthew McDermott", email="mattmcdermott8@gmail.com" }, { name="Nassim Oufattole", email="noufattole@gmail.com" }, @@ -22,7 +22,7 @@ meds-tab-tabularize-static = "MEDS_tabular_automl.scripts.tabularize_static:main meds-tab-tabularize-time-series = "MEDS_tabular_automl.scripts.tabularize_time_series:main" meds-tab-cache-task = "MEDS_tabular_automl.scripts.cache_task:main" meds-tab-xgboost = "MEDS_tabular_automl.scripts.launch_xgboost:main" -generate-permutations = "MEDS_tabular_automl.scripts.generate_permutations:main" +generate-subsets = "MEDS_tabular_automl.scripts.generate_subsets:main" [project.optional-dependencies] diff --git a/src/MEDS_tabular_automl/scripts/generate_permutations.py b/src/MEDS_tabular_automl/scripts/generate_subsets.py similarity index 60% rename from src/MEDS_tabular_automl/scripts/generate_permutations.py rename to src/MEDS_tabular_automl/scripts/generate_subsets.py index 749acc2..7d6cf5d 100644 --- a/src/MEDS_tabular_automl/scripts/generate_permutations.py +++ b/src/MEDS_tabular_automl/scripts/generate_subsets.py @@ -3,10 +3,10 @@ def format_print(permutations: list[tuple[str, ...]]) -> None: - """Prints all 
permutations in a visually formatted string. + """Prints all subsets in a visually formatted string. Args: - permutations: The list of all possible permutations of length > 1. + permutations: The list of all possible subsets of length >= 1. Examples: >>> format_print([('2',), ('2', '3'), ('2', '3', '4'), ('2', '4'), ('3',), ('3', '4'), ('4',)]) @@ -19,26 +19,26 @@ def format_print(permutations: list[tuple[str, ...]]) -> None: print(out_str) -def get_permutations(list_of_options: list[str]) -> None: - """Generates and prints all possible permutations from a list of options. +def get_subsets(list_of_options: list[str]) -> None: + """Generates and prints all possible subsets of length >= 1 from a list of options. Args: list_of_options: The list of options. Examples: - >>> get_permutations(['2', '3', '4']) + >>> get_subsets(['2', '3', '4']) [2],[2,3],[2,3,4],[2,4],[3],[3,4],[4] """ - permutations = [] + sets = [] for i in range(1, len(list_of_options) + 1): - permutations.extend(list(combinations(list_of_options, r=i))) - format_print(sorted(permutations)) + sets.extend(list(combinations(list_of_options, r=i))) + format_print(sorted(sets)) def main(): - """Generates and prints all possible permutations from given list of options.""" + """Generates and prints all possible non-empty subsets from a given list of options.""" list_of_options = list(sys.argv[1].strip("[]").split(",")) - get_permutations(list_of_options) + get_subsets(list_of_options) if __name__ == "__main__": diff --git a/tests/test_integration.py b/tests/test_integration.py index 3c0bee8..ecc229b 100644 --- a/tests/test_integration.py +++ b/tests/test_integration.py @@ -236,11 +236,9 @@ def test_integration(): out_f.parent.mkdir(parents=True, exist_ok=True) df.write_parquet(out_f) - stderr, stdout_ws = run_command( - "generate-permutations", ["[30d]"], {}, "generate-permutations window_sizes" - ) + stderr, stdout_ws = run_command("generate-subsets", ["[30d]"], {}, "generate-subsets window_sizes") stderr, 
stdout_agg = run_command( - "generate-permutations", ["[static/present,static/first]"], {}, "generate-permutations aggs" + "generate-subsets", ["[static/present,static/first]"], {}, "generate-subsets aggs" ) stderr, stdout = run_command( diff --git a/tests/test_tabularize.py b/tests/test_tabularize.py index 0a33409..3f64574 100644 --- a/tests/test_tabularize.py +++ b/tests/test_tabularize.py @@ -351,12 +351,8 @@ def run_command(script: str, args: list[str], hydra_kwargs: dict[str, str], test def test_xgboost_config(): MEDS_cohort_dir = "blah" - stderr, stdout_ws = run_command( - "generate-permutations", ["[30d]"], {}, "generate-permutations window_sizes" - ) - stderr, stdout_agg = run_command( - "generate-permutations", ["[static/present]"], {}, "generate-permutations aggs" - ) + stderr, stdout_ws = run_command("generate-subsets", ["[30d]"], {}, "generate-subsets window_sizes") + stderr, stdout_agg = run_command("generate-subsets", ["[static/present]"], {}, "generate-subsets aggs") xgboost_config_kwargs = { "MEDS_cohort_dir": MEDS_cohort_dir, "do_overwrite": False,
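Reviewer note on the `generate-permutations` → `generate-subsets` rename above: the renamed script's logic boils down to `itertools.combinations` taken at every length from 1 to n, then sorted. A minimal standalone sketch of that behavior (function names mirror the renamed script, but this is an illustration, not the packaged module):

```python
from itertools import combinations


def get_subsets(options: list[str]) -> list[tuple[str, ...]]:
    """Return every non-empty subset of `options`, sorted lexicographically."""
    subsets: list[tuple[str, ...]] = []
    # r = 1 .. len(options): singletons up through the full set
    for r in range(1, len(options) + 1):
        subsets.extend(combinations(options, r=r))
    return sorted(subsets)


def format_subsets(subsets: list[tuple[str, ...]]) -> str:
    """Render subsets in the bracketed style the CLI prints for Hydra sweeps."""
    return ",".join("[" + ",".join(s) + "]" for s in subsets)


print(format_subsets(get_subsets(["2", "3", "4"])))
# matches the doctest in the diff: [2],[2,3],[2,3,4],[2,4],[3],[3,4],[4]
```

This also makes the rename's motivation concrete: the output enumerates 2^n - 1 unordered subsets, not permutations, which is exactly what a sweep over window-size and aggregation combinations needs.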