
Commit

enhance parallel_config
linjing-lab committed Nov 23, 2023
1 parent d0fa4f6 commit 9651774
Showing 6 changed files with 14 additions and 11 deletions.
11 changes: 6 additions & 5 deletions released_box/README.md
@@ -51,6 +51,7 @@ train_val:
- patience: *int=10*, define the value that works together with tolerance to extend the detection length. [10, 100].
- backend: *str='threading'*, configure the acceleration backend used in the inner process. 'threading', 'multiprocessing', 'loky'.
- n_jobs: *int=-1*, define the number of jobs manually according to users' needs. -1 or any int value > 0. (if n_jobs=1, parallel processing will be turned off to save cuda memory.)
- prefer: *str='threads'*, configure a soft hint to choose the default backend. 'threads', 'processes'. (prefer 'threading' & 'threads' when runs with 'loky' and 'processes' fail, or turn to v1.6.1; see the sketch after this list.)
- early_stop: *bool=False*, define whether to enable the early_stop process. False or True.
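
A minimal sketch of how these parallel options combine in a `train_val` call, assuming `model` is an already-initialized perming estimator whose dataset has been loaded via `data_loader`:

```python
# Hedged sketch: accelerate validation inside train_val with the options above.
# `model` is assumed to be any initialized perming estimator after data_loader().
model.train_val(
    num_epochs=60,        # total training epochs
    interval=100,         # print a log line every 100 steps
    tolerance=1e-3,       # loss tolerance coordinated with patience
    patience=10,          # detection length, any value in [10, 100]
    backend='threading',  # 'threading', 'multiprocessing', or 'loky'
    n_jobs=-1,            # -1: use all workers; 1: disable parallelism to save CUDA memory
    prefer='threads',     # soft hint; keep 'threads' if 'processes'/'loky' fails
    early_stop=True,      # stop early once no further improvement is detected
)
```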

test:
@@ -68,7 +69,7 @@ save or load:
| `__init__` | input_: int<br />num_classes: int<br />hidden_layer_sizes: Tuple[int]=(100,)<br />device: str='cuda'<br />*<br />activation: str='relu'<br />inplace_on: bool=False<br />criterion: str='CrossEntropyLoss'<br />solver: str='adam'<br />batch_size: int=32<br />learning_rate_init: float=1e-2<br />lr_scheduler: Optional[str]=None | Initialize Classifier or Regressier Based on Basic Information of the Dataset Obtained through Data Preprocessing and Feature Engineering. |
| print_config | / | Return Initialized Parameters of Multi-layer Perceptron and Graph. |
| data_loader | features: TabularData<br />labels: TabularData<br />ratio_set: Dict[str, int]={'train': 8, 'test': 1, 'val': 1}<br />worker_set: Dict[str, int]={'train': 8, 'test': 2, 'val': 1}<br />random_seed: Optional[int]=None | Using `ratio_set` and `worker_set` to Load the Numpy Dataset into `torch.utils.data.DataLoader`. |
| train_val | num_epochs: int=2<br />interval: int=100<br />tolerance: float=1e-3<br />patience: int=10<br />backend: str='threading'<br />n_jobs: int=-1<br />early_stop: bool=False | Using `num_epochs`, `tolerance`, `patience` to Control Training Process and `interval` to Adjust Print Interval with Accelerated Validation Combined with `backend` and `n_jobs`. |
| train_val | num_epochs: int=2<br />interval: int=100<br />tolerance: float=1e-3<br />patience: int=10<br />backend: str='threading'<br />n_jobs: int=-1<br />prefer: str='threads'<br />early_stop: bool=False | Using `num_epochs`, `tolerance`, `patience` to Control Training Process and `interval` to Adjust Print Interval with Accelerated Validation Combined with `backend` and `n_jobs`. |
| test | sort_by: str='accuracy'<br />sort_state: bool=True | Sort Returned Test Result about Correct Classes with `sort_by` and `sort_state` Which Only Appears in Classification. |
| save | con: bool=True<br />dir: str='./model' | Save Trained Model Parameters with Model `state_dict` Control by `con`. |
| load | con: bool=True<br />dir: str='./model' | Load Trained Model Parameters with Model `state_dict` Control by `con`. |
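
Taken together, the methods above form one workflow. A hedged end-to-end sketch follows, with `perming.Box` used as a stand-in for the class this table documents; the class name and the toy data are assumptions, not part of the diff:

```python
# Hedged end-to-end sketch of the documented workflow; `perming.Box` and the
# random data are assumptions -- substitute the estimator this table documents.
import numpy as np
import perming

features = np.random.rand(1000, 20).astype(np.float32)  # TabularData features
labels = np.random.randint(0, 4, size=(1000,))           # 4 target classes

model = perming.Box(20, 4, (60,), 'cuda',
                    activation='relu', criterion='CrossEntropyLoss',
                    solver='adam', batch_size=64, learning_rate_init=1e-3)
model.print_config()                                      # echo initialized parameters
model.data_loader(features, labels,
                  ratio_set={'train': 8, 'test': 1, 'val': 1},
                  worker_set={'train': 8, 'test': 2, 'val': 1},
                  random_seed=0)
model.train_val(num_epochs=2, backend='threading', n_jobs=-1,
                prefer='threads', early_stop=True)
model.test(sort_by='accuracy', sort_state=True)           # classification-only sorting
model.save(con=True, dir='./model')                       # persist the model state_dict
```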
@@ -83,7 +84,7 @@ save or load:
| print_config | / | Return Initialized Parameters of Multi-layer Perceptron and Graph. |
| data_loader | features: TabularData<br />labels: TabularData<br />ratio_set: Dict[str, int]={'train': 8, 'test': 1, 'val': 1}<br />worker_set: Dict[str, int]={'train': 8, 'test': 2, 'val': 1}<br />random_seed: Optional[int]=None | Using `ratio_set` and `worker_set` to Load the Regression Dataset with Numpy format into `torch.utils.data.DataLoader`. |
| set_freeze | require_grad: Dict[int, bool] | Freeze Some Layers by Setting `requires_grad=False` if a Trained Model Will Be Loaded to Execute Experiments. |
| train_val | num_epochs: int=2<br />interval: int=100<br />tolerance: float=1e-3<br />patience: int=10<br />backend: str='threading'<br />n_jobs: int=-1<br />early_stop: bool=False | Using `num_epochs`, `tolerance`, `patience` to Control Training Process and `interval` to Adjust Print Interval with Accelerated Validation Combined with `backend` and `n_jobs`. |
| train_val | num_epochs: int=2<br />interval: int=100<br />tolerance: float=1e-3<br />patience: int=10<br />backend: str='threading'<br />n_jobs: int=-1<br />prefer: str='threads'<br />early_stop: bool=False | Using `num_epochs`, `tolerance`, `patience` to Control Training Process and `interval` to Adjust Print Interval with Accelerated Validation Combined with `backend` and `n_jobs`. |
| test | / | Test Module Only Shows Loss at 3 Stages: Train, Test, Val. |
| save | con: bool=True<br />dir: str='./model' | Save Trained Model Parameters with Model `state_dict` Control by `con`. |
| load | con: bool=True<br />dir: str='./model' | Load Trained Model Parameters with Model `state_dict` Control by `con`. |
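
Where a saved model is reloaded for further experiments, `set_freeze` expects a `Dict[int, bool]`. A small hedged sketch, assuming `model` is an initialized perming regressor and that the integer keys index the perceptron's layers in order:

```python
# Hedged sketch: reload a trained regressor and freeze its first layer.
# Interpreting the integer keys as ordered layer indices is an assumption.
model.load(con=True, dir='./model')      # restore the trained state_dict
model.set_freeze({0: False})             # requires_grad=False for layer 0
model.train_val(num_epochs=1, n_jobs=1)  # brief fine-tuning, parallel validation off
```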
@@ -96,7 +97,7 @@ save or load:
| print_config | / | Return Initialized Parameters of Multi-layer Perceptron and Graph. |
| data_loader | features: TabularData<br />labels: TabularData<br />ratio_set: Dict[str, int]={'train': 8, 'test': 1, 'val': 1}<br />worker_set: Dict[str, int]={'train': 8, 'test': 2, 'val': 1}<br />random_seed: Optional[int]=None | Using `ratio_set` and `worker_set` to Load the Binary-classification Dataset with Numpy format into `torch.utils.data.DataLoader`. |
| set_freeze | require_grad: Dict[int, bool] | Freeze Some Layers by Setting `requires_grad=False` if a Trained Model Will Be Loaded to Execute Experiments. |
| train_val | num_epochs: int=2<br />interval: int=100<br />tolerance: float=1e-3<br />patience: int=10<br />backend: str='threading'<br />n_jobs: int=-1<br />early_stop: bool=False | Using `num_epochs`, `tolerance`, `patience` to Control Training Process and `interval` to Adjust Print Interval with Accelerated Validation Combined with `backend` and `n_jobs`. |
| train_val | num_epochs: int=2<br />interval: int=100<br />tolerance: float=1e-3<br />patience: int=10<br />backend: str='threading'<br />n_jobs: int=-1<br />prefer: str='threads'<br />early_stop: bool=False | Using `num_epochs`, `tolerance`, `patience` to Control Training Process and `interval` to Adjust Print Interval with Accelerated Validation Combined with `backend` and `n_jobs`. |
| test | sort_by: str='accuracy'<br />sort_state: bool=True | Test Module Shows Correct Classes and Loss at 3 Stages: Train, Test, Val. |
| save | con: bool=True<br />dir: str='./model' | Save Trained Model Parameters with Model `state_dict` Control by `con`. |
| load | con: bool=True<br />dir: str='./model' | Load Trained Model Parameters with Model `state_dict` Control by `con`. |
@@ -109,7 +110,7 @@ save or load:
| print_config | / | Return Initialized Parameters of Multi-layer Perceptron and Graph. |
| data_loader | features: TabularData<br />labels: TabularData<br />ratio_set: Dict[str, int]={'train': 8, 'test': 1, 'val': 1}<br />worker_set: Dict[str, int]={'train': 8, 'test': 2, 'val': 1}<br />random_seed: Optional[int]=None | Using `ratio_set` and `worker_set` to Load the Multi-classification Dataset with Numpy format into `torch.utils.data.DataLoader`. |
| set_freeze | require_grad: Dict[int, bool] | Freeze Some Layers by Setting `requires_grad=False` if a Trained Model Will Be Loaded to Execute Experiments. |
| train_val | num_epochs: int=2<br />interval: int=100<br />tolerance: float=1e-3<br />patience: int=10<br />backend: str='threading'<br />n_jobs: int=-1<br />early_stop: bool=False | Using `num_epochs`, `tolerance`, `patience` to Control Training Process and `interval` to Adjust Print Interval with Accelerated Validation Combined with `backend` and `n_jobs`. |
| train_val | num_epochs: int=2<br />interval: int=100<br />tolerance: float=1e-3<br />patience: int=10<br />backend: str='threading'<br />n_jobs: int=-1<br />prefer: str='threads'<br />early_stop: bool=False | Using `num_epochs`, `tolerance`, `patience` to Control Training Process and `interval` to Adjust Print Interval with Accelerated Validation Combined with `backend` and `n_jobs`. |
| test | sort_by: str='accuracy'<br />sort_state: bool=True | Sort Returned Test Result about Correct Classes with `sort_by` and `sort_state` Which Only Appears in Classification. |
| save | con: bool=True<br />dir: str='./model' | Save Trained Model Parameters with Model `state_dict` Control by `con`. |
| load | con: bool=True<br />dir: str='./model' | Load Trained Model Parameters with Model `state_dict` Control by `con`. |
@@ -122,7 +123,7 @@ save or load:
| print_config | / | Return Initialized Parameters of Multi-layer Perceptron and Graph. |
| data_loader | features: TabularData<br />labels: TabularData<br />ratio_set: Dict[str, int]={'train': 8, 'test': 1, 'val': 1}<br />worker_set: Dict[str, int]={'train': 8, 'test': 2, 'val': 1}<br />random_seed: Optional[int]=None | Using `ratio_set` and `worker_set` to Load the Multi-outputs Dataset with Numpy format into `torch.utils.data.DataLoader`. |
| set_freeze | require_grad: Dict[int, bool] | Freeze Some Layers by Setting `requires_grad=False` if a Trained Model Will Be Loaded to Execute Experiments. |
| train_val | num_epochs: int=2<br />interval: int=100<br />tolerance: float=1e-3<br />patience: int=10<br />backend: str='threading'<br />n_jobs: int=-1<br />early_stop: bool=False | Using `num_epochs`, `tolerance`, `patience` to Control Training Process and `interval` to Adjust Print Interval with Accelerated Validation Combined with `backend` and `n_jobs`. |
| train_val | num_epochs: int=2<br />interval: int=100<br />tolerance: float=1e-3<br />patience: int=10<br />backend: str='threading'<br />n_jobs: int=-1<br />prefer: str='threads'<br />early_stop: bool=False | Using `num_epochs`, `tolerance`, `patience` to Control Training Process and `interval` to Adjust Print Interval with Accelerated Validation Combined with `backend` and `n_jobs`. |
| test | / | Test Module Only Shows Loss at 3 Stages: Train, Test, Val. |
| save | con: bool=True<br />dir: str='./model' | Save Trained Model Parameters with Model `state_dict` Control by `con`. |
| load | con: bool=True<br />dir: str='./model' | Load Trained Model Parameters with Model `state_dict` Control by `con`. |
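
The multi-outputs table differs mainly in the label shape and in `test` reporting only the three-stage losses. A hedged sketch, where `ranker` stands for an already-initialized `Ranker` (its constructor row is collapsed in the diff above):

```python
# Hedged sketch for the multi-outputs case; `ranker` is assumed to be an
# already-initialized perming Ranker (its constructor is collapsed above).
import numpy as np

features = np.random.rand(1000, 20).astype(np.float32)
targets = np.random.rand(1000, 3).astype(np.float32)    # 3 regression targets per sample

ranker.data_loader(features, targets,
                   ratio_set={'train': 8, 'test': 1, 'val': 1},
                   worker_set={'train': 8, 'test': 2, 'val': 1})
ranker.train_val(num_epochs=2, prefer='threads', early_stop=True)
ranker.test()                                            # train/test/val loss only
```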
2 changes: 1 addition & 1 deletion released_box/perming/__init__.py
@@ -27,4 +27,4 @@
'Multi-outputs': Ranker
}

__version__ = '1.9.2'
__version__ = '1.9.3'
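
After upgrading, the bumped version string can be read back directly; this relies only on the `__version__` attribute shown above:

```python
import perming

print(perming.__version__)  # expected to print '1.9.3' for this commit
```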
