
Commit

Chore(doc): merge multitask training doc (deepmodeling#4395)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Documentation**
  - Updated multi-task mode documentation to clarify the correct link for model freezing.
  - Enhanced fine-tuning documentation for TensorFlow and PyTorch, adding clarity on processes and configurations.
  - Consolidated multi-task training references in the documentation for easier navigation.
  - Removed deprecated TensorFlow multi-task training documentation, redirecting users to the PyTorch backend.
  - Revised multi-task training documentation to emphasize the transition to PyTorch as the sole supported backend.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
iProzd authored Nov 22, 2024
1 parent c27f630 commit d1712c9
Showing 5 changed files with 6 additions and 10 deletions.
2 changes: 1 addition & 1 deletion doc/freeze/freeze.md
@@ -24,7 +24,7 @@ $ dp --pt freeze -o model.pth

in the folder where the model is trained. The output model is called `model.pth`.

-In [multi-task mode](../train/multi-task-training-pt.md), you need to choose one available heads (e.g. `CHOSEN_BRANCH`) by `--head`
+In [multi-task mode](../train/multi-task-training), you need to choose one available heads (e.g. `CHOSEN_BRANCH`) by `--head`
to specify which model branch you want to freeze:

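A hedged sketch of that freeze command, using the `--head` option described above; the output filename `model_branch1.pth` is illustrative and not taken from this diff:

```bash
# Freeze only the chosen branch of a multi-task model.
# CHOSEN_BRANCH is the head name from the prose above;
# the output filename is illustrative.
dp --pt freeze -o model_branch1.pth --head CHOSEN_BRANCH
```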
2 changes: 1 addition & 1 deletion doc/train/finetuning.md
@@ -129,7 +129,7 @@ Then, prepare a suitable input script for multitask fine-tuning `multi_input.json`

- Suppose the new dataset for fine-tuning is named `DOWNSTREAM_DATA`, and the datasets to be retained from multitask pre-trained model are `PRE_DATA1` and `PRE_DATA2`. One can:

-1. Refer to the [`multi-task-training`](./multi-task-training-pt.md) document to prepare a multitask training script for two systems,
+1. Refer to the [`multi-task-training`](./multi-task-training) document to prepare a multitask training script for two systems,
ideally extracting parts (i.e. {ref}`model_dict <model/model_dict>`, {ref}`loss_dict <loss_dict>`, {ref}`data_dict <training/data_dict>` and {ref}`model_prob <training/model_prob>` parts) corresponding to `PRE_DATA1` and `PRE_DATA2` directly from the training script of the pre-trained model.
2. For `DOWNSTREAM_DATA`, select a desired branch to fine-tune from (e.g., `PRE_DATA1`), copy the configurations of `PRE_DATA1` as the configuration for `DOWNSTREAM_DATA` and insert the corresponding data path into the {ref}`data_dict <training/data_dict>`,
thereby generating a three-system multitask training script.
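As a hedged illustration of the workflow described above (not part of this diff): once `multi_input.json` contains the three branches, multitask fine-tuning is launched with the PyTorch backend roughly as follows; the checkpoint filename is a placeholder:

```bash
# Launch multitask fine-tuning with the PyTorch backend.
# multi_input.json is the three-branch script described above;
# multitask_pretrained.pt is a placeholder name for the
# multitask pre-trained checkpoint.
dp --pt train multi_input.json --finetune multitask_pretrained.pt
```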
3 changes: 1 addition & 2 deletions doc/train/index.rst
@@ -8,8 +8,7 @@ Training
training-advanced
train-input
parallel-training
-multi-task-training-tf
-multi-task-training-pt
+multi-task-training
tensorboard
gpu-limitations
finetuning
5 changes: 0 additions & 5 deletions doc/train/multi-task-training-tf.md

This file was deleted.

4 changes: 3 additions & 1 deletion doc/train/{multi-task-training-pt.md → multi-task-training.md}
@@ -4,7 +4,9 @@
**Supported backends**: PyTorch {{ pytorch_icon }}
:::

-<!-- we plan to drop TensorFlow backend multi-task training. Replace with the PyTorch one -->
+:::{warning}
+We have deprecated TensorFlow backend multi-task training, please use the PyTorch one.
+:::

## Theory

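For readers following the pointer to the PyTorch backend, a minimal sketch of launching multi-task training there; the input filename is a placeholder and the script is assumed to define the multi-task sections (e.g. `model_dict`, `loss_dict`):

```bash
# Launch multi-task training with the PyTorch backend.
# multi_task_input.json is a placeholder name for an input script
# that defines the multi-task sections (model_dict, loss_dict, ...).
dp --pt train multi_task_input.json
```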
