Update docs for 2.2 #3884

Merged

This file was deleted.

@@ -6,4 +6,3 @@ Action Recognition


action_classification
action_detection
4 changes: 2 additions & 2 deletions docs/source/guide/get_started/cli_commands.rst
@@ -339,11 +339,11 @@ The results will be saved in ``./otx-workspace/`` folder by default. The output

(otx) ...$ otx train --model <model-class-path-or-name> --task <task-type> --data_root <dataset-root>

For example, if you want to use the ``otx.algo.detection.atss.ATSS`` model class, you can train it as shown below.
For example, if you want to use the ``otx.algo.classification.torchvision_model.TVModelForMulticlassCls`` model class, you can train it as shown below.

.. code-block:: shell

(otx) ...$ otx train --model otx.algo.detection.atss.ATSS --model.variant mobilenetv2 --task DETECTION ...
(otx) ...$ otx train --model otx.algo.classification.torchvision_model.TVModelForMulticlassCls --model.backbone mobilenet_v3_small ...
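A complete invocation might then look like the following sketch; the dataset path is illustrative and should point to your own data:

.. code-block:: shell

    (otx) ...$ otx train --model otx.algo.classification.torchvision_model.TVModelForMulticlassCls --model.backbone mobilenet_v3_small --data_root data/flower_photos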

.. note::
You can also visualize the training with ``Tensorboard``, as its logs are located in ``<work_dir>/tensorboard``.
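
For example, assuming the default workspace location, Tensorboard could be launched as shown below (adjust ``--logdir`` to your actual ``<work_dir>``):

.. code-block:: shell

    (otx) ...$ tensorboard --logdir otx-workspace/tensorboard
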
2 changes: 1 addition & 1 deletion docs/source/guide/get_started/installation.rst
@@ -68,7 +68,7 @@ according to your system environment.

.. note::

Currently, only torch==2.1.1 was fully validated. (older versions are not supported due to security issues).
Currently, only torch==2.2 was fully validated. (older versions are not supported due to security issues).


3. Once the package is installed in the virtual environment, you can use full
2 changes: 2 additions & 0 deletions docs/source/guide/tutorials/advanced/index.rst
@@ -8,5 +8,7 @@ Advanced Tutorials
semi_supervised_learning
huggingface_model
multi_gpu
low_rank_adaptation
torch_compile

.. Once we have enough material, we might need to categorize these into `data`, `model learning` sections.
39 changes: 39 additions & 0 deletions docs/source/guide/tutorials/advanced/low_rank_adaptation.rst
@@ -0,0 +1,39 @@
LoRA: Low Rank Adaptation for Classification Tasks
===================================================

.. note::

LoRA is only supported for VisionTransformer models.
See the models in ``otx.algo.classification.vit``.

Overview
--------

OpenVINO™ Training Extensions now supports Low Rank Adaptation (LoRA) for classification tasks using Transformer models.
LoRA is a parameter-efficient approach for adapting pre-trained models: it introduces small low-rank matrices that capture the important adaptations without retraining the entire model.
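
The following is a minimal, illustrative PyTorch sketch of the idea behind LoRA; it is not the OTX implementation, and the class name, rank, and initialization are arbitrary:

.. code-block:: python

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Illustrative LoRA layer: the frozen weight W is adapted as W + B @ A."""

        def __init__(self, in_features: int, out_features: int, rank: int = 4):
            super().__init__()
            self.base = nn.Linear(in_features, out_features)
            self.base.weight.requires_grad_(False)  # the pre-trained weight stays frozen
            self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
            self.lora_b = nn.Parameter(torch.zeros(out_features, rank))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Only lora_a and lora_b are trained, adding a low-rank update to the output.
            return self.base(x) + x @ (self.lora_b @ self.lora_a).T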

Benefits of LoRA
----------------

- **Efficiency**: LoRA allows for efficient adaptation of large pre-trained models with minimal additional parameters.
- **Performance**: By focusing on key parameters, LoRA can achieve competitive performance with less computational overhead.
- **Flexibility**: LoRA can be applied to various parts of the transformer model, providing flexibility in model tuning.

How to Use LoRA in OpenVINO™ Training Extensions
------------------------------------------------

.. tab-set::

.. tab-item:: API

.. code-block:: python

from otx.algo.classification.vit import VisionTransformerForMulticlassCls

model = VisionTransformerForMulticlassCls(..., lora=True)

.. tab-item:: CLI

.. code-block:: bash

(otx) ...$ otx train ... --model.lora True
41 changes: 41 additions & 0 deletions docs/source/guide/tutorials/advanced/torch_compile.rst
@@ -0,0 +1,41 @@
[BETA] Enable torch.compile
============================

.. warning::
Not all models are currently supported.
So far, we have verified it for classification task models and some segmentation models.
We will continue to optimize this feature and do not guarantee performance for now.

Overview
--------

OpenVINO™ Training Extensions now integrates the `torch.compile` feature from PyTorch, allowing users to optimize their models for better performance.
This feature compiles the model's operations into optimized lower-level code, which can significantly improve execution speed and reduce memory usage.
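
As a plain-PyTorch illustration (independent of OTX), wrapping a model with `torch.compile` looks roughly like this; the first call triggers compilation and subsequent calls reuse the optimized code:

.. code-block:: python

    import torch

    model = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2))
    compiled_model = torch.compile(model)               # compile the model's operations
    output = compiled_model(torch.randn(4, 8))          # first call compiles, later calls run the optimized code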

Benefits of torch.compile
-------------------------

- **Performance Optimization**: Compiled models run faster by executing optimized low-level operations.
- **Reduced Memory Footprint**: Optimized models can use less memory, which is beneficial for deploying models on resource-constrained devices.

For more information on the benefits of `torch.compile`, refer to the official `PyTorch documentation <https://pytorch.org/docs/stable/generated/torch.compile.html>`_.

How to Use torch.compile in OpenVINO™ Training Extensions
----------------------------------------------------------

**Prepare OTXModel**: Ensure that the model is compatible with `torch.compile`. When building the model, set the `torch_compile` option to `True`.

.. tab-set::

.. tab-item:: API

.. code-block:: python

from otx.algo.classification.vit import VisionTransformerForMulticlassCls

model = VisionTransformerForMulticlassCls(..., torch_compile=True)

.. tab-item:: CLI

.. code-block:: bash

(otx) ...$ otx train ... --model.torch_compile True