[RELEASE] Merge back v1.0.1 (#1899)
* [RELEASE][DOC] Fix wrong info in documentation (#1849)
* updated dataset format info; fix for multi-label classification
* revert file
* revert file. minor
* added warning to instance segmentation
* revert changes

* [Enhance] Separate installation for each task on release (#1869)
* separate import
* align with pre-commit
* update unit test code
* add separate-task env pre-merge test & apply it to GitHub Actions
* add multiprocess to requirements

* [FIX][REL1.0] Fix Geti integration issues (#1885)
* Fix ote_config -> otx_config

* [FIX] Hang issue when tracing a stack in certain scenarios (#1868)
fix: use primitive library

* [FIX][POT] Set stat_requests_number parameter to 1 (#1870)
Set the POT stat_requests_number parameter to 1 in order to lower the RAM footprint
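For illustration, the kind of POT (Post-training Optimization Tool) configuration this change effectively applies can be sketched as below. The key names follow the OpenVINO POT config schema; the surrounding algorithm values are made up for illustration and are not taken from this commit:

```python
# Hedged sketch of a POT quantization config; only the engine keys are
# the point here, the compression section is illustrative filler.
pot_config = {
    "engine": {
        # Limit concurrent statistics-collection infer requests to 1,
        # trading calibration speed for a lower peak RAM footprint.
        "stat_requests_number": 1,
        "eval_requests_number": 1,
    },
    "compression": {
        "algorithms": [
            {"name": "DefaultQuantization", "params": {"preset": "performance"}},
        ],
    },
}

print(pot_config["engine"]["stat_requests_number"])  # → 1
```

Fewer parallel infer requests means fewer simultaneously allocated intermediate tensors during calibration, which is why this lowers RAM usage at some cost in calibration time.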

* [FIX] Training error when batch size is 1 (#1872)
fix: drop last batch
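A minimal sketch of the idea behind this fix, in pure Python rather than the actual OTX/PyTorch code: when the dataset size is not divisible by the batch size, the trailing batch can end up with a single sample, which breaks layers such as BatchNorm that need per-batch statistics, so the trailing partial batch is dropped.

```python
def make_batches(samples, batch_size, drop_last=False):
    """Group samples into batches; optionally drop a trailing partial batch."""
    batches = [samples[i:i + batch_size] for i in range(0, len(samples), batch_size)]
    if drop_last and batches and len(batches[-1]) < batch_size:
        batches.pop()  # avoid a degenerate (e.g. size-1) final batch
    return batches

samples = list(range(9))
print(make_batches(samples, 4))                  # → [[0, 1, 2, 3], [4, 5, 6, 7], [8]]
print(make_batches(samples, 4, drop_last=True))  # → [[0, 1, 2, 3], [4, 5, 6, 7]]
```

In PyTorch the same behavior comes from `DataLoader(..., drop_last=True)`; this sketch only illustrates the batching arithmetic.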

* Recover detection num_workers=2
* Remove nbmake from base requirements
* Add py.typed in package

* [FIX] Arrange scale between bbox preds and bbox targets in ATSS (#1880)
Arrange scale between bbox preds and bbox targets

* [FIX][RELEASE1.0] Remove cfg dump in ckpt (#1895)
* Remove cfg dump in ckpt
* Fix pre-commit

* Release v1.0.1

* [FIX] Prevent torch 2.0.0 installation (#1896)

* Add torchvision & torchtext in requirements/anomaly.txt with fixed versions

* Update requirements/anomaly.txt

* Fix _model_cfg -> _recipe_cfg due to cfg merge

---------

Signed-off-by: Songki Choi <songki.choi@intel.com>
Co-authored-by: Prokofiev Kirill <kirill.prokofiev@intel.com>
Co-authored-by: Eunwoo Shin <eunwoo.shin@intel.com>
Co-authored-by: Inhyuk Cho <andy.inhyuk.jo@intel.com>
Co-authored-by: Nikita Savelyev <nikita.savelyev@intel.com>
Co-authored-by: Jaeguk Hyun <jaeguk.hyun@intel.com>
Co-authored-by: Jihwan Eom <jihwan.eom@intel.com>
7 people authored Mar 17, 2023
1 parent 8f2c882 commit 573fb5d
Showing 9 changed files with 40 additions and 42 deletions.
17 changes: 17 additions & 0 deletions CHANGELOG.md
@@ -2,6 +2,23 @@

 All notable changes to this project will be documented in this file.

+## \[v1.0.1\]
+
+### Enhancements
+
+- Refine documents by proof review
+- Separate installation for each task
+- Improve POT efficiency by setting the stat_requests_number parameter to 1
+
+### Bug fixes
+
+- Fix missing classes in cls checkpoint
+- Fix action task sample codes
+- Fix label_scheme mismatch in classification
+- Fix training error when batch size is 1
+- Fix hang issue when tracing a stack in certain scenarios
+- Fix pickling error by removing mmcv cfg dump in ckpt
+
 ## \[v1.0.0\]

 > _**NOTES**_
4 changes: 3 additions & 1 deletion README.md
@@ -31,7 +31,9 @@
 The CLI commands of the framework allows users to train, infer, optimize and deploy models easily and quickly even with low expertise in the deep learning field. OpenVINO™ Training Extensions offers diverse combinations of model architectures, learning methods, and task types based on [PyTorch](https://pytorch.org) and [OpenVINO™
 toolkit](https://software.intel.com/en-us/openvino-toolkit).

-OpenVINO™ Training Extensions provides a "model template" for every supported task type, which consolidates necessary information to build a model. Model templates are validated on various datasets and serve one-stop shop for obtaining the best models in general. If you are an experienced user, you can configure your own model based on [torchvision](https://pytorch.org/vision/latest/index.html), [pytorchcv](https://github.com/osmr/imgclsmob), [mmcv](https://github.com/open-mmlab/mmcv) and [OpenVINO Model Zoo (OMZ)](https://github.com/openvinotoolkit/open_model_zoo).
+OpenVINO™ Training Extensions provides a "model template" for every supported task type, which consolidates necessary information to build a model.
+Model templates are validated on various datasets and serve one-stop shop for obtaining the best models in general.
+If you are an experienced user, you can configure your own model based on [torchvision](https://pytorch.org/vision/latest/index.html), [pytorchcv](https://github.com/osmr/imgclsmob), [mmcv](https://github.com/open-mmlab/mmcv) and [OpenVINO Model Zoo (OMZ)](https://github.com/openvinotoolkit/open_model_zoo).

 Furthermore, OpenVINO™ Training Extensions provides automatic configuration of task types and hyperparameters.
 The framework will identify the most suitable model template based on your dataset, and choose the best hyperparameter configuration. The development team is continuously extending functionalities to make training as simple as possible so that single CLI command can obtain accurate, efficient and robust models ready to be integrated into your project.
32 changes: 9 additions & 23 deletions otx/algorithms/common/tasks/nncf_base.py
@@ -18,7 +18,6 @@
 import io
 import json
 import os
-from collections.abc import Mapping
 from copy import deepcopy
 from typing import Dict, List, Optional

@@ -336,29 +335,16 @@ def save_model(self, output_model: ModelEntity):
         hyperparams_str = ids_to_strings(cfg_helper.convert(self._hyperparams, dict, enum_to_str=True))
         labels = {label.name: label.color.rgb_tuple for label in self._labels}

-        config = deepcopy(self._recipe_cfg)
-
-        def update(d, u):  # pylint: disable=invalid-name
-            for k, v in u.items():  # pylint: disable=invalid-name
-                if isinstance(v, Mapping):
-                    d[k] = update(d.get(k, {}), v)
-                else:
-                    d[k] = v
-            return d
-
-        modelinfo = torch.load(self._model_ckpt, map_location=torch.device("cpu"))
-        modelinfo = update(
-            dict(model=modelinfo),
-            {
-                "meta": {
-                    "nncf_enable_compression": True,
-                    "config": config,
-                },
-                "config": hyperparams_str,
-                "labels": labels,
-                "VERSION": 1,
+        model_ckpt = torch.load(self._model_ckpt, map_location=torch.device("cpu"))
+        modelinfo = {
+            "model": model_ckpt,
+            "config": hyperparams_str,
+            "labels": labels,
+            "VERSION": 1,
+            "meta": {
+                "nncf_enable_compression": True,
            },
-        )
+        }
         self._save_model_post_hook(modelinfo)

         torch.save(modelinfo, buffer)
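For illustration, the simplified checkpoint layout after this change can be sketched as below. The values are stand-ins, not real tensors or OTX hyperparameters; the point is that dropping the embedded mmcv Config object leaves only plainly picklable objects in the dict, which is what fixes the pickling error mentioned in the changelog:

```python
import pickle

state_dict = {"backbone.conv1.weight": [0.0]}  # stand-in for real tensors

# Shape of the new checkpoint dict: no mmcv Config under "meta" anymore.
modelinfo = {
    "model": {"state_dict": state_dict},
    "config": {"learning_rate": "0.01"},  # hyperparameters flattened to strings
    "labels": {"cat": (255, 0, 0)},
    "VERSION": 1,
    "meta": {
        "nncf_enable_compression": True,  # note: no "config" entry here
    },
}

blob = pickle.dumps(modelinfo)  # plain builtins always pickle cleanly
print(pickle.loads(blob) == modelinfo)  # → True
```

Keeping the recipe config out of the checkpoint also means `torch.load` no longer needs the mmcv class definitions available just to deserialize a model.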
@@ -101,7 +101,7 @@ learning_parameters:
     warning: null
   num_workers:
     affects_outcome_of: NONE
-    default_value: 0
+    default_value: 2
     description:
       Increasing this value might improve training speed however it might
       cause out of memory errors. If the number of workers is set to zero, data loading
18 changes: 6 additions & 12 deletions otx/algorithms/detection/tasks/nncf.py
@@ -22,6 +22,9 @@
 from otx.algorithms.common.adapters.mmcv.utils import remove_from_config
 from otx.algorithms.common.tasks.nncf_base import NNCFBaseTask
 from otx.algorithms.detection.adapters.mmdet.nncf import build_nncf_detector
+from otx.algorithms.detection.adapters.mmdet.utils.config_utils import (
+    should_cluster_anchors,
+)
 from otx.api.entities.datasets import DatasetEntity
 from otx.api.entities.inference_parameters import InferenceParameters
 from otx.api.entities.model import ModelEntity
@@ -110,17 +113,8 @@ def _optimize_post_hook(
         output_model.performance = performance

     def _save_model_post_hook(self, modelinfo):
-        config = modelinfo["meta"]["config"]
-        if hasattr(config.model, "bbox_head") and hasattr(config.model.bbox_head, "anchor_generator"):
-            if getattr(
-                config.model.bbox_head.anchor_generator,
-                "reclustering_anchors",
-                False,
-            ):
-                generator = config.model.bbox_head.anchor_generator
-                modelinfo["anchors"] = {
-                    "heights": generator.heights,
-                    "widths": generator.widths,
-                }
+        if self._recipe_cfg is not None and should_cluster_anchors(self._recipe_cfg):
+            modelinfo["anchors"] = {}
+            self._update_anchors(modelinfo["anchors"], self._recipe_cfg.model.bbox_head.anchor_generator)

         modelinfo["confidence_threshold"] = self.confidence_threshold
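Conceptually, the rewritten hook reads the anchor information from the recipe config (now that the checkpoint no longer embeds a config) and copies the clustered widths/heights into the checkpoint so inference can rebuild the same anchors. A standalone sketch of that idea — the `AnchorGenerator` class below is a hypothetical stand-in for the mmdet one, and `update_anchors` mimics what `self._update_anchors` presumably does:

```python
class AnchorGenerator:
    """Minimal stand-in holding clustered anchor dimensions."""
    def __init__(self, widths, heights):
        self.widths = widths
        self.heights = heights

def update_anchors(anchors_dict, generator):
    # Copy the generator's clustered dimensions into the checkpoint dict.
    anchors_dict["widths"] = list(generator.widths)
    anchors_dict["heights"] = list(generator.heights)

modelinfo = {"anchors": {}}
update_anchors(modelinfo["anchors"], AnchorGenerator([10, 20], [12, 24]))
print(modelinfo["anchors"])  # → {'widths': [10, 20], 'heights': [12, 24]}
```

Centralizing the "is anchor clustering enabled" check in `should_cluster_anchors` replaces the nested `hasattr`/`getattr` probing of the old code with a single reusable predicate.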
4 changes: 2 additions & 2 deletions otx/api/configuration/__init__.py
@@ -14,7 +14,7 @@
 #


-import otx.api.configuration.helper as ote_config_helper  # for 'ote' backward compatibility
+import otx.api.configuration.helper as otx_config_helper  # for backward compatibility
 import otx.api.configuration.helper as cfg_helper  # pylint: disable=reimported
 from otx.api.configuration.elements import metadata_keys
 from otx.api.configuration.elements.configurable_enum import ConfigurableEnum
@@ -27,7 +27,7 @@
 __all__ = [
     "metadata_keys",
     "cfg_helper",
-    "ote_config_helper",
+    "otx_config_helper",
     "ConfigurableEnum",
     "ModelLifecycle",
     "Action",
1 change: 0 additions & 1 deletion requirements/base.txt
@@ -1,7 +1,6 @@
 # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
 # Base Algo Requirements. #
 natsort>=6.0.0
-nbmake
 prettytable
 protobuf>=3.20.0
 pyyaml
2 changes: 1 addition & 1 deletion setup.py
@@ -174,7 +174,7 @@ def find_yaml_recipes():
     return results


-package_data = {"": ["requirements.txt", "README.md", "LICENSE"]}  # Needed for exportable code
+package_data = {"": ["requirements.txt", "README.md", "LICENSE", "py.typed"]}
 package_data.update(find_yaml_recipes())

 setup(
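The `py.typed` addition follows PEP 561: an empty marker file shipped inside the installed package tells type checkers (mypy, pyright) to consume the package's inline type annotations. Listing it in setuptools `package_data` is one common way to get the marker into wheels; a sketch of that side, mirroring the diff above:

```python
# setuptools package_data maps package names to data-file patterns; the
# "" key applies the patterns to every package. "py.typed" itself is just
# an empty file placed next to the package's __init__.py.
package_data = {"": ["requirements.txt", "README.md", "LICENSE", "py.typed"]}

print("py.typed" in package_data[""])  # → True
```

Without the marker, PEP 561-compliant type checkers ignore a package's annotations entirely, so this one-line change is what makes `otx`'s type hints visible to downstream users.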
2 changes: 1 addition & 1 deletion tox.ini
@@ -218,7 +218,7 @@ commands =

 [testenv:bandit-scan]
 skip_install = true
-deps = 
+deps =
     bandit
 allowlist_externals =
     bandit
