Merge branch 'master' into tj/plugin/template/test/preprocess-opencv
mlukasze authored Jul 15, 2024
2 parents 319a448 + ae4404d commit df404c7
Showing 47 changed files with 757 additions and 437 deletions.
@@ -133,8 +133,7 @@ Performance Information F.A.Q.

.. dropdown:: Why are INT8 optimized models used for benchmarking on CPUs with no VNNI support?

-The benefit of low-precision optimization using the OpenVINO™
-toolkit model optimizer extends beyond processors supporting VNNI
+The benefit of low-precision optimization extends beyond processors supporting VNNI
through Intel® DL Boost. The reduced bit width of INT8 compared to FP32
allows Intel® CPU to process the data faster. Therefore, it offers
better throughput on any converted model, regardless of the
@@ -68,16 +68,14 @@ to OpenVINO template extension ``Identity`` class.
The mapping doesn’t involve any attributes, as operation Identity doesn’t have them.

Extension objects, like just constructed ``extension`` can be used to add to the
-OpenVINO runtime just before the loading a model that contains custom operations:
+OpenVINO runtime just before loading a model that contains custom operations:

.. doxygensnippet:: docs/articles_en/assets/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_read_model]

-Or extensions can be constructed in a separately compiled shared library.
-Separately compiled library can be used in Model Optimizer or ``benchmark_app``.
-Read about how to build and load such a library in the chapter of “Create library with extensions” in
-:doc:`Introduction to OpenVINO Extension <../openvino-extensibility>`.
+However, extensions can also be constructed in a separately compiled shared library, which is suitable for loading models with custom operations in a Python application or tools like ``benchmark_app``.
+For details on how to build and load such a library, check the following :ref:`guide <create_a_library_with_extensions>`.

If an operation has multiple inputs and/or outputs, they will be mapped in order.
The type of elements in input/output tensors should match expected types in the surrounding operations.
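
As an illustration of the library route described above, here is a minimal C++ sketch of loading such an extension library before reading a model; the library and model paths are placeholders, not names taken from this change:

.. code-block:: cpp

   #include <openvino/openvino.hpp>

   int main() {
       ov::Core core;
       // Load the separately compiled extension library (placeholder path).
       core.add_extension("libopenvino_template_extension.so");
       // Custom operations, such as Identity, can now be resolved while reading the model.
       auto model = core.read_model("model_with_custom_ops.onnx");
       return 0;
   }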
@@ -12,8 +12,7 @@ opset1
This specification document describes the ``opset1`` operation set supported in OpenVINO™.
Support for each particular operation from the list below depends on the capabilities of an inference plugin
and may vary among different hardware platforms and devices. Examples of operation instances are provided as IR xml
-snippets. Such IR is generated by the Model Optimizer. The semantics match corresponding OpenVINO operation classes
-declared in ``namespace opset1``.
+snippets. The semantics match corresponding OpenVINO operation classes declared in ``namespace opset1``.
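
For illustration only, a minimal sketch of composing a model directly from classes in ``namespace opset1`` (the operations and shapes are chosen arbitrarily):

.. code-block:: cpp

   #include <openvino/openvino.hpp>
   #include <openvino/opsets/opset1.hpp>

   int main() {
       // Each ov::opset1 class follows the semantics given in this specification.
       auto param = std::make_shared<ov::opset1::Parameter>(
           ov::element::f32, ov::Shape{1, 3, 224, 224});
       auto relu = std::make_shared<ov::opset1::Relu>(param);
       auto result = std::make_shared<ov::opset1::Result>(relu);
       auto model = std::make_shared<ov::Model>(ov::ResultVector{result},
                                                ov::ParameterVector{param});
       return 0;
   }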


Table of Contents
@@ -12,8 +12,7 @@ opset10
This specification document describes the ``opset10`` operation set supported in OpenVINO™.
Support for each particular operation from the list below depends on the capabilities of an inference plugin
and may vary among different hardware platforms and devices. Examples of operation instances are provided as IR xml
-snippets. Such IR is generated by the Model Optimizer. The semantics match corresponding OpenVINO operation classes
-declared in ``namespace opset10``.
+snippets. The semantics match corresponding OpenVINO operation classes declared in ``namespace opset10``.


Table of Contents
@@ -12,8 +12,7 @@ opset11
This specification document describes the ``opset11`` operation set supported in OpenVINO™.
Support for each particular operation from the list below depends on the capabilities of an inference plugin
and may vary among different hardware platforms and devices. Examples of operation instances are provided as IR xml
-snippets. Such IR is generated by the Model Optimizer. The semantics match corresponding OpenVINO operation classes
-declared in ``namespace opset11``.
+snippets. The semantics match corresponding OpenVINO operation classes declared in ``namespace opset11``.


Table of Contents
@@ -12,8 +12,7 @@ opset12
This specification document describes the ``opset12`` operation set supported in OpenVINO™.
Support for each particular operation from the list below depends on the capabilities of an inference plugin
and may vary among different hardware platforms and devices. Examples of operation instances are provided as IR xml
-snippets. Such IR is generated by the Model Optimizer. The semantics match corresponding OpenVINO operation classes
-declared in ``namespace opset12``.
+snippets. The semantics match corresponding OpenVINO operation classes declared in ``namespace opset12``.


Table of Contents
@@ -12,8 +12,7 @@ opset13
This specification document describes the ``opset13`` operation set supported in OpenVINO™.
Support for each particular operation from the list below depends on the capabilities of an inference plugin
and may vary among different hardware platforms and devices. Examples of operation instances are provided as IR xml
-snippets. Such IR is generated by the Model Optimizer. The semantics match corresponding OpenVINO operation classes
-declared in ``namespace opset13``.
+snippets. The semantics match corresponding OpenVINO operation classes declared in ``namespace opset13``.


Table of Contents
@@ -12,8 +12,7 @@ opset14
This specification document describes the ``opset14`` operation set supported in OpenVINO™.
Support for each particular operation from the list below depends on the capabilities of an inference plugin
and may vary among different hardware platforms and devices. Examples of operation instances are provided as IR xml
-snippets. Such IR is generated by the Model Optimizer. The semantics match corresponding OpenVINO operation classes
-declared in ``namespace opset14``.
+snippets. The semantics match corresponding OpenVINO operation classes declared in ``namespace opset14``.


Table of Contents
@@ -12,8 +12,7 @@ opset2
This specification document describes the ``opset2`` operation set supported in OpenVINO™.
Support for each particular operation from the list below depends on the capabilities of an inference plugin
and may vary among different hardware platforms and devices. Examples of operation instances are provided as IR xml
-snippets. Such IR is generated by the Model Optimizer. The semantics match corresponding OpenVINO operation classes
-declared in ``namespace opset2``.
+snippets. The semantics match corresponding OpenVINO operation classes declared in ``namespace opset2``.


Table of Contents
@@ -12,8 +12,7 @@ opset3
This specification document describes the ``opset3`` operation set supported in OpenVINO™.
Support for each particular operation from the list below depends on the capabilities of an inference plugin
and may vary among different hardware platforms and devices. Examples of operation instances are provided as IR xml
-snippets. Such IR is generated by the Model Optimizer. The semantics match corresponding OpenVINO operation classes
-declared in ``namespace opset3``.
+snippets. The semantics match corresponding OpenVINO operation classes declared in ``namespace opset3``.


Table of Contents
@@ -12,8 +12,7 @@ opset4
This specification document describes the ``opset4`` operation set supported in OpenVINO™.
Support for each particular operation from the list below depends on the capabilities of an inference plugin
and may vary among different hardware platforms and devices. Examples of operation instances are provided as IR xml
-snippets. Such IR is generated by the Model Optimizer. The semantics match corresponding OpenVINO operation classes
-declared in ``namespace opset4``.
+snippets. The semantics match corresponding OpenVINO operation classes declared in ``namespace opset4``.


Table of Contents
@@ -12,8 +12,7 @@ opset5
This specification document describes the ``opset5`` operation set supported in OpenVINO™.
Support for each particular operation from the list below depends on the capabilities of an inference plugin
and may vary among different hardware platforms and devices. Examples of operation instances are provided as IR xml
-snippets. Such IR is generated by the Model Optimizer. The semantics match corresponding OpenVINO operation classes
-declared in ``namespace opset5``.
+snippets. The semantics match corresponding OpenVINO operation classes declared in ``namespace opset5``.


Table of Contents
@@ -12,8 +12,7 @@ opset6
This specification document describes the ``opset6`` operation set supported in OpenVINO™.
Support for each particular operation from the list below depends on the capabilities of an inference plugin
and may vary among different hardware platforms and devices. Examples of operation instances are provided as IR xml
-snippets. Such IR is generated by the Model Optimizer. The semantics match corresponding OpenVINO operation classes
-declared in ``namespace opset6``.
+snippets. The semantics match corresponding OpenVINO operation classes declared in ``namespace opset6``.


Table of Contents
@@ -12,8 +12,7 @@ opset7
This specification document describes the ``opset7`` operation set supported in OpenVINO™.
Support for each particular operation from the list below depends on the capabilities of an inference plugin
and may vary among different hardware platforms and devices. Examples of operation instances are provided as IR xml
-snippets. Such IR is generated by the Model Optimizer. The semantics match corresponding OpenVINO operation classes
-declared in ``namespace opset7``.
+snippets. The semantics match corresponding OpenVINO operation classes declared in ``namespace opset7``.


Table of Contents
@@ -12,8 +12,7 @@ opset8
This specification document describes the ``opset8`` operation set supported in OpenVINO™.
Support for each particular operation from the list below depends on the capabilities of an inference plugin
and may vary among different hardware platforms and devices. Examples of operation instances are provided as IR xml
-snippets. Such IR is generated by the Model Optimizer. The semantics match corresponding OpenVINO operation classes
-declared in ``namespace opset8``.
+snippets. The semantics match corresponding OpenVINO operation classes declared in ``namespace opset8``.


Table of Contents
@@ -12,8 +12,7 @@ opset9
This specification document describes the ``opset9`` operation set supported in OpenVINO™.
Support for each particular operation from the list below depends on the capabilities of an inference plugin
and may vary among different hardware platforms and devices. Examples of operation instances are provided as IR xml
-snippets. Such IR is generated by the Model Optimizer. The semantics match corresponding OpenVINO operation classes
-declared in ``namespace opset9``.
+snippets. The semantics match corresponding OpenVINO operation classes declared in ``namespace opset9``.


Table of Contents
@@ -85,8 +85,7 @@ Step 1: Download and Install OpenVINO Core Components
.. code-block:: sh

cd <user_home>/Downloads
-curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2024.2/windows/w_openvino_toolkit_windows_2024.2.0.15519.5c0f38f83f6_x86_64.zip --output openvino_2024.2.0.zip --output openvino_2024.2.0.zip
+curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2024.2/windows/w_openvino_toolkit_windows_2024.2.0.15519.5c0f38f83f6_x86_64.zip --output openvino_2024.2.0.zip
.. note::

@@ -21,7 +21,12 @@ Below is a list of cases where input/output layout is important:
* Applying the :doc:`preprocessing <../optimize-preprocessing>` steps, such as subtracting means, dividing by scales, resizing an image, and converting ``RGB`` <-> ``BGR``.
* Setting/getting a batch for a model.

-* Doing the same operations as used during the model conversion phase. For more information, refer to the :doc:`Model Optimizer Embedding Preprocessing Computation <../../../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-embedding-preprocessing-computation>` guide.
+* Doing the same operations as used during the model conversion phase. For more information, refer to:
+
+  * :doc:`Convert to OpenVINO <../../../model-preparation/convert-model-to-ir>`
+  * `OpenVINO Model Conversion Tutorial <https://docs.openvino.ai/2024/notebooks/convert-to-openvino-with-output.html>`__
+  * :doc:`[LEGACY] Model Optimizer Embedding Preprocessing Computation <../../../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-embedding-preprocessing-computation>`

* Improving the readability of a model input and output.
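
To make the cases above concrete, here is a minimal sketch of how layout information drives preprocessing and batch handling, assuming the ``ov::preprocess::PrePostProcessor`` API and a placeholder model path:

.. code-block:: cpp

   #include <openvino/core/preprocess/pre_post_process.hpp>
   #include <openvino/openvino.hpp>

   int main() {
       ov::Core core;
       auto model = core.read_model("model.xml");  // placeholder path

       // User data arrives as NHWC while the model expects NCHW;
       // declaring both layouts makes the conversion explicit.
       ov::preprocess::PrePostProcessor ppp(model);
       ppp.input().tensor().set_layout("NHWC");
       ppp.input().model().set_layout("NCHW");
       model = ppp.build();

       // With a known 'N' dimension in the layout, the batch can be set directly.
       ov::set_batch(model, 4);
       return 0;
   }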

Syntax of Layout
@@ -1,7 +1,7 @@
.. {#openvino_docs_OV_UG_ways_to_get_stateful_model}
Obtaining a Stateful OpenVINO Model
-====================================
+======================================

If the original framework does not offer a dedicated API for working with states, the
resulting OpenVINO IR model will not be stateful by default. This means it will not contain
@@ -23,7 +23,7 @@ and you have three ways to do it:
.. _ov_ug_make_stateful:

MakeStateful Transformation
-###########################
+###############################

The MakeStateful transformation changes the structure of the model by replacing the
user-defined pairs of Parameter and Results with the Assign and ReadValue operations:
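
A minimal sketch of applying the transformation through ``ov::pass::Manager``, assuming the Parameter/Result pair is identified by illustrative tensor names:

.. code-block:: cpp

   #include <map>
   #include <string>

   #include <openvino/openvino.hpp>
   #include <openvino/pass/make_stateful.hpp>
   #include <openvino/pass/manager.hpp>

   int main() {
       ov::Core core;
       auto model = core.read_model("model.xml");  // placeholder path

       // Each name pair (input tensor -> output tensor) becomes one state;
       // the names below are illustrative, not from a real model.
       std::map<std::string, std::string> pairs = {{"hidden_in", "hidden_out"}};
       ov::pass::Manager manager;
       manager.register_pass<ov::pass::MakeStateful>(pairs);
       manager.run_passes(model);
       return 0;
   }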
@@ -83,7 +83,7 @@ Parameter/Result tensor names. If there are no tensor names,
.. _ov_ug_low_latency:

LowLatency2 Transformation
-##########################
+###############################

The LowLatency2 transformation changes the structure of a model containing
:doc:`TensorIterator <../../../documentation/openvino-ir-format/operation-sets/operation-specs/infrastructure/tensor-iterator-1>`
@@ -102,8 +102,7 @@ the current State API implementation. Input values are ignored, and the initial
for the ReadValue operations are set to zeros unless the user specifies otherwise via
:doc:`State API <../stateful-models>`.

-Applying LowLatency2 Transformation
-++++++++++++++++++++++++++++++++++++
+To apply the LowLatency2 transformation, follow the instructions below:

1. Get :doc:`ov::Model <../integrate-openvino-with-your-application/model-representation>`,
for example:
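
For instance, a condensed sketch of obtaining the model and applying the transformation with default settings (the model path is a placeholder):

.. code-block:: cpp

   #include <openvino/openvino.hpp>
   #include <openvino/pass/low_latency.hpp>
   #include <openvino/pass/manager.hpp>

   int main() {
       // Step 1: get the ov::Model.
       ov::Core core;
       auto model = core.read_model("model_with_tensor_iterator.xml");

       // Step 2: apply LowLatency2; by default, initial state values are
       // set via constants (zeros) unless changed through the State API.
       ov::pass::Manager manager;
       manager.register_pass<ov::pass::LowLatency2>();
       manager.run_passes(model);
       return 0;
   }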
@@ -195,11 +194,11 @@ Applying LowLatency2 Transformation
somewhere in the model.

In such a case, trim non-reshapable layers via
-:doc:`Model Optimizer command-line <../../../documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-setting-input-shapes>`
-arguments: ``--input`` and ``--output``.
+:doc:`Conversion Parameters <../../model-preparation/conversion-parameters>`:
+``--input`` and ``--output``. For example, check the `OpenVINO Model Conversion Tutorial <https://docs.openvino.ai/2024/notebooks/convert-to-openvino-with-output.html>`__.

-For example, the parameter and the problematic constant in the picture above can be
-trimmed using the ``--input Reshape_layer_name`` command-line option. The problematic
+As for the parameter and the problematic constant in the picture above, they can be
+trimmed by using the ``--input Reshape_layer_name`` command-line option. The problematic
constant can also be replaced using OpenVINO, as shown in the following example:

.. tab-set::
@@ -210,27 +209,7 @@ Applying LowLatency2 Transformation
:language: cpp
:fragment: [ov:replace_const]



-Obtaining TensorIterator/Loop Operations using Model Optimizer
-###############################################################
-
-**ONNX and frameworks supported via ONNX format:** *LSTM, RNN, GRU* original layers are
-converted to the GRU/RNN/LSTM Sequence operations. *ONNX Loop* layer is converted to the
-OpenVINO Loop operation.
-
-**TensorFlow:** *BlockLSTM* is converted to a TensorIterator operation. TensorIterator
-body contains LSTM Cell operation. Modifications such as Peepholes and InputForget are
-not supported. The *While* layer is converted to a TensorIterator. TensorIterator body
-can contain any supported operations. However, dynamic cases where the count of iterations
-cannot be calculated during shape inference (Model Optimizer conversion) are not supported.
-
-**TensorFlow2:** *While* layer is converted to a Loop operation. The Loop body can contain
-any supported operations.
-
-
-Creating a Model via OpenVINO API
+Stateful Model from Scratch
##################################

The main approach to obtaining stateful OpenVINO IR models is converting from other
@@ -251,3 +230,17 @@ a sink from `ov::Model` after deleting the node from the graph with the `delete_
:language: cpp
:fragment: [ov:state_network]
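
As a from-scratch illustration of the same pattern, here is a minimal sketch wiring a ``ReadValue``/``Assign`` pair around an accumulating ``Add``, with the ``Assign`` registered as a model sink (variable name and shapes are illustrative):

.. code-block:: cpp

   #include <openvino/openvino.hpp>
   #include <openvino/op/util/variable.hpp>
   #include <openvino/opsets/opset13.hpp>

   int main() {
       using namespace ov::opset13;
       // State update rule: accumulator += input.
       auto input = std::make_shared<Parameter>(ov::element::f32, ov::Shape{1, 1});
       auto variable = std::make_shared<ov::op::util::Variable>(ov::op::util::VariableInfo{
           ov::PartialShape{1, 1}, ov::element::f32, "accumulator"});
       auto init = Constant::create(ov::element::f32, ov::Shape{1, 1}, {0.0f});
       auto read = std::make_shared<ReadValue>(init, variable);
       auto sum = std::make_shared<Add>(input, read);
       auto assign = std::make_shared<Assign>(sum, variable);
       auto result = std::make_shared<Result>(sum);
       auto model = std::make_shared<ov::Model>(ov::ResultVector{result},
                                                ov::SinkVector{assign},
                                                ov::ParameterVector{input});
       return 0;
   }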

+.. note::
+
+   **ONNX and frameworks supported via ONNX format:** *LSTM, RNN, GRU* original layers are
+   converted to the GRU/RNN/LSTM Sequence operations. *ONNX Loop* layer is converted to the
+   OpenVINO Loop operation.
+
+   **TensorFlow:** *BlockLSTM* is converted to a TensorIterator operation. The TensorIterator
+   body contains an LSTM Cell operation. Modifications such as Peepholes and InputForget are
+   not supported. The *While* layer is converted to a TensorIterator. The TensorIterator body
+   can contain any supported operations. However, dynamic cases where the count of iterations
+   cannot be calculated during shape inference are not supported.
+
+   **TensorFlow2:** The *While* layer is converted to a Loop operation. The Loop body can contain
+   any supported operations.