- Ubuntu 20.04/22.04, 64 bit (supported also on Windows, under WSL2)
- Python 3.8/3.9/3.10, including pip and virtualenv
- Hailo Dataflow Compiler v3.29.0 (Obtain from hailo.ai)
- HailoRT 4.19.0 (Obtain from hailo.ai) - required only for inference on Hailo-8 / Hailo-10H.
- The Hailo Model Zoo supports Hailo-8 / Hailo-10H connected via PCIe only.
- Nvidia’s Pascal/Turing/Ampere GPU architecture (such as Titan X Pascal, GTX 1080 Ti, RTX 2080 Ti, or RTX A4000)
- GPU driver version 525
- CUDA 11.8
- CUDNN 8.9
The Model Zoo requires the corresponding Dataflow Compiler version and, optionally, the matching HailoRT version. It is therefore recommended to use the Hailo Software Suite, which includes all of Hailo's SW components and ensures compatibility across product versions.
The Hailo Software Suite is composed of the Dataflow Compiler, HailoRT, TAPPAS and the Model Zoo (:ref:`see diagram below <sw_suite_figure>`).
Install the Hailo Dataflow Compiler and enter the virtualenv (visit hailo.ai for further instructions).
Install the HailoRT - required only for inference on Hailo-8 / Hailo-10H (visit hailo.ai for further instructions).
Clone the Hailo Model Zoo repo:
git clone https://github.com/hailo-ai/hailo_model_zoo.git
Run the setup script:
cd hailo_model_zoo; pip install -e .
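Taken together, a minimal end-to-end installation might look like the following sketch (the virtualenv path is illustrative; use the one created during the Dataflow Compiler installation):

# Activate the Dataflow Compiler virtualenv (~/hailo_venv is an illustrative path)
source ~/hailo_venv/bin/activate
# Clone the Model Zoo and install it into the virtualenv
git clone https://github.com/hailo-ai/hailo_model_zoo.git
cd hailo_model_zoo
pip install -e .
# Sanity check: the hailomz CLI should now be available
hailomz --help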
For setting up datasets please see DATA.
Verify that the Hailo-8 / Hailo-10 is connected via PCIe (required only to run on the device; full-precision and emulation runs execute on the GPU):
hailortcli fw-control identify
Note
hailortcli is the HailoRT command-line tool for interacting with Hailo devices.
Expected output:
(hailo) Running command 'fw-control' with 'hailortcli'
Identifying board
Control Protocol Version: 2
Firmware Version: 4.6.0 (release,app)
Logger Version: 0
Board Name: Hailo-8
Device Architecture: HAILO8_B0
Serial Number: HLUTM20204900071
Part Number: HM218B1C2FA
Product Name: HAILO-8 AI ACCELERATOR M.2 MODULE
To upgrade to a specific Hailo Model Zoo version within a suite, or on top of a previous installation that is not part of the suite:
Pull the specific repo branch:
git clone -b v2.6 https://github.com/hailo-ai/hailo_model_zoo.git
Run the setup script:
cd hailo_model_zoo; pip install -e .
The following scheme shows a high-level view of the Model Zoo evaluation process and the different stages in between.
By default, each stage executes all of its previously necessary stages according to the above diagram. The post-parsing stages also have an option to start from the product of previous stages (i.e., the Hailo Archive (HAR) file), as explained below. The operations are configured through a YAML file that exists for each model in the cfg folder. For a description of the YAML structure please see YAML.
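For example, to inspect a model's configuration (an illustrative sketch; the exact layout is an assumption, but in a typical checkout the per-model YAMLs sit under the cfg folder):

# Illustrative paths -- per-model YAML files live under the cfg folder
ls hailo_model_zoo/cfg/networks/
cat hailo_model_zoo/cfg/networks/resnet_v1_50.yaml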
NOTE: The Hailo Model Zoo provides the following functionality for Model Zoo models only. If you wish to use your custom model, use the Dataflow Compiler directly.
The pre-trained models are stored on AWS S3 and will be downloaded automatically into your data directory when running the Model Zoo. To parse models into Hailo's internal representation and generate the Hailo Archive (HAR) file:
hailomz parse <model_name>
- The default compilation target is Hailo-8. To compile for a different architecture (Hailo-15H, for example), use --hw-arch hailo15h as a CLI argument:
hailomz parse <model_name> --hw-arch hailo15h
- To customize the parsing behavior, use the --start-node-names and/or --end-node-names flags:
hailomz parse <model_name> --start-node-names <name1> --end-node-names <name1> <name2>
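Putting these options together, a parse invocation might look like the following sketch (the node names are placeholders that depend on your model's graph):

# Parse for Hailo-15H while overriding the parsed sub-graph boundaries
hailomz parse <model_name> --hw-arch hailo15h --start-node-names <input_node> --end-node-names <output_node1> <output_node2>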
To optimize models, convert them from full precision into integer representation and generate a quantized Hailo Archive (HAR) file:
hailomz optimize <model_name>
To optimize the model starting from a previously generated HAR file:
hailomz optimize <model_name> --har /path/to/model.har
You can use your own images by passing a directory path to the optimization process; the supported formats are .jpg, .jpeg, and .png:
hailomz optimize <model_name> --calib-path /path/to/calibration/imgs/dir/
- This step requires data for calibration. For additional information please see OPTIMIZATION.
In order to achieve the highest performance, use the performance flag:
hailomz optimize <model_name> --performance
The flag will be ignored on models that do not support this feature. The default and performance model scripts are located under hailo_model_zoo/cfg/alls/.
To add input conversion to the model, use the input conversion flag:
hailomz optimize <model_name> --input-conversion nv12_to_rgb
- Do not use the flag if an input conversion already exists in the alls or in the YAML.
To add input resize to the model, use the resize flag:
hailomz optimize <model_name> --resize 1080 1920
- Do not use the flag if a resize already exists in the alls or in the YAML.
To adjust the number of classes in the post-processing configuration, use the classes flag:
hailomz optimize <model_name> --classes 80
- Use this flag only if a post-process exists in the alls or in the YAML.
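As a combined sketch (paths are illustrative), a full optimization run starting from a parsed HAR with a custom calibration set might look like:

# Optimize a previously parsed HAR with local calibration images,
# using the performance-oriented model script where available
hailomz optimize <model_name> --har /path/to/model.har --calib-path /path/to/calibration/imgs/dir/ --performance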
To generate the model profiler report:
hailomz parse <model_name>
hailo profiler path/to/model.har
- When profiling a Quantized HAR file (the result of the optimization process), the report contains information about your model and accuracy.
- When profiling a Compiled HAR file (the result of the compilation process), the report contains the expected performance on the Hailo hardware (as well as the accuracy information).
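For example, to inspect the expected on-device performance, profile the HAR produced by the compilation stage (a sketch; the HAR path depends on where the previous stages wrote it):

# Produce a compiled HAR, then open it in the profiler
hailomz compile <model_name>
hailo profiler /path/to/compiled_model.har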
To run the Hailo compiler and generate the Hailo Executable Format (HEF) file:
hailomz compile <model_name>
By default the compilation target is Hailo-8. To compile for a different architecture, use the --hw-arch command-line argument:
hailomz compile <model_name> --hw-arch hailo15h
To generate the HEF starting from a previously generated HAR file:
hailomz compile <model_name> --har /path/to/model.har
- When working with a generated HAR, the previously chosen architecture will be used.
In order to achieve the best performance, use the performance flag:
hailomz compile <model_name> --performance --hw-arch <hw_arch>
The flag will be ignored on models that do not support this feature. The default and performance model scripts are located under hailo_model_zoo/cfg/alls/.
To add input conversion to the model, use the input conversion flag:
hailomz compile <model_name> --input-conversion nv12_to_rgb
Do not use the flag if an input conversion already exists in the alls or in the YAML.
To add input resize to the model, use the resize flag:
hailomz compile <model_name> --resize 1080 1920
Do not use the flag if a resize already exists in the alls or in the YAML.
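As a combined sketch, the two common compile flows look like this (paths are illustrative; omit --input-conversion if the alls or YAML already adds one):

# Compile a previously optimized HAR into a deployable HEF
# (the architecture chosen earlier in the flow is reused)
hailomz compile <model_name> --har /path/to/model.har
# Or compile from scratch for Hailo-15H with an NV12 input conversion
hailomz compile <model_name> --hw-arch hailo15h --input-conversion nv12_to_rgb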
To evaluate models in full precision:
hailomz eval <model_name>
To evaluate models starting from a previously generated Hailo Archive (HAR) file:
hailomz eval <model_name> --har /path/to/model.har
To evaluate models with the Hailo emulator (after quantization to integer representation - fast_numeric):
hailomz eval <model_name> --target emulator
To evaluate models on Hailo-8 / Hailo-10:
hailomz eval <model_name> --target hardware
If multiple devices are available, it's possible to select a specific one. Make sure to run on a device compatible with the compiled model.
# Device id looks something like 0000:41:00.0
hailomz eval <model_name> --target <device_id>
# This command can be used to list available devices
hailomz eval --help
To limit the number of images for evaluation use the following flag:
hailomz eval <model_name> --data-count <num-images>
To evaluate a model with an additional input conversion, use the input conversion flag:
hailomz eval <model_name> --input-conversion nv12_to_rgb
Do not use the flag if an input conversion already exists in the alls or in the YAML.
To evaluate a model with input resize, use the resize flag:
hailomz eval <model_name> --resize 1080 1920
Do not use the flag if a resize already exists in the alls or in the YAML.
To explore other options (for example: changing the default batch-size) use:
hailomz eval --help
- Currently, Model Zoo evaluation is supported only on Hailo-8 and Hailo-10H.
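A typical evaluation flow sanity-checks accuracy in the emulator on a small subset before measuring on the device, for example (the device id is illustrative):

# Quick accuracy check in the Hailo emulator on 128 images
hailomz eval <model_name> --target emulator --data-count 128
# Full evaluation on a connected, compatible device
hailomz eval <model_name> --target 0000:41:00.0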
To run visualization (without evaluation) and generate the output images:
hailomz eval <model_name> --visualize
To create a video file from the network predictions:
hailomz eval <model_name> --visualize --video-outpath /path/to/video_output.mp4
You can easily print the information of any network that exists in the Model Zoo, to get a sense of its input/output shapes, parameters, operations, framework, etc.
To print a model-zoo network information:
hailomz info <model_name>
Here is an example for printing information about mobilenet_v1:
hailomz info mobilenet_v1
Expected output:
<Hailo Model Zoo Info> Printing mobilenet_v1 Information
<Hailo Model Zoo Info>
    task: classification
    input_shape: 224x224x3
    output_shape: 1x1x1001
    operations: 0.57G
    parameters: 4.22M
    framework: tensorflow
    training_data: imagenet train
    validation_data: imagenet val
    eval_metric: Accuracy (top1)
    full_precision_result: 71.02
    source: https://github.com/tensorflow/models/tree/v1.13.0/research/slim
    license_url: https://github.com/tensorflow/models/blob/v1.13.0/LICENSE
We can use multiple disjoint models in the same binary. This is useful for running several small models on the device.
python hailo_model_zoo/multi_main.py <config_name>
In some situations you might want to convert the TFRecord file to an npy file (for example, when explicitly using the Dataflow Compiler for quantization). To do so, run the command:
python hailo_model_zoo/tools/conversion_tool.py /path/to/tfrecord_file resnet_v1_50 --npy