In this lab, you will use the Model Downloader to download a Caffe* classification model, SqueezeNet v1.1, from the Open Model Zoo, and then use the Model Optimizer to convert the model into Intermediate Representation (IR) format with both FP32 and FP16 data precisions.
In this section, you will use the Model Downloader to download a public pre-trained Caffe* classification model.
cd /opt/intel/openvino_2021/deployment_tools/open_model_zoo/tools/downloader/
python3 downloader.py -h
python3 downloader.py --print_all
python3 downloader.py --name squeezenet1.1 -o /opt/intel/workshop
cd /opt/intel/workshop/public/squeezenet1.1
ls
You will see the downloaded Caffe* model:
squeezenet1.1.caffemodel squeezenet1.1.prototxt squeezenet1.1.prototxt.orig
To learn more about this model, you can either click HERE, or:
cd /opt/intel/openvino_2021/deployment_tools/open_model_zoo/models/public/squeezenet1.1
gedit squeezenet1.1.md
Note: From the model description file, you will need to identify the input and output layer names, the shape of the input layer, the color order, and the mean or scale values (if applicable) for this model.
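The mean values from the model description are what the `--mean_values` flag later bakes into the IR. As a minimal sketch of what that preprocessing amounts to, the NumPy snippet below subtracts the per-channel BGR means from a dummy 3x227x227 image; the constant-valued image is an assumption for illustration only.

```python
import numpy as np

# Per-channel BGR means from the model description; these are the same
# values passed to Model Optimizer as --mean_values=data[104.0,117.0,123.0].
mean_bgr = np.array([104.0, 117.0, 123.0]).reshape(3, 1, 1)

# Dummy BGR image in CHW layout (a stand-in for a real 227x227 frame).
image = np.full((3, 227, 227), 128.0)

# The mean subtraction that the converted IR performs on its input.
preprocessed = image - mean_bgr

print(preprocessed.shape)  # (3, 227, 227)
```

Because Model Optimizer embeds this step into the IR, application code can feed raw BGR pixel values without repeating the subtraction.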
In this section, you will use the Model Optimizer to convert the downloaded Caffe* classification model to IR format with both FP32 and FP16 data precisions.
cd /opt/intel/openvino_2021/deployment_tools/model_optimizer/
python3 mo.py -h
A list of general Model Optimizer parameters will be printed out. To learn more about each parameter, refer to this Document.
For any model downloaded from the Open Model Zoo, you can find a model.yml file that contains the Model Optimizer parameters required to convert that particular model.
python3 mo.py \
--input_shape=[1,3,227,227] \
--input=data \
--output=prob \
--mean_values=data[104.0,117.0,123.0] \
--input_model=/opt/intel/workshop/public/squeezenet1.1/squeezenet1.1.caffemodel \
--input_proto=/opt/intel/workshop/public/squeezenet1.1/squeezenet1.1.prototxt \
--data_type FP32 \
-o /opt/intel/workshop/Squeezenet/FP32 \
--model_name squeezenet1.1_fp32
cd /opt/intel/workshop/Squeezenet/FP32
ls
You will see that three files were created in this folder: the .xml file describes the model topology, and the .bin file contains the weights and biases.
squeezenet1.1_fp32.bin squeezenet1.1_fp32.mapping squeezenet1.1_fp32.xml
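The .xml topology file is plain XML, so you can inspect it with standard tools. The sketch below parses a minimal, hand-written IR-style fragment (an illustration of the layout, not the real SqueezeNet file) and lists its layers; to inspect the actual model, point `ET.parse` at squeezenet1.1_fp32.xml instead.

```python
import xml.etree.ElementTree as ET

# Minimal illustrative fragment in the spirit of an IR .xml file.
# The real file contains many more layers plus shape and edge data.
ir_fragment = """
<net name="squeezenet1.1_fp32" version="10">
  <layers>
    <layer id="0" name="data" type="Parameter"/>
    <layer id="1" name="prob" type="SoftMax"/>
  </layers>
</net>
"""

root = ET.fromstring(ir_fragment)
for layer in root.iter("layer"):
    # Print each layer's id, name, and operation type.
    print(layer.get("id"), layer.get("name"), layer.get("type"))
```

Listing the layer names this way is a quick check that the input (`data`) and output (`prob`) layers you specified to Model Optimizer made it into the IR.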
cd /opt/intel/openvino_2021/deployment_tools/model_optimizer/
python3 mo.py \
--input_shape=[1,3,227,227] \
--input=data \
--output=prob \
--mean_values=data[104.0,117.0,123.0] \
--input_model=/opt/intel/workshop/public/squeezenet1.1/squeezenet1.1.caffemodel \
--input_proto=/opt/intel/workshop/public/squeezenet1.1/squeezenet1.1.prototxt \
--data_type FP16 \
-o /opt/intel/workshop/Squeezenet/FP16 \
--model_name squeezenet1.1_fp16
cd /opt/intel/workshop/Squeezenet/FP16
ls
You will see that three files were created in this folder: the .xml file describes the model topology, and the .bin file contains the weights and biases.
squeezenet1.1_fp16.bin squeezenet1.1_fp16.mapping squeezenet1.1_fp16.xml
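A quick way to see what the FP16 conversion buys you: each weight is stored in 2 bytes instead of 4, so the .bin file is roughly half the size of its FP32 counterpart. The back-of-the-envelope sketch below uses a hypothetical parameter count purely for illustration.

```python
# Hypothetical parameter count, for illustration only --
# compare the actual .bin file sizes on disk to verify.
num_params = 1_235_496

fp32_bytes = num_params * 4  # 4 bytes per FP32 weight
fp16_bytes = num_params * 2  # 2 bytes per FP16 weight

print(fp32_bytes, fp16_bytes, fp32_bytes / fp16_bytes)
```

The smaller FP16 IR is useful for devices that prefer or require half precision, at the cost of reduced numeric range.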
To learn more about converting a Caffe* model using the Model Optimizer, please refer to the OpenVINO documentation: Converting a Caffe* Model.