diff --git a/docs/tidl_osr_debug.md b/docs/tidl_osr_debug.md
index 7ff6a9c..133e017 100644
--- a/docs/tidl_osr_debug.md
+++ b/docs/tidl_osr_debug.md
@@ -46,7 +46,8 @@ As an example for ONNX out of box example script user can run in ARM only mode a
```
For user's custom model they can refer [here](../examples/osrt_python/README.md#example-apis-for-tidl-offload-or-delegation-in-osrts) to enable ARM only mode
* User can set debug_level = 1 or 2 to enable verbose debug log during model compilation and during model inference
-* If model infernece works fine in ARM only mode but model compilation fails with C7x-MMA offload, then try dispatching some of the layers (less commonly used layer type) to ARM by using “deny_list” option.
+* If model inference works fine in ARM only mode but model compilation fails with C7x-MMA offload, then try dispatching some of the layers (less commonly used layer types) to ARM using the “deny_list” option.
+* Additionally, it is recommended to disable the default ONNX graph optimizations (i.e., set the session option graph_optimization_level to onnxruntime.GraphOptimizationLevel.ORT_DISABLE_ALL)
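+
+  As a sketch (assuming ONNX Runtime's Python API; the model path is a placeholder), the recommended session option can be set as follows:
+  ```
+  import onnxruntime
+
+  # Disable ONNX Runtime's default graph optimizations so the graph reaches
+  # the TIDL compilation step unmodified
+  sess_options = onnxruntime.SessionOptions()
+  sess_options.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_DISABLE_ALL
+  # Pass sess_options when creating the inference session, e.g.:
+  # sess = onnxruntime.InferenceSession("model.onnx", sess_options)
+  ```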
# Steps to Debug Error Scenarios for target(EVM/device) execution
diff --git a/docs/version_compatibility_table.md b/docs/version_compatibility_table.md
index f152250..5e40fdd 100644
--- a/docs/version_compatibility_table.md
+++ b/docs/version_compatibility_table.md
@@ -5,6 +5,7 @@
|EdgeAI TIDL Tools TAG | AM62 | AM62A | AM68A/J721S2 (TDA4AL, TDA4VL) | AM68PA/J721E (TDA4VM)| AM69A/J784S4(TDA4AP, TDA4VP,TDA4AH, TDA4VH)|
| ---------------------------- |:--------------:|:---------------:|:--------------:|:--------------:|:-------------:|
+| 09_01_07_00 | 09_01_00_08 | Processor SDK LINUX : 09.01.00.07 | Processor SDK LINUX 09.01.00.06 <br> Processor SDK RTOS 09.01.00.06 | Processor SDK LINUX 09.01.00.06 <br> Processor SDK RTOS 09.01.00.06 | Processor SDK LINUX 09.01.00.06 <br> Processor SDK RTOS 09.01.00.06 |
| 09_01_06_00 | 09_01_00_08 | Processor SDK LINUX : 09.01.00.07 | Processor SDK LINUX 09.01.00.06 <br> Processor SDK RTOS 09.01.00.06 | Processor SDK LINUX 09.01.00.06 <br> Processor SDK RTOS 09.01.00.06 | Processor SDK LINUX 09.01.00.06 <br> Processor SDK RTOS 09.01.00.06 |
| 09_01_04_00 | 09_01_00_08 | Processor SDK LINUX : 09.01.00.07 | Processor SDK LINUX 09.01.00.06 <br> Processor SDK RTOS 09.01.00.06 | Processor SDK LINUX 09.01.00.06 <br> Processor SDK RTOS 09.01.00.06 | Processor SDK LINUX 09.01.00.06 <br> Processor SDK RTOS 09.01.00.06 |
| 09_01_03_00 | 09_01_00_08 | Processor SDK LINUX : 09.01.00.07 | Processor SDK LINUX 09.01.00.06 <br> Processor SDK RTOS 09.01.00.06 | Processor SDK LINUX 09.01.00.06 <br> Processor SDK RTOS 09.01.00.06 | Processor SDK LINUX 09.01.00.06 <br> Processor SDK RTOS 09.01.00.06 |
diff --git a/examples/osrt_cpp/README.md b/examples/osrt_cpp/README.md
index 9a2b9ee..b9dbb2d 100644
--- a/examples/osrt_cpp/README.md
+++ b/examples/osrt_cpp/README.md
@@ -10,16 +10,16 @@
## Introduction
- - CPP APIs os the DL runtime offered by solutions only supports the model inference. So the user is expeted to run the [Python Examples](../../README.md#python-exampe) on PC to generate the model artifacts.
+ - CPP APIs for DL runtimes only support model inference, so the user is expected to run the [Python Examples](../../README.md#python-exampe) on PC to generate the model artifacts.
- CPP API require yaml file reading lib. So the user is expected to install libyaml-cpp-dev by running command "sudo apt-get install libyaml-cpp-dev"
-> Note : We are plannign to clean-up and unify the user inetrface for CPP examples by next release. We are also planning to add more CPP exmaples.
+
## Setup
 - Prepare the Environment for the Model compilation by following the setup section [here](../../README.md#setup)
## Build
- - Build the CPP examples using cmake from repository base directory. Create a build folder for your generated build files.
+ - Build the CPP examples using cmake from repository base directory. Create a build folder for your generated build files
```
mkdir build && cd build
@@ -96,12 +96,16 @@
- -v : verbose (set to 1)
## Validation on Target
-- Build and run steps remains same for PC emaultionn and target. Copy the below folders from PC to the EVM where this repo is cloned before ruunning the examples
-
+- Build and run steps remain the same for PC emulation and target. Copy the below folders from PC to the EVM where this repo is cloned before running the examples
```
./model-artifacts
./models
```
+- For ONNX Runtime, export the following on the device prior to execution:
+ ```
+ export TIDL_RT_ONNX_VARDIM=1
+ ```
+
## Running pre-compiled model from modelzoo
- To run precompiled model from model zoo run the following commands (as an example: cl-0000_tflitert_mlperf_mobilenet_v1_1.0_224_tflite)
- Fetch the tar link from model zoo and wget the file
@@ -118,9 +122,10 @@
cd ../
./bin/Release/tfl_main -z "cl-0000_tflitert_mlperf_mobilenet_v1_1.0_224_tflite/" -v 1 -i "test_data/airshow.jpg" -l "test_data/labels.txt" -a 1 -d 1
```
-- to run on target , copy the below folders from PC to the EVM where this repo is cloned before ruunning the examples
+- To run on target, copy the below folders from PC to the EVM where this repo is cloned before running the examples
```
./model-artifacts
./models
```
+
diff --git a/setup.sh b/setup.sh
index f92ffe3..6da410b 100755
--- a/setup.sh
+++ b/setup.sh
@@ -196,7 +196,7 @@ cp_osrt_lib()
SCRIPTDIR=`pwd`
-REL=09_01_06_00
+REL=09_01_07_00
skip_cpp_deps=0
skip_arm_gcc_download=0
skip_x86_python_install=0
@@ -240,13 +240,20 @@ shift # past argument
done
set -- "${POSITIONAL[@]}" # restore positional parameters
-#Check if tools are built for
-if [ $TIDL_TOOLS_TYPE == GPU ];then
- tidl_gpu_tools=1
+#Check if CPU or GPU tools
+if [ -z "$TIDL_TOOLS_TYPE" ];then
+ echo "Defaulting to CPU tools"
+ tidl_gpu_tools=0
else
- tidl_gpu_tools=0
+    echo "TIDL_TOOLS_TYPE set to: $TIDL_TOOLS_TYPE"
+    if [ "$TIDL_TOOLS_TYPE" == "GPU" ];then
+ tidl_gpu_tools=1
+ else
+ tidl_gpu_tools=0
+ fi
fi
+
version_match=`python3 -c 'import sys;r=0 if sys.version_info >= (3,6) else 1;print(r)'`
if [ $version_match -ne 0 ]; then
echo 'python version must be >= 3.6'