This repository has been archived by the owner on Nov 11, 2023. It is now read-only.
Releases · PINTO0309/openvino2tensorflow
openvino2tensorflow v1.31.1
- Security update
- README update
openvino2tensorflow v1.31.0
- OpenVINO 2021.4.582 -> 2022.1.0
- Remove MXNet
openvino2tensorflow v1.30.3
- CUDA 11.6
- TensorRT 8.4.0
- cuDNN 8.4
- TensorFlow v2.9.0
- TensorFlow Lite v2.9.0
- PyTorch v1.12.0 (with grid_sample)
- onnxruntime-gpu v1.12.0
openvino2tensorflow v1.30.2
- #106 Transpose channels for PReLU replacement when the PReLU channel count is 3 (@zye1996)
- https://github.com/PINTO0309/simple-onnx-processing-tools v1.0.22
openvino2tensorflow v1.30.1
Added workarounds to avoid conversion errors in the following layers:
- GroupConvolution
- MatMul
- NonZero
openvino2tensorflow v1.30.0
- CUDA 11.6
- TensorRT 8.4.0
- cuDNN 8.4
- TensorFlow v2.9.0-rc0
- TensorFlow Lite v2.9.0-rc0
- PyTorch v1.12.0 (with grid_sample)
- onnxruntime v1.12.0
openvino2tensorflow v1.29.6
openvino2tensorflow v1.29.5
https://github.com/PINTO0309/simple-onnx-processing-tools
- sne4onnx
- snd4onnx
- snc4onnx
- scs4onnx
- sog4onnx
- sam4onnx
- soc4onnx
- scc4onnx
openvino2tensorflow v1.29.4
- Added automatic NCHW conversion function for ONNX
- Added the --disable_onnx_nchw_conversion option, which disables conversion to NCHW when generating ONNX and generates the ONNX model in NHWC format
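A minimal invocation sketch for the new option. Only --disable_onnx_nchw_conversion is confirmed by the notes above; the other flag names and the model paths are illustrative assumptions:

```shell
# Hypothetical conversion command: read an OpenVINO IR model and emit ONNX
# without the NCHW conversion, so the generated ONNX stays in NHWC layout.
# --model_path and --output_onnx are assumed flag names for illustration.
openvino2tensorflow \
  --model_path openvino/model.xml \
  --output_onnx \
  --disable_onnx_nchw_conversion
```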
openvino2tensorflow v1.29.3
- Support for OpenVINOExecutionProvider for onnxruntime_gpu