New in this Release
Description | Notes |
---|---|
Improved performance for 16-bit LeakyReLU | Requires an updated version of the C7x/MMA firmware (10_00_08_00); advanced_options:c7x_firmware must be set to 10_00_08_00 during model compilation |
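The firmware option above is passed alongside the other compilation options. A minimal sketch, assuming the edgeai-tidl-tools compile-option convention (the `tidl_tools_path` and `artifacts_folder` values are placeholders, not values from this document):

```python
# Sketch: requesting the updated C7x/MMA firmware at model-compilation time.
# Only the advanced_options:c7x_firmware key comes from the release note above;
# the other keys and values are illustrative placeholders.
compile_options = {
    "tidl_tools_path": "/path/to/tidl_tools",      # placeholder path
    "artifacts_folder": "./model-artifacts",        # placeholder path
    "advanced_options:c7x_firmware": "10_00_08_00", # matches firmware 10_00_08_00
}

print(compile_options["advanced_options:c7x_firmware"])
```

In the ONNX Runtime flow, this dictionary would typically be handed to the TIDL execution provider as its provider options.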
Fixed in this Release
ID | Description | Affected Platforms |
---|---|---|
TIDL-5158 | Models compiled with add_data_convert_ops=0 and an unsigned input suffer accuracy loss | All except AM62 |
TIDL-5139 | A layer with multiple inputs, where one input is 16-bit and at least one input is 8-bit, may show accuracy degradation | All except AM62 |
TIDL-5045 | MatMul layer whose inputs have different signedness returns an error during compilation (Note: the fix requires a firmware update) | Only impacts AM68PA (TDA4VM) |
TIDL-5041 | TIDL model compilation fails when the model contains a 1x1 convolution whose input has height = 1 and width = 1 | All except AM62 |
TIDL-4964 | Model compilation may fail with the error message "Error : Error Code = <ERR_UNKNOWN>" for networks containing a Concat layer | All except AM62 |
TIDL-4858 | A Clip layer without a max initializer results in poor accuracy | All except AM62 |
TIDL-4854 | Model compilation results in a segmentation fault in certain cases when the same input is fed twice to an elementwise Mul operator | All except AM62 |
TIDL-4737 | Model compilation exits with the error message "Mixed Precision is not supported with matmul layer, Aborting" when the network has a MatMul layer and is compiled with mixed precision | All except AM62 |
TIDL-4679 | A custom unsupported layer was removed from the subgraph instead of being delegated to ARM | All except AM62 |
TIDL-4631 | Model compilation fails with the error message "Output Transpose is not supported on this device" | Only impacts AM68PA (TDA4VM) |
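Several of the fixes above (TIDL-5158, TIDL-4737) involve specific compile-option combinations. A minimal sketch of how such options are assembled, assuming the edgeai-tidl-tools style of option dictionary; the keys shown for data-convert ops and bit depth follow that convention but should be checked against the SDK documentation for the exact release:

```python
# Sketch: compile-option combinations touched by the fixes above.
# All paths are placeholders; option keys are assumptions based on the
# edgeai-tidl-tools convention, not taken verbatim from this document.
compile_options = {
    "artifacts_folder": "./model-artifacts",          # placeholder path
    # TIDL-5158 concerned add_data_convert_ops=0 with unsigned input;
    # keeping data-convert ops enabled sidesteps that combination.
    "advanced_options:add_data_convert_ops": 3,
    # TIDL-4737 concerned mixed precision (per-layer bit-depth overrides)
    # together with a MatMul layer.
    "tensor_bits": 8,
}

print(compile_options["advanced_options:add_data_convert_ops"])
```

The dictionary would be passed to the TIDL compilation/execution provider in the same way as any other set of delegate options.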