[NNAdapter][TIM-VX] Add fill_like,split,slice and fix conv2d_transpose #8177
Conversation
Thanks for your contribution!
namespace verisilicon_timvx {

int ConvertSlice(Converter* converter, hal::Operation* operation) {
#if 1
Code like `#if 1` must not appear in a PR.
fixed in new commit
@@ -56,6 +56,12 @@ std::shared_ptr<tim::vx::Tensor> CreateTimVXTensor(
    const NNAdapterOperandType* type,
    void* buffer = nullptr,
    std::vector<int32_t> dimensions = {});

std::shared_ptr<tim::vx::Tensor> CreateConstantTimVXTensor(
Is it necessary to add this? Is it called anywhere besides the converter? Where else would it be used, engine.cc?
fixed in new commit: removed this interface. Now that addXXXoperand is used, this interface is no longer needed.
}
NNAdapterOperand* filter_operand = nullptr;
bool is_quant_mode = false;
if (filter_precison == PRECISION(kInt8)) {
Can this follow conv2d as a reference? Can the unnecessary code be removed? Since pengyang has already added mixed-precision handling in the lite pass, there should in theory be fewer type checks here.
@@ -25,6 +26,7 @@ int ConvertConv2DTranspose(Converter* converter, hal::Operation* operation) {
  CONV_2D_TRANSPOSE_OPERATION_EXTRACT_INPUTS_OUTPUTS
  // Dynamic shapes are still not supported
  NNADAPTER_CHECK_EQ(input_operand->type.dimensions.dynamic_count, 0);

Why add so many blank lines? If a separation is necessary, add a comment line instead.
Then why is there no blank line above `// Convert to tim-vx tensors and operators` (to separate the non-hardware code from the hardware code)?
fixed in new commit
int32_t multiplier = 0;
std::vector<int32_t> filter_dimensions(
    filter_operand->type.dimensions.data,
    filter_operand->type.dimensions.data +
        filter_operand->type.dimensions.count);

Same as above.
fixed in new commit
if (is_depthwise_mode) {
  multiplier = output_channel_size / group;
  NNADAPTER_CHECK_GT(filter_operand->type.dimensions.count, 2);
  // Oc,1,H,W -> 1,Oc,H,W
  filter_dimensions[0] = filter_dimensions[1];
  filter_dimensions[1] = output_channel_size;
}

Same as above.
fixed in new commit
@@ -48,18 +50,24 @@ int ConvertConv2DTranspose(Converter* converter, hal::Operation* operation) {
  if (!input_tensor) {
    input_tensor = converter->ConvertOperand(input_operand);
  }

  std::vector<int32_t> filter_permutation = {1, 0, 2, 3};
Why do we need to perform Oc,Ic,H,W -> Ic,Oc,H,W? TIM-VX defaults to WHIcOc: https://github.com/VeriSilicon/TIM-VX/blob/eecbe264b6a934ad6b089aa4701fa3fa77335201/include/tim/vx/ops/deconv.h#L63
In a normal conv, Oc is the filter's batch dimension, but in deconv Oc is the filter's channel dimension, so a transpose is needed here. This change has been verified with unit tests and models.
int32_t multiplier = 0;
std::vector<int32_t> filter_dimensions(
    filter_operand->type.dimensions.data,
    filter_operand->type.dimensions.data +
        filter_operand->type.dimensions.count);

if (is_depthwise_mode) {
Does a depthwise case even exist for deconv? If not, don't do the following handling.
fixed in new commit
@@ -48,6 +48,13 @@ class Converter {
  // operand
  std::shared_ptr<tim::vx::Tensor> ConvertOperand(
      hal::Operand* operand, std::vector<int32_t> dimensions = {});
  // Add a constant TIM-VX tensor
Why not add it below AddTensor?
fixed in new commit: removed this interface. Now that addXXXoperand is used, this interface is no longer needed.
@@ -94,5 +94,16 @@ std::shared_ptr<tim::vx::Tensor> Converter::ConvertOperand(
  return tensor;
}

std::shared_ptr<tim::vx::Tensor> Converter::AddConstantTensor(
Why not add it below AddTensor?
fixed in new commit: removed this interface. Now that addXXXoperand is used, this interface is no longer needed.
    tim::vx::DataType precision,
    const float* quant_scale,
    const int32_t* zero_point) {
  auto tensor = CreateConstantTimVXTensor(
Why not call CreateTimVXTensor directly?
fixed in new commit: removed this interface. Now that addXXXoperand is used, this interface is no longer needed.
auto output_tensor = converter->ConvertOperand(output_operand);
std::vector<int32_t> dimensions = {1};
std::shared_ptr<tim::vx::Tensor> dummy_tesnor;
if (IsUInt8AsymmPerLayerQuantType(input_operand->type.precision)) {
Please rework this using the pass-based approach we discussed earlier; this code is too ugly.
In the new commit, the build-model phase in engine.cc adds a call to ConvertFillLikeIntoEltwiseMul, which replaces the fill_like operator before entering the driver HAL's convert step.
  input_tensor = converter->ConvertOperand(input_operand);
}
auto output_tensor = converter->ConvertOperand(output_operand);

Don't add blank lines for no reason. If a separation is needed, add a comment line instead.
fixed in new commit
}
auto output_tensor = converter->ConvertOperand(output_operand);

auto output_dim = output_operand->type.dimensions.count;
Use output_rank or output_dimensions_count.
fixed in new commit
auto output_tensor = converter->ConvertOperand(output_operand);

auto output_dim = output_operand->type.dimensions.count;
std::vector<int32_t> axis;
Suggest using mapped_axes.
fixed in new commit
for (uint32_t i = 0; i < axes_count; i++) {
  axis.push_back(output_dim - 1 - axes[i]);
}
std::vector<int32_t> start;
Suggest using mapped_starts. Don't name variables arbitrarily.
fixed in new commit
else
  start.push_back(0);
}
std::vector<int32_t> length;
Use mapped_lengths.
fixed in new commit
    break;
  }
}
if (match_axis)
Why not use `? :`?
fixed in new commit
    break;
  }
}
if (match_axis)
Why not use `? :`?
fixed in new commit
}
std::vector<int32_t> start;
for (int i = 0; i < output_dim; i++) {
  bool match_axis = false;
Just use found or matched.
fixed in new commit
  split_timvx.push_back(split[i]);
  output_tensors[i] = converter->ConvertOperand(output_operands[i]);
}

Suggest adding a comment line instead of the blank line.
fixed in new commit
std::vector<std::shared_ptr<tim::vx::Tensor>> output_tensors(split.size());
// WHCN
uint32_t axis_timvx = input_operand->type.dimensions.count - 1 - axis;
std::vector<uint32_t> split_timvx;
Use mapped_split.
fixed in new commit
}
std::vector<std::shared_ptr<tim::vx::Tensor>> output_tensors(split.size());
// WHCN
uint32_t axis_timvx = input_operand->type.dimensions.count - 1 - axis;
Use mapped_axis.
fixed in new commit
}
std::vector<std::shared_ptr<tim::vx::Tensor>> output_tensors(split.size());
// WHCN
uint32_t axis_timvx = input_operand->type.dimensions.count - 1 - axis;
Does the sign of axis need to be checked?
Not needed here: SPLIT_OPERATION_EXTRACT_INPUTS_OUTPUTS earlier already handles a negative axis and converts it to a non-negative one.
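The two axis transformations discussed in this thread can be sketched as below; the helper names are hypothetical, not the PR's actual code. A negative axis is first normalized against the rank (which this reply says the extraction macro already does), and the non-negative axis is then mapped from NCHW-style indexing to TIM-VX's reversed WHCN indexing via `rank - 1 - axis`:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch: normalize a possibly negative axis to [0, rank).
int32_t NormalizeAxis(int32_t axis, int32_t rank) {
  return axis < 0 ? axis + rank : axis;
}

// Hypothetical sketch: map an NCHW-style axis index to TIM-VX's
// reversed (WHCN) dimension order.
int32_t MapAxisToWHCN(int32_t axis, int32_t rank) {
  return rank - 1 - axis;  // e.g. axis 1 of a rank-4 tensor maps to 2
}
```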
@@ -130,12 +236,6 @@ int ConvertConv2dTranspose(Converter* converter, OpInfo* op, Scope* scope) {
  auto fuse_code_operand = converter->AddConstantOperand(fuse_code_value);

  // Output operand
  auto output_name = op->Output("Output").front();
Please add the unit tests as well.
fixed in new commit
There are too many issues. Make a pass of fixes first, then I'll review again. @yingshengBD
RemoveOperand(model, value_operand);
hal::Operand* dummy_operand;
const std::vector<int32_t> dimensions({1});
if (IsUInt8AsymmPerLayerQuantType(input_operand->type.precision)) {
It should be this formula: input_operand * zero_operand + value_operand, where zero_operand can be created with `auto zero_operand = AddOperand(model);`
fixed in new commit; added a fill_any_like unit test.
// Convert to tim-vx tensors and operators
auto input_tensor = converter->GetMappedTensor(input_operand);
if (!input_tensor) {
  input_tensor = converter->ConvertOperand(input_operand);
}

Keep this in mind going forward: don't just add a bare blank line; if a separation is needed, add a comment line instead.
#include "utility/utility.h"

namespace nnadapter {
static void InsertAdd(hal::Model* model,
Let's leave it as is for now; I'll rework this part of the code later.
test=develop test=huawei_ascend_npu
test=develop test=huawei_ascend_npu test=verisilicon_timvx
LGTM