
[NNAdapter][TIM-VX] Add fill_like,split,slice and fix conv2d_transpose #8177

Merged 7 commits into PaddlePaddle:develop on Jan 15, 2022
Conversation

yingshengBD (Collaborator)

test=develop test=huawei_ascend_npu

@paddle-bot-old

Thanks for your contribution!

namespace verisilicon_timvx {

int ConvertSlice(Converter* converter, hal::Operation* operation) {
#if 1
Collaborator:

Code like "#if 1" should not appear in a PR.

Collaborator Author:

fixed in new commit

@@ -56,6 +56,12 @@ std::shared_ptr<tim::vx::Tensor> CreateTimVXTensor(
const NNAdapterOperandType* type,
void* buffer = nullptr,
std::vector<int32_t> dimensions = {});

std::shared_ptr<tim::vx::Tensor> CreateConstantTimVXTensor(
Collaborator:

Is it necessary to add this? Who calls it besides the converter? Where else would it be used, engine.cc?

Collaborator Author:

fixed in new commit. The interface has been removed; now that addXXXoperand is used, it is no longer needed.

}
NNAdapterOperand* filter_operand = nullptr;
bool is_quant_mode = false;
if (filter_precison == PRECISION(kInt8)) {
Collaborator:

Could this follow the conv2d converter as a reference? Can the unnecessary code be removed? Since pengyang has already added mixed-precision handling in the lite pass, there should in theory be fewer type checks here.

@@ -25,6 +26,7 @@ int ConvertConv2DTranspose(Converter* converter, hal::Operation* operation) {
CONV_2D_TRANSPOSE_OPERATION_EXTRACT_INPUTS_OUTPUTS
// Dynamic shapes are still not supported
NNADAPTER_CHECK_EQ(input_operand->type.dimensions.dynamic_count, 0);

Collaborator:

Why add so many blank lines? If separation is needed, add a one-line comment instead.

Collaborator:

Why is there no blank line above // Convert to tim-vx tensors and operators (to separate the hardware-independent code from the hardware-specific code)?

Collaborator Author:

fixed in new commit

int32_t multiplier = 0;
std::vector<int32_t> filter_dimensions(
filter_operand->type.dimensions.data,
filter_operand->type.dimensions.data +
filter_operand->type.dimensions.count);

Collaborator:

Same as above.

Collaborator Author:

fixed in new commit

if (is_depthwise_mode) {
multiplier = output_channel_size / group;
NNADAPTER_CHECK_GT(filter_operand->type.dimensions.count, 2);
// Oc,1,H,W -> 1,Oc,H,W
filter_dimensions[0] = filter_dimensions[1];
filter_dimensions[1] = output_channel_size;
}

Collaborator:

Same as above.

Collaborator Author:

fixed in new commit

@@ -48,18 +50,24 @@ int ConvertConv2DTranspose(Converter* converter, hal::Operation* operation) {
if (!input_tensor) {
input_tensor = converter->ConvertOperand(input_operand);
}

std::vector<int32_t> filter_permutation = {1, 0, 2, 3};
Collaborator:

Why does the oc, ic, h, w -> ic, oc, h, w transpose need to be performed? TIM-VX defaults to WHIcOc: https://github.com/VeriSilicon/TIM-VX/blob/eecbe264b6a934ad6b089aa4701fa3fa77335201/include/tim/vx/ops/deconv.h#L63

Collaborator Author:

In a regular conv, Oc is the filter's batch dimension, whereas in deconv Oc is the filter's channel dimension, so a transpose is needed here. This change has been verified with unit tests and model runs.
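As a concrete sketch of the layout fix described above: filter_permutation and filter_dimensions come from the diff in this PR, but the loop below is only an illustration of how the permutation is applied, not the exact merged code.

// A conv2d filter is laid out as [Oc, Ic, H, W]; for deconv, TIM-VX expects the
// output channels in the filter's channel slot, so axes 0 and 1 are swapped.
std::vector<int32_t> filter_permutation = {1, 0, 2, 3};
std::vector<int32_t> transposed_dimensions(filter_dimensions.size());
for (size_t i = 0; i < filter_permutation.size(); ++i) {
  // e.g. [Oc, Ic, H, W] -> [Ic, Oc, H, W]
  transposed_dimensions[i] = filter_dimensions[filter_permutation[i]];
}
// The filter data is permuted the same way before the tensor is handed to
// tim::vx::ops::DeConv2d.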

int32_t multiplier = 0;
std::vector<int32_t> filter_dimensions(
filter_operand->type.dimensions.data,
filter_operand->type.dimensions.data +
filter_operand->type.dimensions.count);

if (is_depthwise_mode) {
Collaborator:

Does deconv ever have a depthwise case? If not, don't do the handling below.

Collaborator Author:

fixed in new commit

@@ -48,6 +48,13 @@ class Converter {
// operand
std::shared_ptr<tim::vx::Tensor> ConvertOperand(
hal::Operand* operand, std::vector<int32_t> dimensions = {});
// Add a constant TIM-VX tensor
Collaborator:

Why not put this below AddTensor?

Collaborator Author:

fixed in new commit. The interface has been removed; now that addXXXoperand is used, it is no longer needed.

@@ -94,5 +94,16 @@ std::shared_ptr<tim::vx::Tensor> Converter::ConvertOperand(
return tensor;
}

std::shared_ptr<tim::vx::Tensor> Converter::AddConstantTensor(
Collaborator:

Why not put this below AddTensor?

Collaborator Author:

fixed in new commit. The interface has been removed; now that addXXXoperand is used, it is no longer needed.

tim::vx::DataType precision,
const float* quant_scale,
const int32_t* zero_point) {
auto tensor = CreateConstantTimVXTensor(
Collaborator:

Why not call CreateTimVXTensor directly?

Collaborator Author:

fixed in new commit. The interface has been removed; now that addXXXoperand is used, it is no longer needed.

auto output_tensor = converter->ConvertOperand(output_operand);
std::vector<int32_t> dimensions = {1};
std::shared_ptr<tim::vx::Tensor> dummy_tesnor;
if (IsUInt8AsymmPerLayerQuantType(input_operand->type.precision)) {
Collaborator:

Please rework this using the pass-based approach we discussed earlier; this code is too ugly.

Collaborator Author:

In the new commit, the build-model phase in engine.cc calls a new function, ConvertFillLikeIntoEltwiseMul, which replaces the fill_like operator before the driver HAL conversion runs.

input_tensor = converter->ConvertOperand(input_operand);
}
auto output_tensor = converter->ConvertOperand(output_operand);

Collaborator:

Don't add blank lines for no reason; if separation is needed, add a one-line comment instead.

Collaborator Author:

fixed in new commit

}
auto output_tensor = converter->ConvertOperand(output_operand);

auto output_dim = output_operand->type.dimensions.count;
Collaborator:

Use output_rank or output_dimensions_count.

Collaborator Author:

fixed in new commit

auto output_tensor = converter->ConvertOperand(output_operand);

auto output_dim = output_operand->type.dimensions.count;
std::vector<int32_t> axis;
Collaborator:

Suggest using mapped_axes.

Collaborator Author:

fixed in new commit

for (uint32_t i = 0; i < axes_count; i++) {
axis.push_back(output_dim - 1 - axes[i]);
}
std::vector<int32_t> start;
Collaborator:

Suggest using mapped_starts; don't pick variable names arbitrarily.

Collaborator Author:

fixed in new commit

else
start.push_back(0);
}
std::vector<int32_t> length;
Collaborator:

Use mapped_lengths.

Collaborator Author:

fixed in new commit

break;
}
}
if (match_axis)
Collaborator:

Why not use ? : here?

Collaborator Author:

fixed in new commit
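For reference, the reviewer's suggestion amounts to collapsing the if/else into a conditional expression; the value pushed in the matched branch is not visible in this hunk, so starts[j] below is only a placeholder name.

// Equivalent one-liner with the conditional operator (starts[j] stands in for
// whatever the matched branch pushes in the real code):
start.push_back(match_axis ? starts[j] : 0);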

break;
}
}
if (match_axis)
Collaborator:

Why not use ? : here?

Collaborator Author:

fixed in new commit

}
std::vector<int32_t> start;
for (int i = 0; i < output_dim; i++) {
bool match_axis = false;
Collaborator:

Just use found or matched.

Collaborator Author:

fixed in new commit

split_timvx.push_back(split[i]);
output_tensors[i] = converter->ConvertOperand(output_operands[i]);
}

Collaborator:

Suggest replacing the blank line with a one-line comment.

Collaborator Author:

fixed in new commit

std::vector<std::shared_ptr<tim::vx::Tensor>> output_tensors(split.size());
// WHCN
uint32_t axis_timvx = input_operand->type.dimensions.count - 1 - axis;
std::vector<uint32_t> split_timvx;
Collaborator:

Use mapped_split.

Collaborator Author:

fixed in new commit

}
std::vector<std::shared_ptr<tim::vx::Tensor>> output_tensors(split.size());
// WHCN
uint32_t axis_timvx = input_operand->type.dimensions.count - 1 - axis;
Collaborator:

Use mapped_axis.

Collaborator Author:

fixed in new commit

}
std::vector<std::shared_ptr<tim::vx::Tensor>> output_tensors(split.size());
// WHCN
uint32_t axis_timvx = input_operand->type.dimensions.count - 1 - axis;
Collaborator:

Does the sign of axis need to be checked here?

Collaborator Author:

Not needed here; SPLIT_OPERATION_EXTRACT_INPUTS_OUTPUTS has already handled negative axis values earlier and converted them to non-negative ones.
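As a side note, that normalization typically looks like the sketch below; the actual macro body in SPLIT_OPERATION_EXTRACT_INPUTS_OUTPUTS is not shown in this PR, so this is only an illustration of the convention.

// Illustrative only: how a negative split axis is usually normalized before
// the converter runs (the real handling lives in the extraction macro).
int32_t axis = -1;  // example value read from the axis operand
int32_t rank = static_cast<int32_t>(input_operand->type.dimensions.count);
if (axis < 0) {
  axis += rank;  // axis = -1 on a 4-D tensor becomes axis = 3
}
// With axis guaranteed non-negative, mapping the NCHW-style index to TIM-VX's
// WHCN layout is a simple reversal:
uint32_t axis_timvx = rank - 1 - axis;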

@@ -130,12 +236,6 @@ int ConvertConv2dTranspose(Converter* converter, OpInfo* op, Scope* scope) {
auto fuse_code_operand = converter->AddConstantOperand(fuse_code_value);

// Output operand
auto output_name = op->Output("Output").front();
Collaborator:

Add unit tests for all of these.

Collaborator Author:

fixed in new commit

@hong19860320 (Collaborator)

There are too many issues; please go through them and fix them first, then I'll take another look. @yingshengBD

@hong19860320 hong19860320 changed the title [NNAdapter][TIM-VX]add tim-vx OP: fill_like,split,slice;fix tim-vx OP:conv2d_transpose [NNAdapter][TIM-VX]add fill_like,split,slice and fix conv2d_transpose Jan 3, 2022
@hong19860320 hong19860320 changed the title [NNAdapter][TIM-VX]add fill_like,split,slice and fix conv2d_transpose [NNAdapter][TIM-VX] Add fill_like,split,slice and fix conv2d_transpose Jan 3, 2022
RemoveOperand(model, value_operand);
hal::Operand* dummy_operand;
const std::vector<int32_t> dimensions({1});
if (IsUInt8AsymmPerLayerQuantType(input_operand->type.precision)) {
Collaborator:

The formula should be input_operand * zero_operand + value_operand; the zero_operand can be created with

auto zero_operand = AddOperand(model);

Collaborator Author:

fixed in new commit; added a fill_any_like unit test.
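To make the rewrite concrete: fill_like produces a tensor shaped like its input with every element set to a constant, which is exactly what input * 0 + value computes elementwise, so the pass can lower the op onto existing elementwise MUL/ADD kernels. The snippet below is a self-contained numeric illustration of that identity, not the actual ConvertFillLikeIntoEltwiseMul implementation.

#include <cassert>
#include <vector>

// fill_like(x, v) == x * 0 + v, elementwise: the multiply keeps x's shape while
// zeroing its values, and the add broadcasts the fill value.
std::vector<float> FillLikeViaEltwise(const std::vector<float>& x, float value) {
  std::vector<float> out(x.size());
  for (size_t i = 0; i < x.size(); ++i) {
    out[i] = x[i] * 0.0f + value;
  }
  return out;
}

int main() {
  const std::vector<float> x = {1.0f, -2.5f, 3.0f};
  for (float v : FillLikeViaEltwise(x, 7.0f)) {
    assert(v == 7.0f);  // every element equals the fill value
  }
  return 0;
}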

// Convert to tim-vx tensors and operators
auto input_tensor = converter->GetMappedTensor(input_operand);
if (!input_tensor) {
input_tensor = converter->ConvertOperand(input_operand);
}

@hong19860320 (Collaborator), Jan 12, 2022:

Please keep this in mind going forward: don't just add a blank line; if separation is needed, add a one-line comment instead.

#include "utility/utility.h"

namespace nnadapter {
static void InsertAdd(hal::Model* model,
Collaborator:

Let's leave it like this for now; I'll revise this part of the code later.

test=develop test=huawei_ascend_npu
test=develop test=huawei_ascend_npu
test=develop test=huawei_ascend_npu test=verisilicon_timvx
test=develop test=huawei_ascend_npu test=verisilicon_timvx
test=develop test=huawei_ascend_npu test=verisilicon_timvx
@hong19860320 (Collaborator) left a comment:

LGTM

@hong19860320 hong19860320 merged commit 7dfbd5a into PaddlePaddle:develop Jan 15, 2022
WeiLi233 pushed a commit to WeiLi233/Paddle-Lite that referenced this pull request Mar 29, 2022