what tool did you use to convert mxnet to caffe ? #67

Closed
GarrickLin opened this issue Feb 28, 2018 · 93 comments

@GarrickLin

No description provided.

@GarrickLin
Author

I guess you are using MXNet2Caffe, which hard-codes the BatchNorm eps as 0.001, but eps=2e-05 in MXNet. I will test it.

@nttstar
Collaborator

nttstar commented Feb 28, 2018

@GarrickLin Thank you. I didn't notice that. Please tell me the result once you finish testing.

@GarrickLin
Author

After changing eps from 0.001 to 2e-05 in the prototxt file, the LFW result improved to 99.50%. There is still some remaining difference.

@nttstar
Collaborator

nttstar commented Feb 28, 2018

Can you share the conversion scripts here?

@GarrickLin
Author

model.prototxt.txt

@nttstar
Collaborator

nttstar commented Feb 28, 2018

@GarrickLin I mean the modified MXNet2Caffe program; can you share it? At least the input layers and PReLU layers must be adjusted to match our pretrained model.

@GarrickLin
Author

GarrickLin/MXNet2Caffe/blob/master/prototxt_basic.py

Or you can just replace all occurrences of 0.001 with 2e-05 in model.prototxt.
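For reference, a minimal sketch of that replacement as a script (it assumes the converter wrote the literal text "eps: 0.001"; check the exact string in your generated prototxt):

# replace_eps.py - rewrite the hard-coded BatchNorm eps in the converted prototxt
with open('model.prototxt') as f:
    text = f.read()
text = text.replace('eps: 0.001', 'eps: 2e-05')   # MXNet's BatchNorm default
with open('model.prototxt', 'w') as f:
    f.write(text)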

@nttstar
Collaborator

nttstar commented Feb 28, 2018

@GarrickLin Thank you very much. Then other Caffe users can use it.

@GarrickLin
Author

Can you achieve the same result as the MXNet model after modifying eps?

@nttstar
Collaborator

nttstar commented Mar 1, 2018

@GarrickLin I do not have time to test the Caffe model... I think someone else may want to.

@GarrickLin
Author

GarrickLin/MXNet2Caffe/blob/master/mxnet2caffe.py

I have fixed the PReLU support, and now the results are exactly the same.

@nttstar
Collaborator

nttstar commented Mar 1, 2018

@GarrickLin Very promising, thank you! Which model did you test? Resnet50?

@nttstar
Collaborator

nttstar commented Mar 1, 2018

@GarrickLin Can I copy your code to this repo?

@GarrickLin
Author

Yes you can

@GarrickLin
Author

I have just tested with LResNet34E-IR

@anguoyang

Hi @GarrickLin @nttstar, thank you both for your great help.
I tried your tool to convert LResNet50, but it generated only the input layer for Caffe:

name: "mxnet-mdoel"
layer {
name: "data"
type: "Input"
top: "data"
input_param {
shape: { dim: 1 dim: 3 dim: 64 dim: 64 }
}
}

@nttstar
Collaborator

nttstar commented Mar 5, 2018

@GarrickLin Can you help?

@ysc703

ysc703 commented Mar 12, 2018

@GarrickLin @nttstar
I got the same file as anguoyang, and the message is as follows:
0, op:null , name:data -> data
1, op:_copy , name:id -> id
use shared weight -> data
{'bottom': ['data'],
u'inputs': [[0, 0, 0]],
u'name': u'id',
u'op': u'_copy',
'params': ['data'],
'share': True,
'top': u'id'}
Warning! Unknown mxnet op:_copy

Can you give more details about how to convert the insightface model from mxnet to caffe?
Thanks a lot!

@YoWhatever

@ysc703 Same problem!
@GarrickLin could you help?

@GarrickLin
Author

I am really busy nowadays. I'll take care of this as soon as I have time.

@myfighterforever

@YoWhatever @ysc703 @anguoyang I have met the same problem. Did you fix it?

@GarrickLin
Author

LResNet50.prototxt.txt
What you need to do is just skip those layers.

@ysc703

ysc703 commented Mar 22, 2018

@GarrickLin Thanks for the proto file and the recommendation.
It worked after I added some code in json2prototxt.py and prototxt_basic.py as follows:
In json2prototxt.py:
if str(node_i['op']) == '_copy' or str(node_i['op']) == '_minus_scalar' or \
   str(node_i['op']) == 'mul_scalar' or str(node_i['op']) == 'Dropout':
    continue

In prototxt_basic.py:
if info[attrstr]['act_type'] == 'prelu':
    ...
    txt_file.write(' type: "PReLU"\n')

Although the bottom names of the conv0 and pre_fc1 layers need to be modified, it works really well.
Thanks a lot!

@aitaoquanyi

@ysc703 Can you share your modified code? Thanks!

@ewrfcas

ewrfcas commented Mar 22, 2018

maybe this:
in "json2prototxt.py":
if str(node_i['op']) == '_copy' or str(node_i['op']) == '_minus_scalar' or str(node_i['op']) == 'mul_scalar' or str(node_i['op']) == '_mul_scalar' or str(node_i['op']) == 'Dropout' or (str(node_i['op']) == 'null' and str(node_i['name']) != 'data'): continue

in "prototxt_basic.py":
def LeakyReLU(txt_file, info):
if info[attrstr]['act_type'] == 'prelu':
txt_file.write('layer {\n')
txt_file.write(' bottom: "%s"\n' % info['bottom'][0])
txt_file.write(' top: "%s"\n' % info['top'])
txt_file.write(' name: "%s"\n' % info['top'])
txt_file.write(' type: "PReLU"\n')
txt_file.write('}\n')
txt_file.write('\n')
else:
raise Exception("unsupported Activation")

I am not sure whether "elu_param { alpha: 0.25 }" is required for PReLU. The bottom names of conv0 and pre_fc1 should be changed to "data" and "bn1", because some layers are removed.
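For reference, a minimal sketch of that manual edit as a script — the old bottom names "_mulscalar0" and "dropout0" below are placeholders, not necessarily what your converter emitted; copy the actual bottom names from your generated prototxt:

# fix_bottoms.py - repoint conv0 and pre_fc1 after the scalar/Dropout ops are skipped
fixes = {'bottom: "_mulscalar0"': 'bottom: "data"',   # conv0 now reads from the data layer
         'bottom: "dropout0"': 'bottom: "bn1"'}       # pre_fc1 now reads from the last BatchNorm
with open('model.prototxt') as f:
    text = f.read()
for old, new in fixes.items():
    text = text.replace(old, new)
with open('model.prototxt', 'w') as f:
    f.write(text)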

@class8hawk

@ewrfcas thanks

@GarrickLin
Author

GarrickLin commented Mar 22, 2018

MXNet2Caffe is updated now

@GarrickLin
Author

@ewrfcas Yes, you should manually modify the prototxt file.

@nwdxbx

nwdxbx commented Mar 27, 2018

@nttstar @GarrickLin When I use your MXNet2Caffe code to convert the MXNet LMobileNetE model to Caffe, it fails with: Check failed: status == CUDNN_STATUS_SUCCESS (4 vs. 0) CUDNN_STATUS_INTERNAL_ERROR. But when I convert the MXNet model-r50-am-lfw to Caffe, it works fine. Have you fixed this?

@GarrickLin
Author

@xubenxiang Sorry, I didn't try LMobileNetE.

@duweidai

I converted the MXNet ResNet-50 model to Caffe and ran check_results, and got identical results from both. Note that the Caffe prototxt does not include the preprocessing layers, so to get a correct check_results you have to revise the .py scripts a little.
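For reference, a minimal sketch of running the converted model with the input normalization this refers to — the (img - 127.5) / 128 scaling mentioned further down in the thread. The blob names "data" and "fc1", the RGB channel order, and the file names are assumptions; match them to your own prototxt and MXNet pipeline:

import numpy as np
import cv2
import caffe

net = caffe.Net('model.prototxt', 'model.caffemodel', caffe.TEST)
img = cv2.imread('aligned_face_112x112.jpg')        # aligned 112x112 face crop
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)          # MXNet side is usually fed RGB
img = (img.astype(np.float32) - 127.5) / 128.0      # same normalization as the MXNet side
img = img.transpose(2, 0, 1)[np.newaxis, :]         # HWC -> NCHW, add batch dim
net.blobs['data'].data[...] = img
net.forward()
feat = net.blobs['fc1'].data.copy()                 # 512-d embedding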

@yang0817manman

yang0817manman commented Nov 14, 2018

Hi @GarrickLin @nttstar, thank you for your sharing. I trained se-resnet50-arcloss on the ms1m dataset and then wanted to convert the MXNet model to a Caffe model, but json2prototxt.py does not work. Can you help me? The JSON file and the error are as follows:
model-symbol.json.tar.gz
The argument settings are:
"parser = argparse.ArgumentParser(description='Convert MXNet jason to Caffe prototxt')
parser.add_argument('--mx-json', type=str, default='model_mxnet/model-r50-softmax1e3/model-symbol.json')
parser.add_argument('--cf-prototxt', type=str, default='model_caffe/model-r50_deploy.prototxt')
args = parser.parse_args()"
the error is:
"/usr/bin/python2.7 /home/algorithm-5/MXNet2Caffe/json2prototxt.py
Warning! Unknown mxnet op:_copy
0, op:null , name:data -> data
1, op:_copy , name:id -> id
use shared weight -> data
Process finished with exit code 1"

@jimeffry

@yang0817manman
I have modified the code based on GarrickLin's version: https://github.com/jimeffry/MXNet2Caffe. This may help you.

@yang0817manman

yang0817manman commented Nov 19, 2018

Thanks for your reply. But when I run json2prototxt.py, many unknown layers appear. How should I modify the code? Can you help me? Thanks a lot. @jimeffry
"
Warning! Skip Unknown mxnet op:broadcast_mul
799, op:elemwise_add , name:_plus22 -> _plus22
804, op:BatchNorm , name:stage4_unit3_bn1 -> stage4_unit3_bn1
806, op:Convolution , name:stage4_unit3_conv1 -> stage4_unit3_conv1
811, op:BatchNorm , name:stage4_unit3_bn2 -> stage4_unit3_bn2
813, op:LeakyReLU , name:stage4_unit3_relu1 -> stage4_unit3_relu1
815, op:Convolution , name:stage4_unit3_conv2 -> stage4_unit3_conv2
820, op:BatchNorm , name:stage4_unit3_bn3 -> stage4_unit3_bn3
821, op:Pooling , name:stage4_unit3_se_pool1 -> stage4_unit3_se_pool1
824, op:Convolution , name:stage4_unit3_se_conv1 -> stage4_unit3_se_conv1
826, op:LeakyReLU , name:stage4_unit3_se_relu1 -> stage4_unit3_se_relu1
829, op:Convolution , name:stage4_unit3_se_conv2 -> stage4_unit3_se_conv2
830, op:Activation , name:stage4_unit3_se_sigmoid -> stage4_unit3_se_sigmoid
831, op:broadcast_mul , name:broadcast_mul23 -> broadcast_mul23
Warning! Skip Unknown mxnet op:broadcast_mul
832, op:elemwise_add , name:_plus23 -> _plus23
837, op:BatchNorm , name:bn1 -> bn1
841, op:FullyConnected , name:pre_fc1 -> pre_fc1
('fullconnect: ', u'512', ['pre_fc1_weight', 'pre_fc1_bias'])
846, op:BatchNorm , name:fc1 -> fc1
847, op:BlockGrad , name:blockgrad0 -> blockgrad0
Warning! Skip Unknown mxnet op:BlockGrad
848, op:L2Normalization , name:fc1n -> fc1n
Warning! Skip Unknown mxnet op:L2Normalization
851, op:L2Normalization , name:l2normalization0 -> l2normalization0
use shared weight -> fc7_weight
Warning! Skip Unknown mxnet op:L2Normalization
852, op:FullyConnected , name:fc7 -> fc7
('fullconnect: ', u'85164', [])
854, op:one_hot , name:one_hot0 -> one_hot0
use shared weight -> softmax_label
Warning! Skip Unknown mxnet op:one_hot
855, op:pick , name:pick0 -> pick0
use shared weight -> softmax_label
Warning! Skip Unknown mxnet op:pick
856, op:_div_scalar , name:_divscalar0 -> _divscalar0
Warning! Skip Unknown mxnet op:_div_scalar
858, op:Activation , name:activation0 -> activation0
860, op:elemwise_mul , name:_mul0 -> _mul0
Warning! Skip Unknown mxnet op:elemwise_mul
861, op:_rminus_scalar , name:_rminusscalar0 -> _rminusscalar0
Warning! Skip Unknown mxnet op:_rminus_scalar
862, op:sqrt , name:sqrt0 -> sqrt0
Warning! Skip Unknown mxnet op:sqrt
864, op:elemwise_sub , name:_minus0 -> _minus0
Warning! Skip Unknown mxnet op:elemwise_sub
867, op:where , name:where0 -> where0
Warning! Skip Unknown mxnet op:where
868, op:elemwise_sub , name:_minus1 -> _minus1
Warning! Skip Unknown mxnet op:elemwise_sub
869, op:expand_dims , name:expand_dims0 -> expand_dims0
Warning! Skip Unknown mxnet op:expand_dims
870, op:broadcast_mul , name:broadcast_mul24 -> broadcast_mul24
Warning! Skip Unknown mxnet op:broadcast_mul
871, op:elemwise_add , name:_plus24 -> _plus24
872, op:SoftmaxOutput , name:softmax -> softmax
873, op:SoftmaxActivation , name:softmaxactivation0 -> softmaxactivation0
Warning! Skip Unknown mxnet op:SoftmaxActivation
874, op:log , name:log0 -> log0
Warning! Skip Unknown mxnet op:log
875, op:one_hot , name:one_hot1 -> one_hot1
use shared weight -> softmax_label
Warning! Skip Unknown mxnet op:one_hot
876, op:elemwise_mul , name:_mul1 -> _mul1
Warning! Skip Unknown mxnet op:elemwise_mul
877, op:sum , name:sum0 -> sum0
Warning! Skip Unknown mxnet op:sum
878, op:_div_scalar , name:_divscalar1 -> _divscalar1
Warning! Skip Unknown mxnet op:_div_scalar
879, op:BlockGrad , name:blockgrad1 -> blockgrad1
Warning! Skip Unknown mxnet op:BlockGrad
"

@liu1004010308

I converted the MXNet ResNet-50 model to Caffe and ran check_results, and got identical results from both. Note that the Caffe prototxt does not include the preprocessing layers, so to get a correct check_results you have to revise the .py scripts a little.

@ddwktv Can you tell me how to modify the .py scripts? Thanks.

@liu1004010308

I converted the MXNet ResNet-50 model to Caffe and ran check_results, and got identical results from both. Note that the Caffe prototxt does not include the preprocessing layers, so to get a correct check_results you have to revise the .py scripts a little.

@ddwktv Can you tell me how to modify the .py scripts? Thanks.

I have finished it and the result is OK. You should change Caffe's input: nimg = (nimg - 127.5) / 128

@yang0817manman

I keep trying to convert the MXNet model into a Caffe model, but warnings like "Warning! Skip Unknown mxnet op" always appear. Can you tell me how to modify json2prototxt.py or mxnet2caffe.py? Thanks a lot. @liu1004010308

@liu1004010308

I keep trying to convert the MXNet model into a Caffe model, but warnings like "Warning! Skip Unknown mxnet op" always appear. Can you tell me how to modify json2prototxt.py or mxnet2caffe.py? Thanks a lot. @liu1004010308

https://blog.csdn.net/Wuzebiao2016/article/details/81735487
This blog post explains what to do. Of course it isn't the best reference, but it may be helpful for you.

@yang0817manman

@liu1004010308 Thanks. I have tried to follow this blog and modified the prototxt file several times, but it doesn't work.

@liu1004010308

@liu1004010308 Thanks. I have tried to follow this blog and modified the prototxt file several times, but it doesn't work.

I have finished it and the result is OK. You should change Caffe's input: nimg = (nimg - 127.5) / 128

@jiankang1991

Does anyone know how to convert the MobileFaceNet model from MXNet to Caffe? With the converted Caffe model, I also get the same feature output for different input tensors. How can this be solved? Thank you.

@77h2l

77h2l commented Nov 24, 2018

I converted the MXNet ResNet-50 model to Caffe and ran check_results, and got identical results from both. Note that the Caffe prototxt does not include the preprocessing layers, so to get a correct check_results you have to revise the .py scripts a little.

@ddwktv Can you tell me how to modify the .py scripts? Thanks.

I have finished it and the result is OK. You should change Caffe's input: nimg = (nimg - 127.5) / 128

Hi, did you modify the MXNet JSON file to run check_results.py, and how did you modify the prototxt file? Can you share? Thanks.

@ddfabbro

ddfabbro commented Nov 28, 2018

what tool did you use to convert mxnet to caffe ?

I used MMdnn to convert from MXNet to Caffe and it was very effortless. Their README explains how to get started; however, the dependencies can be pretty messy, so if it doesn't work, I suggest using Docker as follows:


  1. With Docker, you can pull an image of MMdnn and start a container with all dependencies installed
docker run -it -v /path/to/local/folder:/path/to/container/folder mmdnn/mmdnn:cpu.small

Important: You need to specify a shared folder between your local filesystem and the container with -v /path/to/local/folder:/path/to/container/folder. Also, /path/to/container/folder doesn't really matter, as Docker will create any folders that don't exist.

  2. Inside the container, I installed git
apt update && apt install git

and upgraded mmdnn package with

pip install -U git+https://github.com/Microsoft/MMdnn.git@master
  3. Finally, you should place the models inside the container through the shared folder and convert to Caffe using
mmconvert -sf mxnet -iw model-0000.params -in model-symbol.json --inputShape 3,112,112 -df caffe -o caffe_model -dw caffe_model.npy

@77h2l

77h2l commented Dec 29, 2018

I converted the MXNet ResNet-50 model to Caffe and ran check_results, and got identical results from both. Note that the Caffe prototxt does not include the preprocessing layers, so to get a correct check_results you have to revise the .py scripts a little.

@ddwktv Can you tell me how to modify the .py scripts? Thanks.

I have finished it and the result is OK. You should change Caffe's input: nimg = (nimg - 127.5) / 128

Hi, did you modify the MXNet JSON file to run check_results.py, and how did you modify the prototxt file? Can you share? Thanks.

I figured out that the version of Caffe matters. When using NVIDIA's Caffe, the result of the converted model seems incorrect, since NVIDIA's Caffe implements the BN and Scale layers together, so you'll find that the blobs mismatch. By the way, can anyone improve the tools so that people don't need to manually modify the prototxt file?

@meanmee

meanmee commented Mar 19, 2019

@ddfabbro It seems that using MMdnn to convert the model from MXNet to Caffe loses some precision. I tested it by extracting the feature of the same picture with MXNet and with the converted Caffe model; the similarity between the two features is 97.7%.
I also tested it on my own test set; the top-1 accuracy of the converted Caffe model is 1.5% lower than the MXNet model.
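For anyone reproducing this kind of comparison, a minimal sketch of the similarity check (the .npy file names are placeholders for features extracted from the same preprocessed image with each backend):

import numpy as np

def cosine_similarity(a, b):
    a = a.ravel().astype(np.float64)
    b = b.ravel().astype(np.float64)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

feat_mx = np.load('feat_mxnet.npy')   # feature from the MXNet model
feat_cf = np.load('feat_caffe.npy')   # feature from the converted Caffe model
print('cosine similarity: %.4f' % cosine_similarity(feat_mx, feat_cf))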

@EtheneXiang

@GarrickLin I used MXNet2Caffe to convert the MobileFaceNet layers from MXNet to Caffe. When I use the converted prototxt and caffemodel to extract features, I find that every picture's feature is the same. I don't know what the problem is. Does MXNet2Caffe have a problem when converting from MXNet to Caffe? The prototxt I converted is below.
model-0.prototxt.txt

@wsx276166228
I have met the same problem as you said; every picture's feature is the same.

@meanmee

meanmee commented Jun 21, 2019

MMdnn works for me with no precision decline.
But you have to modify something because of the minus and multiply operations at the top of the network.

@Laulian

Laulian commented Aug 19, 2019

I converted MobileFaceNet from MXNet to Caffe with MXNet2Caffe (https://github.com/GarrickLin/MXNet2Caffe), but the output features were totally different, even though the weights and features of some layers that I checked remained the same.
Finally, I found that the key point is the eps param in the BN layer. The default value in Caffe is 1.0e-3 but 1.0e-5 in MXNet; when I changed it, the feature difference became extremely small (they differ only after the sixth decimal place). MxNet2Caffe-mobilefacenet

@shiyuanyin

I converted the MXNet ResNet-50 model to Caffe and ran check_results, and got identical results from both. Note that the Caffe prototxt does not include the preprocessing layers, so to get a correct check_results you have to revise the .py scripts a little.

@ddwktv Can you tell me how to modify the .py scripts? Thanks.

I have finished it and the result is OK. You should change Caffe's input: nimg = (nimg - 127.5) / 128

@liu1004010308
Hi, I converted the MXNet ResNet-50, but the check results are not the same. Could you help me?
[image]

@shiyuanyin

I converted the MXNet ResNet-50 model to Caffe and ran check_results, and got identical results from both. Note that the Caffe prototxt does not include the preprocessing layers, so to get a correct check_results you have to revise the .py scripts a little.

@ddwktv
Hi, I converted the MXNet ResNet-50, but the check result is different, even after applying (img - 127.5) / 128. Could I ask for your help?

def compare_models(prefix_mxnet, prefix_caffe, size):
    netmx = PredictorMxNet(prefix_mxnet, 0, size)
    model_file = prefix_caffe + ".prototxt"
    pretrained_file = prefix_caffe + ".caffemodel"
    netcaffe = PredictorCaffe(model_file, pretrained_file, size)
    tensor = np.ones(size, dtype=np.float32)
    out_mx = netmx.forward(tensor)
    print out_mx
    netcaffe.forward((tensor - 127.5) * 0.0078125)
    out_caffe = netcaffe.blob_by_name("pre_fc1")
    print out_caffe.data
    # print("********************")
    # print softmax(out_caffe.data)
    # out_caffe = netcaffe.blob_by_name("fc2")
    # print out_caffe.data
    # # print softmax(out_caffe.data)
    print "done"

@shiyuanyin

@GarrickLin Hi, I converted the LResNet50 model to Caffe, and when I run check_results.py the fc1 output is totally different, even though I've done the preprocessing for Caffe. Any ideas? I found that the output of every layer after conv0 is wrong.

@marcosly
Hi, I also get different outputs between the converted Caffe ResNet-50-IR and the MXNet ResNet-50-IR, even after modifying the Caffe preprocessing. Have you solved it?

@Laulian

Laulian commented Aug 23, 2019

@shiyuanyin You can try this: https://github.com/Laulian/MxNet2Caffe-mobilefacenet. I just tested it with ResNet-50.

@qingchenwuhou

I achieved an accurate conversion from an MXNet model to a Caffe model with MMdnn.
To ensure the same preprocessing before input into both networks, in the verification experiment I used the MXNet preprocessing, converted the mx.NDArray to numpy, and then fed the 224x224 numpy array into the Caffe network. This gives accurate results.
The same preprocessing is important in the check experiment; I had previously used the Caffe preprocessing by mistake and was confused by the wrong results. You will find that the MXNet preprocessing contains steps (subtract mean, divide by std, resizing done with MXNet's OpenCV-backed imresize rather than cv2, crop) that differ from the Caffe preprocessing, so in the Caffe prediction code you should execute the same preprocessing (a rough sketch follows the commands below).
The command I use in mmdnn is:

mmtoir -f mxnet -n downed_mxnet_model/resnext-101-symbol.json -w downed_mxnet_model/resnext-101-0000.params -d converted_caffe_model/resnext101 --inputShape 3,224,224
mmtocode -f caffe -n converted_caffe_model/resnext101.pb -w converted_caffe_model/resnext101.npy -d converted_caffe_model/resnext101_caffe_converted.py -dw converted_caffe_model/resnext101_caffe_converted.npy
mmtomodel -f caffe -in converted_caffe_model/resnext101_caffe_converted.py -iw converted_caffe_model/resnext101_caffe_converted.npy -o converted_caffe_model/resnext101_final
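A rough sketch of Caffe-side preprocessing that mirrors a typical MXNet ImageNet pipeline — the 0.485/0.456/0.406 mean and 0.229/0.224/0.225 std are the usual ImageNet values and are assumptions here; verify them against the exact MXNet preprocessing you used:

import numpy as np
import cv2

def preprocess_for_caffe(path, size=224):
    # Resize the short side to 256, center-crop to size x size, normalize with
    # ImageNet mean/std, and return an NCHW float32 batch.
    img = cv2.imread(path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32)
    h, w = img.shape[:2]
    scale = 256.0 / min(h, w)
    img = cv2.resize(img, (int(round(w * scale)), int(round(h * scale))))
    h, w = img.shape[:2]
    y0, x0 = (h - size) // 2, (w - size) // 2
    img = img[y0:y0 + size, x0:x0 + size]
    img = (img / 255.0 - np.array([0.485, 0.456, 0.406])) / np.array([0.229, 0.224, 0.225])
    return img.transpose(2, 0, 1)[np.newaxis].astype(np.float32)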

@shiyuanyin

@Laulian
Thank you. Yes, with the author's ResNet-50 model the conversion is right and the outputs are the same, but with my own model fine-tuned on a different dataset the outputs are not the same. I compared the two prototxt files converted from MXNet and they are the same; the caffemodels, however, show some differences, and I don't know why.

@Laulian

Laulian commented Aug 24, 2019

@shiyuanyin Check the output of the last layer of each bottleneck in ResNet-50, that is, the Eltwise layer, and you will figure out where the difference is generated.
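For reference, a minimal sketch of that kind of layer-by-layer check on the Caffe side (it assumes the converter keeps the MXNet _plus* names for the Eltwise outputs and that preprocessed_input.npy holds the same tensor fed to the MXNet model):

import numpy as np
import caffe

net = caffe.Net('model.prototxt', 'model.caffemodel', caffe.TEST)
net.blobs['data'].data[...] = np.load('preprocessed_input.npy')  # same tensor as the MXNet side
net.forward()

# Print statistics for each Eltwise output so it can be diffed against the
# corresponding MXNet internal output, bottleneck by bottleneck.
for name, blob in net.blobs.items():
    if name.startswith('_plus'):
        print('%s  mean=%.6f  std=%.6f' % (name, blob.data.mean(), blob.data.std()))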

@shiyuanyin

@Laulian Thank you for your reply. The conversion of my model is OK now, but I'm not sure whether the earlier problem came from the weight decay or from how model-0000 was saved: with wd=0.0005 the resulting model converts fine, and before saving I used the model that includes the fc layer to save a model-0000 without the fc layer.

@xiakj

xiakj commented Nov 14, 2019

@jimeffry I also used MXNet2Caffe for the model conversion, but the recognition results when running on Caffe are way off. What could be the cause? Thanks.

@xiakj

xiakj commented Nov 23, 2019

@jeehyun100 @jimeffry I tested the ResNet-50 caffemodel on the LFW face dataset, and the results look abnormal: same-face pairs have higher cosine similarity, around 0.8-0.9, but different-face pairs also have high cosine similarity, around 0.6. I don't think this is correct; different-face pairs should not get such high similarity. Can anyone explain this? Thanks.
