
TypeError: -0.034149169921875 has type float, but expected one of: int, long #15326

Open
wmlba opened this issue Jun 22, 2019 · 2 comments


wmlba commented Jun 22, 2019

Note: Providing complete information in the most concise form is the best way to get help. This issue template serves as a checklist of the essential information needed for most technical issues and bug reports. For non-technical issues and feature requests, feel free to present the information in whatever form you believe is best.

For Q & A and discussion, please start a discussion thread at https://discuss.mxnet.io

Description

I am trying to convert a very simple image classification model from MXNet to ONNX and I get the error below:

TypeError Traceback (most recent call last)
<ipython-input-...> in <module>()
----> 1 converted_model_path = onnx_mxnet.export_model(sym, params, [input_shape], np.float32, onnx_file)

~/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/contrib/onnx/mx2onnx/export_model.py in export_model(sym, params, input_shape, input_type, onnx_file_path, verbose)
81 onnx_graph = converter.create_onnx_graph_proto(sym_obj, params_obj, input_shape,
82 mapping.NP_TYPE_TO_TENSOR_TYPE[data_format],
---> 83 verbose=verbose)
84 elif isinstance(sym, symbol.Symbol) and isinstance(params, dict):
85 onnx_graph = converter.create_onnx_graph_proto(sym, params, input_shape,

~/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/contrib/onnx/mx2onnx/export_onnx.py in create_onnx_graph_proto(self, sym, params, in_shape, in_type, verbose)
251 initializer=initializer,
252 index_lookup=index_lookup,
--> 253 idx=idx
254 )
255

~/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/contrib/onnx/mx2onnx/export_onnx.py in convert_layer(node, **kwargs)
90 raise AttributeError("No conversion function registered for op type %s yet." % op)
91 convert_func = MXNetGraph.registry_[op]
---> 92 return convert_func(node, **kwargs)
93
94 @staticmethod

~/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/contrib/onnx/mx2onnx/_op_translations.py in convert_weights_and_inputs(node, **kwargs)
180 dims=dims,
181 vals=np_arr.flatten().tolist(),
--> 182 raw=False,
183 )
184 )

~/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/onnx/helper.py in make_tensor(name, data_type, dims, vals, raw)
171 field = mapping.STORAGE_TENSOR_TYPE_TO_FIELD[
172 mapping.TENSOR_TYPE_TO_STORAGE_TENSOR_TYPE[data_type]]
--> 173 getattr(tensor, field).extend(vals)
174
175 tensor.dims.extend(dims)

TypeError: -0.034149169921875 has type float, but expected one of: int, long
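
The failure itself comes from onnx.helper.make_tensor: the values are appended to the protobuf field selected by data_type, so float values paired with an integer tensor type are rejected. A minimal standalone sketch, assuming only a stock onnx install, that reproduces the same class of error in isolation:

# Sketch: make_tensor() writes vals into the field chosen by data_type,
# so float values combined with an integer tensor type trigger the same TypeError.
import numpy as np
from onnx import TensorProto, helper

vals = np.array([-0.034149169921875], dtype=np.float32).flatten().tolist()

# Works: float values stored in a FLOAT tensor.
helper.make_tensor("ok", TensorProto.FLOAT, [1], vals, raw=False)

# Fails with "has type float, but expected one of: int, long"
# (exact wording depends on the installed protobuf version).
helper.make_tensor("bad", TensorProto.INT64, [1], vals, raw=False)

This suggests one of the exported parameters is being mapped to an integer ONNX tensor type even though its values are floats.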

Environment info (Required)

MXNet 1.4
ONNX 1.11.0
Linux env.

What to do:

  1. Download the diagnosis script from https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/diagnose.py
  2. Run the script using python diagnose.py and paste its output here.

Arch : ('64bit', '')
------------Pip Info-----------
Version : 10.0.1
Directory : /home/ec2-user/anaconda3/envs/amazonei_mxnet_p36/lib/python3.6/site-packages/pip
----------MXNet Info-----------
Version : 1.4.0
Directory : /home/ec2-user/anaconda3/envs/amazonei_mxnet_p36/lib/python3.6/site-packages/mxnet
Commit Hash : b5811362951
----------System Info----------
Platform : Linux-4.14.123-86.109.amzn1.x86_64-x86_64-with-glibc2.9
system : Linux
node : ip-172-16-191-155
release : 4.14.123-86.109.amzn1.x86_64
version : #1 SMP Mon Jun 10 19:44:53 UTC 2019
----------Hardware Info----------
machine : x86_64
processor : x86_64
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 1
Core(s) per socket: 2
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
Stepping: 1
CPU MHz: 2299.814
BogoMIPS: 4600.06
Hypervisor vendor: Xen
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 46080K
NUMA node0 CPU(s): 0,1
----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0024 sec, LOAD: 0.4924 sec.
Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.1753 sec, LOAD: 0.4681 sec.
Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.2536 sec, LOAD: 0.4653 sec.
Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0282 sec, LOAD: 0.5889 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0055 sec, LOAD: 0.1260 sec.
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0035 sec, LOAD: 0.0306 sec.

Package used (Python/R/Scala/Julia):
(I'm using Python)

Build info (Required if built from source)

Compiler (gcc/clang/mingw/visual studio):
Jupyter Notebook

MXNet commit hash:
(Paste the output of git rev-parse HEAD here.)

Build config:
(Paste the content of config.mk, or the build command.)

Error Message:

(Paste the complete error message, including stack trace.)

Minimum reproducible example

(If you are using your own code, please provide a short script that reproduces the error. Otherwise, please provide link to the existing example.)

I trained a model by following the notebook below:
https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/imageclassification_caltech/Image-classification-transfer-learning-highlevel.ipynb

Steps to reproduce

(Paste the commands you ran that produced the error.)

  1. Train a model in the notebook above.
  2. Run the code below on the model artifacts
    converted_model_path = onnx_mxnet.export_model(sym, params, [input_shape], np.float32, onnx_file)
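
For completeness, a self-contained version of the conversion step might look like the sketch below; the file names, input shape, and the dtype-inspection loop are assumptions for illustration, not the reporter's actual artifacts:

# Sketch of the export call, assuming the trained model was saved as
# model-symbol.json / model-0000.params and takes 3x224x224 images.
import numpy as np
import mxnet as mx
from mxnet.contrib import onnx as onnx_mxnet

sym = "model-symbol.json"        # path to the saved symbol file (assumed name)
params = "model-0000.params"     # path to the saved params file (assumed name)
input_shape = (1, 3, 224, 224)   # batch x channels x height x width (assumed)
onnx_file = "model.onnx"

# Optional: list the parameter dtypes first, since the traceback suggests one
# parameter is being serialized into an integer ONNX tensor field.
for name, arr in mx.nd.load(params).items():
    print(name, arr.dtype)

converted_model_path = onnx_mxnet.export_model(
    sym, params, [input_shape], np.float32, onnx_file
)
print("ONNX model written to", converted_model_path)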

What have you tried to solve it?

leleamol (Contributor) commented:

@mxnet-label-bot add [Bug, ONNX]

vandanavk (Contributor) commented:

@wmlba Could you share the symbol and params file of the trained model?
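
For reference, the two requested files are the ones a standard MXNet checkpoint writes; a minimal sketch of how they are typically produced (the tiny network, prefix, and epoch here are placeholders, not the reporter's model):

import mxnet as mx

# A tiny placeholder network just to make the sketch runnable end-to-end.
data = mx.sym.Variable("data")
sym = mx.sym.FullyConnected(data, num_hidden=2, name="fc1")
arg_params = {"fc1_weight": mx.nd.ones((2, 4)), "fc1_bias": mx.nd.zeros((2,))}
aux_params = {}

# Writes model-symbol.json and model-0000.params, the two files export_model() reads.
mx.model.save_checkpoint("model", 0, sym, arg_params, aux_params)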
