
tf->tvm cuda mobilenet fail #1

Closed

VertexC opened this issue Feb 2, 2021 · 3 comments
VertexC commented Feb 2, 2021

python infer_perf/tf2tvm.py mobilenet --backend=cuda --size=256

Traceback (most recent call last):
  File "infer_perf/tf2tvm.py", line 69, in <module>
    duration = util.simple_bench(runner, args.size)
  File "/scratch/dl-infer-perf/infer_perf/util.py", line 7, in simple_bench
    runner(data_size)
  File "infer_perf/tf2tvm.py", line 48, in runner
    module.run()
  File "/scratch/tvm/python/tvm/contrib/graph_runtime.py", line 206, in run
    self._run()
  File "/scratch/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (3) /scratch/tvm/build/libtvm.so(TVMFuncCall+0x5f) [0x7f6b518cc54f]
  [bt] (2) /scratch/tvm/build/libtvm.so(std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::detail::PackFuncVoidAddr_<4, tvm::runtime::CUDAWrappedFunc>(tvm::runtime::CUDAWrappedFunc, std::vector<tvm::runtime::detail::ArgConvertCode, std::allocator<tvm::runtime::detail::ArgConvertCode> > const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0xb6) [0x7f6b5197f5d6]
  [bt] (1) /scratch/tvm/build/libtvm.so(tvm::runtime::CUDAWrappedFunc::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*, void**) const+0x7f7) [0x7f6b5197f4c7]
  [bt] (0) /scratch/tvm/build/libtvm.so(+0x1af35e2) [0x7f6b5197b5e2]
  File "/scratch/tvm/src/runtime/cuda/cuda_module.cc", line 105
  File "/scratch/tvm/src/runtime/library_module.cc", line 78
TVMError:
---------------------------------------------------------------
An internal invariant was violated during the execution of TVM.
Please read TVM's error reporting guidelines.
More details can be found here: https://discuss.tvm.ai/t/error-reporting/7793.
---------------------------------------------------------------

  Check failed: ret == 0 (-1 vs. 0) : CUDAError: cuModuleLoadData(&(module_[device_id]), data_.c_str()) failed with error: CUDA_ERROR_INVALID_PTX

VertexC commented Feb 2, 2021

The same benchmark runs okay with llvm as the backend.


VertexC commented Feb 2, 2021

Seems to be a known shape-related issue in TVM: apache/tvm#1027

@VertexC VertexC closed this as completed Feb 9, 2021
@VertexC VertexC reopened this Feb 9, 2021

VertexC commented Feb 9, 2021

Solved by running each benchmark inside its own process (Python multiprocessing).
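A minimal sketch of that workaround, using only the standard library. Isolating each benchmark in a fresh process means any CUDA context or compiled-module state from one run cannot leak into the next. The names `run_in_subprocess` and `fake_benchmark` are illustrative, not taken from the repo:

```python
import multiprocessing as mp


def _worker(queue, fn, args):
    # Run the benchmark inside the child process and hand the
    # result back to the parent through the queue.
    queue.put(fn(*args))


def run_in_subprocess(fn, *args):
    """Run fn(*args) in a fresh process and return its result.

    Each call gets a brand-new interpreter state, so GPU driver
    state cannot carry over between benchmark runs.
    """
    queue = mp.Queue()
    proc = mp.Process(target=_worker, args=(queue, fn, args))
    proc.start()
    result = queue.get()
    proc.join()
    return result


def fake_benchmark(size):
    # Stand-in for something like util.simple_bench(runner, size).
    return size * 2


if __name__ == "__main__":
    print(run_in_subprocess(fake_benchmark, 128))
```

Note that `_worker` and the benchmark function are defined at module level so they remain picklable under the `spawn` start method (the default on macOS and Windows).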

@VertexC VertexC closed this as completed Feb 9, 2021