python infer_perf/tf2tvm.py mobilenet --backend=cuda --size=256
Traceback (most recent call last):
  File "infer_perf/tf2tvm.py", line 69, in <module>
    duration = util.simple_bench(runner, args.size)
  File "/scratch/dl-infer-perf/infer_perf/util.py", line 7, in simple_bench
    runner(data_size)
  File "infer_perf/tf2tvm.py", line 48, in runner
    module.run()
  File "/scratch/tvm/python/tvm/contrib/graph_runtime.py", line 206, in run
    self._run()
  File "/scratch/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (3) /scratch/tvm/build/libtvm.so(TVMFuncCall+0x5f) [0x7f6b518cc54f]
  [bt] (2) /scratch/tvm/build/libtvm.so(std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::detail::PackFuncVoidAddr_<4, tvm::runtime::CUDAWrappedFunc>(tvm::runtime::CUDAWrappedFunc, std::vector<tvm::runtime::detail::ArgConvertCode, std::allocator<tvm::runtime::detail::ArgConvertCode> > const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0xb6) [0x7f6b5197f5d6]
  [bt] (1) /scratch/tvm/build/libtvm.so(tvm::runtime::CUDAWrappedFunc::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*, void**) const+0x7f7) [0x7f6b5197f4c7]
  [bt] (0) /scratch/tvm/build/libtvm.so(+0x1af35e2) [0x7f6b5197b5e2]
  File "/scratch/tvm/src/runtime/cuda/cuda_module.cc", line 105
  File "/scratch/tvm/src/runtime/library_module.cc", line 78
TVMError: ---------------------------------------------------------------
An internal invariant was violated during the execution of TVM.
Please read TVM's error reporting guidelines.
More details can be found here: https://discuss.tvm.ai/t/error-reporting/7793.
---------------------------------------------------------------
  Check failed: ret == 0 (-1 vs. 0) : CUDAError: cuModuleLoadData(&(module_[device_id]), data_.c_str()) failed with error: CUDA_ERROR_INVALID_PTX
Works fine with llvm as the backend.
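That is, the same invocation with only the backend flag swapped (assuming the --backend flag shown above accepts llvm):

python infer_perf/tf2tvm.py mobilenet --backend=llvm --size=256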
This seems to be a known shape-related issue in TVM: apache/tvm#1027
Solved by running each benchmark inside its own process (Python multiprocessing), so each run gets a fresh CUDA context.
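For anyone hitting the same thing, a minimal sketch of that workaround, assuming a placeholder benchmark body (the function names and the spawn context below are illustrative, not taken from infer_perf):

import multiprocessing as mp

def benchmark(backend, size, queue):
    # Placeholder for the real work: in infer_perf this would compile the
    # model with TVM for the given backend and time module.run(). All
    # TVM/CUDA state is created and destroyed inside this child process.
    duration = 0.0  # replace with the actual timing logic
    queue.put(duration)

def bench_in_process(backend, size):
    # "spawn" starts a clean interpreter, so no CUDA state is inherited
    # from the parent via fork.
    ctx = mp.get_context("spawn")
    queue = ctx.Queue()
    p = ctx.Process(target=benchmark, args=(backend, size, queue))
    p.start()
    p.join()
    return queue.get()

if __name__ == "__main__":
    for backend in ["llvm", "cuda"]:
        print(backend, bench_in_process(backend, 256))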