
Add ONNXRuntime Python interface #177

Closed
zhiqwang opened this issue Sep 27, 2021 · 0 comments · Fixed by #178
Labels
deployment Inference acceleration for production enhancement New feature or request

Comments

@zhiqwang
Owner

zhiqwang commented Sep 27, 2021

🚀 The feature

Add ONNXRuntime Python interface.

Motivation, pitch

My current plan is to put the core code for ORT inference into the directory yolort/runtime, and to name the Python script ort_modeling.py (feel free to suggest a better name if you have one). We plan to add more Python inference backends to this directory in the future, and then add a CLI tool named run_model.py (just as an example) to the tools directory to cover common use cases.
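To make the proposal concrete, here is a minimal sketch of what yolort/runtime/ort_modeling.py could look like. The class and function names (PredictorORT, preprocess) are assumptions for illustration, not the actual yolort API; only the onnxruntime calls (InferenceSession, get_inputs, run) are the real library interface.

```python
# Hypothetical sketch of yolort/runtime/ort_modeling.py.
# PredictorORT and preprocess are illustrative names, not yolort's API.
import numpy as np


class PredictorORT:
    """Minimal ONNX Runtime inference wrapper (sketch)."""

    def __init__(self, checkpoint_path: str):
        # Import lazily so the rest of the module can be used
        # without onnxruntime installed.
        import onnxruntime as ort

        providers = ["CPUExecutionProvider"]
        self._session = ort.InferenceSession(checkpoint_path, providers=providers)
        self._input_name = self._session.get_inputs()[0].name

    def run(self, blob: np.ndarray):
        # ONNX Runtime takes a dict mapping input names to numpy arrays.
        return self._session.run(None, {self._input_name: blob})


def preprocess(image: np.ndarray) -> np.ndarray:
    """Convert an HWC uint8 image into the NCHW float32 tensor ORT expects."""
    blob = image.astype(np.float32) / 255.0   # scale pixel values to [0, 1]
    blob = np.transpose(blob, (2, 0, 1))      # HWC -> CHW
    return np.expand_dims(blob, axis=0)       # add batch dimension -> NCHW
```

The CLI tool in tools/run_model.py would then only need to parse arguments, call preprocess, and feed the result to PredictorORT.run.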

cc @itsnine

Originally posted by @zhiqwang in #176 (comment)

@zhiqwang zhiqwang added enhancement New feature or request deployment Inference acceleration for production labels Sep 27, 2021
@zhiqwang zhiqwang assigned zhiqwang and unassigned zhiqwang Sep 28, 2021