Model server inference using Python Mediapipe #2768

Open
sayanmutd opened this issue Oct 26, 2024 · 3 comments
Labels
help wanted Extra attention is needed

Comments

@sayanmutd

How can I loop over a set of detection results to run a classifier over the detected regions using a MediaPipe graph? It would be helpful if I could get a graph example using the OpenVINO inference calculator.

@sayanmutd sayanmutd added the bug Something isn't working label Oct 26, 2024
@atobiszei atobiszei added help wanted Extra attention is needed and removed bug Something isn't working labels Oct 28, 2024
@mzegla
Collaborator

mzegla commented Oct 28, 2024

Could you share more details? You mention Python in the title. Do you mean the Python mediapipe package, or do you want to execute Python code in a MediaPipe graph in OVMS?

I assume the flow you want to achieve would look like:

  1. Put an image on the input of the graph
  2. Run it through a detection model
  3. Extract the detected regions and run them through a classifier
  4. Return some kind of combined results

Is that right?

@sayanmutd
Author

@mzegla Thanks for your prompt response. I want to execute Python code along with the MediaPipe graph. I want to iterate through an OVMS_PY_TENSOR containing detected regions inside the MediaPipe graph in order to run classification over those detected regions.

@mzegla
Collaborator

mzegla commented Oct 29, 2024

Well, Python execution is enabled via a separate node in the MediaPipe graph (there is a dedicated calculator for that).
Check out: https://github.com/openvinotoolkit/model_server/blob/main/docs/python_support/quickstart.md
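For a rough picture, a graph with a single Python node looks more or less like this (the handler path, node name and stream names below are placeholders I made up, not something from your setup; the quickstart above has the authoritative example):

```
input_stream: "OVMS_PY_TENSOR:image"
output_stream: "OVMS_PY_TENSOR:result"

node {
  name: "python_node"
  calculator: "PythonExecutorCalculator"
  input_side_packet: "PYTHON_NODE_RESOURCES:py"
  input_stream: "INPUT:image"
  output_stream: "OUTPUT:result"
  node_options: {
    [type.googleapis.com/mediapipe.PythonExecutorCalculatorOptions]: {
      handler_path: "/workspace/model.py"
    }
  }
}
```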

I suppose the easiest solution would be to have your whole processing done in Python - so you would have a single node that loads your detection and classification models in the initialize method of OvmsPythonModel and then runs detection, extraction, and classification in the execute method.
See: https://github.com/openvinotoolkit/model_server/blob/main/docs/python_support/reference.md
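A minimal sketch of such a handler could look like the snippet below. Treat it as an illustration only: the model paths, the detection output layout, and the pre/post-processing are assumptions you would need to adapt to your models.

```python
import numpy as np
import openvino as ov
from pyovms import Tensor

class OvmsPythonModel:
    def initialize(self, kwargs: dict):
        core = ov.Core()
        # Hypothetical model locations - adjust to your workspace layout.
        self.detector = core.compile_model("/workspace/detection.xml", "CPU")
        self.classifier = core.compile_model("/workspace/classification.xml", "CPU")

    def execute(self, inputs: list):
        # Assumes a single input carrying an already decoded image
        # (pyovms.Tensor supports the buffer protocol, so numpy can wrap it;
        # reshaping/dtype handling may be needed depending on the client).
        image = np.asarray(inputs[0])
        h, w = image.shape[:2]

        # 1. Detection
        detections = self.detector(np.expand_dims(image, 0))[0]

        # 2./3. Extract the detected regions and classify each of them.
        # Assumes the common [image_id, label, conf, x_min, y_min, x_max, y_max] layout.
        labels = []
        for det in detections.reshape(-1, 7):
            if det[2] < 0.5:
                continue
            x_min, y_min, x_max, y_max = (det[3:7] * [w, h, w, h]).astype(int)
            crop = image[y_min:y_max, x_min:x_max]
            # Resize/normalize the crop to whatever the classifier expects (omitted here).
            scores = self.classifier(np.expand_dims(crop, 0))[0]
            labels.append(int(np.argmax(scores)))

        # 4. Return the combined results as a single output tensor.
        return [Tensor("labels", np.array(labels, dtype=np.int64))]
```

This keeps the whole detect -> crop -> classify loop in a single execute call, at the cost of running inference through the openvino package inside the node rather than through OVMS-managed servables.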

Note that you will likely need to extend the Docker image with layers containing the Python packages you need, like numpy, opencv, etc.
(openvino is already available) - https://github.com/openvinotoolkit/model_server/blob/main/docs/python_support/reference.md#building-docker-image
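Roughly along these lines (the base image tag and the user switch are assumptions on my side; the linked section describes the exact, supported way to build the image):

```dockerfile
FROM openvino/model_server:latest
USER root
RUN python3 -m pip install --no-cache-dir numpy opencv-python-headless
USER ovms
```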

The other solution is to have multiple nodes: detection, extraction and classification, where only the extraction is done in Python and detection/classification are executed via C++ based calculators. This approach also requires converter nodes between the nodes mentioned earlier.

See CLIP demo for reference:
https://github.com/openvinotoolkit/model_server/tree/main/demos/python_demos/clip_image_classification
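In abbreviated form, the middle of such a graph could be chained roughly as below; calculator options, session nodes and the exact stream tags are omitted or assumed here, so use the CLIP demo as the source of truth:

```
node {
  calculator: "OpenVINOInferenceCalculator"           # detection (C++ based)
  input_side_packet: "SESSION:detection_session"
  input_stream: "OVTENSOR:image"
  output_stream: "OVTENSOR:detections"
}
node {
  calculator: "PyTensorOvTensorConverterCalculator"   # OVTENSOR -> OVMS_PY_TENSOR
  input_stream: "OVTENSOR:detections"
  output_stream: "OVMS_PY_TENSOR:detections_py"
}
node {
  calculator: "PythonExecutorCalculator"              # region extraction in Python
  input_side_packet: "PYTHON_NODE_RESOURCES:py"
  input_stream: "INPUT:detections_py"
  output_stream: "OUTPUT:crops_py"
}
# ...followed by another converter node and a second OpenVINOInferenceCalculator
# running the classification model.
```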
