Prediction API should define a dedicated image type #1478
Comments
For your use cases and models, can you illustrate what steps you are running in the Python wrapper on the binData? I think there are two considerations:
@cliveseldon See above. I gave an example on what we do to convert binData to an image. From there you can do:
#2 was my focus, since #1 can be accomplished in a number of different ways (including as a file attachment).
I think there are too many variants to cover with a single solution.
Then why not offer an API to turn X into an image as a convenience function?
Happy to accept a PR here. My concern is that there are many image libraries and many image types, so you would end up with a very long list.
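To make the "very long list" concern concrete, here is a minimal sketch of what such a convenience function might look like (the helper name and the format table are hypothetical, not part of Seldon Core): even just recognizing the container format from magic bytes needs one entry per format, before any decoder library gets involved.

```python
import base64

# Hypothetical convenience helper (NOT part of Seldon Core): sniff the
# image format from the decoded binData's magic bytes, then a wrapper
# could dispatch to whichever decoder the model environment provides
# (PIL, OpenCV, ...). Each supported format needs its own entry here.
_MAGIC = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"\xff\xd8\xff": "jpeg",
    b"GIF87a": "gif",
    b"GIF89a": "gif",
    b"BM": "bmp",
}

def sniff_image_format(data: bytes) -> str:
    """Return a best-effort format name for raw image bytes."""
    for magic, fmt in _MAGIC.items():
        if data.startswith(magic):
            return fmt
    return "unknown"

# Example: a PNG payload as it would arrive after base64-decoding binData.
png_bytes = base64.b64decode(base64.b64encode(b"\x89PNG\r\n\x1a\n" + b"\x00" * 16))
print(sniff_image_format(png_bytes))  # -> png
```

The dispatch step (format name to actual decoder) is where the list of libraries and formats grows open-ended, which is the concern raised above.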
Currently the message format of Seldon Core is defined by the protobuf spec listed here:
https://docs.seldon.io/projects/seldon-core/en/latest/reference/apis/prediction.html
Models that do a lot of image processing (e.g. object segmentation) are forced to use bytes binData by base64 encoding an image before sending a POST.
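The client side of that flow can be sketched as follows, assuming the JSON REST form of the SeldonMessage with a `binData` field as defined in the prediction protobuf spec (the image bytes here are a stand-in, not real JPEG data):

```python
import base64
import json

# Client-side sketch: read raw image bytes and base64-encode them into
# the binData field of a SeldonMessage-style JSON payload before POSTing.
image_bytes = b"\xff\xd8\xff\xe0" + b"\x00" * 8  # stand-in for real JPEG bytes

payload = {"binData": base64.b64encode(image_bytes).decode("ascii")}
body = json.dumps(payload)

# Round-trip check: decoding the field recovers the original bytes,
# which is exactly what the model wrapper has to undo on arrival.
assert base64.b64decode(json.loads(body)["binData"]) == image_bytes
```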
That in turn means every single Seldon Core model has to include code to convert the passed-in byte array X into an image, like so (e.g. in Python, with an RGB image as input):
```python
import io
from PIL import Image

img = Image.open(io.BytesIO(X)).convert('RGB')
```
Instead, it would be nice if the protobuf format defined some kind of dedicated imgData type, which would act like binData but would be deserialized into an image (e.g. a numpy array in Python) by the time it arrives at the model's predict() function, so the model writer can use it immediately instead of converting bytes to images every single time as above.
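One way the proposal could look at the protobuf level, purely as an illustration (this message and its field names do not exist in the current prediction.proto spec):

```proto
// Hypothetical addition to the prediction.proto spec -- illustrative
// only, not part of Seldon Core. The wrapper would decode this into a
// native image type (e.g. a numpy array) before calling predict().
message ImageData {
  bytes data = 1;      // raw encoded image (PNG, JPEG, ...)
  string format = 2;   // e.g. "png" or "jpeg", so the wrapper knows the decoder
  int32 height = 3;    // optional decoded dimensions
  int32 width = 4;
  int32 channels = 5;  // e.g. 3 for RGB
}
```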
I realize this is harder than it sounds, since recognizing different image formats and serializing them on the wire has its own issues.
It just seems that deserializing image data at the model node level is somewhat awkward, given how common image data is as input.