Segment Anything Fast example #2802
Conversation
Could you provide a notebook example, as we decided in the meeting?
        return images

    def inference(self, data):
        return self.mask_generator.generate(data[0])
data is a batch of requests (i.e. a batch of images), and data[0] is the first image. So this does not process the whole batch of requests. Is my understanding correct?
Yes, currently SamAutomaticMaskGenerator doesn't support batching. We have brought this up internally.
In this case I think it would be better to call the generator batch_size times to process the whole batch or assert with an error if batch_size != 1?
Added an assert
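For reference, the guard might look roughly like this (a sketch of the assumed change, not the exact diff in this PR):

    def inference(self, data):
        # SamAutomaticMaskGenerator does not support batching, so only accept a single request
        assert len(data) == 1, "This handler currently supports batch_size=1 only"
        return self.mask_generator.generate(data[0])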
Left some comments. The most important thing is that the model-config.yaml is missing. Otherwise it's already in good shape.
        # If the image is sent as a bytearray
        if isinstance(image, (bytearray, bytes)):
            image = Image.open(io.BytesIO(image))
Could we switch this to use torchvision.io.decode_image to leverage the GPU for decoding instead?
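A rough sketch of what that torchvision-based decoding could look like (hypothetical, not code from this PR; the helper name is made up):

import torch
from torchvision.io import decode_image

def decode_request_image(payload):
    # decode_image expects a 1-D uint8 tensor holding the encoded file bytes
    # and returns a CHW uint8 tensor; permute to HWC if the handler expects that layout
    raw = torch.frombuffer(bytearray(payload), dtype=torch.uint8)
    return decode_image(raw).permute(1, 2, 0)

# For JPEG inputs, recent torchvision releases can also decode directly on the GPU,
# e.g. torchvision.io.decode_jpeg(raw, device="cuda").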
image = cv2.imread(image_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
If we're only using OpenCV to perform BGR-to-RGB conversion and vice versa, we can remove the dependency, use Image.open in both cases, and perform RGB-to-BGR in the handler with img[:,:,::-1].
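A minimal sketch of that PIL-only variant (hypothetical; image_path is a placeholder, and the PR ultimately kept OpenCV, see the reply below):

import numpy as np
from PIL import Image

image_path = "example.jpg"  # placeholder path
image = np.array(Image.open(image_path).convert("RGB"))  # HWC, RGB
image_bgr = image[:, :, ::-1]  # flip channels to BGR where OpenCV ordering is expected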
I kept OpenCV since SAM and SAM Fast are using it, which keeps this example consistent with them.
@agunapal It would also be good to add a test which gets unskipped if PT 2.2 and the other dependencies are available.
        sam_checkpoint = ctx.model_yaml_config["handler"]["sam_checkpoint"]

        self.model = sam_model_registry[model_type](checkpoint=sam_checkpoint)
        self.model.to(self.device)
self.device will not be set if CUDA is unavailable.
Good catch. Added cpu as the default. Not sure if we should just assert if there is no CUDA?
default cpu should be fine
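A minimal sketch of the device selection being discussed (assumed shape of the fix, not the exact PR code):

import torch

def resolve_device(ctx):
    # Fall back to CPU when CUDA is unavailable instead of leaving the device unset
    gpu_id = ctx.system_properties.get("gpu_id")  # TorchServe provides the assigned GPU id here
    if torch.cuda.is_available() and gpu_id is not None:
        return torch.device(f"cuda:{gpu_id}")
    return torch.device("cpu")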
handler:
  profile: true
  model_type: "vit_h"
  sam_checkpoint: "/home/ubuntu/serve/examples/large_models/segment_anything_fast/sam_vit_h_4b8939.pth"
Can we use a relative path here, or avoid having this hard-coded?
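For reference, a relative-path variant might look like this (hypothetical, not what the PR uses; it assumes the checkpoint sits alongside the model artifacts):

handler:
  profile: true
  model_type: "vit_h"
  sam_checkpoint: "sam_vit_h_4b8939.pth"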
For torch.compile, do we need to pass any additional flags in the config file?
We have only one known way to do this: https://github.com/pytorch/serve/tree/master/examples/large_models/Huggingface_accelerate/llama2. However, that was inconvenient for repeated experiments: we would have to move the weights before creating the mar file or re-download them. The current approach is simpler.
No flags are needed for torch.compile. Everything is done inside SAM Fast.
LGTM
Description
This PR shows how to use Segment Anything Fast with TorchServe.
Fixes #(issue)