Does sam2 have any parameters to adjust the inference result? #299
Comments
Thanks. I changed to the base plus model and the results didn't get better. I use the bbox setting just by copying the code from the Jupyter notebook and putting it in my script.
That gif is a screen capture of using this script.
I think it would be tricky to do with the auto mask generator alone. The default point grid covers the whole image and is going to pick up loads of stuff in the background that will make it hard to deal with, so you could try a few things:

- Use a custom point grid that is limited to the center of the image.
- Adjust the min_mask_region_area setting, to see if that can help to filter out 'small' masks.
- If you don't mind bringing in other models, use an object (person) detector to at least get a bounding box around the person and use that to ignore all the masks outside.
- Similarly, you could maybe use a depth prediction model to ignore any masks that come from parts of the image that are 'too far away' to be the person.

Otherwise I think it's difficult to target specific objects with the auto mask generator, since the SAM models alone don't have a way to classify the segmentation results.
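A minimal sketch of the custom point-grid idea. The grid-building helper is plain NumPy; the commented-out generator call is an assumption, with parameter names borrowed from the original SAM's SamAutomaticMaskGenerator (point_grids, min_mask_region_area), so check the sam2 API before relying on it:

```python
import numpy as np

def centered_point_grid(points_per_side: int, margin: float = 0.25) -> np.ndarray:
    """Build a normalized (x, y) point grid covering only the central
    region of the image. margin=0.25 keeps all points inside the middle
    50% of each axis, so background at the edges is never sampled."""
    coords = np.linspace(margin, 1.0 - margin, points_per_side)
    xs, ys = np.meshgrid(coords, coords)
    return np.stack([xs.ravel(), ys.ravel()], axis=-1)  # shape (n*n, 2)

grid = centered_point_grid(8)

# Hypothetical usage -- names follow SAM's SamAutomaticMaskGenerator:
# mask_generator = SAM2AutomaticMaskGenerator(
#     model,
#     point_grids=[grid],          # restrict prompt sampling to the center
#     min_mask_region_area=500,    # drop tiny disconnected regions
# )
```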
The following is the result I predicted; may I ask if there is any way to improve it? I have adjusted mask_threshold to -1.0, -0.5, and -0.2, and max_hole_area to 1 and 20. None of these changes worked.
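Beyond mask_threshold and max_hole_area, one other knob worth trying is multimask_output: the predictor can return several candidate masks with quality scores, and picking the highest-scoring one often helps more than shifting the threshold. The predictor call below is commented out and only illustrative (it mirrors the segment-anything predictor interface); the selection helper itself is plain NumPy:

```python
import numpy as np

def best_mask(masks: np.ndarray, scores: np.ndarray) -> np.ndarray:
    """Return the candidate mask with the highest predicted IoU score."""
    return masks[int(np.argmax(scores))]

# Hypothetical call -- check SAM2ImagePredictor's actual signature:
# masks, scores, _ = predictor.predict(box=bbox, multimask_output=True)
# mask = best_mask(masks, scores)

# Tiny demo with fake candidates:
masks = np.array([[[0, 1]], [[1, 1]]])
scores = np.array([0.4, 0.9])
chosen = best_mask(masks, scores)
```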