YOLACT: Add real-time Instance Segmentation #3048
This is (another) exciting and impressive potential enhancement.
@AlexeyAB have you implemented the YOLACT architecture in instance-segmenter.c?
Hi @AlexeyAB, I really need it. When do you think we can use this feature? Thanks in advance.
Why not use https://github.com/dbolya/yolact ?
@zpmmehrdad I suggest using the repo @LukeAI has suggested; it is good in FPS and is implemented in PyTorch.
@abhigoku10 Hi, thanks for suggesting it, but I want to use it in C++.
@zpmmehrdad Yes, I also wanted to use C++, but later I shifted my whole development environment. Please let me know if you come across any other repo.
CenterMask: Real-Time Anchor-Free Instance Segmentation: https://arxiv.org/pdf/1911.06667.pdf
YOLACT++: Better Real-time Instance Segmentation
The differences from YOLACT:
Add real-time Instance Segmentation:
Look at: https://github.com/pjreddie/darknet/blob/master/examples/instance-segmenter.c
YOLACT++: https://arxiv.org/abs/1912.06218v1
YOLACT (You Only Look At CoefficienTs): https://arxiv.org/abs/1904.02689v1 and https://github.com/dbolya/yolact
- Add an additional output-prototype-layer with full-resolution masks (`W x H x K`) with `upsampling_layers = subsampling_layers`, where `K=32` is the number of prototype masks.
- Use `(4+1+classes+K)*anchors` outputs of the Detector instead of `(4+1+classes)*anchors`. Then we do NMS or Fast-NMS. Then we do GEMM between the output-prototype-layer `[W x H][K]` and the found N bboxes `[N][K]` to get `[WxH][N]` masks for the N found bboxes, and apply `logistic_activation` (see the C sketch after this list).
- During training: to compute the mask loss, we simply take the pixel-wise binary cross-entropy between the assembled masks M and the ground-truth masks. We crop with the ground-truth bounding box and divide `L_mask` by the ground-truth bounding-box area.
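To make the GEMM and loss steps concrete, here is a minimal, self-contained C sketch (darknet is written in C). The function names `assemble_masks` and `mask_bce_loss`, and the flat row-major array layout, are assumptions for illustration only, not darknet's actual API:

```c
#include <math.h>

/* Hypothetical helper names -- not darknet's API. */

/* logistic (sigmoid) activation */
static float logistic_activate(float x) { return 1.f / (1.f + expf(-x)); }

/* Assemble instance masks: a plain GEMM between the prototype layer
 * protos[W*H][K] and the K mask coefficients coeffs[N][K] of the N
 * boxes that survived NMS / Fast-NMS, followed by logistic activation.
 * Masks are stored instance-major, masks[i*wh + p], i.e. the transpose
 * of the [WxH][N] product above, so each instance mask is contiguous. */
void assemble_masks(const float *protos, const float *coeffs, float *masks,
                    int wh, int n, int k)
{
    for (int i = 0; i < n; ++i) {
        for (int p = 0; p < wh; ++p) {
            float sum = 0.f;
            for (int j = 0; j < k; ++j)
                sum += protos[p*k + j] * coeffs[i*k + j];  /* dot product */
            masks[i*wh + p] = logistic_activate(sum);
        }
    }
}

/* Training-time mask loss for one instance: pixel-wise binary
 * cross-entropy between the assembled mask and the ground-truth mask,
 * computed only inside the ground-truth box [x0,x1) x [y0,y1) and
 * divided by the box area, as described in the last bullet above. */
float mask_bce_loss(const float *mask, const float *gt_mask, int w,
                    int x0, int y0, int x1, int y1)
{
    const float eps = 1e-7f;  /* guards log(0) */
    float loss = 0.f;
    for (int y = y0; y < y1; ++y) {
        for (int x = x0; x < x1; ++x) {
            float m = mask[y*w + x];
            float t = gt_mask[y*w + x];
            loss -= t * logf(m + eps) + (1.f - t) * logf(1.f - m + eps);
        }
    }
    int area = (x1 - x0) * (y1 - y0);
    return area > 0 ? loss / area : 0.f;
}
```

At test time the same assembled masks would instead be cropped with the predicted box and thresholded; only the training loss uses the ground-truth box and the area normalization.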
Test on: