Thank you for the awesome work!
I have a question about the text removal task.
There is an AAAI paper, EnsNet (https://arxiv.org/pdf/1812.00723.pdf), that solves text removal in a fully supervised way.
It uses a basic FCN together with perceptual/style/content losses for the text removal task (a rough sketch of such a loss combination is shown below).
Have you ever thought about this approach?
Of course, the major downside of this paper is that it requires supervised data.
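For reference, here is a minimal PyTorch sketch of the kind of pixel + perceptual + style loss the EnsNet paper describes, assuming a frozen torchvision VGG16 feature extractor. The layer cut, loss weights, and function names are illustrative assumptions, not the paper's exact configuration, and inputs are assumed to already be normalized to ImageNet statistics.

```python
import torch.nn.functional as F
from torchvision.models import vgg16

# Frozen VGG16 feature extractor (first few conv blocks), shared by the
# perceptual and style terms. On torchvision < 0.13 use pretrained=True instead.
_vgg = vgg16(weights="DEFAULT").features[:16].eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

def _gram(feat):
    # Gram matrix for the style term: (B, C, H, W) -> (B, C, C).
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def removal_loss(pred, target, w_pix=1.0, w_perc=0.05, w_style=120.0):
    """Pixel + perceptual + style loss for text removal; weights are illustrative."""
    pix = F.l1_loss(pred, target)
    fp, ft = _vgg(pred), _vgg(target)
    perc = F.l1_loss(fp, ft)
    style = F.l1_loss(_gram(fp), _gram(ft))
    return w_pix * pix + w_perc * perc + w_style * style
```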
Thank you very much for the paper! I tended to favor non-GAN models because I had limited GPU power before (><), but now I am able to use mixed-precision training. I will look at the details of the proposed method.
The current work is actually not on the model (I hope) but on processing the training data. All training images come from translation groups that have different criteria and rules for image processing: some types of text are removed by some groups but left unchanged by others. That makes the data very noisy, and model predictions are not consistent. Neither various loss functions nor regularization alleviates this problem. On the other hand, text segmentation is the key step for image inpainting, so I need accurate segmentation around words.
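To illustrate why segmentation accuracy dominates the inpainting result, here is a hypothetical masked-loss sketch (not this repository's actual code, and the weights are made up): if the reconstruction loss is weighted by the predicted text mask, any missed or spurious text pixels directly change what the model is trained to erase.

```python
import torch.nn.functional as F

def masked_inpaint_loss(pred, target, text_mask, w_in=6.0, w_out=1.0):
    """L1 reconstruction loss weighted more heavily inside the binary text mask.

    Shapes: pred/target (B, C, H, W); text_mask (B, 1, H, W) with values in {0, 1}.
    """
    inside = F.l1_loss(pred * text_mask, target * text_mask)
    outside = F.l1_loss(pred * (1 - text_mask), target * (1 - text_mask))
    return w_in * inside + w_out * outside
```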