Have you tried a combination of multiple convolutional layers followed by a fully-connected layer, with the decoder mirroring that?
I noticed you mentioned a performance hit from the fully-connected layer. But does the combination of the two (fully-connected and convolutional layers) work?
Thanks
Just wanted to follow up on this! I've tried adding some fully-connected layers and see my model completely underfitting CIFAR-10, whereas a simple model with mainly conv layers produces good results right away. Quite strange, wouldn't you agree?!
I think the issue is that a fully convolutional autoencoder is inherently overcomplete, that is, the latent space has a higher dimension than the input. The input has 3072 (3 × 32 × 32) dimensions, but the latent representation has 8192 (16 × 16 × 32), so the autoencoder never has to compress anything and has no pressure to learn useful hidden representations. A fully-connected bottleneck, by contrast, can force the code to be smaller than the input.
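To make the dimension argument concrete, here is a quick sanity check in plain Python. The kernel/stride/padding values are assumptions I picked to reproduce the 16 × 16 × 32 latent shape mentioned above, not necessarily the repo's actual layer config:

```python
from math import prod

def conv2d_out(hw, kernel=3, stride=1, padding=1):
    """Output spatial size of a conv layer: floor((hw + 2p - k) / s) + 1."""
    return (hw + 2 * padding - kernel) // stride + 1

# CIFAR-10 input: 3 channels, 32 x 32 pixels
input_dim = prod((3, 32, 32))  # 3072

# Hypothetical encoder: one stride-2 conv from 3 -> 32 channels
# halves the 32 x 32 spatial size to 16 x 16
side = conv2d_out(32, kernel=3, stride=2, padding=1)  # 16
latent_dim = 32 * side * side  # 8192 > 3072: overcomplete, no compression

# A fully-connected bottleneck of, say, 128 units would instead force
# an undercomplete code (128 << 3072)
bottleneck = 128
print(input_dim, latent_dim, bottleneck)
```

With those numbers the conv-only code is larger than the input, so the identity map is an easy solution; shrinking the code (via an FC layer or more aggressive striding) is what forces actual representation learning.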
Thanks for the nice repository!
Thanks