
Fully Connected Layer #1

Open
07Agarg opened this issue Oct 10, 2020 · 3 comments

Comments


07Agarg commented Oct 10, 2020

Thanks for the nice repository!

  • Have you tried a combination of multiple convolutional layers followed by a fully-connected layer, with a decoder that mirrors it?
  • I noticed you mentioned a performance hit caused by the fully-connected layer. But does the combination of the two (fully-connected and convolutional layers) work?

Thanks
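One way to sketch the architecture the question describes — conv layers, then a fully-connected bottleneck, with a mirrored decoder. This is a minimal illustration assuming PyTorch; the layer sizes and the `latent_dim` value are made up here, not taken from the repository.

```python
import torch
import torch.nn as nn

class ConvFCAutoencoder(nn.Module):
    """Conv encoder -> FC bottleneck -> FC -> transposed-conv decoder (illustrative sizes)."""

    def __init__(self, latent_dim=128):
        super().__init__()
        # Encoder: two stride-2 convs shrink 3x32x32 -> 32x8x8
        self.encoder_conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Fully-connected bottleneck compresses 2048 -> latent_dim
        self.fc_enc = nn.Linear(32 * 8 * 8, latent_dim)
        # Decoder mirrors the encoder: FC back up, then transposed convs
        self.fc_dec = nn.Linear(latent_dim, 32 * 8 * 8)
        self.decoder_conv = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.fc_enc(self.encoder_conv(x).flatten(1))
        h = self.fc_dec(z).view(-1, 32, 8, 8)
        return self.decoder_conv(h)

x = torch.randn(4, 3, 32, 32)  # a CIFAR-10-shaped batch
model = ConvFCAutoencoder()
out = model(x)
print(out.shape)  # torch.Size([4, 3, 32, 32])
```

Unlike a fully convolutional variant, the FC bottleneck here forces the 3072-dimensional input through a much smaller latent vector, which is the design choice debated later in this thread.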


bdytx5 commented Aug 30, 2023

Just wanted to follow up on this! I've tried some FC layers and I see complete underfitting of my data on CIFAR-10, whereas a simple model with mainly conv layers produces good results almost immediately. Quite strange, wouldn't you agree?

@ayaz-amin

I think the issue is that a fully convolutional autoencoder is inherently overcomplete; that is, the latent space has a higher dimension than the input. The input has 3072 dimensions (3 × 32 × 32), but the latent representation has 8192 (16 × 16 × 32), so the autoencoder doesn't have to discard any information, which removes the pressure to learn useful hidden representations.
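The arithmetic behind that point can be checked directly. A quick plain-Python sanity check of the dimensions quoted above (assuming the latent tensor is 32 feature maps of 16 × 16, as stated):

```python
# Input: a CIFAR-10 image, channels x height x width
input_dim = 3 * 32 * 32
# Latent: 32 feature maps of 16x16 (the shape quoted in the comment above)
latent_dim = 16 * 16 * 32

print(input_dim, latent_dim)   # 3072 8192
# Overcomplete: the latent space is larger than the input,
# so nothing forces the autoencoder to compress.
assert latent_dim > input_dim
```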


bdytx5 commented Dec 1, 2024

That would make sense. I'll have to circle back to this at some point and try lower-dimensional latent spaces.
