Adding random noise to the latent representation increases the accuracy of the autoencoder #60

monk1337 opened this issue Mar 2, 2020 · 0 comments


monk1337 commented Mar 2, 2020

I am working on a graph autoencoder project. It consists of dense layers like this:

dense([10, 756]) --> dense([10, 512]) --> latent([10, 256]) --> dense([10, 512]) --> dense([10, 756])
This is a very simple setup; training it with mean squared error loss, I am getting accuracy ~0.79.
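
In code, the setup is roughly the following (a Keras-style sketch for illustration only; my actual project uses graph-specific layers, and the ReLU activations and Adam optimizer here are just placeholders):

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Plain autoencoder sketch: 756 -> 512 -> 256 (latent) -> 512 -> 756.
# Activations and optimizer are placeholders, not the exact project setup.
inputs = tf.keras.Input(shape=(756,))
h_enc = layers.Dense(512, activation="relu")(inputs)
z = layers.Dense(256)(h_enc)                        # latent code, [batch, 256]
h_dec = layers.Dense(512, activation="relu")(z)
outputs = layers.Dense(756)(h_dec)

autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")   # mean squared error loss

# autoencoder.fit(x_train, x_train, batch_size=10)  # trained to reconstruct the input
```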
I tried experimenting with a variational autoencoder, but it didn't work out well on this problem. The strange thing is that if I add noise to the latent representation, the accuracy actually increases to ~0.85.

With noise added to the latent, the setup is:
dense([10, 756]) --> dense([10, 512]) --> latent([10, 256]) + random_normal([10, 256]) --> dense([10, 512]) --> dense([10, 756])
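
In code, the only change is the noise added to the latent code before decoding (again just a sketch; the unit-variance Gaussian noise below is an assumption to mirror random_normal above):

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Same encoder/decoder as above, but standard-normal noise is added to the
# latent code before it is passed to the decoder.
inputs = tf.keras.Input(shape=(756,))
h_enc = layers.Dense(512, activation="relu")(inputs)
z = layers.Dense(256)(h_enc)                        # latent code, [batch, 256]
z_noisy = layers.Lambda(
    lambda t: t + tf.random.normal(tf.shape(t))     # latent + random_normal noise
)(z)
h_dec = layers.Dense(512, activation="relu")(z_noisy)
outputs = layers.Dense(756)(h_dec)

noisy_autoencoder = Model(inputs, outputs)
noisy_autoencoder.compile(optimizer="adam", loss="mse")
```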
It gives good accuracy, but I want to know the theory behind it. I thought it might be a denoising autoencoder, but a denoising autoencoder adds noise to the input samples, not to the latent space, so we can't really call it that. How should I explain why it increases the accuracy?

I am not using KL divergence or a variational autoencoder, so how is the accuracy increasing?

Thank you!
