I am working on a graph autoencoder project. It consists of dense layers like this:
dense([10, 756]) --> dense([10, 512]) --> latent([10, 256]) --> dense([10, 512]) --> dense([10, 756])
This is a very simple setup, and training it with a mean squared error loss I get an accuracy of ~0.79.
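For reference, here is a minimal sketch of that setup (TensorFlow/Keras is assumed; the ReLU activations, optimizer, and variable names are my own choices, only the layer sizes and the MSE loss come from the description above):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Plain autoencoder: 756 -> 512 -> 256 (latent) -> 512 -> 756
inputs = tf.keras.Input(shape=(756,))             # each of the 10 rows is a 756-dim feature vector
h = layers.Dense(512, activation="relu")(inputs)
latent = layers.Dense(256, activation="relu")(h)  # latent representation
h = layers.Dense(512, activation="relu")(latent)
outputs = layers.Dense(756)(h)                    # linear reconstruction layer

autoencoder = models.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse") # trained with mean squared error
```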
I thought I would experiment with a variational autoencoder, but it didn't work out well on this problem. The strange thing is that if I add noise to the latent representation, the accuracy actually increases to ~0.85.
With noise added in the latent space, the setup is:
dense([10, 756]) --> dense([10, 512]) --> latent([10, 256]) + random_normal([10, 256]) --> dense([10, 512]) --> dense([10, 756])
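A sketch of the noisy version, again assuming TensorFlow/Keras (I use the built-in GaussianNoise layer with a unit standard deviation as a stand-in for the random_normal term; note that it is only active during training):

```python
# Same architecture, but Gaussian noise is added to the latent code before decoding.
inputs = tf.keras.Input(shape=(756,))
h = layers.Dense(512, activation="relu")(inputs)
latent = layers.Dense(256, activation="relu")(h)
noisy_latent = layers.GaussianNoise(stddev=1.0)(latent)  # latent + random_normal([10, 256])
h = layers.Dense(512, activation="relu")(noisy_latent)
outputs = layers.Dense(756)(h)

noisy_autoencoder = models.Model(inputs, outputs)
noisy_autoencoder.compile(optimizer="adam", loss="mse")
```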
It gives good accuracy, but I want to understand the theory behind it. I thought of it as a denoising autoencoder, but in a denoising autoencoder the noise is added to the input samples, not to the latent space, so we can't really call it that. How should I explain why this increases the accuracy?
I am not using a KL-divergence term or a variational autoencoder, so how is the accuracy improving?
Thank you!