To summarize, a larger beta will result in a more disentangled latent representation but lower-fidelity reconstructions. A smaller beta will impose less disentangling, allowing for higher-fidelity reconstructions. At beta = 1, the B-VAE is equivalent to a plain VAE, so beta is usually set to a value greater than one.
Determining the proper beta depends on the problem and your goals. You can try several values of beta with your data, and you can create a custom training regimen that changes beta over time. This implementation assumes a constant beta, but you can rebuild the model with a different beta value partway through training to get a similar effect.
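As a minimal sketch of the "changing beta over time" idea: a common heuristic is to linearly anneal beta from 0 up to its target value over a warm-up period, then hold it constant. The function names (`beta_schedule`, `beta_vae_loss`) and defaults (`warmup_steps`, `beta_max`) below are illustrative, not part of this implementation:

```python
def beta_schedule(step, warmup_steps=10_000, beta_max=4.0):
    """Linearly anneal beta from 0 to beta_max over warmup_steps, then hold it."""
    if step >= warmup_steps:
        return beta_max
    return beta_max * step / warmup_steps

def beta_vae_loss(recon_loss, kl_div, step, warmup_steps=10_000, beta_max=4.0):
    """Beta-VAE objective: reconstruction term plus beta(step) times the KL term."""
    beta = beta_schedule(step, warmup_steps, beta_max)
    return recon_loss + beta * kl_div
```

Early in training the model focuses on reconstruction; as beta ramps up, the KL penalty increasingly pressures the latent code toward the disentangled regime.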
In your case, you use beta = 100. So, how should one choose a proper (non-constant) beta value? And is a large or a small beta better?