
Updates to VAE (and ops) to enable variable batch sizes #26

Open
wants to merge 1 commit into master

Conversation


@dribnet dribnet commented Apr 1, 2018

This is a great collection of generative models for TensorFlow, all nicely wrapped in a common class interface. I'd like to use this as a basis for ongoing work I'm migrating to TensorFlow. I'm interested in using this code not only to test MNIST models, but also as a way of generating a series of reference models, trained on several other datasets, that can be reused and shared.

So as a first proposed change, I'd like to decouple the batch_size from the model definition and instead make it a runtime variable by using a placeholder. This allows:

  1. a trained model to be opened later without knowing the batch_size used at training time
  2. the encoder/decoder to be called on x/z of variable length
  3. a trained model to be refined later via transfer learning at a different batch_size

I've done a quick version of this for the VAE model and verified that it still works on that model (at least on the latest TensorFlow) and enables (1) and (2) above. If you are open to the spirit of this change, I'm happy to rework the implementation if you'd like it cleaned up further.
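In TF 1.x terms, the change amounts to declaring placeholders with a `None` leading dimension (e.g. `tf.placeholder(tf.float32, [None, dim])`) and deriving the batch size at run time via `tf.shape(x)[0]` instead of a fixed constant. The shape logic itself can be sketched framework-free with NumPy; the class and argument names below are hypothetical illustrations, not the repository's actual API:

```python
import numpy as np

# Before: the batch size is baked into the model at construction time,
# so the graph only accepts inputs of exactly that size.
class FixedBatchEncoder:
    def __init__(self, batch_size, z_dim=20):
        self.batch_size = batch_size
        self.z_dim = z_dim

    def encode(self, x):
        assert x.shape[0] == self.batch_size  # any other batch size fails
        return np.zeros((self.batch_size, self.z_dim))

# After: the batch dimension is left unspecified (None in a TF placeholder)
# and read off the input at run time, so any batch size works.
class VariableBatchEncoder:
    def __init__(self, z_dim=20):
        self.z_dim = z_dim

    def encode(self, x):
        batch_size = x.shape[0]  # analogous to tf.shape(x)[0]
        return np.zeros((batch_size, self.z_dim))

enc = VariableBatchEncoder()
print(enc.encode(np.ones((7, 784))).shape)    # (7, 20)
print(enc.encode(np.ones((128, 784))).shape)  # (128, 20)
```

The same encoder instance handles a batch of 7 or 128 without rebuilding anything, which is what enables reopening a trained model without knowing the original batch_size.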
