[Want to Help!!] Welcome to JEN-1-COMPOSER-pytorch Discussions! #2
0417keito announced in Announcements
Replies: 1 comment
-
I am trying to run JEN-1 on my own in a Colab notebook, but I have an issue with the ckpt_path because I can't find it. I hope someone can help. This is my Colab: https://colab.research.google.com/drive/15YFORXT4YZyHv2oNdItaDpLh8M1fTbK7?usp=sharing
-
I have created a JEN-1-COMPOSER repository (here) and am posting its discussions here as well.
👋 What is this discussion for?
There are a few points in implementing JEN-1-Composer that are unclear to me, and I would like to discuss them. To that end, here is a summary of what has been done and what I want to discuss.
What has been done
What I want to discuss
First, select the number of tracks determined by the curriculum-training stage. Then, for each sample in the batch, randomly select a non-zero time step t_i for the selected tracks, and for each of the remaining tracks randomly select a time step from {0, t_i, T}. I implemented this in trainer.py; is this correct?
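For reference, the time-step assignment described above can be sketched as follows. This is a minimal stand-alone sketch, not code from the repository; the function name, the shape of the output, and the `num_steps` parameter are all my own assumptions.

```python
import random

def assign_timesteps(batch_size, num_tracks, active_tracks, num_steps=1000):
    """Sketch of per-sample, per-track diffusion time-step assignment.

    Active (selected) tracks share a random non-zero step t_i; each
    remaining track independently picks from {0, t_i, num_steps}:
    0 = track given clean (conditioning), num_steps = pure noise
    (generation), t_i = co-generated with the active tracks.
    """
    timesteps = []
    for _ in range(batch_size):
        t_i = random.randint(1, num_steps - 1)  # non-zero step for this sample
        row = []
        for track in range(num_tracks):
            if track in active_tracks:
                row.append(t_i)
            else:
                row.append(random.choice([0, t_i, num_steps]))
        timesteps.append(row)
    return timesteps  # [batch_size][num_tracks] list of integer steps
```

Returning one step per track per sample like this lets a single forward pass mix conditioning, generation, and co-generation roles across tracks.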
Furthermore, is k the maximum number of input and output tracks for every curriculum-training stage? If the input and output channels are not fixed to the maximum number of tracks k, I don't think a single model can be trained, because the model's channel counts would differ between stages.
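If that reading is right, one way to keep a single model across stages is to always allocate channels for the maximum of k tracks and zero-pad the tracks that are unused at the current stage. A minimal sketch under that assumption (the function and its signature are hypothetical, not from the repository):

```python
def pad_to_max_tracks(tracks, max_tracks, channels_per_track, length):
    """Pad a list of per-track feature grids up to max_tracks tracks.

    Each track is a [channels_per_track][length] grid of floats.
    Tracks beyond those present at the current curriculum stage are
    filled with zeros, so the model's input/output channel count
    (max_tracks * channels_per_track) never changes between stages.
    """
    padding = [
        [[0.0] * length for _ in range(channels_per_track)]
        for _ in range(max_tracks - len(tracks))
    ]
    return list(tracks) + padding
```

With masking (or an explicit "inactive" flag) on top of the zero padding, the loss can also be restricted to the tracks that exist at the current stage.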
For surroundings generation, conditioned generation, and co-generation, I simply computed x_t for the selected tracks and the remaining tracks separately, based on their selected time steps, and then concatenated the two results. Is this correct?
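The per-track noising step described above can be sketched like this. The noise schedule here is a simple cosine-style stand-in that I chose for illustration; the actual schedule and latent shapes in the repository may differ. At t = 0 the track passes through clean, and at t = num_steps it is essentially pure noise.

```python
import math
import random

def noisy_latents(x0_tracks, t_per_track, num_steps=1000):
    """Compute x_t independently for each track at its own time step,
    then return the per-track results ready to concatenate.

    x0_tracks: list of per-track clean latents (each a list of floats).
    t_per_track: one integer time step per track.
    """
    out = []
    for x0, t in zip(x0_tracks, t_per_track):
        # alpha_bar in [0, 1]: 1 at t=0 (clean), ~0 at t=num_steps (noise)
        alpha_bar = math.cos(0.5 * math.pi * t / num_steps) ** 2
        x_t = [
            math.sqrt(alpha_bar) * v
            + math.sqrt(1.0 - alpha_bar) * random.gauss(0.0, 1.0)
            for v in x0
        ]
        out.append(x_t)
    return out
```

Concatenating the selected-track and remaining-track results along the channel axis then gives the model one input in which each track sits at its own point on the noise schedule.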