Running Time Estimation and Ideas for Improving #20

Answered by brianhie
achigbrow asked this question in Q&A

Hi @achigbrow, thanks for reaching out! You can train the model within a smaller GPU memory budget by lowering the minibatch size, though this could influence training dynamics. Training the model on full-length Spike with the current code is also quite slow, even on a GPU; if I recall correctly, it took a week or so.
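As a rough illustration (not the repo's actual code or flag names), here is how minibatch size trades memory for training dynamics in a Keras-style setup; the architecture, vocabulary size, and data below are all stand-ins:

```python
import numpy as np
from tensorflow import keras

# Stand-in for the sequence language model (assumed here to be a
# BiLSTM over tokenized amino-acid sequences; details may differ).
model = keras.Sequential([
    keras.layers.Embedding(input_dim=26, output_dim=20),
    keras.layers.Bidirectional(keras.layers.LSTM(64)),
    keras.layers.Dense(26, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

X = np.random.randint(0, 26, size=(512, 100))  # dummy tokenized sequences
y = np.random.randint(0, 26, size=(512,))      # dummy prediction targets

# A smaller batch_size lowers peak activation memory per step, at the
# cost of more optimizer steps and noisier gradient estimates.
model.fit(X, y, batch_size=8, epochs=1)
```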

Running escape inference takes about 10 hours, and it can likewise be made to fit in memory by lowering the inference batch size.
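For the inference side, a minimal sketch of the same idea: score sequences in small chunks so only one chunk's activations occupy GPU memory at a time. The helper name and default batch size here are illustrative, not from the repo:

```python
import numpy as np

def predict_in_batches(model, X, batch_size=16):
    """Illustrative helper: run model.predict over small chunks so peak
    memory scales with batch_size rather than with the full dataset."""
    outputs = []
    for start in range(0, len(X), batch_size):
        outputs.append(model.predict(X[start:start + batch_size], verbose=0))
    return np.concatenate(outputs, axis=0)

# Usage, continuing the sketch above:
# probs = predict_in_batches(model, X, batch_size=16)
```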

Restricting to the RBD could be a good option if you are extremely resource constrained, and should largely just require pointing the script at a different FASTA file. Hope that helps!
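If you go the RBD route, here is a hedged Biopython sketch for producing the restricted FASTA. The residue boundaries (319-541 is the commonly cited RBD span for SARS-CoV-2 Spike) and the file names are assumptions to verify against your own data and numbering:

```python
from Bio import SeqIO

# 1-indexed, inclusive boundaries; confirm against your alignment.
RBD_START, RBD_END = 319, 541

records = []
for rec in SeqIO.parse("spike_full_length.fasta", "fasta"):  # hypothetical input
    rbd = rec[RBD_START - 1 : RBD_END]  # SeqRecord slicing, 0-indexed
    rbd.id = rec.id + "_RBD"
    rbd.description = ""
    records.append(rbd)

SeqIO.write(records, "spike_rbd.fasta", "fasta")  # hypothetical output
```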
