
Releases: aqlaboratory/openfold

New Documentation for OpenFold

13 May 10:12
f434a27

This release introduces a new home for OpenFold documentation, located at https://openfold.readthedocs.io/.

We hope that the guides provided in the documentation will help users with common workflows, as well as with commonly encountered issues.

A few quality-of-life changes are also included:

  • Adds scripts for creating the OpenFold training set from the datasets that are stored on RODA. We will aim to host the processed datasets on RODA as well in the near future.
  • Adds a script for converting OpenFold v1 weights into OpenFold v2 weights; see this page for more information
  • Adds --experiment_config_json option to both run_pretrained_openfold.py and train_openfold.py to more easily edit model config settings in openfold/config.py
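The new `--experiment_config_json` flag points the scripts at a JSON file of config overrides. A minimal sketch of preparing such a file is below; the flag and script names come from the release notes, but the specific override keys shown are hypothetical placeholders, not confirmed settings from openfold/config.py.

```python
import json

# Hypothetical overrides: these key names are assumptions for
# illustration only; consult openfold/config.py for real settings.
overrides = {
    "globals": {"chunk_size": 128},
    "model": {"template": {"enabled": False}},
}

# Write the overrides to a JSON file that can be passed on the
# command line via --experiment_config_json.
with open("experiment_config.json", "w") as f:
    json.dump(overrides, f, indent=2)

# Then, e.g.:
#   python run_pretrained_openfold.py ... \
#       --experiment_config_json experiment_config.json
```

This keeps experiment-specific settings out of the source tree, so openfold/config.py no longer needs to be edited by hand for each run.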

What's Changed

New Contributors

Full Changelog: v2.0.0...v2.1.0

v2.0.0

08 Feb 16:04
bb3f51e

Major Changes

  • SoloSeq inference: Single-sequence inference using ESM-1b embeddings with template features is now supported. Check out SoloSeq in the README for more information.
  • Multimer: Inference in multimer mode using the AlphaFold-Multimer weights is now supported. Check out Multimer in the README for instructions, or try multimer inference in the Colab notebook.
  • Addition of a custom DeepSpeed DS4Sci_EvoformerAttention kernel that reduces peak device memory requirements by 13x, yielding 15% faster training and a 4x speedup during inference. Test it out using the use_deepspeed_evo_attention option in openfold/config.py. More information in the README.
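The kernel is opt-in via the config. Here is a minimal sketch of toggling it, using a plain dict to stand in for the nested config object defined in openfold/config.py; the exact attribute path under "globals" is an assumption for illustration.

```python
# Sketch only: a plain dict stands in for the nested config object
# in openfold/config.py.
config = {
    "globals": {
        # Assumed default; the DeepSpeed kernel is opt-in.
        "use_deepspeed_evo_attention": False,
    }
}

def enable_deepspeed_evo_attention(cfg):
    """Turn on the DS4Sci_EvoformerAttention kernel.

    Requires the DeepSpeed DS4Sci kernels to be available at
    runtime; this sketch only flips the config flag.
    """
    cfg["globals"]["use_deepspeed_evo_attention"] = True
    return cfg

config = enable_deepspeed_evo_attention(config)
```

Keeping the kernel behind a single boolean lets users fall back to the stock attention implementation on hardware where the DeepSpeed kernels are unavailable.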

All Changes

New Contributors

Full Changelog: v1.0.1...v2.0.0

OpenFold v1.0.1

23 Nov 20:46

OpenFold as of the release of our manuscript. Includes many new features, such as FP16 training and improved training stability.

What's Changed

  • use multiple models for inference by @decarboxy in #117
  • Update input processing by @brianloyal in #116
  • adding a caption to the image in the readme by @decarboxy in #133
  • Properly handling file outputs when multiple models are evaluated by @decarboxy in #142
  • Fix for issue in download_mgnify.sh by @josemduarte in #166
  • Fix tag-sequence mismatch when predicting for multiple fastas by @sdvillal in #164
  • Support openmm >= 7.6 by @sdvillal in #163
  • Fixing issue in download_uniref90.sh by @josemduarte in #171
  • Fix propagation of use_flash for offloaded inference by @epenning in #178
  • Update deepspeed version to 0.5.10 by @NZ99 in #185
  • Fixes errors when processing .pdb files by @NZ99 in #188
  • fix incorrect learning rate warm-up after restarting from ckpt by @Zhang690683220 in #182
  • Add opencontainers image-spec to Dockerfile by @SauravMaheshkar in #128
  • Write inference and relaxation timings to a file by @brianloyal in #201
  • Minor fixes in setup scripts by @timodonnell in #202
  • Minor optimizations & fixes to support ESMFold by @nikitos9000 in #199
  • Drop chains that are missing (structure) data in training by @timodonnell in #210
  • adding a script for threading a sequence onto a structure by @decarboxy in #206
  • Set pin_memory to True in default dataloader config. by @NZ99 in #212
  • Fix missing subtract_plddt argument in prep_output call by @mhrmsn in #217
  • fp16 fixes by @beiwang2003 in #222
  • Set clamped vs unclamped FAPE for each sample in batch independently by @ar-nowaczynski in #223
  • Fix probabilities type (int -> float) by @atgctg in #225
  • Small fix for prep_mmseqs_dbs. by @jonathanking in #232

New Contributors

Full Changelog: v1.0.0...v1.0.1

OpenFold v1.0.0

22 Jun 08:09

OpenFold at the time of the release of our original model parameters and training database. Adds countless improvements over the previous beta release, including, but not limited to:

  • Many bugfixes that contribute to stabler, more correct, and more versatile training
  • Options to run OpenFold using our original weights
  • Custom attention kernels and alternative attention implementations that greatly reduce peak memory usage
  • A vastly superior Colab notebook that runs inference many times faster than the original
  • Efficient scripts for computing alignments, including the option to run the MMseqs2 alignment pipeline
  • Vastly improved logging during training & inference
  • Careful optimizations for significantly improved speeds & memory usage during both inference and training
  • Opportunistic optimizations that dynamically speed up inference on short (< ~1500 residues) chains
  • Certain changes borrowed from updates made to the AlphaFold repo, including bugfixes, GPU relaxation, etc.
  • "AlphaFold-Gap" support allows inference on complexes using OpenFold and AlphaFold weights
  • WIP OpenFold-Multimer implementation on the multimer branch
  • Improved testing for the data pipeline
  • Partial CPU offloading extends the upper limit on inference sequence lengths
  • Docker support
  • Missing features from the original release, including learning rate schedulers, distillation set support, etc.

Full Changelog: v0.1.0...v1.0.0

OpenFold v0.1.0

18 Nov 20:10

The initial release of OpenFold.

Full Changelog: https://github.com/aqlaboratory/openfold/commits/v0.1.0