
v0.11.0

@irenedea irenedea released this 13 Aug 17:16

🚀 LLM Foundry v0.11.0

New Features

LLM Foundry CLI Commands (#1337, #1345, #1348, #1354)

We've added CLI commands for our commonly used scripts.

For example, instead of calling `composer llm-foundry/scripts/train.py parameters.yaml`, you can now run `composer -c llm-foundry train parameters.yaml`.

Docker Images Contain All Optional Dependencies (#1431)

LLM Foundry Docker images now ship with all optional dependencies installed.

Support for Llama3 Rope Scaling (#1391)

To use it, you can add the following to your parameters:

```yaml
model:
    name: mpt_causal_lm
    attn_config:
      rope: true
      ...
      rope_impl: hf
      rope_theta: 500000
      rope_hf_config:
        type: llama3
        ...
```
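For reference, Llama 3-style rope scaling rescales the rotary inverse frequencies in a frequency-dependent way: high-frequency components are kept as-is, low-frequency components are divided by a scaling factor, and a band in between is smoothly interpolated. The sketch below follows the Hugging Face transformers implementation that the `hf` rope path mirrors; the default values (`factor=8.0`, `low_freq_factor=1.0`, `high_freq_factor=4.0`, 8192 original positions) are Llama 3's published defaults and are shown here as assumptions, not LLM Foundry parameters.

```python
import math

def llama3_scale_inv_freq(inv_freqs,
                          factor=8.0,
                          low_freq_factor=1.0,
                          high_freq_factor=4.0,
                          original_max_position_embeddings=8192):
    """Llama 3-style RoPE scaling applied to a list of inverse frequencies.

    Illustrative sketch of the Hugging Face transformers algorithm; the
    function name and signature are hypothetical.
    """
    low_wavelen = original_max_position_embeddings / low_freq_factor
    high_wavelen = original_max_position_embeddings / high_freq_factor
    scaled = []
    for f in inv_freqs:
        wavelen = 2 * math.pi / f
        if wavelen < high_wavelen:
            # High-frequency components: left unchanged.
            scaled.append(f)
        elif wavelen > low_wavelen:
            # Low-frequency components: scaled down by `factor`.
            scaled.append(f / factor)
        else:
            # In-between band: smooth interpolation between the two regimes.
            smooth = (original_max_position_embeddings / wavelen
                      - low_freq_factor) / (high_freq_factor - low_freq_factor)
            scaled.append((1 - smooth) * f / factor + smooth * f)
    return scaled
```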

Tokenizer Registry (#1386)

We now have a tokenizer registry so you can easily add custom tokenizers.
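For the exact registration API, see the registry module in the PR above. As a rough illustration of the pattern a tokenizer registry follows, here is a minimal name-to-class registry with a decorator; all names in this sketch (`TOKENIZER_REGISTRY`, `register_tokenizer`, `WhitespaceTokenizer`) are hypothetical, not the actual llmfoundry API.

```python
from typing import Callable, Dict, List, Type

# Hypothetical registry mapping config names to tokenizer classes.
TOKENIZER_REGISTRY: Dict[str, Type] = {}

def register_tokenizer(name: str) -> Callable[[Type], Type]:
    """Class decorator that records a tokenizer class under `name`."""
    def decorator(cls: Type) -> Type:
        TOKENIZER_REGISTRY[name] = cls
        return cls
    return decorator

@register_tokenizer('whitespace')
class WhitespaceTokenizer:
    """Toy tokenizer used only to demonstrate registration."""
    def tokenize(self, text: str) -> List[str]:
        return text.split()

# A training config can then refer to the tokenizer by its registered name:
tok = TOKENIZER_REGISTRY['whitespace']()
```

The benefit of the registry pattern is that custom components can be selected from YAML by name without modifying library code.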

LoadPlanner and SavePlanner Registries (#1358)

We now have LoadPlanner and SavePlanner registries so you can easily add custom checkpoint loading and saving logic.

Faster Auto-packing (#1435)

Auto packing startup is now much faster. To use auto packing with finetuning datasets, add `packing_ratio: auto` to your config like so:

```yaml
  train_loader:
    name: finetuning
    dataset:
      ...
      packing_ratio: auto
```
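As background on what the packing ratio measures: packing concatenates several short examples into a single sequence up to the maximum sequence length, so fewer padded tokens are wasted, and the packing ratio is roughly raw examples per packed sequence. The first-fit sketch below illustrates the idea only; it is not LLM Foundry's actual packing implementation.

```python
from typing import List

def pack_greedy(lengths: List[int], max_seq_len: int) -> List[List[int]]:
    """First-fit-decreasing packing of example lengths into bins.

    Each bin represents one packed training sequence whose total token
    count must not exceed `max_seq_len`.
    """
    bins: List[List[int]] = []
    for n in sorted(lengths, reverse=True):
        for b in bins:
            if sum(b) + n <= max_seq_len:
                b.append(n)
                break
        else:
            bins.append([n])  # no existing bin fits; open a new one
    return bins

# 4 raw examples packed into 2 sequences -> packing ratio of 2.0.
packed = pack_greedy([10, 20, 30, 40], max_seq_len=50)
ratio = 4 / len(packed)
```

With `packing_ratio: auto`, LLM Foundry estimates a good ratio from the dataset instead of requiring you to tune this number by hand.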

What's Changed

New Contributors

Full Changelog: v0.10.0...v0.11.0