Download, interact with, and finetune models locally.
Explore the docs »
View Demo · Report Bugs · Suggest Features · Join Discord · Follow on Twitter
Note: Transformer Lab is under active development. Please join our Discord or follow us on Twitter for updates. Questions, feedback, and contributions are highly valued!
Transformer Lab is an app that allows anyone to experiment with Large Language Models.
Transformer Lab is proud to be supported by Mozilla through the Mozilla Builders Program.
Transformer Lab allows you to:
- 💕 One-click Download of Hundreds of Popular Models:
- Llama3, Phi3, Mistral, Mixtral, Gemma, Command-R, and dozens more
- ⬇ Download any LLM from Hugging Face
- 🎶 Finetune / Train Across Different Hardware
- Finetune using MLX on Apple Silicon
- Finetune using Hugging Face on GPU
- ⚖️ RLHF and Preference Optimization
- DPO
- ORPO
- SimPO
- Reward Modeling
- 💻 Work with LLMs Across Operating Systems:
- Windows App
- macOS App
- Linux
- 💬 Chat with Models
- Chat
- Completions
- Preset (Templated) Prompts
- Chat History
- Tweak generation parameters
- Batched Inference
- Tool Use / Function Calling (in alpha)
- 🚂 Use Different Inference Engines
- MLX on Apple Silicon
- Hugging Face Transformers
- vLLM
- llama.cpp
- 🧑‍🎓 Evaluate Models
- 📖 RAG (Retrieval-Augmented Generation)
- Drag and Drop File UI
- Works on Apple MLX, Transformers, and other engines
- 📓 Build Datasets for Training
- Pull from hundreds of common datasets available on Hugging Face
- Provide your own dataset using drag and drop
- 🔢 Calculate Embeddings
- 💁 Full REST API (a sketch of calling it appears after this list)
- 🌩 Run in the Cloud
- You can run the user interface on your desktop/laptop while the engine runs on a remote or cloud machine
- Or you can run everything locally on a single machine
- 🔀 Convert Models Across Platforms
- Convert from/to Hugging Face, MLX, and GGUF formats
- 🔌 Plugin Support
- Easily pull from a library of existing plugins
- Write your own plugins to extend functionality
- 🧑‍💻 Embedded Monaco Code Editor
- Edit plugins and view what's happening behind the scenes
- 📝 Prompt Editing
- Easily edit System Messages or Prompt Templates
- 📜 Inference Logs
- While doing inference or RAG, view a log of the raw queries sent to the LLM
You can do all of the above through a simple cross-platform GUI.
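The REST API mentioned in the list above can also be explored from the command line. The sketch below is hypothetical: the port (8338) and the FastAPI-style /docs path are assumptions about a default local install and are not specified in this README, so check the documentation for the actual host, port, and endpoints.

```bash
# Hypothetical sketch only: the port and paths below are assumptions, not
# confirmed by this README. Point your client at wherever your local engine
# reports it is listening, and use the interactive docs to discover endpoints.
API="http://localhost:8338"   # assumed default local address; adjust as needed
curl "$API/docs"              # interactive API documentation page (if exposed)
curl "$API/openapi.json"      # machine-readable schema of the endpoints (if exposed)
```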
Click here to download Transformer Lab.
Read this page to learn how to install and use the app.
To build the app yourself, clone this repository and follow the steps below.
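A minimal sketch of the first step, assuming the repository URL given in the citation at the end of this README:

```bash
# Clone the app source (URL assumed from the citation below) and enter the checkout.
git clone https://github.com/transformerlab/transformerlab-app
cd transformerlab-app
```

Then, from inside the checkout, install the dependencies and start the app: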
```bash
npm install
npm start
```
To package apps for the local platform:

```bash
npm run package
```
Distributed under the AGPL V3 License. See LICENSE.txt for more information.
If you found Transformer Lab useful in your research or applications, please cite using the following BibTeX:
```bibtex
@software{transformerlab,
  author = {Asaria, Ali},
  title = {Transformer Lab: Experiment with Large Language Models},
  month = {December},
  year = 2023,
  url = {https://github.com/transformerlab/transformerlab-app}
}
```
- @aliasaria - Ali Asaria
- @dadmobile - Tony Salomone