ai
The simplest way to run LLaMA on your local machine
Build and share delightful machine learning apps, all in Python. 🌟 Star to support our work!
Drop in a screenshot and convert it to clean code (HTML/Tailwind/React/Vue)
The most powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface.
🕵️‍♂️ Library designed for developers eager to explore the potential of Large Language Models (LLMs) and other generative AI through a clean, effective, and Go-idiomatic approach.
Distribute and run LLMs with a single file.
A playground for creative exploration that uses SDXL Turbo.
The subtitles and translations are generated in real time and displayed as pop-ups.
Foundational Models for State-of-the-Art Speech and Text Translation
Enhanced ChatGPT Clone: Features Agents, Anthropic, AWS, OpenAI, Assistants API, Azure, Groq, o1, GPT-4o, Mistral, OpenRouter, Vertex AI, Gemini, Artifacts, AI model switching, message search, Code…
Command-line artificial intelligence - multi-vendor generation in your terminal
Fully private LLM chatbot that runs entirely in the browser, with no server needed. Supports Mistral and Llama 3.
Orchestrate AI models, containers, microservices, and more. Turn your servers into a powerful development environment.
User-friendly AI Interface (Supports Ollama, OpenAI API, ...)
Repository of model demos using TT-Buda
An open-source implementation of NotebookLM with more flexibility and features
A holistic way of understanding how Llama and its components run in practice, with code and detailed documentation.