Releases: crjaensch/PromptoLab
Fourth Public Release
Introducing PromptoLab—your new ally in prompt optimization! Designed to empower users in crafting and refining AI interactions, PromptoLab has officially reached MVP status with its exciting version 0.4 release.
The main new feature of version 0.4 is support for both the llm command-line tool and the LiteLLM library as LLM provider APIs. The latter also supports running locally installed LLMs via Ollama.
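As a rough illustration of what the LiteLLM path enables, the sketch below sends a prompt to a locally running Ollama model through LiteLLM's standard `completion` call. The model name and parameter values are placeholders for this example, not PromptoLab's actual defaults.

```python
# Minimal sketch: sending a prompt through LiteLLM to a local Ollama model.
# The model name and parameter values are illustrative placeholders,
# not PromptoLab's actual defaults.
from litellm import completion

response = completion(
    model="ollama/llama3",  # any model pulled into a local Ollama installation
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the benefits of a prompt catalog."},
    ],
    temperature=0.7,   # one of the typical LLM control parameters
    max_tokens=512,
)
print(response.choices[0].message.content)
```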
With PromptoLab, navigating the complexities of prompt design has never been more intuitive or exciting. Ready to optimize your AI? Dive into PromptoLab 0.4 today!
v0.3.1
Third Public Release - including a .dmg installer for macOS
Introducing PromptoLab—your new ally in prompt optimization! Designed to empower users in crafting and refining AI interactions, PromptoLab has officially reached MVP status with its exciting version 0.3 release.
Key Features of PromptoLab 0.3
- Prompts Catalog: Seamlessly define and organize prompts using three distinct prompt categories. This centralized hub ensures your prompts are always accessible and well-structured for easy reuse.
- LLM Playground: Dive into experimentation with two dynamic options. Choose to run a selected prompt from your catalog or request improvements for a given prompt and then save the improved prompt in the catalog. Customize your experience further by selecting your preferred LLM model and tweaking three critical LLM control parameters. Support for {{variable}} syntax in prompts enables quick testing of prompt variations through an interactive variables table (see the sketch after this list).
- Prompts Test Set Manager: Simplify testing for complex system prompts in generative AI applications. Define and manage test cases to ensure your system prompts guide LLM responses effectively across various user prompts.
- Evaluation Playground: Assess the impact of prompt adjustments with ease. This powerful tool helps you evaluate whether modifications to a system prompt enhance or hinder LLM responses across diverse user scenarios, giving you the confidence to optimize with precision. Also, create an evaluation results HTML report to easily recall and review multiple evaluation runs.
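To make the {{variable}} syntax concrete, here is a small sketch of how a prompt template can be filled from a table of variable values. This is illustrative only and not PromptoLab's internal implementation; the function name and example values are made up for this snippet.

```python
import re

# Sketch of {{variable}} substitution for quick prompt variations.
# Illustrative only; PromptoLab's own implementation may differ.
def fill_prompt(template: str, variables: dict[str, str]) -> str:
    """Replace each {{name}} placeholder with its value from the variables table."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: variables.get(m.group(1), m.group(0)),  # keep unknown placeholders intact
        template,
    )

template = "Translate the following {{language}} text into English: {{text}}"
print(fill_prompt(template, {"language": "German", "text": "Guten Morgen!"}))
```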
File storage for the prompts catalog and all test sets has been moved to a subdirectory (called ".promptolab") of the user's home directory. This simplifies sharing prompts and test sets, especially since the introduction of the brand new PromptoLab-Data repo.
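For example, the new storage location can be resolved in a platform-independent way as shown below; the ".promptolab" directory name comes from these release notes, while the file layout inside it is not specified here.

```python
from pathlib import Path

# The prompts catalog and test sets now live under ~/.promptolab
# (the internal file layout within that directory is not shown here).
storage_dir = Path.home() / ".promptolab"
storage_dir.mkdir(exist_ok=True)
print(f"PromptoLab data directory: {storage_dir}")
```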
With PromptoLab, navigating the complexities of prompt design has never been more intuitive or exciting. Ready to optimize your AI? Dive into PromptoLab 0.3 today!
v0.2.1
Mostly a bugfix release that corrects some issues with installing PromptoLab on Windows.
Full Changelog: v0.2.0...v0.2.1
Second public release of PromptoLab
Introducing PromptoLab—your new ally in prompt optimization! Designed to empower users in crafting and refining AI interactions, PromptoLab has officially reached MVP status with its exciting version 0.2 release.
Key Features of PromptoLab 0.2
- Prompts Catalog: Seamlessly define and organize prompts using three distinct prompt categories. This centralized hub ensures your prompts are always accessible and well-structured for easy reuse.
- LLM Playground: Dive into experimentation with two dynamic options. Choose to run a selected prompt from your catalog or request improvements for a given prompt and then save the improved prompt in the catalog. Customize your experience further by selecting your preferred LLM model and tweaking three critical LLM control parameters.
- Prompts Test Set Manager: Simplify testing for complex system prompts in generative AI applications. Define and manage test cases to ensure your system prompts guide LLM responses effectively across various user prompts.
- Evaluation Playground: Assess the impact of prompt adjustments with ease. This powerful tool helps you evaluate whether modifications to a system prompt enhance or hinder LLM responses across diverse user scenarios, giving you the confidence to optimize with precision. Also, create an evaluation results HTML report to easily recall and review multiple evaluation runs.
With PromptoLab, navigating the complexities of prompt design has never been more intuitive or exciting. Ready to optimize your AI? Dive into PromptoLab 0.2 today!
First public release
Introducing PromptoLab—your new ally in prompt optimization! Designed to empower users in crafting and refining AI interactions, PromptoLab has officially reached MVP status with its exciting version 0.1 release.
Key Features of PromptoLab 0.1
- Prompts Catalog: Seamlessly define and organize prompts using three distinct prompt categories. This centralized hub ensures your prompts are always accessible and well-structured for easy reuse.
- LLM Playground: Dive into experimentation with two dynamic options. Choose to run a selected prompt from your catalog or request improvements for a given prompt. Customize your experience further by selecting your preferred LLM model and tweaking three critical LLM control parameters.
- Prompts Test Set Manager: Simplify testing for complex system prompts in generative AI applications. Define and manage test cases to ensure your system prompts guide LLM responses effectively across various user prompts.
- Evaluation Playground: Assess the impact of prompt adjustments with ease. This powerful tool helps you evaluate whether modifications to a system prompt enhance or hinder LLM responses across diverse user scenarios, giving you the confidence to optimize with precision.
With PromptoLab, navigating the complexities of prompt design has never been more intuitive or exciting. Ready to optimize your AI? Dive into PromptoLab 0.1 today!