
Releases: rudolfolah/chaincrafter

v0.2.3

13 Sep 13:48
  • Python package for chaincrafter updated to include the install_requires so that PyYAML is installed correctly when chaincrafter is a dependency
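For context, `install_requires` is how a `setup.py` declares runtime dependencies so that pip installs them alongside the package. A minimal hypothetical sketch (illustrative only, not chaincrafter's actual packaging file) looks like:

```python
# Hypothetical setup.py sketch -- illustrates install_requires only;
# the metadata here is a placeholder, not chaincrafter's real file.
from setuptools import setup, find_packages

setup(
    name="chaincrafter",
    packages=find_packages(),
    # Ensures PyYAML is installed automatically whenever chaincrafter
    # is pulled in as a dependency of another project.
    install_requires=["PyYAML"],
)
```

Without this declaration, importing chaincrafter from a project that never installed PyYAML itself would fail at import time.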

v0.2.1

11 Aug 19:42

What's Changed

  • Python: rename math.py example file to avoid conflict with math module by @rudolfolah in #10
  • Load chains and prompts from YAML using Catalogs by @rudolfolah in #11

Full Changelog: v0.2.0...v0.2.1

Catalogs

A catalog is a collection of chains and prompts stored in a YAML file.
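The release notes do not show the catalog schema, but a catalog file might be sketched along these lines (the keys and layout below are hypothetical, not chaincrafter's confirmed format):

```yaml
# Hypothetical catalog sketch -- field names are illustrative,
# not the confirmed chaincrafter YAML schema.
prompts:
  system: "You are a helpful assistant who responds to questions about the world"
  hello: "Hello, what is the capital of {country}? Answer only with the city name."
  followup: "{city} sounds like a nice place to visit. What is the population of {city}?"
chains:
  capital_lookup:
    - system
    - hello
    - followup
```

Keeping chains and prompts in YAML lets you edit and version prompt text without touching Python code.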


v0.2.0

08 Aug 13:20

What's Changed

Documentation

Python

Full Changelog: v0.1.0...v0.2.0

initial release: Python

07 Aug 14:16

The initial version of Chaincrafter has been released and published for Python. It currently supports OpenAI, with plans to support local LLMs such as gpt4all and llama.cpp.

OpenAI was the simplest backend for testing out the asynchronous support (via asyncio) and the experiments feature.

Async support

Async support lets you send multiple requests to the LLM concurrently:

Example Code
import asyncio

from chaincrafter import Chain, Prompt
from chaincrafter.models import OpenAiChat

# Shared model configuration, reused by every chain below
chat_model = OpenAiChat(
    temperature=0.9,
    model_name="gpt-3.5-turbo",
    presence_penalty=0.1,
    frequency_penalty=0.2,
)


def make_chain(country):
    system_prompt = Prompt("You are a helpful assistant who responds to questions about the world")
    followup_prompt = Prompt("{city} sounds like a nice place to visit. What is the population of {city}?")
    hello_prompt = Prompt(f"Hello, what is the capital of {country}? Answer only with the city name.")
    # Each (prompt, name) pair stores the model's response under that name,
    # so later prompts can interpolate it (e.g. {city} in followup_prompt)
    return Chain(
        system_prompt,
        (hello_prompt, "city"),
        (followup_prompt, "followup_response"),
    )


async def main():
    chain_france = make_chain("France")
    chain_china = make_chain("China")
    # Run both chains concurrently; gather returns results in the same order
    results = await asyncio.gather(
        chain_france.async_run(chat_model),
        chain_china.async_run(chat_model),
    )
    for messages in results:
        for message in messages:
            print(f"{message['role']}: {message['content']}")

asyncio.run(main())

Experiments support

Experiments allow you to test combinations of model parameters with the same prompt. You can use this to compare models and model parameters, and to track how results change over time.

Example Code
from chaincrafter import Chain, Prompt
from chaincrafter.experiments import OpenAiChatExperiment

system_prompt = Prompt("You are a helpful assistant who responds to questions about the world")
hello_prompt = Prompt("Hello, what is the capital of France? Answer only with the city name.")
followup_prompt = Prompt("{city} sounds like a nice place to visit. What is the population of {city}?")
chain = Chain(
    system_prompt,
    (hello_prompt, "city"),
    (followup_prompt, "followup_response"),
)
# Every combination of the listed parameter values is run:
# 2 models x 2 temperatures x 1 presence_penalty x 1 frequency_penalty = 4 runs
experiment = OpenAiChatExperiment(
    chain,
    model_name=["gpt-4", "gpt-3.5-turbo"],
    temperature=[0.7, 1.5],
    presence_penalty=[0.1],
    frequency_penalty=[0.2],
)
experiment.run()
print(experiment.results)
# CSV output
print(experiment.to_csv())
# JSON output
print(experiment.to_json())
# Pandas DataFrame output
print(experiment.to_pandas_df())
# Pandas DataFrame visualization
print(experiment.visualize())
