Releases: parakeet-nest/parakeet

v0.2.3 🥧 [pie]

06 Dec 09:27

Update of the Extism dependency.

v0.2.2 🧁 [cupcake]

29 Oct 06:16

What's new in v0.2.2?

Flock Agents

Inspired by: Swarm by OpenAI

Flock is a Parakeet package for creating and managing AI agents using the Ollama backend. It provides a simple way to create conversational agents, orchestrate interactions between them, and implement function calling capabilities.

📝 Documentation

Example
package main

// Runnable version of the snippet; the import paths follow the
// Parakeet module layout.
import (
    "fmt"

    "github.com/parakeet-nest/parakeet/enums/option"
    "github.com/parakeet-nest/parakeet/flock"
    "github.com/parakeet-nest/parakeet/llm"
)

func main() {
    agent := flock.Agent{
        Name:      "Bob",
        Model:     "qwen2.5:3b",
        OllamaUrl: "http://localhost:11434",
        Options: llm.SetOptions(map[string]interface{}{
            option.Temperature: 0.7,
            option.TopK:        40,
            option.TopP:        0.9,
        }),
    }

    // Setting static instructions
    agent.SetInstructions("Help the user with their queries.")

    // Setting dynamic instructions with context
    // (this call replaces the static instructions above)
    agent.SetInstructions(func(contextVars map[string]interface{}) string {
        userName := contextVars["userName"].(string)
        return fmt.Sprintf("Help %s with their queries.", userName)
    })

    orchestrator := flock.Orchestrator{}

    response, err := orchestrator.Run(
        agent,
        []llm.Message{
            {Role: "user", Content: "Hello, what's the best pizza?"},
        },
        map[string]interface{}{"userName": "Sam"},
    )
    if err != nil {
        fmt.Println("Error:", err)
        return
    }

    fmt.Println(response.GetLastMessage().Content)
}

v0.2.1 🧇 [waffle]

13 Oct 08:43

What's new in v0.2.1?

Contextual Retrieval

Inspired by: Introducing Contextual Retrieval

2 new methods are available in the content package:

  • CreateChunkContext
  • CreateChunkContextWithPromptTemplate

CreateChunkContext generates a succinct context for a given chunk within the whole document content.
This context is intended to improve search retrieval of the chunk.

CreateChunkContextWithPromptTemplate generates a contextual response based on a given prompt template and document content.
It interpolates the template with the provided document and chunk content, then uses an LLM to generate a response.
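For illustration, here is a hypothetical sketch of calling CreateChunkContext. The exact parameter list is not shown in these notes, so the arguments (Ollama URL, model, whole document, chunk) are assumptions:

// Hypothetical call — parameter order and types are assumed,
// not taken from the actual content package API.
chunkContext, err := content.CreateChunkContext(
    "http://localhost:11434", // Ollama URL
    "qwen2.5:3b",             // chat model
    documentContent,          // the whole document
    chunk,                    // the chunk to situate within the document
)
if err == nil {
    // Prepend the generated context to the chunk before embedding it,
    // to improve retrieval.
    enrichedChunk := chunkContext + "\n" + chunk
    fmt.Println(enrichedChunk)
}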

UI Helpers

If you use Parakeet to create CLI applications, you can use the ui package to create a (very) simple UI.

2 new methods are available in the ui package:

  • Input
  • Println

Input displays a prompt with the specified color and waits for user input.

Println prints the provided strings with the specified color using the lipgloss styling library.
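A hypothetical usage sketch — the parameter order (color first, then text) and the return values of Input are assumptions, not taken from the package's documentation:

// Hypothetical sketch: parameter order and color format are assumed.
userInput, err := ui.Input("#00ff00", "How can I help you? > ")
if err == nil {
    ui.Println("#ff00ff", "You said: ", userInput)
}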

CLI Helpers

8 new methods are available in the cli package:

  • Settings parses command-line arguments and flags.
  • FlagValue retrieves the value of a flag by its name from a slice of Flag structs.
  • HasArg checks if an argument with the specified name exists in the provided slice of arguments.
  • HasFlag checks if a flag with the specified name exists in the provided slice of flags.
  • ArgsTail extracts the names from a slice of Arg structs and returns them as a slice of strings.
  • FlagsTail extracts the names from a slice of Flag structs and returns them as a slice of strings.
  • FlagsWithNamesTail returns, for a slice of Flag structs, a slice of strings where each entry is a "name=value" pair.
  • HasSubsequence checks if the given subsequence of strings (subSeq) is present in the tail of the provided arguments (args).

Example:

// default values
ollamaUrl := "http://localhost:11434"
chatModel := "llama3.1:8b"
embeddingsModel := "bge-m3:latest"

args, flags := cli.Settings()

if cli.HasFlag("url", flags) {
    ollamaUrl = cli.FlagValue("url", flags)
}

if cli.HasFlag("chat-model", flags) {
    chatModel = cli.FlagValue("chat-model", flags)
}

if cli.HasFlag("embeddings-model", flags) {
    embeddingsModel = cli.FlagValue("embeddings-model", flags)
}

// ArgsTail returns the argument names; guard against a missing
// command before indexing into it.
cmd := cli.ArgsTail(args)
if len(cmd) == 0 {
    fmt.Println("No command provided")
} else {
    switch cmd[0] {
    case "create-embeddings":
        fmt.Println(embeddingsModel)
    case "chat":
        fmt.Println(chatModel)
    default:
        fmt.Println("Unknown command:", cmd[0])
    }
}

New samples

  • 52-constraints: Preventing an LLM from talking about certain things
  • 53-constraints: Preventing an LLM from talking about certain things
  • 54-constraints-webapp: Preventing an LLM from talking about certain things
  • 55-create-npc: Create an NPC with nemotron-mini and chat with it
  • 56-jean-luc-picard: Chat with Jean-Luc Picard
  • 57-jean-luc-picard-rag: Chat with Jean-Luc Picard + RAG
  • 58-michael-burnham: Chat with Michael Burnham
  • 59-jean-luc-picard-contextual-retrieval: Chat with Jean-Luc Picard + Contextual Retrieval
  • 60-safety-models: Safety Models fine-tuned for content safety classification of LLM inputs and responses

v0.2.0 🍕 [pizza]

15 Sep 12:37

What's new in v0.2.0?

New way to set the options

Problem:
The omitempty tag prevents a field from being serialised if its value is the zero value for the field's type (e.g., 0.0 for float64).

That means that when Temperature equals 0.0, the field is not serialised, and Ollama falls back to its default Temperature value (0.8).

The same problem occurs for every field whose value equals 0 or 0.0.
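
To make the issue concrete, here is a minimal, runnable sketch (the single-field Options struct is illustrative, not Parakeet's full type):

package main

import (
    "encoding/json"
    "fmt"
)

// Illustrative single-field struct; Parakeet's real Options type
// has many more fields.
type Options struct {
    Temperature float64 `json:"temperature,omitempty"`
}

func main() {
    b, _ := json.Marshal(Options{Temperature: 0.0})
    // Prints {} — the zero value is dropped, so Ollama would
    // silently fall back to its default temperature (0.8).
    fmt.Println(string(b))
}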

Solution(s):

Set all the fields:

options := llm.Options{
    NumPredict:       -1,
    NumKeep:          4,
    Temperature:      0.8,
    TopK:             40,
    TopP:             0.9,
    TFSZ:             1.0,
    TypicalP:         1.0,
    RepeatLastN:      64,
    RepeatPenalty:    1.1,
    PresencePenalty:  0.0,
    FrequencyPenalty: 0.0,
    Mirostat:         0,
    MirostatTau:      5.0,
    MirostatEta:      0.1,
    PenalizeNewline:  true,
    Seed:             -1,
}
Default Options + overriding:

options := llm.DefaultOptions()
// override the default value
options.Temperature = 0.5

Use the SetOptions helper:

Define only the fields you want to override:

options := llm.SetOptions(map[string]interface{}{
  "Temperature": 0.5,
})

The SetOptions helper will set the default values for the fields not defined in the map.

Or use the SetOptions helper with the option enums:

options := llm.SetOptions(map[string]interface{}{
  option.Temperature: 0.5,
  option.RepeatLastN: 2,
})

Note: with the options now serialised correctly (instead of silently falling back to the defaults), completion results should be more accurate.

New sample
  • 51-genai-webapp: GenAI web application demo

v0.1.9 🌭 [hot dog]

13 Sep 04:00

What's new in v0.1.9?

  • llm.GetModelsList(url string) (ModelList, int, error)
  • llm.GetModelsListWithToken(url, tokenHeaderName, tokenHeaderValue string) (ModelList, int, error)
  • llm.ShowModelInformationWithToken(url, model, tokenHeaderName, tokenHeaderValue string) (ModelInformation, int, error)
  • llm.PullModelWithToken(url, model, tokenHeaderName, tokenHeaderValue string) (PullResult, int, error)
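
A minimal usage sketch based on the signatures above; the URL and the token header are placeholders, and the int return value is presumably the HTTP status code:

// Placeholders: adjust the URL and the token header to your deployment.
models, statusCode, err := llm.GetModelsListWithToken(
    "https://ollama.example.com",
    "X-TOKEN",         // token header name (placeholder)
    "my-secret-token", // token header value (placeholder)
)
if err != nil {
    fmt.Println("error:", err, "HTTP status:", statusCode)
} else {
    fmt.Println(models) // inspect the returned ModelList
}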

v0.1.8 🍔 [hamburger]

09 Sep 05:12

What's new in v0.1.8?

  • ChatWithOpenAIStream now returns the complete answer as a string at the end of the stream: func ChatWithOpenAIStream(url string, query llm.OpenAIQuery, onChunk func(llm.OpenAIAnswer) error) (string, error) {}
  • Helper to create Embedding objects with the OpenAI API: CreateEmbeddingWithOpenAI(url string, query llm.OpenAIQuery4Embedding, id string) (llm.VectorRecord, error) {}
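
A minimal usage sketch; the fields of llm.OpenAIQuery and the package hosting these functions are not shown here, so the names below are assumptions:

// The field names on llm.OpenAIQuery are assumed for illustration.
query := llm.OpenAIQuery{
    Model: "gpt-4o-mini",
    Messages: []llm.Message{
        {Role: "user", Content: "Say hello in one sentence."},
    },
}

fullAnswer, err := ChatWithOpenAIStream(
    "https://api.openai.com/v1",
    query,
    func(chunk llm.OpenAIAnswer) error {
        fmt.Printf("%v", chunk) // handle each streamed chunk
        return nil
    },
)
if err == nil {
    fmt.Println("Full answer:", fullAnswer)
}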

v0.1.7 🥯 [bagel]

06 Sep 19:31

What's new in v0.1.7?

  • A website is available at https://parakeet-nest.github.io/parakeet/
  • (Ollama) Generate completion: llm.Query is replaced by llm.GenQuery and llm.Answer by llm.GenAnswer, plus a 🐛 bug fix
  • Add Suffix field to llm.GenQuery
  • OpenAI API Chat completion support (only tested with the gpt-4o-mini model):
    • func ChatWithOpenAI(url string, query llm.OpenAIQuery) (llm.OpenAIAnswer, error) {}
    • func ChatWithOpenAIStream(url string, query llm.OpenAIQuery, onChunk func(llm.OpenAIAnswer) error) error {}
    • Tools: planned.

Ollama provides experimental compatibility with parts of the OpenAI API. As it's experimental, I prefer to keep the completion methods of Ollama and OpenAI "separated."

  • New samples in the examples directory:
    • 44-chat-openai
    • 45-chat-stream-openai
    • 47-function-calling-xp: call several tools in the same prompt
    • 48-testing-models: test models with different prompts
      • yi-coder/01-completion: write an algorithm
      • yi-coder/02-insertion: find a problem in the code (and fix it)
      • yi-coder/03-qa: ask a question about the code
      • yi-coder/04-gitlab-ci: explain a CI/CD pipeline
      • mathstral/01-completion: solve a math problem

v0.1.6 🥨 [pretzel]

03 Sep 16:43

What's new in v0.1.6?

  • Move the cosine similarity function to the similarity package
  • Implement the Jaccard index calculation for set similarity (🚧 experimental; see the sketch at the end of this list)
  • Implement the Levenshtein distance calculation for text similarity (🚧 experimental)
  • Renaming of methods in the tools package:
    • tools.GenerateSystemInstructions() to tools.GenerateSystemToolsInstructions()
    • tools.GenerateContent() to tools.GenerateAvailableToolsContent()
    • tools.GenerateInstructions() to tools.GenerateUserToolsInstructions()
  • Improve function calling in the tools package
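
As a refresher, the Jaccard index of two sets is the size of their intersection divided by the size of their union. Here is a self-contained sketch over word sets — an illustration of the concept, not the similarity package's actual API (its function names aren't listed here):

package main

import "fmt"

// jaccard computes |A ∩ B| / |A ∪ B| over two word lists.
func jaccard(a, b []string) float64 {
    setA := map[string]bool{}
    for _, w := range a {
        setA[w] = true
    }
    inter, union := 0, len(setA)
    seenB := map[string]bool{}
    for _, w := range b {
        if seenB[w] {
            continue
        }
        seenB[w] = true
        if setA[w] {
            inter++ // in both sets
        } else {
            union++ // only in B
        }
    }
    if union == 0 {
        return 0
    }
    return float64(inter) / float64(union)
}

func main() {
    fmt.Println(jaccard(
        []string{"the", "best", "pizza"},
        []string{"the", "best", "waffle"},
    )) // 2 shared words / 4 distinct words = 0.5
}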

v0.1.5 🥖 [baguette]

31 Aug 17:31

What's new in v0.1.5?

  • tools.GenerateSystemInstructions() string generates a string containing the system content instructions for using "function calling". (✋ Use it only if the LLM does not implement function calling.)
  • content.SplitMarkdownByLevelSections(content string, level int) []string lets you choose the level of the sections you want to split on (see the sketch after this list)
  • content.ParseMarkdown(content string) []*Chunk chunks a markdown document. (🚧 experimental)
  • content.ParseMarkdownWithLineage(content string) []Chunk chunks a markdown document while maintaining semantic meaning and preserving the relationships between sections.
  • New types: QA, IO and Card (🚧 experimental; used to create prompts, contexts, datasets, ...)
  • Unit tests in progress
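
For instance, a short sketch using the documented signature of content.SplitMarkdownByLevelSections (the sample document is made up):

// Split a markdown document on its level-2 sections.
markdown := `# Title
## Section 1
Some text.
## Section 2
More text.`

chunks := content.SplitMarkdownByLevelSections(markdown, 2)
for i, chunk := range chunks {
    fmt.Println("chunk", i, ":", chunk)
}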

v0.1.4 🥐 [croissant]

24 Aug 11:43

What's new in v0.1.4?

New split methods (to create document chunks) are available in the content package:

  • content.SplitMarkdownBySections()
  • content.SplitAsciiDocBySections()
  • content.SplitHTMLBySections()