Releases · log10-io/log10
0.8.0
What's Changed
New
- [feature] add cli to rerun and compare a logged completion with other models by @wenzhe-log10 in #141

```
log10 completions benchmark_models --help
Usage: log10 completions benchmark_models [OPTIONS]

Options:
  --ids TEXT            Completion ID
  --tags TEXT           Filter completions by specific tags. Separate multiple tags with commas.
  --limit TEXT          Specify the maximum number of completions to retrieve.
  --offset TEXT         Set the starting point (offset) from where to begin fetching completions.
  --models TEXT         Comma separated list of models to compare
  --temperature FLOAT   Temperature
  --max_tokens INTEGER  Max tokens
  --top_p FLOAT         Top p
  --analyze_prompt      Run prompt analyzer on the messages.
  -f, --file TEXT       Specify the filename for the report in markdown format.
  --help                Show this message and exit.
```
examples:
- compare a completion, selected by id, with other models:

  ```
  log10 completions benchmark_models --ids 25572f3c-c2f1-45b0-9de8-d96be4c4e544 --models=gpt-3.5-turbo,mistral-small-latest,claude-3-haiku-20240307
  ```
- compare completions tagged `summ_test`, using 2 of them, with the model claude-3-haiku. Also call `--analyze_prompt` to get suggestions on the prompt, and save everything into a report.md file:

  ```
  log10 completions benchmark_models --tags summ_test --limit 2 --models=claude-3-haiku-20240307 --analyze_prompt -f report.md
  ```
- add load.log10(lamini) to support lamini sdk and add example by @wenzhe-log10 in #143
```python
import lamini

from log10.load import log10

log10(lamini)

llm = lamini.Lamini("meta-llama/Llama-2-7b-chat-hf")
response = llm.generate("What's 2 + 9 * 3?")
print(response)
```
- update make logging tests by @wenzhe-log10 in #139
Fixes
- avoid calling async callback in litellm.completion call by @wenzhe-log10 in #135
- fix cli import issue when magentic is not installed by @wenzhe-log10 in #140
- fix prompt analyzer _suggest by @wenzhe-log10 in #142
Full Changelog: 0.7.5...0.8.0
0.7.5
What's Changed
Performance optimization for async completions
- use httpx async client for openai async calls by @wenzhe-log10 in #137
- use uuid to generate completion id and session id by @wenzhe-log10 in #138
Full Changelog: 0.7.4...0.7.5
0.7.4
What's Changed
New LLM model support:
- support mistral python sdk with `load.log10(mistralai)` by @wenzhe-log10 in #133 (sketch below)
- add log10 callback for litellm completion and examples by @wenzhe-log10 in #132 (sketch below)
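A minimal sketch of the mistral logging path, assuming the 0.x mistralai client API (`MistralClient`, `ChatMessage`) and a `MISTRAL_API_KEY` in the environment; the model and prompt are illustrative:

```python
import mistralai
from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage

from log10.load import log10

log10(mistralai)  # patch the mistralai sdk so chat calls are logged

# MistralClient() reads MISTRAL_API_KEY from the environment
client = MistralClient()
response = client.chat(
    model="mistral-small-latest",
    messages=[ChatMessage(role="user", content="What's 2 + 9 * 3?")],
)
print(response.choices[0].message.content)
```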
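And a minimal sketch of the litellm callback, assuming the handler is exposed as `log10.litellm.Log10LitellmLogger` (the import path and class name are assumptions; see the examples in #132):

```python
import litellm

from log10.litellm import Log10LitellmLogger  # assumed import path

# register the log10 handler so litellm completions are logged
log10_handler = Log10LitellmLogger(tags=["litellm_demo"])  # tags are illustrative
litellm.callbacks = [log10_handler]

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello from litellm"}],
)
print(response.choices[0].message.content)
```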
Fix
- set feedback list task_id default to "" instead of None by @wenzhe-log10 in #131
Full Changelog: 0.7.3...0.7.4
0.7.3
What's Changed
- rename logging examples by @wenzhe-log10 in #126
- revert python requirements to 3.9+ by @wenzhe-log10 in #127
- Install the dependency for the autofeedback feature using `pip install log10-io[autofeedback_icl]`
Full Changelog: 0.7.2...0.7.3
0.7.2
What's Changed
Feature
- support claude vision and add examples by @wenzhe-log10 in #123
  Example: examples/logging/anthropic_messages_image.py (a sketch also follows the vertexai example below)
- add vertexai gemini support and refactor the log_row code by @wenzhe-log10 in #124
  Example: examples/logging/vertexai_gemini_chat.py
```python
import vertexai
from vertexai.preview.generative_models import GenerationConfig, GenerativeModel

from log10.load import log10

log10(vertexai)

# change these to your own project and location
project_id = "YOUR_PROJECT_ID"
location = "YOUR_LOCATION"
vertexai.init(project=project_id, location=location)

model = GenerativeModel("gemini-1.0-pro")
chat = model.start_chat()

prompt = "What are the top 5 largest constellations you can find in North America during March?"
generation_config = GenerationConfig(
    temperature=0.9,
    max_output_tokens=128,
)
response = chat.send_message(prompt, generation_config=generation_config)
print(response.text)
```
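For the claude vision path, a minimal sketch along the lines of the referenced example, assuming the standard anthropic messages API with a base64 image block and an `ANTHROPIC_API_KEY` in the environment (the file path, model, and prompt are illustrative):

```python
import base64

import anthropic

from log10.load import log10

log10(anthropic)  # patch the anthropic sdk so messages calls are logged

# read a local image and encode it as base64 (path is illustrative)
with open("image.jpg", "rb") as f:
    image_data = base64.b64encode(f.read()).decode("utf-8")

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
message = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=256,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {"type": "base64", "media_type": "image/jpeg", "data": image_data},
                },
                {"type": "text", "text": "Describe this image in one sentence."},
            ],
        }
    ],
)
print(message.content[0].text)
```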
Fixes
- Fix claude messages logging by @wenzhe-log10 in 9d396d7
Full Changelog: 0.7.1...0.7.2
0.7.1: Set kind field to completion in log_sync
0.7.0: CLI docs, Claude-3 and stream support
What's Changed
Features
- add cli doc by @wenzhe-log10 in #117
- add anthropic messages support and stream by @wenzhe-log10 in #118
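A minimal sketch of the streamed messages path, assuming the anthropic sdk's `messages.stream` helper and an `ANTHROPIC_API_KEY` in the environment (model and prompt are illustrative):

```python
import anthropic

from log10.load import log10

log10(anthropic)  # patch the anthropic sdk so streamed messages are logged

client = anthropic.Anthropic()
with client.messages.stream(
    model="claude-3-haiku-20240307",
    max_tokens=128,
    messages=[{"role": "user", "content": "Say hello in three languages."}],
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
```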
Fixes
- call flatten_messages when retrieving the completions for examples by @wenzhe-log10 in #119
Full Changelog: 0.6.7...0.7.0
0.6.7: CLI completions, feedback, and feedback-task; AutoFeedback ICL
What's Changed
Features
- CLI for completions, feedback, and feedback-task by @nqn and @wenzhe-log10 in #113 (example commands below)
  - `list`, `get`, and `download` your completions. You can filter by tag name, created date, etc.
  - `list`, `get`, and `download` feedback. You can filter by `task_id`.
  - `list` and `get` feedback tasks.
  - Run `log10 --help` and its subcommands to get detailed usage info.
- add autofeedback ICL and cli log10 feedback predict by @wenzhe-log10 in #115
- Leverage your current feedback and AI by using our AutoFeedback feature to generate feedback automatically. More info here.
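For instance, assuming the subcommands follow the pattern above (the exact flags here are assumptions; run `log10 --help` to confirm):

```
log10 completions list --tags summ_test
log10 feedback list --task_id <TASK_ID>
```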
Full Changelog: 0.6.6...0.6.7
0.6.6
What's Changed
Bug fix
- fix multi session tags and add an example by @wenzhe-log10 in #112
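A sketch of tagging two separate sessions, assuming a `log10_session` context manager in `log10.load` (the name is an assumption; see the example added in #112) and an `OPENAI_API_KEY` in the environment:

```python
import openai

from log10.load import log10, log10_session  # log10_session import is assumed

log10(openai)
client = openai.OpenAI()

# each session carries its own tags, so completions are grouped separately
with log10_session(tags=["session_a"]):
    client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "hi"}],
    )

with log10_session(tags=["session_b"]):
    client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "hello"}],
    )
```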
Full Changelog: 0.6.5...0.6.6
0.6.5
What's Changed
- Add openai.AsyncOpenAI and stream support by @wenzhe-log10 in #109 (sketch below)
- Add tools and functions support by @nqn in #110
- bump version 0.6.5 by @wenzhe-log10 in #111
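A minimal sketch of the async streaming path, assuming the openai v1 `AsyncOpenAI` client and an `OPENAI_API_KEY` in the environment (model and prompt are illustrative):

```python
import asyncio

import openai

from log10.load import log10

log10(openai)  # patch the openai sdk so async calls are logged


async def main():
    client = openai.AsyncOpenAI()
    stream = await client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Write a haiku about logging."}],
        stream=True,
    )
    async for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)


asyncio.run(main())
```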
Full Changelog: 0.6.4...0.6.5