Releases: log10-io/log10

0.16.0

05 Dec 17:39
13eccc4

What's Changed

Dependency upgrade

Full Changelog: 0.15.0...0.16.0

0.15.0

23 Oct 19:01
0dc731a

What's Changed

Feature

  • add Completions.mock_chat_completions to insert logs into Log10 by @wenzhe-log10 in #328
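
A rough usage sketch for the new method follows. Only the method name Completions.mock_chat_completions is confirmed by this release note; the import path, argument names, and call shape below are assumptions, so check PR #328 for the actual signature.

# Hypothetical sketch: everything except the method name is an assumption
from log10.completions import Completions  # import path is an assumption

completions = Completions()
completions.mock_chat_completions(
    model="gpt-4o",                                   # assumed parameter
    messages=[{"role": "user", "content": "Hello"}],  # assumed parameter
    response="Hi there!",                             # assumed parameter
)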

Maintenance

Upgrade and test dependencies

Full Changelog: 0.14.1...0.15.0

0.14.1

09 Oct 00:13
50759a9

What's Changed

Pytest Log10 Evaluations
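
For context, the report below comes from a deliberately failing test; the entire test_example.py behind it (reconstructed from the traceback) is just:

def test_example():
    assert 0 == 1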

Example output; note the Log10 Evaluation URL at the end:

========================================= test session starts ==========================================
platform darwin -- Python 3.12.2, pytest-8.3.3, pluggy-1.5.0
rootdir: test_log10_eval
plugins: metadata-3.1.1, anyio-4.6.0, log10-io-0.14.0
collected 1 item

test_example.py F                                                                                [100%]

=============================================== FAILURES ===============================================
_____________________________________________ test_example _____________________________________________

    def test_example():
>       assert 0 == 1
E       assert 0 == 1

test_example.py:4: AssertionError
========================================== Log10 Eval Report ===========================================
Log10 Eval is enabled.
Test run: tests-06aedd36-69af-45bb-918a-b1cab4e79298
Log10 Evaluation URL: log10.io/app/test-org/evaluations?id=06aedd36-69af-45bb-918a-b1cab4e79298
Report saved to: test_log10_eval/.pytest_log10_eval_reports/tests-06aedd36-69af-45bb-918a-b1cab4e79298.report.json
Report successfully uploaded to Log10
======================================= short test summary info ========================================
FAILED test_example.py::test_example - assert 0 == 1
========================================== 1 failed in 27.13s ==========================================

Full Changelog: 0.14.0...0.14.1

0.14.0

07 Oct 17:37
da6e950

What's Changed

Pytest plugin Log10 Managed Evaluation

A pytest plugin for managed evaluations on the Log10 platform.

Installation

After configuring the Log10 environment variables, install the package:

pip install log10-io
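
For reference, the environment variables mentioned above are the standard ones from the Log10 docs (values are placeholders):

export LOG10_URL="https://log10.io"
export LOG10_TOKEN="<your-log10-token>"
export LOG10_ORG_ID="<your-org-id>"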

Usage

To enable Log10 managed evaluation reporting, pass the --log10 option or set log10 = true in your pytest configuration file. Once enabled, run your tests as you normally would:

pytest tests --log10

This will run your tests and upload the results to the Log10 platform for managed evaluation.
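
For the configuration-file route, a minimal ini-style example would look like the following (with pyproject.toml, the same option goes under [tool.pytest.ini_options]):

# pytest.ini
[pytest]
log10 = true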

Please refer to the doc for detailed options, such as running it locally.

PRs

@wenzhe-log10 in #316 and #318

Maintenance

Full Changelog: 0.13.0...0.14.0

0.14.0-rc2

04 Oct 16:38
a17610c
Pre-release

What's Changed

Full Changelog: 0.14.0-rc1...0.14.0-rc2

0.14.0-rc1

03 Oct 22:36
0cc363d
Pre-release

What's Changed

Full Changelog: 0.13.0...0.14.0rc1

0.13.0

02 Oct 20:54
e4e13bf

What's Changed

API change

Add log10_tags context manager and with_log10_tags decorator by @wenzhe-log10 in #299
This PR introduces:

  1. log10_session Enhancement - Improved handling of tags in nested log10_session calls. Tags will now be appended instead of overwritten.

Example:

from log10.load import log10_session

with log10_session(tags=["A"]):
    with log10_session(tags=["B"]):
        # New behavior: tags will be ["A", "B"]
        # Old behavior: tags were just ["B"]
        ...  # logged calls go here

Note: this only affects nested log10_session usage. Single log10_session calls are unchanged.

  2. Introducing the log10_tags context manager and with_log10_tags function decorator for adding tags (see the sketch below)
    • A new context manager for adding tags to logs without creating a new session ID.
    • Can be used independently or with log10_session.
    • Tags will be appended in nested case.

Example link
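
A minimal usage sketch of the two helpers, assuming they are importable from log10.load like log10_session and that they accept a list of tags (both assumptions, mirroring log10_session(tags=[...])):

from log10.load import log10_tags, with_log10_tags  # import path is an assumption

# Context manager: tags apply to logged calls inside the block,
# without creating a new session ID.
with log10_tags(["experiment-1"]):
    ...  # logged LLM calls here get the "experiment-1" tag

# Decorator: tags apply to logged calls made inside the decorated function.
@with_log10_tags(["summarize", "v2"])
def summarize(text: str) -> str:
    ...  # logged LLM calls here get both tags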

Misc

Full Changelog: 0.12.0...0.13.0

0.12.0

29 Aug 03:14
acef8b2

What's Changed

CLI

  • Add cli auto-prompt analyze (a prompt analyzer in the CLI) by @wenzhe-log10 in #288
    • Please check the CLI doc for detailed usage on input prompts/messages.
  • cli benchmark_models: support OpenAI fine-tuned models by @wenzhe-log10 in #287

OpenAI Compatibility

  • Generalize perplexity logic for streaming end detection across OpenAI compatible models by @kxtran in #286
    • Tested with Cerebras, Mistral, and Perplexity.

Maintenance

Full Changelog: 0.11.0...0.12.0

0.11.0

23 Aug 03:47
59c40e8

What's Changed

New

  • Add async streaming support for OpenAI-compatible models (e.g., Mistral and Perplexity) by @kxtran in #281
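
A hedged sketch of the kind of call this enables, assuming the log10(openai) module-patching pattern from the Log10 README also covers the async client; the Perplexity base URL, API key handling, and model name are illustrative only:

import asyncio

import openai
from log10.load import log10

log10(openai)  # patch the openai module so completions are logged to Log10


async def main() -> None:
    # Any OpenAI-compatible endpoint works the same way; Perplexity is one example.
    client = openai.AsyncOpenAI(
        base_url="https://api.perplexity.ai",
        api_key="<your-perplexity-api-key>",
    )
    stream = await client.chat.completions.create(
        model="sonar",  # illustrative model name
        messages=[{"role": "user", "content": "Say hello"}],
        stream=True,
    )
    async for chunk in stream:
        print(chunk.choices[0].delta.content or "", end="")


asyncio.run(main())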

Full Changelog: 0.10.11...0.11.0

0.10.11

22 Aug 02:27
974ce9f

What's Changed

CLI

Full Changelog: 0.10.10...0.10.11