Releases: codeofdusk/gptcmd

v2.0.1

Welcome to the second 29 November 2024 release of Gptcmd.

Bug fixes

  • Gptcmd now once again correctly handles unquoted file paths on Windows, thanks @Grahamwp.

v2.0.0

Welcome to the 29 November 2024 release of Gptcmd! This is a very substantial release that introduces multi-provider and multi-account support, adds the ability to attach images to messages for use with vision models, implements a new configuration system and message editor, and enhances cost estimation, streamed responses, and the general command-line experience.

Important notes

Python 3.7 deprecation

Python 3.7 reached end-of-life on 27 June 2023. While Gptcmd currently maintains best-effort compatibility with Python 3.7, this support is deprecated and will be removed in the next release. Updating to Python 3.8.6 or later is strongly recommended.

JSON file compatibility

JSON files created with older versions of Gptcmd can be loaded with this version, but any files saved with Gptcmd version 2.0.0 or later will be incompatible with previous versions due to changes in the JSON format. Users attempting to load the newer JSON format using a previous release will be instructed to update Gptcmd to version 2.0.0.

New features

  • An image command has been added, which allows images to be attached to messages by file path or URL (see the illustrative session after this list). Consult the readme for instructions on using Gptcmd with vision models.
  • Gptcmd can now be used with large language model providers other than OpenAI.
  • Cost estimation has been completely rewritten:
    • The new cost estimator supports nearly all OpenAI models.
    • Gptcmd now provides cost estimates for streamed responses that complete successfully.
    • Incomplete cost estimates (estimates for sessions where not every response has a cost estimate available) can optionally be enabled.
    • Gptcmd now takes the discount on cached prompt tokens into account when calculating estimated OpenAI costs.
  • Gptcmd can now display prompt and sampled token usage on streamed responses.
  • Gptcmd now has a configuration system for setting application options and specifying credentials for large language model provider accounts. Consult the readme for more information about the configuration format.
    • An account command has been added to switch between configured large language model provider accounts.
  • The flip command has been replaced with a move command that allows for arbitrary message reordering.
  • The slice command has been replaced with a copy command that appends copies of a message range to a specified thread.
  • Gptcmd now supports the use of an external text editor for some operations:
    • With no arguments, the user, assistant, system, and say commands now open an external editor for message composition.
    • An edit command has been added, which opens the selected message in an external editor so its content can be modified.
  • Command feedback across the application has been significantly improved.
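
As a purely illustrative sketch of how a few of these new commands fit together (the file name, URL, and account name below are invented, and the exact argument syntax should be checked against the readme):

    say                                   (compose a user message in the external editor, then send it)
    image cat.png                         (attach a local image file to a message)
    image https://example.com/photo.jpg   (attach an image by URL)
    account work                          (switch to the configured account named "work")
    edit                                  (open the selected message in the external editor)

The say, image, account, and edit command names come from these release notes; the arguments shown are assumptions.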

Changes

  • Gptcmd now displays token usage by request, not by session.
  • The n API parameter, which controls the number of responses generated by OpenAI models, is no longer supported.
  • The default temperature setting of 0.6 has been removed, so no temperature value is sent with API requests unless one is explicitly set. The effective default temperature for OpenAI requests is therefore now 1.
  • Gptcmd now works with API parameters and the selected model on a per-account (not per-thread) basis and no longer saves these parameters to JSON files.
  • By default, Gptcmd now streams responses when possible.
  • Gptcmd now uses gpt-4o by default in OpenAI sessions.
  • The retry command now deletes from the end of the thread up to the last assistant message, rather than the entire span of messages after the last user message, before resending. In conversations of alternating user and assistant messages, this change has no effect; however, it greatly simplifies the use of retry with models that allow the generated assistant response to be constrained with a custom prefix, as sketched below.
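
As a hedged illustration of the new retry behaviour (it assumes the prefix and the generated continuation are stored as separate assistant messages, which these notes do not spell out):

    message 1 (user):       Summarise the report as JSON.
    message 2 (assistant):  {              (a hand-written prefix constraining the response)
    message 3 (assistant):  "title": ...   (the model's generated continuation)

Running retry here now deletes only message 3 before resending, so the prefix in message 2 is preserved; previously, messages 2 and 3 would both have been deleted because both follow the last user message.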

Bug fixes

  • Gptcmd is now much more stable when streaming responses.
  • Gptcmd's command parsing has been improved, including better handling of quoted file paths and more predictable behaviour with message ranges containing a single negative index.
  • General improvements to stability and performance have been introduced as part of a larger refactoring effort.

v1.2.0

Welcome to the 22 July 2024 release of Gptcmd!

New features

  • gpt-4-turbo is now used by default if available.

Bug fixes

  • Fixed role validation in the name command (thanks @Grahamwp).

v1.1.0

Welcome to the 19 February 2024 release of Gptcmd!

New features

  • In place of "OK", Gptcmd now reports more descriptive messages to confirm the success of various actions across the application.
  • A new command-line flag (-m or --model) has been added to switch to a specific OpenAI model on launch. For example, gptcmd -m gpt-4-turbo-preview will use the new GPT-4 model.
  • Guidance on API parameter usage in the readme has been updated, especially concerning limiting output length when running GPT at high temperatures.

Bug fixes

  • Fixed a crash in message thread deserialization.
  • When loading a thread from disk with non-default API parameters, Gptcmd no longer warns of unsaved changes when none have been made.
  • Attempting to run retry in a thread containing only assistant messages no longer causes Gptcmd to crash.

v1.0.3

Welcome to the 7 November 2023 release of Gptcmd!

This release makes Gptcmd compatible with the v1 OpenAI SDK, which will enable new functionality in future updates.

v1.0.2

Welcome to the 28 October 2023 release of Gptcmd!

Bug fixes

  • Improved validation of message roles (user/assistant/system) across the application. Thanks @RobGallo for reporting!

v1.0.1

Welcome to the 13 August 2023 release of Gptcmd!

Bug fixes

  • Cost estimation on the latest version of gpt-4 (gpt-4-0613) is now enabled.

v1.0.0

Welcome to the initial release of Gptcmd! You can get it on PyPI. Enjoy!