Changelog

26 June 2023

  • Add this changelog :)
  • Add sha256 hashes on release so you can verify the binaries
  • All binaries are automatically generated with Github actions
  • Add signal handling for SIGHUP (macOS, Linux) and CTRL_CLOSE_EVENT (Windows) to fix issue #16
  • This allows you to run chat as a subprocess. The chat subprocess now quits properly if the parent app is closed.
  • Version information
  • Fix segfault on ./help
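The hash check mentioned above can be done with sha256sum; the filenames below are placeholders, not the actual release asset names:

```shell
# Placeholder files standing in for a downloaded release binary and its
# published .sha256 file; substitute the real asset names from the release page.
printf 'binary contents\n' > chat
sha256sum chat > chat.sha256

# Verify: prints "chat: OK" and exits 0 when the hash matches.
sha256sum -c chat.sha256
```

On macOS, `shasum -a 256 -c` performs the equivalent check.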

22 June 2023

  • Pull request #18 from @154pinkchairs merged. Thanks. :)
  • The pull request has the two fixes below:
  • Properly handle file paths that include tildes 18e9f36
  • Handle buffer allocation errors 6800dfb
  • Better debug mode compilation. May fix issue #9
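The tilde fix itself lives in the project's sources (commit 18e9f36); as a rough sketch of the behavior, a leading ~ in a user-supplied path now resolves against the home directory, which in shell terms looks like:

```shell
# A quoted argument reaches the program with a literal, unexpanded tilde.
path='~/saves'

# Expand a leading ~ to $HOME, roughly what the program now does internally.
case "$path" in
  "~"*) expanded="$HOME${path#\~}" ;;
  *)    expanded="$path" ;;
esac
echo "$expanded"
```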

16 June 2023

  • Add --save_dir option so you can change the save directory location
  • Default location is ./saves in the same directory as the chat binary
  • See issue #13 for more details
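In shell terms, the default save location resolves relative to the binary's own directory; the paths below are placeholders:

```shell
# Placeholder path for the chat binary.
chat_binary="./chat"

# Default per this entry: a "saves" folder next to the binary.
default_save_dir="$(dirname "$chat_binary")/saves"
echo "$default_save_dir"    # → ./saves

# Per this entry, the location can be overridden, e.g.:
#   ./chat --save_dir /path/to/saves
```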

15 June 2023

  • Fixes for old macOS.
  • Use -DOLD_MACOS=ON option when compiling with CMake.
  • Tested to compile on High Sierra with Xcode 10

14 June 2023

  • You can name saves with ./save NAME and ./load NAME
  • You can toggle saving and loading off with --no-saves flag

13 June 2023

  • Save/load state with ./save and ./load
  • Reset context with ./reset, help with ./help
  • Makes a ./saves folder
  • Note that a single save can take up to 2 GB
  • You can wrap the AI response with tokens using --b_token and --e_token
  • See issue #12 for more details
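Putting the commands from this entry and the one above together, a session might look like the following (model responses elided; the save name is illustrative):

```
./save mychat     saves state to the ./saves folder (a single save can reach ~2 GB)
./reset           clears the current context
./load mychat     restores the saved state
./help            lists the available commands
```

The --b_token and --e_token flags from this entry wrap the AI's responses in user-chosen delimiter strings.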

5 June 2023

  • Fix specifying log-file names via JSON. Fixes issue #11

4 June 2023

  • Fix said ability to reset context... :)

3 June 2023

  • Ability to reset context

30 May 2023

  • Save and load chat logs
  • Use --save_log and --load_log
  • AVX512 option for compilation -DAVX512=ON

17 May 2023

  • Update gpt4all backend to v0.1.1 61a963a
  • Full Windows Visual Studio compatibility. Finally fixes issue #1
  • Builds from source on aarch64 Linux. Fixes issue #3
  • Full MPT support. Fixes issue #4

v0.1.9

16 May 2023

  • Code cleaning and reordering
  • llmodel_create_model() function

v0.1.8

13 May 2023

v0.1.7

12 May 2023

v0.1.6

4 May 2023

  • Parse parameters from json files
  • Use -j FNAME or --load_json FNAME
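As a sketch, a parameter file might look like the following; the keys shown are hypothetical stand-ins for the program's option names and are not confirmed against the actual parser:

```json
{
  "model": "ggml-model.bin",
  "threads": 4
}
```

Per this entry, such a file is loaded with ./chat -j params.json or ./chat --load_json params.json.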

v0.1.5

3 May 2023

  • MinGW compilation on Windows

v0.1.4

1 May 2023

  • v0.1.4 had no tags
  • It was part of cmake-release.yml rewrite to enable MinGW e7e1ebf

v0.1.3

1 May 2023

  • Add loading of prompt template files
  • Use --load_template for loading
  • See prompt_template_sample.txt for a sample

v0.1.2

30 April 2023

  • Automatic memory handling for the model

v0.1.1

29 April 2023

  • Windows compilation fixes

v0.1.0

29 April 2023