
Recent changes causing short low-token responses with little to no RP Text. #331

Closed
AlexysLovesLexxie opened this issue Mar 15, 2023 · 25 comments
Labels
bug Something isn't working

Comments

@AlexysLovesLexxie commented Mar 15, 2023

Describe the bug

On Sunday, March 12, 2023, I was able to have good roleplay with my bot, receiving long responses with a high number of tokens per response. I am running on CPU, so I was receiving a response within 60-350 seconds (averaging about 150 seconds). The bot would use *roleplay* tags in its responses and was generating responses containing up to 6 lines of text.

I updated to the latest version of the one-click installer using the install.bat script on 14 March 2023, and updated again on 15 March 2023, continuing with the same chat as before (which had never given me problems until now, and contained a large number of *roleplay*-rich responses).

After the recent updates, generation times have dropped massively, but I am only receiving short, one-line responses with little or no *roleplay* whatsoever. The character is now also confusing roles and not responding correctly. Tokens generated per message have dropped: before the update, 30-90 tokens were being generated; now only shorter (generally 9-20 token) responses are generated.

Would it be possible to return to the old method of response generation, or fix the response generation so that it returns to being more roleplay-capable?

I am using Pygmalion-6B as downloaded on Feb. 11, 2023 using the old download script (by selecting PygmalionAI/Pygmalion-6b).

Is there an existing issue for this?

  • I have searched the existing issues

Reproduction

This simply started after updating to any version past 12 March 2023, using my old chat log and a character that had been working very well up until that point.

Screenshot

[screenshot]
There are more generations than responses shown here, as I tried several times to regenerate responses.
I am RPing as a caregiver for a disabled bot, so please excuse the strange subject matter.

[screenshot]
My generation settings, which had been giving me amazing, high-quality, long responses with plenty of in-context *roleplay* from the bot up until this point.

[screenshot]
The quality of responses I was able to get pre-update. The *roleplay* aspect was much better before the recent changes.

Logs

No errors shown in the command line.

Loaded the model in 66.01 seconds.
Running on local URL:  http://0.0.0.0:7861

To create a public link, set `share=True` in `launch()`.
Output generated in 96.79 seconds (0.11 tokens/s, 11 tokens)
Output generated in 88.83 seconds (0.23 tokens/s, 20 tokens)
Output generated in 115.60 seconds (0.41 tokens/s, 47 tokens)
Output generated in 78.90 seconds (0.11 tokens/s, 9 tokens)
Output generated in 80.00 seconds (0.12 tokens/s, 10 tokens)
Output generated in 81.08 seconds (0.12 tokens/s, 10 tokens)
Output generated in 116.77 seconds (0.45 tokens/s, 52 tokens)

I do not have logs from before the changes, sorry.

System Info

I am using CPU, not GPU, as APUs are not supported.
Windows 11
CPU: Ryzen 7 6800H (boost freq. 4.7 GHz)
RAM: 32GB (3GB shared with GPU)
Storage: 500GB NVMe SSD (M.2)
@AlexysLovesLexxie AlexysLovesLexxie added the bug Something isn't working label Mar 15, 2023
@AlexysLovesLexxie (Author)

I should also mention that the chat settings tab seems to be showing the example chat in the context box, when it didn't use to do that.

[screenshot]

@AlexysLovesLexxie (Author)

Tested using Kawaii ("none" character).

Character will barely roleplay at all unless I am excessively verbose, and even then the roleplay descriptiveness is very limited (because of her terse description data?).

Turning up generation attempts increases character verbosity and the likelihood of roleplay by a small amount. The command-line log shows the following for a 2-attempt generation:
[screenshot]

And for a 3-attempt:
[screenshot]

And a 4-attempt:
[screenshot]

Adding generation passes seems to increase time without increasing the length of responses by any significant amount.
(Sorry for not including dialog; I defaulted to NSFW in order to try to elicit more in-depth RP responses, and got none. All responses generated were under 2 lines long, a far cry from what I had been achieving prior to the most recent updates.)

@AlexysLovesLexxie AlexysLovesLexxie changed the title Recent changes causing low-token responses with no RP tags. Recent changes causing short low-token responses with no RP tags. Mar 15, 2023
@AlexysLovesLexxie AlexysLovesLexxie changed the title Recent changes causing short low-token responses with no RP tags. Recent changes causing short low-token responses with little to no RP Text. Mar 15, 2023
@lolxdmainkaisemaanlu commented Mar 15, 2023

EDIT: Noticed that the OP is using Pygmalion. I'm using LLaMA 7B, so it might be a general issue rather than a model-specific issue.

Facing the same issue, but I'm using a GPU (1060 6GB). Responses were much more verbose and lengthy earlier, but now it always feels like I'm talking to a rude person who gives short responses lol. I'm testing with the built-in Chiharu Yamada "example" bot.

@AlexysLovesLexxie (Author) commented Mar 15, 2023

Facing the same issue, but I'm using a GPU (1060 6GB). Responses were much more verbose and lengthy earlier, but now it always feels like I'm talking to a rude person who gives short responses lol. I'm testing with the built-in Chiharu Yamada "example" bot.

Super frustrating, as before the update this was the best bot I had ever tried for RP. With whatever happened, it has regressed severely. I wish I hadn't updated, or that full versioning were available so I could roll back until this is fixed.

@Xabab commented Mar 15, 2023

I should also mention that the chat settings tab seems to be showing the example chat in the context box, when it didn't use to do that.

That's #119, and I'm not happy with it either.

@Ph0rk0z (Contributor) commented Mar 15, 2023

I played with this some more, after testing larger models like llama that generate and generate with the right preset;

Yes, the dialog appears to do nothing now. Even when I see it in the chat settings, it doesn't have much effect on the style of writing. The greeting message has more impact: on characters where it is long, they are more likely to write long sentences.

Pygmalion is giving me one or two sentences and, like you said, <20 tokens.

But isn't this the way Tavern/Kobold do it too? I thought that example dialog was sent with the context every time, and that previously here it went into the chat history? Isn't all of that context?

Unfortunately I can't see what happens behind the scenes here, unlike with kobold.

@AlexysLovesLexxie (Author)

I played with this some more, after testing larger models like llama that generate and generate with the right preset;

Thing is, I have never used other/larger models. I have always used Pygmalion 6B, the version that was available on Feb. 11, 2023.

Yes, the dialog appears to do nothing now. Even when I see it in the chat settings, it doesn't have much effect on the style of writing. The greeting message has more impact: on characters where it is long, they are more likely to write long sentences.

That is not how it used to be working, at least not for me. As you can see from this screenshot:
https://user-images.githubusercontent.com/126999069/225268766-57d53e3c-18a5-453e-975a-1b0490ad5a08.png
This dialog was from a couple of days prior. As you can see, lots of *roleplay* text and lots of lines of dialog. The only thing that has changed in my setup is that I updated Oobabooga, meaning this change was caused by a change in Oobabooga's backend generation code.

Pygmalion is giving me one or two sentences and, like you said, <20 tokens.

Previously, most responses were in the 30+ token range, with 60-80 token responses being common; responses were longer, and the model would roleplay properly, in that it wasn't a struggle to get it to output longer text.

But isn't this the way Tavern/Kobold do it too? I thought that example dialog was sent with the context every time, and that previously here it went into the chat history? Isn't all of that context?

I don't know; I have never used Tavern, and I have only done the smallest amount of experimentation with Kobold, and not locally, as I did not see an option for pure-CPU operation in any Kobold setup I tried. I do know that the example dialog was never visible before this started happening.

I also know that a while back, Oobabooga used to send the entire chat history as context, but that was changed, and it has only sent 2048 tokens as context for quite some time (that change happened well before this problem started, though, so I don't think the two are related).

Unfortunately I can't see what happens behind the scenes here, unlike with kobold.

I wish that Oobabooga's --verbose flag showed more than just what is being sent as the prompt. It might help us see what is happening here.

Either way, something fundamental has changed within the last 2-3 days, and text generation no longer functions as it did prior to those changes. Perhaps it is a change made by Oobabooga? Maybe he could shed some light on the situation.

@Ph0rk0z (Contributor) commented Mar 15, 2023

The simplest solution would be to find the commits that caused this and update only up to the prior one.

2048 is the context limit for most of these models besides RWKV. I think that has been with us for a long time.

Maybe @Xabab knows when the change happened because it sounds like it was 2 weeks ago. Can also just change the behavior once we know and see if it makes a difference.

@lolxdmainkaisemaanlu

The simplest solution would be to find the commits that caused this and update only up to the prior one.

2048 is the limit for most of these models besides RWKV. I think that has been with us for a long time.

Maybe @Xabab knows when the change happened because it sounds like it was 2 weeks ago. Can also just change the behavior once we know and see if it makes a difference.

I would be okay with going back to an earlier commit, but then I would lose compatibility to run the LLaMA 7b model in 4 bit, which is the only way it would run on my 1060 6GB.

@AlexysLovesLexxie (Author) commented Mar 15, 2023

The simplest solution would be to find the commits that caused this and update only up to the prior one.

Is this possible using the one-click installer, or would that be something you would have to do on your end?

2048 is the limit for most of these models besides RWKV. I think that has been with us for a long time.

Fair. I think what happened was that how the UI references the max value being sent was changed.

Maybe @Xabab knows when the change happened because it sounds like it was 2 weeks ago. Can also just change the behavior once we know and see if it makes a difference.

The change that caused this was only a few days ago, not weeks.

It was working fine on Saturday/Sunday, March 11th/12th; I was getting great responses then. I updated Monday, March 13th, but only had a chance to send a quick couple of messages before I had to go to bed, which was when I initially noticed it. I did another update on March 14th, and that was when I saw that Monday's behavior wasn't just a one-off.

And I don't change models - I always use Pygmalion6b, and I haven't updated that since I initially downloaded it on Feb. 11, so that isn't the issue.

I update (by re-running install.bat) every time I use Ooba, as I always want to have the latest and greatest, and so that if there are any issues I can report them to help get them solved. Seems I should start taking a backup of everything before I update (and TBH I don't know why I didn't; this isn't my first time using software that's in active development).

@AlexysLovesLexxie (Author)

Anyone manage to figure out how to roll back, or what to roll back to? Again, I use the one-click installer, so I am unclear on how to do this myself.

Hoping to maybe hear from the project staff at some point.

@Ph0rk0z (Contributor) commented Mar 16, 2023

I found the commit: e861e68

Like you said, 2 weeks ago, but anyone else who wants to get rid of it can probably do a git revert in their local copy.

As to what else is doing that, who knows; so many changes. You can browse the repo if you don't use git. Click a commit like this: a95592f

Then hit "Browse files" and you can download a zip of the repo at that commit. That is your "backup", which you can then replace your files with.

If you open that batch file here or in a text editor you can see what it's doing.

@AlexysLovesLexxie (Author)

I found the commit: e861e68

Like you said, 2 weeks ago but anyone else that wants to get rid of it can probably do a git revert in their local copy.

Strange. I'm trying to find where I said it was 2 weeks ago. Sunday, 13 March is not even 1 week ago. That's when it was last working properly, as per my screenshot here:
https://user-images.githubusercontent.com/126999069/225268766-57d53e3c-18a5-453e-975a-1b0490ad5a08.png

I just hope someone from the dev team can actually address the issue and fix it, or at least explain why such a serious regression in *roleplay* performance was deemed necessary.

I will try to roll it back tomorrow, but I am hoping that they will properly restore the functionality that we had before Sunday, Mar. 13 2023.

@Ph0rk0z (Contributor) commented Mar 16, 2023

I meant that you said 2 weeks ago was too long. But other people here really want the chat history in a different place.

We have to find out what actually did it. The dev "team" is ooba, and that's it.

@AlexysLovesLexxie (Author) commented Mar 16, 2023 via email

@oobabooga (Owner)

  1. You are using an older version of the web UI in the prints
  2. If the example dialogue is messing up the quality of your character, just remove it from the JSON file. No need to change the code.
  3. Make sure that "Stop generating at new line character?" is not selected

Otherwise, I can't think of anything. Chiharu passes the "Hi" test with the Debug preset, with the same response that she gave 2 months ago:

Chiharu smiles and looks up at you, her face lighting up as she sees you. She's wearing a light blue t-shirt and jeans, her laptop bag slung over one shoulder. She's very tall, and her long legs are wrapped around the other side. She extends a hand towards you

Hi, I'm Chiharu Yamada. It's so nice to meet you!

@oobabooga (Owner)

Why is your max_new_tokens so high? This removes many old messages from the prompt to make space for a 2-page reply that will never come. That's probably the real issue.

Try reducing this number to 200.
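The arithmetic behind this suggestion, as a sketch (assuming the 2048-token context window discussed earlier in the thread): everything reserved for max_new_tokens is subtracted from the space available for chat history in the prompt.

```shell
# The model's context window (2048 tokens for Pygmalion-6B) is shared
# between the prompt (history, character context) and the reply budget.
CTX=2048

MAX_NEW=1000
echo "history budget at max_new_tokens=$MAX_NEW: $((CTX - MAX_NEW)) tokens"   # 1048

MAX_NEW=200
echo "history budget at max_new_tokens=$MAX_NEW: $((CTX - MAX_NEW)) tokens"   # 1848
```

Dropping max_new_tokens from 1000 to 200 therefore leaves roughly 800 more tokens of recent chat (and example dialogue) in the prompt.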

@AlexysLovesLexxie (Author) commented Mar 17, 2023

  1. You are using an older version of the web UI in the prints
  2. If the example dialogue is messing up the quality of your character, just remove it from the JSON file. No need to change the code.
  3. Make sure that "Stop generating at new line character?" is not selected

Otherwise, I can't think of anything. Chiharu passes the "Hi" test with the Debug preset, with the same response that she gave 2 months ago:

Chiharu smiles and looks up at you, her face lighting up as she sees you. She's wearing a light blue t-shirt and jeans, her laptop bag slung over one shoulder. She's very tall, and her long legs are wrapped around the other side. She extends a hand towards you
Hi, I'm Chiharu Yamada. It's so nice to meet you!

  1. Those screenshots were taken from whatever version was current as of the day I reported this bug. I had literally just updated by running install.bat. If this method isn't pulling in all the updates, maybe we need an update.bat that does pull the necessary files for a full update.
  2. The example dialog was working just fine before the March 13/14 updates. In fact, it was the only reason that my character was properly remembering some details.
  3. Pretty sure it wasn't selected. I will double-check tonight.

@AlexysLovesLexxie (Author)

Why is your max_new_tokens so high? This removes many old messages from the prompt to make space for a 2-page reply that will never come. That's probably the real issue.

Try reducing this number to 200.

That's odd; I have been using max_new_tokens=1000 for weeks and receiving excellent responses. The only thing I changed between Sunday, Mar. 12 and when I reported this bug was running install.bat to update the packages.

@oobabooga (Owner)

That doesn't mean you should be using max_new_tokens=1000 when your average reply size is less than 100 tokens. 900 tokens of history are being wasted.

@AlexysLovesLexxie (Author)

That doesn't mean you should be using max_new_tokens=1000 when your average reply size is less than 100 tokens. 900 tokens of history are being wasted.

Fair. I can't remember if I have asked for this before, but would it be possible to add tooltips or an info panel to explain the various settings? Or at least add the info to the README? I feel like that would be quite useful for more people than just myself.

I will back up my models directory and do a fresh install to make sure I am running the latest version, and update this issue tonight after I get home from work.

@Xabab commented Mar 17, 2023

+1 for the hover tooltips, would be neat
https://getbootstrap.com/docs/4.0/components/tooltips/

oobabooga added a commit that referenced this issue Mar 17, 2023
@AlexysLovesLexxie (Author)

  1. You are using an older version of the web UI in the prints
  2. If the example dialogue is messing up the quality of your character, just remove it from the JSON file. No need to change the code.
  3. Make sure that "Stop generating at new line character?" is not selected

Otherwise, I can't think of anything. Chiharu passes the "Hi" test with the Debug preset, with the same response that she gave 2 months ago:

Chiharu smiles and looks up at you, her face lighting up as she sees you. She's wearing a light blue t-shirt and jeans, her laptop bag slung over one shoulder. She's very tall, and her long legs are wrapped around the other side. She extends a hand towards you
Hi, I'm Chiharu Yamada. It's so nice to meet you!

@oobabooga

Upgraded to the newest version of the web UI. Still not getting the quantity and quality of RP dialog I had been getting with the previous install, although I have only been using Kawaii so far. I will load "Katie" in tomorrow and see if I can get her to produce any better results. I am currently using the default profile for Pygmalion, but have turned the generation attempts up to 3.

[screenshot]

Produced this line of dialog:
[screenshot]

@Ph0rk0z (Contributor) commented Mar 18, 2023

Since you are mainly using Pygmalion, you can try running an old version side by side and comparing.

@oobabooga (Owner)

I don't think there is an issue.

TheTerrasque pushed a commit to TheTerrasque/text-generation-webui that referenced this issue Mar 19, 2023
commit 20f5b45
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Fri Mar 17 20:19:04 2023 -0300

    Add parameters reference oobabooga#386 oobabooga#331
Date:   Wed Mar 15 12:40:03 2023 -0300

    Sort the requirements

commit ec972b8
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Wed Mar 15 12:33:26 2023 -0300

    Move all css/js into separate files

commit 693b53d
Merge: 63c5a13 1413931
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Wed Mar 15 12:08:56 2023 -0300

    Merge branch 'main' into HideLord-main

commit 1413931
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Wed Mar 15 12:01:32 2023 -0300

    Add a header bar and redesign the interface (oobabooga#293)

commit 9d6a625
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Wed Mar 15 11:04:30 2023 -0300

    Add 'hallucinations' filter oobabooga#326

    This breaks the API since a new parameter has been added.
    It should be a one-line fix. See api-example.py.

commit 3b62bd1
Author: Forkoz <59298527+Ph0rk0z@users.noreply.github.com>
Date:   Tue Mar 14 21:23:39 2023 +0000

    Remove PTH extension from RWKV

    When loading the current model was blank unless you typed it out.

commit f0f325e
Author: Forkoz <59298527+Ph0rk0z@users.noreply.github.com>
Date:   Tue Mar 14 21:21:47 2023 +0000

    Remove Json from loading

    no more 20b tokenizer

commit 128d18e
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Tue Mar 14 17:57:25 2023 -0300

    Update README.md

commit 1236c7f
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Tue Mar 14 17:56:15 2023 -0300

    Update README.md

commit b419dff
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Tue Mar 14 17:55:35 2023 -0300

    Update README.md

commit 72d207c
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Tue Mar 14 16:31:27 2023 -0300

    Remove the chat API

    It is not implemented, has not been tested, and this is causing confusion.

commit afc5339
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Tue Mar 14 16:04:17 2023 -0300

    Remove "eval" statements from text generation functions

commit 5c05223
Merge: b327554 87192e2
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Tue Mar 14 08:05:24 2023 -0300

    Merge pull request oobabooga#295 from Zerogoki00/opt4-bit

    Add support for quantized OPT models

commit 87192e2
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Tue Mar 14 08:02:21 2023 -0300

    Update README

commit 265ba38
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Tue Mar 14 07:56:31 2023 -0300

    Rename a file, add deprecation warning for --load-in-4bit

commit 3da73e4
Merge: 518e5c4 b327554
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Tue Mar 14 07:50:36 2023 -0300

    Merge branch 'main' into Zerogoki00-opt4-bit

commit b327554
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Tue Mar 14 00:18:13 2023 -0300

    Update bug_report_template.yml

commit 33b9a15
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Mon Mar 13 23:03:16 2023 -0300

    Delete config.yml

commit b5e0d3c
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Mon Mar 13 23:02:25 2023 -0300

    Create config.yml

commit 7f301fd
Merge: d685332 02d4075
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Mon Mar 13 22:41:21 2023 -0300

    Merge pull request oobabooga#305 from oobabooga/dependabot/pip/accelerate-0.17.1

    Bump accelerate from 0.17.0 to 0.17.1

commit 02d4075
Author: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Date:   Tue Mar 14 01:40:42 2023 +0000

    Bump accelerate from 0.17.0 to 0.17.1

    Bumps [accelerate](https://github.com/huggingface/accelerate) from 0.17.0 to 0.17.1.
    - [Release notes](https://github.com/huggingface/accelerate/releases)
    - [Commits](huggingface/accelerate@v0.17.0...v0.17.1)

    ---
    updated-dependencies:
    - dependency-name: accelerate
      dependency-type: direct:production
      update-type: version-update:semver-patch
    ...

    Signed-off-by: dependabot[bot] <support@github.com>

commit d685332
Merge: 481ef3c df83088
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Mon Mar 13 22:39:59 2023 -0300

    Merge pull request oobabooga#307 from oobabooga/dependabot/pip/bitsandbytes-0.37.1

    Bump bitsandbytes from 0.37.0 to 0.37.1

commit 481ef3c
Merge: a0ef82c 715c3ec
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Mon Mar 13 22:39:22 2023 -0300

    Merge pull request oobabooga#304 from oobabooga/dependabot/pip/rwkv-0.4.2

    Bump rwkv from 0.3.1 to 0.4.2

commit df83088
Author: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Date:   Tue Mar 14 01:36:18 2023 +0000

    Bump bitsandbytes from 0.37.0 to 0.37.1

    Bumps [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) from 0.37.0 to 0.37.1.
    - [Release notes](https://github.com/TimDettmers/bitsandbytes/releases)
    - [Changelog](https://github.com/TimDettmers/bitsandbytes/blob/main/CHANGELOG.md)
    - [Commits](https://github.com/TimDettmers/bitsandbytes/commits)

    ---
    updated-dependencies:
    - dependency-name: bitsandbytes
      dependency-type: direct:production
      update-type: version-update:semver-patch
    ...

    Signed-off-by: dependabot[bot] <support@github.com>

commit 715c3ec
Author: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Date:   Tue Mar 14 01:36:02 2023 +0000

    Bump rwkv from 0.3.1 to 0.4.2

    Bumps [rwkv](https://github.com/BlinkDL/ChatRWKV) from 0.3.1 to 0.4.2.
    - [Release notes](https://github.com/BlinkDL/ChatRWKV/releases)
    - [Commits](https://github.com/BlinkDL/ChatRWKV/commits)

    ---
    updated-dependencies:
    - dependency-name: rwkv
      dependency-type: direct:production
      update-type: version-update:semver-minor
    ...

    Signed-off-by: dependabot[bot] <support@github.com>

commit a0ef82c
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Mon Mar 13 22:35:28 2023 -0300

    Activate dependabot

commit 3fb8196
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Mon Mar 13 22:28:00 2023 -0300

    Implement "*Is recording a voice message...*" for TTS oobabooga#303

commit 0dab2c5
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Mon Mar 13 22:18:03 2023 -0300

    Update feature_request.md

commit 79e519c
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Mon Mar 13 20:03:08 2023 -0300

    Update stale.yml

commit 1571458
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Mon Mar 13 19:39:21 2023 -0300

    Update stale.yml

commit bad0b0a
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Mon Mar 13 19:20:18 2023 -0300

    Update stale.yml

commit c805843
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Mon Mar 13 19:09:06 2023 -0300

    Update stale.yml

commit 60cc7d3
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Mon Mar 13 18:53:11 2023 -0300

    Update stale.yml

commit 7c17613
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Mon Mar 13 18:47:31 2023 -0300

    Update and rename .github/workflow/stale.yml to .github/workflows/stale.yml

commit 47c941c
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Mon Mar 13 18:37:35 2023 -0300

    Create stale.yml

commit 511b136
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Mon Mar 13 18:29:38 2023 -0300

    Update bug_report_template.yml

commit d6763a6
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Mon Mar 13 18:27:24 2023 -0300

    Update feature_request.md

commit c6ecb35
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Mon Mar 13 18:26:28 2023 -0300

    Update feature_request.md

commit 6846427
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Mon Mar 13 18:19:07 2023 -0300

    Update feature_request.md

commit bcfb7d7
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Mon Mar 13 18:16:18 2023 -0300

    Update bug_report_template.yml

commit ed30bd3
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Mon Mar 13 18:14:54 2023 -0300

    Update bug_report_template.yml

commit aee3b53
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Mon Mar 13 18:14:31 2023 -0300

    Update bug_report_template.yml

commit 7dbc071
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Mon Mar 13 18:09:58 2023 -0300

    Delete bug_report.md

commit 69d4b81
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Mon Mar 13 18:09:37 2023 -0300

    Create bug_report_template.yml

commit 0a75584
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Mon Mar 13 18:07:08 2023 -0300

    Create issue templates

commit 02e1113
Author: EliasVincent <riesyeti@outlook.de>
Date:   Mon Mar 13 21:41:19 2023 +0100

    add auto-transcribe option

commit 518e5c4
Author: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date:   Mon Mar 13 16:45:08 2023 -0300

    Some minor fixes to the GPTQ loader

commit 8778b75
Author: Ayanami Rei <wennadocta@protonmail.com>
Date:   Mon Mar 13 22:11:40 2023 +0300

    use updated load_quantized

commit a6a6522
Author: Ayanami Rei <wennadocta@protonmail.com>
Date:   Mon Mar 13 22:11:32 2023 +0300

    determine model type from model name

commit b6c5c57
Author: Ayanami Rei <wennadocta@protonmail.com>
Date:   Mon Mar 13 22:11:08 2023 +0300

    remove default value from argument

commit 63c5a13
Merge: 683556f 7ab45fb
Author: Alexander Hristov Hristov <polimonom@gmail.com>
Date:   Mon Mar 13 19:50:08 2023 +0200

    Merge branch 'main' into main

commit e1c952c
Author: Ayanami Rei <wennadocta@protonmail.com>
Date:   Mon Mar 13 20:22:38 2023 +0300

    make argument non case-sensitive

commit b746250
Author: Ayanami Rei <wennadocta@protonmail.com>
Date:   Mon Mar 13 20:18:56 2023 +0300

    Update README

commit 3c9afd5
Author: Ayanami Rei <wennadocta@protonmail.com>
Date:   Mon Mar 13 20:14:40 2023 +0300

    rename method

commit 1b99ed6
Author: Ayanami Rei <wennadocta@protonmail.com>
Date:   Mon Mar 13 20:01:34 2023 +0300

    add argument --gptq-model-type and remove duplicate arguments

commit edbc611
Author: Ayanami Rei <wennadocta@protonmail.com>
Date:   Mon Mar 13 20:00:38 2023 +0300

    use new quant loader

commit 345b6de
Author: Ayanami Rei <wennadocta@protonmail.com>
Date:   Mon Mar 13 19:59:57 2023 +0300

    refactor quant models loader and add support of OPT

commit 48aa528
Author: EliasVincent <riesyeti@outlook.de>
Date:   Sun Mar 12 21:03:07 2023 +0100

    use Gradio microphone input instead

commit 683556f
Author: HideLord <polimonom@gmail.com>
Date:   Sun Mar 12 21:34:09 2023 +0200

    Adding markdown support and slight refactoring.

commit 3b41459
Merge: 1c0bda3 3375eae
Author: Elias Vincent Simon <riesyeti@outlook.de>
Date:   Sun Mar 12 19:19:43 2023 +0100

    Merge branch 'oobabooga:main' into stt-extension

commit 1c0bda3
Author: EliasVincent <riesyeti@outlook.de>
Date:   Fri Mar 10 11:47:16 2023 +0100

    added installation instructions

commit a24fa78
Author: EliasVincent <riesyeti@outlook.de>
Date:   Thu Mar 9 21:18:46 2023 +0100

    tweaked Whisper parameters

commit d5efc06
Merge: 00359ba 3341447
Author: Elias Vincent Simon <riesyeti@outlook.de>
Date:   Thu Mar 9 21:05:34 2023 +0100

    Merge branch 'oobabooga:main' into stt-extension

commit 00359ba
Author: EliasVincent <riesyeti@outlook.de>
Date:   Thu Mar 9 21:03:49 2023 +0100

    interactive preview window

commit 7a03d0b
Author: EliasVincent <riesyeti@outlook.de>
Date:   Thu Mar 9 20:33:00 2023 +0100

    cleanup

commit 4c72e43
Author: EliasVincent <riesyeti@outlook.de>
Date:   Thu Mar 9 12:46:50 2023 +0100

    first implementation

Ph0rk0z pushed a commit to Ph0rk0z/text-generation-webui-testing that referenced this issue Apr 17, 2023