This repository has been archived by the owner on Aug 10, 2023. It is now read-only.


[Feature Request]: Run concurrently with different accounts #770

Closed · 1 task done
Nanfacius opened this issue Feb 15, 2023 · 26 comments
Labels
enhancement New feature or request

Comments

@Nanfacius

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What would your feature do?

This feature would allow different users to access the API simultaneously.
That way, developers could build apps such as a WeChat or QQ bot, and ordinary users could access ChatGPT easily without opening a browser or using the command line. Sending a message to a bot in their most familiar social media app would be the most convenient way to use ChatGPT.
I have actually tried to use your code as a module of my QQ bot (which is based on go-cqhttp). However, it currently allows only one session: if more than one user sends messages to my QQ bot at the same time, they have to wait for their sessions to be processed one by one.
My idea is:

  1. Configure the Chatbot with a list of config files (or a list of OpenAI accounts) rather than only 1.
  2. Rewrite the main function as an instance function of class Chatbot with asyncio.
  3. Automatically distribute available OpenAI accounts to different users.

Proposed workflow

The new feature could be used as in the example below:

from revChatGPT.V1 import Chatbot, configure

chatbot_dict = {}

async def respond(QQbot, user_id, prompt):
    if user_id not in chatbot_dict:
        chatbot = Chatbot(user_id, configure)
        chatbot_dict[user_id] = chatbot
    else:
        chatbot = chatbot_dict[user_id]
    # Here chatbot.get_res is the function rewritten from main, as mentioned above
    res = await chatbot.get_res(prompt)
    QQbot.send(user_id, res)

Additional information

No response

@Nanfacius Nanfacius added the enhancement New feature or request label Feb 15, 2023
@Manamama

Manamama commented Feb 15, 2023

Re Point 3 and more of your suggestions: it goes against their ToUs, alas.

Now, this is going to be a meta post about this very post itself:

I want to share a related personal experience about GitHub, OpenAI, etc. and their (quite sensible) terms of use. Just two days ago, while working on one of my NLP scripts in Python, I made the mistake of hard-coding an OpenAI token within the repository. Unfortunately, I also committed it to a public repository and then synced it [I know, I know - the account I used was a non-primary email/phone one that I set up some months ago when they were still accepting those]. I realized my mistake within an hour and quickly toggled the repository back to private. Still, when I logged in to my OpenAI-linked mail only two or three hours later, I had an automatic email from the kind OpenAI folks saying that my token (one of 3 or so for that account) had been compromised and thus revoked.

This incident serves as a warning for all GitHub users to be careful when sharing code, and even chats, in public repositories. Additionally, it's important to keep in mind GitHub's and third-party terms of use, which prohibit the sharing of certain types of content. Be mindful of what you upload and always be cautious while sharing your work on public platforms.

Now, time for this meta. Out of curiosity, I had the story above fixed/reworked by this very code, via this useful (and finally working for me, see below) library, with very minor corrections. Guess what happened just two seconds later?
Below is the actual output from my terminal - I am pasting it verbatim, though only in part:

[the end of the output above...:] Be mindful of what you upload and always be cautious while sharing your work on public platforms.

You:
Chatbot:
It sounds like OpenAI detected suspicious activity on your account and took proactive measures to protect your account by revoking the compromised token. This is a security feature that many online platforms use to prevent unauthorized access to accounts.

If you haven't already done so, I would recommend resetting your password and enabling two-factor authentication to further secure your account. Additionally, be sure to use unique and strong passwords for all your accounts and avoid reusing the same password across multiple accounts.

If you believe that your account was compromised due to a security vulnerability on OpenAI's platform, I would recommend reporting it to OpenAI's support team so they can investigate and address the issue.

You:
Chatbot:
I understand your concern about privacy and data security. It is true that many aspects of our online activity [...]

Yep, there is no "You" (me) input there any more - it did not stop, and it seems to have detected the mention of its leaked token in the output itself. It then started typing these caveats (?) on its own, three in total - the "Terminator, Sarah and John Connor" stuff, but IRL. See also that Twitter NLP guy who EdgeGPT'ed himself about the "Oh, you are Sydney? No I ain't!" hack and its discovery of him soon afterwards - self-doxxing and going meta-meta, of sorts.

Comments?

Update 1.1 - converted the code block to a quote for better legibility and fixed a minute factual detail: it was a run-on, automatic output.

@Manamama

Manamama commented Feb 15, 2023

Now, let me double praise @acheong08 for his fixed and finally clear Readme - see my older comments about the need to make it clear for "newbie" end-users (like me); thanks for taking it on board.
Especially useful is this update and bit:

[email/password] method: Not supported for Google/Microsoft accounts

and the direct link to the easy auth-token view in a window (an option which still does not work for me today, but the non-Google email/password credentials one finally does now; see also the long story above).

@Nanfacius
Author

Nanfacius commented Feb 15, 2023

Re Point 3 and more of your suggestions: it goes against their ToUs, alas. [...]

Maybe I should explain my idea more clearly. Point 3 has nothing to do with sharing OpenAI accounts with users. All the accounts and passwords are stored on your own server. Other users send requests to your bot on a social media app - say, a QQ bot on QQ, a popular chat app in China. The back end on your server then calls the revChatGPT API. If more than one QQ user sends a request at the same time, you need more than one OpenAI account to handle the requests concurrently. In this way, we can regard the accounts as computing resources. When I say "automatically distribute available OpenAI accounts to different users", I mean automatically distributing those computing resources to them. To keep sessions continuous, each user has to be bound to one OpenAI account. However, only the bot on QQ or other social media platforms is exposed to other users; anything stored on your server is safe enough, including your IP address.
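(A minimal sketch of that binding idea - not part of revChatGPT; the AccountPool helper and its names are hypothetical, while Chatbot(config=...) follows the V1 constructor:)

from revChatGPT.V1 import Chatbot

class AccountPool:
    """Hypothetical helper: bind each external user to one server-side OpenAI account."""

    def __init__(self, configs):
        self.free = list(configs)   # unused account config dicts (email/password or token)
        self.bound = {}             # user_id -> Chatbot, one account per user

    def get(self, user_id):
        if user_id not in self.bound:
            if not self.free:
                raise RuntimeError("No free OpenAI account left for a new user")
            # Pop a free account and bind it to this user so their session stays continuous
            self.bound[user_id] = Chatbot(config=self.free.pop())
        return self.bound[user_id]

Only the bot itself is exposed to other users; the config dicts never leave the server.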

@Manamama

Manamama commented Feb 15, 2023

@Nanfacius - We are writing about the same thing, but using different approaches. Mine is a more legalistic one, for a number of reasons (I have been dealing with similar NLP & ToU matters, at a non-coder yet "our side" level, for some decades).
Re:

you will need more than 1 OpenAI accounts to handle the requests concurrently

Indeed. Yet compare it with the Dec 2022 and previous ToUs:

[you will not] buy, sell, or transfer API keys without our prior consent...
you are responsible for all activities that occur using your credentials...
Free Tier. You may not create more than one account to benefit from credits provided in the free tier of the Services [or we may] stop providing access to the Services.

So it would trigger even more of an OpenAI "chroot jail", or worse, IMHO - see my personal fun story above, one of many.

Sorry for being a wet blanket, but apart from individual token revocations (like mine) or (burner or otherwise) account blocks (like the author's, within his very useful other project - my hat off once again for his dedication, as it proved very useful for an immediate IRL application of mine back then), more things may happen: check also the occasional press stories about GitHub legacy code, bitcoin or otherwise, and whatever TLA agencies. (Sorry also for not being very clear here.)

@Manamama

Manamama commented Feb 15, 2023

And a quick self-update - I found another random article. This is likely how it worked for my mis-committed token, at least since 2019, and now very likely with instant NLP "sentiment and keyword analysis" for topics such as ours here:

In October 2018, GitHub also announced partnerships with third-party online services as part of a new feature called Token Scanning. This scans new commits or private-turned-public repos for service providers’ API keys and notifies the appropriate service provider when it finds them. That service provider may then choose to revoke the credentials, which is the step GitHub recommends, according to a spokesperson there. She also told us that it has shared information on more than 100 million compromised tokens so far.

@Nanfacius
Author

Nanfacius commented Feb 15, 2023

@Manamama I totally understand your concern and thank you for the warning. But the reason we are here is that we, just like the author, want to use ChatGPT more conveniently, even if it may be against their ToUs. So what? Do you think I have to obey the ToUs and give up my ideas, or is it OK as long as I can afford the risks?

@Manamama

@Nanfacius
Your choice, obviously. I just "take your (and the readers') watch to show yous the time".

Kudos to the dev once again for taking our humble UXE tips on board, and a quick end-user tip: incidentally, this API route (his, and another related GitHub wrapper project with JS UI refresh injects) is more reliable and stable than my "tame online" ChatGPT sessions.

@acheong08
Owner

This feature is planned

@acheong08
Owner

Multi account cycling is recommended. We care not about their TOS

@acheong08
Owner

Automatically distribute available OpenAI accounts to different users.

Not possible

@acheong08
Owner

Rewrite the main function as an instance function of class Chatbot with asyncio.

I'm not great at async. Would be nice if there was a pull request for an async_ask function

@dawhdb

dawhdb commented Feb 16, 2023

The same code won't work with a multi-process chatbot, but it works without multiple processes. Why?

@assassingyk

I have been using asyncio.get_event_loop().run_in_executor to run Chatbot.ask asynchronously - not very elegant, but usable.
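(For reference, a minimal sketch of that pattern, assuming the synchronous Chatbot.ask generator from revChatGPT.V1; the ask_blocking/ask_async wrapper names are made up here:)

import asyncio
from revChatGPT.V1 import Chatbot

def ask_blocking(chatbot: Chatbot, prompt: str) -> str:
    reply = ""
    for data in chatbot.ask(prompt):   # synchronous generator yielding growing partial messages
        reply = data["message"]
    return reply

async def ask_async(chatbot: Chatbot, prompt: str) -> str:
    # Run the blocking generator in a worker thread so the event loop stays responsive
    loop = asyncio.get_event_loop()
    return await loop.run_in_executor(None, ask_blocking, chatbot, prompt)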

@acheong08
Owner

remove stale. Still WIP

@acheong08
Owner

This issue is stale because it has been open for 2 days with no activity.

@acheong08
Owner

This issue was closed because it has been inactive for 3 days since being marked as stale.

@github-project-automation github-project-automation bot moved this from Todo to Done in revChatGPT Feb 27, 2023
@yaohaizhou

The same code won't work with a multi-process chatbot, but it works without multiple processes. Why?

Hi. Can you share your code here? I am also writing async code to read the result from AsyncChatbot, but I have run into some problems.
Below is my code:

for data in asyncio.get_event_loop().run_in_executor(None,chatbot.ask(decrypted_contents)):
    message = data["message"][len(prev_text):]

It raises an error: TypeError: '_asyncio.Future' object is not subscriptable

@acheong08 acheong08 reopened this Feb 28, 2023
@acheong08 acheong08 moved this from Done to Todo in revChatGPT Feb 28, 2023
@acheong08
Owner

That won't work. It should be async for ...

@dawhdb

dawhdb commented Feb 28, 2023

Hi. Can you share your code here? I am also writing async code to read the result from AsyncChatbot. [...]

I'm not using async on my end; I'm using multithreading, with one thread per account.

@yaohaizhou

Yes. I have changed my code to

        async for data in chatbot.ask(prompt=decrypted_contents):
            message = data["message"][len(prev_text):]

But another error is raised:

    async for data in chatbot.ask(prompt=decrypted_contents):
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: 'async for' outside async function

I am using the latest version (2.3.15) of this repo.
Sorry, I am not very familiar with async.
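(For reference: that SyntaxError only means the loop has to live inside an async def. A minimal sketch, reusing the names from the snippet above and assuming chatbot is an AsyncChatbot instance:)

import asyncio

async def get_reply(chatbot, decrypted_contents):
    prev_text = ""
    async for data in chatbot.ask(prompt=decrypted_contents):
        message = data["message"][len(prev_text):]   # only the newly streamed part
        print(message, end="", flush=True)
        prev_text = data["message"]
    return prev_text

# reply = asyncio.run(get_reply(chatbot, decrypted_contents))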

@yaohaizhou

I'm not using async on my end; I'm using multithreading, with one thread per account.

Thanks! Could you please share the core part of your multithreading code?

@dawhdb

dawhdb commented Feb 28, 2023


Thanks! Could you please share the core part of your multithreading code?
[screenshot of the threading code]
get_content is the method I want to run, and i holds one account's credentials. This starts a hundred threads, so it needs quite a few accounts, because each account has an hourly request limit.
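(For reference, a hedged reconstruction of that approach, since the code itself was only posted as a screenshot; get_content is the commenter's method name and the accounts list is an assumption:)

import threading

def get_content(i):
    # i holds one account's credentials; log in with that account and serve
    # the requests routed to it (details were in the screenshot, omitted here)
    ...

accounts = []   # e.g. 100 (email, password) pairs, one per OpenAI account
threads = [threading.Thread(target=get_content, args=(i,)) for i in accounts]
for t in threads:
    t.start()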

@yaohaizhou

Thanks, I understand your code now. I'm also wondering whether this approach can handle multi-session scenarios: each user gets an independent session that doesn't affect the others, and for a single user the conversation context stays continuous.

@dawhdb

dawhdb commented Feb 28, 2023


I haven't tried it, but it seems doable: assign an account to a user when they start and don't switch accounts while they are using it, so the conversation context stays coherent. By default the session is not reset; when the user no longer needs it you can reset the session with chatbot.reset_chat().
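(A tiny sketch of that hand-off, assuming a user-to-Chatbot mapping like the ones in the sketches above; reset_chat() is the existing revChatGPT method named in the comment:)

def hand_over(chatbot, bound, old_user_id, new_user_id):
    chatbot.reset_chat()            # clear the previous user's conversation
    bound.pop(old_user_id, None)    # unbind the account from the old user
    bound[new_user_id] = chatbot    # bind the same account to the next user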

@yaohaizhou

Thank you!

@acheong08 acheong08 removed the stale label Mar 1, 2023
@acheong08
Owner

Now that the official API is out, use that instead. Handling multiple accounts while dealing with the associated conversations is a bit difficult.

Repository owner locked and limited conversation to collaborators Mar 6, 2023
@acheong08 acheong08 converted this issue into discussion #1051 Mar 6, 2023
@github-project-automation github-project-automation bot moved this from Todo to Done in revChatGPT Mar 6, 2023

This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →
