This issue was moved to a discussion.
You can continue the conversation there. Go to discussion →
[Feature Request]: Run concurrently with different accounts #770
Comments
Re Point 3 and several of your other suggestions: they go against OpenAI's ToU, alas. Now, this is going to be a meta post about this very post itself:
Now for the meta part. Out of curiosity, I had the story above fixed and reworked by this very code, via this useful (and finally working for me, see below) library, with very minor corrections. Guess what happened two seconds later?
Yep, my "You" input is no longer there. The bot did not stop; it seems to have detected the mention of its leaked token in its own output. It then started typing three caveats, unprompted: "Terminator, Sarah and John Connor" stuff, but IRL. See also the Twitter NLP researcher who ran EdgeGPT on himself, discovered the "Oh, you are Sydney? No I ain't!" hack soon afterwards, and thereby self-doxxed, going meta about the meta, of sorts. Comments? Update 1.1: converted the code to a quote for better legibility; fixed a minor factual detail, namely that it was a run-on, automatic output.
Now, let me doubly praise @acheong08 for his fixed and finally clear Readme. See my older comments about the need to make it clear for "newbie" end-users (like me); thanks for taking that on board.
...and for the direct link to the easy in-window view of the auth token (an option which still does not work for me today, but the non-Google email/password credentials route finally does; see also the long story above).
Maybe I should explain my ideas more clearly. Point 3 is not about sharing OpenAI accounts with users. All the accounts and passwords are stored on your own server. Other users send requests to your bot on social media apps, say a QQ bot on QQ, a popular chat app in China. The back end on your server then calls the revChatGPT API. If more than one QQ user sends a request at the same time, you will need more than one OpenAI account to handle the requests concurrently. In this way, we can regard the accounts as computing resources. When I say "automatically distribute available OpenAI accounts to different users", I mean automatically distribute computing resources to them. To keep sessions continuous, each user has to be bound to one OpenAI account. However, only the bot on QQ or another social media platform is exposed to other users. Anything stored on your server is safe enough, even including your IP address.
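The account-pool idea described above can be sketched as follows. This is a minimal illustration, not part of revChatGPT; the class and account names are hypothetical.

```python
# Sketch: treat OpenAI accounts as a pool of computing resources.
# Each user is bound to one account so their conversation stays continuous.
class AccountPool:
    def __init__(self, accounts):
        self.free = list(accounts)  # accounts not yet bound to any user
        self.bound = {}             # user_id -> account

    def acquire(self, user_id):
        # Returning users keep their bound account (continuous session).
        if user_id in self.bound:
            return self.bound[user_id]
        if not self.free:
            raise RuntimeError("no free accounts: request must wait")
        account = self.free.pop()
        self.bound[user_id] = account
        return account

    def release(self, user_id):
        # When a user's session ends, their account returns to the pool.
        account = self.bound.pop(user_id, None)
        if account is not None:
            self.free.append(account)


pool = AccountPool(["acct_a", "acct_b"])
first = pool.acquire("user1")
print(pool.acquire("user1") == first)  # prints True: same account is reused
```

In a real bot this structure would also need a lock (or an asyncio queue) around `acquire`/`release`, since requests arrive concurrently.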
@Nanfacius We are writing about the same thing but taking different approaches. Mine is the more legalistic one, for a number of reasons (I have been dealing with similar NLP and ToU matters, at a non-coder yet "our side" level, for some decades).
Indeed. Yet compare it with the DEC 2022 and previous ToUs:
So it would trigger even more of an "OpenAI" 'chroot jail', or worse, IMHO; see my personal fun story above, one of many. Sorry to be a wet blanket, but apart from individual token revocations (like mine) or burner-or-otherwise account blocks (like the author's, within his other very useful project; my hat off once again for his dedication, as it proved very useful for an immediate IRL application of mine back then), more things may happen: check also the random press stories about legacy code on GitHub, bitcoin, and whatever TLA agencies. (Sorry, too, for not being very clear here.)
And a quick self-update: I found another random article. This is likely how it worked for my mis-committed token itself, at least since 2019, now very likely with instant NLP "sentiment and keyword analysis" for topics such as ours here:
@Manamama I totally understand your concern, and thank you for your warning. But the reason we are here is that we, just like the author, want to use ChatGPT more conveniently, even if it may be against their ToUs. So what? Do you think I have to obey the ToUs and give up my ideas, or is it OK as long as I can afford the risks?
@Nanfacius Kudos to the dev once again for taking our humble UX tips on board, and a quick end-user tip: incidentally, this API route (his, plus another related GitHub wrapper project with JS UI refresh injects) is more reliable and stable than my "tame online" ChatGPT sessions.
This feature is planned
Multi-account cycling is recommended. We care not about their TOS
Not possible
I'm not great at async. It would be nice if there were a pull request for an async_ask function
The same code won't work with a multi-process chatbot, but it works without multiple processes. Why?
I have been using asyncio.get_event_loop().run_in_executor to run Chatbot.ask asynchronously; not very elegant, but usable.
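The executor pattern mentioned above can be sketched like this. It is a minimal illustration: `blocking_ask` is a hypothetical stand-in for the blocking `Chatbot.ask` call (the real method streams partial replies, which this ignores).

```python
import asyncio


# Hypothetical stand-in for the blocking Chatbot.ask call.
def blocking_ask(prompt):
    return {"message": "echo: " + prompt}


async def async_ask(prompt):
    # Pass the callable and its argument separately. Passing an already
    # evaluated blocking_ask(prompt) would hand the executor the *result*
    # instead of a function, so nothing would run in the thread pool.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, blocking_ask, prompt)


async def main():
    # Two requests served concurrently, each blocking call in its own thread.
    replies = await asyncio.gather(async_ask("hi"), async_ask("hello"))
    print([r["message"] for r in replies])  # prints ['echo: hi', 'echo: hello']


asyncio.run(main())
```

The callable-plus-args form also explains the `TypeError: '_asyncio.Future' object is not subscriptable` reported later in this thread: iterating or indexing the Future returned by `run_in_executor` without awaiting it fails.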
Removing stale label. Still WIP
This issue is stale because it has been open for 2 days with no activity. |
This issue was closed because it has been inactive for 3 days since being marked as stale. |
Hi. Can you share your code here? I am also writing async code to read the result from AsyncChatbot, but I ran into some problems:

for data in asyncio.get_event_loop().run_in_executor(None, chatbot.ask(decrypted_contents)):
    message = data["message"][len(prev_text):]

One error is raised: TypeError: '_asyncio.Future' object is not subscriptable
That won't work. It should be:
I am not using async on my side; I use multithreading, with one account assigned to one thread.
Yes. I have changed my code to:

async for data in chatbot.ask(prompt=decrypted_contents):
    message = data["message"][len(prev_text):]

But another error is raised. I am using the latest version (2.3.15) of this repo.
OK, thanks. Would you mind sharing the core multithreading part of your code?
OK, thank you. I understand your code now. May I ask whether this approach can handle the multi-session scenario? Each user should get a separate session that does not affect the others, and for a single user the conversation context should stay coherent.
I haven't tried it, but it seems possible. Assign an account to a user when they start, and don't switch accounts while they are using it, so the conversation context stays coherent. By default the session is not reset; if the user is done, you can reset it with chatbot.reset_chat()
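The one-account-per-user binding with per-account serialization described in this exchange could be sketched as below. The class and the lambda "bots" are hypothetical placeholders, not the real revChatGPT API.

```python
import threading


# Sketch: bind each user to one account and serialize requests per account,
# so a single user's conversation context stays coherent.
class BoundChatbots:
    def __init__(self, bots):
        self.bots = bots  # account name -> chatbot-like callable (placeholder)
        self.locks = {name: threading.Lock() for name in bots}
        self.binding = {}  # user_id -> account name

    def ask(self, user_id, prompt):
        # New users are assigned an account round-robin; returning users
        # keep the account they were bound to.
        name = self.binding.setdefault(
            user_id, list(self.bots)[len(self.binding) % len(self.bots)]
        )
        with self.locks[name]:  # one request per account at a time
            return self.bots[name](prompt)


bots = {"acct_a": lambda p: "acct_a: " + p, "acct_b": lambda p: "acct_b: " + p}
chat = BoundChatbots(bots)
print(chat.ask("user1", "hello"))   # user1 is now bound to one account
print(chat.ask("user1", "again"))   # same account, so the session continues
```

A real version would also release or reset the binding (e.g. via `chatbot.reset_chat()`, as mentioned above) when a user finishes.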
Thank you!
Now that the official API is out, use that instead. Handling multiple accounts while dealing with the associated conversations is a bit difficult. |
Is there an existing issue for this?
What would your feature do?
This feature will allow different users to access the API simultaneously.
In this way, developers can build apps such as a WeChat or QQ bot. Ordinary users will be able to access ChatGPT easily without opening a browser or using the command line. Sending a message to a bot on their most familiar social media app will be the most convenient way to use ChatGPT.
Actually, I have tried to use your code as a module of my QQ bot (which is based on go-cqhttp). However, it currently allows only one session. If more than one user sends a message to my QQ bot at the same time, they have to wait for their sessions to be processed one by one.
My idea is:
Proposed workflow
The new feature could be used as in the example code below:

from revChatGPT.V1 import Chatbot, configure

chatbot_dict = {}

async def respond(QQbot, user_id, prompt):
    if user_id not in chatbot_dict:
        chatbot = Chatbot(user_id, configure)
        chatbot_dict[user_id] = chatbot
    else:
        chatbot = chatbot_dict[user_id]
    res = await chatbot.get_res(prompt)
    # Here chatbot.get_res is the function rewritten from main, as I mentioned above
    QQbot.send(user_id, res)
Additional information
No response