mychatgpt
is a small, handy Python module that provides functions for interacting with OpenAI's GPT models to create conversational agents. It lets you hold interactive conversations with the GPT models and keeps track of the conversation history in your Python projects and Jupyter notebooks.
Vision, hearing, and drawing functions are now implemented as well.
Install the package with pip:
pip install mychatgpt
Install the latest version from GitHub:
pip install git+https://github.com/johndef64/mychatgpt.git
To use this module, you need an OpenAI API key. You will be asked to provide it once, and it will be stored as openai_api_key.txt in your working directory.
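For example (a minimal sketch, assuming the key is requested on first use; the exact moment you are prompted may differ):
from mychatgpt import GPT

# On first use you are prompted to paste your OpenAI API key;
# it is then saved to ./openai_api_key.txt and loaded automatically on later runs.
op = GPT()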
The module provides the following main class:
- GPT()
Core Functions:
- GPT().ask_gpt(prompt, *args): This basic function takes a prompt as input and generates a single response from the chosen GPT model. It returns the generated response and updates chat_log.json. You can simply use op.ask_gpt(prompt) and keep the default parameters.
- GPT().send_message(message, *args): This core function enables a more interactive conversation with the chosen GPT model. It takes a message as input, generates a response, and updates the conversation history, logging each exchange in the chat_log.json file. It also integrates GPT Vision, text-to-speech, and DALL·E functionalities (see the sketch below).
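A minimal usage sketch of the two core functions (the prompts are illustrative; default parameters are used):
from mychatgpt import GPT

op = GPT()
# single response; also updates chat_log.json
reply = op.ask_gpt("Summarize the Zen of Python in one sentence")
# interactive, multi-turn response; updates the conversation history and chat_log.json
op.send_message("Now expand that summary into three bullet points")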
Main Function:
- GPT().chat(message, *args): the main chat function, tuned with the class parameters shown below.
from mychatgpt import GPT
op = GPT(assistant='', # built-in assistant name
persona='', # any known character
format=None, # output format (latex,python,markdown)
translate=False, # translate outputs
translate_jap=False, # translate outputs into Japanese
save_log=True, # save log file
to_clip=True, # send reply to clipboard
print_token=True, # print token count
model='gpt-4o-mini', # set openai main model
talk_model='gpt-4o-2024-08-06', # set openai speak model
dalle="dall-e-2", # set dall-e model
image_size='512x512', # set generated image size
)
op.chat('Your message goes here',
max=1000, # max tokens in reply
image=None, # insert an image path to activate gpt vision
paste=False, # append clipboard to message
create=False # create an image
)
# Usage Example
op.add_persona('Elon Musk') # add custom personality to your agent
op.chat("""What do you think about OpenAI?""")
- Initialize a conversational agent:
GPT('Character')
elon = GPT('Elon Musk')
elon.chat("""What do you think about OpenAI?""")
- Activate GPT Vision:
op.chat("What’s in this image?", image="url/path",*parameters*)
Passing an image path inserts GPT Vision into your chat context and activates a multimodal chat.
vincent = GPT('Vincent Van Gogh')
vincent.chat("""Tell me what you see.""",
image=vincent.dummy_img)
- Image Generation in chat:
op.chat("prompt", create=Ture, *parameters*)
ask your conversational agent to create an image
vincent = GPT('Vincent Van Gogh')
vincent.dalle = "dall-e-3" # change dall-e model
vincent.chat("""Tell me what you see. Can you paint it?""",
image=vincent.dummy_img,
create = True)
- Image Generation main function
op.create_image(prompt,*parameters*)
op.create_image(prompt= "a cute kitten",
model="dall-e-2",
size='512x512',
response_format='b64_json',
quality="standard",
time_flag=True, show_image=True)
- Whisper and Text-to-Speech
op.whisper(filepath, # audio.mp3, audio.wav
translate = False,
response_format = "text",
print_transcriprion = True)
op.text2speech(text,
voice="alloy",
filename="speech.mp3",
model="tts-1",
speed=1,
play=False)
voices = ['alloy', 'echo', 'fable', 'onyx', 'nova', 'shimmer']
response_formats = ["mp3", "flac", "aac", "opus"]
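For example, to generate and play a short clip (a sketch; the text, voice, and filename are illustrative):
op.text2speech("Hello from mychatgpt!",
               voice="nova",
               filename="hello.mp3",
               play=True)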
- Speak With...
op.speak(message='',
system=None,
voice='nova',
language='eng',
tts= 'tts-1', max=1000, printall=False)
# Use a built-in assistant or any character of your choice, for example:
socrates = GPT('Socrates')
socrates.speak('Tell me about the Truth.', 'onyx')
# Endless chat, keyboard controlled
socrates.speak_loop(system=None,
voice='nova', tts= 'tts-1', max=1000, language='eng', printall=False, exit_chat='stop')
- Talk With...
dua = GPT('Dua Lipa')
dua.talk(voice='nova', language='eng', gpt='gpt-4o', tts= 'tts-1', max=1000, printall=False)
# Endless talk, keyboard controlled
dua.talk_loop(voice='nova', language='eng', gpt='gpt-4o', tts= 'tts-1', max=1000, printall=False, chat='alt' , exit='shift')
nietzsche = GPT('Friedrich Nietzsche')
nietzsche.talk_loop('onyx')
The module also provides additional utility functions for managing the conversation, such as clearing the chat history, setting a persona, setting system instructions, and saving or loading chats; a brief usage sketch follows the list.
choose_model()
clear_chat()
expand_chat()
save_chat()
load_chat()
load_file()
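A hedged sketch of how some of these helpers can be called on a GPT instance (arguments are omitted; the actual signatures may accept filenames or extra options):
op.clear_chat()   # start a fresh conversation
op.save_chat()    # save the current conversation
op.load_chat()    # reload a previously saved conversation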
To set up multiple conversations or change the API key, follow the example provided in mychatgpt_trial.ipynb
op.display_assistants()
from mychatgpt import delamain
# Call an assistant simply by name
delamain.chat('your message',
gpt='gpt-4o',
max = 1000,
clip=True)
# N.B. assistants send their replies to the clipboard by default
The conversation history is kept in chat_thread. Use print(op.chat_thread) to show the conversation history, op.chat_thread.pop() to remove the last interaction, and op.send_message('clearchat') to start a new conversation.
When using op.send_message(), the code checks whether the total number of tokens exceeds the model's maximum context length (e.g., gpt-3.5-turbo-16k: 16,384 tokens). If it does, a warning indicates that the token limit has been reached, and the first part of the conversation is automatically deleted to make room for the next interaction.
Some other Python applications, executable from the terminal, that take advantage of the openai module's features:
- auto-gpt
- gpt-cli
- rec-whisper-paste
Written by: JohnDef64
This module only facilitates interaction with the OpenAI API and keeps track of those interactions. OpenAI holds all rights to ChatGPT.