
[Bug]: openai adapter does not work #576

Closed

CyprienRicque opened this issue Nov 26, 2023 · 6 comments

@CyprienRicque

CyprienRicque commented Nov 26, 2023

Current Behavior

The example in the README produces the error APIRemovedInV1.


Steps To Reproduce

!pip install openai gptcache
# example in the readme
from gptcache import cache
from gptcache.adapter import openai

cache.init()
cache.set_openai_key()

question = 'what is github'
answer = openai.ChatCompletion.create(
    model='gpt-3.5-turbo',
    messages=[
        {
            'role': 'user',
            'content': question
        }
    ],
)
print(answer)

try at: https://colab.research.google.com/drive/1TjA2plt9ZXLHIQVvZ763Nj6fzshYGSoN?usp=sharing
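For context, openai 1.x removed the module-level resource classes such as openai.ChatCompletion, which is what raises APIRemovedInV1 when 0.x-style code runs against the new library. Without the gptcache adapter, the plain openai 1.x equivalent of the call above looks roughly like this:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

answer = client.chat.completions.create(
    model='gpt-3.5-turbo',
    messages=[{'role': 'user', 'content': 'what is github'}],
)
print(answer.choices[0].message.content)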

Environment

Google colab
python 3.10.12
openai 1.3.5
gptcache 0.1.42

Anything else?

likely related to #570

@SimFG
Collaborator

SimFG commented Nov 27, 2023

Yes, as you can see from that issue, this is caused by the incompatible interface changes in the openai 1.x upgrade. Until gptcache is compatible with openai 1.x, you can use the get and put methods of gptcache to implement caching yourself.
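A minimal sketch of that workaround, assuming the module-level helpers exposed in gptcache.adapter.api (init_similar_cache, put, and get, as in gptcache 0.1.x):

from gptcache.adapter.api import init_similar_cache, put, get

# set up a local semantic cache; no openai call is involved
init_similar_cache(data_dir="api_cache")

# store an answer under a prompt, then read it back
put("what is github", "GitHub is a platform for hosting and reviewing code.")
print(get("what is github"))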

@CyprienRicque
Author

Ok, thank you for the direction! I'll implement it.

@judahkshitij

judahkshitij commented Dec 7, 2023

> Yes, as you can see from that issue, this is caused by the incompatible interface changes in the openai 1.x upgrade. Until gptcache is compatible with openai 1.x, you can use the get and put methods of gptcache to implement caching yourself.

@SimFG @CyprienRicque How can we create and then interact with a cache from a Python program without involving any LLM? I'm asking because I want to benchmark some cache settings in Python code without having to set up an LLM to interact with. I tried calling cache.put("Hi", "Hi back") but got the error AttributeError: 'Cache' object has no attribute 'put'. Is there a way to use the cache with just get and put in Python code (i.e., after creating and initializing the cache with settings such as the distance threshold directly in the code, rather than starting the cache as a server), without involving any LLM? Any help on this is appreciated.
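One way to benchmark settings in pure Python, sketched under the assumption that the cache.put call above should instead be the module-level put/get helpers in gptcache.adapter.api, and that Config accepts a similarity_threshold argument (the 0.8 value here is illustrative):

from gptcache import Config
from gptcache.adapter.api import init_similar_cache, put, get

# initialize a semantic cache in code (no server, no LLM), with a custom
# similarity threshold to experiment against
init_similar_cache(data_dir="bench_cache", config=Config(similarity_threshold=0.8))

put("what is github", "GitHub is a code hosting platform.")
# a semantically similar prompt should return the cached answer
# if it clears the threshold
print(get("what is GitHub used for?"))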

@judahkshitij

@SimFG Also, after I stop a running gptcache server to change some settings in the config yaml file (such as the distance/similarity threshold) and restart the server using the command "gptcache_server -s 127.0.0.1 -p 8000 -f gptcache_server_config.yaml", I get the following error:

start to install package: ruamel-yaml
successfully installed package: ruamel-yaml
Traceback (most recent call last):
  File "/mnt/nfshome/judah.kshitij/.conda/envs/alpaca-lora_env/bin/gptcache_server", line 8, in <module>
    sys.exit(main())
  File "/mnt/nfshome/judah.kshitij/.conda/envs/alpaca-lora_env/lib/python3.8/site-packages/gptcache_server/server.py", line 178, in main
    init_conf = init_similar_cache_from_config(config_dir=args.cache_config_file)
  File "/mnt/nfshome/judah.kshitij/.conda/envs/alpaca-lora_env/lib/python3.8/site-packages/gptcache/adapter/api.py", line 221, in init_similar_cache_from_config
    data_manager = manager_factory(**storage_config)
  File "/mnt/nfshome/judah.kshitij/.conda/envs/alpaca-lora_env/lib/python3.8/site-packages/gptcache/manager/factory.py", line 125, in manager_factory
    return get_data_manager(s, v, o, e)
  File "/mnt/nfshome/judah.kshitij/.conda/envs/alpaca-lora_env/lib/python3.8/site-packages/gptcache/manager/factory.py", line 200, in get_data_manager
    return SSDataManager(cache_base, vector_base, object_base, eviction_base, max_size, clean_size, eviction)
  File "/mnt/nfshome/judah.kshitij/.conda/envs/alpaca-lora_env/lib/python3.8/site-packages/gptcache/manager/data_manager.py", line 247, in __init__
    self.eviction_base.put(self.s.get_ids(deleted=False))
  File "/mnt/nfshome/judah.kshitij/.conda/envs/alpaca-lora_env/lib/python3.8/site-packages/gptcache/manager/eviction/memory_cache.py", line 59, in put
    self._cache[obj] = True
  File "/mnt/nfshome/judah.kshitij/.conda/envs/alpaca-lora_env/lib/python3.8/site-packages/cachetools/__init__.py", line 217, in __setitem__
    cache_setitem(self, key, value)
  File "/mnt/nfshome/judah.kshitij/.conda/envs/alpaca-lora_env/lib/python3.8/site-packages/cachetools/__init__.py", line 79, in __setitem__
    self.popitem()
  File "/mnt/nfshome/judah.kshitij/.conda/envs/alpaca-lora_env/lib/python3.8/site-packages/gptcache/manager/eviction/memory_cache.py", line 15, in wrapper
    wrapper_func(keys)
TypeError: 'NoneType' object is not callable

The only way the server starts is if I also change the cache dir in the yaml config file. How can I fix this? I am not sure why just changing the similarity threshold in the yaml config and restarting the same server would give the above error. Any insights on resolving this are appreciated.

@judahkshitij

> Yes, as you can see from that issue, this is caused by the incompatible interface changes in the openai 1.x upgrade. Until gptcache is compatible with openai 1.x, you can use the get and put methods of gptcache to implement caching yourself.

@SimFG Are you suggesting that, until the openai adapter becomes compatible with openai 1.x, we use the cache by starting the gptcache server and accessing it via the get and put methods? Any more detail on this would be appreciated.

@SimFG
Collaborator

SimFG commented Jan 8, 2024
