AttributeError: 'GPT4All' object has no attribute 'model_type' (langchain 0.0.190) #5720
Comments
Hey @christoph-daehne, I just downloaded the model and ran your code, but can't reproduce your error. For me the model loads just fine in langchain-0.0.190:

```shell
!pip install -U langchain
```

```python
from langchain.llms import GPT4All

# see https://gpt4all.io/models/ggml-mpt-7b-instruct.bin
llm = GPT4All(
    model="/home/sral/models/gpt4all/ggml-mpt-7b-instruct.bin",
    # backend='gptj',
    top_p=0.5,
    top_k=0,
    temp=0.1,
    repeat_penalty=0.8,
    n_threads=12,
    n_batch=16,
    n_ctx=2048)
```
I have the same issue as @christoph-daehne
When I change it directly from line 150 of
@christoph-daehne @dotandpixel I think I figured it out. After creating a new Python environment from scratch and running the test code, I was able to reproduce the issue. As it turns out, GPT4All's Python bindings, which Langchain's GPT4All LLM code wraps, have changed in a subtle way, but the change is as of yet unreleased. I am on the latest version from https://github.com/nomic-ai/gpt4all, which probably explains why I encountered the problem seen in #5651 to begin with. However, since the new code in GPT4All is unreleased, my fix has created a scenario where Langchain's GPT4All wrapper has become incompatible with the currently released version of GPT4All. I will submit another pull request to turn this into a backwards-compatible change. In the meantime, you should be able to get your code to run by upgrading to a newer version of GPT4All's Python bindings.

Further technical detail

The issue is with GPT4All and the semantics of its constructor. The currently released version of `class GPT4All` looks like this:

```python
class GPT4All():
    """Python API for retrieving and interacting with GPT4All models.

    Attributes:
        model: Pointer to underlying C model.
    """

    def __init__(self, model_name: str, model_path: str = None, model_type: str = None, allow_download=True):
        """
        Constructor

        Args:
            model_name: Name of GPT4All or custom model. Including ".bin" file extension is optional but encouraged.
            model_path: Path to directory containing model file or, if file does not exist, where to download model.
                Default is None, in which case models will be stored in `~/.cache/gpt4all/`.
            model_type: Model architecture to use - currently, options are 'llama', 'gptj', or 'mpt'. Only required if model
                is custom. Note that these models still must be built from llama.cpp or GPTJ ggml architecture.
                Default is None.
            allow_download: Allow API to download models from gpt4all.io. Default is True.
        """
        self.model = None

        # Model type provided for when model is custom
        if model_type:
            self.model = GPT4All.get_model_from_type(model_type)
        # Else get model from gpt4all model filenames
        else:
            self.model = GPT4All.get_model_from_name(model_name)
```

In contrast to this, the newer, unreleased version stores `model_type` on the instance:

```python
class GPT4All():
    """Python API for retrieving and interacting with GPT4All models.

    Attributes:
        model: Pointer to underlying C model.
    """

    def __init__(self, model_name: str, model_path: str = None, model_type: str = None, allow_download=True):
        """
        Constructor

        Args:
            model_name: Name of GPT4All or custom model. Including ".bin" file extension is optional but encouraged.
            model_path: Path to directory containing model file or, if file does not exist, where to download model.
                Default is None, in which case models will be stored in `~/.cache/gpt4all/`.
            model_type: Model architecture. This argument currently does not have any functionality and is just used as
                descriptive identifier for user. Default is None.
            allow_download: Allow API to download models from gpt4all.io. Default is True.
        """
        self.model_type = model_type
        self.model = pyllmodel.LLModel()

        # Retrieve model and download if allowed
        model_dest = self.retrieve_model(model_name, model_path=model_path, allow_download=allow_download)
        self.model.load_model(model_dest)
```
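The incompatibility can be illustrated with a small, self-contained sketch. The class names below are stand-ins for the two constructor behaviours, not the real gpt4all bindings: the released constructor consumes `model_type` without storing it, so wrapper code that reads `client.model_type` raises `AttributeError`, while the unreleased constructor keeps it as an instance attribute.

```python
# Stand-in classes mimicking the two constructor behaviours described above.
# Illustrative only - not the real gpt4all bindings.

class ReleasedGPT4All:
    """Released bindings: model_type is consumed but never stored."""
    def __init__(self, model_name, model_type=None):
        self.model = f"loaded:{model_name}"  # placeholder for the C model pointer

class UnreleasedGPT4All:
    """Unreleased bindings: model_type is kept as an instance attribute."""
    def __init__(self, model_name, model_type=None):
        self.model_type = model_type
        self.model = f"loaded:{model_name}"

old = ReleasedGPT4All("ggml-mpt-7b-instruct.bin")
new = UnreleasedGPT4All("ggml-mpt-7b-instruct.bin")

print(hasattr(new, "model_type"))  # True
print(hasattr(old, "model_type"))  # False: old.model_type raises AttributeError
```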
@bwv988 Awesome :) What a nasty-to-find little detail. Thank you very much, I will try it out.
I was able to make it work with:
Fixes #5720. A more in-depth discussion is in my comment here: #5720 (comment) In a nutshell, there has been a subtle change in the latest version of GPT4All's Python bindings. The change I submitted yesterday is compatible with that version; however, that version is as of yet unreleased, and thus the code change breaks Langchain's wrapper under the currently released version of GPT4All. This pull request proposes a backwards-compatible solution.
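One common backwards-compatible pattern (a sketch of the general idea, not the actual code from the pull request) is to read the attribute defensively, so the same wrapper works against both versions of the bindings. The helper name `resolve_model_type` and the two demo classes are hypothetical:

```python
# Defensive attribute access: works whether or not the bindings
# set model_type on the instance. Illustrative sketch only.

def resolve_model_type(client, fallback=None):
    """Return client.model_type if the bindings define it, else a fallback."""
    return getattr(client, "model_type", fallback)

class OldBindings:          # released behaviour: no model_type attribute
    pass

class NewBindings:          # unreleased behaviour: stores model_type
    def __init__(self):
        self.model_type = "mpt"

print(resolve_model_type(OldBindings()))   # None
print(resolve_model_type(NewBindings()))   # mpt
```

Using `getattr` with a default avoids the `AttributeError` entirely instead of pinning the wrapper to one release of the bindings.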
@bwv988 Thank you Ralph for the detailed and speedy response. I'll check it out.
In the wrapper code, I replaced the former snippet for this one, with this result:
…. (langchain-ai#5743) Fixes langchain-ai#5720. A more in-depth discussion is in my comment here: langchain-ai#5720 (comment) In a nutshell, there has been a subtle change in the latest version of GPT4All's Python bindings. The change I submitted yesterday is compatible with that version; however, that version is as of yet unreleased, and thus the code change breaks Langchain's wrapper under the currently released version of GPT4All. This pull request proposes a backwards-compatible solution.
System Info
Hi, this is related to #5651 but (on my machine ;) ) the issue is still there.
Versions
Who can help?
@pakcheera @bwv988 First of all: thanks for the report and the fix :). Did this issue disappear on your machines?
Information
Related Components
Reproduction
Error message
As you can see, gpt4all.py:156 contains the change from the fix of #5651.
Code
FYI I am following this example in a blog post.
Expected behavior
I expect an instance of GPT4All instead of a stack trace.