
Gemini configuration doesn't seem to work. #379

Closed
Bredansky opened this issue Sep 13, 2024 · 7 comments

Comments

@Bredansky

Stack trace from when the bot tries to upload a resume.

2024-09-13 18:48:20,157 - httpx - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 401 Unauthorized"
2024-09-13 18:48:20,158 - src.utils - ERROR - Failed to generate resume: 'LoggerChatModel' object has no attribute 'logger'
2024-09-13 18:48:20,160 - src.utils - ERROR - Traceback: Traceback (most recent call last):
  File "/Users/kyrylo/LinkedIn_AIHawk_automatic_job_application/virtual/lib/python3.12/site-packages/lib_resume_builder_AIHawk/gpt_resume_job_description.py", line 119, in __call__
    reply = self.llm(messages)
            ^^^^^^^^^^^^^^^^^^
  File "/Users/kyrylo/LinkedIn_AIHawk_automatic_job_application/virtual/lib/python3.12/site-packages/langchain_core/_api/deprecation.py", line 180, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/kyrylo/LinkedIn_AIHawk_automatic_job_application/virtual/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 1016, in __call__
    generation = self.generate(
                 ^^^^^^^^^^^^^^
  File "/Users/kyrylo/LinkedIn_AIHawk_automatic_job_application/virtual/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 634, in generate
    raise e
  File "/Users/kyrylo/LinkedIn_AIHawk_automatic_job_application/virtual/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 624, in generate
    self._generate_with_cache(
  File "/Users/kyrylo/LinkedIn_AIHawk_automatic_job_application/virtual/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 846, in _generate_with_cache
    result = self._generate(
             ^^^^^^^^^^^^^^^
  File "/Users/kyrylo/LinkedIn_AIHawk_automatic_job_application/virtual/lib/python3.12/site-packages/langchain_openai/chat_models/base.py", line 589, in _generate
    response = self.client.create(**payload)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/kyrylo/LinkedIn_AIHawk_automatic_job_application/virtual/lib/python3.12/site-packages/openai/_utils/_utils.py", line 277, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/kyrylo/LinkedIn_AIHawk_automatic_job_application/virtual/lib/python3.12/site-packages/openai/resources/chat/completions.py", line 646, in create
    return self._post(
           ^^^^^^^^^^^
  File "/Users/kyrylo/LinkedIn_AIHawk_automatic_job_application/virtual/lib/python3.12/site-packages/openai/_base_client.py", line 1266, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/kyrylo/LinkedIn_AIHawk_automatic_job_application/virtual/lib/python3.12/site-packages/openai/_base_client.py", line 942, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/Users/kyrylo/LinkedIn_AIHawk_automatic_job_application/virtual/lib/python3.12/site-packages/openai/_base_client.py", line 1046, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: AIzaSyAU***************************YiKY. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/kyrylo/LinkedIn_AIHawk_automatic_job_application/src/linkedIn_easy_applier.py", line 428, in _create_and_upload_resume
    resume_pdf_base64 = self.resume_generator_manager.pdf_base64(job_description_text=job.description)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/kyrylo/LinkedIn_AIHawk_automatic_job_application/virtual/lib/python3.12/site-packages/lib_resume_builder_AIHawk/manager_facade.py", line 78, in pdf_base64
    self.resume_generator.create_resume_job_description_text(style_path, job_description_text, temp_html_path)
  File "/Users/kyrylo/LinkedIn_AIHawk_automatic_job_application/virtual/lib/python3.12/site-packages/lib_resume_builder_AIHawk/resume_generator.py", line 37, in create_resume_job_description_text
    gpt_answerer.set_job_description_from_text(job_description_text)
  File "/Users/kyrylo/LinkedIn_AIHawk_automatic_job_application/virtual/lib/python3.12/site-packages/lib_resume_builder_AIHawk/gpt_resume_job_description.py", line 247, in set_job_description_from_text
    output = chain.invoke({"text": job_description_text})
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/kyrylo/LinkedIn_AIHawk_automatic_job_application/virtual/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2878, in invoke
    input = context.run(step.invoke, input, config)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/kyrylo/LinkedIn_AIHawk_automatic_job_application/virtual/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 4474, in invoke
    return self._call_with_config(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/kyrylo/LinkedIn_AIHawk_automatic_job_application/virtual/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1785, in _call_with_config
    context.run(
  File "/Users/kyrylo/LinkedIn_AIHawk_automatic_job_application/virtual/lib/python3.12/site-packages/langchain_core/runnables/config.py", line 398, in call_func_with_variable_args
    return func(input, **kwargs)  # type: ignore[call-arg]
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/kyrylo/LinkedIn_AIHawk_automatic_job_application/virtual/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 4330, in _invoke
    output = call_func_with_variable_args(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/kyrylo/LinkedIn_AIHawk_automatic_job_application/virtual/lib/python3.12/site-packages/langchain_core/runnables/config.py", line 398, in call_func_with_variable_args
    return func(input, **kwargs)  # type: ignore[call-arg]
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/kyrylo/LinkedIn_AIHawk_automatic_job_application/virtual/lib/python3.12/site-packages/lib_resume_builder_AIHawk/gpt_resume_job_description.py", line 133, in __call__
    self.logger.error(f"Unexpected error occurred: {str(e)}, retrying in {retry_delay} seconds... (Attempt {attempt + 1}/{max_retries})")
    ^^^^^^^^^^^
AttributeError: 'LoggerChatModel' object has no attribute 'logger'

2024-09-13 18:48:20,160 - src.utils - ERROR - Failed to find form elements: 'LoggerChatModel' object has no attribute 'logger'

My config.yaml

llm_model_type: gemini
llm_model: models/gemini-1.5-flash
llm_api_url: ""

It seems that despite having Gemini set up, the bot still tries to call the OpenAI API. I would appreciate any help. Thanks.
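
For context on the second error in the trace: the AttributeError means the library's retry handler references self.logger, which is apparently never assigned, so the real 401 from OpenAI gets hidden. A minimal sketch of what the wrapper presumably looks like and where the missing attribute would go (only the class name and the self.llm(messages) / self.logger.error calls are taken from the traceback; the rest is an assumption, not the library's actual code):

import logging

class LoggerChatModel:
    def __init__(self, llm):
        self.llm = llm
        # Assumed fix: without this assignment, the except branch below
        # raises AttributeError and masks the original API error.
        self.logger = logging.getLogger(__name__)

    def __call__(self, messages):
        try:
            # This is the call that returns 401 in the posted trace.
            return self.llm(messages)
        except Exception as e:
            # In the installed version this line fails because
            # self.logger does not exist.
            self.logger.error(f"Unexpected error occurred: {e}")
            raise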

@feder-cr
Collaborator

@Bredansky I tried Gemini, but it doesn't seem to work, through no fault of our own: the Gemini API doesn't return answers for questions whose expected answers are too short.

@feder-cr reopened this Sep 13, 2024
@feder-cr closed this as not planned Sep 13, 2024
@bmabir17

bmabir17 commented Oct 7, 2024

I am also getting this error, but using Ollama instead of OpenAI or Gemini. Is Ollama supported by this resume generator?

@erperejildo

erperejildo commented Oct 9, 2024

I'm not sure if I'm doing something wrong, but following the instructions, I understand I can use either a paid OpenAI (GPT) key or a free Gemini key:

llm_api_key: [Your OpenAI or Ollama API key or Gemini API key]
Replace with your OpenAI API key for GPT integration
To obtain an API key, follow the tutorial at: https://medium.com/@lorenzozar/how-to-get-your-own-openai-api-key-f4d44e60c327
Note: You need to add credit to your OpenAI account to use the API. You can add credit by visiting the OpenAI billing dashboard.
According to the OpenAI community and our users' reports, right after setting up the OpenAI account and purchasing the required credits, users still have a Free account type. This prevents them from having unlimited access to OpenAI models and allows only 200 requests per day. This might cause runtime errors such as:
Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. ...}}
{'error': {'message': 'Rate limit reached for gpt-4o-mini in organization on requests per day (RPD): Limit 200, Used 200, Requested 1.}}
OpenAI will update your account automatically, but it might take some time, ranging from a couple of hours to a few days.
You can find more about your organization limits on the official page.
To obtain a Gemini API key, visit Google AI for Devs

I put the API key I got from Gemini in secrets.yaml, but sometimes I get this error (and it points only to OpenAI):

Unexpected error occurred: Error code: 401 - {'error': {'message': 'Incorrect API key provided: AIzaSyBh***************************wtMQ. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

If I use the GPT key instead, it works, but I soon get the "You exceeded your current quota" message.

@bmabir17

@feder-cr This issue is the root cause of this error: AIHawk-FOSS/lib_resume_builder_AIHawk#38
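
Judging by the 401 above (a Gemini key being sent to api.openai.com), the resume builder appears to create an OpenAI client regardless of llm_model_type. A hedged sketch of the kind of dispatch that would be needed; this is not the library's actual code, and ChatGoogleGenerativeAI comes from the separate langchain-google-genai package:

from langchain_openai import ChatOpenAI
from langchain_google_genai import ChatGoogleGenerativeAI

def build_llm(model_type: str, model_name: str, api_key: str):
    # Hypothetical helper: select the chat client that matches
    # llm_model_type instead of always defaulting to OpenAI.
    if model_type == "gemini":
        return ChatGoogleGenerativeAI(model=model_name, google_api_key=api_key)
    return ChatOpenAI(model=model_name, api_key=api_key)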

@cabatchi

I also have this error. Is there a way to modify the files of the current main build so that Gemini can be used?

@arhabhasan

arhabhasan commented Oct 28, 2024

To use a Gemini API key:

llm_model_type: gemini
llm_model: 'gemini-1.5-flash'
#llm_api_url: 'https://api.pawan.krd/cosmosrp/v1' <- Comment this line by adding #

Note:
As mentioned in the README, llm_api_url is not required for a Gemini API key.
The available llm_model values are listed at https://ai.google.dev/gemini-api/docs/models/gemini
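
For completeness, a sketch of the two files as described in this thread (file names and keys follow the project's README; the key value is a placeholder, not a real key):

# config.yaml
llm_model_type: gemini
llm_model: 'gemini-1.5-flash'
# llm_api_url is not needed for Gemini

# secrets.yaml
llm_api_key: 'AIza...your-gemini-key-here'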

@cabatchi

To use a Gemini API key:

llm_model_type: gemini
llm_model: 'gemini-1.5-flash'
#llm_api_url: 'https://api.pawan.krd/cosmosrp/v1' <- Comment this line by adding #

Note: As mentioned in the README, llm_api_url is not required for a Gemini API key. The available llm_model values are listed at https://ai.google.dev/gemini-api/docs/models/gemini

This still results in the same LoggerChatModel error.
