
Support for GROQ API #65

Closed
walloutlet opened this issue Sep 15, 2024 · 7 comments

@walloutlet

walloutlet commented Sep 15, 2024

Re-opening this feature request from #57.

Since I'm not overly experienced with pull requests and programming in Home Assistant, I'll put what I've learnt and done here. This issue/feature has been solved and I now have Groq Vision LLM working in Home Assistant.

Groq endpoint used in Home Assistant: 'https://api.groq.com/openai/v1/chat/completions'
LLM Vision version: 1.1.1
Home Assistant Core version: 2024.8.3
Home Assistant Supervisor: 2024.09.1
Home Assistant OS: 13.1

Two issues to resolve:

Issue 1

<config_flow.py>
The validation fails for Groq because its endpoint, 'https://api.groq.com/openai/v1/chat/completions', includes 'openai' in the URL path, and the validation code lobs that path segment off. Since I am only using Groq in my Custom OpenAI setup, I fixed the issue for myself by adding '/openai' to the endpoint variable in the custom_openai handler on line 144:

endpoint = "/openai/v1/models"

I will note that it is absolutely imperative to put the '/' at the beginning, as the split logic removes it.

The proper resolution would be to rework the split operations or simply create a dedicated Groq function. This is really why I didn't open a pull request: my change would obviously break validation for other custom API endpoints, and whenever I played around with the variables, Home Assistant spat out errors like "Could not parse endpoint: cannot access local variable 'variable_name' where it is not associated with a value".
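As a sketch of what that rework might look like (a hypothetical helper, not the integration's actual code), the endpoint could be normalized with `urllib.parse` so whatever path the user supplies, including Groq's '/openai' prefix, is preserved instead of being split off:

```python
from urllib.parse import urlparse

def build_models_url(base_url: str) -> str:
    """Hypothetical sketch: keep whatever path the user supplied
    (e.g. Groq's '/openai') instead of splitting it off, then
    append the models route used for validation."""
    parsed = urlparse(base_url)
    # Strip a trailing slash so we never produce '//v1/models'
    path = parsed.path.rstrip("/")
    return f"{parsed.scheme}://{parsed.netloc}{path}/v1/models"

# Groq keeps its '/openai' prefix:
print(build_models_url("https://api.groq.com/openai"))
# -> https://api.groq.com/openai/v1/models

# A plain custom endpoint still works:
print(build_models_url("https://example.com"))
# -> https://example.com/v1/models
```

This would also remove the need to remember the leading '/' mentioned above, since the user's path is taken as-is.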

Issue 2

<request_handlers.py>
The order in which the text parts are added to the JSON payload matters: the prompt must be placed first, then any other text values such as the tags, and then the image data. Below is the updated function that I have loaded into my Home Assistant.

    async def openai(self, model, api_key, endpoint=ENDPOINT_OPENAI):
        # Set headers and payload
        headers = {'Content-type': 'application/json',
                   'Authorization': 'Bearer ' + api_key}
        data = {"model": model,
                "messages": [{"role": "user", "content": []}],
                "max_tokens": self.max_tokens,
                "temperature": self.temperature
                }

        # Append the message (prompt) first; it must come before
        # the image data for Groq to accept the request
        data["messages"][0]["content"].append(
            {"type": "text", "text": self.message}
        )

        # Add a tag and the image data for each image, after the prompt.
        # enumerate() avoids the subtle bug of list.index(), which
        # returns the first match when two images are identical.
        for i, (image, filename) in enumerate(zip(self.base64_images, self.filenames), start=1):
            tag = f"Image {i}" if filename == "" else filename
            data["messages"][0]["content"].append(
                {"type": "text", "text": tag + ":"})
            data["messages"][0]["content"].append(
                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image}", "detail": self.detail}})

        response = await self._post(
            url=endpoint, headers=headers, data=data)

        response_text = response.get(
            "choices")[0].get("message").get("content")
        return response_text

I haven't tested whether this works with OpenAI directly because, well, I'm cheap and don't have an API key to use, so I'll defer to others to test. Assuming it works fine with OpenAI, the resolution is simply to move that block of code ahead of the images.
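To make the ordering requirement concrete, here is a minimal standalone sketch (plain Python, no Home Assistant imports; `build_content` is a made-up name) that assembles the content list the same way the function above does and shows the prompt landing first:

```python
def build_content(message, base64_images, filenames, detail="auto"):
    """Minimal sketch of the payload assembly: prompt first,
    then tag/image pairs, which is the order Groq requires."""
    content = [{"type": "text", "text": message}]
    for i, (image, filename) in enumerate(zip(base64_images, filenames), start=1):
        tag = filename if filename else f"Image {i}"
        content.append({"type": "text", "text": tag + ":"})
        content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{image}", "detail": detail},
        })
    return content

content = build_content("Describe the scene.", ["abc123"], [""])
print([part["type"] for part in content])
# -> ['text', 'text', 'image_url']
```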

FYI... Groq does not support sending multiple images in a single request, so the code may need to go in its own function that limits the images to one. Assuming this will change at some point; it's using an older version of LLaVA right now.
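A guard for that could look something like this (a hypothetical helper, assuming the one-image limit described above; the function name is made up):

```python
def limit_images_for_groq(base64_images, filenames):
    """Hypothetical guard: Groq's vision endpoint currently accepts
    only one image per request, so truncate both lists in step and
    warn when extra images are dropped."""
    if len(base64_images) > 1:
        print(f"Groq supports a single image; dropping "
              f"{len(base64_images) - 1} extra image(s)")
    return base64_images[:1], filenames[:1]

images, names = limit_images_for_groq(
    ["img1", "img2", "img3"], ["front door", "", ""])
print(images, names)
# -> ['img1'] ['front door']
```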

For anyone trying to mod the files directly in Home Assistant, don't forget to restart Home Assistant for the changes to take effect.

Hope this helps.

@walloutlet
Author

Tagging @johannsky to this Feature Request.

@valentinfrlch
Owner

Thanks for your detailed description and proposed changes. I will look into this!

valentinfrlch added a commit that referenced this issue Sep 15, 2024
@valentinfrlch
Owner

Got it to work! A beta is out (v1.1.3-beta.2) in case you have some time to test it.

@walloutlet
Author

> Got it to work! A beta is out (v1.1.3-beta.2) in case you have some time to test it.

Will take a look and let you know. Currently Home Assistant is busy, weekend = everyone home, therefore no touchy the automations. LOL!

@walloutlet
Author

walloutlet commented Sep 27, 2024

Finally had a chance to sit down and spend some time on this. I've been able to use my Ollama build and OpenAI, and switched from Custom OpenAI to Groq with no issues. Runs just as it should in my testing. Appreciate the fast turnaround.

PS... I am currently using v1.1.3-beta3

@walloutlet
Author

walloutlet commented Sep 27, 2024

Closing this request as it has been resolved with the latest version.

@valentinfrlch
Owner

Thanks for testing! Will merge this into main then.
