
feat: inference-mistral-extension #2072

Closed
0xSage opened this issue Feb 18, 2024 · 6 comments · Fixed by #2569
Labels: good first issue (Good for newcomers) · type: feature request (A new feature)

Comments


0xSage commented Feb 18, 2024

Problem

We ❤️ mistral

Success Criteria

  • Similar to the OpenAI remote API extension
  • Let's have a Mistral remote API


0xSage added the "good first issue" and "type: feature request" labels on Feb 18, 2024

hiro-v commented Feb 19, 2024

Mistral exposes OpenAI-compatible APIs for chat/completion and embeddings.
We can support this with integration docs; we shouldn't need to create another extension.
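
For context, Mistral's endpoint can be driven by the stock `openai` Node client just by overriding the base URL. A minimal sketch; the base URL and model id below are assumptions for illustration, not verified against Mistral's docs:

```typescript
// Minimal sketch: calling Mistral through its OpenAI-compatible
// chat/completions endpoint using the official `openai` Node client.
// Base URL and model id are illustrative assumptions.
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.MISTRAL_API_KEY,
  baseURL: "https://api.mistral.ai/v1", // assumed OpenAI-compatible endpoint
});

const completion = await client.chat.completions.create({
  model: "mistral-small", // illustrative model id
  messages: [{ role: "user", content: "Hello, Mistral!" }],
});

console.log(completion.choices[0].message.content);
```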

However, I think this is a good time to think through and add a remote API provider abstraction, since users may need to configure multiple providers and switch between them. From what I've observed, almost all providers (whether they already have an OpenAI-compatible API or not) will eventually add OpenAI-compatible API support. See the sketch after the list below.

We can focus on OpenAI compatible API providers first:

  • LMStudio, Nitro, Ollama
  • OpenAI platform, Azure OpenAI, Mistral, Openrouter, Anyscale, Together.ai, Lepton.ai
  • vLLM, etc
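
To make the abstraction concrete, here is a hedged sketch of what such a provider interface might look like. All names here (ProviderConfig, RemoteProvider, etc.) are hypothetical and are not Jan's actual extension API:

```typescript
// Hypothetical sketch of a remote provider abstraction. None of these
// names come from Jan's codebase; they only illustrate configuring
// multiple OpenAI-compatible providers and switching between them.
interface ProviderConfig {
  id: string;          // e.g. "openai", "mistral", "azure-openai"
  baseURL: string;     // OpenAI-compatible endpoint
  apiKey: string;
  defaultModel: string;
}

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface RemoteProvider {
  readonly config: ProviderConfig;
  chat(messages: ChatMessage[], model?: string): Promise<string>;
}

// One generic implementation covers every OpenAI-compatible provider;
// only the config differs.
class OpenAICompatibleProvider implements RemoteProvider {
  constructor(readonly config: ProviderConfig) {}

  async chat(messages: ChatMessage[], model = this.config.defaultModel): Promise<string> {
    const res = await fetch(`${this.config.baseURL}/chat/completions`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${this.config.apiKey}`,
      },
      body: JSON.stringify({ model, messages }),
    });
    if (!res.ok) throw new Error(`Provider ${this.config.id} returned ${res.status}`);
    const data = await res.json();
    return data.choices[0].message.content;
  }
}
```

Switching providers then amounts to swapping the config, which would also cover the Mistral case without a dedicated extension.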


louis-jan commented Feb 19, 2024

(Quoting @hiro-v's comment above.)

Is this the one I mentioned earlier for you to refactor, @hiro-v? Would you like to take ownership of this issue?


hiro-v commented Feb 19, 2024

I can pick this up after my leave; I'll work on it if it hasn't been picked up by March 1st.
But I think @hieu-jan has been working on the documentation for the integration and is trying to work in the Jan App pod, so it's a good issue for him to follow.

louis-jan commented

This is part of the Inference Provider revamp epic.

Inchoker self-assigned this on Apr 2, 2024
Van-QA added this to the v0.4.11 milestone on Apr 3, 2024
louis-jan reopened this on Apr 9, 2024

Van-QA commented Apr 9, 2024

Hi @Inchoker, the invalid API key case for the Mistral extension is throwing the wrong error ❌

[screenshot]

Expectation ✅:
Similar to the other extensions, it should highlight the invalid API key: "Invalid API key. Please check your API key from Settings and try again."

[screenshot]

See related PR: #2645
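
For reference, a hedged sketch of the kind of mapping the fix might involve: detect an invalid-key (401) response from the provider and surface the standard message. All names here are illustrative, not Jan's actual code:

```typescript
// Illustrative sketch only: map an invalid-API-key response from the
// provider to the standard user-facing message other extensions show.
const INVALID_API_KEY_MESSAGE =
  "Invalid API key. Please check your API key from Settings and try again.";

async function chatWithMistral(apiKey: string, body: unknown): Promise<unknown> {
  const res = await fetch("https://api.mistral.ai/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(body),
  });
  if (res.status === 401) {
    // Surface the same friendly message the other extensions use.
    throw new Error(INVALID_API_KEY_MESSAGE);
  }
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  return res.json();
}
```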


Van-QA commented Apr 11, 2024

[screenshot]
Looking good on Jan v0.4.10-369 ✅

@Van-QA Van-QA closed this as completed Apr 11, 2024
6 participants