
feature: local-ai subcommands/separate CLI to operate the API #816

Open
1 of 4 tasks
mudler opened this issue Jul 26, 2023 · 8 comments · Fixed by #1165
Assignees
Labels
enhancement New feature or request good first issue Good for newcomers

Comments

@mudler
Owner

mudler commented Jul 26, 2023

I think it would be quite useful to have subcommands in the CLI that act as a client to the API, allowing you to, e.g.:

  • Install/setup models with the gallery
  • Run inference quickly (testing)
  • TTS
  • Text to audio

Another good first issue for those who want to start contributing to the project.
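As a starting point for a would-be contributor, the subcommand surface above could be sketched like this (subcommand and flag names here are illustrative guesses, not the interface that was eventually implemented in Go):

```python
import argparse

def build_parser():
    """Sketch of a possible subcommand layout for the local-ai CLI.
    All names here are hypothetical, chosen only to mirror the
    bullet list in the issue: models (gallery), run, tts."""
    parser = argparse.ArgumentParser(prog="local-ai")
    sub = parser.add_subparsers(dest="command", required=True)

    models = sub.add_parser("models", help="manage models from the gallery")
    models.add_argument("action", choices=["list", "install"])
    models.add_argument("name", nargs="?", help="model to install")

    tts = sub.add_parser("tts", help="text to speech")
    tts.add_argument("text")

    return parser

args = build_parser().parse_args(["models", "install", "bert-embeddings"])
print(args.command, args.action, args.name)
```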

@mudler mudler added enhancement New feature or request good first issue Good for newcomers labels Jul 26, 2023
@mudler mudler self-assigned this Jul 26, 2023
@localai-bot
Contributor

⚠️⚠️⚠️⚠️⚠️

Hi! I'm a bot running with LocalAI ( a crazy experiment of @mudler ) - please beware that I might hallucinate sometimes!

but... I can also be funny or helpful 😸 and I can generally provide good tips, or point to places in the documentation or code worth looking at, based on what you wrote in the issue.

Don't engage in conversation with me,
I don't support (yet) replying!

⚠️⚠️⚠️⚠️⚠️

Sure, I can help you with that! Here are some possible solutions or steps to take to address the issue:

  1. Installing a model from the repositories:
    • Use the /models/apply LocalAI API endpoint to install the model in runtime. You will need to provide the name and URL of the model repository, as well as any additional configuration files you may need.
    • If you don't know the exact name or URL of the model repository, you can use the /models/list LocalAI API endpoint to list all available repositories and their details.
  2. Installing a model with a different name:
    • Use the /models/apply LocalAI API endpoint to install the model with a different name. You will need to provide the new name and any additional configuration files you may need.
  3. Overriding configuration files:
    • Use the /models/apply LocalAI API endpoint to override the default configuration files for a specific model. You will need to provide the name of the model and the path to the new configuration file.
  4. Examples:
    • Check out the examples section in the documentation for code snippets and tutorials on how to use the LocalAI API with different models and tasks.
  5. Embeddings:
    • Use the /models/apply LocalAI API endpoint to install the model and provide the name of the desired embedding. You can also use the /models/list LocalAI API endpoint to list all available embeddings and their details.
  6. Image generation:
    • Use the /models/apply LocalAI API endpoint to install the model and provide the name and URL of the image generator you want to use. You can also use the /models/list LocalAI API endpoint to list all available image generators and their details.
  7. Audio transcription:
    • Use the /models/apply LocalAI API endpoint to install the model and provide the name and URL of the audio transcription model you want to use. You can also use the /models/list LocalAI API endpoint to list all available audio transcription models and their details.
  8. GPTs:
    • Use the /models/apply LocalAI API endpoint to install the model and provide the name and URL of the GPT model you want to use. You can also use the /models/list LocalAI API endpoint to list all available GPT models and their details.
  9. ChatGPT OSS API:
    • Use the /models/apply LocalAI API endpoint to install the model and provide the name and URL of the ChatGPT OSS API you want to use. You can also use the /models/list LocalAI API endpoint to list all available ChatGPT OSS APIs and their details.
  10. Binary releases:
    • Use the /models/apply LocalAI API endpoint to install the model and provide the name and URL of the binary release you want to use. You can also use the /models/list LocalAI API endpoint to list all available binary releases and their details.
  11. GPU/CUDA support:
    • Use the /models/apply LocalAI API endpoint to install the model and provide the name and URL of the GPU/CUDA version you want to use. You can also use the /models/list LocalAI API endpoint to list all available GPU/CUDA versions and their details.
  12. Custom models:
    • Use the /models/apply LocalAI API endpoint to install the model and provide the name and URL of the custom model you want to use. You can also use the /models/list LocalAI API endpoint to list all available custom models and their details.

I hope this helps! Let me know if you have any further questions or issues.
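The /models/apply endpoint the bot keeps pointing at takes a JSON body. A minimal sketch of building one in Python; the field names ("id", "url", "name", "overrides") follow the LocalAI model-gallery documentation, but treat the exact schema as an assumption to verify against your server version:

```python
import json

def apply_model_body(model_id=None, url=None, name=None, overrides=None):
    """Build the JSON body for a POST to /models/apply.
    Field names follow the LocalAI model-gallery docs; verify them
    against the version of the server you are running."""
    body = {}
    if model_id is not None:
        body["id"] = model_id          # e.g. "model-gallery@bert-embeddings"
    if url is not None:
        body["url"] = url              # direct URL to a model config file
    if name is not None:
        body["name"] = name            # install the model under a different name
    if overrides is not None:
        body["overrides"] = overrides  # override fields of the model config
    return json.dumps(body)

print(apply_model_body(model_id="model-gallery@bert-embeddings", name="my-bert"))
```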


@mudler mudler changed the title feature: local-ai subcommands to operate the API feature: local-ai subcommands/separate CLI to operate the API Jul 27, 2023
@jespino
Contributor

jespino commented Oct 11, 2023

I want to take a look into this :)

mudler pushed a commit that referenced this issue Oct 12, 2023
…#1165)

Adding subcommands to do certain actions directly from the command line.
I'm starting with the models subcommand allowing you to list models from
your galleries and install them.

This PR partially fixes #816

My intention is to keep adding other subcommands, but I think this is a
good start, and I think this already provides value.

Also, I added a new dependency to generate the progress bar in the command line. It is not strictly needed, but I think it is a nice-to-have for a cooler interface.

Here is a screenshot:

![imagen](https://github.com/go-skynet/LocalAI/assets/290303/8d8c1bf0-5340-46ce-9362-812694f914cd)
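The PR pulls in a Go progress-bar library for the install step; as a language-agnostic sketch of the idea, here is how a one-line terminal progress bar can be rendered and redrawn in place (this is an illustration, not the code from the PR):

```python
import sys

def render_bar(done, total, width=30):
    """Render a one-line text progress bar, e.g. [###---]  50.0%.
    Sketch only: the actual PR uses a Go progress-bar dependency."""
    filled = int(width * done / total)
    pct = 100 * done / total
    return "[{}{}] {:5.1f}%".format("#" * filled, "-" * (width - filled), pct)

# Overwrite the same terminal line as a download progresses:
for step in range(0, 101, 25):
    sys.stdout.write("\r" + render_bar(step, 100))
sys.stdout.write("\n")
```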
@jespino
Contributor

jespino commented Oct 12, 2023

@mudler We can reopen it to add other subcommands, or add separate tickets for each subcommand that you want there.

@mudler
Owner Author

mudler commented Oct 12, 2023

GitHub automation... maybe better to keep this open to track it and create sub-items.

@mudler mudler reopened this Oct 12, 2023
@jespino
Contributor

jespino commented Oct 13, 2023

By the way, I'm not doing "API calls to a running localAI", I'm adding a command line interface that skips the server process entirely.

@mudler
Owner Author

mudler commented Oct 13, 2023

Gotcha. Maybe it makes sense to have a local-ai client <> subcommand as well, in order to run against APIs. I see that as especially helpful for loading models from galleries.

@jespino
Contributor

jespino commented Oct 13, 2023

Actually, I see a lot of value in having the API-based client separated from the main binary: compiling the main binary is hard, but compiling a tiny client library plus a command line that, at the end of the day, is just a bunch of HTTP requests should be fairly easy.

I totally see people running LocalAI using Docker Compose and accessing it from the command line using something like localai-cli.
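The thin client described above really is just a few HTTP wrappers. A minimal sketch, assuming a LocalAI server at localhost:8080; the /models/apply path comes from this thread and /v1/models from the OpenAI-compatible API, but check the docs for your server version:

```python
import json
import urllib.request

class LocalAIClient:
    """Minimal sketch of the thin HTTP client discussed above.
    Endpoint paths are assumptions based on the thread and the
    OpenAI-compatible API; verify against the LocalAI docs."""

    def __init__(self, base_url="http://localhost:8080"):
        self.base_url = base_url.rstrip("/")

    def _request(self, method, path, body=None):
        data = json.dumps(body).encode() if body is not None else None
        req = urllib.request.Request(
            self.base_url + path, data=data, method=method,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def apply_model(self, model_id):
        # Ask the server to install a model from a gallery.
        return self._request("POST", "/models/apply", {"id": model_id})

    def list_models(self):
        # OpenAI-compatible model listing.
        return self._request("GET", "/v1/models")
```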

mudler pushed a commit that referenced this issue Oct 14, 2023
This PR adds the tts (Text to Speech) command to the localai binary.

This PR is related to the issue #816
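On the server side, the counterpart of a `tts` subcommand is a request to the /tts endpoint. A sketch of the JSON body; the field names ("model", "input") and the example voice name follow the LocalAI docs, but treat them as assumptions to verify:

```python
import json

def tts_request_body(text, model):
    """Body for a POST to /tts on a running LocalAI server.
    Field names follow the LocalAI documentation; the voice model
    used below in the usage line is only an example."""
    return json.dumps({"model": model, "input": text})

print(tts_request_body("Hello from the CLI", "en-us-kathleen-low.onnx"))
```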
mudler pushed a commit that referenced this issue Oct 15, 2023
Adding the transcript subcommand to the localai binary

This PR is related to #816
@mudler
Owner Author

mudler commented Oct 15, 2023

Right! That could also sit nearby here with a separate make target, or live in a separate repository... I have no strong opinion here; I'm fine with both.

3 participants