Commit: Site Clean up - How to Clean up (#1342)
* Create easy-request.md

Signed-off-by: lunamidori5 <118759930+lunamidori5@users.noreply.github.com>

* Update easy-request.md
* Update easy-request.md
* Update easy-request.md
* Update easy-request.md
* Update easy-request.md
* Update easy-request-curl.md
* Update easy-request-openai-v0.md
* Update easy-request-openai-v1.md
* Update easy-request.md
* Delete docs/content/howtos/easy-request-openai-v1.md
* Delete docs/content/howtos/easy-request-openai-v0.md
* Delete docs/content/howtos/easy-request-curl.md
* Update and rename easy-model-import-downloaded.md to easy-model.md
* Update _index.md
* Update easy-setup-docker-cpu.md
* Update easy-setup-docker-gpu.md
* Update easy-setup-docker-gpu.md
* Update easy-setup-docker-cpu.md
* Delete docs/content/howtos/autogen-setup.md
* Update _index.md
* Delete docs/content/howtos/easy-request-autogen.md
* Update easy-model.md
* Update _index.en.md
* Update _index.en.md
* Update _index.en.md
* Update _index.en.md
* Update _index.md

---------

Signed-off-by: lunamidori5 <118759930+lunamidori5@users.noreply.github.com>
lunamidori5 authored Dec 1, 2023
1 parent 2b2007a commit 6b312a8
Showing 14 changed files with 102 additions and 230 deletions.
2 changes: 1 addition & 1 deletion docs/content/faq/_index.en.md
@@ -14,7 +14,7 @@ Here are answers to some of the most common questions.

<details>

-Most ggml-based models should work, but newer models may require additions to the API. If a model doesn't work, please feel free to open up issues. However, be cautious about downloading models from the internet and directly onto your machine, as there may be security vulnerabilities in llama.cpp or ggml that could be maliciously exploited. Some models can be found on Hugging Face: https://huggingface.co/models?search=ggml, or models from gpt4all are compatible too: https://github.com/nomic-ai/gpt4all.
+Most gguf-based models should work, but newer models may require additions to the API. If a model doesn't work, please feel free to open up issues. However, be cautious about downloading models from the internet and directly onto your machine, as there may be security vulnerabilities in llama.cpp or ggml that could be maliciously exploited. Some models can be found on Hugging Face: https://huggingface.co/models?search=gguf, or models from gpt4all are compatible too: https://github.com/nomic-ai/gpt4all.

</details>

10 changes: 5 additions & 5 deletions docs/content/getting_started/_index.en.md
@@ -26,7 +26,7 @@ To run with GPU Acceleration, see [GPU acceleration]({{%relref "features/gpu-ac
mkdir models

# copy your models to it
-cp your-model.bin models/
+cp your-model.gguf models/

# run the LocalAI container
docker run -p 8080:8080 -v $PWD/models:/models -ti --rm quay.io/go-skynet/local-ai:latest --models-path /models --context-size 700 --threads 4
@@ -43,7 +43,7 @@ docker run -p 8080:8080 -v $PWD/models:/models -ti --rm quay.io/go-skynet/local-

# Try the endpoint with curl
curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
-  "model": "your-model.bin",
+  "model": "your-model.gguf",
"prompt": "A long time ago in a galaxy far, far away",
"temperature": 0.7
}'
@@ -67,7 +67,7 @@ cd LocalAI
# git checkout -b build <TAG>

# copy your models to models/
-cp your-model.bin models/
+cp your-model.gguf models/

# (optional) Edit the .env file to set things like context size and threads
# vim .env
@@ -79,10 +79,10 @@ docker compose up -d --pull always

# Now API is accessible at localhost:8080
curl http://localhost:8080/v1/models
-# {"object":"list","data":[{"id":"your-model.bin","object":"model"}]}
+# {"object":"list","data":[{"id":"your-model.gguf","object":"model"}]}

curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
-  "model": "your-model.bin",
+  "model": "your-model.gguf",
"prompt": "A long time ago in a galaxy far, far away",
"temperature": 0.7
}'
8 changes: 2 additions & 6 deletions docs/content/howtos/_index.md
@@ -10,14 +10,10 @@ This section includes LocalAI end-to-end examples, tutorials and how-tos curated

- [Setup LocalAI with Docker on CPU]({{%relref "howtos/easy-setup-docker-cpu" %}})
- [Setup LocalAI with Docker With CUDA]({{%relref "howtos/easy-setup-docker-gpu" %}})
-- [Seting up a Model]({{%relref "howtos/easy-model-import-downloaded" %}})
-- [Making requests via Autogen]({{%relref "howtos/easy-request-autogen" %}})
-- [Making requests via OpenAi API V0]({{%relref "howtos/easy-request-openai-v0" %}})
-- [Making requests via OpenAi API V1]({{%relref "howtos/easy-request-openai-v1" %}})
-- [Making requests via Curl]({{%relref "howtos/easy-request-curl" %}})
+- [Setting up a Model]({{%relref "howtos/easy-model" %}})
+- [Making requests to LocalAI]({{%relref "howtos/easy-request" %}})

## Programs and Demos

This section includes other programs and how to set up, install, and use them with LocalAI.
- [Python LocalAI Demo]({{%relref "howtos/easy-setup-full" %}}) - [lunamidori5](https://github.com/lunamidori5)
-- [Autogen]({{%relref "howtos/autogen-setup" %}}) - [lunamidori5](https://github.com/lunamidori5)
91 changes: 0 additions & 91 deletions docs/content/howtos/autogen-setup.md

This file was deleted.

docs/content/howtos/easy-model.md
@@ -59,9 +59,6 @@ What this does is tell ``LocalAI`` how to load the model. Then we are going to *
name: lunademo
parameters:
model: luna-ai-llama2-uncensored.Q4_K_M.gguf
-temperature: 0.2
-top_k: 40
-top_p: 0.65
```
Now that we have the model set up, there are a few things we should add to the yaml file to make it run better. This model uses the following roles.
@@ -100,9 +97,6 @@ context_size: 2000
name: lunademo
parameters:
model: luna-ai-llama2-uncensored.Q4_K_M.gguf
-temperature: 0.2
-top_k: 40
-top_p: 0.65
roles:
assistant: 'ASSISTANT:'
system: 'SYSTEM:'
@@ -112,7 +106,7 @@ template:
completion: lunademo-completion
```
-Now that we got that setup, lets test it out but sending a request by using [Curl]({{%relref "easy-request-curl" %}}) Or use the [OpenAI Python API]({{%relref "easy-request-openai-v1" %}})!
+Now that we have that set up, let's test it out by sending a [request]({{%relref "easy-request" %}}) to LocalAI!
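Pulling the visible fragments together, the finished model yaml might look roughly like this (a sketch assembled only from the keys shown above; the diff truncates the file, so the real one may carry more keys, such as a chat template or extra roles):

```yaml
context_size: 2000
name: lunademo
parameters:
  model: luna-ai-llama2-uncensored.Q4_K_M.gguf
roles:
  assistant: 'ASSISTANT:'
  system: 'SYSTEM:'
template:
  completion: lunademo-completion
```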
## Adv Stuff
Alright, now that we have learned how to set up our own models, here is how to use the gallery to do a lot of this for us. This command will download and set up the model (mostly; we will **always** need to edit our yaml file to fit our computer / hardware)
1 change: 0 additions & 1 deletion docs/content/howtos/easy-request-autogen.md

This file was deleted.

35 changes: 0 additions & 35 deletions docs/content/howtos/easy-request-curl.md

This file was deleted.

50 changes: 0 additions & 50 deletions docs/content/howtos/easy-request-openai-v0.md

This file was deleted.

28 changes: 0 additions & 28 deletions docs/content/howtos/easy-request-openai-v1.md

This file was deleted.

85 changes: 85 additions & 0 deletions docs/content/howtos/easy-request.md
@@ -0,0 +1,85 @@

+++
disableToc = false
title = "Easy Request - All"
weight = 2
+++

## Curl Request

Curl Chat API -

```bash
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "lunademo",
"messages": [{"role": "user", "content": "How are you?"}],
"temperature": 0.9
}'
```
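The same chat request can also be sent from Python with only the standard library. A minimal sketch, assuming LocalAI is listening on ``localhost:8080`` with the ``lunademo`` model installed (the helper names here are our own, not part of LocalAI):

```python
import json
import urllib.request

def build_chat_payload(model, messages, temperature=0.9):
    """Assemble the JSON body expected by /v1/chat/completions."""
    return {"model": model, "messages": messages, "temperature": temperature}

def chat_request(base_url, model, messages, temperature=0.9):
    """POST the payload to a running LocalAI server and return the decoded JSON."""
    data = json.dumps(build_chat_payload(model, messages, temperature)).encode("utf-8")
    req = urllib.request.Request(
        base_url + "/v1/chat/completions",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

With a server running, ``chat_request("http://localhost:8080", "lunademo", [{"role": "user", "content": "How are you?"}])`` mirrors the curl call above.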

## OpenAI V1 - Recommended

This is for Python with the ``openai`` package at version ``1.x`` or newer.

OpenAI Chat API Python -
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-xxx")

messages = [
{"role": "system", "content": "You are LocalAI, a helpful, but really confused ai, you will only reply with confused emotes"},
{"role": "user", "content": "Hello How are you today LocalAI"}
]
completion = client.chat.completions.create(
model="lunademo",
messages=messages,
)

print(completion.choices[0].message)
```
See [OpenAI API](https://platform.openai.com/docs/api-reference) for more info!
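One thing the example leaves implicit is that chat history lives on the client: each assistant reply must be appended to ``messages`` before the next call, or the model forgets the conversation. A minimal sketch of that bookkeeping (plain Python, no server needed; the reply string is made up):

```python
def append_reply(messages, reply_content):
    """Return a new history with the assistant's reply appended."""
    return messages + [{"role": "assistant", "content": reply_content}]

history = [
    {"role": "system", "content": "You are LocalAI, a helpful, but really confused ai, you will only reply with confused emotes"},
    {"role": "user", "content": "Hello How are you today LocalAI"},
]
# after completion = client.chat.completions.create(...), you would call:
# history = append_reply(history, completion.choices[0].message.content)
history = append_reply(history, "??? O.o ???")
history.append({"role": "user", "content": "Are you alright?"})
```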

## OpenAI V0 - Not Recommended

This is for Python with the ``openai`` package pinned to version ``0.28.1``.

OpenAI Chat API Python -

```python
import os
import openai
openai.api_base = "http://localhost:8080/v1"
openai.api_key = "sk-xxx"
OPENAI_API_KEY = "sk-xxx"
os.environ['OPENAI_API_KEY'] = OPENAI_API_KEY

completion = openai.ChatCompletion.create(
model="lunademo",
messages=[
{"role": "system", "content": "You are LocalAI, a helpful, but really confused ai, you will only reply with confused emotes"},
{"role": "user", "content": "How are you?"}
]
)

print(completion.choices[0].message.content)
```

OpenAI Completion API Python -

```python
import os
import openai
openai.api_base = "http://localhost:8080/v1"
openai.api_key = "sk-xxx"
OPENAI_API_KEY = "sk-xxx"
os.environ['OPENAI_API_KEY'] = OPENAI_API_KEY

completion = openai.Completion.create(
model="lunademo",
prompt="function downloadFile(string url, string outputPath) ",
max_tokens=256,
temperature=0.5)

print(completion.choices[0].text)
```
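Whichever client you use, both end up POSTing the same kind of JSON to the server; here is a sketch of the body the completion call above produces (field names per the OpenAI completion API; the helper is our own):

```python
import json

def completion_payload(model, prompt, max_tokens=256, temperature=0.5):
    """Build the JSON body for a POST to /v1/completions."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

# Serialize exactly what the v0 completion example sends:
body = json.dumps(completion_payload(
    "lunademo",
    "function downloadFile(string url, string outputPath) ",
))
```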