- Mac (Apple silicon is preferred, but Intel Mac works too)
- Linux (Ubuntu 20.04 to 24.04 preferred) on any hardware
- Windows WSL with Ubuntu
curl -sSfL 'https://raw.githubusercontent.com/GaiaNet-AI/gaianet-node/main/install.sh' | bash
Optional step 1a: Copy the following two GGUF files from the USB drive to your ~/gaianet/ directory.
- Phi-3-mini-4k-instruct-Q5_K_M.gguf
- all-MiniLM-L6-v2-ggml-model-f16.gguf
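If you copied the files, you can confirm they landed in the right place before running `gaianet init` (the assumption here is that `init` skips downloading a model whose file is already present; the directory and file names are taken from the list above):

```python
from pathlib import Path

# Check that the optional local copies are in place in ~/gaianet/.
gaianet_dir = Path.home() / "gaianet"
for name in ("Phi-3-mini-4k-instruct-Q5_K_M.gguf",
             "all-MiniLM-L6-v2-ggml-model-f16.gguf"):
    status = "found" if (gaianet_dir / name).exists() else "missing"
    print(name, status)
```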
gaianet init
gaianet start
tail -f ~/gaianet/log/start-llamaedge.log
Now, you can ask the node questions, such as:
Where is Paris?
Plan me a trip to visit the museums in Paris.
Where is Beijing?
Review the answer and the log to see the context added to the LLM prompt!
curl -X POST http://localhost:8080/v1/chat/completions \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '{"messages":[{"role":"system", "content": "You are a helpful assistant."}, {"role":"user", "content": "Where is Paris?"}]}'
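The curl request above uses the OpenAI chat completions wire format, so any OpenAI-compatible client can talk to the node. A minimal sketch in Python's standard library (assuming the node is listening on http://localhost:8080; the sample response at the end is illustrative, not real node output):

```python
import json
import urllib.request

# The same chat request the curl command sends.
payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Where is Paris?"},
    ]
}

def ask(payload, base_url="http://localhost:8080"):
    """POST the payload to a running node and return the reply text."""
    req = urllib.request.Request(
        base_url + "/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-style responses carry the reply at choices[0].message.content.
    return body["choices"][0]["message"]["content"]

# Offline illustration of the extraction step (made-up sample response):
sample = {"choices": [{"message": {
    "role": "assistant",
    "content": "Paris is the capital of France."}}]}
print(sample["choices"][0]["message"]["content"])
```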
Review the answer and the log to see the context added to the LLM prompt!
gaianet stop
Change the LLM to Llama-3-8b
gaianet config \
--chat-url https://huggingface.co/gaianet/Llama-3-8B-Instruct-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Q5_K_M.gguf \
--chat-ctx-size 8192 \
--prompt-template llama-3-chat
Change the knowledge base to London with its own embedding model
gaianet config \
--snapshot https://huggingface.co/datasets/gaianet/london/resolve/main/london_768_nomic-embed-text-v1.5-f16.snapshot.tar.gz \
--embedding-url https://huggingface.co/gaianet/nomic-embed-text-gguf/resolve/main/nomic-embed-text-v1.5.f16.gguf \
--embedding-ctx-size 8192 \
--qdrant-limit 1
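The two `gaianet config` commands rewrite fields in `~/gaianet/config.json`. A small sketch for eyeballing the result before re-running `gaianet init` (the field names, e.g. "chat" and "prompt_template", are assumptions; inspect your own config.json and pass its contents to this function if they differ):

```python
import json

def show_model_settings(config_text):
    # Return the fields the `gaianet config` flags above are expected
    # to set. Missing fields come back as None.
    cfg = json.loads(config_text)
    return {k: cfg.get(k) for k in
            ("chat", "chat_ctx_size", "prompt_template",
             "embedding", "embedding_ctx_size", "snapshot", "qdrant_limit")}

# Example with a minimal, made-up config fragment:
sample = '{"chat_ctx_size": "8192", "prompt_template": "llama-3-chat"}'
print(show_model_settings(sample))
```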
Optional step 8a: Copy the following two GGUF files from the USB drive to your ~/gaianet/ directory.
- Meta-Llama-3-8B-Instruct-Q5_K_M.gguf
- nomic-embed-text-v1.5.f16.gguf
gaianet init
gaianet start
curl -X POST http://localhost:8080/v1/chat/completions \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '{"messages":[{"role":"system", "content": "You are a helpful assistant."}, {"role":"user", "content": "What is the population of London?"}]}'
Review the answer and the log to see the context added to the LLM prompt!
Log into https://dify.ai/
Select "Settings | Model Provider". From the list, you can add an OpenAI-API-compatible provider.
Step 12a: Add a new chat LLM
- API endpoint: https://1234...abcd.gaianet.network/v1
- Type: Chat
- Model name: Meta-Llama-3-8B-Instruct-Q5_K_M
Use https://llama3.gaianet.network/v1 as the API endpoint and Meta-Llama-3-8B-Instruct.Q5_K_M as the model name, if you do not have your own node.
Step 12b: Add a new embedding model
- API endpoint: https://1234...abcd.gaianet.network/v1
- Type: Embedding
- Model name: nomic-embed-text-v1.5.f16
Use https://llama3.gaianet.network/v1 as the API endpoint and all-MiniLM-L6-v2-ggml-model-f16 as the model name, if you do not have your own node.
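Before pointing Dify at a node, it can help to confirm the endpoint speaks the OpenAI wire format. OpenAI-compatible servers typically expose a GET /v1/models route (an assumption about this node; the JSON below is a made-up sample of that shape) whose `data[].id` values are the model names to enter in the Dify form:

```python
import json

# Made-up sample of an OpenAI-style model listing, as returned by
# GET <endpoint>/models on an OpenAI-compatible server.
sample = json.loads(
    '{"object": "list", "data": ['
    '{"id": "Meta-Llama-3-8B-Instruct-Q5_K_M", "object": "model"},'
    '{"id": "nomic-embed-text-v1.5.f16", "object": "model"}]}'
)

# The "id" fields are the model names Dify asks for.
model_names = [m["id"] for m in sample["data"]]
print(model_names)
```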
gaianet stop
Go to: https://tools.gaianet.xyz/
- Upload a text document
- Name the vector collection default
- Select the nomic model for embedding
Hit the "Make RAG" button
Merge the updated JSON into the ~/gaianet/config.json file. Example:
{
"embedding": "https://huggingface.co/gaianet/nomic-embed-text-gguf/resolve/main/nomic-embed-text-v1.5.f16.gguf",
"embedding_ctx_size": 768,
"snapshot": "https://huggingface.co/datasets/max-id/gaianet-qdrant-snapshot/resolve/main/default-ebc61456-0f6e-4e91-a2c6-cc1f090a5c0b/default-ebc61456-0f6e-4e91-a2c6-cc1f090a5c0b.snapshot"
}
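The merge can be done by hand in an editor, or scripted. A minimal sketch of a shallow merge, using the snippet above as the update (the pre-existing config values shown here are hypothetical):

```python
import json

# Existing node config (abridged, hypothetical old values) and the
# JSON snippet produced by the "Make RAG" button.
config = json.loads('{"chat_ctx_size": "8192", "embedding": "old-model.gguf"}')
update = json.loads("""{
  "embedding": "https://huggingface.co/gaianet/nomic-embed-text-gguf/resolve/main/nomic-embed-text-v1.5.f16.gguf",
  "embedding_ctx_size": 768,
  "snapshot": "https://huggingface.co/datasets/max-id/gaianet-qdrant-snapshot/resolve/main/default-ebc61456-0f6e-4e91-a2c6-cc1f090a5c0b/default-ebc61456-0f6e-4e91-a2c6-cc1f090a5c0b.snapshot"
}""")

config.update(update)  # shallow merge: keys from the update win
print(json.dumps(config, indent=2))
```

After writing the merged object back to ~/gaianet/config.json, re-run `gaianet init` as below to pull the new snapshot.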
gaianet init
gaianet start
Ask a few questions about the document you just uploaded.
Concepts: https://github.com/GaiaNet-AI/docs/blob/main/docs/creator-guide/knowledge/concepts.md
Steps: https://github.com/GaiaNet-AI/docs/blob/main/docs/creator-guide/knowledge/markdown.md
Use llama.cpp: https://github.com/GaiaNet-AI/docs/blob/main/docs/creator-guide/finetune/llamacpp.md