Kinode process for interacting with LLMs.
To run the lccp component, follow these steps:
Terminal 1: Download and build llama.cpp from the GitHub repository (https://github.com/ggerganov/llama.cpp), then start the server:
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
./server -m ../llama.cpp-sharding.cpp/phi2.gguf --embedding --port 3000
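Before wiring the server into Kinode, you can sanity-check it directly. A minimal sketch, assuming the llama.cpp server's POST /embedding route, which accepts JSON with a "content" field when the server was started with --embedding (the helper name and prompt text here are illustrative):

```python
import json
import urllib.request

def build_embedding_request(text, port=3000):
    """Build a POST request for the llama.cpp server's /embedding endpoint."""
    payload = json.dumps({"content": text}).encode("utf-8")
    return urllib.request.Request(
        f"http://localhost:{port}/embedding",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

With the server from Terminal 1 running, `urllib.request.urlopen(build_embedding_request("hello"))` should return a JSON body containing the embedding vector.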
Terminal 2: Start a fake node by running:
kit f
Terminal 3: Build and start the lccp service:
kit bs lccp/
TODO
Run the tester script in your fake node:
lccp_tester:llm:kinode
Within the tester, you can see how different requests and responses are handled.
Terminal 1: Start a fake node by running:
kit f
Terminal 2: Build and start the openai service:
kit bs openai/
TODO
Run the tester script in your fake node (Terminal 1):
openai_tester:llm:kinode
Within the tester, you can see how different requests and responses are handled.
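The openai service ultimately talks to OpenAI's HTTP API. For orientation, this is the shape of a chat completion request against the public v1/chat/completions endpoint (a sketch: the helper name, model choice, and API key placeholder are illustrative, not part of this package):

```python
import json
import urllib.request

def build_chat_request(api_key, prompt, model="gpt-3.5-turbo"):
    """Build a POST request for OpenAI's /v1/chat/completions endpoint."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
```

Sending the request (with a real API key) returns JSON whose "choices" array carries the model's reply; the tester exercises the same request/response cycle through the Kinode service.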
- Design a clean interface; this is a higher-level question about process communication.
- Clean up the call functions.