kinode-dao/llm
LLM

Kinode process for interacting with LLMs.

Local LLMs

To run the lccp component, follow these steps:

Terminal 1: Clone llama.cpp from the GitHub repository (https://github.com/ggerganov/llama.cpp), build it, and start the server:

cd llama.cpp
make
./server -m ../llama.cpp-sharding.cpp/phi2.gguf --embedding --port 3000
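Before wiring up the Kinode side, you can sanity-check that the llama.cpp server is serving embeddings. This is a sketch that assumes the `/embedding` endpoint of llama.cpp's example HTTP server on the port used above; the `|| echo` fallback just makes it obvious when nothing is listening:

```shell
# Ask the llama.cpp server (started above on port 3000) for an embedding.
# Assumes the example server's /embedding endpoint; prints a notice instead
# of failing silently if nothing is listening on the port.
curl -s --max-time 5 http://localhost:3000/embedding \
  -H 'Content-Type: application/json' \
  -d '{"content": "hello from kinode"}' \
  || echo 'llama.cpp server is not reachable on port 3000'
```

If the server is up, this prints a JSON object containing the embedding vector; otherwise you get the fallback notice.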

Terminal 2: Start a fake node by running:

kit f

Terminal 3: Build and start the lccp service:

kit bs lccp/

Running Local LLMs with Messages

TODO

Running Local LLMs with Test Scripts

Run the tester script in your fakenode:

lccp_tester:llm:kinode

Within the tester, you can see how different requests and responses are handled.

Online APIs

Terminal 1: Start a fake node by running:

kit f

Terminal 2: Build and start the openai service:

kit bs openai/

Calling APIs Through Messages

TODO

Calling APIs Through Test Scripts

Run the tester script in your fakenode:

openai_tester:llm:kinode

Within the tester, you can see how different requests and responses are handled.

TODOs

  • Design a clean interface; this is a higher-level question about process communication.
  • Cleaner call functions.
