foreign exception error #76
Again, catch and rethrow if you like, but we shouldn't be crashing or exiting when this happens. Also, are any of these models supported right now? https://gpt4all.io/models/models.json
This is not a LLaMA-based model, so it cannot be used with the llama.cpp backend.
Yeah, but should it crash though? Logging an error I understand, but crashing? I have this code wrapped in a try/catch on my side and it isn't catching.
This error is not catchable at the current stage; you could also open an issue on the llama.cpp GitHub. Loading a model file is unsafe on the Rust side, and I can only get a null pointer when loading fails. But in this case I can't even get any response from the low-level side. Generally, you shouldn't use this model with llama.cpp at all. The termination and error are not thrown up to the upper level; it's more like playing with fire.
In my use case I have an API that uses this library, and it lets people pick from a list of models. It's possible I have models that aren't supported, so ideally it wouldn't take down my entire backend if someone picked an incompatible file. Can't you do a try/catch around the file loading?
Yeah. Basically, this is a very early-stage library and not production ready; every release I've tagged as a pre-release. I admit this is essentially a toy app, and my upstream dependencies, llama.cpp and llm-rs, are updated every few hours. How can I promise that everything just works as intended?
Or maybe you can do something with it and find a way to fix it. I won't suggest you use it in any production scenario, or you have to keep the dangerous parts away from user input.
I will leave this labeled as a bug temporarily until I find a way to fix it.
Double confirmed: this error should be handled in llama.cpp.
Hey @ralyodio, I don't know if you managed to work around this, but my solution was using the following script:

```ts
const { ModelType } = require("@llama-node/core");
const { LLM } = require("llama-node");
const { LLMRS } = require("llama-node/dist/llm/llm-rs.cjs");
const path = require("path");

// Model path, model type, and prompt are passed in as CLI arguments.
const modelPath = process.argv[2];
const modelType = process.argv[3];
const prmpt = process.argv[4];

const llama = new LLM(LLMRS);

const toChatTemplate = (prompt: string) => `### Instruction:
${prompt}
### Response:`;

const run = async () => {
  const params = {
    prompt: toChatTemplate(prmpt),
    numPredict: 128,
    temperature: 0.8,
    topP: 1,
    topK: 40,
    repeatPenalty: 1,
    repeatLastN: 64,
    seed: 0,
    feedPrompt: true,
  };
  await llama.load({ modelPath, modelType });
  await llama.createCompletion(params, (response) => {
    process.stdout.write(response.token);
  });
};

run();
```
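Since the script above reads the model path, model type, and prompt from process.argv, it can be run as a separate Node process, so if the native loader aborts it only kills the child rather than the main backend. Below is a minimal sketch of spawning it from the main app; the filename completion-worker.js and the error handling are assumptions for illustration, not part of the original comment.

```ts
const { spawn } = require("child_process");
const path = require("path");

// Run the completion script in a child process so a native crash
// (the "foreign exception") only terminates the child.
const runCompletion = (modelPath: string, modelType: string, prompt: string) =>
  new Promise<string>((resolve, reject) => {
    // "completion-worker.js" is an assumed name for the script above.
    const child = spawn("node", [
      path.join(__dirname, "completion-worker.js"),
      modelPath,
      modelType,
      prompt,
    ]);

    let output = "";
    child.stdout.on("data", (chunk) => (output += chunk.toString()));
    child.stderr.on("data", (chunk) => process.stderr.write(chunk));

    child.on("error", reject);
    child.on("exit", (code) => {
      // A non-zero exit code (or a signal) covers the uncatchable abort case.
      if (code === 0) resolve(output);
      else reject(new Error(`completion worker exited with code ${code}`));
    });
  });
```

The main process then only sees a rejected promise instead of going down with the worker.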
This fatal error will crash the whole app even when try/catch plus a worker_thread is used. It would be better if you could just throw a JS error (a subclass, so we can switch on instanceof) instead! The error thrown in my case was that the modelPath does not exist.
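Until the loader reports failures as catchable JS errors, one narrow mitigation for the missing-file case specifically (it will not help with incompatible or corrupt model files) is to check the path before calling load. A minimal sketch, assuming the same llama instance and load call as in the script above:

```ts
const fs = require("fs");

// Guard against the "modelPath does not exist" crash by checking first.
// This does not protect against unsupported or corrupt model files.
const safeLoad = async (modelPath: string, modelType: string) => {
  if (!fs.existsSync(modelPath)) {
    // Throw a normal JS error the caller can catch, instead of letting
    // the native loader abort the whole process.
    throw new Error(`model file not found: ${modelPath}`);
  }
  await llama.load({ modelPath, modelType });
};
```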
I have a similar issue. Hope it gets fixed in llama.cpp or wherever.