backend : add eval callback #4935
Conversation
Force-pushed from 40cdb39 to 83f3d7a
ggml-backend.c (outdated)

    if (sched->callback_eval(t, true, sched->callback_eval_user_data) &&  // ask
        !sched->callback_eval(t, false, sched->callback_eval_user_data)) { // eval
        break;
    }
Is the ask callback really necessary here?
I've changed the implementation to ask only once per node in a split.
ggml-backend.c (outdated)

    // TODO: should we clear the callbacks?
    //sched->callback_eval = NULL;
    //sched->callback_eval_user_data = NULL;
I think this is fine, we don't need to clear the callbacks here. The reset function is meant to prepare the sched for the next graph evaluation, resetting the allocators and the backend assignments (similar to ggml_allocr_reset).
Fixes ggerganov#5029 introduced in ggerganov#4935
* backend : add eval callback (ggml-ci)
* backend : group nodes in a single compute when user don't need them
* backend : clean-up the implementation (ggml-ci)
* simple : do not perform tensor data copy if not needed
* simple : fix
* simple : no need for ggml_is_contiguous + fix bool parse
* llama : fix callback placement in llama_context_params
* backend : avoid double-ask callback calls
* simple : restore examples, imatrix will serve as a demo
ref: #4931
The callback currently observes the softmax results in the attention, but can be customized in any way:
llama.cpp/examples/simple/simple.cpp, lines 9 to 34 at 81d4162
Skip the last CLI arg (or set it to 0) to disable the callback