
llama-bench : log benchmark progress #9287

Merged 2 commits into ggerganov:master on Sep 6, 2024
Conversation

akx (Contributor) commented Sep 3, 2024

This simply adds logging of the parameter-set index and the total number of parameter sets before each test in llama-bench, to make it easier to gauge how long the run will still take.


slaren (Collaborator) commented Sep 3, 2024

This will break every use case except when redirecting stdout.

slaren (Collaborator) commented Sep 3, 2024

Are you aware that you can use -oe to choose a printer to use with stderr?

I could see this implemented as a type of printer that only prints a detailed progress report, so that you could use e.g. llama-bench -o json -oe progress > out.json. You can add more methods to the printer class if necessary to print more detailed progress. Do not use the LOG macro.
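The suggestion above could be sketched roughly as follows. This is a hypothetical illustration, not the actual llama-bench code: the `printer` base class here is modeled loosely on llama-bench's printer interface, and the names `progress_printer` and `progress_line` are assumptions.

```cpp
#include <cstdio>
#include <string>

// Hypothetical helper that formats one progress line; factored out so the
// text can be produced independently of where it is written.
static std::string progress_line(int index, int n_tests) {
    char buf[64];
    snprintf(buf, sizeof(buf), "llama-bench: benchmark %d/%d: starting",
             index + 1, n_tests);
    return buf;
}

// Sketch of a printer interface in the spirit of llama-bench's; the real
// class in examples/llama-bench has different methods and signatures.
struct printer {
    FILE * fout = stderr;
    virtual ~printer() = default;
    virtual void print_test(int index, int n_tests) = 0;
};

// A printer that emits only progress, intended for stderr, so that
// something like `llama-bench -o json -oe progress > out.json` keeps
// the JSON output on stdout clean.
struct progress_printer : public printer {
    void print_test(int index, int n_tests) override {
        fprintf(fout, "%s\n", progress_line(index, n_tests).c_str());
        fflush(fout);
    }
};
```

The key design point is that progress goes through the same printer-selection machinery as every other output format, rather than being printed unconditionally.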

akx (Contributor, author) commented Sep 3, 2024

> This will break every use case except when redirecting stdout.

Every use case? 😱 😄

Fair. I added a commit to only print them when --verbose is given (i.e. when you'd get more output on stderr anyway). Would that work?

> Are you aware that you can use -oe to choose a printer to use with stderr?

Yep, I am (via #9288 :) )

akx (Contributor, author) commented Sep 4, 2024

@slaren Mind taking another look? Thanks :)

slaren (Collaborator) commented Sep 5, 2024

I do not really see the point of this change. Even with verbose, this would add output that I would rather not see in verbose mode. It seems that it is trying to solve the same problem for which -oe was added, and for me that works well enough.

akx (Contributor, author) commented Sep 5, 2024

My use case is simply being able to see the progress of a benchmark run at a finer granularity.

For instance, it's not at all obvious that there is a warmup run – I was initially mightily confused to see six llama_print_timings outputs for what was supposed to be a five-benchmark run, etc.

I think progress indication is also useful to figure out if e.g. a remote benchmark run may have crashed or timed out.

slaren (Collaborator) commented Sep 6, 2024

I wouldn't want to see this even in verbose mode, but that wouldn't be a problem if it were optional and gated behind a command-line parameter. Also, please change the LOG calls to fprintf directly to stderr; the logging library is not used in llama-bench.

akx (Contributor, author) commented Sep 6, 2024

> but that wouldn't be a problem if it was optional and gated behind a command line parameter.

Sure.

> Also please change the LOG to fprintf to stderr directly

Will do.

> the logging library is not used in llama-bench.

It is (since #8672 last week), which is why I was using it:

```cpp
LOG_TEE("%s: failed to parse cpu-mask: %s\n", __func__, t.cpu_mask.c_str());
LOG_TEE("%s: threadpool create failed : n_threads %d\n", __func__, tpp.n_threads);
```

Should these get changed too?

slaren (Collaborator) commented Sep 6, 2024

> Should these get changed too?

Yes, thanks for pointing that out. It can be done in a different PR.
Edit: pushed a fix together with another fix.

slaren merged commit 134bc38 into ggerganov:master on Sep 6, 2024 (50 of 51 checks passed).

dsx1986 pushed a commit to dsx1986/llama.cpp referencing this pull request on Oct 29, 2024: "llama-bench : add optional progress messages"