llama-bench: enhance benchmark with improved token throughput measurements #12874

Open · wants to merge 2 commits into master

Conversation

@thevishalagarwal (Author)

This PR adds separate measurements for end-to-end, prompt processing, and token generation throughput in llama-bench. The changes allow for more detailed performance analysis by separately tracking and reporting:

  • End-to-end throughput (e2e t/s)
  • Prompt processing throughput (prompt t/s)
  • Token generation throughput (gen t/s)

The current implementation of the t/s throughput metric is incorrect when the -pg flag is specified. It uses the formula (n_prompt + n_gen) / e2e_time, which does not accurately represent throughput and leads to misleading interpretations. The correct e2e throughput should be calculated as n_gen / e2e_time.
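
To make the arithmetic concrete, here is a minimal standalone sketch (not the actual llama-bench patch; all names and timings below are illustrative) of how the three throughput values can be derived from per-phase timings:

```cpp
// Illustrative only: hypothetical per-phase timings, not code from this PR.
#include <cstdint>
#include <cstdio>

struct bench_sample {
    int      n_prompt;     // tokens processed in the prompt phase
    int      n_gen;        // tokens produced in the generation phase
    uint64_t t_prompt_ns;  // wall time spent on prompt processing
    uint64_t t_gen_ns;     // wall time spent on token generation
};

int main() {
    bench_sample s = {512, 128, /*t_prompt_ns=*/13'000'000, /*t_gen_ns=*/256'000'000};

    const double e2e_s    = (s.t_prompt_ns + s.t_gen_ns) * 1e-9;
    const double prompt_s = s.t_prompt_ns * 1e-9;
    const double gen_s    = s.t_gen_ns    * 1e-9;

    // e2e t/s: generated tokens over total time (not n_prompt + n_gen over total time),
    // which is the correction argued for above.
    const double e2e_ts    = s.n_gen    / e2e_s;
    const double prompt_ts = s.n_prompt / prompt_s;
    const double gen_ts    = s.n_gen    / gen_s;

    printf("e2e %.2f t/s | prompt %.2f t/s | gen %.2f t/s\n", e2e_ts, prompt_ts, gen_ts);
    return 0;
}
```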

Benefits

  • More accurate and granular performance metrics
  • Better visibility into prompt processing vs token generation performance

Old output

> .\llama-bench.exe -m C:\drive\models\gguf\Qwen2.5-0.5B-Instruct-Q4_K_M.gguf -pg 512,128 -pg 1000,200
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4080, compute capability 8.9, VMM: yes
| model                          |       size |     params | backend    | ngl |          test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | ------------: | -------------------: |
| qwen2 1B Q4_K - Medium         | 373.71 MiB |   494.03 M | CUDA       |  99 |         pp512 |    39230.93 ± 650.08 |
| qwen2 1B Q4_K - Medium         | 373.71 MiB |   494.03 M | CUDA       |  99 |         tg128 |       496.01 ± 17.34 |
| qwen2 1B Q4_K - Medium         | 373.71 MiB |   494.03 M | CUDA       |  99 |   pp512+tg128 |       2292.94 ± 4.15 |
| qwen2 1B Q4_K - Medium         | 373.71 MiB |   494.03 M | CUDA       |  99 |  pp1000+tg200 |      2644.50 ± 14.50 |

New output

> .\llama-bench.exe -m C:\drive\models\gguf\Qwen2.5-0.5B-Instruct-Q4_K_M.gguf -pg 512,128 -pg 1000,200
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4080, compute capability 8.9, VMM: yes
| model                          |       size |     params | backend    | ngl |          test |              e2e t/s |           prompt t/s |              gen t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | ------------: | -------------------: | -------------------: | -------------------: |
| qwen2 1B Q4_K - Medium         | 373.71 MiB |   494.03 M | CUDA       |  99 |         pp512 |         76.15 ± 3.28 |   38990.59 ± 1679.44 |          0.00 ± 0.00 |
| qwen2 1B Q4_K - Medium         | 373.71 MiB |   494.03 M | CUDA       |  99 |         tg128 |       501.70 ± 12.97 |          0.00 ± 0.00 |       501.70 ± 12.97 |
| qwen2 1B Q4_K - Medium         | 373.71 MiB |   494.03 M | CUDA       |  99 |   pp512+tg128 |       454.77 ± 10.17 |   40268.22 ± 4440.88 |       476.56 ± 11.24 |
| qwen2 1B Q4_K - Medium         | 373.71 MiB |   494.03 M | CUDA       |  99 |  pp1000+tg200 |        441.66 ± 4.69 |    39033.18 ± 777.59 |        468.15 ± 5.09 |

@thevishalagarwal changed the title from "Enhance llama-bench with improved token throughput measurements" to "llama-bench: enhance benchmark with improved token throughput measurements" on Apr 10, 2025
@JohannesGaessler (Collaborator) left a comment

My personal opinion is that a rate of tokens over both prompt processing and token generation is not a useful metric. This is because you are calculating the average of two clearly different phases of execution. I think a better metric would be just the total runtime of the test. Related discussion: #7199. In any case, I think the way the information is presented with this PR is an improvement over master and I would still be willing to review and merge it unless someone else objects.

Other considerations:

  • With these changes the documentation in the README file has become outdated; please update it prior to merging.
  • The line width of the default prints is becoming too long, I think. I would be fine with dropping the model size and number of parameters.
  • I assume this PR will have broken scripts/compare_llama_bench.py. It would be nice if this was fixed, but I'm also fine with doing the fix myself.

Comment on lines +999 to +1002
"embeddings", "n_prompt", "n_gen", "test_time",
"avg_e2e_ns", "stddev_e2e_ns", "avg_e2e_ts", "stddev_e2e_ts",
"avg_prompt_ns", "stddev_prompt_ns", "avg_prompt_ts", "stddev_prompt_ts",
"avg_gen_ns", "stddev_gen_ns", "avg_gen_ts", "stddev_gen_ts"
Collaborator

Please preserve vertical alignment.
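
Purely as an illustration of the alignment request, the quoted field list could be laid out like this (the surrounding vector is only there to make the snippet self-contained; it is not copied from the patch):

```cpp
#include <string>
#include <vector>

// Hypothetical container for the field names quoted above, spaced so that the
// columns line up vertically.
static const std::vector<std::string> fields = {
    "embeddings",    "n_prompt",         "n_gen",         "test_time",
    "avg_e2e_ns",    "stddev_e2e_ns",    "avg_e2e_ts",    "stddev_e2e_ts",
    "avg_prompt_ns", "stddev_prompt_ns", "avg_prompt_ts", "stddev_prompt_ts",
    "avg_gen_ns",    "stddev_gen_ns",    "avg_gen_ts",    "stddev_gen_ts",
};
```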

@0cc4m (Collaborator) commented Apr 13, 2025

I agree that there is room for improvement here. I have never used the pp+tg tests because the output didn't give me any useful information, so I would like that to change. The way this is handled by other applications is by calculating the combined pp+tg number as the total test time divided by the number of tokens generated. This gives you a useful metric of how fast you can generate tokens in back-to-back requests with a specific prompt size to process each time.

I don't think we should extend the table with separate pp and tg t/s counts, since the default tests keep them on separate rows anyways. That would only make sense if we wanted to default to pp+tg tests (which can also be discussed).
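
Reading the proposal above as the generated-token count over the total test time, a small sketch with hypothetical timings shows why this corresponds to the sustained rate over back-to-back requests:

```cpp
// Hypothetical timings, not measured data: every request processes a prompt of
// n_prompt tokens and then generates n_gen tokens; the sustained generation rate
// over repeated requests equals n_gen divided by the full per-request time.
#include <cstdio>

int main() {
    const int    n_prompt   = 512;
    const int    n_gen      = 128;
    const double t_prompt_s = 0.013;  // assumed prompt-processing time per request
    const double t_gen_s    = 0.270;  // assumed generation time per request

    const int    n_requests = 100;
    const double t_total_s  = n_requests * (t_prompt_s + t_gen_s);
    const double sustained  = (double) (n_requests * n_gen) / t_total_s;

    // Same value as n_gen / (t_prompt_s + t_gen_s) for a single request.
    printf("pp%d+tg%d per request -> %.2f t/s sustained\n", n_prompt, n_gen, sustained);
    return 0;
}
```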

@JohannesGaessler (Collaborator)

> The way this is handled by other applications is by calculating the combined pp+tg number as the total test time divided by the number of tokens generated. This gives you a useful metric of how fast you can generate tokens in back-to-back requests with a specific prompt size to process each time.

I disagree, that is in my view not a useful metric for comparison because the value that the rate is normalized to doesn't make sense.

> I don't think we should extend the table with separate pp and tg t/s counts, since the default tests keep them on separate rows anyways. That would only make sense if we wanted to default to pp+tg tests (which can also be discussed).

What I think would be useful as a default for a table is generating some amount of tokens on an empty context and then the same amount of tokens with a non-empty context. From that you can roughly estimate both the maximum speed and how that speed declines with more context.

What I think would be best, but also high-effort, would be to first record the prompt processing and generation evaluation times in a differential way. Then, in a second step, you could fit a polynomial to the runtime as a function of context size and plot the results. A t/s value as a function of context size can be obtained by transforming the y axis.
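
A rough sketch of that idea (illustrative only, not part of this PR): fit a low-degree polynomial, here just degree 1 for simplicity, of per-token evaluation time against context depth via ordinary least squares, then read off a t/s value at any depth as the reciprocal of the fitted time:

```cpp
#include <cstdio>
#include <initializer_list>
#include <vector>

// Hypothetical measurements: seconds per generated token at a given context depth.
struct sample { double n_ctx; double t_per_token_s; };

int main() {
    const std::vector<sample> samples = {
        {0, 0.0020}, {1024, 0.0023}, {2048, 0.0026}, {4096, 0.0033},
    };

    // Ordinary least-squares fit of t(n) = a + b*n.
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    const double m = samples.size();
    for (const sample & s : samples) {
        sx  += s.n_ctx;
        sy  += s.t_per_token_s;
        sxx += s.n_ctx * s.n_ctx;
        sxy += s.n_ctx * s.t_per_token_s;
    }
    const double b = (m*sxy - sx*sy) / (m*sxx - sx*sx);
    const double a = (sy - b*sx) / m;

    // "Transforming the y axis": t/s at context depth n is simply 1 / t(n).
    for (double n : {0.0, 2048.0, 8192.0}) {
        printf("n_ctx = %5.0f -> %.2f t/s\n", n, 1.0 / (a + b*n));
    }
    return 0;
}
```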

@thevishalagarwal (Author) commented Apr 14, 2025

> My personal opinion is that a rate of tokens over both prompt processing and token generation is not a useful metric. This is because you are calculating the average of two clearly different phases of execution. I think a better metric would be just the total runtime of the test.

Thanks for the review. I agree that e2e t/s is not a very useful metric. Separate pp and tg metrics are more useful to understand, as these are two distinct phases. Total runtime is also not very helpful IMO, since it varies with the prompt length and the number of tokens generated, and it doesn't give much insight into performance for either the prompt or the generation phase.

Instead of total runtime, a better metric is time to first token (TTFT). This is an alternative to pp t/s. We can use TTFT if no one has any objection.
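
For reference, a tiny sketch (hypothetical numbers, roughly in line with the table above) of how TTFT relates to the existing prompt t/s and gen t/s values:

```cpp
#include <cstdio>

int main() {
    // Assumed values, loosely based on the example table above.
    const int    n_prompt  = 512;
    const double prompt_ts = 39000.0;  // prompt t/s
    const double gen_ts    = 500.0;    // gen t/s

    // TTFT taken here as prompt-processing time plus the time to decode the
    // first generated token.
    const double ttft_s = n_prompt / prompt_ts + 1.0 / gen_ts;
    printf("TTFT ~= %.1f ms\n", ttft_s * 1e3);
    return 0;
}
```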

IMO, the separate pp and tg tests don't make sense either. However, we should keep pp+tg tests as the default (if others agree). This is also consistent with other LLM-related libraries.

My final recommendation would be:

  • use TTFT and tg t/s as metrics
  • remove the separate pp and tg tests as the default; prefer pp+tg tests

@JohannesGaessler (Collaborator)

> use TTFT and tg t/s as metrics

No, I think for pp and tg on their own it makes more sense to provide t/s instead of the runtime; I only think it doesn't make sense to provide a t/s value for a mixture of pp and tg.

Co-authored-by: Johannes Gäßler <[email protected]>
@thevishalagarwal (Author)

Having separate metrics for pp/tg and pp+tg tests is confusing, and I don't think we should do that.

@0cc4m (Collaborator) commented Apr 15, 2025

> > The way this is handled by other applications is by calculating the combined pp+tg number as the total test time divided by the number of tokens generated. This gives you a useful metric of how fast you can generate tokens in back-to-back requests with a specific prompt size to process each time.

> I disagree, that is in my view not a useful metric for comparison because the value that the rate is normalized to doesn't make sense.

Why? Tokens generated is the metric that the user cares about. Sure, it's less relevant than splitting up the metrics, but it is not useless.

I agree with a text generation test for empty and full context to get min and max expected speeds. A graph is even better, but would take too long to measure to make it the default.

@JohannesGaessler (Collaborator)

> Why? Tokens generated is the metric that the user cares about. Sure, it's less relevant than splitting up the metrics, but it is not useless.

The relevant metrics for a good user experience as I see them are a low latency until the first token is generated and a high rate of tokens during generation. But because the initial latency is relative to the length of the prompt, it makes more sense to instead provide a rate at which tokens are processed. On a fundamental level, if more metrics are to be added they need to be justified in some way, either by providing useful information on their own or by facilitating comparisons. I don't see a situation where a rate of tokens relative to the runtime of pp + tg is ever useful information in isolation. And for comparisons of some pp + tg runs the total runtime is a better metric, because lower/higher values correlate better with better/worse performance.
