
Enable vllm inference benchmark run on persistent TPUVM #625

Draft: wants to merge 41 commits into master
Conversation


@ManfeiBai (Collaborator) commented on Mar 19, 2025

Description

Enable vllm inference benchmark run on persistent TPUVM
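
As a rough illustration, running the benchmark on a persistent (pre-existing) TPU VM rather than a freshly provisioned one could look like the sketch below; the TPU name, zone, image tag, and remote command are hypothetical placeholders, not the code in this PR:

    # Illustrative sketch only: reuse an existing TPU VM over SSH instead of
    # creating a new one per benchmark run.
    import subprocess

    TPU_NAME = "my-persistent-tpu"  # assumption: name of the pre-existing TPU VM
    ZONE = "us-central2-b"          # assumption: the VM's zone

    # Hypothetical remote command; the real one is assembled in vllm-nightly.py.
    remote_cmd = "docker run --rm --net=host --privileged vllm-tpu <benchmark args>"

    subprocess.run(
        ["gcloud", "compute", "tpus", "tpu-vm", "ssh", TPU_NAME,
         f"--zone={ZONE}", f"--command={remote_cmd}"],
        check=True,
    )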

Tests

one-shot test: http://shortn/_g3d7a0xpbm

Checklist

Before submitting this PR, please make sure (put X in square brackets):

  • I have performed a self-review of my code.
  • I have added the necessary comments to my code, particularly in hard-to-understand areas.
  • I have run one-shot tests and provided workload links above if applicable.
  • I have made or will make corresponding changes to the doc if needed.

@ManfeiBai requested a review from @tengyifei as a code owner on March 20, 2025, 18:12

@tengyifei (Collaborator) left a comment


Can you update the PR description?

"--dataset=ShareGPT_V3_unfiltered_cleaned_split.json --tokenizer=meta-llama/Meta-Llama-3-8B "
"--request-rate=1 --backend=vllm --num-prompts=300 --max-input-length=1024 "
"--max-output-length=1024 --file-prefix=benchmark --models=meta-llama/Meta-Llama-3-8B "
"\\\"--output-bucket=gs://manfeipublic\\\"' && docker stop testooo && docker rm testooo\"" # Usunięto sudo

A collaborator commented on the diff above:

Where do you process metrics?
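
For readers without the diff view, the fragments in the excerpt above concatenate (as adjacent Python string literals) into roughly the following benchmark invocation. This is a reconstruction from the visible flags only; the entry-point script name is an assumption, not shown in the excerpt:

    # Reconstructed sketch from the visible fragments above; not the exact PR code.
    benchmark_cmd = (
        "python benchmark_serving.py "  # assumption: the entry point is not visible in the excerpt
        "--dataset=ShareGPT_V3_unfiltered_cleaned_split.json "
        "--tokenizer=meta-llama/Meta-Llama-3-8B "
        "--request-rate=1 --backend=vllm --num-prompts=300 "
        "--max-input-length=1024 --max-output-length=1024 "
        "--file-prefix=benchmark --models=meta-llama/Meta-Llama-3-8B "
        "--output-bucket=gs://manfeipublic"
    )
    # The excerpt then stops and removes the container named "testooo":
    cleanup_cmd = "docker stop testooo && docker rm testooo"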

@ManfeiBai changed the title from "Create vllm-nightly.py" to "[Draft] enable vllm inference benchmark run on persistant TPUVM" on Mar 26, 2025
@ManfeiBai marked this pull request as draft on March 26, 2025, 23:12
@ManfeiBai changed the title from "[Draft] enable vllm inference benchmark run on persistant TPUVM" to "Enable vllm inference benchmark run on persistant TPUVM" on Mar 27, 2025
@ManfeiBai changed the title from "Enable vllm inference benchmark run on persistant TPUVM" to "Enable vllm inference benchmark run on persistent TPUVM" on Mar 27, 2025
@ManfeiBai (Collaborator, Author) commented:

> Can you update the PR description?

Thanks, updated
