
Commit 559e4c8

Use metrics as port name in manifest and podMonitoring CR in vllm (#845)
1 parent 987b910 commit 559e4c8

File tree

2 files changed: +2, -1 lines changed


benchmarks/inference-server/vllm/manifest-templates/vllm.tftpl (+1)
@@ -51,6 +51,7 @@ spec:
       - name: vllm
         ports:
         - containerPort: 80
+          name: metrics
         image: "vllm/vllm-openai:v0.5.5"
         command: ["python3", "-m", "vllm.entrypoints.openai.api_server"]
         args: ["--model", "${model_id}", "--tensor-parallel-size", "${gpu_count}", "--port", "80", "--swap-space", "${swap_space}", "--disable-log-requests"]

benchmarks/inference-server/vllm/monitoring-templates/vllm-podmonitoring.yaml.tftpl (+1, -1)
@@ -8,5 +8,5 @@ spec:
     matchLabels:
       app: vllm
   endpoints:
-  - port: 80
+  - port: metrics
     interval: 15s
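
For context, a minimal sketch of how the two templates line up after this change: the container port is given the name metrics, and the PodMonitoring endpoint references that name instead of the numeric port. Only the ports block and the endpoint appear in the diffs above; the surrounding fields (the containers wrapper and the PodMonitoring apiVersion/kind) are assumed here for illustration.

    # Container spec in vllm.tftpl -- surrounding fields assumed
    containers:
    - name: vllm
      ports:
      - containerPort: 80      # vLLM OpenAI server listens on 80 (--port 80 in args)
        name: metrics          # named port referenced by the PodMonitoring endpoint
      image: "vllm/vllm-openai:v0.5.5"

    # PodMonitoring CR in vllm-podmonitoring.yaml.tftpl -- apiVersion/kind assumed
    apiVersion: monitoring.googleapis.com/v1
    kind: PodMonitoring
    spec:
      selector:
        matchLabels:
          app: vllm
      endpoints:
      - port: metrics          # matches the container port name above
        interval: 15s

Referencing the port by name keeps the scrape configuration consistent with the manifest even if the numeric container port changes later.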
