
Empty metric exposed #699

Open

Description

@reismade

Hi,

I hope someone can point me in the right direction. I'm trying to expose the average requests per second for a set of ingresses so it can be consumed by an HPA. I figured the best approach would be to aggregate nginx_ingress_controller_requests for the namespace these ingresses live in. But the exposed metric is empty, and after trying many things I've run out of ideas.
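For context, the HPA that should consume this metric would look roughly like this (just a sketch; the Deployment name, namespace, and target value are placeholders):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app                  # placeholder
  namespace: my-namespace       # placeholder: namespace of the ingresses
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                # placeholder: workload to scale
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      describedObject:
        apiVersion: v1
        kind: Namespace
        name: my-namespace      # placeholder
      metric:
        name: nginx_requests_per_second_total
      target:
        type: Value
        value: "100"            # placeholder target requests/s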

This is my configmap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-adapter-config
  namespace: monitoring
data:
  config.yaml: |
    rules:
    - seriesQuery: 'nginx_ingress_controller_requests{namespace!="",ingress!=""}'
      resources:
        overrides:
          namespace: {resource: "namespace"}
      name:
        matches: "nginx_ingress_controller_requests"
        as: "nginx_requests_per_second_total"
      metricsQuery: 'sum(rate(nginx_ingress_controller_requests{namespace="{{.Namespace}}"}[1m]))'

When I test this metric in my prometheus UI and replace {{.Namespace}} with the namespace, I can see beautiful data.
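For example, with a placeholder namespace filled in, the query I run is:

sum(rate(nginx_ingress_controller_requests{namespace="my-namespace"}[1m]))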
Also,
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"custom.metrics.k8s.io/v1beta1","resources":[{"name":"namespaces/nginx_requests_per_second_total","singularName":"","namespaced":false,"kind":"MetricValueList","verbs":["get"]}]}

From what I understand, my metric should be accessible under
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/<namespace>/metrics/nginx_requests_per_second_total"
But it returns
Error from server (NotFound): the server could not find the metric nginx_requests_per_second_total for namespaces <namespace>
What I also find strange is this:
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/*/metrics/nginx_requests_per_second_total"
{"kind":"MetricValueList","apiVersion":"custom.metrics.k8s.io/v1beta1","metadata":{},"items":[]}

Any hints? My APIService reports as available, and my prometheus-adapter pod is running.
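For reference, this is how I check both (assuming the usual v1beta1.custom.metrics.k8s.io registration name):

kubectl get apiservice v1beta1.custom.metrics.k8s.io
kubectl -n monitoring get pods -l app=prometheus-adapter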

This is how I deployed the adapter:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-adapter
  namespace: monitoring
  labels:
    app: prometheus-adapter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-adapter
  template:
    metadata:
      labels:
        app: prometheus-adapter
    spec:
      serviceAccountName: default
      containers:
      - name: prometheus-adapter
        image: registry.k8s.io/prometheus-adapter/prometheus-adapter:v0.12.0
        args:
        - --config=/etc/adapter/config.yaml
        - --prometheus-url=http://prometheus.monitoring.svc.cluster.local:9090
        - --cert-dir=/tmp/cert
        - --secure-port=6443
        ports:
        - containerPort: 6443
        volumeMounts:
        - name: config-volume
          mountPath: /etc/adapter
        - name: cert-dir
          mountPath: /tmp/cert
      volumes:
      - name: config-volume
        configMap:
          name: prometheus-adapter-config
      - name: cert-dir
        emptyDir: {}

Plus a ClusterRole, RoleBinding, ClusterRoleBinding, and APIService.
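The APIService looks roughly like this (a sketch, since I didn't paste that manifest; the Service name and port are placeholders):

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
spec:
  service:
    name: prometheus-adapter   # placeholder: Service in front of the adapter
    namespace: monitoring
    port: 443                  # placeholder
  group: custom.metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100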

In the adapter logs, there are multiple instances of this error:
E0508 20:33:36.357142 1 authentication.go:73] "Unable to authenticate the request" err="verifying certificate SN=260221586800750098980590820228684657053, SKID=, AKID=2F:13:C5:34:85:95:68:B7:2B:99:75:9B:12:C4:E6:05:38:D3:55:81 failed: x509: certificate signed by unknown authority"

But I don't know exactly which request triggers it. As shown above, my APIService was created with
insecureSkipTLSVerify: true
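To narrow down which request triggers it, I can tail the adapter logs in one terminal while issuing the raw query in another:

kubectl -n monitoring logs -f deploy/prometheus-adapter
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/<namespace>/metrics/nginx_requests_per_second_total"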

Any ideas for me?

Best regards,
Mario
