[Question] Increased metric pull rates from v49 to v60 #1622
There have been a rather substantial number of changes between those releases, v0.49.0...v0.60.0. There was a substantial refactor in the way resources are associated with metrics. It's very possible the old version was filtering out a lot of resources that didn't match. If you could try some versions in between, it would help narrow things down a bit. Perhaps v0.58.0, which is before some refactoring to how queries are batched? It also includes a new …
Hi, we were using v0.57.1, but after switching to Grafana Alloy's prometheus.exporter.cloudwatch, we observed the same cost increase. Currently we are using Grafana Alloy v1.5.0, which includes yace v0.61.0.
Can you provide some more info on the configuration you're using? I ran v0.57.1 and v0.61.0 with:

```yaml
apiVersion: v1alpha1
discovery:
  jobs:
    - type: AWS/EC2
      regions: [us-east-2]
      includeContextOnInfoMetrics: true
      metrics:
        - name: CPUUtilization
          statistics:
            - Average
```

and both produce the same number of metrics requested + calls.
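One practical way to compare versions is to scrape each exporter's `/metrics` endpoint and count the time series per metric name; a jump in series counts would point at the resource-matching change. A minimal sketch of such a counter (the sample exposition text and metric name below are illustrative assumptions, not taken from the issue):

```python
from collections import Counter

def count_series(exposition_text: str) -> Counter:
    """Count time series per metric name in Prometheus exposition format."""
    counts = Counter()
    for line in exposition_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and HELP/TYPE comments
        # The metric name ends at '{' (labeled series) or whitespace (unlabeled)
        name = line.split("{", 1)[0].split(None, 1)[0]
        counts[name] += 1
    return counts

# Hypothetical scrape snippet; in practice, save one scrape per YACE version
# (e.g. `curl -s localhost:5000/metrics > v0_49.txt`; port is an assumption)
sample = """\
# HELP aws_ec2_cpuutilization_average CPUUtilization
aws_ec2_cpuutilization_average{dimension_InstanceId="i-1"} 12.5
aws_ec2_cpuutilization_average{dimension_InstanceId="i-2"} 3.1
"""
print(count_series(sample)["aws_ec2_cpuutilization_average"])  # 2
```

Diffing the two `Counter` objects between versions would show exactly which metrics gained series.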
Hi, the config we are currently using is this:
We've noticed the increase across all metrics. All of our configs follow a similar pattern to the one above. We also tried v58, but that kept the costs the same as v60. I will try v57.1 this week to see if that drops the cost back down.
🤔 v58 predates any changes that were intended to alter how the requests are batched. There's PR #1325, but it only moved existing logic.
I've had 57.1 running for a few days now, and there hasn't been any change in the cost compared to v58. I'll keep trying versions this week to see if I can find out when the metrics go up.
Hi,
We recently updated our yace version from v0.49.0 to v0.60.0 and noticed that our cost has increased by about 50% using the same configs.
I'm wondering if there have been any changes that caused yace to pull more metrics. AWS confirmed that we are making more GMD calls and requesting more metrics overall.
In our configs we run:
We can see that if we revert our version back to v49, the costs go back down. Any help on this topic would be greatly appreciated.
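Since AWS bills GetMetricData per metric requested, a 50% jump in metrics requested per scrape translates roughly linearly into cost. A back-of-the-envelope sketch, assuming the commonly cited $0.01 per 1,000 metrics requested (verify against current AWS CloudWatch pricing) and a hypothetical 5-minute scrape interval:

```python
def monthly_gmd_cost(metrics_per_scrape: int,
                     scrape_interval_s: int = 300,
                     price_per_1k: float = 0.01) -> float:
    """Rough monthly GetMetricData cost estimate.

    Each datapoint fetched per scrape counts as one 'metric requested'.
    price_per_1k is an assumption; check the current AWS pricing page.
    """
    scrapes_per_month = 30 * 24 * 3600 // scrape_interval_s  # 8640 at 300 s
    return metrics_per_scrape * scrapes_per_month * price_per_1k / 1000

# e.g. 5,000 metrics requested every 5 minutes:
print(round(monthly_gmd_cost(5000), 2))  # 432.0
```

At that scale, a 50% increase in metrics requested per scrape is a material bill change, which is why pinning down the exact version that changed the request count matters.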