[receiver/dockerstats] not generating per container metrics #33303
I'm not sure I fully understand your issue, but it seemingly has nothing to do with the docker stats receiver. It just seems like the prometheus exporter isn't exporting what you expect, or there is some misunderstanding about what it makes available. This would be more helpful if you identified one component that isn't operating as expected. The docker stats receiver and the prometheus exporter have nothing to do with each other. If your problem is that the docker stats receiver isn't reporting a metric that it should be, then it's a problem with the docker stats receiver. If the prom exporter isn't doing what you think it should, that's a problem with the prom exporter (or a misconfiguration). From what I can see, the docker stats receiver is producing all of the information it should, and the prom exporter is then stripping some of the information that you expect. You can verify this by swapping the prom exporter for a debug/logging exporter and inspecting what the receiver actually emits.
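For reference, a minimal sketch of such a verification pipeline (the socket path and interval are assumptions, not taken from the original report):

```yaml
receivers:
  docker_stats:
    endpoint: unix:///var/run/docker.sock   # assumed default Docker socket
    collection_interval: 10s

exporters:
  debug:
    verbosity: detailed   # prints resource and data point attributes to the collector log

service:
  pipelines:
    metrics:
      receivers: [docker_stats]
      exporters: [debug]
```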
I'm experiencing the same issue, but with a different exporter. Using the debugger, I can see the labels in the collected data.
@schewara #21247 was closed because it's not an issue with the docker stats receiver, and it's expected behaviour of the prometheus exporter.
I agree the documentation could use some work here, in particular defining what goes into resource attributes and what goes into metric data point attributes. It is working as expected. Resource attributes relate to the resource (the container metadata), so the container name, ID, and any env vars/labels on the container are in resource attributes. The data point attributes relate to specific data points, like which network interface a specific data point was measuring.
I don't think it's the jurisdiction of the docker stats receiver to change its behaviour because a specific wire protocol works a certain way. IMO, this is working as expected. If you want resource attributes to be added into prom, you can use the exporter's resource_to_telemetry_conversion option. It seemed the real problem was that you lost other metrics when you used it.
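For context, that option sits on the prometheus exporter; a minimal sketch (the endpoint is an assumption):

```yaml
exporters:
  prometheus:
    endpoint: 0.0.0.0:8889   # assumed scrape endpoint
    resource_to_telemetry_conversion:
      enabled: true          # copies resource attributes onto each exported metric as labels
```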
Hi, I stumbled upon not having the docker_stats receiver available at all in the latest upstream collector, which I installed following the docs (on Ubuntu), and got an error out of the box.
Does it mean the current default collector supplied in the deb package shown in the docs quickstart section doesn't include the docker receiver, so I should build a custom one and include the receiver in it? (Are there official images on Docker Hub that are better to use instead?)
Never mind, I saw that there are 'core' and 'otelcol-contrib' distributions; the contrib version has the docker receiver 😁 I'll keep this here since the thread comes up in Google for the error search, in case others wonder why it happens right after following the quickstart in the docs. I might not be the only dummy out there 😆
I am aware that I threw in a couple of other protocols, but mainly for comparison and to provide a broader, systems-level view of a common setup, and to showcase that in its current state, working with all these signals and sources is quite painful: the parts massively interfere with each other instead of complementing each other, with a good chance of potential data loss, which at least by my standards is not a good thing. But let's break it down a bit further to hopefully make it more understandable, with the focus on the dockerstats receiver.
Thanks @schewara for the extra clarification. I understand your issues better now.
This isn't correct; all the metrics coming from the dockerstats receiver have those resource attributes. I would double-check how you are interpreting that log output. Remember that resource attributes are at the top level and many metric data points can be grouped under them. There is a test in the receiver that demonstrates this; for example, the expected metrics in this test are all grouped under the same resource: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/dockerstatsreceiver/testdata/mock/single_container/expected_metrics.yaml. If this turns out to not have resource attributes in some scenarios, that's a valid bug that needs to be fixed.
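To illustrate the grouping, a trimmed, hypothetical example of the OTLP-in-YAML shape (not a copy of that test file; container names and values are made up):

```yaml
resourceMetrics:
  - resource:
      attributes:                       # resource attributes: apply to every metric below
        - key: container.name
          value:
            stringValue: my-nginx       # hypothetical container
        - key: container.id
          value:
            stringValue: abc123
    scopeMetrics:
      - metrics:
          - name: container.network.io.usage.rx_bytes
            sum:
              dataPoints:
                - asInt: "1024"
                  attributes:           # data point attribute: specific to this point
                    - key: interface
                      value:
                        stringValue: eth0
```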
I see your issue more clearly now. It's that the dockerstats receiver doesn't set service.name. I'd also like to add that this is technically a "MUST" for the semantic convention, not a "MUST" to adhere to the OTLP spec. It's still valid OTLP without service.name as far as I understand.
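Until the receiver sets those attributes, one possible workaround (a sketch, not an official recommendation; the attribute values are placeholders) is to add the service.* resource attributes yourself with the resource processor, so that the Prometheus side can derive job/instance labels from them:

```yaml
processors:
  resource/dockerstats:
    attributes:
      - key: service.name
        value: dockerstats       # placeholder value
        action: upsert
      - key: service.namespace
        value: myhost            # placeholder value
        action: upsert
      - key: service.instance.id
        value: myhost-docker     # placeholder value
        action: upsert
```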
@jamesmoessis I am glad that I was able to make it clearer.
Thank you for the pointer to the test. I will check again and let you know if I missed something here on my end.
I agree with you on this one and would understand it the same way.
Component(s)
exporter/prometheus, receiver/dockerstats, receiver/prometheus
What happened?
Description
We have a collector running (in Docker) which is supposed to collect metrics using the following components (a rough sketch of such a pipeline is shown below):
- receiver/dockerstats
- receiver/prometheus and docker_sd_config
- exporter/prometheusexporter
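Roughly, the setup looks like this (a sketch only; socket paths and ports are assumptions, not our actual configuration):

```yaml
receivers:
  docker_stats:
    endpoint: unix:///var/run/docker.sock        # assumed Docker socket
  prometheus:
    config:
      scrape_configs:
        - job_name: containers
          docker_sd_configs:
            - host: unix:///var/run/docker.sock  # discover scrape targets from Docker

exporters:
  prometheus:
    endpoint: 0.0.0.0:8889                       # assumed scrape endpoint

service:
  pipelines:
    metrics:
      receivers: [docker_stats, prometheus]
      exporters: [prometheus]
```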
A similar issue was already reported but was closed without any real solution -> #21247
Steps to Reproduce
- Run prometheus/node-exporter
- Run the otel/opentelemetry-collector-contrib container
- Check the /metrics endpoint of the prometheus exporter
Expected Result
Individual metrics for each container running on the same host.
Actual Result
Only metrics which have Data point attributes are shown like the following, plus the metrics coming from the prometheus receiver.
Test scenarios and observations
exporter/prometheus with resource_to_telemetry_conversion enabled
When enabling this config option, the following was observed:
- receiver/dockerstats metrics are available as expected
- receiver/prometheus metrics are gone
I don't really know how the prometheus receiver converts the scraped metrics into an OTel object, but it looks like it creates individual metrics plus a target_info metric containing only data point attributes and no resource attributes. This would explain why the metrics disappear, as it seems all existing metric labels are wiped and replaced with nothing.
Manually setting attribute labels
Trying to set manual static attributes through the attributes processor only added a new label to the single metrics, but did not produce individual container metrics (see the sketch below).
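For reference, the kind of attributes-processor config I mean (a sketch; the label name and value are placeholders):

```yaml
processors:
  attributes/static:
    actions:
      - key: environment      # placeholder label name
        value: staging        # placeholder value
        action: insert
```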
After going through all the logs and searching through all the documentation, I discovered the "Setting resource attributes as metric labels" section of the prometheus exporter. When implemented (see the commented-out sections of the config), metrics from the dockerstats receiver showed up on the exporter's /metrics endpoint, but they are still missing some crucial labels, which might need to be added manually as well.
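That approach boils down to copying selected resource attributes onto each data point so they survive as Prometheus labels. A sketch using the transform processor; the resource attribute names are the ones the dockerstats receiver documents (container.name, container.id), while the target label names are my own choice:

```yaml
processors:
  transform/container_labels:
    metric_statements:
      - context: datapoint
        statements:
          - set(attributes["container_name"], resource.attributes["container.name"])
          - set(attributes["container_id"], resource.attributes["container.id"])
```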
Findings
Based on all the observations during testing and trying things out, these are my takeaways on the current shortcomings of the three selected components and how poorly they are integrated with each other.
receiver/dockerstats
- No service.* attributes (such as service.name) are set, neither as a resource nor as a datapoint attribute.
- The documentation of the container_labels_to_metric_labels and env_vars_to_metric_labels settings is incorrect, as the labels are not added as datapoint attributes and therefore never show up in any metric labels.
- Metrics end up without a job and an instance label; these could be provided by setting the service.namespace, service.name, and service.instance.id resource attributes, which then hopefully get picked up correctly by the exporter and converted into the right labels.
receiver/prometheus
- The labels coming from docker_sd_configs are (supposedly) added as resource attributes to the scraped metrics. But as I can't find the link to the source right now, I am either mistaken or it just is not the case, judging by the log outputs and the target_info metrics.
exporter/prometheusexporter
- In the target_info metric I am missing the resource attributes from the dockerstats metrics. Maybe this is due to the missing service attributes or some other reason, but I was unable to see any errors or warnings in the standard log.
- The resource_to_telemetry_conversion functionality left me a bit speechless, in that it wipes all datapoint attributes, especially when there are no resource attributes available. Activating it would also mean that I would lose (as an example) the interface information from the container.network.io.usage.rx_bytes metric, without any idea where the actual value is then taken or calculated from. A warning in the documentation would be really helpful, or a flag to adjust the behavior based on individual needs.
Right now I am torn between manually transforming all the labels of the dockerstats receiver, or creating duplicate pipelines with a duplicated exporter (sketched below), but either way there is some room for improvement to have everything working together smoothly.
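For the second option, a rough sketch of what I mean by duplicate pipelines (ports and names are placeholders): keep the prometheus-receiver metrics untouched and only apply the conversion to the dockerstats metrics.

```yaml
exporters:
  prometheus/passthrough:
    endpoint: 0.0.0.0:8889          # scraped metrics, labels untouched
  prometheus/containers:
    endpoint: 0.0.0.0:8890          # dockerstats metrics with resource attributes as labels
    resource_to_telemetry_conversion:
      enabled: true

service:
  pipelines:
    metrics/scraped:
      receivers: [prometheus]
      exporters: [prometheus/passthrough]
    metrics/containers:
      receivers: [docker_stats]
      exporters: [prometheus/containers]
```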
Collector version
otel/opentelemetry-collector-contrib:0.101.0
Environment information
Environment
Docker
OpenTelemetry Collector configuration
Log output
- receiver/dockerstats metric with a Data point attribute, but no Resource attribute
- receiver/dockerstats metric with no Data point attribute, but Resource attributes