
[detectors-aws] incomplete containerId detected for AWS ECS Fargate #2455

Open
@Steffen911

Description


What version of OpenTelemetry are you using?

    "@opentelemetry/api": "^1.9.0",
    "@opentelemetry/core": "^1.26.0",
    "@opentelemetry/exporter-trace-otlp-proto": "^0.53.0",
    "@opentelemetry/instrumentation": "^0.53.0",
    "@opentelemetry/instrumentation-aws-sdk": "^0.44.0",
    "@opentelemetry/instrumentation-http": "^0.53.0",
    "@opentelemetry/instrumentation-ioredis": "^0.43.0",
    "@opentelemetry/instrumentation-winston": "^0.40.0",
    "@opentelemetry/resource-detector-aws": "^1.6.1",
    "@opentelemetry/resource-detector-container": "^0.4.1",
    "@opentelemetry/resources": "^1.26.0",
    "@opentelemetry/sdk-node": "^0.53.0",
    "@opentelemetry/sdk-trace-base": "^1.26.0",
    "@opentelemetry/sdk-trace-node": "^1.26.0",
    "@opentelemetry/winston-transport": "^0.6.0",

What version of Node are you using?

20

What did you do?

I initialize tracing with a setup similar to the one below as one of the first actions in my Node.js Express server. I bundle the whole application into a Docker image and deploy it to AWS ECS on Fargate.

import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import { AwsInstrumentation } from "@opentelemetry/instrumentation-aws-sdk";
import {
  envDetector,
  processDetector,
  Resource,
} from "@opentelemetry/resources";
import { awsEcsDetectorSync } from "@opentelemetry/resource-detector-aws";
import { containerDetector } from "@opentelemetry/resource-detector-container";
import { env } from "@/src/env.mjs";

const sdk = new NodeSDK({
  resource: new Resource({
    "service.name": env.OTEL_SERVICE_NAME,
  }),
  traceExporter: new OTLPTraceExporter({
    url: `${env.OTEL_EXPORTER_OTLP_ENDPOINT}/v1/traces`,
  }),
  instrumentations: [new AwsInstrumentation()],
  resourceDetectors: [
    envDetector,
    processDetector,
    awsEcsDetectorSync,
    containerDetector,
  ],
});

sdk.start();

What did you expect to see?

I would expect to see something like <taskId>-<containerId> as the container.id in my span attributes. This would match the convention that observability vendors like Datadog use. For a task with ID c23e5f76c09d438aa1824ca4058bdcab, I'd expect a value like c23e5f76c09d438aa1824ca4058bdcab-1234678 for a single container.

What did you see instead?

As part of aws/amazon-ecs-agent#1119, AWS apparently shifted its cgroup naming convention to something like /ecs/<taskId>/<taskId>-<containerId>. Given the 64-character limit, the extracted value usually gets cut off in the middle of the taskId. For the example above, this yields a container.id value like 438aa1824ca4058bdcab/c23e5f76c09d438aa1824ca4058bdcab-1234678.
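For illustration, here is a minimal sketch of the truncation as I understand it. The assumption that the detector keeps only the trailing 64 characters of each /proc/self/cgroup line is my reading of the observed behaviour (not a quote of the detector's code), and the sample line is constructed from the IDs above rather than copied from a real task.

// Assumption: only the trailing 64 characters of each cgroup line are kept,
// which worked for the old /ecs/<containerId> layout but now slices into the
// taskId. The sample line below is constructed from the IDs in this issue.
const CONTAINER_ID_LENGTH = 64;

const line =
  "9:cpuacct:/ecs/c23e5f76c09d438aa1824ca4058bdcab/c23e5f76c09d438aa1824ca4058bdcab-1234678";

const containerId =
  line.length > CONTAINER_ID_LENGTH
    ? line.substring(line.length - CONTAINER_ID_LENGTH)
    : line;

console.log(containerId);
// Starts in the middle of the taskId; the exact offset depends on the line's prefix.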

Is there some way to consistently receive only the part after the final / in the cgroup name, i.e. the last path segment? Happy to contribute if this seems desirable. It should probably follow a regex-based approach like the one in DataDog/dd-trace-js#1176; a rough sketch of what I have in mind is below.
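A hedged sketch of the regex-based extraction (the names ECS_CONTAINER_ID_RE and extractContainerId and the sample line are mine, purely for illustration; this is not the detector's current code):

// Capture only the last path segment (<taskId>-<containerId>) instead of a
// fixed-length suffix, so the taskId is never truncated.
const ECS_CONTAINER_ID_RE = /\/ecs\/[0-9a-f]+\/([0-9a-f]+-\d+)\s*$/;

function extractContainerId(cgroupLine: string): string | undefined {
  return cgroupLine.match(ECS_CONTAINER_ID_RE)?.[1];
}

console.log(
  extractContainerId(
    "9:cpuacct:/ecs/c23e5f76c09d438aa1824ca4058bdcab/c23e5f76c09d438aa1824ca4058bdcab-1234678"
  )
);
// -> "c23e5f76c09d438aa1824ca4058bdcab-1234678"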

Metadata

Assignees

Labels

bug (Something isn't working)
good first issue
pkg:resource-detector-aws
priority:p2 (Bugs and spec inconsistencies which cause telemetry to be incomplete or incorrect)
up-for-grabs (Good for taking. Extra help will be provided by maintainers)
