
Commit 1c93c94

nirgad, robbins-msft, drewby, lmolkova, cartermp authored

LLM Semantic Conventions: Initial PR (open-telemetry#825)

Co-authored-by: Drew Robbins <[email protected]>
Co-authored-by: Liudmila Molkova <[email protected]>
Co-authored-by: Phillip Carter <[email protected]>
Co-authored-by: Patrice Chalin <[email protected]>
1 parent f12a4d3 commit 1c93c94

File tree: 11 files changed, +356 −0 lines changed

.chloggen/first-gen-ai.yaml

+22
@@ -0,0 +1,22 @@
# Use this changelog template to create an entry for release notes.
#
# If your change doesn't affect end users you should instead start
# your pull request title with [chore] or use the "Skip Changelog" label.

# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
change_type: new_component

# The name of the area of concern in the attributes-registry, (e.g. http, cloud, db)
component: gen-ai

# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
note: Introducing semantic conventions for GenAI clients.

# Mandatory: One or more tracking issues related to the change. You can use the PR number here if no issue exists.
# The values here must be integers.
issues: [327]

# (Optional) One or more lines of additional information to render under the primary note.
# These lines will be padded with 2 spaces and then inserted directly into the document.
# Use pipe (|) for multiline entries.
subtext:

.github/CODEOWNERS

+7
@@ -78,4 +78,11 @@
  /model/metrics/dotnet/ @open-telemetry/specs-semconv-approvers @open-telemetry/semconv-dotnet-approver @open-telemetry/semconv-http-approvers
  /docs/dotnet/ @open-telemetry/specs-semconv-approvers @open-telemetry/semconv-dotnet-approver @open-telemetry/semconv-http-approvers

+ # Gen-AI semantic conventions approvers
+ /model/registry/gen-ai.yaml @open-telemetry/specs-semconv-approvers @open-telemetry/semconv-llm-approvers
+ /model/metrics/gen-ai.yaml @open-telemetry/specs-semconv-approvers @open-telemetry/semconv-llm-approvers
+ /model/trace/gen-ai.yaml @open-telemetry/specs-semconv-approvers @open-telemetry/semconv-llm-approvers
+ /docs/gen-ai/ @open-telemetry/specs-semconv-approvers @open-telemetry/semconv-llm-approvers
+ /docs/attributes-registry/llm.md @open-telemetry/specs-semconv-approvers @open-telemetry/semconv-llm-approvers

  # TODO - Add semconv area experts

.github/ISSUE_TEMPLATE/bug_report.yaml

+1
@@ -41,6 +41,7 @@ body:
  - area:feature-flag
  - area:file
  - area:gcp
+ - area:gen-ai
  - area:graphql
  - area:heroku
  - area:host

.github/ISSUE_TEMPLATE/change_proposal.yaml

+1
@@ -34,6 +34,7 @@ body:
  - area:feature-flag
  - area:file
  - area:gcp
+ - area:gen-ai
  - area:graphql
  - area:heroku
  - area:host

.github/ISSUE_TEMPLATE/new-conventions.yaml

+1
@@ -43,6 +43,7 @@ body:
  - area:feature-flag
  - area:file
  - area:gcp
+ - area:gen-ai
  - area:graphql
  - area:heroku
  - area:host

docs/README.md

+1
@@ -27,6 +27,7 @@ Semantic Conventions are defined for the following areas:
  * [Exceptions](exceptions/README.md): Semantic Conventions for exceptions.
  * [FaaS](faas/README.md): Semantic Conventions for Function as a Service (FaaS) operations.
  * [Feature Flags](feature-flags/README.md): Semantic Conventions for feature flag evaluations.
+ * [Generative AI](gen-ai/README.md): Semantic Conventions for generative AI (LLM, etc.) operations.
  * [GraphQL](graphql/graphql-spans.md): Semantic Conventions for GraphQL implementations.
  * [HTTP](http/README.md): Semantic Conventions for HTTP client and server operations.
  * [Messaging](messaging/README.md): Semantic Conventions for messaging operations and systems.

docs/attributes-registry/llm.md

+59
@@ -0,0 +1,59 @@
<!--- Hugo front matter used to generate the website version of this page:
linkTitle: LLM
--->

# Large Language Model

<!-- toc -->

- [Generic LLM Attributes](#generic-llm-attributes)
  - [Request Attributes](#request-attributes)
  - [Response Attributes](#response-attributes)
  - [Event Attributes](#event-attributes)

<!-- tocstop -->

## Generic LLM Attributes

### Request Attributes

<!-- semconv registry.gen_ai(omit_requirement_level,tag=llm-generic-request) -->
| Attribute | Type | Description | Examples | Stability |
|---|---|---|---|---|
| `gen_ai.request.max_tokens` | int | The maximum number of tokens the LLM generates for a request. | `100` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.request.model` | string | The name of the LLM a request is being made to. | `gpt-4` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.request.temperature` | double | The temperature setting for the LLM request. | `0.0` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.request.top_p` | double | The top_p sampling setting for the LLM request. | `1.0` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.system` | string | The name of the LLM foundation model vendor. | `openai` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |

`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `openai` | OpenAI | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
<!-- endsemconv -->

### Response Attributes

<!-- semconv registry.gen_ai(omit_requirement_level,tag=llm-generic-response) -->
| Attribute | Type | Description | Examples | Stability |
|---|---|---|---|---|
| `gen_ai.response.finish_reasons` | string[] | Array of reasons the model stopped generating tokens, corresponding to each generation received. | `[stop]` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.response.id` | string | The unique identifier for the completion. | `chatcmpl-123` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.response.model` | string | The name of the LLM a response was generated from. | `gpt-4-0613` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.usage.completion_tokens` | int | The number of tokens used in the LLM response (completion). | `180` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.usage.prompt_tokens` | int | The number of tokens used in the LLM prompt. | `100` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
<!-- endsemconv -->

### Event Attributes

<!-- semconv registry.gen_ai(omit_requirement_level,tag=llm-generic-events) -->
| Attribute | Type | Description | Examples | Stability |
|---|---|---|---|---|
| `gen_ai.completion` | string | The full response received from the LLM. [1] | `[{'role': 'assistant', 'content': 'The capital of France is Paris.'}]` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.prompt` | string | The full prompt sent to an LLM. [2] | `[{'role': 'user', 'content': 'What is the capital of France?'}]` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |

**[1]:** It's RECOMMENDED to format completions as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation).

**[2]:** It's RECOMMENDED to format prompts as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation).
<!-- endsemconv -->

docs/gen-ai/README.md

+25
@@ -0,0 +1,25 @@
<!--- Hugo front matter used to generate the website version of this page:
linkTitle: Generative AI
path_base_for_github_subdir:
  from: tmp/semconv/docs/gen-ai/_index.md
  to: gen-ai/README.md
--->

# Semantic Conventions for Generative AI systems

**Status**: [Experimental][DocumentStatus]

**Warning**:
The semantic conventions for GenAI and LLM are currently in development.
We encourage instrumentation library and telemetry consumer developers to
use the conventions in limited non-critical workloads and to share feedback.

This document defines semantic conventions for the following kinds of Generative AI systems:

* LLMs

Semantic conventions for LLM operations are defined for the following signals:

* [LLM Spans](llm-spans.md): Semantic Conventions for LLM requests - *spans*.

[DocumentStatus]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.26.0/specification/document-status.md

docs/gen-ai/llm-spans.md

+84
@@ -0,0 +1,84 @@
<!--- Hugo front matter used to generate the website version of this page:
linkTitle: LLM requests
--->

# Semantic Conventions for LLM requests

**Status**: [Experimental][DocumentStatus]

<!-- Re-generate TOC with `markdown-toc --no-first-h1 -i` -->

<!-- toc -->

- [Configuration](#configuration)
- [LLM Request attributes](#llm-request-attributes)
- [Events](#events)

<!-- tocstop -->

A request to an LLM is modeled as a span in a trace.

**Span kind:** MUST always be `CLIENT`.

The **span name** SHOULD be set to a low-cardinality value describing an operation made to an LLM.
For example, an API name such as [Create chat completion](https://platform.openai.com/docs/api-reference/chat/create) could be represented as `ChatCompletions gpt-4` to include both the API and the LLM.
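
As an illustration only (the convention itself is language-agnostic), a minimal sketch of this span shape using the OpenTelemetry Python API might look like the following; the tracer name, API name, and model are example values:

```python
from opentelemetry import trace

tracer = trace.get_tracer("example-llm-instrumentation")

# Model the LLM request as a CLIENT span whose low-cardinality name
# combines the API ("ChatCompletions") and the requested model ("gpt-4").
with tracer.start_as_current_span(
    "ChatCompletions gpt-4",
    kind=trace.SpanKind.CLIENT,
) as span:
    ...  # call the LLM, then record attributes and events (see below)
```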

## Configuration

Instrumentations for LLMs MAY capture prompts and completions.
Instrumentations that support it MUST offer the ability to turn off capture of prompts and completions. This is for three primary reasons:

1. Data privacy concerns. End users of LLM applications may input sensitive information or personally identifiable information (PII) that they do not wish to be sent to a telemetry backend.
2. Data size concerns. Although there is no specified limit to sizes, there are practical limitations in programming languages and telemetry systems. Some LLMs allow for extremely large context windows that end users may take full advantage of.
3. Performance concerns. Sending large amounts of data to a telemetry backend may cause performance issues for the application.
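
How the off switch is exposed is left to the instrumentation. One possible sketch, using a hypothetical `OTEL_GENAI_CAPTURE_CONTENT` environment variable that is not defined by this convention:

```python
import os

# Hypothetical opt-in flag; the convention only requires that *some*
# toggle exists, not this specific name or default.
CAPTURE_CONTENT = os.environ.get("OTEL_GENAI_CAPTURE_CONTENT", "false").lower() == "true"

def maybe_record_prompt(span, prompt_json: str) -> None:
    # Only attach prompt content when the user has explicitly opted in.
    if CAPTURE_CONTENT:
        span.add_event("gen_ai.content.prompt",
                       attributes={"gen_ai.prompt": prompt_json})
```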

## LLM Request attributes

These attributes track input data and metadata for a request to an LLM. Each attribute represents a concept that is common to most LLMs.

<!-- semconv gen_ai.request -->
| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.request.model`](../attributes-registry/llm.md) | string | The name of the LLM a request is being made to. [1] | `gpt-4` | `Required` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.system`](../attributes-registry/llm.md) | string | The name of the LLM foundation model vendor. [2] | `openai` | `Required` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.request.max_tokens`](../attributes-registry/llm.md) | int | The maximum number of tokens the LLM generates for a request. | `100` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.request.temperature`](../attributes-registry/llm.md) | double | The temperature setting for the LLM request. | `0.0` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.request.top_p`](../attributes-registry/llm.md) | double | The top_p sampling setting for the LLM request. | `1.0` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.response.finish_reasons`](../attributes-registry/llm.md) | string[] | Array of reasons the model stopped generating tokens, corresponding to each generation received. | `[stop]` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.response.id`](../attributes-registry/llm.md) | string | The unique identifier for the completion. | `chatcmpl-123` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.response.model`](../attributes-registry/llm.md) | string | The name of the LLM a response was generated from. [3] | `gpt-4-0613` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.usage.completion_tokens`](../attributes-registry/llm.md) | int | The number of tokens used in the LLM response (completion). | `180` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.usage.prompt_tokens`](../attributes-registry/llm.md) | int | The number of tokens used in the LLM prompt. | `100` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |

**[1]:** The name of the LLM a request is being made to. If the LLM is supplied by a vendor, then the value must be the exact name of the model requested. If the LLM is a fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned.

**[2]:** If not using a vendor-supplied model, provide a custom friendly name, such as the name of the company or project. If the instrumentation reports any attributes specific to a custom model, the value provided in `gen_ai.system` SHOULD match the custom attribute namespace segment. For example, if `gen_ai.system` is set to `the_best_llm`, custom attributes should be added in the `gen_ai.the_best_llm.*` namespace. If none of the above options apply, the instrumentation should set `_OTHER`.

**[3]:** If available. The name of the LLM serving a response. If the LLM is supplied by a vendor, then the value must be the exact name of the model actually used. If the LLM is a fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned.
<!-- endsemconv -->
59+
60+
## Events
61+
62+
In the lifetime of an LLM span, an event for prompts sent and completions received MAY be created, depending on the configuration of the instrumentation.
63+
64+
<!-- semconv gen_ai.content.prompt -->
65+
The event name MUST be `gen_ai.content.prompt`.
66+
67+
| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
68+
|---|---|---|---|---|---|
69+
| [`gen_ai.prompt`](../attributes-registry/llm.md) | string | The full prompt sent to an LLM. [1] | `[{'role': 'user', 'content': 'What is the capital of France?'}]` | `Conditionally Required` if and only if corresponding event is enabled | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
70+
71+
**[1]:** It's RECOMMENDED to format prompts as JSON string matching [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation)
72+
<!-- endsemconv -->
73+
74+
<!-- semconv gen_ai.content.completion -->
75+
The event name MUST be `gen_ai.content.completion`.
76+
77+
| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
78+
|---|---|---|---|---|---|
79+
| [`gen_ai.completion`](../attributes-registry/llm.md) | string | The full response received from the LLM. [1] | `[{'role': 'assistant', 'content': 'The capital of France is Paris.'}]` | `Conditionally Required` if and only if corresponding event is enabled | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
80+
81+
**[1]:** It's RECOMMENDED to format completions as JSON string matching [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation)
82+
<!-- endsemconv -->
[DocumentStatus]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.22.0/specification/document-status.md

model/registry/gen-ai.yaml

+87
@@ -0,0 +1,87 @@
groups:
  - id: registry.gen_ai
    prefix: gen_ai
    type: attribute_group
    brief: >
      This document defines the attributes used to describe telemetry in the context of LLM (Large Language Models) requests and responses.
    attributes:
      - id: system
        stability: experimental
        type:
          allow_custom_values: true
          members:
            - id: openai
              stability: experimental
              value: "openai"
              brief: 'OpenAI'
        brief: The name of the LLM foundation model vendor.
        examples: 'openai'
        tag: llm-generic-request
      - id: request.model
        stability: experimental
        type: string
        brief: The name of the LLM a request is being made to.
        examples: 'gpt-4'
        tag: llm-generic-request
      - id: request.max_tokens
        stability: experimental
        type: int
        brief: The maximum number of tokens the LLM generates for a request.
        examples: [100]
        tag: llm-generic-request
      - id: request.temperature
        stability: experimental
        type: double
        brief: The temperature setting for the LLM request.
        examples: [0.0]
        tag: llm-generic-request
      - id: request.top_p
        stability: experimental
        type: double
        brief: The top_p sampling setting for the LLM request.
        examples: [1.0]
        tag: llm-generic-request
      - id: response.id
        stability: experimental
        type: string
        brief: The unique identifier for the completion.
        examples: ['chatcmpl-123']
        tag: llm-generic-response
      - id: response.model
        stability: experimental
        type: string
        brief: The name of the LLM a response was generated from.
        examples: ['gpt-4-0613']
        tag: llm-generic-response
      - id: response.finish_reasons
        stability: experimental
        type: string[]
        brief: Array of reasons the model stopped generating tokens, corresponding to each generation received.
        examples: ['stop']
        tag: llm-generic-response
      - id: usage.prompt_tokens
        stability: experimental
        type: int
        brief: The number of tokens used in the LLM prompt.
        examples: [100]
        tag: llm-generic-response
      - id: usage.completion_tokens
        stability: experimental
        type: int
        brief: The number of tokens used in the LLM response (completion).
        examples: [180]
        tag: llm-generic-response
      - id: prompt
        stability: experimental
        type: string
        brief: The full prompt sent to an LLM.
        note: It's RECOMMENDED to format prompts as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation)
        examples: ["[{'role': 'user', 'content': 'What is the capital of France?'}]"]
        tag: llm-generic-events
      - id: completion
        stability: experimental
        type: string
        brief: The full response received from the LLM.
        note: It's RECOMMENDED to format completions as a JSON string matching the [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation)
        examples: ["[{'role': 'assistant', 'content': 'The capital of France is Paris.'}]"]
        tag: llm-generic-events
