Lifting the Hood on Trace Propagation in OpenTelemetry


This content originally appeared on DEV Community and was authored by Muutassim

OpenTelemetry is really changing the game for observability, especially by offering the first widely adopted, vendor-neutral telemetry libraries. Tracing was the first signal the project tackled, and it's now generally available in most major programming languages.

Personally, I'm a big fan of tracing. I think it's a much cleaner and more effective way to pass data to your observability provider: they can take that data and turn it into something truly useful, whether that's a Gantt-style view of spans or metrics that drive alerts.

One of the most powerful features is distributed tracing: the ability to connect spans across different services into a single trace. When one service makes an RPC call to another, the trace context can follow along, giving you a complete, end-to-end view of requests across your entire platform.

For example, such a trace might contain spans from three different services: frontend, productservice, and statsservice.

What is trace propagation?

To connect traces across different services, you need to pass some context along with each request. This process is known as trace propagation, and it's what we will be diving into in this article.

W3C Trace Context

Unless you are working with a legacy system that already uses a different tracing format, you should stick with the W3C Trace Context Recommendation. It's the recommended standard for propagating trace context over HTTP. While it's designed with HTTP in mind, many of its concepts can also be applied to other communication channels, like Kafka messages, for example.

Trace Context specifies two HTTP headers used to pass context around: traceparent and tracestate.

traceparent

The traceparent HTTP header carries the core of context propagation. Its value consists of a dash-separated list of fields:

  • The version of Trace Context being used. Only one version, 00, exists as of 2023.
    Then, for version 00:

  • The current trace ID: a 16-byte array (rendered as 32 hex characters) representing the ID of the entire trace.

  • The current span ID (called parent-id in the spec): an 8-byte array (16 hex characters) representing the ID of the parent span.

  • Flags: an 8-bit hex-encoded field (two characters) that controls tracing flags such as sampling.

tracestate

The tracestate HTTP header is meant to include proprietary data used to pass specific information across traces.

Its value is a comma-separated list of key/value pairs, where each key and value are separated by an equals sign. Obviously, the trace state shouldn't include any sensitive data.

For example, consider requests arriving at public API endpoints that can be called either by internal services or by external customers: both could be passing a traceparent header. However, the external ones would generate orphan spans, since the parent span lives in the customer's system, not ours.

So we add a tracestate value indicating the request comes from an internal service, and we only propagate context if that value is present.
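To make the mechanics concrete, here is a minimal stdlib-only Python sketch of parsing a tracestate value and applying that internal-service check (the myservice key is the hypothetical vendor key from this example, and the helper name is mine, not any SDK's):

```python
def parse_tracestate(value):
    """Split a tracestate header into a dict of key/value pairs."""
    pairs = {}
    for entry in value.split(","):
        key, _, val = entry.strip().partition("=")
        pairs[key] = val
    return pairs

state = parse_tracestate("myservice=true,othervendor=abc123")
# Only treat the incoming traceparent as our parent if the request
# originated from one of our own services.
propagate = state.get("myservice") == "true"
print(propagate)  # True
```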

A context is passed

With both of these headers passed along, any tracing library has enough information to provide distributed tracing.

A request could pass the following headers:

traceparent: 00-d4cda95b652f4a1592b449d5929fda1b-6e0c63257de34c92-01
tracestate: myservice=true

The traceparent header indicates a trace ID (d4cda95b652f4a1592b449d5929fda1b), a span ID (6e0c63257de34c92), and sets a flag indicating the parent span was sampled (so it's likely we want to sample this one too).

The tracestate header provides a specific key/value that we can use to make appropriate decisions, such as whether we want to keep the context or not.
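As a sketch, the example traceparent value above can be decomposed with a few lines of stdlib Python (field layout per the W3C spec; the helper name is mine):

```python
def parse_traceparent(value):
    """Split a traceparent header into its four dash-separated fields."""
    version, trace_id, parent_id, flags = value.split("-")
    return {
        "version": version,
        "trace_id": trace_id,    # 32 hex chars = 16 bytes
        "parent_id": parent_id,  # 16 hex chars = 8 bytes
        "sampled": int(flags, 16) & 0x01 == 1,  # bit 0 of trace-flags
    }

ctx = parse_traceparent("00-d4cda95b652f4a1592b449d5929fda1b-6e0c63257de34c92-01")
print(ctx["trace_id"])  # d4cda95b652f4a1592b449d5929fda1b
print(ctx["sampled"])   # True
```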

How OpenTelemetry implements propagation

The OpenTelemetry specification defines a Propagators interface to allow any implementation to establish its own propagation convention, such as W3C TraceContext.

A propagator must implement two methods:

  1. Inject – to insert the current span context into a carrier object (such as an HTTP headers map).
  2. Extract – to retrieve the span context from a carrier object.

For example, the Go implementation expresses this contract as a TextMapPropagator interface with Inject and Extract methods.
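For reference, the interface in go.opentelemetry.io/otel/propagation has roughly this shape (signatures reproduced from memory of the Go API; check the package docs for the authoritative version):

```go
// TextMapCarrier abstracts the medium carrying the context,
// e.g. an HTTP header map.
type TextMapCarrier interface {
	Get(key string) string
	Set(key string, value string)
	Keys() []string
}

// TextMapPropagator injects and extracts cross-cutting concerns
// such as trace context.
type TextMapPropagator interface {
	Inject(ctx context.Context, carrier TextMapCarrier)
	Extract(ctx context.Context, carrier TextMapCarrier) context.Context
	Fields() []string
}
```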


Each instrumentation library making or receiving external calls then has the responsibility to call inject/extract to write/read the span context and have it passed around.

Extract and Inject examples

For example, Ruby's Rack instrumentation calls extract on the incoming request's environment and uses the result as the parent context for the new span it creates.

And the Node.js http instrumentation calls inject to add the context headers to outgoing requests made with the http package.
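Since those snippets are library-specific, here is a stdlib-only Python sketch of the same two calls, using a minimal hand-rolled W3C-style propagator (class and method names are illustrative, not any SDK's API):

```python
class TraceContextPropagator:
    """Toy propagator: reads/writes only the traceparent header."""

    def extract(self, carrier):
        # Server side: pull the parent context from incoming headers.
        value = carrier.get("traceparent")
        if value is None:
            return None
        version, trace_id, parent_id, flags = value.split("-")
        return {"trace_id": trace_id, "parent_id": parent_id, "flags": flags}

    def inject(self, context, carrier):
        # Client side: write the current context into outgoing headers.
        # A real SDK would substitute the current span's own ID as parent-id.
        carrier["traceparent"] = "00-{trace_id}-{parent_id}-{flags}".format(**context)

propagator = TraceContextPropagator()

incoming = {"traceparent": "00-d4cda95b652f4a1592b449d5929fda1b-6e0c63257de34c92-01"}
ctx = propagator.extract(incoming)  # what the server-side instrumentation does

outgoing = {}
propagator.inject(ctx, outgoing)    # what the HTTP client instrumentation does
print(outgoing["traceparent"])
```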

The full propagation flow

To put it in other words: the library emitting an HTTP call is expected to call inject, which adds the proper HTTP headers to the request; the library receiving HTTP requests is expected to call extract to retrieve the span context from the request's HTTP headers.

Note that each language implementation of OpenTelemetry provides multiple contrib packages that allow easy instrumentation of common frameworks and libraries. Those packages will handle propagation for you. Unless you write your own framework or HTTP library, you should not need to call inject or extract yourself. All you need to do is configure the global propagation mechanism (see below).

Non-HTTP propagation

Not all services communicate through HTTP. For example, you could have one service emitting a Kafka message, and another one reading it.

The OpenTelemetry propagation API is purposefully generic, as all it does is read a hash and return a span context, or read a span context and inject data into a hash. So you could replace a hash of HTTP headers with anything you want.

The Python Kafka instrumentation calls inject on the message's headers when producing a message, and calls extract on those same headers when reading one.
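kafka-python represents message headers as a list of (key, bytes-value) tuples; the following stdlib-only sketch shows what inject/extract amount to over that shape (helper names are mine, not the instrumentation's):

```python
def inject(headers, traceparent):
    """Append the context to Kafka-style headers (a list of (str, bytes))."""
    headers.append(("traceparent", traceparent.encode("utf-8")))

def extract(headers):
    """Find the context in Kafka-style headers, or return None."""
    for key, value in headers:
        if key == "traceparent":
            return value.decode("utf-8")
    return None

headers = []
inject(headers, "00-d4cda95b652f4a1592b449d5929fda1b-6e0c63257de34c92-01")
# ...the message travels through Kafka with its headers intact...
print(extract(headers))  # 00-d4cda95b652f4a1592b449d5929fda1b-6e0c63257de34c92-01
```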


Any language or library that uses the same convention can benefit from distributed tracing within Kafka messages, or through any other communication mechanism.

Setting a propagator

Great! We now know how trace propagation works within OpenTelemetry. But how do we set it up?
Each OpenTelemetry library is expected to provide methods for setting and retrieving a global propagator.

For example, the Rust implementation provides global::set_text_map_propagator and global::get_text_map_propagator that will allow configuring and retrieving the global propagator.

As the specification states, the default propagator is a no-op:

The OpenTelemetry API MUST use no-op propagators unless explicitly configured otherwise

You should therefore always ensure your propagator of choice is properly set globally, and each library that needs to call inject or extract will then be able to retrieve it.
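The mechanism can be sketched in a few lines of stdlib Python: a module-level slot that defaults to a no-op and is swapped out once at startup (names are illustrative, not the OpenTelemetry API's):

```python
class NoopPropagator:
    """Spec-mandated default: injects nothing, extracts nothing."""
    def inject(self, context, carrier):
        pass
    def extract(self, carrier):
        return None

_global_propagator = NoopPropagator()

def set_global_propagator(propagator):
    """Called once at startup to install the real propagator."""
    global _global_propagator
    _global_propagator = propagator

def get_global_propagator():
    """Called by instrumentation whenever it needs to inject/extract."""
    return _global_propagator

# Until a real propagator is set, instrumentation silently gets the no-op:
carrier = {}
get_global_propagator().inject({"trace_id": "abc"}, carrier)
print(carrier)  # {} -- nothing was injected
```

This is why forgetting to configure a propagator fails silently: every inject becomes a no-op and traces simply stop connecting.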

Each OpenTelemetry implementation ships several propagators natively, including TraceContext, which you can use directly within your service. For example, the Java one is io.opentelemetry.api.trace.propagation.W3CTraceContextPropagator.

Moving On

Thanks for following along with this deep dive into context propagation in OpenTelemetry. Hopefully, you now have a clearer understanding of how distributed tracing works within the library. You should be equipped to implement context propagation in any library instrumentation that makes or receives calls from external services.

With distributed tracing properly set up across your platform, you will be able to see the full journey of every request, making it much easier to identify bottlenecks, trace issues, and debug problems effectively.

