Functional Events and Technical Traces: The Divorce That Broke Your System

By Gedeón Domínguez Torán

Observability is not telemetry. It’s storytelling with truth.

1. Why write this piece?

Too many organisations keep two separate chronologies of what is going on:

Functional

  • Typical artefact: business event logs (order shipped, etc.)
  • Readers: product & ops teams
  • Strength: process mining, KPI tracking, compliance

Technical

  • Typical artefact: traces / spans / logs (RPC timings, etc.)
  • Readers: engineers & SREs
  • Strength: debugging, performance analysis, root cause tracing

When the two timelines drift apart, you get blind spots: non-functional incidents the business can’t see, phantom refunds the devs can’t reproduce, and so on. Process-mining pioneer Wil van der Aalst has shown again and again that usable insights require a coherent event log covering all perspectives of a case.

“What God has joined together, let no one separate.”

2. The cultural lens: two attitudes towards transparency

  • “It’s an IT problem.” The business lives on slide decks; real behaviour is a black box.
  • “We own the pipes.” Business folk understand how money flows through APIs, queues and cron jobs.

My empirical take: the second camp ships faster and breaks less, because hypotheses can be verified against real traces instead of guesstimates.

3. A unifying blueprint

┌───────────┐      CloudEvents (+ traceparent)       ┌────────────┐
│ Domain A  │ ─────────────────────────────────────▶ │   Bus /    │
│ Service   │                                        │ Event Hub  │
└─────▲─────┘                                        └────▲───────┘
      │  span: InventoryReserve                           │
      │                                                   │
      │  OpenTelemetry spans (auto- or manual)            │
      │                                                   ▼
┌─────┴─────┐                                        ┌────────────┐
│ Domain B  │ ◀───────────────────────────────────── │ Collector  │
│ Service   │      CloudEvents (+ traceparent)       └────────────┘
└───────────┘

Functional skeleton — CloudEvents

  • Standardised envelope (id, source, type, etc.)
  • Distributed Tracing extension adds traceparent / tracestate
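
To make the envelope concrete, here is a minimal TypeScript sketch of such an event. The field names follow CloudEvents 1.0 and its Distributed Tracing extension; the OrderCreatedData payload type is only an illustrative assumption for the order flow used later in this article.

// Minimal CloudEvents 1.0 envelope carrying the Distributed Tracing extension
interface CloudEvent<T> {
  specversion: "1.0";
  id: string;              // unique per event, e.g. "evt-1"
  source: string;          // producing context, e.g. "/checkout"
  type: string;            // e.g. "com.acme.order.created"
  time?: string;           // RFC 3339 timestamp
  datacontenttype?: string;
  data?: T;
  // Distributed Tracing extension attributes (W3C Trace Context)
  traceparent?: string;    // "00-<trace-id>-<span-id>-<flags>"
  tracestate?: string;
}

// Hypothetical payload for the OrderCreated event
interface OrderCreatedData {
  orderId: string;
  total: number;
}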

Technical flesh — OpenTelemetry

  • Each service keeps a span around its business handler; context flows via HTTP headers or messaging meta-data
  • Collector / backend merges both signals; now you can pivot a single orderId across the entire chain while still seeing the DB indexes you forgot to add.
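
On the consuming side, the glue is small. Below is a minimal sketch using @opentelemetry/api: it assumes a W3C Trace Context propagator has been registered globally (for example by the OpenTelemetry Node SDK), and handleInventoryReservation is a hypothetical business handler.

import { context, propagation, trace, SpanStatusCode } from "@opentelemetry/api";

const tracer = trace.getTracer("logistics-service");

// Consume a CloudEvent and continue the distributed trace it carries.
async function onCloudEvent(event: CloudEvent<{ orderId: string }>) {
  // The registered propagator reads the standard `traceparent` / `tracestate` keys from the carrier.
  const parentCtx = propagation.extract(context.active(), {
    traceparent: event.traceparent,
    tracestate: event.tracestate,
  });

  await tracer.startActiveSpan("ReserveStock", {}, parentCtx, async (span) => {
    try {
      span.setAttribute("order.id", event.data?.orderId ?? "unknown");
      await handleInventoryReservation(event); // hypothetical business handler
    } catch (err) {
      span.recordException(err as Error);
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw err;
    } finally {
      span.end();
    }
  });
}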

Functional timeline: events and spans side by side

Here’s how a real ecommerce flow could look when combining functional events with technical spans. Functional events give us the big picture — what happened and when — while spans offer a magnifying glass into how it happened.

[OrderCreated]      ───▶ span: CreateOrder
                           └─ db.insert("orders") (14ms)
[InventoryReserved] ───▶ span: ReserveStock
                           └─ db.query("UPDATE inventory…") (22ms)
                           └─ warning: "slow query (>20ms)"
                           └─ exception: "Out of stock" (optional)
[InvoiceIssued]     ───▶ span: IssueInvoice
                           └─ billingAPI.generate() (35ms)
                           └─ exception: "Billing API timeout" (optional)
[OrderConfirmed]    ───▶ span: FinalizeOrder
                           └─ db.write("status = confirmed") (10ms)

  • The trace ID links all four events into a single process instance.
  • Each span is a technical annotation of how the event was produced or consumed.
  • Errors are first-class citizens in the timeline, not anomalies to be hidden.

4. TypeScript walk-through

Functional trace: a simple ecommerce order flow

Imagine a user places an order:

  1. OrderCreated (Checkout domain)
  2. InventoryReserved (Logistics domain)
  3. InvoiceIssued (Billing domain)
  4. OrderConfirmed (orchestrated or final event)

All events carry a traceparent, and each service enriches its spans with technical metadata.

CloudEvent examples (simplified)

{
  "type": "com.acme.order.created",
  "source": "/checkout",
  "id": "evt-1",
  "data": { "orderId": "ORD-42", "total": 99.95 },
  "traceparent": "00-abc123def4567890abc123def4567890-111aaa111aaa111a-01"
}

{
  "type": "com.acme.inventory.reserved",
  "source": "/logistics",
  "id": "evt-2",
  "data": { "orderId": "ORD-42", "reservedItems": 3 },
  "traceparent": "00-abc123def4567890abc123def4567890-222bbb222bbb222b-01"
}

{
  "type": "com.acme.invoice.issued",
  "source": "/billing",
  "id": "evt-3",
  "data": { "orderId": "ORD-42", "invoiceId": "INV-123" },
  "traceparent": "00-abc123def4567890abc123def4567890-333ccc333ccc333c-01"
}

Technical span examples per domain (TypeScript snippets)

These examples include both successful and failing execution paths, to illustrate how traces help debug the unhappy paths too.

Logistics: reserving stock

await tracer.startActiveSpan("ReserveStock", async span => {
const t0 = Date.now();
try {
await db.query("UPDATE inventory SET reserved = TRUE WHERE sku = $1", [sku]);
span.setAttribute("sku", sku);
span.setAttribute("latencyMs", Date.now() - t0);
} catch (err) {
span.recordException(err);
span.setStatus({ code: 2, message: "Inventory reservation failed" });
throw err;
} finally {
span.end();
}
});

Billing: issuing invoice

await tracer.startActiveSpan("IssueInvoice", async span => {
try {
const invoice = await billingAPI.generate(orderId);
span.setAttribute("invoiceId", invoice.id);
span.setStatus({ code: 1, message: "Invoice created" });
} catch (err) {
span.recordException(err);
span.setStatus({ code: 2, message: "Invoice generation failed" });
throw err;
} finally {
span.end();
}
});

Checkout: creating the order

await tracer.startActiveSpan("CreateOrder", async span => {
try {
const result = await db.insert("orders", orderData);
span.setAttribute("orderId", result.id);
} catch (err) {
span.recordException(err);
span.setStatus({ code: 2, message: "Order creation failed" });
throw err;
} finally {
span.end();
}
});
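
To keep both timelines joined, the functional event should be published while the technical span is still active, so that its traceparent points at the span that produced it. A rough sketch, continuing the Checkout example: bus.publish and the payload shape are placeholders rather than a particular SDK, and `result` / `orderData` come from the snippet above.

import { context, propagation } from "@opentelemetry/api";
import { randomUUID } from "node:crypto";

// Inside the CreateOrder span, right after db.insert succeeds:
const carrier: Record<string, string> = {};
propagation.inject(context.active(), carrier); // fills `traceparent` / `tracestate` via the registered propagator

await bus.publish({ // `bus` is a placeholder for your event-bus client
  specversion: "1.0",
  id: randomUUID(),
  source: "/checkout",
  type: "com.acme.order.created",
  data: { orderId: result.id, total: orderData.total },
  traceparent: carrier.traceparent,
  tracestate: carrier.tracestate,
});

Because the injection happens while the CreateOrder span is active, the event’s traceparent carries that span’s IDs, which is exactly the link the functional timeline above relies on.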

Tracking data transformations across the chain

Observability is not only about what happened (e.g. OrderConfirmed), but also what changed and where. When events include meaningful payloads — and these are consistently traced — we gain a powerful lens into how data evolves across the system.

Why it matters

  • SAGAs and distributed consistency depend on multiple services reaching a coherent state, often asynchronously.
  • Failures don’t always scream. A missing flag, a truncated field, or a miscalculated amount can silently break downstream steps.
  • Business events are the breadcrumbs. If we capture them as immutable facts, we get a high-fidelity record of all transformations.

A practical example

Imagine the following transformation chain:

[OrderCreated]                        data: { orderId: "42", items: 3, total: 99.95 }
 └── Billing   → [InvoiceIssued]      data: { orderId: "42", amount: 99.95, invoiceId: "INV-1" }
 └── Logistics → [InventoryReserved]  data: { orderId: "42", reserved: true, items: 2 }
 └── Shipment  → [ShipmentScheduled]  data: { orderId: "42", deliveryDate: "2025-07-01" }

At first glance, everything “worked.” But wait:

  • The items reserved are fewer than ordered.
  • The delivery date was set even though reservation was partial.
  • No compensating action (e.g. backorder) was triggered.

With a joined trace ID and semantic inspection of event data over time, we can automatically detect semantic drift or trigger alerts on state divergence — e.g. comparing items: 3 vs items: 2.

Techniques that help

  • JSON diff between events for the same case (e.g. order ID)
  • Domain-aware validations: e.g. if invoice total ≠ sum of items, flag the span
  • Derived metrics from data deltas: number of transformations, step-skipped detection
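
As a minimal sketch of the first two techniques above — the event shapes follow the illustrative payloads in the chain, and the rule itself is an assumption, not a prescribed policy:

// A case-local event: same orderId, different steps of the process.
type CaseEvent = { type: string; data: Record<string, unknown> };

// Naive JSON diff over the shared keys of two events in the same case.
function diffCase(a: CaseEvent, b: CaseEvent): string[] {
  const findings: string[] = [];
  for (const key of Object.keys(a.data)) {
    if (key in b.data && a.data[key] !== b.data[key]) {
      findings.push(`${key}: ${a.data[key]} (${a.type}) vs ${b.data[key]} (${b.type})`);
    }
  }
  return findings;
}

// Domain-aware rule: reserved items must match ordered items.
function checkReservation(created: CaseEvent, reserved: CaseEvent): string | null {
  const ordered = created.data.items as number;
  const got = reserved.data.items as number;
  return got < ordered ? `Partial reservation: ordered ${ordered}, reserved ${got}` : null;
}

// With the payloads from the chain above:
// diffCase(orderCreated, inventoryReserved)        -> ["items: 3 (OrderCreated) vs 2 (InventoryReserved)"]
// checkReservation(orderCreated, inventoryReserved) -> "Partial reservation: ordered 3, reserved 2"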

This kind of insight is impossible with spans alone — you need functional, data-carrying events tied to technical traces to reconstruct real-world problems.

Distributed systems fail in subtle ways. Tracing the data as it morphs is your best shot at spotting the cracks before they become outages.

5. Where the plan falls apart (and how to rescue it)

Async holes

  • Problem: transactional DB or legacy ESB cannot persist headers
  • Mitigation: write edge components that map orderId ↔ traceId (a sketch follows), or use side tables/comments
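
A rough sketch of such an edge mapping, where sideTable stands in for any small keyed store (a SQL table, a Redis hash, even a comment column on the legacy side):

import { trace } from "@opentelemetry/api";

// Before handing work to the hop that drops headers, persist the business-key ↔ trace mapping.
async function recordTraceMapping(orderId: string): Promise<void> {
  const traceId = trace.getActiveSpan()?.spanContext().traceId;
  if (traceId) {
    await sideTable.upsert({ orderId, traceId, recordedAt: new Date().toISOString() }); // `sideTable` is a placeholder store
  }
}

// On the far side of the async hole, re-join the chain by business key.
async function traceIdForOrder(orderId: string): Promise<string | undefined> {
  return (await sideTable.get(orderId))?.traceId;
}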

Dual propagation standards

  • Problem: HTTP traceparent vs CE ce-traceparent
  • Mitigation: normalise at ingress if one of the two is missing (see the sketch below)
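
A minimal normalisation sketch at the ingress point; it assumes headers arrive as a plain string map and only fills the W3C names when they are missing, so the rest of the pipeline sees one convention:

// Copy CloudEvents binary-mode headers onto the W3C names if only the former arrived.
function normaliseTraceHeaders(headers: Record<string, string | undefined>): void {
  if (!headers["traceparent"] && headers["ce-traceparent"]) {
    headers["traceparent"] = headers["ce-traceparent"];
  }
  if (!headers["tracestate"] && headers["ce-tracestate"]) {
    headers["tracestate"] = headers["ce-tracestate"];
  }
}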

Webhooks from external APIs

  • Problem: external services do not propagate your trace headers
  • Mitigation: wrap handlers to create spans with synthetic parent contexts and tag the source clearly (sketched below)
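
A sketch of such a wrapper using @opentelemetry/api; the webhook payload, provider name and processPayment handler are illustrative assumptions:

import { trace, SpanStatusCode } from "@opentelemetry/api";

const tracer = trace.getTracer("webhooks");

// External callers carry no trace headers, so start a fresh root span and tag its origin.
async function handlePaymentWebhook(payload: { orderId: string }): Promise<void> {
  await tracer.startActiveSpan("ExternalWebhook:payment", { root: true }, async (span) => {
    span.setAttribute("source", "external.payment-provider"); // tag the source clearly
    span.setAttribute("order.id", payload.orderId);           // business key for re-joining the case
    try {
      await processPayment(payload); // hypothetical business handler
    } catch (err) {
      span.recordException(err as Error);
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw err;
    } finally {
      span.end();
    }
  });
}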

6. From traces to process models

Because every CloudEvent is already case-aware, process-mining tools can reconstruct the flow:

  1. Dump CloudEvents to Parquet in your lake.
  2. Point ProM / Celonis / PM4Py at the table.
  3. Investigate bottlenecks or compliance violations.

No duplicated instrumentation required.
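
As a rough sketch of step 1, flattening CloudEvents into the three columns most mining tools expect (case id, activity, timestamp); JSON Lines stands in here for the Parquet dump, and `events` is a placeholder for whatever batch you pull from the bus or lake:

import { writeFileSync } from "node:fs";

type MiningRow = { caseId: string; activity: string; timestamp: string };

// Map each CloudEvent onto a process-mining row: case id = orderId, activity = event type.
function toEventLog(
  events: Array<{ type: string; time?: string; data?: { orderId?: string } }>
): MiningRow[] {
  return events
    .filter((e) => e.data?.orderId)
    .map((e) => ({
      caseId: e.data!.orderId!,
      activity: e.type,                       // e.g. "com.acme.order.created"
      timestamp: e.time ?? new Date().toISOString(),
    }));
}

// One JSON object per line keeps the file trivial to ingest downstream.
writeFileSync("event-log.jsonl", toEventLog(events).map((r) => JSON.stringify(r)).join("\n"));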

7. Why this matters

  • Mean-time-to-repair drops
  • Process intelligence grows
  • Org-wide shared language emerges

8. Closing argument

Observability is a contract between IT and the business: every euro that moves leaves a breadcrumb a developer can step through and a product analyst can reason about.

Unifying functional and technical traceability is how we honour that contract.

If your trace stops where your money starts, you are debugging with one eye closed.
