Traces

Traces help you understand how long your application takes to handle incoming requests and how those requests flow through the services in your architecture. Oodle collects and analyzes trace data to help you identify performance bottlenecks, debug failures, and optimize your applications. Oodle can collect traces from all your applications: frontend, backend, serverless functions, LLM agents, and more.

What Questions Can Tracing Answer?

Tracing helps you answer critical questions about your application from a performance and debugging perspective:

  • How long does it take my application to handle a given request?
  • Why is it taking my application so long to handle a request?
  • Why do some requests take longer than others?
  • What is the overall latency of requests to my application?
  • Has latency increased or decreased over time?
  • What are my application's dependencies?
  • Which service is causing the bottleneck in a request chain?

For AI applications, tracing gives you visibility into the prompts sent to LLMs, the model's responses, tool inputs and outputs, and more. This helps you understand the inherently non-deterministic behavior of LLMs:

  • What are the prompts being sent to LLMs?
  • What tool calls were made while handling a particular user request?
  • How many input and output tokens were used to handle a user request?
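In OpenTelemetry, these details are typically recorded as span attributes. The generative-AI semantic conventions define names like `gen_ai.request.model` and `gen_ai.usage.input_tokens`; the sketch below shows such attributes as a plain dict for illustration (the values are made up, and attribute names should be checked against the current conventions):

```python
# Span attributes an instrumented LLM call might record, following the
# OpenTelemetry generative-AI semantic conventions. Values are illustrative.
llm_span_attributes = {
    "gen_ai.system": "openai",            # which LLM provider handled the call
    "gen_ai.request.model": "gpt-4o",     # model the prompt was sent to
    "gen_ai.usage.input_tokens": 412,     # prompt tokens consumed
    "gen_ai.usage.output_tokens": 128,    # completion tokens produced
}

# Total token usage for this request, derived from the span attributes
total_tokens = (llm_span_attributes["gen_ai.usage.input_tokens"]
                + llm_span_attributes["gen_ai.usage.output_tokens"])
```

Because these are ordinary span attributes, they can be searched and aggregated like any other trace metadata.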

How Tracing Works

A trace represents the complete journey of a request through your system. Each trace is made up of spans, which represent individual operations or units of work. Spans have:

  • Start time and duration - When the operation started and how long it took
  • Parent-child relationships - How operations relate to each other
  • Attributes - Metadata like HTTP methods, status codes, and service names
  • Status - Whether the operation succeeded or failed
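The structure above can be sketched with plain Python dataclasses. This is a simplified model for illustration only, not the OpenTelemetry SDK's span type:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Span:
    """Simplified span model: one unit of work within a trace."""
    name: str
    trace_id: str                  # shared by every span in the same trace
    parent: "Span | None" = None   # parent-child relationship between operations
    attributes: dict = field(default_factory=dict)  # metadata (HTTP method, etc.)
    status: str = "OK"             # whether the operation succeeded ("OK"/"ERROR")
    start: float = 0.0
    end: float = 0.0

    def duration_ms(self) -> float:
        """How long the operation took, in milliseconds."""
        return (self.end - self.start) * 1000

# A request produces a root span, plus a child span for a downstream DB call
root = Span("GET /checkout", trace_id="4bf92f35", start=time.time())
child = Span("SELECT orders", trace_id=root.trace_id, parent=root,
             attributes={"db.system": "postgresql"}, start=time.time())
child.end = child.start + 0.012    # the query took ~12 ms
root.end = child.end + 0.003       # root finishes shortly after its child
```

The shared `trace_id` and the `parent` link are what let a backend stitch individual spans back into one waterfall.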

When a request enters your system, a trace ID is generated and propagated through all services involved in handling that request. This allows you to see the complete picture of how a request was processed.
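Propagation commonly follows the W3C Trace Context standard: an outgoing `traceparent` header carries a version, the 128-bit trace ID, the parent span ID, and trace flags. A stdlib-only sketch of building and parsing that header (a simplification; real services would use a propagator from their tracing library):

```python
from __future__ import annotations

import re
import secrets

def make_traceparent(trace_id: str | None = None, span_id: str | None = None) -> str:
    """Build a W3C traceparent header: version-traceid-spanid-flags."""
    trace_id = trace_id or secrets.token_hex(16)  # 16 bytes -> 32 hex chars
    span_id = span_id or secrets.token_hex(8)     # 8 bytes  -> 16 hex chars
    return f"00-{trace_id}-{span_id}-01"          # flags 01 = sampled

def parse_traceparent(header: str) -> dict:
    """Extract the trace context a downstream service should continue."""
    m = re.fullmatch(r"([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})",
                     header)
    if not m:
        raise ValueError(f"invalid traceparent: {header}")
    _version, trace_id, span_id, flags = m.groups()
    return {
        "trace_id": trace_id,                    # reused for the whole request
        "parent_span_id": span_id,               # the caller's span
        "sampled": int(flags, 16) & 1 == 1,      # sampled bit of the flags field
    }

# Service A starts a trace; service B parses the header and reuses the trace ID
header = make_traceparent()
ctx = parse_traceparent(header)
```

Because every hop reuses the same `trace_id`, all spans from all services land in one trace.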

You can read about associated concepts in OpenTelemetry Traces.

Sending Traces to Oodle

Oodle supports collecting traces from multiple sources:

OpenTelemetry

The recommended way to instrument your applications. OpenTelemetry provides vendor-neutral instrumentation for most programming languages including Go, Java, Python, Node.js, and more.

OpenTelemetry Integration →
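As a sketch, a Python service can often be auto-instrumented and pointed at an OTLP endpoint purely through the standard OpenTelemetry environment variables. The endpoint and header values below are placeholders, not real Oodle values; consult the integration guide for the actual configuration:

```shell
# Install the OpenTelemetry distro and OTLP exporter packages
pip install opentelemetry-distro opentelemetry-exporter-otlp

# Standard OTel env vars; endpoint and credentials here are placeholders
export OTEL_SERVICE_NAME="checkout-service"
export OTEL_EXPORTER_OTLP_ENDPOINT="https://<your-oodle-otlp-endpoint>"
export OTEL_EXPORTER_OTLP_HEADERS="<your-auth-header>"

# Run the app with automatic instrumentation enabled
opentelemetry-instrument python app.py
```

Auto-instrumentation covers common frameworks and clients out of the box; manual spans can be layered on top where finer detail is needed.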

Grafana Alloy

If you're already using Grafana Alloy for metrics or logs collection, you can also use it to forward traces to Oodle.

Grafana Alloy Integration →

Sampling

Learn how to configure trace sampling to control the volume of trace data collected while maintaining visibility into your application behavior.

Trace Sampling →
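Head sampling is commonly implemented as a trace-ID ratio: the keep/drop decision depends only on the trace ID, so every service in the request path reaches the same verdict without coordination. A stdlib sketch of the idea (not Oodle's or the OpenTelemetry SDK's exact implementation, which operates on a portion of the ID):

```python
MAX_TRACE_ID = 2 ** 128  # trace IDs are 128-bit values (32 hex chars)

def should_sample(trace_id_hex: str, ratio: float) -> bool:
    """Keep a trace iff its ID falls below ratio * ID-space.

    Deterministic per trace ID, so all services handling the same
    request make the same sampling decision independently.
    """
    return int(trace_id_hex, 16) < ratio * MAX_TRACE_ID

# A 10% sample: low trace IDs are kept, high ones dropped
kept = should_sample("0" * 31 + "1", 0.10)   # tiny ID -> sampled
dropped = should_sample("f" * 32, 0.10)      # near-max ID -> dropped
```

Since trace IDs are effectively uniform random, roughly `ratio` of all traces survive, while each surviving trace stays complete across services.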

For the complete list of trace integrations, see Trace Integrations.

Exploring Your Traces

Once traces are flowing into Oodle, use the Trace Explorer to analyze them:

  • Trace Explorer - Search and filter traces, view distribution charts, and identify patterns
  • Traces Summary - View a summary across all matching traces
  • Trace View - Examine the full waterfall view of a trace with all its spans
  • Trace Insights - Analyze attribute distributions to identify root causes

Cross-Signal Correlation

Oodle connects your traces with other observability signals:

  • Logs - View logs from services involved in a trace
  • Metrics - See infrastructure metrics (CPU, memory) correlated with trace timing
  • Service Graph - Visualize dependencies between services

This unified view helps you quickly move from symptom to root cause when investigating issues.