OpenAI Codex

OpenAI Codex has native OpenTelemetry support. Once enabled, Oodle collects metrics and event logs so you can track token usage, session activity, tool calls, and WebSocket request patterns across your organization.

Codex charts dashboard

Getting Started

1. Enable Telemetry

The fastest way is to use the integration tile in the Oodle UI:

  1. Navigate to Settings → Integrations
  2. Open the AI Agent Observability section
  3. Click the Codex Observability tile
  4. Select an API key and follow the steps shown

Alternatively, add the following to your Codex configuration file (~/.codex/config.toml):

#:schema https://developers.openai.com/codex/config-schema.json

[otel]
environment = "production"
log_user_prompt = true

exporter = { otlp-http = {
  endpoint = "https://<LOGS_ENDPOINT>/ingest/otel/v1/logs",
  protocol = "binary",
  headers = {
    "X-API-KEY" = "<API_KEY>",
    "X-OODLE-INSTANCE" = "<INSTANCE_ID>",
  },
}}

metrics_exporter = { otlp-http = {
  endpoint = "https://<METRICS_ENDPOINT>/v2/otlp/metrics/<INSTANCE_ID>",
  protocol = "binary",
  headers = {
    "X-API-KEY" = "<API_KEY>",
    "X-OODLE-INSTANCE" = "<INSTANCE_ID>",
  },
}}

Replace <LOGS_ENDPOINT>, <METRICS_ENDPOINT>, <API_KEY>, and <INSTANCE_ID> with values from the integration tile.

tip

The integration tile in the Oodle UI generates a ready-to-copy config.toml with the correct endpoints and API key pre-filled.

Alternative: Environment Variables

For quick testing you can export the standard OpenTelemetry variables directly in your shell:

export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
export OTEL_EXPORTER_OTLP_METRICS_ENDPOINT=https://<METRICS_ENDPOINT>/v2/otlp/metrics/<INSTANCE_ID>
export OTEL_EXPORTER_OTLP_LOGS_ENDPOINT=https://<LOGS_ENDPOINT>/ingest/otel/v1/logs
export OTEL_EXPORTER_OTLP_HEADERS="X-API-KEY=<API_KEY>, X-OODLE-INSTANCE=<INSTANCE_ID>"
export OTEL_LOG_USER_PROMPTS=1
export OTEL_METRIC_EXPORT_INTERVAL=10000
export OTEL_LOGS_EXPORT_INTERVAL=5000
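Note that OTEL_EXPORTER_OTLP_HEADERS takes a comma-separated list of key=value pairs, per the OpenTelemetry SDK environment-variable specification. As a sanity check when debugging header typos, the parsing an exporter applies can be sketched roughly like this (illustrative Python, not Codex's actual implementation):

```python
# Sketch of how an OTLP exporter splits the comma-separated
# OTEL_EXPORTER_OTLP_HEADERS value into individual request headers.
def parse_otlp_headers(raw: str) -> dict[str, str]:
    headers = {}
    for pair in raw.split(","):
        pair = pair.strip()
        if not pair:
            continue  # tolerate trailing commas
        key, _, value = pair.partition("=")
        headers[key.strip()] = value.strip()
    return headers

print(parse_otlp_headers("X-API-KEY=<API_KEY>, X-OODLE-INSTANCE=<INSTANCE_ID>"))
# {'X-API-KEY': '<API_KEY>', 'X-OODLE-INSTANCE': '<INSTANCE_ID>'}
```

A missing comma or an extra `=` inside a value is the most common reason authenticated ingestion silently fails.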

2. Roll Out to Your Team

Commit the config.toml above to your dotfiles repository or place it in each developer's ~/.codex/ directory via your configuration management tool.
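A minimal install step might look like the following sketch; the source path is an assumption about your dotfiles layout, and here a stand-in file is generated so the script is self-contained:

```shell
# Sketch only: install a team-shared Codex config into ~/.codex.
# Replace SRC with the config.toml from your dotfiles repository.
set -eu
SRC="$(mktemp -d)/config.toml"
printf '[otel]\nenvironment = "production"\n' > "$SRC"   # stand-in for your repo's copy
mkdir -p "$HOME/.codex"
cp "$SRC" "$HOME/.codex/config.toml"
echo "Installed $HOME/.codex/config.toml"
```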

3. Verify Data

Once telemetry starts flowing, navigate to AI Assistants → Codex in the Oodle sidebar.

Charts Dashboard

The Charts tab embeds a Grafana dashboard with panels covering:

| Panel | Description |
| --- | --- |
| Total Tokens | Aggregate token usage over the selected range |
| Conversations | Number of Codex conversations started |
| WebSocket Requests | Total WebSocket requests made |
| Tool Calls | Total tool invocations |
| Token Usage | Time series by token type (input, output, cached, reasoning) |
| Conversations Over Time | Conversation start trends |
| Tool Call Activity | Tool invocations over time |
| WebSocket Requests | Success vs. failure rates |
| Avg Turn E2E Duration | End-to-end latency per turn |
| Avg TTFT | Time to first token |
| Avg TTFM | Time to first message |
| Avg WS Event Duration | WebSocket event processing time |
| Startup Prewarm Duration | Agent startup prewarm latency |
| Shell Snapshot Duration | Shell snapshot capture latency |

Sessions

Codex sessions table

The Sessions tab shows individual Codex sessions:

| Column | Description |
| --- | --- |
| Start Time | When the session began |
| User | Email of the developer |
| Model | Primary model used |
| Prompts | Number of user prompts |
| Duration | Wall-clock duration |
| Tools | Number of tool calls |
| Tokens | Total tokens (input + output) |
| Errors | Count of errors |

note

Codex does not currently export cost data. The Cost column is hidden for Codex sessions.

Click any row to open a Session Detail drawer showing a turn-by-turn timeline of every event.

Session Detail Drawer

Codex session detail drawer

The drawer displays:

  • Session metadata — user, model, app version
  • Aggregated stats — tokens, tool calls, errors, duration (only populated fields are shown)
  • Turn-by-turn timeline — each turn is collapsible and shows individual events (SSE events, WebSocket events, tool calls). Every event row is expandable to reveal the full raw JSON payload.

What Gets Collected

Metrics

Codex exports the following as OpenTelemetry metrics (delta temporality):

| Metric | Labels | Description |
| --- | --- | --- |
| codex_turn_token_usage | model, token_type, originator | Token count by type (input, output, cached, reasoning) |
| codex_thread_started | model, originator | Conversations started |
| codex_turn_tool_call | model, originator | Tool invocations |
| codex_websocket_request | model, success | WebSocket API requests |
| codex_turn_e2e_duration_ms | model | End-to-end turn latency |
| codex_turn_ttft_ms | model | Time to first token |
| codex_turn_ttfm_ms | model | Time to first message |
| codex_ws_event_duration_ms | model | WebSocket event processing time |
| codex_startup_prewarm_duration_ms | — | Startup prewarm latency |
| codex_shell_snapshot_duration_ms | — | Shell snapshot capture latency |
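With delta temporality, each exported data point carries only the increment since the previous export, so a backend recovers totals (as in the Total Tokens panel) by summing deltas per label set. A minimal sketch, with hypothetical data points:

```python
# Sketch: delta temporality means each point is the change since the last
# export; totals are recovered by summing deltas per label combination.
from collections import defaultdict

# Hypothetical codex_turn_token_usage data points (model, token_type labels).
points = [
    {"labels": ("gpt-5", "input"), "value": 1200},
    {"labels": ("gpt-5", "output"), "value": 300},
    {"labels": ("gpt-5", "input"), "value": 800},
]

totals = defaultdict(int)
for p in points:
    totals[p["labels"]] += p["value"]

print(dict(totals))
# {('gpt-5', 'input'): 2000, ('gpt-5', 'output'): 300}
```

This is why gaps in export intervals lose those increments outright, rather than being corrected by the next cumulative value.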

Events (Logs)

Events are exported via the OpenTelemetry logs protocol. Each event has an attributes.event.name field:

| Event Type | Key Attributes |
| --- | --- |
| codex.conversation_starts | Conversation ID, model, user email |
| codex.sse_event | Model, event kind, token counts, duration |
| codex.websocket_event | Model, event kind, duration, success |
| codex.websocket_request | Model, duration, success |
| codex.websocket_connect | Model, duration |
| codex.user_prompt | Prompt text, prompt length |
| codex.tool_decision | Tool name, decision, source |
| codex.tool_result | Tool name, success, duration, error |

Events are grouped by conversation.id to reconstruct the turn-by-turn timeline on the Sessions tab.
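The grouping step can be sketched as follows; the flattened record shape here is illustrative, not the exact wire format:

```python
# Illustrative sketch: group log events by their conversation.id attribute
# and order each group by timestamp to rebuild a per-session timeline.
from collections import defaultdict

events = [  # hypothetical log records, flattened to dicts
    {"timestamp": 3, "attributes": {"event.name": "codex.tool_result", "conversation.id": "c1"}},
    {"timestamp": 1, "attributes": {"event.name": "codex.conversation_starts", "conversation.id": "c1"}},
    {"timestamp": 2, "attributes": {"event.name": "codex.user_prompt", "conversation.id": "c1"}},
]

timelines = defaultdict(list)
for ev in events:
    timelines[ev["attributes"]["conversation.id"]].append(ev)
for timeline in timelines.values():
    timeline.sort(key=lambda ev: ev["timestamp"])

print([ev["attributes"]["event.name"] for ev in timelines["c1"]])
# ['codex.conversation_starts', 'codex.user_prompt', 'codex.tool_result']
```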

Support

If you need assistance or have any questions, please reach out to us through: