Observability

See every agent decision, as it happens.

Traces, spans, conversations, and a live WebSocket stream. Drop in three lines. Watch your agents work.

01 · LIVE TRACE

A conversation forming, in real time.

Every LLM call, tool call, and guard decision lands on the stream as the agent runs. No refresh. No webhook setup.

trace · live · 1.28s

session.run                    trace   1280 ms
  anthropic.messages.create    llm      340 ms
  tool.lookup_order            tool      90 ms
  guard.evaluate               guard     48 ms
  tool.process_refund          tool     210 ms
  anthropic.messages.create    llm      290 ms
02 · WHAT YOU GET

Eight surfaces. One dashboard.

Everything you need to run agents in production. Nothing you don't.

traces

Every call, captured.

Tool calls, LLM requests, agent steps — full inputs, outputs, and context. Filter by agent, model, status, duration, tokens.

spans

A real tree, not a list.

Nested spans reflect the actual execution shape. Open any child without re-requesting the parent.
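A minimal sketch of what a nested span tree looks like as data, using the spans from the live trace above. The `Span` shape and field names are illustrative assumptions, not staso's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    # Hypothetical span record; field names are illustrative only.
    name: str
    kind: str          # "trace" | "llm" | "tool" | "guard"
    duration_ms: int
    children: list["Span"] = field(default_factory=list)

root = Span("session.run", "trace", 1280, children=[
    Span("anthropic.messages.create", "llm", 340),
    Span("tool.lookup_order", "tool", 90),
    Span("guard.evaluate", "guard", 48),
    Span("tool.process_refund", "tool", 210),
    Span("anthropic.messages.create", "llm", 290),
])

def render(span: Span, depth: int = 0) -> list[str]:
    # Depth-first walk: children stay nested under their parent,
    # so the output mirrors the execution shape, not a flat list.
    lines = [f"{'  ' * depth}{span.name} [{span.kind}] {span.duration_ms}ms"]
    for child in span.children:
        lines.extend(render(child, depth + 1))
    return lines
```

Because each child carries its own subtree, opening one node never requires re-fetching the parent.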

conversations

Multi-turn grouped automatically.

Stack-frame anchoring stitches sibling and nested LLM calls into one conversation. Zero decorators required.
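To illustrate the idea behind stack-frame anchoring (this is a toy sketch, not staso's implementation): LLM calls recorded from the same caller frame share a grouping key, so sibling calls land in one conversation with no decorator on the agent code.

```python
import inspect
from collections import defaultdict

# Conversation key -> list of recorded calls.
conversations = defaultdict(list)

def record_llm_call(message: str) -> None:
    # The immediate caller's frame acts as the anchor; file + function
    # approximate "same logical conversation" in this sketch.
    caller = inspect.stack()[1]
    key = (caller.filename, caller.function)
    conversations[key].append(message)

def agent_turn():
    # Two sibling LLM calls from the same frame.
    record_llm_call("plan")
    record_llm_call("respond")

agent_turn()
agent_turn()
# All four calls share one anchor, hence one conversation.
```

The real mechanism presumably handles nesting and recursion; the point is that the grouping signal comes from the call stack, not from user annotations.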

live stream

WebSocket, not polling.

Watch a trace form in real time while the agent runs. No refresh needed.

pretty engine

Claude, OpenAI, Codex — readable.

Raw SDK payloads rendered as cards. Tool calls, thinking, structured outputs — no JSON squinting.

cost & usage

Know who spent what.

Per-trace, per-agent, per-model cost and token usage. Find the runaway loop before finance does.
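The rollup itself is a simple aggregation. A sketch with made-up records and made-up per-1K-token rates (none of these numbers or field names are real):

```python
from collections import defaultdict

# Toy trace records; the field names are illustrative.
traces = [
    {"agent": "support", "model": "claude", "input_tokens": 900,  "output_tokens": 300},
    {"agent": "support", "model": "claude", "input_tokens": 1200, "output_tokens": 500},
    {"agent": "billing", "model": "gpt",    "input_tokens": 400,  "output_tokens": 100},
]

# (input_rate, output_rate) per 1K tokens -- placeholder prices.
PRICE_PER_1K = {"claude": (0.003, 0.015), "gpt": (0.0025, 0.01)}

def cost_by(records, key):
    # Group total dollar cost by any dimension: "agent", "model", ...
    totals = defaultdict(float)
    for t in records:
        in_rate, out_rate = PRICE_PER_1K[t["model"]]
        totals[t[key]] += (t["input_tokens"] / 1000) * in_rate \
                        + (t["output_tokens"] / 1000) * out_rate
    return dict(totals)
```

Segmenting by `"agent"` instead of `"model"` is the same one-line change the dashboard makes for you.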

environments

Prod, staging, dev — one click.

Every trace is tagged by environment. Filter globally. Dashboards follow.

agent metrics

Success rate, P95, errors.

Per-agent rollups with time-series charts. Spot regressions before they hit users.
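For concreteness, here is how a success rate and a nearest-rank P95 fall out of raw trace records. The record shape is an assumption for illustration, not staso's schema:

```python
import math

def rollup(records):
    # Success rate: share of traces that finished with status "ok".
    ok = sum(1 for r in records if r["status"] == "ok")
    # Nearest-rank P95: the value at position ceil(0.95 * n), 1-indexed,
    # in the sorted list of durations.
    durations = sorted(r["duration_ms"] for r in records)
    p95 = durations[math.ceil(0.95 * len(durations)) - 1]
    return {"success_rate": ok / len(records), "p95_ms": p95}

# Five toy traces; the slow one also errored.
records = [{"status": "ok", "duration_ms": d} for d in (120, 180, 200, 250, 900)]
records[-1]["status"] = "error"
```

Computed per agent over a time window, these two numbers are usually enough to spot a regression before users do.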

03 · USE CASES

Four jobs your team stops doing by hand.

debug

Find the failing step.

Jump from a failed trace to the exact span that threw. See the input, output, and the conversation leading to it.

audit

Prove what happened.

Full request, full response, full decision, full timestamp. Export to CSV or stream to your SIEM.
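The CSV side of that export is just flattening one row per decision. A sketch with illustrative column names (the real export's columns may differ):

```python
import csv
import io

def export_csv(records) -> str:
    # One audit row per record: what was asked, what was answered,
    # what was decided, and when.
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["timestamp", "request", "response", "decision"]
    )
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

rows = [{
    "timestamp": "2024-05-01T12:00:00Z",
    "request": "refund order 42",
    "response": "approved",
    "decision": "guard.pass",
}]
```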

cost

Cap the runaway agent.

Segment cost by model, agent, or workspace. Surface outliers before the invoice surfaces them.

segment

Split prod from staging.

Single environment filter at the workspace level. Dashboards, traces, and alerts scope accordingly.

04 · ON THE ROADMAP

Visibility today. Alerting next.

  • Threshold alerts

    Soon

    Trigger on latency, cost, error rate, or guard-violation spikes per agent. Route to Slack, email, or webhook.

05 · INTEGRATE

Start tracing in three lines.

Init. Patch. Ship. No decorators, no rewrites, no proxy.

Start for free
app/agent/__init__.py (Python)

import staso

staso.init(project="customer-agent")
staso.patch_anthropic()