# Reference

Span kinds, statuses, and well-known metadata keys: the shape of every Staso trace record.
## staso.SpanKind

Matches the backend SpanKindEnum.

| Value | Typical use |
|---|---|
| llm | A single LLM call (chat completion, streaming, tool use). Populated by patch_openai / patch_anthropic. |
| tool | A tool function invoked by an agent. Set by @st.tool. |
| chain | A generic sequence of steps. Default for st.span and @st.trace. |
| retriever | A vector search, SQL query, or document lookup. |
| agent | An agent entry point. Set by @st.agent. Usually the root span of a trace. |
| custom | Anything that doesn't fit the above. |
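For orientation, the values above can be mirrored as a plain string enum. This is a minimal sketch, not the SDK's actual class definition; the real staso.SpanKind may differ in member spelling and base class:

```python
from enum import Enum

class SpanKind(str, Enum):
    """Mirrors the backend SpanKindEnum values listed above."""
    LLM = "llm"              # single LLM call
    TOOL = "tool"            # tool function invoked by an agent
    CHAIN = "chain"          # generic sequence of steps (the default)
    RETRIEVER = "retriever"  # vector search, SQL query, document lookup
    AGENT = "agent"          # agent entry point, usually the trace root
    CUSTOM = "custom"        # anything else

# A str-based enum round-trips cleanly through JSON serialization.
assert SpanKind("llm") is SpanKind.LLM
```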
## staso.SpanStatus

Matches the backend StatusEnum.

| Value | When |
|---|---|
| ok | Default. The operation succeeded. |
| error | Set automatically by span.record_exception or by any decorator that catches an exception. |
| timeout | Explicit timeout; set manually via span.set_status(SpanStatus.TIMEOUT). |
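The transitions in the table can be illustrated with a toy span. This is a sketch of the described behavior, not the SDK's implementation:

```python
from enum import Enum

class SpanStatus(str, Enum):
    OK = "ok"            # default: the operation succeeded
    ERROR = "error"      # set by record_exception / decorators
    TIMEOUT = "timeout"  # never inferred; must be set manually

class ToySpan:
    """Illustrative only: shows the status transitions described above."""
    def __init__(self) -> None:
        self.status = SpanStatus.OK          # every span starts as ok
        self.error_message: str | None = None

    def record_exception(self, exc: BaseException) -> None:
        # Flips status to ERROR and captures the message,
        # matching the error_message field documented below.
        self.status = SpanStatus.ERROR
        self.error_message = str(exc)

    def set_status(self, status: SpanStatus) -> None:
        self.status = status

span = ToySpan()
try:
    raise ValueError("model unavailable")
except ValueError as exc:
    span.record_exception(exc)
```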
## Span fields

Top-level span fields written to storage:

| Field | Type | Description |
|---|---|---|
| name | str | Span label. |
| kind | SpanKind | See above. |
| status | SpanStatus | See above. |
| trace_id | uuid | Shared across every span in one trace. |
| span_id | uuid | Unique per span. |
| parent_span_id | uuid \| null | Parent span in the trace tree; null for the root span. |
| model | str | LLM model name. Populated on kind=llm spans. |
| input_tokens | int | Prompt tokens. |
| output_tokens | int | Completion tokens. |
| total_tokens | int | Sum of input_tokens and output_tokens; not re-derived on the backend. |
| input | dict | Free-form request payload. Serialized to JSON on flush. |
| output | dict | Free-form response payload. |
| metadata | dict | Well-known keys listed below, plus any custom keys you add. |
| conversation_id | str | Set by st.conversation or read from context. |
| agent_name | str | Set by @st.agent or st.init(agent_name=...). |
| agent_id | str | Deterministic UUID5 of agent_name. |
| user_id | str | Set by st.conversation(user_id=...). |
| error_message | str \| null | Set by record_exception. |
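Putting the fields together, a flushed llm span might look like the record below. This is an illustrative sketch: the model name and payloads are made up, and the UUID5 namespace staso actually uses for agent_id is not documented here, so NAMESPACE_DNS is a stand-in:

```python
import json
import uuid

# Assumption: the real UUID5 namespace is not specified in this
# reference; NAMESPACE_DNS is used purely for illustration.
AGENT_NAMESPACE = uuid.NAMESPACE_DNS

def agent_id_for(agent_name: str) -> str:
    # Deterministic: the same agent_name always yields the same id.
    return str(uuid.uuid5(AGENT_NAMESPACE, agent_name))

record = {
    "name": "chat_completion",
    "kind": "llm",
    "status": "ok",
    "trace_id": str(uuid.uuid4()),
    "span_id": str(uuid.uuid4()),
    "parent_span_id": None,            # null for the root span
    "model": "gpt-4o",                 # populated on kind=llm spans
    "input_tokens": 812,
    "output_tokens": 95,
    "total_tokens": 812 + 95,          # client-side sum, not re-derived
    "input": {"messages": [{"role": "user", "content": "hi"}]},
    "output": {"content": "Hello!"},
    "metadata": {"temperature": 0.2},  # well-known keys, see below
    "agent_name": "support-bot",
    "agent_id": agent_id_for("support-bot"),
}

payload = json.dumps(record)  # dict fields are serialized to JSON on flush
```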
## Metadata keys

Every key below lives in span.metadata. The SDK reads some from LLM providers automatically (via patch_openai / patch_anthropic); the rest you set yourself. Use these exact strings so the backend can index and chart them.
### Request parameters

| Key | Type |
|---|---|
| temperature | float |
| max_tokens | int |
| top_p | float |
| top_k | int |
| stop_sequences | list[str] |
| tool_choice | str \| dict |
| thinking | dict |
| service_tier | str |
| response_format | str \| dict |
| frequency_penalty | float |
| presence_penalty | float |
| seed | int |
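When you populate these yourself (for example, on a provider the SDK does not patch), one approach is to filter the request down to the well-known keys. The helper below is hypothetical, not part of the SDK:

```python
# Hypothetical helper: copy only the well-known request keys into
# span metadata so the backend can index and chart them.
REQUEST_KEYS = {
    "temperature", "max_tokens", "top_p", "top_k", "stop_sequences",
    "tool_choice", "thinking", "service_tier", "response_format",
    "frequency_penalty", "presence_penalty", "seed",
}

def request_metadata(request: dict) -> dict:
    return {k: v for k, v in request.items() if k in REQUEST_KEYS}

meta = request_metadata({
    "model": "gpt-4o",   # a top-level span field, not a metadata key
    "temperature": 0.7,
    "max_tokens": 1024,
    "seed": 42,
})
# meta == {"temperature": 0.7, "max_tokens": 1024, "seed": 42}
```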
### Response metadata

| Key | Type |
|---|---|
| stop_reason | str |
| stop_sequence | str |
| response_id | str |
| provider | str (openai, anthropic, etc.) |
### Token accounting

| Key | Type |
|---|---|
| cache_read_input_tokens | int |
| cache_creation_input_tokens | int |
| reasoning_tokens | int |
| audio_input_tokens | int |
| audio_output_tokens | int |
### Latency

| Key | Type |
|---|---|
| time_to_first_token | float (seconds) |
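The patched clients record this on streaming calls; if you need to measure it yourself, the idea is to timestamp the first chunk of the stream. A minimal stdlib sketch, not SDK code:

```python
import time

def consume_stream(chunks):
    """Return (tokens, ttft) where ttft is seconds until the first chunk.

    Illustrative only: patch_openai / patch_anthropic record
    time_to_first_token automatically on streaming calls.
    """
    start = time.monotonic()
    ttft = None
    tokens = []
    for chunk in chunks:
        if ttft is None:
            ttft = time.monotonic() - start  # first token arrived
        tokens.append(chunk)
    return tokens, ttft

# ttft would then be written to span.metadata["time_to_first_token"].
tokens, ttft = consume_stream(iter(["Hel", "lo"]))
```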
### Code location

| Key | Type |
|---|---|
| code.filepath | str |
| code.lineno | int |
| code.function | str |
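If you attach these keys manually, the stdlib inspect module can capture the caller's location in the same shape. A sketch under the assumption that you want the immediate caller's frame:

```python
import inspect

def code_location(frame_depth: int = 1) -> dict:
    """Capture the caller's source location using the code.* keys."""
    frame = inspect.stack()[frame_depth]
    return {
        "code.filepath": frame.filename,
        "code.lineno": frame.lineno,
        "code.function": frame.function,
    }

def handler():
    # Depth 1 resolves to this function's frame.
    return code_location()

loc = handler()  # loc["code.function"] == "handler"
```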
### Version control

| Key | Type |
|---|---|
| vcs.commit.sha | str |
| vcs.branch | str |
| vcs.repo_url | str |
| vcs.is_dirty | bool |
### Guardrails

| Key | Type |
|---|---|
| guard_action | str (one of: allow, block, modify, escalate) |
| guard_reason | str |
| guard_latency_ms | float |
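A guardrail hook would populate all three keys together. The sketch below is hypothetical (the policy check and key values are made up for illustration); only the key strings themselves come from the table above:

```python
import time

def run_guard(span_metadata: dict, text: str) -> bool:
    """Hypothetical guard: record guard_* keys with the exact strings
    the backend indexes. Returns True if the text is allowed."""
    start = time.monotonic()
    blocked = "DROP TABLE" in text  # stand-in policy check
    span_metadata["guard_action"] = "block" if blocked else "allow"
    span_metadata["guard_reason"] = (
        "sql_injection" if blocked else "clean"
    )
    span_metadata["guard_latency_ms"] = (time.monotonic() - start) * 1000
    return not blocked

meta: dict = {}
ok = run_guard(meta, "DROP TABLE users;")
# meta["guard_action"] == "block", ok is False
```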