# LLM Integrations
One line of code to auto-trace LLM provider calls. No wrappers, no changes to your existing LLM code.
## Available Integrations
| Provider | Install | Enable |
|---|---|---|
| Anthropic | `pip install "staso[anthropic]"` | `st.integrations.patch_anthropic()` |
| OpenAI | `pip install "staso[openai]"` | `st.integrations.patch_openai()` |
| Claude Code | `pip install staso` | `staso setup-claude-code --api-key ak_...` |
## How It Works
Call the patch function. Every API call to that provider automatically emits an LLM span — model, tokens, latency, errors. Both sync and async clients are covered. Streaming is fully supported.
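staso's internals aren't shown in these docs; conceptually, this style of auto-tracing monkey-patches the provider client's request method so every call is timed and reported. A minimal sketch with a hypothetical `create` method and a caller-supplied `emit_span` callback (both names are illustrative, not staso's API):

```python
import time

def patch_client(client, emit_span):
    """Illustrative only: wrap a client's `create` method so every call
    emits a span dict with outcome and latency, then patch it in place."""
    original = client.create

    def traced(*args, **kwargs):
        start = time.perf_counter()
        try:
            result = original(*args, **kwargs)
            emit_span({"kind": "llm", "ok": True,
                       "latency_s": time.perf_counter() - start})
            return result
        except Exception as exc:
            # Errors are recorded on the span, then re-raised unchanged.
            emit_span({"kind": "llm", "ok": False, "error": repr(exc),
                       "latency_s": time.perf_counter() - start})
            raise

    client.create = traced
    return client
```

Because the wrapper returns the original result and re-raises exceptions as-is, calling code behaves exactly as before patching.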
```python
import staso as st

st.integrations.patch_anthropic()
st.integrations.patch_openai()

# Use your LLM clients normally — traces happen automatically
```

## Other Providers
For providers without a built-in integration, use @st.trace(kind="llm"):
```python
@st.trace(name="cohere_call", kind="llm")
def call_cohere(prompt: str) -> str:
    response = cohere.chat(...)
    return response.text
```

You get timing and error tracking, but not automatic token extraction.
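To make the trade-off concrete, here is a hypothetical stand-in for such a decorator (not staso's implementation): it records a name, a kind, wall-clock duration, and any raised error, but never inspects the response, which is why token counts can't be extracted. `SPANS` is an illustrative stand-in for a real span exporter.

```python
import functools
import time

SPANS = []  # stand-in for a real span exporter

def trace(name=None, kind="span"):
    """Hypothetical timing/error-tracking decorator. It wraps the function,
    records duration and failures, and leaves the return value untouched."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            span = {"name": name or fn.__name__, "kind": kind}
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            except Exception as exc:
                span["error"] = repr(exc)
                raise
            finally:
                # Runs on success and failure alike, so duration is always set.
                span["duration_s"] = time.perf_counter() - start
                SPANS.append(span)
        return wrapper
    return decorator

@trace(name="model_call", kind="llm")
def call_model(prompt: str) -> str:
    return prompt.upper()  # placeholder for a real provider call
```

Since the wrapper only sees opaque arguments and return values, extracting model names or token usage would require provider-specific knowledge, which is exactly what the built-in integrations add.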