
LLM Integrations

One line of code to auto-trace LLM provider calls. No wrappers, no changes to your existing LLM code.

Available

Provider       Install                            Enable
Anthropic      pip install "staso[anthropic]"     st.integrations.patch_anthropic()
OpenAI         pip install "staso[openai]"        st.integrations.patch_openai()
Claude Code    pip install staso                  staso setup-claude-code --api-key ak_...

How It Works

Call the patch function once, typically at startup. Every subsequent API call to that provider automatically emits an LLM span recording the model, token counts, latency, and any errors. Both sync and async clients are covered, and streaming is fully supported.

st.integrations.patch_anthropic()
st.integrations.patch_openai()

# Use your LLM clients normally — traces happen automatically
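Conceptually, patch-style integrations work by replacing the provider client's request method with a wrapper that times the call and records the outcome. The sketch below illustrates that monkey-patching idea in plain Python; the function and span names here are hypothetical, not staso's actual implementation.

```python
import functools
import time


def patch_client(client_cls, method_name, emit_span):
    """Replace client_cls.method_name with a wrapper that emits a span
    (a plain dict here) for every call, then delegates to the original."""
    original = getattr(client_cls, method_name)

    @functools.wraps(original)
    def wrapper(self, *args, **kwargs):
        start = time.perf_counter()
        try:
            result = original(self, *args, **kwargs)
            emit_span({"kind": "llm", "ok": True,
                       "latency_s": time.perf_counter() - start})
            return result
        except Exception:
            emit_span({"kind": "llm", "ok": False,
                       "latency_s": time.perf_counter() - start})
            raise

    setattr(client_cls, method_name, wrapper)
```

Because the wrapper is installed on the class, existing client instances and code paths pick it up without any changes, which is why no wrapper objects are needed in your application code.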

Other Providers

For providers without a built-in integration, use @st.trace(kind="llm"):

import cohere

co = cohere.Client()

@st.trace(name="cohere_call", kind="llm")
def call_cohere(prompt: str) -> str:
    response = co.chat(message=prompt)
    return response.text

You get timing and error tracking, but not automatic token extraction.
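The timing-and-error behavior you get from the decorator can be pictured as a thin wrapper around the decorated function. The sketch below is a generic stand-in, assuming a `sink` callback for recorded spans; it is not staso's internals.

```python
import functools
import time


def trace(name, kind, sink):
    """Hypothetical stand-in for @st.trace: records the span name, kind,
    wall-clock duration, and the exception type if the call fails."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            record = {"name": name, "kind": kind}
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            except Exception as exc:
                record["error"] = type(exc).__name__
                raise
            finally:
                record["duration_s"] = time.perf_counter() - start
                sink(record)
        return wrapper
    return decorator
```

Note that the wrapper never inspects the function's return value, which is why token usage from the provider response is not captured automatically on this path.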