# Guard Integration
Three ways to add guards, depending on how your agent runs.
## 1. Explicit Guard Calls (Your Own Agents)
Call `guard()` before executing any tool. You control what happens with the result.
### Install

```shell
pip install staso
```

### Basic Usage

```python
import staso as st
from staso import guard

st.init(api_key="ak_...", agent_name="my-agent")

@st.tool(name="process_refund")
def process_refund(customer_id: str, amount: float) -> str:
    # Check with guard before executing
    result = guard(
        tool_name="process_refund",
        tool_input={"customer_id": customer_id, "amount": amount},
    )
    if result.action == "block":
        return f"Refund blocked: {result.reason}"
    if result.action == "modify":
        amount = result.modified_input.get("amount", amount)
    # Execute the refund
    return do_refund(customer_id, amount)

st.shutdown()
```

### With Context
Pass extra context to help rules make better decisions:

```python
result = guard(
    tool_name="send_email",
    tool_input={"to": "[email protected]", "body": body},
    context={
        "session_id": "session-abc",
        "agent_name": "outreach-agent",
        "trace_id": current_trace_id,
        "environment": "production",
    },
)
```

### Handling Every Action
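The dispatch below exercises every `GuardResult` field used on this page. For reference, here are those fields sketched as a dataclass; the field set is inferred from the examples here, and staso's actual class may differ:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class GuardResult:
    # Fields inferred from the examples on this page; the real class may differ.
    action: str                                         # "allow" | "block" | "modify" | "escalate"
    reason: Optional[str] = None                        # why a rule blocked the call
    rule_name: Optional[str] = None                     # which rule triggered
    modified_input: dict = field(default_factory=dict)  # sanitized input on "modify"
    escalation_id: Optional[str] = None                 # set on "escalate"

result = GuardResult(action="modify", modified_input={"amount": 500.0})
amount = result.modified_input.get("amount", 4200.0)  # 500.0
```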
```python
from staso import guard

def run_guarded_delete():
    result = guard(tool_name="delete_records", tool_input={"table": "users", "filter": "all"})
    match result.action:
        case "allow":
            return execute_tool()
        case "block":
            log.warning(f"Blocked by rule '{result.rule_name}': {result.reason}")
            return fallback_response()
        case "modify":
            return execute_tool(**result.modified_input)
        case "escalate":
            notify_human(result.escalation_id)
            return "Waiting for approval"
```

### With Escalation Polling
Block execution until a human approves or denies:

```python
result = guard(
    tool_name="wire_transfer",
    tool_input={"amount": 50000, "destination": "external-account"},
    wait_for_escalation=True,      # block until a human responds
    escalation_poll_interval=3.0,  # check every 3 seconds
    escalation_timeout=300.0,      # give up after 5 minutes
)
# result.action is now "allow" (approved) or "block" (denied/timeout)
```

See Escalation for the full workflow.
## 2. Automatic with LLM Integrations (Anthropic, OpenAI)
When you use `patch_anthropic()` or `patch_openai()`, guards evaluate tool calls in LLM responses automatically. No extra code.
```python
import staso as st

st.init(api_key="ak_...", agent_name="my-agent")
st.integrations.patch_anthropic()

import anthropic
client = anthropic.Anthropic()

# When the model returns a tool_use block, Staso evaluates it against your guard rules.
# Results are recorded on the span metadata as guard_evaluations.
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Process a refund for $4,200"}],
    tools=[...],
)
```

Guard evaluations show up in span metadata:
```json
{
  "guard_evaluations": [
    {
      "tool_name": "process_refund",
      "action": "block",
      "reason": "Refund exceeds $500 limit",
      "latency_ms": 45.2,
      "rules_triggered": ["max_refund_limit"]
    }
  ]
}
```

**Important:** LLM integration guards are observational -- they record what would happen but don't block the tool call. Your agent code still needs to check `guard_evaluations` in the span metadata if you want to enforce blocking. For automatic blocking, use explicit `guard()` calls before executing the tool.
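A minimal sketch of such an enforcement check, operating on the `guard_evaluations` structure shown above (the `blocked_tools` helper is a hypothetical name, not part of the staso API):

```python
def blocked_tools(span_metadata: dict) -> set:
    """Return the names of tools a guard rule would have blocked,
    given span metadata shaped like the guard_evaluations example above."""
    return {
        ev["tool_name"]
        for ev in span_metadata.get("guard_evaluations", [])
        if ev["action"] == "block"
    }

metadata = {
    "guard_evaluations": [
        {
            "tool_name": "process_refund",
            "action": "block",
            "reason": "Refund exceeds $500 limit",
            "rules_triggered": ["max_refund_limit"],
        }
    ]
}
# Skip execution for anything the guard would have blocked
assert "process_refund" in blocked_tools(metadata)
```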
### Disabling Auto-Evaluation
```shell
export STASO_GUARD_ENABLED=false
```

Guards are enabled by default; set `STASO_GUARD_ENABLED=false` in your environment to turn auto-evaluation off.
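The same toggle can be set from Python before `st.init()` runs. The sketch below mirrors common boolean env-var parsing; how staso itself parses the value is an assumption:

```python
import os

def guard_enabled() -> bool:
    # Guards default to enabled when the variable is unset;
    # "false" and "0" (any casing) disable them in this sketch.
    value = os.environ.get("STASO_GUARD_ENABLED", "true")
    return value.strip().lower() not in ("false", "0")

os.environ["STASO_GUARD_ENABLED"] = "false"
assert guard_enabled() is False

del os.environ["STASO_GUARD_ENABLED"]
assert guard_enabled() is True  # enabled by default
```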
## 3. CLI Agent Hooks (Claude Code, Codex)
Claude Code and Codex integrations evaluate guards before every tool execution -- and can block or modify tools in real time.
### Setup
Guards are enabled automatically when you set up a CLI agent integration:
```shell
staso setup --target claude-code --api-key ak_...
staso setup --target codex --api-key ak_...
```

Or use the interactive wizard:

```shell
staso setup
```

No code changes. Guards run on every tool call during your CLI sessions.
### What Happens
When Claude Code or Codex is about to execute a tool, Staso evaluates the tool name and input against your rules:

- **Block**: the tool is prevented from running. The agent sees a "Blocked by guard" message.
- **Modify**: the tool runs with sanitized input (e.g., dangerous flags removed).
- **Audit violations**: the tool runs normally, but a `guard:would-block` span is recorded.
### Dashboard Spans
Every guard decision creates a dedicated span:
| Span Name | Status | Meaning |
|---|---|---|
| `guard:blocked:Write` | error | Tool was prevented from running |
| `guard:modified:Bash` | ok | Tool ran with modified input |
| `guard:would-block:Edit` | ok | Audit rule flagged this, but tool ran normally |
Each span includes the guard action, triggered rules, and the original tool input.
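The span names follow a `guard:<action>:<ToolName>` pattern. As a sketch of how the table rows map to (name, status) pairs (the helper below is illustrative, not part of the staso API):

```python
def guard_span(action: str, tool_name: str) -> tuple:
    """Build the dashboard span name and status for a guard decision,
    following the table above: only "blocked" spans carry an error status."""
    status = "error" if action == "blocked" else "ok"
    return f"guard:{action}:{tool_name}", status

assert guard_span("blocked", "Write") == ("guard:blocked:Write", "error")
assert guard_span("would-block", "Edit") == ("guard:would-block:Edit", "ok")
```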
### Configuration
| Variable | Default | Description |
|---|---|---|
| `STASO_GUARD_ENABLED` | `true` | Set to `false` to disable guard evaluation |
| `STASO_GUARD_TIMEOUT` | `10` | Guard evaluation timeout in seconds |
These are set automatically during `staso setup`. To change them, edit your hook configuration or re-run setup.
### Full Example
An agent that processes customer requests with guard protection:

```python
import staso as st
from staso import guard

st.init(api_key="ak_...", agent_name="customer-service")
st.integrations.patch_anthropic()

import anthropic
client = anthropic.Anthropic()

@st.tool(name="process_refund")
def process_refund(customer_id: str, amount: float) -> str:
    result = guard(
        tool_name="process_refund",
        tool_input={"customer_id": customer_id, "amount": amount},
        context={"environment": "production"},
    )
    if result.action == "block":
        return f"Cannot process refund: {result.reason}"
    if result.action == "escalate":
        return f"Refund of ${amount} requires manager approval (escalation: {result.escalation_id})"
    if result.action == "modify":
        amount = result.modified_input.get("amount", amount)
    return f"Refund of ${amount} processed for {customer_id}"

@st.agent(name="customer-service")
def handle_request(message: str) -> str:
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        messages=[{"role": "user", "content": message}],
        tools=[{
            "name": "process_refund",
            "description": "Process a customer refund",
            "input_schema": {
                "type": "object",
                "properties": {
                    "customer_id": {"type": "string"},
                    "amount": {"type": "number"},
                },
                "required": ["customer_id", "amount"],
            },
        }],
    )
    # Handle tool use in response...
    return response.content[0].text

with st.conversation("customer-jane"):
    handle_request("I want a refund of $4,200 for order #789")

st.shutdown()
```
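The `# Handle tool use in response...` step above is deliberately elided. One possible shape for it, sketched against plain dicts rather than the Anthropic SDK's response objects (the `dispatch_tool_calls` helper and its handler mapping are illustrative assumptions, not part of the staso API):

```python
def dispatch_tool_calls(content_blocks: list, handlers: dict) -> list:
    """Run each tool_use block through the matching tool function.

    content_blocks mirrors the shape of Anthropic content blocks
    (dicts with "type", "name", "input"); handlers maps a tool name
    to a callable such as the process_refund function above.
    """
    results = []
    for block in content_blocks:
        if block.get("type") == "tool_use":
            handler = handlers[block["name"]]
            results.append(handler(**block["input"]))
    return results

blocks = [
    {"type": "text", "text": "Let me process that refund."},
    {"type": "tool_use", "name": "process_refund",
     "input": {"customer_id": "cust-789", "amount": 50.0}},
]
results = dispatch_tool_calls(
    blocks,
    {"process_refund": lambda customer_id, amount: f"refunded ${amount}"},
)
# results[0] == "refunded $50.0"
```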
## Next
- Rules and actions -- how rules evaluate, action types, severity
- Escalation -- human approval workflows
- Configuration -- all environment variables