Validation Engine

Validate every decision
before it executes.

The Validation Engine checks every agent action against your policies in real time. Invalid decisions are blocked or escalated automatically. Sub-100ms latency.

Get Started Documentation

How it works

Real-time validation

Every agent action is validated synchronously before execution. P99 latency under 100ms even at scale.

Policy enforcement

Define rules in code or via our visual builder. Policies evaluate content, context, confidence, and custom logic.
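Conceptually, a policy is just a named predicate over the proposed action. A minimal sketch of that idea (illustrative only; `Policy` and `evaluate` here are hypothetical names, not the Attest API):

```python
from dataclasses import dataclass
from typing import Callable

# A policy pairs a name with a predicate over the proposed action.
@dataclass
class Policy:
    name: str
    check: Callable[[dict], bool]  # True means the action passes

policies = [
    Policy("internal_recipients_only",
           lambda a: a.get("to", "").endswith("@example.com")),
    Policy("subject_required",
           lambda a: bool(a.get("subject"))),
]

def evaluate(action: dict) -> list[str]:
    """Return the names of all policies the action violates."""
    return [p.name for p in policies if not p.check(action)]
```

An action that passes every predicate is clean; anything returned by `evaluate` is grounds for blocking or escalation.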

Confidence scoring

Set minimum confidence thresholds. Actions below threshold automatically route to human review.
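The routing rule itself is simple to picture. A sketch of the threshold logic (the 0.85 default is an arbitrary illustration, not a product default):

```python
def route(confidence: float, threshold: float = 0.85) -> str:
    """Route an action based on the agent's reported confidence.

    Actions at or above the threshold proceed; everything
    below it is sent to human review instead of executing.
    """
    return "approve" if confidence >= threshold else "review"
```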

Schema validation

Ensure agent outputs conform to expected formats. Catch malformed data before it causes problems.
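As a rough illustration of what "conform to expected formats" means, here is a tiny structural check in plain Python. A real system would use a proper schema language; this only shows the idea of catching malformed payloads before execution:

```python
# Expected shape of a send_email payload: field name -> required type.
EXPECTED = {"to": str, "subject": str, "body": str}

def conforms(payload: dict) -> bool:
    """True if the payload has exactly the expected keys and types."""
    return (set(payload) == set(EXPECTED)
            and all(isinstance(payload[k], t) for k, t in EXPECTED.items()))
```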

Semantic checking

Validate meaning, not just structure. Detect inappropriate content, PII exposure, and policy violations.
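To make the structure/meaning distinction concrete: a payload can be perfectly well-formed and still leak PII. A toy regex-based detector sketches the simplest version of that check (a real semantic layer would go well beyond regexes, e.g. NER models and contextual classifiers):

```python
import re

# Illustrative PII patterns only; names and coverage are examples.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def find_pii(text: str) -> list[str]:
    """Return the kinds of PII detected in the text."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]
```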

Content filtering

Block harmful, offensive, or off-brand content automatically. Customizable to your specific requirements.
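At its simplest, "customizable to your requirements" means you supply the terms to block. A minimal phrase-blocklist sketch (the phrases below are made-up examples of off-brand copy):

```python
# Example off-brand phrases a team might choose to block.
BLOCKLIST = {"guaranteed returns", "act now"}

def flagged_phrases(text: str) -> set[str]:
    """Return every blocklisted phrase found in the text."""
    lowered = text.lower()
    return {phrase for phrase in BLOCKLIST if phrase in lowered}
```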

Drop-in validation
for any agent.

Add validation to your existing agent code with a single function call. Works with any LLM provider or agent framework.

  • Python, Node.js, and Go SDKs
  • LangChain and OpenAI integrations
  • REST API for custom implementations
  • Webhook callbacks for async workflows
agent.py
from attest import Attest

client = Attest(api_key="...")

# Before executing, validate the action
# (recipient, subject, agent_draft, conv_id come from your agent code)
decision = client.validate(
    action="send_email",
    payload={
        "to": recipient,
        "subject": subject,
        "body": agent_draft
    },
    # Optional: include context for better decisions
    context={
        "conversation_id": conv_id,
        "customer_tier": "enterprise"
    }
)

if decision.approved:
    execute_action(decision.payload)
elif decision.escalated:
    # Routed to human review
    await_review(decision.review_id)
else:
    # Blocked by policy
    handle_rejection(decision.reason)
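For the webhook callback path, the callback body shape shown here is hypothetical, not a documented payload; the sketch just shows the dispatch pattern once a human review completes:

```python
def on_review_complete(event: dict) -> str:
    """Handle a (hypothetical) review-completed webhook event.

    Assumed shape: {"review_id": str, "outcome": "approved" | "rejected"}.
    """
    if event["outcome"] == "approved":
        return f"executing {event['review_id']}"
    return f"discarding {event['review_id']}"
```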

Start validating agent decisions.

Integrate in minutes. First 1,000 validations free.