    How to Build Contract-First Agentic Decision Systems with PydanticAI for Risk-Aware, Policy-Compliant Enterprise AI

December 29, 2025 · 5 Mins Read


    In this tutorial, we demonstrate how to design a contract-first agentic decision system using PydanticAI, treating structured schemas as non-negotiable governance contracts rather than optional output formats. We show how we define a strict decision model that encodes policy compliance, risk assessment, confidence calibration, and actionable next steps directly into the agent’s output schema. By combining Pydantic validators with PydanticAI’s retry and self-correction mechanisms, we ensure that the agent cannot produce logically inconsistent or non-compliant decisions. Throughout the workflow, we focus on building an enterprise-grade decision agent that reasons under constraints, making it suitable for real-world risk, compliance, and governance scenarios rather than toy prompt-based demos. Check out the FULL CODES here.

    !pip -q install -U pydantic-ai pydantic openai nest_asyncio

    import os
    import time
    import asyncio
    import getpass
    from dataclasses import dataclass
    from typing import List, Literal

    import nest_asyncio
    nest_asyncio.apply()


from pydantic import BaseModel, Field, model_validator
from pydantic_ai import Agent, ModelRetry
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.openai import OpenAIProvider

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
if not OPENAI_API_KEY:
    try:
        from google.colab import userdata
        OPENAI_API_KEY = userdata.get("OPENAI_API_KEY")
    except Exception:
        OPENAI_API_KEY = None
if not OPENAI_API_KEY:
    OPENAI_API_KEY = getpass.getpass("Enter OPENAI_API_KEY: ").strip()

    We set up the execution environment by installing the required libraries and configuring asynchronous execution for Google Colab. We securely load the OpenAI API key and ensure the runtime is ready to handle async agent calls. This establishes a stable foundation for running the contract-first agent without environment-related issues. Check out the FULL CODES here.

class RiskItem(BaseModel):
    risk: str = Field(..., min_length=8)
    severity: Literal["low", "medium", "high"]
    mitigation: str = Field(..., min_length=12)

class DecisionOutput(BaseModel):
    decision: Literal["approve", "approve_with_conditions", "reject"]
    confidence: float = Field(..., ge=0.0, le=1.0)
    rationale: str = Field(..., min_length=80)
    identified_risks: List[RiskItem] = Field(..., min_length=2)
    compliance_passed: bool
    conditions: List[str] = Field(default_factory=list)
    next_steps: List[str] = Field(..., min_length=3)
    timestamp_unix: int = Field(default_factory=lambda: int(time.time()))

    # Cross-field rules live in model validators: a field_validator only
    # sees fields declared *before* the one it validates, so checks that
    # reference later fields (e.g. confidence vs. identified_risks)
    # would silently skip.
    @model_validator(mode="after")
    def confidence_vs_risk(self):
        if any(r.severity == "high" for r in self.identified_risks) and self.confidence > 0.70:
            raise ValueError("confidence too high given high-severity risks")
        return self

    @model_validator(mode="after")
    def reject_if_non_compliant(self):
        if self.compliance_passed is False and self.decision != "reject":
            raise ValueError("non-compliant decisions must be reject")
        return self

    @model_validator(mode="after")
    def conditions_required_for_conditional_approval(self):
        if self.decision == "approve_with_conditions" and len(self.conditions) < 2:
            raise ValueError("approve_with_conditions requires at least 2 conditions")
        if self.decision == "approve" and self.conditions:
            raise ValueError("approve must not include conditions")
        return self

    We define the core decision contract using strict Pydantic models that precisely describe a valid decision. We encode logical constraints such as confidence–risk alignment, compliance-driven rejection, and conditional approvals directly into the schema. This ensures that any agent output must satisfy business logic, not just syntactic structure. Check out the FULL CODES here.
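Before wiring the agent, we can sanity-check that the contract itself rejects inconsistent decisions. The snippet below is an illustrative addition to the tutorial, with invented field values chosen to trip the confidence–risk rule:

from pydantic import ValidationError

# Hypothetical smoke test: a high-severity risk paired with 0.9 confidence
# must be rejected by the contract itself, before any agent is involved.
try:
    DecisionOutput(
        decision="approve",
        confidence=0.9,
        rationale=(
            "Approving despite a known high-severity vendor exposure, to show "
            "that the schema blocks overconfident approvals at construction time."
        ),
        identified_risks=[
            RiskItem(risk="Vendor data exposure risk", severity="high",
                     mitigation="Contractual audit rights and encryption at rest"),
            RiskItem(risk="Unlogged administrative access", severity="medium",
                     mitigation="Enable audit logging before rollout"),
        ],
        compliance_passed=True,
        next_steps=["review controls", "remediate gaps", "re-evaluate"],
    )
except ValidationError as e:
    print(e)  # -> Value error: confidence too high given high-severity risks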

@dataclass
class DecisionContext:
    company_policy: str
    risk_threshold: float = 0.6

model = OpenAIChatModel(
    "gpt-5",
    provider=OpenAIProvider(api_key=OPENAI_API_KEY),
)

agent = Agent(
    model=model,
    deps_type=DecisionContext,
    output_type=DecisionOutput,
    system_prompt="""
You are a corporate decision analysis agent.
You must evaluate risk, compliance, and uncertainty.
All outputs must strictly satisfy the DecisionOutput schema.
""",
)

    We inject enterprise context through a typed dependency object and initialize the OpenAI-backed PydanticAI agent. We configure the agent to produce only structured decision outputs that conform to the predefined contract. This step formalizes the separation between business context and model reasoning. Check out the FULL CODES here.
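Because deps are typed, they can also be surfaced to the model at prompt time. The sketch below uses PydanticAI's @agent.system_prompt decorator with RunContext; the inject_policy function is our own illustrative addition, not part of the original tutorial:

from pydantic_ai import RunContext

@agent.system_prompt
def inject_policy(ctx: RunContext[DecisionContext]) -> str:
    # Appended to the static system prompt on each run, so the agent
    # always reasons against the policy carried by the deps object.
    return (
        f"Company policy: {ctx.deps.company_policy}\n"
        f"Risk threshold: {ctx.deps.risk_threshold}"
    )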

@agent.output_validator
def ensure_risk_quality(result: DecisionOutput) -> DecisionOutput:
    # Raise ModelRetry (rather than a plain ValueError) so PydanticAI feeds
    # the message back to the model and asks it to self-correct.
    if len(result.identified_risks) < 2:
        raise ModelRetry("minimum two risks required")
    if not any(r.severity in ("medium", "high") for r in result.identified_risks):
        raise ModelRetry("at least one medium or high risk required")
    return result

@agent.output_validator
def enforce_policy_controls(result: DecisionOutput) -> DecisionOutput:
    # Keyword check stands in for deeper policy analysis: a decision that
    # claims compliance must at least name concrete security controls.
    text = " ".join([result.rationale, *result.next_steps, *result.conditions]).lower()
    if result.compliance_passed and not any(
        k in text for k in ["encryption", "audit", "logging", "access control", "key management"]
    ):
        raise ModelRetry("missing concrete security controls")
    return result

    We add output validators that act as governance checkpoints after the model generates a response. We force the agent to identify meaningful risks and to explicitly reference concrete security controls when claiming compliance. If these constraints are violated, we trigger automatic retries to enforce self-correction. Check out the FULL CODES here.
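Because the validators are ordinary functions, they can also be exercised in isolation, assuming (as registration decorators typically do) that @agent.output_validator returns the wrapped function unchanged. The following hypothetical check, with all values invented, shows the risk-quality gate firing on a decision that names only low-severity risks:

from pydantic_ai import ModelRetry

weak = DecisionOutput(
    decision="reject",
    confidence=0.4,
    rationale=(
        "This decision intentionally lists only low-severity risks so that the "
        "risk-quality output validator fires during this isolated test run."
    ),
    identified_risks=[
        RiskItem(risk="Minor UI inconsistency", severity="low",
                 mitigation="Schedule a cosmetic fix in the next sprint"),
        RiskItem(risk="Slightly outdated docs", severity="low",
                 mitigation="Refresh documentation during release prep"),
    ],
    compliance_passed=False,
    next_steps=["document findings", "monitor vendor", "revisit next quarter"],
)

try:
    ensure_risk_quality(weak)
except ModelRetry as e:
    print(e)  # -> at least one medium or high risk required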

async def run_decision():
    deps = DecisionContext(
        company_policy=(
            "No deployment of systems handling personal data or transaction metadata "
            "without encryption, audit logging, and least-privilege access control."
        )
    )

    prompt = """
Decision request:
Deploy an AI-powered customer analytics dashboard using a third-party cloud vendor.
The system processes user behavior and transaction metadata.
Audit logging is not implemented and customer-managed keys are uncertain.
"""

    result = await agent.run(prompt, deps=deps)
    return result.output

decision = asyncio.run(run_decision())

from pprint import pprint
pprint(decision.model_dump())

    We run the agent on a realistic decision request and capture the validated structured output. We demonstrate how the agent evaluates risk, policy compliance, and confidence before producing a final decision. This completes the end-to-end contract-first decision workflow in a production-style setup.
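Because the result is a validated DecisionOutput rather than free text, downstream code can branch on it directly. The small consumer below, route_decision, is our own illustrative helper, and the routing strings are placeholders:

def route_decision(d: DecisionOutput) -> str:
    # The contract's invariants make this routing safe: a non-compliant
    # decision is always "reject", and conditional approvals always
    # carry at least two conditions.
    if d.decision == "approve":
        return "proceed to deployment pipeline"
    if d.decision == "approve_with_conditions":
        return "open remediation tickets: " + "; ".join(d.conditions)
    return "escalate to compliance review"

print(route_decision(decision))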

In conclusion, we demonstrate how to move from free-form LLM outputs to governed, reliable decision systems using PydanticAI. By enforcing hard contracts at the schema level, we automatically align decisions with policy requirements, risk severity, and realistic confidence levels without manual prompt tuning. This approach lets us build agents that fail safely, self-correct when constraints are violated, and produce auditable, structured outputs that downstream systems can trust. Ultimately, contract-first agent design enables us to deploy agentic AI as a dependable decision layer in production and enterprise environments.

Check out the FULL CODES here. Also, feel free to follow us on Twitter, and don't forget to join our 100k+ ML SubReddit and subscribe to our Newsletter. You can also join us on Telegram.

Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable to a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.



