Matt Coppinger

From Strategy to Deployment: Building an Agent Template Library

AI · Agents · Enterprise · Templates · Automation

There's a gap in most enterprise AI strategies. Leadership has bought in, the use cases are mapped, the models are ready. But someone still has to turn "we should automate contract review" into a working agent. That middle step - from strategy to deployment - is where most AI initiatives stall.

I've been building tools on both sides of this gap. The AI Planner models where AI fits and what it costs. But planning without deployment is just a spreadsheet exercise. So I built the other half: seventeen pre-built agent instruction templates covering the enterprise use cases that come up most often.

Why Templates

Agent instruction files - CLAUDE.md, AGENTS.md, .cursorrules - are becoming how we define AI behaviour. They're job descriptions for machines. The pattern is universal across Claude Code, Codex, GitHub Copilot, and Gemini CLI. But writing good ones is harder than it looks. A vague instruction file produces a vague agent.

Twenty years ago, when web frameworks matured, we didn't ask every team to architect MVC from scratch. We built starter kits. Agent instructions need the same treatment - well-structured starting points that teams customise to their context.

The Four-Agent Pattern

Every template follows the same architecture:

Extract - a fast, cheap model pulls structured data from unstructured input.

Analyse - the extracted data gets enriched and scored.

Evaluate - a more capable model makes judgement calls.

Route - based on the evaluation, the agent takes action.

The separation matters. Extraction and classification run on Haiku-class models - fast and cheap. Judgement calls use Sonnet-class models. Organisations deploying a single frontier model for everything are burning money on tasks that don't need it.
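The four-stage separation can be sketched as a minimal pipeline. Everything below is illustrative: the function bodies, the stand-in model call, and the "haiku-class"/"sonnet-class" labels are assumptions, not the templates' actual code.

```python
# Minimal sketch of the Extract -> Analyse -> Evaluate -> Route pattern.
# The model names and all logic here are illustrative assumptions.

def call_model(model: str, prompt: str) -> str:
    """Stand-in for a real LLM API call."""
    return f"[{model} output for: {prompt[:30]}...]"

def extract(raw_input: str) -> dict:
    # Cheap, fast model pulls structured fields from unstructured text
    return {"fields": call_model("haiku-class", f"Extract key fields: {raw_input}")}

def analyse(extracted: dict) -> dict:
    # Enrich and score the extracted data (often plain code, not a model)
    extracted["confidence"] = 0.5  # placeholder score
    return extracted

def evaluate(analysed: dict) -> dict:
    # More capable model makes the judgement call
    analysed["verdict"] = call_model("sonnet-class", f"Assess: {analysed['fields']}")
    return analysed

def route(evaluated: dict) -> str:
    # Take action based on the evaluation
    return "auto-resolve" if evaluated["confidence"] >= 0.7 else "escalate"

def run_pipeline(raw_input: str) -> str:
    return route(evaluate(analyse(extract(raw_input))))
```

Because each stage is a separate function with a separate model tier, you can swap the extraction model without touching the judgement stage, and vice versa.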

Each template ships as a drop-in directory: a CLAUDE.md orchestrator, a config.yaml for deployment-specific parameters, individual sub-agent instruction files, and a knowledge-base/ directory for your domain context. Test prompts let you validate the pipeline on sample data within minutes.
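As a hedged illustration, a config.yaml for something like the ticket-classification template might carry parameters along these lines (the field names are assumptions for this sketch, not the actual schema):

```yaml
# Hypothetical config.yaml sketch - field names are illustrative, not the real schema
models:
  extract: haiku-class      # cheap model for extraction and classification
  evaluate: sonnet-class    # capable model for judgement calls
thresholds:
  auto_resolve_confidence: 0.85   # above this, route without human review
escalation:
  queue: support-tier-2
knowledge_base: ./knowledge-base/
```

Keeping deployment-specific values in config rather than in the instruction files means you can tune thresholds without rewriting the agent's job description.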

Where the ROI Lives

I've grouped the templates by business function, matching the taxonomy in the AI Planner:

Customer Operations - ticket classification, knowledge base Q&A, sentiment-driven escalation. Consistently 200-900% ROI across every org size. High volume plus high automation potential.

Sales and Marketing - lead scoring, content creation. The lead scorer processes inbound leads in seconds rather than the hours a human SDR takes to research and qualify.

IT and Engineering - code review, documentation generation, incident triage. Moving from basic automation (inline suggestions) to partial (full PR review with architectural flags) unlocks significantly more value.

Finance - invoice processing, financial forecasting. Classic extraction-heavy use cases where a small model handles 90% of the work.

HR - CV screening, policy Q&A, onboarding content. Consistently positive ROI because manual processes are labour-intensive and data sensitivity is manageable.

Legal - contract review, regulatory monitoring, due diligence. The most expensive to run (confidential data often requires local inference) but the savings potential is enormous.

Operations and Product - QA, process documentation, PRD drafting. Lower volume but high value per task.

The Real Levers: Adoption and Automation

Most AI pilot programmes stall. Not because the models are bad, but because nobody uses them. Enterprise AI tools typically see 25-30% adoption (Deloitte 2026). And when the output quality isn't reliable enough, adoption drops further. It's a vicious cycle: low accuracy kills trust, low trust kills adoption, low adoption kills ROI, and the pilot gets shelved.

The two biggest levers I see when modelling organisations aren't model selection or deployment strategy. They're adoption and automation level - and they're linked.

A use case run through manual prompting achieves roughly 4-5% effective automation. Move to partial automation (a triggered workflow with human review) and you're at 25-30%. Full automation pushes past 60%. For a 250-person organisation, upgrading automation across viable use cases unlocks an additional £9,000 per month.

But automation only works if the output is good enough for people to trust it. That's why the templates use the four-stage pattern - each stage is focused, testable, and tunable. You can measure extraction accuracy independently from evaluation quality. When accuracy improves, trust follows. When trust follows, adoption follows. When adoption follows, ROI follows.

The templates are designed to make this progression natural. Start with manual prompting, validate outputs, build confidence. Add workflow triggers for partial automation. Close the loop with automated routing. You're turning up the dial, not rebuilding the system.

From Template to Production

Week 1: Validate. Run the test prompts, check outputs against expectations. Tune the knowledge base and config.

Week 2: Integrate. Connect to your data sources - CRM webhooks, email parsing, ticketing events.

Week 3: Monitor. Shadow mode - the agent processes real data but humans review every output. Track accuracy, tune thresholds.

Week 4: Automate. Increase autonomy for high-confidence cases. Keep humans in the loop for everything else. Expand as confidence grows.
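The week-4 split between high-confidence automation and human review can be expressed as a simple gate. The 0.85 threshold and the queue names are assumptions to tune per deployment:

```python
# Sketch of confidence-gated routing: automate only above a tuned threshold.
# The threshold and destination names are illustrative assumptions.

def route_output(confidence: float, threshold: float = 0.85) -> str:
    """Send high-confidence results onward; everything else to a human."""
    return "auto-action" if confidence >= threshold else "human-review"

scores = [0.95, 0.80, 0.99, 0.40]
decisions = [route_output(c) for c in scores]
print(decisions)
# → ['auto-action', 'human-review', 'auto-action', 'human-review']
```

Raising autonomy then becomes a matter of lowering the threshold as measured accuracy improves, rather than rebuilding the pipeline.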

What Comes Next

The templates are open source under MIT licence. But they're just the starting point. The real opportunity is the feedback loop: use the AI Planner to identify your highest-ROI use cases, deploy the matching templates, measure actual savings, and feed that data back into the planner.

Not a transformation project. A continuous cycle of plan, deploy, measure, improve. The tools exist. What's left is execution - and that's always the hard part. Templates won't solve it for you, but they'll get you to the starting line faster.