Antonio Gulli's book on agentic design patterns catalogs 21 patterns across four parts: foundations, state and learning, resilience, and advanced patterns. Each chapter defines the pattern, shows code examples, and explains the trade-offs. It is a useful taxonomy. What it cannot do is show you what these patterns look like in a real system built under real constraints.

This article does the mapping. For each of the 21 patterns, one paragraph: what the pattern is, where ticketyboo.dev implements it, and what was learned from that implementation. The goal is not to summarise the book. The goal is to demonstrate that pattern literacy and practical architecture reinforce each other. You recognise patterns in your own code once you know their names. You choose better implementations when you understand why the pattern exists.

[Figure: overview grid of the 21 patterns across four parts, each tagged with its implementation status.]
Status key — Explicit: implemented and documented in a ticketyboo article or spec. Implicit: in the codebase, not yet written up. Partial: some aspects implemented, others deferred.
21 patterns, four parts. Green: explicitly implemented and documented. Blue: present in the codebase but not yet written up. Orange: partial implementation.

Part 1: Foundations

The foundational patterns are the building blocks everything else depends on. If you are building an agentic system and have not consciously applied all seven, you have probably applied them unconsciously.

1. Prompt Chaining

A sequence of LLM calls where the output of each step becomes the input to the next. The scanner implements this as: raw repo files to issue classification, classification to risk analysis, analysis to remediation report. The ops agents chain it without LLMs: EventBridge trigger to CloudWatch query to DynamoDB write to proxy read to browser. The pattern is identical whether models or functions do the work.
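As a sketch, a chain like the scanner's can be reduced to functions that pass a payload forward; each stage reads what the previous one wrote. The stage names and fields here (`classify`, `assess_risk`, `report`, `issues`, `risk`) are illustrative, not the scanner's actual code:

```python
from typing import Callable

Step = Callable[[dict], dict]

def run_chain(steps: list[Step], payload: dict) -> dict:
    """Feed each step's output into the next step."""
    for step in steps:
        payload = step(payload)
    return payload

# Hypothetical stages standing in for the scanner's LLM or function calls.
def classify(p: dict) -> dict:
    return {**p, "issues": [f"unparsed:{f}" for f in p["files"] if f.endswith(".tf")]}

def assess_risk(p: dict) -> dict:
    return {**p, "risk": "high" if p["issues"] else "low"}

def report(p: dict) -> dict:
    return {**p, "report": f"{len(p['issues'])} issue(s), risk {p['risk']}"}

result = run_chain([classify, assess_risk, report], {"files": ["main.tf", "app.py"]})
```

The same `run_chain` works whether a step wraps a model call or a plain function, which is the point made above about models and functions being interchangeable in the pattern.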

2. Routing

Classify the input first, then direct it to the appropriate handler. Gatekeep routes incoming work to one of three personas: Sentinel for security concerns, Auditor for cost and compliance, Architect for design decisions. The fixer-bot takes this further with a three-tier classifier: Simple tasks go to a lightweight model, Medium to a mid-tier, Complex to the heaviest available. Routing is the pattern that makes model selection economical rather than arbitrary.
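A minimal sketch of tiered routing, with a crude word-count heuristic standing in for the fixer-bot's real classifier and made-up model names (`light`, `mid`, `heavy`):

```python
def classify_tier(task: str) -> str:
    """Crude complexity heuristic; the real classifier is more involved."""
    words = len(task.split())
    if words <= 6:
        return "simple"
    if words <= 20:
        return "medium"
    return "complex"

# Illustrative model names, not real identifiers.
MODEL_BY_TIER = {"simple": "light", "medium": "mid", "complex": "heavy"}

def route(task: str) -> str:
    """Classify first, then direct the task to the matching model tier."""
    return MODEL_BY_TIER[classify_tier(task)]
```

The economics follow directly: only tasks classified as complex pay for the heaviest model.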

3. Parallelization

Run independent tasks concurrently rather than sequentially. The scanner's file analysis is embarrassingly parallel: each file can be analysed independently, and the results merged. The fixer-bot's DAG executor identifies which tasks have no dependencies and runs them simultaneously. The ops agents run as four independent Lambda functions on EventBridge schedules, each writing to the same DynamoDB table without coordination. Not every problem is parallelizable, but when the tasks are independent, sequential execution is simply waste.
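Because each file is independent, the fan-out can be as simple as a thread pool. This sketch uses a placeholder `analyse` function in place of the scanner's real per-file analysis:

```python
from concurrent.futures import ThreadPoolExecutor

def analyse(path: str) -> dict:
    # Placeholder for the per-file analysis; each call is independent.
    return {"file": path, "ok": not path.endswith(".min.js")}

def scan(paths: list[str]) -> list[dict]:
    """Analyse files concurrently; results come back in input order."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(analyse, paths))

results = scan(["app.py", "vendor.min.js", "infra.tf"])
```

`pool.map` preserves input order, so merging results afterwards needs no bookkeeping.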

4. Reflection

An agent reviews its own output and revises it before returning a result. Code review loops in the development workflow implement this: generate code, review for correctness, revise if issues found. Gatekeep pre-commit checks implement a mechanical form of it: generate change, check against governance rules, block if rules violated. The partial classification here reflects that reflection loops are not yet systematised as first-class components. They happen, but not with consistent tooling.
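The loop itself is small, which is why it is easy to implement ad hoc and hard to notice you have never systematised. A generic sketch, with toy generate/review/revise callables:

```python
def reflect(generate, review, revise, max_rounds: int = 3):
    """Generate a draft, then review and revise until clean or the round budget runs out."""
    draft = generate()
    for _ in range(max_rounds):
        issues = review(draft)
        if not issues:
            break
        draft = revise(draft, issues)
    return draft

# Toy example: the reviewer demands a trailing newline; the reviser adds it.
draft = reflect(
    generate=lambda: "print('hi')",
    review=lambda d: [] if d.endswith("\n") else ["missing trailing newline"],
    revise=lambda d, issues: d + "\n",
)
```

Making this a first-class component mostly means standardising what `review` returns, so its output can be logged and acted on mechanically.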

5. Tool Use

An agent calls external functions or APIs to extend its capabilities beyond the model's own knowledge. Every ops agent is a tool-use implementation: the SRE agent calls CloudWatch, the Cost agent calls Cost Explorer, the Security agent calls the IAM API and S3 bucket policy endpoint. The scanner calls the GitHub API, file parsers, and health-scoring functions. Tool use is where the agent stops reasoning in isolation and connects to the state of the actual world.
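One common mechanism behind tool use is a registry the agent dispatches through. This is a generic sketch, not the ops agents' actual code; `bucket_is_public` is a hypothetical stand-in for a real S3 policy lookup:

```python
TOOLS: dict = {}

def tool(fn):
    """Register a function as a callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def bucket_is_public(bucket: str) -> bool:
    # Hypothetical stand-in for a real S3 bucket policy check.
    return bucket in {"legacy-assets"}

def call_tool(name: str, **kwargs):
    """Dispatch a named tool call, failing loudly on unknown tools."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name](**kwargs)

public = call_tool("bucket_is_public", bucket="legacy-assets")
```

The registry is what separates "the agent can call anything" from "the agent can call exactly these things", which matters again under pattern 18.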

6. Planning

An agent decomposes a high-level goal into a sequence of steps before executing any of them. The fixer-bot's spec generation does this explicitly: take a GitHub issue, generate a YAML task breakdown, build a DAG from the dependencies, then execute. The sprint planning process with an AI assistant implements it at a higher level: read the context, identify the builds, order by dependency and value, produce a todo list before writing a line of code. Planning before execution prevents half-built implementations and circular dependencies.
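Once the breakdown exists, ordering it is a solved problem. A sketch of the DAG-to-order step using the standard library's `graphlib`, with a hypothetical task breakdown (each key lists the tasks it depends on):

```python
from graphlib import TopologicalSorter

def execution_order(tasks: dict[str, list[str]]) -> list[str]:
    """Order tasks so every dependency runs before its dependents.

    Raises graphlib.CycleError if the breakdown contains a circular dependency.
    """
    return list(TopologicalSorter(tasks).static_order())

# Hypothetical breakdown generated from an issue.
order = execution_order({
    "write_tests": ["scaffold"],
    "implement": ["scaffold"],
    "deploy": ["write_tests", "implement"],
    "scaffold": [],
})
```

The cycle check is the part that prevents the circular dependencies mentioned above: a bad plan fails before execution, not halfway through it.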

7. Multi-Agent

Multiple specialised agents coordinating to complete work no single agent could do well alone. The ticketyboo ops team is the clearest example: CTO agent handles GitHub issues and PR health, SRE handles infrastructure metrics, Security handles IAM and bucket policies, Cost handles billing and Free Tier burn. Each writes to the same DynamoDB table with a consistent schema. The team dashboard aggregates their outputs. Specialisation beats generalisation when the domains are genuinely distinct.

Part 2: State and Learning

The state and learning patterns address a fundamental limitation of stateless LLM calls: they forget everything when the context window closes. Solving this properly is architecturally non-trivial.

8. Memory Management

Persistent storage of context, decisions, and history across sessions and agent invocations. The roo-context MCP server is this platform's primary memory implementation: sessions record what was built, notes capture patterns and gotchas, decisions document architecture choices with rationale, file history tracks what changed and why. DynamoDB serves a different memory role: ephemeral agent telemetry with 30-day TTL, where the goal is recency, not permanence. The distinction between what to remember forever and what to forget automatically is the key design decision in any memory architecture.
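The forget-automatically half is cheap to implement: DynamoDB deletes items whose TTL attribute has passed. A sketch of building such an item, assuming a `pk`/`sk` key schema and a `ttl` attribute configured as the table's TTL field (the attribute names are assumptions, not the platform's actual schema):

```python
import time

TTL_DAYS = 30  # telemetry is deliberately ephemeral

def telemetry_item(agent: str, status: str, summary: str) -> dict:
    """Build a DynamoDB item that the table's TTL setting will expire automatically."""
    now = int(time.time())
    return {
        "pk": f"agent#{agent}",
        "sk": f"run#{now}",
        "status": status,
        "summary": summary,
        "ttl": now + TTL_DAYS * 86400,  # epoch seconds; eligible for deletion after this
    }

item = telemetry_item("sre", "ok", "all checks green")
```

No cleanup job, no cron: the forgetting is declared at write time.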

9. Learning and Adaptation

An agent improves its behaviour based on feedback and observed outcomes. The self-improving loop article describes this: an agent that observes its own output quality, receives structured feedback, and updates its approach. The roo-context MCP implements a manual version: session notes record what worked and what did not, and subsequent sessions read those notes before starting. This is learning through institutional memory rather than gradient descent. It works at the timescale of sprints, not milliseconds.

10. Model Context Protocol (MCP)

A structured protocol for exposing tools and resources to AI agents through a standardised server interface, keeping tool definitions outside the agent's context window. The roo-context MCP server exposes session management, note storage, decision recording, and file history as structured tools. The Gatekeep MCP exposes governance rules and approval workflows. MCP changes the architecture from "give the agent a long system prompt with tool descriptions" to "the agent queries a server for available tools." The difference in composability and maintainability is significant.

11. Goal Setting and Monitoring

Explicit goal definition with progress tracking and outcome measurement. Sprint plans with numbered BUILD items are the explicit implementation: goals are written, ordered, and tracked through a todo list that persists across the session. The frontlog is the higher-level version: all publishable content, its status, and the publish cadence. The ops dashboard implements it for infrastructure: each agent has a target state (all checks green, cost within budget, no IAM violations), and the dashboard shows current state against that target.

Part 3: Resilience

Resilience patterns address the reality that agents fail: tools time out, models return unexpected formats, external APIs are unavailable, humans reject outputs. A system without resilience patterns is a demo, not a platform.

12. Exception Handling and Recovery

Structured handling of failures at every level: tool errors, model errors, network errors, validation errors. Lambda functions implement this through try/except blocks with specific exception types, structured error responses, and CloudWatch Logs for post-mortem analysis. The scanner implements graceful degradation: if a file cannot be parsed, the scan continues with that file flagged as unanalysed rather than the whole scan failing. The fixer-bot has explicit abort conditions: if a task's dependencies cannot be resolved, the task is marked blocked and execution halts cleanly rather than producing partial output.
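The scanner's graceful degradation reduces to a loop that flags failures instead of propagating them. A sketch with a toy `parse` function standing in for the real file parsers:

```python
def scan_repo(paths: list[str], parse) -> dict:
    """Continue past parse failures, flagging files rather than aborting the scan."""
    analysed, unanalysed = [], []
    for path in paths:
        try:
            analysed.append(parse(path))
        except Exception:
            unanalysed.append(path)  # flagged as unanalysed; the scan continues
    return {"analysed": analysed, "unanalysed": unanalysed}

# Toy parser: fails on binary files, as the real parsers might.
def parse(path: str) -> dict:
    if path.endswith(".bin"):
        raise ValueError("not a text file")
    return {"file": path}

result = scan_repo(["app.py", "blob.bin"], parse)
```

The important design choice is that the failure is recorded in the output, so a partially successful scan is distinguishable from a complete one.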

13. Human-in-the-Loop

A mechanism for pausing automated processes to request human review or approval before proceeding. The data-draft feature flag is the simplest possible implementation: every new page is marked data-draft="true" and does not appear in navigation until a human removes that flag. Gatekeep implements approval gates at the governance level: changes to security configurations, cost-impacting deployments, and architectural decisions require explicit human sign-off before the agent proceeds. The spectrum runs from fully supervised (approve every step) to fully autonomous (approve nothing). Where each workflow sits on that spectrum is a deliberate choice, not a default.
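The data-draft gate is small enough to sketch in full: a page appears in navigation only once a human has removed the flag. The dict shape here is illustrative, not the site's actual page model:

```python
def visible_pages(pages: list[dict]) -> list[dict]:
    """Only pages a human has explicitly cleared (flag removed) appear in navigation."""
    return [p for p in pages if not p.get("data-draft")]

nav = visible_pages([
    {"title": "Published piece"},
    {"title": "Awaiting review", "data-draft": "true"},
])
```

The agent can generate pages freely; publication still requires a human act, which is the whole pattern in one predicate.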

14. Knowledge Retrieval (RAG)

Retrieval-augmented generation: fetch relevant documents from a knowledge base before generating a response, grounding the model in current information rather than training data alone. The scanner implements a lightweight version: before analysing a repository, it reads the relevant files, extracts patterns, and passes them as context to the analysis functions. The ops command centre (under construction) extends this to document ingestion: uploaded documents are parsed, chunked, and retrievable as context for planning conversations. The partial classification reflects that a proper vector store retrieval pipeline is not yet in place.

Part 4: Advanced Patterns

The advanced patterns are where the architecture becomes genuinely interesting. These are the patterns that differentiate a well-designed agentic system from one that works by accident.

[Figure: ticketyboo.dev architecture diagram — Scanner (GitHub API + file analysis, results to S3 + DynamoDB), Ops Agents (CTO / SRE / Security / Cost on EventBridge schedules), Gatekeep governance engine (Sentinel / Auditor / Architect), Agent Context MCP server (sessions / notes / decisions), DynamoDB (team-activity + TTL), Lambda API (ticketyboo-api + proxy), Static Site (S3 + CloudFront), and AI Coding Agents (reading roo-context MCP, writing session state), each component annotated with the patterns it implements.]
ticketyboo.dev architecture with pattern annotations. Green tags: explicitly implemented. Blue tags: implicit in the codebase. Every component implements multiple patterns simultaneously.

15. Inter-Agent Communication (A2A)

Structured communication between agents: passing context, sharing state, and coordinating actions without a human intermediary. The roo-context MCP server is the coordination layer: one AI assistant creates a session, writes notes and decisions, closes the session. A second assistant starts a new session, reads the context, and continues where the first left off. Gatekeep personas hand off findings to each other: the Sentinel flags a security concern, the Auditor picks it up and assesses the compliance angle, the Architect responds with a design recommendation. A2A communication works when the shared context is structured and persistent, not when it relies on one agent parsing another's prose output.

16. Resource-Aware Optimization

Adapting agent behaviour based on available resources: compute, cost, time, and API quotas. The AWS Free Tier constraint is this platform's primary resource budget, and it shapes every architectural decision. Lambda at 128MB instead of 512MB. DynamoDB on-demand instead of provisioned. SSM Parameter Store instead of Secrets Manager. The fixer-bot routes tasks to different model tiers based on complexity: simple tasks do not consume the budget of complex ones. Resource awareness is not an optimisation applied after the fact. It is a constraint that improves architectural clarity by forcing explicit choices about what resources each operation actually needs.

17. Reasoning Techniques

Structured approaches to multi-step reasoning: chain-of-thought, tree-of-thought, ReAct, and similar patterns that improve model reliability on complex problems. The multi-model reasoning article covers this explicitly: different models have different reasoning strengths, and combining them (one to generate hypotheses, one to critique, one to synthesise) often produces better results than a single model reasoning alone. Spec generation uses chain-of-thought implicitly: the model is prompted to reason through requirements, constraints, and edge cases before generating a task breakdown. The partial classification reflects that reasoning techniques are applied case-by-case rather than systematised.

18. Guardrails and Safety

Mechanisms that constrain agent behaviour within defined boundaries, preventing harmful or non-compliant outputs before they reach users or downstream systems. Gatekeep is the governance engine: declarative JSON rules define what is allowed, the Sentinel persona enforces security boundaries, the Auditor enforces cost and compliance boundaries. Pre-commit checks catch rule violations before code is merged. The scanner's security_handler runs OWASP checks and IAM pattern scanning. The key architectural principle: guardrails should be declarative (defined as rules) and enforced at the boundary (before action, not after). Rules defined after the fact are documentation, not guardrails.
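A sketch of that declarative, enforce-at-the-boundary style: rules are data, and a proposed change is evaluated against them before it is applied. The rule and field names here are illustrative, not Gatekeep's actual schema:

```python
# Illustrative rules in the declarative style described above.
RULES = [
    {"id": "no-public-read", "field": "acl", "forbid": "public-read"},
    {"id": "cost-cap", "field": "monthly_cost_usd", "max": 10.0},
]

def violations(change: dict) -> list[str]:
    """Evaluate a proposed change against the rules BEFORE it is applied."""
    found = []
    for rule in RULES:
        value = change.get(rule["field"])
        if "forbid" in rule and value == rule["forbid"]:
            found.append(rule["id"])
        if "max" in rule and value is not None and value > rule["max"]:
            found.append(rule["id"])
    return found

blocked = violations({"acl": "public-read", "monthly_cost_usd": 3.0})
```

Because the rules are data rather than code, adding a boundary does not mean redeploying the enforcement engine.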

19. Evaluation and Monitoring

Systematic measurement of agent performance and system health over time. The ops dashboard reads from the team-activity DynamoDB table: agent run results, health check outcomes, cost trends, security findings. CloudWatch provides Lambda metrics. The agent telemetry schema is designed for monitoring: each run records status, duration, summary, and the specific checks performed. Evaluation in the development workflow is currently manual: reviewing generated code for quality before committing is an evaluation step, but it is not yet instrumented or tracked.

20. Prioritization

Deciding the order in which tasks are executed when there are more tasks than resources to handle them simultaneously. The frontlog is the explicit implementation: all publishable content ranked by readiness and value, with a suggested publish cadence. Sprint planning implements task prioritization within a session: BUILDs are ordered by dependency and impact before any code is written. The fixer-bot's tier classifier (Simple / Medium / Complex) implements a form of prioritization by resource allocation: simple tasks are cheaper and faster, so they run first to unblock downstream work.
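The ordering rule above can be sketched as a sort key: unblocked work first, then higher value. The field names (`depends_on`, `value`) are illustrative, not the frontlog's actual schema:

```python
def prioritise(builds: list[dict]) -> list[dict]:
    """Fewest unmet dependencies first, then highest value."""
    return sorted(builds, key=lambda b: (len(b.get("depends_on", [])), -b["value"]))

ordered = prioritise([
    {"name": "dashboard", "value": 5, "depends_on": ["api"]},
    {"name": "api", "value": 8, "depends_on": []},
    {"name": "docs", "value": 3, "depends_on": []},
])
```

Making the key explicit is what turns "we ordered the sprint sensibly" into a policy that can be inspected and argued about.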

21. Exploration and Discovery

An agent that seeks new information, discovers unknown unknowns, and builds knowledge incrementally rather than operating only on what is provided. The scanner implements discovery as its core function: given a repository URL, it discovers what technologies are used, what patterns are present, what risks exist, without being told in advance what to look for. The ops command centre's context-gatherer agent is a more structured version: before a planning conversation, it reads relevant files and builds a picture of the current state. The partial classification reflects that exploration is reactive rather than proactive: the agent discovers when asked, rather than continuously surveying for novel findings.

The full map

Pattern Status Ticketyboo implementation
Part 1: Foundations
01 Prompt Chaining Implicit Scanner issue-to-report pipeline. Ops agents: trigger to check to write. All sequential multi-step workflows.
02 Routing Explicit Gatekeep dispatches to Sentinel / Auditor / Architect. Fixer-bot 3-tier classifier (Simple / Medium / Complex).
03 Parallelization Implicit Fixer-bot DAG executor runs independent tasks concurrently. Scanner parallel file analysis. Ops agents run independently.
04 Reflection Partial Code review loops in development workflow. Gatekeep pre-commit checks as mechanical reflection. Not yet systematised.
05 Tool Use Implicit All ops agents: CloudWatch, Cost Explorer, IAM API. Scanner: GitHub API, file parsers, health-scoring functions.
06 Planning Implicit Fixer-bot spec generation (issue to YAML to DAG). Sprint planning sessions produce ordered BUILD lists before execution.
07 Multi-Agent Explicit Ops team: CTO + SRE + Security + Cost agents. Gatekeep personas. Planning agent and coding agent collaborating on the same codebase.
Part 2: State and Learning
08 Memory Management Explicit Agent context MCP: sessions, notes, decisions, file history. DynamoDB TTL for ephemeral agent telemetry. See agent-memory-and-comms.html.
09 Learning and Adaptation Partial Self-improving loop article. Agent context notes used to inform subsequent sessions. Manual feedback loop, not automated.
10 Model Context Protocol Explicit Agent context MCP server. Gatekeep MCP. Tool definitions outside the context window, served on demand.
11 Goal Setting and Monitoring Implicit Sprint BUILD lists with todo tracking. Frontlog with publish cadence. Ops dashboard showing agent targets vs. actual state.
Part 3: Resilience
12 Exception Handling and Recovery Implicit Lambda structured error responses. Scanner graceful degradation on parse failures. Fixer-bot abort conditions on unresolvable dependencies.
13 Human-in-the-Loop Explicit data-draft feature flag as publish gate. Gatekeep approval gates for cost, security, and design changes. PR review as human checkpoint.
14 Knowledge Retrieval (RAG) Partial Scanner reads repo files before analysis. Ops command centre document ingestion in progress. No vector store yet.
Part 4: Advanced
15 Inter-Agent Communication Explicit Agent context MCP as shared memory between AI agents. Gatekeep persona handoffs. See agent-memory-and-comms.html.
16 Resource-Aware Optimization Explicit AWS Free Tier constraint drives all service selection. Lambda 128MB sizing. Model tier routing by task complexity. See resource-aware-human-loop.html.
17 Reasoning Techniques Partial Multi-model reasoning article. Chain-of-thought in spec generation. Applied case-by-case rather than systematised.
18 Guardrails and Safety Explicit Gatekeep declarative rules. Sentinel security persona. Scanner security_handler with OWASP checks. See routing-and-guardrails.html.
19 Evaluation and Monitoring Implicit Ops dashboard reads agent telemetry from DynamoDB. CloudWatch Lambda metrics. Agent run history with status and summary.
20 Prioritization Implicit Frontlog ordering. Sprint BUILD sequencing by dependency and value. Fixer-bot tier classification for resource allocation.
21 Exploration and Discovery Partial Scanner discovers repo patterns without prior knowledge. Context-gatherer agent for planning conversations. Reactive rather than continuous.

What the map reveals

Eight patterns are explicitly implemented and documented. Eight more are implicit in the codebase: present in the code, not yet written up as patterns. Five are partial. Zero are genuinely absent.

The most common state is implicit. Engineers implement patterns without naming them. Naming them matters for two reasons. First, you can improve what you can describe. An implicit prompt chain that fails is harder to debug than an explicit one with named stages and structured outputs. Second, naming patterns creates a shared vocabulary. "We need reflection here" is a more precise instruction than "the agent should double-check its work."

The patterns that appear in the most components are the foundational ones: Tool Use (pattern 5) appears in every agent, Routing (pattern 2) appears wherever there is a classification decision, Human-in-the-Loop (pattern 13) appears at every publish or deploy boundary. The advanced patterns appear in fewer places but have disproportionate architectural impact. MCP (pattern 10) changed how tools are defined and discovered. Memory Management (pattern 8) changed how context persists across sessions. Resource-Aware Optimization (pattern 16) shaped the infrastructure from the beginning.

The five partial patterns are the most interesting. They represent conscious deferral rather than ignorance. RAG is partial because a vector store retrieval pipeline adds complexity that is not yet justified by the use cases. Reflection is partial because systematic review loops require tooling that has not been built. Exploration and Discovery is partial because continuous discovery requires always-on infrastructure that conflicts with the Free Tier constraint. Each partial pattern has a clear reason and a clear path to making it explicit.

The exercise of mapping 21 patterns to a real platform surfaces something useful: most of the patterns were already present. Reading the taxonomy did not reveal gaps that needed filling. It revealed names for things that already existed. That is probably the right relationship between theory and practice. Theory names and organises what practice has discovered. Practice tests and extends what theory describes.

[Figure: adoption timeline from Jan 2025 to Mar 2026, spanning the Paperclip and Lambda eras, showing when each pattern (Multi-Agent, Tool Use, Resource-Aware, Exception Handling, Routing, Guardrails, Memory, A2A, RAG, Parallelization, Reflection, Learning, Planning) was made explicit. Most patterns were implicit before they were named.]
Patterns adopted over 14 months. Most were implicit before we named them.
Pattern taxonomy from Antonio Gulli, "Agentic Design Patterns: A Hands-On Guide to Building Intelligent Systems" (Springer, 2025). All examples are original implementations from the ticketyboo.dev platform. No book content reproduced verbatim.


More from the agentic patterns series

→ Routing work and guarding the gates
→ How agents remember and talk to each other
→ Building on a budget with humans in the loop
→ Pattern picker: which pattern do you need?
