Why LLM Agents Need Process Engineering
February 12, 2026 · 4 min read
Most agent frameworks today are glorified function chains. They work for demos. They break in production. And the irony is that there's a decades-old discipline that already solved the problems they're rediscovering from scratch.
The Problem with Ad-Hoc Orchestration
Here's how most AI agent systems work in 2026: you write a function that calls an LLM, parse the output, and pipe it into the next function. Maybe you add a router that picks which function to call based on the user's intent. If you're sophisticated, you use a framework that wraps this in a graph abstraction.
This works fine until it doesn't.
When Agent B fails halfway through a task, Agent A has no idea. There's no retry logic that understands the broader workflow context. There's no audit trail showing what happened and why. There's no governance layer deciding which agent should handle what, or when a human needs to step in.
In production, these aren't edge cases — they're Tuesday.
A Discipline That Already Solved This
Business Process Model and Notation (BPMN), Case Management Model and Notation (CMMN), and Decision Model and Notation (DMN) are standards that the enterprise world has used for decades to orchestrate complex workflows. They were designed for exactly the problems that agent frameworks are now stumbling into.
BPMN gives you deterministic process orchestration. When you know the steps — gather requirements, generate design, implement, test, deploy — BPMN defines the sequence, handles parallel execution, manages error boundaries, and coordinates retries. It's a workflow engine, not a script.
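To make that concrete, here's a minimal sketch of deterministic orchestration in the spirit of BPMN: an ordered sequence of tasks with per-step retries, an error boundary, and an audit trail. This is an illustration, not a real BPMN engine; the `Step`/`Workflow` classes and step names are assumptions for the example.

```python
# Minimal BPMN-flavored sequential workflow: ordered steps, per-step
# retries, and an audit trail. Illustrative only, not a real BPMN engine.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Step:
    name: str
    action: Callable[[dict], dict]  # takes workflow state, returns new state
    max_retries: int = 2

@dataclass
class Workflow:
    steps: list[Step]
    audit: list[str] = field(default_factory=list)

    def run(self, state: dict) -> dict:
        for step in self.steps:
            for attempt in range(step.max_retries + 1):
                try:
                    state = step.action(state)
                    self.audit.append(f"{step.name}: ok (attempt {attempt + 1})")
                    break
                except Exception as exc:
                    self.audit.append(f"{step.name}: failed ({exc})")
            else:
                # Error boundary: retries exhausted, escalate to the caller.
                raise RuntimeError(f"step '{step.name}' exhausted retries")
        return state

# Hypothetical two-step workflow; real steps would call agents.
wf = Workflow(steps=[
    Step("gather_requirements", lambda s: {**s, "requirements": "..."}),
    Step("generate_design", lambda s: {**s, "design": "..."}),
])
result = wf.run({})
```

A real engine adds parallel gateways, compensation, and persistence, but the shape is the same: the sequence, the retries, and the audit log live in the engine, not in each agent's glue code.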
CMMN handles the work that doesn't follow a script. Case management is built for situations where the next step depends on what just happened. Stages activate based on conditions. Discretionary tasks can be triggered by humans or agents when the situation calls for it. It's adaptive by design.
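A rough sketch of that condition-driven activation, assuming a simplified model: each stage carries an entry condition (CMMN calls these sentries) evaluated against the shared case file whenever new facts arrive. The stage names and thresholds here are hypothetical.

```python
# CMMN-flavored adaptive case: stages activate when their entry
# condition ("sentry") becomes true against the case file.
# Stage names and thresholds are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Stage:
    name: str
    entry_condition: Callable[[dict], bool]
    active: bool = False

class Case:
    def __init__(self, stages: list[Stage]):
        self.stages = stages
        self.file: dict = {}  # the "case file": shared facts about the case

    def update(self, **facts) -> None:
        self.file.update(facts)
        # Re-evaluate every sentry whenever new information arrives.
        for stage in self.stages:
            if not stage.active and stage.entry_condition(self.file):
                stage.active = True

case = Case([
    Stage("approval", lambda f: f.get("amount", 0) > 1000),
    Stage("escalation", lambda f: f.get("rejected", False)),
])
case.update(amount=2500)  # activates "approval"; "escalation" stays idle
```

There is no fixed sequence anywhere in this code: the order in which stages activate falls out of the order in which facts arrive.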
DMN is the decision layer. Decision tables that govern routing: which agent handles this task? Which LLM model should we use? Does this need human approval? These aren't hardcoded if statements — they're auditable, versioned rules that non-engineers can read and modify.
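A decision table can be sketched as an ordered list of (condition, outcome) rules with a first-hit policy, which is one of the hit policies DMN defines. The routing rules below are invented for illustration:

```python
# DMN-flavored decision table: ordered rules, first match wins
# (DMN's "first" hit policy). The routing rules are hypothetical.
from typing import Callable

Rule = tuple[Callable[[dict], bool], str]

def decide(table: list[Rule], **inputs) -> str:
    for condition, outcome in table:
        if condition(inputs):
            return outcome
    raise LookupError("no matching rule")

AGENT_ROUTING: list[Rule] = [
    (lambda i: i["task"] == "code_review" and i["loc"] > 500, "senior_agent"),
    (lambda i: i["task"] == "code_review",                    "review_agent"),
    (lambda i: True,                                          "general_agent"),
]

decide(AGENT_ROUTING, task="code_review", loc=1200)  # -> "senior_agent"
```

In a real DMN setup the table lives outside the code as data, which is precisely what makes it versionable and readable by non-engineers; the point here is only the evaluation model.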
Applying Process Engineering to Agents
The insight is straightforward: treat AI agents like you'd treat any other participant in a business process.
BPMN defines the happy path. When an agentic coding system needs to build a website, the workflow is clear: gather requirements from the user, generate a design document, implement each page, run tests, deploy. Each step is an agent task. The workflow engine handles sequencing, parallelism, and error recovery.
CMMN handles the exceptions. A requisition management case doesn't follow a linear path — tasks appear based on conditions, approvals can be requested at any point, and the case adapts as new information arrives. This is the model for complex, non-deterministic agent work.
DMN governs the decisions throughout. Which model should this agent use — a fast, cheap model for simple tasks, or a capable model for complex reasoning? Should the system proceed autonomously, or does this decision need a human in the loop? Decision tables make these rules explicit, auditable, and changeable without touching code.
What This Looks Like in Practice
Consider two scenarios handled by the same engine:
Building a therapist's website is a BPMN workflow. The steps are known: gather requirements, design the layout, implement the pages, test across devices, deploy. Each step is an agent task with clear inputs and outputs. DMN decides which model handles each step and what tools are available.
Managing project requisitions is a CMMN case. There's no fixed sequence. A request comes in, gets classified, might need approval, might spawn sub-tasks, might escalate. Stages activate based on conditions. The case adapts to reality instead of forcing reality into a predefined flow.
Same engine. Different orchestration pattern. That's the power of process engineering — you match the orchestration to the nature of the work.
Why This Matters Now
The AI industry is moving fast, but it's rediscovering problems that were solved decades ago. Error recovery, audit trails, human-in-the-loop governance, adaptive workflow management — these aren't new challenges. They're well-understood problems with well-tested solutions.
The gap isn't in the LLMs themselves. It's in the orchestration layer. The engineers who understand both worlds — who can build with modern AI models and architect with proven process engineering — will build the agent systems that actually ship to production.
That's why I built HELM. Not because the world needs another agent framework, but because the existing ones are missing the foundation that makes agents trustworthy, auditable, and production-grade.