The pattern that keeps repeating
A business identifies a manual, repetitive task. Someone suggests automating it. A tool is selected. The automation is built. Six months later, it's generating exceptions faster than the team can handle them, the edge cases are multiplying, and the original manual process has been quietly reinstated alongside the automated one to handle the failures.
This pattern is not rare. It's the default outcome for organisations that automate before they understand.
What "understanding a process" actually means
When I say process understanding, I don't mean drawing a flowchart. I mean knowing why the process does what it does — including the parts that look irrational.
Every established process has embedded logic. Some of it is good: it handles exceptions that aren't visible until they occur. Some of it is legacy: it reflects constraints that no longer exist. Some of it is waste: it's there because nobody ever questioned it. But you cannot tell which is which from the outside.
The classic mistake is to look at a process, identify the repetitive steps, and assume those steps exist purely because nobody has automated them yet. Sometimes that's true. Often it isn't. Often those repetitive steps contain judgment calls — micro-decisions made by experienced operators that look mechanical from the outside but aren't.
The three questions worth answering first
Before any automation conversation, three questions are worth working through seriously:
1. What is this process actually trying to achieve?
Not what it does. What it's trying to achieve. The goal is rarely "complete the form" or "send the email." The goal is usually something further downstream: a correct financial record, a satisfied customer, a compliant output. Understanding the real objective determines which variations matter and which are noise.
2. Where does it currently fail, and why?
Every process has failure modes. Manual rework, exceptions, escalations, complaints. These failures are information — they tell you where the process is fragile, where judgment is required, where the inputs are unpredictable. An automation that ignores these failure modes will reproduce them at scale.
3. What does good look like, and how would you know?
This is the measurement question. If you can't define what a successful execution looks like in measurable terms, you can't build a reliable automated version, and you can't monitor whether the automation is working. CTQs — Critical to Quality characteristics — are the foundation here. Define them before you build.
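One way the measurement question can be made concrete is a small sketch like the following. The CTQ names, metrics, and numbers are illustrative assumptions, not taken from any real process; the point is only that "good" is written down as something a monitor can check:

```python
from dataclasses import dataclass

@dataclass
class CTQ:
    """A Critical to Quality characteristic: a measurable definition of 'good'."""
    name: str
    metric: str          # how the characteristic is measured
    target: float        # the value a successful execution achieves
    tolerance: float     # acceptable deviation before an execution counts as a failure

    def is_met(self, observed: float) -> bool:
        return abs(observed - self.target) <= self.tolerance

# Hypothetical CTQs for an invoice-processing task (names and values invented)
ctqs = [
    CTQ("accuracy", "fraction of fields matching the source document",
        target=1.0, tolerance=0.0),
    CTQ("cycle_time", "hours from receipt to posted record",
        target=4.0, tolerance=4.0),
]
```

Once a CTQ exists in this form, both the pilot and the production monitoring can test executions against it, rather than relying on a reviewer's impression of whether the automation "seems fine."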
AI changes the economics of process analysis — not the requirement for it
One argument against thorough process analysis is that it takes too long. The business wants to move fast. Gemba walks, process mapping, and root cause analysis are Six Sigma activities that take weeks.
AI changes the economics of this work significantly. Process maps can be drafted from descriptions in minutes. VOC analysis — translating customer feedback into structured CTQs — that used to take days can now take hours. Cause-and-effect analysis can be seeded with AI-generated hypotheses that the team validates rather than builds from scratch.
But AI doesn't eliminate the requirement for process understanding. It compresses the timeline. The fundamental discipline — understand before you automate — remains unchanged.
If anything, AI makes the discipline more important. The cost of building an automation has dropped dramatically. The cost of building the wrong automation — one that embeds a flawed process at scale — has not.
What a better approach looks like
The organisations that get automation right tend to follow a similar pattern:
1. Map the current state — not how the process is supposed to work, but how it actually works. Who does what, when, under what conditions, and what happens when things go wrong.
2. Measure the failure modes — quantify the exceptions, the rework, the escalations. This becomes the baseline against which the automation is measured.
3. Define the CTQs — before designing any solution, agree on what "good" looks like in measurable terms.
4. Redesign before automating — if the current process has structural problems, automate the improved design, not the current one.
5. Pilot and measure — test the automation at small scale, measure against the CTQs, and verify the failure modes have been addressed before scaling.
6. Build control mechanisms — automated processes need monitoring. Define what exception thresholds trigger human review, and build them in.
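The last of the steps above can be sketched as a minimal monitoring check. The exception categories, counts, and the 2% review threshold are invented for illustration; in practice the threshold would come from the CTQs agreed earlier:

```python
from collections import Counter

# Hypothetical exception log from one automated run: one category per failed case.
exceptions = ["missing_po_number", "currency_mismatch", "missing_po_number",
              "unreadable_scan", "missing_po_number"]
total_processed = 200

def needs_human_review(exception_count: int, total: int,
                       threshold: float = 0.02) -> bool:
    """Trigger review when the exception rate exceeds the agreed threshold."""
    return (exception_count / total) > threshold

rate = len(exceptions) / total_processed         # observed exception rate
by_category = Counter(exceptions).most_common()  # Pareto view: which failures dominate
review = needs_human_review(len(exceptions), total_processed)
```

Even a check this simple encodes two of the controls the step calls for: an explicit threshold that routes the process back to a human, and a breakdown by category that tells the team where the automation is fragile.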
This is, essentially, DMAIC applied to automation design. The methodology exists because this pattern of work reliably produces better outcomes than jumping straight to a solution.
The discipline that compounds
The organisations that build strong process understanding before automating tend to get better at automation over time. They build institutional knowledge about how their processes actually work. They develop the muscle of distinguishing genuine automation candidates from processes that require judgment. They accumulate clean process documentation that makes future automation — and future improvement — faster.
The organisations that skip this step tend to accumulate technical debt. Each fast automation creates a fragile dependency that requires maintenance, exception handling, and periodic reinvention. The shortcuts compound in the wrong direction.
Process understanding before automation isn't slow. It's the investment that makes everything that comes after faster.