I designed a four-stage workflow where AI handles extraction and flagging, and the subject matter expert focuses on the judgment calls. The system doesn't replace the reviewer. It makes the review possible.
A large content catalog spanned two platforms. A quality audit found that many products, some more than a decade old, no longer met current requirements. Expert review catches what automation can't, but there wasn't enough expert time to review everything.
Instead of replacing the expert's review, the AI produces a briefing that makes that review faster, more targeted, and better documented. The content designer steers the loop.
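The four stages can be sketched in miniature. This is an illustrative sketch only, not the pilot implementation: every name, rule, and threshold below is invented, and the extraction step is a stand-in for whatever AI call would do the real work.

```python
from dataclasses import dataclass

@dataclass
class Page:
    title: str
    text: str
    last_updated_year: int

@dataclass
class Flag:
    page: Page
    reason: str

# Stage 1: extraction. Stand-in for an AI call that pulls out
# the claims a reviewer would need to check.
def extract_claims(page: Page) -> list[str]:
    return [s.strip() for s in page.text.split(".") if s.strip()]

# Stage 2: flagging. Apply checks (here, two toy rules) to mark
# what needs scrutiny.
def flag_page(page: Page, current_year: int = 2024) -> list[Flag]:
    flags = []
    if current_year - page.last_updated_year >= 10:
        flags.append(Flag(page, "over a decade old"))
    for claim in extract_claims(page):
        if "deprecated" in claim.lower():
            flags.append(Flag(page, f"possibly outdated claim: {claim!r}"))
    return flags

# Stage 3: briefing. Surface only flagged pages, most-flagged first,
# so expert attention goes where judgment is actually needed.
def build_briefing(pages: list[Page]) -> list[tuple[Page, list[Flag]]]:
    flagged = [(p, flag_page(p)) for p in pages]
    return sorted((pf for pf in flagged if pf[1]), key=lambda pf: -len(pf[1]))

# Stage 4 is the human: the expert reviews what the briefing surfaces.
```

The point of the sketch is the shape, not the rules: extraction and flagging are mechanical and automatable; ranking turns the output into a briefing; judgment stays with the expert.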
Generalizable across domains. Tailored by each content designer.
The workflow redraws where human attention goes.
Each expert tailors the evaluation framework to their content area.
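That tailoring might look something like this: each expert supplies the checks that matter in their content area, and the flagging stage simply runs whichever framework applies. All rule names and conditions here are invented for illustration.

```python
# Hypothetical per-domain evaluation frameworks. Each maps a rule name
# to a predicate over the page text; experts add, remove, or adjust
# rules for their own content area.
ACCESSIBILITY_RULES = {
    "alt_text_missing": lambda text: "<img" in text and "alt=" not in text,
    "vague_link_text": lambda text: "click here" in text.lower(),
}

LEGAL_RULES = {
    "stale_policy_reference": lambda text: "2019 policy" in text.lower(),
}

def run_framework(rules: dict, text: str) -> list[str]:
    """Return the names of every rule the text trips."""
    return [name for name, check in rules.items() if check(text)]
```

The design choice this illustrates: the pipeline stays generic, and domain expertise lives in the rule set, so the workflow generalizes across content areas without rebuilding anything.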
Without the briefing, the expert reads everything looking for everything. With it, they know where to focus and what to interrogate.
Every page looks the same. The expert reads all of it with equal attention, hunting for where problems might be.
The briefing marks what needs scrutiny. The expert's time goes to judgment, not hunting.
Designing where human attention goes is the content design problem.
The thinking is the work: figuring out what the expert actually needs, and structuring the experience around that.
Designed and proposed as a pilot process. Not a shipped implementation with measured results.