Process Design · AI-Assisted QA

A decade of content and no way to review it all at human pace.

I designed a four-stage workflow where AI handles extraction and flagging, and the subject matter expert focuses on the judgment calls. The system doesn't replace the reviewer. It makes the review possible.

01

Human review is essential. It doesn't scale.

A large content catalog spanning two platforms. A quality audit found that many products, some over a decade old, didn't meet current requirements. Expert review catches what automation can't, but there wasn't enough expert time to review everything.

[Chart legend: Reviewed at human pace · Waiting]
02
AI Briefings,
Not AI Decisions.

The AI doesn't replace the expert's review. It produces a briefing that makes the review faster, more targeted, and better documented. The content designer steers the loop.

03

Four stages. Judgment stays human.

Generalizable across domains. Tailored by each content designer.

1
Structural Inventory
AI-driven
  • Map content to requirements at page level
  • Compile key terms with source locations
  • Build citations inventory across all material
2
Analytical Flagging
AI-driven, expert-designed
  • Coverage confidence: depth, not just presence
  • Gap and bridge opportunities for new material
  • Quality flags: outdated content, sensitivity concerns
  • Accessibility: language complexity, vocabulary load
3
Expert Deep Read
Human-driven, AI-informed
  • Evaluate whether AI assessments hold up
  • Triage flagged issues: real problems or false positives
  • Catch what the AI missed
4
Synthesis & Documentation
Human-driven
  • Final coverage map: met, partial, or missing
  • Prioritized remediation recommendations
  • Documentation that accelerates the next cycle
04

Drawing the line.

The workflow redraws where human attention goes.

Shifts to AI
Locating where requirements appear across hundreds of pages
Stays with the expert
The deep read and all coverage judgments
Shifts to AI
Compiling documentation alongside development
Stays with the expert
Designing the evaluation framework and quality criteria
Shifts to AI
Initial scan for coverage and quality flags
Stays with the expert
Sensitivity and bias assessment

Each expert tailors the evaluation framework to their content area.

05

Directed attention, not diffuse.

Without the briefing, the expert reads everything looking for everything. With it, they know where to focus and what to interrogate.

Every page looks the same. The expert reads all of it with equal attention, hunting for where problems might be.

The briefing marks what needs scrutiny. The expert's time goes to judgment, not hunting.

[Briefing legend: Gap identified · Quality flag · Coverage confirmed]
06

Designing where human attention goes is the content design problem.

The thinking is the work: figuring out what the expert actually needs, and structuring the experience around that.

Designed and proposed as a pilot process. Not a shipped implementation with measured results.