Mae Mendoza

Content and Systems at Scale

I craft the layer between complex systems and the people using them.

More than 10 years across product, publishing, and platforms. Information architecture and content design that make the intricate feel simple.

01 — Audience Calibration

Same concept.
Different user.

Atlas is a fictional AI product. Each tab explains the same feature to a different audience, adjusting voice, density, and assumed knowledge based on who's reading and what they need from the information.

Onboarding · First-run experience

Atlas may occasionally include information that isn't accurate.

When Atlas searches your company's documents and drafts a response, it generates language based on patterns, not by copying text directly. This means it can sometimes present information that sounds right but isn't.

Our recommendation:

Use Atlas as a starting point. Before sharing anything important, verify key details (especially names, dates, and figures) against the original source documents.

Atlas AI · Onboarding Tooltip
In-product guidance · Workflow optimization

Some queries are more likely to produce inaccurate results.

Atlas performs best with well-scoped questions about topics covered thoroughly in your connected documents. Accuracy drops when you ask about recent events not yet in your knowledge base, request specific statistics, or chain multiple complex questions in one prompt.

Get better results:

Break multi-part questions into separate queries. When Atlas returns a specific number or date, click the source badge to verify it against the original document. If a response feels off, try rephrasing. A more specific prompt often surfaces better results.

Atlas AI · Pro Tips
API documentation · Atlas Developer Platform

Hallucination behavior in the Atlas API

Atlas generates responses probabilistically. When source document coverage is sparse for a given query, the model may produce plausible but unsupported completions rather than returning low-confidence indicators.

Recommended implementation:

Enable source_grounding: strict to constrain responses to retrieved document content. Set confidence_threshold to filter low-confidence outputs before they reach end users. Use the /verify endpoint to programmatically check claims against indexed sources. Monitor the hallucination_rate metric in your Atlas dashboard.
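To show how those settings fit together, here is a minimal sketch. Atlas is a fictional product, so every name below (`source_grounding`, `confidence_threshold`, the response shape) is illustrative of the docs copy above, not a real client library; the confidence filter runs locally to mimic what the platform would do server-side.

```python
# Hypothetical Atlas API configuration, mirroring the parameters
# named in the docs above. Atlas is fictional; these keys and the
# response shape are illustrative only.
ATLAS_CONFIG = {
    "source_grounding": "strict",   # constrain responses to retrieved document content
    "confidence_threshold": 0.7,    # filter low-confidence outputs before they reach users
}

def filter_low_confidence(responses, threshold):
    """Keep only responses at or above the confidence threshold."""
    return [r for r in responses if r["confidence"] >= threshold]

# Two mock responses: one well-grounded, one likely hallucinated.
sample = [
    {"text": "Q3 revenue was $3.8M", "confidence": 0.92},
    {"text": "The merger closed in May", "confidence": 0.41},
]

kept = filter_low_confidence(sample, ATLAS_CONFIG["confidence_threshold"])
print([r["text"] for r in kept])
```

In a real integration, the same threshold check would sit between the model response and the UI, with flagged claims routed to the (hypothetical) `/verify` endpoint instead of being shown as fact.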

Atlas AI · Developer Docs
Internal briefing · Quarterly product review

Atlas will sometimes get things wrong. Here's how we're managing that.

Every AI assistant on the market today can generate confident-sounding responses that turn out to be inaccurate. This isn't a bug we can patch. It's a known limitation of the underlying technology. The risk to us is proportional to how much our teams rely on Atlas without checking its work.

What we're doing about it:

We've added source links to every Atlas response so employees can verify claims with one click. Teams in Legal and Finance operate under a mandatory review policy before acting on Atlas outputs. We're tracking error rates monthly and will flag any use case where accuracy falls below our threshold.

Atlas AI · Leadership Brief
Internal knowledge base · Customer response template

When a user reports that Atlas gave them wrong information:

Lead with acknowledgment. Don't minimize it. If Atlas told them their Q3 revenue was $4.2M and the real number was $3.8M, that's a real problem for them.

Response framework:

Thank them for flagging it. Confirm you can see the response in question. Explain that Atlas generates answers from patterns in connected documents and can sometimes produce inaccuracies. Avoid the word "hallucination." Help them find the correct information and log the incident in the accuracy tracker.

Atlas AI · Support Playbook
Policy assessment · Pre-deployment audit

System behavior: Atlas generates responses probabilistically and does not guarantee factual accuracy.

Inaccurate outputs (industry term: "hallucinations") are an inherent characteristic of large language models. Rate of occurrence varies by document coverage, query complexity, and model version. This behavior cannot be fully eliminated through configuration.

Audit checklist:

Confirm all responses include source attribution to original documents. Verify human review requirements exist for regulated workflows (legal, financial, HR). Document observed hallucination rates from testing as a baseline metric. Assess whether AI-generated content disclosure meets organizational transparency policy.

Atlas AI · Compliance Review
02 — Selected Work

Real problems.
Scalable solutions.

Three projects where the content problem was really a systems problem.

How do you serve 50 unique markets from a single content system?

Custom builds for every state don't scale. The real problem was that no one had figured out what was shared and what wasn't.

Flip the card to see what I did.
Content Architecture · Product Localization

Two outdated products needed a full rebuild, not another patch. I analyzed every state's requirements, figured out where the real coverage gaps were, and designed a modular system that could serve most markets from a single content base while supporting two product tiers.

Read the case study

How do you scale expert review across a content catalog built over a decade?

AI can flag problems fast. It can't tell you which ones actually matter. I designed the workflow that puts both to work.

Flip the card to see what I did.
Process Design · AI-Assisted QA

A growing content catalog needed a quality review that would take years at human pace. I built a four-stage workflow where AI handles extraction and flagging while the subject matter expert focuses on the judgment calls: evaluating depth, catching what automation misses, and making the remediation decisions that shape the final product.

See the workflow

How do you turn one source document into content that works for three different audiences?

Documentation can be accurate and still useless to the people who need it most. Making it work for different audiences means rethinking structure, not just changing words.

Flip the card to see what I did.
Content Adaptation · Audience Design

The documentation was written for the team that built it. I restructured it three ways: a narrative walkthrough for stakeholders who needed the why, a reference guide for practitioners who needed the how, and an interactive experience for a broader audience encountering the subject for the first time.

See the approach
03 — How I Think

Clear frameworks.
Repeatable systems.

01 — Systems Design

Content as architecture

Building modular, scalable content systems that serve multiple products, audiences, and contexts from a single source of truth.

02 — Audience Calibration

Same concept, different user

Adjusting voice, density, assumed knowledge, and entry points based on who's reading and what they need.

03 — Information Architecture

Structure as UX

Turning dense, complex documentation into navigable systems where the right answer is always two clicks away.

04 — Quality Engineering

Rigor at scale

Building review processes, style systems, and content standards that maintain quality across teams, products, and release cycles.

Let's
connect.

Find me on LinkedIn to learn more.