The 5 Failure Modes of Bad Prompts
Every bad AI output traces back to one of five prompt failures. Learning to diagnose these failures is like learning to read error messages in programming — once you can identify the problem, the fix is obvious.
Failure Mode 1: The Vagueness Trap
The most common failure. Your prompt lacks specifics, so the model fills in the gaps with generic defaults.
❌ Vague:
"Write a blog post about productivity"
✅ Specific:
"Write a 1,200-word blog post for busy startup founders
(Series A stage, 10-30 employees) about the 'Maker Schedule'
concept from Paul Graham. Include 3 actionable techniques
they can implement this week. Tone: conversational but
authoritative, like a Y Combinator blog post. Format: H2
headers for each technique, with a real-world example under
each one."
The specific prompt constrains the model's output space. Instead of choosing from millions of possible "productivity blog posts," it now generates the one that matches your precise requirements.
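If you send prompts programmatically, the same idea can be encoded as a template whose required fields force you to fill in every constraint. A minimal sketch; the function and field names are illustrative, not from any particular library:

```python
# Sketch: a prompt template that makes each specificity constraint an
# explicit, required argument, so nothing falls back to generic defaults.
def build_blog_prompt(word_count, audience, topic, num_techniques, tone, fmt):
    """Assemble a specific blog-post prompt from explicit constraints."""
    return (
        f"Write a {word_count}-word blog post for {audience} "
        f"about {topic}. Include {num_techniques} actionable techniques "
        f"they can implement this week. Tone: {tone}. Format: {fmt}."
    )

prompt = build_blog_prompt(
    word_count=1200,
    audience="busy startup founders (Series A stage, 10-30 employees)",
    topic="the 'Maker Schedule' concept from Paul Graham",
    num_techniques=3,
    tone="conversational but authoritative, like a Y Combinator blog post",
    fmt="H2 headers for each technique, with a real-world example under each",
)
print(prompt)
```

Because every parameter is required, a vague prompt becomes a missing-argument error instead of a silently generic output.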
Diagnosis: If the output is technically correct but not what you wanted, the prompt was too vague.
Failure Mode 2: The Contradiction
Your prompt sends conflicting signals, and the model tries to satisfy all of them — producing incoherent output.
❌ Contradictory:
"Write a technical whitepaper about quantum computing
that is easy enough for a 10-year-old to understand.
Make it rigorous and include mathematical proofs.
Keep it under 500 words."
(Technical + child-friendly + mathematical + short
= impossible constraints)
✅ Resolved:
"Write a 2,000-word explainer about quantum computing
for business executives with no physics background.
Use analogies instead of math. Cover: what it is, why
it matters for business, and a realistic 5-year outlook."
Diagnosis: If the output feels disjointed or tries to do too many things, check for contradictions.
Failure Mode 3: The Context Vacuum
The model lacks the background information it needs to produce a relevant response.
❌ No context:
"Write a response to this customer complaint"
✅ With context:
"You are a customer success manager at a B2B SaaS company
that sells project management software. A customer on the
Enterprise plan ($5,000/month) is threatening to churn
because our recent update removed the Gantt chart feature
they depended on. The feature is coming back in 2 weeks.
Write a response that: acknowledges their frustration,
explains the timeline for restoration, offers a concrete
interim solution, and reinforces the value of our platform."
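When the background lives in a database or CRM, the context block can be injected from structured data rather than retyped each time. A sketch under the assumption that you hold a customer record like the one below (the record and its fields are hypothetical):

```python
# Sketch: building the context section of a prompt from a structured
# customer record, so the model never reasons in a vacuum.
# The `customer` dict is a hypothetical example record.
customer = {
    "plan": "Enterprise",
    "mrr": 5000,
    "issue": "our recent update removed the Gantt chart feature they depended on",
    "fix_eta": "2 weeks",
}

prompt = (
    "You are a customer success manager at a B2B SaaS company "
    "that sells project management software.\n"
    f"Customer plan: {customer['plan']} (${customer['mrr']:,}/month).\n"
    f"Complaint: {customer['issue']}.\n"
    f"The feature is coming back in: {customer['fix_eta']}.\n"
    "Write a response that: acknowledges their frustration, explains "
    "the timeline for restoration, offers a concrete interim solution, "
    "and reinforces the value of our platform."
)
print(prompt)
```

Keeping the facts in data and the instructions in a fixed template also makes it obvious when context is missing: an absent field fails loudly instead of producing a plausible but wrong reply.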
Diagnosis: If the output misses important nuances or makes wrong assumptions, you did not provide enough context.
Failure Mode 4: The Format Mismatch
The model produces the right content in the wrong format — a wall of text when you wanted a table, a list when you wanted prose, JSON when you wanted YAML.
❌ No format guidance:
"Compare React, Vue, and Svelte"
✅ Explicit format:
"Compare React, Vue, and Svelte in a markdown table with
these columns: Feature, React, Vue, Svelte. Include rows
for: Learning Curve, Bundle Size, TypeScript Support,
Job Market, Community Size, Performance. Rate each as
Excellent/Good/Average."
Diagnosis: If you have to reformat the output manually, specify the format in the prompt.
Failure Mode 5: The Scope Explosion
Your prompt asks for too much, causing the model to produce shallow coverage of everything instead of deep coverage of what matters.
❌ Scope explosion:
"Write a complete guide to starting a business"
✅ Focused scope:
"Write a step-by-step guide for registering an LLC in
Delaware as a non-US founder. Cover: the 4 required
documents, estimated costs, processing timeline, and
the one mistake that causes 60% of rejections. Assume
the reader is a solo SaaS founder with no legal
background."
Diagnosis: If the output covers everything superficially, narrow the scope and go deeper.
The Diagnostic Framework
When you get a bad output, run through this checklist:
| Check | Question | Fix |
|---|---|---|
| Specificity | Did I tell the model exactly what I want? | Add constraints, examples, and requirements |
| Consistency | Do my requirements conflict? | Remove or reconcile contradictions |
| Context | Does the model have the background it needs? | Add relevant information and assumptions |
| Format | Did I specify the output structure? | Define format, length, and organization |
| Scope | Am I asking for too much in one prompt? | Narrow focus, break into multiple prompts |
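The checklist above can even be run as a crude pre-flight check before you send a prompt. A sketch: the heuristics below are deliberately simple (contradiction and context checks still need a human read), and the thresholds are arbitrary assumptions:

```python
# Sketch: a rough pre-flight linter for the diagnostic checklist.
# Covers the checks that are easy to automate (specificity, format,
# scope); consistency and context still require human judgment.
def diagnose(prompt: str) -> list[str]:
    """Return a list of warnings for likely prompt failure modes."""
    warnings = []
    if len(prompt.split()) < 15:  # arbitrary threshold
        warnings.append("Specificity: prompt is very short; add constraints.")
    if not any(w in prompt.lower() for w in ("format", "table", "list", "json", "words")):
        warnings.append("Format: no output structure specified.")
    if prompt.lower().count(" and ") > 5:  # arbitrary threshold
        warnings.append("Scope: many conjoined asks; consider splitting.")
    return warnings

print(diagnose("Write a blog post about productivity"))
```

Running it on the vague prompt from Failure Mode 1 flags both a specificity and a format problem, which matches the manual diagnosis.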
Practice this diagnostic on every unsatisfying AI output for one week, and your prompting skills will improve dramatically.
Key Takeaways
- Every bad output has a diagnosable cause — do not just re-roll and hope
- Specificity is the single most impactful improvement for most prompts
- Contradictions are subtle — read your prompt as if you are the AI
- Context is not optional — the model only knows what you tell it
- Narrower scope produces deeper, more valuable output