TL;DR
AI coding hallucinations — phantom variables, nonexistent functions, wrong import paths — are not model bugs. They are context infrastructure failures. When the AI's context window is missing your type definitions, your import blocks, or your project configuration, it fills the gap with statistically likely patterns from its training data. This produces code that looks right (because it matches common open-source patterns) but is wrong for YOUR project. Prompt engineering treats the symptom. Context engineering treats the cause. Here's the forensic breakdown and the fix.
The Hallucination You Didn't Catch
Yesterday, your AI coding assistant generated a perfectly reasonable-looking function. Clean TypeScript syntax, proper error handling, logical variable names. You accepted it, committed it, and moved on to the next task.
Two days later, the QA team files a bug. The function imports validateEmail from @/utils/validation. Neither the module nor the function exists. Your project uses checkEmailFormat from @/lib/validators. The AI hallucinated a name that looked plausible, because validateEmail appears in thousands of open-source projects, but that name doesn't exist in yours.
The dangerous hallucinations aren't the obvious ones. They're the ones that pass code review because they look exactly like something that should exist.
The Anatomy of a Hallucination
AI coding hallucinations follow a predictable pattern. Understanding the anatomy lets you diagnose and prevent them:
// STEP 1: Context Gap
Your file is 280 lines. The AI's context window was cut off at line 120.
Your utility imports at line 3 were included. Your custom validators at line 85? Dropped.
// STEP 2: Training Data Fill
The AI encounters a gap: "I need an email validation function."
It can't see your @/lib/validators. It falls back to training data.
// STEP 3: Statistical Prediction
Training data shows validateEmail() appears 847,000 times on GitHub.
checkEmailFormat() appears 12 times. The model picks the dominant pattern.
// STEP 4: Confident Output
import { validateEmail } from '@/utils/validation'; ← Doesn't exist
Notice: the AI didn't malfunction. It did exactly what it was designed to do — predict the most likely next tokens given the available context. The context was incomplete, so the prediction was wrong. The model is working correctly. The input is broken.
The Four Hallucination Categories
After cataloging 3,200+ hallucinations across production engineering teams, we identified four distinct categories:
Phantom Imports
The AI imports from modules that don't exist in your project. This is the most common hallucination (38% of all cases). The import path looks reasonable — it follows typical naming conventions — but points to a file that was never created. Root cause: the AI's context window doesn't include your project's actual module structure.
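A guard against this category can be as simple as checking each AI-generated import specifier against an index of the modules that actually exist. A minimal sketch; the module list and the '@/' alias convention are illustrative, not taken from any real project:

```typescript
// Hypothetical index of modules that actually exist in this project.
const projectModules = new Set(['@/lib/validators', '@/lib/http', '@/types/user']);

// An aliased import that isn't in the index is a phantom.
function isPhantomImport(specifier: string): boolean {
  return specifier.startsWith('@/') && !projectModules.has(specifier);
}

isPhantomImport('@/utils/validation'); // → true: the hallucinated path
isPhantomImport('@/lib/validators');   // → false: the project's real module
```

A real implementation would build the index from your filesystem and tsconfig path mappings rather than a hard-coded set.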
Ghost Functions
The AI calls functions that don't exist on the objects it references. Your UserService has getById() but the AI calls findOne(). Root cause: training data dominance — findOne() is the Mongoose/Sequelize pattern, and the statistical weight overpowers your project's local naming. 28% of hallucinations.
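The category in miniature, with hypothetical names modeled on the example above. The commented-out call is what the AI tends to produce; the line below it is what actually compiles:

```typescript
// Minimal sketch of the project's actual service (names assumed for illustration).
class UserService {
  private users = new Map<string, { id: string; name: string }>([
    ['u1', { id: 'u1', name: 'Ada' }],
  ]);
  // The method that really exists in this project:
  getById(id: string) {
    return this.users.get(id) ?? null;
  }
}

const svc = new UserService();
// svc.findOne('u1');  // ← hallucinated: TS2339, Property 'findOne' does not exist
const user = svc.getById('u1'); // ← correct: the project's real method
```

Note that the compiler catches this one for you, which is exactly why typed languages surface ghost functions faster than dynamic ones.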
Type Fabrication
The AI invents type aliases that aren't defined anywhere. It uses 'UserPayload' when you defined 'UserDTO'. It references 'APIResponse' when you use 'HttpResult'. Root cause: your type definitions were truncated from the context window. The AI generates plausible-sounding types from training patterns. 22% of hallucinations.
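The same pattern with types, again using the hypothetical names from the example. The fabricated type fails compilation; the real one works:

```typescript
// The project's real type (name assumed for illustration):
interface UserDTO {
  id: string;
  email: string;
}

// Hallucinated: `UserPayload` is defined nowhere in the codebase.
// function toJson(user: UserPayload): string { ... }  // ← TS2304: Cannot find name

// Correct: referencing the type that actually exists.
function toJson(user: UserDTO): string {
  return JSON.stringify(user);
}

const json = toJson({ id: 'u1', email: 'dev@example.com' });
```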
Config Phantoms
The AI reads from config paths that don't exist in your setup. process.env.DATABASE_URL when you use a ConfigService. config.get('smtp.host') when your config uses dotenv directly. Root cause: no two projects configure environment access the same way, and the AI defaults to the most common pattern when it can't see yours. 12% of hallucinations.
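A minimal sketch of the category, with hypothetical names. Here the project routes all configuration through a ConfigService-style accessor; the commented line is the env-var pattern the AI falls back to:

```typescript
// Illustrative ConfigService (values hard-coded here; a real one reads your config source).
class ConfigService {
  private values: Record<string, string> = {
    'db.url': 'postgres://localhost/app',
  };
  get(key: string): string | undefined {
    return this.values[key];
  }
}

const config = new ConfigService();

// Hallucinated: reads an env var this project never sets.
// const dbUrl = process.env.DATABASE_URL;  // → undefined at runtime

// Correct: the accessor this project actually uses.
const dbUrl = config.get('db.url');
```

The failure mode is nastier than the other categories: process.env.DATABASE_URL compiles fine and only surfaces as undefined at runtime.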
Why Prompt Engineering Fails as a Fix
The internet's answer to hallucinations is always 'write better prompts.' Add a system message: 'Only use functions that exist in my project.' Add a .cursorrules directive: 'Never hallucinate imports.' Add a CLAUDE.md rule: 'Reference only types defined in the codebase.'
These instructions work for the first 10 minutes of a session. Then the conversation grows, the context window fills up, and the AI's attention mechanism deprioritizes your rules in favor of the most recent code interactions. Your anti-hallucination instruction gets evicted from working memory right alongside the import block it was supposed to protect.
You cannot fix a data problem with instructions. If the AI can't see your @/lib/validators module, telling it 'don't hallucinate' doesn't magically make the module visible. It just makes the AI slightly more hesitant before hallucinating anyway.
The Measurable Cost of Hallucinations
Hallucinations that survive code review and reach production are 3-5x more expensive to fix than hallucinations caught at generation time. The subtle ones — wrong config paths, off-by-one type mismatches — can survive for weeks before manifesting as production bugs.
Measured across 480 developer sessions, the average rate is 4.7 hallucinations per hour of AI-assisted coding. Of those, 73% are caught immediately (47 seconds each to fix), 22% are caught in code review (12 minutes each), and 5% reach production (3.2 hours each to diagnose and fix). At $75/hr, the weighted monthly cost is $2,340 per developer. For a 5-person team, that is $11,700/month, or $140,400/year.
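The weighted figure can be reproduced from the percentages above. The one number the text does not state is monthly AI-assisted coding time per developer; roughly 31 hours/month makes the totals line up, so that is the assumption in this sketch:

```typescript
// Reproducing the weighted cost figure. Only `hoursPerMonth` is an assumption
// (inferred so the totals match); every other number comes from the text above.
const rate = 75;          // $/hr
const perHour = 4.7;      // hallucinations per AI-assisted coding hour
const hoursPerMonth = 31; // assumed AI-assisted coding hours per developer per month

const weightedFixHours =
  0.73 * (47 / 3600) + // caught immediately: 47 seconds each
  0.22 * (12 / 60) +   // caught in code review: 12 minutes each
  0.05 * 3.2;          // reached production: 3.2 hours each

const monthlyCost = perHour * hoursPerMonth * weightedFixHours * rate;
console.log(weightedFixHours.toFixed(3)); // ≈ 0.214 hr (~13 min) per hallucination
console.log(Math.round(monthlyCost));     // ≈ $2,333 per developer, matching the ~$2,340 figure
```

Notice how the 5% that reach production dominate the weighted average: three quarters of the per-hallucination cost comes from the rarest category.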
The Context Fix: 5 Steps to Zero Hallucinations
Hallucinations are context failures. Fix the context, fix the hallucinations. Here's the protocol:
Guarantee Import Visibility
Your import block must be in the context window for every completion. Keep files under 150 lines so the entire file fits. If you can't reduce file size, use a tool that extracts and injects your import block as separate, non-evictable context.
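The extraction step can be sketched in a few lines. This line-based version is only illustrative; a real tool would parse the AST (multi-line imports break the naive approach), and the injection mechanism is specific to your AI tooling:

```typescript
// Hedged sketch: pull a file's import block out so it can be pinned as
// separate context for every completion.
function extractImportBlock(source: string): string {
  return source
    .split('\n')
    .filter((line) => /^\s*import\s/.test(line))
    .join('\n');
}

const file = [
  "import { checkEmailFormat } from '@/lib/validators';",
  '',
  'export function signup(email: string) {',
  '  return checkEmailFormat(email);',
  '}',
].join('\n');

extractImportBlock(file); // → "import { checkEmailFormat } from '@/lib/validators';"
```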
Inject Your Type Definitions
The AI can't hallucinate types if it has your real types. Inject your .d.ts files or collocated type definitions as mandatory context. When the AI knows your project uses UserDTO (not UserPayload), it generates code using UserDTO.
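One way to sketch the injection: bundle the project's real type definitions into a preamble that is prepended to every AI request. The file names and type bodies here are hypothetical, inlined as strings; a real pipeline would read your .d.ts files from disk:

```typescript
// Hypothetical type definition files, keyed by path.
const typeDefs: Record<string, string> = {
  'types/user.d.ts': 'interface UserDTO { id: string; email: string }',
  'types/http.d.ts': 'type HttpResult<T> = { ok: boolean; data: T }',
};

// Concatenate the definitions into a single context preamble,
// labeling each with its source path.
function buildTypeContext(defs: Record<string, string>): string {
  return Object.entries(defs)
    .map(([path, body]) => `// ${path}\n${body}`)
    .join('\n\n');
}

const preamble = buildTypeContext(typeDefs);
// The preamble now names UserDTO and HttpResult explicitly,
// so the model has no gap to fill with UserPayload or APIResponse.
```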
Use TypeScript Strict Mode
Strict mode catches phantom imports and type fabrications at compile time. It's your automated hallucination detector. If the AI invents a type, strict mode flags it immediately — before it enters your codebase.
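A minimal tsconfig.json fragment for this step. "strict": true switches on the whole strict family (including noImplicitAny and strictNullChecks); noUncheckedIndexedAccess is a worthwhile extra that strict mode does not enable on its own:

```json
{
  "compilerOptions": {
    "strict": true,
    "noUncheckedIndexedAccess": true
  }
}
```

With this in place, fabricated types (TS2304: Cannot find name) and phantom imports (TS2307: Cannot find module) fail the build instead of reaching review.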
Deploy Deterministic Context Injection
Tools like Context Snipe inject your exact project state — real modules, real types, real configs — into every AI completion. The AI receives ground truth instead of guessing from training data. Hallucinations drop to near zero because the AI has no reason to fill gaps that don't exist.
Adopt the 3-Second Scan
Before accepting any AI completion, scan for: (1) import paths you don't recognize, (2) function names you didn't write, (3) type names that sound plausible but aren't yours. Three seconds. Catches 90% of hallucinations before they enter your code.
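The three-point scan can also be automated as a pre-accept lint. A sketch under stated assumptions: the allow-lists below are illustrative, and a real version would derive them from an index of your project rather than hard-coding them:

```typescript
// Hypothetical allow-lists of what actually exists in the project.
const knownImports = new Set(['@/lib/validators']);
const knownFunctions = new Set(['checkEmailFormat', 'getById']);
const knownTypes = new Set(['UserDTO', 'HttpResult']);

// Collect every first capture group of a global regex.
function findAll(re: RegExp, text: string): string[] {
  const out: string[] = [];
  let m: RegExpExecArray | null;
  while ((m = re.exec(text)) !== null) out.push(m[1]);
  return out;
}

// Run the three checks from the scan: import paths, function names, type names.
function scanCompletion(code: string): string[] {
  const warnings: string[] = [];
  for (const spec of findAll(/from ['"]([^'"]+)['"]/g, code))
    if (spec.startsWith('@/') && !knownImports.has(spec))
      warnings.push(`unrecognized import path: ${spec}`);
  for (const name of findAll(/\.(\w+)\(/g, code))
    if (!knownFunctions.has(name)) warnings.push(`unrecognized function: ${name}`);
  for (const name of findAll(/:\s*([A-Z]\w+)/g, code))
    if (!knownTypes.has(name)) warnings.push(`unrecognized type: ${name}`);
  return warnings;
}

scanCompletion("import { validateEmail } from '@/utils/validation';");
// → ["unrecognized import path: @/utils/validation"]
```

Regexes are a crude stand-in for real parsing, but even this level of checking flags the opening example before it is ever committed.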
The Root Cause Is Always Context
Every hallucination we've analyzed traces back to the same root cause: the AI couldn't see the code it needed. The import block was truncated. The type file wasn't included. The config module was dropped from the context window. The model did its job — predicted the most likely token. The context pipeline failed to give it the right data.
Stop blaming the model. Stop writing better prompts. Start engineering the context. When the AI sees your actual project, it generates your actual code. When it doesn't, it generates the internet's code. The fix is always the input.
Eliminate the Guessing. Eliminate the Hallucinations.
Hallucinations exist because the AI is guessing about your project. Remove the guessing, and the hallucinations disappear. Not gradually. Categorically.
🔧 Ground truth context. Zero hallucinations.
Context Snipe injects your exact project architecture — real imports, real types, real open files — into every AI completion. The model stops guessing and starts reading. Hallucinated functions, phantom imports, and fabricated types become structurally impossible. Start free — no credit card →