The Prompt Framework I Use to Prevent AI Hallucinations

February 3, 2026 · Nathan Baker

AI hallucinations are not random glitches. They are not bugs in the model. They are the predictable result of asking AI to produce output without giving it enough information to work with. When AI does not know something, it does not say "I don't know." It fills the gap with a confident guess. And that guess, delivered with the same authority as a correct answer, is what we call a hallucination. The good news is that hallucinations are largely preventable. Here is the framework I use every day to keep AI grounded in reality.

1. Understanding Why AI Hallucinates

To prevent hallucinations, you first need to understand what causes them. AI language models are, at their core, pattern completion engines. They predict the most likely next token based on everything they have seen in training and everything you have given them in the current conversation. When you ask a question that has a clear, well-documented answer, the model retrieves and assembles that answer accurately. But when you ask something that requires specific knowledge about your codebase, your architecture, or your constraints -- knowledge the model does not have -- it does not stop. It keeps predicting tokens. It fills in the blanks with plausible-sounding but fabricated details.

This is why hallucinations cluster around certain kinds of questions. Ask AI about general JavaScript syntax and you will get a reliable answer. Ask it about the specific interface contract between your UserService and your AuthRepository, and it will invent one that sounds reasonable but might be completely wrong. The hallucination rate is directly proportional to how much the answer depends on context the model does not have.

The solution, then, is obvious: give the model the context it needs so it does not have to guess.

2. The Four-Part Prompt Framework

After hundreds of hours of working with AI on production codebases, I have distilled my approach into four components. Every prompt I write for anything beyond trivial tasks includes all four. Skipping any one of them opens the door to hallucinations.

Part 1: Define the Change Scope. Tell the AI exactly what you want changed and where. Not "improve the authentication flow" but "modify the validateToken function in src/auth/tokenService.ts to handle expired refresh tokens." Scope eliminates the largest category of hallucinations: AI inventing files, functions, or modules that do not exist. When you define the scope precisely, the model knows exactly where to focus and does not have to guess about the surrounding landscape.

Part 2: List Constraints and Contracts. Every function in a production codebase exists within a web of contracts. It accepts certain inputs, returns certain outputs, throws certain errors, and makes certain guarantees to its callers. If you do not tell the AI about these contracts, it will either break them accidentally or invent new ones. List the contracts explicitly: "This function is called by the AuthMiddleware and must return a Promise that resolves to a ValidToken or throws an AuthError with code TOKEN_EXPIRED." Now the AI knows exactly what it must preserve.
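Contracts like the one in that example can also be written down as types and pasted into the prompt, which makes them unambiguous. Here is a minimal sketch in TypeScript; ValidToken's fields and the AuthError shape are my own illustration built from the names in the example prompt, not from any real codebase:

```typescript
// Types encoding the contract from the example prompt.
// ValidToken's fields are assumed for illustration.
interface ValidToken {
  userId: string;
  expiresAt: number; // epoch milliseconds
}

// The error the caller (AuthMiddleware) is promised to receive.
class AuthError extends Error {
  constructor(public code: "TOKEN_EXPIRED" | "TOKEN_INVALID", message: string) {
    super(message);
    this.name = "AuthError";
  }
}

// The contract the AI must preserve: called by AuthMiddleware,
// resolves to a ValidToken or throws an AuthError.
type ValidateToken = (rawToken: string) => Promise<ValidToken>;
```

Pasting a type like this into the prompt is often stronger than describing the contract in prose, because there is exactly one way to read it.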

Part 3: Provide Relevant Code Snippets. Do not make the AI guess what your code looks like. Paste in the function you want modified. Paste in the interface it implements. Paste in the tests that verify its behavior. This is the single most powerful anti-hallucination technique: when AI can see the actual code, it produces modifications that are consistent with what exists rather than inventing something from scratch.

Part 4: Specify What NOT to Change. This is the step most developers skip, and it is often the most important. AI models are eager to help, and they will sometimes "improve" things you did not ask them to touch. By explicitly stating what should remain unchanged -- "Do not modify the function signature. Do not change the error handling in the catch block. Do not add new dependencies." -- you create guardrails that prevent scope creep and accidental breakage.
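If you find yourself writing these four parts often, it can help to capture them in a reusable template. A minimal sketch in TypeScript; the PromptSpec shape and buildPrompt helper are my own illustration, not part of any library:

```typescript
// Hypothetical helper that assembles the four-part prompt.
interface PromptSpec {
  scope: string;         // Part 1: what to change, and where
  contracts: string[];   // Part 2: inputs, outputs, callers, error types
  snippets: string[];    // Part 3: the actual code the AI must see
  doNotChange: string[]; // Part 4: explicit guardrails
}

function buildPrompt(spec: PromptSpec): string {
  const bullets = (items: string[]) => items.map(i => `- ${i}`).join("\n");
  return [
    `Task: ${spec.scope}`,
    `Contracts to preserve:\n${bullets(spec.contracts)}`,
    `Relevant code:\n${spec.snippets.join("\n\n")}`,
    `Do NOT change:\n${bullets(spec.doNotChange)}`,
  ].join("\n\n");
}
```

The value of a template like this is less the code than the checklist: an empty array in any field is a visible reminder that you are about to let the model guess.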

3. Bad Prompts vs. Good Prompts

Let me show you the difference this framework makes with a real example. Suppose you need to add retry logic to an API client.

The bad prompt: "Write a function that retries failed API calls." This prompt is a hallucination factory. The AI does not know your API client. It does not know your error types. It does not know your retry strategy requirements. It will produce something generic that probably does not fit your codebase, uses different patterns, and might introduce incompatible error handling.

The good prompt: "Modify the fetchUserData function in src/api/userClient.ts to add retry logic for transient failures. Here is the current function: [paste code]. It is called by the UserProfilePage component and must continue to return Promise<UserData> or throw an ApiError. Retry up to 3 times with exponential backoff (1s, 2s, 4s) for HTTP 429 and 5xx responses only. Do not retry on 4xx client errors. Do not change the function signature or the error types. Do not add new npm dependencies -- use setTimeout for delays."

The second prompt leaves almost nothing to the imagination. The AI knows the file, the function, the callers, the return type, the retry conditions, the backoff strategy, and the constraints. There is no room to hallucinate because every gap has been filled with real information.
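For concreteness, here is roughly what a faithful implementation of that prompt might look like. This is a sketch, not the real client: UserData, ApiError, and the function internals are hypothetical, and I have injected the underlying fetch call so the example runs standalone (the actual prompt forbids changing the signature):

```typescript
interface UserData { id: string; name: string }

// Hypothetical error type carrying the HTTP status.
class ApiError extends Error {
  constructor(public status: number, message: string) {
    super(message);
    this.name = "ApiError";
  }
}

// setTimeout-based delay, per the prompt's "no new npm dependencies" rule.
const delay = (ms: number) => new Promise<void>(resolve => setTimeout(resolve, ms));

async function fetchUserData(
  userId: string,
  doFetch: (id: string) => Promise<UserData>, // injected for illustration
  backoffMs: number[] = [1000, 2000, 4000]    // 1s, 2s, 4s, as specified
): Promise<UserData> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await doFetch(userId);
    } catch (err) {
      const status = err instanceof ApiError ? err.status : 0;
      // Retry only HTTP 429 and 5xx; 4xx client errors are rethrown at once.
      const transient = status === 429 || (status >= 500 && status < 600);
      if (!transient || attempt >= backoffMs.length) throw err;
      await delay(backoffMs[attempt]);
    }
  }
}
```

Notice that every branch in this sketch traces back to a sentence in the prompt: the status-code check, the backoff schedule, the retry cap, the preserved return and error types. That traceability is exactly what the framework buys you.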

"The most dangerous AI output is not obviously wrong code. It is subtly wrong code that looks correct, passes basic tests, and fails in production at 2 AM."
-- Lessons from production incidents
4. Why "Write a Function That..." Always Fails

The phrase "write a function that..." is the hallmark of a prompt that will produce hallucinated code. It tells the AI to create something from nothing, with no anchor to your existing codebase. The AI must guess the language (probably right), the framework (maybe right), the patterns (probably wrong), the error handling strategy (almost certainly wrong), and the integration points (definitely wrong).

Compare this to "modify this function to... while preserving the contract with..." The word "modify" anchors the AI to existing code. The phrase "while preserving the contract" tells it what must not change. The AI is now editing, not inventing. It is working within constraints, not in a vacuum. The result is code that fits your codebase because it was derived from your codebase.

This is the fundamental shift in how you should think about AI-assisted development. You are not asking AI to write code for you. You are asking AI to modify code within the constraints you define. The developer's job is defining those constraints accurately. The AI's job is executing within them. When you frame the interaction this way, hallucinations drop dramatically because the AI has real code to work with instead of empty space to fill with guesses.

5. Advanced Techniques: Verification Prompts

Even with the four-part framework, verification is essential. After AI produces code, I use a follow-up prompt to catch any remaining hallucinations: "Review the code you just produced. List every assumption you made that is not explicitly stated in the context I provided. For each assumption, explain what would break if that assumption is wrong."

This prompt forces the model to audit its own output. It surfaces hidden assumptions -- the kind that cause production bugs weeks later. Sometimes the model reveals that it assumed a certain import path, or that a specific error type exists, or that a configuration value is available. Each of these is a potential hallucination that you can verify before the code ever reaches a commit.

Another powerful technique is the negative test prompt: "What edge cases could cause this code to fail? What inputs would produce unexpected behavior?" This turns the AI's tendency to be comprehensive into an asset. It will often identify failure modes that you would have missed, giving you a checklist of things to test before shipping.

The prompt framework described here is the core of what we teach in Module 3 of the Vibe Coding for Production workshop. In the workshop, you will practice writing prompts against real production codebases, see before-and-after comparisons of hallucination-prone vs. hallucination-resistant prompts, and build the muscle memory for including the right context every time. If you have ever been burned by AI-generated code that looked right but was not, this module will change how you work with AI permanently.

Want to Learn This Hands-On?

Join the Vibe Coding for Production workshop and learn the complete M.A.P.P.E.R. framework with live exercises, real codebases, and take-home materials.

Book Your Spot →
Nathan Baker
Founder, AutoNateAI
Nathan is a software engineer and AI workflow specialist who teaches developers the engineering discipline needed for production-quality AI-assisted development.