AI Prompt Hackers

Context Poisoning - The AI Problem Nobody Warned You About

Attention Is Finite. Your Prompt Isn't.

Mar 04, 2026

The Performance Tax of Over-Prompting

You’ve learned the basics, you’ve read the guides, and you’ve started building what I can only describe as AI prompt towers. Giant, multi-paragraph instructions loaded with constraints, personas, output formats, tone guidelines, length requirements, things to avoid, things to include. You think you’re being thorough.

You’re actually making it worse.


The Longer the Prompt, the Worse the Result (Usually)

I ran an informal test a while back. Same task: write a cold email for a SaaS product targeting HR managers. I gave Claude three versions of the prompt.

Version one was 47 words. Just the context, the goal, the audience.

Version two was 340 words. Added tone requirements, three competitor examples to avoid sounding like, specific length constraints, a list of banned phrases, formatting rules, and a note about the buyer’s psychology.

Version three was 612 words. Everything in version two plus a persona, a content hierarchy, example sentence structures, and a reminder to “be conversational but professional.”

The 47-word prompt produced a usable email on the first try. The 612-word prompt produced something that followed every rule but read like it was written by a committee. Technically compliant. Completely lifeless.
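
If you want to run this kind of comparison yourself, here is a minimal sketch using the Anthropic Python SDK. The prompt strings and the model name below are placeholders, not the exact versions from my test; swap in your own short, medium, and long variants of the same task.

```python
# Sketch: send three prompt variants of increasing length to the same model
# and compare the outputs side by side. Prompts and model name are placeholders.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in your environment

prompts = {
    "short (~50 words)": (
        "Write a cold email for [product], a SaaS tool for HR managers. "
        "Goal: book a 15-minute demo. Audience: HR managers at mid-size companies."
    ),
    "medium (~300 words)": "...same task, plus tone rules, banned phrases, formatting rules...",
    "long (~600 words)": "...everything above, plus a persona, content hierarchy, example sentences...",
}

for label, prompt in prompts.items():
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # substitute whichever model you're testing
        max_tokens=500,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.content[0].text)
    print()
```

Reading the three outputs next to each other makes the pattern hard to miss: the longer the instruction stack, the more the email reads like a compliance exercise.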


Context Poisoning occurs when the “noise” of your instructions starts to drown out the “signal” of the task. The model becomes so busy navigating your guardrails that it forgets to be brilliant.


What’s Actually Happening Inside the Model

Here’s the non-technical version, and it’s good enough to be useful.
