The 8 Prompts That Separate AI Power Users From Everyone Else
What Most People Get Wrong About Prompting AI (And What Works Instead)
Most people use AI like a search engine. They type something vague, get something generic, and decide AI “isn’t that useful.”
Meanwhile, a smaller group is pulling outputs in 20 minutes flat that would have cost $500 and taken three days to get done any other way.
The difference is the prompt.
This article gives you 8 prompts that put you in that second group. Each one targets a specific failure mode: AI that hedges instead of commits, output that sounds nothing like you, conversations that drift and get worse the longer they run.
First prompt is up.
Prompt #1: Persona injection
What it does: Locks the AI into a specific expert role before it answers, so the output comes filtered through real expertise rather than generic “helpful assistant” mode.
When to use it: Any time the default output feels too broad or too shallow. Consulting advice, specialist writing, technical reviews.
The prompt:
You are a [SPECIFIC ROLE] with [X YEARS] of experience in [SPECIFIC INDUSTRY/NICHE].
You have worked with [TYPE OF CLIENT/COMPANY] and you specialise in [SPECIFIC PROBLEM AREA].
Before you answer anything, think from that expert's perspective. What would you notice
that a generalist would miss? What assumptions would you immediately push back on?
Now, with that hat on: [YOUR ACTUAL QUESTION OR TASK]
How to use it:
Fill in the role, years, industry and specialisation with as much specificity as you can
Don’t rush to the actual question - the setup is doing real work
Ask your question at the end, after the persona is set
Example input:
You are a direct-response copywriter with 15 years of experience in SaaS and B2B software.
You have worked with funded startups trying to reduce churn and improve trial-to-paid
conversions. You specialise in onboarding emails and in-app messaging.
Before you answer anything, think from that expert's perspective. What would you notice
that a generalist would miss? What assumptions would you immediately push back on?
Now, with that hat on: Review this onboarding email sequence and tell me what's killing
conversions. [PASTE EMAIL]
What you’ll get: A review that doesn’t just say “be clearer.” You get the kind of pointed, uncomfortable feedback a specialist charges real money for.
Advanced note: Stack two personas for useful tension. Something like “You are both a direct-response copywriter AND a UX researcher who thinks copy problems are usually symptoms of UX problems.” The internal conflict produces more interesting outputs than either persona alone.
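One more option if you work through an API rather than a chat window: put the persona in the system message so it holds for the whole conversation, not just one turn. Here is a minimal sketch using the OpenAI Python SDK (the model name is a placeholder; any chat-completions-style API takes the same shape):

# Minimal sketch: persona injection via the system message.
# Assumes the OpenAI Python SDK; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

persona = (
    "You are a direct-response copywriter with 15 years of experience in SaaS "
    "and B2B software. You specialise in onboarding emails and in-app messaging. "
    "Before you answer anything, think from that expert's perspective."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you have access to
    messages=[
        # The persona lives in the system message, separate from the question.
        {"role": "system", "content": persona},
        {"role": "user", "content": (
            "Review this onboarding email sequence and tell me what's "
            "killing conversions. [PASTE EMAIL]"
        )},
    ],
)
print(response.choices[0].message.content)

The design point is the separation: the system message persists across every follow-up question, so you set the expert role once instead of re-pasting it each turn.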
Prompt #2: Pre-mortem pressure test
What it does: Stress-tests your plan by imagining it already failed, then works backwards to explain why.
When to use it: Before you launch anything that’s hard to undo. A product, a campaign, a pitch.
The prompt:
I'm about to [DESCRIBE YOUR PLAN OR DECISION IN 2-3 SENTENCES].
Run a pre-mortem on this. Assume it's 12 months from now and this failed badly — not
mediocre, actually failed. What went wrong? Give me the top 5 specific failure modes,
ranked by likelihood. For each one: what would the early warning sign have been, and what
can I do now, before I start, to reduce that risk?
Don't be polite. I'd rather know the ugly version now.
How to use it:
Describe your plan in plain language - no need to sell it to the AI
Keep the 12-month frame; it’s far enough out that you get real risks, not surface-level cautions
After you get the failure modes, ask: “Which of these am I most likely to ignore because I’m too close to it?”
Example input:
I'm about to launch a paid newsletter on Substack focused on AI tools for solopreneurs.
I'm planning to charge $15/month and post twice a week. I have about 400 email subscribers
already and a small LinkedIn following.
Run a pre-mortem on this. Assume it's 12 months from now and this failed badly. What went
wrong? Give me the top 5 specific failure modes, ranked by likelihood...
What you’ll get: A frank, ranked list of what kills newsletters like yours, with early warning signs you can actually track. Often surfaces the thing you already knew but hadn’t said out loud.
Advanced note: After the pre-mortem, run a “pre-win” with the same structure but imagining it succeeded beyond expectations. Ask what would have had to be true. The contrast between the two outputs is genuinely useful.
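If you run pre-mortems often, the template is worth wrapping in a function so you only ever fill in the plan. A minimal Python sketch (the function name and defaults are illustrative, not part of the original prompt):

# Minimal sketch: build a reusable pre-mortem prompt from a plan description.
# Purely illustrative; the wording mirrors the template above.
def premortem_prompt(plan: str, horizon: str = "12 months", n_failures: int = 5) -> str:
    return (
        f"I'm about to {plan}\n\n"
        f"Run a pre-mortem on this. Assume it's {horizon} from now and this failed "
        f"badly, not mediocre, actually failed. What went wrong? Give me the top "
        f"{n_failures} specific failure modes, ranked by likelihood. For each one: "
        f"what would the early warning sign have been, and what can I do now, "
        f"before I start, to reduce that risk?\n\n"
        f"Don't be polite. I'd rather know the ugly version now."
    )

# Example: the Substack plan from above.
print(premortem_prompt(
    "launch a paid newsletter on Substack focused on AI tools for solopreneurs, "
    "charging $15/month and posting twice a week."
))

The horizon parameter stays at 12 months by default for the reason given above: far enough out that you get real risks, not surface-level cautions.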
You just got 2 prompts that change how you set up and pressure-test any AI task.
But most prompting problems happen mid-conversation - when AI goes vague, loses your context, starts hedging, or gives you a wall of text when you needed one sharp answer.
The next 6 prompts handle that:
How to force a specific answer when AI keeps dodging
How to recover when a long thread has gone sideways
How to get AI to rewrite your work without losing your voice
How to make AI reason out loud before it commits to an answer
Plus: a prompt repair template you can paste onto any underperforming prompt to fix it
Upgrade to get the complete system.
Prompt #3: Specificity demand
What it does: Breaks AI’s habit of hedging with “it depends” by forcing it to commit to one recommendation with actual numbers.
When to use it: When you ask for advice and get “there are several factors to consider” back.
