AI Prompt Hackers

Why your carefully structured prompts are starting to underperform

Aesthetic prompting techniques: how to get AI output with texture and voice

May 06, 2026 ∙ Paid

There’s a moment most people hit after a few months of serious AI use where the outputs start to feel the same. Not wrong. Not unhelpful. Just... flat. You ask for an analysis and get a competent analysis. You ask for a story and get a structurally correct story. You ask for a business plan and get something that could have come from any MBA textbook written between 2015 and now.

The model got the logic right. It got the facts right. And somehow the result is completely forgettable.

I’ve been thinking about why that happens, and I don’t think the answer is that the models are bad. I think the answer is that we’re prompting for the wrong thing.


We optimized for correctness and got boredom

The first wave of prompt engineering was about getting AI to stop making things up. Reduce hallucinations. Improve accuracy. Get the model to cite sources, follow instructions, stay on topic. All reasonable goals, and we mostly got there.

The second wave was about consistency. Chain-of-thought prompting, structured outputs, system prompts that constrain behavior. Again, reasonable. Again, mostly achieved.

What nobody talked about much during either wave was texture. The quality of the prose. Whether the output felt like it came from a specific sensibility or from a statistical average of every sensibility. Whether reading it produced any feeling at all.

I think that’s the gap we’re in now. Most frontier models can perform logic. They can follow instructions with a precision that would have seemed remarkable two years ago. The question that’s getting more interesting is what happens after correctness. What happens when you stop prompting for accuracy and start prompting for character.


The problem with “smart”

Smart AI outputs have a tell. They’re thorough. They cover the counterarguments. They use words like “while it’s true that” and “it’s worth noting” and they end with a balanced summary that lands nowhere in particular. They are the written equivalent of a consultant who never gives you a straight answer because they don’t want to be wrong.

The reason this happens isn’t a model failure. It’s a prompting failure. When you prompt for correctness, you get correctness. The model is doing exactly what you asked. You just didn’t ask for the thing that makes writing worth reading.

A few months ago I started experimenting with what I’d call intentionally flawed prompting. Not asking the model to make mistakes exactly, but asking it to have a point of view strong enough that it might be wrong. To write with a voice that has friction in it. To make aesthetic choices that a different writer would have made differently.

The results were different enough that I kept doing it.


Latent space, briefly explained

There’s a technical concept worth knowing here, even if you don’t care about the technical details.

When a language model processes text, it’s working in what researchers call latent space: a high-dimensional mathematical space where similar concepts cluster together. “Dog” and “cat” are close. “Dog” and “democracy” are far apart. Every word, sentence and piece of writing you’ve ever encountered has a position in this space.
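You can make that clustering concrete in a few lines of code. Here's a minimal sketch, assuming the sentence-transformers library and its all-MiniLM-L6-v2 model as an illustrative choice; the exact numbers vary by model, but the ordering holds:

```python
# A rough way to see latent-space proximity: embed a few words and compare them.
# Assumes: pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

def similarity(a: str, b: str) -> float:
    """Cosine similarity between the embeddings of two strings."""
    va, vb = model.encode([a, b])
    return float(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb)))

print(similarity("dog", "cat"))        # relatively high: nearby in the space
print(similarity("dog", "democracy"))  # much lower: far apart
```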

When you write a prompt, you’re not just giving the model instructions. You’re pointing it toward a region of latent space. A bland, neutral prompt points toward the center of that space, toward the most average, statistically typical version of whatever you asked for. A strange, specific, textured prompt points toward the edges, toward combinations that are less common and therefore less expected.

Most prompt engineering advice is about getting to the right neighborhood. What I’m arguing is that inside each neighborhood, there’s a distribution of outputs ranging from generic to genuinely interesting, and the difference between them isn’t accuracy. It’s the aesthetic coordinates of your prompt.


What vibe curation actually means in practice

I want to be careful here not to make this sound more mystical than it is. Prompting for texture is a learnable skill with concrete techniques. The strangeness is in the output, not the method.

In practice it looks like this: instead of asking AI to “write an email announcing a price increase,” you ask it to write the email “in the voice of someone who respects their customers enough to be direct rather than diplomatic, and who finds corporate softening language slightly embarrassing.” Instead of asking for “a persuasive argument,” you ask for “an argument that sounds like it was written by someone who has lost this argument before and learned something from it.”

You’re specifying a sensibility, a mind behind the piece. You’re telling the model what kind of person would write this and what choices that person would make and refuse to make.
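If you work through an API rather than a chat window, the difference is just a system message. A minimal sketch of the same request sent two ways, assuming the OpenAI Python SDK and an illustrative model name; the pattern is identical with any chat-completion API:

```python
# Same task, two prompts: one asks for correctness, one specifies a sensibility.
# Assumes: pip install openai, and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

TASK = "Write a short email announcing a price increase to existing customers."

# The bland version: points at the statistical center of the space.
generic = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whatever model you normally use
    messages=[{"role": "user", "content": TASK}],
)

# The sensibility spec: same task, but the prompt names the mind behind the email.
SENSIBILITY = (
    "Write in the voice of someone who respects their customers enough to be "
    "direct rather than diplomatic, and who finds corporate softening language "
    "slightly embarrassing. Make choices a human editor would recognize as choices."
)
textured = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": SENSIBILITY},
        {"role": "user", "content": TASK},
    ],
)

print(generic.choices[0].message.content)
print("---")
print(textured.choices[0].message.content)
```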

This is why I think the frame of “prompt engineering” is starting to feel dated. Engineering implies optimization toward a correct answer. What’s happening now is closer to curation. You’re not solving for the right output. You’re selecting from a distribution of possible outputs by shaping the aesthetic conditions that generate them.

A useful analogy: a film director and a cinematographer working together aren’t engineering a scene. They’re making thousands of small aesthetic decisions, each one narrowing the possibility space until what’s left is something with a specific feeling. The script might say “two people argue in a kitchen.” Everything else is curation.


Why this matters more for solopreneurs than anyone else

Enterprise AI use is mostly about throughput. Volume of emails processed, documents summarized, support tickets resolved. At that scale, correctness and consistency are genuinely what matter. You don’t want your customer service AI having a distinctive voice. You want it to handle volume without errors.

Solopreneurs selling digital products are in a completely different situation. Your outputs are your brand. The way your emails sound, the texture of your sales copy, the specific sensibility that runs through your content: that’s what people are actually buying when they buy from you. They can get generic information anywhere. They buy from you because of how you think.

Which means that if you’re using AI to produce generic-sounding content, you’re actively undermining the thing that differentiates you. And most AI-generated content for solopreneurs is generic-sounding, because most people are prompting for correctness rather than character.

The vibe-check economy is the term I’ve been using internally for the shift that follows. Audiences are getting better at detecting AI-generated content not because it’s factually wrong but because it has no texture. No friction. No evidence of a specific person having made specific choices. The response to that, if you’re smart about it, is to use AI differently rather than use it less.


That’s the argument. Here’s how to actually act on it.

The next section contains 9 prompts built specifically around vibe curation: techniques for prompting with sensibility instead of just instructions, including how to specify aesthetic coordinates, how to introduce productive friction, how to get AI to make choices a human editor would recognize as choices, and how to test whether your output has texture or just correctness.

Plus: a reference sheet of 15 aesthetic modifiers you can drop into any prompt to shift its output away from the statistical center.

Upgrade to get the complete toolkit.


9 prompts for vibe curation

A quick note on how these work: these aren’t independent prompts you run separately. They’re techniques, each with a specific use case. Some you’ll use at the start of a session, some mid-draft, some as a check on output you’re not sure about. Run them in the order that fits your workflow.


Prompt 1: The sensibility spec

What it does: Specifies the aesthetic sensibility behind a piece of writing, separate from its topic or format.

When to use it: Before any writing task where voice matters. This replaces or supplements your usual system prompt.

This post is for paid subscribers
