Shelly Palmer

Getting AI to Write Like You

Within the first 10 minutes of any AI strategy session, someone asks, “How do I get AI to write like me?” While the techniques for making this happen have been well-understood for a couple of years, building a continuously improving system capable of writing in your voice (or your brand’s voice) required above-average technical skills and a significant time commitment. Now that everyone can vibe-code, everyone can build an agentic writing assistant grounded in their unique voice.

There is an almost infinite number of ways to accomplish this. Here’s the theory, some background, and a workflow you can follow.

The Style Sheet Phase

It all starts with a style guide. Write down (or ask an AI model to write down) what you sound like. Specify your vocabulary. List your forbidden words. Every platform from HubSpot to Jasper to ChatGPT’s custom instructions will tell you this is sufficient. Paste in your brand voice guidelines, provide a few writing samples, and the AI will sound like you.

It does not. Personalized AI produces higher-quality output than generic prompts (a Carnegie Mellon study found AI with targeted instruction improved writing quality by a full letter grade), and that is true as far as it goes. “Higher quality” and “sounds like me” are different things. What you get back with a style guide is a competent approximation that could be any informed writer on a good day. Your mother would not recognize it. Neither would your editor.

A style guide tells the AI what to avoid in the abstract. It cannot show the AI what your writing actually sounds like in practice. The gap between “authoritative, plainspoken, and direct” (which describes fifty thousand business writers) and the specific way you construct a sentence, sequence your arguments, or deliver an insight is enormous.

The JSON Context Profile

I replaced the style guide with a structured JSON file. Version 1.0 contained voice characteristics, tone descriptions, forbidden constructions, approved vocabulary, and format specifications for each type of post I write. The current iteration runs over 500 lines and specifies everything from thesis placement rules to paragraph discipline targets.
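As a toy illustration, a fragment of such a profile might look like this (the field names and values here are plausible examples of the kinds of rules described, not the actual schema):

```python
import json

# Illustrative fragment of a context profile. Field names and values
# are examples of the rule types described above, not a real schema.
context_profile = {
    "voice": {
        "tone": ["authoritative", "plainspoken", "direct"],
        "person": "first",
    },
    "forbidden_constructions": [
        "Here's why this matters",
        "The question becomes",
    ],
    "punctuation": {"em_dash": False},
    "formats": {
        "daily_post": {
            "thesis_placement": "first_paragraph",
            "max_paragraph_sentences": 4,
        }
    },
}

# Serialize to JSON so a drafting pipeline can load it at runtime.
profile_json = json.dumps(context_profile, indent=2)
```

The point of the structure is machine readability: each rule lives at a known path, so the drafting step can inject exactly the rules relevant to the post format being written.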

The context profile was a real improvement. The AI stopped using em dashes. It stopped writing “Here’s why this matters” and “The question becomes.” It opened with declarative facts instead of throat-clearing. The format compliance got tight.

It still did not sound like me. The voice was correct in the way a competent cover band is correct. All the notes were right. The feel was wrong.

The Critics Layer

The next iteration added a formal evaluation framework. Nine critics, each scoring a different dimension: hook strength, thesis clarity, evidence quality, business implications, structure, voice consistency, insight density, originality, and format compliance. The system runs up to five draft-and-revision cycles until the weighted average exceeds a threshold or it hits the cycle limit.
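The control flow of such a loop can be sketched in a few lines (critic names, weights, and the threshold below are illustrative assumptions, not the actual configuration):

```python
# Sketch of a draft-and-revise loop driven by weighted critic scores.
# Critic names, weights, threshold, and cycle cap are assumptions.
CRITICS = {                      # name: weight
    "hook_strength": 1.0,
    "thesis_clarity": 1.0,
    "voice_consistency": 2.0,    # voice weighted more heavily
    "format_compliance": 1.0,
}
THRESHOLD = 8.0
MAX_CYCLES = 5

def weighted_average(scores: dict) -> float:
    total_weight = sum(CRITICS.values())
    return sum(scores[name] * w for name, w in CRITICS.items()) / total_weight

def refine(draft: str, score_fn, revise_fn) -> str:
    """Revise until the weighted critic score clears the bar or the cycle cap hits."""
    for _ in range(MAX_CYCLES):
        scores = score_fn(draft)            # each critic returns a 0-10 score
        if weighted_average(scores) >= THRESHOLD:
            break
        draft = revise_fn(draft, scores)    # rewrite targeting the weak dimensions
    return draft
```

In practice `score_fn` and `revise_fn` would each be LLM calls; the scaffolding around them is just a loop with a stopping condition.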

The critics caught problems the context profile alone could not. A draft that was technically compliant with every rule could still score a 6 on voice consistency because it used complex syntax carrying generic content (AI’s default) instead of simple syntax carrying sharp content (my pattern). The critics forced rewrites that pushed output closer to my actual writing.

The Definitive Writing Profile

The critics were finding patterns the context profile was not teaching. I analyzed dozens of published posts to extract what I actually do versus what I think I do. The result was a definitive writing profile that codified the specific mechanics: sentence construction, paragraph development, punctuation habits, hedging rules, and the conditions under which personal interjections earn their space.

The key discovery was the syntax-simplicity principle. My distinctive voice comes from grade-school simple sentence structures carrying expert-level content. Subject-verb-object. One idea per sentence. The contrast between simple syntax and sharp content is the voice. AI does the opposite by default: complex syntax carrying generic content.

The Exemplar Database

Three layers of rules made the output correct. The AI knew every prohibition, every auto-fail trigger, every scoring rubric. It still could not write like me, and it told me why.

The JSON files only described what the AI could not do. They were a catalog of prohibitions, requirements, and criteria. What they did not provide was a concrete demonstration of what I actually do. Rules describe boundaries. Examples demonstrate territory. This is the concept behind “few-shot” prompting, which is a well-understood but non-scalable approach to getting AI to sound like the examples.

I took this a step further. As you know, I write a daily blog and publish it on shellypalmer.com/blog. So I vibe-coded a system that ingested the last 16,000 published posts, extracted passages, embedded them as vectors, and built a hybrid search system combining keyword matching with semantic similarity.
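A stripped-down version of hybrid retrieval looks like this (with stdlib stand-ins for a real embedding model and vector index, and an illustrative 50/50 blend):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def keyword_score(query: str, doc: str) -> float:
    # Fraction of query terms that appear in the document.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def hybrid_search(query: str, passages: list, k: int = 8, alpha: float = 0.5):
    """Blend keyword matching with semantic similarity; return the top-k passages."""
    qv = embed(query)
    scored = [
        (alpha * keyword_score(query, p) + (1 - alpha) * cosine(qv, embed(p)), p)
        for p in passages
    ]
    return [p for _, p in sorted(scored, key=lambda x: x[0], reverse=True)[:k]]
```

A production system would swap `embed` for a real embedding API and `keyword_score` for something like BM25, but the blending logic is the same idea.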

Before drafting a post, the system retrieves eight published paragraphs I wrote about related topics, plus examples of how I open and close posts in that format. The draft is grounded in real examples of my writing, retrieved by topic relevance.

The difference was immediate. The AI stopped producing competent approximations and started producing drafts that required minimal editing. The retrieved passages gave it a concrete voice target instead of an abstract description.

The Auto-Diff Feedback Loop

Every component described so far is static. The context profile, the critics, the writing profile, and the exemplar passages improve only when they are updated (usually by a human). But my writing evolves continuously. Without a correction mechanism, the gap between the system’s model of my voice and my actual voice would widen over time.

The auto-diff feedback loop closes that gap. When I publish a post on shellypalmer.com, the system detects the published version, matches it to the original draft using date, format, and content similarity, and extracts every sentence-level difference. Each correction is classified (tightening, vocabulary swap, forbidden pattern, grammar fix, structural change) and embedded for semantic retrieval.
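The diff-extraction step can be sketched with the standard library (the sentence splitter and classification heuristics below are deliberately crude stand-ins for the richer taxonomy described above):

```python
import difflib

def sentences(text: str) -> list:
    # Naive sentence splitter; a real pipeline would use something sturdier.
    return [s.strip() for s in text.replace("?", ".").replace("!", ".").split(".")
            if s.strip()]

def classify(before: str, after: str) -> str:
    # Crude heuristic labels standing in for the classification step.
    if len(after) < 0.7 * len(before):
        return "tightening"
    if len(before.split()) == len(after.split()):
        return "vocabulary swap"
    return "structural change"

def extract_corrections(draft: str, published: str) -> list:
    """Pair up replaced sentences between the AI draft and the published version."""
    d, p = sentences(draft), sentences(published)
    corrections = []
    matcher = difflib.SequenceMatcher(a=d, b=p)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "replace" and (i2 - i1) == (j2 - j1):
            for before, after in zip(d[i1:i2], p[j1:j2]):
                corrections.append({"before": before, "after": after,
                                    "label": classify(before, after)})
    return corrections
```

Each extracted pair would then get a lesson summary and a vector embedding before being stored, so it can be retrieved by topic later.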

The corrections accumulate. On the next blog post, the retriever queries the corrections database for edits relevant to the new topic and injects them as “corrections to apply” in the drafting prompt. The AI sees specific before-and-after examples from real editing sessions and avoids the “before” patterns.
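Injecting retrieved corrections into the drafting prompt can be as simple as formatting them as before-and-after pairs (the prompt wording here is illustrative):

```python
def corrections_block(corrections: list) -> str:
    """Format retrieved past edits as a 'corrections to apply' prompt section."""
    lines = ["Corrections to apply (from past editing sessions):"]
    for c in corrections:
        lines.append(f'- Avoid: "{c["before"]}"')
        lines.append(f'  Prefer: "{c["after"]}"  ({c["label"]})')
    return "\n".join(lines)

prompt_section = corrections_block([
    {"before": "Here's why this matters",
     "after": "The implication is direct",
     "label": "forbidden pattern"},
])
```

The resulting block is prepended to the drafting prompt, so the model sees concrete evidence of past edits rather than abstract rules.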

Every published post teaches the system something. Every edit becomes a training signal for future drafts. The more I write, the better it gets. No human has to update JSON files or retrain anything. The loop runs daily.

Why You Need This for Your Brand

A prompt is a one-shot instruction. Continuous improvement requires a workflow with a robust feedback mechanism. Most organizations deploy AI writing at the prompt level (paste the style guide, provide context, generate output, edit, publish), and never tell the AI what changed. The AI makes the same mistakes on the next piece because it has no way to continuously learn.

What You Want To Build

The architecture has five layers: a context profile (what to do), critics (how to score it), a definitive writing profile (what it should feel like), an exemplar database (what it actually looks like), and an auto-diff feedback loop (what keeps getting better). You need all five. The first four get you to 80 percent. The feedback loop closes the remaining distance and keeps closing it.

Here’s a sample workflow to help you visualize the process:

The Five-Layer AI Writing System

From static rules to continuous improvement


1. Context Profile (what to do). A structured document containing your voice characteristics, tone rules, forbidden constructions, vocabulary preferences, and format specifications. Tells the AI what your writing should look like in the abstract.

2. Critics Framework (how to score it). Multiple specialized evaluators, each scoring a different dimension of the draft: voice consistency, structure, evidence quality, insight density, format compliance. Weighted scoring with minimum thresholds forces iterative rewrites until quality passes.

3. Definitive Writing Profile (what it should feel like). Extracted from analysis of your actual published work. Codifies the specific mechanics the context profile misses: sentence construction patterns, paragraph development habits, the precise syntax signature that makes your writing recognizably yours.

4. Exemplar Database (what it actually looks like). A searchable collection of your published writing, embedded as vectors for semantic retrieval. Before each draft, the system retrieves topic-relevant passages, format-matched openings and closings, and relevant past corrections. Gives the AI concrete voice targets instead of abstract descriptions.

5. Auto-Diff Feedback Loop (what keeps getting better). Compares every AI draft to its published (human-edited) version. Extracts sentence-level corrections, classifies each change, and embeds them for retrieval. Every edit you make becomes a training signal for future drafts. The system improves without manual updates.

Voice accuracy improves with each layer:

1. Rules: the context profile gets you started.
2. Score: the critics catch quality gaps.
3. Feel: the writing profile adds feel.
4. Show: the exemplars ground the voice.
5. Learn: the feedback loop closes the rest.

Write (AI generates a draft). The system loads your context profile, retrieves exemplars and past corrections, generates a draft, runs critics, and produces a final version. The draft is stored for future comparison.

Edit (you edit and publish). You review the AI draft, make corrections (tightening, vocabulary, structure, tone), and publish. Every edit encodes your preferences in a way no style guide can capture.

Auto (detect the published version). The system matches the published piece to the original AI draft using date, format, and content similarity. Runs automatically on a daily schedule.

Auto (extract sentence-level diffs). The system compares the draft to the published version sentence by sentence and classifies each change: tightening, vocabulary swap, forbidden pattern removal, grammar fix, structural change.

Store (embed and store corrections). Each correction gets a classification, a lesson summary, and a vector embedding. Stored for semantic retrieval on future drafts.

Learn (next draft uses past corrections). Before the next draft, the system retrieves corrections relevant to the new topic and injects them as specific before-and-after examples. The AI sees what you changed last time and avoids repeating the same mistakes.

The System Improves Every Time You Write

Every published piece teaches the system something new. Corrections accumulate. The gap between AI output and your authentic voice narrows with each cycle. No manual rule updates required.

Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it. This work was created with the assistance of various generative AI models.