At some point in the last two years, the internet developed a new reading experience: you start a piece of content and immediately sense, before you can name why, that no one actually wrote it.
It's the AI tell. Not detectable by any single signal, but unmistakable when the signals accumulate.
The uncomfortable version of this for anyone using AI in their content workflow: your audience has developed the same sense. And they're not staying to finish the article.
Here are 8 signals that output is AI-generic, and the specific fix for each.
Signal 1: The overpromising opener
AI loves to open with declarations about the importance of the topic. "In today's fast-paced world..." "As businesses navigate an increasingly complex landscape..." "Content marketing has never been more important than it is right now."
These openers make no claim. They say the topic matters without saying why, to whom, or according to what evidence. They're throat-clearing.
The fix: start with a specific thing that happened, a number, a question, or a direct statement of what the piece delivers. "We ran 800 ad variants last quarter. Here's the one thing that determined which ones worked." That's an opener. The other kind is a stall.
Signal 2: Passive constructions where agency should exist
AI defaults to passive voice when action should be attributed to someone. "Decisions were made." "Results were achieved." "A new approach was implemented."
Passive voice hides who did what. Human writing, especially founder writing, tends to name the actor because the actor is the point. "We made the wrong call on pricing in month two" lands differently than "a decision was made regarding pricing."
The fix: go through the draft and find every "was [verbed]" construction. Replace it with a subject who did the action.
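That pass can be partly mechanized. A minimal Python sketch, with the caveat that the regex is a rough heuristic and not a grammar parser: it catches "was/were + participle" patterns for review, but will miss some irregular participles and flag some false positives (like "was tired" used as an adjective). Every name here is illustrative, not part of any real tool.

```python
import re

# Heuristic: auxiliary verb followed by a likely past participle.
# Covers regular -ed forms plus common irregular endings (-en, -wn, -de).
PASSIVE = re.compile(
    r"\b(?:was|were|been|being|is|are|be)\s+(\w+(?:ed|en|wn|de))\b",
    re.IGNORECASE,
)

def flag_passives(text: str) -> list[str]:
    """Return each sentence containing a likely passive construction."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if PASSIVE.search(s)]

draft = ("A decision was made regarding pricing. "
         "We shipped the fix on Friday. "
         "Results were achieved across all accounts.")

for hit in flag_passives(draft):
    print(hit)  # review each by hand; the fix is to name the actor
```

The output is a review list, not an automatic rewrite. Naming the actor is the part only you can do.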
Signal 3: No specific numbers
"Many clients saw improvement." "Significant results were achieved." "We grew substantially."
Vague magnitude is a strong AI signal. The model has no access to your real data, so where a number should be, it defaults to an adjective. Human writers who lived through an outcome know the number.
The fix: replace every unmeasured claim with a real figure or remove the claim entirely. "3 of our 5 clients in Q1 saw ROAS improve by 20%+" is weaker than "all of them" but far stronger than "many clients."
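A quick way to surface unmeasured claims is to flag hedge words in sentences that contain no digits. This sketch assumes a starter word list; the list is illustrative and should grow to match your own tics.

```python
import re

# Words that usually stand in for a number the writer doesn't have.
# Illustrative starter list, not exhaustive.
VAGUE = re.compile(
    r"\b(many|several|significant(?:ly)?|substantial(?:ly)?|numerous|"
    r"a lot of|countless|considerable)\b",
    re.IGNORECASE,
)

def unmeasured_claims(text: str) -> list[str]:
    """Sentences with a vague-magnitude word and no digits to back it up."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if VAGUE.search(s) and not re.search(r"\d", s)]

draft = ("Many clients saw improvement. "
         "3 of our 5 clients in Q1 saw ROAS improve by 20%+.")

print(unmeasured_claims(draft))
```

The second sentence passes because it carries its own numbers; the first gets flagged for a real figure or deletion.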
Signal 4: No named people or things
AI-generated content tends to operate at the level of category rather than instance. "Brands are using automation to scale." "Founders are building in public." "Tools like this one help teams move faster."
Human writing names things. This specific brand. This specific tool. The conversation I had with this person on a Tuesday. Named specificity is almost impossible for AI to fake because it requires access to real experience.
The fix: for every category noun in the draft, ask whether you can replace it with a specific instance. Often you can. It makes the piece twice as credible and significantly harder to replicate.
Signal 5: Uniform paragraph rhythm
Read AI-generated articles and you'll notice: every section has the same shape. Topic sentence, two to three support sentences, brief conclusion. Every paragraph, the same. No variation in density. No single-sentence paragraphs for emphasis. No long subordinate-clause-heavy sentences followed by a short break.
Human writing has rhythm variation because thoughts don't all have the same weight.
The fix: read the draft aloud. Where the rhythm feels like a metronome, break it. Add a one-sentence paragraph after a complex section. Let a thought breathe longer when it earns it. Cut a paragraph in half if it's just restating what came before.
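If you want a number to confirm what your ear hears, sentence-length spread per paragraph is a crude proxy: a standard deviation near zero across many paragraphs is the metronome. This is a rough diagnostic sketch, not a substitute for reading aloud.

```python
import re
import statistics

def rhythm_report(text: str) -> list[tuple[int, float]]:
    """For each paragraph, return (sentence count, stdev of sentence
    word-lengths). Low stdev everywhere suggests metronome rhythm."""
    report = []
    for para in text.split("\n\n"):
        sentences = [s for s in re.split(r"(?<=[.!?])\s+", para.strip()) if s]
        lengths = [len(s.split()) for s in sentences]
        spread = statistics.pstdev(lengths) if len(lengths) > 1 else 0.0
        report.append((len(sentences), spread))
    return report

uniform = "One two three four five. Six seven eight nine ten. A b c d e."
varied = "Short. This sentence runs considerably longer than the one before it did."

print(rhythm_report(uniform + "\n\n" + varied))
```

The uniform paragraph scores a spread of zero; the varied one doesn't. Where the number flatlines, that's where to break the rhythm.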
Signal 6: Lessons without the story that generated them
"It's important to test before launching." "Communication is key in client relationships." "Start with your audience, not your product."
These are conclusions without evidence. AI generates lessons because they're statistically common in content about a topic. But a lesson with no origin story has no authority — the reader has no reason to trust it over the thousand other articles saying the same thing.
The fix: for every prescriptive claim, add the specific situation that earned it. "We launched a pipeline for a client without testing the error branch — found out it was broken when the client's Shopify sync failed mid-campaign. Now we test error states before go-live." That lesson has a cost attached to it. That's the version worth reading.
Signal 7: Random bolding that emphasizes nothing
AI often bolds phrases mid-paragraph seemingly at random — not key terms, not structurally important claims, just words that seemed important to the model at the time. The bolding doesn't guide the reader's eye. It just fragments the text.
The fix: remove all mid-paragraph bolding that isn't a defined term or a headline. If a sentence is important enough to bold, make it a heading. Otherwise, trust the sentence to carry its own weight.
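The mechanical half of this fix can be scripted. A sketch assuming standard Markdown bold markers: it strips all mid-paragraph bolding and leaves heading lines alone, so deciding which bolded phrases are genuinely defined terms remains a manual pass afterward.

```python
import re

# Matches **bold** or __bold__ spans; \1 backreference keeps the pair matched.
BOLD = re.compile(r"(\*\*|__)(.+?)\1")

def unbold_body(markdown: str) -> str:
    """Strip bold markers from body lines; keep heading lines untouched."""
    out = []
    for line in markdown.splitlines():
        if line.lstrip().startswith("#"):  # headings keep their emphasis
            out.append(line)
        else:
            out.append(BOLD.sub(r"\2", line))
    return "\n".join(out)

print(unbold_body("## Heading\nThis **randomly** bolded __phrase__ is fixed."))
```

Anything that survives your manual review as a true key term can be re-bolded deliberately, which is the opposite of the model's scattershot.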
Signal 8: The content ends where it should begin
AI-generated content typically ends with a summary of what was just said, followed by a vague encouragement. "As we've seen, this topic is important. By applying these principles, you can achieve your goals. The future is bright for those who embrace this shift."
This closing pattern signals that the piece has no actual next step to offer. It's closing a loop that wasn't really opened.
The fix: end with something actionable or something unresolved. A specific thing to try. A question that doesn't have a clean answer. A prediction the reader can check in six months. The piece should feel like a conversation that continues, not a presentation that concluded.
The one-question test
After applying all of the above, read the piece and ask: could this have been written by someone who didn't have my specific experience, my specific data, and my specific point of view?
If yes — it's not yours yet.
The goal isn't to produce content that sounds human. It's to produce content that could only have come from you. Those are different targets, and the gap between them is exactly what makes the output worth reading.
AI can draft. It can structure. It can vary. But it cannot remember the specific Tuesday when the client called and the pipeline failed and you figured out why. That memory is the content. The AI's job is to help you write it down faster.
