Published 2026-04-04

How to spot AI writing in the wild

This page is for reading with your eyes, not running a tool. If you are reviewing a piece of text and something feels off, these are the patterns worth looking for — and the false leads worth ignoring. When you are ready to confirm what you noticed, the diagnostic can show you whether the patterns hold up under measurement.

The difference between spotting and checking

Spotting is what you do while reading: you notice something, you get a feeling, you flag a passage. Checking is the structured workflow you follow after that — collecting enough text, running a diagnostic, comparing the result against context. This page covers the first part: what to actually look for.

Strong signs of AI writing

The best tells are repeated structures, not vibes. A single odd phrase proves nothing. Multiple patterns converging across the same passage is what makes a suspicion worth investigating.

Stock phrase templates

AI models lean on phrases that sound authoritative but carry no information. They let the model keep generating tokens without committing to a specific claim.

What it looks like: "In today's rapidly evolving digital landscape, it is important to note that effective communication plays a pivotal role in ensuring successful outcomes."

Every phrase in that sentence could be swapped for another stock phrase without changing the meaning — because there is no meaning. The sentence is all frame and no picture.
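If you want to see how mechanical this pattern is, here is a minimal sketch of a stock-phrase scan. The phrase list is illustrative, drawn from the example above, not a real detector's vocabulary, and the matching is deliberately naive:

```python
# Illustrative (not exhaustive) list of stock phrases taken from the
# example sentence above. A real tool would use a much larger list.
STOCK_PHRASES = [
    "in today's rapidly evolving",
    "it is important to note",
    "plays a pivotal role",
    "ensuring successful outcomes",
]

def stock_phrase_hits(text):
    """Return the stock phrases found in text, matched case-insensitively."""
    lowered = text.lower()
    return [phrase for phrase in STOCK_PHRASES if phrase in lowered]

sample = ("In today's rapidly evolving digital landscape, it is important "
          "to note that effective communication plays a pivotal role in "
          "ensuring successful outcomes.")

print(stock_phrase_hits(sample))  # all four phrases match
```

Run against the example sentence, every phrase in the list fires; run against a sentence with an actual claim in it, none do. That asymmetry is the point: the templates are interchangeable filler.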

Metronomic rhythm

Human writers naturally vary sentence length. A short jab, then a long elaboration, then something mid-range. AI models tend to converge on a comfortable average and stay there, paragraph after paragraph.

What it looks like: "AI detection tools analyze text patterns to identify machine-generated content. These tools examine sentence structure and word frequency distributions. The results provide useful signals about the likelihood of AI involvement. Reviewers should interpret these signals alongside contextual information."

Four sentences, all between 8 and 11 words, all landing with the same weight. Read it out loud — the rhythm is a metronome. Now compare: "Detection tools look for patterns. Whether the output means anything depends on what you know about the text, the writer, and the context it was produced in." Two sentences. 5 words and 22 words. That is what variation feels like.

Throat-clearing intros

If the first paragraph describes what the text will do instead of doing it, that is a tell. AI drafts love to announce: "In this article, we will explore..." A human writer with something to say usually just says it.

Abstract claims with no detail

AI writing tends to make safe, general statements without specific names, numbers, dates, or examples. The claims sound reasonable but collapse under the question "like what?"

What it looks like: "Many experts agree that this approach has significant benefits for organizations across various industries."

Which experts? What benefits? Which industries? The sentence passes the grammar check and fails the substance check.

Pivot crutches

"Not just X, but Y." "Not only does it do A, but it also does B." Once is fine. Three times in two paragraphs is a pattern — and a common one in synthetic marketing copy and AI-generated explainers.

Over-signposting

"First... Second... Third... Furthermore... Finally..." in a rigid ladder, every few sentences. Some structure markers are useful. A full scaffolding sequence imposed on content that does not need it looks generated — because it usually is.

Weak clues people overuse

These are things people often point to as "proof" of AI writing. They are not. They create false positives — especially against careful human writers.

  • Perfect grammar. Careful writers produce clean prose. So do professional editors. Grammar is not a tell.
  • Em dashes. Yes, AI uses them. So did Emily Dickinson. And so does anyone who writes for a living.
  • Formal tone. Institutional prose, academic writing, and legal text are formal by design. Formality is a genre convention, not a signal of origin.
  • Polished vocabulary. Using "demonstrate" instead of "show" or "utilize" instead of "use" can be a vocabulary flag in certain densities, but a single instance proves nothing. Some humans just talk like that.
  • "It sounds too good." This is a feeling, not evidence. Many false positives start here.

If you have been relying on any of these, the False Positive Hall of Fame is a useful corrective.

What a pattern cluster looks like

A single flag is noise. A cluster is signal. Here is the difference:

Noise: one stock phrase ("it is important to note") in an otherwise varied, specific, detailed 500-word passage. Probably just a writing habit.

Signal: a stock phrase in the intro, metronomic rhythm across three paragraphs, two pivot crutches, a conclusion that restates the introduction, and no named examples anywhere. That is a cluster — multiple pattern families converging. That is worth running through the diagnostic.
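The noise-versus-signal distinction above can be sketched as a tiny scoring rule. The family names and the threshold of three are illustrative assumptions for this sketch, not values from any real diagnostic:

```python
def cluster_strength(flags):
    """Classify a set of flagged findings by how many distinct
    pattern families they span.

    `flags` is a list of (family, excerpt) pairs. One family alone is
    noise; three or more converging families are a cluster worth
    checking. The cutoff of three is a rule of thumb, not calibrated.
    """
    families = {family for family, _ in flags}
    if len(families) <= 1:
        return "noise"
    if len(families) < 3:
        return "weak"
    return "cluster"

noise = [("stock_phrase", "it is important to note")]
signal = [
    ("stock_phrase", "in today's rapidly evolving"),
    ("metronomic_rhythm", "three flat paragraphs"),
    ("pivot_crutch", "not just X, but Y"),
    ("no_specifics", "no named examples anywhere"),
]

print(cluster_strength(noise))   # noise
print(cluster_strength(signal))  # cluster
```

The design choice matters more than the numbers: counting distinct families, rather than raw flags, is what keeps one repeated writing habit from masquerading as a cluster.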

How to verify a suspicion

  1. Collect enough text to see whether a pattern repeats. One paragraph is not enough.
  2. Mark the specific passages that triggered your suspicion. Can you quote them? Can you name the pattern? If your evidence is "it just feels AI," you do not have evidence yet.
  3. Run the WROITER Diagnostic. Compare its flags against what you noticed. If the tool agrees, you have converging evidence. If it does not, consider that your intuition may be off.
  4. Check context: genre, revision history, whether the writer is working in a second language, whether the text was heavily edited. All of these can explain the patterns without AI.

If the evidence stays thin after all four steps, the right answer is uncertainty. That is not a failure. It is intellectual honesty.

Interpretation

A higher diagnostic score means stronger overlap with AI-typical surface patterns. It does not prove authorship, identify a writer, or settle a dispute. For score ranges and the full interpretation boundary, see How It Works. For the broader reliability case and known failure modes, see Limitations.

Where to go from here

If you spotted something and want the structured review process: How to Check AI Writing. If you want the mechanism-level explanation of what detectors actually measure: How AI Detectors Work. If you want to run the check now: