Transparency log

Method changelog

When the diagnostic method changes in a way that affects how you should interpret the output, we log it here. Major versions shift scoring behavior. Minor versions add patterns or guidance. Patches refine wording.

Standing interpretation rule

Across all versions: the score measures overlap with AI-typical surface patterns. It does not prove authorship. That boundary does not change between releases. See How It Works for score ranges and the full interpretation framework.

v1.1.0 — 2026-04-04

What changed: Published the method hub with dedicated pages for scoring logic, pattern definitions with examples, and limitations. Added score-range calibration guidelines (0–15 / 16–40 / 41–70 / 71–100). Standardized the public framing: the diagnostic is a quality signal, not an authorship detector.
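For readers who want to see how the published ranges partition the 0–100 scale, the sketch below is a minimal illustration of that bucketing. The function name and the range labels are assumptions for illustration only; they are not part of the tool's API or the method pages.

```ts
// Hypothetical sketch of the v1.1.0 score-range calibration.
// The boundaries come from the changelog entry above; the labels and
// the function name are illustrative, not the published contract.
type ScoreRange = "0-15" | "16-40" | "41-70" | "71-100";

function scoreRange(score: number): ScoreRange {
  if (score <= 15) return "0-15";
  if (score <= 40) return "16-40";
  if (score <= 70) return "41-70";
  return "71-100";
}
```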

Why it matters: Reviewers now have a documented path from the score to the explanation behind it. Teams building internal review processes can point to the method pages instead of guessing at what the tool does.

v1.0.0 — 2026-04-03

What changed: Initial public release. Client-side diagnostic with five pattern families (vocabulary, phrases, intros/conclusions, rhythm, signposting), rhythm metrics (word count, sentence count, average length, standard deviation, burstiness), and flag-level output (patternId, label, severity, count, detectorNote). Score range 0–100. Minimum sample size: 50 words.
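For teams mapping the output contract to code, the shape below is a minimal sketch assembled only from the field names listed above. The interface names, the result wrapper, and the comments on individual fields are assumptions; the severity values and exact metric names in the real output may differ.

```ts
// Sketch of the v1.0.0 flag-level output and rhythm metrics, based solely
// on the field names in this changelog entry. Names and structure beyond
// those fields are illustrative assumptions.
interface PatternFlag {
  patternId: string;    // which pattern fired (one of the five families)
  label: string;        // human-readable name of the pattern
  severity: string;     // severity level; concrete values are not documented here
  count: number;        // how many times the pattern was matched
  detectorNote: string; // short explanation attached by the detector
}

interface RhythmMetrics {
  wordCount: number;
  sentenceCount: number;
  averageSentenceLength: number;
  sentenceLengthStdDev: number;
  burstiness: number;   // variation in sentence length across the sample
}

interface DiagnosticResult {
  score: number;        // 0-100, per the published range
  flags: PatternFlag[];
  rhythm: RhythmMetrics;
}
```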

Why it matters: Established the public output contract. Evidence pages on detector reliability and false positives shipped alongside the tool so the diagnostic launched with its limitations visible from day one.

When to check this page

Come back here when comparing historical runs against current ones, updating an internal review policy, or checking whether the interpretation guidance changed since you last looked. For the full method, go to the method hub.