The standard agency case study has four parts: a logo, a warm-toned hero photo, a quote from someone nobody can fact-check, and a single metric in a very large font. "240% increase in qualified leads." "3.8x pipeline growth." "Conversion up 52%."
Nothing in that template is technically false. Most of it is technically useless. Here is a field guide to writing case studies buyers actually believe — and, just as importantly, to reading the ones they don't.
The six questions every metric should answer
Before you put a number on a page, make sure you can answer these six questions about it. If you can't, the number is collateral, not proof.
- Increase from what baseline? A 240% lift from 10 leads/month to 34 is a different story from 1,000 to 3,400. Put the baseline next to the lift, always (the sketch after this list works the arithmetic).
- Over what period? The same lift looks enormous compressed into 12 months and unremarkable stretched over 24: a 3.4x over two years is only about 1.84x a year. Be explicit.
- Vs. what counterfactual? Did the number go up because of the work, or because the category grew, a competitor left the market, a funded campaign launched in parallel? Name the thing you isolated.
- Attributed how? MTA (multi-touch attribution), first-touch, last-touch, self-reported on a sales call, eyeballed from a dashboard? Different methods yield wildly different numbers. Say which one you used.
- At what cost? 3x pipeline growth driven by doubling spend is a scale story, not an efficiency story; spend-adjusted, it is a 1.5x. A good case study names the inputs as carefully as the outputs.
- Did it stick? Lifts that reverted in Q+1 are not wins. They are seasonality. Good case studies report the number and the decay.
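If your team tracks claims in a structured way, the checklist can be made mechanical. Here is a minimal sketch in Python; MetricClaim and every field name are hypothetical illustrations, not anyone's standard:

```python
from dataclasses import dataclass

@dataclass
class MetricClaim:
    """One headline number plus the context the six questions demand."""
    name: str                  # e.g. "qualified leads / month"
    baseline: float            # where the number started
    current: float             # where it ended up
    period_months: int         # over what window the change happened
    counterfactual: str        # what was isolated (holdout, category trend, ...)
    attribution: str           # multi-touch, first-touch, self-reported, ...
    spend_multiplier: float    # change in spend over the window (1.0 = flat)
    held_next_quarter: bool    # did the lift survive Q+1?

    def lift_pct(self) -> float:
        """Percent increase over baseline: 10 -> 34 is a 240% lift."""
        return (self.current - self.baseline) / self.baseline * 100

    def efficiency_adjusted_growth(self) -> float:
        """Growth per unit of spend: 3x pipeline on 2x spend is only 1.5x."""
        return (self.current / self.baseline) / self.spend_multiplier

claim = MetricClaim(
    name="qualified leads / month",
    baseline=10, current=34, period_months=12,
    counterfactual="geo holdout, three control regions",
    attribution="first-touch",
    spend_multiplier=1.0,
    held_next_quarter=True,
)
print(f"{claim.lift_pct():.0f}% lift over {claim.period_months} months")
print(f"{claim.efficiency_adjusted_growth():.1f}x spend-adjusted growth")
```

The point is not the code. The point is that a claim which cannot populate all eight fields is not ready for a case study.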
A number that survives all six questions is almost always smaller, less dramatic, and more believable than the headline most agencies run. That's not a bug. That's the point.
The qualitative side
Numbers are the skeleton. The muscle is the narrative — and the narrative has to include the things that went wrong, or the whole thing reads like propaganda. A credible case study names the thing the client was scared of, the call that was hard, the two-week period where the approach didn't seem to be working. Not because self-flagellation sells, but because the reader is trying to model the experience of hiring you, not just the outcome.
A good structural template:
- The situation. Where the business actually was, in language a CFO would recognise.
- What we tried first. Including the thing that didn't work, and what we learned from it.
- The pivot. The insight or the research or the hard conversation that changed the approach.
- What we shipped. Specific enough that a competitor reading the case study could not cleanly copy it.
- The numbers, caveated. With all six questions above answered in-line.
- What's next. Because any real engagement is still in flight.
Reading case studies with an editor's eye
Turn the lens outward. The next time you're evaluating an agency, read their case studies with a pen in your hand:
- Count the percentages that don't name a baseline. (Usually: most of them.) The toy script after this list shows one way to automate the count.
- Highlight every testimonial and ask: would this quote work for literally any other vendor? If yes, it isn't a testimonial. It's filler.
- Look for the thing that went wrong. If every case study is a clean arc from struggle to triumph, you're reading marketing, not reporting.
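The first of those checks can even be automated, crudely. A toy sketch, assuming plain-text copy; the regexes are deliberately naive and will miss baselines phrased any other way:

```python
import re

# Lift claims: "240%", "3.8x". Baselines: "from 10 to 34", "from $1.2M to $4.5M".
CLAIM = re.compile(r"\b\d+(?:\.\d+)?\s*(?:%|x\b)", re.IGNORECASE)
BASELINE = re.compile(r"\bfrom\s+\$?\d[\w$.,]*\s+to\s+\$?\d[\w$.,]*", re.IGNORECASE)

def unanchored_claims(text: str) -> list[str]:
    """Return lift claims whose own sentence never names a baseline."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        if not BASELINE.search(sentence):
            flagged.extend(m.group() for m in CLAIM.finditer(sentence))
    return flagged

copy = "Qualified leads up 240%. Pipeline grew from $1.2M to $4.5M, a 3.8x jump."
print(unanchored_claims(copy))  # ['240%'] -- the 3.8x names its baseline
```

Crude as it is, it mechanises the first bullet; the other two still need a human with a pen.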
The agencies that pass this test are rare. They are also, almost without exception, the ones who will do the work you actually wanted doing.
Why we bother
The business case for honest case studies is, ironically, a marketing argument: readers can tell. Not always consciously, but the trust signal compounds across a dozen small tells, and buyers who've been around the block can feel the difference between a case study that respects them and one that doesn't. The agencies that respect the reader get introduced to other buyers. The ones that don't get ignored.
We've yet to meet a prospect who said, "your case studies had fewer numbers than the competition, so we went with them." We've met several who said, "yours were the only ones that felt real."