Case study: Inside a regulated company's 130‑experiment year
An unexpected experiment with voice AI that changed how we capture customer stories — and a case study from inside a regulated giant.
Inside a Regulated Enterprise: How One Team Built an Innovation Engine
I can’t share the company name, but I can share the playbook.
Inside a multi-billion-dollar, highly regulated organisation, a small innovation team faced the classic challenge: how do you move fast when every decision needs approval, when compliance is non-negotiable, and when “just try it” isn’t an option?
They didn’t ask for permission to move faster. Instead, they partnered with Legal and Risk from day one. They embedded Pretotyping as the method for turning opinion into evidence. And they built safe-to-fail guardrails that could be reused across multiple experiments.
The transformation was remarkable.
What used to be “opinion shipping” — building features based on what stakeholders thought customers wanted — became an evidence-led Innovation Engine. Those retail touchpoint screens that everyone assumed were just “noise”? They became measurable conversion channels. Time from idea to first experiment? Down to about 8 days.
In roughly 12 months, they ran over 130 live experiments.
But the real win wasn’t the volume. It was the shift in decision quality. Leaders started trusting the process because experiments consistently answered “Should we build this?” before anyone wasted time on “How do we build this?”
What made it work
Three things stood out:
They partnered with Legal and Risk early — not as gatekeepers, but as experiment design partners. Getting permission frameworks upfront meant velocity later.
They ran concurrent experiments — patterns across multiple tests improved their selection quality. What worked in one channel informed tests in three others.
They prioritised speed over perfection — imperfect experiments that shipped in 8 days beat perfect plans that took months.
The takeaway: Velocity without validated ideas is just fast failure. Pretotyping gives you both.
What happens when AI runs your customer interviews?
The AI Interviewer Experiment
Last month, I tried something I wasn’t sure would work: I let ChatGPT’s voice mode conduct a customer case study interview.
I’ve done hundreds of these interviews over the years, and I know the patterns: you ask a question, the interviewee starts answering, you’re already thinking about your next question, and suddenly you’ve missed the most important thing they just said. Then you’re scrambling to redirect the conversation back to what mattered.
But when I handed the facilitation to ChatGPT, something shifted.
The AI asked one question at a time. It held context perfectly. When someone gave a surface-level answer, I could prompt the AI mid-conversation: “Ask for a concrete example” or “That’s interesting, dig deeper there.” The AI adapted instantly, without the awkward conversational resets that happen when humans try to course-correct.
The result? A coherent, quote-ready narrative in one sitting. Less “wait, where were we?” and more focused storytelling from the person being interviewed.
Why this matters
Better interviews mean faster case studies, cleaner proof points for executives, and more useful enablement content. The quality of your customer stories directly impacts how quickly you can share proof of value across your organisation.
How to try it yourself
Brief the AI on your goal and any guardrails.
Ask it to keep single-question pacing: one question, wait for the answer, then the next.
Stay in the conversation to prompt follow-ups: “ask for a concrete example” or “summarise what they’ve said and move on”.
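If you drive a chat model through an API rather than voice mode, the steps above reduce to a simple loop. This is a hedged sketch only: `run_interview`, `ask_model`, and `get_answer` are placeholder names I’ve made up, standing in for your own chat client and transcript capture, not any real product API.

```python
def run_interview(ask_model, get_answer, max_turns=5, steer=None):
    """Single-question pacing: the model asks ONE question, we wait for the
    interviewee's full answer, optionally inject a facilitator note such as
    "ask for a concrete example", then let the model ask the next question.

    ask_model(history)   -> next question (str); placeholder for your chat client
    get_answer(question) -> interviewee's reply (str)
    steer(turn)          -> optional facilitator note (str or None)
    """
    history = []
    for turn in range(max_turns):
        question = ask_model(history)
        history.append(("interviewer", question))
        history.append(("interviewee", get_answer(question)))
        if steer:
            note = steer(turn)
            if note:  # mid-conversation course-correction, no awkward reset
                history.append(("facilitator", note))
    return history

# Demo with stand-in functions; a real setup would call a chat API
# and capture spoken answers instead.
transcript = run_interview(
    ask_model=lambda history: f"Question {len(history) // 2 + 1}: can you walk me through that?",
    get_answer=lambda q: "(interviewee answers here)",
    max_turns=2,
)
```

The point of the structure is the pacing: one question per turn, with the facilitator’s steering notes appended to the running history rather than restarting the conversation.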
Spotted in the wild:
Three things from this month that you can apply immediately:
1. Time to first experiment beats time to perfect experiment
Stop debating the perfect test design. Ship something in 8 days. Learn from real data. Adjust. Repeat.
2. Partner with Legal and Risk before you need them
The teams that move fastest in regulated environments aren’t the ones who avoid compliance — they’re the ones who build reusable approval frameworks early.
3. Concurrent experiments compound learning
One test gives you one data point. Three tests running in parallel reveal patterns. Those patterns improve your ability to choose what to build next.
Building experiment velocity in your company? I work with leadership teams to embed rapid experimentation as a core capability, so they can invest in the right products with confidence. Reply if you want to explore what this looks like for your business.
One thing
Running a retrospective this month? Try this:
Copy your sticky notes into Claude or ChatGPT and prompt: “Group and label these into themes. Output a concise summary with next-step recommendations.”
I’ve been using this after team sessions, and it’s a 10x lift in clarity and speed. What used to take 20 minutes of manual clustering now happens in seconds, and the AI often spots patterns I missed.
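If your sticky notes are already digital (exported from a whiteboard tool or a spreadsheet), even the copy-paste step can be scripted. A minimal sketch, where the function name and exact prompt wording are mine, not part of any Claude or ChatGPT API:

```python
def theming_prompt(notes):
    """Bundle raw retrospective notes into one clustering prompt
    you can paste into Claude or ChatGPT."""
    # Drop blanks, trim whitespace, and render each note as a bullet.
    bullets = "\n".join(f"- {note.strip()}" for note in notes if note.strip())
    return (
        "Group and label these retrospective notes into themes. "
        "Output a concise summary with next-step recommendations.\n\n"
        + bullets
    )

notes = [
    "Deploys took too long this sprint",
    "Standups kept running over",
    "Pairing on the API work went really well",
]
print(theming_prompt(notes))
```

One prompt, every note included, nothing retyped; the model does the clustering from there.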
Tool of the Month
I’m late to the Notion party. Now that everything lives there, Notion AI is the tool I didn’t know I needed. It has context across my whole workspace — case studies, client notes, past experiments, drafts — so it can spot patterns, pull the right snippet, or resurface that one insight without me copy‑pasting. Editing and iterating on a document in real time is easily 10 times faster than shuttling text between tools. It’s smarter because it knows what I know. This is what Gemini should be doing in Docs, Sheets, and Gmail.
Get Started with Your Innovation Engine
I run complimentary executive briefings and team sessions online, so you can join wherever your team is based. It’s a quick 30-minute talk that introduces Pretotyping, shares real case studies, and shows how to identify which AI and innovation ideas need validation before you invest.
👉 Just reply to this email if you’d like to set one up.