
Your Best Evidence Is Buried in Your Notes

You're already gathering evidence — in interviews, emails, and Slack threads. The problem is it never makes it into a system. So we built a way to just dump the conversation and let the tool figure out what matters.

March 4, 2026 · 7 min read

Last week I had a great user interview. Twenty-five minutes. The person was candid, specific, full of surprises. I walked away thinking that was gold. I had at least three assumptions confirmed, one blown apart, and two new ones I hadn't even articulated yet.

Then I opened my laptop, looked at Assumption Mapper, and thought: okay, which assumption was that evidence for again? What exactly did she say? Was it "we tried three tools before giving up" or "we tried a few tools"? The exact phrasing mattered, and it was already blurring.

I had the notes — a messy Google Doc with half-sentences and timestamps. But the gap between that raw mess and properly logged evidence linked to the right assumptions felt enormous. So I did what every builder does: I told myself I'd log it later.

I didn't log it later.

The Evidence Decay Problem

This kept happening. Not because I was lazy, but because the overhead of logging was too heavy. Every piece of evidence required me to:

  1. Re-read my notes and figure out what was actually a meaningful signal
  2. Decide which assumption it related to
  3. Write a clean summary
  4. Classify the direction — does this support my belief, contradict it, or is it ambiguous?
  5. Copy the exact quote that mattered

Multiply that by every conversation, email thread, and research session, and the result is predictable: the richest insights never make it into the system. They sit in Google Docs, Notion pages, Slack DMs, email threads. They exist, technically. But they're not connected to anything. They don't inform decisions. They rot.

I started calling this evidence decay. The half-life of an unlogged insight is about 48 hours. After that, the detail is gone. You remember the conclusion but not the evidence. You remember the feeling but not the quote.

And a conclusion without evidence is just another opinion.

What I Actually Wanted

I wanted to be able to open a command palette, paste the entire messy transcript, and have the tool figure out:

  • "This quote right here — that's evidence for your assumption about enterprise buyers self-serving."
  • "This other thing they said — that's a brand-new assumption you haven't captured yet."
  • "And this part contradicts what you believe about onboarding."

Not perfectly. Not as a replacement for my judgement. But as a first pass that does 80% of the sorting, so I just have to review and approve rather than start from scratch.

That's what we built.

How It Works

The Document tab in Assumption Mapper now accepts any unstructured text — interview transcripts, meeting notes, pasted email threads, research summaries, whatever you've got. You paste it in, hit "Process with AI", and it comes back with two things:

New assumptions it found. Beliefs about the business that are implied in the conversation but you haven't formally tracked yet. Each one comes with the exact quote from the transcript, a suggested category (desirability, viability, feasibility, adaptability), an importance rating, and tags it thinks are relevant — preferring tags you've already used in the project so your taxonomy stays clean.

Evidence for existing assumptions. Statements in the transcript that support, contradict, or are ambiguous about assumptions you're already tracking. Each linked to the specific assumption it relates to, with the source quote preserved.
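To make those two payloads concrete, here's a rough sketch of the shapes I have in mind when I describe them. It's TypeScript purely for illustration; the field names, the 1-to-5 importance scale, and the `newAssumptionRef` pointer are my shorthand, not the actual schema.

```typescript
// Illustrative sketch of the extraction result, not the real schema.

type Category = "desirability" | "viability" | "feasibility" | "adaptability";
type Direction = "supports" | "contradicts" | "ambiguous";

interface ProposedAssumption {
  statement: string;   // the belief, phrased as a testable claim
  quote: string;       // exact wording lifted from the transcript
  category: Category;  // suggested bucket
  importance: number;  // suggested rating, e.g. 1-5
  tags: string[];      // prefers tags already used in the project
}

interface ProposedEvidence {
  summary: string;            // clean one-line description of the signal
  quote: string;              // source quote, preserved verbatim
  direction: Direction;       // supports, contradicts, or ambiguous
  assumptionId?: string;      // existing assumption it relates to, if any
  newAssumptionRef?: number;  // index into proposedAssumptions when it
                              // belongs to an assumption found in the
                              // same transcript
}

interface ExtractionResult {
  proposedAssumptions: ProposedAssumption[];
  proposedEvidence: ProposedEvidence[];
}
```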

Here's the part that matters: it also cross-links. If the AI surfaces a new assumption and finds evidence for that same assumption in the same transcript, the evidence gets linked to the newly created assumption automatically. No manual wiring. No orphaned evidence sitting around waiting for you to connect it.
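Given shapes like the ones above, that cross-linking pass is mechanically simple. A minimal sketch, assuming two hypothetical persistence calls (`createAssumption`, `createEvidence`) that return ids:

```typescript
// Hypothetical persistence calls, here only so the sketch type-checks.
declare function createAssumption(a: ProposedAssumption): Promise<string>;
declare function createEvidence(
  e: ProposedEvidence & { assumptionId: string }
): Promise<string>;

// Wire each piece of evidence to either an existing assumption or one
// that was created from the same transcript moments earlier.
async function importExtraction(result: ExtractionResult): Promise<void> {
  // 1. Create the new assumptions first and remember their ids by index.
  const newIds: string[] = [];
  for (const a of result.proposedAssumptions) {
    newIds.push(await createAssumption(a));
  }

  // 2. Create evidence, resolving each item to its target assumption.
  for (const e of result.proposedEvidence) {
    const targetId =
      e.assumptionId ??
      (e.newAssumptionRef !== undefined ? newIds[e.newAssumptionRef] : undefined);
    if (targetId) {
      await createEvidence({ ...e, assumptionId: targetId });
    }
  }
}
```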

You review everything before it's imported. Every item shows up with a checkbox, default selected. You scan, deselect anything that's noise, and import. The assumptions land in your inbox with tags attached. The evidence lands linked to the right assumptions with source quotes preserved. What used to take twenty minutes of disciplined logging now takes about ninety seconds of review.
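If you wanted to model that review step, it's little more than a default-on flag per extracted item; only what survives the review gets handed to the import above. A hypothetical sketch:

```typescript
// Every extracted item shows up checked by default; the user unticks noise.
interface Reviewable<T> {
  item: T;
  selected: boolean;
}

function toReviewList<T>(items: T[]): Reviewable<T>[] {
  return items.map((item) => ({ item, selected: true }));
}

function keepSelected<T>(list: Reviewable<T>[]): T[] {
  return list.filter((r) => r.selected).map((r) => r.item);
}

// Usage: build the review lists, let the user toggle, then import only
// the items that stayed selected.
// const assumptions = toReviewList(result.proposedAssumptions);
// const evidence = toReviewList(result.proposedEvidence);
// await importExtraction({
//   proposedAssumptions: keepSelected(assumptions),
//   proposedEvidence: keepSelected(evidence),
// });
```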

What Changed For Me

The first time I used this, I pasted the notes from that interview I'd been procrastinating on. It found four things I'd missed entirely. Not because I hadn't heard them — because they'd fallen below my threshold for "worth the effort of logging manually." One of them was a contradiction to an assumption I'd rated importance 5.

That contradiction had been sitting in my notes for five days.

Since then, I've been dumping everything — not just interviews. Email threads where a client pushes back on something. Slack conversations where a team member raises a concern. Even my own voice memos after a meeting. The bar for "worth processing" dropped dramatically because the cost dropped from twenty minutes to ninety seconds.

The volume of evidence in my projects roughly tripled in the first week. Not because I was doing more research — I was doing the same amount. I was just actually capturing it.

The Real Insight

Here's what surprised me: the feature didn't just save time. It changed what I captured.

When logging evidence was manual and effortful, I had an unconscious filter. I only logged things that felt obviously important. Things that clearly confirmed or denied a specific assumption I was already thinking about. The ambiguous stuff, the unexpected tangents, the half-formed signals — those got filtered out by the effort of logging.

With the AI doing the first pass, those subtle signals started showing up. "This person mentioned switching costs three times without being asked about it — that might be a new assumption about retention." I wouldn't have captured that manually. It wasn't worth the five minutes of effort. But it was worth reviewing a pre-filled card for two seconds and clicking "import."

The tool is better at noticing what I'm not looking for. That's a strange thing to say about something I built. But it's true.

The Limits

It's not perfect, and I want to be honest about that.

It sometimes surfaces things that are noise — particularly from rambling transcripts with a lot of filler. The review step is genuinely important; you can't blindly import everything. It occasionally misidentifies which existing assumption a piece of evidence relates to, especially when assumptions are similar. And it can't replace the kind of deep synthesis that happens when you sit with your evidence and think.

What it can replace is the grind of sorting, categorising, quoting, and linking. The mechanical work that sits between "I learned something" and "that learning is in the system." That gap is where most evidence dies. Now it doesn't have to.

The Bigger Point

I keep coming back to something I wrote in the first post about Assumption Mapper: the difference between storing assumptions and surfacing them is the entire product.

This feature is the same principle applied one step earlier. The difference between having evidence (in your notes, your inbox, your memory) and connecting evidence (to the assumptions it actually informs) is the difference between knowledge and action.

Most teams have plenty of evidence. They talk to users, they read research, they have conversations full of signal. What they don't have is a system that makes it easy to get from raw conversation to connected evidence. So the evidence stays raw. And decisions get made on intuition instead.

Paste the conversation. Let the tool sort it. Review and import. Ninety seconds from messy notes to linked evidence.

That's it. That's the whole feature. And it might be the most useful thing I've built so far.

Want to discuss this further?

I'm always happy to chat about building products and validation.

Get in touch