Most win/loss programs are running on the same recycled question list. You know the one. “What were the top factors in your decision?” “How would you describe our sales process?” “What could we have done better?”

These win/loss interview questions feel thorough. They’re not. They’re polite. And polite questions get polite answers — the kind that make everyone feel good without actually telling you anything you can use.

Your buyer isn’t going to volunteer the uncomfortable stuff unprompted. They’re not going to tell you that your champion buckled under questioning in the final presentation. They’re not going to explain exactly how your competitor reframed your pricing to make it look like a liability. They’re not going to mention that the CFO got pulled into the evaluation three weeks in and immediately started pushing a different direction.

Not unless you ask.

This post is about asking.

Why Standard Win/Loss Templates Fail

The templates you find online — and the ones most programs quietly inherit from whoever ran the program before them — are designed around one implicit goal: getting the call done without awkwardness.

That’s the wrong goal.

The problem isn’t that the questions are bad. It’s that they’re safe. The win/loss analysis questions most programs rely on ask about your company’s performance, not the competitive dynamics that actually drove the decision. They ask about features, not about politics. They reconstruct a clean narrative of the deal rather than exposing the messy, human reality of how B2B buying actually works.

What you end up with is a clean report that says your competitor won on price, or integrations, or “ease of use” — and your team nods along because it confirms what they already thought. That’s not intelligence. That’s noise with good formatting.

The deals you lose aren’t lost to a feature gap. They’re lost in moments you can’t see: a competing narrative landing better in the boardroom, a skeptic who never got addressed, a champion who couldn’t close the room when you weren’t there.

To find those moments, you need questions that go somewhere uncomfortable.

The Competitive Intelligence Questions That Actually Surface Intel

“When did you first hear that [Competitor] was in the evaluation, and who brought them in?”

Forget “what competitors were you evaluating.” That tells you the list. This tells you the origin story — who was the internal advocate for the competitor, and at what stage did they enter? If a competitor gets introduced late by a senior stakeholder, that’s a very different situation than appearing on the original shortlist. Understanding the champion network on the other side is as important as understanding your own.

“How did [Competitor] describe our product to you?”

This one lands differently than it sounds. You’re not asking about your own positioning — you’re asking buyers to repeat back the competitive narrative they heard. How did your competitor explain what you do? What framing did they use? What weaknesses did they highlight? This is the most direct window you have into their competitive playbook. Buyers will often repeat it almost verbatim if you give them the opening.

“Was there a moment in the evaluation where the direction seemed to shift? What happened?”

Deals rarely flip cleanly. There’s usually a moment — a meeting, a demo, a pricing conversation, a reference call — where the momentum changed. Most programs never find it because they ask about the outcome, not the inflection point. This win/loss interview question surfaces the specific interaction that moved the needle. That’s the thing worth studying.

“What did your internal champion say to the rest of the buying committee on your behalf?”

This question reveals your champion’s effectiveness — which is something most programs treat as unknowable. What objections did they face internally? Did they defend your pricing, or did they let it become a liability? Did they have language to respond to the competitor’s narrative, or did they go quiet? You built that champion; you gave them the enablement materials. This tells you whether any of it worked when the door closed.

“If we had won, what would have had to be different — specifically in the last three weeks?”

Not “what could we have done better.” That gets you vague answers. This forces specificity. It anchors on a timeframe, which makes the buyer’s memory more concrete. And it implicitly frames the deal as winnable — which makes the feedback feel less like a verdict and more like a debrief. You’ll get more useful detail from this framing than from any broad retrospective question.

“Was there something our team said or emphasized that actually worked against us?”

This one takes some nerve to ask. It’s asking the buyer to tell you where you hurt your own deal. But some of the most actionable competitive intelligence you can collect is about self-inflicted damage — messaging that landed wrong, a rep who pushed too hard, a demo that oversold something the buyer wanted to verify for themselves. Safe programs never ask this. It’s too uncomfortable for the interviewer.

Why These Questions Don’t Get Asked — And What to Do About It

Here’s the real problem: even when teams know they should be asking harder win/loss interview questions, they don’t.

There are two reasons. The first is interviewer bias. When a colleague, a CSM, or a product marketer conducts a win/loss interview, they carry the relationship into the room. They’re not going to press on something uncomfortable. They soften their follow-ups. They accept the first answer because going deeper feels aggressive. The buyer senses this and stays equally polite.

The second reason is social friction. Buyers are doing you a favor by agreeing to the interview. Most programs are designed to not abuse that goodwill — which means avoiding the questions that might feel pointed or accusatory.

The result is a program that collects data but not intelligence.

This is exactly the problem that Know Why is built to solve. When AI-conducted interviews replace human-led calls, there’s no relationship to protect, no awkwardness to manage. The AI follows the buyer wherever the conversation leads — asking the harder follow-up, pressing on the vague answer, circling back to the competitive moment the buyer mentioned in passing. Buyers speak more candidly to an AI than they do to a person with a stake in the outcome. The research on this is consistent: remove the human listener, and people say more.

The questions above aren’t hard to write down. They’re hard to ask in a conversation where both parties are trying to be comfortable. That’s the gap worth closing.

The Real Question

Before you redesign your question bank, ask yourself what your program is actually optimized for right now. Is it optimized for completion rates? For making the process easy on the buyer? For giving your team a regular cadence of calls they can point to?

Or is it optimized for surfacing the specific intelligence that changes how you sell, how you position, and how you win the next deal you shouldn’t lose?

If it’s the former, you’ll keep getting safe answers. If you want the latter, start with the win/loss interview questions your competitors hope you never ask.