
Enablement has a measurement problem.
You run a training. Reps complete it. Maybe they pass a quiz. And then... nothing. You move on to the next initiative and hope it moved the needle. Weeks later, someone asks if the training worked and you pull up completion rates like they mean something.
This is how most enablement teams operate today. And it’s not because they don’t care about impact — it’s because the tools and systems they’ve been given were never built to close the loop between what you teach and what actually happens in the field.
At Flockjay, we’ve been thinking about this problem obsessively. Our CEO Shaan Hathiramani calls the solution “Prove and Improve” — a closed-loop framework that connects enablement programs to behavior change and behavior change to revenue. Not in theory. In practice, with data, on a rolling basis.
Here’s how it works.
The Problem: Broken Feedback Loops
If you’re running enablement today, you probably recognize this pattern. You build a training program based on what leadership says the team needs. You deliver it. You measure completion and maybe some assessment scores. Then you wait — sometimes months — to see if the pipeline numbers move.
The feedback loops are really long. By the time you realize something didn’t work, you’re already behind and scrambling to react. The field is frustrated because they’re not getting what they need. Leadership is frustrated because they can’t see the ROI. And you’re stuck in between, trying to justify your team’s existence with metrics that don’t actually prove anything.
The data to close this gap exists. It’s sitting in your call recordings, your CRM, your LMS, your content platform. The problem is it’s all fragmented — stuck in different silos across the revenue stack. Your call scores don’t talk to your training completions. Your Salesforce data doesn’t connect to your competency assessments. And no one has the time to manually stitch it all together.
Meanwhile, enablement teams are getting smaller, not bigger. The pressure to demonstrate impact has never been higher. Completion rates aren’t going to cut it anymore.
The Framework: Three Stages of Prove and Improve
Prove and Improve is a three-stage loop. Each stage feeds the next, and the output of the third stage feeds right back into the first. That’s what makes it a loop and not just a process.
Stage 1: Identify the Gaps
Most enablement teams build their programs based on a combination of leadership requests, field feedback, and intuition. There’s nothing wrong with that — but it’s incomplete. The first stage of Prove and Improve uses data to proactively surface where reps are actually struggling.
This means connecting to your conversation intelligence tool and applying AI-driven scorecards to calls at scale. Not just flagging that a call went poorly, but measuring specific competencies — discovery technique, technical fluency, competitive positioning, objection handling — across your entire team on a rolling basis.
When you combine those call scores with CRM data like stage conversion rates, deal velocity, and win/loss patterns, you start to see a much clearer picture of where the real execution gaps are. And you don’t have to wait for a manager to flag it or a rep to ask for help.
The goal is for your team to receive a regular insights brief: here’s what’s happening in the field, here’s where the gaps are, and here’s what you should build next. Not based on gut feel. Based on what the data is actually telling you.
A practical example: your reps are going up against a competitor that just launched a new feature. It’s showing up in your call data — reps are getting caught off guard on competitive positioning. At the same time, your scores show that cross-sell conversations around your own new module aren’t happening effectively. That combination should be a trigger. You shouldn’t have to wait for the field to tell you they need help. The data already knows.
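To make that trigger concrete, here’s a minimal sketch of what the gap-detection logic could look like, assuming weekly competency scores on a 0-100 scale. The competency names, thresholds, and numbers below are illustrative assumptions, not a description of any particular platform’s implementation.

```python
# Illustrative gap-detection trigger. Scale, thresholds, and data are assumptions.
from statistics import mean

# Rolling weekly competency scores from AI-scored calls, newest week last (made up).
weekly_scores = {
    "competitive_positioning": [78, 74, 69, 61],  # slipping since the competitor's launch
    "cross_sell_new_module":   [55, 52, 54, 50],  # persistently weak
    "discovery_technique":     [82, 84, 83, 85],  # healthy
}

GAP_FLOOR = 65    # absolute floor for a competency score (assumed)
TREND_DROP = 10   # drop vs. trailing average that signals decay (assumed)

def detect_gaps(scores: dict[str, list[float]]) -> list[str]:
    """Flag competencies that are weak outright or trending down."""
    gaps = []
    for competency, history in scores.items():
        current, trailing = history[-1], mean(history[:-1])
        if current < GAP_FLOOR or trailing - current > TREND_DROP:
            gaps.append(competency)
    return gaps

print(detect_gaps(weekly_scores))
# -> ['competitive_positioning', 'cross_sell_new_module']
```

The same pattern extends to CRM signals: a flagged competency gets joined against stage conversion or win/loss data before it ever becomes a build request.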
Stage 2: Personalize the Enablement
Once you know where the gaps are, you build against them. But not the old way — not a generic SKO session or a one-size-fits-all course that everyone sits through regardless of where they actually need help.
The second stage is about delivering the right enablement to the right person at the right time. This means personalized learning paths based on individual skill gaps. It means micro-lessons generated from actual top-performer calls — not a 40-minute recording no one will watch, but a five-minute clip with annotations showing exactly what a great rep did and why it worked.
It means AI roleplay for practice, so reps can work on their specific weak spots before they’re in front of a customer. And it means contextual nudges delivered where reps actually spend their time — in Slack, in their inbox, in the 15 minutes before their next call — not buried in a platform they have to remember to log into.
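Under the hood, the routing logic doesn’t need to be exotic. Here’s a hedged sketch, with invented reps, scores, and a made-up lesson catalog, of how individual skill gaps might map to micro-lessons:

```python
# Hypothetical routing: send each rep to micro-lessons for their weakest skills.

rep_scores = {  # per-rep competency scores from Stage 1 (invented)
    "alice": {"discovery": 88, "competitive_positioning": 54, "objection_handling": 71},
    "bob":   {"discovery": 62, "competitive_positioning": 79, "objection_handling": 58},
}

lesson_catalog = {  # made-up catalog of short, annotated clips
    "discovery": "5-min clip: top performer running discovery",
    "competitive_positioning": "5-min clip: countering Competitor X's new feature",
    "objection_handling": "5-min clip: pricing objection walkthrough",
}

def build_learning_path(scores: dict[str, int], max_lessons: int = 2) -> list[str]:
    """Return lessons for the rep's lowest-scoring competencies."""
    weakest = sorted(scores, key=scores.get)[:max_lessons]
    return [lesson_catalog[c] for c in weakest]

for rep, scores in rep_scores.items():
    print(rep, "->", build_learning_path(scores))
```

Two reps who sat through the same SKO get entirely different paths here, because the input is their own call data rather than a shared curriculum.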
The instructional designer’s role changes here. Instead of starting from a clean sheet of paper and spending weeks building a polished course, you’re curating insights from the field and using AI to generate the first draft. You’re becoming an editor and orchestrator, not a content factory. The content gets more relevant because it’s drawn from real activity, and it gets out faster because you’re not hand-building everything from scratch.
Stage 3: Measure the Impact (Prove It)
This is where most enablement programs stop — and it’s the most important part of the entire framework.
After you deliver the training, you measure whether it actually changed anything. Not just completion. Not just quiz scores. Actual behavior change in the field.
Here’s what that looks like: before the training, you have a baseline measurement of competencies from your ongoing call scores. You know where reps stood on the specific skills you’re training against. Then, 30 days out, you measure those same competencies again, using the same rubrics applied to real calls. And again at 60 days.
Did the scores change? Are reps actually doing the things you trained them on? That’s the leading indicator — behavior change.
Then you look at the lagging indicators. Did the behavior change translate to improved metrics? Better stage conversion rates? Higher win rates against that competitor? Faster deal cycles?
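As a back-of-the-envelope illustration, the leading and lagging comparison might look like the sketch below. Every number is invented, and the metric names are placeholders:

```python
# Before/after comparison for one trained cohort (all numbers invented).

measurements = {
    "baseline": {"competitive_positioning": 61, "win_rate_vs_x": 0.22},
    "day_30":   {"competitive_positioning": 70, "win_rate_vs_x": 0.24},
    "day_60":   {"competitive_positioning": 73, "win_rate_vs_x": 0.30},
}

def delta(metric: str, checkpoint: str) -> float:
    """Change in a metric between baseline and a later checkpoint."""
    return measurements[checkpoint][metric] - measurements["baseline"][metric]

# Leading indicator: did the trained behavior change on real calls?
print("Call-score change by day 60:", delta("competitive_positioning", "day_60"))  # 12

# Lagging indicator: did the behavior change show up in pipeline metrics?
print(f"Win-rate change by day 60: {delta('win_rate_vs_x', 'day_60'):+.0%}")  # +8%
```

The key design choice is holding the rubric constant: baseline, day 30, and day 60 are only comparable because the same scoring criteria run against real calls at every checkpoint.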
If behavior changed but metrics didn’t move, that tells you something important about your methodology or your assumptions. If behavior didn’t change at all, that tells you the enablement itself needs work — maybe the format, maybe the content, maybe the delivery mechanism.
Either way, you have real data. Not a hunch. Not a completion rate. Actual evidence of whether what you did worked and why.
How the Loop Closes
Here’s the part that makes Prove and Improve a framework and not just a measurement exercise: the output of Stage 3 feeds directly back into Stage 1.
The gaps that persist get re-prioritized. New gaps that emerge from the data get flagged automatically. The enablement your team builds in the next cycle is informed by what worked and what didn’t in the last one. Every cycle makes the next one more targeted and more effective.
This turns enablement from a quarterly training calendar into a continuously running engine. You’re not just delivering programs and hoping for the best — you’re running a feedback loop where every initiative generates data that shapes the next one.
And this is where the conversation with leadership fundamentally changes. Instead of walking into a QBR with completion rates and satisfaction scores, you can say: we identified these specific gaps, we built training against them, here’s how behavior changed post-training, and here’s how that’s tracking against pipeline metrics. Here’s the scorecard today, and here’s how we’re going to keep the team improving.
That’s the kind of evidence that earns enablement a permanent seat at the revenue table.
Why This Is Possible Now
If this sounds like it should have been obvious all along, you’re right. The concept isn’t new. What’s new is the technology to actually do it.
Most of the enablement platforms in the market today were built around 2010. Seismic, Highspot, Mindtickle — they’re good at what they were designed to do, but they were built before AI could score calls against custom rubrics at scale, before you could unify competency assessments between training and live field calls, before you could generate micro-lessons from actual recordings with a click.
The integrations between these systems have always been bolt-on. Your LMS scores live in one place. Your call scores live in another. Your CRM data is somewhere else. And even if you can technically connect them, no one’s building the analytical layer that brings them together into a single, actionable picture.
AI changes the math. You can now take unstructured call data, run it through a rubric, get scores in real time, map those to competencies in your LMS, and sort your reps based on where they need help — all automatically. That just hasn’t been possible before. And if you layer in AI-generated content on top of that, the entire cycle from identifying a gap to delivering a training against it goes from weeks to hours.
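In code, that pipeline has a simple shape. The sketch below is illustrative only: score_call stands in for the AI scoring step (a real system would send the transcript and rubric to a model there), and the rubric, reps, and canned scores are assumptions.

```python
# Shape of the score -> map -> sort pipeline described above.
from collections import defaultdict

RUBRIC = ["discovery", "competitive_positioning", "objection_handling"]

def score_call(transcript: str) -> dict[str, float]:
    # Stand-in for the AI scoring step; a real system would call a model here.
    # Canned values keep the sketch runnable end to end.
    return {"discovery": 80.0, "competitive_positioning": 60.0, "objection_handling": 72.0}

calls = [  # (rep, transcript) pairs; transcripts elided
    ("alice", "...call transcript..."),
    ("bob", "...call transcript..."),
]

# 1. Score every call against the rubric.
per_rep = defaultdict(list)
for rep, transcript in calls:
    per_rep[rep].append(score_call(transcript))

# 2. Aggregate into a competency profile per rep (simple mean here).
profiles = {
    rep: {c: sum(s[c] for s in scores) / len(scores) for c in RUBRIC}
    for rep, scores in per_rep.items()
}

# 3. Rank reps by where they need the most help on a given competency.
needs_help = sorted(profiles, key=lambda r: profiles[r]["competitive_positioning"])
print(needs_help)  # reps ordered weakest first on competitive positioning
```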
The architecture matters here. You need a system that was built from the ground up to connect these data streams — not one that’s trying to bolt AI onto a 15-year-old platform. That’s why we built Flockjay the way we did. We’re not the 14th LMS or the 14th CMS. We’re the platform that makes the loop work.
What This Means for You
If you’re leading enablement today, the Prove and Improve framework changes your role in a few important ways.
First, it shifts you from content creator to insights-driven strategist. Your most valuable contribution isn’t building the next course — it’s identifying the right gap to address and measuring whether your intervention worked. The building part gets faster and more automated. The thinking part becomes your superpower.
Second, it gives you a language for ROI that leadership actually cares about. “We trained 500 reps” means nothing. “We identified a gap in competitive positioning, built a targeted program, saw a 15% improvement in call scores on that competency, and that cohort’s win rate against Competitor X improved by 8 points in 60 days” — that means everything.
Third, it compounds. Every cycle through the loop generates data that makes the next cycle better. The enablement gets sharper. The gaps get smaller. The proof gets stronger. Over time, you’re not just running programs — you’re building an enablement function that gets better at getting better.
That’s what Prove and Improve is really about. Not just proving your impact. Using the proof to continuously improve. A closed loop where accountability and iteration are the same thing.
The teams that figure this out first won’t just have better-trained reps. They’ll have a compounding advantage that’s very hard to replicate.