The question we forget to ask before building training

Published 18 March 2026 · Updated March 2026
needs analysis · L&D · performance consulting · instructional design

In every L&D team I have worked in or alongside, the same conversation happens sooner or later. A manager sends a message, raises a ticket, or books a call, and says some version of: “We need training on X.”

And then the L&D team gets to work designing training on X.

This is the central dysfunction of the profession. Not the tools we choose, not the platforms we debate, not whether microlearning is dead. The real problem is that we have normalised treating a stakeholder’s solution request as though it were a diagnosis. It isn’t. “We need training on X” tells you what someone wants. It tells you almost nothing about what the problem actually is, or whether training is the right way to address it.

A solution is not a need

Cathy Moore, whose Action Mapping methodology has shaped how a generation of learning designers work, puts the starting question plainly: not “what do people need to learn?” but “what do people need to do differently?” Those are not the same question, and conflating them is where most L&D projects go wrong before a single learning objective has been written.

The practical consequence of that reframing is significant. If the answer to “what do people need to do differently?” turns out to be “nothing, they already know what to do but don’t have the right tools,” or “they know what to do but their manager actively discourages it,” then no amount of well-designed training will close that gap. You are solving the wrong problem.

Julie Dirksen, writing from a cognitive science perspective in Design for How People Learn, makes a similar argument more precisely. She asks practitioners to identify what kind of gap exists before deciding how to address it. A knowledge gap, a skill gap, a motivation gap, and an environmental gap are four different problems requiring four different responses. Training is a reasonable answer to the first two. It is largely useless for the last two. And yet, as anyone who has spent time in a large organisation knows, plenty of training gets built in response to motivation and environment problems simply because nobody stopped to ask the question.

The performance question nobody asks

Clark Quinn frames the issue at a systems level. He argues that formal training addresses a small and often overvalued portion of what actually drives workplace performance. Most performance happens in the flow of work, in response to immediate context, supported by tools, feedback, and the environment around the person. A needs analysis that only looks at knowledge and skill gaps before recommending a course is missing most of the picture.

Mirjam Neelen and Paul Kirschner make a related point in Evidence-Informed Learning Design, grounding these arguments in the research literature rather than practitioner instinct alone. Their position is that needs analysis done well is not a box-ticking exercise before the real work begins. It is the most important intellectual work in the entire project. Get it wrong, and every subsequent design decision is built on a shaky foundation.

Two ends of the same question

Something the profession consistently misses: effective needs analysis and effective evaluation are the same question asked at different points in time.

When we do a proper needs analysis, we are asking what good performance would look like, and what is currently preventing it. When we evaluate learning, we are asking whether performance has changed and whether we can attribute that change to our intervention. If you cannot answer the first question clearly before you design, you will not be able to answer the second question meaningfully afterwards.

Will Thalheimer’s Learning-Transfer Evaluation Model makes this explicit. The higher tiers of LTEM — decision-making quality, task performance, transfer to work — only become measurable targets if you have defined them before the project starts. You cannot retroactively decide that decision-making was what you were trying to improve. Which means the needs analysis has to surface those performance targets before a single piece of content is designed.

This is why organisations that skip the front-end analysis almost always end up evaluating at the lower tiers: completion rates, reaction surveys, immediate knowledge recall. Not because those are good measures of effectiveness (they aren’t, as I argued in my last post), but because nothing more meaningful was defined at the start.

What good analysis actually looks like

A rigorous needs analysis does not need to be a lengthy or expensive process. It does need to be honest. That means resisting the pull toward solution before diagnosis, asking the performance question rather than the content question, and being willing to go back to a stakeholder and say: this is not a training problem.

That last part is the hardest. The instinct, particularly for teams whose budget and headcount are tied to output, is to build something. Saying “training won’t fix this” can feel like professional failure. It isn’t. It is the most useful thing a learning designer can offer: an honest read of what will actually drive behaviour change, rather than an expensive intervention that leaves the underlying problem untouched.

The teams I most respect treat needs analysis as performance consulting, not project scoping. They ask uncomfortable questions. They push back on solution requests. They map the full picture — knowledge, skill, motivation, environment, management support — before committing to a design approach. And when they do build training, it is built on a foundation solid enough to actually evaluate.

Which means, when the project ends, they have something worth measuring.


This is the second in a series of articles on the structural challenges facing the L&D profession. The first article covers why training evaluation falls short and what a better approach looks like.
