
Educational Measurement (Fifth Edition), produced by the National Council on Measurement in Education, is out at the end of February.
It pulls together current research and guidance across the full measurement landscape. Validity. Reliability. Fairness. Interpretation. Consequences. The chapters are written by people who have spent decades arguing about this stuff properly. It’s also open access under a CC BY-NC-ND 4.0 licence, which matters.
If you work in workplace learning, you should care about this book. This is the science behind how learning and assessment are measured. Not dashboards. Not post-course surveys. The actual foundations.
Too often in L&D we lean on reaction data and thin “learning” stats to justify our work. This book explains, in forensic detail, why that isn’t enough.
I’d be genuinely interested in your views once you’ve spent time with it. I’ve only worked through parts so far, but a few things stood out.
First, it confirms something I’ve felt for a long time. Most L&D evaluation would not survive serious scrutiny. Not because people are lazy, but because the field has never built a shared measurement theory. We have lots of models, often clung to or applied selectively, and many of them don’t stand up when you interrogate their assumptions.
Second, impact claims need defensible arguments, not anecdotes. If you can’t explain how your data supports a performance claim, you don’t have impact evidence. You have a story you like telling.
Third, the future of L&D measurement is judgement, not metrics. That means being clear about the decisions you’re trying to support. It means being explicit about trade-offs. It means stating your assumptions out loud and being honest about the consequences of what you choose to measure and what you choose to ignore.
None of this is comfortable.
But if we’re serious about credibility, it’s unavoidable.