
I wrote yesterday about the Cost Centre Trap and how L&D is missing the mark in many ways. It did feel like a rant, and I realised that we don’t need more noise; we need a bit more signal. Then I saw that Gartner had published its 2026 CHRO Future of Work trends, and it sharpened my thinking.
It confirmed the uncomfortable truth that the future-of-work debate is not happening somewhere else. It is already exposing the limits of an L&D model built around requests, courses and completions rather than capability, judgement and work redesign.
Gartner’s 2026 CHRO research, drawn from over 25,000 senior HR conversations, identifies the capabilities organisations need most urgently. In their analysis, business units that redesign how work gets done with AI are significantly more likely to exceed revenue goals. What appears to unlock that value is not technical proficiency. It is systems thinking, process expertise and the ability to define the problem before reaching for a solution.
At the same time, organisations are already producing what Gartner calls “AI workslop”: fast, high-volume output riddled with errors, generated by people who are measured on speed rather than quality. The cost of each incident runs to nearly two hours of diagnostic and remediation time. The root cause is not laziness. It is an incentive system that rewards throughput over judgement.
Both findings have direct implications for L&D. If the organisation needs systems thinkers and better judgement, and L&D is still supplying courses on request and reporting completions, the function is not just missing the moment. It is organised around the wrong things.
Three shifts change that.
Shift one: From service provider to problem framer
The service provider model starts with a request. Someone identifies a need, usually defined as a training gap, and L&D responds with a solution. The problem is that the request is rarely the problem. It is someone’s interpretation of the problem, filtered through what they think L&D can offer.
Problem framers start earlier and ask different questions. What is the performance gap and how do we know it exists? Is this a capability problem at all, or is it a process, incentive, or management issue that training cannot fix? What would actually need to change for performance to improve?
That requires L&D to be present in conversations where capability needs are being defined, not just in meetings where solutions are being commissioned. It requires relationships with people who own performance outcomes, not just people who hold learning budgets.
It also requires the confidence to say: this is not a training problem. That is harder than it sounds when your function is judged on how much it produces.
Shift two: From course factory to capability system designer
A course factory takes inputs and produces outputs. Someone commissions a programme; L&D builds and delivers it; completions are counted. The commissioning relationship reinforces this pattern. If sponsors only ever ask for courses and L&D only ever supplies them, both parties get comfortable with a transaction that feels productive but rarely is. Nobody asks whether anything changed.
A capability system designer breaks that pattern by changing what it offers. Instead of responding to requests, it starts with the performance question: what do people need to do differently, in what context, and what would actually support that over time? The answer sometimes includes a programme. More often it includes changes to how work is structured, what feedback loops exist, what managers are doing, and how success is measured.
This is where the Gartner finding lands directly. The organisations unlocking AI value are not the ones with the best training catalogues. They are the ones with people who can look at an entire process and redesign it. Building that kind of thinking is a capability question. It is also an L&D question if L&D is positioned to address it.
When L&D starts designing for capability rather than content, sponsor behaviour tends to shift too. Not immediately, and not without effort. But when leaders see that a different kind of conversation produces better outcomes, the request stops being “build me a course” and starts being “help me think about this problem.”
Shift three: From activity reporting to evidence for business decisions
The first two shifts change what L&D does. This one changes how it proves it matters.
Most L&D reporting answers the wrong question. Completions, ratings, hours delivered. These numbers describe activity. They tell a leader nothing useful about whether the organisation is better at doing what matters.
Evidence for business decisions looks different. It names whether a gap was rooted in skill, process, incentive or management and what changed as a result. It connects capability development to outcomes leaders are already trying to influence: performance, retention, workforce mobility, quality of work, and speed of change adoption. It uses the language of the business, not the language of learning.
This is not primarily a reporting problem. It is a positioning problem. L&D cannot produce meaningful evidence if it arrives late, works on disconnected requests, and has no visibility of the performance context it is supposed to be influencing. Better evidence also changes what sponsors ask for next time. When reporting connects to business outcomes rather than learning activity, it raises the standard of the conversation on both sides.
None of this is easy. And none of it happens by announcing a new strategy.
It happens when one conversation goes differently. When L&D asks a different question at the start of a project. When a report lands that a leader actually uses. When a sponsor says, “I hadn’t thought about it that way.”
The Cost Centre Trap is not broken by a rebrand or a new framework. It is broken incrementally, by a function that starts behaving differently and producing evidence that something has changed.
Gartner’s research tells us what organisations need right now: people who can redesign work, exercise judgement and connect capability to performance. That is not a technology problem. It is a human capability problem.
And human capability problems, when they are defined properly, are exactly what L&D should exist to solve. The question is not whether the function works hard enough. It is whether it is organised to do the work the organisation now needs.