Category Error

[Image: a blurred screw and flat-head screwdriver, overlaid with the words "CATEGORY ERROR" in bold white type.]

I saw Don Taylor’s post last week about the fears people are expressing about AI and its impact on L&D. I’m keen to see the next Global Sentiment Survey results when they’re published, but I think there is an underlying strategic problem we are still failing to address.

The problem with AI skill acquisition is not a failure of implementation. It is a category error.

The stages Donald describes make sense at the level of activity, capturing the chaos, experimentation, serious use and fear that have marked the function’s reaction since ChatGPT arrived, but they do not describe capability.

That distinction matters because most L&D responses to AI are framed as skills problems. People need to learn how to use the tools, managers need guidance, and organisations need to be “AI ready”. Once that framing is accepted, the solution space collapses quickly into courses, content and guidance, and the training loop repeats.

The Global Sentiment Survey already shows this tension emerging. By 2025, respondents demonstrated a much greater understanding of AI than in 2023, alongside growing anxiety about what it means for the profession. That anxiety is often interpreted as fear of replacement or deskilling, but I think that misreads the signal.

The issue is not whether people can use AI. It is what happens to thinking, judgement and diagnostic capability when cognitive work is systematically offloaded to tools.

This is where the category error sits. AI adoption is being treated as a learning problem when it is actually a system design problem.

AI changes where effort sits within the organisation by altering who is expected to think, when, and about what. If first drafts, analysis, structure and synthesis are automated by default, then those forms of thinking stop being required. Over time, capability erodes not because people are incapable, but because the system no longer demands it.

My post last week argued that L&D is actively promoting this erosion without naming it. We encourage AI use for speed and productivity but rarely ask which forms of thinking must be protected, practised and retained. While outputs become faster and more polished, reasoning is quietly undermined.

From that perspective, much L&D work now feels automatable because it was never aimed at building organisational capability in the first place. It was aimed at producing defensible activity, and AI simply makes that reality harder to ignore.

The strategic question is not how L&D uses AI better. It is whether L&D is willing to step out of the training category altogether.

That shift means moving from responding to requests to diagnosing performance systems, from delivering content to shaping the conditions under which judgement develops, and from “skills for AI” to making deliberate decisions about where AI should not be used.

Those are not learning design questions. They are organisational ones.

If L&D continues to treat AI as a course-shaped problem, the fear reflected in the survey will be justified, not because AI replaces the function, but because the function remains focused on the wrong category of work.


#CategoryError #AI #OrganisationalCapability

Please comment...
