Someone else’s problem

Alt Text: An abstract image with a blurred silhouette of a child hiding their eyes with their hands. The background features a gradient of orange and blue hues. Prominently displayed in the foreground is the text "SOMEONE ELSE'S PROBLEM" in bold white letters, creating a striking visual contrast.

If nobody owns AI in learning, AI owns your learners.

Earlier this week I showed you structural exclusion in a music festival lineup: 70% male headliners across five years. Small decisions repeated over time created a big pattern. The same thing is now happening with AI in European workplaces, especially financial services, where small procurement and design choices are quietly reshaping power, agency and opportunity at work.

Across Europe, regulators are building AI-specific protections: mandatory impact assessments, algorithmic transparency obligations and structured worker voice in decisions about AI systems. The UK, by contrast, relies on general employment law and data protection law. What it does not yet have is an AI-specific framework comparable to the EU AI Act, which means no statutory impact assessments for workplace AI and no built-in requirement for worker voice on algorithmic systems.

This creates three gaps that show up very quickly in learning and development:

A regulatory gap – no framework designed for systems that learn, optimise and reconfigure work over time.

A transparency gap – vendors can sell AI learning platforms without explaining how they work or what trade-offs they make.

A power gap – workers have no structured representation in AI decisions, and L&D is expected to deploy and govern tools without formal authority over how they are designed.

When those three gaps combine, responsibility falls to the only function close enough to see the risks as they emerge. IT governs technology, HR governs employment, compliance governs risk, and L&D governs learning, so L&D becomes the de facto guardian of learning integrity, not by choice but by proximity. If nobody in your organisation owns the integrity of AI in learning, the path of least resistance will decide for you.

So in your organisation, who decides which AI learning systems are deployed? Who has the mandate to veto or redesign them if they undermine worker agency? Who makes sure developmental data does not quietly become a new performance management risk?

If your honest answer is “nobody”, what happens next where you work, and who is ready to take ownership before the pattern sets like concrete?


P.S. The WEF/OECD just published research on AI in strategic foresight, tracking adoption across 55 countries. It’s rich empirical work worth reading.

But it measures practitioner tool usage, not whether foresight influences decisions. Same governance gap showing up in a different domain: capability racing ahead of authority.

When nobody owns the outcome, the technology decides.

Link: AI in Strategic Foresight: Reshaping Anticipatory Governance

#LearningAndDevelopment #WorkplaceAI #FutureOfWork

Please comment...