The AI Literacy Gap: Why Your Leadership Team Needs to Catch Up
With the emergence of AI/GenAI, I felt I needed to think, and write, about its impact on technology leadership. The next series of articles will explore this topic. As it's an emerging area, these articles may well be shorter, and I will try to publish them weekly when possible.
Most engineering leadership teams I talk to have a wide spread of AI understanding. At one end, you've got the enthusiasts: they've built agents, they use AI coding assistants daily, they can explain transformer architectures over a pint. At the other end, you've got leaders who haven't meaningfully engaged with any of it. They've read the headlines, they've sat through a vendor demo, and they're quietly hoping this is a hype cycle that will pass.
It won’t pass. And the gap between these two groups is becoming a leadership problem.
Why the Gap Matters
When your leadership team has wildly different levels of AI understanding, decision-making suffers. The enthusiasts push for AI adoption in places where it doesn't make sense, because they're excited about the technology. The sceptics resist adoption in places where it does make sense, because they don't understand the technology well enough to evaluate it. Neither group is making good decisions; they're making emotional ones dressed up as strategic ones.
The more I’ve dug into this, the more I’ve realised that the literacy gap isn’t primarily about technical knowledge. It’s about judgement. A leader doesn’t need to understand how attention mechanisms work in transformers. They need to understand what AI is good at, what it’s bad at, where it’s improving, and how to evaluate whether a specific AI application will create value in their context.
That’s a different kind of literacy, more like business literacy than technical literacy. And it’s the kind that most leadership development programmes aren’t providing.
Assessing Where People Actually Are
The first step is honest assessment, and this is harder than it sounds. People overestimate their AI understanding because the topic is everywhere. Reading articles about AI is not the same as understanding AI. Watching a demo is not the same as evaluating whether the technology is ready for production use.
I’ve found it useful to ask practical questions rather than theoretical ones. Not “what is a large language model?” but “if we wanted to use AI to improve our customer support workflow, what would you need to know before making that decision?” The second question reveals whether someone can apply AI thinking to real problems, not just recite definitions.
A simple framework I’ve used: can each leader on your team articulate (a) what generative AI is actually good at today, (b) what it’s reliably bad at, (c) where it’s improving fastest, and (d) what the risks are? If they can’t answer all four with reasonable nuance, there’s a gap.
Closing the Gap Without Patronising
The challenge is that your leadership team contains adults with decades of experience who don’t want to feel like they’re back in school. The enthusiasts don’t need the basics. The sceptics don’t want to be lectured by the enthusiasts.
What I’ve found works:
Hands-on exploration, not presentations. Give every leader access to AI tools and a set of real problems to try solving with them. Not toy problems, actual work problems. “Use an AI assistant to draft the technical spec for next quarter’s project.” “Try using AI to analyse our last quarter’s incident data.” The experience of using the tools teaches more than any slide deck.
Structured sharing sessions. Have the leaders who are further ahead share what they’ve learned, not as experts teaching novices, but as colleagues sharing experiments. “I tried using Copilot for a week and here’s what I found” is a very different framing from “let me explain AI to you.”
External perspectives. Bring in someone from outside, whether a consultant, a peer from another company, or a vendor who's honest about limitations, to provide a baseline that doesn't come with internal political baggage.
Focus on decisions, not technology. Frame the learning around the decisions leaders need to make: where to invest, what to adopt, how to evaluate, when to wait. This makes the learning immediately relevant rather than abstractly educational.
The Ongoing Commitment
AI literacy isn’t a one-time training event. The technology is moving fast enough that what’s true today may not be true in six months. The leaders who stay current are the ones who maintain ongoing engagement, using the tools, reading critically, experimenting, and talking to their teams about what’s working and what isn’t.
The goal isn’t to turn every leader into an AI expert. It’s to ensure that every leader has enough understanding to make informed decisions, ask good questions, and avoid both the trap of uncritical enthusiasm and the trap of uninformed scepticism.
The gap is real. Closing it is urgent. And the leaders who close it first will make better decisions than the ones who don’t.