The human edge: Leadership in the age of AI
Most conversations about AI still centre on tools, use cases, and adoption plans. That’s understandable. The technology is improving quickly, the costs are falling, and the potential impact is significant. But inside most large organisations, progress still feels uneven. There are pilots. Experiments. A handful of teams are moving faster than the rest. Lots of activity at the edges.

There are exceptions. Some leaders are investing in real systems and reworking processes end-to-end. They’re taking on more risk and learning as they go. For now, they remain a minority.
What’s striking is not the lack of ambition, but the gap between promise and lived impact. There’s a lot of talk about scale. Much less evidence of sustained change in how decisions actually get made.
That gap matters. Because beneath the technology shift, something quieter is happening inside organisations — something that has less to do with models or tools, and more to do with leadership itself.
The problem has shifted
AI hasn’t changed the need for leadership. It’s changed the shape of the problem leaders are dealing with.
For most senior leaders, the hard part was never producing information. Decisions have long been shaped by briefings, analysis, and recommendations prepared by others. What AI changes is the volume and speed of those inputs — and the number of plausible paths forward that now appear at once.
Answers are no longer scarce. They’re abundant.
That abundance creates a different kind of pressure. When multiple options look reasonable, progress doesn’t stall because there’s nothing to do. It stalls because it’s unclear where to look first.
Leadership work shifts upstream: away from refining answers and toward deciding which questions are worth spending time on at all.
If answers are easy to generate, the real challenge becomes choosing what deserves attention.
Where attention goes
In an environment of information abundance, attention becomes the constraint.
AI makes it easy to generate ideas, scenarios, and initiatives. Strategy options multiply. Investment cases stack up. Every problem can be explored from multiple angles, often with convincing evidence attached.
The result isn’t a shortage of insight. It’s a crowded agenda.
You see it in leadership teams trying to pursue too many things at once. Everything feels important. Everything has momentum. Very little gets stopped.
The work of leadership here is not deciding what’s right or wrong. It’s deciding what gets discussed, what gets delayed, and what doesn’t make the agenda at all.
That’s a different discipline.
Focus is reflected in how meetings are structured, which metrics are reviewed, which questions get airtime, and which initiatives survive past the first wave of enthusiasm. It’s about narrowing the field so that effort isn’t spread too thinly.
When attention is clear, decision-making downstream becomes easier. Teams don’t need to escalate every issue. They know what matters for now.
Eventually, though, narrowing the field leads to a harder moment.
A choice still has to be made.
Owning the call
Once attention is focused, someone has to decide what happens next.
This is where judgement enters — and where it can’t be automated away.
AI is good at producing structured recommendations. The logic is clear. The trade-offs are laid out. Often the answer looks calmer and more coherent than the conversation that led up to it.
But many leadership decisions don’t resolve neatly. They sit at the point where values collide. Move faster or bring people with you. Cut cost or protect trust. Ship now or get it right.
The data may be sound. The recommendation may be sensible. And still, there’s no neutral outcome.
That’s the moment judgement is required.
Not judgement as instinct or bravado, but judgement as ownership. Someone has to decide which trade-off the organisation is willing to live with — and stand behind it.
The most effective leaders don’t treat AI as something to follow blindly or ignore entirely. They treat it like a capable junior colleague. Useful. Fast. Often insightful. But in need of challenge.
They ask what assumptions sit underneath the recommendation. What might be missing. What would change the conclusion. They use AI to extend their thinking, not to avoid responsibility.
And judgement doesn’t end once a decision is made.
When an AI-informed decision is questioned — by a board, a regulator, or a team — “that’s what the system recommended” isn’t an answer. Leaders still need to explain the reasoning, the trade-offs, and why this path was chosen.
Decisions aren’t abstract. They land somewhere.
Where decisions land
Decisions don’t stay abstract for long. They land somewhere. Usually on people.
That’s where AI-enabled change becomes real. Not in strategy decks or roadmaps, but in how work feels day-to-day. How roles shift. How expectations change. How familiar tasks quietly disappear or stop being central.
Some of this is already visible. Coding, analysis, reporting, and other forms of knowledge work are changing shape. Parts of the job get faster or easier. New expectations appear before old ones have fully gone away. The work doesn’t vanish, but it feels different.
Most people aren’t worried about losing their job tomorrow. What they’re worried about is whether the work they’re good at will still matter next year. Whether the skills they’ve invested in still count. Whether they’re falling behind without realising it.
That uncertainty rarely looks like panic. It surfaces quietly. In career development conversations. In questions about what “good” looks like now. In a subtle hesitation to commit to a direction that might not last.
Leaders can’t remove that uncertainty. Pretending otherwise doesn’t help. What they can do is be clear about how change will be handled.
That starts with honesty about where value is shifting. Routine work is easier to automate. Roles built entirely around it will change. Avoiding that conversation doesn’t protect people — it delays the moment when trust is tested.
At the same time, leaders need to resist turning every shift into a threat. Most roles won’t disappear overnight. They will evolve. Coding and analysis don’t go away, but they stop being the whole job. More weight moves toward framing problems, applying judgement, and working in context.
The leadership challenge here isn’t reassurance. It’s consistency.
People cope better with uncertainty when they understand how decisions are made, how change is applied, and what will be treated as fair. They don’t need guarantees about outcomes. They need confidence in the process.
This shows up in small, practical ways. How openly trade-offs are explained. Whether decisions are revisited when assumptions change. Whether the same principles apply across teams, not just in isolated cases.
Learning also needs to feel legitimate. Change at this pace is tiring. Leaders who make it acceptable to say “I don’t know yet”, who protect time to learn, and who treat adaptation as part of the job — not a personal failing — reduce anxiety far more effectively than those who talk only about performance.
People watch closely during periods of change. Not for certainty, but for signals. Are leaders paying attention? Are they listening? Are they prepared to adjust course when something isn’t working?
That’s how trust is built — or lost — as work continues to shift.
How credibility is earned now
As work changes, leadership itself comes under closer scrutiny.
When answers were harder to come by, credibility often came from mastery. Knowing the detail. Having the answer. Being the person others deferred to. That still matters, but it’s no longer enough.
As AI becomes part of how analysis is produced, leaders are judged less on how quickly they can respond and more on how they behave when things are unclear.
Credibility is built through consistent patterns of behaviour.
Do leaders challenge recommendations when something doesn’t sit right, or do they quietly accept them? Do they explain decisions in the same way across teams, or does the story change depending on the audience? Do they show up when outcomes are uncomfortable, not just when results are positive?
These things are noticed over time.
One of the quiet shifts here is that saying “I don’t know yet” has become a strength rather than a weakness. Not as a way to avoid responsibility, but as a signal of honesty. Leaders who are willing to sit with ambiguity, ask better questions, and work through problems openly tend to earn more trust than those who project certainty they don’t actually have. This doesn’t mean indecision. It means being clear about what is known, what isn’t, and how decisions will be made as things change.
Presence has shifted as well. It’s no longer about being involved in everything or visible everywhere. In complex organisations, that quickly becomes a bottleneck. Instead, presence is felt through clarity. Clear priorities. Clear expectations. Clear explanations when trade-offs are made.
Over time, people come to trust leaders who behave in ways they can predict. Not because the leader is always right, but because the principles stay consistent.
In an environment shaped by AI, credibility isn’t built purely through control or confidence. It’s built through reliability.
The stubbornly human edge
AI will continue to improve. The tools will get faster, cheaper, and more capable. More work will be automated. More decisions will be supported by systems rather than people.
That trajectory is largely out of any individual leader’s control.
What doesn’t scale in the same way is judgement, trust, and responsibility. Those remain stubbornly human.
They’re visible in everyday choices. What gets prioritised when everything feels urgent. Where leaders draw the line between advice and decision. How change is explained when outcomes are uncertain. How people are treated when work shifts faster than structures can keep up.
These choices shape how organisations actually operate — and what it feels like to work inside them.
The question isn’t whether AI will change how your organisation functions. It already is. The more useful question is this:
What are you deliberately choosing to keep stubbornly human as everything else changes?
David Mitchell
Chief Growth Officer