The risk isn’t AI. It’s what your team does with it

After my talk on human-centered AI this week, a CEO pulled me aside.

He didn’t want to talk about LLMs, vendors, or productivity gains.

He said: “We’ve invested millions in these tools, and I’m realizing my leadership team has no idea how to actually lead through them.”

I’m hearing versions of this conversation everywhere right now. And the topic of AI comes with a lot of emotion.

Last week, a CEO guest speaker in my NYU executive master’s class shared that she equates AI with the next Industrial Revolution: a ground-shifting change in how and where work is done, what skills are needed, and what jobs exist.

Coincidentally, that same morning, the CHRO of a major investment bank downplayed the impact of AI, telling me it reminded him of the introduction of Excel.

It was useful. New. But just as bookkeepers in the ’80s didn’t lose their jobs to Excel, he insisted, neither will we; the work will simply shift. Nothing to get worked up about.

That perspective can feel reasonable at a distance. It lands very differently for teams experiencing rapid change—and for leaders quietly questioning how they’ll stay relevant in this new era.

That contrast captures a broader disconnect I’m seeing between how leaders interpret AI—and how employees experience it day to day.

According to research from BCG and Columbia University, 76% of leaders think their employees are enthusiastic about AI. Only 31% of employees agree.

That’s not a measurement problem. It’s a leadership blind spot.

Organizations are moving fast on technology and standing still on what actually determines whether it works: how their people lead.

AI adoption isn’t a technology challenge. It’s a leadership transition most teams haven’t made yet.

The Leadership Inertia Facing Teams Today

Most companies don’t struggle to deploy AI. They struggle to absorb it.

On the surface, the rollout looks successful.

The tools are live.
The productivity expectations are set.

But then, a subtle friction begins to set in.

Decision-making slows down.
Accountability becomes fuzzy.
Errors start slipping into outputs.

This isn’t “resistance to change.” It’s leadership inertia: the gap between deploying technology and knowing how to lead through what it changes.

It happens when a leadership system built for a predictable world is suddenly asked to manage a world of generative outputs and shifting boundaries.

When leaders believe their teams are more than twice as enthusiastic about AI as they actually are, friction is inevitable.

A leader I’ll call Alex told me something recently that’s stayed with me. His team had rolled out a sophisticated AI tool months earlier.

On paper, it was working.

Adoption was high. The dashboards looked great.

Then, in passing, one of his direct reports mentioned that the team was spending more time QA-ing the tool’s output than it would have taken to just do the work themselves.

Alex had no idea — nobody had told him.

Not because they were hiding it, but because nobody had asked.

The rollout conversation had been about what the tool could do, not about what it was actually like to use.

This is what clarity collapse looks like in practice.

It’s not loud. It’s not a formal complaint.

It’s a quiet gap between the story leaders tell themselves about how AI is going, and the story their people would tell if anyone asked.

When people are confused or quietly working around a system, they don’t escalate; they just absorb. They build their own workarounds and stop trusting the rollout.

When you can’t see what’s actually happening, you can’t course-correct, and performance suffers.

The silence Alex experienced is often a symptom of a much larger leadership risk: accountability drift.

For example, if AI summarizes a critical report and the conclusion turns out to be wrong, who is responsible for the interpretation? If AI closes a low-priority service ticket and the customer cancels their account two weeks later, who owns that outcome?

Everyone is participating. No one owns the outcome.

If you looked at your most important AI initiative right now, would it be clear who owns the outcome?

The 3C Filter

The difference between teams that accelerate and those that stall isn’t the quality of their algorithms. It’s the clarity of the context they operate in.

BCG’s 10-20-70 Principle underscores a pattern I see daily.

10% of success comes from the algorithm and 20% from the technology, but 70% comes from the people and the process.

Most leaders are over-indexing on that first 30%.

High-performing teams, however, focus on the 70% — specifically on creating decision clarity, shifting humans to higher-value work, building performance resilience, and establishing guardrails for scale.

They use AI for what it’s good at: automating repetitive processing, detecting patterns, and making low-risk decisions.

They put humans in charge of everything else: the final judgment on important calls, ethical tradeoffs, and context-heavy tasks.

To move past inertia, senior leaders must stop asking “How can we use this?” and start asking, “How does this change who decides and who is accountable?”

I encourage leadership teams to apply the 3C Filter to decide what stays human:

  • Consequence: Is the decision high-risk or hard to reverse?

  • Context: Does the choice depend on nuance and history?

  • Character: Does it reflect our values or require relational judgment?

If the answer to any of these is yes, keep the decision human.

The competitive advantage won’t belong to the organization with the most bots. It will belong to the leaders who can absorb error without losing momentum.

Remember that CEO who pulled me aside after my keynote?

His fear wasn’t about the technology. It was about the vacuum technology creates when leadership doesn’t recalibrate its strategy around it. When we don’t design the new operating model, the technology designs it for us.

The organizations that struggle over the next few years won’t be the ones that missed a technology trend. They’ll be the ones reacting to the shift instead of designing for it.
