Leadership

Leading in the Age of AI

AI doesn’t make leadership irrelevant. It makes the quality of leadership more visible than ever.

Here’s what I keep hearing from leaders: “I need to figure out AI before I can lead it.”

That’s backwards. You don’t need to understand the technology to lead the transformation. You need to understand people. The leaders who are succeeding right now aren’t the ones who can explain how large language models work. They’re the ones who can create the conditions for their teams to experiment, learn, and adapt.

And that’s where the real tension lives. Organizations have spent the last two years focused on which tools to deploy, how to train people on prompts, and how to measure adoption. None of that is wrong, but it’s also not where transformation comes from. The tools are the easy part. The leadership is the work. As the technology becomes more powerful and more accessible, what’s required from leaders becomes more fundamentally human — and for many, that’s the part they’re least prepared for.

Why This Matters More Now: The Compressed Feedback Loop

The fundamentals of leadership don’t change. Direction, trust, decision-making — these still matter. What changes is the speed at which all of this is tested.

AI compresses feedback loops. Ideas that would have taken months to prototype now take days. Decisions that once required weeks of analysis can be made in hours. Feedback that used to arrive in annual reviews now shows up in real-time data. That compression is powerful, but it also puts pressure on every part of how an organization operates:

  • How fast decisions get made — and by whom
  • Whether teams have permission to act on new information
  • How much space leaders protect for strategic thinking
  • Whether the organization can reprioritize when conditions change

A manager in a typical organization spends over half their time on administrative work — approvals, status updates, context switching, back-to-back meetings. There’s no headspace for strategic thinking, let alone leading through a transformation. Add AI to that equation and you get faster information flow into a system that’s already bottlenecked at the top. The speed doesn’t create new problems. It reveals existing ones faster than you can work around them.

That’s the real story of AI and leadership. It’s a stress test on three things that have always mattered: psychological safety, decision-making, and prioritization. Let’s take each one.

Psychological Safety

The trap: When someone experiments with AI and it doesn’t work, what happens next? In too many organizations, the answer is some version of risk to reputation or performance review. People learn fast. If experimentation carries professional risk, they’ll stick with safe, incremental uses — and the organization never gets past stage one of AI adoption.

How AI accelerates this: The compressed feedback loop means experiments are more visible, more frequent, and produce results faster. There’s less time to quietly course-correct before anyone notices. This makes psychological safety more important than ever, because the pace of learning required is higher.

What helps:

  • Clarity on expected outcomes and guardrails before the experiment starts
  • Early feedback loops so course corrections happen before consequences get real
  • Behavioral agreements within teams about how failure gets treated
  • Leaders who model learning rather than knowing — genuine curiosity, admitting what they don’t understand, making decisions anyway

Underneath all of this is something that deserves to be named: disruption anxiety. The white-collar workforce is experiencing change at a scale and speed that’s unprecedented. People are afraid — of irrelevance, of being asked to do more with less, of the unknown. Leaders who pretend this isn’t happening are working against the grain of what their people are feeling. The leaders who name the fear, make space for it, and move forward with their teams rather than ahead of them are the ones whose organizations adapt.

Decision-Making

The trap: Decisions flow up by default. A team member has a question about whether to use AI for a customer analysis. They could decide, but it feels safer to ask their manager. The manager could decide, but wants buy-in from leadership. Three layers of review for a decision that probably didn’t need any of them.

How AI accelerates this: New information emerges faster than centralized decision-making can process it. The information is at the edge of the organization, but the decision-making authority is at the center. That mismatch was always a drag. With AI compressing timelines, it becomes a bottleneck that’s impossible to ignore.

What helps:

  • Moving from “Who decides?” to “Which role decides?” — for every decision type, identify the role with the most relevant information, the most skin in the game, and the ability to live with the consequences
  • Radical clarity on objectives — you can push decisions down only if people know what they’re optimizing for
  • Genuine alignment through debate, not false consensus
  • The courage to let people make decisions you wouldn’t have made, and the adaptability to say “that didn’t work, let’s try something else” when an experiment produces new information

Prioritization

The trap: Every leader I work with talks about wanting ruthless prioritization. The pattern I see is that new priorities pile on while old ones linger, creating a sprawl where everyone is working hard but clarity is missing. This isn’t a character flaw — it’s a structural problem. Most organizations don’t have the systems or habits to say no, reprioritize, or let go of something that’s already in motion.

How AI accelerates this: AI makes it possible to optimize and execute faster than ever. That capability cuts both ways: you can beautifully optimize a process that no longer needs to exist. When strategic direction and tactical execution move at different speeds, teams end up executing brilliantly on yesterday’s priorities.

What helps:

  • Clarity on outcomes — not tasks, outcomes. What are we trying to achieve, and why does it matter more than the other things we could be doing?
  • Genuine alignment, which means debating tradeoffs until people can articulate why certain bets matter more
  • Treating the strategic plan as a starting hypothesis, not a contract — continuous iteration rather than annual planning cycles
  • Distinguishing between what’s truly strategic (direction-setting, capability building) and what’s tactical (how we execute this quarter)

What Leadership Maturity Looks Like

When I look at the leaders and teams navigating this well, a few things stand out. They’re not chasing every new capability. They understand that the technology will evolve faster than any organization can absorb. So instead of trying to keep up with every release, they focus on the three foundational areas where human leadership is irreplaceable:

  • Setting direction clearly — not just “we should use AI” but a genuine point of view on how AI changes the business, the strategy, and the value proposition
  • Making decisions well — pushing authority to the people closest to the problems, creating the conditions for fast, informed decisions rather than slow, consensus-driven ones
  • Creating the organizational conditions for people to move — psychological safety, role clarity, feedback loops, and the structures that let teams experiment and learn without waiting for permission

Everything else can be augmented or automated. Those three things cannot.

The leaders who are succeeding aren’t smarter than everyone else. They’re clearer. They’re braver about making decisions with incomplete information and then learning from the outcome. They treat their organization as a learning system rather than a machine that should run the same way forever.

That’s what AI tests in leadership. It tests whether you can think clearly under uncertainty, create safety while pushing toward growth, and push authority outward instead of hoarding it. Technical fluency matters far less than organizational fluency.

The fundamentals of good leadership were always there. AI just makes the difference between having them and not having them impossible to ignore.