When AI Took the Tasks, Human Judgment Became the Work


Most organizations are celebrating what AI has made faster.

Very few are asking what it’s quietly making weaker.



About the Author

I’ve been fortunate to learn alongside students, colleagues, and communities at the intersection of technology, leadership, and human development.


I’m grateful for the opportunity to have taught AI courses and to have created the Junior AI Innovation Program at The School at Columbia University. Alongside that work, I’m involved in 2Nspira and Open Goal Soccer — two very different settings that have taught me the same lesson: people grow best in environments designed with care, trust, and intention.


I also completed an Executive Master’s in Technology Management at Columbia University, which deepened my interest in how technology, leadership, and human systems interact in complex, real-world contexts. That curiosity carries into my writing, including my book Trust Is the Operating System, where I reflect on why trust, judgment, and human responsibility matter even more as systems become more automated.


Across classrooms, organizations, and communities, the work has never been about tools or hype. It’s been about asking better questions — especially what it truly takes for humans to thrive in AI-rich environments.


What I’ve seen, again and again, is that while AI can take on routine, repetitive, and mechanical work, the most meaningful human work moves in the opposite direction: toward creativity, judgment, strategic thinking, and leadership.

The question has never really been how to use AI.

It’s been how to design human roles that become more valuable because AI exists.

What follows is drawn from those shared experiences — and from what I’m continuing to learn as these changes unfold in real time.






No One Announced the Moment It Happened

There was no all-hands meeting.

No memo titled “The Nature of Work Has Changed.”

It just… shifted.

One day, your calendar rearranged itself.

The next, your meeting notes were waiting before the meeting ended.

Soon after, the first draft — the good one — wasn’t yours anymore.

At first, it felt like progress.

Then, slowly, something else disappeared.



The Moment People Don’t Talk About

Picture this.

You’re sitting in a meeting you’ve attended a hundred times before.

The slides are sharp.

The numbers line up.

The AI-generated summary is already open on everyone’s screen.

Someone asks the question that matters:

“What should we do next?”

There’s a pause.

Not the awkward kind.

The quiet kind.

People glance back at the slide — as if the answer might appear if they stare long enough.

Finally, someone says:

“Well… this is what the system recommends.”

No one challenges it.

No one builds on it.

No one asks what happens if it’s wrong.

And suddenly you realize:

The work got faster.

But the thinking got thinner.



The Illusion of Progress

Across organizations, AI is doing exactly what it promised:

  • Removing friction

  • Accelerating output

  • Cleaning up execution

But here’s the uncomfortable truth leaders rarely say out loud:

Efficiency is not the same as effectiveness.

Speed is not the same as direction.

In fact, many teams today are more productive —

and less capable of navigating change.

AI improved execution.

It did not automatically improve judgment.

This isn’t a future-of-work problem.

It’s already happening — quietly, unevenly, and largely unexamined.





Story 1: The Team That Didn’t Break — Until It Needed to Bend

A mid-sized organization rolls out AI across operations.

Reports run automatically.

Processes document themselves.

Scheduling conflicts disappear.

Leadership is thrilled.

Then the environment shifts.

A vendor fails.

Customer behavior changes.

Two priorities collide.

And the team freezes.

They don’t argue.

They don’t experiment.

They wait.

“Let’s see what the system updates.”

“Do we have enough data yet?”

“Should we escalate this?”

Before AI, the team debated more — but adapted.

After AI, they moved faster — but only when reality matched the model.

They didn’t lose intelligence.

They lost resilience.

This pattern aligns with decades of research on automation bias: when systems perform well, humans gradually disengage judgment — until conditions change and judgment is suddenly needed most.



Story 2: The Manager Who Didn’t Lose Capability — Just Conditioning

Now picture a manager you respect.

They adopt AI early.

Performance notes are drafted automatically.

Planning templates fill themselves in.

Emails write themselves.

Finally — breathing room.

But over time, something subtle changes.

They stop writing out their reasoning.

They stop sitting with ambiguity.

They stop practicing tradeoffs.

Then a hard moment arrives.

A people decision.

A values conflict.

A situation with no clean answer.

The hesitation is real.

Not because the manager forgot how to think —

but because they hadn’t practiced thinking that way in a while.

Judgment, like muscle, weakens without resistance.



What the Brain Is Quietly Doing

This isn’t a cultural failure.

It’s neurological.

Neuroscience shows that the brain adapts relentlessly to what it practices:

  • Repetition strengthens neural pathways

  • Unused pathways weaken

  • Tolerance for complexity must be exercised to remain available


Routine tasks train one kind of brain.

Strategic thinking trains another.

When AI removes repetitive cognitive work, it doesn’t “upgrade” humans to strategy.

It removes the gym where strategy used to get trained.

Free time does not produce judgment.

Deliberate friction does.



The Hidden Cost of Frictionless Work

AI is exceptional at reducing uncertainty:

  • Clear recommendations

  • Confident language

  • Optimized outcomes


But human judgment is forged in:

  • Disagreement

  • Incomplete information

  • Competing values

  • Consequences that unfold over time


When friction disappears:

  • Clarity gets mistaken for truth

  • Confidence replaces reasoning

  • Responsibility subtly shifts away from humans


This is why decision quality can decline

even as output quality improves.



The Leadership Mistake That’s Becoming Expensive

Most organizations are investing heavily in:

  • AI tools

  • Training on usage

  • Productivity metrics


Very few are investing in:

  • Judgment development

  • Decision design

  • Strategic thinking as a distributed capability


Strategy is often treated as something senior leaders do.

In reality, this pattern shows up everywhere: in executive meetings, product teams, classrooms, and boardrooms.

Strategy is something organizations practice — or lose.


So people ask:

“What’s my role now that AI does the work?”

The better question is:

What kind of thinking does my organization now require from humans — on purpose?





How Strategic Humans Are Actually Developed

This is not reskilling.


It’s reconditioning how humans think when machines execute.

These principles come from decades of innovation research, design thinking, systems problem-solving, and leadership education — including the work I taught in AI-focused courses and programs.


1. Train People to Frame Problems — Not Just Solve Them

Strategic value starts before solutions.

People must practice asking:

  • What problem are we really solving?

  • Who defines success?

  • What assumptions are quietly shaping our options?


This shifts thinking from execution to sense-making.



2. Rebuild Curiosity Through Design Thinking

Design thinking isn’t about sticky notes.

It’s about retraining curiosity.

It teaches people to:

  • Explore before deciding

  • Test assumptions safely

  • Learn without certainty


AI answers quickly.

Humans must learn to ask better questions.



3. Make Tradeoffs Explicit — and Normal

AI optimizes within constraints.

Humans decide when values collide.

Teams must practice:

  • Choosing between imperfect options

  • Naming second-order consequences

  • Standing behind decisions under uncertainty


Judgment grows when reasoning is visible — not hidden.



4. Apply Growth Mindset to Decisions, Not Just Skills

Growth mindset isn’t optimism.

It’s reflection:

  • What did we believe?

  • Where were we wrong?

  • What would we change next time?

This is how adaptive expertise forms.



5. Design Environments That Invite Thinking

Strategic thinking doesn’t emerge from rushed status updates or perfectly optimized workflows.


It requires environments that make thinking possible — and safe.

That means:

  • Time to explore scenarios, not just report progress

  • Psychological safety to disagree and challenge assumptions

  • Explicit permission to question AI outputs, not defer to them

  • Shared decision frameworks that clarify who decides what — and why


People don’t think better because they’re told to.

They think better because the environment allows it.


The crucial question is this:

Are we intentionally creating environments that strengthen human judgment — or unintentionally designing ones that slowly weaken it?




The Question That Will Separate Organizations

AI didn’t remove the need for humans.

It removed the excuse not to think.

As automation accelerates, the real question becomes:

Are we building organizations that think more deeply —

or just move faster without direction?

AI took the tasks.

Humans must take the strategy.

And the organizations that understand this early

won’t just move faster.

They’ll know where they’re going —

and why.




Where This Conversation Continues

If you’re thinking about how to retrain your people — not just to use AI, but to think, decide, and lead more strategically alongside it — I invite you to follow this newsletter and join the conversation.

In future issues, I’ll continue exploring how organizations can design roles, environments, and leadership practices that help humans become more valuable in an AI-powered world.


