Why Innovation Feels Harder in 2025 — and What Great Leaders Do Differently


How Leaders Build Learning Organizations When Trust, Time, and Attention Are Under Pressure

By Jeffrey V. Cortez

Founder & CEO, 2Nspira LLC

Executive Master’s in Technology Management, Columbia University, class of 2025


Why This Matters Now — A 2025 Reflection


In 2025, leadership feels different.

Not because leaders lack ideas, tools, or ambition—but because the pace of change has outgrown the way most organizations are designed to learn.


AI is being embedded into everyday platforms, often by default.

Digital transformation is no longer episodic—it’s continuous.

Teams are fatigued by overlapping change.


Customers are more sensitive than ever to trust, privacy, and broken promises.


Leaders are caught in the middle—expected to innovate faster, adopt responsibly, protect trust, retain talent, and deliver results in increasingly compressed cycles.


I consider myself a lifelong student of leadership, and over the past few years—particularly during my Executive Master’s studies at Columbia University—I’ve spent a great deal of time studying how the world’s most enduring innovators lead in moments like this.

We examined organizations such as Apple, Pixar, Google, Amazon, and Toyota, along with institutions operating under intense regulatory, ethical, and societal pressure. We studied not just what they built, but how they thought, how they learned, and how they protected trust while innovating at scale.


At the same time, I was applying these ideas in the real world—inside global nonprofits, educational institutions, and complex organizations navigating data responsibility, organizational change, and the rapid expansion of AI into everyday work.


What became clear is this:

The defining leadership challenge of 2025 is not adopting new technology.
It is building organizations that can learn faster than the world is changing—without burning out their people or breaking trust.

That’s what this article is about.


It’s not a manifesto or a prediction.


It’s a synthesis of what I’ve learned—from research, from great leaders, and from lived experience—about how innovation actually happens when the stakes are high, the tools are powerful, and the margin for error is thin.


If you’re leading in 2025 and feeling the pressure to innovate responsibly, sustainably, and humanely, I hope this perspective is useful.


Why Innovation Is Harder Now—Not Easier

The common narrative suggests innovation should be easier today.

We have better tools, more data, and unprecedented computing power.

Yet many organizations feel more stuck than ever.

The reason is simple:

The pace of change has outstripped the way most organizations are designed to learn.

Real-world example

In a global nonprofit I previously worked with, leadership approved multiple initiatives in parallel: data centralization, cloud migration, and workflow automation. Each project made sense individually.

What failed wasn’t the technology.

It was the organization’s learning capacity.

Teams were executing constantly, but no one had time to pause, reflect, or adjust. Risks surfaced late. Feedback loops collapsed. Innovation slowed—not because of resistance, but because learning was treated as a luxury instead of infrastructure.

AI didn’t just accelerate workflows—it exposed weaknesses in leadership psychology:


  • fear of uncertainty,

  • intolerance for ambiguity,

  • and cultures optimized for execution, not learning.


Great innovators understand that innovation in 2025 is no longer about invention alone.

It is about absorption, sense-making, and trust preservation.



The Psychological Traits That Matter Most Right Now

1. Intellectual Humility in a World of Overconfidence

In an era of AI-generated certainty, great innovators are skeptical of easy answers.

As Adam Grant argues in Think Again, high-performing leaders don’t cling to being right—they cling to learning. They treat strategies, tools, and assumptions as hypotheses.

This trait matters more now than ever, because:


  • AI outputs can appear confident while being wrong,

  • dashboards can obscure human context,

  • and speed can mask fragility.


Real-world example

A good friend of mine at Apple shared that teams routinely prototype multiple approaches in parallel. The goal isn’t to win the argument early—it’s to converge on truth faster.

In my own work, an onboarding redesign initially “looked right” on paper. Instead of locking it in, we tested it with real users. Within days, edge cases surfaced that would have caused months of rework had we pushed forward with confidence instead of curiosity.

Certainty feels efficient.

Humility is actually faster.



2. High Standards Without Fear

Psychological safety has become a buzzword—but in 2025 it is a survival skill.

Amy Edmondson’s research and Ed Catmull’s experience at Pixar converge on a critical insight:

Fear suppresses truth faster than it suppresses failure.

In high-pressure environments, people stop speaking up long before systems break.


Real-world example

At Pixar, Braintrust sessions are famously intense. Ideas are dismantled publicly. Weaknesses are exposed early. But no one is punished for surfacing problems.

In a regulated organization I supported, innovation stalled because prior “failed pilots” had resulted in blame. Progress resumed only after leadership explicitly said:

“If an experiment fails for the right reasons, no one will be penalized.”

Once fear lifted, teams began surfacing risks early—before they became incidents.

Fear doesn’t prevent failure.

It delays the truth.



3. Comfort With Ambiguity When Answers Are Incomplete

AI, automation, and platform complexity have made many decisions probabilistic rather than deterministic.

Great innovators tolerate ambiguity long enough to:


  • understand second-order effects,

  • test safely,

  • and avoid irreversible mistakes.


Real-world example

Amazon’s “working backwards” process deliberately delays solutioning. Teams start with the press release to clarify intent before committing resources.

In my own work with AI tools, we resisted pressure to deploy broadly. Instead, we paused to ask:


  • What decisions should never be automated?

  • What data should never be touched?

  • What happens if this is wrong?

That pause prevented ethical and reputational risk—and led to more targeted, trusted adoption later.

Ambiguity is uncomfortable.

False certainty is dangerous.



How Innovation Leadership Must Evolve

From Control to Context

In volatile environments, control slows learning.

Drawing on Kotter’s work and observed practice at Apple and Amazon, great innovators lead by:


  • clarifying purpose,

  • defining constraints,

  • and granting teams autonomy within guardrails.


Real-world example

In a cross-functional innovation team I led, progress accelerated once leaders stopped prescribing solutions and instead clarified:


  • the problem to solve,

  • what could not be compromised,

  • and who owned decisions.


Ownership replaced dependency almost immediately.



From Heroics to Systems

Innovation in 2025 cannot rely on heroic individuals.

Sustainable innovation emerges from:


  • cross-functional collaboration,

  • disciplined experimentation,

  • and systems that reward truth over optics.

Real-world example

Toyota’s continuous improvement culture doesn’t rely on breakthroughs. It relies on thousands of small, safe improvements surfaced by people closest to the work.

When we shifted from celebrating “big wins” to documenting learnings from small experiments, patterns emerged that informed better strategic decisions than any single idea ever had.



From Speed to Reversibility

Eric Ries’ Lean Startup framework is often reduced to speed. In reality, its power lies in reversibility.

Great innovators ask:


  • Can we undo this?

  • Can we learn without harm?

  • Can we protect trust while experimenting?


In my work at 2Nspira, where I support organizations as a strategic advisor, sandboxing and reversibility are non-negotiable design principles. Whether we’re modernizing core systems, restructuring how data flows across the organization, or introducing AI-assisted decision support, every initiative is deliberately designed so learning happens safely—and nothing irreversible reaches production without earned confidence.


Experimentation in an Age of Ethical Risk

Innovation today carries ethical weight.

AI systems can amplify bias.

Data misuse can destroy trust overnight.

Automation can quietly erode human agency.

Great innovators respond by:


  • setting explicit ethical boundaries,

  • defining what must never be automated,

  • and keeping humans meaningfully in the loop.


Real-world example

Many organizations rushed AI features into production only to pull them back after trust issues surfaced. By contrast, teams that experimented in sandboxes with clear guardrails learned faster without public fallout.


Innovation that ignores ethics is not bold.

It is reckless.



Client Obsession When Trust Is Fragile

Customer obsession in 2025 is no longer about delight alone.

It is about:


  • consent,

  • transparency,

  • and restraint.


Amazon’s “working backwards” philosophy remains relevant—but it now includes questions like:


  • Should we do this?

  • What data are we touching?

  • How would this feel if we were the customer?


Real-world example

In an onboarding redesign, we removed steps instead of adding automation. Completion rates improved—not because the system was smarter, but because the experience felt respectful.


Trust is the most valuable innovation asset—and the easiest to lose.



What I Look for When Building Innovation Teams Today

Internal Ingredients I Must Design


  • Psychological safety with accountability

  • Clear problem ownership

  • Explicit experimentation guardrails

  • Cross-functional representation

  • Protected time for learning

  • Visible leadership sponsorship


External Realities I Must Navigate


  • Organizational risk tolerance

  • Executive incentive structures

  • Regulatory and ethical constraints

  • Legacy systems and institutional memory

  • Market and technology volatility


Great innovation leaders don’t ignore these forces.

They design around them.



A 2025 Closing Reflection

Innovation today is not about being the fastest to adopt new technology.

It is about being the most thoughtful under pressure.

The leaders who will succeed in 2025 and beyond are those who can:


  • create learning without chaos,

  • accelerate without burning out their people,

  • and innovate without breaking trust.


That is not accidental.

It is the result of intentional leadership design—

rooted in psychology, discipline, and respect for the human system.

That is the innovation leadership I study, practice, and build.

If you’re leading transformation right now, the question isn’t “what should we adopt next?” — it’s “what must we design so our organization can learn without breaking trust?”


References & Frameworks


Books (Foundational Works)


  • Edmondson, Amy C. The Fearless Organization: Creating Psychological Safety in the Workplace for Learning, Innovation, and Growth.

  • Catmull, Ed. Creativity, Inc.: Overcoming the Unseen Forces That Stand in the Way of True Inspiration.

  • Christensen, Clayton M. The Innovator’s Dilemma.

  • Christensen, Clayton M., Jeff Dyer, and Hal Gregersen. The Innovator’s DNA.

  • Rumelt, Richard P. Good Strategy / Bad Strategy.

  • Ries, Eric. The Lean Startup.

  • Kotter, John P. Accelerate: Building Strategic Agility for a Faster-Moving World.

  • Grant, Adam. Think Again: The Power of Knowing What You Don’t Know.

  • Marquet, L. David. Turn the Ship Around!

  • Bryar, Colin, and Bill Carr. Working Backwards: Insights, Stories, and Secrets from Inside Amazon.

  • Dweck, Carol S. Mindset: The New Psychology of Success.




Academic & Practitioner Articles


  • Edmondson, Amy C. “Psychological Safety and Learning Behavior in Work Teams.” Administrative Science Quarterly.

  • Edmondson, Amy C. “Strategies for Learning from Failure.” Harvard Business Review.

  • Hill, Linda A., et al. “Collective Genius.” Harvard Business Review.

  • Google. “Project Aristotle: Understanding Team Effectiveness.”

  • Westerman, George, et al. “Digital Transformation: A Roadmap for Billion-Dollar Organizations.” MIT Sloan Management Review.

  • Christensen, Clayton M. “What Is Disruptive Innovation?” Harvard Business Review.




Frameworks & Operating Models


  • Psychological Safety Model (Amy Edmondson)

  • Lean Startup / Build–Measure–Learn Loop (Eric Ries)

  • Working Backwards (Amazon)

  • Dual Operating System (John Kotter)

  • Cynefin Framework (Dave Snowden) – complexity-aware experimentation

  • Toyota Production System / Kaizen – continuous improvement

  • First-Principles Thinking – applied in product and system design

  • Human-in-the-Loop AI – responsible AI deployment principle




Organizations Studied & Referenced


  • Apple (product development, privacy-first innovation)

  • Pixar (creative culture, Braintrust)

  • Amazon (customer obsession, decision discipline)

  • Toyota (continuous improvement, systems thinking)

  • Google (team effectiveness, experimentation culture)

