McKinsey's 2026 State of Organizations report surveyed over 10,000 senior executives across 15 countries. The headline finding, buried under careful corporate language, is quietly devastating: 88% of organizations are experimenting with AI, yet 81% report no meaningful impact on the bottom line. Most leaders know something tectonic is happening. Almost none of them know what to do about it.
The Gap Between Knowing and Becoming
There's a particular kind of organizational paralysis that sets in when a tectonic shift arrives too fast to process. Leaders sense the ground moving. They fund pilots. They form committees. They hire a Chief AI Officer. They nod along to conference presentations about "the agentic future." And yet — nothing fundamentally changes.
McKinsey calls this the failure of a "piecemeal approach." I'd call it something older: the failure of identity. Organizations are not failing at AI adoption because they lack tools or budgets or talent pipelines. They are failing because they are trying to graft a new paradigm onto a self-concept that was built for a different era.
The firm — as a legal, economic, and psychological structure — was invented to solve a specific coordination problem: how do you align the labor of many strangers toward a shared productive purpose when information is scarce, trust is local, and communication is slow? Hierarchy, departments, job descriptions, performance reviews, org charts — all of these are technologies. Clever ones, historically. But technologies built for a world of scarcity, friction, and opacity.
That world is gone. And the firm that ran on it is now running on legacy infrastructure.
Three Forces, One Diagnosis
McKinsey identifies three "tectonic forces" reshaping organizations: technology disruption, economic fragmentation, and workforce transformation. They're right about the forces. But the diagnosis misses the deeper pattern.
These three forces are not separate phenomena converging at an inconvenient moment. They are three expressions of the same underlying shift: the collapse of scarcity as the organizing principle of economic life.
When intelligence becomes abundant, when coordination becomes cheap, when knowledge can be replicated at zero marginal cost — the entire logic of the firm inverts. You no longer need a hierarchy to allocate scarce managerial attention. You no longer need departments to contain specialized knowledge. You no longer need a building to coordinate labor. The transaction costs that made the firm necessary in the first place have approached zero.
The organization was always a workaround. A patch on a world where information was scarce and trust was local. AI doesn't disrupt the firm — it reveals that the firm was always a temporary solution to a permanent coordination problem.
This is not a technology story. It's an ontology story. The question isn't "how do we adopt AI?" The question is: "what does organizing human effort look like when the old constraints dissolve?"
McKinsey gestures at this when they distinguish "AI Pioneers" from the pack — noting that among pioneers with a "clear vision for AI's future impact," nearly 90% of leaders actively champion adoption. But what does it mean to have a clear vision? It means you've resolved the identity crisis. You know what you're becoming, not just what tool you're deploying.
What Leaders Get Wrong About Productivity
The report's productivity chapter is its most honest. Two-thirds of executives see their organizations as overly complex and inefficient. Forty-three percent cite productivity growth as their top priority. And yet — "traditional remedies relying on structural redesigns, cost cuts, and flatter hierarchies are achieving diminishing returns."
Here is the fundamental error: productivity is being measured against the wrong baseline. When leaders say they want "more productivity," they usually mean: more output from the same people doing roughly the same work. But in an age of AI agents, the question isn't how to optimize the existing workflow — it's whether the workflow itself should continue to exist.
The insight from the Intergraph thesis is relevant here: context is money. The productivity ceiling isn't hit because people work too slowly — it's hit because the information flows that coordinate work are poorly structured. Most organizations are not information systems. They are information labyrinths, where knowledge is fragmented across teams, buried in email threads, locked in the heads of people who happen to have been in the right meeting.
A genuine productivity transformation doesn't reorganize the people. It rebuilds the context graph — the living, shared memory of what the organization knows, what it intends, what it has learned, and what it owes. That's what AI-native infrastructure actually means. Not a chatbot on top of your old systems. A new substrate for organizational intelligence.
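To make the idea concrete, here is a minimal sketch of what treating organizational context as a durable, structured, queryable asset might look like. Everything here is illustrative — the names `ContextGraph`, `Node`, `query`, and `related` are hypothetical, not drawn from the report or any specific product:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One unit of organizational context: a fact, decision, or commitment."""
    id: str
    kind: str     # e.g. "fact", "decision", "commitment"
    text: str
    source: str   # where this knowledge came from (a person, meeting, document)

@dataclass
class ContextGraph:
    """A toy 'context graph': shared organizational memory as structured data."""
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (from_id, relation, to_id)

    def add(self, node: Node):
        self.nodes[node.id] = node

    def link(self, src: str, relation: str, dst: str):
        self.edges.append((src, relation, dst))

    def query(self, kind: str):
        """Everything the organization 'knows' of a given kind."""
        return [n for n in self.nodes.values() if n.kind == kind]

    def related(self, node_id: str):
        """Follow outgoing edges from one piece of context to its dependencies."""
        return [(rel, self.nodes[dst]) for src, rel, dst in self.edges
                if src == node_id and dst in self.nodes]

# A decision and the fact it depends on, linked instead of buried in a thread.
g = ContextGraph()
g.add(Node("d1", "decision", "Sunset the legacy billing system by Q3", "exec offsite"))
g.add(Node("f1", "fact", "Top-20 customers still invoice via the legacy system", "sales CRM"))
g.link("d1", "depends_on", "f1")

print([n.id for n in g.query("decision")])  # → ['d1']
print(g.related("d1")[0][0])                # → depends_on
```

The point of the sketch is the inversion it implies: an agent acting on this structure can answer "what does this decision depend on, and who learned it?" without anyone having been in the right meeting.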
The Human-Agent Collaboration Myth
McKinsey dedicates considerable attention to "humans and AI agents: building a new world of collaboration." It's one of their nine organizing shifts. And it's mostly right — but it's framed in a way that keeps leaders stuck.
The framing is: we have humans, we have AI agents, how do we get them to work together well? This is the wrong question. It assumes the agent is a tool to be integrated into an existing human workflow. Like asking, in 1995, how to get employees to collaborate better with email.
The answer was never "teach people to email better." The answer was: email changes the nature of work itself. Some work that used to exist stops existing. New work emerges that was previously impossible. The entire organizational surface area shifts.
The same is true of AI agents, but at a far more fundamental level. The question is not how humans and agents collaborate. The question is: what is the work that only humans should do — and what does that mean for how we organize, compensate, develop, and lead people?
McKinsey's data points at this obliquely: "around 75% of current roles will need reshaping with new skill mixes that combine greater technological fluency with stronger social, emotional, and higher-cognitive capabilities." But reshaping roles is still the old paradigm. The bolder move is to ask: what is the organizational form that emerges when the agent handles the transaction and the human holds the relationship, the judgment, and the meaning?
The Four Moves That Actually Matter
Most AI transformation roadmaps give you a list of tools and a change management plan. What follows is not that. These are architectural moves — shifts in how you think about what an organization is and does.
- Rebuild the context graph, not the org chart. The most valuable thing any organization possesses is its accumulated context: the hard-won knowledge of what works, what customers need, what this team is actually good at. Most firms let this rot in documents no one reads and institutional memory that walks out the door when people leave. The first move is to treat organizational context as a first-class asset — durable, structured, queryable, and increasingly owned by agents that can act on it.
- Shift from compliance to sovereignty. The old firm ran on compliance — people executing within defined roles and processes. The new firm runs on sovereignty — people and agents operating with genuine autonomy within clear principles. McKinsey's data on "reflective leadership" hints at this: leaders who engage in regular self-reflection are 30% more confident in their organization's adaptability, and significantly more likely to champion AI adoption. That's because sovereignty starts internally. You can't build an organization of sovereign agents if the leader is still running on inherited scripts.
- Design for infinite games, not quarterly cycles. The most pernicious legacy of industrial-era management is the finite game mindset — optimize for this quarter, hit this milestone, win this market. Finite games have exits and winners. The coming economy of AI agents and distributed intelligence is fundamentally an infinite game: the goal is to keep value flowing, to maintain participation, to compound contribution. This changes how you think about incentives, talent, customers, and capital — everything.
- Invest in human depth, not human breadth. The instinct, when AI takes over more tasks, is to "upskill" — teach everyone a little more about AI, add prompt engineering to job descriptions, run workshops on responsible AI use. This is necessary but not sufficient. The deeper investment is in the distinctly human capabilities that AI cannot replicate: moral judgment, contextual wisdom, genuine relationship, creative synthesis, the ability to hold uncertainty without collapsing into false certainty. These are not soft skills. In the age of AI, they are the hard skills.
On Diversity and the Next Economy
McKinsey's finding on D&I deserves more than a chapter: 90% of global leaders still see it as a priority, and organizations with strong belonging scores see a 56% improvement in job performance and a 50% drop in turnover risk.
But here's the framing that gets lost: in an economy increasingly mediated by AI agents, the organizations that will be most resilient are those with the widest range of perspectives actively shaping how those agents are designed, deployed, and governed. Monocultures — of thought, background, experience — produce brittle systems. In finite games, you can get away with it. In infinite games, you cannot.
The shift from diversity as compliance to diversity as architecture is one of the most underappreciated moves available to any leadership team right now. Not "we need more diverse representation" — though that remains true — but "we need diverse cognitive architectures building our systems, or those systems will encode the blindspots of whoever built them at scale."
The Organization Is a Protocol Now
Here is what I believe the next decade will reveal: the firm, as we know it, is giving way to the protocol. Not a technology protocol — a coordination protocol. A set of principles, incentives, and shared context that allows humans and agents to self-organize around meaningful work without requiring a hierarchy to hold them together.
McKinsey's report is a sincere attempt to help incumbent leaders navigate an inflection point. It is full of useful data and genuine insight. But it is, by nature, written from within the paradigm it is describing the end of. It assumes the organization persists. It asks how the organization adapts. It does not ask what comes after the organization.
That is the question worth asking. And the answer — if you follow the logic of abundant intelligence, zero-marginal-cost knowledge, and human sovereignty — looks less like a reorganized firm and more like a living network of contexts, agents, humans, and games: an Intergraph.
The organization hasn't disappeared. It has outgrown its container. Everyone has already left the building. The question is only whether you're building the next thing — or reorganizing the furniture in the last one.