If AI can write emails, analyze data, and generate code, then machines now outperform humans in nearly every measurable metric: speed, productivity, and task completion. By those standards, humans appear to be losing: jobs, dignity, and worth. A recent management study found that AI helped people complete 12% more tasks 25% faster, while delivering incorrect answers 19% of the time.
This statistic highlights a critical tension: we’re optimizing for efficiency while accepting a growing error rate. If we prioritize motion over direction, we risk resembling Wile E. Coyote—sprinting forward only to realize, too late, that the ground beneath us has vanished.
The reason this issue matters now is fundamental: AI and human intelligence operate differently. Despite its name, AI isn’t intelligent so much as recursive. Like “social media,” which isn’t truly social, the label oversells: AI identifies patterns in existing data, accelerates established processes, and reinforces decisions already made. It can compose a song based on your story or build a basic website, but it cannot imagine what doesn’t yet exist. It cannot dissent. It cannot empathize. It cannot sense when a decision lacks integrity.
Humans, in contrast, do more than process the world—we reimagine it. We confront unsolved problems, grapple with contradictions, and persist until we create something novel and useful. This generative capacity—to imagine rather than replicate—fuels every meaningful innovation. Yet our management systems have never learned to recognize it.
The 100-Year-Old Crisis of Performance Reviews
This isn’t a new problem; it’s a century-old one that AI’s rise has suddenly turned into a crisis.
From Scientific Management to Stack Ranking
In the early 1900s, Frederick Taylor introduced scientific management: the idea that human work could and should be standardized, measured, and optimized like any industrial process. Workers became inputs; efficiency, the output.
Soon after, the U.S. Army formalized rating systems to rank soldiers against one another. Designed for military hierarchy and deployment—not development—these systems prioritized sorting over growth. When wars ended, corporations adopted both the logic and the form. By the 1950s, the annual performance review had become a corporate staple—not to develop people, ideas, or innovation, but to sort, rank, and rate.
Jack Welch later cemented this approach at GE. He implemented a system where the top 20% were rewarded and the bottom 10% were fired annually. What spread globally wasn’t just a practice—it was a premise: that humans are meant to be ranked, not linked.
What many don’t realize is that Welch’s stack ranking was never about improving performance. It was a tool for managing shareholder perceptions, one of several mechanisms used to cut costs while projecting growth.