We were wrong.
It is a simple admission, but a necessary one. A year ago, we looked at the map of technology and charted a clear course. After all, we all love a good prediction with compelling narrative power.
The future, we believed, was moving to the edge. In simpler terms, it meant performing AI calculations on gadgets in people’s pockets, rather than in distant data centers. It was a story that made perfect sense. Power would decentralize. Intelligence would move from monolithic data centers into the devices in our hands, our homes, our cars. We made some investment decisions based on this seemingly inevitable shift. We prepared for a world where our gadgets did their own thinking. And, it was not just a prediction: there was evidence aplenty.
Reality had other plans. A year later, we’ve had a humbling wake-up call. The evidence now points in the opposite direction. Instead of AI moving to the edge and the great decentralization arriving, the data center, that old bastion of the cloud, has not waned. The most advanced AI applications, particularly in the current generative AI era, continue to operate in massive cloud data centers. Our fancy phones and laptops, despite getting more powerful, mostly act as windows into AI models that live on server farms. Instead of becoming smarter brains, they have become slicker terminals. If anything, the center of gravity in computing has grown even heavier.
This is more than a missed business forecast. It is a profound lesson in the nature of innovation. We study history, we analyze trends, and we build logical frameworks to predict what comes next. But innovation is not a tidy, linear process. It is a chaotic, unpredictable force. It is defined by serendipity, by accidents, and by sudden, violent turns that make a mockery of our maps. The story of edge computing’s phantom arrival is a eulogy for a future that was supposed to be, and a humbling reminder that in the pursuit of what’s next, the most valuable asset is the willingness to admit you were wrong, and to avoid falling too in love with your own views of how the world must be.
The Edge of Hype: When Decentralization Seemed Inevitable
It is easy to forget the sheer force of the narrative. In late 2023 and early 2024, the promise of edge computing was not a niche technical idea; it was a palpable hum of excitement. The logic was clean and compelling. Moving artificial intelligence to the edge would solve the technology’s most pressing problems.
First, there was privacy. Processing data on your own device meant it never had to travel to a server owned by someone else. Your secrets would remain your own. Second was latency – the annoying delay between asking and getting an answer. With the thinking done locally, responses would be instantaneous, a crucial feature for everything from voice assistants and augmented reality to autonomous vehicles. Finally, there was cost and access. Why rent time on a remote supercomputer when your own device could do the work? This would democratize AI and make it reliable even without a perfect internet connection.
The market forecasts were sky-high. Analysts projected that the global edge AI market could grow from about $20–27 billion in 2024 to anywhere between $66 billion and a whopping $270 billion within a few years. A famous forecaster’s oft-cited prediction that three-quarters of enterprise data would be outside the cloud by 2025 became a rallying cry.
A wave of new gadgets and chips promised to bring AI to the edge. Leading phone makers began touting “AI phones,” and Microsoft even coined the term “AI PC” for a new generation of laptops. Devices like Samsung’s Galaxy S24 and Apple’s iPhone 15/16 came with beefed-up Neural Processing Units (NPUs) – special AI chips under the hood – and flashy demos of on-device AI features. Qualcomm’s CEO went so far as to claim their new Snapdragon X series processors could run not just small AI models but even large language models on-device. The message: your next laptop might not need the cloud to get AI work done.
Startups, too, jumped into the fray with imaginative hardware. Wearable AI assistants were a hot idea. In early 2024, a quirky little gadget called the Rabbit R1 made a splash at CES with an exciting keynote. This pocket-sized, bright orange device promised a voice-controlled AI companion that could do “just about everything your phone can do, only faster.” Around the same time, the much-hyped Humane AI Pin was showcased with the ambition to replace smartphones entirely by projecting AI-driven information onto your world. It really felt like we were on the cusp of a decentralization wave. With so many trends aligning (market demand for privacy, tech giants producing capable chips, exciting new devices, and a bit of historical analogy for good measure), the edge’s rise seemed not just feasible, but inevitable.
The Quiet Unraveling
Reality arrived not with a bang, but with a series of quiet, brutal disappointments. The revolution that was supposed to happen at the edge unraveled piece by piece.
The first to fall were the visionaries. The Humane Ai Pin and Rabbit R1, once darlings of the tech press, launched to disastrous reviews. They were meant to be the future, but they could barely function in the present. These gadgets became symbols not of a new paradigm, but of overpromise and under-delivery.
The disappointment then spread to the incumbents. Laptops with new “AI accelerator” chips from Qualcomm and others were released, but they offered little tangible benefit. In a scathing indictment, developers behind the popular AI toolkit ComfyUI placed the Qualcomm AI PC in their “F Tier” of hardware, noting bluntly that “Nothing works.” Microsoft, a key promoter of the concept, quietly downgraded its Windows Copilot feature on these new PCs, turning it back into little more than a web browser bookmark. The AI PC was a marketing sticker, not a functional leap.
The deepest disappointment, however, was in the smartphone. This was the edge device par excellence, a supercomputer in every pocket. Yet it, too, failed to become a true seat of intelligence.
Top-tier smartphones like Apple’s iPhone 16 and Samsung’s latest Galaxy look and work pretty much like the models from a few years prior. Yes, they tout AI-powered camera modes and smarter assistants, but there’s little truly new that runs independently on the device. All the major phone launches emphasized AI features, yet those features often amounted to things like slightly better autocorrect, AI image editing, or voice dictation – useful, sure, but hardly the seismic shift envisioned by edge computing evangelists.
Notably, many of these “AI features” still lean on the cloud. Apple’s vaunted Apple Intelligence gave iPhones some on-device smarts, but even Apple admitted that for more “sophisticated commands” the system quietly reaches out for “help from ChatGPT” or other cloud models. In other words, your iPhone might rewrite a text message offline, but if you ask it to compose a complex email or summarize a document, it’s likely pinging a server somewhere. Apple’s commitment to privacy means those requests are anonymized and encrypted, but they’re not happening on your phone’s CPU alone.
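To make that hybrid arrangement concrete, here is a rough sketch of the routing logic such a system implies – the function names, task list, and length threshold are our own illustration, not Apple’s actual APIs:

```python
# Illustrative only: these names and thresholds are ours, not Apple's.

SIMPLE_TASKS = {"rewrite_message", "correct_spelling", "suggest_reply"}

def run_on_device(task: str, text: str) -> str:
    """Stand-in for a small local model that fits the phone's NPU and memory budget."""
    return f"[on-device result for {task}]"

def run_in_cloud(task: str, text: str) -> str:
    """Stand-in for a request to a large hosted model (or a partner service)."""
    return f"[cloud result for {task}]"

def handle_request(task: str, text: str) -> str:
    """Keep short, bounded tasks local; hand long or open-ended ones to the cloud."""
    if task in SIMPLE_TASKS and len(text) < 2_000:
        return run_on_device(task, text)
    return run_in_cloud(task, text)

# A quick rewrite stays on the phone; summarizing a long document does not.
print(handle_request("rewrite_message", "running late, be there soon"))
print(handle_request("summarize_document", "lorem ipsum " * 5_000))
```

The particulars do not matter; the routing rule does. The device keeps whatever fits its memory and latency budget, and quietly defers the rest.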
The Gravity of the Center: Making the Edge Irrelevant
The failure of the edge was not simply a stall. It was a retreat. As the edge faltered, the center grew unimaginably powerful, expanding nearly 30% from already astronomical levels and creating a feedback loop that pulled even more of the world’s computational work back into the cloud. The pendulum of computing history, which many, including us, assumed would swing back toward a decentralized model, instead swung harder toward centralization.
The engine of this shift was a simple, brute-force reality: for AI, bigger was still better. Breakthroughs in “scaling laws” showed that making models larger, and letting them reason iteratively at inference time (which demands many times more compute), yielded qualitatively new abilities. This ignited an arms race. Companies like Meta, Google, and OpenAI began building AI training clusters of staggering scale.
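For readers who want the shape of the argument rather than the vibes, one widely cited form of these scaling laws – the compute-optimal fit reported by Hoffmann et al. in 2022, used here purely as an illustration, not as any one lab’s production recipe – writes a model’s loss as a sum of power laws:

```latex
L(N, D) \;\approx\; E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Here N is the number of parameters, D the number of training tokens, E an irreducible floor, and α and β small positive exponents. Because both correction terms shrink only as power laws, each further increment of quality demands a multiplicative increase in parameters and data – economics that overwhelmingly favor giant, centralized clusters.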
This led to the ultimate paradox: centralized AI began taking over the operation of our edge devices. New “agentic” AIs from OpenAI and others can now control your browser, manage your calendar, and execute tasks on your computer. But the agent’s “brain” is not on your PC. It is in the cloud. In other words, assistants like Apple’s Siri, Windows Copilot, and Google Assistant increasingly hand the real work to machines far away. Your device is merely the hands, receiving instructions from a remote intelligence. The gravity of the center has become so strong that it is now pulling the functions of the edge into its orbit.
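A minimal sketch of that division of labor, assuming an entirely hypothetical cloud endpoint and payload schema rather than any vendor’s real API, looks something like this:

```python
import requests  # common third-party HTTP client; everything below is hypothetical

# Placeholder endpoint: no vendor's actual agent API is being described here.
CLOUD_AGENT_URL = "https://cloud-agent.example.com/v1/plan"

def capture_device_state() -> dict:
    """Collect what the local 'hands' can see: active app, visible text, pending events."""
    return {"active_app": "browser", "visible_text": "...", "pending_events": []}

def execute_action(action: dict) -> None:
    """Carry out one instruction locally: click, type, open an app, create an event."""
    print(f"executing {action['type']} with {action.get('args')}")

def run_agent_loop(goal: str, max_steps: int = 10) -> None:
    """Thin-client loop: the device ships context up, the cloud model sends actions down."""
    for _ in range(max_steps):
        state = capture_device_state()
        # All of the planning happens on the server; the device only observes and executes.
        plan = requests.post(CLOUD_AGENT_URL, json={"goal": goal, "state": state}).json()
        if plan.get("done"):
            break
        for action in plan.get("actions", []):
            execute_action(action)

run_agent_loop("book a table for two on Friday evening")
```

The device contributes context and keystrokes; the judgment lives on the other end of the network connection.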
Innovation's Crooked Path
The story of edge computing is not an isolated event. It is a classic example of a universal principle: innovation rarely travels in a straight line. History is littered with logical predictions that were derailed by messy reality. The path of progress is always crooked.
We have seen this pattern play out in multiple fields. The “paperless office,” predicted since the 1970s, never fully arrived; instead, computers and printers initially made it easier to consume more paper. And if late-night comedy shows covered innovation, most of their recent material would have come from the promises of autonomous driving.
Lidar was first expected to be the winner, then written off as the loser; at this point, all we know is that it is neither. mRNA’s promise of “plug and play” medicine remains stalled beyond Covid. CRISPR remains a niche technology years after its initial breakthrough. Graphene, the wonder material, has yet to scale two decades after its isolation. And quantum computing, three decades on, remains as promising as it has ever been, still the province of dreamers. In popular conversation, it has at least outlasted the likes of room-temperature superconductors.
The very history of computing follows this oscillating path. The era of centralized mainframes was disrupted by the radical decentralization of the personal computer. Yet, decades later, the pendulum swung back with the rise of cloud computing, which recentralized storage and processing into massive data centers—a modern return to the mainframe model. Edge computing was supposed to be the next swing of that pendulum. It wasn't.
In each case, a seemingly obvious future was subverted. The lesson is not that prediction is useless, but that it is fraught with peril. The forces that shape technology are too complex and chaotic to be plotted on a simple graph.
Simply put, we do not think the edge is dead. For now, we just do not know.
The Virtue of Being Wrong
So what is the final lesson in this tale of a future that failed to arrive? It is a lesson in humility, for sure. But there is more.
In an age of accelerating innovation, the greatest danger is becoming locked into a set of assumptions and forfeiting the option to change one’s mind when the facts change. The companies that dominate today feel unassailable, built on a paradigm of scale that seems entrenched. But history whispers that all empires are temporary. The very forces of unexpected, serendipitous innovation that cemented the cloud’s dominance could be the same forces that eventually unravel it.
Our work, then, is not to be prophets. It is to be observers, with a ruthless commitment to the evidence. It is to hold our assumptions lightly and be ready to discard a cherished belief the moment reality proves it false. The failure of the edge computing forecast was not a failure of imagination. It was a failure to appreciate the wild, untamable nature of progress.
The path forward, then, demands a different mindset. It asks that we trade the comfort of grand assumptions for the hard work of continuous observation. This work is difficult; it requires a relentless watch over inscrutable topics, an acceptance that the noise of the present may feel overwhelming. We may not understand it all, but we must be willing to see it all.
The true discipline is to maintain a persistent gaze on reality as it is, not as it was imagined from the patterns of history or dreamt in the visions of a convenient future. The ghosts of history and the sirens of the future are poor guides, even when they make great podcasts, presentations, or essays. True sight comes not from forecasting what lies ahead, but from seeing what is, right now, with brutal clarity.