The End of History (As a Guide)
“The End of History” is a fool’s title. It invites future mockery the way a tall poppy invites a scythe. When Fukuyama used it in 1989, the ridicule was immediate and long-lived. After all, mockery is always easier than reading.
We are borrowing that dangerous phrase to make a narrower, uglier point. Not that events will stop, or that cycles will vanish, or that the past has become irrelevant. Rather, that historical analysis, treated as the primary compass, has not been particularly helpful for some time in the current GenAI era, and has now reached the point at which the cost of being misled is highest.
The temptation to look backward is not an intellectual failure; it is a biological instinct. When an unfamiliar sound comes from the forest, the amygdala scans the memory banks for a match. A twig snap means a predator; a rustle means wind. We survive by matching current signals to past patterns. History works as a guide most of the time, let us say, entirely arbitrarily, 95% of the time. That 95% success rate is exactly what makes it lethal during the other 5%. The nastiest part is that the five does not arrive with a signboard. It arrives disguised as “just another cycle,” wearing a familiar suit, speaking in familiar metaphors, and politely asking to be compared to 2000, 2008, 1973, or 1929.
The last several weeks have been a reminder of how thin those comparisons are. Not because valuations do not swing or because hype has been outlawed. But because capability has been moving in ways that were not merely “faster than expected.” They were outside the expected shape. When coding agents stop being chat toys and start behaving like tireless junior teams inside an IDE, writing, testing, refactoring, pushing, and iterating, the debate is no longer “will productivity rise?” It becomes “what was the unit of productivity we were measuring in the first place?” When systems start coordinating with other systems in workflows that look less like tools and more like organisations, an early theme we explored in The Moltbook Cascade, the question is not whether automation is coming. It is whether our old categories of “job,” “task,” and “firm” were ever stable.
Since the original release of ChatGPT, through moments such as DeepSeek’s open-weight shock, Claude Code’s emergence in real development environments, and the quiet normalisation of multi-agent workflows that would have sounded like research fiction not long ago, the breathtaking has begun to feel routine. What would once have reset assumptions now barely interrupts them. The unexpected has, in a peculiar inversion, become the expected. That is precisely why acknowledging these discontinuities matters: their implications have moved well beyond the stage at which adoption surveys could tell us whether AI was being used, or at which historians could confidently lecture on where the technology would stagnate. Pattern-matching retains its uses, particularly in interpreting the cyclical theatre of listed markets, where history still casts a long shadow. But something more unsettling has begun to emerge. In real life, where societies and entities we hold dear are under relentless pressure from GenAI’s compounding capabilities rather than from any repetition of precedent, the instinct to reach backward for reassurance has itself become a primary source of risk.
The Tyranny of the Decimal
There is a specific, mathematical violence in software development that traditional analysts, looking for “cycles,” fundamentally miss. For most of the past two years, the dominant intuition was that machines could assist with writing code, but not produce long, reliable systems. The reason was simple mathematics. Even a tiny imprecision rate compounds brutally. If a machine has a one-in-a-hundred error rate per line of code, a seemingly tiny imprecision or hallucination rate of 1%, the probability of it generating a clean, 500-line program without a single error is minuscule, roughly two-thirds of one percent. This was also our own observation a few months ago, and it fuelled the hallucination-watchers to declare that machines would never be completely usable in precision work like programming without human intervention.
But the move from GPT-4 to the current generation of agents like Claude Code isn’t a linear step; it is a collapse of the decimal. When the error rate drops from 1% to 0.01%, the math doesn't just improve; the possibilities explode. Suddenly, the machine isn't just suggesting a snippet; it is refactoring a 10,000-line repository. This isn't just "faster" coding. It is a phase shift at a time when we have run out of hyperbole for another phase shift.
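A rough sketch of that compounding arithmetic, using the illustrative rates from the paragraphs above (the assumption that errors are independent across lines is, of course, a simplification):

```python
# Illustrative arithmetic only: the per-line error rates and program lengths
# are the essay's hypothetical figures, not measured benchmarks.
def p_clean(error_rate: float, lines: int) -> float:
    """Probability that every line comes out error-free,
    assuming errors are independent across lines."""
    return (1 - error_rate) ** lines

print(f"1% per line, 500 lines:       {p_clean(0.01, 500):.1%}")      # ~0.7%
print(f"0.01% per line, 500 lines:    {p_clean(0.0001, 500):.1%}")    # ~95.1%
print(f"0.01% per line, 10,000 lines: {p_clean(0.0001, 10_000):.1%}") # ~36.8%
```

Even at 0.01%, a 10,000-line repository comes out clean in a single pass only about a third of the time, which is why the compile-execute-iterate loop described in the next paragraph matters as much as the headline error rate itself.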
Systems are now producing long, interconnected programs inside development environments, not merely fragments in chat windows but working structures that compile, execute, and iterate. The implication is not merely mechanical; it is a repeated warning that we cannot assume next week, next month, next quarter, next year, or next decade will return us to the same point in the cycle.
This is the essence of what we described two years ago as the Super Moore Era, a cadence where capability does not merely improve, but improves in a way that collapses the relevance of even recent history, let alone of any long history of unconnected events.
We catalogued this demolition derby, and it has only accelerated since. First, the history-watchers said AI models couldn’t write a sonnet. Then that they couldn’t pass the bar. Then that they couldn’t debug their own code. Then that they couldn’t reason. Each ceiling was presented not as a temporary technical hurdle but as something close to a law of nature. Each was demolished within months. The goalpost is not moving. It is fleeing.
The Glass Half-Dead Problem
Barely a year ago, in our Glass Half-Empty analysis, we laid out in detail how the software industry was being structurally reshaped. Before that article, our concern about the application layer ran through a different lens: a concept we have long called instant copiability. The first level of concern was the near-total collapse in the time between a software feature being conceived and its replication everywhere. The tables in the article on Software’s bygone golden age were enough to show why fundamentals had shifted.
This was before the far more important second-level concerns began to emerge. At the time, in late 2023 and 2024, our repeated incantations of “software is dead” or “XaaS is dead” (always couched with “as we know it,” a qualifier that remains important even now) were barely in the conversation. Typing those harsh words felt uncomfortable, but they were more than clickbait; they were attempts to keep attention on massive secular changes.
That past discomfort now feels quaint, yet we still do not feel there is enough attention on the real implications of the second-level change that was the main point of the Glass Half-Empty article a year ago. The factors driving the entire service sector toward a near-permanent inversion are more closely tied to recent changes.
In programming, which is, at bottom, directing computing machinery on what to do, machines could until a few months ago accelerate tasks, but not redefine them. They could assist with writing code, but not originate coherent systems. They could optimise within boundaries, but not alter the boundaries themselves. These limitations became embedded not only in technical expectations but in economic structures. Entire industries were organised around the assumption that human cognition was the fixed point and that application development would always need it.
The progression has been well known. First, machines could autocomplete lines. Then functions. Then they could explain code. Then refactor it. Then propose alternatives. Each step was defensible as an extension of the previous one. But extension eventually becomes substitution, and substitution eventually becomes redefinition.
This redefinition was, in some ways, a theoretical concern in the aforementioned article; it is now a reality. The difference between “helping write code” and “writing software” turned out not to be categorical, but probabilistic. And once the probabilities crossed certain thresholds, the categories collapsed with them.
Machines now participate, however partially, in their own improvement loops. They assist in generating training data, in evaluating outputs, in proposing refinements, and in accelerating the cycle that produces their successors. This is not autonomy in the science fiction sense. It is recursion in the engineering sense. And recursion, once introduced, changes the geometry of progress.
The Dichotomy of Time
The debate has shifted. Even the staunchest sceptics can no longer deny that application development has fundamentally changed. The argument has retreated to a new defensive line: conceding that the method of making software has changed, while insisting that the economic outcome will remain familiar.
Those arguing that “nothing much has changed” will definitely be right in the short term, if their accuracy is judged by stock price movements. As historians have established emphatically, bear-market rallies—or rallies along a secular downtrend—can be the sharpest of all.
But what matters for businesses, stakeholders, and societies is the long term. The current defenders of the status quo are painting an unaltered path using cherry-picked examples from history. They are forecasting the weather in 2035 while refusing to look out the window in 2026.
This creates a peculiar dichotomy. We are trapped between decades and Wednesdays.
The secular forces at play demand we think in decades. The restructuring of the global labour market, the migration of programming from Python and Java to English and Gujarati, and the commoditisation of intelligence are tectonic plates. They move slowly. But our understanding of what is possible has a shelf life measured in hours. A position taken with conviction on a Monday can be rendered obsolete by a model release on a Wednesday. We are living in a "Super-Moore" era where the calendar itself has become a source of error.
Our inability to grasp this is not a failure of intelligence; it is a failure of intuition regarding exponentiality. We are biologically wired for linear threats: a lion approaching, a stone falling. We are not wired for the paper fold.
If you fold a standard piece of paper 42 times, it does not become a thick notebook; it reaches the moon. For the first thirty folds, the progress looks manageable, linear, even boring. Then, in the final few folds, the thickness explodes from kilometres to hundreds of thousands of kilometres. We are currently somewhere around fold thirty-five. The "history-watchers" are looking at the paper and saying, "It’s only a few inches thick; it has never reached the moon before." They are measuring the thickness of the past while ignoring the mathematics of the next fold. We have machines working on themselves and networking with each other. The new, unexpected, emergent properties are lurking at every step.
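For those who want the arithmetic behind the metaphor, a minimal back-of-the-envelope sketch (assuming a standard 0.1 mm sheet and the usual figure of roughly 384,400 km to the moon):

```python
# Back-of-the-envelope for the paper-fold metaphor: thickness doubles per fold.
SHEET_MM = 0.1          # assumed thickness of a standard sheet, in millimetres
MM_PER_KM = 1_000_000   # millimetres in a kilometre

for folds in (10, 20, 30, 35, 42):
    thickness_km = SHEET_MM * 2**folds / MM_PER_KM
    print(f"fold {folds:2d}: {thickness_km:>14,.4f} km")
# fold 10 ~ 0.0001 km, fold 20 ~ 0.1 km, fold 30 ~ 107 km,
# fold 35 ~ 3,436 km, fold 42 ~ 439,805 km -- past the moon at ~384,400 km.
```

Almost all of the final distance arrives in the last handful of folds, which is the entire point of the metaphor.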
This is the deeper structural problem. Everyone, including the biggest proponents of cycles, is well aware that the most consequential forces in human history have never been cyclical. The agricultural revolution did not “mean revert.” Electrification, urbanisation, the rise of literacy: these were one-directional, multi-generational forces. None of them cycled back. But they took generations to become the forces they are, with enough gaps in between for various cycles to play out. The supernormal exponentiality of the current era does not leave enough room for those gaps, even if it still makes share price charts look like the same squiggly lines as in the eras before.
Aesthetic of Authority: Five Datapoints and a Story
The most effective rhetorical move in the current debate sounds like this: "Every time we've worried about machines destroying jobs, it hasn't happened. The Luddites were wrong. Humans always find a way."
How many datapoints underpin this "law of nature"? Three? Five? A handful of episodes from different centuries, different technologies, different social structures, wrapped in a narrative so smooth it feels like physics.
The retreat to history is, at its core, an aesthetic choice. A well-constructed historical comparison possesses a kind of intellectual symmetry that is deeply satisfying. A chart stretching across decades, its curves bending gently toward a reassuring mean, carries an aura of inevitability. It permits the creation of slide decks that radiate competence: axes calibrated with precision, regression lines drawn with confidence, and annotations that imply mastery over uncertainty itself.
This visual authority performs a neurological function. Pattern recognition is among the brain’s oldest survival tools. When a pattern appears, even a fragile one, the mind relaxes. A regression line acts like a sedative. It implies continuity. It suggests that what is happening now is simply a variation of what has happened before, that volatility is temporary, that deviation will resolve into familiarity. The chart does not merely describe reality. It reassures the observer that reality remains domesticated.
Once that reassurance is accepted, it hardens. People do not usually revisit conclusions. They renovate justifications. The conclusion stays, because conclusions are identities. The justification changes, because justifications are furniture. The analyst who built a thesis around the dot-com parallel will not wake up one morning and say, “Claude Code changes everything I believed.” They will instead fold the new development into the existing narrative, preserving the comparison that has become more important than the reality it was supposed to explain.
By contrast, projecting forward from present observables quickly enters uncomfortable territory once the exponential pace of change is introduced. Logical extrapolation compounds brutally. The recursive implications of even the changes already afoot become impossible to handle beyond a handful of steps, steps that may become obvious within months, without fantasising about possible emergent properties.
There is an asymmetry in how these two approaches are perceived. The historical chart, even when structurally irrelevant, appears disciplined. The logical projection, even when grounded in observable engineering progress, appears reckless. History offers credibility. Logic offers vulnerability. The professional incentive is obvious.
Part of the problem lies in the nature of what is changing. The technological revolutions of the past two decades were visible at the surface. A new app could be understood instantly. A new platform could be evaluated through direct interaction. Adoption was experiential. The underlying mechanisms remained abstract but were not necessary for comprehension.
That is no longer true. Today, understanding even the basic contours of competitive advantage requires confronting layers of abstraction that resist simplification. Consider advanced semiconductor packaging such as CoWoS. Its implications cannot be reduced to a headline or to a viral tweet that compares it to something we understand but that bears no true relevance. The same applies not just to the factors driving today’s models but to the implications of their latest features and outputs. Our struggle, in this essay, to simplify descriptions of the latest models’ feature improvements while retaining the arguments for their remarkable significance is one example. We argue that the latest model developments, led by Claude, are potentially more significant than the arrival of ChatGPT. And it is almost impossible to discuss them without offering scenarios that would look like nothing more than a castle built on assumptions.
And the Critical "As We Know It" Part
When we began chronicling the erosion of the legacy technology stack—headlining provocations with the “Death of Software,” “The End of SaaS,” or the “Detroit Risks” facing hardware—we almost always anchored those claims with the phrase “as we know it.” It was not a hedge; it was a diagnostic tool. As the world finally wakes up to these changed realities, “X is Dead” has become the default viral currency of the attention economy, stripped of its anatomical precision and converted into spectacle. In response, those eager to dismiss current fears as overblown often transform these metaphors into strawmen. They argue against a total, vertical collapse to zero, a scenario we have not seen anyone portray, in order to avoid engaging with the much more unsettling reality: that the old order is evaporating, and the structures that remain will be unrecognisable.
Recognition of this evaporation is not an end state; it is the prerequisite for survival. It is not an exercise in pessimism. It is an exercise in orientation. Whether one is defending public stock valuations, justifying private investment NAVs, or attempting to pivot a multi-national enterprise, the “as we know it” qualifier serves as a bridge to the next act. We have long maintained that most industries do not vanish overnight. Instead, they “Detroit-ify”: their centrality and relevance are altered, and smart people within them make Herculean efforts to re-pivot.
The effort to re-pivot is where the real work begins. Many of these attempts will fail, rendered obsolete by the sheer scale of disruption or by a subsequent "Wednesday shock" that arrives before the new strategy is even dry on the page. Others may succeed for a season, only to realize that the "new normal" was merely another thirty-fifth fold on the way to the moon. But acknowledging this instability is the only path to agency. To ignore the need for redefinition because "it hasn't happened before" is to forfeit the chance of doing the right thing. The old map is gone; the only mistake greater than having no map is insisting on using the one from the previous century.
Honesty in Front of Mirrors
We understand the need to be hopeful, the need to forecast because "do not know" is not a plan, and the need for some to hold opinions steady because changing conclusions with every development risks losing one's audience and one's seat at the table. We understand why changing justifications to protect existing conclusions becomes more practical than reviewing the conclusions themselves.
But none of that changes the mirror test. In private, in front of ourselves, we do not get to outsource honesty.
The most dangerous phrase in analysis is not "this time is different." It is "we cannot know," used not as an honest admission but as an excuse to stop thinking. The humility required is the real kind: standing in front of a mirror and admitting that your experience, which has served you well for decades, may now be pointing you in exactly the wrong direction. That the patterns you spent a career recognising may be the patterns of a world that is ending. The cost of that error is not abstract. It is capital trapped in dying assumptions. It is a policy calibrated to a labour market that no longer exists. It is a generation of students trained for jobs that machines will perform more effectively and more cheaply before the students graduate.
The difficulty now is not that history has become useless. It is that its reliability has become uneven in ways that are impossible to detect. The same instinct that protects against cyclical excess becomes dangerous when it blinds us to the far more critical secular changes. Pattern recognition, which has served as the foundation of survival and analysis alike, begins to hallucinate continuity where none exists. The past, much like the LLMs so often accused of the same sin, does not announce when it has lost jurisdiction. It continues to offer answers, calmly and confidently, even after the questions themselves have changed.
There is no shame in reaching for precedent. It is, after all, how most knowledge is built. But there is danger in mistaking precedent for permission to stop observing. The first step is perhaps the simplest: to spend less time debating whether this wave is hype or reality, because that debate, comforting and cyclical as it is, is itself the distraction. The flood is not waiting for our verdict on whether it is a flood. The need to observe reality for what it is, rather than force it into a familiar mould, matters because that reality keeps altering in ways that were not expected weeks or months before.
The mirror presents the private confrontation between observation and belief. The quiet acknowledgement of surprise. The willingness to revise conclusions without waiting for permission from precedent. The discipline to follow evidence forward, rather than retreat into analogy.
We build monuments to the mean. But we are buried in the tails. And the tails are no longer rare storms. They are the climate.