Revisiting the Statistical Parrot
Nilesh Jasani · April 21, 2026

AI Is Becoming a Need at Any Price

On the sell side, one of the first things you learn is that burying the message is career-ending. Most readers stop around the second line, or, once you strip out the habitual hyperbole, the second paragraph. The rest skim. We are no longer on the sell side and so feel no compulsion to start with the conclusion. In an age when anyone can get a summary with a click, we love long-form writing that takes its time building to the climax.

We did that in our previous big theme piece on the Token Inequality last week. Judging by the questions we received, many readers never reached the most important change of recent days: not AI getting more expensive, or even inaccessible, but AI becoming a need. Allow us to capitalize so the point is not missed this time: AI IS BECOMING A NEED. With what one is reading about Mythos, the AI argument has shifted again. Six months ago, the conversation was whether artificial intelligence mattered at all or had any real use. With the Claude Code release, that argument quietly died. Now, it is whether anyone can afford not to have it at almost any price.

There is plenty to argue about while the capabilities of the next model versions remain unknown, but the points below matter for anyone thinking about the changes afoot.

Re-flogging the Long-Dead

Remember when serious people said AI was just autocomplete? A stochastic parrot, rearranging tokens by probability, dressed up in enough confidence to fool the gullible. That argument had a moment. It survived for a while on the strength of being unfalsifiable. Point to a capability, and the parrot-defenders said it was pattern matching. Point to a benchmark, and they said it was data contamination. Point to a novel output, and they said the model had seen something close enough. The position was load-bearing for a particular kind of commentator because, without it, they would have had to update.

Claude Code was the quiet end of that debate. A parrot does not write production code that runs, debug it when it fails, refactor it when asked, and ship a working system. A parrot certainly does not take a vague English description and turn it into a compiling, passing, deployable artifact. When programmers themselves started shipping AI-generated code as a normal part of their workflow, the parrot position retreated to seminars.

Mythos is what happens after the retreat. The latest Anthropic model, released in April 2026 into a closed partnership called Project Glasswing, autonomously identified novel vulnerabilities in widely audited codebases, including a twenty-seven-year-old OpenBSD flaw and a sixteen-year-old bug in FFmpeg's H.264 codec, without targeted security training. According to Anthropic's own system card, the cybersecurity capability was not designed. The model was not built to hunt bugs. It learned to search the code space for flaws humans had stopped looking for. On the Firefox 147 exploitation trial, where the predecessor model succeeded twice in several hundred attempts, Mythos succeeded 181 times out of 250. Outputs are candidates, not finished weapons. Human validators agree with the model's severity ratings exactly in 89 percent of cases, and within one severity level in 98 percent. The hit rate is high enough to change the game.
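For concreteness, here is our own back-of-envelope arithmetic on those disclosed figures; reading "several hundred" as roughly 300 is purely our illustrative assumption:

$$
\frac{181}{250} \approx 72\% \qquad \text{versus} \qquad \frac{2}{\sim 300} \approx 0.7\%
$$

That is roughly a hundredfold jump in hit rate within a single model generation, which is what "changes the game" looks like in numbers.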

In some ways, the parrot-or-autocomplete argument has been dated for a while; one did not need the Mythos disclosures to beat the dead horse again. But we must re-flog the supposedly settled argument to get to a different point.

What the Parrot Crowd Had Right

That AI is nothing but a statistical arrangement is actually a valid argument. It is worth saying out loud, because the strongest version of this piece runs through it. These are equations. Matrices of numbers, trained by gradient descent, running on silicon. There is no ghost. Whatever is happening in a frontier model is happening in arithmetic.

That concession is supposed to be the parrot-defender's trump card. It is actually the opposite. It is the reason AI has now turned, or will soon turn, into a need.

Equations have a property that humans do not. They can be copied. A Feynman or an Einstein was non-replicable. If one country had Oppenheimer, another country could not simply hire a similar mind next quarter. Genius clustered slowly, and strategic advantages built on genius decayed slowly. That was the quiet architecture of technological competition for most of the twentieth century. The non-replicability of elite talent was the scaffolding that let certain nations, corporations, or systems feel comfortable about their edge.

Equations do not respect that scaffolding. A training pipeline that produces Mythos in one lab in April will produce functionally equivalent capability elsewhere within months. Not as a possibility. As a diffusion curve that ignores lab boundaries. By then, today's leaders might be far ahead, or somebody else could be, but it is almost a given that whatever a set of equations can do in one lab will, if attempted, be doable elsewhere too.

Einstein did not scale. Models do.

The implication is simple: Anthropic may be able to decide whether anyone uses a model it deems unfit for broad release. It will not be able to stop others from reaching similar or higher capabilities soon after.

Built Stronger, and Without a Purpose

Let us make a strong statement to start this section, too, although we will have to dial it back a bit later. Unlike most products or services, every AI model is built to be bigger or more efficient than the previous one, yet even its makers have little idea what it may do. You cannot specify these systems as you plan them. Often, you cannot specify them even after they are built.

Nuclear fission was unpredictable in 1938, but its engineering path was linear. Complex software has emergent bugs, but the failure modes are specifiable. For most systems, the design is the explanation. For AI models, the only explanation is the behavior observed after everything is done. Run the same prompt twice, and you may get different answers. Scale the parameters by ten percent, and capabilities appear that nobody was training for. Mythos was not built to find twenty-seven-year-old OpenBSD bugs. The cybersecurity capability fell out of broader gains in code reasoning. Nobody at Anthropic woke up one morning with a plan to build a zero-day finder. They scaled a general model, and a specialist emerged.
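The "same prompt, different answers" point has a mundane mechanical root. A minimal sketch, with entirely made-up tokens and scores rather than anything from Anthropic's stack: generation samples from a probability distribution, so any temperature above zero makes outputs stochastic.

```python
import numpy as np

# Toy next-token sampler. A real model does this over a vocabulary of
# ~100k tokens, billions of times, which is where run-to-run
# divergence compounds into entirely different answers.
rng = np.random.default_rng()
tokens = np.array(["patch", "exploit", "refactor"])  # hypothetical candidates
logits = np.array([2.0, 1.5, 0.3])                   # made-up model scores

def sample_next_token(temperature: float = 0.8) -> str:
    """Sample one token from softmax(logits / temperature)."""
    p = np.exp(logits / temperature)
    p /= p.sum()
    return str(rng.choice(tokens, p=p))

# Two "runs" of the same prompt rarely produce identical sequences.
print([sample_next_token() for _ in range(5)])
print([sample_next_token() for _ in range(5)])
```

Determinism can be forced with temperature zero, but the deeper unpredictability in the paragraph above, capabilities appearing with scale, has no such switch.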

And the scaling is not slowing. Every model maker in the world is scaling up and will continue to for the foreseeable future. What we know with certainty is that at every level of scaling so far, capabilities have expanded. With wider context, deeper knowledge, and more parameters, the models identify more patterns than before. We have now reached a point where they find patterns that have escaped humans entirely. One can refuse to call this intelligence. That is a definitional prerogative. What is not a prerogative is the trajectory. This is not about to end.

With wider scaling comes finer pattern recognition. Some of these patterns are beautiful. Protein folding was opaque for decades, then a model saw the grammar. Drug discovery is about to stop being a lottery of screening and become something closer to design. Some of the patterns are less beautiful. Software written by humans over the last forty years is full of associative vulnerabilities: flaws that emerge from the interaction of components, which none of the teams that built them could guard against, or even conceptualize, because the interactions involve variables outside any one team's domain. Humans did not find them because humans did not have the context window for the whole system. Models do. Mythos is the demonstration. It will not be the last one.

So the question is not what Mythos can do. One can even argue, or at least hope, that its vulnerability-detecting capabilities will prove overblown, the way Y2K did. It is what the next model, trained on more compute with better data by a team that learned from Mythos, will do that no one has yet imagined. The honest answer is that nobody knows or can know. That is the structural fact. The capability curve does not just go up. It produces new capabilities at the steps, and the steps are not on anyone's roadmap.

Which is why any defensive posture pegged to today's capability becomes obsolete by the time it is deployed. You are not planning against a known threat. You are planning against a frontier that keeps producing threats nobody trained it to produce.

Once again, this is a genuinely new epistemic situation. With every prior technology, you could reason about what the next version would do before you built it. With these systems, you find out by building them. The people building them are optimizing for broad capability, which means the specific dangerous specialties that emerge along the way are not on any roadmap. They show up. And when they do, the models do not care whether the problems they create are solvable.

When Modelmakers Turn Arbiters

Realizing the powers unleashed by Mythos, Anthropic has provided restricted access to a select group for preparation. The fifty or so Glasswing partners are using it defensively, to patch critical software before equivalent capability appears in less careful hands. Anthropic's chief has been meeting the White House Chief of Staff to work out how the model can be pointed at US cyber defense without being pointed inward at citizens. Whatever you think about Anthropic's choices, they are visibly trying.

Still, there are two clear problems with this.

There is no law of nature that says the next lab will be as careful. The next lab might be in a jurisdiction with different rules. It might be owned by a government that wants the capability for offense. It might be run by founders who genuinely believe broad release serves the public interest. It might be released by accident through the kind of configuration error that exposed Mythos's existence in the first place. The containment strategy works if it is universal. It will not be universal.

Once a capability is demonstrated, it becomes a target for reproduction. That process has no pause button.

The second problem is less visible, but structurally larger. When a model maker decides who gets preparatory access, the maker is also deciding who does not. Every containment strategy exposes itself in what it excludes. And what gets excluded is usually larger and more consequential than what gets included.

Consider the commercial version first, because it is the easier case. Glasswing includes a handful of major banks, cloud providers, and critical infrastructure operators. That is a small fraction of the institutions whose software Mythos can penetrate. Every bank not on the list, every hospital network, every utility, every mid-sized enterprise running legacy systems, is in a position where its vulnerability is known to the defenders inside the fence but not to those outside it. Over time, this will redraw the competitive landscape in ways the model maker did not intend and perhaps would not endorse. The firms inside Glasswing get early warning, remediation support, and the institutional muscle memory of working with frontier capability. The firms outside do not. That is an uneven playing field created by a safety decision, and while it is probably the right safety decision, it is also a commercial consequence that accumulates.

But the commercial case is the smaller one. The larger one is geopolitical.

Access to Mythos-class capability is, at the moment, effectively determined by the lab. Which is to say, it is effectively decided by a small number of American companies operating under American law. A government that finds itself outside the fence, whether a European ministry, an Asian central bank, a Middle Eastern sovereign, or a Latin American regulator, has several uncomfortable options. It can wait, accepting that its critical infrastructure will be defended only after that of those inside the early preparations. It can lobby, accepting that better access will come at a cost. Or it can commission its own lab to build equivalent capability at whatever speed the national balance sheet and talent allow. Without a doubt, a large swath of nations and corporations will not be part of the elite group with time to prepare.

This is how capability races are structured. One lab demonstrates. Other labs feel the pressure to match. But the pressure compounds when the demonstration is held back from certain users, because the held-back users are no longer choosing whether to invest in frontier AI. They are choosing between building it themselves and remaining indefinitely exposed. A Chinese lab was going to scale regardless. A European lab may now scale because Glasswing did not include Brussels. An Indian lab may now scale because Glasswing did not include Delhi. The containment that looks responsible from inside the fence looks, from outside, like a reason to build the capability at double speed. 

In the US, China, and other nations with the best model makers, governments will not leave this decision with the labs for much longer. Once the geopolitical shape becomes clear, with access to a critical defensive capability allocated by private firms according to their own criteria, national governments will intervene with the heaviest legal tools at their disposal. The first discussions of export controls, or, for that matter, nationalization, may not be far away either.

And this changes the shape of the “need”. The want-to-need transition forces institutions to acquire frontier AI. The access rationing forces them to acquire it from whoever will sell. The coming geopolitical overlay forces them to acquire it from whoever their government will allow them to buy from. Three layers of compulsion, each operating on a different axis. Each, on its own, would be a forcing function. Together, they describe an environment in which the idea that AI is a discretionary technology choice looks, in retrospect, like the last argument of a disappearing world.

Lohe Ko Kat-ta Loha

Iron cuts iron. The old Hindi phrase captures the structural point of this section.

We have not yet fully addressed how one might defend in a world where machines detect vulnerabilities that humans missed.

Once a frontier model can generate attacks at a scale and sophistication that no human team can match, the only thing that defends against AI-native offense is AI-native defense. Architecture still matters. Zero-trust networks still matter. Segmentation and MFA still matter. None of them survives on its own against a model that can autonomously find zero-days in the authentication library itself. Human teams with bolt-on AI lose to AI systems with human oversight. The advantage compounds on the side that scales faster. Scale, in this case, is measured in compute, not headcount.

And the contest is not today's contest. It is a capability curve. Today you are defending against Mythos. In twelve months you are defending against whatever follows Mythos. In twenty-four months, whatever follows that. A human-scale defensive posture locks in at human timescales. An AI-augmented defensive posture moves with the frontier. Without AI, you are not just losing today. You are losing further every month.
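The compounding language can be made concrete with a stylized formula; the symbols are our own illustration, not measured quantities. If frontier offensive capability starts at today's level $C_0$ and compounds at rate $g$ per release cycle while a defensive posture stays locked at $C_0$, the exposure gap after $t$ cycles is

$$
\text{gap}(t) = C_0\big[(1+g)^t - 1\big],
$$

which grows geometrically. A defense that also moves with the frontier matches the growth rate $g$ and keeps the gap roughly bounded instead. That is the arithmetic behind "losing further every month."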

Critics sometimes counter that once both sides have frontier models, the advantage disappears. This objection proves the larger point. When matching an AI-powered offense requires deploying an AI-powered defense at the same frontier, AI has crossed from competitive advantage into existential need. Parity itself has become expensive. Staying in the game now demands frontier capability.

There is another, subtler point. In the best case, the defensive capabilities of AI may develop so quickly that, at least in cybersecurity, the tools that protect keep arriving at a far quicker rate than the tools that can cause harm. That must be our hope, and there is no reason it cannot happen. Equally, there is no reason it cannot go the other way.

This is what turns AI from a want into a need. In adversarial systems, not adopting AI is not conservatism. It is exposure. The CFO of a mid-sized bank does not need AI to beat competitors. They need AI so that their bank is still operating in eighteen months without all client information having leaked. The head of IT at a hospital network does not need AI to win procurement awards. They need AI so that patient records are not encrypted and held for ransom by a foreign actor running an open-weight model that does ninety percent of what Mythos does.

The want-to-need transition is forced, not chosen. Adopting AI does not guarantee safety. Not adopting it guarantees exposure.

Combining With Supply Constraints

Anthropic is already considering a move to usage-based pricing for Claude. OpenAI is widening the trusted-access program for its new ChatGPT-5.4 Cyber. And the queues are no longer only outside the memory makers' doors.

Access is rationed on three different axes, each with a different failure mode. Policy rationing is about who the lab trusts. Glasswing partners and verified defenders clear the bar. Everyone else does not. The failure mode is political realignment. Compute rationing is about physics. H100s and their successors are not arriving fast enough. The data centers are not being built fast enough. The grid connections are not being approved fast enough. The failure mode is a supply chain bottleneck. Geopolitical rationing is about which governments allow cross-border API access. The failure mode is export control. These three dynamics have different timelines and different mitigations. Treating them as one access problem guarantees misallocation.

Capabilities diffuse fast. Access to the best implementation remains slow. We covered much of this in the Token Inequality piece. What is new is that the inequality is no longer a competitive disadvantage. It is a structural exposure.

Need is a strong word. Need-at-any-price sounds like analyst hyperbole. But at any price does not mean reckless spending. It means an accelerated, and so far unanticipated, reallocation away from legacy programs, a shift that will persist regardless of outcomes.

The standard of corporate care is being silently rewritten. Gross negligence in 2026 looks exactly like prudent management in 2024. When a financial institution is gutted by an automated exploit that a frontier defensive model would have caught, the subsequent lawsuits will not debate the IT budget. They will debate whether the board discharged its duty of care in a world where defensive AI had crossed into the category of mandatory.

Put crudely, for many decision-makers, beliefs about AI capabilities will no longer matter. They no longer need a plan built around potential achievements or benefits. They will pay less attention to surveys claiming AI is hype when making their decisions in the months ahead. The decision drivers are changing.

The Quiet Crossing

And maybe it is all hype. ChatGPT 3.5, DeepSeek, and Claude Code were all substantial, unexpected technological advances. By all accounts, Mythos looks like the next one; after all, one would not expect the US government to feel the need to mend its relationship with Anthropic so quickly without a reason.

That said, little is known about Mythos’ actual capabilities. There is no dearth of examples of supposedly world-destroying technologies that proved to be far less. If one goes by the optimistic logic of history, no technology by itself has really hurt the world badly so far, so neither will this one. We will be glad if history repeats itself faithfully this time.

More seriously, the signs of another shift are emerging, with serious implications across domains in the investment space. “Want,” “need,” and similar words are rhetorical devices with one purpose: to signal that we must begin watching for something different now.

Re-reading this before signing off, we see the sell-side habits are clearly gone. The most consequential arguments are buried in the middle of a long piece, exactly where a younger version of this author would have refused to put them. The access question and the geopolitical scramble it triggers will dominate institutional conversation in the months ahead, especially once the first serious AI-assisted attack on a critical government system goes public. The business press will fill with the complaints of corporate chieftains who discover their rivals got preferential AI access for no better reason than being larger. There is more, and it will keep coming. Perhaps the upside of forgetting how to lead with the conclusion is that the buried material becomes the next few pieces.

So to conclude, we have to circle back to parrots. Those who relied on the parrot argument, or the scaling laws, or the South Sea bubble analogy to understand what AI will not do, were never really arguing about AI. Not for a while. Their argument was about permission. Permission to not update, not reallocate, not think carefully about what was coming. That permission has been revoked. The question is no longer whether to update. It is whether the update happens by choice or by consequence.

A confession to close. We are not pleased with this conclusion, though we suspect few readers will get this far to notice. So allow us to end by pointing instead to the part of this piece that does please us, which is the image at the top. It may not have rendered well on the page you are reading, so it is worth describing. The painting moves left to right across three Japanese traditions, each marking a stage in the emergence of complexity from sparseness. On the left, in the minimalist register of Suibokuga, two parrots sit on an almost empty canvas in sparse black ink. The middle adopts the structured, coloured forms of Ukiyo-e, where the scene gathers definition and weight. The right resolves into Rinpa, a dense composition of deep pigments, floral and wave patterns, and decorative gold leaf, the original two parrots now lost inside an integrated flock. Two became many. The many became something the two could not have predicted. That, at least, is a conclusion the piece earns.
