Keeping Up with Tech When Words Begin to Fail
Nilesh Jasani
·
June 30, 2025

Another week, another leap: China’s MiniMax has delivered striking gains in efficiency and extended context, but these no longer surprise us, nor do they demand new metaphors. At GenInnov, we noted the breakthrough but felt no urge to name it, unlike with DeepSeek. Elon Musk, meanwhile, is preparing to “redefine human knowledge,” though whether that means Grok is aiming for a novel approach to reinforcement learning or something entirely different is unclear. Andrej Karpathy’s Software 3.0 is an elegant reframing and makes for a compelling listen, but its practical implications remain hazy. To outsiders, Mira Murati raised $2 billion on the phrase “custom models,” a term that may lack a definition but clearly carried enough weight to unlock capital. And only by using Kimi AI’s just-released Researcher did we grasp how it differs from other “deep research” offerings (review: it’s really cool).

Real innovation now moves faster than our ability to describe it. As machines grow more capable, our language fails to keep up. Vague, recycled phrases and fleeting slogans from podcasts and panels dominate discourse but offer little clarity. This isn't just confusion; it creates real risk. Investors chase terms they barely understand, policymakers attempt to regulate without definitions, and the public often confuses breakthroughs with science fiction. We are facing a generational shift in technology without a stable vocabulary. At GenInnov, we treat this language gap not as a side issue, but as a core investment challenge we must work around every day.

Navigating the Hype vs. Reality Check

Embracing Technical Complexity and Detail

Continuous Learning and Adaptation

Stay in the Present (1): Beware of Historical Analogies

Stay in the Present (2): Ignore the Far Forecasts (and That Includes Our Own)

The New Nostradami of AI

Over the past two years, a fresh crop of “laws” has strutted across conference stages, op-eds, and podcasts, each presented as the long-awaited Rosetta Stone of AI. We’re told to obey the Scaling Law (“just add more tokens and thou shalt be saved”), respect the Chinchilla Law (“actually, add exactly 20× more tokens”), and reflect on the reanimated Solow Paradox (“productivity still yawns, therefore chatbots are overrated”).

When forecasts miss the mark, pundits invoke Goodhart’s Law (metrics stop mattering), the Lucas Critique (history can’t help, but here’s a historic chart anyway), or Amara’s Law (overhype now, underhype later). The Bitter Lesson tells us to buy GPUs, not PhDs. Eroom’s Law shows us drug discovery running Moore’s Law in reverse. Brooks’ Law warns us that adding engineers delays the product. Cunningham’s Law says wrong answers attract better ones. The best came earlier in the year, when the Victorian-era Jevons Paradox was invoked to argue that DeepSeek would not collapse GPU demand.

At GenInnov, we’re not immune to this temptation. Our own slide deck features the entirely self-invented Super-Moore Law: “Everything will keep doubling until investors stop cheering.” It has yet to go viral, which we take as proof that not all pseudo-laws are doomed by Goodhart’s revenge.

When Words Stop Working

Real practitioners might roll their eyes at law-of-the-week prophecies but still fall back on catchphrases like AGI, DeepTech, or even just “AI” to carry them through complex conversations without sounding pretentious. The risk is that these shortcuts now obscure more than they reveal.

We began this writing journey 160 posts ago with a simple claim: the “AI” of 2023 bore little resemblance to what the phrase meant in the 1950s, 1970s, or even late 2022. The term “transformer” is a prime example. Its original 2017 equation, with its queries, keys, values, and parameters, now survives only in outline. The working internals of today’s models are wildly different. Flash attention, rotary embeddings, sparsity masks, routing schemes, MoE gating, and retrieval layers have reshaped the original methods so thoroughly that the resulting equations resemble the original less than general relativity resembles Newton’s laws.
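For readers who want the reference point, the scaled dot-product attention at the heart of that 2017 paper fits on a single line (the canonical formulation, written here in LaTeX):

\[
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V
\]

where Q, K, and V are the query, key, and value matrices and d_k is the key dimension. Almost nothing in a frontier model computes attention this plainly anymore, which is precisely the point.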

This linguistic mismatch will only worsen as new domains take off. Robotics is about to graft cognition onto actuators. Genomics is borrowing transformer architectures for protein folding and DNA editing. Our language, built for slower eras, now struggles to compress these cross-domain breakthroughs into anything stable. A new metaphor may bring a week of clarity and a month of fresh confusion.

Beneath the noise, something harder and more important remains: staying genuinely current on fast-moving technologies is both exceedingly difficult and, paradoxically, easier than ever. It is difficult because terminology changes weekly, architectures keep leapfrogging one another, and breakthroughs often appear first on preprint servers rather than in peer-reviewed journals. At the same time, it is easier because we now have tools that can rapidly process vast information, summarize dense research, and identify core insights with remarkable precision. Used well, these technologies become the most effective way to understand technology itself.

At GenInnov, our entire workflow is built on this belief, and on the acceptance that short, great-sounding phrases will not tell us what is going on. Sensible investing in innovation begins with a working knowledge of the technology’s structure, trajectory, and defensibility. We make no assumptions before first understanding whether a breakthrough is fragile or scalable, whether it has genuine moats or just momentum, and whether it offers a credible path to monetization. Forecasts come later. Market narratives come later. What matters first is depth of understanding, even if it is not always the most popular way to spend time. In our experience, it is the most necessary.
