Another week, another leap: China’s MiniMax has delivered striking gains in efficiency and extended context, but these no longer surprise us, nor do they demand new metaphors. At GenInnov, we noted the breakthrough but felt no urge to name it, unlike with DeepSeek. Elon Musk, meanwhile, is preparing to “redefine human knowledge,” though whether that means Grok is aiming for a novel form of reinforcement learning or something else entirely is unclear. Andrej Karpathy’s Software 3.0 is an elegant reframing and makes for a compelling listen, but its practical implications remain hazy. To outsiders, Mira Murati raised $2 billion on the phrase “custom models,” a term that may not have a definition but clearly carried enough weight to unlock capital. And only by using Kimi AI’s just-released Researcher did we grasp how it differs from other “deep research” offerings (our review: it’s really cool).
Real innovation now moves faster than our ability to describe it. As machines grow more capable, our language fails to keep up. Vague, recycled phrases and fleeting slogans from podcasts and panels dominate discourse but offer little clarity. This isn't just confusion; it creates real risk. Investors chase terms they barely understand, policymakers attempt to regulate without definitions, and the public often confuses breakthroughs with science fiction. We are facing a generational shift in technology without a stable vocabulary. At GenInnov, we treat this language gap not as a side issue, but as a core investment challenge we must work around every day.
Navigating the Hype vs. the Reality
- Catchphrases Spark Interest, Not Insight: Phrases like “Software 3.0,” “agents,” or “reasoning models” help people frame ideas quickly but offer little clarity when evaluating investments. Unlike longer-lived slogans like “Web3,” today’s terms often vanish within weeks. At GenInnov, we avoid them as much as possible in our investment discussions and analysis.
- Big Claims, Thin Details: Tech celebrities often promise the extraordinary. Elon Musk’s AI is said to “rewrite the entire corpus of human knowledge,” but such declarations rarely come with a roadmap. Most announcements are too vague to act on and too polished to interrogate.
- Innovation No Longer Fits the Elevator Pitch: Once, a few minutes could explain breakthrough apps like Google or Uber. Now, innovations in robotics or custom silicon resist compression. Understanding OpenAI’s strategy or Nvidia’s product stack requires time, not soundbites. We treat superficial simplicity as a warning sign.
- Markets Outpace Metrics: Agents, reasoning layers, custom chips, and humanoids are all hot areas that lack shared standards or benchmarks. Everyone wants to sell a “reasoning model,” but no one agrees on what that means or how to measure it. Our attention spikes when we hear terms without clear definitions.
- The More You Know, The Harder It Gets: Paradoxically, the deeper you dig into bleeding-edge tech, the harder it becomes to make confident decisions. This isn’t impostor syndrome. It is the rational response to an innovation climate where the most important developments (e.g., fine-tuned data layers, energy budgets of LEO satellites, or MoE sparsity patterns) are barely disclosed or not understood even by their creators. For biotech investors, understanding new gene-editing techniques now requires fluency in at least four distinct disciplines. For chip investors, CoWoS or DUV isn’t just a term; it’s shorthand for thousands of design trade-offs. At GenInnov, we strive to strike a balance between knowing too little and too much, as there is no end to what one can know in most of these fields.
- Skip the Hybrid Hype: Buzzwords that merge ideas—like “Quantum LLMs” or “AI decentralization”—often signal conceptual shortcuts. We embrace interdisciplinary work, but not when it comes wrapped in slogans.
- Press Releases Are Not Proof: Nearly every company calls itself an innovation leader. At GenInnov, our workflows are at their most skeptical when sifting promotional corporate material to decide what warrants deeper review. Most breathless announcements and slick presentation slides fail to reveal defensibility or traction.
- Always Ask for Evidence: Extraordinary claims need demos, user numbers, or real-world metrics. If a cloud provider offers “99.999% uptime,” ask how they define downtime. When answers are fuzzy, we consider the claim unproven. In today’s landscape, skepticism is not cynicism. It is discipline.
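To make that concrete, here is a minimal back-of-envelope sketch (hypothetical figures, not tied to any particular provider or SLA) that converts an advertised availability number into the annual downtime budget it implies; whether the claim means anything then depends entirely on how the provider defines downtime.

```python
# Rough sketch: translate an advertised availability figure into an annual
# downtime budget. The figures are illustrative only; real SLAs also hinge on
# how "downtime" itself is defined and measured.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_budget_minutes(availability: float) -> float:
    """Maximum minutes of downtime per year implied by an availability ratio."""
    return (1.0 - availability) * MINUTES_PER_YEAR

for nines in ["99.9", "99.99", "99.999"]:
    budget = downtime_budget_minutes(float(nines) / 100)
    print(f"{nines}% uptime -> about {budget:.1f} minutes of downtime per year")

# 99.999% works out to roughly 5.3 minutes per year -- which is why the next
# question always matters: does scheduled maintenance count as downtime?
```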
Embracing Technical Complexity and Detail
- Don’t Fear the Acronyms: Tech jargon is alphabet soup, but each TLA (three-letter acronym) often hides a world of substance. Rather than gloss over them, we take a moment to dig in. Most of us are comfortable today with a term like EUV, even though it compresses hundreds of underlying innovations. CoWoS or NVL72 may not enjoy the same familiarity, but they are just as important to learn and then embrace in their shorthand form, for what they represent and why they matter.
- Peek Under the Hood of Abstractions: Modern tech is layered with convenient abstractions that make understanding easier, but often too easy. At GenInnov, we treat every abstraction as a hypothesis, not a truth. Whether it’s a robotaxi marketed as “full self-driving” or a model labelled “serverless,” we dig beneath the label to understand what it actually represents. We don’t need to reinvent every wheel, but broad umbrella terms should not become an excuse to miss what is critical.
Continuous Learning and Adaptation
- Train to Think Across Disciplines: Innovation lives at the intersection. GPUs met neural networks and sparked the AI revolution. DeepSeek wasn’t built by celebrity scientists, yet its breakthroughs were real. At GenInnov, we look past resumes and rhetoric to understand technical merit, no matter where it originates.
- Learning Is Scheduled, Not Accidental: We carve out daily time to learn—not from news summaries, but from primary sources: developer logs, research papers, technical forums, and open repositories. This keeps us grounded in substance, not narrative. What we read often has nothing to do with finance but everything to do with what will move it.
- The Only Way to Understand Some Tools Is to Use Them: When language falls short, we test things ourselves where practical. Kimi AI’s Researcher felt distinct only after we used it, not when we read about it. It may not remain unique, but using it gave us a better lens than any review could. In domains where such testing is not possible, we rely on a range of methods in our workflows, including hypothetical scenario analysis.
- We Actively Challenge Our GenAI Workflows: At regular intervals, we task our internal GenAI workflows with surfacing the unexpected: insights we haven’t yet explored, edge cases we may have overlooked, or developments that don’t fit neatly within our current investment theses. With persistent memory and richer context, these models are increasingly capable of surprising us with angles we hadn’t anticipated.
Stay in the Present (1): Beware of Historical Analogies
- History Proves Everything and Nothing: Dot-com bust? Electricity? You can cherry-pick history to support any view. That’s why we avoid arguments built on nostalgia. At GenInnov, we prefer to study what just happened rather than force-fit it into a chart of inevitability.
- Linear Thinking Has Failed Us: The last two years have shattered most comfortable assumptions about GenAI, compute, and software design. Pivots have happened faster than consensus has formed. We pay close attention to what punctuates the trendlines, not what draws them.
- Trend Charts Look Smart, Rarely Are: Most viral infographics about AI diffusion or compute intensity come from people who study history more than they study the present. We’ve seen them impress in presentations; we’ve rarely seen them illuminate what is actually going on.
- Understanding the Present is Harder but Worth It: It’s easier to quote Clay Christensen than to parse new frameworks on sparse mixture-of-experts. At GenInnov, we try to do the hard thing: parse present developments with an open mind, even when the vocabulary or benchmarks don’t yet exist.
Stay in the Present (2): Ignore the Far Forecasts (and That Includes Our Own)
- Forecasting Tech Trajectories is a Bad Habit, Especially Now: Every major development in GenAI has surprised us. Our optimism about persistent memory is cautious precisely because our earlier bets on edge computing did not pan out. At GenInnov, we make decisions on evidence, not momentum. We keep reminding ourselves that absolutely nothing is given about where we are headed.
- Certainty Is the New Risk Factor: The more dominant a trend appears, the more vigilant we remain. It may feel like Hynix’s grip on HBM or TSMC’s foundry lead is beyond challenge, but the speed with which Samsung lost ground in both areas serves as a reminder: nothing is unassailable. In GenAI, this truth is starker. Many of the highest-profile models featured in similar write-ups just a year ago have faded from relevance.
- The Futility of Forecasting in a Copy-Paste World: This bullet point is most applicable to sectors driven by ideas, such as the software segment. In an age of instant copyability, where a novel architecture, agent stack, or use case can be replicated, tweaked, and open-sourced within weeks, trajectory thinking offers little predictive edge. The very idea of ‘defensibility’ is under stress for companies building AI models and agents. One model's research paper is another lab’s launchpad, and yesterday’s breakthrough is tomorrow’s base layer. At GenInnov, we are particularly cautious about extrapolating success to companies in the application layer. The highest-growth product lines in AI today were mostly absent from serious analysis eighteen months ago.
The New Nostradami of AI
Over the past two years, a fresh crop of “laws” has strutted across conference stages, op-eds, and podcasts, each presented as the long-awaited Rosetta Stone of AI. We’re told to obey the Scaling Laws (“just add more tokens and thou shalt be saved”), respect the Chinchilla Law (“actually, add exactly 20× more tokens”), and reflect on the reanimated Solow Paradox (“productivity still yawns, therefore chatbots are overrated”).
When forecasts miss the mark, pundits invoke Goodhart’s Law (metrics stop mattering), the Lucas Critique (history can’t help, but here's a historic chart anyway), or Amara’s Law (overhype now, underhype later). The Bitter Lesson tells us to buy GPUs, not PhDs. Eroom’s Law (Moore’s spelled backwards) shows us that innovation can also run in reverse. Brooks’ Law warns us that adding engineers delays the product. Cunningham’s Law says wrong answers attract better ones. The best was earlier in the year, when the Victorian-era Jevons Paradox was invoked to argue that DeepSeek would not collapse GPU demand.
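To be fair, at least one of these has a precise, testable core. The “20×” quip above refers to the Chinchilla compute-optimal heuristic of roughly 20 training tokens per model parameter; a back-of-envelope sketch follows (the model sizes are illustrative, not claims about any particular lab’s training runs).

```python
# Back-of-envelope Chinchilla arithmetic: the compute-optimal heuristic from
# Hoffmann et al. (2022) is roughly 20 training tokens per model parameter.
# Model sizes below are illustrative, not descriptions of any real training run.

TOKENS_PER_PARAM = 20  # the "20x" rule of thumb

for params_billion in (7, 70, 400):
    optimal_tokens_trillion = params_billion * 1e9 * TOKENS_PER_PARAM / 1e12
    print(f"{params_billion}B parameters -> ~{optimal_tokens_trillion:.1f}T tokens")

# e.g. a 70B-parameter model "wants" about 1.4 trillion training tokens under
# this heuristic -- the Chinchilla model itself was 70B params / 1.4T tokens.
```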
At GenInnov, we’re not immune to this temptation. Our own slide deck features the entirely self-invented Super-Moore Law: “Everything will keep doubling until investors stop cheering.” It has yet to go viral, which we take as proof that not all pseudo-laws are doomed by Goodhart’s revenge.
When Words Stop Working
Real practitioners might roll their eyes at law-of-the-week prophecies but still fall back on catchphrases like AGI, DeepTech, or even just “AI” to carry them through complex conversations without sounding pretentious. The risk is that these shortcuts now obscure more than they reveal.
We began this writing journey 160 posts ago with a simple claim: the “AI” of 2023 bore little resemblance to what the phrase meant in the 1950s, 1970s, or even late 2022. The term “transformer” is a prime example. Its original 2017 attention equation, built from queries, keys, values, and a handful of parameters, now survives only in outline (we reproduce it below for reference). The working internals of today’s models are wildly different. Flash attention, rotary embeddings, sparsity masks, routing schemes, MoE gating, and retrieval layers have reshaped the original methods so thoroughly that the resulting equations resemble the original less than general relativity resembles Newton’s laws.
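For reference, that original formulation is compact enough to quote in full: the scaled dot-product attention from Vaswani et al. (2017), with Q, K, V the query, key, and value matrices and d_k the key dimension.

```latex
% Scaled dot-product attention from "Attention Is All You Need" (Vaswani et al., 2017):
% Q, K, V are the query, key, and value matrices; d_k is the key dimension used for scaling.
\[
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right) V
\]
```

Everything listed above, from flash attention to MoE gating, still orbits this expression, yet none of it is visible in it, which is exactly the vocabulary problem.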
This linguistic mismatch will only worsen as new domains take off. Robotics is about to graft cognition onto actuators. Genomics is borrowing transformer architectures for protein folding and DNA editing. Our language, built for slower eras, now struggles to compress these cross-domain breakthroughs into anything stable. A new metaphor may bring a week of clarity and a month of fresh confusion.
Beneath the noise, something harder and more important remains: staying genuinely current on fast-moving technologies is both exceedingly difficult and, paradoxically, easier than ever. It is difficult because terminology changes weekly, architectures keep leapfrogging one another, and breakthroughs often appear first on preprint servers rather than in peer-reviewed journals. At the same time, it is easier because we now have tools that can rapidly process vast information, summarize dense research, and identify core insights with remarkable precision. Used well, these technologies become the most effective way to understand technology itself.
At GenInnov, our entire workflow is built on this belief, and on an acceptance that short, great-sounding phrases will not tell us what is going on. Sensible investing in innovation begins with a working knowledge of the technology’s structure, trajectory, and defensibility. We make no assumptions before first understanding whether a breakthrough is fragile or scalable, whether it has genuine moats or just momentum, and whether it offers a credible path to monetization. Forecasts come later. Market narratives come later. What matters first is depth of understanding, even if it is not always the most popular way to spend time. In our experience, it is the most necessary.