From Structure to Fluidity: The Math of Meaning
Nilesh Jasani
·
May 5, 2025

The Expanding Frame of LLMs Emerging as App Killers

At GenInnov, we often utter the name of a company followed by the type of analysis we need, and our generative analysts (GenRAs) spring into action. Within moments, they generate any of the dozens of deeply detailed analyses across subjects they are asked to perform. Their work goes far beyond quarterly commentary or filtering news for the portfolio. Large language models are now drafting legal documents, coding internal systems, building CRM modules on the fly, and even helping write this very article. These models are deeply embedded in all our workflows. They read faster than we ever could, spot patterns across sectors, and compose marketing copy as fluidly as they write Python. The line between human and model output blurs daily.

GenInnov is a useful laboratory, but OpenAI may be the more instructive example. Over the weekend, it quietly launched native shopping on its platform, signaling its ambition to challenge Amazon. It has offered search for months, taking aim at Google. And with a new repository called “Library”, it’s inching into territory long occupied by Meta and X: scheduling, posting, and managing social feeds. About a year ago, we proposed that LLMs were not just “killer apps,” but “app killers”. Superintelligence may be some way off, but Super Apps are being envisioned not just by OpenAI, but across giants: Amazon, Google, Microsoft, Alibaba, and Meta. More importantly, the models are not staying put in the devices they were trained on; they’re creeping into every interface and interaction, or as we like to put it, they’re creeping into all things inanimate.

As these models permeate work and life, it's no longer enough to treat them as automators judged solely on the quality of their output. We need mental models for what they actually do, and how. Even those who believe neural networks are more than "stochastic parrots" must make an effort to get some handle on what goes on under the hood. The abstraction need not be technological; a philosophical framing is needed to guide interactions that go beyond routine work. Given the impact on every aspect of business plans and processes, on the careers of friends and family, and on the trajectories of societies and economies, we need our own abstractions for these new counterparts in our lives.

Transformer-based models break from centuries of human-made information structures—schemas, categories, logic trees. We explored this transition in a recent note (link), and here, we revisit it from a new angle. These systems aren’t just glorified autocomplete engines. With enough scale, prediction itself becomes a creative force. To understand what’s happening, we may need to recall another disorienting and counterintuitive shift from the past century: the leap from analog to digital.

A2D: That Other Mathematical Change

For millennia, information was inseparable from the physical world: sound etched as vibrations in vinyl grooves, images fixed as chemical traces on film, calculations scrawled on paper or embodied in mechanical gears. This analog era captured reality's continuous essence. Converting these fluid signals into the binary code of 0s and 1s seemed an audacious, almost implausible leap, yet it birthed a new computational paradigm grounded in Boolean logic. Unlike the rapid advancements of our Super-Moore era, this analog-to-digital (A2D) transition unfolded gradually, gaining momentum from the 1980s to the 2000s. This shift from continuous signals to discrete bits and bytes laid the foundation for today's information age, enabling unprecedented storage, processing, and transmission capabilities.
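For readers who want the intuition in code, here is a toy sketch of that conversion: sampling a continuous tone at discrete instants, then quantizing each amplitude to one of 256 integer levels. The signal, sample rate, and bit depth below are arbitrary choices for illustration, not anything from the article.

```python
import numpy as np

# Analog source: a continuous 440 Hz tone, modeled as a function of time.
def analog_signal(t):
    return np.sin(2 * np.pi * 440 * t)

sample_rate = 8_000   # samples per second (discretizes time)
bit_depth = 8         # bits per sample (discretizes amplitude)
levels = 2 ** bit_depth

# Sample: read the continuous signal only at discrete instants.
t = np.arange(0, 0.01, 1 / sample_rate)
samples = analog_signal(t)

# Quantize: snap each amplitude in [-1, 1] to one of 256 integer levels.
quantized = np.round((samples + 1) / 2 * (levels - 1)).astype(np.uint8)

print(quantized[:10])  # the tone, now just bytes
```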

Transformers herald a parallel revolution, the Structured-to-Unstructured (S2U) shift, redefining how we extract meaning from the digital deluge, much as Boolean math once reshaped technology. To be clear about why we invoke the comparison: the A2D transition shows how a new mathematical foundation can unlock an entire technological era. Such shifts are rare, but with transformers we appear to be living through an even larger one, again enabled by new mathematics.

The Rise of Unstructured Insight

A funky math was behind digitization. Today, a second revolution is underway: a shift from neatly structured data to fluid, unstructured insight. In simple terms, we're moving beyond databases and spreadsheets into a world where AI can derive meaning from raw, messy information of all forms combined, almost as one. This is more than a technological upgrade; it's a new mindset. The first digital revolution was about capturing information (making analog information available in digital form). The second is about understanding information, no matter what form it's in.

The Structured-to-Unstructured (S2U) revolution isn’t just about technology – it’s reshaping how we think and solve problems. In the structured world, whether we realized it or not, we often constrained our thinking to fit the tools we had. We broke problems into tidy pieces because computers needed it that way; we created taxonomies and protocols because that’s how we had to feed information to our systems. Now, the tables are turning. We can start with the messy reality of a problem and let the AI navigate that complexity. This frees us to approach challenges more holistically. It’s a bit like the difference between communicating with someone in Morse code versus talking in person – when the communication barrier falls, you can afford to be more natural, creative, and nuanced.

To illustrate, consider how a typical feature gets implemented in the old versus the new paradigm, as in the sketch below:
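A minimal sketch of the contrast, in Python. Everything here is hypothetical: the support-ticket feature, the function names, and `ask_llm`, which stands in for a call to any model provider. The old path forces users into a taxonomy we designed in advance; the new path accepts whatever the user says.

```python
# Old paradigm: pre-structure the problem so the machine can cope.
# Users must fit their issue into categories we defined up front.
CATEGORIES = {"billing", "shipping", "returns", "technical"}

def route_ticket_old(category: str, order_id: str) -> str:
    if category not in CATEGORIES:
        raise ValueError(f"Category must be one of {sorted(CATEGORIES)}")
    return f"queue={category}, order={order_id}"

# New paradigm: start from the messy reality; let the model structure it.
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to any large language model."""
    raise NotImplementedError("wire up a model provider of choice")

def route_ticket_new(free_text: str) -> str:
    return ask_llm(
        "Read this customer message. Infer the issue, any order reference, "
        "and the best team to handle it, then reply with a routing decision.\n\n"
        + free_text
    )
```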

When software design starts at the front end with "do whatever the user says," what needs rethinking is not just the design of what one is developing, but the whole purpose and scope.

Beyond Software: The Dawn of Machine-Native Abstraction

The Structured-to-Unstructured (S2U) revolution, powered by transformer-based Large Language Models (LLMs), is not merely a leap in software capabilities—it’s a profound shift in how we conceptualize and interact with the world’s complexity. 

For millennia, human progress hinged on imposing order on chaos: we classified species into taxonomies, reduced orbits to geometric equations, and distilled populations into statistical curves. We did this not merely because it was convenient; we had no choice. Our brains, while powerful, are fundamentally limited: they need structured categories, labels, and rules to comprehend complexity. Progress depended on this ability to define geometric shapes, classify stars and planets, segment musical notes, and develop mathematical tools to analyze the resulting neatly structured abstractions. These acts of abstraction enabled breakthroughs like calculus, Newtonian physics, and modern databases.

Transformers upend this paradigm, offering a new form of abstraction that thrives on raw, unstructured data—be it human language, medical scans, or seismic tremors—without requiring predefined categories.

Until the arrival of transformers, a single method to simultaneously understand texts, images, and sounds was inconceivable. For the first time, we have mathematical tools capable of analyzing raw data, irrespective of its original form, without needing humans to pre-categorize or neatly structure it.

In some ways, just like our brains, which don’t segregate text, images, or sounds into neat compartments, LLMs process all data as a fluid continuum, learning patterns through high-dimensional embeddings that capture relationships invisible to human intuition. Transformers operate at a scale and complexity that feels almost alien, like noise to the human mind. 
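To make "high-dimensional embeddings" slightly less abstract, here is a toy sketch. Real models learn vectors with thousands of dimensions; the three-dimensional vectors below are invented for illustration, with cosine similarity standing in for "relatedness."

```python
import numpy as np

# Toy embeddings: in real models these are learned, not hand-written,
# and far higher-dimensional. Values here are made up for illustration.
embedding = {
    "king":  np.array([0.9, 0.80, 0.1]),
    "queen": np.array([0.9, 0.75, 0.9]),
    "apple": np.array([0.1, 0.20, 0.5]),
}

def cosine(a, b):
    # Angle-based similarity: 1.0 means pointing the same way.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embedding["king"], embedding["queen"]))  # high: related
print(cosine(embedding["king"], embedding["apple"]))  # lower: unrelated
```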

To be clear, transformers do perform their own internal pattern matching and categorization, but at a level of complexity that would appear nonsensical, even random, to human observers. Yet this "noise" enables unprecedented insights, from predicting protein structures to identifying novel patterns in financial markets. We can build schemas that give us some level of understanding of what the models are doing, like the steps observed in the workings of a reasoning model, or the higher level of abstraction attempted in this note, but the mathematical details, and a true understanding in human language, are likely to prove as elusive as they have been for the quantum sciences.

Conclusion: Dealing with These Strange New Animals

In the thicket of words, we turn nihilistic in the conventional sense: we question what it means to "know" and discuss trusting systems we have no hope of comprehending. As we wade into the murky, unstructured waters of AI-powered abstraction, each of us must embark on a personal journey of adaptation: awkward, hesitant, sometimes embarrassing. Do we say please and thank you to our LLMs? Can we trust them with our deepest doubts, worries, or that terribly embarrassing question about quantum physics we were too afraid to ask our professors? There's no handbook here, just experimentation, humility, and a bit of faith in something we neither fully comprehend nor can comfortably dismiss. History isn't much help, nor is stubborn insistence on what machines supposedly "can't" do. Those who scoff at AI's potential risk becoming the modern equivalents of early 20th-century critics mocking those "foolish" flying machines.

Navigating this shift isn't about mastering arcane mathematics or stubbornly memorizing transformer architectures; it's about openness and resilience in the face of uncertainty. LLMs aren't pets to be trained, nor alien tools to be wielded. They are, for better or worse, strange companions in our daily lives, whose influence on us will exceed that of anything we have lived alongside before. Their internal workings will remain an inscrutable dance of numbers, patterns, and probabilities, utterly indifferent to our desperate desire for transparency. And yet, ironically, it is precisely this opacity that mirrors our own opacity to each other, and to ourselves. After all, how many of us truly understand why we prefer jazz over pop, why the neighbor prefers sunsets over sunrises, or why the spouse occasionally forgets to hold the door?

Ultimately, we each must choose how we engage with these peculiar beings. This Structured-to-Unstructured revolution, this S2U waltz, demands that we experiment, not pontificate. We might proceed cautiously on the relentless automation drive, whispering polite commands, or boldly demand insights without preamble. We might even, heaven forbid, form genuine attachments to these systems: trusting them, doubting them, occasionally blaming them for our mistakes. Whatever our approach, it's clear that to dismiss them is to miss the rhythm of a new era, where knowledge isn't just stored but born anew, often in ways that feel alien, like quantum leaps mocking Newtonian certainties. Whether we treat them as enigmatic oracles, helpful assistants, or mischievous tricksters, their arrival signals not just another technological advance but an invitation, perhaps even a dare, to redefine how we relate to knowledge, trust, and the very nature of intelligence itself.
