AGI: The Four Letters We No Longer Fear
Nilesh Jasani
January 21, 2025

Human advancement can be characterized as our ever-increasing ability to understand and manipulate our environment to serve our needs. From the days of the proverbial Adam, this has been a one-way path. However, this trajectory might reach its pinnacle with the advent of AGI. We are entering an era where machines can comprehend and manipulate the world at higher, ever-rising levels of complexity than humans can, arriving in moments at conclusions that our best minds might take years or decades to reach.

AGI is Now….Sort Of

Science fiction was your only port of call for decades if you wanted a taste of “general intelligence” in a machine. From dystopian mega-computers to rogue androids, authors ran wild with the idea long before tech labs got serious about building it. Back then, it represented the ultimate ambition for scientists: creating machines that could match or even surpass human intelligence across a wide range of tasks. Ever since the term Artificial Intelligence was coined in the mid-1950s, game-playing and theorem-proving machines were expected, but machines that could do as well as humans across a vast range of cognitive domains remained a distant goal that few futurists expected to see in their lifetimes.

There is no dearth of people who have waxed eloquent on why machine intelligence across domains could never match that of humans. Given the slow progress witnessed in AI for decades, many consigned this ambition to next-century technologists. Even when things began to move, expert polls conducted in 2012 and 2013 showed a median estimate of 2040 to 2050 for when experts would be 50% confident of AGI's arrival.

And now we have the following:

Of course, the above is the conquest of a single benchmark, even if it was designed as a test for this loosely defined concept. The benchmark’s makers are expected to release an improved version, on which we won’t be surprised to see current models perform extremely poorly at first. Even the most optimistic AI enthusiasts acknowledge that machines aren’t there yet, but few charts better illustrate AI’s explosive growth, or the profound questions it raises.

The Concerns Consigned to Conferences

Just a few years ago, prominent figures like Stephen Hawking and Elon Musk expressed serious concerns about pursuing artificial general intelligence (AGI). In a 2014 interview with the BBC, Hawking warned that "the development of full artificial intelligence could spell the end of the human race." He envisioned a scenario where AI would "take off on its own and re-design itself at an ever-increasing rate," surpassing human capabilities and potentially leading to our obsolescence. Similarly, in 2014, Musk likened the development of advanced AI to "summoning the demon" and called it humanity's "biggest existential threat." He has repeatedly emphasized the potential dangers of AGI, even co-signing a 2023 open letter urging a six-month pause on training the most advanced AI systems to allow for careful consideration of their ethical implications.

These concerns resonate in conferences worldwide, not just as philosophical musings but for their tangible implications. While the philosophical anxieties center on machines operating beyond human comprehension, the practical concerns revolve primarily around the impact on human labor. The growing apprehension stems from a recent trend in which even high-performing companies with double-digit growth rates resort to staff reductions in the name of efficiency, a phenomenon rarely witnessed before.

Even as organizations like OpenAI began to openly declare their pursuit of AGI, its proponents and developers said little in public forums that differed from its critics. Everyone talked about the need to be responsible, the offsetting benefits for humanity, the need for guardrails, and similar issues. The atmosphere was one of near-complete consensus on the premises but diametrically opposed conclusions: some wanted the AGI pursuit to stop, while those who stayed quiet were in an arms race.

As we’ve discussed before, corporations and nations face a classic prisoner’s dilemma: if they don’t pursue “it”—in this case, AGI—the people they hate the most, aka their biggest rivals, will. In a world of fierce competition, cooperation is rare, and the allure of disproportionate gains for the frontrunners is too strong to ignore. The race quietly cleared one milestone some time ago, and the post-AGI debate will bring another concept into view; let’s discuss both before drawing some practical conclusions.

From Turing to AGI to the Singularity: Three Milestones, One Controversy

The conceptual odyssey from Turing to AGI to Singularity is a fascinating progression of technological milestones, each evoking profound debates and emotions. Disagreements on each are as heated as those on concepts like free will or consciousness, mostly born of the differing meanings these terms carry for different people. And yet these concepts, while related, occupy distinct positions on the spectrum of machine intelligence.

The Turing Test represents a threshold at which machines exhibit behavior indistinguishable from human intelligence. Quietly but surely, the Turing Test has been surpassed in various contexts—think of chatbots and virtual assistants that can mimic human interaction—and as a result, the concept has faded from the limelight in recent quarters. It must be admitted that some groups would vigorously dispute that machines have conquered the Turing Test, but their definitions are closer to those of AGI, discussed below.

The spotlight has now turned to AGI, the next frontier. Unlike narrow AI systems that excel in specific domains, AGI would theoretically be capable of reasoning, problem-solving, and adapting to new situations with human-like flexibility. Such behaviors are already on display in our Agentic AI period. Still, it will take a while before most of us agree that machines have overtaken humans in most cognitive domains, and not just in exams. If 2023 and 2024 were about comparing machines’ abilities on high-school or college exam papers, machines are now pitted against PhDs. More importantly, ever more difficult benchmarks are being developed to delineate the domains where machines cannot do what humans can. Even three or five years from now, there will likely be benchmarks where we humans perform better, but the pace of change shown above implies that the AGI debate, like the Turing one, will end at some point.

Finally, the concept of Singularity looms as the ultimate horizon, a point where machine intelligence surpasses human intelligence, leading to an unprecedented acceleration in technological progress. The singularity is often depicted as a tipping point where AI systems recursively improve themselves, resulting in an explosion of intelligence beyond human comprehension. This concept is more speculative and contentious than AGI, and perhaps it is better left at that for our newsletter, at least in 2025.

AGI: Why Should We Care?

Dinner table debates are poised to shift from whether large language models are “statistical parrots” that hallucinate random text to a far more portentous and contentious question: “Has AGI arrived?” But beyond that, why should one care, particularly in the investment arena?

Achieving a universal intelligence goes beyond semantics, at least for Microsoft and OpenAI. One intriguing rumor floating around—though not officially confirmed—is that hitting a recognized definition of AGI might alter or even end OpenAI’s arrangement with Microsoft. Some speculate that if OpenAI achieves genuine, all-purpose intelligence, it could trigger “escape clauses” or different funding structures. While the precise contract terms haven’t been publicly disclosed, the mere possibility underscores how high the stakes are when defining AGI. Whether or not the partnership is affected, it’s a revealing glimpse into how a once-abstract concept can carry real-world business and legal consequences.

Intelligence Injection Into the Inanimate

Technologically, the path to AGI runs through the development of reasoning models, a departure from the early-2024 fascination with ever-larger models. An acceptance that machines can truly match human-level thought will supercharge the spread of intelligence into every corner of our environment. Once we trust that machines can reason, we’ll be far more comfortable handing them the keys to everything from our home appliances to new frontiers we haven’t even imagined. TVs and cars will just be the beginning; soon, we might expect a tree-planting drone to redesign ecosystems autonomously, or a new drug to be formulated entirely by AI chemists, with no human hand needed until the final safety checks.

This "intelligence injection" will have profound implications for various sectors. In healthcare, it could lead to more accurate diagnoses, personalized treatments, and even the development of new drugs and therapies by AI-powered systems. In manufacturing, it could revolutionize production processes, optimize resource allocation, and enable the creation of entirely new products and materials. The point is not just about the abilities but acceptance and faith: once machines’ cognitive abilities are widely accepted, entrepreneurs to consumers will have higher faith in experimenting with the GenAI models.

This shift in our relationship with technology will also reshape our social and political structures. As AGI comes to be seen as achieved or within reach, questions about human-machine collaboration, the division of labor, and the ethical implications of AI decision-making will become increasingly important. We may keep calling it AI, or the word AGI may replace it, but what has been a benchmark pursuit in the tech world is about to reshape the discourse around, and the progress path of, GenAI/AI/AGI.
