
Backlash comes for all of us, and for all our hot new turns of phrase. The word “rizz” lost its luster when grandparents started asking what it meant. Teachers dressing up as “6-7” for Halloween put the nail in the coffin of Gen Alpha’s rallying cry. And tech CEOs who once touted the pursuit of “artificial general intelligence,” or AGI, are now scrambling for any other term they can find.
Until recently, artificial general intelligence was the AI industry’s ultimate goal. The famously vague term was reportedly coined in 1997 by researcher Mark Gubrud, who defined it as “AI systems that rival or exceed the human mind in complexity and speed.” The term still generally refers to AI that equals or surpasses human intelligence. But now, many of the biggest companies are trying to rebrand, coining their own phrases or acronyms that (spoiler alert) still mean essentially the same thing.
CEOs have spent the past year downplaying “artificial general intelligence” as a milestone. Dario Amodei, CEO of the Amazon-backed Anthropic, has said publicly that he “hate[s] the term artificial general intelligence” and has “always considered it a marketing term.” Sam Altman, CEO of OpenAI, said in August that it’s “not a super useful term.” Jeff Dean, Google’s chief scientist and Gemini lead, has said he “tend[s] to stay away from AGI conversations.” Microsoft CEO Satya Nadella has said we’re “getting a bit ahead of ourselves with all this AGI hype,” and that “self-claiming some AGI milestone” is “just nonsensical benchmark hacking.” He also said on a recent earnings call that he doesn’t believe “artificial general intelligence, AGI, as defined at least by us in our contract, is going to get achieved any time soon.”
Instead, they’re pushing a wealth of competing terms. Meta has “personal superintelligence,” Microsoft has “humanist superintelligence,” Amazon has “useful general intelligence,” and Anthropic has “powerful AI.” It’s a sharp shift for these companies, all of which spent recent years selling AGI, and the fear of missing out on it, to anyone who would listen.
Part of the problem with “artificial general intelligence” is that the more advanced AI becomes, the murkier the term seems, since the notion of AI “equal to human intelligence” looks different to almost everyone. “A lot of people have very different definitions of it, and the difficulty of the problem varies by factors of a trillion,” Dean has said.
Some companies, however, have billions of dollars riding on this murky phrase, a problem that’s most apparent in the strange, ever-shifting relationship between Microsoft and OpenAI.
In 2019, OpenAI and Microsoft famously signed a contract that included an “AGI clause”: Microsoft was granted the rights to use OpenAI’s technology until the latter achieved artificial general intelligence. But the contract apparently never fully defined what that meant. When the deal was renegotiated in October, things got more complicated. The terms were changed so that “once OpenAI declares AGI, that declaration will now be verified by an independent expert panel,” meaning it’s no longer just OpenAI deciding what AGI means but a group of industry experts, and Microsoft won’t lose all of its rights to the technology once it happens, either. The simplest way to navigate this whole ordeal? Just don’t say AGI.
The other problem is that AGI has picked up some baggage. Tech companies have spent years detailing their fears about how the technology could destroy everything. Books have been written (see: If Anyone Builds It, Everyone Dies). Hunger strikes have made headlines. For a while, that was still good marketing: saying your technology is so powerful you’re worried about its impact on Earth seemed to attract big investor money. But the public, unsurprisingly, balked at the idea. So, between convoluted definitions, contract drama, and general fear of superhuman AI, it’s much easier to market less loaded terms. That’s why every tech company seems to be spinning up an “intelligence” brand of its own.
One popular catchall alternative to AGI is “artificial superintelligence,” or ASI: AI that surpasses human intelligence in virtually every domain, as opposed to AGI, which is now often framed as AI that merely equals human intelligence. But for some in the tech industry, even “superintelligence” has become amorphous and conflated with AGI. The two theoretical milestones don’t even have clearly distinct timelines. Amodei says he expects “powerful AI” could come “as early as 2026.” Altman says he expects AGI to be developed in the “reasonably near future.”
So companies developed their own variants. Meta CEO Mark Zuckerberg said in January that the company needed to “build general [artificial] intelligence,” but by July it had pivoted to “personal superintelligence” in a manifesto. It was a people-first spin on AGI: one that “helps you achieve your goals, create what you want to see in the world, experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be.” Zuckerberg used the manifesto both to counter public fears of AI taking jobs and to throw shade at Meta’s competitors, framing the company’s vision as distinct from “others in the industry who believe superintelligence should be directed centrally towards automating all valuable work, after which humanity will live on a dole of its output.”
Microsoft, meanwhile, has rebranded its project as the pursuit of “humanist superintelligence,” or HSI, which is essentially Zuckerberg’s manifesto in a different font. The company defines HSI as “incredibly advanced AI capabilities that always work for, in service of, people and humanity more generally,” and that are “problem-oriented” rather than “an unbounded and unlimited entity with high degrees of autonomy.” The rebrand came complete with a new website topped with the term and made to feel approachable, with a sepia-toned backdrop, a soft color palette, and plenty of paintings and photographs of nature.
Amazon, for its part, has rebranded its AGI effort as the pursuit of “useful general intelligence,” or “AI that makes us smarter and more capable.” Late last year, the company hired the founders of Adept, an AI startup, and licensed its technology in a bid to compete with others in the AGI race. But like the other corporate rebrands, Amazon is positioning its UGI effort as useful, approachable, and decidedly not powerful or scary: just “enabling practical AI that can do things for us and make our customers more productive, empowered, and fulfilled.”
With “powerful AI,” Anthropic isn’t worried about seeming grounded. Amodei has described it as a “country of geniuses in a datacenter” that is “smarter than a Nobel Prize winner across most relevant fields” (biology, programming, math, engineering, writing, and so on). He has said powerful AI would be able to write compelling novels, prove unsolved mathematical theorems, and produce complex code. It wouldn’t just answer questions; it would complete complex, multi-step tasks over the course of hours, days, or weeks, much the way AI CEOs envision a successful AI agent, “absorbing information and generating actions at roughly 10x-100x human speed.”
AGI and ASI were already a lot to keep track of. Now we have PSI, HSI, UGI, and powerful AI, too. Here’s to whatever new acronyms next year brings.