
We are at a unique moment for AI companies building their own foundation models.
First, there is an entire generation of industry veterans who made their names at big tech companies and are now striking out on their own. You also have legendary researchers with immense experience but vague business aspirations. There's a clear chance that at least some of these new labs will grow into giants the size of OpenAI, but there's also room for them to do interesting research without worrying much about monetization.
The end result? It’s becoming harder to know who’s actually trying to make money.
To make things simpler, I propose a sliding scale for any company building a foundation model. It's a five-level scale where it doesn't matter whether you're actually making money, just whether you're trying. The idea is to measure ambition, not success.
Think about it in these terms:
The big names are all at Level 5: OpenAI, Anthropic, Gemini, etc. The scale gets more interesting with the new generation of labs now launching, which have big dreams but ambitions that can be hard to read.
Importantly, the people running these labs can generally choose whichever level they want. There is so much money in AI right now that no one will press them on a business plan; even if the lab is little more than a research project, investors will be happy to take part. And if you're not particularly motivated to become a billionaire, you'll probably live a happier life at Level 2 than at Level 5.
Problems arise because it's not always clear where an AI lab falls on this scale, and much of the current drama in the AI industry comes from this confusion. Much of the concern over OpenAI's move away from its non-profit structure came from the fact that the lab spent years at Level 1, then jumped to Level 5 almost overnight. On the other hand, you might argue that Meta's early AI research was Level 2 when what the company really wanted was Level 4.
With that in mind, here's a quick rundown of four of the most prominent contemporary AI labs and where each one lands on the scale.
Humans& was the big AI news this week, and part of the inspiration for this whole scale. The founders have a compelling pitch for the next generation of AI models, one that looks past raw scaling to focus on communication and coordination tools.
But despite all the glowing press, Humans& has been coy about how it will translate that into actual monetizable products. It does seem to want to build products eventually, but the team won't commit to anything specific. The most they've said is that they'll build some kind of AI workplace tool, one that replaces products like Slack, Jira, and Google Docs while also redefining how those tools work at a fundamental level. Workplace software for the post-workplace workplace!
It's my job to know what these things mean, and I'm still confused by that last part. But it's specific enough that I think we can put Humans& at Level 3.
This one is very difficult to evaluate! Generally speaking, when a former OpenAI CTO raises a $2 billion seed round, you have to assume there's a very specific roadmap behind it. Mira Murati doesn't strike me as someone who would jump in without a plan, so coming into 2026, I'd have felt comfortable putting Thinking Machines Lab at Level 4.
But then came the events of the past two weeks. The departure of CTO and co-founder Barret Zoph made most of the headlines, in part because of the unusual circumstances involved. But at least five other employees left alongside Zoph, with several citing concerns about the company's direction. Barely a year in, nearly half of the executives on TML's founding team no longer work there. One way to read these events: they thought they had a solid plan for becoming a world-class AI lab, then discovered the plan wasn't as solid as they thought. In terms of the scale, they wanted a Level 4 lab but realized they were at Level 2 or 3.
There isn't enough evidence to justify a downgrade yet, but it's getting close.
Fei-Fei Li is one of the most respected names in AI research, best known for founding the ImageNet Challenge that launched contemporary deep learning techniques. She currently holds the Sequoia Chair at Stanford University, where she co-directs two different AI labs. I won't bore you by listing all her academic honors and positions, but suffice it to say that, if she wanted, she could spend the rest of her life collecting awards and being told how great she is. And she is great!
So in 2024, when Li announced that she had raised $230 million for a spatial AI company called World Labs, you might have assumed we were operating at Level 2 or below.
But that was more than a year ago, which is an eternity in AI. Since then, World Labs has shipped both a full world-generation model and a commercial product built on top of it. Over the same period, we've seen real signs of demand for world models from both the video game and visual effects industries, and none of the major labs have built anything that can compete. The result looks very much like a Level 4 company that may soon graduate to Level 5.
Founded by Ilya Sutskever, former chief scientist at OpenAI, Safe Superintelligence (or SSI) looks like a classic example of a Level 1 startup. Sutskever has gone to great lengths to keep SSI insulated from commercial pressures, even rebuffing a reported takeover attempt by Meta. There are no product cycles and, aside from a still-maturing superintelligent foundation model, there doesn't seem to be any product at all. And with that pitch, he raised $3 billion! Sutskever has always been more interested in the science of AI than the business, and all signs point to this being a genuinely scientific project at its core.
However, the world of AI is moving quickly, and it would be foolish to rule SSI out of the commercial sphere altogether. In his recent appearance on the Dwarkesh podcast, Sutskever offered two reasons why SSI might shift: either "if the timelines turn out to be long, which may happen," or because "there is great value in the best and most powerful AI impacting the world." In other words, if the research goes very well or very poorly, we may see SSI jump several levels quickly.