Flapping Airplanes and the promise of research-first AI


A new artificial intelligence lab called Flapping Airplanes launched on Wednesday with $180 million in seed funding from Google Ventures, Sequoia, and Index. The founding team is impressive, and the goal – finding a less data-hungry way to train large models – is a particularly interesting one.

Based on what I’ve seen so far, I’d classify them as a tier-two bet on the measure of trying to make money.

But there’s something even more exciting about the Flapping Airplanes project that I couldn’t put my finger on until I read this post from Sequoia partner David Cahn.

As Cahn describes it, Flapping Airplanes is one of the first labs to move beyond the relentless scaling up of data and compute that has defined much of the industry to date:

The scaling model calls for allocating a huge share of society’s resources – as much as the economy can muster – to scaling up today’s LLMs, in the hope that this will lead to artificial general intelligence. The research model holds that we are 2-3 research breakthroughs away from AGI, and that we should therefore devote resources to long-term research, especially projects that may take 5-10 years to come to fruition.

(…)

A compute-first approach would prioritize cluster scale above all else, and strongly favor short-term gains (on the order of 1-2 years) over long-term bets (on the order of 5-10 years). A research-first approach would spread bets out over time, and would be willing to make many bets that each have a low absolute probability of success but that collectively expand the search space of what is possible.

Maybe the compute folks are right, and it’s pointless to focus on anything other than frantic server buildouts. But with so many companies already pointed in that direction, it’s good to see someone heading the other way.
