We imagine a world where models can think at the level of humans without ingesting half the internet. The proof that this is possible is all around us: current systems are trained on essentially all accessible recorded text, yet humans exceed their capabilities despite seeing at most a few billion text tokens by adulthood. We estimate that humans are 100,000x-1,000,000x more sample efficient than existing models.1 To achieve such large gains, however, we can’t merely tweak existing techniques—we need big ideas.
Flapping Airplanes is a foundational AI research lab devoted to solving the data efficiency problem. We are not focused on one specific technical idea or vertical; we're taking a long-term view and pursuing radical new approaches. We have the resources and the will to do so.
Our singular focus is doing good, paradigm-shifting research. While we are not currently trying to commercialize, our work will ultimately unlock enormous value in enterprise settings, robotics, trading, scientific discovery, and much more.
Flapping Airplanes is a metaphor, but it also embodies our culture. Our name is out-of-distribution, and so are our people: we work with some of the most talented researchers in the field, researchers who think in out-of-distribution ways. The team blends world-class research scientists with younger researchers who are already the best at what they do, including IMO, IOI, and IPhO medalists.
We are backed by Google Ventures, Sequoia Capital, and Index Ventures with participation from friends at XTX Ventures, Fundomo, Menlo Ventures, Victor Lazarte, Nova Global, Conviction, and others. Our angels and advisors include Chris Ré, Andrej Karpathy, Jeff Dean, and more. So far, we have raised $180M.
1 If you don't agree with this estimate, please email disagree@flappingairplanes.com. We would be delighted to chat with you about this (be warned that we employ two US debate champions).