Flapping Airplanes on the future of AI: ‘We want to try really radically different things’

A bunch of exciting research-focused AI labs have popped up in recent months, and Flapping Airplanes is one of the most interesting. Propelled by its young and curious founders, Flapping Airplanes is focused on finding less data-hungry ways to train AI. It’s a potential game-changer for the economics and capabilities of AI models — and with $180 million in seed funding, they’ll have plenty of runway to figure it out.

Last week, I spoke with the lab’s three co-founders — brothers Ben and Asher Spector, and Aidan Smith — about why this is an exciting moment to start a new AI lab and why they keep coming back to ideas about the human brain.

I want to start by asking, why now? Labs like OpenAI and DeepMind have spent so much on scaling their models. I’m sure the competition seems daunting. Why did this feel like a good moment to launch a foundation model company?

Ben: There’s just so much to do. The advances that we’ve gotten over the last five to ten years have been spectacular. We love the tools. We use them every day. But the question is, is this the whole universe of things that needs to happen? We thought about it very carefully, and our answer was no, there’s a lot more to do. In our case, we thought that the data efficiency problem was really the key thing to go look at. The current frontier models are trained on the sum total of human knowledge, and humans can obviously make do with an awful lot less. So there’s a big gap there, and it’s worth understanding.

What we’re doing is really a concentrated bet on three things. It’s a bet that this data efficiency problem is the important thing to be doing. Like, this is really a direction that is new and different and you can make progress on it. It’s a bet that this will be very commercially valuable and that it will make the world a better place if we can do it. And it’s also a bet that the right kind of team to do it is a creative and even, in some ways, inexperienced team that can go look at these problems again from the ground up.

Aidan: Yeah, absolutely. We don’t really see ourselves as competing with the other labs, because we think that we’re looking at just a very different set of problems. If you look at the human mind, it learns in an incredibly different way from transformers. And that’s not to say better, just very different. So we see these different trade-offs. LLMs have an incredible ability to memorize, and draw on this great breadth of knowledge, but they can’t really pick up new skills very fast. It takes just rivers and rivers of data to adapt. And when you look inside the brain, you see that the algorithms that it uses are just fundamentally so different from gradient descent and some of the techniques that people use to train AI today. So that’s why we’re building a new guard of researchers to kind of address these problems and really think differently about the AI space.

Asher: This question is just so scientifically interesting: why are the systems that we have built that are intelligent also so different from what humans do? Where does this difference come from? How can we use knowledge of that difference to make better systems? But at the same time, I also think it’s actually very commercially viable and very good for the world. Lots of regimes that are really important are also highly data constrained, like robotics or scientific discovery. Even in enterprise applications, a model that’s a million times more data efficient is probably a million times easier to put into the economy. So for us, it was very exciting to take a fresh perspective on these approaches, and think, if we really had a model that’s vastly more data efficient, what could we do with it?

This gets into my next question, which sort of also ties in to the name, Flapping Airplanes. There’s this philosophical question in AI about how much we’re trying to recreate what humans do in their brain, versus creating some more abstract intelligence that takes a completely different path. Aidan is coming from Neuralink, which is all about the human brain. Do you see yourself as kind of pursuing a more neuromorphic view of AI?

Aidan: The way I look at the brain is as an existence proof. We see it as evidence that there are other algorithms out there. There’s not just one orthodoxy. And the brain has some crazy constraints. When you look at the underlying hardware, there’s some crazy stuff. It takes a millisecond to fire an action potential. In that time, your computer can do just so so many operations. And so realistically, there’s probably an approach that’s actually much better than the brain out there, and also very different than the transformer. So we’re very inspired by some of the things that the brain does, but we don’t see ourselves being tied down by it.

Ben: Just to add on to that, it’s very much in our name: Flapping Airplanes. Think of the current systems as big Boeing 787s. We’re not trying to build birds. That’s a step too far. We’re trying to build some kind of a flapping airplane. My perspective from computer systems is that the constraints of the brain and silicon are sufficiently different from each other that we should not expect these systems to end up looking the same. When the substrate is so different and you have genuinely very different trade-offs about the cost of compute, the cost of locality and moving data, you actually expect these systems to look a little bit different. But just because they will look somewhat different does not mean that we should not take inspiration from the brain and try to use the parts that we think are interesting to improve our own systems.

It does feel like there’s now more freedom for labs to focus on research, as opposed to just developing products. It feels like a big difference for this generation of labs. You have some that are very research focused, and others that are sort of “research focused for now.” What does that conversation look like within Flapping Airplanes?

Asher: I wish I could give you a timeline. I wish I could say, in three years, we’re going to have solved the research problem. This is how we’re going to commercialize. I can’t. We don’t know the answers. We’re looking for truth. That said, I do think we have commercial backgrounds. I spent a bunch of time developing technology for companies that made those companies a reasonable amount of money. Ben has incubated a bunch of startups, and we actually are excited to commercialize. We think it’s good for the world to take the value you’ve created and put it in the hands of people who can use it. So I don’t think we’re opposed to it. We just need to start by doing research, because if we start by signing big enterprise contracts, we’re going to get distracted, and we won’t do the research that’s valuable.

Aidan: Yeah, we want to try really, really radically different things, and sometimes radically different things are just worse than the current paradigm. We’re exploring a set of different trade-offs. It’s our hope that they will be different in the long run.

Ben: Companies are at their best when they’re really focused on doing something well, right? Big companies can afford to do many, many different things at once. When you’re a startup, you really have to pick what is the most valuable thing you can do, and do that all the way. And we are creating the most value when we are all in on solving fundamental problems for the time being. 

I’m actually optimistic that reasonably soon, we might have made enough progress that we can then go start to touch grass in the real world. And you learn a lot by getting feedback from the real world. The amazing thing about the world is, it teaches you things constantly, right? It’s this tremendous vat of truth that you get to look into whenever you want. The main thing that I think has been enabled by the recent change in the economics and financing of these structures is the ability to let companies really focus on what they’re good at for longer periods of time. That focus is the thing I’m most excited about, and it’s what will let us do really differentiated work.

To spell out what I think you’re referring to: there’s so much excitement around, and the opportunity for investors is so clear, that they are willing to give $180 million in seed funding to a completely new company full of these very smart, but also very young, people who didn’t just cash out of PayPal or anything. How was it engaging with that process? Did you know, going in, that there was this appetite, or was it something you discovered, like, actually, we can make this a bigger thing than we thought?

Ben: I would say it was a mixture of the two. The market has been hot for many months at this point. So it was not a secret that large rounds were starting to come together. But you never quite know how the fundraising environment will respond to your particular ideas about the world. This is, again, a place where you have to let the world give you feedback about what you’re doing. Even over the course of our fundraise, we learned a lot and actually changed our ideas. And we refined our opinions of the things we should be prioritizing, and what the right timelines were for commercialization.

I think we were somewhat surprised by how well our message resonated, because it was something that was very clear to us, but you never know whether your ideas will turn out to be things that other people believe as well or if everyone else thinks you’re crazy. We have been extremely fortunate to have found a group of amazing investors who our message really resonated with and they said, “Yes, this is exactly what we’ve been looking for.” And that was amazing. It was, you know, surprising and wonderful.

Aidan: Yeah, a thirst for the age of research has kind of been in the water for a little bit now. And more and more, we find ourselves positioned as the player to pursue the age of research and really try these radical ideas.

At least for the scale-driven companies, there is this enormous cost of entry for foundation models. Just building a model at that scale is an incredibly compute-intensive thing. Research is a little bit in the middle, where presumably you are building foundation models, but if you’re doing it with less data and you’re not so scale-oriented, maybe you get a bit of a break. How much do you expect compute costs to limit your runway?

Ben: One of the advantages of doing deep, fundamental research is that, somewhat paradoxically, it is much cheaper to try really crazy, radical ideas than it is to do incremental work. Because when you do incremental work, in order to find out whether or not it works, you have to go very far up the scaling ladder. Many interventions that look good at small scale do not actually persist at large scale. So as a result, it’s very expensive to do that kind of work. Whereas if you have some crazy new idea about a new architecture or optimizer, it’s probably just gonna fail on the first run, right? So you don’t have to run it up the ladder. It’s already broken. That’s great.

So, this doesn’t mean that scale is irrelevant for us. Scale is actually an important tool in the toolbox of all the things that you can do. Being able to scale up our ideas is certainly relevant to our company. So I wouldn’t frame us as the antithesis of scale, but I think it is a wonderful aspect of the kind of work we’re doing, that we can try many of our ideas at very small scale before we would even need to think about doing them at large scale.

Asher: Yeah, you should be able to use all of the internet. But you shouldn’t need to. We find it really, really perplexing that you need to use all of the internet to really get this human-level intelligence.

So, what becomes possible if you’re able to train more efficiently on data, right? Presumably the model will be more powerful and intelligent. But do you have specific ideas about kind of where that goes? Are we looking at more out-of-distribution generalization, or are we looking at sort of models that get better at a particular task with less experience?

Asher: So, first, we’re doing science, so I don’t know the answer, but I can give you three hypotheses. My first hypothesis is that there’s a broad spectrum between just looking for statistical patterns and something that has really deep understanding. And I think the current models live somewhere on that spectrum. I don’t think they’re all the way towards deep understanding, but they’re also clearly not just doing statistical pattern matching. And it’s possible that as you train models on less data, you really force the model to have incredibly deep understandings of everything it’s seen. And as you do that, the model may become more intelligent in very interesting ways. It may know fewer facts, but get better at reasoning. So that’s one potential hypothesis.

Another hypothesis is similar to what you said, that at the moment, it’s very expensive, both operationally and also in pure monetary costs, to teach models new capabilities, because you need so much data to teach them those things. It’s possible that one output of what we’re doing is to get vastly more efficient at post training, so with only a couple of examples, you could really put a model into a new domain. 

And then it’s also possible that this just unlocks new verticals for AI. There are certain types of robotics, for instance, where, for whatever reason, we can’t quite get the type of capabilities that really make it commercially viable. My opinion is that it’s a limited-data problem, not a hardware problem. The fact that you can tele-operate the robots to do stuff is proof that the hardware is sufficiently good. But there’s lots of domains like this, like scientific discovery.

Ben: One thing I’ll also double-click on is that when we think about the impact that AI can have on the world, one view you might have is that this is a deflationary technology. That is, the role of AI is to automate a bunch of jobs, and take that work and make it cheaper to do, so that you’re able to remove work from the economy and have it done by robots instead. And I’m sure that will happen. But this is not, to my mind, the most exciting vision of AI. The most exciting vision of AI is one where there’s all kinds of new science and technologies that we can construct that humans aren’t smart enough to come up with, but other systems can. 

On this aspect, I think that first axis that Asher was talking about, around the spectrum between sort of true generalization versus memorization or interpolation of the data, is extremely important for getting the deep insights that will lead to these new advances in medicine and science. It is important that the models are very much on the creativity side of the spectrum. And so, part of why I’m very excited about the work that we’re doing is that, even beyond the individual economic impacts, I’m also just genuinely very mission-oriented around the question of, can we actually get AI to do stuff that, like, fundamentally humans couldn’t do before? And that’s more than just, “Let’s go fire a bunch of people from their jobs.”

Absolutely. Does that put you in a particular camp on, like, the AGI conversation, the out-of-distribution generalization conversation?

Asher: I really don’t exactly know what AGI means. It’s clear that capabilities are advancing very quickly. It’s clear that there’s a tremendous amount of economic value being created. I don’t think we’re very close to God-in-a-box, in my opinion. I don’t think that within two months or even two years, there’s going to be a singularity where suddenly humans are completely obsolete. I basically agree with what Ben said at the beginning, which is, it’s a really big world. There’s a lot of work to do. There’s a lot of amazing work being done, and we’re excited to contribute.

Well, the idea about the brain and the neuromorphic part of it does feel relevant. You’re saying, really the relevant thing to compare LLMs to is the human brain, more than the Mechanical Turk or the deterministic computers that came before.

Aidan: I’ll emphasize, the brain is not the ceiling, right? The brain, in many ways, is the floor. Frankly, I see no evidence that the brain is not a knowable system that follows physical laws. In fact, we know it’s under many constraints. And so we would expect to be able to create capabilities that are much, much more interesting and different and potentially better than the brain in the long run. And so we’re excited to contribute to that future, whether that’s AGI or otherwise.

Asher: And I do think the brain is the relevant comparison, just because the brain helps us understand how big the space is. Like, it’s easy to see all the progress we’ve made and think, wow, we, like, have the answer. We’re almost done. But if you look outward a little bit and try to have a bit more perspective, there’s a lot of stuff we don’t know.

Ben: We’re not trying to be better, per se. We’re trying to be different, right? That’s the key thing I really want to hammer on here. All of these systems will almost certainly have different trade-offs. You’ll get an advantage somewhere, and it’ll cost you somewhere else. And it’s a big world out there. There are so many different domains with so many different trade-offs that having more systems, and more fundamental technologies that can address these different domains, is very likely to make AI diffuse more effectively and more rapidly through the world.

One of the ways you’ve distinguished yourselves is in your hiring approach, getting people who are very, very young, in some cases still in college or high school. What is it that clicks for you when you’re talking to someone and makes you think, I want this person working with us on these research problems?

Aidan: It’s when you talk to someone and they just dazzle you, they have so many new ideas and they think about things in a way that many established researchers just can’t because they haven’t been polluted by the context of thousands and thousands of papers. Really, the number one thing we look for is creativity. Our team is so exceptionally creative, and every day, I feel really lucky to get to go in and talk about really radical solutions to some of the big problems in AI with people and dream up a very different future.

Ben: Probably the number one signal that I’m personally looking for is just, do they teach me something new when I spend time with them? If they teach me something new, the odds that they’re going to teach us something new about what we’re working on are also pretty good. When you’re doing research, those creative, new ideas are really the priority.

Part of my background is that during my undergrad and PhD, I helped start this incubator called Prod that worked with a bunch of companies that turned out well. And I think one of the things that we saw from that was that young people can absolutely compete in the very highest echelons of industry. Frankly, a big part of the unlock is just realizing, yeah, I can go do this stuff. You can absolutely go contribute at the highest level.

Of course, we do recognize the value of experience. People who have worked on large-scale systems are great; we’ve hired some of them, and we are excited to work with all sorts of folks. And I think our mission has resonated with the experienced folks as well. I just think that our key thing is that we want people who are not afraid to change the paradigm and can try to imagine a new system of how things might work.

One of the things I’ve been puzzling over is, how different do you think the resulting AI systems are going to be? It’s easy for me to imagine something like Claude Opus that just works 20% better and can do 20% more things. But if it’s something completely new, it’s hard to think about where that goes or what the end result looks like.

Asher: I don’t know if you’ve ever had the privilege of talking to the GPT-4 base model, but it had a lot of really strange emergent capabilities. For example, you could take a snippet of an unpublished blog post of yours, ask it who it thinks wrote this, and it could identify the author.

There’s a lot of capabilities like this, where models are smart in ways we cannot fathom. And future models will be smarter in even stranger ways. I think we should expect the future to be really weird and the architectures to be even weirder. We’re looking for 1000x wins in data efficiency. We’re not trying to make incremental change. And so we should expect the same kind of unknowable, alien changes and capabilities at the limit.

Ben: I broadly agree with that. I’m probably slightly more tempered in how these things will eventually become experienced by the world, just as the GPT-4 base model was tempered by OpenAI. You want to put things in forms where you’re not staring into the abyss as a consumer. I think that’s important. But I broadly agree that our research agenda is about building capabilities that really are quite fundamentally different from what can be done right now.

Fantastic! Are there ways people can engage with Flapping Airplanes? Is it too early for that? Or should they just stay tuned for when the research and the models come out?

Asher: So, we have Hi@flappingairplanes.com if you just want to say hi. We also have disagree@flappingairplanes.com if you want to disagree with us. We’ve actually had some really cool conversations where people, like, send us very long essays about why they think it’s impossible to do what we’re doing. And we’re happy to engage with it.

Ben: But they haven’t convinced us yet. No one has convinced us yet.

Asher: The second thing is, you know, we are looking for exceptional people who are trying to change the field and change the world. So if you’re interested, you should reach out.

Ben: And if you have an unorthodox background, that’s okay. You don’t need two PhDs. We really are looking for folks who think differently.

African defensetech Terra Industries, founded by two Gen Zers, raises additional $22M in a month

Just one month after raising $11.75 million in a round led by Joe Lonsdale’s 8VC, African defensetech Terra Industries announced that it’s raised an additional $22 million in funding, led by Lux Capital.

Nathan Nwachuku, 22, and Maxwell Maduka, 24, launched Terra Industries in 2024 to design infrastructure and autonomous systems to help African nations monitor and respond to threats. 

Terrorism remains one of the biggest threats in Africa, but much of the security intelligence on which its nations rely comes from Russia, China, or the West. In January, CEO Nwachuku said his goal was to build “Africa’s first defense prime, to build autonomous defense systems and other systems to protect our critical infrastructure and resources from armed attacks.”

At the time, Terra had just won its first federal contract. The company has government and commercial clients, and Nwachuku said Terra had already generated more than $2.5 million in commercial revenue and was protecting assets valued at around $11 billion. 

He said this extension round came together fast due to “strong momentum.” Other investors in the round include 8VC, Nova Global, and Resilience17 Capital, which was founded by Flutterwave CEO Olugbenga Agboola. Nwachuku said investors saw “faster-than-expected traction” on deals and partnerships, which created urgency to preempt and increase their commitment. The round came together in just under two weeks, bringing the company’s total funding to $34 million.

The extended raise is not that surprising. After all, building a defense company is not cheap. For comparison, Anduril has raised more than $2.5 billion in funding; Shield AI has raised around $1 billion in equity; drone maker Skydio has raised around $740 million; and naval autonomous vessel maker Saronic has raised around $830 million.

Since January, Nwachuku said the company has started expanding into other African nations yet to be announced (Terra is based in Nigeria), and has secured more government and commercial contracts, including with AIC Steel, with more to be revealed this year. 

The partnership with AIC Steel lets Terra establish a joint manufacturing facility in Saudi Arabia focused on building surveillance infrastructure and security systems. “It’s our first major manufacturing expansion outside Africa,” he said.

“The priority is working with countries where terrorism and infrastructure security are major national concerns,” Nwachuku added, citing those in sub-Saharan Africa and the Sahel region in particular. He said many of these countries have not only lost billions in infrastructure, but also thousands of lives over the past few decades.

“We’re focused on targeting major economies where the need for infrastructure security is urgent and where our solutions can make a meaningful impact. That’s how we think about expansion.” 

All the important news from the ongoing India AI Impact Summit

With an eye towards luring more AI investment to the country, India is hosting a four-day AI Impact Summit this week that will be attended by executives from major AI labs and Big Tech, including OpenAI, Anthropic, Nvidia, Microsoft, Google, and Cloudflare, as well as heads of state.

The event, which expects 250,000 visitors, will see Alphabet CEO Sundar Pichai, OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, Reliance Chairman Mukesh Ambani, and Google DeepMind CEO Demis Hassabis in attendance.

India’s prime minister, Narendra Modi, is scheduled to deliver a speech with French President Emmanuel Macron on Thursday.

Here are all the key updates from the event:

  • India earmarks $1.1 billion for its state-backed venture capital fund. The fund will invest in artificial intelligence and advanced manufacturing startups across the country.
  • OpenAI CEO Sam Altman said India accounts for more than 100 million weekly active ChatGPT users, second only to the U.S. He also said India accounts for the most students using ChatGPT.
  • Blackstone has picked up a majority stake in Indian AI startup Neysa as part of a $600 million equity fundraise. Teachers’ Venture Growth, TVS Capital, 360 ONE Asset, and Nexus Venture Partners also invested. The company now plans to raise another $600 million in debt, and deploy more than 20,000 GPUs.
  • Bengaluru-based C2i, which is building a power solution for data centers, raised $15 million in a Series A round from Peak XV, with participation from Yali Deeptech and TDK Ventures.
  • HCL CEO Vineet Nayyar said Indian IT companies will focus on turning profits rather than being job creators. The comments come as Indian IT stocks dip amid growing fears that AI will disrupt the IT services sector.
  • Vinod Khosla, founder of Khosla Ventures, said that industries like IT services and BPOs (Business Process Outsourcing) can “almost completely disappear” within five years because of AI. He told Hindustan Times that 250 million young people in India should be selling AI-based products and services to the rest of the world.
  • AMD is teaming up with Tata Consultancy Services (TCS) to develop rack-scale AI infrastructure based on AMD’s “Helios” platform.
  • Anthropic said that it is opening its first office in India in the city of Bengaluru. The company said that the country is the second-biggest user of Claude after the U.S.

Fractal Analytics’ muted IPO debut signals persistent AI fears in India

As India’s first AI company to IPO, Fractal Analytics didn’t have a stellar first day on the public markets, as enthusiasm for the technology collided with jittery investors recovering from a major sell-off in Indian software stocks.

Fractal listed at ₹876 per share on Monday, below its issue price of ₹900, and then slid further in afternoon trading. The stock closed at ₹873.70, down about 3% from its issue price, giving the company a market capitalization of about ₹148.1 billion (around $1.6 billion).

That price tag marks a step down from Fractal’s recent private-market highs. In July 2025, the company raised about $170 million in a secondary sale, at a valuation of $2.4 billion. It first crossed the $1 billion mark in January 2022 after raising $360 million from TPG, becoming India’s first AI unicorn.

Fractal’s IPO comes as India seeks to position itself as a key market and development hub for AI in a bid to attract investment amid increasing attention from some of the world’s most prominent AI companies. Firms such as OpenAI and Anthropic have been engaging more with the country’s government, enterprises, and developer ecosystem as they seek to tap the country’s scale, talent base, and growing appetite for AI tools and technology.

That push is on display this week in New Delhi, where India is hosting the AI Impact Summit, bringing together global technology leaders, policymakers and executives.

Fractal’s subdued debut followed a sharp recalibration of its IPO. In early February, the company decided to price the offering conservatively on the advice of its bankers, cutting the IPO size by more than 40%, to ₹28.34 billion (about $312.5 million) from the original ₹49 billion ($540.3 million).

Founded in 2000, Fractal sells AI and data analytics software to large enterprises across financial services, retail and healthcare, and generates the bulk of its revenue from overseas markets, including the U.S. The company pivoted toward AI in 2022 after operating as a traditional data analytics firm for over 20 years.

Fractal touted a steadily growing business in its IPO filing, with revenue from operations rising 26% to ₹27.65 billion (around $304.8 million) in the year ended March 2025 compared to a year earlier. It also swung to a net profit of ₹2.21 billion ($24.3 million) from a loss of ₹547 million ($6 million) the previous year.

The company plans to use the IPO proceeds to repay borrowings at its U.S. subsidiary, invest in R&D, sales and marketing under its Fractal Alpha unit, expand office infrastructure in India, and pursue potential acquisitions.
