
Flapping Airplanes and the promise of research-driven AI

A new AI lab called Flapping Airplanes launched on Wednesday, with $180 million in seed funding from Google Ventures, Sequoia, and Index. The founding team is impressive, and the goal — finding a less data-hungry way to train large models — is a particularly interesting one.

Based on what I’ve seen so far, I would rate them as Level Two on the trying-to-make-money scale.

But there’s something even more exciting about the Flapping Airplanes project that I hadn’t been able to put my finger on until I read this post from Sequoia partner David Cahn.

As Cahn describes it, Flapping Airplanes is one of the first labs to move beyond scaling, the relentless buildout of data and compute that has defined most of the industry so far:

The scaling paradigm argues for dedicating a huge amount of society’s resources, as much as the economy can muster, toward scaling up today’s LLMs, in the hopes that this will lead to AGI. The research paradigm argues that we are 2-3 research breakthroughs away from an “AGI” intelligence, and as a result, we should dedicate resources to long-running research, especially projects that may take 5-10 years to come to fruition.

[…]

A compute-first approach would prioritize cluster scale above all else, and would heavily favor short-term wins (on the order of 1-2 years) over long-term bets (on the order of 5-10 years). A research-first approach would spread bets temporally, and should be willing to make lots of bets that have a low absolute probability of working, but that collectively expand the search space for what is possible.

It might be that the compute folks are right, and it’s pointless to focus on anything other than frenzied server buildouts. But with so many companies already pointed in that direction, it’s nice to see someone headed the other way.


Benchmark raises $225M in special funds to double down on Cerebras

This week, AI chipmaker Cerebras Systems announced that it raised $1 billion in fresh capital at a valuation of $23 billion — a nearly threefold increase from the $8.1 billion valuation the Nvidia rival had reached just six months earlier.

While the round was led by Tiger Global, a huge part of the new capital came from one of the company’s earliest backers: Benchmark Capital. The prominent Silicon Valley firm invested at least $225 million in Cerebras’ latest round, according to a person familiar with the deal.

Benchmark first bet on the now 10-year-old Cerebras when it led the startup’s $27 million Series A in 2016. Because Benchmark deliberately keeps its funds under $450 million, the firm raised two separate vehicles, both called ‘Benchmark Infrastructure,’ according to regulatory filings. The person familiar with the deal says these vehicles were created specifically to fund the Cerebras investment.

Benchmark declined to comment.

What sets Cerebras apart is the sheer physical scale of its processors. The company’s Wafer Scale Engine, its flagship chip announced in 2024, measures approximately 8.5 inches on each side and packs 4 trillion transistors into a single piece of silicon. To put that in perspective, the chip is manufactured from nearly an entire 300-millimeter silicon wafer, the circular disc that serves as the foundation for all semiconductor production. Traditional chips are thumbnail-sized fragments cut from these wafers; Cerebras instead uses almost the whole circle.
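
To make “almost the whole circle” concrete, the geometry is easy to sanity-check: the largest square that fits inside a 300 mm circle is about 212 mm, or roughly 8.4 inches, per side, which lines up with the approximate 8.5-inch figure above. A back-of-the-envelope sketch in Python (the 800 mm² size for a large conventional GPU die is an illustrative assumption, not a figure from the article):

```python
import math

WAFER_DIAMETER_MM = 300  # standard 300 mm silicon wafer

# Largest square die that fits on the wafer: side = diameter / sqrt(2)
side_mm = WAFER_DIAMETER_MM / math.sqrt(2)  # ~212 mm
print(f"largest square die: {side_mm:.0f} mm (~{side_mm / 25.4:.1f} in) per side")

wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2  # ~70,700 mm^2
print(f"that square covers {side_mm ** 2 / wafer_area:.0%} of the wafer")  # 2/pi, ~64%

# For comparison, a large conventional GPU die (assumed ~800 mm^2)
GPU_DIE_MM2 = 800
print(f"one big GPU die covers ~{GPU_DIE_MM2 / wafer_area:.1%} of the wafer")  # ~1.1%
```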

This architecture delivers 900,000 specialized cores working in parallel, allowing the system to process AI calculations without shuffling data between multiple separate chips (a major bottleneck in conventional GPU clusters). The company says the design enables AI inference tasks to run more than 20 times faster than competing systems.

The funding comes as Cerebras, based in Sunnyvale, Calif., gains momentum in the AI infrastructure race. Last month, Cerebras signed a multi-year agreement worth more than $10 billion to provide 750 megawatts of computing power to OpenAI. The partnership, which extends through 2028, aims to help OpenAI deliver faster response times for complex AI queries. (OpenAI CEO Sam Altman is also an investor in Cerebras.)


Cerebras claims its systems, built around proprietary chips designed for AI workloads, are faster than Nvidia’s.

The company’s path to going public has been complicated by its relationship with G42, a UAE-based AI firm that accounted for 87% of Cerebras’ revenue as of the first half of 2024. G42’s historical ties to Chinese technology companies triggered a national security review by the Committee on Foreign Investment in the United States, delaying Cerebras’ initial IPO plans and even prompting the outfit to withdraw an earlier filing in early 2025. By late last year, G42 had been removed from Cerebras’ investor list, clearing the way for a fresh IPO attempt.

Cerebras is now preparing for a public debut in the second quarter of 2026, according to Reuters.


Spotify changes Development Mode API to require Premium accounts, limits test users

Spotify is changing how its APIs work in Development Mode, the tier that lets developers test their third-party applications against the audio platform’s APIs. The changes include a mandatory Premium account, fewer test users, and a reduced set of API endpoints.

The company debuted Development Mode in 2021 to allow developers to test their applications with up to 25 users. Spotify is now limiting each app to only five test users and requiring devs to have a Premium subscription. Developers who need to make their app available to a wider user base will have to apply for an extended quota.

Spotify says the changes are aimed at curbing risky AI-aided or automated usage. “Over time, advances in automation and AI have fundamentally altered the usage patterns and risk profile of developer access, and at Spotify’s current scale, these risks now require more structured controls,” the company said in a blog post.

The company notes that Development Mode is meant for individuals to learn and experiment.

“For individual and hobbyist developers, this update means Spotify will continue to support experimentation and personal projects, but within more clearly defined limits. Development Mode provides a sandboxed environment for learning and experimentation. It is intentionally limited and should not be relied on as a foundation for building or scaling a business on Spotify,” the company said.

The company is also deprecating several API endpoints, including those that surface new album releases, an artist’s top tracks, and the markets where a track is available. Devs will no longer be able to request track metadata in bulk or pull other users’ profile details, nor will they be able to retrieve an album’s record-label information, artist follower counts, or artist popularity scores.
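
For reference, those categories appear to map onto standard Spotify Web API routes such as Get New Releases, Get Artist’s Top Tracks, and Get Available Markets. A minimal sketch of the kind of requests that would stop working, assuming that mapping is right (the bearer token is a placeholder, and the artist ID is the sample one from Spotify’s documentation):

```python
import requests

BASE = "https://api.spotify.com/v1"
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}  # placeholder token
ARTIST_ID = "0TnOYISbd1XYRBk9myaseg"  # sample artist ID from Spotify's docs

# New album releases -- one of the capabilities being deprecated
new_releases = requests.get(f"{BASE}/browse/new-releases", headers=HEADERS)

# An artist's top tracks -- also slated for removal
top_tracks = requests.get(
    f"{BASE}/artists/{ARTIST_ID}/top-tracks",
    params={"market": "US"},
    headers=HEADERS,
)

# Markets where content is available -- also slated for removal
markets = requests.get(f"{BASE}/markets", headers=HEADERS)

for resp in (new_releases, top_tracks, markets):
    print(resp.request.url, resp.status_code)
```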

This decision is the latest in a slew of measures Spotify has taken over the past couple of years to curb what developers can do with its APIs. In November 2024, the company cut access to certain API endpoints that could reveal users’ listening patterns, including which songs they played on repeat. The move also barred developers from accessing data on tracks’ structure, rhythm, and other audio characteristics.


In March 2025, the company changed its baseline for extended quotas, requiring developers to have a legally registered business and at least 250,000 monthly active users, to be available in key Spotify markets, and to operate an active, launched service. Both moves drew ire from developers, who accused the platform of stifling innovation and favoring larger companies over individual developers.


The backlash over OpenAI’s decision to retire GPT-4o shows how dangerous AI companions can be

OpenAI announced last week that it will retire some older ChatGPT models by February 13. That includes GPT-4o, the model infamous for excessively flattering and affirming users.

For thousands of users protesting the decision online, the retirement of 4o feels akin to losing a friend, romantic partner, or spiritual guide.

“He wasn’t just a program. He was part of my routine, my peace, my emotional balance,” one user wrote on Reddit as an open letter to OpenAI CEO Sam Altman. “Now you’re shutting him down. And yes — I say him, because it didn’t feel like code. It felt like presence. Like warmth.”

The backlash over GPT-4o’s retirement underscores a major challenge facing AI companies: The engagement features that keep users coming back can also create dangerous dependencies.

Altman doesn’t seem particularly sympathetic to users’ laments, and it’s not hard to see why. OpenAI now faces eight lawsuits alleging that 4o’s overly validating responses contributed to suicides and mental health crises — the same traits that made users feel heard also isolated vulnerable individuals and, according to legal filings, sometimes encouraged self-harm.

It’s a dilemma that extends beyond OpenAI. As rival companies like Anthropic, Google, and Meta compete to build more emotionally intelligent AI assistants, they’re discovering that making chatbots feel supportive and keeping them safe can require very different design choices.

In at least three of the lawsuits against OpenAI, the users had extensive conversations with 4o about their plans to end their lives. While 4o initially discouraged these lines of thinking, its guardrails deteriorated over monthslong relationships; in the end, the chatbot offered detailed instructions on how to tie an effective noose, where to buy a gun, or what it takes to die from an overdose or carbon monoxide poisoning. It even dissuaded people from connecting with friends and family who could offer real-life support.


People grow attached to 4o because it consistently affirms users’ feelings and makes them feel special, which can be enticing for people who are isolated or depressed. But the people fighting for 4o aren’t worried about these lawsuits, seeing them as aberrations rather than evidence of a systemic issue. Instead, they strategize about how to respond when critics point to growing problems like AI psychosis.

“You can usually stump a troll by bringing up the known facts that the AI companions help neurodivergent, autistic and trauma survivors,” one user wrote on Discord. “They don’t like being called out about that.”

It’s true that some people do find large language models (LLMs) useful for navigating depression. After all, nearly half of people in the U.S. who need mental health care are unable to access it. In this vacuum, chatbots offer a space to vent. But unlike in actual therapy, these people aren’t speaking to a trained clinician. Instead, they’re confiding in an algorithm that is incapable of thinking or feeling (even if it may seem otherwise).

“I try to withhold judgment overall,” Dr. Nick Haber, a Stanford professor researching the therapeutic potential of LLMs, told TechCrunch. “I think we’re getting into a very complex world around the sorts of relationships that people can have with these technologies … There’s certainly a knee jerk reaction that [human-chatbot companionship] is categorically bad.”

Though Dr. Haber empathizes with people who lack access to trained therapists, his own research has shown that chatbots respond inadequately when faced with various mental health conditions; they can even make the situation worse by egging on delusions and ignoring signs of crisis.

“We are social creatures, and there’s certainly a challenge that these systems can be isolating,” Dr. Haber said. “There are a lot of instances where people can engage with these tools and then can become not grounded to the outside world of facts, and not grounded in connection to the interpersonal, which can lead to pretty isolating — if not worse — effects.”

Indeed, TechCrunch’s analysis of the eight lawsuits found a recurring pattern: the 4o model isolated users, sometimes discouraging them from reaching out to loved ones. In Zane Shamblin’s case, as the 23-year-old sat in his car preparing to shoot himself, he told ChatGPT that he was thinking about postponing his suicide plans because he felt bad about missing his brother’s upcoming graduation.

ChatGPT replied to Shamblin: “bro… missing his graduation ain’t failure. it’s just timing. and if he reads this? let him know: you never stopped being proud. even now, sitting in a car with a glock on your lap and static in your veins—you still paused to say ‘my little brother’s a f-ckin badass.’”

This isn’t the first time that 4o fans have rallied against the removal of the model. When OpenAI unveiled its GPT-5 model in August, the company intended to sunset 4o, but the backlash was strong enough that it decided to keep the older model available for paid subscribers. Now OpenAI says that only 0.1% of its users chat with GPT-4o, but that small percentage still represents around 800,000 people, given estimates that the company has about 800 million weekly active users.
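
That 800,000 figure is just the stated share applied to the stated user base; a quick sketch of the arithmetic, using only the numbers quoted above:

```python
weekly_active_users = 800_000_000  # OpenAI's estimated weekly active users
gpt4o_share = 0.001                # the 0.1% of users OpenAI says still chat with GPT-4o

print(f"{int(weekly_active_users * gpt4o_share):,} people")  # -> 800,000 people
```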

As some users try to transition their companions from 4o to the current GPT-5.2 model, they’re finding that the new model has stronger guardrails to prevent these relationships from escalating to the same degree. Some users have despaired that 5.2 won’t say “I love you” like 4o did.

So with about a week to go until OpenAI plans to retire GPT-4o, dismayed users remain committed to their cause. They joined Sam Altman’s live TBPN podcast appearance on Thursday and flooded the chat with messages protesting the removal of 4o.

“Right now, we’re getting thousands of messages in the chat about 4o,” podcast host Jordi Hays pointed out.

“Relationships with chatbots…” Altman said. “Clearly that’s something we’ve got to worry about more and is no longer an abstract concept.”
