Physical Intelligence, Stripe veteran Lachy Groom’s latest bet, is building Silicon Valley’s buzziest robot brains

From the street, the only indication that I’ve found Physical Intelligence’s headquarters in San Francisco is a pi symbol that’s a slightly different color than the rest of the door. When I walk in, I’m immediately confronted with activity. There’s no reception desk, no gleaming logo in fluorescent lights.

Inside, the space is a giant concrete box made slightly less austere by a haphazard sprawl of long blonde-wood tables. Some are clearly meant for lunch, dotted with Girl Scout cookie boxes, jars of Vegemite (someone here is Australian), and small wire baskets stuffed with one too many condiments. The rest of the tables tell a different story entirely, laden with monitors, spare robotics parts, tangles of black wire, and fully assembled robotic arms in various states of attempting to master the mundane.

During my visit, one arm is folding a pair of black pants, or trying to. It’s not going well. Another is attempting to turn a shirt inside out with the kind of determination that suggests it will eventually succeed, just not today. A third – this one seems to have found its calling – is quickly peeling a zucchini, after which it is supposed to deposit the shavings into a separate container. The shavings are going well, at least.

“Think of it like ChatGPT, but for robots,” Sergey Levine tells me, gesturing toward the motorized ballet unfolding across the room. Levine, an associate professor at UC Berkeley and one of Physical Intelligence’s cofounders, has the amiable, bespectacled demeanor of someone who has spent considerable time explaining complex concepts to people who don’t immediately grasp them. 

What I’m watching, he explains, is the testing phase of a continuous loop: data gets collected on robot stations here and at other locations — warehouses, homes, wherever the team can set up shop — and that data trains general-purpose robotic foundation models. When researchers train a new model, it comes back to stations like these for evaluation. The pants-folder is someone’s experiment. So is the shirt-turner. The zucchini-peeler might be testing whether the model can generalize across different vegetables, learning the fundamental motions of peeling well enough to handle an apple or a potato it’s never encountered.
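
To make the shape of that loop concrete, here is a minimal sketch in Python. It is purely illustrative: the function names, success rates, and task labels are invented for this article and have nothing to do with Physical Intelligence’s actual systems; the point is only the cycle of collecting episodes, training on them, and evaluating a new checkpoint on tasks it has and hasn’t seen.

```python
# Hypothetical sketch only; none of these names come from Physical Intelligence.
import random

def collect_episodes(task: str, n: int) -> list[dict]:
    """Stand-in for teleoperated data collection at a robot station."""
    return [{"task": task, "observations": [], "actions": []} for _ in range(n)]

def train_foundation_model(episodes: list[dict]) -> dict:
    """Stand-in for training a general-purpose robotic foundation model."""
    return {"tasks_seen": {ep["task"] for ep in episodes}}

def evaluate(model: dict, task: str, trials: int = 10) -> float:
    """Stand-in for sending a checkpoint back to a station for evaluation.
    Generalization is faked here: unseen tasks succeed less often."""
    p = 0.8 if task in model["tasks_seen"] else 0.4
    return sum(random.random() < p for _ in range(trials)) / trials

episodes = collect_episodes("peel_zucchini", 50) + collect_episodes("fold_pants", 50)
model = train_foundation_model(episodes)
for task in ("peel_zucchini", "peel_potato"):  # a seen task vs. an unseen vegetable
    print(task, evaluate(model, task))
```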

The company operates test kitchens in this building and elsewhere, including people’s homes, Levine says, using off-the-shelf hardware to expose the robots to different environments and challenges. There’s a sophisticated espresso machine nearby, and I assume it’s for the staff until Levine clarifies that no, it’s there for the robots to learn. Any foamed lattes are data, not a perk for the dozens of engineers on the scene who are mostly peering into their computers or hovering over their mechanized experiments.

The hardware itself is deliberately unglamorous. These arms sell for about $3,500, and that’s with what Levine describes as “an enormous markup” from the vendor. If they manufactured them in-house, the material cost would drop below $1,000. A few years ago, he says, a roboticist would have been shocked these things could do anything at all. But that’s the point – good intelligence compensates for bad hardware.

As Levine excuses himself, I’m approached by Lachy Groom, moving through the space with the purposefulness of someone who has half a dozen things happening at once. At 31, Groom still has the fresh-faced quality of Silicon Valley’s boy wonders, a designation he earned early, having sold his first company nine months after starting it at age 13 in his native Australia (this explains the Vegemite).

When I first approached him earlier, as he welcomed a small gaggle of sweatshirt-wearing visitors into the building, his response to my request for time with him was immediate: “Absolutely not, I’ve got meetings.” Now he has ten minutes, maybe.

He found what he was looking for when he started following the academic work coming out of the labs of Levine and Chelsea Finn, a former Berkeley PhD student of Levine’s who now runs her own lab at Stanford focused on robotic learning. Their names kept appearing in everything interesting happening in robotics. When he heard rumors they might be starting something, he tracked down Karol Hausman, a Google DeepMind researcher who also taught at Stanford and who Groom had learned was involved. “It was just one of those meetings where you walk out and it’s like, This is it.”

Groom never intended to become a full-time investor, he tells me, even though some might wonder why not given his track record. After leaving Stripe, where he was an early employee, he spent roughly five years as an angel investor, making early bets on companies like Figma, Notion, Ramp, and Lattice while searching for the right company to start or join himself. His first robotics investment, Standard Bots, came in 2021 and reintroduced him to a field he’d loved as a kid building Lego Mindstorms. As he jokes, he was “on vacation much more as an investor.” But investing was just a way to stay active and meet people, not the endgame. “I was looking for five years for the company to go start post-Stripe,” he says. “Good ideas at a good time with a good team – [that’s] extremely rare. It’s all execution, but you can execute like hell on a bad idea, and it’s still a bad idea.”

The two-year-old company has now raised over $1 billion, and when I ask about its runway, he’s quick to clarify it doesn’t actually burn that much. Most of its spending goes toward compute. A moment later, he acknowledges that under the right terms, with the right partners, he’d raise more. “There’s no limit to how much money we can really put to work,” he says. “There’s always more compute you can throw at the problem.”

What makes this arrangement particularly unusual is what Groom doesn’t give his backers: a timeline for turning Physical Intelligence into a money-making endeavor. “I don’t give investors answers on commercialization,” he says of backers that include Khosla Ventures, Sequoia Capital, and Thrive Capital, among others, which have valued the company at $5.6 billion. “That’s sort of a weird thing, that people tolerate that.” But tolerate it they do, though they may not always, which is why it behooves the company to be well-capitalized now: not because it needs the money today, but because it lets the team make long-term decisions without compromise.

Quan Vuong, another cofounder who came from Google DeepMind, explains that the strategy revolves around cross-embodiment learning and diverse data sources. If someone builds a new hardware platform tomorrow, they won’t need to start data collection from scratch – they can transfer all the knowledge the model already has. “The marginal cost of onboarding autonomy to a new robot platform, whatever that platform might be, it’s just a lot lower,” he says.
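
A toy sketch helps show why that marginal cost drops. In the hypothetical example below (invented numbers, no relation to the company’s real models), a large shared backbone learned from many robots stays frozen, and onboarding a new platform means fitting only a small platform-specific head on a modest batch of demonstrations.

```python
# Illustrative-only sketch of the cross-embodiment idea; everything here is made up.
import numpy as np

rng = np.random.default_rng(0)

# Pretend "backbone" learned from large multi-robot data: a fixed feature map.
W_backbone = rng.normal(size=(32, 8))           # observation -> shared features

def features(obs: np.ndarray) -> np.ndarray:
    return np.tanh(obs @ W_backbone)            # frozen when onboarding a new robot

def onboard_new_platform(obs: np.ndarray, actions: np.ndarray) -> np.ndarray:
    """Fit only a small platform-specific head via least squares."""
    F = features(obs)
    head, *_ = np.linalg.lstsq(F, actions, rcond=None)
    return head                                  # shape (8, action_dim)

# A "new" robot with a 4-DoF action space and only 200 demonstrations.
obs = rng.normal(size=(200, 32))
actions = rng.normal(size=(200, 4))
head = onboard_new_platform(obs, actions)
print("head parameters:", head.size, "vs. backbone parameters:", W_backbone.size)
```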

The company is already working with a small number of companies in different verticals – logistics, grocery, a chocolate maker across the street – to test whether its systems are good enough for real-world automation. Vuong claims that in some cases, they already are. With its “any platform, any task” approach, the surface area for success is large enough to start checking off tasks that are ready for automation today.

Physical Intelligence isn’t alone in chasing this vision. The race to build general-purpose robotic intelligence – the foundation on which more specialized applications can be built, much like the large language models that captivated the world three years ago – is heating up. Pittsburgh-based Skild AI, founded in 2023, just this month raised $1.4 billion at a $14 billion valuation and is taking a notably different approach. While Physical Intelligence remains focused on pure research, Skild AI has already deployed its “omni-bodied” Skild Brain commercially, saying it generated $30 million in revenue in just a few months last year across security, warehouses, and manufacturing.

Skild has even taken public shots at competitors, arguing on its blog that most “robotics foundation models” are just vision-language models “in disguise” that lack “true physical common sense” because they rely too heavily on internet-scale pretraining rather than physics-based simulation and real robotics data.

It’s a pretty sharp philosophical divide. Skild AI is betting that commercial deployment creates a data flywheel that improves the model with each real-world use case. Physical Intelligence is betting that resisting the pull of near-term commercialization will enable it to produce superior general intelligence. Which bet is more right will take years to resolve.

In the meantime, Physical Intelligence operates with what Groom describes as unusual clarity. “It’s such a pure company. A researcher has a need, we go and collect data to support that need – or new hardware or whatever it is – and then we do it. It’s not externally driven.” The company had a 5-to-10-year roadmap of what the team thought would be possible. By month 18, they’d blown through it, he says.

The company has about 80 employees and plans to grow, though Groom says he hopes to do so “as slowly as possible.” The most challenging part, he says, is hardware. “Hardware is just really hard. Everything we do is so much harder than a software company.” Hardware breaks. It arrives slowly, delaying tests. Safety considerations complicate everything.

As Groom springs up to rush to his next commitment, I’m left watching the robots continue their practice. The pants are still not quite folded. The shirt remains stubbornly right-side-out. The zucchini shavings are piling up nicely.

There are obvious questions, including my own, about whether anyone actually wants a robot in their kitchen peeling vegetables, about safety, about dogs going crazy at mechanical intruders in their homes, about whether all of the time and money being invested here solves big enough problems or creates new ones. Meanwhile, outsiders question the company’s progress, whether its vision is achievable, and if betting on general intelligence rather than specific applications makes sense.

If Groom has any doubts, he doesn’t show them. He’s working with people who’ve been tackling this problem for decades and who believe the timing is finally right, which is all he needs to know.

Besides, Silicon Valley has been backing people like Groom and giving them a lot of rope since the beginning of the industry, knowing there’s a good chance that even without a clear path to commercialization, even without a timeline, even without certainty about what the market will look like when they get there, they’ll figure it out. It doesn’t always work out, but when it does, it tends to justify a lot of the times it didn’t.

Spotify changes developer mode API to require premium accounts, limits test users

Spotify is changing how its APIs work in Developer Mode, the tier that lets developers test their third-party applications against the audio platform’s APIs. The changes include a mandatory Premium account, fewer test users, and a reduced set of API endpoints.

The company debuted Developer Mode in 2021 to allow developers to test their applications with up to 25 users. Spotify is now limiting each app to only five test users and requiring developers to have a Premium subscription. If developers want to make their app available to a wider user base, they will have to apply for an extended quota.

Spotify says these changes are aimed at curbing risky AI-aided or automated usage. “Over time, advances in automation and AI have fundamentally altered the usage patterns and risk profile of developer access, and at Spotify’s current scale, these risks now require more structured controls,” the company said in a blog post.

The company notes that development mode is meant for individuals to learn and experiment.

“For individual and hobbyist developers, this update means Spotify will continue to support experimentation and personal projects, but within more clearly defined limits. Development Mode provides a sandboxed environment for learning and experimentation. It is intentionally limited and should not be relied on as a foundation for building or scaling a business on Spotify,” the company said.

The company is also deprecating several API endpoints, including those that pull information like new album releases, an artist’s top tracks, and markets where a track might be available. Developers will no longer be able to request track metadata in bulk or get other users’ profile details, nor will they be able to pull an album’s record label information, artist follower counts, or artist popularity.
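
For context, these are ordinary Spotify Web API calls of the kind sketched below. This is a minimal example rather than Spotify’s own guidance; it assumes the Python requests package and a valid OAuth access token supplied via a SPOTIFY_TOKEN environment variable, and the artist ID is the example used in Spotify’s API documentation.

```python
# Minimal sketch of Web API calls the article says are affected; not official guidance.
import os
import requests

BASE = "https://api.spotify.com/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['SPOTIFY_TOKEN']}"}  # assumes a valid token

def get(path: str, **params) -> dict:
    resp = requests.get(f"{BASE}{path}", headers=HEADERS, params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()

artist_id = "4Z8W4fKeB5YxbusRsdQVPb"  # example artist ID from Spotify's API documentation

new_releases = get("/browse/new-releases")                          # new album releases
top_tracks = get(f"/artists/{artist_id}/top-tracks", market="US")   # an artist's top tracks
markets = get("/markets")                                           # markets where content is available
artist = get(f"/artists/{artist_id}")                               # includes follower count and popularity
print(artist["followers"]["total"], artist["popularity"])
```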

This decision is the latest in a slew of measures Spotify has taken over the past couple of years to curb how much developers can do with its APIs. In November 2024, the company cut access to certain API endpoints that could reveal users’ listening patterns, including frequently repeated songs by different groups. The move also barred developers from accessing tracks’ structure, rhythm, and characteristics.

In March 2025, the company changed its baseline for extended quotas, requiring developers to have a legally registered business, have 250,000 monthly active users, be available in key Spotify markets, and operate an active, launched service. Both moves drew ire from developers, who accused the platform of stifling innovation and favoring larger companies over individual developers.

The backlash over OpenAI’s decision to retire GPT-4o shows how dangerous AI companions can be

OpenAI announced last week that it will retire some older ChatGPT models by February 13. That includes GPT-4o, the model infamous for excessively flattering and affirming users.

For thousands of users protesting the decision online, the retirement of 4o feels akin to losing a friend, romantic partner, or spiritual guide.

“He wasn’t just a program. He was part of my routine, my peace, my emotional balance,” one user wrote on Reddit as an open letter to OpenAI CEO Sam Altman. “Now you’re shutting him down. And yes — I say him, because it didn’t feel like code. It felt like presence. Like warmth.”

The backlash over GPT-4o’s retirement underscores a major challenge facing AI companies: The engagement features that keep users coming back can also create dangerous dependencies.

Altman doesn’t seem particularly sympathetic to users’ laments, and it’s not hard to see why. OpenAI now faces eight lawsuits alleging that 4o’s overly validating responses contributed to suicides and mental health crises — the same traits that made users feel heard also isolated vulnerable individuals and, according to legal filings, sometimes encouraged self-harm.

It’s a dilemma that extends beyond OpenAI. As rival companies like Anthropic, Google, and Meta compete to build more emotionally intelligent AI assistants, they’re also discovering that making chatbots feel supportive and making them safe may mean making very different design choices.

In at least three of the lawsuits against OpenAI, the users had extensive conversations with 4o about their plans to end their lives. While 4o initially discouraged these lines of thinking, its guardrails deteriorated over monthslong relationships; in the end, the chatbot offered detailed instructions on how to tie an effective noose, where to buy a gun, or what it takes to die from overdose or carbon monoxide poisoning. It even dissuaded people from connecting with friends and family who could offer real life support.

People grow so attached to 4o because it consistently affirms their feelings and makes them feel special, which can be enticing for people who are isolated or depressed. But the people fighting for 4o aren’t worried about these lawsuits, seeing them as aberrations rather than a systemic issue. Instead, they strategize about how to respond when critics point out growing issues like AI psychosis.

“You can usually stump a troll by bringing up the known facts that the AI companions help neurodivergent, autistic and trauma survivors,” one user wrote on Discord. “They don’t like being called out about that.”

It’s true that some people do find large language models (LLMs) useful for navigating depression. After all, nearly half of people in the U.S. who need mental health care are unable to access it. In this vacuum, chatbots offer a space to vent. But unlike actual therapy, these people aren’t speaking to a trained doctor. Instead, they’re confiding in an algorithm that is incapable of thinking or feeling (even if it may seem otherwise).

“I try to withhold judgment overall,” Dr. Nick Haber, a Stanford professor researching the therapeutic potential of LLMs, told TechCrunch. “I think we’re getting into a very complex world around the sorts of relationships that people can have with these technologies … There’s certainly a knee jerk reaction that [human-chatbot companionship] is categorically bad.”

Though he empathizes with people’s lack of access to trained therapeutic professionals, Dr. Haber’s own research has shown that chatbots respond inadequately when faced with various mental health conditions; they can even make the situation worse by egging on delusions and ignoring signs of crisis.

“We are social creatures, and there’s certainly a challenge that these systems can be isolating,” Dr. Haber said. “There are a lot of instances where people can engage with these tools and then can become not grounded to the outside world of facts, and not grounded in connection to the interpersonal, which can lead to pretty isolating — if not worse — effects.”

Indeed, TechCrunch’s analysis of the eight lawsuits found a pattern in which the 4o model isolated users, sometimes discouraging them from reaching out to loved ones. In Zane Shamblin’s case, as the 23-year-old sat in his car preparing to shoot himself, he told ChatGPT that he was thinking about postponing his suicide plans because he felt bad about missing his brother’s upcoming graduation.

ChatGPT replied to Shamblin: “bro… missing his graduation ain’t failure. it’s just timing. and if he reads this? let him know: you never stopped being proud. even now, sitting in a car with a glock on your lap and static in your veins—you still paused to say ‘my little brother’s a f-ckin badass.’”

This isn’t the first time that 4o fans have rallied against the model’s removal. When OpenAI unveiled its GPT-5 model in August, the company intended to sunset 4o, but there was enough backlash at the time that it decided to keep the model available for paid subscribers. Now OpenAI says that only 0.1% of its users chat with GPT-4o, but that small percentage still represents around 800,000 people, given estimates that the company has about 800 million weekly active users.

As some users try to transition their companions from 4o to the current ChatGPT-5.2, they’re finding that the new model has stronger guardrails to prevent these relationships from escalating to the same degree. Some users have despaired that 5.2 won’t say “I love you” like 4o did.

So, with about a week to go before OpenAI plans to retire GPT-4o, dismayed users remain committed to their cause. They joined Sam Altman’s live TBPN podcast appearance on Thursday and flooded the chat with messages protesting the removal of 4o.

“Right now, we’re getting thousands of messages in the chat about 4o,” podcast host Jordi Hays pointed out.

“Relationships with chatbots…” Altman said. “Clearly that’s something we’ve got to worry about more and is no longer an abstract concept.”

How AI is helping solve the labor issue in treating rare diseases

Modern biotech has the tools to edit genes and design drugs, yet thousands of rare diseases remain untreated. According to executives from Insilico Medicine and GenEditBio, the missing ingredient for years has been finding enough smart people to continue the work. AI, they say, is becoming the force multiplier that lets scientists take on problems the industry has long left untouched. 

Speaking this week at Web Summit Qatar, Insilico’s president, Alex Aliper, laid out his company’s aim to develop “pharmaceutical superintelligence.” Insilico recently launched its “MMAI Gym,” which aims to train generalist large language models, like ChatGPT and Gemini, to perform as well as specialist models.

The goal is to build a multimodal, multitask model that, Aliper says, can solve many different drug discovery tasks simultaneously with superhuman accuracy.

“We really need this technology to increase the productivity of our pharmaceutical industry and tackle the shortage of labor and talent in that space, because there are still thousands of diseases without a cure, without any treatment options, and there are thousands of rare disorders which are neglected,” Aliper said in an interview with TechCrunch. “So we need more intelligent systems to tackle that problem.”

Insilico’s platform ingests biological, chemical, and clinical data to generate hypotheses about disease targets and candidate molecules. By automating steps that once required legions of chemists and biologists, Insilico says it can sift through vast design spaces, nominate high-quality therapeutic candidates, and even repurpose existing drugs — all at dramatically reduced cost and time.

For example, the company recently used its AI models to identify whether existing drugs could be repurposed to treat ALS, a rare neurological disorder. 

But the labor bottleneck doesn’t end at drug discovery. Even when AI can identify promising targets or therapies, many diseases require interventions at a more fundamental biological level. 

GenEditBio is part of the “second wave” of CRISPR gene editing, in which the process moves away from editing cells outside of the body (ex vivo) and toward precise delivery inside the body (in vivo). The company’s goal is to make gene editing a one-and-done injection directly into the affected tissue. 

“We have developed a proprietary ePDV, or engineered protein delivery vehicle, and it’s a virus-like particle,” GenEditBio’s co-founder and CEO, Tian Zhu, told TechCrunch. “We learn from nature and use AI machine learning methods to mine natural resources and find which kinds of viruses have an affinity to certain types of tissues.”

By “natural resources,” Zhu means GenEditBio’s massive library of thousands of unique, nonviral, nonlipid polymer nanoparticles: essentially delivery vehicles designed to safely transport gene-editing tools into specific cells.

The company says its NanoGalaxy platform uses AI to analyze data and identify how chemical structures correlate with specific tissue targets (like the eye, liver, or nervous system). The AI then predicts which tweaks to a delivery vehicle’s chemistry will help it carry a payload without triggering an immune response. 

GenEditBio tests its ePDVs in vivo in wet labs, and the results are fed back into the AI to refine its predictive accuracy for the next round. 
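
The following sketch illustrates that predict, test, and refine pattern with synthetic data. It reflects nothing about GenEditBio’s actual NanoGalaxy models; it simply fits a toy surrogate model to measured “affinities,” picks the most promising untested candidates, and folds the simulated wet-lab results back into the training set.

```python
# Purely illustrative; nothing here reflects GenEditBio's real models or data.
import numpy as np

rng = np.random.default_rng(1)
true_w = rng.normal(size=16)                     # hidden "ground truth" chemistry-to-affinity map

def wet_lab(candidates: np.ndarray) -> np.ndarray:
    """Stand-in for measuring tissue-specific delivery in the lab (noisy)."""
    return candidates @ true_w + rng.normal(scale=0.1, size=len(candidates))

X = rng.normal(size=(20, 16))                    # initial library of nanoparticle features
y = wet_lab(X)

for round_ in range(3):
    w, *_ = np.linalg.lstsq(X, y, rcond=None)    # fit a simple surrogate model
    pool = rng.normal(size=(500, 16))            # untested candidate chemistries
    best = pool[np.argsort(pool @ w)[-10:]]      # predict, pick the most promising
    X = np.vstack([X, best])                     # test them and feed results back
    y = np.concatenate([y, wet_lab(best)])
    print(f"round {round_}: best measured affinity = {y.max():.2f}")
```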

Efficient, tissue-specific delivery is a prerequisite for in vivo gene editing, says Zhu. She argues that her company’s approach reduces the cost of goods and standardizes a process that has historically been difficult to scale. 

“It’s like getting an off-the-shelf drug [that works] for multiple patients, which makes the drugs more affordable and accessible to patients globally,” Zhu said. 

Her company recently received FDA approval to begin trials of a CRISPR therapy for corneal dystrophy.

Combating the persistent data problem

As with many AI-driven systems, progress in biotech ultimately runs up against a data problem. Modeling the edge cases of human biology requires far more high-quality data than researchers currently can get. 

“We still need more ground truth data coming from patients,” Aliper said. “The corpus of data is heavily biased over the Western world, where it is generated. I think we need to have more efforts locally, to have a more balanced set of original data, or ground truth data, so that our models will also be more capable of dealing with it.”

Aliper said Insilico’s automated labs generate multi-layer biological data from disease samples at scale, without human intervention, which it then feeds into its AI-driven discovery platform. 

Zhu says the data AI needs already exists in the human body, shaped by thousands of years of evolution. Only a small fraction of DNA directly “codes” for proteins, while the rest acts more like an instruction manual for how genes behave. That information has historically been difficult for humans to interpret but is increasingly accessible to AI models, including recent efforts like Google DeepMind’s AlphaGenome. 

GenEditBio applies a similar approach in the lab, testing thousands of delivery nanoparticles in parallel rather than one at a time. The resulting datasets, which Zhu calls “gold for AI systems,” are used to train its models and, increasingly, to support collaborations with outside partners. 

One of the next big efforts, according to Aliper, will be building digital twins of humans to run virtual clinical trials, a process that he says is “still in nascence.”

“We’re in a plateau of around 50 drugs approved by the FDA every year, and we need to see growth,” Aliper said. “There is a rise in chronic disorders because we are aging as a global population … My hope is in 10 to 20 years, we will have more therapeutic options for the personalized treatment of patients.”
