

A peek inside Physical Intelligence, the startup building Silicon Valley’s buzziest robot brains

From the street, the only indication that I’ve found Physical Intelligence’s headquarters in San Francisco is a pi symbol that’s a slightly different color than the rest of the door. When I walk in, I’m immediately confronted with activity. There’s no reception desk, no gleaming logo in fluorescent lights.

Inside, the space is a giant concrete box made slightly less austere by a haphazard sprawl of long blonde-wood tables. Some are clearly meant for lunch, dotted with Girl Scout cookie boxes, jars of Vegemite (someone here is Australian), and small wire baskets stuffed with one too many condiments. The rest of the tables tell a different story entirely. Many more of them are laden with monitors, spare robotics parts, tangles of black wire, and fully assembled robotic arms in various states of attempting to master the mundane.

During my visit, one arm is folding a pair of black pants, or trying to. It’s not going well. Another is attempting to turn a shirt inside out with the kind of determination that suggests it will eventually succeed, just not today. A third — this one seems to have found its calling — is quickly peeling a zucchini, after which it is supposed to deposit the shavings into a separate container. The shavings are going well, at least.

“Think of it like ChatGPT, but for robots,” Sergey Levine tells me, gesturing toward the motorized ballet unfolding across the room. Levine, an associate professor at UC Berkeley and one of Physical Intelligence’s co-founders, has the amiable, bespectacled demeanor of someone who has spent considerable time explaining complex concepts to people who don’t immediately grasp them. 

Image Credits: Connie Loizos for TechCrunch

What I’m watching, he explains, is the testing phase of a continuous loop: data gets collected on robot stations here and at other locations — warehouses, homes, wherever the team can set up shop — and that data trains general-purpose robotic foundation models. When researchers train a new model, it comes back to stations like these for evaluation. The pants-folder is someone’s experiment. So is the shirt-turner. The zucchini-peeler might be testing whether the model can generalize across different vegetables, learning the fundamental motions of peeling well enough to handle an apple or a potato it’s never encountered.
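
To make that loop concrete, here is a toy sketch of the cycle Levine describes. It is purely illustrative, not Physical Intelligence's code; every name in it (Station, FoundationModel, run_one_cycle) is a hypothetical stand-in.

```python
# A minimal, purely illustrative sketch of the collect -> train -> evaluate loop.
# Every name here is a hypothetical stand-in, not Physical Intelligence's API.
from dataclasses import dataclass, field
import random

@dataclass
class Station:
    task: str                                     # e.g. "fold pants", "peel zucchini"
    episodes: list = field(default_factory=list)  # demonstrations collected at this station

class FoundationModel:
    def fit(self, episodes):
        # Stand-in for a large-scale training run on data pooled across all tasks.
        self.episodes_seen = getattr(self, "episodes_seen", 0) + len(episodes)

    def success_rate(self, task):
        # Stand-in for sending the new checkpoint back to a station and scoring it.
        return random.random()

def run_one_cycle(model, stations):
    pooled = [ep for s in stations for ep in s.episodes]            # collect
    model.fit(pooled)                                               # train
    return {s.task: model.success_rate(s.task) for s in stations}   # evaluate

stations = [Station("fold pants"), Station("turn shirt inside out"), Station("peel zucchini")]
print(run_one_cycle(FoundationModel(), stations))
```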

The company also operates a test kitchen in this building and elsewhere using off-the-shelf hardware to expose the robots to different environments and challenges. There’s a sophisticated espresso machine nearby, and I assume it’s for the staff until Levine clarifies that no, it’s there for the robots to learn on. Any foamed lattes are data, not a perk for the dozens of engineers on the scene who are mostly peering into their computers or hovering over their mechanized experiments.

The hardware itself is deliberately unglamorous. These arms sell for about $3,500, and that’s with what Levine describes as “an enormous markup” from the vendor. If they manufactured them in-house, the material cost would drop below $1,000. A few years ago, he says, a roboticist would have been shocked these things could do anything at all. But that’s the point — good intelligence compensates for bad hardware.


As Levine excuses himself, I’m approached by Lachy Groom, moving through the space with the purposefulness of someone who has half a dozen things happening at once. At 31, Groom still has the fresh-faced quality of Silicon Valley’s boy wonder, a designation he earned early, having sold his first company nine months after starting it at age 13 in his native Australia (this explains the Vegemite).

When I first approached him earlier, as he welcomed a small gaggle of sweatshirt-wearing visitors into the building, his response to my request for time with him was immediate: “Absolutely not, I’ve got meetings.” Now he has 10 minutes, maybe.

Groom found what he was looking for when he started following the academic work coming out of the labs of Levine and Chelsea Finn, a former Berkeley PhD student of Levine’s who now runs her own lab at Stanford focused on robotic learning. Their names kept appearing in everything interesting happening in robotics. When he heard rumors they might be starting something, he tracked down Karol Hausman, a Google DeepMind researcher who also taught at Stanford and who Groom had learned was involved. “It was just one of those meetings where you walk out and it’s like, This is it.”

Groom never intended to become a full-time investor, he tells me, even though some might wonder why not given his track record. After leaving Stripe, where he was an early employee, he spent roughly five years as an angel investor, making early bets on companies like Figma, Notion, Ramp, and Lattice while searching for the right company to start or join himself. His first robotics investment, Standard Bots, came in 2021 and reintroduced him to a field he’d loved as a kid building Lego Mindstorms. As he jokes, he was “on vacation much more as an investor.” But investing was just a way to stay active and meet people, not the endgame. “I was looking for five years for the company to go start post-Stripe,” he says. “Good ideas at a good time with a good team — [that’s] extremely rare. It’s all execution, but you can execute like hell on a bad idea, and it’s still a bad idea.”

Image Credits: Connie Loizos for TechCrunch

The two-year-old company has now raised over $1 billion, and when I ask about its runway, he’s quick to clarify it doesn’t actually burn that much. Most of its spending goes toward compute. A moment later, he acknowledges that under the right terms, with the right partners, he’d raise more. “There’s no limit to how much money we can really put to work,” he says. “There’s always more compute you can throw at the problem.”

What makes this arrangement particularly unusual is what Groom doesn’t give his backers: a timeline for turning Physical Intelligence into a money-making endeavor. “I don’t give investors answers on commercialization,” he says of backers that include Khosla Ventures, Sequoia Capital, and Thrive Capital, among others, which have valued the company at $5.6 billion. “That’s sort of a weird thing, that people tolerate that.” But tolerate it they do, and they may not always, which is why it behooves the company to be well-capitalized now.

So what’s the strategy, if not commercialization? Quan Vuong, another co-founder who came from Google DeepMind, explains that it revolves around cross-embodiment learning and diverse data sources. If someone builds a new hardware platform tomorrow, they won’t need to start data collection from scratch — they can transfer all the knowledge the model already has. “The marginal cost of onboarding autonomy to a new robot platform, whatever that platform might be, it’s just a lot lower,” he says.
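
The economics behind that claim can be made concrete with a toy calculation. The numbers below are invented for illustration and are not Physical Intelligence's figures; the point is only that transfer turns onboarding into a short adaptation run rather than a from-scratch data collection effort.

```python
# Toy illustration of why cross-embodiment transfer lowers the marginal cost of
# onboarding a new robot platform. All numbers are invented for this sketch.
EPISODES_FROM_SCRATCH = 250_000   # hypothetical: full data collection for one new platform
EPISODES_TO_FINE_TUNE = 2_000     # hypothetical: adaptation data when knowledge transfers

def onboarding_cost(episodes_needed, cost_per_episode=1.50):
    """Rough marginal cost of bringing autonomy to a new platform."""
    return episodes_needed * cost_per_episode

print(f"from scratch:  ${onboarding_cost(EPISODES_FROM_SCRATCH):>12,.0f}")
print(f"with transfer: ${onboarding_cost(EPISODES_TO_FINE_TUNE):>12,.0f}")
```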

The company is already working with a small number of companies in different verticals — logistics, grocery, a chocolate maker across the street — to test whether its systems are good enough for real-world automation. Vuong claims that in some cases, they already are. With its “any platform, any task” approach, the surface area for success is large enough to start checking off tasks that are ready for automation today.

Physical Intelligence isn’t alone in chasing this vision. The race to build general-purpose robotic intelligence — the foundation on which more specialized applications can be built, much like the large language models that captivated the world three years ago — is heating up. Pittsburgh-based Skild AI, founded in 2023, just this month raised $1.4 billion at a $14 billion valuation and is taking a notably different approach. While Physical Intelligence remains focused on pure research, Skild AI has already deployed its “omni-bodied” Skild Brain commercially, saying it generated $30 million in revenue in just a few months last year across security, warehouses, and manufacturing.

Image Credits: Connie Loizos for TechCrunch

Skild has even taken public shots at competitors, arguing on its blog that most “robotics foundation models” are just vision-language models “in disguise” that lack “true physical common sense” because they rely too heavily on internet-scale pretraining rather than physics-based simulation and real robotics data.

It’s a pretty sharp philosophical divide. Skild AI is betting that commercial deployment creates a data flywheel that improves the model with each real-world use case. Physical Intelligence is betting that resisting the pull of near-term commercialization will enable it to produce superior general intelligence. Who’s “more right” will take years to resolve.

In the meantime, Physical Intelligence operates with what Groom describes as unusual clarity. “It’s such a pure company. A researcher has a need, we go and collect data to support that need — or new hardware or whatever it is — and then we do it. It’s not externally driven.” The company had a 5- to 10-year roadmap of what the team thought would be possible. By month 18, they’d blown through it, he says.

The company has about 80 employees and plans to grow, though Groom says hopefully “as slowly as possible.” What’s most challenging, he says, is hardware. “Hardware is just really hard. Everything we do is so much harder than a software company.” Hardware breaks. It arrives slowly, delaying tests. Safety considerations complicate everything.

As Groom springs up to rush to his next commitment, I’m left watching the robots continue their practice. The pants are still not quite folded. The shirt remains stubbornly right-side-out. The zucchini shavings are piling up nicely.

There are obvious questions, including my own, about whether anyone actually wants a robot in their kitchen peeling vegetables, about safety, about dogs going crazy at mechanical intruders in their homes, about whether all of the time and money being invested here solves big enough problems or creates new ones. Meanwhile, outsiders question the company’s progress, whether its vision is achievable, and if betting on general intelligence rather than specific applications makes sense.

If Groom has any doubts, he doesn’t show them. He’s working with people who’ve been working on this problem for decades and who believe the timing is finally right, which is all he needs to know.

Besides, Silicon Valley has been backing people like Groom and giving them a lot of rope since the beginning of the industry, knowing there’s a good chance that even without a clear path to commercialization, even without a timeline, even without certainty about what the market will look like when they get there, they’ll figure it out. It doesn’t always work out. But when it does, it tends to justify a lot of the times it didn’t.


ElevenLabs CEO: Voice is the next interface for AI

ElevenLabs co-founder and CEO Mati Staniszewski says voice is becoming the next major interface for AI – the way people will increasingly interact with machines as models move beyond text and screens.

Speaking at Web Summit in Doha, Staniszewski told TechCrunch voice models like those developed by ElevenLabs have recently moved beyond simply mimicking human speech — including emotion and intonation — to working in tandem with the reasoning capabilities of large language models. The result, he argued, is a shift in how people interact with technology. 

In the years ahead, he said, “hopefully all our phones will go back in our pockets, and we can immerse ourselves in the real world around us, with voice as the mechanism that controls technology.”

That vision fueled ElevenLabs’s $500 million raise this week at an $11 billion valuation, and it is increasingly shared across the AI industry. OpenAI and Google have both made voice a central focus of their next-generation models, while Apple appears to be quietly building voice-adjacent, always-on technologies through acquisitions like Q.ai. As AI spreads into wearables, cars, and other new hardware, control is becoming less about tapping screens and more about speaking, making voice a key battleground for the next phase of AI development. 

Iconiq Capital general partner Seth Pierrepont echoed that view onstage at Web Summit, arguing that while screens will continue to matter for gaming and entertainment, traditional input methods like keyboards are starting to feel “outdated.”

And as AI systems become more agentic, Pierrepont said, the interaction itself will also change, with models gaining guardrails, integrations, and context needed to respond with less explicit prompting from users. 

Staniszewski pointed to that agentic shift as one of the biggest changes underway. Rather than spelling out every instruction, he said future voice systems will increasingly rely on persistent memory and context built up over time, making interactions feel more natural and requiring less effort from users. 


That evolution, he added, will influence how voice models are deployed. While high-quality audio models have largely lived in the cloud, Staniszewski said ElevenLabs is working toward a hybrid approach that blends cloud and on-device processing — a move aimed at supporting new hardware, including headphones and other wearables, where voice becomes a constant companion rather than a feature you decide when to engage with. 
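
Staniszewski didn't detail how that split would be engineered, but a hybrid pipeline of this kind generally comes down to a routing decision: run a small on-device model when connectivity or latency demands it, and fall back to the larger cloud model otherwise. The sketch below is purely hypothetical and does not reflect ElevenLabs's actual architecture.

```python
# Purely hypothetical sketch of hybrid cloud / on-device routing for a voice
# model. None of this reflects ElevenLabs's actual design.
def run_on_device(audio: str) -> str:
    return f"[small local model] {audio}"      # stand-in for an on-device pass

def run_in_cloud(audio: str) -> str:
    return f"[large cloud model] {audio}"      # stand-in for a cloud round trip

def route(audio: str, latency_budget_ms: int, connected: bool) -> str:
    # Stay local when offline or when the interaction must feel instantaneous.
    if not connected or latency_budget_ms < 150:
        return run_on_device(audio)
    return run_in_cloud(audio)

print(route("wake phrase", latency_budget_ms=80, connected=True))       # stays on device
print(route("long dictation", latency_budget_ms=2000, connected=True))  # goes to the cloud
```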

ElevenLabs is already partnering with Meta to bring its voice technology to products, including Instagram and Horizon Worlds, the company’s virtual-reality platform. Staniszewski said he would also be open to working with Meta on its Ray-Ban smart glasses as voice-driven interfaces expand into new form factors. 

But as voice becomes more persistent and embedded in everyday hardware, it opens the door to serious concerns around privacy, surveillance, and how much personal data voice-based systems will store as they move closer to users’ daily lives — something companies like Google have already been accused of abusing.


Substack confirms data breach affects users’ email addresses and phone numbers

Newsletter platform Substack has confirmed a data breach in an email to users. The company said that in October, an “unauthorized third party” accessed user data, including email addresses, phone numbers, and other unspecified “internal metadata.”

Substack specified that more sensitive data, such as credit card numbers, passwords, and other financial information, was unaffected.

In the email, Substack chief executive Chris Best said the company identified in February the issue that allowed someone to access its systems. Best said that Substack has fixed the problem and started an investigation.

“I’m reaching out to let you know about a security incident that resulted in the email address and phone number from your Substack account being shared without your permission,” said Best in the email to users. “I’m incredibly sorry this happened. We take our responsibility to protect your data and your privacy seriously, and we came up short here.”

It’s not clear exactly what the issue was with Substack’s systems, or the full scope of the data that was accessed. It’s also not yet known why the company took five months to detect the breach, or if it was contacted by hackers demanding a ransom. TechCrunch asked the company for more details, and we will update our story if we hear back.

Substack did not say how many users are affected. The company said that it doesn’t have any evidence that users’ data is being misused, but did not say what technical means, such as logs, it has to detect evidence of abuse. However, the company asked users to be cautious about emails and texts, without offering any particular indicators or direction.

On its website, Substack says that its site has more than 50 million active subscriptions, including 5 million paid subscriptions — a milestone it reached last March. In July 2025, the company raised $100 million in Series C funding led by BOND and The Chernin Group (TCG), with participation from a16z, Klutch Sports Group CEO Rich Paul, and Skims co-founder Jens Grede.


Fundamental raises $255M Series A with a new take on big data analysis

An AI lab called Fundamental emerged from stealth on Thursday, offering a new foundation model to solve an old problem: how to draw insights from the huge quantities of structured data produced by enterprises. By combining the old systems of predictive AI with more contemporary tools, the company believes it can reshape how large enterprises analyze their data.

“While LLMs have been great at working with unstructured data, like text, audio, video, and code, they don’t work well with structured data like tables,” CEO Jeremy Fraenkel told TechCrunch. “With our model Nexus, we have built the best foundation model to handle that type of data.”

The idea has already drawn significant interest from investors. The company is emerging from stealth with $255 million in funding at a $1.2 billion valuation. The bulk of it comes from the recent $225 million Series A round led by Oak HC/FT, Valor Equity Partners, Battery Ventures, and Salesforce Ventures; Hetz Ventures also participated in the Series A, with angel funding from Perplexity CEO Aravind Srinivas, Brex co-founder Henrique Dubugras, and Datadog CEO Olivier Pomel.

Called a large tabular model (LTM) rather than a large language model (LLM), Fundamental’s Nexus breaks from contemporary AI practices in a number of significant ways. The model is deterministic — that is, it will give the same answer every time it is asked a given question — and doesn’t rely on the transformer architecture that defines models from most contemporary AI labs. Fundamental calls it a foundation model because it goes through the normal steps of pre-training and fine-tuning, but the result is something profoundly different from what a client would get when partnering with OpenAI or Anthropic.

Those differences are important because Fundamental is chasing a use case where contemporary AI models often falter. Because transformer-based AI models can only process data that’s within their context window, they often have trouble reasoning over extremely large datasets — analyzing a spreadsheet with billions of rows, for instance. But that kind of enormous structured dataset is common within large enterprises, creating a significant opportunity for models that can handle the scale.
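
Back-of-the-envelope arithmetic makes that scale gap obvious. The figures below are illustrative assumptions, not Fundamental's numbers.

```python
# Back-of-the-envelope look at why a billions-of-rows table can't simply be fed
# into an LLM's context window. All figures are illustrative assumptions.
ROWS = 1_000_000_000        # a "billions of rows" enterprise table
TOKENS_PER_ROW = 20         # assume roughly 20 tokens per serialized row
CONTEXT_WINDOW = 1_000_000  # a generous long-context model, in tokens

table_tokens = ROWS * TOKENS_PER_ROW
print(f"table size:           {table_tokens:,} tokens")
print(f"fits in one window:   {table_tokens <= CONTEXT_WINDOW}")
print(f"windows to read once: {table_tokens // CONTEXT_WINDOW:,}")
```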

As Fraenkel sees it, that’s a huge opportunity for Fundamental. Using Nexus, the company can bring contemporary techniques to big data analysis, offering something more powerful and flexible than the algorithms that are currently in use.

“You can now have one model across all of your use cases, so you can now expand massively the number of use cases that you tackle,” he told TechCrunch. “And on each one of those use cases, you get better performance than what you would otherwise be able to do with an army of data scientists.”

That promise has already brought in a number of high-profile contracts, including seven-figure contracts with Fortune 100 clients. The company has also entered into a strategic partnership with AWS that will allow AWS users to deploy Nexus directly from existing instances.
