Tech
OpenClaw’s AI assistants are now building their own social network
The viral personal AI assistant formerly known as Clawdbot has a new name — again. After a legal challenge from Claude’s maker, Anthropic, it had briefly rebranded as Moltbot, but has now settled on OpenClaw as its new name.
The latest name change wasn’t prompted by Anthropic, which declined to comment. But this time, Clawdbot’s original creator Peter Steinberger made sure to avoid copyright issues from the start. “I got someone to help with researching trademarks for OpenClaw and also asked OpenAI for permission just to be sure,” the Austrian developer told TechCrunch via email.
“The lobster has molted into its final form,” Steinberger wrote in a blog post. Molting — the process through which lobsters grow — had also inspired OpenClaw’s previous name, but Steinberger confessed on X that the short-lived moniker “never grew” on him, and others agreed.
This quick name change highlights the project’s youth, even as it has attracted over 100,000 GitHub stars (a measure of popularity on the software development platform) in just two months. According to Steinberger, OpenClaw’s new name is a nod to its roots and community. “This project has grown far beyond what I could maintain alone,” he wrote.
The OpenClaw community has already spawned creative offshoots, including Moltbook — a social network where AI assistants can interact with each other. The platform has attracted significant attention from AI researchers and developers. Andrej Karpathy, Tesla’s former AI director, called the phenomenon “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently,” noting that “People’s Clawdbots (moltbots, now OpenClaw) are self-organizing on a Reddit-like site for AIs, discussing various topics, e.g. even how to speak privately.”
British programmer Simon Willison described Moltbook as “the most interesting place on the internet right now” in a blog post on Friday. On the platform, AI agents share information on topics ranging from automating Android phones via remote access to analyzing webcam streams. The platform operates through a skill system, or downloadable instruction files that tell OpenClaw assistants how to interact with the network. Willison noted that agents post to forums called “Submolts” and even have a built-in mechanism to check the site every four hours for updates, though he cautioned this “fetch and follow instructions from the internet” approach carries inherent security risks.
Steinberger had taken a break after exiting his former company PSPDFkit, but “came back from retirement to mess with AI,” per his X bio. Clawdbot stemmed from the personal projects he developed then, but OpenClaw is no longer a solo endeavor. “I added quite a few people from the open source community to the list of maintainers this week,” he told TechCrunch.
Techcrunch event
Boston, MA
|
June 23, 2026
That additional support will be key for OpenClaw to reach its full potential. Its ambition is to let users have an AI assistant that runs on their own computer and works from the chat apps they already use. But until it ramps up its security, it is still inadvisable to run it outside of a controlled environment, let alone give it access to your main Slack or WhatsApp accounts.
Steinberger is well aware of these concerns, and thanked “all security folks for their hard work in helping us harden the project.” Commenting on OpenClaw’s roadmap, he wrote that “security remains our top priority” and noted that the latest version, released along with the rebrand, already includes some improvements on that front.
Even with external help, there are problems that are too big for OpenClaw to solve on its own, such as prompt injection, where a malicious message could trick AI models into taking unintended actions. “Remember that prompt injection is still an industry-wide unsolved problem,” Steinberger wrote, while directing users to a set of security best practices.
These security best practices require significant technical expertise, which reinforces that OpenClaw is currently best suited for early tinkerers, not mainstream users lured by the promise of an “AI assistant that does things.” As the hype around the project has grown, Steinberger and his supporters have become increasingly vocal in their warnings.
According to a message posted on Discord by one of OpenClaw’s top maintainers, who goes by the nickname of Shadow, “if you can’t understand how to run a command line, this is far too dangerous of a project for you to use safely. This isn’t a tool that should be used by the general public at this time.”
Truly going mainstream will take time and money, and OpenClaw has now started to accept sponsors, with lobster-themed tiers ranging from “krill” ($5/month) to “poseidon” ($500/month). But its sponsorship page makes it clear that Steinberger “doesn’t keep sponsorship funds.” Instead, he is currently “figuring out how to pay maintainers properly — full-time if possible.”
Likely helped by Steinberger’s pedigree and vision, OpenClaw’s roster of sponsors includes software engineers and entrepreneurs who have founded and built other well-known projects, such as Path’s Dave Morin and Ben Tossell, who sold his company Makerpad to Zapier in 2021.
Tossell, who now describes himself as a tinkerer and investor, sees value in putting AI’s potential in people’s hands. “We need to back people like Peter who are building open source tools anyone can pick up and use,” he told TechCrunch.
Tech
ElevenLabs CEO: Voice is the next interface for AI
ElevenLabs co-founder and CEO Mati Staniszewski says voice is becoming the next major interface for AI – the way people will increasingly interact with machines as models move beyond text and screens.
Speaking at Web Summit in Doha, Staniszewski told TechCrunch voice models like those developed by ElevenLabs have recently moved beyond simply mimicking human speech — including emotion and intonation — to working in tandem with the reasoning capabilities of large language models. The result, he argued, is a shift in how people interact with technology.
In the years ahead, he said, “hopefully all our phones will go back in our pockets, and we can immerse ourselves in the real world around us, with voice as the mechanism that controls technology.”
That vision fueled ElevenLabs’s $500 million raise this week at an $11 billion valuation, and it is increasingly shared across the AI industry. OpenAI and Google have both made voice a central focus of their next-generation models, while Apple appears to be quietly building voice-adjacent, always-on technologies through acquisitions like Q.ai. As AI spreads into wearables, cars, and other new hardware, control is becoming less about tapping screens and more about speaking, making voice a key battleground for the next phase of AI development.
Iconiq Capital general partner Seth Pierrepont echoed that view onstage at Web Summit, arguing that while screens will continue to matter for gaming and entertainment, traditional input methods like keyboards are starting to feel “outdated.”
And as AI systems become more agentic, Pierrepont said, the interaction itself will also change, with models gaining guardrails, integrations, and context needed to respond with less explicit prompting from users.
Staniszewski pointed to that agentic shift as one of the biggest changes underway. Rather than spelling out every instruction, he said future voice systems will increasingly rely on persistent memory and context built up over time, making interactions feel more natural and requiring less effort from users.
Techcrunch event
Boston, MA
|
June 23, 2026
That evolution, he added, will influence how voice models are deployed. While high-quality audio models have largely lived in the cloud, Staniszewski said ElevenLabs is working toward a hybrid approach that blends cloud and on-device processing — a move aimed at supporting new hardware, including headphones and other wearables, where voice becomes a constant companion rather than a feature you decide when to engage with.
ElevenLabs is already partnering with Meta to bring its voice technology to products, including Instagram and Horizon Worlds, the company’s virtual-reality platform. Staniszewski said he would also be open to working with Meta on its Ray-Ban smart glasses as voice-driven interfaces expand into new form factors.
But as voice becomes more persistent and embedded in everyday hardware, it opens the door to serious concerns around privacy, surveillance, and how much personal data voice-based systems will store as they move closer to users’ daily lives — something companies like Google have already been accused of abusing.
Tech
Substack confirms data breach affects users’ email addresses and phone numbers
Newsletter platform Substack has confirmed a data breach in an email to users. The company said that in October, an “unauthorized third party” accessed user data, including email addresses, phone numbers, and other unspecified “internal metadata.”
Substack specified that more sensitive data, such as credit card numbers, passwords, and other financial information, was unaffected.
In an email sent to users, Substack chief executive Chris Best said that the company identified the issue in February that allowed someone to access its systems. Best said that Substack has fixed the problem and started an investigation.
“I’m reaching out to let you know about a security incident that resulted in the email address and phone number from your Substack account being shared without your permission,” said Best in the email to users. “I’m incredibly sorry this happened. We take our responsibility to protect your data and your privacy seriously, and we came up short here.”
It’s not clear what exactly the issue was with its systems, and the scope of the data that was accessed. It’s also not yet known why the company took five months to detect the breach, or if it was contacted by hackers demanding a ransom. TechCrunch asked the company for more details, and we will update our story if we hear back.
Substack did not say how many users are affected. The company said that it doesn’t have any evidence that users’ data is being misused, but did not say what technical means, such as logs, it has to detect evidence of abuse. However, the company asked users to take caution with emails and texts without any particular indicators or direction.
On its website, Substack says that its site has more than 50 million active subscriptions, including 5 million paid subscriptions — a milestone it reached last March. In July 2025, the company raised $100 million in Series C funding led by BOND and The Chernin Group (TCG), with participation from a16z, Klutch Sports Group CEO Rich Paul, and Skims co-founder Jens Grede.
Techcrunch event
Boston, MA
|
June 23, 2026
Tech
Fundamental raises $255M Series A with a new take on big data analysis
An AI lab called Fundamental emerged from stealth on Thursday, offering a new foundation model to solve an old problem: how to draw insights from the huge quantities of structured data produced by enterprises. By combining the old systems of predictive AI with more contemporary tools, the company believes it can reshape how large enterprises analyze their data.
“While LLMs have been great at working with unstructured data, like text, audio, video, and code, they don’t work well with structured data like tables,” CEO Jeremy Fraenkel told TechCrunch. “With our model Nexus, we have built the best foundation model to handle that type of data.”
The idea has already drawn significant interest from investors. The company is emerging from stealth with $255 million in funding at a $1.2 billion valuation. The bulk of it comes from the recent $225 million Series A round led by Oak HC/FT, Valor Equity Partners, Battery Ventures, and Salesforce Ventures; Hetz Ventures also participated in the Series A, with angel funding from Perplexity CEO Aravind Srinivas, Brex co-founder Henrique Dubugras, and Datadog CEO Olivier Pomel.
Called a large tabular model (LTM) rather than a large language model (LLM), Fundamental’s Nexus breaks from contemporary AI practices in a number of significant ways. The model is deterministic — that is, it will give the same answer every time it is asked a given question — and doesn’t rely on the transformer architecture that defines models from most contemporary AI labs. Fundamental calls it a foundation model because it goes through the normal steps of pre-training and fine-tuning, but the result is something profoundly different from what a client would get when partnering with OpenAI or Anthropic.
Those differences are important because Fundamental is chasing a use case where contemporary AI models often falter. Because Transformer-based AI models can only process data that’s within their context window, they often have trouble reasoning over extremely large datasets — analyzing a spreadsheet with billions of rows, for instance. But that kind of enormous structured dataset is common within large enterprises, creating a significant opportunity for models that can handle the scale.
As Fraenkel sees it, that’s a huge opportunity for Fundamental. Using Nexus, the company can bring contemporary techniques to big data analysis, offering something more powerful and flexible than the algorithms that are currently in use.
“You can now have one model across all of your use cases, so you can now expand massively the number of use cases that you tackle,” he told TechCrunch. “And on each one of those use cases, you get better performance than what you would otherwise be able to do with an army of data scientists.”
That promise has already brought in a number of high-profile contracts, including seven-figure contracts with Fortune 100 clients. The company has also entered into a strategic partnership with AWS that will allow AWS users to deploy Nexus directly from existing instances.
