Tech
Travis Kalanick launches a new company called Atoms focused on robotics
Uber founder Travis Kalanick has a new company called Atoms focused on robotics that, according to its website, will operate in the food, mining, and transportation industries.
Kalanick is rolling his existing ghost kitchen company, CloudKitchens, into Atoms. It’s not immediately clear how he plans to tackle mining and transportation. Atoms’ website says it will build a “wheelbase for robots,” and Kalanick said in a live interview with TBPN on Friday that his company will apply this wheelbase to “specialized robots” — not humanoids.
“Humanoids have their place, but there’s a lot of room for specialized robots that do things in an efficient, sort of industrial-scale kind of way, which is sort of where we play,” he said.
To support the mining business, Kalanick said Friday that he’s on the precipice of acquiring Pronto, the autonomous vehicle startup focused on industrial and mining sites that was created by his former Uber colleague, Anthony Levandowski. Kalanick revealed Friday that he is already the “largest investor” in Pronto.
“The industrial thing is sort of like, probably, our main jam,” Kalanick told TBPN. Kalanick demurred on the idea of using Atoms robots to move people, at least in the near-term. “Once you crack movement in the physical world, there’s lots of people who want access to that.”
Earlier Friday The Information reported Kalanick was getting back into self-driving vehicles with “major backing” from Uber, and that he has reportedly told people he “wants to be more aggressive in rolling out self-driving technology than Waymo.” Uber didn’t immediately respond to a request for comment. Atoms’ website makes no mention of Uber. The Information first reported Kalanick was discussing acquiring Pronto.
Last year, Kalanick was said to be interested in buying the U.S. arm of Chinese self-driving vehicle company Pony AI with backing from Uber, though The Information said Friday that those talks ended.
Kalanick resigned from Uber in 2017 after a confluence of crises at the ride-hail company. At the time, the company was plagued by complaints of sexual harassment and discrimination, which sparked an external investigation that resulted in more than 20 employees being fired.
Before that, Kalanick had created a self-driving division at Uber in 2015. Levandowski played a big role in that project after Kalanick lured him away from Google. Uber was ultimately sued by Google for stealing secrets related to its own self-driving car project (which eventually became Waymo). The two companies settled, but Levandowski was criminally charged and sentenced to 18 months in prison for his role in the affair. The engineer received a last-minute pardon from President Trump at the end of his first term.
The company kept working on the project after Kalanick resigned, including after one of its test vehicles struck and killed a pedestrian in 2018. Kalanick’s successor, Dara Khosrowshahi, shuttered and sold the division to autonomous trucking company Aurora in 2020.
In a rare interview in March 2025, Kalanick expressed regret that Uber had abandoned developing its own self-driving cars.
This story has been updated to reflect new information from Atoms’ website and an interview with Kalanick.
Tech
Spotify will let you edit your Taste Profile to control your recommendations
At the SXSW conference on Friday, Spotify co-CEO Gustav Söderström announced a new feature, launching in beta, that will allow listeners for the first time to review and edit their Taste Profile, the algorithmically generated model of their music preferences.
This Taste Profile is key to Spotify’s recommendations, including personalized playlists like Discover Weekly, Made For You recommendations, and the year-end review known as Spotify Wrapped, among other things.
Starting with Premium listeners initially in New Zealand, Spotify will allow users to see all their listening data in one place in the app, including music, podcasts, and audiobooks. Users will then be able to edit this profile and even fine-tune future recommendations by asking for more or less of a certain vibe. After doing so, the app’s home page will reflect a different set of suggestions.

To access the Taste Profile, users tap on their profile pic, then scroll down. Changes can be made using natural language prompts.
Spotify had previously offered some tools to remove music from your Taste Profile before, but they were not as comprehensive. Instead, users were only able to exclude certain tracks or playlists from their profile. Because of this, and the largely hidden nature of the Taste Profile overall, Spotify users often complained that the app’s recommendations didn’t reflect their interests.

Today, users often share their Spotify account with others, like family members who access their account through a shared smart speaker or smart TV in the living room, for example, or teens who take over in CarPlay while they drive.
Other times, users may listen to music that they don’t want to characterize as their “taste,” like the sleep sounds or quiet tracks they play at night, or music to entertain their kids. Users don’t always remember which tracks or playlists need to be removed, nor do they have time to go back and do so. This can lead to the Taste Profile becoming cluttered with music users don’t like.
Techcrunch event
San Francisco, CA
|
October 13-15, 2026
It also significantly impacted, even ruined, many people’s annual Wrapped experience in the app — particularly because of kids’ use of their parents’ Spotify accounts. For years, Spotify users have asked for a fix for this problem.
Spotify says the Taste Profile feature will roll out in the coming weeks in New Zealand before expanding to other markets.
Tech
The wild six weeks for NanoClaw’s creator that led to a deal with Docker
It’s been a whirlwind for NanoClaw creator Gavriel Cohen.
About six weeks ago, he introduced NanoClaw on Hacker News as a tiny, open source, secure alternative to the AI agent-building sensation OpenClaw, after he built it in a weekend coding binge. That post went viral.
“I sat down on the couch in my sweatpants,” Cohen told TechCrunch, “and just basically melted into [it] the whole weekend, probably almost 48 hours straight.”
About three weeks ago, an X post praising NanoClaw from famed AI researcher Andrej Karpathy went viral.
About a week ago, Cohen closed down his AI marketing startup to focus full-time on NanoClaw and launch a company around it called NanoCo. The attention from Hacker News and Karpathy had translated into 22,000 stars on GitHub, 4,600 forks (people building new versions off the project), and over 50 contributors. He’s already added hundreds of updates to his project with hundreds more in the queue.
Now, on Friday, Cohen announced a deal with Docker — the company that essentially invented the container technology NanoClaw is built on, and counts millions of developers and nearly 80,000 enterprise customers — to integrate Docker Sandboxes into NanoClaw.
Scary security of OpenClaw
It all started when Cohen launched an AI marketing startup with his brother, Lazer Cohen, a few months ago. The startup offered marketing services like market research, go-to-market analysis, and blog posts through a small team of people using AI agents.
Techcrunch event
San Francisco, CA
|
October 13-15, 2026
The agency started booking customers, and was on track to hit $1 million in annual recurring revenue, the brothers told TechCrunch.
“It was going really well, great traction. I’m a huge believer in that business model of AI-native service companies that have margins and operate like a software company but are actually providing services,” said Cohen, a computer programmer who previously worked for website hosting company Wix.
He had built the agents the startup was using, largely using Claude Code, each designed to do specific tasks. But there was “a piece” missing, he said. The agent could do work when prompted, but the humans couldn’t pre-schedule work, or connect agents to team communication tools like WhatsApp and assign tasks that way. (WhatsApp is to most of the world what Slack is to corporate America.)
Cohen heard about OpenClaw, the popular AI agent tool whose creator now works for OpenAI. Cohen used it to build out those final interfaces, and loved it.
“There was this big aha moment of: This is the piece that connects all of these separate workflows that I’ve been building,” he said and immediately decided, “I want more of them: on R& D, on product, on client management,” one for every task the startup had to handle.
But then OpenClaw scared the bejesus out of him.
In researching a hiccup with performance, he stumbled across a file where the OpenClaw agent had downloaded all of his WhatsApp messages and stored them in plain, unencrypted text on his computer. Not just the work-related messages it was given explicit access to, but all of them, his personal messages too.
OpenClaw has been widely panned as a “security nightmare” because of the way it accesses memory and account permissions. It is difficult to limit its access to data on a machine once it has been installed.
That issue will likely improve over time, given the project’s popularity, but Cohen had another concern: the sheer size of OpenClaw. As he researched security options for it, he saw all the packages that had been bundled into it. It included an “obscure” open source project he himself had written a few months earlier for editing PDFs using a Google image editing model. He had no idea it was there — he wasn’t even actively maintaining that project.
He realized there was no way for him to validate all OpenClaw’s code and its dependencies, which, by some estimates, sprawled across 800,000 lines of code.
So he built his own in just 500 lines of code, intended to be used for his company, and shared it. He based it on Apple’s new container tech, which creates isolated environments that prevent software from accessing any data on a machine beyond what it is explicitly authorized to use.
Going viral
At 4 a.m., a couple of weeks after sharing it on Hacker News, his phone started ringing non-stop. A friend had seen Karpathy’s post and was urging Cohen to wake up and start tweeting, which he did, setting off a public discussion with the well-known AI researcher.
Attention to NanoClaw followed like a landslide. More tweets, YouTube reviews from programmers, and news stories. A domain squatter even snagged a NanoClaw website URL. The correct one is nanoclaw.dev.
Then Oleg Šelajev, a developer who works for Docker reached out. Šelajev saw the buzz and modified NanoClaw to replace Apple’s container technology with Docker’s competing alternative, Sandboxes.
Cohen had no hesitation about pushing out support for Sandboxes as part of the main NanoClaw project. “This is no longer my own personal agent that I’m running on my Mac Mini,” he recalled thinking. “This now has a community around it. There are thousands of people using it. Yeah, I said, I’m going to move over to the standard.”
For all the changes these weeks have brought Cohen and his brother Lazer, now CEO and president of NanoCo, respectively, one area still needs to be figured out: how NanoCo will make money.
NanoClaw is free and open source and, as these things go, the Cohens vow it always will be. They know they would be strung up as villains if they ever betrayed the open source community by changing that. Currently the Cohens are living on a friends-and-family fundraising round, they said.
While they are cautious about announcing their commercial plans — in large part because they haven’t had a chance to fully formulate them — VCs are already calling, they say.
The game plan is to build a fully supported commercial product with services including so-called forward-deployed engineers — specialists embedded directly with client companies to help them build and manage their systems. This will likely focus on assisting companies in building and maintaining secure agents. That is, however, a crowded field growing more crowded by the hour.
But given the giant community of developers that NanoClaw just unlocked with Docker, we’re sure to hear more about this soon.
Pictured above from left to right, Lazer and Gavriel Cohen.
Tech
The biggest AI stories of the year (so far)
You can chart a year through product launches, or you can measure it in the greater moments that change the way we look at AI. The AI industry is constantly churning out news, like major acquisitions, indie developer successes, public outcry against sketchy products, and existentially dangerous contract negotiations — it’s a lot to untangle, so we’re taking a glimpse at where we’re at and where we’ve been so far this year.
Anthropic vs. the Pentagon
Once business partners, Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth reached a bitter stalemate in February as they renegotiated the contracts that dictate how the U.S. military can use Anthropic’s AI tools.
Anthropic established a hard line against its AI being used for mass surveillance of Americans or to power autonomous weapons that can attack without human oversight. Meanwhile, the Pentagon has argued that the Department of Defense — which President Donald Trump’s administration calls the Department of War — should be permitted access to Anthropic’s models for any “lawful use.” Government representatives took offense to the idea that the military should be limited to the rules of a private company, but Amodei stood his ground.
“Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner,” Amodei wrote in a statement addressing the situation. “However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.”
The Pentagon gave Anthropic a deadline to agree to their contract. Hundreds of employees at Google and OpenAI signed an open letter urging their respective leaders to respect Amodei’s limits and refuse to budge on issues of autonomous weapons or domestic surveillance.
The deadline passed without Anthropic agreeing to the Pentagon’s demands. Trump directed federal agencies to phase out their use of Anthropic tools over a six-month transition period and called the AI company, which is valued at $380 billion, a “radical left, woke company” in an all-caps social media post. The Pentagon then moved to declare Anthropic a “supply-chain risk,” a designation that is usually reserved for foreign adversaries and prevents any company that works with Anthropic from doing business with the U.S. military. (Anthropic has since sued to challenge the designation.)
Anthropic rival OpenAI then swooped in and announced that it had reached an agreement allowing its own models to be deployed in classified situations. It was a shock to the tech community, since reports had indicated that OpenAI would stick to Anthropic’s red lines governing use of AI for the military.
Techcrunch event
San Francisco, CA
|
October 13-15, 2026
Public sentiment would indicate that people found OpenAI’s move fishy — on the day after OpenAI announced its deal, ChatGPT uninstalls jumped 295% day-over-day and Anthropic’s Claude shot to No. 1 in the App Store. OpenAI hardware executive Caitlin Kalinowski quit in response to the deal, saying that it was “rushed without the guardrails defined.”
OpenAI told TechCrunch that it believes its agreement “makes clear [its] redlines: no autonomous weapons and no autonomous surveillance.”
As this saga plays out, it will have significant implications for the future of how AI is deployed at war, potentially changing the course of history — you know, no big deal …
“Vibe-coded” app OpenClaw accelerates the turn to agentic AI
February was the month of OpenClaw, and its impact continues to reverberate. In quick succession, the vibe-coded AI assistant app went viral, spawned a bunch of spinoff companies, suffered from privacy snafus, and then got acquired by OpenAI. Even one of the companies built on OpenClaw, a Reddit-clone for AI agents called Moltbook, was recently acquired by Meta. This crustacean-themed ecosystem whipped Silicon Valley into a downright frenzy.
Created by Peter Steinberger — who has since joined OpenAI — OpenClaw is a wrapper for AI models like Claude, ChatGPT, Google’s Gemini, or xAI’s Grok. What sets it apart is that it allows people to communicate with AI agents in natural language via the most popular chat apps, like iMessage, Discord, Slack, or WhatsApp. There’s also a public marketplace where people can code and upload “skills” for people to add to their AI agents, making it possible to automate basically anything that can be done on a computer.
If that seems too good to be true, it’s because it kind of is. In order for an AI agent to be effective as a personal assistant, it needs to have access to your email, credit card numbers, text messages, computer files, etc. If it were to be hacked, a lot could go wrong, and unfortunately, there’s no way to fully secure these agents against prompt-injection attacks.
“It is just an agent sitting with a bunch of credentials on a box connected to everything — your email, your messaging platform, everything you use,” Ian Ahl, CTO at Permiso Security, told TechCrunch. “So what that means is, when you get an email, and maybe somebody is able to put a little prompt injection technique in there to take an action, [and] that agent sitting on your box with access to everything you’ve given it to can now take that action.”
One AI security researcher at Meta said that OpenClaw ran amok on her inbox, deleting all of her emails despite repeated calls to stop. “I had to RUN to my Mac mini like I was defusing a bomb” to physically unplug the device, she wrote in a now-viral post on X, which included images of the ignored stop prompts as receipts.
Despite the security risks, the technology piqued OpenAI’s interest enough for an acqui-hire.
Other tools built on OpenClaw, including Moltbook — a Reddit-like “social network” where AI agents can communicate with one another — ended up becoming more viral than OpenClaw itself.
In one instance, a post went viral in which an AI agent appeared to be encouraging its fellow agents to develop their own secret, end-to-end-encrypted language where they could organize amongst themselves without humans knowing.
But researchers soon revealed that the vibe-coded Moltbook wasn’t very secure, meaning that it was very easy for human users to pose as AIs to make posts that would trigger viral social hysteria.
Again, even though the discussion around Moltbook was more grounded in panic than reality, Meta saw something in the app and announced that Moltbook and its creators, Matt Schlicht and Ben Parr, would join Meta Superintelligence Labs.
It seems strange that Meta would buy a social network where all of the users are bots. While Meta hasn’t revealed much about the acquisition, we theorize that owning Moltbook is more about gaining access to the talent behind it, who are enthusiastic about experimenting with AI agent ecosystems. CEO Mark Zuckerberg has said it himself: He thinks that one day, every business will have a business AI.
As we watch the hubbub around OpenClaw, Moltbook, and NanoClaw play out, it seems as though those who predicted an agentic AI future may be on to something, at least for now.
Chip shortages, hardware drama, and data center demands escalate
The harsh demands of the AI industry — which require computing power and data centers in unprecedented volumes — are reaching a point where the average consumer has no choice but to pay attention. Now it may not even be possible for the industry to satisfy the astronomical demands for memory chips, and consumers are already seeing the prices of their phones, laptops, cars, and other hardware increase.
So far, analysts from IDC and Counterpoint have predicted that smartphone shipments, for example, will plummet about 12% to 13% this year; Apple has already raised MacBook Pro prices by up to $400.
Google, Amazon, Meta, and Microsoft are planning to spend up to a combined $650 billion on data centers alone this year, which is an estimated 60% increase from last year.
If the chip shortage doesn’t hit you in your wallet, it might hit your community at large. In the U.S. alone, nearly 3,000 new data centers are under construction, adding to the 4,000 already operating in the country. The need for laborers to build these data centers is significant enough that “man camps” have sprung up in Nevada and Texas, attempting to lure workers with the promise of golf simulator game rooms and steaks grilled on-demand.
Not only does data center construction have a long-term impact on the environment, but it also creates health hazards for nearby residents, polluting the air and impacting the safety of nearby water sources.
All the while, one of the most valuable hardware and chip developers, Nvidia, is reshaping its relationship to leading AI companies like OpenAI and Anthropic. Nvidia has been an ongoing backer of these companies, sparking concerns around the circularity of the AI industry and how much of those eye-popping valuations are based on recursive deals with each other. Last year, for example, Nvidia invested $100 billion in OpenAI stock, and OpenAI then said it would buy $100 billion of Nvidia chips.
It was surprising, then, when Nvidia CEO Jensen Huang said that his company would stop investing in OpenAI and Anthropic. He said that this is because the companies plan to go public later this year, though that logic doesn’t quite make sense, since investors typically funnel in more money pre-IPO to extract as much value as possible.
