Tech
A roadmap for AI, if anyone will listen
While Washington’s breakup with Anthropic exposed the lack of any coherent rules governing artificial intelligence, a bipartisan coalition of thinkers has assembled something the government has so far declined to produce: a framework for what responsible AI development should actually look like.
The Pro-Human Declaration was finalized before last week’s Pentagon-Anthropic standoff, but the collision of the two events wasn’t lost on anyone involved.
“There’s something quite remarkable that has happened in America just in the last four months,” said Max Tegmark, the MIT physicist and AI researcher who helped organize the effort, in conversation with this editor. “Polling suddenly [is showing] that 95% of all Americans oppose an unregulated race to superintelligence.”
The newly published document, signed by hundreds of experts, former officials, and public figures, opens with the no-nonsense observation that humanity is at a fork in the road. One path, which the declaration calls “the race to replace,” leads to humans being supplanted first as workers, then as decision-makers, as power accrues to unaccountable institutions and their machines. The other leads to AI that massively expands human potential.
The latter scenario depends on five key pillars: keeping humans in charge, avoiding the concentration of power, protecting the human experience, preserving individual liberty, and holding AI companies legally accountable. Among its more muscular provisions are an outright prohibition on superintelligence development until there is scientific consensus it can be done safely and genuine democratic buy-in; mandatory off-switches on powerful systems; and a ban on architectures capable of self-replication, autonomous self-improvement, or resistance to shutdown.
The declaration’s release coincides with a period that makes its urgency far easier to appreciate. On the last Friday in February, Defense Secretary Pete Hegseth designated Anthropic, whose AI already runs on classified military platforms, a “supply chain risk,” a label ordinarily reserved for firms with ties to China, after the company refused to grant the Pentagon unlimited use of its technology. Hours later, OpenAI cut its own deal with the Defense Department, one that legal experts say will be difficult to enforce in any meaningful way. What it all laid bare is how costly Congressional inaction on AI has become.
As Dean Ball, a senior fellow at the Foundation for American Innovation, told The New York Times afterward, “This is not just some dispute over a contract. This is the first conversation we have had as a country about control over AI systems.”
Techcrunch event
San Francisco, CA
|
October 13-15, 2026
Tegmark reached for an analogy that most people can understand when we spoke. “You never have to worry that some drug company is going to release some other drug that causes massive harm before people have figured out how to make it safe,” he said, “because the FDA won’t allow them to release anything until it’s safe enough.”
Washington turf wars rarely generate the kind of public pressure that changes laws. Instead, Tegmark sees child safety as the pressure point most likely to crack the current impasse. Indeed, the declaration calls for mandatory pre-deployment testing of AI products — particularly chatbots and companion apps aimed at younger users — covering risks including increased suicidal ideation, exacerbation of mental health conditions, and emotional manipulation.
“If some creepy old man is texting an 11-year-old pretending to be a young girl and trying to persuade this boy to commit suicide, the guy can go to jail for that,” Tegmark said. “We already have laws. It’s illegal. So why is it different if a machine does it?”
He believes that once the principle of pre-release testing is established for children’s products, the scope will widen almost inevitably. “People will come along and be like — let’s add a few other requirements. Maybe we should also test that this can’t help terrorists make bioweapons. Maybe we should test to make sure that superintelligence doesn’t have the ability to overthrow the U.S. government.”
It is no small thing that former Trump advisor Steve Bannon and Susan Rice, President Obama’s National Security Advisor, have signed the same document — along with former Joint Chiefs Chairman Mike Mullen and progressive faith leaders.
“What they agree on, of course, is that they’re all human,” says Tegmark. “If it’s going to come down to whether we want a future for humans or a future for machines, of course they’re going to be on the same side.”
Tech
Spotify will let you edit your Taste Profile to control your recommendations
At the SXSW conference on Friday, Spotify co-CEO Gustav Söderström announced a new feature, launching in beta, that will allow listeners for the first time to review and edit their Taste Profile, the algorithmically generated model of their music preferences.
This Taste Profile is key to Spotify’s recommendations, including personalized playlists like Discover Weekly, Made For You recommendations, and the year-end review known as Spotify Wrapped, among other things.
Starting with Premium listeners in New Zealand, Spotify will allow users to see all their listening data in one place in the app, including music, podcasts, and audiobooks. Users will then be able to edit this profile and even fine-tune future recommendations by asking for more or less of a certain vibe. After doing so, the app’s home page will reflect a different set of suggestions.

To access the Taste Profile, users tap on their profile pic, then scroll down. Changes can be made using natural language prompts.
Spotify previously offered some tools to remove music from your Taste Profile, but they were less comprehensive: users could only exclude certain tracks or playlists from their profile. Because of this, and because the Taste Profile was largely hidden, Spotify users often complained that the app’s recommendations didn’t reflect their interests.

Today, users often share their Spotify account with others, like family members who access it through a shared smart speaker or smart TV in the living room, or teens who take over the music via CarPlay on family drives.
Other times, users may listen to music that they don’t want to characterize as their “taste,” like the sleep sounds or quiet tracks they play at night, or music to entertain their kids. Users don’t always remember which tracks or playlists need to be removed, nor do they have time to go back and do so. This can lead to the Taste Profile becoming cluttered with music users don’t like.
This clutter has also marred, and in some cases ruined, many people’s annual Wrapped experience in the app, particularly when kids use their parents’ Spotify accounts. For years, Spotify users have asked for a fix for this problem.
Spotify says the Taste Profile feature will roll out in the coming weeks in New Zealand before expanding to other markets.
Tech
The wild six weeks for NanoClaw’s creator that led to a deal with Docker
It’s been a whirlwind for NanoClaw creator Gavriel Cohen.
About six weeks ago, he introduced NanoClaw on Hacker News as a tiny, open source, secure alternative to the AI agent-building sensation OpenClaw, after he built it in a weekend coding binge. That post went viral.
“I sat down on the couch in my sweatpants,” Cohen told TechCrunch, “and just basically melted into [it] the whole weekend, probably almost 48 hours straight.”
About three weeks ago, an X post praising NanoClaw from famed AI researcher Andrej Karpathy went viral.
About a week ago, Cohen closed down his AI marketing startup to focus full-time on NanoClaw and launch a company around it called NanoCo. The attention from Hacker News and Karpathy had translated into 22,000 stars on GitHub, 4,600 forks (people building new versions off the project), and over 50 contributors. He’s already added hundreds of updates to his project with hundreds more in the queue.
Now, on Friday, Cohen announced a deal with Docker, the company that essentially invented the container technology NanoClaw is built on and that counts millions of developers and nearly 80,000 enterprise customers, to integrate Docker Sandboxes into NanoClaw.
Scary security of OpenClaw
It all started when Cohen launched an AI marketing startup with his brother, Lazer Cohen, a few months ago. The startup offered marketing services like market research, go-to-market analysis, and blog posts through a small team of people using AI agents.
The agency started booking customers, and was on track to hit $1 million in annual recurring revenue, the brothers told TechCrunch.
“It was going really well, great traction. I’m a huge believer in that business model of AI-native service companies that have margins and operate like a software company but are actually providing services,” said Cohen, a computer programmer who previously worked for website hosting company Wix.
He had built the agents the startup was using, largely using Claude Code, each designed to do specific tasks. But there was “a piece” missing, he said. The agent could do work when prompted, but the humans couldn’t pre-schedule work, or connect agents to team communication tools like WhatsApp and assign tasks that way. (WhatsApp is to most of the world what Slack is to corporate America.)
Cohen heard about OpenClaw, the popular AI agent tool whose creator now works for OpenAI. Cohen used it to build out those final interfaces, and loved it.
“There was this big aha moment of: This is the piece that connects all of these separate workflows that I’ve been building,” he said. He immediately decided he wanted more of them: “on R&D, on product, on client management,” one for every task the startup had to handle.
But then OpenClaw scared the bejesus out of him.
In researching a hiccup with performance, he stumbled across a file where the OpenClaw agent had downloaded all of his WhatsApp messages and stored them in plain, unencrypted text on his computer. Not just the work-related messages it was given explicit access to, but all of them, his personal messages too.
OpenClaw has been widely panned as a “security nightmare” because of the way it accesses memory and account permissions. It is difficult to limit its access to data on a machine once it has been installed.
That issue will likely improve over time, given the project’s popularity, but Cohen had another concern: the sheer size of OpenClaw. As he researched security options for it, he saw all the packages that had been bundled into it. Among them was an “obscure” open source project he himself had written a few months earlier for editing PDFs using a Google image editing model. He had no idea it was there; he wasn’t even actively maintaining that project.
He realized there was no way for him to validate all OpenClaw’s code and its dependencies, which, by some estimates, sprawled across 800,000 lines of code.
So he built his own in just 500 lines of code, intended to be used for his company, and shared it. He based it on Apple’s new container tech, which creates isolated environments that prevent software from accessing any data on a machine beyond what it is explicitly authorized to use.
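The isolation principle at work is simple: an agent should only be able to touch what it has been explicitly granted. As a rough illustration of that idea (this is not NanoClaw’s actual code, and all names here are hypothetical), a workspace allowlist can be sketched in a few lines of Python:

```python
from pathlib import Path

# Hypothetical sketch of the allowlist idea behind container isolation:
# the agent may only read or write inside one explicitly shared folder.
WORKSPACE = Path("/tmp/agent-workspace").resolve()

def safe_open(relative_path: str, mode: str = "r"):
    """Open a file only if it resolves inside the agent's workspace.

    Paths that escape the workspace, e.g. via "../" tricks or absolute
    paths, are rejected, mimicking what a container's mount boundary
    enforces at the operating-system level.
    """
    target = (WORKSPACE / relative_path).resolve()
    if target != WORKSPACE and WORKSPACE not in target.parents:
        raise PermissionError(f"access outside workspace denied: {target}")
    return open(target, mode)
```

A container runtime enforces the same boundary for every system call, not just file opens, which is why Cohen reached for OS-level containers rather than an in-process check like this.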
Going viral
A couple of weeks after he shared it on Hacker News, his phone started ringing non-stop at 4 a.m. A friend had seen Karpathy’s post and was urging Cohen to wake up and start tweeting, which he did, setting off a public discussion with the well-known AI researcher.
Attention to NanoClaw snowballed: more tweets, YouTube reviews from programmers, and news stories. A domain squatter even snagged a NanoClaw website URL. The correct one is nanoclaw.dev.
Then Oleg Šelajev, a developer who works for Docker, reached out. Šelajev had seen the buzz and modified NanoClaw to replace Apple’s container technology with Docker’s competing alternative, Sandboxes.
Cohen had no hesitation about pushing out support for Sandboxes as part of the main NanoClaw project. “This is no longer my own personal agent that I’m running on my Mac Mini,” he recalled thinking. “This now has a community around it. There are thousands of people using it. Yeah, I said, I’m going to move over to the standard.”
For all the changes these weeks have brought Cohen and his brother Lazer, now CEO and president of NanoCo, respectively, one area still needs to be figured out: how NanoCo will make money.
NanoClaw is free and open source and, as these things go, the Cohens vow it always will be. They know they would be strung up as villains if they ever betrayed the open source community by changing that. Currently the Cohens are living on a friends-and-family fundraising round, they said.
While they are cautious about announcing their commercial plans — in large part because they haven’t had a chance to fully formulate them — VCs are already calling, they say.
The game plan is to build a fully supported commercial product with services including so-called forward-deployed engineers — specialists embedded directly with client companies to help them build and manage their systems. This will likely focus on assisting companies in building and maintaining secure agents. That is, however, a crowded field growing more crowded by the hour.
But given the giant community of developers that NanoClaw just unlocked with Docker, we’re sure to hear more about this soon.
Pictured above from left to right, Lazer and Gavriel Cohen.
Tech
Travis Kalanick launches a new company called Atoms focused on robotics
Uber founder Travis Kalanick has a new company called Atoms focused on robotics that, according to its website, will operate in the food, mining, and transportation industries.
Kalanick is rolling his existing ghost kitchen company, CloudKitchens, into Atoms. It’s not immediately clear how he plans to tackle mining and transportation. Atoms’ website says it will build a “wheelbase for robots,” and Kalanick said in a live interview with TBPN on Friday that his company will apply this wheelbase to “specialized robots” — not humanoids.
“Humanoids have their place, but there’s a lot of room for specialized robots that do things in an efficient, sort of industrial-scale kind of way, which is sort of where we play,” he said.
To support the mining business, Kalanick said Friday that he’s on the verge of acquiring Pronto, the autonomous vehicle startup focused on industrial and mining sites that was created by his former Uber colleague, Anthony Levandowski. Kalanick revealed Friday that he is already the “largest investor” in Pronto.
“The industrial thing is sort of like, probably, our main jam,” Kalanick told TBPN. Kalanick demurred on the idea of using Atoms robots to move people, at least in the near-term. “Once you crack movement in the physical world, there’s lots of people who want access to that.”
Earlier Friday, The Information reported that Kalanick was getting back into self-driving vehicles with “major backing” from Uber, and that he has told people he “wants to be more aggressive in rolling out self-driving technology than Waymo.” Uber didn’t immediately respond to a request for comment. Atoms’ website makes no mention of Uber. The Information first reported Kalanick was discussing acquiring Pronto.
Last year, Kalanick was said to be interested in buying the U.S. arm of Chinese self-driving vehicle company Pony AI with backing from Uber, though The Information said Friday that those talks ended.
Kalanick resigned from Uber in 2017 after a confluence of crises at the ride-hail company. At the time, the company was plagued by complaints of sexual harassment and discrimination, which sparked an external investigation that resulted in more than 20 employees being fired.
Before that, Kalanick had created a self-driving division at Uber in 2015. Levandowski played a big role in that project after Kalanick lured him away from Google. Uber was ultimately sued by Google for stealing secrets related to its own self-driving car project (which eventually became Waymo). The two companies settled, but Levandowski was criminally charged and sentenced to 18 months in prison for his role in the affair. The engineer received a last-minute pardon from President Trump at the end of his first term.
The company kept working on the project after Kalanick resigned, including after one of its test vehicles struck and killed a pedestrian in 2018. Kalanick’s successor, Dara Khosrowshahi, shuttered and sold the division to autonomous trucking company Aurora in 2020.
In a rare interview in March 2025, Kalanick expressed regret that Uber had abandoned developing its own self-driving cars.
This story has been updated to reflect new information from Atoms’ website and an interview with Kalanick.
