The trap Anthropic built for itself
Friday afternoon, just as this interview was getting underway, a news alert flashed across my computer screen: the Trump administration was severing ties with Anthropic, the San Francisco AI company founded in 2021 by Dario Amodei. Defense Secretary Pete Hegseth had invoked a national security law to blacklist the company from doing business with the Pentagon after Amodei refused to allow Anthropic’s tech to be used for mass surveillance of U.S. citizens or for autonomous armed drones that could select and kill targets without human input.
It was a jaw-dropping sequence. Anthropic stands to lose a contract worth up to $200 million and will be barred from working with other defense contractors after President Trump posted on Truth Social directing every federal agency to “immediately cease all use of Anthropic technology.” (Anthropic has since said it will challenge the Pentagon in court.)
Max Tegmark has spent the better part of a decade warning that the race to build ever-more-powerful AI systems is outpacing the world’s ability to govern them. The MIT physicist founded the Future of Life Institute in 2014 and helped organize an open letter — ultimately signed by more than 33,000 people, including Elon Musk — calling for a pause in advanced AI development.
His view of the Anthropic crisis is unsparing: the company, like its rivals, has sown the seeds of its own predicament. Tegmark’s argument doesn’t begin with the Pentagon but with a decision made years earlier — a choice, shared across the industry, to resist binding regulation. Anthropic, OpenAI, Google DeepMind and others have long promised to govern themselves responsibly. Anthropic this week even dropped the central tenet of its own safety pledge — its promise not to release increasingly powerful AI systems until the company was confident they wouldn’t cause harm.
Now, in the absence of rules, there’s not a lot to protect these players, says Tegmark. Here’s more from that interview, edited for length and clarity. You can hear the full conversation this coming week on TechCrunch’s StrictlyVC Download podcast.
When you saw this news just now about Anthropic, what was your first reaction?
The road to hell is paved with good intentions. It’s so interesting to think back a decade ago, when people were so excited about how we were going to make artificial intelligence to cure cancer, to grow prosperity in America and make America strong. And here we are now, where the U.S. government is pissed off at this company for not wanting AI to be used for domestic mass surveillance of Americans, and also not wanting killer robots that can autonomously — without any human input at all — decide who gets killed.
Anthropic has staked its entire identity on being a safety-first AI company, and yet it was collaborating with defense and intelligence agencies [dating back to at least 2024]. Do you think that’s at all contradictory?
It is contradictory. If I can give a little cynical take on this — yes, Anthropic has been very good at marketing themselves as all about safety. But if you actually look at the facts rather than the claims, what you see is that Anthropic, OpenAI, Google DeepMind and xAI have all talked a lot about how they care about safety. None of them has come out supporting binding safety regulation the way we have in other industries. And all four of these companies have now broken their own promises. First we had Google — this big slogan, ‘Don’t be evil.’ Then they dropped that. Then they dropped another longer commitment that basically said they promised not to do harm with AI. They dropped that so they could sell AI for surveillance and weapons. OpenAI just dropped the word safety from their mission statement. xAI shut down their whole safety team. And now Anthropic, earlier in the week, dropped their most important safety commitment — the promise not to release powerful AI systems until they were sure they weren’t going to cause harm.
How did companies that made such prominent safety commitments end up in this position?
All of these companies, especially OpenAI and Google DeepMind but to some extent also Anthropic, have persistently lobbied against regulation of AI, saying, ‘Just trust us, we’re going to regulate ourselves.’ And they’ve successfully lobbied. So we right now have less regulation on AI systems in America than on sandwiches. You know, if you want to open a sandwich shop and the health inspector finds 15 rats in the kitchen, he won’t let you sell any sandwiches until you fix it. But if you say, ‘Don’t worry, I’m not going to sell sandwiches, I’m going to sell AI girlfriends for 11-year-olds, and they’ve been linked to suicides in the past, and then I’m going to release something called superintelligence which might overthrow the U.S. government, but I have a good feeling about mine’ — the inspector has to say, ‘Fine, go ahead, just don’t sell sandwiches.’
There’s food safety regulation and no AI regulation.
And this, I feel, is something all of these companies share the blame for. If they had taken all those promises they made back in the day about being so safe and goody-goody, gotten together, and gone to the government and said, ‘Please take our voluntary commitments and turn them into U.S. law that binds even our sloppiest competitors,’ we wouldn’t be here. Instead, we’re in a complete regulatory vacuum. And we know what happens when there’s complete corporate amnesty: you get thalidomide, you get tobacco companies pushing cigarettes on kids, you get asbestos causing lung cancer. So it’s sort of ironic that their own resistance to laws saying what’s okay and not okay to do with AI is now coming back to bite them.
There is no law right now against building AI to kill Americans, so the government can just suddenly ask for it. If the companies themselves had earlier come out and said, ‘We want this law,’ they wouldn’t be in this pickle. They really shot themselves in the foot.
The companies’ counter-argument is always the race with China — if American companies don’t do this, Beijing will. Does that argument hold?
Let’s analyze that. The most common talking point from the lobbyists for the AI companies — they’re now better funded and more numerous than the lobbyists from the fossil fuel industry, the pharma industry and the military-industrial complex combined — is that whenever anyone proposes any kind of regulation, they say, ‘But China.’ So let’s look at that. China is in the process of banning AI girlfriends outright. Not just age limits — they’re looking at banning all anthropomorphic AI. Why? Not because they want to please America but because they feel this is screwing up Chinese youth and making China weak. Obviously, it’s making American youth weak, too.
And when people say we have to race to build superintelligence so we can win against China — when we don’t actually know how to control superintelligence, so that the default outcome is that humanity loses control of Earth to alien machines — guess what? The Chinese Communist Party really likes control. Who in their right mind thinks that Xi Jinping is going to tolerate some Chinese AI company building something that overthrows the Chinese government? No way. It’s clearly really bad for the American government too if it gets overthrown in a coup by the first American company to build superintelligence. This is a national security threat.
That’s compelling framing — superintelligence as a national security threat, not an asset. Do you see that view gaining traction in Washington?
I think if people in the national security community listen to Dario Amodei describe his vision — he’s given a famous speech where he says we’ll soon have a country of geniuses in a data center — they might start thinking: wait, did Dario just use the word ‘country’? Maybe I should put that country of geniuses in a data center on the same threat list I’m keeping tabs on, because that sounds threatening to the U.S. government. And I think fairly soon, enough people in the U.S. national security community are going to realize that uncontrollable superintelligence is a threat, not a tool. This is totally analogous to the Cold War. There was a race for dominance — economic and military — against the Soviet Union. We Americans won that one without ever engaging in the second race, which was to see who could put the most nuclear craters in the other superpower. People realized that was just suicide. No one wins. The same logic applies here.
What does all of this mean for the pace of AI development more broadly? How close do you think we are to the systems you’re describing?
Six years ago, almost every expert in AI I knew predicted we were decades away from having AI that could master language and knowledge at human level — maybe 2040, maybe 2050. They were all wrong, because we already have that now. We’ve seen AI progress quite rapidly from high school level to college level to PhD level to university professor level in some areas. Last year, AI won a gold medal at the International Mathematical Olympiad, which is about as difficult as human tasks get. Just a few months ago, I wrote a paper together with Yoshua Bengio, Dan Hendrycks, and other top AI researchers giving a rigorous definition of AGI. By that definition, GPT-4 was 27% of the way there, and GPT-5 was 57% of the way there. So we’re not there yet, but going from 27% to 57% that quickly suggests it might not be that long.
When I lectured to my students yesterday at MIT, I told them that even if it takes four years, that means when they graduate, they might not be able to get any jobs anymore. It’s certainly not too soon to start preparing for it.
Anthropic is now blacklisted. I’m curious to see what happens next — will the other AI giants stand with them and say, we won’t do this either? Or does someone like xAI raise their hand and say, Anthropic didn’t want that contract, we’ll take it? [Editor’s note: Hours after the interview, OpenAI announced its own deal with the Pentagon.]
Last night, Sam Altman came out and said he stands with Anthropic and has the same red lines. I admire him for the courage of saying that. Google, as of when we started this interview, had said nothing. If they just stay quiet, I think that’s incredibly embarrassing for them as a company, and a lot of their staff will feel the same. We haven’t heard anything from xAI yet either. So it’ll be interesting to see. Basically, there’s this moment where everybody has to show their true colors.
Is there a version of this where the outcome is actually good?
Yes, and this is why I’m actually optimistic in a strange way. There’s such an obvious alternative here. If we just start treating AI companies like any other companies — drop the corporate amnesty — they would clearly have to do something like a clinical trial before they released something this powerful, and demonstrate to independent experts that they know how to control it. Then we get a golden age with all the good stuff from AI, without the existential angst. That’s not the path we’re on right now. But it could be.
Blue Origin successfully re-uses a New Glenn rocket for the first time ever
Blue Origin has successfully reused one of its New Glenn rockets for the first time ever, marking a major milestone for the heavy-lift launch system as Jeff Bezos’ space company looks to compete with Elon Musk’s SpaceX.
But the overall mission’s success may be in question. Roughly two hours after the launch, Blue Origin revealed that the communications satellite that New Glenn carried to space for AST SpaceMobile wound up in an “off-nominal orbit,” meaning something may have gone wrong with the rocket’s upper stage. In other words, it appears the company missed the mark.
“We have confirmed payload separation. AST SpaceMobile has confirmed the satellite has powered on,” the company wrote on X. “We are currently assessing and will update when we have more detailed information.”
AST later said Blue Origin’s rocket placed its satellite into an orbit that was “lower than planned,” so the satellite will have to be de-orbited.
According to a timeline provided by Blue Origin prior to the launch, the upper stage of New Glenn should have performed a second burn roughly one hour after the rocket lifted off from Cape Canaveral, Florida. It’s unclear if that second burn ever happened, or if there were other problems with it, before the AST satellite was deployed.
The company accomplished the re-use feat Sunday on just the third-ever launch of New Glenn, and a little more than one year after the first flight of the new rocket system, which has been in development for more than a decade.
Making New Glenn reusable is crucial to its economics. SpaceX’s ability to re-fly Falcon 9 rocket boosters is one of the main reasons why it has come to dominate the global orbital launch market.
While Blue Origin has already sent a commercial payload to space with New Glenn — Sunday was the second such mission — the company wants to use the rocket for NASA moon missions, and to help both it and Amazon build space-based satellite networks. Blue Origin is currently readying its first robotic moon lander for an attempted launch later this year.
The booster that Blue Origin re-flew on Sunday was the same one the company used in the second New Glenn mission in November. During that mission, the New Glenn booster helped put two robotic NASA spacecraft into space for a mission to Mars, before returning to a drone ship in the ocean. On Sunday, Blue Origin recovered the rocket booster a second time on a drone ship roughly 10 minutes after takeoff.
Any trouble deploying AST’s satellite could present a risk to Blue Origin’s near-term plans for New Glenn. Blue Origin has a deal with the communications company to send multiple satellites to orbit over the next few years as it works to build out its own space-based cellular broadband network.
This story has been updated with new information from Blue Origin and AST SpaceMobile.
Cracks are starting to form on fusion energy’s funding boom
It happens in every emerging industry: founders and investors push toward a common goal, until the money starts to roll in and that shared vision begins to diverge.
Cracks are emerging in the fusion power world, which I saw firsthand at The Economist’s Fusion Fest in London last week. It didn’t dampen the overall buoyant mood, lifted by fusion startups’ fundraising haul of $1.6 billion in the last 12 months. But people had differing opinions on two key questions: When should fusion startups go public? And are side businesses a distraction?
Going public was at the top of everyone’s minds. In the last four months, TAE Technologies and General Fusion have announced plans to merge with publicly traded companies. Both stand to receive hundreds of millions of dollars to keep their R&D efforts alive, and investors, some of whom have kept the faith for 20 years, finally see an opportunity to cash out.
Not everyone is in agreement. Most of the people I spoke to worried that these companies are going public far too early, before hitting the key milestones many view as vital for judging a fusion company’s progress.
First, a recap: TAE announced its merger with Trump Media & Technology Group in December. Though the deal isn’t yet completed, the fusion side of the business has already received $200 million of a potential $300 million in cash from the deal, giving it some runway to continue planning its power plant. (The remainder will reportedly land in its bank account once it files the S-4 form with the U.S. Securities and Exchange Commission.)
General Fusion said in January that it would go public via a reverse merger with a special purpose acquisition company. The deal could net the company $335 million and value the combined entity at $1 billion.
Both companies could use the cash.
Before the merger announcement, General Fusion was struggling to raise funds, and around this time last year it laid off 25% of its staff as CEO Greg Twinney posted a public letter pleading for investment. It received a brief reprieve in August when investors threw it a $22 million lifeline, but that sort of money doesn’t last long in the fusion world, where equipment, experiments, and employees don’t come cheap.
TAE’s position wasn’t quite as dire, but it still needed the money. Pre-merger, the company had raised nearly $2 billion, which sounds like a lot, but keep in mind the company is nearly 30 years old. What’s more, its pre-merger valuation was $2 billion, according to PitchBook. Investors were breaking even at best.
Neither company has hit scientific breakeven, a key milestone that shows a reactor design has power plant potential. Many observers doubt they’ll hit that mark before other privately held startups do. One executive told me that, in those companies’ shoes, they wouldn’t know how to fill time on quarterly earnings calls if scientific breakeven didn’t come soon.
If TAE or General Fusion doesn’t deliver results, several people feared the public markets would sour on the entire fusion industry.
Now, not all may be lost. TAE has already started marketing other products, including power electronics and radiation therapy for cancer. That could give the company some near-term revenue to placate shareholders. General Fusion, though, hasn’t revealed any such plans.
And therein lies another divide: fusion companies remain split on whether they should pursue revenue now or wait until they have a working power plant.
Some companies are embracing the opportunity to make money along the way. Not a bad strategy! Fusion is a long game, so why not improve your odds? Both Commonwealth Fusion Systems and Tokamak Energy have said they’ll be selling magnets. TAE and Shine Technologies are both in nuclear medicine.
Other startups worry that side hustles could become a distraction. Inertia Enterprises, for example, told me that they’re laser-focused on their power plant. That jibes with what another investor told me months ago: they worried that fusion startups could get distracted by profitable but tangential businesses and lose their lead.
There wasn’t consensus on the right time to go public, either. I heard a few proposed milestones. Some believe startups should first reach scientific breakeven, in which a fusion reaction generates more energy than was used to ignite it; no startup has achieved that yet. The other candidates are facility breakeven, when the reactor makes more energy than the entire site needs to operate, and commercial viability, when a reactor makes enough electrons to sell a meaningful amount to the grid.
We may have an answer to that question sooner than later. Commonwealth Fusion Systems expects it will hit scientific breakeven sometime next year, and some think the company might use that as an opportunity to go public.
TechCrunch Mobility: Uber enters its assetmaxxing era
Welcome back to TechCrunch Mobility, your hub for the future of transportation and now, more than ever, how AI is playing a part. To get this in your inbox, sign up here for free — just click TechCrunch Mobility!
A few weeks ago, I wrote about how Uber seemed to be everywhere, all at once in the emerging autonomous vehicle technology sector. The Financial Times has now put a number on it. The FT calculated that Uber has committed more than $10 billion to buying autonomous vehicles and taking equity stakes in the companies developing the tech, according to public records and discussions with folks behind the scenes. About $2.5 billion of that is in direct investments, with the remaining $7.5 billion to be spent on buying robotaxis over the next few years, the outlet reported.
We’ve reported on Uber’s numerous investments and deals with autonomous vehicle companies across drones, robotaxis, and freight. Some of its investments include WeRide, Lucid and Nuro, Rivian, and Wayve.
This rather large number (and particularly that $7.5 billion) got me thinking about another transformative era in Uber’s history and how it has visited these asset-heavy shores before. Uber might have started with a plan to be asset light, but for a brief period it did quite the opposite.
Uber went on a moonshot spree between 2015 and 2018. It launched electric air taxi developer Uber Elevate and the in-house autonomous vehicle unit Uber ATG, which would be boosted by its acquisition of Otto in 2016. It also snapped up micromobility startup Jump in 2018.
And then in 2020, Uber pulled the asset-heavy rip cord, ostensibly leaving all of those moonshots behind. Uber sold Uber ATG to Aurora, Jump to Lime, and Elevate to Joby Aviation. But it didn’t completely divest; it kept equity stakes in all of them.
Uber is now entering into a new and different asset-heavy era. It’s not plunking down millions, or even billions, to develop the technology in-house, although I’m sure folks there would be quick to pipe up that there is always R&D happening over at Uber. Instead, it appears to be focused on owning (or perhaps leasing) the physical assets.
That could mean interesting line items on Uber’s balance sheet in the future.
Owning fleets of robotaxis built by other companies might not have been the original vision of Uber, or its former CEO Travis Kalanick, who has said the company made a mistake when it abandoned its AV development program. But this new approach could still get it to the same end point.
A little bird

Earlier this month, I interviewed Eclipse partner Jiten Behl about the venture firm’s new $1.3 billion fund and where that money might be headed. The firm, as I wrote, intends to incubate more startups (e.g., it was behind the Rivian spinout Also). Behl wouldn’t give me details, only stating, “We’re definitely working on a couple of really cool ideas.” He also said Eclipse is particularly interested in startups that work across enterprises.
Thanks to one little bird and some document diving by senior reporter Sean O’Kane, it looks like a seed round announcement is imminent for a San Francisco-based startup working on an autonomous hauler that I’ve been told doesn’t have a driver cab. This sounds similar to what Einride has built, but since we haven’t seen it, we’ll have to wait.
The company’s roster isn’t big, but it is chock-full of Silicon Valley tech elite, including a founder who was at Uber ATG, Pronto, and Waabi. Stay tuned for more.
Got a tip for us? Email Kirsten Korosec at kirsten.korosec@techcrunch.com or my Signal at kkorosec.07, or email Sean O’Kane at sean.okane@techcrunch.com.
Deals!

Slate is back with more capital as it prepares to put its first affordable pickup trucks into production by the end of 2026.
The electric vehicle startup, which got its start with backing from Jeff Bezos, raised another $650 million in a Series C funding round led by TWG Global. Keep your eye on TWG. This is the firm run by Guggenheim Partners chief executive (and Los Angeles Dodgers owner) Mark Walter and investor Thomas Tull.
Slate has raised about $1.4 billion to date, and its previous investors include General Catalyst, Jeff Bezos’ family office, VC firm Slauson & Co., and former Amazon executive Diego Piacentini, as TechCrunch first reported last year.
Other deals that got my attention …
Glydways, a San Francisco-based startup developing personal autonomous pods designed to operate on dedicated 2-meter-wide lanes in cities, raised $170 million in a Series C funding round co-led by Suzuki Motor Corporation, ACS Group, and Khosla Ventures. Existing investors Mitsui Chemicals and Gates Frontier and new investor Obayashi Corporation also participated. But wait, there’s more.
GM and Ford are reportedly talking to the Pentagon about whether the auto industry can help the military revamp its procurement program and find cheaper, faster ways to buy vehicles, munitions, or other hardware, the New York Times reported, citing anonymous sources.
Loop, a San Francisco-based startup, raised $95 million in a Series C funding round led by Valor Equity Partners and the Valor Atreides AI Fund, and includes investments from 8VC, Founders Fund, Index Ventures, and J.P. Morgan’s late-stage fund, Growth Equity Partners.
Monarch Tractor, the startup developing electric, autonomous tractors, has moved on to (ahem) a different pasture. Caterpillar has acquired the startup’s assets after Monarch struggled to pivot to a software services business.
Uber is increasing its stake in Delivery Hero by 4.5%, the Financial Times reported. Uber agreed to buy about 270 million euros in shares from Prosus, the Dutch investment group and Delivery Hero’s largest shareholder.
Notable reads and other tidbits

Doug Field, the high-profile executive who shaped Ford’s electric vehicle and technology strategies over the past five years, is leaving. Notably, Ford is shaking up the organization as well, creating a “product creation and industrialization” team to be led by COO Kumar Galhotra. Any guesses where Field is headed next? Perhaps he’ll return to Silicon Valley.
Lightship, the all-electric RV startup, is expanding its Colorado-based factory by another 44,000 square feet, which will allow it to quadruple its manufacturing capacity.
Rivian and battery recycling and materials startup Redwood Materials partnered years ago. We’re now seeing the fruits of that relationship. Redwood is installing battery energy storage at Rivian’s factory in Illinois. The catch? Redwood is using 100 second-life Rivian battery packs, which will provide 10 megawatt-hours (MWh) of dispatchable energy to reduce cost and grid load during peak demand periods.
Tesla created a new self-driving app that makes it easier for owners to subscribe to its Full Self-Driving software and see statistics on how — and how often — they use it. This may not be huge news, but it did catch my eye because of the gamified qualities of these new stats.
Waymo, as per usual, has a few news items this week. The Alphabet-owned company started testing its autonomous vehicles on public roads in London. It also removed its waitlist in Miami and Orlando to scale its robotaxi services in the two cities.
One more thing …
This newsletter isn’t my only project that is leaning more heavily into robotics. My podcast, the Autonocast, is too, as the worlds of autonomous vehicles, AI, and robotics mash together. Check out this interview with Foxglove founder Adrian MacNeil, who previously worked at Cruise.
