Anthropic vs. the Pentagon: What’s actually at stake?
The past two weeks have been defined by a clash between Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth as the two battle over the military’s use of AI.
Anthropic refuses to allow its AI models to be used for mass surveillance of Americans or for fully autonomous weapons that conduct strikes without human input. Secretary Hegseth, meanwhile, has argued the Department of Defense shouldn’t be limited by the rules of a vendor, and that any “lawful use” of the technology should be permitted.
On Thursday, Amodei publicly signaled that Anthropic isn’t backing down — despite threats that his company could be designated as a supply chain risk as a result. But with the news cycle moving fast, it’s worth revisiting exactly what’s at stake in the fight.
At its core, this fight is about who controls powerful AI systems — the companies that build them, or the government that wants to deploy them.
What is Anthropic worried about?
As we said above, Anthropic doesn’t want its AI models to be used for mass surveillance of Americans or for autonomous weapons with no humans in the loop for targeting and firing decisions. Traditional defense contractors typically have little say in how their products will be used, but Anthropic has argued from its inception that AI technology poses unique risks and therefore requires unique safeguards. From the company’s perspective, the question is how to maintain those safeguards when the technology is being used by the military.
The U.S. military already relies on highly automated systems, some of which are lethal. The decision to use lethal force has historically been left to humans, but there are few legal restrictions on military use of autonomous weapons. The DoD doesn’t categorically ban fully autonomous weapons systems. According to a 2023 DoD directive, AI systems can select and engage targets without human intervention, as long as they meet certain standards and pass review by senior defense officials.
That’s precisely what makes Anthropic nervous. Military technology is secretive by nature, so if the U.S. military were taking steps to automate lethal decision-making, we might not know about it until it was operational. And if it used Anthropic’s models, it could count as “lawful use.”
Anthropic’s position isn’t that such uses should be permanently off the table. It’s that its models aren’t capable enough to support them safely yet. Imagine an autonomous system misidentifying a target, escalating a conflict without human authorization, or making a split-second lethal decision that no one can reverse. Put a less-capable AI in charge of weapons, and you get a very fast, very confident machine that’s bad at making high-stakes calls.
AI also has the power to supercharge lawful surveillance of American citizens to a concerning degree. Under current U.S. law, such surveillance is already possible through the collection of texts, emails, and other communications. AI changes the equation by enabling automated large-scale pattern detection, entity resolution across datasets, predictive risk scoring, and continuous behavioral analysis.
What does the Pentagon want?
The Pentagon’s argument is that it should be able to deploy Anthropic’s technology for any lawful use it deems necessary, rather than be limited by the company’s internal policies on things like autonomous weapons or surveillance. Secretary Hegseth has insisted the department would engage only in “lawful use” of the technology and shouldn’t be bound by a vendor’s rules.
Sean Parnell, the Pentagon’s chief spokesperson, said in a Thursday X post that the department has no interest in conducting mass domestic surveillance or deploying autonomous weapons.
“Here’s what we’re asking: Allow the Pentagon to use Anthropic’s model for all lawful purposes,” Parnell said. “This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk. We will not let ANY company dictate the terms regarding how we make operational decisions.”
He added that Anthropic has until 5:01 p.m. ET on Friday to decide. “Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk for DOW,” he said.
Though the DoD maintains that it simply shouldn’t be limited by a corporation’s usage policies, Secretary Hegseth’s concerns about Anthropic have at times seemed connected to cultural grievance. In a January speech at SpaceX and xAI offices, Hegseth railed against “woke AI” in remarks that some saw as a preview of his feud with Anthropic.
“Department of War AI will not be woke,” Hegseth said. “We’re building war-ready weapons and systems, not chatbots for an Ivy League faculty lounge.”
So what now?
The Pentagon has threatened to either declare Anthropic a “supply chain risk” — which effectively blacklists Anthropic from doing business with the government — or invoke the Defense Production Act (DPA) to force the company to tailor its model to the military’s needs. Hegseth has given Anthropic until 5:01 p.m. ET on Friday to respond. But with the deadline approaching, it’s anyone’s guess whether the Pentagon will make good on its threat.
This is not a fight either party can easily walk away from. Sachin Seth, a VC at Trousdale Ventures who focuses on defense tech, says a supply chain risk label for Anthropic could mean “lights out” for the company.
At the same time, he said, dropping Anthropic could create a national security issue for the DoD itself.
“[The Department] would have to wait six to 12 months for either OpenAI or xAI to catch up,” Seth told TechCrunch. “That leaves a window of up to a year where they might be working from not the best model, but the second or third best.”
xAI is gearing up to become classified-ready and replace Anthropic, and given owner Elon Musk’s rhetoric on the matter, it’s fair to say the company would have no problem giving the DoD total control over its technology. Recent reports indicate that OpenAI may stick to the same red lines as Anthropic.
Tesla brings its robotaxi service to Dallas and Houston
Tesla is expanding its robotaxi service to Dallas and Houston, according to a social media post from the company.
The post says simply that “Robotaxi is now rolling out in Dallas & Houston 🤠” and includes a 14-second video showing Tesla vehicles driving without human monitors or drivers in the front seat.
The company now offers robotaxi service in three cities, all of them in Texas, after launching in Austin last year and starting to offer rides without safety drivers in January 2026. In a February filing, Tesla said that its Austin robotaxis have been involved in 14 crashes since launch.
It also offers a more limited ride service with human drivers in the San Francisco Bay Area.
Tesla may not be running many vehicles in either of these new markets yet: crowdsourced data on the Robotaxi Tracker website registers only a single vehicle in each city, compared with 46 active vehicles logged in Austin.
Netflix plans to add a vertical video feed, use AI for recommendations
Netflix is going to launch a TikTok-like vertical video feed within its apps this month, and plans to use AI broadly for content creation and recommendations, the company said on Thursday.
Netflix has been testing a vertical video feed since last year. The short-video feature could help users discover video podcasts alongside the company’s current slate of shows and movies. The company is also leaning further into AI-powered recommendations after launching a ChatGPT-powered search feature last year.
“We have been in personalization and recommendation for two decades, but we still see tremendous room to make it better by leveraging newer technologies,” Netflix co-CEO Gregory Peters said during the company’s first-quarter conference call. “Recommendation systems based on new model architectures not only improve current personalization but also let us iterate and improve more quickly — adding support for different content types much more efficiently.”
Co-CEO Ted Sarandos said he sees AI tools improving the entire content creation process. “In general, we expect GenAI to make content better; better tools, better processes […] It takes a great artist to make great art, and AI won’t change that. But AI will give those artists better tools to bring those visions to life,” he said.
Last month, Netflix bought Ben Affleck’s AI creation company InterPositive, which, Sarandos said, has garnered interest from creators.
“With our acquisition of InterPositive, we think it accelerates our GenAI capability because it is proprietary technology created specifically for filmmakers and filmmaking, different from other GenAI video applications. While our ownership of InterPositive is very new, we have generated interest with creators who have spent time with the tools, and we are seeing momentum build around adoption,” he noted.
Netflix also said it wants to use AI to improve its ad suite and to allow new formats and customization that deliver better returns. The company expects to generate ad revenue of $3 billion this year.
Netflix reported revenue of $12.25 billion in Q1 2026, up 16.2% year over year, and said profit jumped 83% to $5.28 billion. Alongside the first-quarter results, Netflix said its co-founder and chair, Reed Hastings, is leaving the company’s board this summer.
Notably, the company hiked subscription prices in the U.S. late last month, which could have a positive impact next quarter. The company said it ended 2025 with 325 million paying subscribers.
Bluesky confirms DDoS attack is cause of continued app outages
Bluesky’s website and app are still struggling on Friday after experiencing service interruptions that chief operating officer Rose Wang attributed to an ongoing cyberattack.
On Thursday evening, the social media company confirmed that a “sophisticated Distributed Denial-of-Service (DDoS) attack” was to blame for the issues, which had originally started on April 15 at around 8:40 p.m. ET.
Distributed denial-of-service attacks typically pummel an app or website with huge amounts of junk web traffic, aiming to overload its servers and knock them offline. While these attacks do not involve intrusions into a company’s systems, they can still be disruptive to both the company and its users.
In a post on the Bluesky account, the company shared the cause of the problem and noted that the attack was “impacting our operations, with users experiencing intermittent interruptions in service for their feeds, notifications, threads, and search.”
Bluesky said that it has not seen any evidence of unauthorized access to private data, however.
When originally reached for comment on Thursday, Bluesky only pointed us to the status.bsky.app page and account (@status.bsky.app) for updates. The company did not provide an estimated time for a fix.
The network’s status page is currently not working, however.
Bluesky said it will provide another update on the status of the attack and its mitigation by 1 p.m. ET on Friday.

Because the outages are intermittent, the Bluesky site and app will sometimes load, albeit slowly, and at other times will display error messages.
For instance, switching to a particular feed within the app could display a message that says, “This feed is currently receiving high traffic and is temporarily unavailable. Please try again later. Message from server: Rate Limit Exceeded.”

Popular feeds like Discover or the official Bluesky Team’s feed often see this problem, even as users’ own personal feeds are functional.
Other times, like when trying to visit a user’s profile, the site will display an error message, forcing you to refresh and try again.

Bluesky protocol engineer Bryan Newbold remarked around 3:46 a.m. ET on Wednesday, “oof, our services are getting hit pretty hard tonight.”
Notably, the service disruptions are impacting Bluesky itself, but communities like Blacksky, which run their own infrastructure on the underlying protocol that powers the decentralized social network, are still functioning.
Blacksky’s team told TechCrunch that the Bluesky outage has led to a “significant spike” in migration requests from Bluesky users over the past 12 hours, as users, developers, and other ATmosphere founders, like Sebastian at Eurosky, have been promoting its services.

It was clear that Bluesky’s team was in a hectic state this week while facing these issues, as one message on its status page had a typo: “investigating an incident with service in one of our reginos [sic].”

