Will the Pentagon’s Anthropic controversy scare startups away from defense work?
In just over a week, negotiations over the Pentagon’s use of Anthropic’s Claude technology fell through, the Trump administration designated Anthropic a supply-chain risk, and the AI company said it would fight that designation in court.
OpenAI, meanwhile, quickly announced a deal of its own, prompting a backlash that saw users uninstall ChatGPT and push Anthropic’s Claude to the top of the App Store charts. And at least one OpenAI executive has quit over concerns that the announcement was rushed without appropriate guardrails in place.
On the latest episode of TechCrunch’s Equity podcast, Kirsten Korosec, Sean O’Kane, and I discussed what this means for other startups seeking to work with the federal government, especially the Pentagon, as Kirsten wondered, “Are we going to see a changing of the tune a little bit?”
Sean pointed out that this is an unusual situation in a number of ways, in part because OpenAI and Anthropic make products that “no one can shut up about.” And crucially, this is a dispute over “how their technologies are being used or not being used to kill people,” so it’s naturally going to draw more scrutiny.
Still, Kirsten argued, this is a situation that should “give any startup pause.”
Read a preview of our conversation, edited for length and clarity, below.
Kirsten: I’m wondering if other startups are starting to look at what’s happened with the federal government, specifically the Pentagon and Anthropic, that debate and wrestling match, and [take] pause about whether they want to be going after federal dollars. Are we going to see a changing of the tune a little bit?
Sean: I wonder about that, too. I think no, to some extent, in the near term, if only because when you really try to think about all the different companies, whether they’re startups or even more established Fortune 500s that do work with the government and in particular with the Department of Defense or the Pentagon, [for] a lot of them, that work flies under the radar.
General Motors makes defense vehicles for the Army and has done [that] for a very long time and has worked on all-electric versions of those vehicles and autonomous versions. There’s stuff like that that goes on all the time and it just never really hits the zeitgeist. I think the problem that OpenAI and Anthropic ran into within the last week is like, these are companies that make products that a ton of people use — and also more importantly, [that] no one can shut up about.
So there’s just such a spotlight on them that it naturally highlights their involvement to a level that I think most of the other companies that are contracting with the federal government — and, in particular, any of the war-fighting elements of the federal government — don’t necessarily have to deal with.
The only caveat I’ll add to that is a lot of the heat around this discussion between Anthropic and OpenAI and the Pentagon is very specifically about how their technologies are being used or not being used to kill people, or in parts of the missions that are killing people. It’s not just the attention that’s on them and the familiarity we have with their brands, there is an extra element there that I feel is more abstract when you’re thinking about General Motors as a defense contractor or whatever.
I don’t think we’re going to see, like, Applied Intuition or any of these other companies that have been framing themselves as dual use back off much, just because I don’t see the spotlight on it and there’s just not the sort of shared understanding of what that impact might be.
Anthony: This story is so unique and specific to these companies and personalities in a lot of ways. I mean, there have been a lot of really interesting thought pieces about: What is the role of technology in government? [Of] AI in government? And I think those are all good and worthwhile questions to ask and explore.
I think also, though, that this is a very curious lens through which to examine some of those things, because Anthropic and OpenAI are not actually that different in a lot of ways or in the stances they’re taking. It’s not like one company is saying, “Hey, I don’t want to work with the government” and one is saying, “Yes, I do.” Or one is saying, “You can do whatever you want,” and [the other is] saying, “No, I want to have restrictions.” Both of them, at least publicly, are saying, “We want restrictions on how our AI gets used.” It just seems like Anthropic is digging in their heels a lot more about: You cannot change the terms in this way.
And then on top of that, there also just seems to be a personality layer, where the CEO of Anthropic and Emil Michael — who a lot of TechCrunch readers might remember from his Uber days, and is now [chief technology officer for the Department of Defense] — apparently just really don’t like each other. Reportedly.
Sean: Yes, there’s a very big “girls are fighting” element here that we should not overlook.
Kirsten: Yeah, a little bit. There is, but the implications are a little bit stronger than that. Again, to pull back a little bit, what we’re talking about here is the Pentagon and Anthropic coming into a dispute in which Anthropic appears to have lost, although I should say they are still very much being used by the military. They are considered a crucial technology, but OpenAI has kind of stepped in, and this is evolving and will likely change by the time this episode comes out.
The blowback has been interesting for OpenAI, where we’ve seen a lot of ChatGPT uninstalls, which I think surged 295% after OpenAI locked in the deal with the Department of Defense.
To me, all of this is noise next to the really critical and dangerous thing, which is that the Pentagon was seeking to change existing terms on an existing contract. And that is really important and should give any startup pause, because the political machinery at work right now, particularly with the DoD, appears to be different. This isn’t normal. Contracts take forever to get baked in at the government level, and the fact that they’re seeking to change those terms is a problem.
Alexa+ gets a new ‘adults only’ personality option that curses but won’t do NSFW content
Amazon’s AI assistant Alexa+ is getting another new personality. On Thursday, the company announced it’s expanding its lineup of personality styles with a “Sassy” option, which is for adults only. Amazon notes that before opting to use the Sassy personality, users will be required to go through additional security checks in the Alexa app.
The personality style will also not be available when Amazon Kids is enabled, Amazon says.
The new option joins others like Brief, Chill, and Sweet, launched last month.

When you toggle on the option for Sassy in the Alexa mobile app, you’re warned that the Sassy style uses explicit language, which is why it requires a security check. On iOS, this involves a Face ID scan.
The AI assistant explained its style to us like this: “The Sassy style is built on one premise: help first, judge always. Every answer comes wrapped in wit and a well-placed roast — it’ll answer your question; it’ll just make you feel something about it first. Expect reality checks delivered with charm, compliments that somehow sting, and warmth you didn’t see coming. Equal-opportunity irreverence, zero apologies. Honest, sharp, and funny — and somehow that’s more helpful than helpful.”
The Alexa app also warns that the style could contain “mature subject matter.”
However, further investigation revealed that this is not Amazon’s version of something like Grok’s adult AI companions. The AI assistant said the new option won’t get into areas like explicit sexual content, hate speech, illegal activities, personal attacks, or anything that could cause harm to oneself or others.
The move is the latest example of how Amazon is trying to make Alexa+ more customizable as it revamps the assistant for the generative AI era. By offering the assistant different personalities — including one positioned as more adult — Amazon is borrowing from a broader trend in AI, where companies have been experimenting with tone, style, and personas to make their assistants more engaging and more personalized to individual users.
Tesla becomes a utility in the UK, setting up showdown with Octopus Energy
Tesla is now an officially licensed utility in the United Kingdom, according to a new report from The Wall Street Journal. The automotive and energy company recently received a license from the Office of Gas and Electricity Markets, allowing it to sell electricity directly to households and commercial and industrial users.
The company has long dabbled in electricity markets. Its first pure energy products, the Powerwall and Powerpack, were introduced in 2015, but it wasn’t until a year later, when Tesla merged with SolarCity, that it started scaling the division rapidly. In 2022, the company launched Tesla Electric in Texas, which allowed it to sell electricity directly to customers. Powerwall owners can sell electricity from their batteries by participating in the company’s virtual power plant.
The new division, known as Tesla Energy Ventures, will compete with existing utilities in the U.K., including EDF, E.ON, and Octopus Energy. The competition with Octopus should prove particularly interesting. Since its founding in 2015, Octopus has become the country’s largest utility by focusing on slick software, renewable energy, and creative marketing. Sound familiar?
A writer is suing Grammarly for turning her and other authors into ‘AI editors’ without consent
Grammarly released a controversial feature last week that uses AI to simulate editorial feedback, making it seem like you’re getting a critique from novelist Stephen King, the late scientist Carl Sagan, or tech journalist Kara Swisher. But Grammarly did not get permission to use the names of the hundreds of experts it included in the feature, called “Expert Review.”
One of the affected writers, journalist Julia Angwin, has filed a class action lawsuit against Superhuman, Grammarly’s parent company, arguing that it violated her privacy and publicity rights, along with those of the other writers it impersonated. The class action structure allows other writers to join Angwin’s case.
“I have worked for decades honing my skills as a writer and editor, and I am distressed to discover that a tech company is selling an imposter version of my hard-earned expertise,” Angwin said in a statement.
The situation is more than a little ironic — Angwin has spent her career leading investigations into tech companies’ impacts on privacy. Other critics of this kind of technology, like renowned AI ethicist Timnit Gebru, were also included in Grammarly’s “Expert Review.”
The “Expert Review” feature, available only to subscribers paying $144 a year, predictably fails to deliver on the promise of thoughtful feedback.
Casey Newton, the founder and editor of the tech newsletter Platformer and another person impersonated by Grammarly, fed one of his articles into the tool and got feedback from Grammarly’s approximation of tech journalist Kara Swisher. The imitation produced “feedback” so generic that it raises the question of why the company would go through the rigmarole of using these writers’ likenesses in the first place.
Here is what Grammarly’s approximation of Kara Swisher told him: “Could you briefly compare how daily AI users versus AI skeptics articulate risk, creating a through-line readers can follow?”
Newton relayed the message from the AI approximation of Kara Swisher to the actual, real human being, Kara Swisher.
“You rapacious information and identity thieves better get ready for me to go full McConaughey on you,” Swisher texted Newton (referring to Grammarly). “Also, you suck.”
Grammarly has since disabled the “Expert Review” feature, according to a LinkedIn post by Superhuman CEO Shishir Mehrotra. While Mehrotra offered an apology, he continued to defend the idea of the feature.
“Imagine your professor sharpening your essay, your sales leader reshaping a customer pitch, a thoughtful critic challenging your arguments, or a leading expert elevating your proposal,” he wrote. “For experts, this is a chance to build that same ubiquitous bond with users, much like Grammarly has.”
