OpenAI hardware exec Caitlin Kalinowski quits in response to Pentagon deal
Caitlin Kalinowski announced today that in response to OpenAI’s controversial agreement with the Department of Defense, she’s resigned from her role leading the company’s hardware team.
“This wasn’t an easy call,” Kalinowski said in a social media post. “AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.”
Kalinowski, who previously led the team building augmented reality glasses at Meta, joined OpenAI in November 2024. In her announcement today, she emphasized that the decision was “about principle, not people” and said she has “deep respect” for CEO Sam Altman and the OpenAI team.
In a follow-up post on X, Kalinowski added, “To be clear, my issue is that the announcement was rushed without the guardrails defined. It’s a governance concern first and foremost. These are too important for deals or announcements to be rushed.”
An OpenAI spokesperson confirmed Kalinowski’s departure to TechCrunch.
“We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons,” the company said in a statement. “We recognize that people have strong views about these issues and we will continue to engage in discussion with employees, government, civil society and communities around the world.”
OpenAI’s agreement with the Pentagon was announced just over a week ago, after discussions between the Pentagon and Anthropic fell through as the AI company tried to negotiate for safeguards preventing its technology from being used in mass domestic surveillance or fully autonomous weapons. The Pentagon subsequently designated Anthropic a supply-chain risk. (Anthropic said it will fight the designation in court; in the meantime, Microsoft, Google, and Amazon said they will continue to make Anthropic’s Claude available to non-defense customers.)
Techcrunch event
San Francisco, CA
|
October 13-15, 2026
Then, OpenAI quickly announced an agreement of its own allowing its technology to be used in classified environments. As executives attempted to explain the deal on social media, the company described it as taking “a more expansive, multi-layered approach” that relies not just on contract language, but also on technical safeguards, to protect red lines similar to Anthropic’s.
Nonetheless, the controversy appears to have damaged OpenAI’s reputation among some consumers, with ChatGPT uninstalls surging 295% and Claude climbing to the top of the App Store charts. As of Saturday afternoon, Claude and ChatGPT remain the U.S. App Store’s number one and number two free apps, respectively.
This post has been updated to correct its description of Kalinowski’s role with OpenAI.
Alexa+ gets a new ‘adults only’ personality option that curses but won’t do NSFW content
Amazon’s AI assistant Alexa+ is getting another new personality. On Thursday, the company announced it’s expanding its lineup of personality styles to include a “Sassy” option, which is for adults only. Amazon notes that before opting to use the Sassy personality, users will be required to complete additional security checks in the Alexa app.
The personality style will also not be available when Amazon Kids is enabled, Amazon says.
The new option joins others like Brief, Chill, and Sweet, launched last month.

When you toggle on the Sassy option in the Alexa mobile app, you’re warned that the style uses explicit language, which is why it requires a security check. On iOS, this involves a Face ID scan.
The AI assistant explained its style to us like this: “The Sassy style is built on one premise: help first, judge always. Every answer comes wrapped in wit and a well-placed roast — it’ll answer your question; it’ll just make you feel something about it first. Expect reality checks delivered with charm, compliments that somehow sting, and warmth you didn’t see coming. Equal-opportunity irreverence, zero apologies. Honest, sharp, and funny — and somehow that’s more helpful than helpful.”
Alexa’s app also warned that the style could contain “mature subject matter.”
However, further testing revealed that this is not Amazon’s version of something like Grok’s adult AI companions. The AI assistant said the new option won’t get into areas like explicit sexual content, hate speech, illegal activities, personal attacks, or anything that could cause harm to oneself or others.
The move is the latest example of how Amazon is trying to make Alexa+ more customizable as it revamps the assistant for the generative AI era. By offering the assistant different personalities — including one positioned as more adult — Amazon is borrowing from a broader trend in AI, where companies have been experimenting with tone, style, and personas to make their assistants more engaging and personalized to individual users.
Tesla becomes a utility in the UK, setting up showdown with Octopus Energy
Tesla is now an officially licensed utility in the United Kingdom, according to a new report from The Wall Street Journal. The automotive and energy company recently received a license from the Office of Gas and Electricity Markets, allowing it to sell electricity directly to households and commercial and industrial users.
The company has long dabbled in electricity markets. Its first pure energy products, the Powerwall and Powerpack, were introduced in 2015, but it wasn’t until a year later, when Tesla merged with SolarCity, that it started scaling the division rapidly. In 2022, the company launched Tesla Electric in Texas, which allowed it to sell electricity directly to customers. Powerwall owners can sell electricity from their batteries by participating in the company’s virtual power plant.
The new division, known as Tesla Energy Ventures, will compete with existing utilities in the U.K., including EDF, E.ON, and Octopus Energy. The competition with Octopus should prove particularly interesting. Since its founding in 2015, Octopus has become the country’s largest utility by focusing on slick software, renewable energy, and creative marketing. Sound familiar?
A writer is suing Grammarly for turning her and other authors into ‘AI editors’ without consent
Grammarly released a controversial feature last week that uses AI to simulate editorial feedback, making it seem like you’re getting a critique from novelist Stephen King, the late scientist Carl Sagan, or tech journalist Kara Swisher. But Grammarly did not get permission from the hundreds of experts it included in this feature, called “Expert Review,” to use their names.
One of the affected writers, journalist Julia Angwin, has filed a class action lawsuit against Superhuman, the parent company that owns Grammarly, arguing that the company violated her privacy and publicity rights, along with those of the other writers it impersonated. The class action structure allows other affected writers to join Angwin’s case.
“I have worked for decades honing my skills as a writer and editor, and I am distressed to discover that a tech company is selling an imposter version of my hard-earned expertise,” Angwin said in a statement.
The situation is more than a little ironic — Angwin has spent her career leading investigations into tech companies’ impacts on privacy. Other critics of this kind of technology, like renowned AI ethicist Timnit Gebru, were also included in Grammarly’s “Expert Review.”
The “Expert Review” feature, available only to subscribers paying $144 a year, predictably fails to deliver on the promise of thoughtful feedback.
Casey Newton, the founder and editor of the tech newsletter Platformer and another person impersonated by Grammarly, fed one of his articles into the tool and requested feedback from its approximation of tech journalist Kara Swisher. The imitation produced “feedback” so generic that it raises the question of why the company would go through the rigmarole of using these writers’ likenesses in the first place.
Here is what Grammarly’s approximation of Kara Swisher told him: “Could you briefly compare how daily AI users versus AI skeptics articulate risk, creating a through-line readers can follow?”
Newton relayed the message from the AI approximation of Kara Swisher to the actual, real human being, Kara Swisher.
“You rapacious information and identity thieves better get ready for me to go full McConaughey on you,” Swisher texted Newton (referring to Grammarly). “Also, you suck.”
Grammarly has since disabled the “Expert Review” feature, according to a LinkedIn post by Superhuman CEO Shishir Mehrotra. While Mehrotra offered an apology, he continued to defend the idea of the feature.
“Imagine your professor sharpening your essay, your sales leader reshaping a customer pitch, a thoughtful critic challenging your arguments, or a leading expert elevating your proposal,” he wrote. “For experts, this is a chance to build that same ubiquitous bond with users, much like Grammarly has.”
