Will the Pentagon’s Anthropic controversy scare startups away from defense work?
In just over a week, negotiations over the Pentagon’s use of Anthropic’s Claude technology fell through, the Trump administration designated Anthropic a supply-chain risk, and the AI company said it would fight that designation in court.
OpenAI, meanwhile, quickly announced a deal of its own, prompting backlash that saw users uninstalling ChatGPT and pushing Anthropic’s Claude to the top of the App Store charts. And at least one OpenAI executive has quit over concerns that the announcement was rushed without appropriate guardrails in place.
On the latest episode of TechCrunch’s Equity podcast, Kirsten Korosec, Sean O’Kane, and I discussed what this means for other startups seeking to work with the federal government, especially the Pentagon, as Kirsten wondered, “Are we going to see a changing of the tune a little bit?”
Sean pointed out that this is an unusual situation in a number of ways, in part because OpenAI and Anthropic make products that “no one can shut up about.” And crucially, this is a dispute over “how their technologies are being used or not being used to kill people,” so it’s naturally going to draw more scrutiny.
Still, Kirsten argued, this is a situation that should “give any startup pause.”
Read a preview of our conversation, edited for length and clarity, below.
Kirsten: I’m wondering if other startups are starting to look at what’s happened with the federal government, specifically the Pentagon and Anthropic, that debate and wrestling match, and [take] pause about whether they want to be going after federal dollars. Are we going to see a changing of the tune a little bit?
Sean: I wonder about that, too. I think no, to some extent, in the near term, if only because when you really try to think about all the different companies, whether they’re startups or even more established Fortune 500s that do work with the government and in particular with the Department of Defense or the Pentagon, [for] a lot of them, that work flies under the radar.
General Motors makes defense vehicles for the Army and has done [that] for a very long time and has worked on all-electric versions of those vehicles and autonomous versions. There’s stuff like that that goes on all the time and it just never really hits the zeitgeist. I think the problem that OpenAI and Anthropic ran into within the last week is like, these are companies that make products that a ton of people use — and also more importantly, [that] no one can shut up about.
So there’s just such a spotlight on them that naturally highlights their involvement to a level that I think most of the other companies that are contracting with the federal government — and, in particular, any of the war-fighting elements of the federal government — don’t necessarily have to deal with.
The only caveat I’ll add to that is a lot of the heat around this discussion between Anthropic and OpenAI and the Pentagon is very specifically about how their technologies are being used or not being used to kill people, or in parts of the missions that are killing people. It’s not just the attention that’s on them and the familiarity we have with their brands; there is an extra element there that I feel is more abstract when you’re thinking about General Motors as a defense contractor or whatever.
I don’t think we’re going to see, like, Applied Intuition or any of these other companies that have been framing themselves as dual use back off much, just because I don’t see the spotlight on it and there’s just not the sort of shared understanding of what that impact might be.
Anthony: This story is so unique and specific to these companies and personalities in a lot of ways. I mean, there have been a lot of really interesting thought pieces about: What is the role of technology in government? [Of] AI in government? And I think those are all good and worthwhile questions to ask and explore.
I think also, though, that this is a very curious lens through which to examine some of those things because Anthropic and OpenAI are not actually that different in a lot of ways or the stances they’re taking. It’s not like one company is saying, “Hey, I don’t want to work with the government” and one is saying, “Yes, I do.” Or one is saying, “You can do whatever you want,” and [the other is] saying, “No, I want to have restrictions.” Both of them, at least publicly, are saying, “We want restrictions on how our AI gets used.” It just seems like Anthropic is digging in its heels a lot more about: You cannot change the terms in this way.
And then on top of that, there also just seems to be a personality layer between the CEO of Anthropic and Emil Michael — who a lot of TechCrunch readers might remember from his Uber days, and who is now [chief technology officer for the Department of Defense]. Apparently, they just really don’t like each other. Reportedly.
Sean: Yes, there’s a very big “girls are fighting” element here that we should not overlook.
Kirsten: Yeah, a little bit. There is, but the implications are a little bit stronger than that. Again, to pull back a little bit, what we’re talking about here is the Pentagon and Anthropic getting into a dispute that Anthropic appears to have lost, although I should say they are still very much being used by the military. They are considered a crucial technology, but OpenAI has kind of stepped in, and this is evolving and will likely change by the time this episode comes out.
The blowback has been interesting for OpenAI, where we’ve seen a lot of ChatGPT uninstalls; I think they surged 295% after OpenAI locked in the deal with the Department of Defense.
To me, all of this is noise compared to the really critical and dangerous thing, which is that the Pentagon was seeking to change existing terms on an existing contract. And that is really important and should give any startup pause because the political machinery at work right now, particularly with the DoD, appears to be different. This isn’t normal. Contracts take forever to get baked in at the government level, and the fact that they’re seeking to change those terms is a problem.
Tinder owner Match Group is slowing hiring to pay for its increased use of AI tools
You might think the big story out of Match Group’s first-quarter earnings is Tinder’s turnaround. The dating app’s revenue is slightly up again after quarter after quarter of declines.
But we’d like to point to a comment the chief financial officer made about how the company is slowing its hiring right now because it needs more money to pay for AI tools for its employees.
Ah, yes, the good ol’ “let’s blame AI” strategy!
While speaking to analysts on the first-quarter earnings call, Match Group CFO Steven Bailey talked about how the dating app giant was investing in AI technology for internal use at the company — as well as how Match was paying for it.
“We’re making a big push around AI enablement. We’re giving every employee in the company access to all the cutting-edge tools. We’re giving them the training they need to succeed. We’re setting expectations. We really want to become an AI-native company,” Bailey said.
“We think it’s a huge opportunity. But these tools cost a lot of money, as I’m sure you know, and so the way we’re helping to pay for that is by slowing our hiring plans for the rest of the year,” he added.
The company assured investors that the impact would be cost-neutral, as the slowed hiring and lower headcount would make up for the increased software expenses. Plus, Match Group is betting that the increased productivity from employees’ use of AI will ultimately increase revenue growth, the number-cruncher explained.
While on the surface this looks like another example of AI taking people’s jobs — in this case, forcing a company to lower its number of open positions — there’s likely more nuance to this story.
Let’s keep in mind that Match Group’s flagship app, Tinder, has been struggling in recent years. This quarter may be the start of a turnaround, as monthly active users declined by 7% in March compared with the far-steeper 10% drop a year ago. Tinder registrations also grew for the first time since 2024, but by a mere 1%, as Bloomberg pointed out.
This is perhaps a positive sign for Tinder. Or it might be a brief blip driven by users’ curiosity around various product improvements and new features, like IRL events. Time will tell.
Dating meets a generational shift
Match Group remains a company that has to work to squeeze more money out of an oft-dwindling, less-active user base — and, to the company’s credit, it did exactly that. Match’s revenue was $864 million in the first quarter, up 4% year-over-year. However, its next-quarter estimates are coming in lower — around $850 million to $860 million, down 2% to flat year-over-year.
All these struggles come after many months of what appears to be waning interest in dating apps among younger people. This generational shift sees people opting to meet up in real life, perhaps by pursuing an interest such as running, a book club, or another hobby that connects them with other people, which in turn expands their network and increases their chance of meeting someone new.
The trend coincides with a resurgence of nostalgic tech, like digital cameras, flip phones, boomboxes, and even landlines, signaling a generation that’s feeling burned out by always-on connectivity and looking for analog pleasures.
Match Group is aware of this significant shift and says it’s pivoting to address the challenge by increasing the number of its own IRL events.
“Gen Z desperately wants to connect. They know they want to meet new people. They just want to do it in a low-pressure, low-stakes way that doesn’t feel like a job interview,” Match Group CEO Spencer Rascoff told investors on the call. “Traditional dating apps are very highly structured and can be intimidating to a user under 30. So, I think the growth of these alternative ways to meet new people speaks to how Gen Z is trying to find lower-pressure ways to connect.”
“We’ve obviously adapted our roadmap to this reality,” he said.
Khosla-backed robotics startup Genesis AI has gone full stack, demo shows
Genesis AI, a startup that raised a $105 million seed round to build foundational AI for robotics, has unveiled its first model, GENE-26.5, and it comes with a surprise: hands. In a demo video, the company showcased various advanced tasks performed by a set of robotic hands it has designed in-house.
“The model has always been the goal, because a better model means better intelligence,” Genesis co-founder and CEO Zhou Xian told TechCrunch. But the company soon realized that it needed control over the hardware. “So we decided to go full stack,” he said.
Other well-funded companies operate at the intersection of AI and robotics — such as Physical Intelligence and Skild AI. Zhou also acknowledged that “there’s probably 50 or 100 robotic hand companies out there.” But he and his co-founder Théophile Gervet hope that building their own will give them the upper hand.
The key difference is that Genesis’ hand has the same size and shape as a human hand — rather than the two-finger grippers many robotics companies have been using — reducing the gap with real-world conditions.
“That lets us collect a lot more data than was previously possible, to train a model that can do many more tasks,” said Gervet, a former research scientist at Mistral AI who is now Genesis’ president.
Of all the physical manipulation tasks showcased in the demo video, Gervet’s personal favorite is cooking, because it proves that the robot has been able to complete a long series of difficult tasks, such as cracking an egg and slicing a tomato. But Genesis has also tasked its robots with preparing smoothies, playing the piano, and solving a Rubik’s Cube — a robotics gimmick.
Other tasks, such as lab work, are closer to what could be the commercial applications of Genesis’ technology. But what happens behind the scenes is just as important: The startup has also developed a sensor-loaded glove that works as a real-life double of its robotic hand, collecting data that can more readily be used.
“Our idea was that if we could design a robotic hand that tries to mimic a human hand as much as possible, we can instantly unlock huge amounts of human data without having to worry about what people call the ‘embodiment gap’ in robotics research,” Zhou said.
Others have tried their hand at that problem; the main novelty is how Genesis combines this with its model. The current version is named GENE-26.5 for May 2026, but Zhou expects there will be many iterations, thanks to the simulation it has developed. “The real bottleneck for the iteration speed of the model is evaluation. So this helps us speed up model training a lot,” he said.
Beyond simulation, though, data will be key to training models that can help robots perform more tasks. That’s also where Genesis’ glove could come in handy. Gervet said that, unlike clunky data collection devices that get in the way, it is just as light and easy to wear as the security gloves already used in many industries, while remaining relatively cheap to make.
“We’re in talks with a lot of customers right now, and a lot of the value of a glove would be that, for the first time, you can wear the data collection device when you’re doing your daily job, whether it’s a lab technician for pharma or for manufacturing,” Gervet said. This would also be complemented by “egocentric video data” — people filming themselves doing the task.
Still, it remains to be seen whether workers would be happy to wear the very gloves and cameras that could train robots to replace them, and whether they will get extra pay for that training. That will be between Genesis’ customers and their employees, Gervet suggested. “We haven’t nailed the details yet,” he said.
Either way, they may decide not to share that data with the startup, the founders acknowledged. But the startup has avenues of its own to build its “human skill library” — it could also pay third-party partners to collect data. Its model is already trained on “massive amounts of human-based internet videos,” according to a press release that didn’t mention compensation.
Combined with its simulation system, this could help Genesis lower the costs of its technology for real-world applications like the one it has demonstrated. “This marks an important milestone for their team and the robotics industry more broadly,” said Google’s former CEO, Eric Schmidt, who invested in the startup.
In July 2025, just a few months after its creation, the startup emerged from stealth with a $105 million seed round co-led by Eclipse and Khosla Ventures, with additional backers including Bpifrance, HSG, and individuals such as Schmidt, Xavier Niel, Daniela Rus, and Vladlen Koltun.
This funding helped Genesis increase its headcount. With offices in Paris and California, it has also expanded to London. “One big reason we decided to be in Europe is there is a huge talent density across the whole continent,” Gervet said. Its team of 60 people is split around “40-45% in Europe and 50-55% in the U.S.,” and the startup is currently hiring in all three locations.
Aside from hiring, the company also plans to reveal its first general-purpose robot shortly, which Zhou told TechCrunch will be a full-body robot, not just hands. But he insisted that the roadmap is still the same.
“Our goal is to build the most capable robotic system,” he said.
Google updates AI search to include quotes from Reddit and other sources
Google is updating search to refine its AI experience by adding more context to links, like excerpts from web forums and blogs, as well as a feature that highlights links from a user’s news subscriptions.
While citing web forums and discussion boards can help users find answers to more niche queries, this design choice could also prove chaotic.

Two years ago, Google overhauled its search experience to put AI front and center — when you search for something, Google will often summon an “AI Overview,” which has drawn a mixed reception from users. People quickly pointed out how the feature could be exploited, since it failed to recognize sarcasm or information that comes from dubious sources. (It cited The Onion when telling someone to eat “one small rock per day,” and used Reddit to advise someone to put glue on their pizza to make the cheese stick better.)
Though Google’s AI Overviews have improved significantly, they still — like anything powered by an LLM — are prone to hallucination. A recent New York Times analysis found that the AI Overviews were correct about nine times out of 10. But for a company that processes trillions of queries a year, that success rate would mean that hundreds of thousands of searches turn up inaccurate results every minute.
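To see roughly how that per-minute figure falls out, here is a minimal back-of-the-envelope sketch in Python, assuming about 5 trillion queries a year and that the one-in-ten error rate applies to every result; both assumptions are illustrative rather than figures from the Times analysis.

queries_per_year = 5_000_000_000_000   # assumed annual search volume (illustrative)
error_rate = 1 / 10                    # "correct about nine times out of 10"
minutes_per_year = 365 * 24 * 60       # 525,600 minutes in a year

errors_per_minute = queries_per_year * error_rate / minutes_per_year
print(f"~{errors_per_minute:,.0f} inaccurate results per minute")
# prints ~951,294, i.e. on the order of hundreds of thousands per minute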
Of course, not every search has an objective yes-or-no answer, which is why Google might want to pull in voices from web forums where people discuss such questions — there’s a reason why people often add “Reddit” to the end of their Google searches.
“For many searches, people are increasingly seeking out advice from others,” Google explains. “To help you find the most helpful insights to explore further, AI responses will now include a preview of perspectives from public online discussions, social media, and other firsthand sources. We’re also adding more context to these links, like a creator’s name, handle, or community name, to help you decide which discussions you might want to read or participate in.”
But now Google is complicating the role of its AI Overviews. Is the AI Overview supposed to answer a question, or is it supposed to serve you a variety of sources that might have the information you’re looking for? Isn’t that basically just a normal Google search?

Google will, at least, add more context about where its AI Overview commentary comes from, which might help users figure out whether they’re getting information from a trustworthy source. It’s similar to how ChatGPT or Claude will sometimes provide links that are supposed to back up their claims.
Still, we’d recommend double-checking those citations; AI models sometimes point to sources that don’t actually support the claims attributed to them.
