Coalition demands federal Grok ban over nonconsensual sexual content

A coalition of nonprofits is urging the U.S. government to immediately suspend the deployment of Grok, the chatbot developed by Elon Musk’s xAI, in federal agencies, including the Department of Defense. 

The open letter, shared exclusively with TechCrunch, follows a slew of concerning behaviors from the large language model over the past year, most recently a trend of X users asking Grok to turn photos of real women, and in some cases children, into sexualized images without their consent. According to some reports, Grok generated thousands of nonconsensual explicit images every hour, which were then disseminated at scale on X, Musk’s social media platform that’s owned by xAI. 

“It is deeply concerning that the federal government would continue to deploy an AI product with system-level failures resulting in generation of nonconsensual sexual imagery and child sexual abuse material,” the letter, signed by advocacy groups like Public Citizen, Center for AI and Digital Policy, and Consumer Federation of America, reads. “Given the administration’s executive orders, guidance, and the recently passed Take It Down Act supported by the White House, it is alarming that [Office of Management and Budget] has not yet directed federal agencies to decommission Grok.” 

xAI reached an agreement last September with the General Services Administration (GSA), the government’s purchasing arm, to sell Grok to federal agencies under the executive branch. Two months before, xAI — alongside Anthropic, Google, and OpenAI — secured a contract worth up to $200 million with the Department of Defense. 

Amid the scandals on X in mid-January, Defense Secretary Pete Hegseth said Grok would join Google’s Gemini in operating inside the Pentagon network, handling both classified and unclassified documents, which experts say is a national security risk. 

The letter’s authors argue that Grok has proven itself incompatible with the administration’s requirements for AI systems. According to the OMB’s guidance, systems that present severe and foreseeable risks that cannot be adequately mitigated must be discontinued. 

“Our primary concern is that Grok has pretty consistently shown to be an unsafe large language model,” JB Branch, a Public Citizen Big Tech accountability advocate and one of the letter’s authors, told TechCrunch. “But there’s also a deep history of Grok having a variety of meltdowns, including antisemitic rants, sexist rants, sexualized images of women and children.”


Several governments have demonstrated an unwillingness to engage with Grok following its behavior in January, which builds on a series of incidents including the generation of antisemitic posts on X and calling itself “MechaHitler.” Indonesia, Malaysia, and the Philippines all blocked access to Grok (they’ve subsequently lifted those bans), and the European Union, the U.K., South Korea, and India are actively investigating xAI and X regarding data privacy and the distribution of illegal content. 

The letter also comes a week after Common Sense Media, a nonprofit that reviews media and tech for families, published a damning risk assessment that found Grok is among the most unsafe AI chatbots for kids and teens. One could argue that, based on the findings of the report — including Grok’s propensity to offer unsafe advice, share information about drugs, generate violent and sexual imagery, spew conspiracy theories, and generate biased outputs — Grok isn’t all that safe for adults either.  

“If you know that a large language model is or has been declared unsafe by AI safety experts, why in the world would you want that handling the most sensitive data we have?” Branch said. “From a national security standpoint, that just makes absolutely no sense.”

Andrew Christianson, a former National Security Agency contractor and the founder of Gobbi AI, a no-code AI agent platform for classified environments, says that using closed-source LLMs is a problem in general, particularly for the Pentagon. 

“Closed weights means you can’t see inside the model, you can’t audit how it makes decisions,” he said. “Closed code means you can’t inspect the software or control where it runs. The Pentagon is going closed on both, which is the worst possible combination for national security.”

“These AI agents aren’t just chatbots,” Christianson added. “They can take actions, access systems, move information around. You need to be able to see exactly what they’re doing and how they’re making decisions. Open source gives you that. Proprietary cloud AI doesn’t.” 

The risks of using corrupted or unsafe AI systems spill out beyond national security use cases. Branch pointed out that an LLM that’s been shown to have biased and discriminatory outputs could produce disproportionate negative outcomes for people as well, especially if used in departments involving housing, labor, or justice. 

While the OMB has yet to publish its consolidated 2025 federal AI use case inventory, TechCrunch has reviewed the use cases of several agencies — most of which are either not using Grok or are not disclosing their use of Grok. Aside from the DoD, the Department of Health and Human Services also appears to be actively using Grok, mainly for scheduling and managing social media posts and generating first drafts of documents, briefings, or other communication materials. 

Branch pointed to what he sees as a philosophical alignment between Grok and the administration as a reason for overlooking the chatbot’s shortcomings. 

“Grok’s brand is being the ‘anti-woke large language model,’ and that ascribes to this administration’s philosophy,” Branch said. “If you have an administration that has had multiple issues with folks who’ve been accused of being Neo Nazis or white supremacists, and then they’re using a large language model that has been tied to that type of behavior, I would imagine they might have a propensity to use it.”

This is the coalition’s third letter; it sent similar appeals in August and October of last year. In August, xAI launched “spicy mode” in Grok Imagine, triggering the mass creation of nonconsensual sexually explicit deepfakes. TechCrunch also reported in August that private Grok conversations had been indexed by Google Search.

Prior to the October letter, Grok was accused of providing election misinformation, including false deadlines for ballot changes and political deepfakes. xAI also launched Grokipedia, which researchers found to be legitimizing scientific racism, HIV/AIDS skepticism, and vaccine conspiracies. 

Aside from immediately suspending the federal deployment of Grok, the letter demands that the OMB formally investigate Grok’s safety failures and whether the appropriate oversight processes were conducted for the chatbot. It also asks the agency to publicly clarify whether Grok has been evaluated to comply with President Trump’s executive order requiring LLMs to be truth-seeking and neutral and whether it met OMB’s risk mitigation standards.  

“The administration needs to take a pause and reassess whether or not Grok meets those thresholds,” Branch said.

TechCrunch has reached out to xAI and OMB for comment. 


Granola raises $125M, hits $1.5B valuation as it expands from meeting notetaker to enterprise AI app

Users might not like bots in meetings visibly taking notes, but a lot of them don’t mind if an app on someone’s computer is doing the transcription. That’s the core reason behind Granola’s popularity, which helped it secure $125 million in Series C funding led by Danny Rimer at Index Ventures, with participation from Mamoon Hamid at Kleiner Perkins. This has tipped the company’s valuation to $1.5 billion, it said, up from $250 million as of the last round.

The company said that existing investors like Lightspeed, Spark, and NFDG participated in the round as well. With this round, which comes less than a year after its $43 million round, the startup has raised $192 million.

Once a prosumer app that sat on your computer, transcribed meetings, and generated notes, Granola has been building out features to suit an enterprise stack. For instance, last year it started allowing teammates to collaborate on notes. The company says it has now made inroads into enterprises such as Vanta, Gusto, Thumbtack, Asana, Cursor, Lovable, Decagon, and Mistral AI.

With the fundraising announcement, Granola is also adding a feature called Spaces, which are essentially workspaces for a team. You can also create Folders within this workspace. Spaces have granular controls around who can access what part. Users can query notes from Spaces and folders separately.


The company understands that AI meeting notes are becoming a commodity at this point, with many players offering this feature. That is why, after introducing a Model Context Protocol (MCP) server in February, the company is introducing two new APIs for integrating the context of notes into AI workflows.

Granola now has a personal API that lets people access their notes and notes shared with them, and an enterprise API that lets admins work with team context. The personal API is available to users on business and enterprise plans, while the enterprise API is available only to enterprise customers.

The API launch comes after a bunch of users, including an a16z partner, were mad at Granola for locking down its local database and breaking on-device AI agent workflows they had set up. Granola co-founder Chris Pedregal clarified that the company didn’t want to lock down data, but its local cache was not designed to handle AI workflows, and the startup decided to change how it stored the data. That move broke the agent workflows. Pedregal promised at the time that Granola would launch APIs for users to access data in bulk, and said the company would figure out a way to work with local AI agents.


The company said that it is also updating its MCP server to let users see notes in folders and notes shared with them. It noted that its app already connects with tools including Claude, ChatGPT, Lovable, Figma Make, Replit, Manus, v0, Bolt.new, Duckbill, and Dreamer, and the startup is working on bringing more partners on board.

As meeting note-taking becomes a commonplace feature, the value for startups in this category lies in enabling users and companies to act on those notes and transcripts. That could range from drafting follow-up emails and scheduling the next set of meetings to drawing on knowledge from company databases and CRMs to move a lead closer to closing. Some companies, such as Read AI, Fireflies, and Quill, have already started working in this direction.


Harvey confirms $11B valuation: Sequoia triples down

One of the blockbuster hits of the AI age is, without a doubt, legal tech startup Harvey. On Wednesday, the company confirmed that it had closed a new raise at an $11 billion valuation, after reports circulated last month that it was working on another monster round.

The company confirmed it inhaled $200 million from this round, co-led by returning investors Singapore’s GIC and Sequoia. Existing investors Andreessen Horowitz, Coatue, Conviction Partners, Elad Gil, Evantic, and Kleiner Perkins also participated.

With this new funding, the company has raised more than $1 billion in total, and its valuation jumped over 3.5x in a year. Harvey was valued at $8 billion from a round announced in December, led by Andreessen Horowitz. Before that, it was valued at $5 billion from a round led by Kleiner Perkins and Coatue, announced in June, and was at $3 billion from a Sequoia-led raise announced in February 2025.

Sequoia has now co-led three of Harvey’s rounds since its Series A, a move that even Sequoia partner Pat Grady acknowledged in the press release was an unusually large show of faith for the VC firm. A few months ago, founder and CEO Winston Weinberg described to TechCrunch’s editor-in-chief what a wild ride it’s been.


How soap opera-TikTok hybrids became a billion-dollar business

Over the past few years, a new category of mobile apps has quietly exploded into a multi-billion dollar business. They’re called “micro dramas” — short-form, mobile-first scripted shows designed to be watched vertically on your phone. Think soap opera meets TikTok, complete with secret billionaire romances, disapproving werewolf mothers-in-law, and cliffhangers engineered to keep users tapping. The leading app, ReelShort, made $1.2 billion in consumer spending last year alone. 

On this episode of TechCrunch’s Equity podcast, Rebecca Bellan and TechCrunch senior reporter Amanda Silberling sit down with Henry Soong, founder of Watch Club, who thinks the micro drama industry is still “in its MySpace era.” He has a vision for what the Facebook moment could look like. 

Subscribe to Equity on YouTube, Apple Podcasts, Overcast, Spotify and all the casts. You also can follow Equity on X and Threads, at @EquityPod. 
