Coalition demands federal Grok ban over nonconsensual sexual content
A coalition of nonprofits is urging the U.S. government to immediately suspend the deployment of Grok, the chatbot developed by Elon Musk’s xAI, in federal agencies, including the Department of Defense.
The open letter, shared exclusively with TechCrunch, follows a string of concerning behavior from the large language model over the past year, most recently a trend of X users asking Grok to turn photos of real women, and in some cases children, into sexualized images without their consent. According to some reports, Grok generated thousands of nonconsensual explicit images every hour, which were then disseminated at scale on X, Musk’s social media platform, which is owned by xAI.
“It is deeply concerning that the federal government would continue to deploy an AI product with system-level failures resulting in generation of nonconsensual sexual imagery and child sexual abuse material,” the letter, signed by advocacy groups like Public Citizen, Center for AI and Digital Policy, and Consumer Federation of America, reads. “Given the administration’s executive orders, guidance, and the recently passed Take It Down Act supported by the White House, it is alarming that [Office of Management and Budget] has not yet directed federal agencies to decommission Grok.”
xAI reached an agreement last September with the General Services Administration (GSA), the government’s purchasing arm, to sell Grok to federal agencies under the executive branch. Two months before, xAI — alongside Anthropic, Google, and OpenAI — secured a contract worth up to $200 million with the Department of Defense.
Amid the scandals on X in mid-January, Defense Secretary Pete Hegseth said Grok would join Google’s Gemini in operating inside the Pentagon network, handling both classified and unclassified documents, which experts say is a national security risk.
The letter’s authors argue that Grok has proven itself incompatible with the administration’s requirements for AI systems. According to the OMB’s guidance, systems that present severe and foreseeable risks that cannot be adequately mitigated must be discontinued.
“Our primary concern is that Grok has pretty consistently shown to be an unsafe large language model,” JB Branch, a Public Citizen Big Tech accountability advocate and one of the letter’s authors, told TechCrunch. “But there’s also a deep history of Grok having a variety of meltdowns, including antisemitic rants, sexist rants, sexualized images of women and children.”
Several governments have demonstrated an unwillingness to engage with Grok following its behavior in January, which builds on a series of incidents including the generation of antisemitic posts on X and calling itself “MechaHitler.” Indonesia, Malaysia, and the Philippines all blocked access to Grok (they’ve subsequently lifted those bans), and the European Union, the U.K., South Korea, and India are actively investigating xAI and X regarding data privacy and the distribution of illegal content.
The letter also comes a week after Common Sense Media, a nonprofit that reviews media and tech for families, published a damning risk assessment that found Grok to be among the most unsafe chatbots for kids and teens. One could argue that, based on the findings of the report — including Grok’s propensity to offer unsafe advice, share information about drugs, generate violent and sexual imagery, spew conspiracy theories, and generate biased outputs — Grok isn’t all that safe for adults either.
“If you know that a large language model is or has been declared unsafe by AI safety experts, why in the world would you want that handling the most sensitive data we have?” Branch said. “From a national security standpoint, that just makes absolutely no sense.”
Andrew Christianson, a former National Security Agency contractor and current founder of Gobbi AI, a no-code AI agent platform for classified environments, says that using closed-source LLMs in general is a problem, particularly for the Pentagon.
“Closed weights means you can’t see inside the model, you can’t audit how it makes decisions,” he said. “Closed code means you can’t inspect the software or control where it runs. The Pentagon is going closed on both, which is the worst possible combination for national security.”
“These AI agents aren’t just chatbots,” Christianson added. “They can take actions, access systems, move information around. You need to be able to see exactly what they’re doing and how they’re making decisions. Open source gives you that. Proprietary cloud AI doesn’t.”
The risks of using corrupted or unsafe AI systems spill out beyond national security use cases. Branch pointed out that an LLM that’s been shown to have biased and discriminatory outputs could produce disproportionate negative outcomes for people as well, especially if used in departments involving housing, labor, or justice.
While the OMB has yet to publish its consolidated 2025 federal AI use case inventory, TechCrunch has reviewed the use cases of several agencies — most of which are either not using Grok or are not disclosing their use of Grok. Aside from the DoD, the Department of Health and Human Services also appears to be actively using Grok, mainly for scheduling and managing social media posts and generating first drafts of documents, briefings, or other communication materials.
Branch pointed to what he sees as a philosophical alignment between Grok and the administration as a reason for overlooking the chatbot’s shortcomings.
“Grok’s brand is being the ‘anti-woke large language model,’ and that ascribes to this administration’s philosophy,” Branch said. “If you have an administration that has had multiple issues with folks who’ve been accused of being Neo Nazis or white supremacists, and then they’re using a large language model that has been tied to that type of behavior, I would imagine they might have a propensity to use it.”
This is the coalition’s third letter, after it raised similar concerns in August and October last year. In August, xAI launched “spicy mode” in Grok Imagine, triggering the mass creation of nonconsensual sexually explicit deepfakes. TechCrunch also reported in August that private Grok conversations had been indexed by Google Search.
Prior to the October letter, Grok was accused of providing election misinformation, including false deadlines for ballot changes and political deepfakes. xAI also launched Grokipedia, which researchers found to be legitimizing scientific racism, HIV/AIDS skepticism, and vaccine conspiracies.
Aside from immediately suspending the federal deployment of Grok, the letter demands that the OMB formally investigate Grok’s safety failures and whether the appropriate oversight processes were conducted for the chatbot. It also asks the agency to publicly clarify whether Grok has been evaluated to comply with President Trump’s executive order requiring LLMs to be truth-seeking and neutral and whether it met OMB’s risk mitigation standards.
“The administration needs to take a pause and reassess whether or not Grok meets those thresholds,” Branch said.
TechCrunch has reached out to xAI and OMB for comment.
Peak XV says internal disagreement led to partner exits as it doubles down on AI
Peak XV Partners, a leading venture capital firm in India and Southeast Asia, has seen a fresh round of senior departures. These follow other leadership exits over the past year as it pushes ahead with plans to deepen its focus on AI investing and expand its footprint in the U.S., while keeping India as its largest market.
The latest departures stem from an internal disagreement with senior partner Ashish Agrawal (pictured above, left) that led to a mutual decision to part ways, Managing Director Shailendra Singh told TechCrunch. He added that two other partners, Ishaan Mittal (pictured above, right) and Tejeshwi Sharma (pictured above, center), chose to leave alongside him.
Singh said Peak XV did not want to go into the specifics of the disagreement and was focused on moving forward. “Just out of privacy, and out of, like, trying to be classy about it,” he said. Singh added that such departures were not uncommon at large, multi-stage venture firms and that Peak XV wanted to move on quickly after several years of working together.
All board seats held by the departing partners would be transitioned “imminently,” Singh said, noting that the firm already had overlapping representation across several portfolio companies. He said Peak XV was not concerned about continuity, noting that multiple general partners and operating partners were already involved across many of those boards.
The departures mark the exit of long-tenured investors from the firm. Agrawal had been with Peak XV for more than 13 years, while Mittal spent over nine years at the firm and Sharma more than seven years, per their LinkedIn profiles.
Agrawal wrote in a LinkedIn post that he had decided to “take the entrepreneurial plunge” and was teaming up with Mittal and Sharma to start a new venture capital firm. He described the move as an opportunity to build a new institution with longtime partners and thanked Peak XV’s leadership for what he called a “truly wonderful partnership.”
During his time at Peak XV, Agrawal led investments across fintech, consumer, and software, including Groww, one of the firm’s most prominent IPO exits in 2025. He also backed multiple early- and growth-stage companies alongside Mittal and Sharma, contributing to Peak XV’s broader portfolio build-out over the past decade.
Agrawal, Mittal, and Sharma did not respond to requests for comment.
Peak XV has also moved to strengthen its senior leadership from within. The firm on Tuesday promoted Abhishek Mohan to general partner, expanding its investment leadership bench, while Saipriya Sarangan was elevated to chief operating officer, taking charge of firm-wide operations.
The leadership changes come amid a standout year for Peak XV’s portfolio exits. Five of its companies — Groww, Pine Labs, Meesho, Wakefit, and Capillary Technologies — went public in November and December 2025, generating roughly ₹300 billion (around $3.33 billion) in unrealized, mark-to-market gains for the firm, in addition to about ₹28 billion (about $310.61 million) in realized gains from share sales during the IPOs.
In addition to the latest departures, Peak XV has seen a broader churn in its senior ranks over the past 12 months. Last year, long-time investment leaders Harshjit Sethi and Shailesh Lakhani exited the India team, while Abheek Anand and Pieter Kemps departed from the firm’s Southeast Asia operations. The firm has also seen leadership changes across its marketing, policy, and operations teams in recent months.
Singh dismissed a view circulating in the market that many of the partners who drove Peak XV’s largest exits were no longer at the firm, calling the narrative “not statistically true.” He said several of the firm’s most significant outcomes had been led by long-tenured partners who remained at Peak XV, and argued that the firm’s exit track record did not hinge on any single individual.
Peak XV currently has seven general partners, along with multiple partners and principals, according to Singh.
The VC firm, which split from Sequoia Capital in 2023 and currently manages over $10 billion in capital across 16 funds, has made about 80 investments linked to AI, Singh said, highlighting its push to deepen its focus on AI funding. It is also preparing to open a U.S. office within the next 90 days as it expands its global footprint, per Singh, while continuing to view India as its largest and most important market.
Singh said the firm believed AI would reshape venture investing more profoundly than previous technology shifts, arguing that successful AI investing required investors with deep technical understanding rather than “generalist” experience. He added that Peak XV was looking to add more AI-native talent, including researchers and engineers with backgrounds in machine learning and large-scale model development.
The firm has invested in more than 400 companies, and its portfolio has seen over 35 initial public offerings and several M&As to date.
PayPal hires HP’s Enrique Lores as its new CEO
PayPal said on Tuesday it is hiring HP’s Enrique Lores as its next chief executive, replacing current CEO Alex Chriss. Lores, who has been chair of PayPal’s board since July 2024, will also take on the role of president.
PayPal said the appointment was made because the company’s pace of change and execution was “not in line with the Board’s expectations” given broader market trends.
Chriss joined PayPal in September 2023 from Intuit, succeeding Dan Schulman. PayPal’s CFO and COO, Jamie Miller, will take over as interim CEO until Lores joins the company.
The appointment comes as PayPal on Tuesday reported lower-than-expected revenue and profit in the fourth quarter, as consumer spending dipped amid a broader cost-of-living crisis and a softening labor market. The company also forecast a dip in its full-year profit, which surprised investors, as Wall Street had broadly expected the company to forecast growth instead.
PayPal’s shares were down about 17.9% in premarket trading on Tuesday.
Lores, who served as president and CEO of HP for over six years, said that apart from product innovation, PayPal will hold itself accountable for delivering on its quarterly results.
“The payments industry is changing faster than ever, driven by new technologies, evolving regulations, an increasingly competitive landscape, and the rapid acceleration of AI that is reshaping commerce daily. PayPal sits at the center of this change, and I look forward to leading the team to accelerate the delivery of new innovations and to shape the future of digital payments and commerce,” Lores said in a statement.
Fitbit founders launch AI platform to help families monitor their health
Fitbit founders James Park and Eric Friedman have announced the launch of a new AI startup called Luffu that aims to help families proactively monitor their health. The duo are developing an “intelligent family care system” that will start with an app experience and then expand into hardware devices.
Two years after their exit from Google, Park and Friedman are betting on AI to help lighten the mental burden of caregiving. According to a recent report, 63 million U.S. adults, or nearly 1 in 4, are family caregivers, up 45% from 10 years ago.
Luffu uses AI in the background to gather and organize family information, learn day-to-day patterns, and flag notable changes so families can stay aligned and address potential well-being issues.
“At Fitbit, we focused on personal health—but after Fitbit, health for me became bigger than just thinking about myself,” Park said in a press release. “I was caring for my parents from across the country, trying to piece together my mom’s health care across various portals and providers, with a language barrier that made it hard to get complete, timely context from her about doctor visits. I didn’t want to constantly check in, and she didn’t want to feel monitored. Luffu is the product we wished existed—to stay on top of our family’s health, know what changed and when to step in—without hovering.”

The pair note that today’s consumer health market is filled with tools for individuals, but that real-life health is shared across partners, kids, parents, pets, and caregivers. Family information is scattered across devices, portals, calendars, attachments, spreadsheets, and paper documents.
With Luffu, people will be able to track the whole family’s details, including health stats, diet, medications, symptoms, lab tests, doctor visits, and more. Users can log health information using voice, text, or photos. Luffu proactively watches for changes, and surfaces insights and alerts, such as unusual vitals or changes in sleep.
The pair told Axios that people can use plain language to ask questions about their family’s health, such as “Is Dad’s new meal plan affecting his blood pressure?” or “Did someone give the dog his medication?”
“We designed Luffu to capture the details as life happens, keep family members updated and surface what matters at the right time—so caregiving feels more coordinated and less chaotic,” Friedman said in the press release.
People who are interested in Luffu can join the waitlist for the limited public beta.
