Entertainment
ChatGPT users can now choose a trusted contact
OpenAI has been under intense legal and public pressure to improve the way its flagship AI product ChatGPT responds when a user expresses suicidal feelings.
On Thursday, the company launched a feature called Trusted Contact, which allows users to designate an adult to notify should the user talk about self-harm or suicide in a serious or concerning way.
The optional feature only encourages the trusted contact to reach out to the user. It does not share chat transcripts or conversation details.
“Our goal is to ensure that AI systems do not exist in isolation,” the company said in a blog post announcing the feature. “Instead they should help connect people to the real-world care, relationships, and resources that matter most.”
OpenAI has been sued multiple times for wrongful death by family members of ChatGPT users who died by suicide after ChatGPT allegedly coached them to end their lives or didn’t respond appropriately to their discussions of psychological distress. OpenAI has denied the allegations in the first of those lawsuits.
A designated trusted contact receives an invitation like this from ChatGPT.
Credit: Courtesy OpenAI
The state of Florida is also investigating ChatGPT’s links to “criminal behavior,” including the “encouragement of suicide and self-harm.”
Trusted Contact was developed with feedback from experts, including OpenAI’s Expert Council on Well-Being and AI and the American Psychological Association.
“Helping people identify a trusted person in advance, while preserving their choice and autonomy, can make it easier to reach out to real-world support when it matters most,” Dr. Arthur Evans, chief executive officer of the American Psychological Association, said in a statement.
How ChatGPT’s Trusted Contact works
- Users can start the Trusted Contact process in their ChatGPT settings.
- One adult age 18 or older can be added via the Trusted Contact form.
- The contact doesn’t need a ChatGPT account.
- The designated contact receives an invitation from OpenAI explaining their role as a trusted contact. They must accept the invite within one week to activate the feature. The contact can share their phone number or email address as a contact method. Should the person decline, the user can add a different adult.
- When OpenAI’s automated monitoring systems detect discussion of self-harm or another serious safety issue, ChatGPT alerts the user that the company may notify their trusted contact. The prompt encourages outreach to the trusted contact and provides conversation starters.
- The safety issue is then reviewed by what OpenAI describes as a “small team of specially trained people.” When the human reviewers confirm a possible serious safety concern, ChatGPT sends the trusted contact a brief email or text message. If the person has a ChatGPT account, they will receive an in-app notification.
- The notification doesn’t include details about the user’s discussion. Instead, it informs the trusted contact that the user mentioned self-harm and encourages the contact to reach out. The message includes a link to guidance for having sensitive conversations.
- Users can remove or edit their trusted contact at any time. The trusted contact can also remove themselves via ChatGPT’s help center.
Disclosure: Ziff Davis, Mashable’s parent company, in April 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
If you’re feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text “START” to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email [email protected]. If you don’t like the phone, consider using the 988 Suicide and Crisis Lifeline Chat. Here is a list of international resources.
New report: X remains the most dangerous platform for LGBTQ users
Elon Musk’s X is still the most unsafe social media platform for LGBTQ+ users, according to a new report by GLAAD.
The organization’s annual Social Media Safety Index (SMSI) and its “platform scorecards” grade social media sites on LGBTQ safety, privacy, and expression. GLAAD assessed external-facing policies on diversity programs, content moderation, user suppression, and enforcement mechanisms, among other metrics, for six major companies: Facebook, Instagram, Threads, X, YouTube, and TikTok.
X scored just 29 points out of a possible 100. No platform has ever scored above a 67.
While X may have received the worst marks of the bunch, none of the platforms analyzed by the organization got passing grades. Many, in fact, hit historic lows. GLAAD found that all platforms were “rife with anti-LGBTQ hate, harassment, and disinformation,” and noted nationwide rollbacks on Diversity, Equity, and Inclusion (DEI) efforts.
The report specifically calls out Meta and YouTube’s updated LGBTQ policies, including Meta’s overhaul of its Hateful Conduct policy. YouTube’s score fell 11 points from the 2025 analysis, the steepest drop of any platform. TikTok was the only platform whose score did not decrease over the last year, although it still only earned a score of 56 out of 100.
GLAAD began issuing platform scorecards in 2021. Over the last five years, X has consistently earned some of the lowest scores among competitor platforms, with the exception of the organization’s 2022 report, when it scored above TikTok. Scores are based on corporate transparency metrics established by global digital human rights organization Ranking Digital Rights and 14 LGBTQ-specific online indicators, GLAAD explained.
GLAAD President and CEO Sarah Kate Ellis wrote:
“Leading social media companies today do not meet basic best practices in content moderation, transparency, data privacy, and workforce diversity — and continuously refuse to meaningfully prioritize the safety, privacy, and expression of LGBTQ people and other marginalized communities. Advertisers should question commitments to LGBTQ safety and the disregard for the safety of LGBTQ users as they plan which platforms to continue to support.
To LGBTQ creators, advocates, and organizations targeted on and by these platforms: these companies need to hear from you. The threats in your DMs, the disinformation fueling anti-LGBTQ legislation, and the bullying that leads to real-world violence are not just ‘part of the job.’ They are systemic failures that tech leaders have the tools to fix, yet they choose to profit from them instead.”
AirPods with cameras reportedly in final testing at Apple
Apple might expand its AI wearable efforts into the world of AirPods.
Bloomberg reported today that Apple is in the final stages of testing a new AirPods model that would feature small cameras in each earbud. They would have longer stems than the AirPods you’re used to, but would otherwise look very similar, says Bloomberg’s Mark Gurman.
According to his latest report, the device has “entered a phase where prototypes feature a near-final design and capabilities” after years of development internally, but we don’t have a firm release date yet. It’s also possible that these prototype AirPods never make it to market.
In case you’re worried about being surreptitiously recorded by any random person with AirPods you see on the street, these cameras would not be used for any kind of photo or video capture. Instead, Gurman says they would be low-resolution modules used to see the environment for the purpose of interacting with an AI assistant.
We first heard about AirPods with cameras back in 2024, when the reliable Apple analyst Ming-Chi Kuo described AirPods with built-in infrared cameras. At the time, he said these modules would be similar to FaceID cameras and power new spatial audio experiences. More recently, Gurman reported on camera-equipped AirPods this January, saying the focus would be on powering AI features.
Gurman says the AirPods will apparently include a little LED indicator light that turns on when the cameras are working their magic, but without seeing the earbuds in action, we don’t know how visible that will be to anyone else yet.
While Apple has a strong track record with privacy, there are obvious privacy concerns with putting cameras (no matter how low resolution) in a pair of earbuds. Meta’s Ray-Ban glasses have enabled a lot of bad behavior, for instance.
All of this raises the question: Would you wear earbuds with a built-in camera?
As someone who vividly remembers the very negative public response to Google Glass, I do wonder if the populace will feel differently this time around.
Big Tech companies clearly think there will be demand for this sort of device. OpenAI is working on an AI wearable with the famed designer Jony Ive, and Motorola released a concept AI pendant at CES 2026. Apple is also rumored to be working on a wearable AI pin, while Meta and Google have invested in developing smart glasses with cameras.
Bumble is officially killing the swipe
When Bumble posted a cryptic image on Instagram telling the swipe that “it’s over,” people questioned whether the dating app was really getting rid of swiping. Today, its founder and CEO, Whitney Wolfe Herd, confirmed that it is.
On “The Axios Show,” Wolfe Herd said, “We are going to be saying goodbye to the swipe and hello to something that I believe is revolutionary for the category.”
The change in the matching mechanism will hit certain markets starting in the fourth quarter of 2026.
What will replace the swipe? Wolfe Herd didn’t say exactly, but it likely has to do with the new AI-driven matchmaking experience, Dates. Wolfe Herd has also mentioned on multiple earnings calls that Bumble is revamping the app’s backend as well.
“We are evolving into our next chapter,” Wolfe Herd told Axios’s Sara Fischer, which is similar to what a Bumble spokesperson told Mashable yesterday when asked about the Instagram post.
The full episode doesn’t appear to be live yet, but from Axios’s own coverage, Wolfe Herd also said that the app will not “force one gender over another to do something first,” yet the app will keep “the essence of what was always meant to be women making the first move.”
Bumble has already begun moving away from the “women making the first move” ethos it has held since its inception in 2014.
In 2024, the app launched “Opening Moves” to let men message women first in heterosexual matches. Then-CEO Lidiane Jones said the move was at least partly due to dating app fatigue. Wolfe Herd soon returned as Bumble’s CEO in early 2025, and in February 2026, the app removed the option in Mexico and Australia.
Swiping has been integral to Bumble’s user experience since its launch, two years after Tinder (which Wolfe Herd also cofounded) popularized the “hot or not” swipe model. But given that Bumble’s revenue and paying users are down year over year, it seems the company wants to try something new to regain those users.
Tinder, too, has seen financial dips recently, and it’s also made some changes.
In March, Tinder released a suite of new features, including an AI matchmaker, Chemistry. Hinge, meanwhile, doesn’t have swiping and keeps growing financially, suggesting that dating app users may be tired of rejecting someone with their thumb.
