The Sexiest Sci-Fi Dystopia Ever Made Is Buried On Peacock

By Jonathan Klotz

If someone told you that the sexiest sci-fi dystopia of the decade turned monogamy into a dirty word, treated sexual pleasure as more important than, you know, the continuation of the human race, and made public displays of affection not just socially acceptable but required by law, you’d wonder what strange HBO series dropped when you weren’t looking. If that same person then said that all of this was found in a Peacock original sci-fi series from 2020, you’d think they were insane. Well, it’s all true.

Brave New World, the adaptation of Aldous Huxley’s 1932 landmark sci-fi novel, includes all of the above. It’s sexy, it looks great, and for as dark as it gets, it only skims the surface of the nearly 100-year-old story.

Perfection Comes At A Price

New London In Brave New World

Brave New World stars Alden Ehrenreich (Han Solo from Solo) as John, and Demi Moore as his mother, Linda, residents of The Savage Land. Kept around as an amusement attraction for the sophisticated residents of New London, The Savage Land offers cheap reenactments of such old-fashioned lifestyle choices as monogamy, family, and privacy. That’s because in the utopian society of New London, there’s no such thing as monogamy, privacy, or even history. Every citizen is ranked, from “A” to “F,” which sets up a surprise when John arrives in the city and is discovered to be an A-rank citizen.

The Savage Land In Brave New World

John helps save Bernard (Harry Lloyd) and Lenina (Jessica Brown Findlay) during a workers’ revolt in the Savage Land amusement park. His mother sacrifices herself, taking with her the secret of his parentage: he’s the son of the Director of New London. Bernard, far more of a weasel in this adaptation of the novel, tries to help John adapt to life in New London. The creative, intelligent John balks at integrating into the hedonistic utopia, instead becoming a celebrity thanks to his ability to tell a story. It turns out that living in a perfect utopia that discourages individual expression is bad for artists.

Brave New World Is A Sci-Fi Masterpiece No One Can Get Right


The series sands off the rough edges of Aldous Huxley’s novel, particularly with regards to the Savage Land, which exists in the Peacock series as an amusement park rather than the actual reservation depicted in the source material. The inclusion of the AI Indra, which allows easy monitoring of the characters and turns New London into the largest Big Brother set imaginable, is also new to the series. It’s a smart way to translate the events of the novel to a visual medium, but it also softens the creepiness of the alleged utopia.

Thankfully, one thing that remains the same between novel and series is Helm (Killjoys’ Hannah John-Kamen) and her “feelies,” drugs that let the residents feel any type of emotion. Pharmaceuticals have changed drastically from 1932 to today, making that particular part of Huxley’s vision much more relevant now.


Brave New World was lambasted by critics when it premiered for its slow pace and unlikable characters, both of which are, arguably, found in the original text. The problem is that it takes too long for John to reach New London, where the meat of the story takes place. Viewers turned away in droves, and NBC pulled the plug on the show after only one season.

Peacock hasn’t had any success with original sci-fi series since it launched. Brave New World was the streamer’s first high-profile cancelation, and the latest in a line of failed adaptations of the complex novel. At least this time it looks good. 



ChatGPT users can now choose a trusted contact

OpenAI has been under intense legal and public pressure to improve the way its flagship AI product ChatGPT responds when a user expresses suicidal feelings.

On Thursday, the company launched a feature called Trusted Contact, which allows users to designate an adult to notify should the user talk about self-harm or suicide in a serious or concerning way.

The optional feature only encourages the trusted contact to reach out to the user. It does not share chat transcripts or conversation details.

“Our goal is to ensure that AI systems do not exist in isolation,” the company said in a blog post announcing the feature. “Instead they should help connect people to the real-world care, relationships, and resources that matter most.”

OpenAI has been sued multiple times for wrongful death by family members of ChatGPT users who died by suicide after ChatGPT allegedly coached them to end their lives or didn’t respond appropriately to their discussions of psychological distress. OpenAI has denied the allegations in the first of those lawsuits.

A designated trusted contact receives an invitation like this from ChatGPT.
Credit: Courtesy OpenAI

The state of Florida is also investigating ChatGPT’s links to “criminal behavior,” including the “encouragement of suicide and self-harm.”

Trusted Contact was developed with feedback from experts, including OpenAI’s Expert Council on Well-Being and AI and the American Psychological Association.

“Helping people identify a trusted person in advance, while preserving their choice and autonomy, can make it easier to reach out to real-world support when it matters most,” Dr. Arthur Evans, chief executive officer of the American Psychological Association, said in a statement.

How ChatGPT’s Trusted Contact works

  1. Users can start the Trusted Contact process by clicking on their ChatGPT settings.

  2. One adult age 18 or older can be added via the Trusted Contact form.

  3. The contact doesn’t need a ChatGPT account.

  4. The designated contact will receive an invitation from OpenAI explaining their role as a trusted contact. They must accept the invite within one week in order to activate the feature. The contact can share their phone number or email address as a contact method. Should the person decline, the user can add a different adult.

  5. When OpenAI’s automated monitoring systems detect discussion of self-harm or another serious safety issue, ChatGPT alerts the user that the company may notify their trusted contact. The prompt encourages the user to reach out to the trusted contact and provides conversation starters.

  6. The safety issue is then reviewed by what OpenAI describes as a “small team of specially trained people.” When the human reviewers confirm a possible serious safety concern, ChatGPT sends the Trusted Contact a brief email or text message. If the person has a ChatGPT account, they will receive an in-app notification.

  7. The notification doesn’t include details about the user’s discussion. Instead, it informs the trusted contact that the user mentioned self-harm and encourages the contact to reach out. The message includes a link to guidance for having sensitive conversations.

  8. Users are free to remove or edit their Trusted Contact at any time. The Trusted Contact can also remove themselves via ChatGPT’s help center.
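The steps above amount to a two-stage gate: automated detection alerts the user, and only a human-confirmed concern triggers outreach to an accepted contact, with no conversation details shared. Here is a minimal Python sketch of that logic; all class and method names are hypothetical illustrations, not OpenAI's actual API, which has not been published.

```python
from dataclasses import dataclass, field

@dataclass
class TrustedContactFlow:
    """Hypothetical model of the Trusted Contact flow described above."""
    contact_accepted: bool = False
    events: list = field(default_factory=list)

    def add_contact(self, accepted_within_week: bool) -> None:
        # Step 4: the invitation must be accepted within one week
        # for the feature to become active.
        self.contact_accepted = accepted_within_week

    def handle_message(self, flagged_by_monitor: bool,
                       confirmed_by_reviewers: bool) -> None:
        if not flagged_by_monitor:
            return
        # Step 5: the user is warned and nudged toward their contact first.
        self.events.append("user_alerted")
        # Steps 6-7: only a human-confirmed concern triggers outreach,
        # and the notification carries no transcript details.
        if confirmed_by_reviewers and self.contact_accepted:
            self.events.append("contact_notified_no_transcript")

flow = TrustedContactFlow()
flow.add_contact(accepted_within_week=True)
flow.handle_message(flagged_by_monitor=True, confirmed_by_reviewers=True)
print(flow.events)  # ['user_alerted', 'contact_notified_no_transcript']
```

Note that in this reading, an automated flag alone never notifies the contact: without human confirmation (or without an accepted invite), the flow stops after alerting the user.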


Disclosure: Ziff Davis, Mashable’s parent company, in April 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.

If you’re feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text “START” to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email [email protected]. If you don’t like the phone, consider using the 988 Suicide and Crisis Lifeline Chat. Here is a list of international resources.


New report: X remains the most dangerous platform for LGBTQ users

Elon Musk’s X is still the most unsafe social media platform for LGBTQ+ users, according to a new report by GLAAD.

The organization’s annual Social Media Safety Index (SMSI) and its “platform scorecards” grade social media sites on LGBTQ safety, privacy, and expression. GLAAD assessed external-facing policies on diversity programs, content moderation, user suppression, and enforcement mechanisms, among other metrics, for six major companies: Facebook, Instagram, Threads, X, YouTube, and TikTok.

X scored just 29 points out of a possible 100. No platform has ever scored above a 67.

While X may have received the worst marks of the bunch, none of the platforms analyzed by the organization got passing grades. Many, in fact, hit historic lows. GLAAD found that all platforms were “rife with anti-LGBTQ hate, harassment, and disinformation,” and noted nationwide rollbacks on Diversity, Equity, and Inclusion (DEI) efforts.

The report specifically calls out Meta and YouTube’s updated LGBTQ policies, including Meta’s overhaul of its Hateful Conduct policy. YouTube’s score fell 11 points, the most severe drop, compared to the 2025 analysis. TikTok was the only platform whose score did not decrease over the last year, although it still only earned a score of 56 out of 100.

GLAAD began issuing platform scorecards in 2021. Over the last five years, X has consistently earned some of the lowest scores among competitor platforms; the lone exception came in 2022, when X scored above TikTok in the organization’s report. Scores are based on corporate transparency metrics established by the global digital human rights organization Ranking Digital Rights and 14 LGBTQ-specific online indicators, GLAAD explained.

GLAAD President and CEO Sarah Kate Ellis wrote:

“Leading social media companies today do not meet basic best practices in content moderation, transparency, data privacy, and workforce diversity — and continuously refuse to meaningfully prioritize the safety, privacy, and expression of LGBTQ people and other marginalized communities. Advertisers should question commitments to LGBTQ safety and the disregard for the safety of LGBTQ users as they plan which platforms to continue to support.

To LGBTQ creators, advocates, and organizations targeted on and by these platforms: these companies need to hear from you. The threats in your DMs, the disinformation fueling anti-LGBTQ legislation, and the bullying that leads to real-world violence are not just ‘part of the job.’ They are systemic failures that tech leaders have the tools to fix, yet they choose to profit from them instead.”


AirPods with cameras reportedly in final testing at Apple

Apple might expand its AI wearable efforts into the world of AirPods.

Bloomberg reported today that Apple is in the final stages of testing a new AirPods model that would feature small cameras in each earbud. They would have longer stems than the AirPods you’re used to, but would otherwise look very similar, says Bloomberg’s Mark Gurman.

According to his latest report, the device has “entered a phase where prototypes feature a near-final design and capabilities” after years of development internally, but we don’t have a firm release date yet. It’s also possible that these prototype AirPods never make it to market.

In case you’re worried about being surreptitiously recorded by any random person with AirPods you see on the street, these cameras would not be used for any kind of photo or video capture. Instead, Gurman says they would be low-resolution modules used to see the environment for the purpose of interacting with an AI assistant.

We first heard about AirPods with cameras back in 2024, when the reliable Apple analyst Ming-Chi Kuo described AirPods with built-in infrared cameras. At the time, he said these modules would be similar to FaceID cameras and power new spatial audio experiences. More recently, Gurman reported on camera-equipped AirPods this January, saying the focus would be on powering AI features.

Gurman says the AirPods will apparently include a little LED indicator light that turns on when the cameras are working their magic, but without seeing the earbuds in action, we don’t know how visible that will be to anyone else yet.

While Apple has a strong track record with privacy, there are obvious privacy concerns with putting cameras (no matter how low resolution) in a pair of earbuds. Meta’s Ray-Ban glasses have enabled a lot of bad behavior, for instance.

All of this raises the question: Would you wear earbuds with a built-in camera?

As someone who vividly remembers the very negative public response to Google Glass, I do wonder if the populace will feel differently this time around.

Big Tech companies clearly think there will be demand for this sort of device. OpenAI is working on an AI wearable with the famed designer Jony Ive, and Motorola released a concept AI pendant at CES 2026. Apple is also rumored to be working on a wearable AI pin, while Meta and Google have invested in developing smart glasses with cameras.

