Bumble introduces an AI dating assistant, ‘Bee’
Dating app maker Bumble is venturing into generative AI. During the company’s fourth-quarter earnings call on Wednesday, Bumble introduced a new AI assistant it’s calling “Bee,” designed to become a personal matchmaker that learns users’ “values, relationship goals, communication style, lifestyle, and dating intentions” through private chats. It then uses those insights to help find the user more relevant matches.
Currently, Bee is in the pilot phase and being tested internally, Bumble founder and CEO Whitney Wolfe Herd told investors, but it’s launching into beta soon.
With Bee, the company envisions being able to capture much more information about Bumble users, as it learns more about each individual’s story and what they really want. This could differentiate Bumble’s app from others like Tinder, which recently underwent its own overhaul as the dating app market has fizzled among Gen Z users.
Bumble says users will interact with Bee much like they do with other AI chatbots, through typing and speaking in a more conversational style.

Initially, Bee will be used to power a new dating experience called “Dates” that uses AI to recommend matches, but in the future, Bumble says Bee will move into other areas, like offering date suggestions or requesting anonymous feedback from your prior matches.
In “Dates,” Bee will first learn about the user through a private onboarding conversation. It then identifies two people who have shared intentions, values, and relationship goals. Both users are notified in the app with a description of why they make a great match.
The addition is part of a broader tech and AI-focused overhaul of the dating app, which to date has marketed itself as more focused on women’s needs. The company pioneered features like “women message first,” body-shaming bans, and tools that blurred unsolicited explicit images, among others.

Now it’s looking to use AI to return to user growth amid a dating market that sees younger users, particularly Gen Z, growing tired of the swipe.
In fact, Wolfe Herd said that Bumble would experiment with removing the long-popular swipe mechanism in select markets to see how users react. Instead of prioritizing swipes as a binary “yes” or “no,” Bumble is looking to leverage other features, like new “chapter-based” profiles where members can connect with one another over different parts of a user’s life story. This will give Bumble more data to feed into its AI system and algorithms.
“We will be introducing more dynamic ways for somebody to express interest in your story, rather than just your profile, and this is going to drive more dynamic engagement, spark better conversation, and ultimately drive better KPIs across the board — like engagement and chances to get better conversations going,” Wolfe Herd said. “You will also see us take a much more deliberate approach to getting people offline versus just in what people refer to as dead-end chat zones.”
The company is also looking into other ways to better cater to Gen Z, a cohort that often prefers group socializing over one-on-one dates to get to know people.
The company has been working to add AI to its app for years, rolling out changes like AI photo selection and feedback tools, for instance, as well as in areas like safety. Wolfe Herd told investors that Bumble’s back-end infrastructure had been overhauled as the company infused the app with AI.
The company reported better-than-expected earnings in Q4, with revenue of $224.2 million and average revenue per paying user up 7.9% to $22.20. The stock rallied some 40% on the news.
Alexa+ gets a new ‘adults only’ personality option that curses but won’t do NSFW content
Amazon’s AI assistant Alexa+ is getting another new personality. On Thursday, the company announced it’s expanding its lineup of selectable personality styles with a “Sassy” option, which is for adults only. Amazon notes that before opting to use the Sassy personality, users will be required to complete additional security checks in the Alexa app.
The personality style will also not be available when Amazon Kids is enabled, Amazon says.
The new option joins others like Brief, Chill, and Sweet, launched last month.

When you toggle on the option for Sassy in the Alexa mobile app, you’re warned that the Sassy style uses explicit language, which is why it requires a security check. On iOS, this involves a Face ID scan.
The AI assistant explained its style to us like this: “The Sassy style is built on one premise: help first, judge always. Every answer comes wrapped in wit and a well-placed roast — it’ll answer your question; it’ll just make you feel something about it first. Expect reality checks delivered with charm, compliments that somehow sting, and warmth you didn’t see coming. Equal-opportunity irreverence, zero apologies. Honest, sharp, and funny — and somehow that’s more helpful than helpful.”
The Alexa app also warns that the style could contain “mature subject matter.”
However, further investigation revealed that this is not Amazon’s version of something like Grok’s adult AI companions. The AI assistant said the new option won’t get into areas like explicit sexual content, hate speech, illegal activities, personal attacks, or anything that could cause harm to oneself or others.
The move is the latest example of how Amazon is trying to make Alexa+ more customizable as it revamps the assistant for the generative AI era. By offering the assistant different personalities — including one positioned as more adult — Amazon is borrowing from a broader trend in AI, where companies have been experimenting with tone, style, and personas to make their assistants more engaging and personalized to individual users’ preferences.
Tesla becomes a utility in the UK, setting up showdown with Octopus Energy
Tesla is now an officially licensed utility in the United Kingdom, according to a new report from The Wall Street Journal. The automotive and energy company recently received a license from the Office of Gas and Electricity Markets, allowing it to sell electricity directly to households and commercial and industrial users.
The company has long dabbled in electricity markets. Its first pure energy products, the Powerwall and Powerpack, were introduced in 2015, but it wasn’t until a year later, when Tesla acquired SolarCity, that it started scaling the division rapidly. In 2022, the company launched Tesla Electric in Texas, which allowed it to sell electricity directly to customers. Powerwall owners can sell electricity from their batteries by participating in the company’s virtual power plant.
The new division, known as Tesla Energy Ventures, will compete with existing utilities in the U.K., including EDF, E.ON, and Octopus Energy. The competition with Octopus should prove particularly interesting. Since its founding in 2015, Octopus has become the country’s largest utility by focusing on slick software, renewable energy, and creative marketing. Sound familiar?
A writer is suing Grammarly for turning her and other authors into ‘AI editors’ without consent
Grammarly released a controversial feature last week that uses AI to simulate editorial feedback, making it seem like you’re getting a critique from novelist Stephen King, the late scientist Carl Sagan, or tech journalist Kara Swisher. But Grammarly did not get permission from the hundreds of experts it included in this feature, called “Expert Review,” to use their names.
One of the affected writers, journalist Julia Angwin, has filed a class action lawsuit against Superhuman, the parent company that owns Grammarly, arguing that the company violated her privacy and publicity rights, along with those of the other writers it impersonated. The class action structure allows other affected writers to join Angwin’s case.
“I have worked for decades honing my skills as a writer and editor, and I am distressed to discover that a tech company is selling an imposter version of my hard-earned expertise,” Angwin said in a statement.
The situation is more than a little ironic — Angwin has spent her career leading investigations into tech companies’ impacts on privacy. Other critics of this kind of technology, like renowned AI ethicist Timnit Gebru, were also included in Grammarly’s “Expert Review.”
The “Expert Review” feature, available only to subscribers paying $144 a year, predictably fails to deliver on the promise of thoughtful feedback.
Casey Newton, the founder and editor of the tech newsletter Platformer and another person impersonated by Grammarly, fed one of his articles into the tool and got feedback from Grammarly’s approximation of tech journalist Kara Swisher. Grammarly’s imitation of Swisher produced “feedback” so generic that it raises the question of why the company would go through the rigmarole of using these writers’ likenesses in the first place.
Here is what Grammarly’s approximation of Kara Swisher told him: “Could you briefly compare how daily AI users versus AI skeptics articulate risk, creating a through-line readers can follow?”
Newton relayed the message from the AI approximation of Kara Swisher to the actual, real human being, Kara Swisher.
“You rapacious information and identity thieves better get ready for me to go full McConaughey on you,” Swisher texted Newton (referring to Grammarly). “Also, you suck.”
Grammarly has since disabled the “Expert Review” feature, according to a LinkedIn post by Superhuman CEO Shishir Mehrotra. While Mehrotra offered an apology, he continued to defend the idea of the feature.
“Imagine your professor sharpening your essay, your sales leader reshaping a customer pitch, a thoughtful critic challenging your arguments, or a leading expert elevating your proposal,” he wrote. “For experts, this is a chance to build that same ubiquitous bond with users, much like Grammarly has.”
