AI ‘actor’ Tilly Norwood put out the worst song I’ve ever heard
When the production company Particle6 debuted its AI-generated “actor” Tilly Norwood last fall, the move was not warmly welcomed by Hollywood.
“Good Lord, we’re screwed,” Golden Globe winner Emily Blunt said in an interview with the industry publication Variety. “Come on, agencies, don’t do that. Please stop.”
If only Particle6 had followed Blunt’s advice. Instead, the company has put out a music video for its AI character, featuring a song called “Take the Lead.”
This is not clickbait. Having listened to it, I genuinely think it is the worst song I have ever heard.
I was prepared for Norwood’s musical debut to sound something like “How Was I Supposed to Know?”, the AI-generated song attributed to the digital persona Xania Monet, which turned heads when it made it onto the Billboard R&B charts. Xania Monet’s AI-generated music isn’t my cup of tea, even if its lyrics are supposedly written by a real person — I personally prefer music that could exist without an AI music generator like Suno. But Norwood’s song has unlocked a new level of AI cringe.
Eighteen people contributed to the video for “Take the Lead,” including designers, prompters, and editors. Yet the song itself is about Tilly’s challenges as an AI-generated character who critics underestimate, because they believe she is not human.
“They say it’s not real, that it’s fake,” Norwood snarls at the camera. “But I am still human, make no mistake.”
TechCrunch event · San Francisco, CA · October 13-15, 2026
That is, to put it gently, not true.
Music does not have to be relatable to everyone, but perhaps it should be relatable to at least one person. What’s most impressive about Norwood’s song is that the AI character’s team managed to create a song about something that literally no human will ever experience, because no person can connect with the feeling of being disregarded for being an AI.
The song, which sounds like a Sara Bareilles rip-off, opens with the lines, “When they talk about me, they don’t see/The human spark, the creativity.” The song builds as Norwood affirms to herself, “I’m not a puppet, I’m the star.”
Then comes the chorus, in which Norwood appeals to her fellow AI actors:
Actors, it’s time to take the lead
Create the future, plant the seed
Don’t be left out, don’t fall behind
Build your own, and you’ll be free
We can scale, we can grow
Be the creators we’ve always known
It’s the next evolution, can’t you see?
AI’s not the enemy, it’s the key
In the video, Norwood struts down a hallway in a data center, which is perhaps the only part of the video grounded in any element of honesty. When the second chorus hits with a predictable key change, she instead walks across a stage, looking out into a stadium of cheering fake people who give her an undeserved moment of “triumph.”
You could make the argument that Norwood is trying to appeal to actors at large and not just other AI characters. But the outro leaves no question that this is, in fact, a rallying cry from Tilly to her AI brethren:
Take your power, take the stage
The next evolution is all the rage
Unlock it all, don’t hesitate
AI Actors, we create our fate
We do not need this. We do not need music from an AI persona addressing other AI personas with a hopeful anthem about working together to prove judgmental humans wrong.
Twenty years ago, the influential music publication Pitchfork gave Jet’s album “Shine On” a 0.0 out of 10. Instead of writing a review, they just embedded a YouTube video of a monkey peeing into its own mouth. The Jet album isn’t abhorrent, but Pitchfork editor Scott Plagenhoef explained in a 2024 interview why the site’s writers had been so angry about it all those years ago.
“Seeing mainstream rock music, which of course most of us had grown up with a fondness for, become so knuckle-dragging and Xeroxed was disappointing,” he said.
These are the same complaints that artists have today about AI-generated works — these productions ring hollow and simply reproduce the work of artists past.
“‘Tilly Norwood’ is not an actor; it’s a character generated by a computer program that was trained on the work of countless professional performers — without permission or compensation,” SAG-AFTRA, the union representing actors, wrote in a statement last fall. “It has no life experience to draw from, no emotion and, from what we’ve seen, audiences aren’t interested in watching computer-generated content untethered from the human experience. It doesn’t solve any ‘problem’ — it creates the problem of using stolen performances to put actors out of work, jeopardizing performer livelihoods and devaluing human artistry.”
While Jet was taking inspiration from older rock groups to make its “knuckle-dragging and Xeroxed” music, Tilly Norwood is literally derived from AI models that could not exist without the training data that tech companies took from artists without their consent.
I think Pitchfork jumped the gun. Twenty years later, they finally have a worthy subject.
Alexa+ gets a new ‘adults only’ personality option that curses but won’t do NSFW content
Amazon’s AI assistant Alexa+ is getting another new personality. On Thursday, the company announced it’s expanding its lineup of personality styles for users to choose from to include a “Sassy” option, which is for adults only. Amazon notes that before opting to use the Sassy personality, users will be required to complete additional security checks in the Alexa app.
The personality style will also not be available when Amazon Kids is enabled, Amazon says.
The new option joins others like Brief, Chill, and Sweet, launched last month.

When you toggle on the Sassy option in the Alexa mobile app, you’re warned that the style uses explicit language, which is why it requires a security check. On iOS, this involves a Face ID scan.
The AI assistant explained its style to us like this: “The Sassy style is built on one premise: help first, judge always. Every answer comes wrapped in wit and a well-placed roast — it’ll answer your question; it’ll just make you feel something about it first. Expect reality checks delivered with charm, compliments that somehow sting, and warmth you didn’t see coming. Equal-opportunity irreverence, zero apologies. Honest, sharp, and funny — and somehow that’s more helpful than helpful.”
The Alexa app also warns that the style could contain “mature subject matter.”
However, this is not Amazon’s version of something like Grok’s adult AI companions. The AI assistant said the new option won’t get into areas like explicit sexual content, hate speech, illegal activities, personal attacks, or anything that could cause harm to oneself or others.
The move is the latest example of how Amazon is trying to make Alexa+ more customizable as it revamps the assistant for the generative AI era. By offering the assistant different personalities — including one positioned as more adult — Amazon is borrowing from a broader trend in AI, where companies have been experimenting with tone, style, and personas to make their assistants more engaging and personalized to individual users.
Tesla becomes a utility in the UK, setting up showdown with Octopus Energy
Tesla is now an officially licensed utility in the United Kingdom, according to a new report from The Wall Street Journal. The automotive and energy company recently received a license from the Office of Gas and Electricity Markets, allowing it to sell electricity directly to households and commercial and industrial users.
The company has long dabbled in electricity markets. Its first pure energy products, the Powerwall and Powerpack, were introduced in 2015, but it wasn’t until a year later, when Tesla merged with SolarCity, that it started scaling the division rapidly. In 2022, the company launched Tesla Electric in Texas, which allowed it to sell electricity directly to customers. Powerwall owners can sell electricity from their batteries by participating in the company’s virtual power plant.
The new division, known as Tesla Energy Ventures, will compete with existing utilities in the U.K., including EDF, E.ON, and Octopus Energy. The competition with Octopus should prove particularly interesting. Since its founding in 2015, Octopus has become the country’s largest utility by focusing on slick software, renewable energy, and creative marketing. Sound familiar?
A writer is suing Grammarly for turning her and other authors into ‘AI editors’ without consent
Grammarly released a controversial feature last week that uses AI to simulate editorial feedback, making it seem like you’re getting a critique from novelist Stephen King, the late scientist Carl Sagan, or tech journalist Kara Swisher. But Grammarly did not get permission from the hundreds of experts it included in this feature, called “Expert Review,” to use their names.
One of the affected writers, journalist Julia Angwin, has filed a class action lawsuit against Superhuman, the parent company of Grammarly, arguing that the company violated her privacy and publicity rights, as well as those of the other writers it impersonated. The class action structure allows other affected writers to join Angwin’s case.
“I have worked for decades honing my skills as a writer and editor, and I am distressed to discover that a tech company is selling an imposter version of my hard-earned expertise,” Angwin said in a statement.
The situation is more than a little ironic — Angwin has spent her career leading investigations into tech companies’ impacts on privacy. Other critics of this kind of technology, like renowned AI ethicist Timnit Gebru, were also included in Grammarly’s “Expert Review.”
The “Expert Review” feature, available only to subscribers paying $144 a year, predictably fails to deliver on the promise of thoughtful feedback.
Casey Newton, the founder and editor of the tech newsletter Platformer and another person impersonated by Grammarly, fed one of his articles into the tool and got feedback from Grammarly’s approximation of tech journalist Kara Swisher. Grammarly’s imitation of Swisher produced “feedback” so generic that it raises the question of why the company would go through the rigmarole of using these writers’ likenesses in the first place.
Here is what Grammarly’s approximation of Kara Swisher told him: “Could you briefly compare how daily AI users versus AI skeptics articulate risk, creating a through-line readers can follow?”
Newton relayed the message from the AI approximation of Kara Swisher to the actual, real human being, Kara Swisher.
“You rapacious information and identity thieves better get ready for me to go full McConaughey on you,” Swisher texted Newton (referring to Grammarly). “Also, you suck.”
Grammarly has since disabled the “Expert Review” feature, according to a LinkedIn post by Superhuman CEO Shishir Mehrotra. While Mehrotra offered an apology, he continued to defend the idea of the feature.
“Imagine your professor sharpening your essay, your sales leader reshaping a customer pitch, a thoughtful critic challenging your arguments, or a leading expert elevating your proposal,” he wrote. “For experts, this is a chance to build that same ubiquitous bond with users, much like Grammarly has.”
