Sam Altman got exceptionally testy over Claude Super Bowl ads
Anthropic’s Super Bowl commercial, one of four ads the AI lab dropped on Wednesday, begins with the word “BETRAYAL” splashed boldly across the screen. The camera pans to a man earnestly asking a chatbot (obviously intended to depict ChatGPT) for advice on how to talk to his mom.
The bot, portrayed by a blonde woman, offers some classic bits of advice. Start by listening. Try a nature walk! Then the advice twists into a pitch for a fictitious (we hope!) cougar-dating site called Golden Encounters. Anthropic finishes the spot by saying that while ads are coming to AI, they won’t be coming to its own chatbot, Claude.
Another one features a slight young man looking for advice on building a six pack. After offering his height, age, and weight, the bot serves him an ad for height-boosting insoles.
The Anthropic commercials are cleverly aimed at OpenAI’s users, following that company’s recent announcement that ads are coming to ChatGPT’s free tier. And they caused an immediate stir, spawning headlines that Anthropic “mocks,” “skewers,” and “dunks on” OpenAI.
They are funny enough that even Sam Altman admitted on X that he laughed at them. But he clearly didn’t really find them funny. They inspired him to write a novella-sized rant that devolved into calling his rival “dishonest” and “authoritarian.”
In that post, Altman explains that an ad-supported tier is intended to shoulder the burden of offering free ChatGPT to many of its millions of users. ChatGPT is still the most popular chatbot by a large margin.
But the OpenAI CEO insisted the ads were “dishonest” in implying that ChatGPT will twist a conversation to insert an ad (and possibly for an off-color product, to boot). “We would obviously never run ads in the way Anthropic depicts them,” Altman wrote in the social media post. “We are not stupid and we know our users would reject that.”
Indeed, OpenAI has promised ads will be separate, labeled, and will never influence a chat. But the company has also said it is planning on making them conversation-specific, which is the central allegation of Anthropic’s ads. As OpenAI explained in its blog: “We plan to test ads at the bottom of answers in ChatGPT when there’s a relevant sponsored product or service based on your current conversation.”
Altman then went on to fling some equally questionable assertions at his rival. “Anthropic serves an expensive product to rich people,” he wrote. “We also feel strongly that we need to bring AI to billions of people who can’t pay for subscriptions.”
But Claude has a free tier, too, with subscriptions at $0, $17, $100, and $200. ChatGPT’s tiers are $0, $8, $20, and $200. One could argue the subscription tiers are fairly equivalent.
Altman also alleged in his post that “Anthropic wants to control what people do with AI.” He argues it blocks usage of Claude Code by “companies they don’t like,” such as OpenAI, and said Anthropic tells people what they can and can’t use AI for.
True, Anthropic’s whole marketing deal since day one has been “responsible AI.” The company was founded by OpenAI alums, after all, who claimed they grew alarmed about AI safety while working there.
Still, both chatbot companies have usage policies, AI guardrails, and talk about AI safety. And while OpenAI allows ChatGPT to be used for erotica and Anthropic does not, OpenAI, too, has determined some content should be blocked, particularly with regard to mental health.
Yet Altman took this Anthropic-tells-you-what-to-do argument to an extreme when he accused Anthropic of being “authoritarian.”
“One authoritarian company won’t get us there on their own, to say nothing of the other obvious risks. It is a dark path,” he wrote.
Using “authoritarian” in a rant over a cheeky Super Bowl ad is misplaced, at best. It’s particularly tactless when considering the current geopolitical environment in which protesters around the world have been killed by agents of their own government. While business rivals have been duking it out in ads since the beginning of time, clearly Anthropic hit a nerve.
Mirage raises $75M to continue building models for its AI video editing app Captions
Mirage, the maker of video-editing app Captions, has raised $75 million in growth financing from General Catalyst’s Customer Value Fund (CVF).
Over the past year, the startup has made significant changes to both its product and corporate identity. It rebranded from Captions to Mirage to position itself as an AI lab that produces multiple models and caters to industries like advertising and marketing. It has also trained a model specifically for pacing, framing, and attention dynamics in short videos.
The company also switched to a freemium model in January 2025 to better compete with apps like ByteDance’s CapCut and Meta’s Edits, the latter of which was released later that year. It now also offers a video-creation suite, which incorporates some of the features from Captions and lets companies create and distribute videos in bulk.
Mirage’s co-founder and CEO Gaurav Misra said that the company aims to create more models. However, he didn’t specify what its next set of models would do, only saying that they would be focused on “assembly intelligence” — basically putting together a video using different sources and components.
Speaking about Mirage’s new audio model, which it claims can preserve accents in generated videos, Misra said, “The reason for the audio model was that we noticed that there was a gap in accents because a lot of our users are international. Accents are just very important. There was my own dad’s example. He was trying to use the app, and he would say a word in an Indian accent, and it would always make it sound like he’s talking in an American accent.”
According to data from analytics firm Appfigures, Captions has been downloaded over 3.2 million times in the last 365 days and has brought in $28.4 million in in-app revenue. Misra said the platform has been used to create more than 200 million videos so far, and that it has attracted an international user base, with only 25% of its revenue coming from the U.S.
Currently, Mirage’s marketing suite is available on the web, and Captions largely offers a mobile-first editing suite. The company aims to merge these two platforms to better target small businesses that may be looking to create marketing videos.
Pranav Singhvi, managing director of General Catalyst’s CVF fund, said Mirage has great product-market fit.
“Mirage’s business equation is extremely figured out. They know exactly how to spend that dollar and generate a very attractive ROI. If you think about the market they’re going after, it’s in a sense an infinite total addressable market. You can start out in the creator world, the influencer world, and then use that as a mechanism to sell to enterprises as well,” Singhvi told TechCrunch.
There are tons of companies building AI video-generation pipelines for marketing. Canva has introduced several tools around marketing creation and tracking, while platforms like D-ID, HeyGen, Webflow, and Avataar have been releasing new models and features.
However, Singhvi seems confident about Mirage’s positioning and unit economics. “Regardless of what the other tools are out there, Mirage is clearly ahead of the pack from a unit economics standpoint. Ultimately, it’s all a reflection of their product,” he said.
Mirage aims to use the fresh capital to fuel growth and expand in high-growth Asian markets.
Spotify’s new SongDNA feature maps how your favorite songs are connected
Spotify announced on Tuesday the global rollout of a new feature, SongDNA, that lets listeners more deeply explore their favorite music.
Now available to Premium subscribers on iOS and Android, the feature provides an interactive experience that lets users trace components of a song beyond the singer, songwriter, or musician. With SongDNA, listeners can explore other connections, like who has covered the song, plus details such as samples, interpolations, and the other projects the song’s collaborators have been involved in.
The idea is something of an expansion of the existing “About the Song” feature, allowing Spotify’s customers to learn more about the writers, producers, and collaborators behind their favorite music. This could lead users to see how artists are connected to and influenced by one another’s work. For those in the music industry itself, the feature could help them find new collaborators, producers, engineers, and others they may want to work with.
It also offers those in the background of music production more visibility and credibility than they’ve previously had in the streaming age.

TechCrunch reported in October that Spotify was developing the SongDNA feature as a way to help users discover music through a song’s credits, after references to the feature were spotted in the app’s code by reverse engineer Jane Manchun Wong. The following month, the company officially confirmed its plans to launch SongDNA in early 2026.
In part, SongDNA has been built on top of data from the online community-built music database WhoSampled, which Spotify acquired last year. The feature also competes with TIDAL’s interactive credits, which similarly focus on the contributors behind the songs you stream.
“By bringing collaborators, samples, and covers together in one place, we’re making it easier for fans to discover new music and see how songs connect and come to life—while giving songwriters, producers, and rightsholders meaningful recognition for the role they play in creating it,” said Jacqueline Ankner, Spotify’s head of Songwriter & Publisher Partnerships, in a statement.
The feature is rolling out now in beta to Premium users globally across iOS and Android devices, with plans for the rollout to be complete sometime in April.
Snapchat’s new ‘AI Clips’ Lens format turns photos into five-second videos
Snapchat announced on Tuesday that it’s launching AI Clips in Lens Studio, its platform that lets creators design and publish AR and AI effects called Lenses. The new Clips are an AI-powered Lens format that transforms a single photo into a five-second video.
Unlike open-ended text-to-video tools, AI Clips are designed as a closed-prompt experience, where Lens creators design the Lens, and users can tap it to generate a video from their own photos.
For example, a Lens creator could design a Lens that allows users to generate a video of themselves walking down a red carpet using their own photo.
Snapchat says both experienced and new developers can use the new Lens format to turn a single prompt into a published Lens in minutes without the need for external tools.
AI Clips are available to Snapchat users who are subscribed to that platform’s Lens+ offering, which costs $8.99 per month. As its name suggests, Lens+ gives users access to exclusive Lenses and AR experiences, along with the features available as part of the standard Snapchat+ subscription.

“For the first time, developers can build and publish photo-to-video AI directly to Snapchat from the GenAI Suite in Lens Studio,” Snapchat wrote in a blog post. “There’s currently nothing else on the market that combines closed-prompt AI video generation with direct photo input, real distribution, and monetization.”
Lens creators enrolled in Lens+ Payouts, Snapchat’s monetization program that allows developers to earn money from their Lenses, can earn revenue from the AI Clips they create.
Snapchat isn’t the only platform focused on letting users create AI clips from their own photos, as YouTube announced last week that it was rolling out “Reimagine,” a new feature that lets users transform a single frame from an existing YouTube Short into an eight-second clip using their own photo.
The launch of AI Clips comes the same day that Snapchat announced that users created nearly two trillion Snaps, or 63,000 Snaps per second, in 2025.
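The two figures in that statistic can be checked against each other with quick arithmetic; a minimal sketch, assuming a non-leap 365-day year:

```python
# Both figures come from the article: nearly two trillion Snaps created
# in 2025, which Snapchat quotes as 63,000 Snaps per second.
SNAPS_2025 = 2_000_000_000_000          # "nearly two trillion"
SECONDS_PER_YEAR = 365 * 24 * 60 * 60   # 31,536,000 in a non-leap year

snaps_per_second = SNAPS_2025 / SECONDS_PER_YEAR
print(round(snaps_per_second))  # prints 63420, consistent with the quoted 63,000
```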
