Tech
Amazon to begin testing AI tools for film and TV production next month
Last summer, Amazon MGM Studios launched a dedicated AI Studio to develop proprietary AI tools to streamline TV and film production, with a focus on areas like improving character consistency across shots and supporting pre- and post-production.
According to a report from Reuters, those tools are now ready to move beyond internal testing. Amazon will begin a closed beta program in March, inviting industry partners to try out its AI tools.
Amazon said it anticipates sharing initial outcomes from the program by May. The company declined to provide further details when approached by TechCrunch for comment.
The AI Studio is collaborating with industry figures such as Robert Stromberg, known for “Maleficent”; Kunal Nayyar of “The Big Bang Theory”; and former Pixar animator Colin Brady to learn how best to implement these tools. Amazon is also tapping Amazon Web Services for support and intends to work with several LLM providers.
Albert Cheng, who heads the AI Studio initiative, emphasized that the goal is to support creative teams, not replace them. The focus is on improving efficiency and reducing costs while ensuring that intellectual property is protected and AI-generated content isn’t absorbed into other AI models. One example cited is Amazon’s “House of David” series, which featured 350 AI-generated shots in its second season.
However, the growing adoption of AI in Hollywood has stirred plenty of debate. Many in the industry worry about what it means for jobs, creativity, and the future of filmmaking.
The conversations around AI are only getting louder as more companies experiment with these new tools. For instance, Netflix has also jumped on the AI bandwagon, with co-CEO Ted Sarandos revealing that its series “The Eternaut” used generative AI to create a building collapse scene.
In recent years, Amazon has cited its success with AI as a factor in layoffs. The company eliminated 16,000 jobs in January, following 14,000 layoffs last October.
Tech
Mirage raises $75M to continue building models for its AI video editing app Captions
Mirage, the maker of video-editing app Captions, has raised $75 million in growth financing from General Catalyst’s Customer Value Fund (CVF).
Over the past year, the startup has made significant changes to both its product and its corporate identity. It rebranded from Captions to Mirage to position itself as an AI lab that produces a range of models and caters to industries like advertising and marketing. It has also trained a model specifically for pacing, framing, and attention dynamics in short videos.
The company also switched to a freemium model in January 2025 to better compete with apps like ByteDance’s CapCut and Meta’s Edits, which was released later that year. It now also offers a video-creation suite, incorporating some of Captions’ features, that lets companies create and distribute videos in bulk.
Mirage co-founder and CEO Gaurav Misra said the company aims to build more models. He didn’t specify what its next set of models would do, saying only that they would focus on “assembly intelligence,” which is essentially assembling a video from different sources and components.
Speaking about Mirage’s new audio model, which it claims can preserve accents in generated videos, Misra said, “The reason for the audio model was that we noticed that there was a gap in accents because a lot of our users are international. Accents are just very important. There was my own dad’s example. He was trying to use the app, and he would say a word in an Indian accent, and it would always make it sound like he’s talking in an American accent.”
According to data from analytics firm Appfigures, Captions has been downloaded over 3.2 million times in the last 365 days and has brought in $28.4 million in in-app revenue. Misra said the platform has been used to create more than 200 million videos so far, and that it has attracted an international user base, with only 25% of its revenue coming from the U.S.
Currently, Mirage’s marketing suite is available on the web, and Captions largely offers a mobile-first editing suite. The company aims to merge these two platforms to better target small businesses that may be looking to create marketing videos.
Pranav Singhvi, managing director of General Catalyst’s CVF, said Mirage has strong product-market fit.
“Mirage’s business equation is extremely figured out. They know exactly how to spend that dollar and generate a very attractive ROI. If you think about the market they’re going after, it’s in a sense an infinite total addressable market. You can start out in the creator world, the influencer world, and then use that as a mechanism to sell to enterprises as well,” Singhvi told TechCrunch.
There are tons of companies building AI video-generation pipelines for marketing. Canva has introduced several tools around marketing creation and tracking, while platforms like D-ID, HeyGen, Webflow, and Avataar have been releasing new models and features.
However, Singhvi seems confident about Mirage’s positioning and unit economics. “Regardless of what the other tools are out there, Mirage is clearly ahead of the pack from a unit economics standpoint. Ultimately, it’s all a reflection of their product,” he said.
Mirage aims to use the fresh capital to fuel growth and expand in high-growth Asian markets.
Tech
Spotify’s new SongDNA feature maps how your favorite songs are connected
Spotify announced on Tuesday the global rollout of a new feature, SongDNA, that lets listeners more deeply explore their favorite music.
Now available to Premium subscribers on iOS and Android, the feature provides an interactive experience that lets users trace a song’s contributors beyond the singer, songwriter, or musician. With SongDNA, listeners can explore connections such as who has covered the song, along with samples, interpolations, and the other projects the song’s collaborators have been involved in.
The idea is something of an expansion of the existing “About the Song” feature, allowing Spotify’s customers to learn more about the writers, producers, and collaborators behind their favorite music. This could lead users to see how artists are connected to and influenced by one another’s work. For those in the music industry itself, the feature could help them find new collaborators, producers, engineers, and others they may want to work with.
It also gives those working in the background of music production more visibility and recognition than they’ve previously had in the streaming age.
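Spotify hasn’t published SongDNA’s underlying data model, but the feature it describes maps naturally onto a graph of songs joined by typed relationships (covers, samples, interpolations, shared collaborators). A minimal sketch of that idea in Python, with all class and relation names hypothetical:

```python
from collections import defaultdict

# Hypothetical sketch only -- Spotify hasn't disclosed how SongDNA stores
# connections. This models songs as nodes joined by typed, directed edges.

class SongGraph:
    def __init__(self):
        # song title -> list of (relation, other song) edges
        self.edges = defaultdict(list)

    def link(self, song_a, relation, song_b):
        # The relation reads left to right: link("A", "sampled_by", "B")
        # means song A was sampled by song B.
        self.edges[song_a].append((relation, song_b))

    def connections(self, song):
        # Everything a SongDNA-style view would surface for one song.
        return self.edges[song]

graph = SongGraph()
graph.link("Good Times", "sampled_by", "Rapper's Delight")
graph.link("Hurt (Nine Inch Nails)", "covered_by", "Hurt (Johnny Cash)")
print(graph.connections("Good Times"))
# [('sampled_by', "Rapper's Delight")]
```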

TechCrunch reported in October that Spotify was developing the SongDNA feature as a way to help users discover music through a song’s credits, after references to the feature were spotted in the app’s code by reverse engineer Jane Manchun Wong. The following month, the company officially confirmed its plans to launch SongDNA in early 2026.
SongDNA is built in part on data from WhoSampled, the community-built online music database that Spotify acquired last year. The feature also competes with TIDAL’s interactive credits, which similarly focus on the contributors behind the songs you stream.
“By bringing collaborators, samples, and covers together in one place, we’re making it easier for fans to discover new music and see how songs connect and come to life—while giving songwriters, producers, and rightsholders meaningful recognition for the role they play in creating it,” said Jacqueline Ankner, Spotify’s head of Songwriter & Publisher Partnerships, in a statement.
The feature is rolling out now in beta to Premium users globally across iOS and Android devices, with plans for the rollout to be complete sometime in April.
Tech
Snapchat’s new ‘AI Clips’ Lens format turns photos into five-second videos
Snapchat announced on Tuesday that it’s launching AI Clips in Lens Studio, its platform that lets creators design and publish AR and AI effects called Lenses. The new AI-powered Lens format transforms a single photo into a five-second video.
Unlike open-ended text-to-video tools, AI Clips are designed as a closed-prompt experience: the creator designs the Lens around a fixed prompt, and users tap it to generate a video from their own photos.
For example, a Lens creator could design a Lens that allows users to generate a video of themselves walking down a red carpet using their own photo.
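Snap hasn’t published the underlying API, but the closed-prompt shape can be sketched in a few lines of Python: the creator fixes the prompt once at Lens-design time, and the end user contributes only a photo. Every name below is hypothetical:

```python
# Hypothetical sketch -- Snap hasn't published Lens Studio internals, so
# every name here is illustrative. The point is the closed-prompt shape:
# the creator fixes the prompt; the user supplies only a photo.

def video_model(prompt: str, photo: bytes, seconds: int) -> bytes:
    # Stand-in for whatever generation backend a Lens would call.
    return f"<{seconds}s video: {prompt}>".encode()

# Creator side: the prompt is baked into the Lens at publish time.
RED_CARPET_PROMPT = "the person in the photo walks down a red carpet"

def red_carpet_lens(user_photo: bytes) -> bytes:
    """User side: tap the Lens with a photo; there is no prompt to type."""
    return video_model(RED_CARPET_PROMPT, user_photo, seconds=5)

# Contrast with an open-ended tool, where the user controls the prompt:
def open_ended_clip(user_prompt: str, user_photo: bytes) -> bytes:
    return video_model(user_prompt, user_photo, seconds=5)

print(red_carpet_lens(b"selfie-bytes"))
```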
Snapchat says both experienced and new developers can use the new Lens format to turn a single prompt into a published Lens in minutes without the need for external tools.
AI Clips are available to Snapchat users who subscribe to the platform’s Lens+ offering, which costs $8.99 per month. As its name suggests, Lens+ gives users access to exclusive Lenses and AR experiences, along with the features available as part of the standard Snapchat+ subscription.

“For the first time, developers can build and publish photo-to-video AI directly to Snapchat from the GenAI Suite in Lens Studio,” Snapchat wrote in a blog post. “There’s currently nothing else on the market that combines closed-prompt AI video generation with direct photo input, real distribution, and monetization.”
Lens creators enrolled in Lens+ Payouts, Snapchat’s program that lets developers monetize their Lenses, can earn revenue from the AI Clips they create.
Snapchat isn’t the only platform letting users create AI clips from their own photos. YouTube announced last week that it’s rolling out “Reimagine,” a feature that lets users transform a single frame from an existing YouTube Short into an eight-second clip using their own photo.
The launch of AI Clips comes the same day Snapchat announced that users created nearly two trillion Snaps, or roughly 63,000 Snaps per second, in 2025.
