Hackers publish personal information stolen during Harvard, UPenn data breaches
A notorious hacking group has claimed responsibility for last year’s data breaches at Harvard University and the University of Pennsylvania (UPenn) and published the data it claims to have stolen from the two schools.
On Wednesday, the group known as ShinyHunters published what it claims are more than 1 million records from each university on the group’s dedicated leak site, which the gang uses to extort its victims.
In November, UPenn confirmed a data breach of “a select group of information systems related to Penn’s development and alumni activities.” At the time, the hackers also emailed alumni from official university addresses to announce the hack.
The university blamed the breach on social engineering, a type of attack in which hackers impersonate someone trusted to trick targets into doing something they would not normally do. On its official breach disclosure web page, which has since been taken offline, UPenn did not say exactly what type of data the hackers stole, saying only that the cybercriminals accessed “systems related to Penn’s development and alumni activities.”
TechCrunch verified a portion of the dataset by confirming details with alumni and checking it against public records, including matching the data to student ID numbers.
Later in November, Harvard University also confirmed a breach of its alumni systems, blaming it on a voice phishing attack, in which hackers use a phone call to trick targets into taking an action, such as clicking a link or opening an attachment.
Harvard said that the stolen data included email addresses, phone numbers, home and business addresses, event attendance, details of donations to the university, and other biographical information relating to the university’s fundraising and alumni engagement activities.
The data published by ShinyHunters, which TechCrunch has seen, appears to match the type of information that both universities said was stolen last year.
The hackers said they published the stolen data because the universities refused to pay a ransom to stop them from doing so. Cybercriminals like ShinyHunters often try to extort their victims, demanding payment in exchange for not publishing the data they stole; if the victims refuse to pay, the hackers release the data online.
During the UPenn breach, the hackers made it seem like they had political motives, in particular expressing discontent with affirmative action policies. “We hire and admit morons because we love legacies, donors, and unqualified affirmative action admits,” the hackers wrote in the email sent to alumni.
ShinyHunters is not known to have political motives. The hackers did not respond to a question asking why they included that language in the email.
Penn spokesperson Ron Ozio told TechCrunch that the university is “analyzing the data and will notify any individuals if required by applicable privacy regulations.”
Harvard did not respond to a request for comment.
Mirage raises $75M to continue building models for its AI video editing app Captions
Mirage, the maker of video-editing app Captions, has raised $75 million in growth financing from General Catalyst’s Customer Value Fund (CVF).
Over the past year, the startup has made significant changes to both its product and its corporate identity. It rebranded from Captions to Mirage to position itself as an AI lab that produces multiple models and caters to industries like advertising and marketing. It has also trained a model specifically for pacing, framing, and attention dynamics in short videos.
The company also switched to a freemium model in January 2025 to better compete with apps like ByteDance’s CapCut and Meta’s Edits, which was released later that year. It now also offers a video-creation suite, incorporating some of Captions’ features, that lets companies create and distribute videos in bulk.
Mirage’s co-founder and CEO Gaurav Misra said that the company aims to create more models. However, he didn’t specify what its next set of models would do, only saying that they would be focused on “assembly intelligence” — basically putting together a video using different sources and components.
Speaking about Mirage’s new audio model, which it claims can preserve accents in generated videos, Misra said, “The reason for the audio model was that we noticed that there was a gap in accents because a lot of our users are international. Accents are just very important. There was my own dad’s example. He was trying to use the app, and he would say a word in an Indian accent, and it would always make it sound like he’s talking in an American accent.”
According to data from analytics firm Appfigures, Captions has been downloaded over 3.2 million times in the last 365 days and has brought in $28.4 million in in-app revenue. Misra said the platform has been used to create more than 200 million videos so far, and that it has attracted an international user base, with only 25% of its revenue coming from the U.S.
Currently, Mirage’s marketing suite is available on the web, while Captions remains largely a mobile-first editing suite. The company aims to merge the two platforms to better target small businesses looking to create marketing videos.
Pranav Singhvi, managing director of General Catalyst’s CVF, said Mirage has strong product-market fit.
“Mirage’s business equation is extremely figured out. They know exactly how to spend that dollar and generate a very attractive ROI. If you think about the market they’re going after, it’s in a sense an infinite total addressable market. You can start out in the creator world, the influencer world, and then use that as a mechanism to sell to enterprises as well,” Singhvi told TechCrunch.
There are tons of companies building AI video-generation pipelines for marketing. Canva has introduced several tools around marketing creation and tracking, while platforms like D-ID, HeyGen, Webflow, and Avataar have been releasing new models and features.
However, Singhvi seems confident about Mirage’s positioning and unit economics. “Regardless of what the other tools are out there, Mirage is clearly ahead of the pack from a unit economics standpoint. Ultimately, it’s all a reflection of their product,” he said.
Mirage aims to use the fresh capital to fuel growth and expand in high-growth Asian markets.
Spotify’s new SongDNA feature maps how your favorite songs are connected
Spotify announced on Tuesday the global rollout of a new feature, SongDNA, that lets listeners more deeply explore their favorite music.
Now available to Premium subscribers on iOS and Android, the feature provides an interactive experience that lets users trace the components of a song beyond the singer, songwriter, or musician. With SongDNA, listeners can explore connections such as who has covered the song, as well as samples, interpolations, and the other projects the song’s collaborators have been involved in.
The idea is something of an expansion of the existing “About the Song” feature, allowing Spotify’s customers to learn more about the writers, producers, and collaborators behind their favorite music. This could lead users to see how artists are connected to and influenced by one another’s work. For those in the music industry itself, the feature could help them find new collaborators, producers, engineers, and others they may want to work with.
It also gives those working in the background of music production more visibility and credit than they’ve previously had in the streaming age.

TechCrunch reported in October that Spotify was developing the SongDNA feature as a way to help users discover music through a song’s credits, after references to the feature were spotted in the app’s code by reverse engineer Jane Manchun Wong. The following month, the company officially confirmed its plans to launch SongDNA in early 2026.
In part, SongDNA has been built on top of data from the online community-built music database WhoSampled, which Spotify acquired last year. The feature also competes with TIDAL’s interactive credits, which similarly focus on the contributors behind the songs you stream.
“By bringing collaborators, samples, and covers together in one place, we’re making it easier for fans to discover new music and see how songs connect and come to life—while giving songwriters, producers, and rightsholders meaningful recognition for the role they play in creating it,” said Jacqueline Ankner, Spotify’s head of Songwriter & Publisher Partnerships, in a statement.
The feature is rolling out now in beta to Premium users globally on iOS and Android, with the rollout expected to be complete sometime in April.
Snapchat’s new ‘AI Clips’ Lens format turns photos into five-second videos
Snapchat announced on Tuesday that it’s launching AI Clips in Lens Studio, its platform that lets creators design and publish AR and AI effects called Lenses. The new Clips are an AI-powered Lens format that transforms a single photo into a five-second video.
Unlike open-ended text-to-video tools, AI Clips are designed as a closed-prompt experience: creators design the Lens, and users can tap it to generate a video from their own photos.
For example, a Lens creator could design a Lens that allows users to generate a video of themselves walking down a red carpet using their own photo.
Snapchat says both experienced and new developers can use the new Lens format to turn a single prompt into a published Lens in minutes without the need for external tools.
AI Clips are available to Snapchat users who are subscribed to the platform’s Lens+ offering, which costs $8.99 per month. As its name suggests, Lens+ gives users access to exclusive Lenses and AR experiences, along with the features available as part of the standard Snapchat+ subscription.

“For the first time, developers can build and publish photo-to-video AI directly to Snapchat from the GenAI Suite in Lens Studio,” Snapchat wrote in a blog post. “There’s currently nothing else on the market that combines closed-prompt AI video generation with direct photo input, real distribution, and monetization.”
Lens creators enrolled in Lens+ Payouts, Snapchat’s monetization program that allows developers to earn money from their Lenses, can earn revenue from the AI Clips they create.
Snapchat isn’t the only platform focused on letting users create AI clips from their own photos. Last week, YouTube announced it was rolling out “Reimagine,” a new feature that lets users transform a single frame from an existing YouTube Short into an eight-second clip using their own photo.
The launch of AI Clips comes the same day that Snapchat announced that users created nearly two trillion Snaps, or 63,000 Snaps per second, in 2025.
