Sequen snags $16M to bring TikTok-style personalization tech to any consumer company

At Etsy, Zoë Weil helped drive a billion-dollar increase in gross merchandise volume within a single year by improving the online marketplace’s AI ranking systems. With her new startup, Sequen, she aims to bring her and her co-founders’ years of AI research and product development to other businesses in the consumer space.

The company, which just closed a $16 million Series A round, offers real-time personalization technology and ranking infrastructure — technology long used by the world’s biggest tech firms but largely inaccessible to other large consumer businesses because of the massive datasets required.

While those outside the tech industry may not understand what this technology involves, anyone who’s used consumer apps like TikTok, Instagram, or YouTube has been the target of these systems.

“Modern tech isn’t really recommending content anymore,” explains Sequen CEO Weil. “It’s bending your will in subtle ways over time to make you actually want things. And, in fact, the tech has gotten so good that a lot of people suspect platforms are eavesdropping on their conversations.”

Weil credits this phenomenon to something called the large event model. While the large language models (LLMs) behind chatbots like ChatGPT generalize over text, large event models generalize over streams of events, and human behavior in particular. This technology has use cases that go beyond building a better algorithm.


Weil believes that Sequen could eventually replace the cookie — a tracking technology that personalizes web experiences for end users, but in a way that has raised privacy concerns and triggered regulation.

“Our large event models learn from live user actions, not just clicks and scrolls, but also hovers, conversations, and stuff within a given session — not static profiles or third-party cookies,” Weil says. “That’s how you personalize in real time, even with sparse data. So yes, we do unlock TikTok’s algorithms for Fortune 500 companies that don’t have the infrastructure to do it … but I would say we’re taking it a step further,” she adds.

Businesses that work with Sequen integrate with the startup’s RankTune platform, which allows them to access Sequen’s frontier ranking models and real-time ranking models through APIs. (Sequen’s customers are already using some kind of in-house API to power their relevance stack, so they just swap out their API for Sequen’s.)

What’s more, Sequen’s technology is not as privacy-invasive as the cookie because it’s based on real-time data — the user’s identity is not needed to personalize the results. And it’s fast, with sub-20-millisecond decision-making.

“Our large event models are able to generalize to streams of real-time events that they get,” says Weil. “It doesn’t matter who is performing those events — they’re able to understand events and be able to make sense of them without relying on the user’s identity. So actually, the user’s identity is completely irrelevant.”
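Sequen hasn’t published its API, so the following is purely an illustrative sketch of the idea Weil describes: ranking candidates from an anonymous stream of in-session events (clicks, hovers, scrolls) with no user identity involved. Every name and the scoring logic here are invented for illustration; a real deployment would be a low-latency call to a hosted ranking service, not local scoring.

```python
# Hypothetical sketch of identity-free, session-based ranking.
# All names and logic are invented; this is not Sequen's API.
# The key idea from the article: rank from the current session's
# event stream alone, with no user profile or cookie.

def rank(candidates, session_events):
    """Order candidates by interest inferred from session events.

    Stand-in for a hosted ranking API call; a toy signal weights
    each candidate by how often its category appears in the
    session's recent events.
    """
    interest = {}
    for event in session_events:
        interest[event["category"]] = interest.get(event["category"], 0) + 1
    scored = [(interest.get(c["category"], 0), c["id"]) for c in candidates]
    scored.sort(reverse=True)
    return [item_id for _, item_id in scored]

session = [
    {"type": "hover", "category": "sofas"},
    {"type": "click", "category": "sofas"},
    {"type": "scroll", "category": "lamps"},
]
catalog = [
    {"id": "lamp-1", "category": "lamps"},
    {"id": "sofa-9", "category": "sofas"},
    {"id": "rug-3", "category": "rugs"},
]
print(rank(catalog, session))  # ['sofa-9', 'lamp-1', 'rug-3']
```

Note that nothing in the sketch identifies the user: the ranking depends only on what happened in the current session, which is the privacy argument Weil is making.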


Despite this more privacy-forward approach, the technology still delivers “crazy revenue lift,” Weil claims.

In one example, a large furniture company saw a 7% revenue lift after switching to Sequen; previously, a 0.4% lift was considered a win. Another customer, Fetch Rewards, saw a 20% lift in net revenue in just under 11 days. Sequen is also working with a streaming media company and an online travel agency.

The system is priced based on requests per second (RPS), with tiers at 500 RPS, 1,000 RPS, and up; per-request pricing drops as the tiers increase. Among its first five customers, contracts are in the seven figures.

“What we’ve seen consistently across the board is people opting for the highest tier, because as soon as they see us in one use case, they want to adopt us on their entire platform,” notes Weil.

Weil started her career in this space on the research side, but quickly realized she’d rather build products. Most of her career since has been spent helping companies turn these types of ranking products into business value, which is what led her to create Sequen.

Now, in under 18 months, the company has processed some 10 billion monthly requests and won business at a handful of Fortune 500 companies. Its offering includes proprietary large event models, ranking models, algorithms, and more.

At the startup, Weil is joined by Ethan Benjamin, who worked with her at Etsy, and co-founders Mo Afshar and Alexander Thom. Raphael Louca recently joined from Meta to become Sequen’s chief product officer. Based in New York, the company’s 14-person team includes those from DeepMind, Meta, Anthropic, and elsewhere.

Sequen’s Series A was co-led by White Star Capital and Threshold Ventures, with participation from its prior investors, including Greycroft, which had led its seed round. To date, Sequen has raised $22 million.


Mave Health aims to improve attention and mood with its brain-stimulating headset

Over the past few years, there has been a steady influx of startups trying to treat issues like depression, period pain, PMS, anxiety, and insomnia by using wearables that apply electrical, magnetic, or ultrasonic signals to stimulate the brain.

San Francisco-based Mave Health is the latest in that fleet, and claims its $495 neuromodulation headset can improve attention and mood, regulate stress, and even measure mental health. The startup is positioning the wearable as a non-medical device so it won’t need clearance from agencies like the U.S. Food and Drug Administration (FDA) to sell in the U.S.

Dhawal Jain, who started the company in 2023 with his college batchmates Jai Sharma (CMO) and Aman Kumar (CTO), said he realized the need for such a device after his flatmate’s fiancée died by suicide during the COVID-19 pandemic lockdowns.

Founders Aman Kumar, Jai Sharma, and Dhawal Jain. Image credits: Mave Health

“In India, committing suicide is a crime, which meant there was police involved, and we had to speak to her psychologist. The answers we got from them made us question if any of it made sense. We started connecting with other psychologists and were getting the same answers,” Jain said.

The founders felt that there was no tangible way to measure progress in the mental health space. “For example, if you ask a psychologist how do you know if a person is making progress, their response to it is very standard, which is that it’s not about progress. It’s about process […] But for somebody with depression who is spending a lot of time in therapy, progress is important. So how do you know whether they’re making progress or not? And even these basic questions were not being answered.”

In an effort to solve that, the team started to learn more about neuroscience by talking to experts, and soon after realized that while there has been progress around neuromodulation in labs, consumers haven’t had the benefit of it.

The company then worked with medical device and mental health experts to conduct trials of the technology. But eventually it took a different route and positioned its headset as a lifestyle device. Jain said this approach would let Mave reach a wider audience.

The device and technology

Mave Health’s device employs transcranial direct current stimulation (tDCS), a non-invasive technique that administers low-intensity currents to the brain to stimulate neurons. The technique is sometimes used in psychology and is generally considered safe, with mild, temporary side effects such as itching or discomfort.

The headset delivers a low 1-2 mA current to stimulate the brain. The startup says customers can use the roughly 100-gram device anytime, and recommends daily 20-minute sessions for the first few weeks of use.

The startup also provides an app that can measure long-term trends in mood, focus, and stress levels, and can integrate with other health data to track measures like heart rate variability (HRV). Jain said users complete a self-reported baseline assessment when they begin, followed by follow-up assessments every two to four weeks, which helps Mave understand whether the device is helping a user over the long term.
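The assessment loop Jain describes is simple bookkeeping: record a baseline self-report, schedule a follow-up every two to four weeks, and measure change against the baseline. A minimal sketch of that logic follows; all names and the 14-day default are invented for illustration, and this is not Mave Health’s code.

```python
from datetime import date, timedelta

# Illustrative sketch of the assessment cadence described in the
# article: a baseline self-report, follow-ups every two to four
# weeks, and progress measured as change from baseline.

class AssessmentLog:
    def __init__(self, baseline_score, start, interval_days=14):
        self.baseline = baseline_score
        self.interval = timedelta(days=interval_days)
        self.next_due = start + self.interval
        self.scores = []  # (date, score) pairs

    def record(self, score, when):
        """Log a follow-up assessment and schedule the next one."""
        self.scores.append((when, score))
        self.next_due = when + self.interval

    def change_from_baseline(self):
        """Positive means improvement over the baseline self-report."""
        if not self.scores:
            return 0
        return self.scores[-1][1] - self.baseline

log = AssessmentLog(baseline_score=40, start=date(2026, 1, 1))
log.record(score=52, when=date(2026, 1, 15))
print(log.change_from_baseline())  # 12
print(log.next_due)                # 2026-01-29
```

The point of the structure, per Jain’s complaint about therapy, is that progress becomes a number tracked over time rather than a vague impression.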


The company hasn’t performed any clinical trials or published any studies yet. However, Jain says it worked with more than 500 users in a private beta in 2024 and 2025, during which eight in 10 users reported a 60% increase in productivity. The startup noted that 75% of its private beta users also reported a reduction in stress from their baseline within two months of usage.

Mave Health said it has performed four observational studies across 200 participants that are under academic review with an aim to publish this year.

Dr. Himanshu Nirvan, a Delhi-based psychiatrist who worked with Mave Health as a consultant, said that tDCS-based devices are considered a proven way to address mental health-related issues. However, he noted that he hasn’t looked at the technology from a lifestyle lens.

The company says it ran a program in India with Dr. Nirvan to test the device and the technology.

“We did select a lot of patients, and it was essentially a good program in my opinion. Things like that are generally not very frequently and easily available even in the mental health management space,” Dr. Nirvan said. “I felt that for a lot of people, tDCS is actually quite a good modality, considering that it’s a very portable device. You can essentially charge it at home, take it anywhere you want, even while you’re traveling.”

Leigh Elkins Charvet, a clinical neuropsychologist and Professor of Neurology at NYU Grossman School of Medicine, told TechCrunch over email that while tDCS is considered a safe and effective approach to neuromodulation, devices need to be designed well to align electrodes properly, and users need to have regular and consistent sessions.

“One challenge is that consumers may use the device without clinical screening or clear guidance about whether it is appropriate for their symptoms. Another is that it can be difficult for users to determine whether the device is actually helping if outcomes are not being measured in a structured way,” she said.

Charvet added that the use of tDCS for broad lifestyle enhancement in healthy individuals has not been studied widely. “So far, most of the strongest research has focused on clinical populations or structured cognitive training paradigms. We do not yet have clear guidance or strong evidence supporting the use of tDCS to improve performance in otherwise healthy individuals. A lifestyle use case may still emerge, but that will rely on clearly defining target outcomes and demonstrating that effects are measurable and reproducible,” she said.

The device is currently available for preorder, and the company is aiming to ship its first batch to customers in the U.S. and India in April 2026.

The company recently raised $2.1 million in a seed funding round led by Blume Ventures, with participation from individual investors who include Tesla Autopilot AI lead Dhaval Shroff. The startup has raised just under $3 million in funding to date.


DOD says Anthropic’s ‘red lines’ make it an ‘unacceptable risk to national security’

The U.S. Department of Defense said on Tuesday evening that Anthropic poses an “unacceptable risk to national security,” marking the agency’s first rebuttal to the AI lab’s lawsuits challenging Defense Secretary Pete Hegseth’s decision last month to label the company a supply-chain risk. As part of its complaints, Anthropic had requested the court temporarily block the DOD from enforcing its label.

The crux of the DOD’s argument, made in a 40-page filing in a California federal court, is the concern that Anthropic might “attempt to disable its technology or preemptively alter the behavior of its model” before or during “warfighting operations” if the company “feels that its corporate ‘red lines’ are being crossed.”

Anthropic last summer signed a $200 million contract with the Pentagon to deploy its technology within classified systems. In later negotiations over the terms of the contract, Anthropic said it did not want its AI systems to be used for mass surveillance of Americans, and that the technology wasn’t ready for use in targeting or firing decisions for lethal weapons. The Pentagon countered that a private company shouldn’t dictate how the military uses technology.

In response, an Anthropic spokesperson pointed to CEO Dario Amodei’s late February statement: “Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.”

Chris Mattei, a lawyer specializing in First Amendment issues and a former Justice Department attorney, told TechCrunch there has been no investigation to support the DOD’s concerns of Anthropic potentially disabling or altering its AI models during warfighting operations. Without that evidence, the department’s argument fails to adequately explain how Anthropic’s negotiating position rendered it an “adversary,” Mattei argued.

“The government is relying completely on conjectural, speculative imaginings to justify a very, very serious legal step they’ve taken against Anthropic,” Mattei said. He added the department failed to “articulate a credible or even comprehensible rationale for why Anthropic’s refusal to agree to an ‘all lawful use’ provision rendered it a supply chain risk as opposed to a vendor that DOD simply didn’t want to do business with.”

Many organizations have spoken out against the DOD’s treatment of Anthropic, arguing that the department could have just ended its contract. Several tech companies and employees — including from OpenAI, Google, and Microsoft — as well as legal rights groups have filed amicus briefs in support of Anthropic. 


In its lawsuits, Anthropic accused the DOD of infringing on its First Amendment rights and punishing the company based on ideological grounds.  

“In many ways, the government’s nonsensical arguments are themselves the best evidence that the administration’s conduct was plainly a retaliatory punishment for Anthropic’s refusal to agree to the government’s terms, which, contrary to the government’s brief, is a form of protected expression,” Mattei told TechCrunch.

A hearing on Anthropic’s request for a preliminary injunction is set for next Tuesday.

An Anthropic spokesperson told TechCrunch that its decision to seek judicial review does not change its “longstanding commitment to harnessing AI to protect our national security,” but that it’s a “necessary step” to protect its business, customers, and partners.

This article has been updated to include information from Chris Mattei, a constitutional rights lawyer, and comments from Anthropic.


Russians caught stealing personal data from Ukrainians with new advanced iPhone hacking tools

A group of hackers suspected of working at least in part for the Russian government targeted iPhone users in Ukraine with a new set of hacking tools designed to steal their personal data, as well as potentially steal cryptocurrency, according to cybersecurity researchers. 

Researchers at Google and security firms iVerify and Lookout analyzed new cyberattacks against Ukrainians that were launched by a group identified only as UNC6353. The researchers looked at compromised websites in a hacking campaign that, they say, is related to one uncovered earlier this month. This most recent campaign used a hacking toolkit the companies call Darksword.

The discovery of Darksword, which follows that of a similar hacking toolkit, suggests that advanced, stealthy, and powerful spyware for iPhones may not be as rare as previously thought. Even so, Darksword targeted only users in Ukraine, implying some restraint in what could otherwise have been a wide-scale hacking campaign targeting users worldwide.

In early March, Google revealed details of a sophisticated iPhone-hacking toolkit called Coruna. The search giant said that the tool was used first by a government customer of a surveillance tech vendor, then by Russian spies targeting Ukrainians, and finally by Chinese cybercriminals looking to steal cryptocurrency. As TechCrunch later revealed, the hacking toolkit was originally developed at U.S. defense contractor L3Harris, specifically by its hacking and surveillance tech division, Trenchant.

Coruna was originally designed for use by Western governments, particularly those in the so-called Five Eyes intelligence alliance, consisting of Australia, Canada, New Zealand, the United States, and the United Kingdom, according to former L3Harris employees with knowledge of the company’s iPhone hacking tools.

Now, researchers said they uncovered a related campaign using more recent hacking tools exploiting different vulnerabilities. 

The Darksword toolkit, according to the researchers, was built to steal personal information such as passwords; photos; WhatsApp, Telegram, and text messages; and browser history. Interestingly, Darksword was not designed for persistent surveillance, but rather to infect victims, steal information, and quickly disappear.

Contact Us

Do you have more information about Darksword, Coruna, or other government hacking and spyware tools? From a non-work device, you can contact Lorenzo Franceschi-Bicchierai securely on Signal at +1 917 257 1382, or via Telegram, Keybase and Wire @lorenzofb, or by email.

Darksword’s “dwell time on the device is likely in the range of minutes, depending on the amount of data it discovers and exfiltrates,” Lookout researchers wrote. 

For Rocky Cole, the co-founder of iVerify, the most likely explanation is that the hackers were interested in learning about the victims’ pattern of life, which required not constant surveillance but a smash-and-grab operation.

Darksword was also designed to steal cryptocurrency from popular wallet apps, something that is unusual for a suspected government hacking group. 

“This may indicate that this threat actor is financially motivated, or alternatively it may indicate that this (likely) Russian state-aligned activity has expanded into financial theft targeting mobile devices,” Lookout wrote in its report. 

But, Cole told TechCrunch, there is no evidence that the Russian hacking group actually cared about stealing crypto, only that the malware could have been used for that. 

The malware was built to be modular, making it easy to add new functionality, a sign that it was professionally developed, according to Lookout. Cole said he believes it’s possible that the same person who sold Coruna to the Russian government hacking group also sold it Darksword.

In terms of who was behind Darksword, for Cole “all signs point to the Russian government,” while Lookout said it’s the same group that used Coruna against Ukrainians, also a suspected Russian government group. 

“UNC6353 is a well-funded and connected threat actor conducting attacks for financial gain and espionage in alignment with Russian intelligence requirements,” Justin Albrecht, principal security researcher at Lookout, told TechCrunch. “We believe that a case can be made that UNC6353 is potentially a Russian criminal proxy, given the dual goals of financial theft and intelligence collection.”

As for victims, Cole said that the malware was designed to infect anyone visiting certain Ukrainian websites, as long as they were visiting them from within Ukraine, so it wasn’t a particularly targeted campaign.
