Here is what’s illegal under California’s 18 (and counting) new AI laws
In September, California Governor Gavin Newsom considered 38 AI-related bills, including the highly contentious SB 1047, which the state’s legislature sent to his desk for final approval. He vetoed SB 1047 on Sunday, marking the end of the road for the controversial bill that tried to prevent AI disasters, but he signed more than a dozen other AI bills into law this month. These bills try to address the most pressing issues in artificial intelligence: everything from AI risk, to deepfake nudes created by AI image generators, to Hollywood studios creating AI clones of dead performers.
“Home to the majority of the world’s leading AI companies, California is working to harness these transformative technologies to help address pressing challenges while studying the risks they present,” said Governor Newsom’s office in a press release.
So far, Governor Newsom has signed 18 AI bills into law, some of which are America’s most far-reaching laws on generative AI yet. Here’s what they do.
AI risk
On Sunday, Governor Newsom signed SB 896 into law, which requires California’s Office of Emergency Services to perform risk analyses on potential threats posed by generative AI. CalOES will work with frontier model companies, such as OpenAI and Anthropic, to analyze AI’s potential threats to critical state infrastructure, as well as threats that could lead to mass casualty events.
Training data
Another law Newsom signed this month requires generative AI providers to reveal the data used to train their AI systems in documentation published on their websites. AB 2013 goes into effect in 2026 and requires AI providers to publish the sources of their datasets, a description of how the data is used, the number of data points in the set, whether copyrighted or licensed data is included, and the time period during which the data was collected, among other details.
Privacy and AI systems
Newsom also signed AB 1008 on Sunday, which clarifies that California’s existing privacy laws are extended to generative AI systems as well. That means that if an AI system, like ChatGPT, exposes someone’s personal information (name, address, biometric data), California’s existing privacy laws will limit how businesses can use and profit off of that data.
Education
Newsom signed AB 2876 this month, which requires California’s State Board of Education to consider “AI literacy” in its math, science, and history curriculum frameworks and instructional materials. This means California’s schools may begin teaching students the basics of how artificial intelligence works, as well as the limitations, impacts, and ethical considerations of using the technology.
Another new law, SB 1288, requires California superintendents to create working groups to explore how AI is being used in public school education.
Defining AI
This month, Newsom signed a bill that establishes a uniform definition for artificial intelligence in California law. AB 2885 states that artificial intelligence is defined as “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.”
Healthcare
Another bill signed in September is AB 3030, which requires healthcare providers to disclose when they use generative AI to communicate with a patient, specifically when those messages contain a patient’s clinical information.
Meanwhile, Newsom recently signed SB 1120, which puts limitations on how health care service providers and health insurers can automate their services. The law ensures licensed physicians supervise the use of AI tools in these settings.
AI robocalls
Last Friday, Governor Newsom signed a bill into law requiring robocalls to disclose when they use AI-generated voices. AB 2905 aims to prevent another instance of the deepfake robocall resembling Joe Biden’s voice that confused many New Hampshire voters earlier this year.
Deepfake pornography
On Sunday, Newsom signed AB 1831 into law, which expands the scope of existing child pornography laws to include matter that is generated by AI systems.
Last week, Newsom signed two laws that address the creation and spread of deepfake nudes. SB 926 criminalizes the act, making it illegal to blackmail someone with AI-generated nude images that resemble them.
SB 981, which also became law on Thursday, requires social media platforms to establish channels for users to report deepfake nudes that resemble them. The content must then be temporarily blocked while the platform investigates it, and permanently removed if confirmed.
Watermarks
Also on Thursday, Newsom signed a bill into law to help the public identify AI-generated content. SB 942 requires widely used generative AI systems to disclose in their content’s provenance data that the content is AI-generated. For example, all images created by OpenAI’s DALL-E now need a tag in their metadata identifying them as AI-generated.
Many AI companies already do this, and there are several free tools out there that can help people read this provenance data and detect AI-generated content.
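For a sense of where such a disclosure lives, here is a minimal, stdlib-only Python sketch that embeds a provenance-style text tag in a PNG’s metadata and reads it back. The `ai_generated` key and the tool itself are hypothetical; real-world disclosures use standards like C2PA, which embed cryptographically signed manifests rather than a plain text tag.

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def make_chunk(ctype: bytes, payload: bytes) -> bytes:
    """Assemble one PNG chunk: length, type, payload, CRC-32."""
    return (struct.pack(">I", len(payload)) + ctype + payload
            + struct.pack(">I", zlib.crc32(ctype + payload)))

def minimal_png_with_text(key: str, value: str) -> bytes:
    """Build a tiny 1x1 grayscale PNG carrying one tEXt metadata chunk."""
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)  # 1x1, 8-bit gray
    idat = zlib.compress(b"\x00\x00")  # one scanline: filter byte + pixel
    text = key.encode() + b"\x00" + value.encode()  # key NUL value
    return (PNG_SIG
            + make_chunk(b"IHDR", ihdr)
            + make_chunk(b"tEXt", text)
            + make_chunk(b"IDAT", idat)
            + make_chunk(b"IEND", b""))

def read_text_chunks(png: bytes) -> dict:
    """Walk the chunk stream and collect tEXt key/value pairs."""
    assert png[:8] == PNG_SIG, "not a PNG file"
    pos, out = 8, {}
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        payload = png[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, val = payload.partition(b"\x00")
            out[key.decode()] = val.decode()
        pos += 12 + length  # 4 length + 4 type + payload + 4 CRC
    return out

png = minimal_png_with_text("ai_generated", "true")
print(read_text_chunks(png))  # {'ai_generated': 'true'}
```

A plain tag like this is trivially strippable, which is why C2PA-style signed manifests are the direction the industry (and SB 942) is heading.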
Election deepfakes
Earlier this week, California’s governor signed three laws cracking down on AI deepfakes that could influence elections.
One of California’s new laws, AB 2655, requires large online platforms, like Facebook and X, to remove or label AI deepfakes related to elections, as well as create channels to report such content. Candidates and elected officials can seek injunctive relief if a large online platform is not complying with the act.
Another law, AB 2839, takes aim at social media users who post, or repost, AI deepfakes that could deceive voters about upcoming elections. The law went into effect immediately on Tuesday, and Newsom suggested Elon Musk may be at risk of violating it.
AI-generated political advertisements now require clear disclosures under California’s new law, AB 2355. That means that, moving forward, Trump may not be able to get away with posting AI deepfakes of Taylor Swift endorsing him on Truth Social (she endorsed Kamala Harris). The FCC has proposed a similar disclosure requirement at the national level and has already made robocalls using AI-generated voices illegal.
Actors and AI
Two laws that Newsom signed earlier this month — which SAG-AFTRA, the nation’s largest film and broadcast actors union, was pushing for — create new standards for California’s media industry. AB 2602 requires studios to obtain permission from an actor before creating an AI-generated replica of their voice or likeness.
Meanwhile, AB 1836 prohibits studios from creating digital replicas of deceased performers without consent from their estates (e.g., legally cleared replicas were used in the recent “Alien” and “Star Wars” movies, as well as in other films).
SB 1047 gets vetoed
Governor Newsom still has a few AI-related bills to decide on before the end of September. However, SB 1047 is not one of them – the bill was vetoed on Sunday.
In a letter explaining his decision, Newsom said SB 1047’s narrow focus on the largest AI systems could “give the public a false sense of security.” California’s governor noted that smaller AI models could be just as dangerous as those targeted by SB 1047, and said a more flexible regulatory approach is needed.
During a chat with Salesforce CEO Marc Benioff at the 2024 Dreamforce conference earlier this month, Newsom may have tipped his hand about SB 1047, and about how he’s thinking about regulating the AI industry more broadly.
“There’s one bill that is sort of outsized in terms of public discourse and consciousness; it’s this SB 1047,” said Newsom onstage this month. “What are the demonstrable risks in AI and what are the hypothetical risks? I can’t solve for everything. What can we solve for? And so that’s the approach we’re taking across the spectrum on this.”
Check back on this article for updates on what AI laws California’s governor signs, and what he doesn’t.
SaaS in, SaaS out: Here’s what’s driving the SaaSpocalypse
One day not long ago, a founder texted his investor with an update: he was replacing his entire customer service team with Claude Code, an AI tool that can write and deploy software on its own. To Lex Zhao, an investor at One Way Ventures, the message indicated something bigger — the moment when companies like Salesforce stopped being the automatic default.
“The barriers to entry for creating software are so low now thanks to coding agents, that the build versus buy decision is shifting toward build in so many cases,” Zhao told TechCrunch.
The build versus buy shift is only part of the problem. The whole idea of using AI agents instead of people to perform work throws into question the SaaS business model itself. SaaS companies currently price their software per seat — meaning by how many employees log in to use it. “SaaS has long been regarded as one of the most attractive business models due to its highly predictable recurring revenue, immense scalability, and 70-90% gross margins,” Abdul Abdirahman, an investor at the venture firm F-Prime, told TechCrunch.
When one, or a handful, of AI agents can do that work — when employees simply ask their AI of choice to pull the data from the system — that per-seat model starts to break down.
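As a toy illustration of why, here is a short Python sketch comparing the two pricing models. The numbers are entirely hypothetical, not any vendor’s actual pricing: the point is that per-seat revenue scales with headcount, while consumption revenue scales with usage, so replacing a team with a few AI agents collapses the former.

```python
def per_seat_revenue(seats: int, price_per_seat: float) -> float:
    """Classic SaaS pricing: every logged-in employee is a billable seat."""
    return seats * price_per_seat

def consumption_revenue(tokens_used: int, price_per_million: float) -> float:
    """Usage-based pricing: pay for tokens processed, not for people."""
    return tokens_used / 1_000_000 * price_per_million

# A hypothetical 200-person support team on a $50/seat/month tool...
before = per_seat_revenue(seats=200, price_per_seat=50.0)  # $10,000/month
# ...replaced by AI agents, leaving 5 human supervisors with seats.
after = per_seat_revenue(seats=5, price_per_seat=50.0)     # $250/month

# The vendor's hope: recapture that revenue as metered usage instead,
# e.g. 2M tokens/month at a hypothetical $3 per million tokens.
usage = consumption_revenue(tokens_used=2_000_000, price_per_million=3.0)

print(before, after, usage)
```

The gap between `before` and `after` is the hole that consumption-based or outcome-based pricing is supposed to fill, and there is no guarantee the usage line grows fast enough to cover it.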
The rapid pace of AI development also means that new tools, like Claude Code or OpenAI’s Codex, can replicate not just the core functions of SaaS products but also the add-on tools a SaaS vendor would sell to grow revenue from existing customers.
On top of that, customers now have the ultimate contract negotiation tool in their pockets: If they don’t like a SaaS vendor’s prices, they can, more easily than ever before, build their own alternative. “Even if they do not take the build route, this creates downward pressure on contracts that SaaS vendors can secure during renewals,” Abdirahman continued.
We saw this as early as late 2024, when Klarna announced that it had ditched Salesforce’s flagship CRM product in favor of its own homegrown AI system. The realization that a growing number of other companies can do the same is spooking public markets, where the stock prices of SaaS giants like Salesforce and Workday have been sliding. In early February, an investor sell-off wiped nearly $1 trillion in market value from software and services stocks, followed by another billion later in the month.
TechCrunch event: San Francisco, CA | October 13-15, 2026
Experts are calling it the SaaSpocalypse, with one analyst dubbing it FOBO investing — or fear of becoming obsolete.
Yet the venture investors TechCrunch spoke with believe such fears are only temporary. “This isn’t the death of SaaS,” Aaron Holiday, a managing partner at 645 Ventures, told TechCrunch. Rather, it’s the beginning of an old snake shedding its skin, he said.
Move fast, break SaaS
The public market pattern is best illustrated through Anthropic’s recent product launches. The company released Claude Code for cybersecurity, and related stocks dropped. It released legal tools in Claude Cowork AI, and the stock price of the iShares Expanded Tech-Software Sector ETF — a basket of publicly traded software companies that includes firms like LegalZoom and RELX — also dropped.
In some ways, this was expected, as SaaS companies had long been overvalued, investors said. It also doesn’t help that these companies did the bulk of their growing during the zero-interest-rate era, which has since ended. The cost of doing business rises when the cost of borrowing money increases.
Public market investors typically price SaaS companies by estimating future revenue. But there is no telling whether in one year or five years anyone will be using SaaS products to the extent they once did. That’s why every time a new advanced AI tool launches, SaaS stocks feel a tremor.
“This may be the first time in history that the terminal value of software is being fundamentally questioned, materially reshaping how SaaS companies are underwritten going forward,” Abdirahman said.
That’s because slapping AI features on top of existing SaaS products may not be enough. A horde of AI-native startups is rising at a record pace, completely redefining what it means to be a software company.
Software is now easier and cheaper to build, meaning it’s easier to replicate, Yoni Rechtman, a partner at Slow Ventures, told TechCrunch.
That’s good news for the next generation of startups, but bad news for the incumbents that spent years building their tech stacks.
On the other hand, the market also lacks the time and evidence to show whether whatever new business model emerges in SaaS’s wake will be worthwhile. AI companies are sometimes pricing their models based on consumption, meaning customers pay based on how much AI they use, measured in tokens (which each model provider defines slightly differently).
Others are working on “outcome-based pricing,” where fees are charged based on how well the AI actually works. This, ironically, is the current approach of former Salesforce CEO Bret Taylor’s AI startup, Sierra, a quasi-Salesforce competitor that offers customer service agents.
The approach, so far, appears to be working. In November, Sierra hit $100 million in annual recurring revenue in less than two years.
There was once also the idea that cloud-based software of the kind SaaS companies sell would never depreciate and could last for decades. This is still true in some ways compared to what came before — on-premises software, which companies had to install and maintain on their own servers.
But being in the cloud doesn’t protect SaaS vendors from an entirely new technology rising to compete: AI.
Investors are rightfully nervous as AI-native companies pop up, adapt, adopt, and build technology much faster than a traditional SaaS company can move. SaaS companies are, after all, themselves the incumbents, having replaced old-school on-premises vendors in the last era of disruption.
This SaaSpocalypse calls to mind that Taylor Swift lyric about what happens when “someone else lights up the room” because “people love an ingénue.”
“The most important thing to understand about the SaaS pullback is that it is simultaneously a real structural shift and potentially a market overreaction,” Abdirahman said, adding that investors typically “sell first and ask questions later.”
SaaS IPOs are on hold
Public-market SaaS companies aren’t the only ones feeling a chill from investors.
A Crunchbase report released Wednesday showed that, though the IPO market seems to be thawing for some sectors, there haven’t been — and aren’t expected to be — any venture-backed SaaS filings on the horizon.
Holiday said this may be because there is a lot of pressure on large, private, late-stage SaaS companies like Canva and Rippling given the persnickety IPO window, high expectations driven by AI advancements, and the unsteady stock price of already public SaaS companies.
Some of these companies, including mid-size SaaS companies, have even struggled to raise extension rounds in the private market, Holiday said, over the same fears public investors have.
“Nobody wants to be subjected to the volatility of public markets when sentiment can send companies into downward tailspins,” Rechtman said, adding that he expects companies like these to stay private for much longer.
Meanwhile, the public market waits to get a good look at the finances of the first AI-native companies hoping to IPO. The scuttlebutt says that both OpenAI and Anthropic are contemplating IPOs, maybe even later this year.
The most likely outcome is something that weaves the old and the new together, as tech disruptions always have.
Holiday said most of the new features companies are toying with these days “won’t stick” and that enterprises will always need software that meets compliance regulations, supports audits, manages workflow, and offers durability.
“Durable shareholder value isn’t built on hype,” he continued. “It’s built on fundamentals, retention, margins, real budgets, and defensibility.”
Anthropic’s Claude rises to No. 1 in the App Store following Pentagon dispute
Anthropic’s chatbot Claude seems to have benefited from the attention around the company’s fraught negotiations with the Pentagon.
As first reported by CNBC, Claude has been rising to the top of the free app rankings in Apple’s US App Store. On Saturday evening, it overtook OpenAI’s ChatGPT to claim the number one spot, a position that it still held on Sunday morning.
According to data from SensorTower, Claude was just outside the top 100 at the end of January, and has spent most of February somewhere in the top 20. It’s climbed rapidly in the past few days, from sixth on Wednesday, to fourth on Thursday, then first on Saturday.
A company spokesperson said that daily signups have broken the all-time record every day this week, free users have increased more than 60% since January, and paid subscribers have more than doubled this year.
After Anthropic attempted to negotiate for safeguards preventing the Department of Defense from using its AI models for mass domestic surveillance or fully autonomous weapons, President Donald Trump directed federal agencies to stop using all Anthropic products and Secretary of Defense Pete Hegseth said he’s designating the company a supply-chain threat.
OpenAI subsequently announced its own agreement with the Pentagon, which CEO Sam Altman claimed includes safeguards related to domestic surveillance and autonomous weapons.
This post was first published on February 28, 2026. It has been updated to reflect Anthropic reaching No. 1, and to include growth numbers from the company.
Honor launches its new slim foldable Magic V6 with a 6,600 mAh battery
Honor launched its new foldable, the Honor Magic V6, with a massive 6,600 mAh battery and a new sturdy hinge ahead of the Mobile World Congress (MWC) in Barcelona.
The Chinese company has been obsessed with proving that it makes the thinnest foldables. This year’s version is 4 mm thick when unfolded and 8.75 mm when folded, compared to last year’s Magic V5, which was 4.1 mm unfolded and 8.8 mm folded. We are talking very thin shavings here, but they help the company make those claims.
The battery is possibly one of the most impressive parts of the phone. The Honor Magic V6 has a 6,600 mAh battery, up from 5,820 mAh last year. Using Honor’s SuperCharge tech, the phone can charge at 80W through a wired connection, and at 66W wirelessly.
What’s more, Honor also showed off a new silicon-carbon battery technology with 32% silicon density that could push foldable phone batteries past 7,000 mAh.
The new device has a 7.95-inch main AMOLED display with 2352 x 2172 pixel resolution and a 6.52-inch cover display with 2420 x 1080 pixel resolution. Both screens support LTPO 2.0, which means they can vary their refresh rates between 1Hz and 120Hz depending on the use case, improving content legibility and saving power.
The company said that it has worked on a new Super Steel Hinge with a tensile strength of 2,800 MPa, which would make for sturdy long-term usage. It also said that it has reduced the crease depth by 44%, making the display look smooth. Honor noted that the Magic V6 has a new anti-reflective coating for the external screen with a reflectivity rating of 1.5%.
The phone is powered by Qualcomm’s Snapdragon 8 Elite Gen 5 processor, has 16GB RAM, and 512GB of storage. The Magic V6 has three rear cameras: a 50-megapixel main camera with f/1.6 aperture, a 64-megapixel telephoto camera with f/2.5 aperture, and a 50-megapixel ultrawide camera with f/2.2 aperture. On the front, there are dual 20-megapixel cameras with an f/2.2 aperture.
Honor has also worked to make the device compatible with Apple devices for file and notification sharing. For instance, with the Magic V6, you can set up a two-way notification sync with an iPhone, and the device has settings to display notifications on an Apple Watch. The foldable can share files with Macs with one tap, and it can act as an extended display as well.
Honor didn’t specify pricing for the device, but said that the Magic V6 will be released in select international markets in the second half of the year.
