Tech
Here is what’s illegal under California’s 18 (and counting) new AI laws
In September, California Governor Gavin Newsom considered 38 AI-related bills, including the highly contentious SB 1047, which the state’s legislature sent to his desk for final approval. He vetoed SB 1047 on Sunday, marking the end of the road for California’s controversial AI bill that tried to prevent AI disasters, but he signed more than a dozen other AI bills into law this month. These bills try to address the most pressing issues in artificial intelligence: everything from AI risk, to deepfake nudes created by AI image generators, to Hollywood studios creating AI clones of dead performers.
“Home to the majority of the world’s leading AI companies, California is working to harness these transformative technologies to help address pressing challenges while studying the risks they present,” said Governor Newsom’s office in a press release.
So far, Governor Newsom has signed 18 AI bills into law, some of which are among America’s most far-reaching laws on generative AI yet. Here’s what they do.
AI risk
On Sunday, Governor Newsom signed SB 896 into law, which requires California’s Office of Emergency Services to perform risk analyses on potential threats posed by generative AI. CalOES will work with frontier model companies, such as OpenAI and Anthropic, to analyze AI’s potential threats to critical state infrastructure, as well as threats that could lead to mass casualty events.
Training data
Another law Newsom signed this month requires generative AI providers to reveal the data used to train their AI systems in documentation published on their websites. AB 2013 goes into effect in 2026 and requires AI providers to publish the sources of their datasets, a description of how the data is used, the number of data points in the set, whether copyrighted or licensed data is included, and the time period during which the data was collected, among other details.
Privacy and AI systems
Newsom also signed AB 1008 on Sunday, which clarifies that California’s existing privacy laws are extended to generative AI systems as well. That means that if an AI system, like ChatGPT, exposes someone’s personal information (name, address, biometric data), California’s existing privacy laws will limit how businesses can use and profit off of that data.
Education
Newsom signed AB 2876 this month, which requires California’s State Board of Education to consider “AI literacy” in its math, science, and history curriculum frameworks and instructional materials. This means California’s schools may begin teaching students the basics of how artificial intelligence works, as well as the limitations, impacts, and ethical considerations of using the technology.
Another new law, SB 1288, requires California superintendents to create working groups to explore how AI is being used in public school education.
Defining AI
This month, Newsom signed a bill that establishes a uniform definition for artificial intelligence in California law. AB 2885 states that artificial intelligence is defined as “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.”
Healthcare
Another bill signed in September is AB 3030, which requires healthcare providers to disclose when they use generative AI to communicate with a patient, specifically when those messages contain a patient’s clinical information.
Meanwhile, Newsom recently signed SB 1120, which puts limitations on how health care service providers and health insurers can automate their services. The law ensures licensed physicians supervise the use of AI tools in these settings.
AI robocalls
Last Friday, Governor Newsom signed a bill into law requiring robocalls to disclose when they use AI-generated voices. AB 2905 aims to prevent another instance of the deepfake robocall that mimicked Joe Biden’s voice and confused many New Hampshire voters earlier this year.
Deepfake pornography
On Sunday, Newsom signed AB 1831 into law, which expands the scope of existing child pornography laws to include matter that is generated by AI systems.
Newsom signed two laws that address the creation and spread of deepfake nudes last week. SB 926 criminalizes the act, making it illegal to blackmail someone with AI-generated nude images that resemble them.
SB 981, which also became law on Thursday, requires social media platforms to establish channels for users to report deepfake nudes that resemble them. The content must then be temporarily blocked while the platform investigates it, and permanently removed if confirmed.
Watermarks
Also on Thursday, Newsom signed a bill into law to help the public identify AI-generated content. SB 942 requires widely used generative AI systems to disclose that their output is AI-generated in the content’s provenance data. For example, all images created by OpenAI’s DALL-E now need a tag in their metadata identifying them as AI-generated.
Many AI companies already do this, and there are several free tools out there that can help people read this provenance data and detect AI-generated content.
Election deepfakes
Earlier this week, California’s governor signed three laws cracking down on AI deepfakes that could influence elections.
One of California’s new laws, AB 2655, requires large online platforms, like Facebook and X, to remove or label AI deepfakes related to elections, as well as create channels to report such content. Candidates and elected officials can seek injunctive relief if a large online platform is not complying with the act.
Another law, AB 2839, takes aim at social media users who post, or repost, AI deepfakes that could deceive voters about upcoming elections. The law went into effect immediately on Tuesday, and Newsom suggested Elon Musk may be at risk of violating it.
AI-generated political advertisements now require outright disclosures under California’s new law, AB 2355. That means moving forward, Trump may not be able to get away with posting AI deepfakes of Taylor Swift endorsing him on Truth Social (she endorsed Kamala Harris). The FCC has proposed a similar disclosure requirement at a national level and has already made robocalls using AI-generated voices illegal.
Actors and AI
Two laws that Newsom signed earlier this month — which SAG-AFTRA, the nation’s largest film and broadcast actors union, was pushing for — create new standards for California’s media industry. AB 2602 requires studios to obtain permission from an actor before creating an AI-generated replica of their voice or likeness.
Meanwhile, AB 1836 prohibits studios from creating digital replicas of deceased performers without consent from their estates (e.g., legally cleared replicas were used in the recent “Alien” and “Star Wars” movies, as well as in other films).
SB 1047 gets vetoed
Governor Newsom still has a few AI-related bills to decide on before the end of September. However, SB 1047 is not one of them – the bill was vetoed on Sunday.
In a letter explaining his decision, Newsom said that SB 1047’s narrow focus on the largest AI systems could “give the public a false sense of security.” California’s governor noted that smaller AI models could be just as dangerous as those targeted by SB 1047, and said a more flexible regulatory approach is needed.
During a chat with Salesforce CEO Marc Benioff at the 2024 Dreamforce conference earlier this month, Newsom may have tipped his hand about SB 1047, and about how he’s thinking about regulating the AI industry more broadly.
“There’s one bill that is sort of outsized in terms of public discourse and consciousness; it’s this SB 1047,” said Newsom onstage this month. “What are the demonstrable risks in AI and what are the hypothetical risks? I can’t solve for everything. What can we solve for? And so that’s the approach we’re taking across the spectrum on this.”
Check back on this article for updates on what AI laws California’s governor signs, and what he doesn’t.
Tesla brings its robotaxi service to Dallas and Houston
Tesla is expanding its robotaxi service to Dallas and Houston, according to a social media post from the company.
The post says simply that “Robotaxi is now rolling out in Dallas & Houston 🤠” and includes a 14-second video showing Tesla vehicles driving without human monitors or drivers in the front seat.
The company now offers robotaxi service in three cities, all of them in Texas, after launching in Austin last year and starting to offer rides without safety drivers in January 2026. In a February filing, Tesla said that its Austin robotaxis have been involved in 14 crashes since launch.
It also offers a more limited ride service with human drivers in the San Francisco Bay Area.
Tesla may not be running many vehicles in either of these new markets yet, with crowdsourced data on the Robotaxi Tracker website only registering a single vehicle in each city (compared to 46 active vehicles logged in Austin).
Netflix plans to add a vertical video feed, use AI for recommendations
Netflix is going to launch a TikTok-like vertical video feed within its apps this month, and plans to use AI broadly for content creation and recommendations, the company said on Thursday.
Netflix has been testing a vertical video feed since last year. The short-video feature could help users discover video podcasts, alongside the company’s current slate of shows and movies. Netflix is also leaning more into AI-powered recommendations after launching a ChatGPT-powered search feature last year.
“We have been in personalization and recommendation for two decades, but we still see tremendous room to make it better by leveraging newer technologies,” Netflix co-CEO Gregory Peters said during the company’s first-quarter conference call. “Recommendation systems based on new model architectures not only improve current personalization but also let us iterate and improve more quickly — adding support for different content types much more efficiently.”
Co-CEO Ted Sarandos said he sees AI tools improving the entire content creation process. “In general, we expect GenAI to make content better; better tools, better processes […] It takes a great artist to make great art, and AI won’t change that. But AI will give those artists better tools to bring those visions to life,” he said.
Last month, Netflix bought Ben Affleck’s AI creation company InterPositive, which, Sarandos said, has garnered interest from creators.
“With our acquisition of InterPositive, we think it accelerates our GenAI capability because it is proprietary technology created specifically for filmmakers and filmmaking, different from other GenAI video applications. While our ownership of InterPositive is very new, we have generated interest with creators who have spent time with the tools, and we are seeing momentum build around adoption,” he noted.
Netflix also mentioned that it wants to use AI to improve its ad suite, and allow for new formats and customization to get better returns. The company expects to generate ad revenue of $3 billion this year.
Netflix reported revenue of $12.25 billion in Q1 2026, up 16.2% year over year, and said profit jumped 83% to $5.28 billion. Alongside the first-quarter results, Netflix said its co-founder and chair, Reed Hastings, is leaving the company’s board this summer.
Notably, the company hiked subscription prices in the U.S. late last month, which could have a positive impact next quarter. The company said it ended 2025 with 325 million paying subscribers.
Bluesky confirms DDoS attack is cause of continued app outages
Bluesky’s website and app are still struggling on Friday after experiencing service interruptions that chief operating officer Rose Wang attributed to an ongoing cyberattack.
On Thursday evening, the social media company confirmed that a “sophisticated Distributed Denial-of-Service (DDoS) attack” was to blame for the issues, which had originally started on April 15 at around 8:40 p.m. ET.
Distributed denial-of-service attacks typically involve pummeling apps or websites with large amounts of junk web traffic in an attempt to overload their servers and knock them offline. While these kinds of cyberattacks do not involve intrusions into a company’s systems, they can still be disruptive to both the company and its users.
In a post on the Bluesky account, the company shared the cause of the problem and noted that the attack was “impacting our operations, with users experiencing intermittent interruptions in service for their feeds, notifications, threads, and search.”
Bluesky said that it has not seen any evidence of unauthorized access to private data, however.
When originally reached for comment on Thursday, Bluesky only pointed us to the status.bsky.app page and account (@status.bsky.app) for updates. The company did not provide an estimated time for a fix.
The network’s status page is currently not working, however.
Bluesky said it will provide another update on the status of the attack and its mitigation by 1 p.m. ET on Friday.

Because the outages are intermittent, the Bluesky site and app will sometimes load, if slowly, and at other times will display error messages.
For instance, switching to a particular feed within the app could display a message that says, “This feed is currently receiving high traffic and is temporarily unavailable. Please try again later. Message from server: Rate Limit Exceeded.”

Popular feeds like Discover or the official Bluesky Team’s feed often see this problem, even as users’ own personal feeds are functional.
Other times, like when trying to visit a user’s profile, the site will display an error message, forcing you to refresh and try again.

Bluesky protocol engineer Bryan Newbold remarked around 3:46 a.m. ET on Wednesday, “oof, our services are getting hit pretty hard tonight.”
Notably, the service disruptions are impacting Bluesky, but other communities, like Blacksky, that run their own infrastructure on the underlying protocol that powers the decentralized social network, are still functioning.
Blacksky’s team told TechCrunch that the Bluesky outage has led to a “significant spike” in migration requests from Bluesky users over the past 12 hours, as users, devs, and other ATmosphere founders like Sebastian at Eurosky have been promoting its services.

It was clear that Bluesky’s team was in a hectic state this week while facing these issues, as one message on its status page had a typo: ” investigating an incident with service in one of our reginos [sic].”