Trump’s AI framework targets state laws, shifts child safety burden to parents

The Trump administration on Friday laid out a legislative framework for a single national AI policy in the United States. The framework would centralize power in Washington by preempting state AI laws, potentially undercutting the recent surge of state efforts to regulate the use and development of the technology.

“This framework can only succeed if it is applied uniformly across the United States,” reads a White House statement on the framework. “A patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race.”

The framework outlines seven key objectives that prioritize innovation and scaling AI, and proposes a centralized federal approach that would override stricter state-level regulations. It places significant responsibility on parents for issues like child safety, and lays out relatively soft, nonbinding expectations for platform accountability. 

For example, it says Congress should require AI companies to implement features that “reduce the risks of sexual exploitation and harm to minors,” but does not lay out any clear, enforceable requirements.

Trump’s framework comes three months after he signed an executive order directing federal agencies to challenge state AI laws. The order gave the Commerce Department 90 days to compile a list of “onerous” state AI laws, potentially risking states’ eligibility for federal funds like broadband grants. The agency has yet to publish that list.

The order also directed the administration to work with Congress on a uniform AI law. That vision is coming into focus, and it mirrors Trump’s earlier AI strategy, which focused less on guardrails and more on promoting companies’ growth.

The new framework proposes a “minimally burdensome national standard,” echoing the administration’s broader push to “remove outdated or unnecessary barriers to innovation” and accelerate AI adoption across industries. This is the pro-growth, light-touch regulatory approach championed by “accelerationists,” one of whom is White House AI czar and venture capitalist David Sacks.

While the framework nods to federalism, the carve-outs for states are relatively narrow, preserving only their authority over generally applicable laws such as fraud and child protection statutes, zoning, and states’ own use of AI. It draws a hard line against states regulating AI development itself, which it says is an “inherently interstate” issue tied to national security and foreign policy.

The framework also seeks to prevent states from “penaliz[ing] AI developers for a third party’s unlawful conduct involving their models” — a key liability shield for developers.

Missing from the framework are any gestures toward liability rules, independent oversight, or enforcement mechanisms for potential novel harms caused by AI. In effect, the framework would centralize AI policymaking in Washington while narrowing the space for states to act as early regulators of emerging risks.

Critics say states are the laboratories of democracy and have been quicker to pass laws addressing emerging risks. Notably, New York’s RAISE Act and California’s SB 53 seek to ensure that large AI companies maintain, publicly document, and adhere to safety protocols.

“White House AI czar David Sacks continues to do the bidding of Big Tech at the expense of regular, hardworking Americans,” said Brendan Steinhauser, CEO of The Alliance for Secure AI. “This federal AI framework seeks to prevent states from legislating on AI and provides no path to accountability for AI developers for the harms caused by their products.” 

Many in the AI industry are celebrating this direction because it gives them broader liberties to “innovate” without the threat of regulation.

“This framework is exactly what startups have been asking for: a clear national standard so they can build fast and scale,” Teresa Carlson, president of General Catalyst Institute, told TechCrunch. “Founders shouldn’t have to navigate a patchwork of conflicting state AI laws that impede innovation.”

The framework was issued at a moment when child safety has emerged as a central flashpoint in the debate over AI. Certain states have moved aggressively to pass laws aimed at protecting minors and placing more responsibility on tech companies. The administration’s proposal points in a different direction, placing greater emphasis on parental control than platform accountability. 

“Parents are best equipped to manage their children’s digital environment and upbringing,” the framework reads. “The Administration is calling on Congress to give parents tools to effectively do that, such as account controls to protect their children’s privacy and manage their device use.”

The framework also says the administration “believes” that AI platforms should “implement features to reduce potential sexual exploitation of children and encouragement of self-harm.” While it calls on Congress to require such safeguards and affirms that existing laws, including those banning child sexual abuse material, should apply to AI systems, the proposal employs qualifiers like “commercially reasonable” and stops short of laying out clear requirements.

On the topic of copyright, the framework attempts to find a middle ground between protecting creators and allowing AI systems to be trained on existing works, citing the need for “fair use.” That kind of language mirrors arguments AI companies have made as they face a growing number of copyright lawsuits over their training data. 

The main guardrails Trump’s AI framework seems to outline involve ensuring “AI can pursue truth and accuracy without limitation.” Specifically, it focuses on preventing government-driven censorship rather than platform moderation itself.

“Congress should prevent the United States government from coercing technology providers, including AI providers, to ban, compel, or alter content based on partisan or ideological agendas,” the framework reads. It also instructs Congress to provide a way for Americans to pursue legal redress against government agencies that seek to censor expression on AI platforms or dictate information provided by an AI platform.

The framework comes as Anthropic is suing the government for allegedly infringing on its First Amendment rights after the Department of Defense (DOD) labeled it a supply-chain risk. Anthropic argues that the DOD is designating it as such in retaliation for not allowing the military to use its AI products for mass surveillance of Americans or for making targeting and firing decisions in autonomous lethal weapons. Trump has referred to Anthropic and its CEO Dario Amodei as “woke” and a “radical leftist.”

The framework’s language, which emphasizes protecting “lawful political expression or dissent,” seems to build on Trump’s earlier executive order targeting “woke AI,” which pushed federal agencies to adopt systems deemed ideologically neutral. 

It’s unclear what qualifies as censorship versus standard content moderation, so such language could make it difficult for regulators to coordinate with platforms on issues like misinformation, election interference, or public safety risks. 

Samir Jain, vice president of policy at the Center for Democracy and Technology, pointed out: “[The framework] rightly says that the government should not coerce AI companies to ban or alter content based on ‘partisan or ideological agendas,’ yet the Administration’s ‘woke AI’ Executive Order this summer does exactly that.”

Are AI tokens the new signing bonus or just a cost of doing business?

This week, a topic that has been boomeranging around Silicon Valley bounced into the spotlight: AI tokens as compensation. The idea is straightforward enough — rather than giving engineers only salary, equity, and bonuses, companies would also hand them a budget of AI tokens, the computational units that power tools like Claude, ChatGPT, and Gemini. Spend them to run agents, automate tasks, crank through code. The pitch is that access to more compute makes engineers more productive, and that more productive engineers are worth more; the tokens, the thinking goes, are an investment in the person holding them.

Jensen Huang, the leather-jacket-wearing CEO of Nvidia, seemed to capture everyone’s imagination when he floated the notion at the company’s annual GTC event earlier this week that engineers should receive roughly half their base salary again — in tokens. His top people, by his math, might burn through $250,000 a year in AI compute. He called it a recruiting tool and predicted it would become standard across Silicon Valley.

It isn’t entirely clear where the idea was first, well, ideated. Tomasz Tunguz, a renowned VC in the Bay Area who runs Theory Ventures and focuses on AI, data, and SaaS startups — and whose writing on all things data has garnered a loyal following over the years — was talking about this in mid-February, writing that tech startups were already adding inference costs as a “fourth component to engineering compensation.” Using data from the compensation tracking site Levels.fyi, he put a top-quartile software engineer salary at $375,000. Add $100,000 in tokens and you’re at $475,000 fully loaded — meaning roughly one dollar in five is now compute.

That’s no coincidence. Agentic AI has been taking off, and the release of OpenClaw in late January accelerated the conversation considerably. OpenClaw is an open-source AI assistant designed to run continuously — churning through tasks, spawning sub-agents, and working through a to-do list while its user sleeps. It’s part of a broader shift toward “agentic” AI, meaning systems that don’t just respond to prompts but take sequences of actions autonomously over time.

The practical consequence is that token consumption has exploded. Where someone writing an essay might use 10,000 tokens in an afternoon, an engineer running a swarm of agents can blow through millions in a day — automatically, in the background, without typing a word.

By this weekend, the New York Times had put together a smart look at the so-called tokenmaxxing trend, finding that engineers at companies including Meta and OpenAI are competing on internal leaderboards that track token consumption. Generous token budgets are quietly becoming a standard job perk, the paper reported, the way dental insurance or free lunch once was. One Ericsson engineer in Stockholm told the Times he probably spends more on Claude than he earns in salary, though his employer picks up the tab.

Maybe tokens really will become the fourth pillar of engineering compensation. But engineers might want to hold the line before embracing this as a straightforward win. More tokens may mean more power in the short term, but given how fast things are evolving, it doesn’t necessarily mean more job security. For one thing, a large token allotment comes with large expectations. If a company is effectively funding a second engineer’s worth of compute on your behalf, the implicit pressure is to produce at twice the rate (or more).

And there’s a muddier problem underneath that: at the point where a company’s token spend per employee approaches or exceeds that employee’s salary, the financial logic of headcount starts to look different to its finance team. If the compute is doing the work, the question of how many humans need to be coordinating it becomes harder to avoid.

Jamaal Glenn, an East Coast-based Stanford MBA and former VC turned financial services CFO, similarly points out that what may seem like a perk can be a clever way for companies to inflate the apparent value of a compensation package without increasing cash or equity — the things that actually compound for an employee over time. Your token budget doesn’t vest. It doesn’t appreciate. It doesn’t show up in your next offer negotiation the way a base salary or equity grant does. If companies successfully normalize tokens as pay, they may find it easier to keep cash comp flat while pointing to a growing compute allowance as evidence of investment in their people.

That’s a good deal for the company. Whether it’s a good deal for the engineer depends on questions most engineers don’t yet have enough information to answer.

Amazon working on new smartphone with Alexa at its core, report says

Looks like Amazon’s getting back into the smartphone game. More than 11 years after the e-commerce giant pulled the plug on its failed first effort, the Fire Phone, the company is now developing a new smartphone codenamed “Transformer,” Reuters reported, citing anonymous sources.

The device is being developed by the company’s Devices and Services division, and it would offer personalized features that make it easier to use Amazon’s suite of apps, including Amazon Shopping, Prime Video, and Prime Music, the report said.

The smartphone would also support Alexa, the smart home assistant that Amazon has been investing heavily in, adding AI chops and expanding it to work with most of the company’s devices. AI features are said to be a big focus for the smartphone, which is seen internally as a way to encourage Amazon customers to use its AI products, Reuters reported.

The smartphone is reportedly being developed by a relatively new unit within the Devices division called ZeroOne, which is led by J Allard, a former Microsoft executive who helped create the Xbox.

The news comes as Amazon has been going all-in on AI, investing $50 billion into OpenAI recently, and projecting $200 billion in capital expenditures toward its AI, chips, and robotics efforts in 2026.

The company spent more than a year revamping its Alexa assistant with generative AI features, finally launching it this February as Alexa+. The assistant keeps its smart home chops, and can now do most things that other AI chatbots can — like planning an itinerary for a trip, updating a shared calendar, finding and saving recipes to a library, making movie recommendations, helping with homework, exploring a topic, and more.

Amazon declined to comment.

Cyberattack on vehicle breathalyzer company leaves drivers stranded across the US

A cyberattack on a U.S. vehicle breathalyzer company has left drivers across the United States stranded and unable to start their vehicles.

The company, Intoxalock, says on its website that it is “currently experiencing downtime” after a cyberattack on March 14. Intoxalock sells breathalyzer devices that fit into vehicle ignition switches and are used by people who are required to provide an alcohol-free breath sample to start their car.

Intoxalock spokesperson Rachael Larson confirmed to TechCrunch that the company had been hit by a cyberattack. Larson said the company took steps to “temporarily pause some of our systems as a precautionary measure.”

These breathalyzer devices need to be recalibrated every few months, but the cyberattack has left Intoxalock unable to perform those calibrations. The company said customers whose devices require calibration may experience delays starting their vehicles.

Drivers posting on Reddit say that cars are unable to start if they miss a calibration, effectively locking drivers out of their vehicles.

According to local news reports across New England, drivers are experiencing lockouts and some have been unable to start their vehicles. One auto shop in Middleboro told WCVB 5 in Boston that it has had cars parked in its lot all week due to the cyberattack.

News reports from across the United States show drivers affected from New York to Minnesota, unable to drive because their vehicle-based breathalyzers cannot be promptly recalibrated.

Intoxalock would not say what kind of cyberattack it experienced, such as whether it involved ransomware or a data breach, or whether it had received any communications from the hackers, including ransom demands. The company’s technology is used in 46 states, its website says, and it claims to serve 150,000 drivers every year.

Intoxalock did not provide an estimated timeline for its recovery.
