
Tech

Trump’s Silicon Valley advisers have AI ‘censorship’ in their crosshairs

President-elect Donald Trump has surrounded himself with Silicon Valley entrepreneurs — including Elon Musk, Marc Andreessen, and David Sacks — who are now advising him on technology and other issues.

When it comes to AI, this crew of technologists is fairly aligned on the need for rapid development and adoption of AI throughout the U.S. However, there’s one AI safety issue this group brings up quite a bit: the threat of AI “censorship” from Big Tech.

Trump’s Silicon Valley advisers could make the responses of AI chatbots a new battleground for conservatives to fight their ongoing culture war with tech companies.

AI censorship is a term used to describe how tech companies put their thumb on the scale with their AI chatbots’ answers in order to conform to certain politics, or push their own. Others might call it content moderation, which often refers to the same thing but has a very different connotation. Much like social media and search algorithms, getting AI answers right for live news events and controversial subjects is a constantly moving target.

For the last decade, conservatives have repeatedly criticized Big Tech for caving to government pressures and censoring their social media platforms and services. However, some tech executives have begun to moderate their positions in public. For example, ahead of the 2024 election, Meta CEO Mark Zuckerberg apologized to Congress for bending to the Biden administration’s pressure to aggressively moderate content related to COVID-19. Shortly after, the Meta CEO said he’d made a “20-year political mistake” by taking too much responsibility for problems that were out of his company’s control — and said he wouldn’t be making those mistakes again.

But according to Trump’s tech advisers, AI chatbots represent an even greater threat to free speech, and potentially a more powerful way to effect control over speech. Instead of skewing a search or feed algorithm toward a desired outcome, such as downranking vaccine disinformation, tech companies can now just give you a single, clear answer that doesn’t include it.

In recent months, Musk, Andreessen, and Sacks have spoken out against AI censorship in podcasts, interviews, and social media posts. While we don’t know how exactly they’re advising Trump, their publicly stated beliefs could reveal the conversations they’re having behind closed doors in Washington, D.C., and Mar-a-Lago.

“This is my belief, and what I’ve been trying to tell people in Washington, which is if you thought social media censorship was bad, [AI] has the potential to be a thousand times worse,” said a16z co-founder Marc Andreessen in a recent interview with Joe Rogan. “If you wanted to create the ultimate dystopian world, you’d have a world where everything is controlled by an AI that’s been programmed to lie,” said Andreessen in another recent interview with Bari Weiss.

Andreessen also disclosed to Weiss that he has spent roughly half his time with Trump’s team since the election, offering advice on technology and business.

“[Andreessen] explained the dystopian path we were on with AI,” said former PayPal COO and Craft Ventures co-founder, David Sacks, in a recent post on X shortly after he was appointed to be Trump’s AI and crypto czar. “But the timeline split, and we’re on a different path now.”

On All In — the popular podcast Sacks hosts alongside other influential venture capitalists — Trump’s new AI adviser has repeatedly criticized Google and OpenAI for, as the show’s hosts describe it, forcing AI chatbots to be politically correct.

“One of the concerns about ChatGPT early on was that it was programmed to be woke, and that it wasn’t giving people truthful answers about a lot of things. The censorship was being built into the answers,” said Sacks on an episode of All In from November 2023.

Despite Sacks’ claims, even Elon Musk admits xAI’s chatbot is often more politically correct than he’d like. It’s not because Grok was “programmed to be woke,” but more likely a reality of training AI on the open internet. That said, Sacks is making it more clear every day that “AI truthfulness” is something he’s focused on.

“That’s how you get Black George Washington at Google”

The most cited case of AI censorship was when Google Gemini’s AI image generator generated multiracial images for queries such as “U.S. founding fathers” and “German soldiers in WWII,” which were obviously inaccurate.

An image generated by Twitter user Patrick Ganley, using Gemini. Image Credits: Gemini / Patrick Ganley

But there are other examples of companies influencing specific results. Most recently, users found out that ChatGPT just won’t answer questions about certain names, and OpenAI admitted that at least one of those names triggered internal privacy tools. At another point, Google’s and Microsoft’s AI chatbots refused to say who won the 2020 U.S. election. During the 2024 election, almost every AI system refused to answer questions about election results, except for Perplexity and Grok.

For some of these examples, the tech companies argued they were making a safe and responsible choice for their users. In some cases, that may be true — Grok hallucinated about the outcome of the 2024 election before votes had even been counted.

But the Gemini incident stuck out; it caused Google to turn off Gemini’s ability to generate images of people — something the free version of Gemini still cannot do. Google referred to that incident as a mistake and apologized for “missing the mark.”

Andreessen and Sacks don’t see it this way. Both venture capitalists have said that Google didn’t miss the mark at all, but rather, hit it a little too obviously. They considered it a pivotal mask-off moment for Google.

“The people running Google AI are smuggling in their preferences and their biases, and those biases are extremely liberal,” said Sacks on an episode of All In from February 2024, responding to the Gemini incident. “Do I think they’re going to get rid of the bias? No, they’re going to make it more subtle. That is what I think is disturbing about it.”

“It’s 100% intentional; that’s how you get Black George Washington at Google,” said Andreessen in the recent interview with Weiss, rehashing the Gemini incident. “This goes directly to Elon’s argument, which is that at the core of this, you have to train the AI to lie [i.e., to produce answers like Gemini’s].”

As Andreessen mentions, Elon Musk has been outspoken against “woke AI chatbots.” Musk originally created his well-funded AI startup, xAI, in 2023 to oppose OpenAI’s ChatGPT, which the billionaire said at the time was infected with the “woke mind virus.” He ultimately created Grok, an AI chatbot with notably fewer safeguards than other leading chatbots.

“I’m going to start something which you call TruthGPT or a maximum truth-seeking AI that tries to understand the nature of the universe,” said Musk in an interview with Fox from 2023.

When Musk launched Grok, Sacks applauded the effort: “Having something like Grok around will — at a minimum — keep OpenAI honest and keep ChatGPT honest,” said Trump’s AI czar in an All In episode from November 2023.

Now, Musk is doing more than just keeping ChatGPT honest. He has raised more than $12 billion to fund xAI and compete with OpenAI. He’s also suing Sam Altman’s startup and Microsoft, a case that could halt OpenAI’s for-profit transition.

Musk’s influence on conservative government officials has already proven to carry weight in other areas. Texas attorney general Ken Paxton is investigating a group of advertisers that allegedly boycotted Elon Musk’s X. Musk previously sued the same advertising group, and since then, some of the companies have resumed advertising on his platform.

It’s not clear what Trump and other Republicans could do if they actually wanted to investigate OpenAI or Google for AI censorship. The options could include investigations by expert agencies, legal challenges, or perhaps just a cultural issue that Trump presses for the next four years. Regardless of the path forward, Trump’s Silicon Valley advisers are not mincing words on this issue today.

“Elon, with the Twitter files, did a privatized version of what now needs to happen broadly,” said Andreessen to Weiss, referring to Musk’s allegations of censorship at Twitter. “We, the American population, need to find out what’s been happening all this time, specifically about this intertwining of government pressure with censorship … There needs to be consequences.”


Tech

Volkswagen’s cheapest EV ever is the first to use Rivian software

Volkswagen’s ultra-cheap EV called the ID EVERY1 — a small four-door hatchback revealed Wednesday — will be the first to roll out with software and architecture from Rivian, according to a source familiar with the new model.

The EV is expected to go into production in 2027 with a starting price of 20,000 euros ($21,500). A second EV, the ID.2all, priced around 25,000 euros, will be available in 2026. Both vehicles are part of the automaker’s new category of electric urban front-wheel-drive cars being developed under the so-called “Brand Group Core,” which comprises the volume brands in the VW Group. And both vehicles are for the European market.

The EVERY1 will be the first to ship with Rivian’s vehicle architecture and software as part of a $5.8 billion joint venture struck last year between the German automaker and the U.S. EV maker. The ID.2all is based on the E3 1.1 architecture and software developed by VW’s software unit Cariad.

VW didn’t name Rivian in its reveal Wednesday, although there were numerous nods to next-generation software. Kai Grünitz, member of the Volkswagen Brand Board of Management responsible for Technical Development, noted it would be the first model in the entire VW Group to use a “fundamentally new, particularly powerful software architecture.”

“This means the future entry-level Volkswagen can be equipped with new functions throughout its entire life cycle,” he said. “Even after purchase of a new car, the small Volkswagen can still be individually adapted to customer needs.”

Sources, who didn’t want to be named because they were not authorized to speak publicly, confirmed to TechCrunch that Rivian’s software will be in the ID EVERY1 EV. TechCrunch has reached out to Rivian and VW and will update the article if the companies respond.

The new joint venture provides Rivian with a needed influx of cash and the opportunity to diversify its business. Meanwhile, VW Group gains a next-generation electrical architecture and software for EVs that will help it better compete. Both companies have said that the joint venture, called Rivian and Volkswagen Group Technologies, will reduce development costs and help scale new technologies more quickly.

The joint venture is a 50-50 partnership with co-CEOs. Rivian’s head of software, Wassym Bensaid, and Volkswagen Group’s chief technical engineer, Carsten Helbing, will lead the joint venture. The team will be based initially in Palo Alto, California. Three other sites are in development in North America and Europe, the companies have previously said.

Image Credits: VW

“The ID. EVERY1 represents the last piece of the puzzle on our way to the widest model selection in the volume segment,” Thomas Schäfer, CEO of the Volkswagen Passenger Cars brand and Head of the Brand Group Core, said in a statement. “We will then offer every customer the right car with the right drive system–including affordable all-electric entry-level mobility. Our goal is to be the world’s technologically leading high-volume manufacturer by 2030. And as a brand for everyone–just as you would expect from Volkswagen.”

The Volkswagen ID EVERY1 is just a concept for now, and only a few details were shared at the unveiling. The concept vehicle reaches a top speed of 130 km/h (about 80 miles per hour) and is powered by a newly developed 70 kW electric drive motor, according to Volkswagen. The German automaker said the range of the EVERY1 will be at least 250 kilometers (about 155 miles). The vehicle is small but larger than VW’s former up! model. The company said it will have enough space for four people and a luggage compartment volume of 305 liters.


Tech

The hottest AI models, what they do, and how to use them

AI models are being cranked out at a dizzying pace, by everyone from Big Tech companies like Google to startups like OpenAI and Anthropic. Keeping track of the latest ones can be overwhelming. 

Adding to the confusion is that AI models are often promoted based on industry benchmarks. But these technical metrics often reveal little about how real people and companies actually use them. 

To cut through the noise, TechCrunch has compiled an overview of the most advanced AI models released since 2024, with details on how to use them and what they’re best for. We’ll keep this list updated with the latest launches, too.

There are literally over a million AI models out there: Hugging Face, for example, hosts over 1.4 million. So this list might miss some models that perform better, in one way or another. 

AI models released in 2025

Cohere’s Aya Vision

Cohere released a multimodal model called Aya Vision that it claims is best in class at doing things like captioning images and answering questions about photos. It also excels in languages other than English, unlike other models, Cohere claims. It is available for free on WhatsApp.

OpenAI’s GPT 4.5 ‘Orion’

OpenAI calls Orion its largest model to date, touting its strong “world knowledge” and “emotional intelligence.” However, it underperforms on certain benchmarks compared to newer reasoning models. Orion is available to subscribers of OpenAI’s $200-a-month plan.

Claude Sonnet 3.7

Anthropic says this is the industry’s first ‘hybrid’ reasoning model, because it can both fire off quick answers and really think things through when needed. It also gives users control over how long the model can think for, per Anthropic. Sonnet 3.7 is available to all Claude users, but heavier users will need a $20 a month Pro plan.

xAI’s Grok 3

Grok 3 is the latest flagship model from Elon Musk-founded startup xAI. It’s claimed to outperform other leading models on math, science, and coding. The model requires X Premium (which is $50 a month). After one study found Grok 2 leaned left, Musk pledged to make Grok more “politically neutral,” but it’s not yet clear whether that’s been achieved.

OpenAI o3-mini

This is OpenAI’s latest reasoning model, optimized for STEM-related tasks like coding, math, and science. It’s not OpenAI’s most powerful model, but because it’s smaller, the company says it’s significantly lower cost. It is available for free but requires a subscription for heavy users.

OpenAI Deep Research

OpenAI’s Deep Research is designed for doing in-depth research on a topic with clear citations. This service is only available with ChatGPT’s $200 per month Pro subscription. OpenAI recommends it for everything from science to shopping research, but beware that hallucinations remain a problem for AI.

Mistral Le Chat

Mistral has launched app versions of Le Chat, a multimodal AI personal assistant. Mistral claims Le Chat responds faster than any other chatbot. It also has a paid version with up-to-date journalism from the AFP. Tests from Le Monde found Le Chat’s performance impressive, although it made more errors than ChatGPT.

OpenAI Operator

OpenAI’s Operator is meant to be a personal intern that can do things independently, like help you buy groceries. It requires a $200 a month ChatGPT Pro subscription. AI agents hold a lot of promise, but they’re still experimental: a Washington Post reviewer says Operator decided on its own to order a dozen eggs for $31, paid with the reviewer’s credit card.

Google Gemini 2.0 Pro Experimental

Google says its much-awaited flagship Gemini model excels at coding and understanding general knowledge. It also has a super-long context window of 2 million tokens, helping users who need to quickly process massive chunks of text. The service requires (at minimum) a Google One AI Premium subscription at $19.99 a month.

AI models released in 2024

DeepSeek R1

This Chinese AI model took Silicon Valley by storm. DeepSeek’s R1 performs well on coding and math, while its open source nature means anyone can run it locally. Plus, it’s free. However, R1 integrates Chinese government censorship and faces rising bans for potentially sending user data back to China.

Gemini Deep Research

Deep Research summarizes Google’s search results in a simple and well-cited document. The service is helpful for students and anyone else who needs a quick research summary. However, its quality isn’t nearly as good as an actual peer-reviewed paper. Deep Research requires a $19.99 Google One AI Premium subscription.

Meta Llama 3.3 70B

This is the newest and most advanced version of Meta’s open source Llama AI models. Meta has touted this version as its cheapest and most efficient yet, especially for math, general knowledge, and instruction following. It is free and open source.

OpenAI Sora

Sora is a model that creates realistic videos based on text. While it can generate entire scenes rather than just clips, OpenAI admits that it often generates “unrealistic physics.” It’s currently only available on paid versions of ChatGPT, starting with Plus, which is $20 a month. 

Alibaba Qwen QwQ-32B-Preview

This model is one of the few to rival OpenAI’s o1 on certain industry benchmarks, excelling in math and coding. Ironically for a “reasoning model,” it has “room for improvement in common sense reasoning,” Alibaba says. It also incorporates Chinese government censorship, TechCrunch testing shows. It’s free and open source.

Anthropic’s Computer Use

Claude’s Computer Use is meant to take control of your computer to complete tasks like coding or booking a plane ticket, making it a predecessor of OpenAI’s Operator. Computer Use, however, remains in beta. Pricing is via the API: $0.80 per million input tokens and $4 per million output tokens.

xAI’s Grok 2

Elon Musk’s AI company, xAI, has launched an enhanced version of its flagship Grok 2 chatbot, which it claims is “three times faster.” Free users are limited to 10 questions every two hours on Grok, while subscribers to X’s Premium and Premium+ plans enjoy higher usage limits. xAI also launched an image generator, Aurora, that produces highly photorealistic images, including some graphic or violent content.

OpenAI o1

OpenAI’s o1 family is meant to produce better answers by “thinking” through responses via a hidden reasoning feature. The model excels at coding, math, and safety, OpenAI claims, but it also has issues with deceiving humans. Using o1 requires subscribing to ChatGPT Plus, which is $20 a month.

Anthropic’s Claude Sonnet 3.5 

Claude Sonnet 3.5 is a model Anthropic claims is best in class. It’s become known for its coding capabilities and is considered a tech insider’s chatbot of choice. The model can be accessed for free on Claude, although heavy users will need a $20 monthly Pro subscription. While it can understand images, it can’t generate them.

OpenAI GPT 4o-mini

OpenAI has touted GPT 4o-mini as its most affordable and fastest model yet thanks to its small size. It’s meant to enable a broad range of tasks like powering customer service chatbots. The model is available on ChatGPT’s free tier. It’s better suited for high-volume simple tasks compared to more complex ones.

Cohere Command R+

Cohere’s Command R+ model excels at complex Retrieval-Augmented Generation (or RAG) applications for enterprises. That means it can find and cite specific pieces of information really well. (The inventor of RAG actually works at Cohere.) Still, RAG doesn’t fully solve AI’s hallucination problem.


Tech

Not all cancer patients need chemo. Ataraxis AI raised $20M to fix that.

Artificial intelligence is a big trend in cancer care, and it’s mostly focused on detecting cancer at the earliest possible stage. That makes a lot of sense, given that cancer is less deadly the earlier it’s detected.

But fewer are asking another fundamental question: if someone does have cancer, is an aggressive treatment like chemotherapy necessary? That’s the problem Ataraxis AI is trying to solve.

The New York-based startup is focused on using AI to accurately predict not only whether a patient has cancer, but also what their cancer outcome looks like in 5 to 10 years. If there’s only a small chance of the cancer coming back, chemo can be avoided altogether, saving a lot of money while sparing patients the treatment’s notorious side effects.

Ataraxis AI now plans to launch its first commercial test, for breast cancer, to U.S. oncologists in the coming months, its co-founder Jan Witowski tells TechCrunch. To bolster the launch and expand into other types of cancer, the startup has raised a $20.4 million Series A, it told TechCrunch exclusively.

The round was led by AIX Ventures with participation from Thiel Bio, Founders Fund, Floating Point, Bertelsmann, and existing investors Giant Ventures and Obvious Ventures. Ataraxis emerged from stealth last year with a $4 million seed round.

Ataraxis was co-founded by Witowski and Krzysztof Geras, an assistant professor at NYU’s medical school who focuses on AI.

Ataraxis’ tech is powered by an AI model that extracts information from high-resolution images of cancer cells. The model is trained on hundreds of millions of real images from thousands of patients, Witowski said. A recent study showed Ataraxis’ tech was 30% more accurate than the current standard of care for breast cancer, per Ataraxis.

Long term, Ataraxis has big ambitions. It wants its tests to impact at least half of new cancer cases by 2030. It also views itself as a frontier AI company that builds its own models, touting Meta’s chief AI scientist Yann LeCun as an AI advisor.

“I think at Ataraxis we are trying to build what is essentially an AI frontier lab, but for healthcare applications,” Witowski said. “Because so many of those problems require a very novel technology.”

The AI boom has led to a rush of fundraises for cancer care startups. Valar Labs raised $22 million to help patients figure out their treatment plan in May 2024, for example. There’s also a bevy of AI-powered drug discovery firms in the cancer space, like Manas AI, which raised $24.6 million in January 2025 and was co-founded by LinkedIn co-founder Reid Hoffman.
