Sam Altman catapults past founder mode into ‘god mode’ with latest AI post

Founder mode? Pffft. Who needs that when you can be the father of creation, ushering in a new age of humanity?
Welcome to “god mode.”
Sam Altman, CEO of OpenAI, the AI startup headed for a $150 billion valuation, has historically pitched AI as the solution to the world’s problems, even as its heavy demands on energy, its carbon emissions, and the water used to cool its data centers eat into the progress the world has made toward combating climate change.
In his latest post, the OpenAI leader presents an incredibly positive update on the state of AI, hyping its world-changing potential. Far from being an occasionally helpful alternative to a Google search or a homework helper, AI, as Altman presents it, will change the course of humanity’s progress — for the better, naturally.
Through rose-tinted contacts, Altman pitches the numerous ways he believes AI will save the world. But much of what he writes is seemingly meant to convince skeptics of how much AI matters, and it could well have the opposite result: Instead of creating new fans, posts like this may invite increased scrutiny as to whether we’re in an “emperor’s new clothes” situation.
As one commentator with the username sharkjacobs on the technical forum Hacker News writes, “I’m not an AI skeptic at all, I use LLMs all the time, and find them very useful. But stuff like this makes me very skeptical of the people who are making and selling AI.”
Let’s go through Altman’s promises and rate them as believable or just hype:
- AI will help us solve “hard problems.” Believable. Whether those hard problems turn out to be something profound, like medical science, or nothing more than helping engineers with coding challenges, helping kids cheat on their homework, and churning out weird and maybe partially stolen art remains to be seen.
- “We’ll soon be able to work with AI that helps us accomplish much more than we ever could without AI.” Veering into hype. Yes, using a new tool or technology will help us accomplish more, but will it actually increase efficiency to the point that businesses are willing to shell out for it, especially considering the state it’s in today? It’s still too early to know the answer here.
- “Eventually we can each have a personal AI team, full of virtual experts in different areas, working together to create almost anything we can imagine.” Hype. First of all, creating “almost anything” we can imagine is not necessarily a good thing — not only because it detracts from the art and works created by humans but also because people can imagine some genuinely terrible things. It’s also worth asking whether these “virtual experts” would just be swiping and summarizing the ideas of actual experts.
- “Our children will have virtual tutors.” Believable. A chatbot helper may not be better than a 1:1 tutoring session with a real person, but the fact is many families can’t afford the real thing. But such an important and influential role will need to be carefully defined and rigorously studied.
- “…imagine similar ideas for better healthcare.” Hype. Again, a vague promise that AI will improve our health and well-being, as it will have “the ability to create any kind of software someone can imagine.”
- “We can have shared prosperity to a degree that seems unimaginable today; in the future, everyone’s lives can be better than anyone’s life is now.” Hype! This is where he really goes into god mode.
- AI will “meaningfully improve the lives of people around the world.” Hype. How? When? To what extent? Whose lives? We have many questions here.
- “This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there.” Hype with a capital H. A vague tease that AGI (artificial general intelligence) is, with all certainty, going to arrive, and it’s only a matter of time. However, many AI critics argue AGI may never be realized, at least not as promised. We may end up with smarter models, but not necessarily ones capable of human-level understanding, skeptics believe.
- “…the next leap in prosperity.” Hype. Like many technological changes, AI in the near term may lead to job losses before creating new ones. If it were to free up people from the drudgery of work, then how would they pay their rent or buy food in a capitalist society that demands labor as the cost of living for all but the mega-rich? A lot of this rhetoric will be familiar to anyone who has followed the “singularity” type futurists over the years.
- “AI is going to get better with scale…” Believable. It does make sense that AI will improve as the technology scales and grows, though the cost of that scale is left out of the equation.
- “…and that will lead to meaningful improvements to the lives of people around the world.” Hold up! Hype. We’re going to need to see the receipts on this one when the time comes. Also, how is “meaningful” being measured here? Because the consumer experience with things like OpenAI’s ChatGPT and other chatbots today often involves AI hallucinating facts, pulling bad info from scraped websites, or regurgitating the dumbest stuff posted on Reddit, none of which are “meaningful improvements,” as of yet. (Of course, we’re not talking just about chatbots in this post, but it’s a point that could be lost on the intended audience!)
- “AI systems are going to get so good that they help us make better next-generation systems and make scientific progress across the board.” Hype. AI is already improving things in areas like medicine and science, but whether these improvements are incremental or significant is something we can’t yet measure. Until AI’s cancer treatments and radiology expertise provably lead to significantly improved outcomes for regular people, this has to be categorized as hype.
- “If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.” Hype. If we don’t embrace and invest in AI, wars are inevitable? Okay? That’s why we’re spinning up more power plants like the one at Three Mile Island? YOLO!
- “The dawn of the Intelligence Age.” Hype. Historians get to define the past ages; for all we know, this could be the “age of resource overconsumption” that eventually led to our downfall.
- “It will not be an entirely positive story, but the upside is so tremendous…” First part, believable. Second part, hype.
- “…the future is going to be so bright that no one can do it justice by trying to write about it now.” Then why is Altman trying? We rate the futility as believable, but the brightness as hype.
- “A defining characteristic of the Intelligence Age will be massive prosperity.” Hype. Show us the money. Heck, convince the CIOs of AI’s value first.
- “Although it will happen incrementally, astounding triumphs — fixing the climate, establishing a space colony, and the discovery of all of physics — will eventually become commonplace.” Hype. So, we have to destroy the environment to run AI data centers but AI will eventually fix climate change?
- “…we expect that this technology can cause a significant change in labor markets…” Believable. But don’t sugarcoat this one — this coming change could be bad in the immediate future.
- “Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter.” Hype. Why shade lamplighters? That actually sounds like a pretty chill job? Jokes aside, this equates the arrival of AI with the arrival of electricity, which is more than a little presumptuous.
Altman’s hype aside, it’s worth acknowledging that AI is a sizable platform shift and perhaps the biggest since the arrival of mobile technology. (Case in point: Apple is selling its iPhone 16 based on its AI capabilities, not its hardware.)
AI could eventually deliver major changes. But today, it’s still fair to question whether the arrival of AI will ultimately prove as significant as connecting the world through the internet, putting a web-connected computer in everyone’s home, and then in everyone’s pocket.
On the one side are the true believers counting the days to AGI, and on the other, skeptics who would like to see more before dubbing the AI age a utopia — especially considering the real-world costs to the environment, the workforce, art, and creation.
We’re currently at the point in AI’s development where consumers and businesses alike are figuring out how AI will fit into their usual workflows, where it can improve efficiencies, and where it will not. Until that shakes out, much of what’s being written about AI’s future can only be speculative.
Volkswagen’s cheapest EV ever is the first to use Rivian software

Volkswagen’s ultra-cheap EV called the ID EVERY1 — a small four-door hatchback revealed Wednesday — will be the first to roll out with software and architecture from Rivian, according to a source familiar with the new model.
The EV is expected to go into production in 2027 with a starting price of 20,000 euros ($21,500). A second EV, the ID.2all, priced around 25,000 euros, will be available in 2026. Both vehicles are part of the automaker’s new category of electric urban front-wheel-drive cars being developed under the so-called “Brand Group Core,” which comprises the volume brands in the VW Group. And both vehicles are for the European market.
The EVERY1 will be the first to ship with Rivian’s vehicle architecture and software as part of a $5.8 billion joint venture struck last year between the German automaker and U.S. EV maker. The ID.2all is based on the E3 1.1 architecture and software developed by VW’s software unit Cariad.
VW didn’t name Rivian in its reveal Wednesday, although there were numerous nods to next-generation software. Kai Grünitz, member of the Volkswagen Brand Board of Management responsible for Technical Development, noted it would be the first model in the entire VW Group to use a “fundamentally new, particularly powerful software architecture.”
“This means the future entry-level Volkswagen can be equipped with new functions throughout its entire life cycle,” he said. “Even after purchase of a new car, the small Volkswagen can still be individually adapted to customer needs.”
Sources who didn’t want to be named because they were not authorized to speak publicly confirmed to TechCrunch that Rivian’s software will be in the ID EVERY1 EV. TechCrunch has reached out to Rivian and VW and will update the article if the companies respond.
The new joint venture provides Rivian with a needed influx of cash and the opportunity to diversify its business. Meanwhile, VW Group gains a next-generation electrical architecture and software for EVs that will help it better compete. Both companies have said that the joint venture, called Rivian and Volkswagen Group Technologies, will reduce development costs and help scale new technologies more quickly.
The joint venture is a 50-50 partnership with co-CEOs. Rivian’s head of software, Wassym Bensaid, and Volkswagen Group’s chief technical engineer, Carsten Helbing, will lead the joint venture. The team will be based initially in Palo Alto, California. Three other sites are in development in North America and Europe, the companies have previously said.

“The ID. EVERY1 represents the last piece of the puzzle on our way to the widest model selection in the volume segment,” Thomas Schäfer, CEO of the Volkswagen Passenger Cars brand and Head of the Brand Group Core, said in a statement. “We will then offer every customer the right car with the right drive system–including affordable all-electric entry-level mobility. Our goal is to be the world’s technologically leading high-volume manufacturer by 2030. And as a brand for everyone–just as you would expect from Volkswagen.”
The Volkswagen ID EVERY1 is just a concept for now — and with only a few details attached to the unveiling. The concept vehicle reaches a top speed of 130 km/h (80 miles per hour) and is powered by a newly developed 70 kW electric drive motor, according to Volkswagen. The German automaker said the range on the EVERY1 will be at least 250 kilometers (about 155 miles). The vehicle is small but larger than VW’s former UP! vehicle. The company said it will have enough space for four people and a luggage compartment volume of 305 liters.
The hottest AI models, what they do, and how to use them

AI models are being cranked out at a dizzying pace, by everyone from Big Tech companies like Google to startups like OpenAI and Anthropic. Keeping track of the latest ones can be overwhelming.
Adding to the confusion is that AI models are often promoted based on industry benchmarks. But these technical metrics often reveal little about how real people and companies actually use them.
To cut through the noise, TechCrunch has compiled an overview of the most advanced AI models released since 2024, with details on how to use them and what they’re best for. We’ll keep this list updated with the latest launches, too.
There are literally over a million AI models out there: Hugging Face, for example, hosts over 1.4 million. So this list might miss some models that perform better, in one way or another.
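For a sense of that scale, the Hugging Face Hub can be browsed programmatically. Below is a minimal sketch using the huggingface_hub Python library to list a few of the most-downloaded models; the exact results change constantly, and the sort and limit parameters shown are just one way to slice the catalog.

```python
# Minimal sketch: browse the Hugging Face Hub programmatically.
# Requires `pip install huggingface_hub`; results change daily as new models land.
from huggingface_hub import HfApi

api = HfApi()

# List the five most-downloaded models currently hosted on the Hub.
for model in api.list_models(sort="downloads", direction=-1, limit=5):
    print(model.id)
```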
AI models released in 2025
Cohere’s Aya Vision
Cohere released a multimodal model called Aya Vision that it claims is best in class at doing things like captioning images and answering questions about photos. It also excels in languages other than English, unlike other models, Cohere claims. It is available for free on WhatsApp.
OpenAI’s GPT 4.5 ‘Orion’
OpenAI calls Orion its largest model to date, touting its strong “world knowledge” and “emotional intelligence.” However, it underperforms on certain benchmarks compared to newer reasoning models. Orion is available to subscribers of OpenAI’s $200 a month plan.
Claude Sonnet 3.7
Anthropic says this is the industry’s first ‘hybrid’ reasoning model, because it can both fire off quick answers and really think things through when needed. It also gives users control over how long the model can think for, per Anthropic. Sonnet 3.7 is available to all Claude users, but heavier users will need a $20 a month Pro plan.
xAI’s Grok 3
Grok 3 is the latest flagship model from Elon Musk-founded startup xAI. It’s claimed to outperform other leading models on math, science, and coding. The model requires X Premium, which is $50 a month. After one study found Grok 2 leaned left, Musk pledged to make Grok more “politically neutral,” but it’s not yet clear whether that’s been achieved.
OpenAI o3-mini
This is OpenAI’s latest reasoning model, optimized for STEM-related tasks like coding, math, and science. It’s not OpenAI’s most powerful model, but because it’s smaller, the company says it’s significantly cheaper to run. It is available for free but requires a subscription for heavy users.
OpenAI Deep Research
OpenAI’s Deep Research is designed for doing in-depth research on a topic with clear citations. This service is only available with ChatGPT’s $200 per month Pro subscription. OpenAI recommends it for everything from science to shopping research, but beware that hallucinations remain a problem for AI.
Mistral Le Chat
Mistral has launched app versions of Le Chat, a multimodal AI personal assistant. Mistral claims Le Chat responds faster than any other chatbot. It also has a paid version with up-to-date journalism from the AFP. Tests from Le Monde found Le Chat’s performance impressive, although it made more errors than ChatGPT.
OpenAI Operator
OpenAI’s Operator is meant to be a personal intern that can do things independently, like help you buy groceries. It requires a $200 a month ChatGPT Pro subscription. AI agents hold a lot of promise, but they’re still experimental: a Washington Post reviewer says Operator decided on its own to order a dozen eggs for $31, paid with the reviewer’s credit card.
Google Gemini 2.0 Pro Experimental
Google says its much-awaited flagship Gemini model excels at coding and understanding general knowledge. It also has a super-long context window of 2 million tokens, helping users who need to quickly process massive chunks of text. The service requires (at minimum) a Google One AI Premium subscription of $19.99 a month.
AI models released in 2024
DeepSeek R1
This Chinese AI model took Silicon Valley by storm. DeepSeek’s R1 performs well on coding and math, while its open source nature means anyone can run it locally. Plus, it’s free. However, R1 integrates Chinese government censorship and faces rising bans for potentially sending user data back to China.
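As a rough illustration of what “run it locally” means in practice, here is a minimal sketch using Hugging Face’s transformers library. The checkpoint name and generation settings are assumptions for illustration: the full R1 model is far too large for consumer hardware, so one of DeepSeek’s smaller distilled variants stands in here.

```python
# Minimal sketch: run a small distilled DeepSeek-R1 checkpoint locally with transformers.
# The model ID below is an assumption for illustration; the full R1 model needs
# data-center-class hardware, so a distilled variant is used instead.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed small checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Ask a simple question and let the model produce its step-by-step reasoning.
prompt = "What is 17 * 24? Think step by step."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```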
Gemini Deep Research
Deep Research summarizes Google’s search results in a simple and well-cited document. The service is helpful for students and anyone else who needs a quick research summary. However, its quality isn’t nearly as good as an actual peer-reviewed paper. Deep Research requires a $19.99 Google One AI Premium subscription.
Meta Llama 3.3 70B
This is the newest and most advanced version of Meta’s open source Llama AI models. Meta has touted this version as its cheapest and most efficient yet, especially for math, general knowledge, and instruction following. It is free and open source.
OpenAI Sora
Sora is a model that creates realistic videos based on text. While it can generate entire scenes rather than just clips, OpenAI admits that it often generates “unrealistic physics.” It’s currently only available on paid versions of ChatGPT, starting with Plus, which is $20 a month.
Alibaba Qwen QwQ-32B-Preview
This model is one of the few to rival OpenAI’s o1 on certain industry benchmarks, excelling in math and coding. Ironically for a “reasoning model,” it has “room for improvement in common sense reasoning,” Alibaba says. It also incorporates Chinese government censorship, TechCrunch testing shows. It’s free and open source.
Anthropic’s Computer Use
Claude’s Computer Use is meant to take control of your computer to complete tasks like coding or booking a plane ticket, making it a predecessor of OpenAI’s Operator. Computer Use, however, remains in beta. Pricing is via API: $0.80 per million tokens of input and $4 per million tokens of output.
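To put that per-token pricing in perspective, here is a small back-of-the-envelope sketch; the token counts used in the example are hypothetical, purely to show the arithmetic.

```python
# Back-of-the-envelope cost estimate for the API pricing quoted above:
# $0.80 per million input tokens and $4 per million output tokens.
# The token counts in the example are hypothetical.
INPUT_PRICE_PER_M = 0.80   # dollars per 1M input tokens
OUTPUT_PRICE_PER_M = 4.00  # dollars per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for a given token usage."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + (
        output_tokens / 1_000_000
    ) * OUTPUT_PRICE_PER_M

# Example: a long agentic session that reads 2M tokens and writes 500K tokens.
print(f"${estimate_cost(2_000_000, 500_000):.2f}")  # -> $3.60
```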
xAI’s Grok 2
Elon Musk’s AI company, xAI, has launched an enhanced version of its flagship Grok 2 chatbot that it claims is “three times faster.” Free users are limited to 10 questions every two hours on Grok, while subscribers to X’s Premium and Premium+ plans enjoy higher usage limits. xAI also launched an image generator, Aurora, that produces highly photorealistic images, including some graphic or violent content.
OpenAI o1
OpenAI’s o1 family is meant to produce better answers by “thinking” through responses via a hidden reasoning feature. The model excels at coding, math, and safety, OpenAI claims, but it also has issues with deceiving humans. Using o1 requires subscribing to ChatGPT Plus, which is $20 a month.
Anthropic’s Claude Sonnet 3.5
Claude Sonnet 3.5 is a model Anthropic claims is best in class. It’s become known for its coding capabilities and is considered a tech insider’s chatbot of choice. The model can be accessed for free on Claude, although heavy users will need a $20 monthly Pro subscription. While it can understand images, it can’t generate them.
OpenAI GPT 4o-mini
OpenAI has touted GPT 4o-mini as its most affordable and fastest model yet thanks to its small size. It’s meant to enable a broad range of tasks like powering customer service chatbots. The model is available on ChatGPT’s free tier. It’s better suited for high-volume simple tasks compared to more complex ones.
Cohere Command R+
Cohere’s Command R+ model excels at complex Retrieval-Augmented Generation (or RAG) applications for enterprises. That means it can find and cite specific pieces of information really well. (The inventor of RAG actually works at Cohere.) Still, RAG doesn’t fully solve AI’s hallucination problem.
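For readers unfamiliar with the term, the basic RAG loop is: retrieve relevant documents first, then have the model answer using (and citing) them. Below is a minimal, model-agnostic sketch of that control flow; the embed, search_index, and generate helpers are hypothetical placeholders for whatever embedding model, vector store, and LLM you actually use, not anything specific to Command R+.

```python
# Minimal, model-agnostic sketch of Retrieval-Augmented Generation (RAG).
# `embed`, `search_index`, and `generate` are hypothetical placeholders;
# the point is the overall retrieve-then-generate control flow.
from typing import List


def embed(text: str) -> List[float]:
    raise NotImplementedError("plug in an embedding model here")


def search_index(query_vector: List[float], top_k: int = 3) -> List[str]:
    raise NotImplementedError("plug in a vector store lookup here")


def generate(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM call here")


def answer_with_citations(question: str) -> str:
    # 1. Retrieve the documents most relevant to the question.
    docs = search_index(embed(question))
    # 2. Number the documents so the model can cite them as [1], [2], ...
    context = "\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(docs))
    # 3. Ask the model to answer using only the retrieved context, with citations.
    prompt = (
        "Answer the question using only the numbered sources below and cite "
        f"them by number.\n\nSources:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```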
Not all cancer patients need chemo. Ataraxis AI raised $20M to fix that.

Artificial intelligence is a big trend in cancer care, and it’s mostly focused on detecting cancer at the earliest possible stage. That makes a lot of sense, given that cancer is less deadly the earlier it’s detected.
But fewer are asking another fundamental question: if someone does have cancer, is an aggressive treatment like chemotherapy necessary? That’s the problem Ataraxis AI is trying to solve.
The New York-based startup is focused on using AI to accurately predict not only whether a patient has cancer, but also what their cancer outcome looks like in 5 to 10 years. If there’s only a small chance of the cancer coming back, chemo can be skipped altogether, saving a lot of money and sparing the patient the treatment’s notorious side effects.
Ataraxis AI now plans to launch its first commercial test, for breast cancer, to U.S. oncologists in the coming months, its co-founder Jan Witowski tells TechCrunch. To bolster the launch and expand into other types of cancer, the startup has raised a $20.4 million Series A, it told TechCrunch exclusively.
The round was led by AIX Ventures with participation from Thiel Bio, Founders Fund, Floating Point, Bertelsmann, and existing investors Giant Ventures and Obvious Ventures. Ataraxis emerged from stealth last year with a $4 million seed round.
Ataraxis was co-founded by Witowski and Krzysztof Geras, an assistant professor at NYU’s medical school who focuses on AI.
Ataraxis’ tech is powered by an AI model that extracts information from high-resolution images of cancer cells. The model is trained on hundreds of millions of real images from thousands of patients, Witowski said. A recent study showed Ataraxis’ tech was 30% more accurate than the current standard of care for breast cancer, per Ataraxis.
Long term, Ataraxis has big ambitions. It wants its tests to impact at least half of new cancer cases by 2030. It also views itself as a frontier AI company that builds its own models, touting Meta’s chief AI scientist Yann LeCun as an AI advisor.
“I think at Ataraxis we are trying to build what is essentially an AI frontier lab, but for healthcare applications,” Witowski said. “Because so many of those problems require a very novel technology.”
The AI boom has led to a rush of fundraises for cancer care startups. Valar Labs raised $22 million to help patients figure out their treatment plan in May 2024, for example. There’s also a bevy of AI-powered drug discovery firms in the cancer space, like Manas AI, which raised $24.6 million in January 2025 and was co-founded by LinkedIn co-founder Reid Hoffman.