$60B AI chip darling Cerebras almost died early on, burning $8M a month

Today, Cerebras Systems is a public company that sells AI chips for inference to giants like OpenAI and AWS. It held a blockbuster IPO on Thursday, one that left both of its co-founders billionaires, and ended the week worth about $60 billion.

But in 2019, when it was three years old, it came dangerously close to failure, incinerating a shocking amount of money while trying to solve a technical problem no one in the semiconductor industry thought could be solved.

“We were spending about $8 million a month,” founder and CEO Andrew Feldman told TechCrunch of that period. “At this point, we had incinerated nearly $200 million trying to solve one technical problem.”

Every few weeks, Feldman was forced to make the painful walk of shame to the board meeting to report another failure and more money burned. 

But he had no choice. Without a solution, Cerebras was dead anyway.

Cerebras was founded on an idea that was simple on paper. The microprocessor industry had spent its entire 50-plus-year history making CPUs faster and cheaper by cramming more transistors onto a silicon wafer and dicing wafers into ever-tinier pieces. But AI required so much compute power that many chips had to be strung together and then forced to communicate with each other. Cerebras’ founders believed that turning a whole, even bigger, wafer into one giant, powerful chip would work faster.

The problem was, no one had ever successfully done this before, for any reason, AI or not. Packing that many microscopic electronic components onto a larger, but still thin, surface introduced compounding engineering problems.

Once Cerebras crossed the first threshold of designing the mega chip and then manufacturing it with TSMC, the team hit the real roadblock. 

They couldn’t solve “packaging.” This involves everything after manufacturing the silicon itself: adhering it to a motherboard, getting power to it, dealing with heating and cooling as well as the pipes that would deliver and return data, Feldman said. 

Cerebras’ chips “were 58 times larger. We were using 40 times as much power as anybody had ever used,” he said. There were no premade heat sinks. No vendors. No manufacturing partners. The brightest minds in microprocessor engineering had tried for decades to build such big, dense chips and failed.

The Cerebras team was left with trial and error in which “we destroyed an enormous number of chips” and an enormous amount of cash. But without functional packaging, the chip was useless. 

After exhaustive analysis of each failure, the team finally solved enough problems: how to cool the chip and how to move data around. In one instance, they had to invent their own machine that could bolt in 40 screws simultaneously to secure the wafer to a board without cracking it.

Feldman still remembers the day in July 2019 when it all, miraculously, worked.

They installed the packaged chip into a computer, turned it on and the entire founding team (pictured below) “just stood in the lab and stared at it,” he said. “Watching a computer run is about as exciting as watching paint dry. But there we were watching lights flashing on the computer, stunned that we’d solved this.” 

“That was one of the greatest moments of my life,” he said. That’s significant, because this same founding team had previously built and sold a pioneering cloud server startup, SeaMicro, to AMD for $334 million in 2012.

Cerebras Systems founding team in 2015: Andrew Feldman, Gary Lauterbach, Michael James, Sean Lie and Jean-Philippe Fricker. Image Credits: Cerebras Systems

The day the chip finally worked also came about two years after OpenAI had talked to Cerebras about acquiring it, discussions Feldman confirmed to TechCrunch happened much as the publicly revealed emails described.

Those talks fell through amidst growing squabbling among the OpenAI founders, several of whom are angel investors in Cerebras. 

Today OpenAI is a customer and a partner, having loaned Cerebras $1 billion secured by warrants. Those warrants conditionally grant OpenAI about 33 million shares of Cerebras’ stock, the S-1 discloses. (33 million shares are worth over $9 billion at Friday’s closing price of $279.) 
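The over-$9-billion figure in that parenthetical is simply the share count times the closing price; here is a quick back-of-the-envelope check, ignoring any strike price or vesting conditions, which the article does not detail:

```python
# Rough sanity check of the warrant value cited above.
# Assumes exactly 33 million shares and the $279 Friday close mentioned in the article;
# the actual warrant terms (strike price, vesting, conditions) are not spelled out here.
shares = 33_000_000
friday_close = 279  # USD per share

implied_value = shares * friday_close
print(f"Implied value: ${implied_value / 1e9:.2f}B")  # ~$9.21B, i.e. "over $9 billion"
```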

Interestingly, Cerebras also agreed not to sell its wares to specific OpenAI competitors as part of that loan deal. Feldman wouldn’t confirm the obvious company this involves: Anthropic. He did, however, say that the restriction is temporary.

“It’s limited in time, and it was designed to make sure that we could get OpenAI the capacity,” he said.

The truth is, Cerebras hasn’t yet grown big enough to handle multiple fast-growing model makers anyway. Feldman likened selling AI compute capacity to an all-you-can-eat buffet. Instead of trying to stuff itself on all potential customers, “We’re going to work with part of the buffet only, and we’re going to get comfortable with that, before we attack the rest,” he said.


Indian Uber rival Rapido raises $240M at $3B valuation

Indian ride-hailing company Rapido said on Friday it had raised $240 million in fresh funding at a $3 billion valuation to compete better in the country’s growing but challenging mobility market.

Led by Prosus, the equity round saw participation from existing investors, including WestBridge Capital and Accel. The round was part of a larger $730 million primary and secondary financing. Rapido was previously valued at $2.3 billion during a secondary transaction last year.

Rapido said the fresh capital would be used to increase its footprint in high-growth markets, strengthen its driver network, and invest in technology and platform efficiency.

“We are going deeper into markets where demand exists, but supply remains fragmented,” Rapido co-founder Aravind Sanka said in a statement. “We will sharpen our focus on strengthening supply, building technologies, and expanding our multimodal footprint, with far greater speed and intent.”

The funding round underlines continuing investor interest in India’s mobility sector despite persistent concerns about pricing pressures, regulation, and profitability.

Founded in 2015, Rapido operates in more than 400 cities and has spurred its growth by enabling ride-hailing for lower-cost and more flexible modes of transport such as motorbikes and auto-rickshaws in India’s congested, price-sensitive cities. The Bengaluru-based startup has been expanding to smaller towns, too.

The funding comes in the wake of Uber CEO Dara Khosrowshahi’s visit to India, where the ride-hailing giant this week unveiled plans to expand its engineering and infrastructure operations via two new technology campuses and a local data center partnership. Uber earlier this year infused $330 million into its India subsidiary as it sought to strengthen its presence amid growing competition from local rivals like Ola, Rapido, and Namma Yatri.

Khosrowshahi said last year that Rapido had overtaken Ola as Uber’s biggest competitor in the country.

India is currently one of the world’s most challenging ride-hailing markets because of intense price competition, supply issues, high driver incentive costs, and evolving local regulations. Nevertheless, Rapido has rapidly expanded its market share, even entering the food delivery business through its subsidiary Ownly last year.


Osaurus brings both local and cloud AI models to your Mac

As AI models increasingly become commoditized, startups are racing to build the software layer that sits on top of them. One interesting entrant into this space is Osaurus, an open source, Apple-only LLM server that lets users move between different AI models, running either locally or in the cloud, while keeping their files and tools on their own hardware.

Osaurus evolved out of the idea for a desktop AI companion, Dinoki, which Osaurus co-founder Terence Pae described as a sort of “AI-powered Clippy.” Dinoki’s customers had asked him why they should buy the app if they still had to pay for tokens — the usage units AI companies charge for processing prompts and generating responses.

That got Pae thinking more deeply about running AI locally.

“That’s how Osaurus started,” Pae, previously a software engineer at Tesla and Netflix, told TechCrunch over a call. The idea, he explained, was to try to run an AI assistant locally. “You can do pretty much everything on your Mac locally, like browsing your files, accessing your browser, accessing your system configurations. I figured this would be a great way to position Osaurus as a personal AI for individuals.”

Pae began building the tool in public as an open source project, adding features and fixing bugs along the way.


Today, Osaurus can flexibly connect with locally hosted AI models or cloud providers like OpenAI and Anthropic. Users can freely choose which AI models they’re using and keep other aspects of the AI experience on their own hardware, like the models’ own memory, or their files and tools.

Given that different AI models have different strengths, the advantage of this system is that users can switch to the AI model that best fits their needs.
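To make the idea concrete, here is a minimal sketch of what switching models through a single interface can look like. It assumes Osaurus exposes an OpenAI-compatible endpoint on localhost, which is common for local LLM servers but not confirmed by the article; the URL, port, and model names below are placeholders, not documented Osaurus values.

```python
# Hypothetical sketch: one code path, two backends (local vs. cloud).
# The base_url, port, and model names are illustrative assumptions.
from openai import OpenAI

local = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed-locally")
cloud = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(client: OpenAI, model: str, prompt: str) -> str:
    """Send the same prompt to whichever backend was chosen."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(ask(local, "llama-3-8b-instruct", "Summarize my meeting notes."))  # stays on-device
print(ask(cloud, "gpt-4o-mini", "Summarize my meeting notes."))          # goes to the cloud
```

The point is that only the client and model name change; the files, prompts, and tool wiring around the call stay on the user’s machine.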

Such a structure makes Osaurus what’s called a “harness,” a control layer that connects different AI models, tools, and workflows through a single interface, similar to tools like OpenClaw or Hermes. The difference is that such tools are often aimed at developers who know their way around a terminal. And sometimes, as in the case of OpenClaw, they can introduce security holes of their own.

Osaurus, meanwhile, offers an easy-to-use interface aimed at consumers and addresses security concerns by running things in a hardware-isolated virtual sandbox. This limits the AI to a defined scope, keeping your computer and data safe.


Of course, the practice of running AI models on your machine is still in its early days, given that it’s heavily resource-intensive and hardware-dependent. To run local models, your system will need at least 64GB of RAM. For running larger models, like DeepSeek v4, Pae recommends systems with about 128GB of RAM.
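Those RAM figures track with a common rule of thumb: resident memory scales roughly with parameter count times bytes per weight at a given quantization, plus runtime and KV-cache overhead. Here is a rough sketch of that estimate; the 1.2x overhead factor and the example model sizes are illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope memory estimate for running a model locally.
# The 1.2x overhead factor and example parameter counts are illustrative assumptions.
def estimate_memory_gb(params_billion: float, bytes_per_param: float, overhead: float = 1.2) -> float:
    """Approximate resident memory: weights plus runtime/KV-cache overhead."""
    weights_gb = params_billion * bytes_per_param  # 1B params at 1 byte each is roughly 1 GB
    return weights_gb * overhead

print(f"8B  @ 4-bit : ~{estimate_memory_gb(8, 0.5):.0f} GB")   # comfortably fits smaller Macs
print(f"70B @ 4-bit : ~{estimate_memory_gb(70, 0.5):.0f} GB")  # ~42 GB, wants 64 GB of RAM
print(f"70B @ 16-bit: ~{estimate_memory_gb(70, 2.0):.0f} GB")  # ~168 GB, beyond most desktops
```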

But Pae believes local AI’s needs will come down in time.

“I can see the potential of it, because the intelligence per wattage — which is like the metric for local AI — has been going up significantly. It’s on its own curve of innovation. Last year, local AI could barely finish sentences, but today it can actually run tools, write code, access your browser, and order stuff from Amazon … It’s just getting better and better,” he said.


Osaurus today can run MiniMax M2.5, Gemma 4, Qwen3.6, GPT-OSS, Llama, DeepSeek V4, and other models. It also supports Apple’s on-device foundation models, Liquid AI’s LFM family of on-device models, and in the cloud, it can connect to OpenAI, Anthropic, Gemini, xAI/Grok, Venice AI, OpenRouter, Ollama, and LM Studio.

Because Osaurus is a full MCP (Model Context Protocol) server, you can also give any MCP-compatible client access to your tools. Plus, it ships with over 20 native plug-ins for Mail, Calendar, Vision, macOS Use, XLSX, PPTX, Browser, Music, Git, Filesystem, Search, Fetch, and more.

More recently, Osaurus was updated to include voice capabilities as well.

Since the project went live nearly a year ago, it has been downloaded north of 112,000 times, according to its website. The app competes with other tools that let you run models locally, like Ollama, Msty, LM Studio, and others, but offers a differentiated feature set and presents itself as a more user-friendly option for non-developers, too.

Currently, Osaurus’ founders (who include co-founder Sam Yoo) are participating in the New York-based startup accelerator Alliance. They’re also thinking about next steps, which could see Osaurus being offered to businesses, like those in the legal space or in healthcare, where running local LLMs could address privacy concerns.

As the power of local AI models grows, the team believes it could lower the demand for AI data centers.

“We’re seeing this explosive growth in the AI space where [cloud AI providers] have to scale up using data centers and infrastructure, but we feel like people haven’t really seen the value of the local AI yet,” Pae said. “Instead of relying on the cloud, they can actually deploy a Mac Studio on-prem, and it should use substantially less power. You still have the capabilities of the cloud, but you will not be dependent on a data center to be able to run that AI,” he added.


Meridian Ventures launched a $35M fund with a focus on MBA-deferred founders

Meridian Ventures was born out of a shared experience: deferred MBAs. Now, founders Devon Gethers and Karlton Haney have raised a $35 million fund to back pre-seed and seed-stage companies started by people like them.

Gethers, 29, told TechCrunch the idea for a firm arose after he met Haney in Harvard’s MBA deferred admission program in 2020.

Gethers grew up in poverty in Washington State, studied behavioral science and finance at the University of Utah, then moved into private equity before launching a company of his own (which he later exited). Haney, meanwhile, grew up on a farm in Arkansas, raising chickens, birds, and “anything that flew,” Gethers said about his business partner. 

Haney, 28, went on to study industrial engineering at the University of Arkansas and worked as an investor at the Stephens Group, a family office. The two came together in 2023 with the idea of launching a firm that backed people with MBAs, with a tilt toward those who had deferred.

“Our thesis is going against a bit of the grain, the rhetoric you hear in Silicon Valley that MBAs don’t make good founders,” Gethers said, referring to the belief that an MBA prepares students for corporate culture, not the flexible, free-wheeling world of Silicon Valley.

To prove their thesis, Gethers and Haney went out and cold-called prospective limited partners and knocked on doors until they raised $2.5 million as a proof-of-concept fund to back 45 companies. 

The two headed off to Harvard Business School in summer 2023 and, about a year in, decided to try to raise their first institutional fund. The funding environment was tough, but the pair ended up raising an oversubscribed $35 million fund from LPs, including publicly traded banks, family offices, and Fortune 500 executives, Gethers said. They graduated from Harvard Business School in 2025.

This new fund will back founders building enterprise technology in the United States. Meridian is sector-agnostic, Gethers said, noting that the firm has already invested in companies in fintech, logistics, healthcare, and, of course, AI. The average check size will be $500,000 for pre-seed and $750,000 for seed, and the firm hopes to deploy the capital over the next three years.

“We saw an expanding gap between ambitious founders building frontier technologies and the capital required to help carry those ambitions forward,” Gethers said. “With this $35 million fund, our goal is to seal that gap.”

This piece was updated to clarify that the firm also backs those who have not deferred.
