Avalanche thinks the fusion power industry should think smaller
Nuclear fusion conjures images of massive reactors or banks of dozens of large lasers. Avalanche co-founder and CEO Robin Langtry thinks smaller is better.
For the last several years, Langtry and his colleagues at Avalanche have been working on what’s essentially a desktop version of nuclear fusion. “We’re using the small size to learn quickly and iterate quickly,” Langtry told TechCrunch.
Fusion power promises to supply the world with large amounts of clean heat and electricity, if researchers and engineers can solve some vexing challenges. At its core, fusion power seeks to recreate the process that powers the Sun. To do that, fusion startups must figure out how to heat and compress plasma for long enough that atoms inside the mix fuse, releasing energy in the process.
Fusion is a famously unforgiving industry. The physics is challenging, the materials science is cutting edge, and the power requirements can be gargantuan. Parts need to be machined with precision, and the scale is usually so large as to preclude rapid-fire experimentation.
Some companies, like Commonwealth Fusion Systems (CFS), are using large magnets to contain the plasma in a doughnut-like tokamak; others are compressing fuel pellets by shooting them with powerful lasers. Avalanche, though, uses electric current at extremely high voltages to draw plasma particles into an orbit around an electrode. (It also uses some magnets to keep things orderly, though they’re not nearly as powerful as a tokamak’s.) As the orbit tightens and the plasma speeds up, the particles begin to smash into each other and fuse.
The approach has won over some investors. Avalanche recently added another $29 million in an investment round led by R.A. Capital Management with participation from 8090 Ventures, Congruent Ventures, Founders Fund, Lowercarbon Capital, Overlay Capital, and Toyota Ventures. To date, the company has raised $80 million from investors, a relatively small amount in the fusion world. Other companies have raised several hundred to a few billion dollars.
Space-based inspiration
Langtry’s time at the Jeff Bezos-backed space tech company Blue Origin influenced how Avalanche is tackling the problem.
“We’ve figured out that using this sort of SpaceX ‘new space’ approach means you can iterate really quickly, you can learn really quickly, and you can solve some of these challenges,” said Langtry, who worked with co-founder Brian Riordan at Blue Origin.
Going smaller allowed Avalanche to speed up. The company has been testing changes to its devices “sometimes twice a week,” something that would be challenging and costly with a large device.
Currently, Avalanche’s reactor is only nine centimeters in diameter, though Langtry said a new version will grow to 25 centimeters and is expected to produce about 1 megawatt. That, he said, “is going to give us a significant bump in confinement time, and that’s how we’re actually going to get plasmas that have a chance of being Q>1.” (In fusion, Q refers to the ratio of power out to power in. When it’s greater than one, the fusion device is said to be past the breakeven point.)
Those experiments will be carried out at Avalanche’s FusionWERX, a commercial testing facility the company also rents out to competitors. By 2027, the site will be licensed to handle tritium, an isotope of hydrogen that’s used as fuel and is crucial to many fusion startups’ plans for producing power for the grid.
Langtry wouldn’t commit to a date when he hopes Avalanche will be able to generate more power than its fusion devices consume, a key milestone in the industry. But he thinks the company is on a similar timeline as competitors like CFS and the Sam Altman-backed Helion. “I think there’s going to be a lot of really exciting things happening in fusion in 2027 to 2029,” he said.
An exclusive tour of Amazon’s Trainium lab, the chip that’s won over Anthropic, OpenAI, even Apple
Shortly after Amazon CEO Andy Jassy announced AWS’s groundbreaking $50 billion investment deal with OpenAI, Amazon invited me on a private tour of the chip development lab at the heart of the deal, at (mostly*) its own expense.
Industry experts are watching Amazon’s Trainium chip, created at that facility, for its implications for lower-cost AI inference and, potentially, a dent in Nvidia’s near monopoly.
Curious, I agreed to go.
My tour guides for the day were the lab’s director, Kristopher King (pictured below right) and director of engineering Mark Carroll (below left), as well as the team’s PR person who arranged the visit, Doron Aronson (pictured with yours truly later in the story).

AWS has been Anthropic’s major cloud platform since the AI lab’s early days — a relationship significant enough to survive Anthropic later adding Microsoft as a cloud partner as well, and Amazon’s growing partnership with OpenAI.
The OpenAI deal makes AWS the exclusive provider of the model maker’s new AI agent builder, Frontier, which could become an important part of OpenAI’s business if agents become as big as Silicon Valley thinks they will. We’ll see if that exclusivity stands exactly as announced. The Financial Times reported this week that Microsoft may believe OpenAI’s deal with Amazon violates Microsoft’s own agreement with OpenAI, namely the terms giving Redmond access to all of OpenAI’s models and tech.
What makes AWS so appealing to OpenAI? As part of this deal, the cloud giant has agreed to supply OpenAI with 2 gigawatts of Trainium computing capacity. This is a giant commitment, given that Anthropic and Amazon’s own Bedrock service are already consuming Trainium chips faster than Amazon can produce them.
There are 1.4 million Trainium chips deployed across all three generations, and Anthropic’s Claude runs on more than 1 million of the Trainium2 chips, the company said.
It’s worth noting that while Trainium was originally geared toward faster, cheaper model training (a bigger priority a couple of years ago), it’s now tuned and used for inference as well. Inference — the process of actually running an AI model to generate responses — is currently the biggest performance bottleneck in the industry.
Case in point: Trainium2 handles the majority of the inference traffic on Amazon’s Bedrock service, which Amazon’s many enterprise customers use to build AI applications, and which lets those apps call multiple models.
“Our customer base is just expanding as fast as we can get capacity out there,” King said. “Bedrock could be as big as EC2 one day,” he added, referring to AWS’s behemoth compute cloud service.
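For a sense of what that multi-model flexibility looks like from a developer’s point of view, here is a minimal sketch using Bedrock’s Converse API through boto3. The model IDs, region, and prompt are my own illustrative choices, not details from the article.

```python
import boto3

# Bedrock's Converse API exposes one request shape across model families,
# so switching models is mostly a matter of changing the model ID.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask(model_id: str, question: str) -> str:
    response = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": question}]}],
        inferenceConfig={"maxTokens": 256},
    )
    return response["output"]["message"]["content"][0]["text"]

# The same application code can target different hosted models.
for model_id in (
    "anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative model IDs
    "amazon.nova-lite-v1:0",
):
    print(ask(model_id, "In one sentence, what is an inference accelerator?"))
```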

Trainium vs. Nvidia
Beyond offering an alternative to Nvidia’s backlogged, hard-to-acquire GPUs, Amazon says its new chips, running on its new specialty Trn3 UltraServers, cost up to 50% less to run than classic cloud servers for comparable performance.
Along with Trainium3, released in December, this AWS team also built new Neuron switches, and Carroll says that combo is transformative.
“What that gives us is something huge,” Carroll said. The switches allow every Trainium3 chip to talk to every other chip in a mesh configuration, reducing latency. “That’s why Trainium3 is breaking all kinds of records,” particularly in “price per power,” he said.
When trillions of tokens a day are involved, such improvements add up.
In fact, Amazon’s chip team was lauded by Apple in 2024. In a rare moment of openness for the secretive company, Apple’s director of AI publicly described how Apple used another of the team’s chips — Graviton, a low-power, ARM-based server CPU and the first breakout chip this team designed. Apple also lauded Inferentia — a chip specifically designed for inference — and gave a nod to Trainium, which was new at the time.
These chips represent the classic Amazon playbook: See what people want to buy, then build an in-house alternative that competes on price.
The catch for chips, historically, has been switching costs. Applications written for Nvidia’s chips must be re-architected to work with others — a time-consuming process that discourages developers from switching.
But the AWS chip team proudly told me that Trainium now supports PyTorch, a popular open source framework for building AI models. That includes many of the ones hosted on Hugging Face, a vast library where developers share open source models.
The transition, Carroll told me, requires “basically a one-line change, and then recompile, and then run on Trainium.” In other words, Amazon is attempting to chip away at Nvidia’s market dominance wherever possible.
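For a sense of what that kind of change typically looks like, here is a minimal sketch of the PyTorch/XLA route that AWS’s Neuron SDK builds on for Trainium instances. The toy model and tensor shapes are my own, and the exact setup on real Trn hardware may differ from the one-liner Carroll described.

```python
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm  # installed alongside AWS's torch-neuronx package

# Stand-in model; in practice this would be an existing PyTorch or Hugging Face model.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

# The "one-line change": target the XLA device, which maps to NeuronCores
# on a Trainium (Trn) instance with the Neuron SDK installed.
device = xm.xla_device()

model = model.to(device)
inputs = torch.randn(8, 512).to(device)

outputs = model(inputs)  # lazily traced, then compiled for the accelerator
xm.mark_step()           # flush the pending XLA graph so it actually executes
print(outputs.shape)
```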
AWS also announced a partnership with Cerebras Systems this month, integrating that company’s inference chip on servers running Trainium for what Amazon promises will be superpowered, low-latency AI performance.
But Amazon’s ambitions go beyond the chips themselves. It also designs the server that hosts the chips. Besides the networking components, this team has designed “Nitro,” a hardware-software combo that provides virtualization tech (which allows many instances of software to run separately on the same server); new state-of-the-art liquid cooling technology; and the server sleds (pictured below) that host this gear.
All of that is to control cost and performance.

Working 24/7 on the “bring-up”
Amazon’s custom chip-designing unit was born when the cloud giant bought Israeli chip designer Annapurna Labs in January 2015 for about $350 million. So this team has now had more than 10 years designing chips for AWS. The unit has retained its Annapurna roots and name — its logo is everywhere in the office.
This chip lab is located in a shiny, chrome-windowed building in Austin’s upscale “The Domain” district, a walkable area filled with shops and restaurants that’s sometimes called Austin’s Silicon Valley.
The offices have your classic tech corporate vibe: desks in cubicles, gathering spots, and conference rooms. But tucked away at the back of a high floor in the building is the actual lab, with sweeping views of the city.
The shelving-filled lab, about the size of two large conference rooms, is a noisy industrial space thanks to the fans on the equipment. It looks like a cross between a high school shop class and a Hollywood set for a high-end lab, except the engineers are dressed in jeans, not white lab coats.


Note that this is not where the chips are manufactured, so no white hazmat suits were necessary. The Trainium3 is a state-of-the-art 3-nanometer chip, produced by TSMC, arguably the leader in 3-nanometer manufacturing, with other chips produced by Marvell.
But this is the room where the magic of the “bring-up” occurs.
“A silicon bring-up is when you get the chip for the first time, and it’s like a big overnight party. You stay here, like a lock-in,” King explains. After 18 months of work, the chip is activated for the first time to verify it works as designed. The team even filmed some of the Trainium3 bring-up and posted it on YouTube.
Spoiler alert: It’s never problem-free.
For Trainium3, the prototype chip was originally air-cooled, like previous versions. The current chip is now liquid-cooled, which offers energy advantages and was quite an engineering feat.
During the bring-up, the dimensions for how the chip attached to the air-cooling heat sink were off, so the chip couldn’t be activated.
Unfazed, the team “immediately got a grinder and just started grinding off the metal,” King said. Because they didn’t want the noise disrupting the bring-up pizza party atmosphere, they snuck off and did the grinding in a conference room.
Staying up all night and solving problems “is what silicon bring-up is all about,” King said.
The lab even has a welding station, where hardware lab engineer and master welder Isaac Guevara demonstrated welding tiny integrated circuit components through a microscope. This is such insanely difficult work that senior leader Carroll openly admitted he couldn’t do it, to the guffaws of Guevara and the rest of the engineers in the room.

The lab also contains both custom-made and commercial tools for testing and analyzing issues with chips. Here’s signal engineer Arvind Srinivasan demonstrating how the lab tests each tiny component on the chip:

Sleds are the star of the lab
But the star of the lab is an entire row showcasing each generation of the “sleds” the team designed.

Sleds are the trays that house the Trainium AI chips, Graviton CPU chips, and supporting boards and components. Stack them together on a rack with the networking component, also custom-designed by this team, and you get the systems at the heart of the success of Anthropic’s Claude.
Here’s the sled that was shown off during the AWS re:Invent conference in December:

Proven by Anthropic and OpenAI
I expected my guides to crow about the OpenAI deal during the tour. But they didn’t.
The reticence could have been related to the aforementioned potential legal haze that might hang over the deal. But the sense I got was that these boots-on-the-ground engineers (who are currently designing the next version, Trainium4) haven’t had much chance to work with OpenAI yet. Their day-to-day work has so far been focused on Anthropic’s and Amazon’s needs.
Currently, the biggest chunk of Trainium2 chips is deployed in Project Rainier — one of the world’s largest AI compute clusters — which went live in late 2025 with 500,000 chips. It’s used by Anthropic.
But there was a wall monitor in the main office displaying a quote about how OpenAI will be using Trainium. The pride was there, if subtle.
In addition to this lab, the team also has its own private data center, a short drive away, for quality and testing purposes. Because it doesn’t run customer workloads, it’s housed at a co-location site rather than an AWS data center.
Security is tight: There are strict protocols to enter the building and to access Amazon’s area within.
The data center’s cooling system is so loud that earplugs are mandatory, and the air is thick with the acrid smell of heated metal. It’s not a pleasant place for the average person to hang out.

At this data center, there are rows and rows of servers filled with sleds that integrate all of Amazon’s newest custom chips: Graviton CPU, liquid-cooled Trainium3, Amazon Nitro, all happily computing away. The liquid runs on a closed system, meaning it is reused, which should also help reduce the environmental impact, the engineers said.
Here’s what a current Trn3 UltraServer looks like: Multiple sleds are on top and bottom, with the Neuron switches in the middle. Hardware development engineer David Martinez-Darrow is seen here performing maintenance on a sled:

While attention on the team has always been high, the scrutiny has really ratcheted up as of late.
Amazon CEO Andy Jassy keeps a close eye on this lab, publicly bragging about its products like a proud dad. In December, he said Trainium was already a multibillion-dollar business for AWS and called it one piece of AWS tech he’s most excited about. He also gave the chip a shout-out when announcing the OpenAI agreement.
The team feels the pressure, too. Engineers will work 24/7 for three to four weeks around each bring-up event to fix any issues so the chips can be mass-produced and put into data centers.
“It’s very important that we get as fast as possible to prove that it’s actually going to work,” Carroll said. “So far, we’ve been doing really well.”
*Disclosure: Amazon provided airfare and covered the cost of one night at a local hotel. Honoring its Leadership Principle of Frugality, this was a back-of-the-plane middle seat and a modest room. TechCrunch picked up the other associated travel costs like Ubers and luggage fees. (Yes, I checked a bag for an overnight trip. I’m high maintenance that way.)
TechCrunch Mobility: Uber everywhere, all at once
Welcome back to TechCrunch Mobility, your central hub for news and insights on the future of transportation. To get this in your inbox, sign up here for free — just click TechCrunch Mobility!
If you haven’t noticed, Uber is suddenly everywhere, at least when it comes to autonomous vehicles. The company sold off Uber ATG, its in-house autonomous vehicle development unit, back in 2020. Uber shed a number of its moonshots — although it maintained an equity stake in all of them — so it could focus on its core businesses of delivery and ride-hailing.
But Uber never gave up entirely on AVs. It’s spent the past two years locking up partnerships with dozens of autonomous vehicle technology companies across delivery, drones, trucking, and robotaxis. It has taken a global approach, too, making agreements with Chinese companies to launch robotaxis in Europe and the Middle East, as well as with startups like U.K.-based Wayve.
And now there is another one with Rivian. The TL;DR of the deal is that Uber will make an initial $300 million investment in Rivian and will buy 10,000 fully autonomous R2 robotaxis ahead of a planned rollout in San Francisco and Miami in 2028. Uber has the option to buy up to 40,000 more starting in 2030. This fleet will be exclusively available on Uber’s network.
Here’s how I am thinking about this deal. While the total deal could be as high as $1.25 billion, Uber’s initial outlay is relatively small, and the risk is weighted heavily toward Rivian. It’s also the only deal Uber has made in which its partner is both the developer of the self-driving system and the vehicle manufacturer.
Rivian hasn’t started producing the R2 SUV yet, nor has it tested and deployed a self-driving system designed for robotaxis. To raise the hurdle even higher, the robotaxi is supposed to be built in Rivian’s Georgia factory, which is still under construction.
And the EV maker has already made at least one sacrifice in hopes of pulling it off. Rivian said it no longer expects to meet its profitability goal in 2027 because of how much money it is spending on its autonomy efforts.
In our newsletter, we had a poll asking, Are the risks too high for Rivian? Sign up here to get Mobility in your inbox and let your voice be heard in our polls!
A little bird

Speaking of Uber, a little bird hinted that the ride-hailing company might have been in talks with Rivian for its robotaxi deal for quite a long time. One person directly familiar with both companies told me a deal like this wouldn’t happen overnight. After I asked for more specifics, I got a question in return: “Does RJ strike you as someone who has a strategic horizon that short?” Touché!
Got a tip for us? Contact Kirsten Korosec at kirsten.korosec@techcrunch.com or via Signal at kkorosec.07, or email Sean O’Kane at sean.okane@techcrunch.com.
Deals!

Like Uber, Nvidia is everywhere. Or at least wants to be. The company has made numerous investments — either direct cash injections or in-kind chip deals — in autonomous vehicle technology companies. And it’s also locking up partnerships with automakers — as we saw this week during its GTC conference — in a bid to sell its autonomous vehicle development platform called Nvidia Drive Hyperion.
Nvidia CEO Jensen Huang announced onstage deals — either new or expanded — with BYD, Geely, Hyundai, and Nissan for its AV development platform. GM, Mercedes-Benz, and Toyota have already signed deals with Nvidia to use the platform.
Nvidia has been making deals with automakers for years, but the pace and specificity of these AV deals are worth noting.
“The ChatGPT moment of self-driving cars has arrived. We now know we could successfully autonomously drive cars,” Huang said during his GTC keynote, noting that altogether the four automakers build 18 million cars each year.
Other deals that got my attention …
Advanced Navigation, an Australian startup developing navigation and autonomous systems, raised $110 million in a Series C funding round led by Airtree Ventures, with strategic participation from Quadrant Private Equity and the National Reconstruction Fund Corporation (NRFC).
Arc Boat Company, the Los Angeles electric boat startup, raised $50 million in a Series C funding round from Eclipse, a16z, Menlo Ventures, Lowercarbon Capital, Necessary Ventures, and Offline Ventures.
BusRight, the school bus routing and technology startup, raised more than $30 million in a round led by Volition Capital.
Jeff Bezos is reportedly raising $100 billion for a new fund that will focus on buying up companies in major industrial sectors — like automotive and aerospace. The plan is to then modernize these companies using AI models developed by Bezos’ new startup Project Prometheus.
Rivr, a Zurich-based autonomous robotics startup known for its stair-climbing delivery robot, was acquired by Amazon. Terms of the deal weren’t disclosed.
Trevor Milton, the founder of the now-bankrupt electric truck startup Nikola who was pardoned by President Trump, is trying to raise $1 billion for AI-powered planes.
Zenobē Energy has purchased Revolv, a San Francisco-based fleet charging startup, for an undisclosed amount.
Notable reads and other tidbits

A cyberattack on U.S. vehicle breathalyzer company Intoxalock has left drivers across the United States stranded and unable to start their vehicles.
Kodiak has expanded commercial autonomous freight operations to the Dallas-El Paso corridor. This is the company’s second major route and a core part of its network expansion roadmap, according to COO Michael Wiesinger.
The National Highway Traffic Safety Administration upgraded its investigation into the performance of Tesla’s Full Self-Driving (Supervised) software in low-visibility conditions. The probe has now been escalated to an “engineering analysis,” its highest level of scrutiny and a required step before the agency tells a company to issue a recall.
One more thing …

I mentioned in last week’s edition to keep an eye out for my interview with Rivian founder and CEO RJ Scaringe. We covered a lot of ground, and I found his comments about robotics particularly interesting. To summarize, Scaringe thinks companies are approaching industrial robotics all wrong. His new startup, Mind Robotics, is going to do things differently, focusing more on robotic hands and steering clear of building robots that can do backflips.
As Scaringe told me: “I think what’s missed in industrial [robotics] and this is one of the things we really see clearly, is the work happens with the hands. So, the hands are very, very important. Everything else, from a robotic system point of view, is to get the hands to the right place. And so the ability for the robots to do really complex motions, like, let’s say, like a back flip, that actually just means the robot has a lot of unnecessary complexity in it for the vast majority of tasks.” You can read the interview here.
Elon Musk unveils chip manufacturing plans for SpaceX and Tesla
Elon Musk recently outlined ambitious plans for a chip-building collaboration between his companies Tesla and SpaceX.
Bloomberg reports that Musk shared his plans on Saturday night at an event in downtown Austin, Texas, with a photo suggesting that what Musk is calling the “Terafab” facility will be built near Tesla’s Austin headquarters and “gigafactory.”
Musk said he’s pursuing this project because semiconductor manufacturers aren’t making chips quickly enough for his companies’ artificial intelligence and robotics needs: “We either build the Terafab or we don’t have the chips, and we need the chips, so we build the Terafab.”
The goal is to manufacture enough chips each year to support 100 to 200 gigawatts of computing power on Earth, along with a terawatt in space, Musk said. He did not offer a timeline for these plans.
As Bloomberg noted, Musk does not have a background in semiconductor manufacturing, but he does have a history of overpromising on goals and timelines.
