Why the economics of orbital AI are so brutal
In a sense, this whole thing was inevitable. Elon Musk and his coterie have been talking about AI in space for years — mainly in the context of Iain Banks’ science-fiction series about a far-future universe where sentient spaceships roam and control the galaxy.
Now Musk sees an opportunity to realize a version of this vision. His company SpaceX has requested regulatory permission to build solar-powered orbital data centers, distributed across as many as a million satellites, that could shift as much as 100 GW of compute power off the planet. He has reportedly suggested some of his AI satellites will be built on the moon.
“By far the cheapest place to put AI will be space in 36 months or less,” Musk said last week on a podcast hosted by Stripe co-founder John Collison.
He’s not alone. xAI’s head of compute has reportedly bet his counterpart at Anthropic that 1% of global compute will be in orbit by 2028. Google (which has a significant ownership stake in SpaceX) has announced a space AI effort called Project Suncatcher, which will launch prototype vehicles in 2027. Starcloud, a startup that has raised $34 million from backers including Google and Andreessen Horowitz, filed its own plans for an 80,000-satellite constellation last week. Even Jeff Bezos has said this is the future.
But behind the hype, what will it actually take to get data centers into space?
On a first analysis, today’s terrestrial data centers remain cheaper than those in orbit. Andrew McCalip, a space engineer, has built a helpful calculator comparing the two models. His baseline results show that a 1 GW orbital data center might cost $42.4 billion, almost three times its terrestrial equivalent, thanks to the up-front costs of building the satellites and launching them to orbit.
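The snippet below is not McCalip’s calculator, just a toy version of the same comparison. Every line item is an illustrative assumption (the per-kilogram figures echo numbers quoted later in this piece), so it shows why up-front satellite and launch costs dominate the orbital side rather than reproducing the $42.4 billion figure exactly.

```python
# Toy capex comparison for a 1 GW data center on the ground vs. in orbit.
# All line items are illustrative assumptions, not figures from McCalip's model.

GW_TARGET_KW = 1_000_000             # 1 GW of compute power

# Orbital side (assumptions; per-kg figures echo numbers quoted later in the piece)
sat_power_kw = 100                   # usable compute power per satellite
sat_mass_kg = 1_000                  # roughly one ton per satellite
build_cost_per_kg = 1_000            # "almost $1,000 a kilo" for satellites today
launch_cost_per_kg = 3_600           # reusable Falcon 9 ballpark

n_sats = GW_TARGET_KW // sat_power_kw
per_sat_cost = sat_mass_kg * (build_cost_per_kg + launch_cost_per_kg)
orbital_capex = n_sats * per_sat_cost          # excludes chips, ground links, replacement

# Terrestrial side (assumption): an all-in buildout cost per kW of capacity
ground_cost_per_kw = 14_000
ground_capex = GW_TARGET_KW * ground_cost_per_kw

print(f"{n_sats:,} satellites, ~${orbital_capex / 1e9:.0f}B to build and launch")
print(f"ground buildout: ~${ground_capex / 1e9:.0f}B")
print(f"orbital premium: {orbital_capex / ground_capex:.1f}x (before chips and operations)")
```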
Changing that equation, experts say, will require technology development across several fields, massive capital expenditure, and a lot of work on the supply chain for space-grade components. It also depends on costs on the ground rising as resources and supply chains are strained by growing demand.
Designing and launching the satellites
The key driver for any space business model is how much it costs to get anything up there. Musk’s SpaceX is already pushing down on the cost of getting to orbit, but analysts looking at what it will take to make orbital data centers a reality need even lower prices to close their business case. In other words, while AI data centers may seem to be a story about a new business line ahead of the SpaceX IPO, the plan depends on completing the company’s longest-running unfinished project — Starship.
Consider that the reusable Falcon 9 today delivers a cost to orbit of roughly $3,600/kg. Making space data centers viable, per Project Suncatcher’s white paper, will require prices closer to $200/kg, an 18-fold improvement that the paper expects to arrive in the 2030s. At that price, even a satellite comparable to today’s Starlink would deliver energy at a cost competitive with a terrestrial data center.
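That 18-fold gap is easier to feel at the scale of a single spacecraft. The quick arithmetic below uses a hypothetical one-ton satellite, included only for scale:

```python
# Per-kilogram launch prices, today vs. the Suncatcher target, and what the gap
# means for a hypothetical one-ton satellite (the mass is an assumption).
falcon9_per_kg = 3_600    # $/kg, reusable Falcon 9 today
target_per_kg = 200       # $/kg, the white paper's threshold for the 2030s
sat_mass_kg = 1_000

print(f"{falcon9_per_kg / target_per_kg:.0f}x cheaper per kilogram")
print(f"per satellite: ${falcon9_per_kg * sat_mass_kg:,} today vs. "
      f"${target_per_kg * sat_mass_kg:,} at the target price")
```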
The expectation is that SpaceX’s next-generation Starship rocket will deliver those improvements — no other vehicle in development promises equivalent savings. However, that vehicle has yet to become operational or even reach orbit; a third iteration of Starship is expected to make its maiden launch sometime in the months ahead.
Even if Starship is completely successful, however, assumptions that it will immediately deliver lower prices to customers may not pass the smell test. Economists at the consultancy Rational Futures make a compelling case that, as with the Falcon 9, SpaceX will not want to charge much less than its best competitor — otherwise the company is leaving money on the table. If Blue Origin’s New Glenn rocket, for example, retails at $70 million, SpaceX won’t take on Starship missions for external customers at much less than that, which would leave it above the numbers publicly assumed by space data center builders.
“There are not enough rockets to launch a million satellites yet, so we’re pretty far from that,” Matt Gorman, the CEO of Amazon Web Services, said at a recent event. “If you think about the cost of getting a payload in space today, it’s massive. It is just not economical.”
Still, if launch is the bane of all space businesses, the second challenge is production cost.
“We always take for granted, at this point, that Starship’s cost is going to be hundreds of dollars per kilo,” McCalip told TechCrunch. “People are not taking into account the satellites are almost $1,000 a kilo right now.”
Satellite manufacturing costs are the largest chunk of that price tag, but if high-powered satellites can be made at about half the cost of current Starlink satellites, the numbers start to make sense. SpaceX has made great advances in satellite economics while building Starlink, its record-setting communications network, and the company hopes to achieve more through scale. Part of the reasoning behind a million satellites is undoubtedly the cost savings that come from mass production.
Still, the satellites that will be used for these missions must be large enough to satisfy the complex requirements for operating powerful GPUs, including large solar arrays, thermal management systems, and laser-based communications links to receive and deliver data.
A 2025 white paper from Project Suncatcher offers one way to compare terrestrial and space data centers: by the cost of power, the basic input needed to run chips. On the ground, data centers pay roughly $570 to $3,000 per kilowatt-year, depending on local power costs and the efficiency of their systems. SpaceX’s Starlink satellites get their power from on-board solar panels instead, but the cost of acquiring, launching, and maintaining those spacecraft works out to about $14,700 per kilowatt-year. Put simply, satellites and their components will have to get a lot cheaper before they’re cost-competitive with metered power.
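The per-kilowatt-year framing amounts to amortizing a satellite’s build and launch cost over the power it delivers across its working life. The sketch below uses hypothetical inputs (a $750,000 satellite averaging 10 kW) picked to land near the paper’s figure; only the roughly five-year lifetime discussed later in this piece and the dollar figures being compared come from the article.

```python
# Turning a satellite's cost into a price per kilowatt-year: amortize build and
# launch cost over the power it delivers across its working life. The satellite
# cost and power below are hypothetical, picked to land near the paper's figure.

def cost_per_kw_year(capex_usd: float, avg_power_kw: float, lifetime_years: float) -> float:
    """Amortized cost of delivered power, in dollars per kilowatt-year."""
    return capex_usd / (avg_power_kw * lifetime_years)

# Hypothetical: a $750,000 satellite averaging 10 kW of usable power for 5 years.
print(f"orbital:     ${cost_per_kw_year(750_000, 10, 5):,.0f} per kW-year")

# A terrestrial data center buying metered power at $0.08/kWh, for comparison.
print(f"terrestrial: ${0.08 * 24 * 365:,.0f} per kW-year")
```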
The space environment is not fooling around
Orbital data center proponents often say that thermal management is “free” in space, but that’s an oversimplification. Without an atmosphere to carry heat away by convection, a spacecraft can only shed heat by radiating it, which actually makes waste heat harder to get rid of.
“You’re relying on very large radiators to just be able to dissipate that heat into the blackness of space, and so that’s a lot of surface area and mass that you have to manage,” said Mike Safyan, an executive at Planet Labs, which is building prototype satellites for Google’s Project Suncatcher that are expected to launch in 2027. “It is recognized as one of the key challenges, especially long term.”
Besides the vacuum of space, AI satellites will need to deal with cosmic radiation. Cosmic rays degrade chips over time, and they can also cause “bit flip” errors that corrupt data. Chips can be protected with shielding, built from radiation-hardened components, or run redundantly with error checking, but all of these options trade away mass, cost, or performance. Companies are testing anyway: Google used a particle beam to study the effects of radiation on its tensor processing units (chips designed explicitly for machine learning applications), and SpaceX executives have said on social media that the company acquired a particle accelerator for just that purpose.
Another challenge comes from the solar panels themselves. The logic of the project is energy arbitrage: a solar panel in space can generate anywhere from 5x to 8x more energy than the same panel on Earth, and in the right orbit it can be in sunlight for 90% of the day or more. Electricity is the main input for chips, so more energy means cheaper data centers. But even solar panels are more complicated in space.
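A rough way to see where that 5x to 8x figure comes from is to compare capacity factors, the fraction of the year a panel actually produces at its rated power. The factors below are illustrative assumptions, and the extra output from unattenuated sunlight above the atmosphere is ignored for simplicity.

```python
# Why the same panel yields several times more energy in the right orbit:
# no night, no clouds, no weather. Capacity factors are illustrative assumptions,
# and the extra output from unattenuated sunlight above the atmosphere is ignored.

panel_rated_kw = 1.0
hours_per_year = 8_760

ground_capacity_factor = 0.15    # assumed: typical mid-latitude utility solar
orbit_capacity_factor = 0.97     # assumed: near-continuous sunlight, dawn-dusk orbit

ground_kwh = panel_rated_kw * hours_per_year * ground_capacity_factor
orbit_kwh = panel_rated_kw * hours_per_year * orbit_capacity_factor

print(f"ground: {ground_kwh:,.0f} kWh/year")
print(f"orbit:  {orbit_kwh:,.0f} kWh/year (~{orbit_kwh / ground_kwh:.1f}x)")
```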
Traditional space-rated solar panels, built from expensive multi-junction semiconductor materials, are hardy but costly. Solar panels made from silicon are cheap and increasingly prevalent in space — Starlink and Amazon Kuiper use them — but they degrade much faster under space radiation. That will limit the lifetime of AI satellites to around five years, which means they will have to generate a return on investment faster.
Still, some analysts think that’s not such a big deal, based on how quickly new generations of chips arrive on the scene. “After five or six years, the dollars per kilowatt-hour doesn’t produce a return, and that’s because they’re not state of the art,” Philip Johnston, the CEO of Starcloud, told TechCrunch.
Danny Field, an executive at Solestial, a startup building space-rated silicon solar panels, says the industry sees orbital data centers as a key driver of growth. He’s speaking with several companies about potential data center projects, and says “any player who is big enough to dream is at least thinking about it.” As a long-time spacecraft design engineer, however, he doesn’t discount the challenges in these models.
“You can always extrapolate physics out to a bigger size,” Field said. “I’m excited to see how some of these companies get to a point where the economics make sense and the business case closes.”
How do space data centers fit in?
One outstanding question about these data centers: What will we do with them? Are they general purpose, or for inference, or for training? Based on existing use cases, they may not be entirely interchangeable with data centers on the ground.
A key challenge for training new models is operating thousands of GPUs in unison. Most model training is not distributed across sites; it happens inside individual data centers. The hyperscalers are working to change that in order to increase the power of their models, but it still hasn’t been achieved. Training in space would similarly require tight coordination among GPUs spread across multiple satellites.
The team at Google’s Project Suncatcher notes that the company’s terrestrial data centers connect their TPU networks with throughput in the hundreds of gigabits per second. The fastest off-the-shelf inter-satellite comms links today, which use lasers, can only get up to about 100 Gbps.
That led to an intriguing architecture for Suncatcher: flying 81 satellites in close formation so they can use the kind of transceivers relied on by terrestrial data centers. That, of course, presents its own challenges, chief among them the autonomy required to keep each spacecraft on station even when it has to maneuver to avoid orbital debris or another spacecraft.
Still, the Google study offers a caveat: The work of inference can tolerate the orbital radiation environment, but more research is needed to understand the potential impact of bit-flips and other errors on training workloads.
Inference tasks don’t have the same need for thousands of GPUs working in unison. The job can be done with dozens of GPUs, perhaps on a single satellite, an architecture that represents a kind of minimum viable product and the likely starting point for the orbital data center business.
“Training is not the ideal thing to do in space,” Johnston said. “I think almost all inference workloads will be done in space,” imagining everything from customer service voice agents to ChatGPT queries being computed in orbit. He says his company’s first AI satellite is already earning revenue performing inference in orbit.
While details are scarce even in the company’s FCC filing, SpaceX’s orbital data center constellation seems to anticipate about 100 kW of compute power per ton, roughly twice the power of current Starlink satellites. The spacecraft will operate in connection with each other and use the Starlink network to share information; the filing claims that Starlink’s laser links can achieve petabit-level throughput.
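Those figures can be sanity-checked against the constellation described at the top of this piece: 100 GW spread across as many as a million satellites, at roughly 100 kW per ton, with the $200/kg launch target cited earlier. The sketch below is only that arithmetic; it deliberately says nothing about chip, satellite, or operations costs.

```python
# Sanity check on the filing's figures using only numbers from this piece:
# 100 GW across up to a million satellites, ~100 kW of compute per ton, and the
# $200/kg launch target cited earlier. Chip, satellite, and operations costs
# are left out.

total_compute_kw = 100_000_000     # 100 GW
n_satellites = 1_000_000
kw_per_ton = 100
launch_price_per_kg = 200          # optimistic Starship-era target

kw_per_sat = total_compute_kw / n_satellites            # 100 kW each
mass_per_sat_kg = kw_per_sat / kw_per_ton * 1_000       # ~1,000 kg each
fleet_mass_kg = n_satellites * mass_per_sat_kg
launch_bill = fleet_mass_kg * launch_price_per_kg

print(f"{kw_per_sat:.0f} kW and ~{mass_per_sat_kg:,.0f} kg per satellite")
print(f"fleet mass ~{fleet_mass_kg / 1e9:.1f} billion kg; "
      f"launch alone ~${launch_bill / 1e9:,.0f} billion at $200/kg")
```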
For SpaceX, the recent acquisition of xAI (which is building its own terrestrial data centers) means the company can stake out positions in both terrestrial and orbital data centers and see which supply chain adapts faster.
That’s the benefit of having fungible floating point operations per second — if you can make it work. “A FLOP is a FLOP, it doesn’t matter where it lives,” McCalip said. “[SpaceX] can just scale until [it] hits permitting or capex bottlenecks on the ground, and then fall back to [their] space deployments.”
Got a sensitive tip or confidential documents about SpaceX? For secure communication, you can contact Tim via Signal at tim_fernholz.21.
Twilio co-founder’s fusion power startup raises $450M from Bessemer and Alphabet’s GV
Inertia Enterprises has raised $450 million to build one of the world’s most powerful lasers, which it hopes will serve as the foundation of a grid-scale power plant the fusion startup intends to start constructing in 2030. Inertia is building on technology developed at the Lawrence Livermore National Laboratory’s National Ignition Facility (NIF), the site of the world’s only controlled fusion reactions to reach scientific breakeven, in which the fusion reaction releases more energy than the laser energy used to trigger it.
The Series A was led by Bessemer Venture Partners with participation from GV, Modern Capital, Threshold Ventures, and others. Inertia’s co-founders include Jeff Lawson, who co-founded Twilio and serves as its CEO; Annie Kritcher, who led the successful experiments at NIF; and Mike Dunne, a Stanford professor who helped Lawrence Livermore develop a power plant design based on NIF. Kritcher has remained in her position at Lawrence Livermore.
NIF’s breakeven experiments have been a key milestone on the road to widespread fusion power. However, considerable progress needs to be made before a fusion power plant can deliver electricity to the grid. For Inertia, that means building a laser capable of delivering 10 kilojoules 10 times per second.
The startup’s reactor relies on a form of fusion known as inertial confinement. In Inertia’s flavor of inertial confinement, lasers bombard a fuel target, compressing the fuel until atoms inside fuse and release energy. The technique is based on NIF’s designs, in which laser light is converted into X-rays inside the target. The X-rays are what ultimately heat and compress the fuel pellet.
Each of Inertia’s power plants will require 1,000 of its lasers bombarding 4.5 mm targets that cost less than $1 each to mass produce. By contrast, the NIF’s system uses 192 lasers to fire on painstakingly crafted targets that take dozens of hours to make. Inertia is betting that by using the same basic principles as NIF and applying a more commercial mindset, it can bring the costs down dramatically.
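Taken at face value, those figures imply a punishing operating tempo, which is why the sub-$1 target matters. The arithmetic below assumes the 10 kJ, 10-shots-per-second spec is per laser, and that all 1,000 lasers fire together on a single target each shot, as at NIF; both are readings of the description above, not confirmed specifications.

```python
# What Inertia's stated numbers imply about operating tempo. Assumptions: the
# 10 kJ / 10-shots-per-second figure is per laser, and all 1,000 lasers fire
# together on a single target each shot, as at NIF. Neither is confirmed here.

pulse_energy_kj = 10
shots_per_second = 10
lasers_per_plant = 1_000
target_cost_usd = 1.0

avg_power_per_laser_kw = pulse_energy_kj * shots_per_second          # 100 kW
total_laser_power_mw = avg_power_per_laser_kw * lasers_per_plant / 1_000

targets_per_day = shots_per_second * 60 * 60 * 24
daily_target_bill = targets_per_day * target_cost_usd

print(f"{avg_power_per_laser_kw:.0f} kW average per laser, "
      f"{total_laser_power_mw:.0f} MW of laser light per plant")
print(f"{targets_per_day:,} targets per day, ~${daily_target_bill:,.0f}/day at $1 each")
```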
Inertia’s new round is the latest in a string of funding announcements from fusion startups in recent months. With this round and others, fusion startups have attracted more than $10 billion in investments. And at least a dozen companies have raised more than $100 million.
Last week, Avalanche said it had raised $29 million to advance its desktop-sized fusion reactor. Earlier this year, Type One Energy told TechCrunch it had attracted $87 million in investment in advance of a $250 million Series B that it’s currently raising. Last summer, Commonwealth Fusion Systems raised $863 million from dozens of investors, including Google, Nvidia, and Breakthrough Energy Ventures.
Two fusion companies recently announced they were going public via reverse mergers. General Fusion, which had previously struggled to raise money from private investors, said in January it would merge with the acquisition company Spring Valley III in a deal that values the combined company at $1 billion. Last month, TAE Technologies announced it would merge with Donald Trump’s social media company, Trump Media & Technology Group, in an all-stock transaction that values the combined company at $6 billion.
UpScrolled’s social network is struggling to moderate hate speech after fast growth
UpScrolled, a social network that caught fire after TikTok’s ownership change in the U.S., is facing a serious moderation problem. After the app grew to more than 2.5 million users in January, users reported that it was not taking action against usernames and hashtags containing racial slurs and had failed to properly moderate other harmful content.
After receiving tips from UpScrolled users, TechCrunch confirmed that a wide range of racial slurs and hate speech is being used in usernames on the app. Some usernames feature a slur on its own, a slur combined with other words, or multiple slurs strung together; others contain hate speech, like “Glory to Hitler.”
After reporting these slurs to UpScrolled’s public email address, we received a response from that address saying the company is “actively reviewing and removing inappropriate content” and working to expand its moderation capacity. The email advised us not to engage with bad-faith actors while the situation is resolved.
Days after TechCrunch reported this activity, the accounts with slurs in their usernames, which were flagged to UpScrolled via screenshots, remained online.
Slurs and hate speech also appear elsewhere in the app, TechCrunch found, including in hashtags and in the text attached to photo and video posts. Other harmful content was available as well, including text posts with racial slurs and hate speech, and photo and video content glorifying Hitler.
TechCrunch wasn’t alone in identifying the problem; the ADL also published a blog post this month noting that UpScrolled was becoming home to antisemitic and extremist content, as well as to designated foreign terrorist organizations such as Hamas.
UpScrolled, which was founded in 2025, claims on its website that the platform offers every voice “equal power.” The app has seen more than 4 million downloads on iOS and Android since June 2025, according to market intelligence provider Appfigures, a figure even higher than the startup’s self-reported number last month.
But while UpScrolled’s FAQ explains the app doesn’t “censor opinions,” it does indicate that its policy is to restrict content that involves “illegal activity, hate speech, bullying, harassment, explicit nudity, unlicensed copyrighted material, or anything intended to cause harm.”
That guidance is similar to most modern-day social media platforms. It’s clear, however, that the company is struggling to enforce its rules.
It’s a battle that social networks often face, especially those that receive a large influx of new users in a short period. Bluesky, for instance, faced issues with slurs in account usernames in July 2023, which led users to threaten to leave the site.
After UpScrolled’s initial reply to our inquiry, TechCrunch also received a response from the company’s press account on Tuesday, which directed us to a new video from UpScrolled founder Issam Hijazi addressing the content moderation issues.
In the video, he confirmed that users have been uploading “harmful content” that goes against UpScrolled’s terms of service and the company’s beliefs.
“We are offering everyone the freedom to express and share their opinions in a healthy and respectful digital environment,” Hijazi said. To create that environment, he said the company is “rapidly expanding our content moderation team, and we are upgrading our technology infrastructure so we can catch and remove harmful content more effectively.”
How AI changes the math for startups, according to a Microsoft VP
For 24 years, Microsoft’s Amanda Silver has been working to help developers — and in the last few years, that’s meant building tools for AI. After a long stretch on GitHub Copilot, Silver is now a corporate vice president at Microsoft’s CoreAI division, where she works on tools for deploying apps and agentic systems within enterprises.
Her work is focused on the Foundry system inside Azure, which is designed as a unified AI portal for enterprises, giving her a close view of how companies are actually using these systems and where deployments end up falling short.
I spoke with Silver about the current capabilities of enterprise agents, and why she believes this is the biggest opportunity for startups since the public cloud.
This interview was edited for length and clarity.
So, your work focuses on Microsoft products for outside developers — often startups that aren’t otherwise focused on AI. How do you see AI impacting those companies?
I see this as being a watershed moment for startups as profound as the move to the public cloud. If you think about it, the cloud had a huge impact for startups because it meant that they no longer needed to have the real estate space to host their racks, and they didn’t need to spend as much money on the capital infusion of getting the hardware to be hosted in their labs and things like that. Everything became cheaper. Now agentic AI is going to kind of continue to reduce the overall cost of software operations again, because many of the jobs involved in standing up a new venture — whether it’s support people, legal investigations — a lot of it can be done faster and cheaper with AI agents. I think that’s going to lead to more ventures and more startups launching. And then we’re going to see higher-valuation startups with fewer people at the helm. And I think that that’s an exciting world.
What does that look like in practice?
We are certainly seeing multistep agents becoming very broadly used across all different kinds of coding tasks, right? Just as an example, one thing developers have to do to maintain a codebase is stay current with the latest versions of the libraries that it has a dependency on. You might have a dependency on an older version of the .NET runtime or the Java SDK. And we can have these agentic systems reason over your entire codebase and bring it up to date much more easily, with maybe a 70% or 80% reduction in the time it takes. And it really has to be a deployed multistep agent to do that.
Live-site operations is another one — if you think of maintaining a website or a service and something goes wrong, there’s a thud in the night, and somebody has to be on call to get woken up to go respond to the incident. We still do have people on call 24/7, just in case the service goes down. But it used to be a really loathed job because you’d get woken up fairly often for these minor incidents. And we’ve now built an agentic system to successfully diagnose and in many cases fully mitigate issues that come up in these live-site operations so that humans don’t have to be woken up in the middle of the night and groggily go to their terminals and try to diagnose what’s going on. And that also helps us dramatically reduce the average time it takes for an incident to be resolved.
One of the other puzzles of this present moment is that agentic deployments haven’t happened quite as fast as we expected even six months ago. I’m curious why you think that is.
If you think about the people who are building agents and what is preventing them from being successful, in many cases it comes down to not really knowing what the purpose of the agent should be. There’s a culture change that has to happen in how people build these systems. What is the business use case that they are trying to solve for? What are they trying to achieve? You need to be very clear-eyed about what the definition of success is for this agent. And you need to think, what is the data that I’m giving to the agent so that it can reason over how to go accomplish this particular task?
We see those things as the bigger stumbling blocks, more than the general uncertainty of letting agents get deployed. Anybody who goes and looks at these systems sees the return on investment.
You mention the general uncertainty, which I think feels like a big blocker from the outside. Why do you see it as less of a problem in practice?
First of all, I think that it’s going to be very common that agentic systems have human-in-the-loop scenarios. Think about something like a package return. It used to be that you would have a workflow for the return processing that was 90% automated and 10% human intervention, where somebody would have to go look at the package and have to make a judgment call as to how damaged the package was before they would decide to accept the return.
That’s a perfect example where actually now the computer vision models are getting so good that in many cases, we don’t need to have as much human oversight over inspecting the package and making that determination. There will still be some cases that are borderline, where maybe the computer vision is not yet good enough to make a call, and maybe there’s an escalation. It’s kind of like, how often do you need to call in the manager?
There are some things that will always need some kind of human oversight, because they’re such critical operations. Think about incurring a contractual legal obligation, or deploying code into a production codebase that could potentially affect the reliability of your systems. But even then, there’s the question of how far we could get in automating the rest of the process.
