Tech
Runway started by helping filmmakers — now it wants to beat Google at AI
AI video-generation startup Runway doesn’t have the typical Silicon Valley pedigree: no Stanford degrees, no ex-Googlers, no nine-figure seed round that bought it time to ignore revenue. Its three founders — two from Chile, one from Greece — met at NYU’s Tisch School of the Arts and built the company in New York.
Runway also could be, depending on who you ask, one of the most consequential AI companies today. Not because of what it has built, but because of what it is trying to build next.
For the past several years, the AI industry has largely operated on the premise that intelligence lives in language. Large language models like OpenAI’s ChatGPT and Anthropic’s Claude reflect that bet.
Runway, alongside other competitors, is making a different one. Its founders believe the next form of AI intelligence won’t be built from text, but from video and world models that learn how the world works, not just how humans describe it. That distinction sounds academic. Its implications are not.
Runway co-founder and co-CEO Anastasis Germanidis said training models directly on observational data from the world is the next frontier of AI. The companies that get there first, he argues, won’t be the ones that perfected language.
“We’re basically bound by our own understanding of reality,” Germanidis told TechCrunch from Runway’s homey sunlight-filled headquarters near Union Square.
“Language models are trained on the entire internet, on message boards and social media, on textbooks — distilling the existing human knowledge,” Germanidis continued. “But to get beyond that, we need to leverage less biased data.”
Founded in 2018, Runway built its reputation on video-generation models — including its latest Gen-4.5 — and AI tools that let people turn text prompts into editable, cinematic content.
Today, Runway’s technology powers production workflows for filmmakers and ad agencies, and the company has signed deals with major media players like Lionsgate and AMC Networks. Its tools have even been used in films such as “Everything Everywhere All At Once.”
Runway is now valued at $5.3 billion and, according to one of its founders, added $40 million in annual recurring revenue in the second quarter of 2026.
If Runway’s bet that video generation is the path to world models pays off, the result will be felt from Hollywood to drug discovery. If it doesn’t, Runway risks being outpaced by competitors with far deeper pockets — Google chief among them.
Taking the leap
Within the last six months, the startup has put its plan into action and expanded beyond video generation, launching its first world model in December, with plans to launch another this year. (World models are AI systems that simulate environments well enough to predict how they’ll behave.)
Runway isn’t alone in trying to turn physics-aware video models into world models, with near-term use cases in interactive entertainment, gaming, and robotics training. Startups Luma and World Labs are on a similar trajectory, and Google has pointed its Genie world model in the same direction.
Everyone is after some version of the same thing: AI that solves humanity’s hardest problems. That’s far from Runway’s original product, but it’s the result of both emergent capabilities in the technology and founders who were predisposed to follow where it led.
For his part, Germanidis sees world models as scientific infrastructure. The more sensory data and observations you train a single model on, the closer you get to a working digital twin of the universe — one you can run experiments on faster than any lab could. Much of the scientific process is just waiting on results, he points out. If you could compress that waiting, you could compress progress itself.
“If we can build a better scientist than human scientists, we can accelerate progress in how we understand the universe and how we solve problems,” Germanidis said.
The moonshot

Germanidis fell in love with programming as an 11-year-old in Athens and came to the U.S. at 18 to study neuroscience and film. He turned back to computer science, working at several Silicon Valley tech firms before deciding he’d had enough of the culture. Co-CEO Cristóbal Valenzuela, born and raised in Santiago, studied economics as an undergraduate before working in film and then software. Another Santiago native, chief innovation officer Alejandro Matamala Ortiz studied advertising and ran a design firm.
The three met in 2016 while attending NYU’s ITP (Interactive Telecommunications Program), a graduate program that Valenzuela described as an “art school for engineers.”
The co-founders had all aspired to be filmmakers at certain points in their lives, according to Matamala Ortiz. So Runway started with a simple mission: Can we use AI to make everyone a filmmaker?
According to Matamala Ortiz, after releasing their first video-generation model in February 2023 — which is staggeringly unimpressive compared to what Runway is putting out today — that mission evolved into: Can we make everyone a great filmmaker?
That required growing the team to what it is today: 155 employees across offices in New York, London, San Francisco, Seattle, Tel Aviv, and, most recently, Tokyo. “But throughout this process, we learned that these models can understand how the world works, and if you scale them, they can be useful for many other different things,” he added.
Things like robotics, drug discovery, and climate modeling — the kinds of problems that have stumped researchers for decades. Last year, Runway launched a robotics unit that Germanidis says has already resulted in real-world testing and deployments.
Germanidis, like others, sees the field heading toward training a single model on many different modalities — text, video, voice, and other sensors — and thinks the compounding effect is the point.
His own moonshot goal for Runway’s technology, given enough time and resources, is biological world models and anti-aging research.
Whether Runway can carry its video dominance into world models is far from settled, and the competition isn’t waiting around. Runway was among the first to develop AI video generation, but world models are a different race with deep-pocketed and well-respected competitors. Google, former Meta chief AI scientist Yann LeCun, AI’s “godmother” Fei-Fei Li, and a growing field of startups are all chasing the same goal.
Kian Katanforoosh, CEO of AI skills benchmarking company Workera and a lecturer at Stanford, pointed out that no one has yet proven the jump between video intelligence and generalized reasoning via world models, but that doesn’t mean it’s impossible. He said that if Runway wants to turn its world model bet into reality, it will need to continue gathering resources — compute chief among them.
Runway has deals with CoreWeave and Nvidia but wouldn’t confirm whether it has dedicated cluster access — the kind of guaranteed, large-scale compute that training frontier models requires.
“How are you going to build a foundational model without a cluster?” Katanforoosh asked. “I don’t think anybody can do that.”
Runway has raised $860 million to date, including a $315 million round in February from strategic partners like AMD Ventures and Nvidia. That’s roughly in line with its most immediate competitors, Luma AI and World Labs, which have raised $900 million and $1.29 billion, respectively, according to PitchBook.
But Runway is also going up against incumbents like OpenAI, which has raised around $175 billion, according to CEO Sam Altman, and tech behemoth Google, whose parent company Alphabet is worth $4.86 trillion. Google is Runway’s biggest threat. The company’s Veo model competes directly with Runway’s video-generation business, while its Genie world model targets the same longer-term territory Runway is racing toward.
Katanforoosh nodded at OpenAI, which shuttered its video platform Sora in March after burning roughly $1 million per day in compute costs with barely $2.1 million in revenue, according to some estimates. His point: Resources alone don’t guarantee survival. They don’t guarantee it for Runway either.
Katanforoosh isn’t writing Runway off. He pointed to AI audio startup ElevenLabs, which has outperformed OpenAI and Google on their own benchmarks, despite lacking the resources and pedigree of either. Runway, he argues, could follow a similar playbook.
The comparison isn’t lost on Runway’s founders. Valenzuela says the startup’s lack of Bay Area “standardization” gives it an edge. Not only does the team have diversity of thought, he contends, but without Silicon Valley ties it had to be scrappier: the founders lacked the war chest that insulated many of their peers from the need to generate revenue early.
And according to Michelle Kwon, Runway’s chief operating officer, the company isn’t in a rush to raise more funds, even as compute demands increase with scale.
“Their background has led them to be early, to be right more often than not, and to build a culture that moves incredibly quickly,” early investor Michael Dempsey, managing partner at Compound, told TechCrunch.
For Valenzuela, that culture starts with how he sees the world in the first place. He spends whatever free time he has — not much, as a co-CEO and new father — reading books, including the work of the Chilean poet Nicanor Parra, whom he describes as the antithesis of Pablo Neruda: less formal, less academic, holding a view that poetry belongs to the people rather than to rules.
“Rules are just rules they invented,” Valenzuela said. “That’s a driving force of how we do things at Runway. They say Silicon Valley is here and that’s where the startups are. Why? Those are just made-up rules. Scrub them all and start again.”
When you purchase through links in our articles, we may earn a small commission. This doesn’t affect our editorial independence.
Indian Uber rival Rapido raises $240M at $3B valuation
Indian ride-hailing company Rapido said on Friday it had raised $240 million in fresh funding at a $3 billion valuation to compete better in the country’s growing but challenging mobility market.
Led by Prosus, the equity round saw participation from existing investors, including WestBridge Capital and Accel. The round was part of a larger $730 million primary and secondary financing. Rapido was previously valued at $2.3 billion during a secondary transaction last year.
Rapido said the fresh capital would be used to increase its footprint in high-growth markets, strengthen its driver network, and invest in technology and platform efficiency.
“We are going deeper into markets where demand exists, but supply remains fragmented,” Rapido co-founder Aravind Sanka said in a statement. “We will sharpen our focus on strengthening supply, building technologies, and expanding our multimodal footprint, with far greater speed and intent.”
The funding round underlines continuing investor interest in India’s mobility sector despite persistent concerns about pricing pressures, regulation, and profitability.
Founded in 2015, Rapido operates in more than 400 cities and has spurred its growth by enabling ride-hailing for lower-cost and more flexible modes of transport such as motorbikes and auto-rickshaws in India’s congested, price-sensitive cities. The Bengaluru-based startup has been expanding to smaller towns, too.
The funding comes in the wake of Uber CEO Dara Khosrowshahi’s visit to India, where the ride-hailing giant this week unveiled plans to expand its engineering and infrastructure operations via two new technology campuses and a local data center partnership. Uber earlier this year infused $330 million into its India subsidiary as it sought to strengthen its presence amid growing competition from local rivals like Ola, Rapido, and Namma Yatri.
Khosrowshahi said last year that Rapido had overtaken Ola as Uber’s biggest competitor in the country.
India is currently one of the world’s most challenging ride-hailing markets because of intense price competition, supply issues, high driver incentive costs, and evolving local regulations. Nevertheless, Rapido has rapidly expanded its market share, even entering the food delivery business through its subsidiary Ownly last year.
Osaurus brings both local and cloud AI models to your Mac
As AI models increasingly become commoditized, startups are racing to build the software layer that sits on top of them. One interesting entrant into this space is Osaurus, an open source, Apple-only LLM server that lets users move between different AI models, running either locally or in the cloud, while keeping their files and tools on their own hardware.
Osaurus evolved out of the idea for a desktop AI companion, Dinoki, which Osaurus co-founder Terence Pae described as a sort of “AI-powered Clippy.” Dinoki’s customers had asked him why they should buy the app if they still had to pay for tokens — the usage units AI companies charge for processing prompts and generating responses.
That got Pae thinking more deeply about running AI locally.
“That’s how Osaurus started,” Pae, previously a software engineer at Tesla and Netflix, told TechCrunch over a call. The idea, he explained, was to try to run an AI assistant locally. “You can do pretty much everything on your Mac locally, like browsing your files, accessing your browser, accessing your system configurations. I figured this would be a great way to position Osaurus as a personal AI for individuals.”
Pae began building the tool in public as an open source project, adding features and fixing bugs along the way.

Today, Osaurus can flexibly connect to locally hosted AI models or to cloud providers like OpenAI and Anthropic. Users can freely choose which models they use while keeping other parts of the AI experience, like the model’s memory and their own files and tools, on their own hardware.
Given that different AI models have different strengths, the advantage of this system is that users can switch to the AI model that best fits their needs.
Such a structure makes Osaurus what’s called a “harness” — a control layer that connects different AI models, tools, and workflows through a single interface, similar to tools like OpenClaw or Hermes. The difference is that such tools are often aimed at developers who know their way around a terminal, and some, as in the case of OpenClaw, raise security concerns of their own.
Osaurus, meanwhile, presents an easy-to-use interface aimed at consumers and addresses security concerns by running things in a hardware-isolated virtual sandbox, which limits the AI to a defined scope and keeps your computer and data safe.
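The harness pattern described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not Osaurus’s actual API: names like `Harness`, `register`, and the lambda backends are invented here to show the idea of one interface routing prompts to interchangeable model backends while a sandbox restricts what the AI may touch.

```python
# Hypothetical sketch of a "harness": one interface that routes prompts to
# interchangeable model backends (local or cloud) and sandboxes file access.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Harness:
    backends: dict[str, Callable[[str], str]] = field(default_factory=dict)
    allowed_paths: set[str] = field(default_factory=set)  # the sandbox scope

    def register(self, name: str, backend: Callable[[str], str]) -> None:
        # A backend is anything that maps a prompt to a response:
        # a local model runtime or a cloud API client.
        self.backends[name] = backend

    def ask(self, model: str, prompt: str) -> str:
        # The caller picks whichever model best fits the task; memory,
        # files, and tools stay on the local machine either way.
        return self.backends[model](prompt)

    def read_file(self, path: str) -> str:
        # Sandboxing: the AI may only touch explicitly granted paths.
        if path not in self.allowed_paths:
            raise PermissionError(f"{path} is outside the sandbox")
        with open(path) as f:
            return f.read()

# Usage: switch between a fast local model and a stronger cloud one per request.
harness = Harness()
harness.register("local-small", lambda p: f"[local] {p}")
harness.register("cloud-large", lambda p: f"[cloud] {p}")
print(harness.ask("local-small", "Summarize my notes"))
print(harness.ask("cloud-large", "Draft a contract clause"))
```

The design choice this illustrates is that the routing layer, not any single model, owns the user’s context, which is what lets a tool like this swap models freely without moving files off the machine.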

Of course, the practice of running AI models on your machine is still in its early days, given that it’s heavily resource-intensive and hardware-dependent. To run local models, your system will need at least 64GB of RAM. For running larger models, like DeepSeek v4, Pae recommends systems with about 128GB of RAM.
But Pae believes local AI’s needs will come down in time.
“I can see the potential of it, because the intelligence per wattage — which is like the metric for local AI — has been going up significantly. It’s on its own curve of innovation. Last year, local AI could barely finish sentences, but today it can actually run tools, write code, access your browser, and order stuff from Amazon … It’s just getting better and better,” he said.

Osaurus today can run MiniMax M2.5, Gemma 4, Qwen3.6, GPT-OSS, Llama, DeepSeek V4, and other models. It also supports Apple’s on-device foundation models, Liquid AI’s LFM family of on-device models, and in the cloud, it can connect to OpenAI, Anthropic, Gemini, xAI/Grok, Venice AI, OpenRouter, Ollama, and LM Studio.
Osaurus is also a full MCP (Model Context Protocol) server, so you can give any MCP-compatible client access to your tools as well. Plus, it ships with over 20 native plug-ins for Mail, Calendar, Vision, macOS Use, XLSX, PPTX, Browser, Music, Git, Filesystem, Search, Fetch, and more.
More recently, Osaurus was updated to include voice capabilities as well.
Since the project went live nearly a year ago, it has been downloaded north of 112,000 times, according to its website. The app competes with other tools that let you run models locally, like Ollama, Msty, LM Studio, and others, but offers a differentiated feature set and presents itself as a more user-friendly option for non-developers, too.
Currently, Osaurus’ founders (who include co-founder Sam Yoo) are participating in the New York-based startup accelerator Alliance. They’re also thinking about next steps, which could see Osaurus being offered to businesses, like those in the legal space or in healthcare, where running local LLMs could address privacy concerns.
As the power of local AI models grows, the team believes it could lower the demand for AI data centers.
“We’re seeing this explosive growth in the AI space where [cloud AI providers] have to scale up using data centers and infrastructure, but we feel like people haven’t really seen the value of the local AI yet,” Pae said. “Instead of relying on the cloud, they can actually deploy a Mac Studio on-prem, and it should use substantially less power. You still have the capabilities of the cloud, but you will not be dependent on a data center to be able to run that AI,” he added.
Meridian Ventures launches a $35M fund focused on MBA-deferred founders
Meridian Ventures was born out of a shared experience: deferred MBAs. Now, founders Devon Gethers and Karlton Haney have raised a $35 million fund to back pre-seed and seed-stage companies started by people like them.
Gethers, 29, told TechCrunch the idea for a firm arose after he met Haney in Harvard’s MBA deferred admission program in 2020.
Gethers grew up in poverty in Washington State, studied behavioral science and finance at the University of Utah, then moved into private equity before launching a company of his own (which he later exited). Haney, meanwhile, grew up on a farm in Arkansas, raising chickens, birds, and “anything that flew,” Gethers said about his business partner.
Haney, 28, went on to study industrial engineering at the University of Arkansas and worked as an investor at the Stephens Group, a family office. The two came together in 2023 with the idea of launching a firm that backed people with MBAs, with a tilt toward those who had deferred.
“Our thesis is going against a bit of the grain, the rhetoric you hear in Silicon Valley that MBAs don’t make good founders,” Gethers said, referring to the belief that an MBA prepares students for corporate culture, not the flexible, free-wheeling world of Silicon Valley.
To prove their thesis, Gethers and Haney went out and cold-called prospective limited partners and knocked on doors until they raised $2.5 million as a proof-of-concept fund to back 45 companies.
The two headed off to Harvard Business School in summer 2023 and, about a year into it, decided to try to raise their first institutional fund. The funding environment was tough, but the pair ended up raising an oversubscribed $35 million fund from LPs, including publicly traded banks, family offices, and Fortune 500 executives, Gethers said. They graduated from Harvard Business School in 2025.
This new fund will back founders building enterprise technology in the United States. Meridian is sector-agnostic, Gethers said, noting that the firm has already invested in companies in fintech, logistics, healthcare, and, of course, AI. The average check size will be $500,000 for pre-seed and $750,000 for seed, and the firm hopes to deploy the capital over the next three years.
“We saw an expanding gap between ambitious founders building frontier technologies and the capital required to help carry those ambitions forward,” Gethers said. “With this $35 million fund, our goal is to seal that gap.”
This piece was updated to clarify that the firm also backs those who have not deferred.
