Tech

How AI changes the math for startups, according to a Microsoft VP

For 24 years, Microsoft’s Amanda Silver has been working to help developers — and in the last few years, that’s meant building tools for AI. After a long stretch on GitHub Copilot, Silver is now a corporate vice president at Microsoft’s CoreAI division, where she works on tools for deploying apps and agentic systems within enterprises.

Her work is focused on the Foundry system inside Azure, which is designed as a unified AI portal for enterprises, giving her a close view of how companies are actually using these systems and where deployments end up falling short.

I spoke with Silver about the current capabilities of enterprise agents, and why she believes this is the biggest opportunity for startups since the public cloud.

This interview was edited for length and clarity.

So, your work focuses on Microsoft products for outside developers — often startups that aren’t otherwise focused on AI. How do you see AI impacting those companies?

I see this as a watershed moment for startups, as profound as the move to the public cloud. The cloud had a huge impact on startups because they no longer needed real estate to host their racks, and they didn't need to spend as much on the up-front capital of getting hardware hosted in their labs. Everything became cheaper. Now agentic AI is going to continue to reduce the overall cost of software operations, because many of the jobs involved in standing up a new venture, whether it's support people or legal investigations, can be done faster and cheaper with AI agents. I think that's going to lead to more ventures and more startups launching. And then we're going to see higher-valuation startups with fewer people at the helm. I think that's an exciting world.

What does that look like in practice?

We are certainly seeing multistep agents becoming very broadly used across all different kinds of coding tasks. Just as an example, one thing developers have to do to maintain a codebase is stay current with the latest versions of the libraries it depends on. You might have a dependency on an older version of the .NET runtime or the Java SDK. We can have these agentic systems reason over your entire codebase and bring it up to date much more easily, with maybe a 70% or 80% reduction in the time it takes. And it really has to be a deployed multistep agent to do that.

Live-site operations is another one. If you think of maintaining a website or a service, something goes wrong, there's a thud in the night, and somebody on call gets woken up to respond to the incident. We still have people on call 24/7, just in case the service goes down, but it used to be a really loathed job because you'd get woken up fairly often for minor incidents. We've now built an agentic system that can diagnose, and in many cases fully mitigate, issues that come up in live-site operations, so humans don't have to be woken up in the middle of the night and groggily go to their terminals to figure out what's going on. That also dramatically reduces the average time it takes to resolve an incident.
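The pattern Silver describes can be sketched as a simple triage loop: known failure signatures get an automated mitigation, and anything unrecognized still pages a human. Here is a minimal sketch in Python; the signatures, mitigations, and alert format are all hypothetical, not Microsoft's actual system.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Alert:
    service: str
    message: str

# Hypothetical runbook: known incident signatures mapped to automated fixes.
RUNBOOK: dict[str, Callable[[Alert], str]] = {
    "OOMKilled": lambda a: f"restarted {a.service} with a larger memory limit",
    "disk_full": lambda a: f"rotated logs and freed disk space on {a.service}",
}

def handle(alert: Alert) -> str:
    """Auto-mitigate when the signature is known, otherwise page a human."""
    for signature, mitigate in RUNBOOK.items():
        if signature in alert.message:
            return "auto-mitigated: " + mitigate(alert)
    return "escalated to on-call: unrecognized incident on " + alert.service
```

A real deployment would sit behind the alerting system and log every action it takes; the point here is only that the human stays in the loop for the cases the system doesn't recognize.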

One of the other puzzles of this present moment is that agentic deployments haven’t happened quite as fast as we expected even six months ago. I’m curious why you think that is.

If you think about the people who are building agents, what's preventing them from being successful, in many cases, comes down to not really knowing what the purpose of the agent should be. There's a culture change that has to happen in how people build these systems. What is the business use case they're trying to solve? What are they trying to achieve? You need to be very clear-eyed about the definition of success for this agent. And you need to think: what data am I giving the agent so that it can reason over how to accomplish this particular task?

We see those things as the bigger stumbling blocks, more than the general uncertainty of letting agents get deployed. Anybody who goes and looks at these systems sees the return on investment.

You mention the general uncertainty, which I think feels like a big blocker from the outside. Why do you see it as less of a problem in practice?

First of all, I think that it’s going to be very common that agentic systems have human-in-the-loop scenarios. Think about something like a package return. It used to be that you would have a workflow for the return processing that was 90% automated and 10% human intervention, where somebody would have to go look at the package and have to make a judgment call as to how damaged the package was before they would decide to accept the return. 

That’s a perfect example where actually now the computer vision models are getting so good that in many cases, we don’t need to have as much human oversight over inspecting the package and making that determination. There will still be some cases that are borderline, where maybe the computer vision is not yet good enough to make a call, and maybe there’s an escalation. It’s kind of like, how often do you need to call in the manager? 

There are some things that will always need some kind of human oversight, because they’re such critical operations. Think about incurring a contractual legal obligation, or deploying code into a production codebase that could potentially affect the reliability of your systems. But even then, there’s the question of how far we could get in automating the rest of the process.
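The "call in the manager" pattern above reduces to a confidence threshold: the agent acts on clear-cut model outputs and escalates the borderline ones. Here is a minimal sketch in Python; the damage score and thresholds are hypothetical stand-ins for a real computer-vision model's output.

```python
def decide_return(damage_score: float,
                  accept_below: float = 0.3,
                  reject_above: float = 0.8) -> str:
    """Route a package return based on a vision model's damage score (0 to 1).

    Clear-cut cases are handled automatically; anything in between is
    escalated to a human reviewer.
    """
    if damage_score < accept_below:
        return "accept"            # model is confident the item is fine
    if damage_score > reject_above:
        return "reject"            # model is confident the item is damaged
    return "escalate_to_human"     # borderline: call in the manager
```

Tightening the thresholds widens the human-review band; as the models improve, that band shrinks, which is the trajectory Silver describes.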


Tech

Exclusive: Google deepens Thinking Machines Lab ties with new multi-billion-dollar deal

Former OpenAI executive Mira Murati’s startup, Thinking Machines Lab, has signed a new multi-billion-dollar agreement to expand its use of Google Cloud’s AI infrastructure, including systems powered by Nvidia’s latest GPUs, TechCrunch has exclusively learned.

The deal is valued in the single-digit billions, according to a source familiar with the matter, and includes access to Google’s latest AI systems built atop Nvidia’s new GB300 chips, alongside infrastructure services to support model training and deployment.

Google has been actively striking cloud deals with AI developers as it aims to bundle its AI computing offerings with other cloud services like storage, a Kubernetes engine, and Spanner, its database product. Earlier this month, Anthropic signed an agreement with Google and Broadcom for multiple gigawatts of tensor processing unit (TPU) capacity. TPUs are Google's custom-designed AI chips for machine learning workloads.

But the competition is fierce. Just this week, Anthropic also signed a new agreement with Amazon to secure up to 5 gigawatts of capacity for training and deploying Claude. 

Earlier this year, Thinking Machines partnered with Nvidia in a deal that included an investment from the chipmaker. But this is the first time the lab has struck a deal with a cloud services provider. The deal is not exclusive, so Thinking Machines may use multiple cloud providers over time, but it’s still a sign that Google is looking to lock in fast-growing frontier labs early. 

Murati left her job as OpenAI’s chief technologist and founded Thinking Machines in February 2025. The company, which soon afterwards raised a $2 billion seed round at a $12 billion valuation, has remained highly secretive, but launched its first product in October. Dubbed Tinker, it’s a tool that automates the creation of custom frontier AI models. 

Wednesday’s deal provided some insight into what Thinking Machines is developing. In a press release, Google noted that it can support the startup’s reinforcement learning workloads, which Tinker’s architecture relies on. Reinforcement learning is a training approach that has underpinned recent breakthroughs at labs, including DeepMind and OpenAI, and the scale of the Google Cloud deal reflects how computationally expensive that work can get. 

Thinking Machines is among the first Google Cloud customers to access its GB300-powered systems, which offer a 2X improvement in training and serving speed compared to prior-generation GPUs, per Google. 

“Google Cloud got us running at record speed with the reliability we demand,” Myle Ott, a founding researcher at Thinking Machines, said in a statement.

When you purchase through links in our articles, we may earn a small commission. This doesn’t affect our editorial independence.


Tech

The most interesting startups showcased at Google Cloud Next 2026

Google Cloud Next is taking place this week in Las Vegas, and one clear message has emerged: Google wants AI startups on its cloud. To that end, it made several startup-related announcements.

The most significant is that the tech giant has earmarked a new $750 million budget to help its Cloud partners sell more AI agents to enterprises. This funding is available to partners ranging from startups to the big consulting firms. It can be used for costs like Gemini proof-of-concept projects, Google forward-deployed engineers, cloud credits, and deployment rebates.

Google also highlighted a long list of startups that are using Google Cloud, either newly signed or expanding their footprint. Among them are a few standout names:

Lovable, the fast-growing vibe coding startup, is expanding its use of Google Cloud by launching a new coding agent through Google's enterprise app marketplace. The company said it was on a $400 million ARR track as of February.

Notion, Silicon Valley’s favorite AI-infused document productivity app, most recently valued at about $11 billion, is using Gemini models to power its text and image generation features.

Gamma, an AI-powered PowerPoint killer recently valued at $2.1 billion, is using Google's state-of-the-art image model Nano Banana 2 and other Google Cloud features.

Inferact, the commercial inference startup from the creators of the popular open-source project vLLM, is accessing Nvidia’s GPUs through Google Cloud, in addition to using the tech giant’s AI stack.

ComfyUI, the popular open-source tool for creating AI-generated images and multimedia, also offers access to Nano Banana 2 and is using other Cloud features.

Other startups that received the Google Cloud shout-out this year include:

ChorusView, which makes AI-powered smart tags that track the condition and movement of goods in real time.

Emergent AI, a vibe coding platform.

ExaCare AI, which makes AI software for post-acute medical care facilities.

Insilica, which creates AI-generated regulatory-compliant chemical safety reports.

Optii, which makes AI-enhanced hotel operations software.

Parallel AI, which builds web search and research APIs for AI agents.

Proximal Health, which makes AI-powered software that automates the insurance claims adjudication process.

Reducto, which does AI-powered document parsing.

Stord, which handles e-commerce fulfillment and parcel operations.

Stylitics, which makes AI image generation software for retailers for tasks like outfit styling and product bundles.

Temporal, a developer cloud environment built to prevent failures.

Vapi, which makes dev tools for building conversational voice agents.

Vurvey Labs, which conducts synthetic market research via AI agents.

Wand, an in-game assistant for single-player PC games.

Watershed, which makes software that helps enterprises report on and manage sustainability programs.

ZenBusiness, an all-in-one back-office tool for small businesses that includes an AI chat assistant.


Tech

Duolingo is now giving free users access to advanced learning content

Duolingo announced on Wednesday that its advanced language learning content is now available for free across nine languages: English, Spanish, French, German, Italian, Portuguese, Japanese, Korean, and Chinese. Users can access this content through the web, iOS, and Android devices.

This advanced content is at the B2 level on the Common European Framework of Reference for Languages (CEFR), the international standard for language skills recognized by schools and employers. B2-level content means learning materials with complex scenarios and specialized vocabulary, presented without translations.

The new offering will include features like “Advanced Stories,” which helps with reading comprehension, and DuoRadio, a podcast-like audio experience for listening comprehension.

Now that Duolingo users can tap into this advanced learning content for free, they can level up their skills, whether that’s practicing for job interviews, prepping for studying abroad, or tackling complex news articles, films, and books without relying on translations.

The company says this makes it the only app to offer advanced-level learning across these nine languages for free. While competitors like Babbel and Busuu offer advanced courses, they typically require paid subscriptions. For instance, Busuu has some CEFR-aligned courses up to the B2 level, but its free version is limited and doesn't include lessons like grammar explanations, so users need to pay for full access.

Previously, Duolingo only provided free courses that capped at A2 or B1 levels, mainly focusing on basic communication skills. 

Image Credits: Duolingo

The company is positioning this free advanced learning offering as an enticing opportunity for job seekers, framing language learning as a practical pathway to improving employability in an increasingly global workforce.

This comes at a time when the job market remains highly competitive and overall growth has slowed. Research from the American Council on the Teaching of Foreign Languages shows that learning a second language can raise someone’s employability by as much as 50%.

“Reaching job-ready proficiency in a new language used to be out of reach for most people,” Bozena Pajak, head of learning science at Duolingo, said in a statement. “It took years of expensive classes or immersive experiences that not everyone could access.”

Duolingo’s decision to offer advanced learning for free is also a strategy to grow its free user base. In its Q4 earnings report, the company said it had 52.7 million daily active users, up 30% from the previous year. That figure far exceeds its paid subscriber base of 12.2 million. However, Duolingo’s shares fell after the company projected that year-over-year bookings growth would slow slightly in Q2 2026.
