Tech
India bids to attract over $200B in AI infrastructure investment by 2028
India has set out an aggressive push to attract more than $200 billion in artificial-intelligence infrastructure investment over the next two years, as it seeks to position itself as a global hub for AI computing and applications at a time when capacity, capital, and regulation are becoming strategic assets.
The plans were outlined on Tuesday by India’s IT minister Ashwini Vaishnaw at the government-backed five-day AI Impact Summit in New Delhi, attended by senior executives from OpenAI, Google, Anthropic, and other global technology firms. To attract investment, the government is rolling out a mix of tax incentives, state-backed venture capital, and policy support aimed at pulling more of the global AI value chain into the South Asian nation.
India’s pitch comes as U.S. technology giants, including Amazon, Google, and Microsoft, have already committed about $70 billion to expand AI and cloud infrastructure in the country, giving New Delhi a foundation to argue it can combine scale, cost advantages, and policy incentives to attract the next wave of global AI computing investment.
The bulk of the projected $200 billion is expected to flow into AI infrastructure, including data centers, chips, and supporting systems; that figure encompasses the roughly $70 billion already pledged by Big Tech companies. Vaishnaw said the Indian government also anticipates an additional $17 billion of investment in deep-tech and AI applications, underscoring a push to move beyond infrastructure and capture more of the value chain.
The effort is backed by recent policy decisions aimed at making India a more attractive base for AI computing, including long-term tax relief for export-oriented cloud services and a ₹100 billion (about $1.1 billion) government-backed venture program targeting high-risk areas such as AI and advanced manufacturing. Earlier this month, New Delhi also extended the period for which deep-tech companies qualify as startups to 20 years and raised the revenue threshold for startup-specific benefits to ₹3 billion (about $33 million).
“We have seen VCs committing funds for deep-tech startups,” Vaishnaw said at a press briefing on the sidelines of the AI Impact Summit in New Delhi. “We have seen VCs and other players committing funds for big solutions, big applications. We have seen VCs committing funds for further research in cutting-edge models.”
India plans to scale its shared compute capacity under the IndiaAI Mission beyond its existing 38,000 GPUs, the minister said, with an additional 20,000 units to be added in the coming weeks, signaling what he described as the next phase of the country’s AI strategy.
Looking ahead, Vaishnaw said the Indian government is preparing a second phase of its AI Mission, with a stronger focus on research and development, innovation, and wider diffusion of AI tools, alongside further expansion of shared compute capacity, as India seeks to broaden access to AI infrastructure beyond a small group of companies.
The push also faces structural challenges, including access to reliable power and water for energy-intensive data centers, underlining the execution risks as India seeks to compress years of AI infrastructure build-out into a much shorter time frame.
Vaishnaw acknowledged those challenges, saying the government was cognizant of the pressure AI infrastructure would place on power and water resources, and pointed to India’s energy mix — with more than half of installed generation capacity coming from clean sources — as an advantage as demand from data centers rises.
Whether India can deliver on that vision will matter well beyond its borders, as companies seek new locations for AI computing amid rising costs, capacity constraints, and intensifying global competition.
Tech
Cohere launches a family of open multilingual models
Enterprise AI company Cohere launched a new family of multilingual models on the sidelines of the ongoing AI Impact Summit in New Delhi. The models, dubbed Tiny Aya, are open-weight, meaning their trained parameters are publicly available for anyone to use and adapt. They support over 70 languages and can run on everyday devices like laptops without requiring an internet connection.
The models, launched by the company’s research arm Cohere Labs, support South Asian languages such as Bengali, Hindi, Punjabi, Urdu, Gujarati, Tamil, Telugu, and Marathi.
The base model contains 3.35 billion parameters — a measure of its size and complexity. Cohere has also launched TinyAya-Global, a version fine-tuned to better follow user commands, for apps that require broad language support. Regional variants round out the family: TinyAya-Earth for African languages; TinyAya-Fire for South Asian languages; and TinyAya-Water for Asia Pacific, West Asia, and Europe.

“This approach allows each model to develop stronger linguistic grounding and cultural nuance, creating systems that feel more natural and reliable for the communities they are meant to serve. At the same time, all Tiny Aya models retain broad multilingual coverage, making them flexible starting points for further adaptation and research,” the company said in a statement.
Cohere noted that these models, which were trained on a single cluster of 64 H100 GPUs (a type of high-powered Nvidia chip) using relatively modest computing resources, are aimed at researchers and developers building apps for audiences that primarily use local languages. Because the models can run directly on devices, developers can use them to power offline translation. The company said it built the underlying software for on-device usage, requiring less computing power than most comparable models.

In linguistically diverse countries like India, this kind of offline-friendly capability can open up a diverse set of applications and use cases without the need for constant internet access.
The models are available on Hugging Face, the popular platform for sharing and testing AI models, and on the Cohere Platform; developers can download them from Hugging Face, Kaggle, and Ollama for local deployment. The company is also releasing training and evaluation datasets on Hugging Face and plans to publish a technical report detailing its training methodology.
Techcrunch event
Boston, MA
|
June 23, 2026
The startup’s CEO, Aidan Gomez, said last year that the company plans to go public “soon.” According to CNBC, the company ended 2025 on a high note, posting $240 million in annual recurring revenue, with 50% growth quarter-over-quarter throughout the year.
Tech
As AI jitters rattle IT stocks, Infosys partners with Anthropic to build ‘enterprise-grade’ AI agents
Indian IT giant Infosys said on Tuesday it has partnered with Anthropic to develop enterprise-grade AI agents, as automation driven by large language models reshapes the global IT services industry.
Under the partnership, Infosys plans to integrate Anthropic’s Claude models into its Topaz AI platform to build so-called “agentic” systems. The companies claim these agents will be able to autonomously handle complex enterprise workflows across industries such as banking, telecoms, and manufacturing. The tie-up was announced at India’s AI Impact Summit in New Delhi this week, which has drawn top executives from AI companies and Big Tech alike.
The deal comes amid fears that AI tools, especially those built by major AI labs like Anthropic and OpenAI, will disrupt India’s heavily staffed, $280 billion IT services industry, raising questions about the future of labor-intensive outsourcing business models. Earlier this month, shares of Indian IT companies went into freefall after Anthropic launched a suite of enterprise AI tools that claimed to automate tasks across legal, sales, marketing, and research roles.
The partnership would give Infosys, one of the world’s largest IT services businesses, access to Anthropic’s Claude models and developer tools for building AI agents tailored for large enterprises. Infosys said it would use Anthropic’s Claude Code to help write, test, and debug code, and said it is already deploying the tool internally to build expertise that will be applied to client work.
Infosys also detailed how AI is contributing to its business: AI-related services generated revenue of ₹25 billion (around $275 million), or 5.5% of the company’s total revenue of ₹454.8 billion (about $5 billion) in the December quarter. Rival Tata Consultancy Services previously said its AI services generate about $1.8 billion annually, or around 6% of revenue.
For Anthropic, the partnership offers a route into heavily regulated enterprise sectors where deploying AI systems at scale requires industry expertise and governance capabilities.
“There’s a big gap between an AI model that works in a demo and one that works in a regulated industry,” said Anthropic co-founder and CEO Dario Amodei. Infosys’ experience in sectors such as financial services, telecoms, and manufacturing helps bridge that gap, he said.
Anthropic this week also opened its first India office in Bengaluru, as it seeks to expand further into the country, which has grown into the company’s second-largest market. Anthropic said India now accounts for about 6% of global Claude usage, second only to the U.S., and much of that activity is concentrated in programming.
Infosys did not disclose the timeline for deploying Claude-powered AI agents or the financial terms of the deal.
The partnership mirrors similar moves by other Indian IT services firms; HCLTech, for instance, partnered with OpenAI last year to help enterprises deploy AI tools at scale.
Tech
Airbnb expands its “Reserve Now, Pay Later” feature globally
Airbnb said on Tuesday that it is rolling out its “Reserve Now, Pay Later” feature globally. The option lets users secure bookings without paying immediately and cancel without losing money upfront if their plans change.
The company launched the feature in the U.S. last year for domestic travel. Airbnb said that properties with a “flexible” or “moderate” cancellation policy are eligible. With this option, users are charged closer to their check-in date rather than at the time of booking. The feature mirrors the “buy now, pay later” payment plans that have become popular in e-commerce, making expensive travel more accessible by spreading out costs. Airbnb noted that since launch, the feature has seen 70% adoption among eligible bookings.

During its Q4 2025 earnings call, Airbnb said that the feature helped grow nights booked in the quarter.
“Reserve Now, Pay Later saw significant adoption among eligible guests in Q4. It’s also led to longer booking lead times and a mix shift towards larger entire homes, especially those with four or more bedrooms, contributing to the increase in average daily rate,” Ellie Mertz, CFO of Airbnb, said during the call.
Mertz noted that Airbnb’s overall cancellation rate rose from 16% to 17% for the quarter, and that it was higher among customers using the pay-later option. However, she said this was “not hugely material relative to the broader cancellations on the platform.”
Last year, the company surveyed U.S. travelers in partnership with Focaldata, a London-based market research and polling company. Of those surveyed, 60% said that a flexible payment option is important when booking a holiday, and 55% said they would use one.
The company has been experimenting with pay-later products for years. Back in 2018, Airbnb launched a product that allowed users to book a property by paying 20% or 50% of the total charges upfront, with the rest due later. In 2023, the company partnered with fintech firm Klarna to let users pay for their stays in four installments over six weeks.