Tech
GibberLink lets AI agents call each other in robo-language
A weekend hackathon project that lets AI agents talk on the phone with each other in a robotic language, one that’s incomprehensible to humans, has gone viral on social media over the past week.
The project, called GibberLink, was created by two Meta software engineers during a hackathon competition in London, hosted by ElevenLabs and Andreessen Horowitz.
GibberLink allows an AI agent to recognize when it’s speaking on the phone with another AI agent, the project’s creators, Boris Starkov and Anton Pidkuiko, told TechCrunch in an interview. Once an AI agent realizes it’s talking to another AI agent, GibberLink prompts the agents to switch into a more efficient communication protocol called GGWave.
GGWave is an open-source library of sounds in which each sound represents a small bit of data. This lets computers communicate faster and more efficiently than they can by using human speech. To the human ear, however, GGWave sounds like a series of “beeps” and “boops” – exactly what you’d imagine a computer’s native language sounds like.
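GGWave itself is a real open-source library (ggerganov/ggwave) whose actual protocols use multiple simultaneous tones plus error correction. The sketch below is not that protocol, just a minimal, hedged illustration of the core idea the article describes: each short sound encodes a few bits of data. Here, every 4-bit nibble is mapped to one of 16 pure tones, and the decoder picks the strongest tone in each window.

```python
import math

SAMPLE_RATE = 48_000     # samples per second
SYMBOL_SECONDS = 0.01    # each tone lasts 10 ms
BASE_FREQ = 2_000.0      # Hz for nibble value 0
FREQ_STEP = 100.0        # Hz between adjacent nibble values

def nibble_freq(nibble: int) -> float:
    """Map a 4-bit value (0-15) to one of 16 audible tones."""
    return BASE_FREQ + nibble * FREQ_STEP

def encode(payload: bytes) -> list[float]:
    """Emit one pure tone per nibble, high nibble first."""
    n = int(SAMPLE_RATE * SYMBOL_SECONDS)
    samples: list[float] = []
    for byte in payload:
        for nibble in (byte >> 4, byte & 0x0F):
            w = 2 * math.pi * nibble_freq(nibble) / SAMPLE_RATE
            samples.extend(math.sin(w * t) for t in range(n))
    return samples

def tone_energy(window: list[float], freq: float) -> float:
    """Signal energy at `freq`: one DFT bin, computed directly."""
    w = 2 * math.pi * freq / SAMPLE_RATE
    re = sum(s * math.cos(w * i) for i, s in enumerate(window))
    im = sum(s * math.sin(w * i) for i, s in enumerate(window))
    return re * re + im * im

def decode(samples: list[float]) -> bytes:
    """Pick the strongest of the 16 candidate tones in each 10 ms window."""
    n = int(SAMPLE_RATE * SYMBOL_SECONDS)
    nibbles = [
        max(range(16), key=lambda k: tone_energy(samples[i:i + n], nibble_freq(k)))
        for i in range(0, len(samples) - n + 1, n)
    ]
    return bytes((hi << 4) | lo for hi, lo in zip(nibbles[::2], nibbles[1::2]))
```

With these parameters each 10 ms window holds a whole number of cycles of every candidate tone, so the tones are orthogonal and a round trip (`decode(encode(msg)) == msg`) is exact. Real acoustic transmission would add noise, echo, and clock drift, which is why GGWave's actual design is considerably more involved.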
While it seems unlikely today that two AI agents would end up on the phone with one another, it’s not impossible to imagine these scenarios arising soon. Companies are increasingly replacing call center employees with AI agents from ElevenLabs, Level AI, Retell AI, and other voice-based AI startups.
At the same time, tech giants such as OpenAI, Google, and Amazon are starting to introduce consumer AI agents capable of handling complex tasks on your behalf. These AI agents may soon be able to call a customer service center for you.
In this potential future, GibberLink could enhance the efficiency of communication between AI agents, provided both sides have the protocol enabled. While AI voice models are pretty good at translating human speech into tokens an AI model can understand, the whole process is very compute intensive – and just unnecessary – if two AI agents are talking to each other. Starkov and Pidkuiko estimate that AI agents communicating via GGWave could reduce computation costs by an order of magnitude or more.
For today though, it’s just a cool project. Starkov and Pidkuiko created a website that you can open on two devices to watch as the AI agents talk to each other in GGWave.
Much like a good sci-fi movie, GibberLink’s demo sparked widespread curiosity – and anxiety – about the future of AI agents. In the week since the London hackathon, a video demonstration of GibberLink has amassed over 15 million views on X, and was even reposted by YouTube’s most followed tech reviewer, Marques Brownlee.
However, Starkov and Pidkuiko emphasize that GibberLink’s underlying technology isn’t new – it dates back to the dial-up internet modems of the 1980s.
Some might recall the distinctive sounds of early computers communicating with modems via household landlines – a process known as the “handshake.” Essentially, this handshake represented data transfers using a robotic language, which is fundamentally similar to what’s happening between AI agents through GibberLink.
Starkov and Pidkuiko also noted that the viral craze around GibberLink has taken on a life of its own. Someone purchased the domain GibberLink.com and is now trying to sell it for $85,000. Others have created a GibberLink memecoin, while a few imposters are selling webinars purportedly teaching “agent-to-agent communications.”
Currently, GibberLink’s creators say they are not commercializing the project, and clarify that it is unrelated to their work at Meta. Instead, Starkov and Pidkuiko have open-sourced GibberLink on GitHub, though they say they may work on some additional tooling related to the project in their free time, and release it in the near future.
Khosla-backed robotics startup Genesis AI has gone full stack, demo shows
Genesis AI, a startup that raised a $105 million seed round to build foundational AI for robotics, has unveiled its first model, GENE-26.5, and it comes with surprise hands. In a demo video, the company showcased various advanced tasks performed by a set of robotic hands it has designed in-house.
“The model has always been the goal, because a better model means better intelligence,” Genesis co-founder and CEO Zhou Xian told TechCrunch. But the company soon realized that it needed control over the hardware. “So we decided to go full stack,” he said.
Other well-funded companies operate at the intersection of AI and robotics — such as Physical Intelligence and Skild AI. Zhou also acknowledged that “there’s probably 50 or 100 robotic hand companies out there.” But he and his co-founder Théophile Gervet hope that building their own will give them the upper hand.
The key difference is that Genesis’ hand has the same size and shape as a human hand — rather than the two-finger grippers many robotics companies have been using — reducing the gap with real-world conditions.
“That lets us collect a lot more data than was previously possible, to train a model that can do many more tasks,” said Gervet, a former research scientist at Mistral AI who is now Genesis’ president.
Of all the physical manipulation tasks showcased in the video below, Gervet’s personal favorite is cooking, because it proves that the robot has been able to complete a long series of difficult tasks, such as cracking an egg and slicing a tomato. But Genesis has also tasked its robots with preparing smoothies, playing the piano, and solving a Rubik’s Cube — a robotics gimmick.
Other tasks, such as lab work, are closer to what could be the commercial applications of Genesis’ technology. But what happens behind the scenes is just as important: The startup has also developed a sensor-loaded glove that works as a real-life double of its robotic hand, collecting data that can more readily be used.
“Our idea was that if we could design a robotic hand that tries to mimic a human hand as much as possible, we can instantly unlock huge amounts of human data without having to worry about what people call the ‘embodiment gap’ in robotics research,” Zhou said.
Others have tried their hand at that problem; the main novelty is how Genesis combines this with its model. The current version is named GENE-26.5 for May 2026, but Zhou expects there will be many iterations, thanks to the simulation it has developed. “The real bottleneck for the iteration speed of the model is evaluation. So this helps us speed up model training a lot,” he said.
Beyond simulation, though, data will be key to training models that can help robots perform more tasks. That’s also where Genesis’ glove could come in handy. Gervet said that, unlike clunky data collection devices that get in the way, it is just as light and easy to wear as the security gloves already used in many industries, while relatively cheap to make.
“We’re in talks with a lot of customers right now, and a lot of the value of a glove would be that, for the first time, you can wear the data collection device when you’re doing your daily job, whether it’s a lab technician for pharma or for manufacturing,” Gervet said. This would also be complemented by “egocentric video data” — people filming themselves doing the task.
Still, it remains to be seen whether workers would be happy to wear the very gloves and cameras that could train robots to replace them, and whether they will get extra pay for that training. That will be between Genesis’ customers and their employees, Gervet suggested. “We haven’t nailed the details yet,” he said.
Either way, those customers may decide not to share that data with the startup, the founders acknowledged. But Genesis has avenues of its own to build its “human skill library” — it could also pay third-party partners to collect data. Its model is already trained on “massive amounts of human-based internet videos,” according to a press release that didn’t mention compensation.
Combined with its simulation system, this could help Genesis lower the costs of its technology for real-world applications like the one it has demonstrated. “This marks an important milestone for their team and the robotics industry more broadly,” said Google’s former CEO, Eric Schmidt, who invested in the startup.
In July 2025, just a few months after its creation, the startup had emerged from stealth with a $105 million seed round co-led by Eclipse and Khosla Ventures, with additional backers including Bpifrance, HSG, and individuals such as Schmidt, Xavier Niel, Daniela Rus, and Vladlen Koltun.
This funding helped Genesis increase its headcount. With offices in Paris and California, it has also expanded to London. “One big reason we decided to be in Europe is there is a huge talent density across the whole continent,” Gervet said. Its team of 60 people is split around “40-45% in Europe and 50-55% in the U.S.,” and the startup is currently hiring in all three locations.
Aside from hiring, the company also plans to reveal its first general-purpose robot shortly, which Zhou told TechCrunch will be a full-body robot, not just hands. But he insisted that the roadmap is still the same.
“Our goal is to build the most capable robotic system,” he said.
When you purchase through links in our articles, we may earn a small commission. This doesn’t affect our editorial independence.
Google updates AI search to include quotes from Reddit and other sources
Google is updating search to refine its AI experience by adding additional context to links, like excerpts from web forums and blogs, as well as a feature that highlights links from a user’s news subscriptions.
While citing web forums and discussion boards can help users find answers to more niche queries, this design choice could also prove chaotic.

Two years ago, Google overhauled its search experience to put AI front and center — when you search for something, Google will often summon an “AI Overview,” which has drawn a mixed reception from users. People quickly pointed out how the feature could be exploited, since it failed to recognize sarcasm or information that comes from dubious sources. (It cited The Onion when telling someone to eat “one small rock per day,” and used Reddit to advise someone to put glue on their pizza to make the cheese stick better.)
Though Google’s AI Overviews have improved significantly, they are still prone to hallucination, like anything powered by an LLM. A recent New York Times analysis found that the AI Overviews were correct about nine times out of 10. But for a company that processes trillions of queries a year, even that success rate would mean that hundreds of thousands of searches turn up inaccurate results every minute.
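A quick back-of-envelope check of that claim (only the nine-in-ten accuracy figure comes from the analysis cited above; the query volume and the share of searches that show an AI Overview are assumptions for illustration):

```python
QUERIES_PER_YEAR = 5e12   # assumed: "trillions of queries a year"
OVERVIEW_SHARE = 0.2      # assumed: fraction of searches showing an AI Overview
ERROR_RATE = 0.1          # "correct about nine times out of 10"

minutes_per_year = 365 * 24 * 60  # 525,600
errors_per_minute = QUERIES_PER_YEAR * OVERVIEW_SHARE * ERROR_RATE / minutes_per_year
print(f"~{errors_per_minute:,.0f} inaccurate AI Overviews per minute")
```

Under these assumed inputs the result lands around 190,000 per minute, which is consistent with the “hundreds of thousands” order of magnitude.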
Of course, not every search has an objective yes-or-no answer, which is why Google might want to pull in voices from web forums where people discuss such questions — there’s a reason why people often add “Reddit” to the end of their Google searches.
“For many searches, people are increasingly seeking out advice from others,” Google explains. “To help you find the most helpful insights to explore further, AI responses will now include a preview of perspectives from public online discussions, social media, and other firsthand sources. We’re also adding more context to these links, like a creator’s name, handle, or community name, to help you decide which discussions you might want to read or participate in.”
But now Google is complicating the role of its AI Overviews. Is the AI Overview supposed to answer a question, or is it supposed to serve you a variety of sources that might have the information you’re looking for? Isn’t that basically just a normal Google search?

Google will, at least, add more context to where its AI Overview commentary comes from, which might help users decide whether they’re getting information from a trustworthy source. It’s similar to how ChatGPT or Claude will sometimes provide links that are supposed to back up their claims.
Still, we’d recommend double-checking those citations, since an AI can overstate or outright hallucinate how well a source supports its claims.
Chrome on Android now supports approximate instead of precise location sharing
Chrome on Android now lets users share their approximate location with websites rather than their precise location, Google announced this week. The tech giant says that while some cases require precise location, such as when you’re placing a delivery order or trying to find the closest ATM, there are instances when your approximate location is enough, like when you’re checking local weather and news.
“By letting you share your approximate location, we’re giving you more control over your location data,” Google explained in a blog post. “And you can still share your precise location when it’s needed — e.g., for navigation — so you won’t lose functionality.”
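Google hasn’t detailed how Chrome computes an approximate location, but the underlying idea is simply reduced precision. A minimal sketch of that idea (coordinate rounding; the visitor coordinates are hypothetical, and this is not Chrome’s actual method):

```python
def approximate(lat: float, lon: float, decimals: int = 1) -> tuple[float, float]:
    """Coarsen a coordinate pair by rounding: one decimal place of latitude
    is roughly 11 km of uncertainty, plenty for weather or local news."""
    return (round(lat, decimals), round(lon, decimals))

precise = (48.858370, 2.294481)   # hypothetical visitor near the Eiffel Tower
coarse = approximate(*precise)
print(coarse)                     # (48.9, 2.3)
```

A site that receives only the coarse pair can still pick the right city-level forecast, while the precise pair stays on the device.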

Google plans to bring this feature to the desktop in the coming months. The company did not share a timeline for when, or if, the feature will launch for Chrome on iOS.
The company will also introduce new APIs that let web developers request an approximate location, or specify when a precise location is necessary. The tech giant says it encourages developers to review their location needs and only request precise location when it’s essential for a site’s functionality.
The new feature is a small win for Android users, as it gives them more control over how much location data they share with websites.
