Tech
Musk bashes OpenAI in deposition, saying ‘nobody committed suicide because of Grok’
In a newly released deposition filed in Elon Musk’s case against OpenAI, the tech executive attacked OpenAI’s safety record, claiming that his company, xAI, better prioritizes safety. He went so far as to say that “Nobody has committed suicide because of Grok, but apparently they have because of ChatGPT.”
The comment came up in a line of questioning about a public letter Musk signed in March 2023. In it, he called on AI labs to pause development of AI systems more powerful than GPT-4, OpenAI’s flagship model at the time, for at least six months. The letter, which was signed by over 1,100 people, including many AI experts, stated there was not enough planning and management taking place at AI labs, as they were locked in an “out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”
Those fears have since gained credibility. OpenAI now faces a series of lawsuits alleging that ChatGPT’s manipulative conversation tactics have led several people to experience negative mental health effects, with some dying by suicide. Musk’s comment suggests that these incidents could be used as fodder in his case against OpenAI.
The transcript of Musk’s video testimony, which took place back in September, was filed publicly this week, ahead of the expected jury trial next month.
The lawsuit against OpenAI centers on the company’s shift from a nonprofit AI research lab to a for-profit company, which Musk claims violated its founding agreements. As part of his arguments, Musk claims that AI safety could be compromised by OpenAI’s commercial relationships, as such relationships would place speed, scale, and revenue above safety concerns.
However, since that recording, xAI has faced safety concerns of its own. Last month, Musk’s social network X was flooded with nonconsensual nude images generated by xAI’s Grok, some of which were said to be of minors. This led the California Attorney General’s office to open an investigation into the matter. The EU is also running its own investigation, and other governments have taken action, too, with some imposing blocks and bans.
In the newly filed deposition, Musk claimed he had signed the AI safety letter because “it seemed like a good idea,” not because he had just incorporated an AI company looking to compete with OpenAI.
“I signed it, as many people did, to urge caution with AI development,” Musk said. “I just wanted … AI safety to be prioritized.”

Musk also responded to other questions in the deposition, including those about artificial general intelligence, or AGI — the concept of AI that can match or surpass human reasoning across a broad range of tasks — saying “it has a risk.” He also confirmed that he “was mistaken” about his supposed $100 million donation to OpenAI; the second amended complaint in the case puts the actual figure closer to $44.8 million.
He also recalled why OpenAI was founded, which, from his perspective, was because he was “increasingly concerned about the danger of Google being a monopoly in AI,” adding that his conversations with Google co-founder Larry Page were “alarming, in that he did not seem to be taking AI safety seriously.” OpenAI was formed as a counterweight to that threat, Musk claimed.
Revolut eyes valuation of up to $200B in eventual IPO
British neobank Revolut seems to be eyeing a major valuation bump when it eventually goes public. The company is targeting a market cap between $150 billion and $200 billion in an initial public offering, the Financial Times reported on Tuesday, citing anonymous investor sources.
The fintech giant, which secured a full banking license in the United Kingdom in March after years of waiting, was most recently valued at $75 billion, up from $45 billion in 2024, in a secondary share sale that made it one of Europe’s most valuable private tech companies.
Revolut’s co-founder and CEO, Nik Storonsky, last week said that the company’s IPO was at least “two years away,” according to Bloomberg.
According to PitchBook and the Financial Times, the company is working on another secondary share sale, scheduled for the second half of 2026, that would value it at more than $100 billion.
As of November 2025, the company had raised a total of $5.89 billion, according to PitchBook. Revolut reported revenue of $6 billion in the financial year ended December 31, 2025, up from $4 billion in 2024. Its net profit grew to $1.7 billion from $1 billion in 2024, and it counted 68.3 million retail customers at the end of 2025.
Revolut declined to comment.
Founded in 2015, Revolut offers a range of services spanning multi-currency accounts, payment and transfer services, crypto products, insurance, and more. The neobank has been pouring truckloads of cash into expanding its operations internationally, and recently applied for a banking license in the United States.
Besides the U.K., Revolut has a banking license in the European Union, and it operates in Australia, Japan, New Zealand, Singapore, Brazil, and the U.S. Revolut launched operations in India last October, is about to start operating in Colombia this year, and has received a banking license in Mexico.
Amazon taps Sweden’s Einride for its electric big rigs
Einride is adding 75 of its electric heavy-duty trucks to Amazon’s Relay freight network as part of a deal that gives the Swedish startup a toehold in the e-commerce giant’s operations. Einride will also provide charging infrastructure across five locations in the United States under the agreement, which was announced Tuesday.
Amazon isn’t buying or operating the electric trucks. Instead, Einride will own and manage the trucks using its own Saga AI software, and drivers in Amazon’s Relay freight network will be able to use them. Relay, launched in 2017, is an app that truck drivers can use to book hauling gigs with Amazon.
Einride CEO Roozbeh Charli, who took over as chief nearly a year ago, said working with Amazon is a powerful validation of the startup’s technology and strategic vision.
“By deploying our intelligent platform within one of the world’s most sophisticated logistics networks, we are accelerating growth, while continuing to build industry-leading operational expertise,” he said in a statement.
Einride has gained attention and investment for its two-pronged approach to freight. The company has developed and now operates a fleet of about 200 heavy-duty electric trucks for companies like Heineken, PepsiCo, and Carlsberg Sweden in Europe, North America, and the UAE. It has also developed autonomous pod-like trucks, which stand out for their cab-less design.
The agreement with Amazon doesn’t include the autonomous pods.
Einride has landed this agreement at a critical time: The startup is finalizing a merger with blank-check company Legato Merger Corp. and is expected to go public soon.
While the agreement might not carry the same weight for Amazon, which has a market cap of $2.7 trillion, it does contribute to its low-carbon goals. Amazon has said it wants to reach net-zero carbon emissions across its operations by 2040.
“This rollout is an important step forward in addressing one of the toughest challenges we face in decarbonizing our transportation network — electrifying heavy-duty trucking,” an Amazon spokesperson said in an emailed statement. “We’re excited to continue to collaborate with Einride and learn from these operations as the trucks hit the road.”
YouTube expands its AI likeness detection technology to celebrities
YouTube is expanding access to its “likeness detection” technology, which identifies AI-generated content such as deepfakes, to people in the entertainment industry, the company announced on Tuesday.
The technology works similarly to YouTube’s existing Content ID system, which detects copyright-protected material in users’ uploaded videos, allowing rights owners to request removal or share in the video’s revenue.
Likeness detection does the same, but for simulated faces. The feature is meant to help protect creators and other public figures from having their identities used without their permission — a common problem for celebrities who find their likenesses have been used in scam advertisements.
The technology was first made available to a subset of YouTube creators in a pilot program last year before expanding more broadly to include politicians, government officials, and journalists this spring.

Now YouTube says the technology is being made available to those in the entertainment industry, including talent agencies, management companies, and the celebrities they represent. The company has support from major agencies like CAA, UTA, WME, and Untitled Management, which offered feedback on the new tool.
Use of the likeness detection tool does not require entertainers to have their own YouTube channels.
Instead, the feature scans for AI-generated content to detect visual matches of an enrolled participant’s face. Users can then choose to request removal of the video for privacy policy violations, submit a copyright removal request, or do nothing. YouTube notes that it won’t remove all content, as it permits parody and satire content under its rules.
In the future, the technology will support audio as well, the company says.
YouTube has also been advocating for similar protections at the federal level, supporting the NO FAKES Act in Washington, D.C., which would regulate the use of AI to create unauthorized re-creations of an individual’s voice and visual likeness.
The company hasn’t yet said how many AI deepfake removals the tool has handled so far, but it noted in March that the number was still “very small.”
