Kalshi fined a MrBeast editor for insider trading on markets related to the YouTube star
An editor for YouTube’s most popular creator, MrBeast, has been accused by the predictions market Kalshi of insider trading on the platform.
After an investigation, Kalshi said it “found reasonable cause” to believe that this editor, Artem Kaptur, had used non-public, insider information about MrBeast videos to inform his betting on matters involving the MrBeast YouTube channel.
Prediction markets like Kalshi and competitor Polymarket allow users to place bets on a wide variety of future events, like who will win a political election, how many albums a certain musician will sell in a week, or when the sequel to a popular film will be announced.
Kalshi did not disclose the specific bets that Kaptur placed about MrBeast, but some markets on the platform allow users to bet on what words the creator will say during an upcoming video — private information that a video editor could feasibly influence. Kalshi users can also trade on when MrBeast will get married, or when his company, Beast Industries, will announce an IPO.
A Beast Industries spokesperson told TechCrunch that the company does not tolerate this behavior, and that this stance extends to company employees, as well as contestants on MrBeast’s Amazon Prime show “Beast Games.” Contestants are also made aware that their knowledge of confidential information precludes them from participating in related prediction markets.
“With regard to this particular matter, we’ve already initiated an independent investigation as part of our overall ongoing efforts to ensure the integrity of our workplace and trust with our global audiences,” the spokesperson told TechCrunch. “We welcome Kalshi — and hopefully others in the space — also taking this issue seriously, but it only works if they are willing to communicate their findings, so we’re hopeful they’ll be more open to that in the future.”
Kalshi says that Kaptur traded around $4,000 on YouTube streaming markets in August and September 2025. He made a $5,397.58 profit, prompting Kalshi to fine him for that amount, plus a $15,000 penalty. Kalshi also banned Kaptur for two years. The company said in its blog post that it will donate the fine to a consumer education nonprofit.
Kalshi also fined Kyle Langford, a candidate for political office in California, who traded about $200 on his own candidacy, then posted about it on social media.
The markets on platforms like Kalshi and Polymarket are so vast that it’s challenging to ensure that the users trading on them are not using private knowledge to their advantage, which is against the rules. When it comes to securities like stocks, similar behavior is punishable by up to 20 years in federal prison.
The potential for these markets to be manipulated has drawn attention among U.S. lawmakers.
Last month, one Polymarket user suspiciously bet $32,000 that Venezuelan President Nicolás Maduro would be removed from power by the end of January — just hours later, the U.S. military captured Maduro, earning that user a $400,000 payout.
In response, Representative Ritchie Torres (D-NY) proposed legislation that would make it illegal for government employees to trade on prediction markets related to government policy, government actions, or political outcomes.
Kalshi CEO Tarek Mansour said in a LinkedIn post last month that he supports the bill, since Kalshi already adheres to the rules it would enforce. He claimed that alleged insider trading cases are not occurring on U.S.-based platforms (both Kalshi and Polymarket are based in the U.S.).
“This American bill only applies to regulated, American companies and not to unregulated, non-American companies, which is where the alleged issues are occurring,” Mansour wrote. “Prediction markets, like any industry, are not a monolith: there are important distinctions that matter.”
Updated, 2/25/25, 3:45 p.m. ET with comment from Beast Industries.
Revolut eyes valuation of up to $200B in eventual IPO
British neobank Revolut seems to be eyeing a major valuation bump when it eventually goes public. The company is targeting a market cap between $150 billion and $200 billion in an initial public offering, the Financial Times reported on Tuesday, citing anonymous investor sources.
The fintech giant, which secured a full banking license in the United Kingdom in March after years of waiting, was most recently valued at $75 billion, up from $45 billion in 2024, in a secondary share sale that made it one of Europe’s most valuable private tech companies.
Revolut’s co-founder and CEO, Nik Storonsky, last week said that the company’s IPO was at least “two years away,” according to Bloomberg.
According to PitchBook and the Financial Times, the company is working on another secondary share sale, scheduled for the second half of 2026, that would value it at more than $100 billion.
As of November 2025, the company had raised a total of $5.89 billion, according to PitchBook. Revolut reported revenue of $6 billion in the financial year ended December 31, 2025, up from $4 billion in 2024. Net profit grew to $1.7 billion, up from $1 billion in 2024, and the company counted 68.3 million retail customers at the end of 2025.
Revolut declined to comment.
Founded in 2015, Revolut offers a range of services spanning multi-currency accounts, payment and transfer services, crypto products, insurance, and more. The neobank has been pouring truckloads of cash into expanding its operations internationally, and recently applied for a banking license in the United States.
Besides the U.K., Revolut has a banking license in the European Union, and it operates in Australia, Japan, New Zealand, Singapore, Brazil, and the U.S. Revolut launched operations in India last October, is about to start operating in Colombia this year, and has received a banking license in Mexico.
When you purchase through links in our articles, we may earn a small commission. This doesn’t affect our editorial independence.
Amazon taps Sweden’s Einride for its electric big rigs
Einride is adding 75 of its electric heavy-duty trucks to Amazon’s Relay freight network as part of a deal that gives the Swedish startup a toehold in the e-commerce giant’s operations. Einride will also provide charging infrastructure across five locations in the United States, under the agreement announced Tuesday.
Amazon isn’t buying or operating the electric trucks. Instead, Einride will own and manage the trucks using its own Saga AI software, and drivers in Amazon’s Relay freight network will be able to use them. Relay, launched in 2017, is an app that truck drivers can use to book hauling gigs with Amazon.
Einride CEO Roozbeh Charli, who took over as chief nearly a year ago, said working with Amazon is a powerful validation of the startup’s technology and strategic vision.
“By deploying our intelligent platform within one of the world’s most sophisticated logistics networks, we are accelerating growth, while continuing to build industry-leading operational expertise,” he said in a statement.
Einride has gained attention and investment for its two-pronged approach to freight. The company has developed and now operates a fleet of about 200 heavy-duty electric trucks for companies like Heineken, PepsiCo, and Carlsberg Sweden in Europe, North America, and the UAE. It has also developed autonomous pod-like trucks, which stand out for their cab-less design.
The agreement with Amazon doesn’t include the autonomous pods.
Einride has landed this agreement at a critical time: The startup is finalizing a merger with blank-check company Legato Merger Corp. and is expected to go public soon.
While the agreement might not carry the same weight for Amazon, which has a market cap of $2.7 trillion, it does contribute to the company’s low-carbon goals. Amazon has said it wants to reach net-zero carbon emissions across its operations by 2040.
“This rollout is an important step forward in addressing one of the toughest challenges we face in decarbonizing our transportation network — electrifying heavy-duty trucking,” an Amazon spokesperson said in an emailed statement. “We’re excited to continue to collaborate with Einride and learn from these operations as the trucks hit the road.”
YouTube expands its AI likeness detection technology to celebrities
YouTube is expanding its new “likeness detection” technology, which identifies AI-generated content such as deepfakes, to people within the entertainment industry, the company announced on Tuesday.
The technology works similarly to YouTube’s existing Content ID system, which detects copyright-protected material in users’ uploaded videos, allowing rights owners to request removal or share in the video’s revenue.
Likeness detection does the same, but for simulated faces. The feature is meant to help protect creators and other public figures from having their identities used without their permission — a common problem for celebrities who find their likenesses have been used in scam advertisements.
The technology was first made available to a subset of YouTube creators in a pilot program last year before expanding more broadly to include politicians, government officials, and journalists this spring.

Now YouTube says the technology is being made available to those in the entertainment industry, including talent agencies, management companies, and the celebrities they represent. The company has support from major agencies like CAA, UTA, WME, and Untitled Management, which offered feedback on the new tool.
Use of the likeness detection tool does not require entertainers to have their own YouTube channels.
Instead, the feature scans AI-generated content for visual matches of an enrolled participant’s face. Participants can then choose to request removal of the video for privacy policy violations, submit a copyright removal request, or do nothing. YouTube notes that it won’t remove all content, as it permits parody and satire under its rules.
In the future, the technology will support audio as well, the company says.
YouTube has also been advocating for similar protections at the federal level through its support for the NO FAKES Act in Washington, D.C., which would regulate the use of AI to create unauthorized re-creations of an individual’s voice and visual likeness.
The company hasn’t yet said how many removals of AI deepfakes have been managed by the tool so far, but noted in March that the number of removals was still “very small.”
