Tech
Bluesky addresses trust and safety concerns around abuse, spam, and more
Social networking startup Bluesky, which is building a decentralized alternative to X (formerly Twitter), offered an update on Wednesday about how it’s approaching trust and safety concerns on its platform. The company is at different stages of developing and piloting a range of initiatives focused on dealing with bad actors, harassment, spam, fake accounts, video safety, and more.
To address malicious users or those who harass others, Bluesky says it’s developing new tooling that will be able to detect when multiple new accounts are spun up and managed by the same person. This could help to cut down on harassment, where a bad actor creates several different personas to target their victims.
Another new experiment will help detect “rude” replies and surface them to server moderators. Similar to Mastodon, Bluesky will support a network where self-hosters and other developers can run their own servers that connect with Bluesky’s server and others on the network; this federation capability is still in early access. Further down the road, server moderators will be able to decide how they want to take action against those who post rude replies, while Bluesky will eventually reduce these replies’ visibility in its own app. Repeated “rude” labels on content will also lead to account-level labels and suspensions, the company says.
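The escalation the company describes, where content-level labels accumulate into an account-level action, can be sketched roughly as follows. The threshold, account identifier, and counter here are hypothetical assumptions for illustration; Bluesky has not published how its actual policy works.

```python
# Hypothetical sketch of label escalation: repeated "rude" labels on a
# user's replies roll up into an account-level action. The threshold is
# an assumption, not Bluesky's published policy.
from collections import Counter

RUDE_THRESHOLD = 3  # assumed number of rude labels before account action

rude_counts: Counter = Counter()

def record_rude_label(account: str) -> str:
    """Label one reply as rude and report the resulting account state."""
    rude_counts[account] += 1
    if rude_counts[account] >= RUDE_THRESHOLD:
        return "account-labeled"  # candidate for suspension review
    return "reply-labeled"

for _ in range(3):
    state = record_rude_label("did:example:alice")
print(state)  # "account-labeled" after the third rude reply
```

The point of the counter is simply that individual content labels are cheap and reversible, while the account-level consequence only triggers on a repeated pattern.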
To cut down on the use of lists to harass others, Bluesky will remove individual users from a list if they block the list’s creator. Similar functionality was also recently rolled out to Starter Packs, which are a type of sharable list that can help new users find people to follow on the platform (check out the TechCrunch Starter Pack).
Bluesky will also scan for lists with abusive names or descriptions, closing off another avenue for harassment: adding someone to a public list with a toxic or abusive title. Lists that violate Bluesky’s Community Guidelines will be hidden in the app until the list owner makes changes to comply with Bluesky’s rules. Users who continue to create abusive lists will also face further action, though the company didn’t offer details, adding that lists are still an area of active discussion and development.
In the months ahead, Bluesky will also shift to handling moderation reports through its app using notifications, instead of relying on email reports.
To fight spam and other fake accounts, Bluesky is launching a pilot that will attempt to automatically detect when an account is fake, scamming, or spamming users. Paired with moderation, the goal is to be able to take action on accounts within “seconds of receiving a report,” the company said.
One of the more interesting developments involves how Bluesky will comply with local laws while still allowing for free speech. It will use geography-specific labels that let it hide a piece of content from users in a particular area in order to comply with local law.
“This allows Bluesky’s moderation service to maintain flexibility in creating a space for free expression, while also ensuring legal compliance so that Bluesky may continue to operate as a service in those geographies,” the company shared in a blog post. “This feature will be introduced on a country-by-country basis, and we will aim to inform users about the source of legal requests whenever legally possible.”
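In practice, a geography-scoped label could work something like the sketch below, where a post stays visible everywhere except in the countries the label names. The record structure, field names, and country codes are illustrative assumptions, not Bluesky’s actual schema.

```python
# Illustrative sketch of geography-scoped moderation labels: a label
# record names the countries where a post must be hidden, and visibility
# is decided per viewer. Names and fields are hypothetical.
from dataclasses import dataclass, field

@dataclass
class GeoLabel:
    post_id: str
    reason: str                                   # e.g. the legal basis
    hidden_in: set = field(default_factory=set)   # ISO country codes

def is_visible(label, viewer_country: str) -> bool:
    """A post stays visible everywhere except the named geographies."""
    if label is None:
        return True
    return viewer_country not in label.hidden_in

label = GeoLabel("at://example/post/1", "local court order", {"DE"})
print(is_visible(label, "DE"))  # False: hidden for viewers in Germany
print(is_visible(label, "US"))  # True: unaffected elsewhere
```

Scoping the restriction to a label, rather than deleting the post, is what preserves the content for viewers outside the affected jurisdiction.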
To address potential trust and safety issues with video, which was recently added, the team is adding features like being able to turn off autoplay for videos, making sure video is labeled, and ensuring that videos can be reported. It’s still evaluating what else may need to be added, something that will be prioritized based on user feedback.
When it comes to abuse, the company says that its overall framework is “asking how often something happens vs how harmful it is.” The company focuses on addressing high-harm and high-frequency issues while also “tracking edge cases that could result in serious harm to a few users.” The latter, though only affecting a small number of people, causes enough “continual harm” that Bluesky will take action to prevent the abuse, it claims.
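That frequency-versus-harm framing can be illustrated with a toy triage queue, in which issues are ranked by harm first so that rare but severe edge cases are not drowned out by common low-harm reports. All names and scores below are made up for illustration.

```python
# Toy triage sketch of the "how often vs how harmful" framing.
# Issues are ranked by harm first, with frequency as a tiebreaker,
# so rare-but-severe cases still surface at the top of the queue.
issues = [
    {"name": "spam reply",        "frequency": 9, "harm": 2},
    {"name": "mild rudeness",     "frequency": 7, "harm": 1},
    {"name": "targeted stalking", "frequency": 1, "harm": 10},
    {"name": "scam account",      "frequency": 5, "harm": 6},
]

# Sort by harm descending, breaking ties on frequency.
queue = sorted(issues, key=lambda i: (i["harm"], i["frequency"]), reverse=True)
for issue in queue:
    print(issue["name"])
```

Under this ordering, the low-frequency, high-harm “targeted stalking” case lands first in the queue, which matches the company’s stated intent to track edge cases that could seriously harm a few users.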
User concerns can be raised via reports, emails, and mentions to the @safety.bsky.app account.
Revolut eyes valuation of up to $200B in eventual IPO
British neobank Revolut seems to be eyeing a major valuation bump when it eventually goes public. The company is targeting a market cap between $150 billion and $200 billion in an initial public offering, the Financial Times reported on Tuesday, citing anonymous investor sources.
The fintech giant, which secured a full banking license in the United Kingdom in March after years of waiting, was most recently valued at $75 billion, up from $45 billion in 2024, in a secondary share sale that made it one of Europe’s most valuable private tech companies.
Revolut’s co-founder and CEO, Nik Storonsky, last week said that the company’s IPO was at least “two years away,” according to Bloomberg.
According to PitchBook and the Financial Times, the company is working on another secondary share sale, scheduled for the second half of 2026, that would value it at more than $100 billion.
As of November 2025, the company had raised a total of $5.89 billion, according to PitchBook. Revolut reported revenue of $6 billion in the financial year ended December 31, 2025, up from $4 billion in 2024. The company’s net profit grew to $1.7 billion, up from $1 billion in 2024, and counted 68.3 million retail customers at the end of 2025.
Revolut declined to comment.
Founded in 2015, Revolut offers a range of services spanning multi-currency accounts, payment and transfer services, crypto products, insurance, and more. The neobank has been pouring truckloads of cash into expanding its operations internationally, and recently applied for a banking license in the United States.
Besides the U.K., Revolut has a banking license in the European Union, and it operates in Australia, Japan, New Zealand, Singapore, Brazil, and the U.S. Revolut launched operations in India last October, plans to begin operating in Colombia this year, and has received a banking license in Mexico.
When you purchase through links in our articles, we may earn a small commission. This doesn’t affect our editorial independence.
Amazon taps Sweden’s Einride for its electric big rigs
Einride is adding 75 of its electric heavy-duty trucks to Amazon’s Relay freight network as part of a deal that gives the Swedish startup a toehold in the e-commerce giant’s operations. Einride will also provide charging infrastructure across five locations in the United States under the agreement announced Tuesday.
Amazon isn’t buying or operating the electric trucks. Instead, Einride will own and manage (using its own Saga AI software) the trucks, which can be used by drivers in Amazon’s Relay freight network. Relay, launched in 2017, is an app that truck drivers can use to book hauling gigs with Amazon.
Einride CEO Roozbeh Charli, who took over as chief nearly a year ago, said working with Amazon is a powerful validation of the startup’s technology and strategic vision.
“By deploying our intelligent platform within one of the world’s most sophisticated logistics networks, we are accelerating growth, while continuing to build industry-leading operational expertise,” he said in a statement.
Einride has gained attention and investment for its two-pronged approach to freight. The company has developed and now operates a fleet of about 200 heavy-duty electric trucks for companies like Heineken, PepsiCo, and Carlsberg Sweden in Europe, North America, and the UAE. It has also developed autonomous pod-like trucks, which stand out for their cab-less design.
The agreement with Amazon doesn’t include the autonomous pods.
Einride has landed this agreement at a critical time: The startup is finalizing a merger with blank-check company Legato Merger Corp. and is expected to go public soon.
While the agreement might not carry the same weight for Amazon, which has a market cap of $2.7 trillion, it does contribute to its low-carbon goals. Amazon has said it wants to reach net-zero carbon emissions across its operations by 2040.
“This rollout is an important step forward in addressing one of the toughest challenges we face in decarbonizing our transportation network — electrifying heavy-duty trucking,” an Amazon spokesperson said in an emailed statement. “We’re excited to continue to collaborate with Einride and learn from these operations as the trucks hit the road.”
YouTube expands its AI likeness detection technology to celebrities
YouTube announced on Tuesday that it is expanding its new “likeness detection” technology, which identifies AI-generated content such as deepfakes, to people within the entertainment industry.
The technology works similarly to YouTube’s existing Content ID system, which detects copyright-protected material in users’ uploaded videos, allowing rights owners to request removal or share in the video’s revenue.
Likeness detection does the same, but for simulated faces. The feature is meant to help protect creators and other public figures from having their identities used without their permission — a common problem for celebrities who find their likenesses have been used in scam advertisements.
The technology was first made available to a subset of YouTube creators in a pilot program last year before expanding more broadly to include politicians, government officials, and journalists this spring.

Now YouTube says the technology is being made available to those in the entertainment industry, including talent agencies, management companies, and the celebrities they represent. The company has support from major agencies like CAA, UTA, WME, and Untitled Management, which offered feedback on the new tool.
Use of the likeness detection tool does not require entertainers to have their own YouTube channels.
Instead, the feature scans for AI-generated content to detect visual matches of an enrolled participant’s face. Users can then choose to request removal of the video for privacy policy violations, submit a copyright removal request, or do nothing. YouTube notes that it won’t remove all content, as it permits parody and satire content under its rules.
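The decision flow described above, where a detected face match is surfaced to the enrolled person who then picks one of the three responses, can be sketched as follows. The similarity score and cutoff are stand-ins; YouTube has not disclosed how its matching actually works.

```python
# Sketch of the likeness-review flow: a flagged video is either below
# the match threshold, or routed by the enrolled participant's choice.
# The scoring and threshold are assumptions, not YouTube's system.
def review_match(similarity: float, choice: str) -> str:
    """Route an enrolled participant's decision on a flagged video."""
    if similarity < 0.8:  # assumed confidence cutoff, not YouTube's
        return "no-match"
    actions = {
        "privacy": "privacy removal request filed",
        "copyright": "copyright removal request filed",
        "ignore": "no action taken",
    }
    return actions[choice]

print(review_match(0.95, "privacy"))  # privacy removal request filed
print(review_match(0.5, "ignore"))    # no-match
```

The three branches mirror the options YouTube describes: a privacy complaint, a copyright takedown, or doing nothing, with parody and satire remaining permitted regardless.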
In the future, the technology will support audio as well, the company says.
Relatedly, YouTube has also been advocating for similar protections at the federal level through its support for the NO FAKES Act in Washington, D.C., which would regulate the use of AI to create unauthorized re-creations of an individual’s voice and visual likeness.
The company hasn’t yet said how many removals of AI deepfakes the tool has handled, but it noted in March that the number of removals was still “very small.”
