Hackers are actively exploiting a bug in cPanel, used by millions of websites

Security researchers are sounding the alarm on a newly discovered vulnerability in the widely used web server management software cPanel and WebHost Manager (WHM). 

The bug allows hackers to hijack and take full control of the servers running the affected software, which is thought to be used by tens of millions of website owners around the world.

Many commercial web hosting companies have already patched their customers’ systems, but cPanel’s maker urged customers to ensure their systems are patched, as the bug affects all supported versions of the software.
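For customers who manage their own servers, the first step is confirming which cPanel build is installed so it can be checked against the fixed versions in cPanel’s advisory. Below is a minimal sketch of that check, assuming shell access to the server; the version file path is the standard location on cPanel installs, and the comparison against patched build numbers is left out because those numbers aren’t listed here.

```python
# Minimal sketch: read the installed cPanel/WHM version so it can be
# compared against the fixed builds listed in cPanel's advisory.
# The version file path is the standard location on cPanel servers.
from pathlib import Path

VERSION_FILE = Path("/usr/local/cpanel/version")

def installed_version() -> str:
    """Return the installed cPanel version string, e.g. '11.110.0.15'."""
    return VERSION_FILE.read_text().strip()

if __name__ == "__main__":
    try:
        print(f"Installed cPanel build: {installed_version()}")
        print("Compare this against the fixed builds in cPanel's advisory.")
    except FileNotFoundError:
        print("cPanel version file not found; is this a cPanel server?")
```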

cPanel and WHM are two software suites used for managing web servers that host websites, manage email, and handle the configurations and databases needed to maintain an internet domain. The two suites have deep access to the servers they manage, potentially giving a malicious hacker unrestricted access to data managed by the affected software.

The bug, officially tracked as CVE-2026-41940, allows malicious hackers to remotely bypass its login screen to gain full access to the software’s administration panel. 

Given the ubiquity of the cPanel and WHM software across the web hosting industry, hackers could compromise potentially large numbers of websites that haven’t patched the bug.

Canada’s national cybersecurity agency said in an advisory that the bug could be exploited to compromise websites on shared hosting servers, such as those run by large web hosting companies.

The agency said that “exploitation is highly probable” and that immediate action from cPanel customers, or their web hosts, is necessary to prevent malicious access.

Web hosting giant Namecheap, which uses cPanel to allow its customers to manage their web servers, said it blocked access to customers’ cPanel panels after learning of the flaw to prevent exploitation, and to give it time to patch its customers’ systems.

HostGator also said it patched its systems and is considering the bug a “critical authentication-bypass exploit.”

One web hosting company says it found evidence that hackers had been trying to exploit the vulnerability for months before the attempts were discovered.

KnownHost CEO Daniel Pearson said in a post on Reddit that his company has seen attempts to exploit the vulnerability as far back as February 23. The company said it also briefly began blocking access to customer systems before applying patches.

According to Pearson, around 30 servers at KnownHost showed signs of attempted unauthorized access out of thousands of machines on its network. Pearson characterized the activity as attempts only, saying the company has not seen signs of active compromise. cPanel also said it rolled out a security fix for WP Squared, a similar tool for managing WordPress websites.


FDA approval, fundraising, and the reality of building in healthcare according to BioticsAI founder

Founders building in the healthcare space can’t just build fast and break things. Timelines stretch longer, stakes are higher, and success depends on navigating systems that reward rigor over speed. 

That’s exactly the reality Robhy Bustami, co-founder and CEO of BioticsAI, has been building in. His company is developing an AI copilot for ultrasound that helps detect fetal abnormalities, an area where misdiagnosis rates remain surprisingly high. Bustami joined Isabelle Johannessen on Build Mode to discuss how the company has navigated a highly regulated space and kept the team motivated while cutting through all the red tape.

BioticsAI started scrappy. The team built an early, functioning version of the product for under $100,000, an almost unheard-of milestone in the medical device world. That prototype helped them win TechCrunch Startup Battlefield in 2023, bringing early visibility and credibility. In January, they gained FDA approval, which means they can begin launching in hospitals and growing the business at a new rate. 

From day one, the team approached product development with FDA approval in mind. Instead of building first and figuring out regulation later, they integrated clinical validation, regulatory strategy, and product development into a single process. That meant working closely with clinicians, collecting large-scale datasets, and running structured clinical studies before ever reaching the submission stage.

The FDA process itself is often viewed as a black box, but Bustami emphasizes that founders don’t have to navigate it blindly. Early engagement with regulators, through pre-submission meetings, helped the team align on study design and expectations. Still, risk never fully disappears. For many investors, the biggest question is simple: What if the FDA says no?

Internally, those long timelines create a different kind of challenge: keeping a team motivated when the biggest milestone is years away. At BioticsAI, that meant building a culture of alignment across engineers, clinicians, and researchers, ensuring everyone got to see the wins that were happening.

That came down to “making sure everyone is completely aligned, even if it’s outside of their technical scope,” Bustami said, and “constantly seeing wins on the R&D side,” from clinical studies to new healthcare partnerships.


Now, with FDA clearance secured, BioticsAI is entering a new phase: deployment. The company is beginning to roll out its technology in hospitals, with plans to expand beyond obstetrics into broader areas of reproductive health.

Building in healthcare is a long game. It requires patience, discipline, and a willingness to operate in uncertainty. For founders willing to take that path, the reward isn’t just a successful company — it’s the chance to build something that genuinely changes how care is delivered.


Subscribe to Build Mode on Apple Podcasts, Spotify, or wherever you like to listen. Watch the full videos on YouTube. Isabelle Johannessen is our host. Build Mode is produced and edited by Maggie Nye. Audience Development is led by Morgan Little. And a special thanks to the Foundry and Cheddar video teams. 


Apply to Startup Battlefield: We are looking for early-stage companies that have an MVP. So nominate a founder (or yourself). Be sure to say you heard about Startup Battlefield from the Build Mode podcast. Apply here.

TechCrunch Disrupt 2026: We’re back for TechCrunch Disrupt on October 13 to 15 in San Francisco, where the Startup Battlefield 200 takes the stage. So if you want to cheer them on, or just network with thousands of founders, VCs, and tech enthusiasts, then grab your tickets.

Use code buildmode15 for 15% off any ticket type. 


Elon Musk testifies that xAI trained Grok on OpenAI models

OpenAI and Anthropic have been on the warpath lately against third-party efforts to train new AI models by prompting their publicly accessible chatbots and APIs, a process known as “distillation.”
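In concrete terms, distillation usually starts with collecting a large set of prompt-and-response pairs from a stronger model’s public API, which a smaller “student” model is then fine-tuned on. The sketch below illustrates only that collection step using the OpenAI Python client; the prompts, teacher model name, and output file are illustrative placeholders, and doing this at scale is exactly the kind of usage that providers’ terms of service may prohibit.

```python
# Conceptual sketch of the data-collection step in distillation:
# query a "teacher" chatbot API and save prompt/response pairs that a
# smaller "student" model could later be fine-tuned on.
# The prompts, model name, and output path are illustrative placeholders.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "Explain how TLS certificate pinning works.",
    "Summarize the CAP theorem in two sentences.",
]

with open("distillation_pairs.jsonl", "w") as out:
    for prompt in prompts:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder teacher model
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content
        # Each line becomes one training example for the student model.
        out.write(json.dumps({"prompt": prompt, "completion": answer}) + "\n")
```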

That conversation has focused on Chinese firms using distillation to create open-weight models that are nearly as capable as U.S. offerings, but available at a much lower cost. However, tech workers have widely assumed that American labs use these techniques on each other to avoid falling behind competitors.

Now we know it’s true in at least one case: On the stand in a California federal court on Thursday, Elon Musk was asked if xAI has used distillation techniques on OpenAI models to train Grok, and he asserted it was a general practice among AI companies. Asked if that meant “yes,” he said, “Partly.”

Musk is in the process of suing OpenAI, CEO Sam Altman, and Greg Brockman, alleging they breached the original nonprofit mission for OpenAI by shifting the entity to a for-profit structure. That trial began this week, featuring testimony from the tech leader.

Musk’s admission is notable because distillation threatens AI giants by undermining the advantage they’ve built by investing in compute infrastructure. This allows other software makers to create models that are nearly as capable on the cheap. There’s no small amount of irony here, given the bending and alleged breaking of copyright rules by frontier labs in their search for sufficient data to train their models.

It’s no surprise that Musk’s xAI, which started in 2023, years after OpenAI, would try to learn from the then-leader in the field. It’s not clear that distillation is explicitly illegal; rather, it may violate the terms of service companies set for the use of their products.

OpenAI, Anthropic, and Google have reportedly launched an initiative through the Frontier Model Forum to share information about how to combat distillation attempts from China. These typically involve systematic querying of models to understand their inner workings. To stop the efforts, frontier labs are working to prevent users from making suspicious mass queries.
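The article doesn’t describe how those defenses work, but one simple building block is per-key rate anomaly detection: flag accounts whose query volume in a sliding window far exceeds a normal baseline. The sketch below is a generic illustration of that idea, with placeholder thresholds, and not any lab’s actual system.

```python
# Generic sketch of flagging unusually high query volume per API key,
# one simple building block for spotting systematic mass querying.
# The window size and threshold are illustrative placeholders.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600            # look at the last hour of traffic
MAX_REQUESTS_PER_WINDOW = 5000   # placeholder threshold

request_log: dict[str, deque] = defaultdict(deque)

def record_request(api_key: str, now: float | None = None) -> bool:
    """Record one request; return True if the key's volume looks suspicious."""
    now = time.time() if now is None else now
    log = request_log[api_key]
    log.append(now)
    # Drop timestamps that have fallen outside the sliding window.
    while log and log[0] < now - WINDOW_SECONDS:
        log.popleft()
    return len(log) > MAX_REQUESTS_PER_WINDOW
```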

OpenAI did not respond to a request for comment on Musk’s admission at press time.

Later in his testimony, Musk was asked about a claim he made last summer that xAI would soon be far beyond any company besides Google. In response, he ranked the world’s leading AI providers, saying Anthropic held the top spot, followed by OpenAI, Google, and Chinese open source models. He characterized xAI as a much smaller company with just a few hundred employees.


OpenAI announces new advanced security for ChatGPT accounts, including a partnership with Yubico

OpenAI is getting serious about account security.

The company on Thursday launched Advanced Account Security (AAS), a set of opt-in protections for ChatGPT users designed for high-value individuals — but available to anyone who wants them.

As part of that new program, digital security provider Yubico announced it has partnered with OpenAI to link two new security key products to ChatGPT accounts. The company said the partnership is designed to protect users from phishing, which it considers a growing threat to chatbot users.

The two companies are releasing a pair of “co-branded” YubiKeys — dubbed the YubiKey C NFC and the YubiKey C Nano.

OpenAI has suggested that AAS is a good fit for political dissidents, journalists, researchers, and elected officials — people who engage in politically charged and risky work. One would assume that it might make sense for enterprise users, whose corporate secrets are squirreled away in ChatGPT sessions.

“Ultimately, our intent is to drastically reduce the threat of unauthorized access to sensitive data in OpenAI accounts worldwide,” Yubico CEO Jerrod Chong said in a press release announcing the deal.

Security keys are small pieces of hardware that can be tied to digital accounts and used through a computer’s USB port. A unique cryptographic identifier lives on the key, which allows only the person in possession of it to log into a connected account.
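Under the hood this is public-key cryptography: the service stores the key’s public half, and at login it issues a random challenge that only the hardware key’s private half can sign. The sketch below illustrates that challenge/response idea with a generic ECDSA key pair via Python’s cryptography library; actual YubiKeys implement the FIDO2/WebAuthn protocol, which adds origin binding and other anti-phishing protections on top of this basic scheme.

```python
# Conceptual illustration of the challenge/response behind security keys:
# the service keeps a registered public key, and only the holder of the
# hardware key's private half can sign the login challenge.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# At enrollment: the key pair is generated on the hardware key and the
# public half is registered with the service.
device_private_key = ec.generate_private_key(ec.SECP256R1())
registered_public_key = device_private_key.public_key()

# At login: the service issues a random challenge...
challenge = os.urandom(32)

# ...the hardware key signs it with its private half...
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the service verifies the signature against the registered public key.
try:
    registered_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("Login allowed: challenge was signed by the enrolled key.")
except InvalidSignature:
    print("Login denied: signature does not match the registered key.")
```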


The threat of phished ChatGPT accounts may seem somewhat abstract, but there is a growing body of literature showing that bad actors are increasingly targeting chatbot users. Cybercriminals are always on the lookout for extortion-worthy information and, given the intimate nature of most chatbot conversations, there is plenty of fodder among both enterprise and personal users.

Digital security is also becoming a bigger focus of the AI industry. Several weeks ago, Anthropic announced a new cybersecurity model called Mythos. Perhaps seeking to steal some of its competitor’s thunder, OpenAI has also made a number of announcements related to digital security. Thursday’s news of the Yubico partnership followed OpenAI’s announcement that it’s launching a new framework for digital defense.

Of course, a security-key-enabled account does offer stronger protection, but it comes with a tradeoff: If the key is lost, OpenAI won’t be able to help recover access. In practice, that means conversations could be lost for good.
