After dissing Anthropic for limiting Mythos, OpenAI restricts access to Cyber, too
After Sam Altman trash-talked Anthropic for gatekeeping its cybersecurity tool Mythos by only releasing it to select users, he confirmed that OpenAI would be doing the same with its competing tool, Cyber.
Altman said in a post on X on Thursday that OpenAI will begin rolling out GPT-5.5 Cyber “to critical cyber defenders” in the next few days. OpenAI has an application on its website where people submit information about their credentials and planned use in order to gain access.
This version of Cyber can perform tasks such as penetration testing, vulnerability identification (and exploitation), and malware reverse engineering, the application implies. It’s intended as a toolkit to help a company find security holes and test its defenses. The fear is that the kit could be misused by the bad guys.
When Anthropic similarly restricted access to Mythos, Altman called the tactic fear-based marketing. Some critics also thought so, saying Anthropic’s rhetoric was overblown. Ironically, an unauthorized group reportedly managed to gain access to Mythos anyway.
OpenAI says it’s working to make Cyber more widely available by consulting with the U.S. government and identifying more users with legit cybersecurity credentials.
A spokesperson tells TechCrunch that the company’s system for verifying those with legit cybersecurity credentials, which it calls Trusted Access for Cyber (TAC), has scaled “to thousands of verified defenders and hundreds of teams responsible for protecting critical software.” Those folks can use the latest model, GPT-5.5, for “cybersecurity tasks” with less “friction” from safeguards.
The TAC permissions program is tiered, the spokesperson said: “Critical defenders with legitimate defensive use cases can apply to access dedicated more cyber-permissive models like GPT 5.4-Cyber, and the forthcoming GPT 5.5-Cyber, through the program.”
Note: This story was updated to include a statement from OpenAI.
FDA approval, fundraising, and the reality of building in healthcare according to BioticsAI founder
Founders building in the healthcare space can’t just build fast and break things. Timelines stretch longer, stakes are higher, and success depends on navigating systems that reward rigor over speed.
That’s exactly the reality Robhy Bustami, co-founder and CEO of BioticsAI, has been building in. His company is developing an AI copilot for ultrasound that helps detect fetal abnormalities, an area where misdiagnosis rates remain surprisingly high. Bustami joined Isabelle Johannessen on Build Mode to discuss how the company has navigated a highly regulated space and kept the team motivated while cutting through all the red tape.
BioticsAI started scrappy. The team built an early, functioning version of the product for under $100,000, an almost unheard-of milestone in the medical device world. That prototype helped them win TechCrunch Startup Battlefield in 2023, bringing early visibility and credibility. In January, they gained FDA approval, which means they can begin launching in hospitals and scaling the business.
From day one, the team approached product development with FDA approval in mind. Instead of building first and figuring out regulation later, they integrated clinical validation, regulatory strategy, and product development into a single process. That meant working closely with clinicians, collecting large-scale datasets, and running structured clinical studies before ever reaching the submission stage.
The FDA process itself is often viewed as a black box, but Bustami emphasizes that founders don’t have to navigate it blindly. Early engagement with regulators, through pre-submission meetings, helped the team align on study design and expectations. Still, risk never fully disappears. For many investors, the biggest question is simple: What if the FDA says no?
Internally, those long timelines create a different kind of challenge: keeping a team motivated when the biggest milestone is years away. At BioticsAI, that meant building a culture of alignment across engineers, clinicians, and researchers, ensuring everyone got to see the wins that were happening.
The keys, he said, are “making sure everyone is completely aligned, even if it’s outside of their technical scope,” and “constantly seeing wins on the R&D side,” from clinical studies to new healthcare partnerships.
Now, with FDA clearance secured, BioticsAI is entering a new phase: deployment. The company is beginning to roll out its technology in hospitals, with plans to expand beyond obstetrics into broader areas of reproductive health.
Building in healthcare is a long game. It requires patience, discipline, and a willingness to operate in uncertainty. For founders willing to take that path, the reward isn’t just a successful company — it’s the chance to build something that genuinely changes how care is delivered.
Subscribe to Build Mode on Apple Podcasts, Spotify, or wherever you like to listen. Watch the full videos on YouTube. Isabelle Johannessen is our host. Build Mode is produced and edited by Maggie Nye. Audience Development is led by Morgan Little. And a special thanks to the Foundry and Cheddar video teams.
Apply to Startup Battlefield: We are looking for early-stage companies that have an MVP. So nominate a founder (or yourself). Be sure to say you heard about Startup Battlefield from the Build Mode podcast. Apply here.
TechCrunch Disrupt 2026: We’re back for TechCrunch Disrupt on October 13 to 15 in San Francisco, where the Startup Battlefield 200 takes the stage. So if you want to cheer them on, or just network with thousands of founders, VCs, and tech enthusiasts, then grab your tickets.
Use code buildmode15 for 15% off any ticket type.
Elon Musk testifies that xAI trained Grok on OpenAI models
OpenAI and Anthropic have been on the warpath lately against third-party efforts to train new AI models by prompting their publicly accessible chatbots and APIs, a process known as “distillation.”
That conversation has focused on Chinese firms using distillation to create open-weight models that are nearly as capable as U.S. offerings, but available at a much lower cost. However, tech workers have widely assumed that American labs use these techniques on each other to avoid falling behind competitors.
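In its simplest form, the technique works like this: systematically query a stronger “teacher” model through its public API, collect the prompt-and-response pairs, and use them as training data to fine-tune a smaller “student” model. Here’s a minimal sketch of that harvesting loop, assuming the official OpenAI Python client; the prompts, teacher model name, and output file are hypothetical stand-ins:

```python
# A toy sketch of API-based distillation: harvest a teacher model's
# answers and save them in the chat-format JSONL commonly used for
# supervised fine-tuning of a student model.
import json

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# In a real extraction run this would be millions of prompts
# spanning the capabilities the student should absorb.
prompts = [
    "Explain TCP slow start in two sentences.",
    "Write a Python function that reverses a linked list.",
]

with open("distill_train.jsonl", "w") as f:
    for prompt in prompts:
        resp = client.chat.completions.create(
            model="gpt-4o",  # the "teacher" (hypothetical choice)
            messages=[{"role": "user", "content": prompt}],
        )
        answer = resp.choices[0].message.content
        # One chat-format training record per teacher answer.
        record = {"messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": answer},
        ]}
        f.write(json.dumps(record) + "\n")
```

The resulting file is then fed into whatever fine-tuning pipeline the student uses, which is how a lab can transfer capability without ever seeing the teacher’s weights.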
Now we know it’s true in at least one case: On the stand in a California federal court on Thursday, Elon Musk was asked if xAI has used distillation techniques on OpenAI models to train Grok, and he asserted it was a general practice among AI companies. Asked if that meant “yes,” he said, “Partly.”
Musk is suing OpenAI, CEO Sam Altman, and Greg Brockman, alleging they betrayed OpenAI’s original nonprofit mission by shifting the entity to a for-profit structure. The trial began this week, featuring testimony from the tech leader.
Musk’s admission is notable because distillation threatens AI giants by undermining the advantage they’ve built through heavy investment in compute infrastructure, allowing other software makers to create nearly-as-capable models on the cheap. There’s no small amount of irony here, given the bending and alleged breaking of copyright rules by frontier labs in their search for sufficient data to train their own models.
It’s no surprise that Musk’s xAI, founded in 2023, years after OpenAI, would try to learn from the then-leader in the field. Distillation isn’t clearly illegal; rather, it may violate the terms of service companies set for the use of their products.
OpenAI, Anthropic, and Google have reportedly launched an initiative through the Frontier Model Forum to share information about how to combat distillation attempts from China. These typically involve systematic querying of models to understand their inner workings. To stop the efforts, frontier labs are working to prevent users from making suspicious mass queries.
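What does “suspicious” look like? A toy sketch of the kind of heuristic a lab might apply, with made-up thresholds and a hypothetical QueryLog record: extraction runs tend to be high-volume and almost never repeat a prompt, unlike organic chat traffic.

```python
# A toy heuristic for flagging accounts whose traffic looks like
# systematic model extraction. The thresholds and QueryLog shape
# are illustrative assumptions, not anyone's actual system.
from dataclasses import dataclass


@dataclass
class QueryLog:
    account_id: str
    prompts: list[str]  # prompts seen in one observation window


def looks_like_extraction(log: QueryLog,
                          volume_threshold: int = 10_000,
                          uniqueness_threshold: float = 0.95) -> bool:
    """Flag high-volume accounts whose prompts almost never repeat."""
    n = len(log.prompts)
    if n < volume_threshold:
        return False  # ordinary usage volume
    unique_ratio = len(set(log.prompts)) / n
    return unique_ratio >= uniqueness_threshold
```

Production systems would presumably combine many such signals, but the basic intuition holds: normal users repeat themselves; scrapers don’t.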
OpenAI did not respond to a request for comment on Musk’s admission at press time.
Later in his testimony, Musk was asked about a claim he made last summer that xAI would soon be far beyond any company besides Google. In response, he ranked the world’s leading AI providers, saying Anthropic held the top spot, followed by OpenAI, Google, and Chinese open source models. He characterized xAI as a much smaller company with just a few hundred employees.
OpenAI announces new advanced security for ChatGPT accounts, including a partnership with Yubico
OpenAI is getting serious about account security.
The company on Thursday launched Advanced Account Security (AAS), a set of opt-in protections for ChatGPT users designed for high-value individuals — but available to anyone who wants them.
As part of that new program, digital security provider Yubico announced it has partnered with OpenAI to link two new security key products to ChatGPT accounts. The company said the partnership is designed to protect users from phishing, considered a growing threat for chatbot users.
The two companies are releasing a pair of “co-branded” YubiKeys — dubbed the YubiKey C NFC and the YubiKey C Nano.
OpenAI has suggested that AAS is a good fit for political dissidents, journalists, researchers, and elected officials — people who engage in politically charged and risky work. One would assume that it might make sense for enterprise users, whose corporate secrets are squirreled away in ChatGPT sessions.
“Ultimately, our intent is to drastically reduce the threat of unauthorized access to sensitive data in OpenAI accounts worldwide,” Yubico CEO Jerrod Chong said in a press release announcing the deal.
Security keys are small pieces of hardware that can be tied to digital accounts and used through a computer’s USB port (or, on some models, NFC). A unique cryptographic identifier lives on the key, which means only the person in possession of it can log into a connected account.
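The underlying idea is a challenge-response protocol. YubiKeys actually implement FIDO2/WebAuthn, but the principle can be sketched with generic elliptic-curve signatures; everything below is illustrative, not Yubico’s implementation:

```python
# Illustrative challenge-response flow behind hardware security
# keys. The private key never leaves the device; the server stores
# only the public key and verifies signed challenges.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the key generates a keypair on-device and shares
# only the public half with the service.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_public_key = device_private_key.public_key()

# Login: the server sends a fresh random challenge...
challenge = os.urandom(32)

# ...the key signs it with the private key it never reveals...
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server verifies with the stored public key. Raises
# cryptography.exceptions.InvalidSignature on a mismatch.
server_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("Challenge verified; login allowed.")
```

Because the signature proves possession of the physical key, a phished password alone gets an attacker nowhere.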
If the threat of phished ChatGPT accounts seems somewhat abstract, there is a growing body of evidence showing that bad actors are increasingly targeting chatbot users. Cybercriminals are always on the lookout for extortion-worthy information, and given the intimate nature of most chatbot conversations, there is plenty of fodder among both enterprise and personal users.
Digital security is also becoming a bigger focus of the AI industry. Several weeks ago, Anthropic announced a new cybersecurity model called Mythos. Perhaps seeking to steal some of its competitor’s thunder, OpenAI has also made a number of announcements related to digital security. Thursday’s news of the Yubico partnership followed OpenAI’s announcement that it’s launching a new framework for digital defense.
Of course, a security-key-enabled account does offer stronger protection, but it comes with a tradeoff: If the key is lost, OpenAI won’t be able to help recover access. In practice, that means conversations could be lost for good.
