OpenAI announces new advanced security for ChatGPT accounts, including a partnership with Yubico

OpenAI is getting serious about account security.

The company on Thursday launched Advanced Account Security (AAS), a set of opt-in protections for ChatGPT users designed for high-value individuals — but available to anyone who wants them.

As part of the new program, digital security provider Yubico announced it has partnered with OpenAI to link two new security key products to ChatGPT accounts. The company said the partnership is designed to protect users from phishing, widely considered a growing threat to chatbot users.

The two companies are releasing a pair of “co-branded” YubiKeys — dubbed the YubiKey C NFC and the YubiKey C Nano.

OpenAI has suggested that AAS is a good fit for political dissidents, journalists, researchers, and elected officials: people who engage in politically charged and risky work. It would presumably also make sense for enterprise users, whose corporate secrets are squirreled away in ChatGPT sessions.

“Ultimately, our intent is to drastically reduce the threat of unauthorized access to sensitive data in OpenAI accounts worldwide,” Yubico CEO Jerrod Chong said in a press release announcing the deal.

Security keys are small pieces of hardware that can be tied to digital accounts and used through a computer’s USB port. A unique cryptographic identifier lives on the key, so only the person in physical possession of it can log into a connected account.
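The property that makes hardware keys phishing-resistant is a challenge-response exchange: the server issues a fresh random challenge, and only the device holding the secret can produce a valid response. The sketch below is a deliberately simplified toy, not OpenAI's or Yubico's implementation. Real YubiKeys implement FIDO2/WebAuthn with asymmetric key pairs, where the private key never leaves the device; this sketch substitutes an HMAC over a shared secret to show the shape of the exchange, and the `ToySecurityKey` and `Server` names are purely illustrative.

```python
import hmac
import hashlib
import secrets

class ToySecurityKey:
    """Toy stand-in for a hardware key. The secret never needs to be
    typed or displayed, so there is nothing for a phisher to capture."""
    def __init__(self):
        self._secret = secrets.token_bytes(32)  # stays on the "device"

    def register(self):
        # In real FIDO2/WebAuthn this step would export a *public* key;
        # sharing the secret itself is a toy simplification.
        return self._secret

    def sign(self, challenge: bytes) -> bytes:
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

class Server:
    def __init__(self):
        self._enrolled = {}

    def enroll(self, user: str, key_material: bytes):
        self._enrolled[user] = key_material

    def login(self, user: str, key: ToySecurityKey) -> bool:
        challenge = secrets.token_bytes(16)  # fresh per attempt, so
        # a captured response cannot be replayed later
        response = key.sign(challenge)
        expected = hmac.new(self._enrolled[user], challenge,
                            hashlib.sha256).digest()
        return hmac.compare_digest(response, expected)
```

Because the response depends on a challenge the server just generated, a phishing site that records one login attempt learns nothing it can reuse, which is the core advantage over passwords and one-time codes.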

TechCrunch event: San Francisco, CA | October 13-15, 2026
While the threat of phished ChatGPT accounts may seem abstract, a growing body of evidence shows that bad actors are increasingly targeting chatbot users. Cybercriminals are always on the lookout for extortion-worthy information and, given the intimate nature of most chatbot conversations, there is plenty of fodder among both enterprise and personal users.

Digital security is also becoming a bigger focus of the AI industry. Several weeks ago, Anthropic announced a new cybersecurity model called Mythos. Perhaps seeking to steal some of its competitor’s thunder, OpenAI has also made a number of announcements related to digital security. Thursday’s news of the Yubico partnership followed OpenAI’s announcement that it’s launching a new framework for digital defense.

Of course, a security-key-enabled account does offer stronger protection, but it comes with a tradeoff: If the key is lost, OpenAI won’t be able to help recover access. In practice, that means conversations could be lost for good.

When you purchase through links in our articles, we may earn a small commission. This doesn’t affect our editorial independence.


FDA approval, fundraising, and the reality of building in healthcare according to BioticsAI founder

Founders building in the healthcare space can’t just build fast and break things. Timelines stretch longer, stakes are higher, and success depends on navigating systems that reward rigor over speed. 

That’s exactly the reality Robhy Bustami, co-founder and CEO of BioticsAI, has been building in. His company is developing an AI copilot for ultrasound that helps detect fetal abnormalities, an area where misdiagnosis rates remain surprisingly high. Bustami joined Isabelle Johannessen on Build Mode to discuss how the company has navigated a highly regulated space and kept the team motivated while cutting through all the red tape.

BioticsAI started scrappy. The team built an early, functioning version of the product for under $100,000, an almost unheard-of milestone in the medical device world. That prototype helped them win TechCrunch Startup Battlefield in 2023, bringing early visibility and credibility. In January, they gained FDA approval, which means they can begin launching in hospitals and growing the business at a new rate. 

From day one, the team approached product development with FDA approval in mind. Instead of building first and figuring out regulation later, they integrated clinical validation, regulatory strategy, and product development into a single process. That meant working closely with clinicians, collecting large-scale datasets, and running structured clinical studies before ever reaching the submission stage.

The FDA process itself is often viewed as a black box, but Bustami emphasizes that founders don’t have to navigate it blindly. Early engagement with regulators, through pre-submission meetings, helped the team align on study design and expectations. Still, risk never fully disappears. For many investors, the biggest question is simple: What if the FDA says no?

Internally, those long timelines create a different kind of challenge: keeping a team motivated when the biggest milestone is years away. At BioticsAI, that meant building a culture of alignment across engineers, clinicians, and researchers, ensuring everyone got to see the wins that were happening.

That meant “making sure everyone is completely aligned, even if it’s outside of their technical scope,” Bustami said, and “constantly seeing wins on the R&D side,” from clinical studies to new healthcare partnerships.


Now, with FDA clearance secured, BioticsAI is entering a new phase: deployment. The company is beginning to roll out its technology in hospitals, with plans to expand beyond obstetrics into broader areas of reproductive health.

Building in healthcare is a long game. It requires patience, discipline, and a willingness to operate in uncertainty. For founders willing to take that path, the reward isn’t just a successful company — it’s the chance to build something that genuinely changes how care is delivered.


Subscribe to Build Mode on Apple Podcasts, Spotify, or wherever you like to listen. Watch the full videos on YouTube. Isabelle Johannessen is our host. Build Mode is produced and edited by Maggie Nye. Audience Development is led by Morgan Little. And a special thanks to the Foundry and Cheddar video teams. 


Apply to Startup Battlefield: We are looking for early-stage companies that have an MVP. So nominate a founder (or yourself). Be sure to say you heard about Startup Battlefield from the Build Mode podcast. Apply here.

TechCrunch Disrupt 2026: We’re back for TechCrunch Disrupt on October 13 to 15 in San Francisco, where the Startup Battlefield 200 takes the stage. So if you want to cheer them on, or just network with thousands of founders, VCs, and tech enthusiasts, then grab your tickets.

Use code buildmode15 for 15% off any ticket type. 


Elon Musk testifies that xAI trained Grok on OpenAI models

OpenAI and Anthropic have been on the warpath lately against third-party efforts to train new AI models by prompting their publicly accessible chatbots and APIs, a process known as “distillation.”
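The distillation described above, training a student model on a teacher's outputs, can be sketched in a few lines. This is a generic textbook illustration of the technique, not any lab's actual pipeline: the temperature parameter and the soft-target cross-entropy below are standard choices from the distillation literature, not details reported in this article.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature softens them."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy of the student's soft predictions against the
    teacher's soft targets. Minimizing this pulls the student's output
    distribution toward the teacher's, even without ground-truth labels."""
    p_teacher = softmax(teacher_logits, temperature)
    log_p_student = np.log(softmax(student_logits, temperature))
    return -(p_teacher * log_p_student).sum(axis=-1).mean()
```

In API-based distillation, the "teacher logits" are approximated from the sampled or scored outputs of a publicly accessible model, which is why labs try to detect and block the systematic mass queries needed to collect them.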

That conversation has focused on Chinese firms using distillation to create open-weight models that are nearly as capable as U.S. offerings, but available at a much lower cost. However, tech workers have widely assumed that American labs use these techniques on each other to avoid falling behind competitors.

Now we know it’s true in at least one case: On the stand in a California federal court on Thursday, Elon Musk was asked if xAI has used distillation techniques on OpenAI models to train Grok, and he asserted it was a general practice among AI companies. Asked if that meant “yes,” he said, “Partly.”

Musk is in the process of suing OpenAI, CEO Sam Altman, and Greg Brockman, alleging they breached OpenAI’s original nonprofit mission by shifting the entity to a for-profit structure. That trial began this week, featuring testimony from the tech leader.

Musk’s admission is notable because distillation threatens AI giants by undermining the advantage they’ve built through heavy investment in compute infrastructure: it lets other software makers create nearly-as-capable models on the cheap. There’s no small amount of irony here, given the bending and alleged breaking of copyright rules by frontier labs in their search for sufficient data to train their models.

It’s no surprise that Musk’s xAI, which started in 2023, years after OpenAI, would try to learn from the then-leader in the field. It’s not clear that distillation is explicitly illegal; rather, it may violate the terms of service companies set for the use of their products.

OpenAI, Anthropic, and Google have reportedly launched an initiative through the Frontier Model Forum to share information about how to combat distillation attempts from China. These typically involve systematic querying of models to understand their inner workings. To stop the efforts, frontier labs are working to prevent users from making suspicious mass queries.

OpenAI did not respond to a request for comment on Musk’s admission at press time.

Later in his testimony, Musk was asked about a claim he made last summer that xAI would soon be far beyond any company besides Google. In response, he ranked the world’s leading AI providers, saying Anthropic held the top spot, followed by OpenAI, Google, and Chinese open source models. He characterized xAI as a much smaller company with just a few hundred employees.


EV startup Faraday Future paid $7.5M to company tied to founder Jia Yueting

Faraday Future paid around $7.5 million to a company controlled by its founder Jia Yueting in 2025, according to a new regulatory filing.

The long-struggling electric vehicle startup made the payments in a year when it delivered only four vehicles and lost nearly $400 million. The company has pivoted to selling cheaper vans and robots imported from China.

The payments happened while Faraday Future was still under investigation by the Securities and Exchange Commission (SEC), which was probing what are known as “related party transactions” between the company and entities related to or controlled by Jia, Faraday’s own filings have shown. The SEC was also investigating whether Faraday Future properly represented the level of control Jia had over the company when it went public in 2021, and whether it lied about early sales of its EVs in 2023.

The SEC dropped its four-year investigation in March, as TechCrunch first reported, despite having sent notices to Faraday Future, Jia, and other executives last year stating that investigators were recommending an enforcement action. The closure of the investigation comes amid a historic drop in white-collar crime enforcement during the second Trump administration.

The new transactions were revealed in Faraday Future’s annual proxy filing published on Thursday. It shows Faraday Future paid the Jia-affiliated entity, FF Global Partners LLC, a mix of monthly $100,000 “consulting” fees, a $2 million “bonus payment,” and $1.7 million to repay loans FF Global had made. The filing did not explain the remaining $2.6 million.

Faraday Future did not respond to a request for comment.

Faraday Future describes FF Global as an “affiliate” of Jia in the proxy filing, and in previous filings has said he exerts “significant influence” over the LLC. FF Global has five “voting managers,” one of whom is Jia, while the others include business associates and a family member — his nephew Jerry Wang.


Wang, who is a president at Faraday Future, draws a six-figure salary from FF Global, according to the filing. So does his wife, who is the head of FF Global’s legal department. FF Global also has a similar “consulting agreement” with a crypto holding company run by Wang (and advised by Jia) called AIXC. (Wang’s wife’s law firm also consults for AIXC.)

FF Global is also a major shareholder of Faraday Future and — with Jia — controls almost every aspect of the EV company, to the point that Faraday labels this as a risk factor in its most recent annual filing.

“Jia and FF Global, over which Mr. Jia exercises significant influence, have control over our management, business and operations, and may use this control in ways that are not aligned with our business or financial objectives or strategies or that are otherwise inconsistent with our interests,” the company wrote earlier this year.

FF Global also helped bring Jia back to power after the company went public in 2021. Shortly after Faraday Future merged with a special purpose acquisition company, the new public company board of directors opened an investigation into Jia’s movement of money in and out of the company, and into the disclosures made during the merger process.

In early 2022, the board sidelined Jia, who has been blacklisted by China for financial fraud, after finding Faraday Future had misrepresented the level of control he had over the company. They then referred their findings to the SEC, which opened its investigation shortly after.

FF Global, meanwhile, spent all of 2022 agitating to replace certain board members with ones friendly to Jia. This campaign became so intense that multiple board members received death threats. Those board members ultimately resigned in part because they feared for their lives. Jia was re-installed as co-CEO last year, and is now Faraday Future’s sole CEO.

FF Global is not the only Jia-related company that Faraday Future has paid, or plans to pay, money to. The company stated in its proxy filing that it paid $700,000 to a loan company associated with him last year. It also owes $8.5 million to Leshi Information Technology Co. Ltd., one of the companies related to his failed Chinese tech conglomerate LeEco, for “advertising services.”
