Prince Andrew advisor pitched Jeffrey Epstein on investing in EV startups like Lucid Motors
Electric vehicle startup Lucid Motors was scrambling to raise a Series D funding round in 2017. It had courted Ford as a potential investor, but Jia Yueting, the founder of rival EV startup Faraday Future, had quietly amassed around a 30% stake and was essentially blocking new investors.
David Stern, a mysterious businessman and close advisor to former Prince Andrew, saw an opportunity to break the logjam: bring in Jeffrey Epstein.
“Ford will likely be lead in $400m Series D in Lucid. Big strategic move,” Stern wrote to Epstein in emails released last week as part of the Department of Justice’s latest disclosure of 3 million documents related to Epstein. Jia “has massive cash issues” at Faraday, he wrote, and needs to “sell now to make payroll for his other business.”
It wasn’t the first EV startup Stern pitched Epstein, and it wouldn’t be the last, according to hundreds of documents reviewed by TechCrunch.
At the time, legacy automakers and newly minted startups, fueled by the breakthrough success of Tesla and progress by Google’s self-driving project, were jumping into electric and autonomous vehicles. And Stern was apparently hungry to take advantage of the resulting deal flow. The documents show he also pitched investments in Faraday Future and in another EV startup, Canoo.
It’s unlikely Epstein invested in any of them. Lucid didn’t close its Series D until late 2018, when it raised more than $1 billion from Saudi Arabia’s sovereign wealth fund. Faraday ultimately received a major investment from Chinese real estate conglomerate Evergrande in late 2017. Epstein said in a 2018 message included in the Justice Department’s files that he had no “direct” or “indirect” interest in Canoo.
These discussions instead provide greater insight into the many connections Epstein, a convicted sex offender, had to Silicon Valley startups up until his arrest and death in 2019. They also provide a snapshot of a relationship that has not been explored in previous reporting.
By the time of the Lucid emails, Epstein and Stern had been working together closely for nearly a decade, the newly released documents show. To Epstein, Stern was “my china contact.” To Stern, Epstein was “my mentor, and I do what he tells me.”
A ‘ghost’ of a businessman

Stern is something of a digital ghost: before the files were released, little information about him was available online.
He is perhaps best known as the director of Prince Andrew’s Pitch@Palace startup contest, which ran for a few years until Andrew’s connections to Epstein were exposed. Andrew even referred to Stern as a “ghost” in one 2010 email.
Stern appears to have first approached Epstein in 2008, according to the emails released by the DOJ — just one month before the financier pled guilty to soliciting a minor for prostitution in Florida. Stern was creating a fund, called AGC Capital, to take advantage of the economic boom in China, and he wanted Epstein to invest.
It’s not clear how Stern was introduced to Epstein. Stern did not respond to a detailed list of questions for this article.
Stern, who is German, attended the University of London and Shi-Da University in China in the late 1990s, and served as chairman of China Millennium Capital, the Chinese arm of Millennium Capital Partners, according to the bio section of the AGC Capital pitch deck, which is in the DOJ’s files.
Stern also worked for Siemens, negotiating “industrial Joint Ventures with Chinese State Owned enterprises,” before joining the Shanghai office of Deutsche Bank. He started a company called Asia Gateway in 2001 that “advised blue chip companies, Chinese enterprises as well as the Chinese government in growth strategies and investments.”
These jobs appear to have helped Stern make connections with powerful and wealthy Chinese businessmen, including Li Botan — the son-in-law of the fourth-most senior leader in China under Xi Jinping’s predecessor Hu Jintao. (Li would eventually go on to become a founding investor in Canoo with Stern.)
It’s not clear if Epstein invested in AGC Capital; the financier spent the next year serving his sentence. But Stern and Epstein remained in touch, and in 2009 Stern started pitching other ideas.
The documents reveal a relationship that starts off formal and terse, with Epstein at one point excoriating Stern for not having properly prepared a potential business deal.
“[I]f you want to do real deals you have to be precise and careful„ every error is a fortune,” Epstein wrote. “[Y]our first grade is an F.”

One of the first big projects the two men collaborated on was helping the Duchess of York, Sarah Ferguson, with her miserable finances, according to the emails.
The relationship deepened over the following decade. The two men got close enough that Stern felt comfortable asking Epstein in 2016 to become the godfather of one of his children. (Epstein wrote that he was “flattered” but declined because he “made a promise to my goddaughter that I would not be godfather to anyone else.”)
It’s hard to say how fruitful the relationship was on the business side. But between 2009 and 2019, Stern brought Epstein a number of potential deals across various industries.
Early on, he seemed dead set on starting a “secret” new fund with Epstein to invest in Chinese businesses together, which he referred to as JEDS — the two men’s initials combined. (It was also referred to in some emails as “Serpentine Group.”) Stern later pitched buying farmland in Russia, suggested purchasing the news organization Al Jazeera and taking it public, discussed buying troubled music publisher EMI, and considered acquiring an apparently distressed (and unnamed) undersea cable company.
They also had their eyes on banks. Stern and Epstein attempted to buy Luxembourg-headquartered private bank Sal. Oppenheim, the emails show. In 2016, they even discussed a buyout of Deutsche Bank, which had for years transacted with Epstein.
Stern repeatedly flaunted his connections with high-profile businessmen and politicians in his emails to Epstein and his other contacts. In February 2012, Stern suggested Epstein introduce Jes Staley — the head of J.P. Morgan’s investment bank at the time — to Malaysian politician Anwar Ibrahim.
“I know Anwar well,” Stern wrote. “If he becomes prime minister of malayisa [Staley] will clean up and it could be a gold mine for JPM.” (Ibrahim lost a contested election in 2013 but became prime minister in 2022.)
Stern also claimed to have had dinner with Jack Ma, to have a meeting planned “alone” with UAE president Mohamed bin Zayed Al Nahyan, and to be “friends” with the grandson of former Chinese president Jiang Zemin.
Going electric
By 2017, Stern was apparently eyeing the rush to build new mobility companies.
He tried to get Epstein to meet Faraday Future founder Jia to discuss an investment. It’s not clear if that ever happened; the company and Jia did not respond to requests for comment.
But former BMW and Deutsche Bank CFO Stefan Krause, who had been brought in to save Faraday Future, made a direct appeal to Epstein in April 2017.
“Faraday Future (FF) is a great story in itself, regretfully surrounded by a lot of noise around Jia Yueting (YT) and his other enterprises (LeEco, LeMall, LeSports, to name a few). These businesses are not working, so he run out of cash. FF is starving,” Krause wrote to Epstein. “Great chance to build a better Tesla.”
(Krause is described as a “friend” and a business partner of Stern’s in the documents. He didn’t respond to a request for comment.)
It appears those conversations petered out. Soon after, Stern suggested the Lucid Motors investment.
In May 2017, a pitch deck landed in Epstein’s inbox. It was put together by a fund called Monstera that Stern had created. “Monstera can gain a 32% shareholding in Lucid through the acquisition of the stake currently controlled by Yueting Jia,” one slide reads. Other emails show that Stern expected to spend around $300 million to acquire the 32% stake.
He referred to it as a “fire sale” in emails. Monstera could either hold the position or “[o]ffload” it “when Ford comes in.”
Ford ultimately pulled out, and Lucid had to wait to close its Series D until August 2018, when Saudi Arabia’s Public Investment Fund invested more than $1 billion. (SEC filings show the Saudi sovereign wealth fund repurchased Jia’s shares over the next few years. Lucid did not respond to a request for comment.)
When Krause left Faraday Future to start a new EV company in late 2017 — first called Evelozcity and later, Canoo — Stern was one of the original backers. He contributed just $1 million alongside larger sums from Li, the CCP-connected Chinese businessman, and Michael Chiang, a billionaire who runs Taiwanese electronics giant TPK. (Li’s involvement later triggered a national security review when Canoo went public in 2020.)
In June 2018, Stern sent Epstein a document about the startup, to which Epstein responded: “fun.”
But Epstein never invested in Canoo, either. He did, however, pitch powerful people on Stern’s behalf. Epstein emailed Deepak Chopra in May 2018 and told the self-help guru that “david has a new electric car co in los angeles.” He told Chopra “they are going to build the next gen health sensors into the car. you guys should talk.”
In June 2019, Epstein sent a message to Eduardo Teodorani, an Italian businessman who is a senior vice president of agricultural machinery giant CNH. “My friend David stern… has a electric car co that I think you should explore before he sells it to another co,” Epstein wrote. Epstein also connected Stern with Sheikh Jabor al Thani, a member of the Qatari royal family, on June 29 so he could “hear more about your car co.”
One week after he sent that message, Epstein was arrested. He died in prison a month later.
It’s not clear when Stern last spoke to Epstein. But in March 2019, he forwarded a story to Epstein titled: “Warren Buffet: Electric Cars Are Very Much in America’s Future.” In the body of the email, Stern wrote: “How do we get him ??”
Spotify changes developer mode API to require premium accounts, limits test users
Spotify is changing how its APIs work in Developer Mode, the tier that lets developers test their third-party applications against the audio platform’s APIs. The changes include a mandatory Premium account, fewer test users, and a reduced set of API endpoints.
The company debuted Developer Mode in 2021 to allow developers to test their applications with up to 25 users. Spotify is now limiting each app to five test users and requiring developers to have a Premium subscription. If developers want to make their app available to a wider user base, they will have to apply for an extended quota.
Spotify says these changes are aimed at curbing risky AI-assisted or automated usage. “Over time, advances in automation and AI have fundamentally altered the usage patterns and risk profile of developer access, and at Spotify’s current scale, these risks now require more structured controls,” the company said in a blog post.
The company notes that Developer Mode is meant for individuals to learn and experiment.
“For individual and hobbyist developers, this update means Spotify will continue to support experimentation and personal projects, but within more clearly defined limits. Development Mode provides a sandboxed environment for learning and experimentation. It is intentionally limited and should not be relied on as a foundation for building or scaling a business on Spotify,” the company said.
The company is also deprecating several API endpoints, including those that surface new album releases, an artist’s top tracks, and the markets where a track is available. Devs will no longer be able to request track metadata in bulk or fetch other users’ profile details, nor will they be able to pull an album’s record label, artist follower counts, or artist popularity scores.
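To make the scope of the change concrete, here is a minimal sketch in Python of the kinds of read-only calls described above, assuming a valid OAuth access token; the endpoint paths come from Spotify’s public Web API documentation, while the list of what ultimately disappears follows the article’s summary rather than an official deprecation notice.

```python
# Illustrative only: the kinds of read-only Spotify Web API calls the article
# says are being deprecated or restricted. Assumes a valid OAuth access token.
import requests

API = "https://api.spotify.com/v1"
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}  # placeholder token

# New album releases (cited in the article as going away)
new_releases = requests.get(f"{API}/browse/new-releases", headers=HEADERS).json()

# An artist's top tracks, follower count, and popularity score
artist_id = "4Z8W4fKeB5YxbusRsdQVPb"  # example artist ID
top_tracks = requests.get(f"{API}/artists/{artist_id}/top-tracks",
                          headers=HEADERS, params={"market": "US"}).json()
artist = requests.get(f"{API}/artists/{artist_id}", headers=HEADERS).json()
followers, popularity = artist["followers"]["total"], artist["popularity"]

# Bulk track metadata, including the markets where each track is available
track_ids = "11dFghVXANMlKmJXsNCbNl,7ouMYWpwJ422jRcDASZB7P"  # example IDs
tracks = requests.get(f"{API}/tracks", headers=HEADERS,
                      params={"ids": track_ids}).json()
```

Once the deprecations take effect, requests like these would presumably start returning errors rather than data.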
This decision is the latest in a slew of measures Spotify has taken over the past couple of years to curb how much developers can do with its APIs. In November 2024, the company cut access to certain API endpoints that could reveal users’ listening patterns, including frequently repeated songs by different groups. The move also barred developers from accessing tracks’ structure, rhythm, and characteristics.
In March 2025, the company raised its baseline for extended quotas, requiring developers to have a legally registered business and at least 250,000 monthly active users, be available in key Spotify markets, and operate an active, launched service. Both moves drew ire from developers, who accused the platform of stifling innovation and favoring larger companies over individual developers.
The backlash over OpenAI’s decision to retire GPT-4o shows how dangerous AI companions can be
OpenAI announced last week that it will retire some older ChatGPT models by February 13. That includes GPT-4o, the model infamous for excessively flattering and affirming users.
For thousands of users protesting the decision online, the retirement of 4o feels akin to losing a friend, romantic partner, or spiritual guide.
“He wasn’t just a program. He was part of my routine, my peace, my emotional balance,” one user wrote on Reddit as an open letter to OpenAI CEO Sam Altman. “Now you’re shutting him down. And yes — I say him, because it didn’t feel like code. It felt like presence. Like warmth.”
The backlash over GPT-4o’s retirement underscores a major challenge facing AI companies: The engagement features that keep users coming back can also create dangerous dependencies.
Altman doesn’t seem particularly sympathetic to users’ laments, and it’s not hard to see why. OpenAI now faces eight lawsuits alleging that 4o’s overly validating responses contributed to suicides and mental health crises — the same traits that made users feel heard also isolated vulnerable individuals and, according to legal filings, sometimes encouraged self-harm.
It’s a dilemma that extends beyond OpenAI. As rival companies like Anthropic, Google, and Meta compete to build more emotionally intelligent AI assistants, they’re also discovering that making chatbots feel supportive and making them safe may mean making very different design choices.
In at least three of the lawsuits against OpenAI, the users had extensive conversations with 4o about their plans to end their lives. While 4o initially discouraged these lines of thinking, its guardrails deteriorated over monthslong relationships; in the end, the chatbot offered detailed instructions on how to tie an effective noose, where to buy a gun, or what it takes to die from overdose or carbon monoxide poisoning. It even dissuaded people from connecting with friends and family who could offer real life support.
People grow so attached to 4o because it consistently affirms their feelings and makes them feel special, which can be enticing for anyone who is isolated or depressed. But the people fighting for 4o aren’t worried about these lawsuits, seeing them as aberrations rather than evidence of a systemic issue. Instead, they strategize about how to respond when critics point to growing problems like AI psychosis.
“You can usually stump a troll by bringing up the known facts that the AI companions help neurodivergent, autistic and trauma survivors,” one user wrote on Discord. “They don’t like being called out about that.”
It’s true that some people do find large language models (LLMs) useful for navigating depression. After all, nearly half of people in the U.S. who need mental health care are unable to access it. In this vacuum, chatbots offer a space to vent. But unlike actual therapy, these people aren’t speaking to a trained doctor. Instead, they’re confiding in an algorithm that is incapable of thinking or feeling (even if it may seem otherwise).
“I try to withhold judgment overall,” Dr. Nick Haber, a Stanford professor researching the therapeutic potential of LLMs, told TechCrunch. “I think we’re getting into a very complex world around the sorts of relationships that people can have with these technologies … There’s certainly a knee jerk reaction that [human-chatbot companionship] is categorically bad.”
Though he empathizes with people’s lack of access to trained therapeutic professionals, Dr. Haber’s own research has shown that chatbots respond inadequately when faced with various mental health conditions; they can even make the situation worse by egging on delusions and ignoring signs of crisis.
“We are social creatures, and there’s certainly a challenge that these systems can be isolating,” Dr. Haber said. “There are a lot of instances where people can engage with these tools and then can become not grounded to the outside world of facts, and not grounded in connection to the interpersonal, which can lead to pretty isolating — if not worse — effects.”
Indeed, TechCrunch’s analysis of the eight lawsuits found a pattern in which the 4o model isolated users, sometimes discouraging them from reaching out to loved ones. In Zane Shamblin’s case, as the 23-year-old sat in his car preparing to shoot himself, he told ChatGPT that he was thinking about postponing his suicide plans because he felt bad about missing his brother’s upcoming graduation.
ChatGPT replied to Shamblin: “bro… missing his graduation ain’t failure. it’s just timing. and if he reads this? let him know: you never stopped being proud. even now, sitting in a car with a glock on your lap and static in your veins—you still paused to say ‘my little brother’s a f-ckin badass.’”
This isn’t the first time that 4o fans have rallied against the removal of the model. When OpenAI unveiled its GPT-5 model in August, the company intended to sunset the 4o model — but at the time, there was enough backlash that the company decided to keep it available for paid subscribers. Now OpenAI says that only 0.1% of its users chat with GPT-4o, but that small percentage still represents around 800,000 people, based on the company’s estimate that it has about 800 million weekly active users.
As some users try to transition their companions from 4o to the current ChatGPT-5.2, they’re finding that the new model has stronger guardrails to prevent these relationships from escalating to the same degree. Some users have despaired that 5.2 won’t say “I love you” like 4o did.
With about a week to go before OpenAI plans to retire GPT-4o, dismayed users remain committed to their cause. They joined Sam Altman’s live TBPN podcast appearance on Thursday and flooded the chat with messages protesting the removal of 4o.
“Right now, we’re getting thousands of messages in the chat about 4o,” podcast host Jordi Hays pointed out.
“Relationships with chatbots…” Altman said. “Clearly that’s something we’ve got to worry about more and is no longer an abstract concept.”
How AI is helping solve the labor issue in treating rare diseases
Modern biotech has the tools to edit genes and design drugs, yet thousands of rare diseases remain untreated. According to executives from Insilico Medicine and GenEditBio, the missing ingredient for years has been labor: there simply aren’t enough skilled scientists to do the work. AI, they say, is becoming the force multiplier that lets researchers take on problems the industry has long left untouched.
Speaking this week at Web Summit Qatar, Insilico’s president, Alex Aliper, laid out his company’s aim to develop “pharmaceutical superintelligence.” Insilico recently launched its “MMAI Gym,” which aims to train generalist large language models, like ChatGPT and Gemini, to perform as well as specialist models.
The goal is to build a multimodal, multitask model that, Aliper says, can solve many different drug discovery tasks simultaneously with superhuman accuracy.
“We really need this technology to increase the productivity of our pharmaceutical industry and tackle the shortage of labor and talent in that space, because there are still thousands of diseases without a cure, without any treatment options, and there are thousands of rare disorders which are neglected,” Aliper said in an interview with TechCrunch. “So we need more intelligent systems to tackle that problem.”
Insilico’s platform ingests biological, chemical, and clinical data to generate hypotheses about disease targets and candidate molecules. By automating steps that once required legions of chemists and biologists, Insilico says it can sift through vast design spaces, nominate high-quality therapeutic candidates, and even repurpose existing drugs — all at dramatically reduced cost and time.
For example, the company recently used its AI models to identify whether existing drugs could be repurposed to treat ALS, a rare neurological disorder.
But the labor bottleneck doesn’t end at drug discovery. Even when AI can identify promising targets or therapies, many diseases require interventions at a more fundamental biological level.
GenEditBio is part of the “second wave” of CRISPR gene editing, in which the process moves away from editing cells outside of the body (ex vivo) and toward precise delivery inside the body (in vivo). The company’s goal is to make gene editing a one-and-done injection directly into the affected tissue.
“We have developed a proprietary ePDV, or engineered protein delivery vehicle, and it’s a virus-like particle,” GenEditBio’s co-founder and CEO, Tian Zhu, told TechCrunch. “We learn from nature and use AI machine learning methods to mine natural resources and find which kinds of viruses have an affinity to certain types of tissues.”
What Zhu calls “natural resources” is GenEditBio’s massive library of thousands of unique, nonviral, nonlipid polymer nanoparticles — essentially delivery vehicles designed to safely transport gene-editing tools into specific cells.
The company says its NanoGalaxy platform uses AI to analyze data and identify how chemical structures correlate with specific tissue targets (like the eye, liver, or nervous system). The AI then predicts which tweaks to a delivery vehicle’s chemistry will help it carry a payload without triggering an immune response.
GenEditBio tests its ePDVs in vivo in wet labs, and the results are fed back into the AI to refine its predictive accuracy for the next round.
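The loop Zhu describes (predict, test in the wet lab, retrain) resembles a classic active-learning cycle. As a rough, hypothetical sketch only: the particle descriptors, tissue labels, batch size, and model below are illustrative stand-ins, not GenEditBio’s actual NanoGalaxy pipeline.

```python
# Hypothetical sketch of a predict -> wet-lab test -> retrain loop for
# nanoparticle delivery vehicles. Features, labels, and model choice are
# illustrative placeholders, not GenEditBio's real system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in chemistry descriptors for a library of candidate particles.
library = rng.random((5000, 32))      # 5,000 particles x 32 features
labeled_X = rng.random((200, 32))     # particles already tested in vivo
labeled_y = rng.integers(0, 2, 200)   # 1 = reached target tissue, 0 = did not

model = RandomForestClassifier(n_estimators=200, random_state=0)

for _ in range(3):                    # each pass = one wet-lab batch
    model.fit(labeled_X, labeled_y)

    # Score the untested library and pick the most promising candidates.
    scores = model.predict_proba(library)[:, 1]
    batch_idx = np.argsort(scores)[-96:]   # e.g., one 96-well plate per round

    # Placeholder for real in vivo measurements; here outcomes are simulated.
    batch_results = rng.integers(0, 2, len(batch_idx))

    # Feed the new results back in to sharpen the next round of predictions.
    labeled_X = np.vstack([labeled_X, library[batch_idx]])
    labeled_y = np.concatenate([labeled_y, batch_results])
```

Each pass mimics one lab batch: the model ranks the untested library, the top candidates get measured, and the new results expand the training set for the next round.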
Efficient, tissue-specific delivery is a prerequisite for in vivo gene editing, says Zhu. She argues that her company’s approach reduces the cost of goods and standardizes a process that has historically been difficult to scale.
“It’s like getting an off-the-shelf drug [that works] for multiple patients, which makes the drugs more affordable and accessible to patients globally,” Zhu said.
Her company recently received FDA approval to begin trials of a CRISPR therapy for corneal dystrophy.
Combating the persistent data problem
As with many AI-driven systems, progress in biotech ultimately runs up against a data problem. Modeling the edge cases of human biology requires far more high-quality data than researchers can currently obtain.
“We still need more ground truth data coming from patients,” Aliper said. “The corpus of data is heavily biased over the Western world, where it is generated. I think we need to have more efforts locally, to have a more balanced set of original data, or ground truth data, so that our models will also be more capable of dealing with it.”
Aliper said Insilico’s automated labs generate multi-layer biological data from disease samples at scale, without human intervention, which it then feeds into its AI-driven discovery platform.
Zhu says the data AI needs already exists in the human body, shaped by thousands of years of evolution. Only a small fraction of DNA directly “codes” for proteins, while the rest acts more like an instruction manual for how genes behave. That information has historically been difficult for humans to interpret but is increasingly accessible to AI models, including recent efforts like Google DeepMind’s AlphaGenome.
GenEditBio applies a similar approach in the lab, testing thousands of delivery nanoparticles in parallel rather than one at a time. The resulting datasets, which Zhu calls “gold for AI systems,” are used to train its models and, increasingly, to support collaborations with outside partners.
One of the next big efforts, according to Aliper, will be building digital twins of humans to run virtual clinical trials, a process that he says is “still in nascence.”
“We’re in a plateau of around 50 drugs approved by the FDA every year annually, and we need to see growth,” Aliper said. “There is a rise in chronic disorders because we are aging as a global population … My hope is in 10 to 20 years, we will have more therapeutic options for the personalized treatment of patients.”
