Elon Musk, who threatened to ban his employees from using Apple devices at the companies he runs, said in a June 10 post on X that he’s no longer a fan of the iPhone, iPad and Mac computers because he has security concerns about whether Apple’s new partnership with OpenAI, the maker of ChatGPT, will protect users’ personal data.
But the situation prompting Musk may be more complicated than worries about security alone. He is one of the world's richest men, the CEO of X, the head of a startup developing a ChatGPT rival called Grok, and a co-founder of OpenAI, a company he's now suing. Musk, who has a reputation for bluster, is now being called out by members of his social media platform's fact-checking community, who say his claims are inaccurate and misleading.
Here’s what happened: On Monday, Apple CEO Tim Cook and his team took the stage at the company’s developers conference and announced generative AI features coming to iPhone, iPad and Mac users in the next versions of Apple’s operating system software this fall, including a deal giving Apple users access to OpenAI’s popular ChatGPT chatbot. Then Musk made his threat.
“If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies,” Musk posted to X, formerly known as Twitter, on Monday. “That is an unacceptable security violation.”
He also said in his posts that visitors to his companies, which include Tesla, X, chatbot maker xAI, tunneling startup the Boring Company and rocket producer SpaceX, will have to “check their Apple devices at the door, where they will be stored in a Faraday cage.” Faraday cages are enclosures that shield anything placed inside them from electromagnetic fields.
What he didn’t offer was any evidence to back up his speculation about potential security risks. Instead, Musk, in a follow-up post on Monday, belittled Apple for inking a deal with an outside maker of a large language model (LLM), the technology that enables generative AI functionality. He also said he might make his own phone to “combat this,” again without detailing what “this” is.
“It’s patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy,” Musk wrote. “Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river.”
Musk, who has worked to portray himself as an advocate for users, also didn’t mention his legal beef with OpenAI, which is detailed in his February lawsuit. In that lawsuit, he claims the San Francisco-based startup, including CEO Sam Altman, abandoned its founding mission to develop AI that will benefit humanity in favor of chasing profit.
In reply, OpenAI, in a lengthy blog post on its site on March 5, challenged Musk’s narrative and said the billionaire investor was angry that his 2018 attempt to take over OpenAI, which included his demand to become the CEO and majority shareholder so he could turn it into a “for-profit entity” himself, was rebuffed.
Apple didn’t reply to CNET’s request for comment about how ChatGPT will be integrated into “Apple Intelligence,” the name it gave to its approach to adding generative AI-based features throughout its hardware and software. Those features include the ability to rewrite or summarize notes as well as Siri’s improved capability to understand the context of conversations.
Apple also used the WWDC conference to announce its partnership with OpenAI, saying its users could choose to gain access to ChatGPT through Siri, Apple’s virtual assistant, and in new Writing Tools that will proofread your writing, rewrite copy in various styles and quickly summarize long sections of text.
During the WWDC keynote, Apple talked at length about the security and privacy aspects of its AI systems, including what it calls Private Cloud Compute for managing communications between personal devices and Apple’s remote servers working in the cloud.
“Privacy protections are built in for users who access ChatGPT — their IP addresses are obscured, and OpenAI won’t store requests,” Apple said in a press release.
The iPhone maker said to expect the ChatGPT integration in new software for its iPhone, iPad and Mac computers this fall. The integration is an optional feature, the company said: users can choose whether to opt in, or instead use OpenAI’s chatbot on its website. Apple said its devices would be aware of users’ personal data but would not collect it.
The iPhone maker has championed privacy as a core value when designing products and services, and it said that Apple Intelligence would set “a new standard for privacy in AI.” To help achieve this, Apple said certain AI-related tasks will be processed on-device, while more complex requests will be routed to the cloud in data centers running Apple-made chips. In either case, “data is not stored or made accessible to Apple and is only used to fulfill the user’s requests, and independent experts can verify this privacy,” the company said.
Fact-checkers on X also pointed out that Musk’s posts labeling the Apple-OpenAI partnership “creepy spyware” were not factually correct, Forbes noted. “Users, citing Apple’s own introduction to the Apple Intelligence models, said Musk’s claim the company will hand data over to OpenAI is misleading, as Apple has developed its own AI systems that will run on-device, or locally, and will use private cloud computing.”
In another community note, Forbes reported, fact-checkers wrote that Musk “misrepresents what was actually announced,” as “Apple Intelligence is Apple’s own creation” and access to ChatGPT “is entirely separate, and controlled by the user.”
Editors’ note: CNET used an AI engine to help create several dozen stories, which are labeled accordingly. The note you’re reading is attached to articles that deal substantively with the topic of AI but are created entirely by our expert editors and writers. For more, see our AI policy.