Apple needs to answer a host of questions around user privacy after its tie-up with OpenAI
Ultimately, the success of AI on Apple products depends on how well it is executed.

Apple, a company known for its longstanding commitment to user privacy, has received flak since unveiling its artificial intelligence strategy at its Worldwide Developers Conference on June 10. The criticism centres on a headline feature of Apple Intelligence: giving users access to OpenAI’s powerful generative AI tool, ChatGPT.
Some users welcomed the new features, which will initially be available only from the autumn on iPhone 15 Pro and Pro Max phones, as well as laptops and tablets running on the M-series chips introduced in 2020. But others questioned how the partnership aligns with Apple’s privacy commitment.
Notably, Elon Musk, a co-founder of OpenAI who walked away in 2018 over strategic differences, called Apple’s new partnership an unacceptable security violation. He threatened to ban Apple devices from his company offices and hinted at launching a privacy-hardwired Xphone in response.
“It’s patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy!
Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river.” — Elon Musk (@elonmusk) June 10, 2024
In fact, Apple has also unveiled in-house AI capabilities with Apple Intelligence. Whereas most state-of-the-art AI is accessed by...