"Apple’s AI Promise: Your Data is Never Stored or Made Accessible to Apple"

Apple recently announced its new "Apple Intelligence" system, which it is integrating into its products. Most large language models run on remote, cloud-based server farms, so some users have been reluctant to share personally identifiable or private data with AI companies. Apple says its new "Private Cloud Compute" will ensure that any data processed on its cloud servers is protected in a transparent and verifiable way.

According to Apple, "a brand new standard for privacy and AI" is achieved through on-device processing. Apple says many of its generative AI models can run entirely on a device powered by an A17+ or M-series chip, eliminating the risk of sending personal data to a remote server.

When a bigger, cloud-based model is needed to fulfill a generative AI request, Apple stressed that it will "run on servers it created especially using Apple silicon," which allows for the use of security tools built into the Swift programming language. Apple noted that the Apple Intelligence system "sends only the data relevant to completing a task" to those servers rather than granting blanket access to all of the contextual information on the device. Apple says that minimized data will not be saved for future server access or used to further train Apple's server-based models. Apple also noted that the server code used by Private Cloud Compute will be publicly accessible, meaning that "independent experts can inspect the code that runs on these servers to verify its privacy promise." The entire system is set up cryptographically so that Apple devices "will refuse to talk to a server unless its software has been publicly logged for inspection."
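The "refuse to talk to a server unless its software has been publicly logged" behavior can be thought of as a membership check against a public transparency log. Below is a minimal Python sketch of that idea; every name, hash, and data value here is invented for illustration, and Apple's actual attestation protocol is considerably more involved:

```python
import hashlib

def measurement(software_image: bytes) -> str:
    """Hash of a server software build, standing in for a cryptographic attestation."""
    return hashlib.sha256(software_image).hexdigest()

class Device:
    """Hypothetical client that only talks to servers whose builds are publicly logged."""

    def __init__(self, public_log: set[str]):
        # Measurements copied from the publicly inspectable transparency log.
        self.public_log = public_log

    def send_request(self, attested_measurement: str, payload: str) -> str:
        # Refuse to talk to any server whose software was never logged.
        if attested_measurement not in self.public_log:
            raise PermissionError("server software not publicly logged; refusing")
        return f"sent {len(payload)} bytes to verified server"

logged_build = measurement(b"pcc-server-build-1")
device = Device(public_log={logged_build})

# A logged build is accepted; an unlogged build is rejected
# before any data leaves the device.
print(device.send_request(logged_build, "only the data relevant to the task"))
```

In this toy model, the privacy guarantee comes from doing the check on the client side: even a compromised or modified server cannot receive data unless its exact build was first published for inspection.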


Source: Ars Technica

Submitted by Adam Ekwall on