The five things Apple will do to keep its “Private Cloud Compute” private

“End-to-end encryption is our most powerful defense,” says Apple. However, this safeguard cannot be applied when sending data to large language models in the cloud. Nonetheless, Apple assures that no one, not even Cupertino iGeeks, will be able to access personal data once it leaves the user’s device for processing in the cloud.

Apple Intelligence, a suite of AI-powered features for the newest Apple devices, will rely mostly on the device’s own processing power.

Macs, iPhones, and iPads will run a three billion-parameter on-device language model. To handle more advanced features like writing, summarizing, visual generation, and app interactions, devices will connect to a larger server-based model. Apple calls its cloud intelligence system “Private Cloud Compute.”

The notion that user data leaves the device is often associated with privacy risks. Tech mogul Elon Musk even said he plans to ban iPhones at his companies, and that visitors would have to keep their devices stored in a Faraday cage.

Apple itself says it “has long championed on-device processing as the cornerstone for the security and privacy of user data.”

Yet Apple devotes a lot of effort to ensuring that the data it does pass to the cloud is not at risk. The company recently published a post on its Security Research blog explaining how the new off-device technologies will work.

“PCC extends the industry-leading security and privacy of Apple devices into the cloud, making sure that personal user data sent to PCC isn’t accessible to anyone other than the user – not even to Apple,” Cupertino says.

Apple’s private cloud will also run on “custom Apple silicon and a hardened operating system designed for privacy.”

The sophisticated technologies are supposed to satisfy five requirements: stateless computation (independent of any previous computations or data), enforceable guarantees, no privileged access, non-targetability, and verifiable transparency.

Here’s how Apple describes the inner workings of its new system:

1. Compute nodes on Apple silicon

Custom-built server hardware will use Apple silicon with the same hardware security technologies found in the iPhone. The new OS will support running LLM tasks while maintaining a very narrow attack surface, executing only signed and sandboxed code. There will be no remote access, monitoring, or other traditional data center tools, only “a small, restricted set of operational metrics.”
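The signed-code requirement can be pictured with a toy sketch. This is an assumption-laden illustration, not Apple’s implementation: real code signing uses public-key infrastructure, while the sketch below stands in with an HMAC and a made-up signing key.

```python
import hmac
import hashlib

# Hypothetical stand-in for the vendor's code-signing key (toy only).
SIGNING_KEY = b"toy-signing-key"

def sign(image: bytes) -> bytes:
    # Produce a signature over a software image.
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()

def boot_if_signed(image: bytes, signature: bytes) -> bool:
    # The node refuses to run any code whose signature fails to verify,
    # keeping the attack surface limited to signed, sandboxed binaries.
    return hmac.compare_digest(sign(image), signature)
```

A tampered image would fail verification even if the attacker keeps the original signature, since the signature no longer matches the image’s contents.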

2. “Stateless” processing and “enforceable guarantees”

End-to-end encryption is not an option when the cloud needs to compute user data.

Apple guarantees that its PCC compute node will handle a user’s data for the “sole exclusive purpose of fulfilling” the user’s request. The data stays on the node “only until the response is returned,” and it “is never available to Apple.”

According to the blog post, even staff with administrative access to the production service or hardware will not be able to access users’ data, which will be deleted once each request is fulfilled.
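Statelessness boils down to a simple discipline: request data lives only inside the handler. The sketch below is a toy model, not Apple’s code; the point is that nothing is cached, logged, or written outside the function’s local scope, so the data is gone the moment the response is returned.

```python
def handle(request: dict) -> dict:
    # Stateless sketch (assumption): the prompt exists only in local
    # variables. No global cache, no disk write, no log of the prompt --
    # once the function returns, the request data is unrecoverable.
    prompt = request["prompt"]
    response = {"text": f"processed: {prompt}"}
    return response
```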

Prompts sent from an iPhone, Mac, or iPad to the cloud will specify the desired AI model and parameters. They will be encrypted in transit to “highly protected PCC nodes.” Supporting data center services won’t hold the decryption keys; those will be kept in the hardware-level Secure Enclave, where they cannot be duplicated or extracted, and they will not be retained once a request is complete.
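The flow can be sketched as a toy model. Everything here is an assumption for illustration: the cipher below is a SHA-256 keystream, not a production algorithm, and the per-request key stands in for material that would really be protected by the Secure Enclave. The idea it shows is that intermediaries only ever see ciphertext, and the key is ephemeral.

```python
import secrets
import hashlib

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy keystream cipher from SHA-256 in counter mode -- illustrative
    # only, NOT a production cipher. XORing twice with the same key
    # recovers the original data.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# Ephemeral per-request key; in the real design, key material would be
# held by the node's Secure Enclave and never extractable.
request_key = secrets.token_bytes(32)
ciphertext = xor_stream(request_key, b"summarize my notes")

# Relays and supporting data center services see only ciphertext.
# On the node, the key decrypts the prompt:
plaintext = xor_stream(request_key, ciphertext)
# After the response is returned, the key is discarded, not retained.
```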

Each request will run in isolation from others. Sandboxing, Pointer Authentication Codes, and other technologies are used to resist exploitation and limit an attacker’s lateral movement.

3. No privileged access

Apple says PCC is designed to ensure that no one can bypass the stateless computation guarantees – there will be no privileged access.

“First, we intentionally did not include remote shell or interactive debugging mechanisms on the PCC node,” the post reads. “PCC nodes cannot enable Developer Mode and do not include the tools needed by debugging workflows.”

The system does not include a general-purpose logging mechanism. The limited monitoring and management tooling is designed “to prevent user data from being exposed.”

“Only pre-specified, structured, and audited logs and metrics can leave the node, and multiple independent layers of review help prevent user data from accidentally being exposed through these mechanisms,” Apple assures.
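A “pre-specified, structured” telemetry path can be pictured as an allowlist filter. This is a minimal sketch under my own assumptions (the field names are invented), not Apple’s tooling: anything not explicitly permitted, such as a prompt accidentally included in a metrics payload, simply never leaves the node.

```python
# Hypothetical allowlist of structured metric fields permitted to leave
# the node. Field names are invented for illustration.
ALLOWED_METRICS = {"request_count", "latency_ms", "error_code"}

def emit(metrics: dict) -> dict:
    # Drop every field not on the allowlist, so free-form data (e.g. a
    # user's prompt) can never leak through the telemetry path.
    return {k: v for k, v in metrics.items() if k in ALLOWED_METRICS}
```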

4. Non-targetable system

In the unlikely worst case where a highly sophisticated attacker manages to gain physical access to a compute node, Apple’s defense is to make the hardware attack very costly, quickly discoverable, and limited in scale by ensuring attackers cannot target specific users.

“Hardware security starts at manufacturing, where we inventory and perform high-resolution imaging of the components of the PCC node before each server is sealed and its tamper switch is activated,” Apple says.

After revalidation in the data center, monitored by a third-party observer, user devices will send data to a PCC node only after checking the validity of its certificates.

The third-party-run OHTTP relay, which forwards encrypted HTTP messages, will hide users’ IP addresses. Valid requests will be authorized with single-use credentials, preventing attackers from linking requests to individuals. Request metadata will omit most personal details and include only the limited contextual data required.
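The single-use credential idea can be sketched in a few lines. This is a toy model, not Apple’s protocol (the real design uses cryptographic blind-signature-style tokens): a token carries no user identity, authorizes exactly one request, and cannot be replayed.

```python
import secrets

issued = set()   # tokens handed out by a hypothetical auth service
spent = set()    # tokens that have already been redeemed

def issue_token() -> str:
    # A fresh random token with no user identity attached, so separate
    # requests from the same user cannot be linked together.
    token = secrets.token_hex(16)
    issued.add(token)
    return token

def redeem(token: str) -> bool:
    # Each token authorizes exactly one request; replaying it fails.
    if token in issued and token not in spent:
        spent.add(token)
        return True
    return False
```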

Requests will also be routed to randomly selected nodes, limiting the impact if any single node is compromised.

5. Building trust with supervision from security researchers

The last requirement that Apple has put in place is “verifiable transparency,” which allows security researchers to verify security and privacy guarantees.

“We’ll take the extraordinary step of making software images of every production build of PCC publicly available for security research,” Apple says.

User devices will reportedly send data to a PCC node only if the node can prove it is running the publicly listed software.
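That check can be sketched as comparing a measurement of the node’s software against a public list of build hashes. This is a simplified assumption-based model (real attestation is done cryptographically in hardware, and the build names below are invented), but it captures the rule: unlisted software means no user data.

```python
import hashlib

# Hypothetical transparency log of hashes of published production builds.
PUBLISHED_BUILDS = {
    hashlib.sha256(b"pcc-build-1.0").hexdigest(),
    hashlib.sha256(b"pcc-build-1.1").hexdigest(),
}

def device_will_send(node_software_image: bytes) -> bool:
    # The device measures the node's software and refuses to send data
    # to any node whose measurement isn't on the public list.
    measurement = hashlib.sha256(node_software_image).hexdigest()
    return measurement in PUBLISHED_BUILDS
```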

Researchers will be able to inspect all code running on PCC nodes, including the OS, applications, and executables, recorded in a tamper-proof log, and will earn rewards for any bugs found.

“This is an extraordinary set of requirements and one that we believe represents a generational leap over any traditional cloud service security model,” Apple said.

Introduction of Responsible AI principles

Apple also outlines its principles for Responsible AI development. These principles focus on empowering users, representing them “around the globe authentically,” designing with care, and protecting privacy.

“Our models have been created with the purpose of helping users do everyday activities across their Apple products, and developed responsibly at every stage and guided by Apple’s core values,” the company said.

Despite Apple’s claims and its strong track record in privacy and security, much still depends on trust. Some researchers remain skeptical of relying on a single company’s claims and implementations, since Apple retains centralized control over the system’s design, implementation, and updates, all of which can change over time.