After a year of being seen as behind in the AI race, Apple has finally announced its AI developments. This fall, it will start bringing new features powered by Apple Intelligence to iOS, macOS, and iPadOS.
As the company announced at its Worldwide Developers Conference, Apple's devices will be able to transcribe and summarize recordings, assist in writing, and determine which notifications are important. Siri will be able to understand contextual information, among other things.
These new features, as well as the ChatGPT integration, raise questions about how the company will handle user data. In its WWDC keynote, Apple's representatives devoted time to assuring the audience that this data would be protected.
While cybersecurity experts agree that Apple's approach meets the highest security standards, the company may still need to address potential vulnerabilities and issues before releasing AI features to the masses.
Apple's approach
Generally, Apple tries to rely on on-device processing as much as possible. According to the company, this keeps data disaggregated and not subject to any centralized point of attack.
Most of Apple's AI inquiries will be processed on the device with Apple's own language and diffusion models. The company will also use an on-device semantic index that can search information from across apps.
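Apple has not published the internals of that index, but conceptually it resembles a local embedding store queried by vector similarity, with nothing leaving the device. A minimal sketch of that idea, in which every type, name, and the embedding step are assumptions rather than Apple's actual APIs:

```swift
// Illustrative sketch only: a toy on-device semantic index.
// The data model and embeddings are assumed, not Apple's implementation.
struct IndexedItem {
    let source: String        // e.g. "Mail", "Messages", "Notes"
    let text: String
    let embedding: [Float]    // produced by an on-device embedding model
}

func cosineSimilarity(_ a: [Float], _ b: [Float]) -> Float {
    let dot = zip(a, b).map(*).reduce(0, +)
    let normA = a.map { $0 * $0 }.reduce(0, +).squareRoot()
    let normB = b.map { $0 * $0 }.reduce(0, +).squareRoot()
    guard normA > 0, normB > 0 else { return 0 }
    return dot / (normA * normB)
}

// Ranking happens entirely on the device; no item or query leaves it.
func search(_ index: [IndexedItem], queryEmbedding: [Float], topK: Int = 3) -> [IndexedItem] {
    index
        .map { item in (item, cosineSimilarity(item.embedding, queryEmbedding)) }
        .sorted { $0.1 > $1.1 }
        .prefix(topK)
        .map { $0.0 }
}
```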
However, sometimes data needs to be sent to the cloud for processing. Typically, this means it goes to a third-party provider, which opens the door to all kinds of vulnerabilities.
To mitigate these risks, Apple is introducing several measures. It will send only the information relevant to the task to a server. No data will be stored on servers or be accessible to Apple, and the servers will run on Apple silicon hardware.
In addition, the company says it will cryptographically ensure that its devices only communicate with a server if its software has been publicly logged for inspection. It will also invite independent security researchers to verify its privacy and security claims.
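Apple has not released code for that verification flow, but the idea resembles certificate-transparency-style checks: before sending anything, the device compares the server's attested software measurement against a public log that outside researchers can inspect. A hedged sketch, with every type and function name assumed rather than taken from Apple's real interfaces:

```swift
import CryptoKit
import Foundation

// Illustrative only: none of these types mirror Apple's actual PCC interfaces.
struct ServerAttestation {
    let softwareImageHash: Data   // measurement of the exact server software build
    let signature: Data           // signed by the server hardware (verification omitted here)
}

// A measurement is just a hash of the released server software image.
func measurement(ofSoftwareImage image: Data) -> Data {
    Data(SHA256.hash(data: image))
}

func shouldSendRequest(to attestation: ServerAttestation,
                       publiclyLoggedMeasurements: Set<Data>) -> Bool {
    // Refuse to talk to any server whose software build is not in the public log.
    publiclyLoggedMeasurements.contains(attestation.softwareImageHash)
}
```

The important property is the one Apple describes: if a server build has not been publicly logged for inspection, the device simply refuses to send it data.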
OpenAI's integration brings additional risks
Apple Intelligence will not handle all user inquiries. Some will be directed to OpenAI's chatbot, ChatGPT.
Elon Musk, the CEO of Tesla and one of the co-founders of OpenAI, called this an unacceptable security risk on X and threatened to ban Apple's devices from his companies.
If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies. That is an unacceptable security violation.
Elon Musk (@elonmusk) June 10, 2024
It’s patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy!
Elon Musk (@elonmusk) June 10, 2024
Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river.
However, as Musk sometimes does, he made these comments without knowing all the facts. X's crowdsourced Community Notes system even corrected his posts, saying that Musk's comments had misrepresented what was actually announced.
Joseph Thacker, principal AI engineer and security researcher at AppOmni, says Musk may not fully grasp how Apple is using its own models locally and in its private cloud.
Apple's AI will decide which inquiries are directed from Siri to ChatGPT, it will not log information, and the ChatGPT integration will be opt-in.
However, the expert sees some additional risks.
"The risks associated with using SaaS providers like OpenAI are that there is an entirely new attack vector. Startups like OpenAI don't have years of expertise and large teams, which would allow them to take the time to threat model properly and secure everything to the highest standard," he says.
The risks with ChatGPT's integration into Siri are pretty small, though, as Apple confirmed that it is not passing any identifying user data to OpenAI when calling ChatGPT.
Even if OpenAI were to break its promise not to log the data, unless a person states, "My name is X, and I'd like to know about Y," it would be impossible for OpenAI to know who is asking what question, Thacker explains.
Thacker describes Apple's Private Cloud Compute as a "very secure, complex and well-thought-out infrastructure."
"It would require multiple sophisticated zero-day vulnerabilities to compromise Apple's private cloud if the published architecture is implemented correctly. It's based on zero trust. Even their engineers don't have access to the keys necessary to decrypt the data being processed. Exploitation is extremely unlikely," Thacker adds.
It all comes down to data management
OpenAI's integration, even with a verifiable origin, adds more layers of complexity and potentially additional attack vectors, says Jacob Kalvo, cybersecurity expert and a co-founder and CEO of Live Proxies.
According to him, even though Apple claims that no information will be captured, the integration process can expose new attack surfaces, such as API vulnerabilities or inadvertently disclosed data.
"Though Apple has long been held up as one of the standards of privacy and security, the essence of AI and machine learning is dealing with tons of data, and data can be hacked and breached by poor data management practices," Kalvo says.
He underlines several potential risks associated with Private Cloud Compute. It may be open to side-channel attacks, where information is inferred by exploiting the hardware's physical properties, or to advanced malware that targets the hardware layer.
"Other dangers include exploiting zero-day vulnerabilities, causing unauthorized access of the information. The success rate of these attacks will vary since it factors in how advanced the attackers are and the security protections Apple has hardened into the system," he adds.
Loopholes in AI
Joe Warnimont, security and technical expert at HostingAdvice, says that sending data to the cloud always poses additional risks.
"Private Cloud Compute still uses the cloud. That's a vulnerability. The second data leaves your local device; bad actors can obtain it. Those threats often come in the form of contractors or employees of the companies managing the cloud servers," the expert says.
He reminds us that Apple once experienced issues with third-party contractors listening to Siri recordings. Those recordings were encrypted, except for quality control – and that's the loophole the contractors exploited.
"With AI, there are bound to be loopholes as well – like how AI models require unencrypted data (at some point) to process requests," Warnimont says.
However, the expert praises Apple for taking a different approach than its competitors.
Other big tech players store and often share enormous amounts of customer data gathered through voice assistants, online search, and home automation, and their approach to AI is the same, Warnimont says.
Meanwhile, Apple at least has a reputation for stronger privacy and security measures and a reliable infrastructure to build upon.
Potential prompt injection attack vulnerability
After Apple's announcements, several experts shared their opinions on Apple's Private Cloud Compute on social media.
Matthew Green, who teaches cryptography at Johns Hopkins University, said that if you gave an excellent team a huge pile of money and told them to build the best "private" cloud in the world, it would probably look like this.
"Keep in mind that super-spies aren't your biggest adversary. For many people, your biggest adversary is the company that sold you your device/software. This PCC system represents a real commitment by Apple not to "peek" at your data," the researcher said in an series of post on X and Threads.
However, he also sees many invisible sharp edges that could exist in a system like this. Those include hardware flaws, issues with the cryptographic attestation framework, and clever software exploits.
"Many of these will be hard for security researchers to detect," Green posted.
Finally, there are so many invisible sharp edges that could exist in a system like this. Hardware flaws. Issues with the cryptographic attestation framework. Clever software exploits. Many of these will be hard for security researchers to detect. That worries me too. 18/
Matthew Green (@matthew_d_green) June 10, 2024
Simon Willison, an engineer, said that Siri's ability to both access data on your device and trigger actions based on your instructions may make it vulnerable to prompt injection attacks.
"What happens if someone sends you a text message that tricks Siri into forwarding a password reset email to them, and you ask for a summary of that message? I'm fascinated to learn what Apple has done to mitigate this risk," Willison wrote.
I wrote up some initial thoughts on the Apple Intelligence announcements from WWDC this morning https://t.co/IR7KVQcDCb
Simon Willison (@simonw) June 10, 2024