
Security flaws in the Perplexity AI app could let attackers steal your passwords and identity.
It’s 2025 and AI apps are replacing your search engines, your shopping lists, and maybe even your friends. But while they’re busy answering your existential queries, they’re also exposing your personal data.
Take Perplexity AI, one of the sleekest, smartest AI-powered assistants out there. Cybersecurity researchers just pulled back the curtain, and what they found is straight out of a digital horror story: flaws that could lead to account takeovers, data theft, and identity hijacking.
Hey, Perplexity, how do I get hacked?
In their latest deep dive, the security team at Appknox cracked open Perplexity’s Android app and found not one, not two, but a buffet of vulnerabilities that would make even DeepSeek blush.
According to the report, the app’s code contains hardcoded API keys, meaning anyone who knows how to decompile an Android app can swipe them.
Once they do, they can access backend services, leak user data, and potentially compromise entire systems. Basically, leaving hardcoded secrets is like writing your ATM PIN on the back of your debit card and calling it innovation.
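To see why this matters, here’s a minimal sketch of what an attacker does after decompiling an app. The code fragment and the key in it are invented for illustration, not taken from Perplexity’s app:

```python
import re

# Hypothetical decompiled Java fragment; the key below is made up.
decompiled = '''
public static final String BASE_URL = "https://api.example.com/v1";
public static final String API_KEY = "sk-live-9f8a7b6c5d4e3f2a1b0c";
'''

# A crude pattern for key-like string literals: a quoted string with a
# recognizable secret prefix (sk-, AKIA, ghp_, and so on).
KEY_PATTERN = re.compile(r'"((?:sk-|AKIA|ghp_)[A-Za-z0-9-]{8,})"')

def find_hardcoded_keys(source: str) -> list[str]:
    """Return any key-like string literals found in decompiled code."""
    return KEY_PATTERN.findall(source)

print(find_hardcoded_keys(decompiled))
# Any key baked into the APK is recoverable with a one-line scan like this.
```

The point is how little skill this takes: one decompiler run and one regex, and the secret is out.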
Researchers also discovered that Perplexity’s API is misconfigured in the most reckless way: its cross-origin policy allows wildcard origins. This means that literally any website can send requests to the app’s backend. It’s an open invitation to Cross-Site Request Forgery (CSRF) attacks, where malicious sites trick the app’s backend into leaking user data.
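The difference between a wildcard and an allowlist comes down to one check. This sketch uses a made-up allowlist to show the logic, it is not Perplexity’s actual configuration:

```python
# Assumed allowlist for illustration; a real backend would list its own domains.
TRUSTED_ORIGINS = {"https://www.perplexity.ai"}

def origin_allowed(request_origin: str, allow_wildcard: bool) -> bool:
    """Decide whether a cross-origin request should be honored.

    With a wildcard policy (Access-Control-Allow-Origin: *), every
    origin passes. With an allowlist, only the app's own domains do.
    """
    if allow_wildcard:
        return True
    return request_origin in TRUSTED_ORIGINS

# A random attacker-controlled site:
print(origin_allowed("https://evil.example", allow_wildcard=True))   # True
print(origin_allowed("https://evil.example", allow_wildcard=False))  # False
```

With the wildcard in place, the attacker’s site is indistinguishable from the legitimate one as far as the backend is concerned.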
To make matters worse, there is zero SSL pinning. Without it, attackers can pull off man-in-the-middle (MitM) attacks – intercepting your searches, stealing your credentials, and watching your activity in real time.
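Pinning is conceptually simple: the app ships with a digest of the certificate it expects and refuses anything else, so a MitM proxy’s substitute certificate fails the check. A minimal sketch with placeholder certificate bytes (real Android apps would pin against the actual certificate, for example with OkHttp’s CertificatePinner):

```python
import hashlib

# Assumed pin: the SHA-256 digest of the server certificate the app
# expects. In a real app this value ships inside the binary.
PINNED_SHA256 = hashlib.sha256(b"legitimate-cert-der-bytes").hexdigest()

def connection_trusted(presented_cert_der: bytes) -> bool:
    """Accept a TLS connection only if the presented certificate
    matches the pinned digest; an interception proxy's cert will not."""
    return hashlib.sha256(presented_cert_der).hexdigest() == PINNED_SHA256

print(connection_trusted(b"legitimate-cert-der-bytes"))   # True
print(connection_trusted(b"mitm-proxy-cert-der-bytes"))   # False
```

Without that one comparison, any certificate the device’s trust store accepts – including one planted by an attacker – gets a free pass.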
Perplexity’s bytecode also ships completely unobfuscated, making it a playground for reverse engineering. Attackers can tear it apart, find vulnerabilities, or worse – create fake versions of the app that steal data or scam users.
Finally, Perplexity has no protection against debugging or developer exploits. That’s a massive red flag. It means attackers can toy with the app in a controlled environment, figure out how it works, and tweak it to their advantage.
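A basic anti-debugging defense is just a runtime check that refuses to run under a debugger. Android apps would typically call android.os.Debug.isDebuggerConnected(); here is a rough Python analogue of the same idea, for illustration only:

```python
import sys

def debugger_attached() -> bool:
    """Rough analogue of an anti-debugging check: Python exposes an
    active trace function when a debugger is attached."""
    return sys.gettrace() is not None

# An app with this kind of check could refuse to run, or degrade,
# when it detects instrumentation. Perplexity reportedly has none.
print(debugger_attached())
```

It’s a low bar, and determined attackers can bypass it, but skipping it entirely leaves the app wide open to the controlled-environment tampering described above.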
“Our testing highlights critical vulnerabilities in Perplexity AI that expose users to a variety of risks, including data theft, reverse engineering, and exploitation,” said Subho Halder, CEO of Appknox.
“It’s crucial for the developers to address these issues swiftly. In the meantime, users should be cautious about using the app, particularly for sensitive activities,” he added.
Is Perplexity AI more dangerous than DeepSeek?
According to Appknox, Perplexity AI might be a bigger cybersecurity risk than the Chinese AI model DeepSeek.
“Every vulnerability we found in Deepseek is also present in Perplexity, plus five additional weaknesses that widen the attack surface. This isn’t just an oversight – it’s a pattern. AI applications are evolving fast, but their security isn’t keeping up,” write the researchers.
However, while Perplexity has more vulnerabilities overall, DeepSeek had its own set of critical flaws – such as unsecured network configurations and exposure to advanced threats like StrandHogg and Janus. These risks make DeepSeek a prime target for more sophisticated attacks that can hijack user sessions and inject malware.
If this is what we’re seeing from top-tier AI apps, imagine what’s lurking in the hundreds of AI clones flooding app stores.
Cybernews has contacted Perplexity AI for comment but has yet to receive a response.