
New research shows most executives and staff think they can spot a phishing scam. Most of them are wrong.
Often, cybercriminals don’t need to hack into your company. They just need to get you to click the wrong email.
A new study commissioned by Dojo, covering 2,000 UK workers and executives, reveals a sobering truth: 56% couldn’t distinguish real emails from phishing scams, despite high levels of confidence in their ability to do so.
And it’s not just junior staffers. Executive-level workers are falling for the tricks too, especially phishing campaigns developed with artificial intelligence (AI).
Meanwhile, phishing attacks continue to rise, hitting 85% of UK businesses in 2025, up 2% from last year. Yet despite the growing threat, over a quarter of organisations (27%) still treat cybersecurity as a low priority at the senior level.
Key phishing research numbers
- 53% of all respondents failed to detect the phishing emails they were shown.
- Executives were better at recognising legitimate emails from business messaging applications like Slack and password management applications like Dashlane: 58% on average, versus 36% among non-executive employees.
- 90% of executives said they were confident they could spot an AI scam, yet 66% failed when put to the test.
- Only 38% could identify the two legitimate emails in the test batch.
- 47% missed the red flags in a fake Google alert email.
- 57% fell for a bogus Google Sheets invitation.
- 48% couldn’t see the issue with a scam Dropbox message, despite the sender using a fake URL.
- Overall, most people were fooled by AI-generated scams: 64% of non-executive employees and 66% of C-Suite executives could not identify an AI-generated scam.
Methodology
A total of 2,000 people, split between C-suite executives and non-executive employees, were tested on their scam-spotting skills.
Each group received six emails: three were identical across both groups, while the other three were tailored to their roles. Both groups were also hit with an AI-generated phishing email to see who could sniff out a scam.

AI scams are getting smarter. Humans are not keeping up
The survey used AI-generated scam emails written with help from ChatGPT, which mimicked Google-style alerts. They included fake URLs, created a sense of urgency, and prompted staff to download a file.
Despite the obvious clues, including suspicious sender addresses like no-reply@google-alerts.com, the trick was good enough to fool 64% of non-executive employees.
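The lookalike address above illustrates the core tell: the domain resembles a real brand but is not the brand’s actual domain. A minimal sketch of that check, assuming a hypothetical allowlist of trusted domains (not part of the study):

```python
# Illustrative sketch: flag sender addresses whose domain is not an
# exact match for a trusted domain. Lookalike domains such as
# "google-alerts.com" (vs. the real "google.com") fail this check.
# TRUSTED_DOMAINS is a hypothetical allowlist, not from the study.
TRUSTED_DOMAINS = {"google.com", "slack.com", "dropbox.com"}

def is_suspicious_sender(address: str) -> bool:
    """Return True when the sender's domain is not on the allowlist."""
    try:
        domain = address.rsplit("@", 1)[1].lower()
    except IndexError:
        return True  # no "@" at all: malformed, treat as suspicious
    return domain not in TRUSTED_DOMAINS

print(is_suspicious_sender("no-reply@google-alerts.com"))  # True
print(is_suspicious_sender("no-reply@google.com"))         # False
```

Real mail filters go much further (punycode lookalikes, typo distance, reputation data), but an exact-match allowlist already catches the spoofed domain used in the test emails.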

Employees were also tested with a classic CEO impersonation scam, a common tactic where cybercriminals pose as high-ranking executives to squeeze out sensitive payment details.
64% of non-executive employees failed to recognise the red flags in an AI-generated CEO impersonation scam. Among them, entry-level graduates were the most vulnerable, with 68% mistaking the fake request for the real email.
The scam relied on urgency. Phrases like “quick signature” and “end of the day” were designed to pressure the recipient. It also discouraged verification by pushing employees to act fast and stay within email, avoiding a phone call or in-person check that could have exposed the fraud.
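The pressure tactic described above can be caricatured as a tiny keyword heuristic. This is a toy sketch for illustration only, not a technique from the study; the phrase list is an assumption built from the examples quoted in the article:

```python
# Toy heuristic: count urgency phrases of the kind the fake CEO email
# leaned on. The phrase list is illustrative, not from the research.
URGENCY_PHRASES = ("quick signature", "end of the day", "urgent", "asap")

def urgency_score(body: str) -> int:
    """Number of known urgency phrases appearing in the message body."""
    text = body.lower()
    return sum(phrase in text for phrase in URGENCY_PHRASES)

print(urgency_score("Need a quick signature by end of the day."))  # 2
```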
“Our research discovered that, on average, 56% of the UK workers surveyed could not detect the real emails from the phishing scams, with just half correctly defining the term ‘phishing’,” said Naveed Islam, Chief Information Security Officer at Dojo.
According to him, this highlights a stark gap in knowledge that can be addressed by investing in people and building their confidence around phishing. “Not prioritising the protection of their data and capital can pose significant risks to the areas where investment is being placed,” he adds.

Executives also fall for phishing emails
You might expect executives to be sharper at spotting phishing scams, and in some ways, they are. In the study, C-suite respondents correctly identified legitimate emails from Slack and Dashlane 58% of the time, compared to just 36% among non-executive employees.
However, two-thirds of execs (66%) failed to identify AI-generated phishing emails, despite 9 in 10 saying they were confident in their ability to do so. Among the worst performers were founders, 73% of whom fell for a scam email written by ChatGPT.

The human factor is key to overall cybersecurity
According to Daniel Houghton, Cyber Protect Officer at City of London Police, the biggest vulnerability isn’t outdated software — it’s people.
“The Hollywood image of a hacker furiously typing away on a computer is far from the truth - cyber criminals rarely hack systems, instead targeting people using phishing campaigns and social engineering,” he says.
He highlights that up to 88% of cybersecurity breaches can be attributed to human error. “Be that weak passphrases, poor digital hygiene, or clicking links in emails. That’s why cybersecurity starts and finishes with the people in your organisation,” adds Houghton.
What can businesses actually do?
- Train smarter: Go beyond annual security videos. Use realistic phishing simulations and teach employees to read email headers, not just spot bad spelling.
- Stay humble about AI: Everyone thinks they can spot a deepfake, until they can’t. Assume every inbox is a threat vector.
- Lock down email protocols: Use domain-based authentication (DMARC, SPF, DKIM) and ensure internal comms tools can’t be spoofed.
- Protect the frontline: Admins, receptionists, and payment handlers see the most scam attempts. Give them the most support.
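Two of the checks recommended above, reading email headers and relying on SPF/DKIM/DMARC, can be demonstrated with Python’s standard-library email module. The raw message below is a made-up example, and the two red-flag signals shown (failed authentication verdicts, a From/Return-Path domain mismatch) are a simplified sketch of what mail-gateway tooling inspects:

```python
# Sketch: parse a raw message with the stdlib email module and surface
# two teachable signals: the Authentication-Results header (SPF/DKIM/
# DMARC verdicts added by the receiving server) and a mismatch between
# the From domain and the Return-Path domain. RAW is a fabricated
# example message, not one of the study's test emails.
from email import message_from_string
from email.utils import parseaddr

RAW = """\
From: "Finance" <ceo@example.com>
Return-Path: <bounce@mailer-example.net>
Authentication-Results: mx.example.org; spf=fail; dkim=none; dmarc=fail
Subject: Quick signature needed by end of the day

Please wire the payment today.
"""

def header_red_flags(raw: str) -> list:
    msg = message_from_string(raw)
    flags = []
    auth = msg.get("Authentication-Results", "")
    for verdict in ("spf=fail", "dkim=fail", "dmarc=fail"):
        if verdict in auth:
            flags.append(verdict)
    from_dom = parseaddr(msg.get("From", ""))[1].rsplit("@", 1)[-1]
    ret_dom = parseaddr(msg.get("Return-Path", ""))[1].rsplit("@", 1)[-1]
    if from_dom and ret_dom and from_dom != ret_dom:
        flags.append(f"From/Return-Path mismatch: {from_dom} vs {ret_dom}")
    return flags

print(header_red_flags(RAW))
# ['spf=fail', 'dmarc=fail', 'From/Return-Path mismatch: example.com vs mailer-example.net']
```

In production this verification belongs at the mail gateway (enforced via a DMARC policy), but walking staff through a real header like this one is exactly the kind of training the first recommendation describes.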