OWASP has published a guide discussing how AI is being exploited to execute cyberattacks and how to address these risks.
The rapid development of generative AI technologies has introduced radical changes to business operations and individuals' lives. A notable innovation was the ability to generate convincingly realistic images, videos, or audio using AI tooling.
While this provides numerous advantages to businesses and individuals alike, it has also opened the door to threat actors who can exploit the new technology to execute various malicious actions, specifically social engineering attacks, such as phishing and fraud schemes.
OWASP has already published a detailed guide about the top 10 security risks of large language models (LLMs).
The guide covers the critical vulnerabilities often seen in LLM applications, highlighting their potential impact and spread in real-world applications. However, with the increased risks of utilizing AI technologies to produce convincing deepfake content for malicious purposes, OWASP has found it necessary to publish another guide that discusses the adversarial use of AI technology to execute cyberattacks rather than focusing on the vulnerabilities within AI systems.
What is deepfake content, and how is it produced?
A deepfake is any synthetic media content – primarily images, videos, or audio – generated using advanced AI technologies such as deep learning and neural networks, which often employ generative adversarial networks (GANs).
Although similar AI technologies drive text generation, it is typically classified separately within generative AI because it produces written content rather than visual or auditory replication.
The key distinction between deepfakes and conventionally manipulated content (also known as cheapfakes) lies in their use of sophisticated AI architectures. Traditional content manipulation utilizes editing software programs like Photoshop, while deepfakes utilize AI technologies to analyze and replicate patterns in data.
Deepfake content is generated using a plethora of AI tools. Here are the most prominent:
- Text content: ChatGPT, Claude, and Google Gemini
- Images: DALL·E 2, Stability AI, and Wombo
- Video: Synthesia, Deepbrain AI, Elai, and Pictory
- Audio: AudioCraft, AssemblyAI, AWS Transcribe, and ElevenLabs
Objectives of deepfake attacks
The OWASP guide identified four objectives that cybercriminals aim to achieve by maliciously utilizing deepfake technology:
- Financial gain through fraud by impersonation
- Job interview fraud
- Impersonation to further cyberattacks (such as initial access)
- Mis/Dis/Mal information
The guide suggests guidelines for handling each type of deepfake incident. Although the preparation phase is the same for all four deepfake event types, the remaining phases (detection and analysis; containment, eradication, and recovery; and post-incident activity) are event-specific.
Preparation
Organizations must assess their exposure to different types of deepfake threat activity, such as:
- Authentication evasion: For example, using an AI-generated voice to convince a technical support employee to reset the target user's account password.
- Impersonation: Such as a CEO fraud scheme in which the attacker impersonates an executive to request fraudulent transfers from the CFO. A recent example occurred in early 2024, when a finance worker at a multinational company was deceived into transferring $25 million to fraudsters who used deepfake technology to impersonate the company's chief financial officer during a video conference call.
- Reputational damage: Impersonating key employees or top management and circulating video/audio recordings of them making false statements about the company or hateful remarks, in order to damage the company's reputation with the public.
- Deepfake employment interviews: Threat actors use deepfake videos and stolen personal data to convince HR personnel to hire them during online job interviews. The ultimate aim is to gain some level of access to sensitive corporate data or IT systems through the new job role.
- Misinformation: Circulating fabricated audio, video, and images generated with AI technology to spread rumors that affect a company's stock price or deter other companies from partnering with it. Misinformation can also spread through text content, such as fake news.
The key preparatory steps for handling such incidents include risk analysis, defense assessments, incident response planning, and employee education.
Risk analysis
Each organization faces deepfake risks that depend on its business type (banking, manufacturing, healthcare, etc.), media and political exposure, business history, threat actors' motivations, and overall susceptibility to deepfake attacks.
Defense assessments
Assessing an organization's defense against deepfake threats requires a comprehensive review of security policies, work procedures, security controls enforcement, and auditing methods in the following business areas:
- Sensitive data disclosure: Review your business policies on sharing and accessing sensitive information, including HR and third-party provider information.
- Helpdesk: Audit sensitive workflows such as password reset routines, authorizing computing devices for multi-factor authentication (MFA), and how authentication failures are handled.
- Financial transactions: Audit how financial transactions are executed within your company and ensure strict identity verification before funds are released to external parties (see the verification sketch below).
- Event response: Evaluate how your organization responds to deepfake incidents. This includes its detection, communication, and containment strategies.
These measures should be regularly audited through assessments and employee interviews to gauge their effectiveness against the evolving range of deepfake attacks.
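As an illustration of the kind of control a financial-transactions audit should confirm, the sketch below shows a hypothetical out-of-band callback gate applied before a transfer is released. The function names, threshold, and channel list are assumptions made for illustration; they are not part of the OWASP guide and would need to be mapped onto your own payment workflow.

```python
# Minimal sketch of an out-of-band verification gate for high-value transfer
# requests. All names and thresholds are hypothetical; adapt them to your own
# payment workflow and employee directory.

from dataclasses import dataclass

CALLBACK_THRESHOLD = 10_000  # transfers at or above this amount need a callback


@dataclass
class TransferRequest:
    requester: str        # person the request appears to come from (e.g. "CFO")
    amount: float
    beneficiary_iban: str
    channel: str          # "video_call", "whatsapp", "email", ...


def requires_out_of_band_check(req: TransferRequest) -> bool:
    """High-value requests, or requests arriving over an easily spoofed
    channel, must be confirmed via a known phone number on file."""
    risky_channel = req.channel in {"video_call", "whatsapp", "sms"}
    return req.amount >= CALLBACK_THRESHOLD or risky_channel


def release_funds(req: TransferRequest, callback_confirmed: bool) -> str:
    if requires_out_of_band_check(req) and not callback_confirmed:
        return "HOLD: call the requester back on their directory number first"
    return "RELEASE: verification policy satisfied"


# Example: a deepfake video call demanding an urgent transfer is held until
# the finance team confirms it with the real executive out of band.
request = TransferRequest("CFO", 250_000, "DE00...", "video_call")
print(release_funds(request, callback_confirmed=False))
```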
Establishing a deepfake incident response plan
Developing a customized response plan ensures your organization can react promptly and effectively when faced with a deepfake attack. This plan should include the following key points:
- Incident identification and verification: Establish workflows or protocols to quickly verify whether a particular piece of media is authentic or synthetically generated, using detection tools and human expertise.
- Response escalation: Define clear guidelines for escalating deepfake incidents to the relevant teams: legal, technical, communications, or executive leadership (see the routing sketch after this list).
- Mitigation strategies: Identify what actions your organization should take to contain the impact of a deepfake incident, such as issuing counter-statements, disabling compromised user accounts, or notifying stakeholders to limit legal liability and public embarrassment.
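The escalation guidelines above can be captured as a simple routing table. The sketch below is a minimal, hypothetical example of how a confirmed deepfake report might be routed to the teams named in the response plan; the incident categories and team names are assumptions, not prescribed by the guide.

```python
# Minimal sketch of deepfake incident triage: route a verified report to the
# teams defined in the response plan. Categories and team names are hypothetical.

ESCALATION_ROUTES = {
    "financial_fraud":          ["security", "finance", "legal"],
    "executive_impersonation":  ["executive_leadership", "communications", "legal"],
    "interview_fraud":          ["hr", "security"],
    "disinformation":           ["communications", "legal"],
}


def escalate(category: str, verified: bool) -> list[str]:
    """Return the teams to notify once analysts confirm the content is synthetic."""
    if not verified:
        return ["security"]  # unverified reports stay with the security team
    return ESCALATION_ROUTES.get(category, ["security"])


print(escalate("executive_impersonation", verified=True))
# ['executive_leadership', 'communications', 'legal']
```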
Deepfake awareness training
Educate your employees on detecting synthetic media indicators and understanding their organizational impact. Personnel must recognize specific indicators in suspicious content, including unnatural speech patterns and awkward voice transitions (pauses) in audio recordings.
In visual content, they should watch for facial asymmetry, inconsistent lighting, misaligned features, and blurred backgrounds. Additional indicators in AI-generated images and video include malformed hands and fingers, distorted backgrounds, and irregular movement of hair or clothing.
Organizations should establish structured response procedures where employees document suspicious deepfake content, avoid sharing it, and report immediately to designated teams such as legal, security, communications, or top management. Your organization must maintain clear reporting routes and define specific roles within the incident management plan.
Regular testing of these procedures ensures organizational readiness against synthetic media threats.
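One way to make the "document, don't share, report" procedure concrete is a structured report record that employees (or an internal reporting form) fill in and route to the designated team. The field names below are illustrative assumptions rather than a format defined by OWASP, and would normally map onto your existing ticketing system.

```python
# Minimal sketch of a structured report for suspicious synthetic media.
# Field names are illustrative; map them to your own ticketing workflow.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class SuspiciousMediaReport:
    reporter: str
    received_via: str              # e.g. "email", "whatsapp", "teams"
    claimed_sender: str            # who the media pretends to be from
    indicators: list[str]          # observed cues (odd pauses, lighting, ...)
    media_reference: str           # evidence locker / ticket ID, never the file itself
    reported_to: str = "security"  # designated response team
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


report = SuspiciousMediaReport(
    reporter="j.doe",
    received_via="whatsapp",
    claimed_sender="CEO",
    indicators=["unnatural pauses", "inconsistent lighting"],
    media_reference="EVID-1234",
)
print(report.reported_to, report.indicators)
```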
Event-specific guidance
In this section, the OWASP guide provides detailed instructions for each phase (detection and analysis; containment, eradication, and recovery; and post-incident activity) of handling deepfake incident events. As noted above, each deepfake event type requires different measures, although the phase names are the same for all of them.
Financial gain through fraud by impersonation
Financial gain through fraud by impersonation has gained attention as the primary threat vector in corporate environments. In such attacks, threat actors impersonate one of the company's C-suite executives, such as CEOs and CFOs, or other key personnel with the authority to execute critical financial operations. Under this scenario, threat actors utilize deepfake technology in two ways:
Real-time communication fraud
In this attack, cybercriminals use AI-cloned voice and video to impersonate a CEO during an emergency video conference or phone call. The typical aim is to instruct the finance team to transfer money to close an urgent deal, while the actual recipient is the attacker's bank account.
Asynchronous transaction requests
In this attack type, threat actors avoid real-time communication to deceive their targets. Instead, they utilize delayed communications through video recordings, impersonating key personnel such as the CFO, and demand urgent wire transfers. These requests commonly claim to address immediate business needs, such as closing a critical acquisition deal that requires instant action to prevent business losses.
The synthetic video recordings are generated using deepfake technology and distributed via video messages or email attachments to targeted recipients.
A notable example is the incident involving Beazley's CFO, which shows how deepfakes can be used to execute sophisticated cyberattacks. The attack began with a synthetic video message sent via WhatsApp, pretending to be from the organization's CEO. When the video call failed, the perpetrator shifted to text-based messaging and fabricated a critical business transaction scenario requiring an immediate financial transfer. To add credibility, the attacker assured the CFO that a lawyer would contact them to assist with the transaction.
Impersonation for cyberattacks
In this attack scenario, deepfake technology is used to create new accounts and take over existing ones. For instance, threat actors can use deepfakes to conduct social engineering attacks such as phishing, bypass biometric authentication mechanisms (e.g., voice and facial recognition), or perform reconnaissance on a target to gather more information about them or their company for follow-on attacks, such as planting ransomware.
The Deepfake Offensive Toolkit is an example of a tool that injects controllable deepfakes into virtual cameras to bypass biometric verification checks.
Other notable real-world incidents involve impersonating executives on video calls with employees to extract sensitive information about operations, security measures, business strategies, or trade secrets.
Job interview fraud
The rise of synthetic media in job interviews represents a critical security threat to organizational recruitment processes. Threat actors use sophisticated deepfake technology to impersonate qualified candidates during online interviews, aiming to win a position that grants them some level of access to sensitive corporate data and IT systems.
Some seek insider access to target IT environments to plant malware, such as ransomware, or to facilitate other prolonged cyberattacks, such as advanced persistent threats (APT).
Mis/Dis/Mal Information
The emergence of generative AI technology has expanded the scale and sophistication of disinformation campaigns, enabling threat actors to pursue a range of malicious objectives:
- Political manipulation: Interfering in elections by spreading fabricated media to influence voters, destabilize governments, and undermine public trust in democratic processes.
- Corporate sabotage: Spreading fabricated news or announcements to impact stock prices, disrupt market stability, or damage competitors' reputations.
- Social engineering and fraud: Employing extortion tactics by threatening organizations with the release of fabricated content designed to harm their reputation unless they pay money to the attackers.
- Health disinformation: Circulating false claims to undermine public trust in healthcare institutions, for example, discouraging people from taking vaccines by fabricating adverse effects, which can also affect the stock value of the pharmaceutical companies producing the vaccine.