AI threats pushing cyber pros to seek legal safeguards


Cybersecurity executives at the world’s leading firms say they are taking legal steps to protect themselves from the “unprecedented” pressures of a shifting threat landscape.

Increased governmental and corporate scrutiny is taking a personal toll on chief information security officers (CISOs), many of whom are now seeking legal protection, according to a new report from venture group Team8.

More than half of survey respondents (54%) said concern about liability had affected their personal well-being, and the same number reported “significantly” tighter scrutiny from their superiors.

As a result, 32% of those surveyed said they had actively taken steps to mitigate personal risk, such as seeking legal counsel, purchasing additional insurance, or adjusting their contracts.

The survey was carried out at Team8’s annual CISO Summit, attended by executives from companies such as Oracle, Barclays, SolarWinds, SentinelOne, and Anthropic.

“The latest SEC rulings and rising liability pressures have pushed CISOs into new and complex territory, intensifying both the legal and emotional challenges they must navigate,” said Ross Young, cybersecurity officer at Team8.

"This pivotal shift carries far-reaching consequences – not only for the well-being of CISOs but for the security and resilience of organizations globally,” Young said.

He added: “With AI-driven threats on the rise, the CISOs who excel will be those who can adeptly manage these mounting pressures while staying focused on the critical mission of protecting against an ever-evolving threat landscape.”

The report also found that 70% of CISOs saw their budgets increase in 2024 compared to 2023. Combined with heightened scrutiny of cybersecurity teams, this may indicate that companies are taking AI-driven risks more seriously.

Data protection “critical” concern

According to the survey, 75% of CISOs said phishing attacks pose the greatest AI-powered threat to their organization, while 56% pointed to deepfake-enhanced fraud.

Most respondents see a “lack of expertise” and “balancing security with usability” as the two main challenges organizations face when defending AI systems.

To address future AI-related threats, just under half of CISOs expect to purchase solutions for managing the AI development lifecycle. Many are also prioritizing solutions for third-party AI application data privacy, as well as tools to discover and map the use of “shadow” AI, or unsanctioned AI within an organization.

Most CISOs also said that data protection is a top issue not adequately addressed by existing solutions. Insider threats, third-party risk management, AI application security, human identity management, and security executive dashboards were also identified as “critical” areas of concern.

“Recent technological advancements have rapidly transformed the threat landscape, and CISOs are responding. As companies evolve from using third-party AI tools to developing their own AI applications, securing AI development pipelines and data infrastructure has become a priority,” said Amir Zilberstein, managing partner at Team8.

“At the same time, AI also introduces new, novel risks, such as deepfakes and social engineering, which are unfamiliar territory for CISOs. Balancing these emerging threats with ongoing issues like identity and third-party risk management will be a critical challenge in the coming years,” Zilberstein said.