US mulls measures to contain AI as analyst warns of rise in digital scams


The National Artificial Intelligence Advisory Committee (NAIAC) has delivered its first report to the US president and set up another body tasked with regulating the use of AI in policing. The development comes as fraud analysts warn of the technology’s increasing use by criminals to conduct phishing attacks.

The NAIAC report calls for the Biden-Harris administration to focus on developing “trustworthy AI” to shore up the superpower’s global position. It also said it would “establish a subcommittee to consider matters related to the use of AI in law enforcement.”

NAIAC added: “This subcommittee will provide advice to the President on topics that include bias, security of data, the adoptability of AI for security or law enforcement, and legal standards that include those that ensure that AI use is consistent with privacy rights, civil rights and civil liberties, and disability rights.”

Commenting on the disclosure, the National Institute of Standards and Technology (NIST) said research and development, greater international cooperation, and support for workers in the “AI era” would also top the list of priorities for NAIAC.

“We are at a pivotal moment in the development of AI technology and need to work fast to keep pace with the changes it is bringing to our lives,” said US Deputy Secretary of Commerce Don Graves.

“As AI opens up exciting opportunities to improve things like medical diagnosis and access to healthcare and education, we have an obligation to make sure we strike the right balance between innovation and risk.”

Not before time

Meanwhile, Sift, which provides digital fraud protection services to Fortune 500 companies, warns in its latest report that AI content generators, another key area of focus for NAIAC going forward, are contributing to the rise of increasingly sophisticated social engineering scams.

“The breakthrough technology, powering popular chatbots like ChatGPT and Bard, uses algorithms to generate original content in the form of text, code, images, audio, and video based on virtually any given prompt,” it said.

This capability, Sift believes, is behind the rise in phishing and other social engineering scams observed since ChatGPT went viral in November 2022.

Sift claims that 68% of its clients say they’ve noticed “an increase in the frequency of spam and scams in the past six months,” and adds that it was forced to block 40% more fraudulent content on their behalf in the first three months of 2023 than it did in the whole of 2022.

“Generative AI is proving to be a game changer for fraudsters,” it said. “Its ability to create conversational language free of spelling, grammatical, or verb tense errors makes it difficult for the average person to distinguish this ‘synthetic media’ from the authentic. This is creating a flood of disinformation and scams.”

It added: “When AI technology is available to anyone, its uses are nearly limitless. Tools like ChatGPT represent an endless network that benefits from instant knowledge-sharing capable of doing the work of humans at an entirely inhuman speed.”

Next-level scamming

Other AI-enabled tricks include voice-cloning and deepfake scams, in which fraudsters digitally impersonate a victim’s loved ones to coax out sensitive details such as credit card numbers, leaving targets wide open to fraud.

“Once a fraudster is able to successfully phish account credentials and/or payment information, they’ll use it to access the victim’s accounts and make unauthorized purchases,” said Sift.

The fraud analyst stresses that anyone and everyone is vulnerable, from an unwary individual targeted by a phishing attack to an organization subjected to business email compromise.

“The emergence of AI-generated emails impersonating executives, coupled with employees’ poor password hygiene and low reporting rates, makes these scams a significant — and fast-growing — risk for businesses,” said Sift.

