With AI the buzzword of the moment, the US Federal Trade Commission is warning firms to beware of making unsubstantiated claims.
Indian startup Engineer.ai was caught out faking claims to have created an AI-based app development platform, with reports of the company's suspect assertions emerging in 2019.
The startup said its “human-assisted AI” technique allowed customers to build more than 80% of a mobile app from scratch with just a few clicks and in less than an hour – an assertion that enabled it to raise nearly $30 million from various investors, including a subsidiary of SoftBank.
But after the company's chief business officer sued over the claims, it was revealed by the Wall Street Journal that the “artificial intelligence” used to create the apps was, in fact, the rather more natural intelligence of teams of India-based software engineers.
It's an extreme example of what can be termed 'AI-washing' – but by no means the only one. Indeed, according to a study of 2,830 European startups by consultancy MMC Ventures, 40% of companies describing themselves as 'AI startups' used virtually no AI at all.
There are a number of reasons why companies make such unsubstantiated claims. First and foremost, AI is one of the buzzwords of the moment, meaning that any startup claiming to use it is more likely to get the attention of investors. Indeed, according to MMC Ventures, startups using — or claiming to use — AI can attract as much as 50% more funding than other software firms.
Some companies, meanwhile, have every intention of using AI — they just haven't quite got there yet. And some don't fully understand what AI actually is, using the term to refer to machine learning (ML) or even simple algorithms. While definitions can be woolly, true AI is usually distinguished by its ability to cope with unstructured data.
FTC warning
Now, the US Federal Trade Commission (FTC) has stepped into the fray, with Michael Atleson, attorney in its advertising practices division, publishing a blog post warning companies to avoid making exaggerated claims about their use of AI.
"At the FTC, one thing we know about hot marketing terms is that some advertisers won’t be able to stop themselves from overusing and abusing them," he writes.
"The fact is that some products with AI claims might not even work as advertised in the first place. In some cases, this lack of efficacy may exist regardless of what other harm the products might cause. Marketers should know that – for FTC enforcement purposes – false or unsubstantiated claims about a product’s efficacy are our bread and butter."
As Atleson points out, AI-washing doesn't necessarily have to be as clear-cut as claiming that the technology is being used when it isn't. It can also, he says, simply mean exaggerating what an AI product can do, with claims still classified as inaccurate if they lack scientific support or apply only to certain types of users or under certain conditions.
In 2020, this problem was highlighted by British Medical Journal researchers, who found that many studies claiming that AI is as good as – or in some cases, even better than – human experts at interpreting medical images were of poor quality and arguably exaggerated, posing a risk for the safety of millions of patients.
Increasing scrutiny
AI-washing is named after greenwashing – the practice of making false or misleading environmental claims. And here, we're starting to see more scrutiny and more legal enforcement, with tighter regulation in the pipeline in the UK, the EU, and the US.
At the same time, investors are starting to take greenwashing more seriously, with 236 climate-related shareholder resolutions filed last year aiming to force companies to improve the quality of their climate disclosures.
The same has yet to happen in the case of AI-washing. However, the FTC's latest statement could mark a turning point, with Atleson warning that making fake AI claims could have serious consequences.
"Whatever it can or can't do, AI is important, and so are the claims you make about it," he said. "You don't need a machine to predict what the FTC might do when those claims are unsupported."