An AI lawyer that doesn’t deliver, a consumer review generator used for deception, and AI-powered money-making schemes that don’t make money. These are some of the cases targeted by the Federal Trade Commission (FTC), which is sending a clear message: AI can’t be used to trick, mislead, or defraud consumers.
The FTC announced actions against five companies that tried to profit from artificial intelligence (AI) hype, using AI technology in deceptive and unfair ways to supercharge conduct that harms consumers.
The law enforcement sweep is called Operation AI Comply.
“Using AI tools to trick, mislead, or defraud people is illegal,” said FTC Chair Lina M. Khan. “There is no AI exemption from the laws on the books. By cracking down on unfair or deceptive practices in these markets, the FTC is ensuring that honest businesses and innovators can get a fair shot and consumers are being protected.”
The commission noted that claims around AI are becoming more prevalent, such as promises that AI can enhance lives through automation and problem-solving.
The problem arises when firms seize on that hype to lure consumers into bogus schemes, or use AI tools themselves to turbocharge deception.
A company without lawyers offered a robot lawyer
The first case is against DoNotPay, an AI service that claimed to be “the world’s first robot lawyer.” The FTC argues that the product failed to live up to its lofty claims and couldn’t substitute for human lawyer expertise.
DoNotPay charged subscribers $36 every two months for a general membership and $49.99 per month for a Small Business Protection Plan. The company promised that the service would allow consumers to “sue for assault without a lawyer” and “generate perfectly valid legal documents in no time,” among other false claims. The company even said it would “replace the $200-billion-dollar legal industry with artificial intelligence.”
Yet, according to the complaint, DoNotPay never tested whether its chatbot’s output matched that of a human lawyer; the company didn’t even hire or retain any attorneys.
The settlement requires DoNotPay to pay $193,000, notify its consumers about the limitations of law-related features on the service, and stop making claims without evidence to back them up.
No five-figure gain, only pain
In the second case, the FTC filed a lawsuit against Ascend Ecom for defrauding consumers of at least $25 million. The company falsely claimed its “cutting-edge” AI-powered tools would help consumers quickly earn thousands of dollars a month in passive income by opening online storefronts.
The FTC alleges that Ascend operated under various names and charged consumers tens of thousands of dollars to start online stores on e-commerce platforms such as Amazon, Walmart, Etsy, and TikTok, while also requiring them to spend tens of thousands more on inventory.
For nearly all consumers, the promised five-figure monthly income never materialized by the second year. Their bank accounts were depleted, and many were left with hefty credit card bills. Meanwhile, Ascend pressured unhappy consumers to modify or delete negative reviews.
The FTC has also charged a similar business opportunity scheme, known as Ecommerce Empire Builders (EEB), that falsely claimed to help build an “AI-powered Ecommerce Empire.”
EEB offered training programs costing almost $2,000 and “done for you” online storefronts for tens of thousands of dollars, claiming that consumers could potentially make millions. In social media ads, EEB claimed that its clients could make $10,000 monthly, without evidence to back up the claims.
The FTC alleges that EEB’s CEO, Peter Prusinowski, has used consumers’ money to enrich himself while failing to deliver on the scheme’s promises of high income by selling goods online.
“Numerous consumers have complained that stores they purchased from EEB made little or no money, and that the company has resisted providing refunds to consumers, either denying refunds or only providing partial refunds,” the FTC said.
Yet another online storefront scheme, named FBA Machine, cost consumers more than $15.9 million based on deceptive earnings claims.
FBA Machine’s promoted “AI-powered” tools were supposed to help price products in the stores and maximize profits. According to the FTC, the scheme made extravagant claims about risk-free “seven-figure” businesses and cited testimonials of “$100,000 monthly profits.”
Users invested from tens of thousands to hundreds of thousands of dollars.
In all three cases, the federal courts issued orders to temporarily halt the schemes while the cases are ongoing.
Writing assistant deceives customers with fake reviews
In the last case, a company called Rytr sold AI “writing assistant” services. One of its advertised use cases was “Testimonial & Review” generation, which allowed paid subscribers to generate an unlimited number of detailed consumer reviews from very limited and generic input.
The FTC alleges that this was an unfair business practice: in many cases, the generated reviews contained false and deceptive information that would mislead potential customers making purchase decisions.
“Rytr’s service generated detailed reviews that contained specific, often material details that had no relation to the user’s input, and these reviews almost certainly would be false for the users who copied them and published them online,” the press release reads.
The FTC found that Rytr’s subscribers used the service to crank out hundreds, and in some cases, tens of thousands, of potentially false reviews, polluting the marketplace and harming both consumers and honest competitors. The FTC wants to prevent Rytr from engaging in similar illegal conduct in the future and bar the company from advertising and otherwise promoting any service to generate consumer reviews.
However, two of the five commissioners disagreed with the complaint, calling it misguided and noting that Rytr offers 43 “use cases” tailored to many specific purposes useful to consumers.
More AI misuses
The FTC also said that Operation AI Comply builds on a number of other recent cases involving claims about AI, including:
- Automators, another online storefront scheme
- Career Step, a company that allegedly used AI technology to convince consumers to enroll in bogus career training
- NGL Labs, a company that allegedly claimed to use AI to provide moderation in an anonymous messaging app it unlawfully marketed to children
- Rite Aid, which allegedly used AI facial recognition technology in its stores without reasonable safeguards
- CRI Genetics, a company that allegedly deceived users about the accuracy of its DNA reports, including claims it used an AI algorithm to conduct genetic matching