
Meta is fighting fire with fire as it plans to combat deepfake scams using facial recognition. The technology, introduced last year, is now coming to the UK and EU for the first time.
The parent company of Facebook, Instagram, and WhatsApp began testing two facial recognition tools in other parts of the world last year.
Now, it is expanding those tools to the UK and EU for the first time in an effort to combat scams there.
This decision came after Meta had “concluded discussions with regulators,” a Meta representative told Cybernews.
Meta is deploying two different tools. One combats scams that misappropriate a celebrity’s image to lure victims to scam websites or fraudulent products, a tactic widely known as “celeb-bait.”
If Meta’s systems suspect that an ad featuring an image of a celebrity or public figure is potentially celeb-bait, the tech giant uses facial recognition to “compare faces in the ad to the public figure’s Facebook and Instagram profile pictures.”
Meta will then block the ad if it’s deemed a scam and “delete any facial data generated from ads for this one-time comparison.”
The tech giant also says that the facial data isn’t used “for any other purpose.”
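To make the described flow concrete, here is a minimal, hypothetical sketch in Python of how a compare-then-discard celeb-bait check could work. The embedding values, the similarity threshold, and names such as AdReview and is_celeb_bait are assumptions for illustration only, not Meta’s actual implementation.

```python
import math
from dataclasses import dataclass


@dataclass
class AdReview:
    """Facial data extracted from a flagged ad (hypothetical structure)."""
    ad_id: str
    suspected_public_figure: str
    face_embedding: list[float]  # output of some face-embedding model (assumed)


def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0


def is_celeb_bait(ad: AdReview,
                  profile_embeddings: list[list[float]],
                  threshold: float = 0.8) -> bool:
    """Compare the face in the ad to the public figure's profile pictures."""
    match = any(cosine_similarity(ad.face_embedding, ref) >= threshold
                for ref in profile_embeddings)
    # Per the policy described above, the facial data generated for this
    # one-time comparison is discarded immediately, whatever the outcome.
    ad.face_embedding = []
    return match


# Toy three-dimensional embeddings, for illustration only.
ad = AdReview("ad-123", "public_figure_42", [0.9, 0.1, 0.4])
profiles = [[0.88, 0.12, 0.41], [0.2, 0.9, 0.1]]
print("flag as potential celeb-bait:", is_celeb_bait(ad, profiles))
```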
Regarding the safety of users' facial data, a Meta representative told Cybernews that the company “immediately deletes any facial data generated from either the ads or the video selfies once they have been used to make the match, and during that process, the data is encrypted.”
Meta claims that “early testing with a small group of celebrities and public figures shows promising results,” and celebrities whose likenesses are popular for these types of scams will be notified in the coming weeks.
The other tool also relies on facial recognition, but it is designed to help users recover their Meta accounts. Instead of handing over copies of their IDs, Meta is now testing “video selfies” to help people verify their identity when they’ve been locked out of their accounts.
“The user will upload a video selfie, and we’ll use facial recognition technology to compare the selfie to the profile pictures on the account they’re trying to access,” Meta said in a blog post.
While this may sound troubling, as images of individuals could be misused if they fall into the wrong hands, Meta claims that the video selfie “will be encrypted and stored securely.”
Meta also claims that the video will be immediately deleted “regardless of whether there’s a match or not.”
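A minimal sketch of how such an encrypt, compare, then delete flow could look, assuming a stand-in face matcher and a toy cipher; none of the helper names or the encryption scheme below reflect Meta’s actual system.

```python
import hashlib
import secrets


def toy_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy XOR stream cipher for illustration only; a real system would use a
    vetted scheme such as AES-GCM. Applying it twice with the same key decrypts."""
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(d ^ s for d, s in zip(data, stream))


def verify_account_owner(selfie_video: bytes,
                         profile_pictures: list[bytes],
                         faces_match) -> bool:
    """Store the selfie encrypted, compare it to profile pictures, then delete it."""
    key = secrets.token_bytes(32)
    stored = toy_encrypt(selfie_video, key)      # encrypted while stored
    decrypted = toy_encrypt(stored, key)         # XOR cipher is its own inverse
    match = any(faces_match(decrypted, pic) for pic in profile_pictures)
    # Delete the selfie data whether or not there was a match.
    del stored, decrypted, key
    return match


# Stubbed face matcher that just checks byte equality, for demonstration.
print(verify_account_owner(b"selfie", [b"selfie", b"other"],
                           faces_match=lambda a, b: a == b))
```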
Facebook and Instagram users will have the opportunity to opt into the facial recognition services in the coming weeks, Meta said in a blog post.
Deepfake scams are proliferating rapidly and have cost individuals and organizations thousands, if not millions, of dollars.
The Guardian reported that a scam ring from the former Soviet state of Georgia had defrauded thousands of people in the UK, Europe, and Canada out of around $35 million after tricking them with fake celebrity adverts on Facebook and Google.
Individuals have also been scammed out of thousands of dollars on Instagram.
One notable case in France involved a woman named Anne, who was scammed out of $850,000 by bad actors using AI-generated images of Brad Pitt to defraud her via Instagram.