Meta’s efforts to create a watermark to detect AI-generated content aren’t going unnoticed. But how effective will this new watermark be in catching AI content?
While AI-generated content has its advantages, such as efficiency, it has also become the subject of stories about how AI tools are used to spread misinformation and even scam people.
Meta, whose platforms have long grappled with disinformation and scams, has introduced AudioSeal, a tool that watermarks AI-generated speech.
AudioSeal is billed as one of the first tools able to pinpoint AI-generated segments within audio. This technology could help tackle the ongoing problem of disinformation and fraud in voice content.
While the new system shows potential, it is still a work in progress. Reports indicate that the watermark can still be altered or removed entirely, and because there is little precedent for this kind of technology, no clear standard exists for how it should work.
Meta also stated that it’s not ready to use the watermark on content created using its tools or release it for wider audiences, as first reported by MIT Technology Review.
Its creators will present it in Austria at the International Conference on Machine Learning in July. But those curious about how the new system works can try out AudioSeal, which is already available on GitHub.
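For those who want a feel for what "trying it out" involves, here is a minimal sketch of embedding and detecting a watermark with the open-source package. The model names and function calls follow the facebookresearch/audioseal README at the time of writing and should be treated as assumptions that may change as the project evolves; the random tensor stands in for a real speech clip.

```python
# Minimal AudioSeal sketch: watermark a clip, then detect the watermark.
# API names are taken from the facebookresearch/audioseal README (pip install audioseal)
# and are assumptions that may change in later releases.
import torch
from audioseal import AudioSeal

# Stand-in for a real speech recording: 5 seconds of 16 kHz mono audio,
# shaped (batch, channels, samples) as the models expect.
sample_rate = 16_000
audio = torch.randn(1, 1, 5 * sample_rate)

# Embed an (intended to be) imperceptible watermark into the signal.
generator = AudioSeal.load_generator("audioseal_wm_16bits")
watermark = generator.get_watermark(audio, sample_rate)
watermarked = audio + watermark

# Check whether a clip carries the watermark.
detector = AudioSeal.load_detector("audioseal_detector_16bits")
score, message = detector.detect_watermark(watermarked, sample_rate)
print(f"Watermark probability: {score:.2f}")  # close to 1.0 for watermarked audio
```

The detector returns a probability that the clip is watermarked along with a decoded message, which is what makes localized detection of AI-generated speech segments possible.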