
The internet puts information and a global community at your fingertips, but its Wild West-like environment can be a minefield of toxicity and hate speech.
The internet has grown to have a worldwide reach in the 21st century, creating more opportunities for hateful comments and toxic content to appear. A brand or organization that wants to build a healthy, positive community should focus not only on its content but on moderating online communication as well. That can be done effectively with AI and human finesse working together.
To better understand this moderation process, we tapped into the knowledge and expertise of Matthieu Boutard, president and co-founder of Bodyguard.ai – a free application that protects you from toxic content on social media platforms.
How did the idea of Bodyguard originate? What has the journey been like since your launch?
My co-founder, Charles Cohen, and I met when I was at Google. He is a computer genius, passionate about the impact of social networks in our lives and the technological challenge of moderating online content. In 2017, he laid the foundation for what would become the Bodyguard.ai technology, a uniquely contextual and instantaneous moderation solution. The following year he launched a free mobile app allowing anyone to protect themselves from online hate.
In 2019, we began building a team to further develop a solution – technology that identifies and blocks 90% of toxic user-generated content on social networks, in real time. By 2022, we had partnered with Keen Venture Partners (UK), Ring Capital (FR), and Starquest Capital (FR), raising a further €9 million to expand from our home market of France into the UK. In 2023, we launched our first teams in the US and have ambitious plans to scale from there.
Can you introduce us to your moderation solution? What technology do you use to detect harmful content?
Bodyguard.ai has designed and developed an artificial intelligence technology that protects individuals, communities, and brands from toxic online content (hate, spam, fraud, etc.). Its contextual and instantaneous moderation solution is unique because it unites the speed of a machine with human accuracy. This combination allows the solution to identify and block 90% of the toxic content targeting your social networks and platforms in real time.
Our unique intelligent moderation solution identifies anything that could be toxic, always taking into account the linguistic and relational context. Bodyguard.ai determines in real time who the content is aimed at, how it is toxic, and its severity; this enables moderation that is accurate, intelligent, and instantaneous.
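To make that decision process concrete, here is a minimal Python sketch (not Bodyguard.ai's actual code; every class, rule, and threshold below is an illustrative placeholder) of the three-part call a contextual moderation layer makes for each comment: who the content targets, what category of toxicity it falls into, and how severe it is.

```python
# Illustrative sketch of a contextual moderation decision: for each comment,
# decide the target, the toxicity category, the severity, and the action.
# All names and rules here are hypothetical placeholders, not Bodyguard.ai's API.
from dataclasses import dataclass
from enum import Enum


class Target(Enum):
    AUTHOR = "content author"       # e.g. the player or creator being replied to
    COMMUNITY = "other commenters"
    BRAND = "the brand itself"


class Severity(Enum):
    LOW = 1      # borderline: log it, keep it visible
    MEDIUM = 2   # hide pending review
    HIGH = 3     # block instantly


@dataclass
class ModerationDecision:
    target: Target
    category: str        # e.g. "insult", "spam", "threat"
    severity: Severity
    action: str


def moderate(comment: str, reply_to_author: bool) -> ModerationDecision | None:
    """Toy stand-in for a contextual classifier. A real system would use a
    trained model plus linguistic and relational context, not keyword rules."""
    text = comment.lower()
    target = Target.AUTHOR if reply_to_author else Target.COMMUNITY
    if "http://spam.example" in text:
        return ModerationDecision(Target.BRAND, "spam", Severity.MEDIUM, "hide")
    if any(word in text for word in ("idiot", "trash")):
        return ModerationDecision(target, "insult", Severity.HIGH, "block")
    return None  # not toxic: leave the comment untouched


if __name__ == "__main__":
    print(moderate("You played like trash tonight", reply_to_author=True))
    print(moderate("Great match!", reply_to_author=True))
```

In practice the keyword rules would be replaced by a trained contextual model; the point of the sketch is only the shape of the output the interview describes: target, type, and severity driving an instantaneous action.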
Should every website owner implement content moderation services, or is it only a necessity for certain websites?
We know that online hate and toxicity not only have a negative effect on people (over 40% of users leave a platform after their first encounter with toxicity) but also on revenue. For example, our sports clients' online moderation preserves the integrity of the game by protecting players and ensuring they can perform without distraction.
Furthermore, it enables fan engagement (even fan criticism) while deterring extreme, hateful, and damaging speech. Moderation also removes junk and spam links from the community, helping ensure that commercial revenue is circulated back into the sport rather than diverted elsewhere.
We suggest that every brand or organization using its channels to communicate with customers or create engagement have a strong internet safety policy. If you have a niche business like selling heavy farming equipment, perhaps it is not so urgent. But if you have a community of fans or customers that are talking about you, or supporting and looking for information or validation about you, then protecting your communications channels is vital!
How have recent global events affected your field of work? Were there any new challenges you had to adapt to?
Sadly, major events can lead to spikes in online toxicity. Where there is passion, there is almost always intolerance. Our platform has to constantly learn and adapt to new situations and habits.
On top of this, there is constant evolution in methods to bypass platforms’ algorithms, meaning our human moderation teams also need to respond quickly. With content arriving in dozens of languages and trained human moderators taking around 10 seconds to read and process a single comment, you can see why technology is needed to help share the load.
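To put that 10-second figure in perspective, here is a rough back-of-the-envelope calculation; the daily comment volume is an assumed example for illustration, not a figure from the interview.

```python
# How many full-time human moderators would a platform need without automation?
# The comment volume below is a hypothetical example.
SECONDS_PER_COMMENT = 10           # figure cited in the interview
COMMENTS_PER_DAY = 5_000_000       # assumed platform volume
WORK_SECONDS_PER_SHIFT = 8 * 3600  # one 8-hour shift

shifts_needed = COMMENTS_PER_DAY * SECONDS_PER_COMMENT / WORK_SECONDS_PER_SHIFT
print(f"~{shifts_needed:,.0f} moderator-shifts per day")  # ~1,736
```

At that scale, purely human review is clearly impractical, which is why automated filtering handles the bulk of the volume first and humans focus on the edge cases.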
Additionally, what practices or tools do you think are essential in combating these new threats?
It's essential to foster positive online interactions by analyzing and eliminating toxic content.
In addition to moderation solutions that help preserve their reputation and ensure positive online experiences, brands also need to deploy social listening effectively across their social networks and online platforms to better understand customers, improve their image, and avoid negative sentiment.
In your opinion, what are the most common reasons users share or even excessively spam inappropriate content online?
The motivations for sharing inappropriate content are varied. I’m lucky to have been born at a time when social media was not so prevalent, but my time at YouTube meant I saw some really extreme reactions and negativity towards creators and online communities. I think the internet is a powerful tool for connecting people, but there will always be bad actors who use its relative anonymity to share toxic views and content.
I also think there's an element of peer pressure with younger users to share more and more extreme content. We should work collectively to identify and call out poor online behavior. There is so much evidence that online toxicity impacts mental health, and we need to address it proactively.
What threats floating around social media do you find the most concerning at the moment?
On social networks, there is discrimination, hate, and violence. There is also polluting content and comments posted by people who want to take advantage of your brand power to tout personal advertising or spread nuisance ads, spam, fraud, and bots. But what concerns us more is content that contains racism, harassment, threats, insults, and anti-LGBTQ bigotry. This is the type of content that seriously hinders the interactions of Internet users and makes online communities feel unsafe.
What can average Internet users do to protect themselves while browsing? Are there details one should be vigilant about?
As our focus is primarily on brands and businesses, let me turn the question on its head and look at what they can do to protect their communities. Many of these can also be applied to your personal channels:
- Have clear guidelines that set out what won’t be tolerated in your channels.
- Pause before you react – saying something on the spot might mean saying something you regret, and with the lifespan of social media, it can soon get out of hand!
- Train and coach your teams – dealing with hateful content regularly can be mentally draining, so ensure you have the best training and support in place.
- Tools – you can ease the mental load on your team by setting them up with the right tools to prioritize, moderate, and support social media without being too drawn into negativity.
Tell us, what’s next for Bodyguard?
We are solving a serious, mission-critical issue: the reduction of online toxicity. Our goal is to protect brands and their communities from the extremes of the internet as they use it to achieve their ambitions.
We’ll be making the AI compatible with more languages and expanding from just text moderation to audio and video content. Our intention is to ensure the AI can deal with all existing forms of online expression as accurately as possible before we have to teach it about physical expressions, body movements, hand signs, etc., for the metaverse.
We already have the functionality to give individual users full autonomy over the degree of moderation applied to their accounts. We are passionate about leveraging our technological and industrial expertise to fight toxicity, as well as understanding and solving the challenges faced by our clients.