
Tech totalitarianism: are we close to the point of no return?


Recently, health sects have come to light in France, spreading disinformation in the post-COVID world – but, as any internet watcher will know, that is just the tip of the iceberg. Cybernews reached out to an intelligence expert to get his views on where the information wars are taking us.

Ruarigh Thornton is head of digital investigations at Protection Group International. He has spent the past half-decade trying to untangle the increasingly complex web of conspiracy theories, political extremism, and fake news that has gradually stretched itself to, it would seem, every corner of the internet.

Whether it be through concerns about health or politics or even a subconscious need for religiosity, social media forums and the like are powerful magnets for people all over the world – but, unfortunately for our species, this means they can also be used to divide and conquer. Never has the human race had so much potential to communicate ideas, and, as such, never has it been more vulnerable to manipulation and exploitation.

Looking at the recent reports from France24 about dangerous healthcare-related disinformation being spread online in the post-COVID world, the word "sect" jumped out at me. Is there indeed a religious dimension to these groups?

I think the comparison to religion is a good one. The way that these groups operate isn't identical to religion, but they give people an identity, which is what religion conveys to a lot of people as well. A lot of these groups provide significance and importance to people's lives because they give them a bit of insight, something that they think in many ways makes them superior to the rest of the world. When you look at a lot of these anti-vax conspiracy-style groups, or at the Behind the Curve documentary on Netflix about flat-earthers, the whole conclusion revolves around the fact that these people are well aware that it's not real. But because it becomes such a large part of their life, it becomes an identity – then it's basically impossible to escape from, once you pass that point of no return. And the same is true for the anti-vax-style conspiracists that coalesce in these health spaces.

Let me just pick you up on that, because you're saying the people involved in these groups, whether they be flat-earthers or anti-vaxxers, actually get to a point where they realize what they're peddling is bunk, but they can't back out… is that an ego thing, or is it that the rest of the culture steps in and says: “No, you're in it now. It's like the Mafia. You don't get out.”

I think it's a little bit of both. You start to form an identity around something, and it becomes a huge part of your life. And I think that's especially true of the COVID period, when people were spending extended periods of time online – the 9 to 5 with one laptop and then the 5 to 9 with a different laptop afterwards: you're just constantly exposed to these communities. And it really does become that identity, a thing that your whole life revolves around. It overrides all your best instincts in terms of having skepticism about the content that you're reading.

In terms of the facts, if you think of an example like nine out of 10 dentists recommending a certain brand of toothpaste: someone in this community will say, why have we not consulted the tenth dentist? Do you go out of your way to look for an alternative explanation to make the facts that you're seeing fit your worldview? It becomes like a sunk cost at some point, and it's really hard to pull people out of that. You can see examples in various subreddits for casualties of conspiracies, where people document their family members’ descent into these conspiracies, and you can really see the trajectory over time. Those exist for health and vaccine skepticism, for QAnon in particular, and for political stuff, and they're all thoroughly documented. It's not pleasant reading.

"If you mirror the evolution of disinformation with election cycles, we had 2012, which was all around overt fake news: things were either entirely real or entirely fake."

Ruarigh Thornton, head of digital investigations at Protection Group International

I suppose some conspiracy theories have been proven to be valid, but you'd call that investigative journalism. So how does one differentiate? Because obviously to flip the nine out of 10 paradigm, maybe one out of 10 of these theories has something legitimate behind it. And looking to the post-COVID era, the vaccine was rushed through, and there were legitimate medical concerns about such a hurried response to a global crisis. What kind of advice would you give to people that are faced with these dilemmas, as we all are, for separating the wheat from the chaff?

I think the problem in that scenario comes down to the checks and balances that exist. In the real world, people are less afraid to call someone out or challenge them, and you're naturally exposed to that because you're not surrounding yourself only with a community formed around an identity-style issue. Whereas when you seek out a space online that is particularly defined and exists only because of a singular issue, there's never any challenge to that worldview. So if you're in a Facebook group, or if you've curated a TikTok feed that is just anti-vaxxers, you're only going to see the anti-vax content, and it's only going to push you further and further down that road.

Whereas if you're down the local pub interacting with a wide variety of people, the chances are there's never going to be a consensus there, right? There's never a consensus about any political, social, societal, or economic issue in a pub, let alone around health and vaccine stuff. And so you have that natural challenge to your worldview, which basically makes you a bit more moderate and temperate. But without those checks and balances, you get what's called audience capture – and this is where it starts to get into the question of how this can be weaponized at a second level up.

If you think about the evolution of disinfo and these types of conspiratorial communities online, and you mirror the evolution of disinformation with election cycles: we had 2012, which was all around overt fake news – things were either entirely real or entirely fake. Then you have 2016, when it became bots, and everybody was terrified about automated disinfo online. And then 2020 was very much real people: real members of the militias that ended up storming the Capitol, all of these communities that had been conditioned to behave in a certain way by the news they were being fed. And the reason they were able to be conditioned to behave in this way is that they'd been captured.

They'd undergone this process where they'd coalesced around a particular issue, whether it was pro-Second Amendment, anti-Black Lives Matter, anti-immigration, anti-vax, anything at all. They'd been redirected from mainstream spaces, the equivalent of the pub online, if you want to think of that as Facebook. And they'd been taken to Discord servers, Telegram channels, closed forums, chat groups, threads where you lose those checks and balances. They'd been captured.

And what happens at that point is that a threat actor, whether domestic or foreign, is able to dictate behaviors and instructions to them once they've been conditioned in a certain way. You end up with a Telegram channel that has 5,000 anti-vaxxers from Arizona in it, and you tell each one of them: go out and tweet something negative about Pfizer. What you're going to get is what looks like original content at scale, because you get 5,000 slightly differently worded attack narratives that follow the same central line. Whereas if you'd automated that process four years previously, you'd have had the same hashtags, identical content, identical behaviors, and identical posting patterns, which is much easier to track.

So it shifted from inauthentic to authentic, because it's real people saying real things, but it sits in that hybrid gray zone of inauthenticity that exists in the middle. And that's where these groups become so dangerous, so powerful: they act as a threat actor in their own right, because they are organic conspiracists promoting this content themselves, trying to recruit more people into the conspiracy and spread that worldview – a lot of them really believe it. But they're also able to be weaponized and used as a threat vector by a foreign state actor that's trying to push a particular wedge issue within society, one that will divide and conquer, push people further apart from each other, and further the polarization process that exists at the moment.

And that's not unique to groups around anti-vax or health: the same exists for Brexit, immigration, US political cycles, and the dispute around the Nord Stream pipeline. There are groups of believers on either side; each has been captured and can subsequently be weaponized however that threat actor wants.

"Russia relates more to vaccine diplomacy, when they were trying to ship the Sputnik vaccine at scale: they heavily targeted Serbia with amplifying skepticism [...] and that's because they don't want Serbia to be further closely integrated with the EU."

Thornton

Are you seeing any particular major players, whether it's post-COVID disinformation or election cycles? Can you name any groups that you've identified?

I think if you're looking at who those states are, it tends to be the big three that are responsible for a lot of this – Russia, Iran, and China – all behaving in slightly different ways. And they all have different objectives when it comes to amplifying anti-vax content.

For Russia, as an example, it relates more to vaccine diplomacy: when they were trying to ship the Sputnik vaccine at scale, they heavily targeted Serbia, amplifying skepticism around the safety of Western-manufactured vaccines so that they could create a natural tendency for people to look towards the Russian alternative. And that's because they don't want Serbia to be more closely integrated with the EU; it was a means to an end for them in that regard. China was doing the same thing in the Philippines, trying to use the vaccine as diplomacy: amplifying organic domestic conspiracy theories around the safety of certain vaccines the US was offering, because they were trying to push the US further away. Iran was doing it to try and deny that COVID was a problem within its own society.

And at the same time, because it coincided with an election cycle, everyone was trying to use it as a wedge issue within the US. Vaccine skeptics have always existed; they've always been a community, but they weren't very vocal externally: they were contained to their own little quiet corners of the internet, if you like. What this did is really mainstream it, in terms of the scale that we saw. And if you're asking who they targeted, they went after a whole variety.

I think there's a tendency to think of internet threat actors as someone sat in a basement in a quiet corner, typing away, causing harm, whereas the majority of the communities they went after in 2020 were just normal people that spend time on social media and get drawn into these places. A whole load of people got a Peloton bike during the pandemic because, hey, let's stay fit at home. It comes with an app, and the app comes with a community. It comes with message boards and hashtags that you can follow: no mask, no vaccine, 175 followers in this neighborhood.

And so you can start to capture people from a Peloton app and say: hey, I saw you joined the No Mask group, come and join this community as well. And then you DM them a link, and they fall into it, and it goes from there. They're capturing people from all over and bringing them in. There's often a tendency to think of top-level social media as being the problem, but it's quite difficult to pull people off Facebook because they like being on Facebook, they like being on Twitter. It's much easier to pull people off a dedicated space, like a particular app or a particular service. That's a much smaller scale.

"They hire people on minimum wage to post and drown out stuff. The way the future's going is not content at scale - it's going to be behavior at scale."

Thornton

What about the AI-driven GPT-3 writing tool? Cybersecurity people are quite concerned that it will be used to scale up the kind of disinformation campaigns that you are talking about. I just wanted to put that to you, because it sounds from what you are saying that maybe it is not going to make such a splash – it seems that good, old-fashioned human beings are already doing the legwork anyway?

So there's a tendency within this industry to pivot towards the tech, both in outlining what the problem is and what the potential solution is. Because if you can convince someone that they have a tech problem, you can sell them a product, the tech solution, on the end of it. However, there is no tool that can detect disinformation at scale, not in the sophisticated form in which it exists at present.

And I think, honestly, the same is going to be true when we look at the future of things like ChatGPT and its potential. The same argument has existed around deepfakes, right? For six years we've been told that deepfakes are going to end the world, end media integrity and legitimacy as we know it. But how many times have we seen a politicized deepfake in the wild in that period? Twice, maybe three times. Have they been immediately detected? Yes. Have they done any damage? Not really. It's sort of the same when it comes to things like ChatGPT: a lot of it still sits in that uncanny valley. You cannot replace real content written by real people in terms of the natural engagement and identity that it generates.

Even when you have someone that's willing to lay down all of their digital resilience because the content matches their worldview, there's still the uncanny valley trigger that goes off to some extent. And so, no, I don't think the future is entirely GPT dominated and destroyed. I think it can play a role, and of course it can help to some extent with scaling.

But look at the model that exists in a lot of Southeast and East Asian geographies at the moment: they basically have troll armies. China has an entity called the 50 Cent Army, or Wu Mao, which is tens of thousands of people paid minimum wage to post pro-CCP nationalist sentiment and drown out dissenting voices. Vietnam has exactly the same thing: it's called Task Force 47, and it's a hybrid military-civilian unit. And the Philippines is running similar entities that aren't even military or government led – they're just these huge, almost Accenture-style call center content farms, where they hire people on minimum wage to post and drown stuff out. The way the future's going is not going to be the content-at-scale stuff – it's going to be behavior at scale.

Whenever you try to push a particular inauthentic narrative, at some point the audience understands its inauthenticity. So instead, what you have to do is drown out the ability for a counter-narrative to get into that space. Say there's an election coming up in 48 hours, and you've got a troll farm of 10,000 people. Don't use them to attack the integrity of your opponent. Use them to mass report your opponent's profile and have it taken down: trigger the platform's automatic response, drown them out, and basically create your own moratorium 48 hours before an election – your own vacuum into which you can push whatever the hell you want, because now you control the information space, not through content but through behavior. There can be no opposition to it. That's the way we're seeing the trends go, and when you look ahead towards 2024, if we're still thinking about US election cycles, that's the way it's going.

"If you look at the Russian invasion of Ukraine, there have been state aligned or proximal groups mass reporting, doxing, harassing antiwar activists within RUssia, families of Ukrainian soldiers, people connected to foreign fighters who have joined Ukraine as well."

Thornton

So if I've understood correctly, this tactic, if deployed, would dupe platform moderators into excluding possibly more legitimate voices and then create that space like a preemptive strike?

Because a lot of these thresholds, as threat actors have figured out, are automated if the reporting happens at a certain scale – so there's no human in the loop of that process. What platforms can do, and have started to do, is preempt the preemptive strike: profile those who might be the targets of such campaigns during particularly sensitive periods – civil unrest, protests, elections, that kind of thing. But yeah, it's basically the evolution of the cat-and-mouse game in that space, as to how it's going to go. And states are well into this space. If you look at the Russian invasion of Ukraine, there have been Russian state-aligned or state-proximal groups mass reporting, doxing, and mass harassing antiwar activists within Russia, families of Ukrainian soldiers, and people connected to foreign fighters who have joined Ukraine as well. For now that's happening in a conflict scenario; when it transfers to an election scenario, it becomes particularly dangerous for democracy, society, the integrity of institutions, that kind of thing.

So that being the case, love him or loathe him, is Elon Musk on to something when he talks about broadening freedom on Twitter so that people can't get shunted off it? But then, doesn't that open up the risk of allowing extremists freer rein?

It's a little bit of a zero-sum game. You need to have an element of inoculation. Kids post-pandemic are getting every cold and flu and virus under the sun, because they didn't have that kind of exposure where they were outside playing with friends, eating dirt, touching a whole load of stuff. You need to have that same element within a digital environment, because if you exist within a purely sanitized information space online, you don't have that inherent natural digital resilience. The reason that people who grew up spending too much time online are that much more resilient to things like disinformation and influence is because they understand the dynamics that are at play. They understand the significance of a piece of content transiting from a chat board to Reddit to Twitter, rather than something that's originally posted firsthand on Twitter. They have that inherent understanding of it.

I guess if you look to the future, it's hopeful: future generations, having grown up entirely online, will have that slightly increased inherent digital resilience. I go home for Christmas, and my dad is, you know, the person who told me not to believe everything you see on TV when I was a kid. He now believes everything on his social media feed, because it's that much more personal: it's tailored, it's on the phone, it's in your space, basically. Whereas the next generation won't have that disconnect, because they've grown up with it as an inherent part of their world experience. The debate over whether Musk is doing harm or good, I think, still tilts towards harm at this point, but that could change in the future.

But I think when you're looking at things like conspiracy and health, at those who are prone to radicalization, you can't draw the same age-defined demographic around it. To give an example from France: there's a movement that has sought to capture former members of the Yellow Vest protest movement, because that's already an organic movement that exists and has crossed the threshold from digital coordination to real-world activity – protest gatherings, civil unrest. And so they've sought to capture them and turn them into an anti-vax movement, or an anti-health pass movement, when France was trying to introduce the health pass in order to travel.

They're all in their fifties, sixties, seventies, and above, organized in a Discord server where they coordinate the creation and sharing of propaganda, and the coordinated posting of that propaganda to various social media platforms. They coordinate real-world protests where they engage and demonstrate, and they're also coordinating running for local office, so they can start to get into the local authority, the local council, the local mayor's office, school boards, and begin to change the way their ideology is normalized, mainstreamed, legitimized. So it's not just some particular generation in their thirties: there's a whole spectrum of normal people that have been turned into threat actors. And I think anyone can be prone to it. I know people the same age as me that would be immediately drawn in by the first conspiracy they happened to stumble into, and I know people that would be hella skeptical of anything in that kind of space. I think it's more the character of a person and their proneness to radicalization, rather than necessarily an age thing, that draws them in.

Are we inevitably headed toward some form of tech totalitarianism or dystopia? Is this avoidable? What can people do to stop this from happening?

I've spent every day of the last five or six years investigating this stuff: digging into these communities, understanding how influence operations are put together, looking at the trajectory, the evolution, where it's going in the future. I'm not super optimistic about the future, if that's one way of putting it. You can talk about the hard left and the far right: Russia and Iran are particularly aware of horseshoe theory – at some point you can pick a narrative on which the two converge, and that only worsens the process going forward. Look at the rise of any political issue within the UK and at the lack of a middle ground around that issue – education, immigration, the NHS, Brexit, who should lead which party, how many checks and balances and how much oversight on ethics there should be. You almost hit a point of no return.

We've sort of been waiting for society to hit a learning moment: when you hit rock bottom and realize this is where we have to pull back and improve digital resilience, education around digital skills and literacy, transparency, and accountability. I thought we would hit it with Brexit, honestly, and then I thought we might hit it with various leadership scandals in the Conservative Party. We've not hit it yet. If you compare the UK and the US, I thought the US would hit it with the 6th of January [Capitol Hill riots]. They're still going.
