Cyberattacks on human perception: a threat to democracy


With the US elections a year away, a computer scientist and anti-disinformation specialist is warning that platforms are less prepared to counter voter manipulation than they were in 2020. That specialist, Wasim Khaled, CEO of Blackbird.AI, has another dire prediction: AI will likely lead to huge net job losses.

Khaled divides his professional time between advising US government departments and Fortune 500 companies. He is quite frank about why his own company doesn’t make its services available to individual members of the public — their inherent biases are so strong that he has little faith that even a high-tech solution can overcome them.

“Confirmation biases that people have about sharing content that enrages them or just say what you want to hear anyway — that's so ingrained in human nature that it's difficult for technology to find a way,” he says. “That's why this has been so difficult. It's also, unfortunately, why at Blackbird we don't really work with consumers at the individual level. There's too much personal ideology to really help them in the way that we could if they were more susceptible to looking at these ideas and actually understanding this stuff.”

"At Blackbird we don't really work with consumers at the individual level. There's too much personal ideology."

Blackbird CEO Wasim Khaled on the uphill struggle to convince social media users that they're being played

Instead, he prefers to work with public- and private-sector organizations, giving them the tools to spot trends in online disinformation in the apparent hope that they can act against those trends on behalf of the populace. But what Khaled himself has to say during our conversation leaves me in some doubt as to whether they will be able to.

“Let's just say hypothetically, there was some magic machine that could flag every piece of content that is misinformation, disinformation, true-false nuance, and flag it precisely,” he posits. “The issue is, people are so ideologically aligned that they wouldn't believe such a system anyway.”

That pretty much sets the tone for what is to be an unsettling but admittedly fascinating discussion of human interaction and sharing of ideas in the 21st century.

The disinformation wars

Khaled describes himself as a computer scientist and entrepreneur by background but says that in the past six years, he’s been focused on trying to combat the steady rise of disinformation on the internet.

“I've been working purely in this category of using AI and technology to better understand narrative warfare and the information ecosystem,” he tells me. “And Blackbird's mission is to empower and foster trust, safety, and integrity across the global information ecosystem.”

“These kinds of cyberattacks on human perception have increased and proliferated by threat actors over the last ten years, but more prevalently in the last five,” he adds.

With another seismic election due next year, as former president Donald Trump looks set to make a comeback despite being mired in controversy if not outright disgrace, is Khaled worried?

“At Blackbird, we are a very mission-driven company,” he replies. “Nobody who works here, including myself, is not constantly worrying [...] about the future of being able to maintain trust and integrity at all different levels of our society and government, based on what we are seeing in this category of information and narrative manipulation.”

"These kinds of cyberattacks on human perception have increased and proliferated over the last ten years, but more prevalently in the last five."

Khaled says sifting fact from fiction online has only gotten more complicated in the years following the 2016 presidential election

What concerns him specifically is the potential misuse of the very technology he champions, artificial intelligence, to spread and multiply political half-truths and lies, or what he calls “the use of generative AI to create election misinformation.”

Whereas paid networks, he says, have tightened up their game since 2016, unpaid networks — that is to say, social media platforms where anyone can post content for free with potentially extraordinary reach — will be far more vulnerable than they were in 2020. The reason, Khaled suggests, is the wave of post-COVID big tech layoffs over the past year, which has decimated the platforms’ moderation teams.

“Unpaid networks on the same platforms are in more danger than ever, because everyone's fired all of their moderation teams,” he says. “They've reduced Twitter, but also Facebook — tens of thousands of their fact checkers are out. So for that reason, I think a lot of the defenses that were in place in 2020, going into elections, are not going to be in place going into 2024.”

Reality check: democracy at risk

Khaled further implies that big tech platform leaders might be finding it all too easy to court populist opinion by slashing moderation teams and therefore being seen to shore up freedom of expression while conveniently cutting costs — effectively killing two birds with one stone. And possibly democracy too in the process.

“The moment you have moderators, I think people yell a foul of ‘censorship,’ when moderation is what makes the internet run,” he says. “I think people have forgotten that if you did not have moderation, the internet would be unusable. You can call moderation censorship today, but the key thing is, where do you set the line? Obviously, with things like nudity and violence and harmful imagery, it's already been established that that's not censorship. That's just moderation [based] on common decency. Nobody argues those.”

But where the line gets fuzzier, he says, is when large groups of people see content and don’t care if it's true or false, but simply whether it goes against their ideology or belief system, “and then true or false doesn't matter.”

He adds: “And so therein we have a human versus a technical problem, because I think a lot of these organizations are shying away from getting into the political fireball of having to go into Senate intelligence hearings with their executive teams and preparing for those things. And so they're kind of almost choosing to wipe their hands of it and say, ‘Okay, let's just stick to our old moderation and let this fly.’”

"The moment you have moderators, I think people yell a foul of 'censorship,' when moderation is what makes the internet run."

Khaled on why Facebook and Twitter getting rid of their moderating teams was a really bad idea

If he’s right about this, it’s a concerning trend. As Khaled himself states, 2016 was the year when everyone was caught off-guard, a year that saw both the surprise election of Trump and the no-less controversial Brexit vote in the UK, each at least partly influenced by Russian information operations and shady data crunchers like Cambridge Analytica, which served targeted political ads to people who had never even voted. Four years down the line, the US was at least somewhat better prepared, having learned from its mistakes and tightened up its monitoring.

“In 2016, those systems weren't in place — it's really only 2020 where they had built up a muscle mass of moderation and algorithmic analysis,” says Khaled. “And so, yeah, it's going to be, I think, different from both, right? The closer we get, there's going to be a bigger ramp-up of all the things we're doing to make sure the elections are safeguarded.”

Facebook, he thinks, will at least clamp down on paid content, the kind of targeted political advertising that many believe swung the vote on both sides of the Atlantic in 2016. But that doesn’t allay his concerns about partisan posters running wild and sharing propaganda to their hearts’ content on social media; as for other platforms such as Twitter, Khaled seems even less confident.

Is Threads the answer?

But Twitter and Facebook are very much the devils we know in social media. I’m interested to hear what Khaled has to say about some of the newer platforms that have emerged, most notably Mark Zuckerberg’s Instagram spillover Threads.

Meta’s latest salvo against Musk’s social media stronghold saw a meteoric rise in its first week, with more than 100 million initial sign-ups, followed by a sharp dip in returning users, but given its illustrious provenance, one presumably can’t count the new platform out. So what about Threads, I ask Khaled: where will it feature in the run-up to arguably the most important election in the world?

Too soon to say for sure, is his response, although he seems tentatively optimistic that it might shape up to be a somewhat ‘cleaner’ town square than some of its rivals.

"There is extremist content on Instagram that already has pretty big follower counts. These could be hyperpartisan accounts that have had their followers and then they poured it over to Threads."

Khaled speculates on the potential for Meta's new platform to be infiltrated by the same actors Twitter has recently welcomed back

“Like all Facebook platforms, the answer is a little bit murkier than it will be, say, a month from now,” he says of Threads. “So Twitter acts more like a conversation where everybody can join in, everybody can follow one another. And that was the whole nature of that network — anyone can self-subscribe.

“The gates that are around Threads, it's pulling from an additional platform that didn't really function as that kind of conversational platform. It's about pictures and, more recently, about reels and whatnot. It's taking that peer group and network and porting it to where people actually have to say things now. So there's a certain level of adaptation. For influencers, but also for threat actors who are used to posting memes to see how they're going to drive particular narratives and conversations, and they're really using copycats of what happened on Twitter to make that happen.”

Executives at Meta have taken steps to try to counter that, but this added security will not come without a price — and unfortunately that cost could also have negative implications for political discourse and sharing of information online.

“Right now, there is extremist content on Instagram already that has pretty big follower counts,” he explains. “So these could be, you know, hyperpartisan accounts that have had their followers and then they poured it over to Threads. We've seen a lot of things like, ‘Hey, this is just like Twitter, but with way more censorship.’ Because Threads is basically going out there and their leadership and the executives are saying ‘This is not a place for politically toxic discourse.’ In other words, unapologetically, they're taking the exact opposite stance of Twitter on this.”

Pick your evil: conflict or isolation

But in doing so, Khaled contends, Meta executives are, in effect, creating a silo culture on Threads, where partisan groups will be ringfenced from one another to prevent the kind of ugly clashes the bosses don’t want to see.

“What they're doing is enforcing echo chambers — they're keeping people further apart from one another,” he explains. “So there's less chance of those viewpoints being exposed to one another and creating friction points. Now, ten years ago, we said: ‘This is a really bad thing for us on the internet.’ Because the internet was going to be a magic utopian kind of thing, where everybody's exchanging knowledge and information. You remember those days, right? It was going to be like the Library of Alexandria online.”

Back in the much grimmer reality of today, this siloing at the micro level, within specific social media platforms, will replicate at a smaller scale what is happening at the macro level, with different platforms preferred by different political interest groups. What this all adds up to is fragmentation, and a lot of it — in stark contrast, Khaled points out, to what the internet was originally supposed to stand for.

“The internet used to be about bringing everybody together so anyone can talk to anyone,” he says. “Now we're seeing this reverse kind of scenario where people are trying to fragment to stay with their own tribes. You know, as an early internet person, that sucks, right? That really was not what this was meant for. But it certainly seems like maybe today a lot of organizations think it's what they need, to avoid controversy.”

AI won’t kill humanity — but it will kill jobs

And speaking of controversy: towards the end of our interview, Khaled lets something slip that I just can’t resist picking him up on.

Moving on to the subject of AI more generally, he takes pains to emphasize that it isn’t all doom and gloom. His job as a cybersecurity worker, particularly in the domain of “cyberattacks on human perception” as he puts it, perforce entails being more negative.

As a journalist, I can immediately sympathize: our job doesn’t exactly get done by indulging endlessly in blue-sky thinking.

“But, you know, with the generative threats, which are pretty concerning, there are lots of very positive things that are going to come from AI as well, in the areas of medicine, biotech, pharmaceuticals, automation in agriculture,” he stresses. “It just so happens that in our industry, which is analysis of the narrative, it's terrible for that area. But the real question is, net-net, whether technologies create more good than they create chaos.”

He goes on. “With AI and generative AI, it does really stand the chance to accelerate innovation, tenfold or a hundredfold than what we would have been able to do without it, in all areas. If AI can look at an MRI and catch cancer, almost every time, faster than a human eye can, I'm not really worried about the people that are going to lose their jobs over not being able to do the work as well. Because you're essentially saving every single person that you wouldn't have caught. So net-net, that's a good innovation.”

"You're going to lose a bunch of roles. The number of jobs lost to the number of jobs gained, it's not going to be one-to-one."

Though he believes AI will greatly advance things like medical care, Khaled is also convinced it will have a negative impact on employment

But is it? If many people lose their jobs, surely that will feed back into declining mental health, addiction, rising crime, and a lower tax take to fund the very innovations he hopes AI will drive?

His candor is refreshing, though not particularly reassuring.

“It will cause job loss,” he reiterates. “I said I was not as worried about it from an existential perspective — humanity's survival, AI against humans — I am not doomy from that perspective. But I can't think of a single technological innovation that didn't create a huge job loss and job increase.”

So does he at least think that the job increases from AI will offset those that are lost to innovation? Probably not, he concedes; in fact, the advent of AI could create net job losses unlike any we’ve seen caused by the previous industrial revolutions since the 18th century.

“You're going to lose a bunch of roles,” he answers. “People aren't out there handpicking vegetables anymore in the developed world, those jobs are gone forever in many industrialized nations. You've got massive threshers that do that work. So, of course, it's going to create job loss. The number of jobs lost to the number of jobs gained, it's not going to be one-to-one. With agriculture, it was sometimes like ten to one or 50 to one.”

And just in case you’re in any doubt about what Khaled is saying, he continues with scarcely a pause: “This is definitely not my opinion: this is fact, across the board. Everybody who is anybody in this space is concerned about a big delta between jobs done better by AI and jobs that are no longer available for people. This is not ‘my take.’”

Why AI could spur reform… or revolution

So what has to be done to offset the socio-economic losses caused by this huge net drain on employment? I suggest universal basic income, an idea that has been growing in popularity and was among the measures advocated by OpenAI head Sam Altman at the recent launch of his Worldcoin digital platform. Khaled seems supportive of the idea.

But he adds: “Everybody talking about everything from universal income to offsetting how much [AI] can be deployed in a particular industry — nothing has been determined. All of this stuff is up in the air, because this thing came at people like a ton of bricks, so there was no time to really react. It just exploded.”

The irony here is that this explosion aligns with predictions of how AI innovation would manifest itself: slow, steady progress on the sidelines of the tech industry, followed by sudden and startling leaps into the center ground. Put another way, experts did foresee this moment; it’s just that none of them knew quite how to prepare for it. Put yet another way, expectation is not anticipation.

"Nothing has been determined. All of this stuff is up in the air, because this thing came at people like a ton of bricks."

Khaled on why legislators are ill prepared to contain the fallout from AI, despite having known for years it would eventually take off

“When it hits these kinds of technologies, just like with Moore's Law exponential, it'll just zip right by us, and then we're going to be playing catch-up,” says Khaled. “That was always what was predicted, and that's exactly what's happening with hitting that critical mass of AI technology. That exponential curve is continuing to go up with every passing week, so we're further and further behind in terms of anything like job preservation.”

I can see one possible silver lining in this, however slender. If what Khaled is predicting comes to pass, and he seems confident that it will, one can hope it will force governments to sign up to the kind of social programs many countries are clearly crying out for. Universal basic income, as mentioned, but also caps on rent and energy prices, land value taxes, and other state interventions to stop the rich getting richer at the poor’s expense, to name but a few.

Of course, chances are that might not happen: in the face of multiple recessions and economic turmoil since the subprime mortgage crisis in the first decade of the century, global elites haven’t shown much willingness to relinquish the privileges and power they’ve amassed. But in that case, one wonders how a horde of highly intelligent, educated, and skilled people who suddenly find themselves out of work with no access to a decent safety net might react.

In such an instance, would it be a step too far to predict a new revolution for the 21st century?

