How fictional is the Black Mirror dystopia?
“If technology is a drug - and it does feel like a drug - then what, precisely, are the side effects?” Charlie Brooker, the creator of the British dystopian TV series Black Mirror, once asked. Technology lawyer and scholar Paulius Astromskis goes further, asking whether those side effects can be tolerated, and how to regulate technology without suppressing its progress.
Black Mirror is a dystopian science fiction TV series about our dependence on social media and technology. But is it really fiction? CyberNews asked technology lawyer and scholar Paulius Astromskis.
“Jules Verne’s novels used to be science fiction too. But if you read them now, they seem more like historical novels. It’s difficult to say how fictional those stories are. Fictional plots might inspire someone to create a startup. Reality and fiction are intertwined. Every plot from Black Mirror is possible. The only question is when,” he told CyberNews.
According to Astromskis, Black Mirror portrays well-known and painful issues within society, such as cyberbullying or social exclusion.
Twitter zone politics
“There are some episodes that can’t even be called science fiction, because they show and analyze real issues with negative consequences, such as cyberbullying,” he said at a LOGIN conference.
According to him, the spread of information is no longer controlled. While freedom of speech has its own limits, those boundaries are often overstepped, and there’s no technical or legal way to stop hate speech, cyberbullying, or fake news.
What’s more, social media has become a weapon to attack your opponent, ignite a revolution, or strip somebody of power.
“Politicians, other people in charge, exploit the so-called Twitter zone more and more. They don't base their decisions on arguments, they don’t seek what is best, but they make decisions that would make them look good on social media,” Paulius Astromskis told CyberNews.
He also talked about social exclusion, and how being part of the digital sphere can deepen the issue. If you have a developed and well-polished social identity, it is easier for you to sell, to get a job, to get credit, and so on. China has already developed a social credit system to evaluate its citizens - you can read more in the CyberNews article How China is building a panopticon for its citizens.
Another topic that interests Astromskis, and one that the Black Mirror series explores, is the brain-computer interface.
Even without it, there are a lot of unsolved questions regarding family and digital space. Do I have to share my LinkedIn profile with my partner after marriage? Is Facebook a shared property? What about Tinder? Am I even allowed to have Tinder as a married person?
There are also many apps that let parents track their children - restrict their access to the internet, monitor them online, and so on. “How will it impact their development in the future?” Astromskis wonders.
The issue becomes even more sensitive when we think about the future. What if it is possible to extract your thoughts? “Do I have a right to access my wife’s, my child’s thoughts? Can I do something with it?” said Paulius Astromskis.
He also wonders if that kind of technology will be used when interviewing witnesses and gathering evidence.
“Will we be able to use technology as a punishment? Like chemical castration, only for your brain or thoughts,” he said.
Another important question is whether technology will be used to suppress soldiers’ feelings, such as compassion or mercy, and amplify their aggression towards the enemy.
While some questions might seem like science fiction, there are attempts to actually link the brain and computer, and not only as augmented reality.
A month ago, Elon Musk’s startup Neuralink unveiled a pig with a small computer chip implanted in its brain.
Should we let AI decide?
TV shows like Black Mirror also tend to explore artificial intelligence (AI) and how it might change our lives.
“The use of autonomous weapons in armed conflicts is already a very controversial issue. On the one hand, it protects soldiers. On the other hand, it fuels conflicts because it becomes easier to engage in conflicts if you are not risking human life,” said Paulius Astromskis.
There is also a discussion about whether technology could help us break out of a vicious cycle of routine.
“Is the only way to break out to win X Factor? One possibility is to put AI in charge so that it could make decisions for us,” said Paulius Astromskis.
If AI is already clever enough to recommend a book to read or a song to listen to, why can’t it also choose a job for us, or even arrange a successful marriage?
“Or maybe we should let a human decide?” asked Paulius Astromskis.
In any case, many such questions arise, and not only from science fiction shows. The problem, according to Astromskis, is that technology usually develops much faster than the ethical or legal discussions about it.
Algorithms are created by people
Political discourse and scientific research about technology are always reactive.
“When the problem is noticed, it is already mature, and the research or discussions about it can sometimes take years. Meanwhile, the technological progress is moving ahead,” said Paulius Astromskis.
To prevent any future issues, we should discuss and look into technology that is not here yet.
“We should talk about ethics more, and include futurists in the discussions about technology. What could be done to prevent technology from doing harm in the future? The only way is to start fantasizing about future scenarios,” said Paulius Astromskis.
According to him, the biggest risk is uncertainty and human irrationality. The algorithms are built by people, who might be tired, unmotivated, or angry when writing code.
“There’s a big chance of human error,” Paulius Astromskis told CyberNews.
From a legal perspective, two big problems surround technology. The first is jurisdictional fragmentation.
“As long as there’s at least one safe zone for malicious actors, there’s a risk that technology could be used for malicious purposes,” he said.
Secondly, it’s almost impossible to attribute cyber incidents to anyone. As CyberNews has pointed out before, a small share of cybercrime leads to convictions.
“It’s difficult to say who’s responsible - is it an individual, an organization, a state?” asked Paulius Astromskis.