Considering what we’ve experienced in the past few years regarding our data and privacy, this sobering point about cybersecurity in 2020 bears emphasizing:
We should assume all our data has already been stolen or sold.
At this point, we should come to terms with the fact that there are countless known and unknown data breaches that, when put together, suggest our information has already been compromised. It’s just a matter of how much, and whether it’s something that’s possible to fix in the future.
The good thing about information such as our passwords and emails is that it can be edited or updated. Simply change all your passwords – and get a password manager so that not even you know what your passwords are. However, there’s a possibility that hackers have already gained access to more sensitive data. In that case, things aren’t 100% fixable: confidential tax records, social security numbers, your private videos or photos.
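If you’re curious how a password manager generates credentials under the hood, here’s a minimal sketch using Python’s standard `secrets` module. The function name and length are our own choices for illustration, not any particular manager’s API:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a random password from letters, digits, and punctuation
    using a cryptographically secure source of randomness."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Because `secrets` draws from the operating system’s secure random source, the result is suitable for real credentials – unlike the `random` module, whose output is predictable.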
The only thing we can actually do there is try to mitigate our losses and keep our data safer, both now and in the future.
So, with those as our practical goals, what does the landscape look like in 2020 and beyond when it comes to data and privacy? We’ve put together a list of the 7 top cybersecurity trends you should be aware of.
#1 Greater accountability for data breaches
Let’s start it all off with some good news. As data breaches and privacy scandals continue to stay in the public mind, state and federal regulations should kick in to hold those responsible accountable.
The FTC is already considering levying “record-setting” fines against Facebook for its many, many data “whoopsies.” That’s not to mention the $1.6 billion fine the EU is considering slapping on Facebook.
What does it all mean? As more consumers grow troubled by regular data breaches, legislators will be working overtime to hold those companies accountable so that these situations can be avoided in the future.
#2 Cryptocurrency blackmail
This really took off in 2016 and 2017, but it’s only set to continue in 2020. Ransomware, a type of malware that holds users’ sensitive files for ransom in exchange for cryptocurrency, will only become more prominent going forward.
Typically, attackers will either threaten to release sensitive data or block access to that data until the victim pays up. The ransomware market is also expected to consolidate, with fewer groups working together on more effective campaigns.
That means bigger targets for bigger sums with a higher frequency.
#3 More state-sponsored hacking
Since there are no rules for cyber warfare, it’s quite likely that states will continue working to exploit and attack each other on every digital front. It has already happened:
· The US and Israel reportedly using malware (Stuxnet) to destroy nuclear machinery
This trend is only set to increase as the technological development of cyber warfare increases.
#4 Crypto-jacking
2017 saw cryptocurrencies skyrocket, and 2018 saw that bubble burst as BTC fell to a fraction of its November 2017 value. However, that doesn’t mean crypto is dead. Especially when it’s so easy to mine it – using other people’s computers.
In fact, with weak IoT devices and the abundance of malware options, it won’t be too difficult for hackers to use the computing power of users’ devices to mine cryptocurrency. Known as crypto-jacking, this type of attack can go largely unnoticed by the user – apart from significant slowdowns in device performance.
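To get a feel for why crypto-jacking eats so much CPU, here’s a toy version of the proof-of-work loop that mining boils down to – a hashing sketch for illustration, not any real coin’s algorithm:

```python
import hashlib
import itertools

def mine(block_data: str, difficulty: int = 4) -> int:
    """Brute-force a nonce until SHA-256(block_data + nonce) starts with
    `difficulty` hex zeros. This hot loop is the work crypto-jackers
    quietly offload onto victims' devices."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

print(mine("example-block"))
```

Each extra zero of difficulty multiplies the expected work roughly sixteen-fold, which is why a hijacked device runs hot and slow.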
#5 IoT devices 1: weak defences will bring about more powerful botnets
In October 2016, the Dyn cyberattack affected large parts of North America and Europe. The DDoS attack was helped by poorly protected IoT (Internet of Things) devices that were linked together in a huge botnet.
More and more IoT devices are hitting the market, and most of them have very weak protection. These devices can include any “smart” thing: smart thermometers, smart speakers, smart fridges, smart TVs, and more. Unfortunately, most IoT devices ship with default usernames and passwords set to “admin,” “password,” or something equally easy to guess.
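A simple audit check can catch the factory-default credentials that IoT botnets scan for. This is a hypothetical helper – the credential list is a small illustrative sample, not an exhaustive database:

```python
# A few of the factory defaults that IoT botnets try first (sample list).
COMMON_DEFAULTS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("user", "1234"),
}

def is_default_credential(username: str, password: str) -> bool:
    """Return True if the username/password pair is a known factory default."""
    return (username.lower(), password.lower()) in COMMON_DEFAULTS

print(is_default_credential("admin", "password"))  # → True
```

If a check like this flags your smart fridge, changing the password is the single cheapest defence against being drafted into a botnet.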
With the IoT market booming, the likelihood of botnets successfully exploiting IoT devices increases as well. That means it’ll only be a matter of time before another major cyberattack takes place. Only this time, it’ll be much worse.
Some cybersecurity experts predict that we’ll begin to see “swarm” networks or “hivenets” – self-sufficient bots that can make decisions, gather together their collective intelligence, and work independently to target vulnerabilities in networks. They will also be able to identify new vulnerable devices to add to the hive.
With the new 5G networks rolling out soon, this will only be compounded, allowing these hivenets to become even more effective.
#6 IoT devices 2: the dangers-by-design
Perhaps the most popular IoT devices right now are smart speakers, powered by virtual assistants like Alexa or the Google Assistant. As they gain in popularity (and their prices come down), we’ll be seeing the following ethical and legal situations popping up.
No reasonable expectation of privacy
Fun fact: your Google- or Amazon-powered devices are always on, listening for their magic wake word (“OK Google” or “Alexa”). There’s a lingering fear in consumers’ minds, however, that Alexa is always listening – and always recording. This fear leads some homeowners to prefer having private discussions in another room, or even to speak more quietly at home.
We wish it were merely paranoia, but two instances have proven otherwise. First is the Portland, Oregon, couple that had their conversation about hardwood floors sent to one of their employees.
And then there’s the Frankfurt man who was mistakenly sent 1,700 audio files of a complete stranger. Of course, it’s all explainable enough: the second situation was simply human error on Amazon’s side. And, for the first, it’s possible that they said a word similar to the wake word – something like “Alexis” (a woman’s name), “a Lexus,” or any other similar-sounding string of words – that caused the smart speaker to start recording.
Still, it has happened. And it will continue happening with time. That’s leading Alexa-owning homeowners to have no reasonable expectation of privacy, even in their own homes.
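To see why phrases like “Alexis” or “a Lexus” can trip a wake-word detector, here’s a toy text-similarity sketch. Real detectors work on audio features, not spelling, so this is purely illustrative:

```python
import difflib

WAKE_WORD = "alexa"

def wake_score(heard: str) -> float:
    """Crude similarity score (0.0 to 1.0) between a heard phrase
    and the wake word, based on shared character sequences."""
    return difflib.SequenceMatcher(None, heard.lower(), WAKE_WORD).ratio()

for phrase in ["alexa", "alexis", "a lexus", "toaster"]:
    print(f"{phrase!r}: {wake_score(phrase):.2f}")
```

Phrases that share most of the wake word’s letters score close to 1.0 – roughly the way acoustically similar phrases cause accidental triggers and recordings.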
No notice of consent
If someone buys an Echo with Alexa, it’s logical that they’ve implicitly consented to having their voice data recorded. This is for the basic functioning of the Echo, so that it actually does what they want it to do.
But now, the question: does Alexa have consent for any guests that happen to be in the owner’s home? There’s no explicit notice of consent. Alexa simply doesn’t ask them for it. Therefore, no explicit consent is given. So, does Amazon have the right to record and store voice data from random people? What if those people include children – say, those under the age of 12?
Combine that with the possibility of another “unforeseen and completely rare error,” and we’ll likely get into some interesting legal issues here.
Long-term emotional tracking
Alexa can connect many separate Amazon products and devices. Google Home goes further by working on any compatible smart device. These virtual assistants also have the ability to sync across devices, which allows them to share data to (allegedly) provide a more seamless experience for the user.
One of the possible outcomes of this is long-term emotional tracking, where certain data points are recorded and a proprietary algorithm works on predicting what users want, and when they want it.
Let’s say it starts with this: the AI learns when to order your toothpaste, toilet paper, and milk in advance based on your past behaviours. One evening you and your significant other have a fight, the next morning you ask Alexa to play your favourite sad song. And then, unprompted, Alexa goes ahead and orders your favourite ice cream as well.
Pretty appropriate for the state you’re in, but this kind of calculation means recording not just your actions, but also your emotions at any given time. Seeing as there’s a pretty fine line between prediction and manipulation when it comes to profits, we’re getting into some ethical issues here.
While it may seem far off, Amazon has already patented the technology to analyze your emotions based on your voice alone – and to use that data for “advertisements or promotions.”
#7 AI-created photo and video deep-fakes
In early 2019, Nvidia researchers unveiled a number of extremely realistic photos of people that had been entirely generated by their AI, which can copy the styles of real faces and create blends that are scarily realistic.