The future is impossible to see – we often don’t notice a new danger until the beast charges. Yet we can try: to glimpse the future of hacking, both offensive and defensive, we simply need to look at today in order to be safer tomorrow.
How is the hacking world changing? How do new artificial intelligence (AI) tools help both the attackers and the defenders?
To discuss the ins and outs of the newest trends in the world of hacking and anti-hacking tools, Cybernews senior journalist Gintaras Radauskas is joined by Jurgita Lapienytė, Cybernews editor-in-chief, and Vincentas Baubonis, the head of Security Research.
Cybercriminals, including nation-state threat actors, are increasingly exploiting AI technologies. It’s not surprising in the least – AIs never rest and can analyze massive amounts of data to find patterns a human might not notice.
But bad actors, often with substantial resources, can access and abuse advanced AI tools on an industrial scale. Text-based scams are now powered by large language models, and there’s an epidemic of deepfakes.
In Hong Kong, for example, an IT firm worker recently transferred more than $25 million to criminals after they used a deepfake to impersonate the company’s chief financial officer on a video call.
The cost of creating content for influence operations has also already dropped significantly, making it easier to clone websites or create fake media outlets, Lapienytė points out. The entry barrier gets lower every day.
Generative AI is expected to expedite threat actors' ability to carry out reconnaissance of critical infrastructure facilities and glean information that could be of strategic use in follow-on attacks.
However, the cyber defense industry is not snoozing, either – it has proven it can play the cat-and-mouse game effectively, says Baubonis. According to him, machine learning has long been used to detect spam, phishing, and other malicious email campaigns, down to individual emails.
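As a minimal sketch of the kind of machine-learning email filtering Baubonis refers to, the Python example below trains a naive Bayes classifier over email text. The tiny training set is invented purely for illustration – real filters learn from millions of labeled messages.

```python
# Illustrative sketch: bag-of-words features feeding a naive Bayes
# spam classifier, the classic ML approach to email filtering.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy corpus (hypothetical examples, not real data)
emails = [
    "Claim your free prize now, click here",       # spam
    "Urgent: verify your account password today",  # phishing-style spam
    "Meeting moved to 3pm, agenda attached",       # legitimate
    "Quarterly report draft for your review",      # legitimate
]
labels = ["spam", "spam", "ham", "ham"]

# Vectorize the text and fit the classifier in one pipeline
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

# Classify a new, unseen message
print(model.predict(["Click here to claim your account prize"]))  # likely ['spam']
```

In production, such a model would be retrained continuously as attackers change their wording – which is exactly the cat-and-mouse game described above.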
Still, AI is not a cybersecurity panacea – it should be used cautiously and always overseen by humans. Baubonis agrees: we need human critical thinking to use AI to solve and prevent problems.
“AI is currently at a state where it is more an augmented intelligence rather than a fully autonomous AI. A constant watchful eye of security specialists to monitor implemented algorithm's work is still needed. The tools help a lot but they’re nowhere near the ‘plug-and-play’ principle at the moment,” says Baubonis.
In other words, dear cybercriminal or cyber defender: even if you manage to generate code you don’t know how to write yourself, you by definition cannot review it without going through an effort equivalent to writing it yourself in the first place.