AI can see things we can’t – but does that include the future?

With tech companies already using machine learning to map world events as they unfold, it is reasonable to ask when artificial intelligence will be able to predict them before they happen. Cybernews reached out to industry experts and academics to learn how close we really are to making science fiction a reality.
In 2020, three academics from the University of Science and Humanities in Lima, Peru, published a paper in which they claimed that artificial intelligence (AI) could already predict terrorist attacks with a reasonable degree of accuracy.
A perusal of the paper, featured in the International Journal of Advanced Computer Science and Applications, proved somewhat disappointing. The data cited barely seemed to tally with the claims being made by the authors – that AI predictive models known as “decision trees” matched real-life records of terror attacks around the world between 1970 and 2018 by type and region with just under 80% accuracy.
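For what it’s worth, the setup the paper describes is ordinary supervised learning rather than clairvoyance: train a decision tree on labeled historical attack records, then score it against held-out records. A minimal sketch – every column name and row below is a hypothetical stand-in for the study’s real incident data:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical incident records: one row per historical attack.
df = pd.DataFrame({
    "year":   [1994, 2001, 2008, 2015, 2016, 2017],
    "region": ["South America", "North America", "South Asia",
               "Middle East", "Western Europe", "Western Europe"],
    "weapon": ["explosives", "vehicle", "firearms",
               "explosives", "firearms", "vehicle"],
    "attack_type": ["bombing", "hijacking", "armed assault",
                    "bombing", "armed assault", "vehicle ramming"],
})

# Encode categorical features as integers so the tree can split on them.
X = OrdinalEncoder().fit_transform(df[["year", "region", "weapon"]].astype(str))
y = df["attack_type"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0
)
model = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)

# "Accuracy" here means agreement with held-out *past* records --
# classifying history, not foreseeing future attacks.
print(accuracy_score(y_test, model.predict(X_test)))
```

Framed this way, the 80% figure measures how well the tree classifies history – which is not quite the same thing as predicting the future.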
And yet, my curiosity was piqued. Could we really be on the threshold of a predictive future – one where machine learning crunches the numbers for us to turn our historical data into the digital equivalent of a crystal ball? I decided to try and find out.
Seeing through a glass, darkly
“I believe this is not something new and that military intelligence has been developing algorithms and processes for this for many years,” Dr Jorge Sosa Lopez, professor of engineering at CETYS University in Baja California, Mexico, tells me. “As long as the data is available and the algorithms are developed then it is possible, albeit with a certain level of error.”
This margin for error widens considerably as we move from predicting, say, when the next hurricane might strike the Caribbean to predicting whether one country will invade another.
“The data fundamentals are very similar [...] however, the difference lies in the fact that natural disasters follow natural laws that may be referenced with scientific principles and theories,” Dr Lopez explains. “Whereas socio-political events follow human behavior, which is more difficult to model or predict.”
This does not mean it is impossible, he stresses. But the task of mapping major events that arise directly from human action and interaction is “much more complex, because of the various scenarios a decision might create, and how said decision is influenced by state of mind as well as the cultural, educational, and religious framework of a person – or group of people, which makes the process even more complex.”
For Dr Lopez the question is not so much whether machine learning can predict future outcomes, but rather how accurate its predictions are. In other words, if AI really is a crystal ball, it is for now a glass we see through darkly.
Past, present, and future
Dustin Radtke, chief technology officer at AI-driven solutions provider OnSolve, seems confident that intelligent machines are on course to wipe that glass clean within our lifetimes.
He does have an interest in suggesting this: the company he represents uses machine learning to sift through reams of data about potentially harmful incidents as they occur throughout the world, so they can deliver their 30,000-strong client roster – which straddles the public and private sectors and includes businesses, government departments, and church congregations – a timely heads-up whenever they might be at risk.
“Being able to identify what happened in the past – that's easy to do,” he tells me. “It's not just what happened, but what's happening right now, the impact, and what's trending. You may know that a weather disaster is happening, because those are reported heavily in the news. But what you may not know is there's a protest happening right now, and where protests start to get more violent based on different criteria. And this is where AI can start to come in.”
Radtke clarifies further: “A protest in regions with certain dynamics will have a higher propensity to become violent – that's a way to predict actions that you need to take, based on a suspected outcome.”
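OnSolve has not published its models, but propensity scoring of the kind Radtke describes is conventionally framed as a binary classifier over event features. A hypothetical sketch – the feature names and training rows below are invented for illustration and are not OnSolve’s:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented features per protest: [crowd_size,
# violent_events_in_region_last_year, counter_protest_present (0/1),
# night_time (0/1)]
X = np.array([
    [200,   0, 0, 0],
    [5000,  3, 1, 1],
    [800,   1, 0, 0],
    [12000, 5, 1, 1],
    [150,   0, 0, 1],
    [3000,  2, 1, 0],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = the protest turned violent

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new protest: the probability of escalation becomes the basis
# for an alert about a "suspected outcome".
new_protest = np.array([[4000, 2, 1, 0]])
print(model.predict_proba(new_protest)[0, 1])
```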
At this point I start to feel a bit uncomfortable. This definitely feels like we’re straying into Minority Report, the sci-fi movie based on a Philip K. Dick short story in which police arrest people for crimes they have not yet committed but statistically are likely to. True, if police target a certain neighborhood based on its past data, they might be better prepared for civil unrest that could damage local homes and businesses – but it also sounds dangerously close to racial profiling and the like.
"A protest in regions with certain dynamics will have a higher propensity to become violent - that's a way to predict actions that you need to take, based on a suspected outcome."
Dustin Radtke, CTO of OnSolve
“What we focus on is augmented intelligence for humans to take action [on],” says Radtke when I raise this concern. “We are not prescribing the action to be taken based on the insights that we get – we're trying to make sure that the human has all the necessary intelligence to drive the behavior that they need to drive. We're reporting facts back – this actually happened here, this is what has happened in the past – and you can take action based on that. It's all about driving improved safety for everyone in that area.”
When I press him on the possible human rights concerns and the inevitable pushback that will arise if AI is routinely used to pre-emptively police areas deemed problematic, he answers: “I think that with every technology that's ever been out there in history there is always a way to use it for non-good. I think you have to focus on the good that it can provide and make sure that you police the non-good behavior that could happen from it.”
This will entail some sort of oversight. “There are consortiums out there to help drive the ethical adoption of AI throughout the industry – we definitely keep aware of those. But I think people have to look back, it's not just an AI problem – that is a continual problem as you innovate across every sector and industry.”
Ethics and geopolitics
The Alan Turing Institute defines AI ethics as a “set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies.”
Its 2019 report on the ethical issues facing the AI industry highlights discrimination and bias as a significant obstacle to the equitable application of machine-learning technology.
“Because they gain their insights from the existing structures and dynamics of the societies they analyse, data-driven technologies can reproduce, reinforce, and amplify the patterns of marginalisation, inequality, and discrimination that exist in these societies,” it says. “Likewise, because many of the features, metrics, and analytic structures of the models that enable data mining are chosen by their designers, these technologies can potentially replicate their designers’ preconceptions and biases.”
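The mechanism is easy to demonstrate. In this toy sketch – districts and rates are invented – two districts have identical underlying incident rates, but one is observed ten times more often, and the recorded data dutifully “finds” ten times the risk there:

```python
import numpy as np

# Two districts with identical underlying incident rates...
true_rate = {"district_a": 0.05, "district_b": 0.05}
# ...but district_a is observed ten times more often.
patrols = {"district_a": 1000, "district_b": 100}

rng = np.random.default_rng(0)
recorded = {d: int(rng.binomial(patrols[d], true_rate[d])) for d in patrols}

# A naive risk score built from raw recorded counts ranks district_a far
# above district_b: the data reflects where we looked, not what happened.
print(recorded)
```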
Such biases will naturally have wide-ranging implications, as different nation-states vie with one another to see who comes out on top in the ‘AI arms race.’ Radtke agrees that geopolitics will play a major part in the development of machine-learning solutions, and that this will see rival powers squaring off against one another in what amounts to a global game of digital chess. In the run-up to Russia’s invasion of Ukraine, he tells me, OnSolve had early indicators of an escalation in hostilities, most notably the growing number of political protests that AI was able to track and map.
“In the Ukraine we started to see heightened activity well before the Russian invasion happened,” he says. “We were seeing maybe 30 to 40 events a day that would impact our customer facilities, and that slowly started to increase. Closer to the actual invasion, we're seeing up to 1,200 risks that deal more with military events. And as you start to see that, you take more proactive measures.”
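Radtke doesn’t describe the mechanics, but the simplest version of such an escalation signal is a spike detector over daily event counts. A sketch, with illustrative numbers echoing the trajectory he cites:

```python
import numpy as np

# Illustrative daily counts echoing the 30-40 -> 1,200 trajectory cited.
daily_event_counts = np.array(
    [35, 38, 32, 41, 55, 70, 90, 140, 260, 480, 900, 1200]
)

WINDOW = 5     # trailing baseline length, in days
THRESHOLD = 3  # alert when a day's count exceeds 3x the baseline mean

for day in range(WINDOW, len(daily_event_counts)):
    baseline = daily_event_counts[day - WINDOW:day].mean()
    if daily_event_counts[day] > THRESHOLD * baseline:
        print(f"day {day}: {daily_event_counts[day]} events "
              f"(baseline {baseline:.0f}) -- escalation alert")
```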
I ask Radtke if he foresees a global escalation between state-backed actors, each one trying to use its own AI to neutralize or compromise that of its rivals.
"In Ukraine we started to see heightened activity well before the Russian invasion happened. Closer to the actual invasion, we're seeing up to 1,200 risks that deal more with military events. And as you start to see that, you take more proactive measures."
Dustin Radtke, CTO of OnSolve
“For sure,” he replies. “You're always going to have to identify the bad actors. There's obviously proof that there has been some manipulation of the [2016] election in the US that was partially driven by AI. You have to make sure that you're focusing on identifying that so you can create counter attacks on the positive side. How is that good going to outweigh that bad profiling? And how do you interact and account for the bad actors and insights that you're actually creating? Which is exactly why we focus on vetted data sources. There are so many Twitter feeds, and you start to see bots being identified out there. You can't always trust the information you're provided with.”
Given what happened in the run-up to the US ballot with data-mining company Cambridge Analytica – which worked for the Trump campaign, profiling voters based on their interactions with Facebook – does Radtke worry that AI has the potential to throw future elections completely? “I wouldn't say it's going to throw it, but it could definitely have an influence,” he concedes.
AI’s quantum leap
Meanwhile, the data available to machines is set to grow exponentially with the advent of quantum computers, capable of processing information at a vastly accelerated rate. The National Institute of Standards and Technology (NIST) recently named four encryption algorithms capable of withstanding a quantum-driven cyberattack, while chipmaker NVIDIA has unveiled a platform that will allow quantum processors to work alongside their slower classical counterparts.
“I think where it all comes into play is speed,” says Radtke when I ask him about the impact quantum computing will have on AI-driven machine learning. “How fast can you disseminate all the information and intelligence and do all the correlation necessary? The more processing available to you, the faster it's going to be. The faster you get insights, the faster people can take action.”
Not only that, but supercomputers will be able to determine far more quickly which data is relevant to a given situation and which can be discarded. “If you give me a million pieces of information very fast, but only two pieces are relevant, it does me no good,” Radtke explains. “The power will give you that speed, paired with the relevance. Because relevance requires ground truth – ground truth requires corresponding sources of information, providing that context so that you feel comfortable that what you're reporting is true. If you look at what we were thinking five years ago, [it was] purely: ‘Can I report on what's happening?’ But now we've got the treasure troves of information – can I start to use that data to be more predictive? That's what the power is really going to give us.”
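The “corresponding sources” requirement he describes is, at its simplest, corroboration: don’t surface an incident until enough independent vetted feeds report it. A bare-bones sketch, with invented incidents and source names:

```python
from collections import defaultdict

MIN_SOURCES = 2  # assumed corroboration threshold

# Invented incoming reports from separate vetted feeds.
reports = [
    {"incident": "bridge closure, city center", "source": "local_news"},
    {"incident": "bridge closure, city center", "source": "municipal_feed"},
    {"incident": "explosion near depot", "source": "unverified_social"},
]

sources_per_incident = defaultdict(set)
for r in reports:
    sources_per_incident[r["incident"]].add(r["source"])

# Only incidents reported by enough independent sources are surfaced.
confirmed = [i for i, s in sources_per_incident.items() if len(s) >= MIN_SOURCES]
print(confirmed)
```

A production system would weight source reliability rather than counting feeds equally, but the principle is the one Radtke names: corroborate before you report.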