Big tech tracks its users' web browsing, social media activity, and location to keep them scrolling and buying. Facebook alone has more than 52,000 data points on every person, and there is a good chance that an algorithm predicted that this article would be of interest to you. When applying for a mortgage or a new job, AI, machine learning, and predictive analytics could determine the success of your application. Unfortunately, this is not an episode of Black Mirror. AI algorithms are already predicting your future.
In 2019, researchers at the University of Michigan revealed how a deep learning-based algorithm could predict a pedestrian's future location and gait. Their biomechanically inspired recurrent neural network was designed to help driverless vehicles anticipate people's movements while steering clear of creepy territory.
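The Michigan team's exact architecture isn't reproduced here, but the core idea is standard sequence modelling: a recurrent network consumes a pedestrian's past (x, y) positions and emits a guess at the next one. A minimal sketch with a hand-rolled, untrained Elman-style RNN cell in NumPy (all weights, dimensions, and function names are illustrative assumptions, not from the published work):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 2-D position in, small hidden state (illustrative only)
IN, HID, OUT = 2, 8, 2
Wx = rng.normal(scale=0.1, size=(HID, IN))   # input-to-hidden weights
Wh = rng.normal(scale=0.1, size=(HID, HID))  # hidden-to-hidden weights
Wo = rng.normal(scale=0.1, size=(OUT, HID))  # hidden-to-output weights

def predict_next(track):
    """Run the RNN over a sequence of past (x, y) positions and
    return a 2-D prediction for the next position."""
    h = np.zeros(HID)
    for pos in track:
        h = np.tanh(Wx @ pos + Wh @ h)  # update hidden state each step
    return Wo @ h                        # project hidden state to (x, y)

# A straight-line walk; a trained model would learn to extrapolate it
past = [np.array([t * 0.5, 1.0]) for t in range(5)]
nxt = predict_next(past)
print(nxt.shape)
```

A production system would train these weights on real trajectory data and, as in the Michigan work, add biomechanical structure so the predictions respect how human bodies actually move.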
The future of healthcare and drug discovery
Atomwise is using AI and deep learning to transform the drug discovery industry. Its algorithms extract insights from millions of experimental affinity measurements and thousands of protein structures to predict how small molecules bind to proteins. As a result, AI has helped the company improve hit rates by 10,000x and accelerate the work of chemists.
Elsewhere, hospital-admissions data such as zip code, medication, and diagnostic history is helping AI predict which patients are at future risk of suicide and deliver preventive programs before those thoughts ever take hold. Although this tech comes from a good place, it also raises ethical debates about how to intervene with someone who has not asked for help and may not yet be aware of the problems awaiting them.
In the fight against breast cancer, the healthcare industry needs to better predict who is going to develop the disease. Catching it in its early stages and preventing it proactively has the potential to be a game-changer for women. MIT researchers developed a deep learning model that can predict whether a patient is likely to develop breast cancer in the future.
However, things can quickly take a turn for the worse when predictive policing and pre-crime algorithms enter the conversation. There is an increasing belief in some circles that if you feed big data from arrest histories, social media, and other activities into AI, a pre-crime division can solve crimes before they occur.
Twenty years after the movie's original release, Minority Report-style technology has already arrived with driverless cars, personalized ads, voice-controlled homes, and facial recognition. But predictive policing is arguably leading us into a world where police analysts instruct officers to question suspects based on algorithms rather than their present actions.
Subscribers to the "if you've got nothing to hide, you've got nothing to fear" school of thought often miss the point. If technology automatically charged every jaywalker or every driver who edged slightly over the speed limit, it wouldn't take long before a large portion of society was criminalized.
Another example is the recent news that athletes competing at the Winter Olympics in Beijing were warned they would be under 'Orwellian' levels of surveillance that stifle freedoms and discourage certain kinds of conversation. Monitoring personal data and social media makes it easier not only to predict future behaviour and conversations but to prevent them from taking place.
Can AI algorithms predict your future?
Ten years ago, Target's algorithms were able to identify that a teen girl was pregnant before her father did. Five years ago, Facebook boasted to advertisers that it could recognize when teens felt insecure. More recently, as AI gets more intelligent, it can even estimate when a person will die. The problem with predicting future human behaviours the way a meteorologist forecasts the weather is that we end up dehumanizing people.
We all learn through the unique experiences of our lives. But any computer model's success depends on the worldview and biases fed into it by people with a different set of values. Further problems arise if an algorithm misreads your history and incorrectly decides you are not reliable enough to be given a loan or land your next job. Rather than predicting your future, it could effectively restrict your options for self-improvement.
Whether AI can predict or merely determine your future is something we should be talking about much more than we are. There has always been bias of some kind in every form of decision-making. Eventually, I am hopeful that AI will at the very least help us reduce it.
However, we should never lose sight of the fact that, when all hope seems lost and victory looks illogical from a machine's perspective, humans have an uncanny ability to beat the odds and surprise everyone. That's one thing AI algorithms still cannot understand or predict.