
The proposed AI law to make Europe fit for a digital age


Image: Thierry Breton, European Commissioner, during a press conference.

Big tech quickly changed the world with its move-fast-and-break-things mantra, but it didn't stop to think about the future implications for society. Almost without us realizing it, AI has engulfed every aspect of our lives. The outcome of a job or loan application is now likely to be determined by an algorithm rather than a human.

The next song, movie, TV show, or viral video on your smartphone will also be the result of an algorithm that leveraged your personal data. When you shop online, it will often be conversational AI and chatbots that you communicate with. Some users are even creating AI clones of themselves to communicate with audiences around the world.

DeepMind's AI has already decoded the language of life, yet humans are still struggling to understand what is going on inside black-box AI. Things take a darker turn with facial recognition being used for predictive policing, bringing pre-crime algorithms to life. There is an argument that we have lost our way and that big tech now has more power than many governments.

The European Union steps up to regulate the uses of AI in society

These concerns have prompted the European Union to unveil sweeping legislation that, if passed, would ban the use of AI and algorithms designed to manipulate human behavior or circumvent users' free will. Under the proposals, any organization breaking the rules could face fines of up to 6% of its global turnover. For big tech firms such as Google and Facebook, that would amount to billions of dollars.

The white paper outlines a roadmap for the future that would outlaw systems enabling 'social scoring' by governments. The move would prohibit the use of AI to predict, evaluate, and manipulate people's future behavior or trustworthiness based on their social behavior. The EU wants to encourage the world to build more human-centric, sustainable, secure, inclusive, and trustworthy AI.

A divided tech community

Predictably, US tech giants with a presence in Europe are already preparing to challenge the proposed law. Mark Twain is often credited with saying that history doesn't repeat itself, but it often rhymes. It's difficult not to compare the new rules to the EU's data privacy regulation, the GDPR, which came into effect back in 2018 and quickly became a privacy template for the world.

Realistically, it will take years for the proposed AI regulations to become law.

Many have accused the EU of needlessly adding bureaucracy by regulating AI and technology it doesn't understand. While critics accuse regulators of stifling innovation, the law should at the very least stop big tech from having everything its own way and help curb increasingly worrying behavior in the industry.

Worried that Europe will get left behind, some believe the EU is too obsessed with regulation while the rest of the world focuses on innovation. But as we begin exploring a brave new digital world of self-driving cars and remote surgery, regulation is inevitable. Many also argue that, by setting itself apart from the US and China for the right reasons, the EU will be on the right side of history.

Rebuilding trust in the technology that surrounds us

The polarization, hate, and extremism we see on our smartphones paint a less than positive picture of life in a digital age. Regulators have long struggled to keep up with the fast pace of big tech. Thankfully, this is changing. A reassessment of our use of tech, and of how we got to where we are today, is long overdue. Tech leaders also need to reflect on their creations and step up to their moral and ethical responsibilities.

However, this is a notoriously complex area to navigate. It's easy to blame AI for manipulating humans, but for as long as any of us can remember, the world of advertising has been purposely designed to manipulate our behavior. It can also be difficult to know whether we are interacting with a machine or a human when selecting the chat option on a website.

Where do we draw the line? And how do humans become accountable for the biases they feed into algorithms?

Moving fast and breaking things to get products and services into customers' hands quickly is no longer an acceptable strategy. With one eye on the future, organizations need a more proactive approach that anticipates risks before a product enters the wild, rather than reacting and firefighting issues as they appear.

As we prepare for life in a post-pandemic world, there is a growing appetite to do things differently and take a new path that lets us rebuild our trust in technology. Creating a global European hub united around building trustworthy AI solutions that open up new possibilities for everyone is another big step in the right direction.

Embedding the values from this new rulebook and encouraging organizations to collectively prove that AI can be a force for good is a challenge that will take years. But as Elon Musk embarks on a mission to take a million people to Mars and help humans become an interplanetary species, it's clearly time for us all to think bigger.
