Singapore pushes for fairer AI amid bias fears


Singapore has announced it will pilot a program for regulatory oversight of artificial intelligence (AI) to address concerns that the technology discriminates against “certain populations,” according to the World Economic Forum (WEF).

The Infocomm Media Development Authority (IMDA), working in partnership with Singapore's Personal Data Protection Commission (PDPC), released a white paper in May declaring its aim to assure the city-state's public that “AI systems are fair, explainable, and safe, and companies that deploy them are transparent and accountable.”

Now the global think tank WEF has lent its support to the initiative, which it says will help to redress problems of inherent bias in AI facial recognition systems and the like. Last year saw several controversial incidents in which the pioneering technology appeared to racially profile and even insult people of color.

“Artificial intelligence is becoming ubiquitous,” said the WEF. “It underpins all kinds of functions, including critical ones. It impacts our work, life and play. It is used, for example, in medical imagery to detect severe illnesses and, through facial recognition, to unlock a smartphone.

“There are instances, however, where the AI model's output does not perform as intended. When the AI model is not trained and tested against representative datasets, for example, there can be bias against certain populations.”

Dubbed the Minimum Viable Product (MVP) system, the program will be voluntary and is intended as a set of self-regulatory guidelines “to help system owners and developers implement trustworthy AI products and services.”

By engaging system developers and owners directly, the initiative also aims to build international consensus on ethical standards in AI.

“With greater maturity and more pervasive adoption of AI, the industry needs to demonstrate to their stakeholders their implementation of responsible AI in an objective and verifiable way,” said the white paper.

“IMDA and PDPC have taken the first step to develop a governance testing framework and toolkit to enable industry to demonstrate their deployment of responsible AI.”

