The White House’s newly unveiled AI Bill of Rights will guide the design, use, and deployment of automated systems in what the administration calls “the age of artificial intelligence.”
The framework identifies five principles – Safe and Effective Systems, Algorithmic Discrimination Protections, Data Privacy, Notice and Explanation, and Human Alternatives, Consideration, and Fallback – that are meant to govern the use of AI, although compliance is voluntary.
Safe and Effective Systems: users should be protected from unsafe or ineffective systems. Systems should be proven safe through pre-deployment testing, risk identification and mitigation, and ongoing monitoring.
Algorithmic Discrimination Protections: systems should be designed so that users do not face discrimination in any way. During the design process, systems should undergo proactive equity assessments and independent testing, with the results published in plain-language reports.
Data Privacy: built-in privacy protections should safeguard users from abusive data practices, giving them agency over how their data is used. Developers of AI systems must respect consumers’ privacy choices – and accept consent only in cases where it can be appropriately and meaningfully given.
Notice and Explanation: users should have a clear understanding of how AI is used and how it will affect them. Manufacturers should provide a straightforward yet comprehensive explanation, written in plain language, of what the system is, what role automation plays, and how its outcomes may impact users.
Human Alternatives, Consideration, and Fallback: users “should be able to opt out from automated systems in favor of a human alternative, where appropriate.” Human consideration and fallback should be available to everyone.
The document goes on to explain possible scenarios in which these principles can be implemented.
It also calls the malicious exploitation of data and technology one of “the great challenges posed to democracy today” and urges that the principles be applied to any automated system that could “potentially impact the American public’s rights, opportunities, or access to critical resources or services.”
“Algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination. Unchecked social media data collection has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or consent,” the document explains.