New DC advocacy group wants to copy already successful rules for AI

Does AI have to be regulated heavily? Should we leave the tech visionaries alone? A new advocacy group thinks neither is necessary – let’s just copy successful rules from other industries.

The nonprofit, bipartisan group, called Americans for Responsible Innovation (ARI), launched Wednesday in DC and focuses on emerging technologies such as AI.

The organization, founded by former Democratic Congressman and senior defense official Brad Carson and tech entrepreneur Eric Gastfriend, advocates a middle-of-the-road solution: supporting “thoughtful” regulation while continuing to foster innovation.

“While over-regulation can be harmful to an industry, thoughtful regulation can promote public trust, which leads to long-term market growth,” says the group.

“For example, part of the reason the United States is a leading financial center globally is because we have well-regulated capital markets that protect lenders, borrowers, and property rights.”

ARI also says it doesn’t take money from the tech industry: “To maintain our independence as an organization acting on behalf of the public interest, we do not take money from industry and our organization is run fully independently of our donors.”

The group says it’s simply different from others working on AI policy. So far, the problem has been that think tanks and nonprofits are typically structured as 501(c)(3) organizations, which are broadly prohibited from becoming too involved in the legislative process – this “often leaves a gap filled by industry interests,” says ARI.

ARI, however, is structured as a 501(c)(4) organization, which allows it to work directly with policymakers on AI-related matters.

ARI’s approach to the issue encompasses three distinct focus areas: current harms, including algorithmic bias and electoral interference; national security, including export controls and national competitiveness; and emerging risks, which covers threats that have only begun to appear but may evolve unpredictably as AI capabilities improve rapidly.

According to Axios, ARI proposes creating an AI Auditing Oversight Board that would ensure the integrity of the external AI audits mentioned in President Joe Biden’s executive order on AI last year.

It also wants more money for the Commerce Department’s National Institute of Standards and Technology, which could become the leading federal AI regulator, handling supply chain coordination across democracies, “Know Your Customer” regulations pioneered in banking, and incident reporting databases – just like in the cybersecurity world.

It all sounds very attractive, but observers point out that US lawmakers, constantly facing armies of Big Tech lobbyists, have never passed comprehensive regulation of software or digital platforms – and it’s still unclear how Biden’s executive order will work in practice.

Still, new initiatives should probably be welcomed – if only to provide people with more knowledge about AI. New research shows that public trust in AI is rapidly shrinking globally, especially in the US.
