The White House appears to have heard the growing calls to regulate artificial intelligence (AI). The technology is booming, and the government has now pledged to tame the risks tied to its development.
The National Science Foundation plans to spend $140 million on new research centers devoted to AI, White House officials said.
The administration also pledged to release draft guidelines for government agencies to ensure that their use of AI safeguards “the American people’s rights and safety.” The press release added that several AI companies had agreed to make their products available for scrutiny at a cybersecurity conference in August.
On Thursday, Vice President Kamala Harris and other senior administration officials are meeting with the chief executives of the four American companies at the forefront of AI innovation so far – Alphabet, Anthropic, Microsoft, and OpenAI.
According to the White House, Harris will underscore the importance of “driving responsible, trustworthy, and ethical innovation with safeguards that mitigate risks and potential harms to individuals and our society.”
“President Biden has been clear that when it comes to AI, we must place people and communities at the center by supporting responsible innovation that serves the public good while protecting our society, security, and economy. Importantly, this means that companies have a fundamental responsibility to make sure their products are safe before they are deployed or made public,” the press release said.
Interest in AI technology exploded last year when OpenAI released ChatGPT. Tens of millions of people soon began using the chatbot for everyday tasks, and other firms rushed to accelerate their own AI research.
However, experts have been questioning how many workers AI will eventually displace from their jobs. There’s also speculation about the technology spreading misinformation or even beginning to act against humans.
There’s also the question of copyright – chatbots are trained on real data that belongs to someone. Regulators elsewhere are already responding: the European Union, for instance, is edging closer to passing the world’s first comprehensive legislation regulating the technology.
In March, thousands of notable figures, including Twitter, Tesla, and SpaceX boss Elon Musk and Apple co-founder Steve Wozniak, signed an open letter calling on all AI labs to pause the training of systems more powerful than GPT-4.
US President Joe Biden said at the beginning of April that it remains to be seen whether AI is dangerous, but he stressed that technology companies have a responsibility to ensure their products are safe before making them public.
A group of government agencies recently pledged in a joint statement to “monitor the development and use of automated systems and promote responsible innovation,” while punishing violations of the law committed while using the technology.