Technology giants alone cannot be trusted to ensure safe and fair generative AI, a new report by the Norwegian Consumer Council warns.
The Norwegian governmental agency has also called on policymakers and regulators to resist attempts by technology companies to dilute any future laws aimed at protecting consumers from harmful uses of AI.
“We must ensure that the development and use of generative AI is safe, reliable, and fair. Unfortunately, history has shown that we cannot trust the Big Tech companies to fix this on their own,” Finn Myrstad, director of digital policy at the Norwegian Consumer Council, said.
Fourteen other consumer rights organizations from across Europe and the US have joined the call, as reports emerge that OpenAI, the company behind ChatGPT, was actively lobbying European officials to water down the EU’s landmark AI Act.
“It is crucial that the EU makes the AI Act as watertight as possible in terms of protecting consumers from harmful uses of this technology,” Ursula Pachl, deputy director general of the European Consumer Organisation, said.
She added: “We call on EU institutions to resist the powerful lobbying of Big Tech companies to water down protections in the future law.”
Until the EU's AI Act comes into force and an international framework regulating AI is created, it’s up to national authorities to act and enforce existing data protection, safety, and consumer protection legislation, consumer groups said.
“Companies cannot be absolved from the EU’s existing regulations, nor should consumers be manipulated or misled, just because this technology is new,” Pachl said.
Privacy, security concerns
The report by the Norwegian Consumer Council highlighted risks of generative AI that it said are not being properly addressed, including manipulation, bias, privacy challenges, and impacts on labor, among others.
It also warned that the technology was being concentrated in the hands of a few Big Tech companies, was built on opaque systems, and lacked accountability.
The report called for strengthened consumer protections and the development of an “overarching” AI strategy centered on fundamental rights, with strict guidelines for the use of generative AI in the public sector.
“Technology is not some uncontrollable force, but must be adapted and formed by fundamental rights, regulations, and societal values. We are in the driver’s seat if we choose to be,” Myrstad said.
Technology companies like Google and Microsoft are racing to incorporate AI into their services despite internal concerns about the process.
OpenAI has reportedly warned its backer Microsoft against rushing GPT-4 integration into Bing, while Google has cautioned its own employees against disclosing sensitive information to its chatbot Bard, or using code generated by it.