In an open letter, more than 100 researchers have called on generative AI companies to clarify their rules and allow investigators access to their systems, which are used by millions of consumers.
According to the researchers, protocols created to keep bad actors from abusing AI systems are probably doing that – but they’re also hindering independent research. Safety-testing AI models without a company’s permission can end in a ban or a lawsuit.
“Independent evaluation is necessary for public awareness, transparency, and accountability of high impact generative AI systems. Currently, AI companies’ policies can chill independent evaluation,” says the open letter.
“While companies’ terms of service deter malicious use, they also offer no exemption for independent good faith research, leaving researchers at risk of account suspension or even legal reprisal.”
The letter was signed by top experts in AI research, law, and policy, including Mozilla fellow Deb Raji, a pioneering researcher in auditing AI models; Brown University professor Suresh Venkatasubramanian, a former adviser to the White House Office of Science and Technology Policy; and academics from Stanford University.
The plea, sent to the largest AI companies such as OpenAI, Meta, Anthropic, Midjourney, and, of course, Google, urges the firms to provide a “safe harbor” for researchers to analyze their products. So far, barriers abound.
For instance, while some firms indeed offer researcher access programs, the structure of these allows companies to select their own evaluators. Needless to say, this doesn’t really sound like independent research.
“This is a complement to, rather than a substitute for, the full range of diverse evaluations that might otherwise take place independently,” say the researchers.
They add that in some cases, generative AI companies have already suspended researcher accounts and even changed their terms of service to deter evaluation.
According to the authors of the letter, it could seem like generative AI firms are following in the footsteps of social media platforms, “many of which have effectively banned types of research aimed at holding them accountable, with the threat of legal action, cease-and-desist letters, or other methods to impose chilling effects on research.”
Indeed, generative AI companies are now quite aggressively pushing outside auditors out of their systems. OpenAI, for example, recently claimed that the New York Times “hacked” its ChatGPT chatbot to find potential copyright violations.
Meta’s terms for LLaMA 2, its latest large language model, now revoke a user’s license if they allege that the system infringes intellectual property rights.
And artist Reid Southen, who also signed the letter, had multiple accounts banned from the image generator Midjourney while testing whether the tool could be used to create copyrighted movie characters.
The researchers are asking AI companies to provide two levels of protection for research. First, a legal safe harbor would “indemnify good faith independent AI safety, security, and trustworthiness research.”
“Second, companies should commit to more equitable access, by using independent reviewers to moderate researchers’ evaluation applications, which would protect rule-abiding safety research from counterproductive account suspensions, and mitigate the concern of companies selecting their own evaluators.”