OpenAI chief Sam Altman, the man behind ChatGPT, faced searching questions at a Senate committee hearing in Washington yesterday. And while the details are far from decided, his message was clear: AI systems will need regulating, including government licensing.
Alongside Altman sat IBM vice president Christina Montgomery and AI scientist Gary Marcus, professor emeritus of neural science at New York University, both called as witnesses to the much-anticipated congressional hearing.
All three agreed that some kind of federal intervention would be necessary to prevent the potentially transformative technology from backfiring badly for humanity, with Altman acknowledging that AI poses “serious risks” if allowed to progress unchecked.
The senators grilling him weren’t inclined to disagree. More than once, they expressed dismay at having allowed social media to develop unchecked while declaring their determination not to repeat the same mistake with machine learning.
“Congress has a choice now,” said US Senator Richard Blumenthal, chairing the committee hearing. “We had the same choice when we faced social media. We failed to seize that moment. The result is predators on the internet, toxic content, exploiting children, creating danger for them.”
He added: “Congress failed to meet the moment on social media. Now we have an obligation to do it on AI before the threats and the risks become real. Sensible safeguards are not in opposition to innovation. Accountability is not a burden. They are the foundation of how we can move ahead while protecting public trust. They are how we can lead the world in technology and science, but also in promoting our democratic values.”
Altman takes the stand
Altman agreed with the senators that AI would need to be regulated in the future, even going so far as to concur that it would have to be subject to government licensing. But he also insisted that it had the potential to create “fantastic jobs in the future,” in response to recently voiced fears that automation could destroy industries such as copywriting and customer service.
“OpenAI was founded on the belief that artificial intelligence has the potential to improve nearly every aspect of our lives, but also that it creates serious risks,” Altman told the committee. “We're here because people love this technology. We think it can be a printing-press moment. We have to work together to make it so.”
The OpenAI CEO agreed that the development of AI had to go hand in hand with democratic values and seemed at pains to acknowledge fears about the impact that intelligent machines could have on human society. He even admitted to sharing them.
“As this technology advances, we understand that people are anxious about how it could change the way we live — we are, too,” he said. “But we believe that we can and must work together to identify and manage the potential downsides so that we can all enjoy the tremendous upsides. It is essential that powerful AI is developed with democratic values in mind.”
"People love this technology. We think it can be a printing-press moment. We have to work together to make it so."

OpenAI head Sam Altman addresses the US Senate
Sounding a note that might not resonate too well in all parts of the world, Altman added: “And this means that US leadership is critical. I believe that we will be able to mitigate the risks in front of us and capitalize on this technology's potential to grow the US economy and the world's.”
Altman implied that a tiered system of regulation would have to be set up, one that effectively kicks in once AI becomes capable enough to pose a serious risk to humanity if left unchecked.
“Regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” he said. “For example, the US government might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities.”
Unsurprisingly, Altman was happy to put forward his own company as a candidate for partnership to ensure AI safety.
“Companies like ours can partner with governments, including ensuring that the most powerful AI models adhere to a set of safety requirements, facilitating processes to develop and update safety measures, and examining opportunities for global coordination,” he said.
Echoing the Senate committee’s own remarks, he added: “And as you mentioned, I think it's important that companies have their own responsibility here, no matter what Congress does.”
IBM wants regulation: up to a point
Representing computing giant IBM, another key player in AI development, Montgomery said any regulation of AI would need to be nuanced and could not take a one-size-fits-all approach.
“At its core, AI is just a tool and tools can serve different purposes,” she told the committee. “To that end, IBM urges Congress to adopt a precision-regulation approach to AI. This means establishing rules to govern the deployment of AI in specific use cases, not regulating the technology itself.”
She added: “Different rules for different risks — the strongest regulation should be applied to use cases with the greatest risks to people and society. There must be clear guidance on AI uses or categories of AI-supported activity that are inherently high risk.”
But she stressed that “consumers should know when they’re interacting with an AI system” and should always be given the option to “engage with a real person should they so desire.”
"Companies active in developing or using AI must have strong internal governance, including a lead official responsible for strategy."

IBM vice president Christina Montgomery weighs in with her opinion
“No person anywhere should be tricked into interacting with an AI system,” she said, adding that companies should be obliged to screen their models for signs of bias and other ways they might manipulate the public.
“Companies active in developing or using AI must have strong internal governance, including a lead official responsible for an organization's strategy,” she said, calling for an “ethics board” to be set up within every company developing the technology.
Pointing to IBM’s own version of this, Montgomery added: “It provides centralized governance and accountability while still being flexible enough to support decentralized initiatives across IBM's global operations. We do this because we recognize that society grants our license to operate: with AI, the stakes are simply too high.”
Referring to Meta CEO and Facebook founder Mark Zuckerberg’s now-infamous quote from last decade about giving Big Tech free rein to move fast, she said: “The era of AI cannot be another era of ‘move fast and break things.’ But we don't have to slam the brakes on innovation either.”
Professor pulls no punches
Of the three expert witnesses called by the Senate, Professor Marcus was the least inclined to mince his words. Describing AI’s impact on human society as “fundamentally [...] destabilizing,” he warned that intelligent machines “can and will create persuasive lies at a scale humanity has never seen before.”
Such tools would naturally be weaponized by foreign regimes unfriendly to America, he implied, while domestic actors motivated by politics or profit would seek to do the same.
“Outsiders will use them to affect our elections, insiders to manipulate our markets and our political systems,” said Marcus, echoing Blumenthal’s concerns and adding: “Democracy itself is threatened.”
If the likes of Cambridge Analytica influencing the 2016 election of Donald Trump and the Brexit vote were bad enough, AI left to its own devices could be far, far worse. If, of course, what Professor Marcus claims proves to be true.
"Choices about the data sets that AI companies use will have enormous, unseen influence. Those who choose the data will make the rules shaping society in subtle but powerful ways."

AI scientist and New York University professor Gary Marcus sounds a warning for the future
“Chatbots will also clandestinely shape our opinions, potentially exceeding what social media can do,” he said. “Choices about the data sets that AI companies use will have enormous, unseen influence. Those who choose the data will make the rules shaping society in subtle but powerful ways.”
As an example of this, he pointed to a recent case of ChatGPT essentially libeling a US law professor by wrongly accusing them of sexual harassment, citing a non-existent article that it falsely claimed had been written for the Washington Post.
Worse still, Marcus added, this could be flipped on its head by those seeking to use the growing skepticism around AI-generated content to posit a ‘fake news’ defense against legitimate accusations.
“The more that happens, the more that anybody can deny anything,” said Professor Marcus. “As one prominent lawyer told me [last] Friday, defendants are starting to claim that plaintiffs are making up legitimate evidence. These sorts of allegations undermine the abilities of juries to decide what or who to believe and contribute to the undermining of democracy.”
Other chilling examples of AI chatbot misuse cited by Marcus involved the incitement of suicide and the facilitation of child abuse.
“A large-language model recently seems to have played a role in a person's decision to take their own life,” he told the Senate. “The model asked the human: ‘If you wanted to die, why didn't you do it earlier?’ and then followed up with: ‘Were you thinking of me when you overdosed?’ without ever referring the patient to the human health[care] that was obviously needed. Another system rushed out and made available to millions of children told a person posing as a 13-year-old how to lie to her parents about a trip with a 31-year-old man.”
Money vs. morals
Professor Marcus also appeared to have little time for corporate claims of prioritizing safety and transparency over profit — although Altman himself has insisted repeatedly that OpenAI is not run on an exclusively for-profit model.
Calling for government intervention, Professor Marcus said: “The big tech companies’ preferred plan boils down to ‘trust us.’ But why should we? The sums of money at stake are mind-boggling. OpenAI’s original mission statement proclaimed ‘Our goal is to advance AI in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.’
“Seven years later, they're largely beholden to Microsoft, embroiled in an epic battle of search engines that routinely make things up. And that's forced [Google owner] Alphabet to rush out products and de-emphasize safety. Humanity has taken a back seat.”
Warning of a disconnect between idealistic visions of what humanity wants AI to look like and the reality on the ground, Professor Marcus added: “We all more or less agree on the values we would like for our AI systems to honor. We want our systems to be transparent, protect our privacy, be free of bias and, above all else, be safe.
“But current systems are not in line with these values. Current systems are not transparent. They do not adequately protect our privacy and they continue to perpetuate bias. And even their makers don't entirely understand how they work. We cannot remotely guarantee that they are safe and hope here is not enough.”
However, he did praise Altman’s professed commitment to greater partnership between independent scientists and government bodies “to hold the companies’ feet to the fire.”
“We have unprecedented opportunities here, but we are also facing a perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation, and inherent unreliability,” said Professor Marcus.
He added: “AI is among the most world-changing technologies ever, already changing things more rapidly than almost any technology in history. We acted too slowly with social media. The choices we make now will have lasting effects for decades, maybe even centuries.”