
OpenAI’s Sam Altman in Davos: we will just have better tools


Can technology amplify our humanity? OpenAI's CEO Sam Altman, fresh off a leadership battle at the AI startup late last year, was asked this question at the World Economic Forum in Davos on Thursday.

Technology, especially the rise of generative artificial intelligence, is increasingly important for driving development and prosperity worldwide. However, the potential benefits are shadowed by risks, among them the challenge of making decisions responsibly and of ensuring that jobs go to people rather than machines.

Is it possible to move smoothly into the future with this hugely transformative technology? Altman, who is seemingly still on his AI regulation world tour, certainly thinks so and praises people for understanding “how it works.” This is despite OpenAI quietly removing language from its usage policies that banned military and warfare uses of the company’s tools.

Speaking at the World Economic Forum in Davos, Switzerland, Altman said: “A very good sign about this new tool (generative AI) is that even with its very limited current capability and its very deep flaws, people are finding ways to use it for great productivity gains and understand the limitations.”

According to Altman, caring about what other humans think “seems very deeply wired into us,” and this will not change.

Chess, one of the first so-called victims of AI, is a good example. IBM's Deep Blue supercomputer beat then-world chess champion Garry Kasparov in 1997.

“All of the commentators said, this is the end of chess. Now that a computer can beat a human, you know, no one's going to bother to watch chess again, ever. But chess has, I think, never been more popular than it is right now,” said Altman.

“If you cheat with AI, that's a big deal. And no one watches two AIs play each other. We're very interested in what other humans do. We're going to have better tools. We're still very focused on each other, and I think we will just do things with better tools.”

But what if the technology ends up in the hands of bad people with bad intentions? Altman said he had “a lot of empathy for the general nervousness and discomfort of the world towards companies” like OpenAI.

Sam Altman. Image by Shutterstock.

“I believe, and I think the world now believes, that the benefit here is so tremendous that we should go do this. But it is on us to figure out a way to get the input from society about how we're going to make these decisions, not only about what the values of the system are but also what the safety thresholds are and what kind of global coordination we need,” said Altman.

Earlier at the forum, Altman told Axios that he thought OpenAI's next big model "will be able to do a lot, lot more" than the existing models can. He also said AI was evolving much faster than previous technologies, and that is allegedly why "uncomfortable" decisions might be required.

OpenAI's technology is going to be customized individually, Altman said, and will probably give different users different answers based on their value preferences and – possibly – on the country they reside in.

Altman was abruptly removed as OpenAI's CEO last November before being swiftly reinstated. The dispute was allegedly rooted in a tense internal debate over the company's mission, and after Altman's return, OpenAI's co-founder and chief scientist Ilya Sutskever stepped down from the board.

“When the board first asked me the day after firing me if I wanted to talk about coming back, my immediate response was no, because I was just very pissed about a lot of things about it. And then I quickly got to my senses and realized I didn't want to see all the value and all these wonderful people, who had put their lives into this, destroyed,” Altman revealed.

