When it comes to machines and humans, Western thinkers are too preoccupied with a “them vs. us” mentality and might do well to look East for inspiration. That was my key takeaway after talking to UN artificial intelligence advisor Neil Sahota.
With more than a dozen patents to his name, the public speaker and author has been working with the United Nations to steer it closer to its 17 Sustainable Development Goals. Closely involved with this year’s UN-mandated AI For Good initiative, he hopes the emerging technology can help.
Neil’s credentials are impressive, so there is some reason to believe him. He first came to the attention of IBM, which recruited him to work on its AI program, Watson, and awarded him the coveted Master Inventor status for innovation in the field.
While he acknowledges the fears that have multiplied around AI and its potential misuse, he remains optimistic. And, having worked with corporate heads in Ningbo, a city in China’s Zhejiang province, he also believes that the Eastern mindset is more attuned to working with machines for the benefit of humankind.
Could he be right? For decades, Western storytelling has emphasized the ‘Terminator effect’ – whereby machines initially intended for good end up wreaking havoc on the human race. But look East, Neil says, and a very different prognosis emerges.
Cybernews sat down with him to discuss AI, online disinformation, and the need to properly educate the public about the developments that are fast upon us as technology transforms the world at a rate never seen before.
Perhaps you could start by telling us how you got into this space.
I’ve always been a problem solver. And so 22 years ago, I started doing some work to see if computers could look at data, and I was calling it “enterprise intelligence” back then. I got some patents and other things around it, which caught the attention of IBM. In 2006, I was asked to join a secret project: Watson.
You’re an IBM Master Inventor – tell us more about what that entails.
That's a designation that's given out to very few people, usually when you've created some sort of – cliché! – game-changing technology that has a tremendous impact, as well as multi-billion-dollar value.
Tell me more about the work you did with the UN and why it brought you in as an AI technology advisor.
So 2015, there’s a big conference. A lot of world leaders and ambassadors are there. I was asked to give a speech about AI specifically, and I was told in advance that there are a few leaders who think AI is Terminator: time to rise up, conquer the world, and eradicate humanity.
Skynet syndrome, I guess…
Yeah. So I went a little more optimistic. I focused more on how AI is already being used in public service and could be used towards the [UN] Sustainable Development Goals [SDGs], and that really seemed to resonate. Several of the leaders came up to me at the reception, and they're like, “Neil, we'd only ever thought about this technology in terms of regulation and policy – we never thought about using it. I think there's an opportunity here. Can you help us figure something out?” And that really led to the creation of the AI For Good Initiative, where I realized AI and other emerging technology could bridge some of the gaps in making the SDGs a reality.
It's a very solutions-focused initiative. It's not just about governance and policy and regulation. It's also about projects and putting things out there that people can use to make a difference towards the SDGs. It's not like we're going to solve climate change with one big project. It's going to be a series of projects.
Around 2018 or 2019, we started the Innovation Factory, which is actually a program for social impact entrepreneurs, because I've learned that local problems have global solutions. So, one area is food production and safety. It's an issue around the world. Same thing with upskilling and workforce development. The goal around this has been to try and jump ahead, not just with AI, but with blockchain, the metaverse, to say, “How can we advance the SDGs, and how can we do good for people and the planet?”
"Now, with all the war, skirmishes, everything going on, there's a lot of fake news, deep fakes, deep audios. We've already seen some of that in Ukraine and Russia. I'm sure that's going on with Israel and Hamas."
UN artificial intelligence advisor Neil Sahota
Now, with all the war, skirmishes, everything going on, there's a lot of fake news, deep fakes, deep audios. We've already seen some of that in Ukraine and Russia. I'm sure that's going on with Israel and Hamas. We see it in some of the elections, not just in North America but internationally. It has gotten to the point where information is now weaponized.
Tell me more about the social impact entrepreneur. What is that, as opposed to a regular entrepreneur?
So, a social impact entrepreneur is a person running a for-profit company where the central tenet, or some key element of the business, is focused on trying to improve society or the environment. We often think of it as: if they're doing something that helps drive one of the Sustainable Development Goals, then they’re a social impact entrepreneur.
The UN has 17 SDGs, right? Which of these do you think AI can help with?
That's right – like zero hunger, good health, justice. We've completed over 170 projects, and we have 281 going on right now across all 17 SDGs.
I recently spoke to Wasim Khalid, CEO of Blackbird AI, who specializes in countering disinformation online. He said that even if there were a perfect piece of software to weed out fake and manipulated content, it still wouldn't work because “people are so ideologically aligned that they wouldn't believe such a system anyway.” That’s a valid concern, right?
It is. We always talk about things as “people, process, technology,” and people are usually where most of the challenge lies. Every time there's a technological tool, it's going to impact the process. We have to recognize that. And if we don't understand, adapt, or incorporate it, we're stuck. I hate to say that everyone lives in a bubble, but our bubbles seem to be shrinking because, ironically, our news and social media feeds are all being controlled by AI [that] learns your opinions, attitudes, and interests. So it's feeding you stuff you basically already believe in.
So you have this big echo chamber, and you think more people share your views than actually do. We don't get that diversity of perspective or thought anymore. And even if we had this magical tool, there’s still some percentage of the population that would not believe the tool actually works, or that would believe the tool is trying to lie to them, or that wouldn't think of it at all and would just say: “it looks or sounds like that person, so it must be that person.”
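To make the dynamic Neil describes concrete, here is a deliberately simplified, hypothetical sketch of similarity-based ranking – nothing like a production recommender, but enough to show how a feed that optimizes for “stuff you already believe in” pushes unfamiliar perspectives to the bottom. All names and example headlines here are invented for illustration:

```python
# Toy illustration of how similarity-based ranking narrows a feed.
# Purely hypothetical: real recommender systems are far more complex.
from collections import Counter
import math

def vectorize(text: str) -> Counter:
    """Bag-of-words vector for a piece of content."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# The user's "profile" is just the sum of everything they clicked on.
clicked = ["ai will destroy jobs", "automation threatens workers"]
profile = sum((vectorize(t) for t in clicked), Counter())

candidates = [
    "ai threatens millions of jobs",       # similar to past clicks
    "automation will destroy factory work",# similar to past clicks
    "new ai tool helps doctors diagnose",  # the dissenting perspective
]

# Ranking purely by similarity buries the dissimilar story:
for text in sorted(candidates,
                   key=lambda t: cosine(profile, vectorize(t)),
                   reverse=True):
    print(f"{cosine(profile, vectorize(text)):.2f}  {text}")
```

Run repeatedly with each click folded back into the profile, and the feed converges on ever-narrower variations of the same view – the shrinking bubble Neil refers to.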
So, how would you counter that? Do you have any ideas for solutions, or is that a whole other area of expertise?
It's something we all think about. One of the common things that gets bandied about is some ‘trust system’ you can create: any piece of content, whether it's image, video, audio, or even text, could there be something like a watermark to authenticate? And people have been trying that, but, unfortunately, you get fake watermarks. So, honestly, it’s going to be a new arms race. These guys are going to get better with their fake technology, and we'll develop better counter tools, so they’ll develop better fake technology, and we'll develop better counter tools. It'll be an ongoing struggle.
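For a sense of what such a trust system might look like in code, here is a minimal, hypothetical sketch using a keyed signature rather than a visible watermark. Real provenance schemes use public-key certificates instead of the shared secret assumed below, and, crucially, a valid signature only proves origin and integrity – not that the content is true:

```python
# Hypothetical sketch of a content "trust system": the publisher signs
# content with a key, and a verifier can check that the bytes haven't
# been altered. Unlike a visible watermark, the signature can't be faked
# without the key -- but it says nothing about truthfulness.
import hmac
import hashlib

SECRET_KEY = b"publisher-signing-key"  # placeholder; real systems use asymmetric keys

def sign(content: bytes) -> str:
    """Produce an authentication tag for the content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    """Check the tag in constant time to resist timing attacks."""
    return hmac.compare_digest(sign(content), signature)

original = b"Official statement: no troops were moved."
tag = sign(original)

print(verify(original, tag))                                   # True
print(verify(b"Official statement: troops were moved.", tag))  # False: tampered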
You’ve also worked on legislation around Lethal Autonomous Weapons Systems (LAWS). However, the UN hasn’t yet agreed on what constitutes such a weapon. Tell me about your experience with that and why it’s proving so difficult to define.
Yeah, there’s some politics in that. You know, drones have come into very common usage in a lot of countries. So, are drones technically lethal autonomous weapon systems? You’ve got some people saying, “We don't want that to get classified that way.” So there’s a lot of tug and pull about what items actually constitute this. That's why there’s no agreement. And anything within the UN requires the approval of the Security Council: if somebody on the Security Council vetoes it, the whole thing is blocked. That's been one of the challenges.
And because there are wars going on right now that involve some Security Council nations, you're seeing a lot of politics about trying to dance around what really constitutes that. Everyone agrees it’s morally wrong, but no one wants to say, “Well, X, Y, and Z constitute Lethal Autonomous Weapon Systems.”
"Look at Eastern culture, movies, and books. They've always seen AI and robots as helpers and assistants, as a tool to be used to further the benefit of humans, and as a result, they're actually wired to look for opportunities."
Neil explains why he thinks countries like China might be more receptive to developments in AI and robotics
You're obviously a passionate advocate for AI, and you’ve expressed that in the book you co-authored, Own the AI Revolution. But another book I recently reviewed, Blood in the Machine by Brian Merchant, suggests tech will be used to degrade workers – their pay, living standards, and dignity. What would you say to those who fear that AI will be used to concentrate even more wealth and power into even fewer hands?
There are two things we have to keep in mind. One, technology is just a tool, and it's how we choose to use it: we can create or we can destroy. If we don’t encourage people to try and do good or helpful things with it, not that many people will focus on that right away. We really have to create the mindset that this can be a powerful tool.
I’ve already seen a lot of companies that went in thinking AI and robotics would help reduce headcount, and they really haven’t reduced it that much. In fact, what they’ve said is that now that some of these grunt tasks are taken off people’s plates, they can do more of the complex, value-added work – but [you] need people for that. AI is good with standardized, repetitive tasks, but it can’t do everything. This is really about hybrid intelligence: the partnership of humans and machines, basically complementing human capabilities with machine abilities.
The World Economic Forum has said that in the next three to five years, it expects 80 million jobs to be displaced – not eliminated, changed. But it also foresees the creation of 90-something million jobs in that same timeframe. There’s gloom and doom, and there’s optimism, but we have to keep in mind that it depends on what we do with this.
That brings me to my second point: there’s also an inherent cultural difference going on. In Western culture, movies, and books, it's always been human versus machine, right? But there’s something special about us as humans – we always wind up winning. And so we've grown up with an adversarial relationship: we’re trained to look at AI as a threat. And that kind of skews our perception.
But look at Eastern culture, movies, and books. They’ve always seen AI and robots as helpers and assistants, as a tool to be used to further the benefit of humans, and as a result, they're actually wired to look for opportunities. That’s why places like Japan, Korea, and China are further ahead of us in the use of AI in healthcare and in elevating the quality of services – because there you have that kind of mindset. The [US sci-fi] movie Crater ironically plays right to that narrative: there’s this war, Western versus Eastern; Western [culture] wants to eradicate, Eastern culture wants to protect and use.
That is a fascinating insight – unpack it for me a bit more. You’re talking about countries like China and Japan being more advanced in the uptake of robotics. Why do you think those Asian countries have that different approach?
It's a multifaceted answer. But a couple of the big drivers, I think, for Eastern culture are that you have this big sense of community, right? They don't think about individuals and slices of the pie – who gets less pie, who gets more pie, that kind of stuff. They think of society, big picture first. I think because of that viewpoint, they're looking for more tools and things that can help generally.
The second big driver, I think, is some of those areas are actually very resource-constrained. Even things like wood and pulp and paper are a bit rarer and more expensive. They've realized that anything that helps bridge the gap, where they can optimize or maximize the resources they have, is beneficial. I think that's why they've become very, to a degree, tool-oriented.
We're living in a time of, essentially, hyper-change: we're probably going to experience a hundred years’ worth of change in the next ten. And our mindset and ability to deal with that isn't quite there. I still hear from a lot of people: “I understand some of the things that are coming. It’s going to impact the workforce, or it’s going to impact lives, but we probably have 15 or 20 years to figure it out.” And the truth is we don’t. We might have 10 or 15 months, and we can't wait for something bad to happen. We can't just react. We have to get proactive about these things, which means that it's not just technology updating our processes. It’s also about how we get people ready for hyper-change.
Any suggestions in a nutshell on that? I know, it’s hard to be reductive about this kind of thing, but go for it…
I think it’s going to be a major mind shift. We have to get good at scenario planning and thinking of the ‘non-happy’ path. Think about exceptions. We build to a use [purpose], right? For a lot of engineers and technologists, there's the outcome they’re looking for, and it gets built to that: they’re not thinking about other uses or misuses, the ‘what if’ scenarios. And we have to get good at that if we're going to jump ahead of it.
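In software terms, the mind shift Neil describes might look something like this – a trivial, hypothetical example contrasting code built only for the expected outcome with code that considers the ‘what if’ scenarios:

```python
# Toy contrast between "build to the use case" and scenario planning.
# The function names and scenario are invented purely for illustration.

def split_budget_happy(total: float, teams: int) -> float:
    # Built for the expected outcome only: crashes on teams == 0,
    # silently accepts a negative budget.
    return total / teams

def split_budget_defensive(total: float, teams: int) -> float:
    # The 'non-happy' paths handled explicitly before the happy one.
    if teams <= 0:
        raise ValueError("teams must be a positive number")
    if total < 0:
        raise ValueError("budget cannot be negative")
    return total / teams

print(split_budget_defensive(90_000, 3))  # 30000.0
# split_budget_happy(90_000, 0) would crash with ZeroDivisionError
```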
And do you think, just going back to the UN, that it will play a major role in mediating this kind of process?
I think everyone agrees the UN should play a role. It’s a question of how big or small a role that should be. That’s what is being debated internally at the moment.