Artificial intelligence (AI) has almost become a cliche of prediction – talk to most people, and it’s destined either to be the savior of the human race or its ultimate downfall. But what does AI really look like up close to somebody who works with it every day? Cybernews sat down with an industry expert to find out.
Daniel Bruce is Chief Product Officer for Levatas, an AI company working hand in hand with factory robot manufacturer Boston Dynamics to provide its machines with what he calls “cognitive intelligence” to complement the “athletic intelligence” they already possess.
This synthesis of functions, Bruce says, will be key to the success of the robot worker program, and he predicts major breakthroughs in this and other AI projects within the next five years. Naturally, this portends significant changes for the human race in terms of how we work to sustain our material way of life.
As the chief architect behind the Cognitive Inspection Platform, Levatas’s patented active-learning engine, Bruce seemed well-placed to paint a picture of what the AI-driven future might look like within the next decade.
For starters, just talk me through the practical difference between the athletic intelligence that Boston Dynamics has been developing and the cognitive intelligence provided by Levatas.
There aren't really black-and-white lines to distinguish that, but I can give you broad brush strokes. Boston Dynamics' focus is on building the best-of-breed robot with the ability to navigate very difficult and unstructured environments. Things like staircases, piles of rubble, and, in an industrial environment, climbing over or crouching under obstacles to get to equipment. Being able to read gauges, get close enough to check the thermal reading for a piece of equipment, that type of stuff.
This robot has a native ability to balance itself, so it can navigate slippery conditions without falling over. If something happens and it were to fall over, it has the ability to right itself and figure out how to stand back up in a lot of situations. So it's really built to navigate very ruggedized environments that your traditional tracked or wheeled robots would not be able to.
Some folks are more familiar with the concept of what's called semantic understanding: the Boston Dynamics robot comes with cameras, those can look at things like gauges or thermal readings, or people, or objects. But the robot itself does not understand what those mean. A lot of our customers want to use the robot to get to that equipment and then use the Levatas platform to understand what's going on. This is a gauge, and it's reading at 35 PSI [unit used to measure pressure]. This is a piece of equipment that's overheating. This is a person in a restricted area. And we need to trigger some kind of notification to respond accordingly. That's the way that we distinguish between athletic intelligence on Boston Dynamics side and cognitive intelligence on the Levatas side.
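The flow Bruce describes – a robot captures a gauge or thermal reading, the platform interprets it, and an out-of-range value triggers a notification – can be sketched in a few lines. This is purely an illustrative sketch: the type names, thresholds, and rule table below are assumptions for demonstration, not the actual Levatas API.

```python
# Hypothetical sketch of the "cognitive" layer described above: a reading
# arrives from the robot's camera pipeline, and simple rules decide
# whether to trigger a notification. All names/thresholds are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reading:
    asset_id: str
    kind: str      # e.g. "pressure_psi" or "temperature_c"
    value: float

# Illustrative alert rules: kind -> (safe minimum, safe maximum)
RULES = {
    "pressure_psi": (20.0, 50.0),
    "temperature_c": (0.0, 80.0),
}

def check(reading: Reading) -> Optional[str]:
    """Return an alert message if the reading is out of range, else None."""
    low, high = RULES[reading.kind]
    if not (low <= reading.value <= high):
        return (f"ALERT: {reading.asset_id} {reading.kind}="
                f"{reading.value} outside [{low}, {high}]")
    return None

# A gauge reading of 35 PSI is in range, so no alert; an
# overheating piece of equipment triggers one.
print(check(Reading("gauge-7", "pressure_psi", 35.0)))
print(check(Reading("pump-3", "temperature_c", 95.0)))
```

In a real deployment, the interpretation step (turning a camera image into a numeric reading) is where the machine-learning models do the heavy lifting; the alerting logic on top of it can remain this simple.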
If you think about humans, there are basic tasks like walking, running, jogging, balancing, crouching – those types of things map onto athletic intelligence. But there are higher-order functions, like driving: understanding when to change lanes, that type of thing. And that's what we really refer to as cognitive intelligence.
"The robot has a native ability to balance itself. It's built to navigate very ruggedized environments that your traditional tracked or wheeled robots would not be able to."Daniel Bruce, Chief Product Officer at Levatas
What kind of clients are you working with – I presume we're talking about big industrial manufacturers?
Typically large enterprises. We do have some smaller customers, usually on an R&D [research and development] basis. But usually, it is the larger manufacturers and oil, gas, and energy customers. The reason why is that the robot and our platform are sizable investments for an organization. It makes sense for organizations producing substantial output at scale, where saving hundreds of thousands or millions of dollars in equipment maintenance costs, or increasing efficiency by 5-10%, is a substantial needle-mover: being able to detect equipment that's overheating or not running at full optimum output, saving a few percentage points on performance, or extending equipment lifetime by, say, 15-20%.
So from a consumer point of view, what kind of goods are going to be manufactured in a more cost-effective manner?
We work with a major semiconductor chip manufacturer, and their chips are probably in every single device that you use: computer, iPhone, car. Also, it's an industry that's having a very hard time recruiting for the next-generation workforce. And so for them, it's absolutely critical that they keep up with demand and hire enough help. We also work with a beverage manufacturer. It's a household name that you would run into every day.
The use cases are really quite wide. It's really any customer using a substantial amount of industrial equipment as part of their manufacturing process because these are machines that break down, overheat, may become corroded, or just not operate at peak efficiency. So those types of customers tend to be really good fits for us.
You mentioned workforce retention and recruitment problems. What's the issue with that?
These jobs that our customers are usually trying to automate and bring in robots to help with tend to be very unfulfilling, strenuous, or in some cases, dangerous. We have customers using robots in radioactive environments not safe for humans to be in for any prolonged amount of time. We have customers that have to inspect equipment that is running very, very hot. So these are sweaty, uncomfortable environments for people to be in. There's a desire to automate those tasks because humans don't want to do this, or it may not be safe for them.
Beyond that, though, there's also just the monotony of these tasks. A typical customer may have a facility that has 100,000 analog gauges that need to be read and digitized, and somebody needs to walk through with a tablet and look at every single one of these gauges and write down the reading and enter it into a tablet. And it's just mind-numbing.
"These are sweaty, uncomfortable environments for people to be in. There's a desire to automate those tasks because humans don't want to do this, or it may not be safe for them."Daniel Bruce, Levatas
But on the other hand, these also tend to be fairly highly trained, certified employees. You know, for the most part not college kids or recent graduates: in a nuclear facility, for example, you have to have a certification to even step into these facilities. So those are some of the economic dynamics. Then you have a workforce that is retiring – they were trained 30-40 years ago to do these jobs. And there is not enough supply to replace them; these are jobs that the next generation does not want to do. Then you've got forces like Amazon that are, for all intents and purposes, setting the minimum wage across the country. Because if you can get a job at Amazon making $25 an hour, you're not going to work at another manufacturer for less than that.
So there's real appeal in being able to bring in a robot that's essentially a one-time expense upfront, plus some ongoing maintenance costs: the robot doesn't get sick, it doesn't need smoke breaks or to sleep and recover overnight. And once you've trained this robot once, it doesn't need to be trained again. Some of these industries claim to have 170% turnover a year for these roles, literally spending months recruiting: and these employees will be there for, in some cases, less time than they spent getting trained.
Are factory workers concerned about their jobs being taken? Because obviously that's another issue that comes up a lot around AI. But from what you've been saying, it sounds like the next generation is not going to want these jobs anyway.
It would probably be naive to think that there will be no replacement or competition between humans and robots. Historically, if you look at things like carriage drivers and that type of thing, there are pain points: people who made a living taking care of horses and driving carriages that were out of a job when cars came around. If you look at that, though, and step back, that's like a blip on the historical map. And so fast forwarding all the way here to robots, they don't remove jobs. They create jobs.
We see some signs of that already. Our customers are looking at taking some of these highly specialized roles and centralizing them, and so people who are walking around performing these rounds today are moving to a more white-collar office type of job, reviewing the data that's captured by these robots. In terms of conditions, instead of walking around a radioactive or uncomfortable environment, they're sitting at a desk in air conditioning. There are jobs training the robots to walk these routes and perform these tasks.
For the long term, we're very optimistic. These types of trends tend to be very good in general. They result in increased work-life balance, job satisfaction. It sort of escalates humans into higher-order tasks and gets rid of some of those less desirable, less safe jobs that humans really don't want to be doing anyway.
Do you have similar optimism about what we were talking about in terms of the reduced costs being passed down to the consumer – whether that be a beverage or a car, do you see cheaper goods arising as a result of this new technology?
Yes, that's what I hope. These are industries that tend to be very competitive, so our customers are looking to optimize. And certainly, one of the ways that they can compete, especially with inflation going out of control right now, is on a price point. We're helping them create an environment to compete more efficiently, and hopefully, that does get passed down to consumers – it doesn't always work like that.
How long before this really starts to kick in? Are we looking at this coming through in the next five years?
Most of our customers are either in experimentation with the technology or initial pilots. So they may have chosen one or two locations and a limited rollout where they're testing. These are large companies, they tend to move kind of slow: this is not something we see really hitting full-scale adoption in the next, say, 12 to 18 months. But in the next five years, absolutely. The economics are just too compelling, in our opinion.
What's being vetted at this point are the kinks of robots co-existing alongside humans in these environments efficiently and safely: making sure they're well trained, and identifying the right jobs for the robots to do versus humans.
Do you ever worry about these robots being hacked? What are your views on the role artificial intelligence has to play in relation to cyber conflict?
Our software in these robots exists right now in some very sensitive areas. We're talking about manufacturing, but in the energy space, these solutions are being tested and piloted for critical infrastructure applications – areas of the grid that, if they went down, would be detrimental to the common good – but also very dangerous areas like nuclear facilities. So it's also being vetted from a security standpoint in those types of solutions as well.
There are a lot of ways that risk can be mitigated. Ultimately, we tell our customers nothing is 100% secure: from a digital standpoint, everything is hackable. That said, these robots are built to run completely localized within a customer's environment. What that means is the net new risk they represent is no different from that posed every time you connect a computer or any other device to your network. Yes, it could become infected or compromised: but the measures taken today to control these devices still very much hold true.
"There's a lot of fearmongering around artificial intelligence. I think a lot of that is deserved in the long term: if you fast forward 50-100 years from now, there are existential and certainly ethical concerns that we will probably have to deal with."Daniel Bruce, Levatas
Anytime you're talking about a robot versus just a laptop, there's a higher degree of concern there. But a lot of the same paradigm still applies as far as AI in general. There's a lot of fear-mongering around artificial intelligence. I think a lot of that is deserved in the long term: if you fast forward 50-100 years from now, there are existential and certainly ethical concerns that we will probably have to deal with.
In the short term, though, we see those types of concerns being very far out, particularly for the applications that we're working in. I think ethical and security concerns tend to be much more prominent today in areas like self-driving cars, or where artificial intelligence is being used to automate or guide healthcare or hiring decisions: bias and that type of thing tends to be very much an ethical concern. For us, tradeoffs tend to be things like, is this piece of equipment operating at peak efficiency or not? Making a mistake around that could cost money, but it's generally not going to cost lives.
You said an advantage with these robots is that they don't get tired or take sick days, but surely they will require maintenance? What kind of shelf life do AI worker machines have in your anticipation?
To paint a picture of the extremes of environments that we're working within, we've got robots that are operating on oil rigs out in the ocean. That is about as difficult an environment as you can imagine: salt water and metal components notoriously do not pair very well. Also in that category, you've got radioactive environments. On the other end of the spectrum, we've got customers that are manufacturing semiconductor chips – that has to be an extremely controlled, clean environment. You've got people walking around in white gloves, masks, and suits all the time.
We and Boston Dynamics have not tested this at scale, but the shelf life on an oil rig could be as short as six months. In a controlled environment, shelf life can be as long as 20 years. Obviously, this equipment hasn't been around for 10 or 20 years, so we're making educated guesses. But like any mechanical device, there is the equivalent of tires that need to be changed: the Boston Dynamics robot has pads on its legs that have to be replaced on a regular basis. There are components that will fail and need to be replaced. All of those will create jobs for maintenance teams to keep these robots in operating order.
Do you think we'll ever see a time when robots repair the robots?
I think we definitely will, but probably not in the five-year timeframe.
And when you talk about shelf life, I presume, it's not six months, and the entire machine has to be junked and replaced?
It's substantial enough maintenance that an oil and gas customer will be doing the math on essentially replacing the cost of the robot every six to 12 months. Even if they have to replace this robot on a more regular basis, for many of these applications, it's still just a no-brainer.
We've been covering robot dogs quite a bit lately, and, it has to be said, a lot of them have been military or police machines. Does that bother you at all? Is it a bit of an image problem for you to see footage of these lethal cyber dogs going viral on Twitter?
Boston Dynamics and Levatas are firm believers in our solutions, whether that be the robots or the software, not being weaponized. We do see a lot of legitimate uses for the technology in more defensive scenarios: there was a news story recently about police using it to do bomb detection. And that's an exciting use case for us – walking a [robot] dog into a potentially explosive situation rather than a person. That's something most of us would support. But then you also see these horrifying stories of robots patrolling poverty-stricken areas, searching for crime. It just feels too Orwellian.
I can't speak on Boston Dynamics’ behalf, but they have made very public statements about this. They signed an anti-weaponization statement quite recently. We don't step into those types of use cases. It's not that robots have no legitimate uses in military contexts, it's just that the ethics, frankly, are so frightening and fraught with problems that we don't want to be involved in that at all.
I understand you don't, but are you worried about cyber espionage? Could somebody, whether it be China or even a homegrown threat, try to access the technology by any means possible?
Yes. In many ways, this is just another machine that poses a risk. If you remember the Stuxnet virus that made headlines, I guess that was almost ten years ago now. That was an example where smart devices within an energy plant were compromised. That certainly could happen with robots. But we don't see that as a unique type of threat or problem. It's just, unfortunately, the same challenging, in some ways frightening world that we've lived in for the last 15 or 20 years.
"I think we're going to see humanoid robots entering everyday life, almost the way that we see drones today."Daniel Bruce, Levatas
Do you see robots coming in other shapes apart from quadrupedal?
Yes. There are companies experimenting with spider robots, with very small forms the size of a roach or mouse. If you look at the Boston Dynamics Atlas robot, or the Optimus robot from Tesla, that humanoid form factor is very exciting to us for a lot of reasons. There's a scary aspect: the idea certainly opens up a more direct comparison to robots taking over the world. You know, robots that look like us and are as smart as us. But for practical use cases, there are a lot of places where a humanoid or bipedal form is just far and away the best solution. Think things like packing boxes: it's not accidental that humans have evolved the capabilities that we have with our hands.
Do you see anything really way out in the near-to-far future, like entertainment for kids in the form of fantastical or mythical creatures created using AI robotics? Or how about enhanced humanoid forms for other service jobs – for instance, I can just picture a robot bartender in 50 years’ time with multiple limbs pouring drinks! Or does that purely belong to the realm of science fiction?
My guess would be we're probably looking at the 10-20 year timeframe for that. I obviously could be wrong. Grand scheme of things: Stuxnet was ten years ago, as we were just saying. That's crazy, to realize that's a blink. I think we're going to see humanoid robots entering everyday life, almost the way that we see drones today. Or driverless cars: between now and, say, the next ten years, we'll see specialized applications of them. That would be my guess, but who knows? As somebody once said, predictions are difficult – especially about the future!