A robot has been trained to perform surgical procedures as skillfully as a human doctor simply by watching videos of professional surgeons.
Researchers behind the experiment described the training system, which utilized imitation learning, as a “breakthrough” that opens a new frontier in medical robotics.
The successful use of imitation learning to train robots eliminates the need to program each individual movement required during surgery.
This brings surgical robots closer to true autonomy, where they could perform complex surgeries without human help, researchers said.
The findings of the study, which was carried out by a team that included researchers from Johns Hopkins University (JHU) and Stanford University, will be presented later this week at the Conference on Robot Learning in Munich, Germany.
"It's really magical to have this model and all we do is feed it camera input and it can predict the robotic movements needed for surgery," said senior author Axel Krieger, an assistant professor in JHU’s Department of Mechanical Engineering.
"We believe this marks a significant step forward toward a new frontier in medical robotics," added Krieger.
He noted, "The model is so good at learning things we haven't taught it. For example, if it drops the needle, it will automatically pick it up and continue. This isn't something I taught it to do."
The research team trained the da Vinci Surgical System robot on three fundamental surgical tasks: manipulating a needle, lifting body tissue, and suturing. In each task, the robot performed as well as a human.
The da Vinci Surgical System is a robotic-assisted surgical platform with nearly 7,000 units in use worldwide. According to researchers, more than 50,000 surgeons are trained on the system, creating a large archive of data for robots to “imitate.”
The model combined imitation learning with the machine learning architecture that underpins ChatGPT but used kinematics – a language that breaks down robotic motion into math – instead of words.
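In broad strokes, that pairing – a transformer-style network (the architecture behind ChatGPT) that reads camera frames and outputs kinematic commands, trained by imitating recorded demonstrations – might look something like the Python sketch below. This is a simplified illustration only, not the team's actual model or code: the network sizes, the seven-dimensional action (a stand-in for tool position, orientation and gripper state), and the dummy training data are all placeholder assumptions.

```python
# Minimal sketch of the general idea, NOT the study's model: an imitation-
# learning policy that maps wrist-camera images to robot kinematics using a
# transformer, trained to reproduce demonstrated motions (behavior cloning).
# All names, dimensions, and data below are hypothetical placeholders.
import torch
import torch.nn as nn

class VisuomotorPolicy(nn.Module):
    def __init__(self, action_dim=7, embed_dim=256, n_heads=8, n_layers=4):
        super().__init__()
        # Small CNN standing in for a real visual encoder.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Head predicts a kinematic action per frame (pose delta + gripper).
        self.action_head = nn.Linear(embed_dim, action_dim)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W) clips from a wrist camera
        b, t = frames.shape[:2]
        tokens = self.backbone(frames.flatten(0, 1)).view(b, t, -1)
        tokens = self.transformer(tokens)
        return self.action_head(tokens)   # one predicted action per frame

# Behavior cloning: regress the expert's recorded kinematics from video.
policy = VisuomotorPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
frames = torch.randn(2, 8, 3, 96, 96)    # dummy demonstration clips
expert_actions = torch.randn(2, 8, 7)    # dummy recorded kinematics
optimizer.zero_grad()
loss = nn.functional.mse_loss(policy(frames), expert_actions)
loss.backward()
optimizer.step()
```

A policy trained this way is only as good as its demonstrations, which is why the large archive of recordings from da Vinci procedures matters.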
The researchers fed their model hundreds of videos recorded from wrist cameras placed on the arms of da Vinci robots during surgical procedures.
While the da Vinci system is widely used, it’s “notoriously” imprecise, according to researchers. The team addressed this flaw by training the model to perform relative movements rather than absolute actions, which can be inaccurate.
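The intuition behind relative movements is easy to see with a toy example. In the Python snippet below (illustrative only, not the study's code, with a made-up calibration offset), commanding an absolute target inherits the robot's full position error, while commanding a displacement from wherever the tool actually is does not.

```python
# Illustrative sketch of why relative actions help when a robot's reported
# absolute pose is imprecise. The offset is a hypothetical stand-in for the
# da Vinci's measurement error; the "learned delta" stands in for what an
# imitation-learned policy would output.
import numpy as np

true_pose = np.array([0.10, 0.20, 0.05])                       # actual tool position (m)
reported_pose = true_pose + np.array([0.004, -0.003, 0.002])   # imprecise reading
target_pose = np.array([0.12, 0.18, 0.05])                     # desired tool position

# Absolute command: "go to target_pose". The controller plans from its own
# (wrong) belief, so the calibration error lands directly on the result.
absolute_motion = target_pose - reported_pose
result_absolute = true_pose + absolute_motion

# Relative command: "move by this delta from wherever you are now".
# Only the displacement matters, so the calibration offset cancels out.
relative_motion = target_pose - true_pose
result_relative = true_pose + relative_motion

print("absolute-command error:", np.linalg.norm(result_absolute - target_pose))
print("relative-command error:", np.linalg.norm(result_relative - target_pose))
```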
"All we need is image input and then this AI system finds the right action," said lead author Ji Woong "Brian" Kim, a postdoctoral researcher at JHU.
"We find that even with a few hundred demos, the model is able to learn the procedure and generalize new environments it hasn't encountered," he said.
According to researchers, the model could be used to quickly train a robot to perform any type of surgical procedure. Using imitation learning, the team is now working to train a robot to perform entire surgeries rather than just individual tasks.