Computer Vision Algorithm Studies Laparoscopic Procedures to Understand What’s Going On
Laparoscopic surgeries are often automatically recorded from the point of view of the endoscope’s lens, thanks to the recording equipment built into many commercial endoscopic systems. What isn’t easy is reviewing all those hours of footage to find scenes that may be useful for training clinicians or for improving laparoscopy-related equipment.
Now researchers at MIT have reported at the International Conference on Robotics and Automation in Singapore on a new video processing system that can, on its own, identify different stages of laparoscopic surgeries, potentially allowing researchers to quickly find relevant scenes that they can easily study.
The computer vision algorithm powering the system can spot when a biopsy is performed, a wound is irrigated, or tissue is stapled, among other activities. Additional actions can also be programmed into the system for it to find among the recordings.
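The article does not detail the system's internals, but the indexing it describes amounts to turning per-frame phase predictions into a searchable segment index. Below is a minimal, hypothetical sketch of that last step: given one phase label per frame (as a frame-level classifier might emit), contiguous runs are collapsed into (phase, start frame, end frame) entries. The phase names and function are illustrative assumptions, not the MIT implementation.

```python
from itertools import groupby

def segment_video(frame_labels):
    """Group per-frame phase predictions into (phase, start, end) segments.

    `frame_labels` is a list of phase names, one per frame.
    Returns an index of contiguous segments so a reviewer can jump
    straight to, e.g., the stapling step without scrubbing the tape.
    """
    segments = []
    start = 0
    for phase, run in groupby(frame_labels):
        length = len(list(run))  # number of consecutive frames with this phase
        segments.append((phase, start, start + length - 1))
        start += length
    return segments

# Example: three frames labeled irrigation, then two labeled stapling.
labels = ["irrigation"] * 3 + ["stapling"] * 2
print(segment_video(labels))
# [('irrigation', 0, 2), ('stapling', 3, 4)]
```

In practice the raw classifier output would be noisy, so a real pipeline would smooth the labels (for example, with a temporal model or majority filter) before segmenting.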
While initially developed to analyze pre-recorded video, the same software may one day help surgeons intraoperatively, by recognizing what steps are being taken and warning when something doesn’t look normal. It may also help suggest when and which instruments to use and generally serve as an additional vigilant eye over the course of the procedure.
“Surgeons are thrilled by all the features that our work enables,” said Daniela Rus, Professor of Electrical Engineering and Computer Science and senior author on the paper, in an MIT announcement. “They are thrilled to have the surgical tapes automatically segmented and indexed, because now those tapes can be used for training. If we want to learn about phase two of a surgery, we know exactly where to go to look for that segment. We don’t have to watch every minute before that. The other thing that is extraordinarily exciting to the surgeons is that in the future, we should be able to monitor the progression of the operation in real-time.”