Document Type

Article

Publication Date

7-31-2021

Abstract

Currently, there are numerous obstacles to performing palpation during laparoscopic surgery. The laparoscopic interface does not allow anything other than the tools inserted through the trocars to access the patient's body. Palpation is usually done with the surgeon's hands to detect lumps and certain anomalies underneath the skin, muscle, or tissue. It can be a useful technique for augmenting surgical decision-making during laparoscopic surgery, especially in operations involving cancerous tumors. Previous research has demonstrated the use of tactile and mechanical sensors placed at the end-effectors for laparoscopic palpation. In this study, a visual guidance system is proposed for use during laparoscopic palpation, specifically engineered to be part of a motion-based laparoscopic palpation system. In particular, the YOLACT++ model is used to localize a target organ, the gallbladder, on a custom dataset of laparoscopic cholecystectomy. Our experiments showed an AP score of 90.10 for bounding boxes and 87.20 for masks. In terms of speed, the model achieved a playback rate of approximately 20 fps, which translates to approximately 48 ms of video latency. The palpation path guides are computer-generated guidelines drawn within the identified organ, and they show potential for helping the surgeon perform the palpation more accurately. Overall, this study demonstrates the potential of deep learning-based real-time image processing models to complete our motion-based laparoscopic palpation system and to realize the promising role of artificial intelligence in surgical decision-making. A visual presentation of our results can be seen on our project page: https://kerwincaballas.github.io/lap-palpation.
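
As an illustration only, not the authors' released code, the sketch below shows the kind of per-frame pipeline the abstract describes: segment each laparoscopic video frame, overlay computer-generated palpation path guides clipped to the predicted organ mask, and report throughput. The segment_frame function is a hypothetical stub standing in for trained YOLACT++ inference, and the input file name is assumed.

import time

import cv2
import numpy as np


def segment_frame(frame):
    # Placeholder for YOLACT++ inference: a fixed ellipse stands in for the
    # predicted gallbladder mask so the sketch runs without model weights.
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    h, w = mask.shape
    cv2.ellipse(mask, (w // 2, h // 2), (w // 6, h // 8), 0, 0, 360, 255, -1)
    return mask


def draw_path_guides(frame, mask, spacing=40):
    # Overlay evenly spaced horizontal guide lines, clipped to the organ mask.
    guides = np.zeros_like(mask)
    guides[::spacing, :] = 255              # candidate guide rows
    guides = cv2.bitwise_and(guides, mask)  # keep only pixels inside the organ
    frame[guides > 0] = (0, 255, 0)         # paint the guides in green
    return frame


def main():
    cap = cv2.VideoCapture("cholecystectomy.mp4")  # assumed input video
    frames, start = 0, time.perf_counter()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = segment_frame(frame)
        frame = draw_path_guides(frame, mask)
        frames += 1
        cv2.imshow("palpation guidance", frame)
        if cv2.waitKey(1) == 27:  # Esc to quit
            break
    elapsed = time.perf_counter() - start
    if frames:
        print(f"{frames / elapsed:.1f} fps, ~{1000 * elapsed / frames:.0f} ms/frame")
    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    main()

In a real deployment, the stub would be replaced by YOLACT++ mask predictions, and the reported frames-per-second figure corresponds to the playback rate and per-frame latency discussed above.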
