
Video-based fully automatic assessment of open surgery suturing skills

  • Original Article
International Journal of Computer Assisted Radiology and Surgery

Abstract

Purpose

The goal of this study was to develop a new, reliable open surgery suturing simulation system for training medical students in resource-limited settings or at home. Specifically, we developed an algorithm that localizes tools and hands and identifies the interactions between them from simple webcam video, and computes motion metrics for the assessment of surgical skill.

Methods

Twenty-five participants performed multiple suturing tasks using our simulator. The YOLO network was modified into a multi-task network for tool localization and tool–hand interaction detection. This was accomplished by splitting the YOLO detection heads so that they support both tasks with minimal added run-time. Furthermore, motion metrics were calculated from the system's output. These included traditional metrics, such as completion time and path length, as well as new metrics assessing the technique participants use for holding the tools.
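The key idea, sharing one backbone between a localization head and an interaction-classification head, can be sketched as follows. This is an illustrative toy, not the authors' actual architecture: the feature extractor, head shapes, and interaction classes are all hypothetical stand-ins.

```python
# Sketch of splitting a detector into two task heads over one shared backbone.
# All names, shapes, and classes are illustrative, not the paper's architecture.
import numpy as np

rng = np.random.default_rng(0)

def backbone(image):
    """Stand-in for the shared YOLO feature extractor (run once per frame)."""
    return image.mean(axis=-1).reshape(-1)  # flatten to a 64-d feature vector

# Two task-specific heads reuse the same features, so the second task adds
# only one extra matrix multiply, not a second full network.
W_loc = rng.standard_normal((4, 64))  # localization head -> box (x, y, w, h)
W_int = rng.standard_normal((3, 64))  # interaction head -> 3 hypothetical classes

def predict(image):
    feats = backbone(image)                 # shared computation
    box = W_loc @ feats                     # task 1: tool localization
    interaction = int(np.argmax(W_int @ feats))  # task 2: tool-hand interaction
    return box, interaction

frame = rng.random((8, 8, 3))  # dummy 8x8 RGB "webcam" frame
box, interaction = predict(frame)
print(box.shape, interaction in (0, 1, 2))  # → (4,) True
```

The design choice mirrored here is the paper's stated trade-off: near-duplicate accuracy of two separate networks at close to the cost of one, because the expensive backbone pass is shared.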

Results

The dual-task network performed comparably to two separate networks, while its computational load was only slightly higher than that of a single network. In addition, the motion metrics showed significant differences between experts and novices.
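The traditional motion metrics mentioned above (completion time and path length) can be computed directly from a tracked tool-center trajectory. A minimal sketch, assuming a hypothetical per-frame (x, y) trajectory and frame rate:

```python
# Illustrative computation of completion time and path length from tracked
# tool-center positions. Trajectory values and frame rate are hypothetical.
import math

def motion_metrics(trajectory, fps=30.0):
    """trajectory: list of (x, y) tool positions, one per video frame."""
    # Path length: sum of Euclidean distances between consecutive positions.
    path_length = sum(
        math.dist(p, q) for p, q in zip(trajectory, trajectory[1:])
    )
    # Completion time: elapsed seconds from first to last frame.
    duration = (len(trajectory) - 1) / fps
    return duration, path_length

# Two straight 3-4-5 steps: path length 10, duration 2/30 s.
duration, path = motion_metrics([(0, 0), (3, 4), (6, 8)])
print(round(path, 1), round(duration, 3))  # → 10.0 0.067
```

In a skill-assessment setting, shorter paths and times typically distinguish experts from novices, which is consistent with the group differences reported here.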

Conclusion

While video capture is an essential part of minimally invasive surgery, it is not an integral component of open surgery. Thus, new algorithms focusing on the unique challenges that open surgery videos present are required. In this study, a dual-task network was developed to solve both a localization task and a hand–tool interaction task. The dual network may easily be expanded to a multi-task network, which may be useful for images with multiple layers and for evaluating the interactions between these layers.



Acknowledgements

Funding for this study was provided by the National Institutes of Health grant 1F32EB017084-01 entitled “Automated Performance Assessment System: A New Era in Surgical Skills Assessment.”

Author information

Corresponding author

Correspondence to Adam Goldbraikh.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

Study approval was granted by the University of Wisconsin Health Sciences Institutional Review Board, and written informed consent was obtained from all participants.

Informed consent

Informed consent was obtained from all individual participants included in the study.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (avi 183851 KB)


About this article

Cite this article

Goldbraikh, A., D’Angelo, AL., Pugh, C.M. et al. Video-based fully automatic assessment of open surgery suturing skills. Int J CARS 17, 437–448 (2022). https://doi.org/10.1007/s11548-022-02559-6
