
Detecting Semantics in Endoscopic Videos with Deep Neural Networks

These are the slides of my talk given at the 4th European Congress on Endometriosis (EEC2018) in Vienna, Austria, on November 23, 2018.

  1. Detecting Semantics in Endoscopic Videos with Deep Neural Networks
     Klaus Schöffmann, Institut für Informationstechnologie, Klagenfurt University, Austria
     Jörg Keckstein, Medizinische Fakultät, Ulm University, Germany
  2. Disclosures
     • Grant/Research Support
       • European Regional Development Fund (ERDF) – 20214
       • Carinthian Economic Promotion Fund (KWF) and Lakeside Labs 3520/26336/38165
  3. Multitude of Videos in Medicine
     • There are currently many ways of delivering video signals to the operating room in order to perform a surgery or inspection:
       • Endoscopes in different endoscopic surgeries
       • Capsules for minimally invasive inspections of the GI tract
       • Microscopes with mounted cameras in case of microscopic surgery
       • Video streams for robot-assisted surgery
  4. Video Streams/Recordings – Added Value?
     Post-surgical use
     • Documentation
     • Education and training
     • Treatment planning
     • Case re-visitation
     • Legal …
     Problems
     • Manual analysis is time-consuming and tedious
     • Multiple hours-long surgeries per day lead to huge video archives
     What is needed
     • Capable systems aiding surgeons during or after surgery
     • Finding content of interest and making archives manageable
     How to proceed
     • Automatic content analysis: detecting relevant scenes (e.g. suturing, cutting, injection, coagulation)
     • Machine learning of relevant scenes from many examples (see the sketch below)
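To make "automatic content analysis" concrete, here is a minimal sketch of scanning a recorded surgery video with an already trained frame classifier and logging the time points of relevant scenes. The weights file, the class list, and the one-frame-per-second sampling are illustrative assumptions, not details from the talk.

```python
# Sketch only: tag relevant scenes in a recorded surgery video with a trained
# frame classifier. Model file, class names, and sampling rate are assumptions.
import cv2
import torch
import torchvision.transforms as T
from torchvision.models import resnet50

CLASSES = ["irrelevant", "suturing", "cutting", "injection", "coagulation"]  # assumed labels

# Assumed: a ResNet-50 fine-tuned on surgical-action frames, stored locally.
model = resnet50(num_classes=len(CLASSES))
model.load_state_dict(torch.load("surgical_action_resnet50.pth", map_location="cpu"))
model.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

cap = cv2.VideoCapture("laparoscopy_recording.mp4")  # assumed input file
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
frame_idx = 0
with torch.no_grad():
    while True:
        ok, frame_bgr = cap.read()
        if not ok:
            break
        if frame_idx % int(fps) == 0:  # classify roughly one frame per second
            rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
            logits = model(preprocess(rgb).unsqueeze(0))
            label = CLASSES[int(logits.argmax(dim=1))]
            if label != "irrelevant":
                print(f"{frame_idx / fps:7.1f}s  {label}")
        frame_idx += 1
cap.release()
```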
  5. What semantics may we want to find?
     Starting from an image/video archive:
     • Instruments
     • Anatomy
     • Pathology
     • Surgical actions
     • Irrelevant scenes
     • Technical errors
     • Surgery phases (e.g. 1. Access, 2. Dissection, 3. Clipping, 4. Cutting, 5. Separation)
  6. Recognizing Surgical Actions in Laparoscopic Gynecology
     Action classes (31,000 images from 111 surgeries): Dissection (blunt), Coagulation, Cutting (cold), Cutting (high frequency), Sling (Hysterectomy), Injection, Suturing, Suction & Irrigation. A minimal training sketch follows below.
     Reference: Andreas Leibetseder, Stefan Petscharnig, Manfred Jürgen Primus, Sabrina Kletz, Bernd Münzer, Klaus Schoeffmann, and Jörg Keckstein. 2018. LapGyn4: a dataset for 4 automatic content analysis problems in the domain of laparoscopic gynecology. In Proceedings of the 9th ACM Multimedia Systems Conference (MMSys '18). ACM, New York, NY, USA, 357-362.
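The cited work uses convolutional neural networks for this classification task. As a hedged sketch of what such training could look like (not the published setup), the snippet below fine-tunes an ImageNet-pretrained ResNet on action images arranged one folder per class; the directory layout, backbone, and hyperparameters are assumptions.

```python
# Sketch only: fine-tune a pretrained CNN on surgical-action frames arranged as
# one folder per class (e.g. lapgyn4/actions/train/suturing/*.jpg). Paths and
# hyperparameters are illustrative assumptions, not the published configuration.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("lapgyn4/actions/train", transform=train_tf)  # assumed layout
train_loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # one output per action class
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

model.train()
for epoch in range(10):
    running_loss = 0.0
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item() * images.size(0)
    print(f"epoch {epoch + 1}: loss {running_loss / len(train_set):.4f}")
```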
  7. Demo: Surgical Action Recognition
     Reference: Petscharnig, S., & Schöffmann, K. (2017). Learning laparoscopic video shot classification for gynecological surgery. Multimedia Tools and Applications, 1-19.
  8. Quantitative Evaluation: LapGyn4 (4-part Dataset)
     Surgical Actions (~31K samples) and Anatomical Structures (~3K samples)
     Very high performance using standard modern machine learning:
     • >95% recognition accuracy
     • >91% recall
     (A minimal sketch of how these metrics are computed follows below.)
     Reference: Leibetseder et al. 2018, LapGyn4 (MMSys '18); full citation on slide 6.
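For reference, accuracy is the fraction of correctly classified test samples, and recall is usually reported per class or macro-averaged. A minimal sketch of computing both from test-set predictions, using scikit-learn purely for convenience and tiny placeholder label lists, is shown below.

```python
# Sketch only: overall accuracy and macro-averaged recall on a held-out test set.
# In practice y_true/y_pred would come from running the trained model on test
# images; the hard-coded lists here are placeholders.
from sklearn.metrics import accuracy_score, classification_report, recall_score

y_true = ["suturing", "cutting", "coagulation", "suturing", "injection"]
y_pred = ["suturing", "cutting", "suturing", "suturing", "injection"]

print("accuracy:", accuracy_score(y_true, y_pred))                 # fraction of correct predictions
print("recall  :", recall_score(y_true, y_pred, average="macro"))  # mean of per-class recalls
print(classification_report(y_true, y_pred, zero_division=0))      # per-class precision/recall/F1
```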
  9. Quantitative Evaluation: LapGyn4 (4-part Dataset), continued
     Instrument Count (~22K samples): still good performance
     • >92% accuracy
     • >84% recall
     • Best at classifying zero instruments (accuracy: 96%)
     Suturing on Anatomy (~1K samples): comparatively poor performance
     • 80% accuracy
     • 62% recall
     • Not enough samples (~1,000 images)
     • Visual context perhaps too similar
     (A per-class evaluation sketch follows below.)
     Reference: Leibetseder et al. 2018, LapGyn4 (MMSys '18); full citation on slide 6.
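Per-class results such as the 96% for zero instruments can be read off the diagonal of a confusion matrix. A minimal sketch with placeholder predictions for an instrument-count task (classes 0-3 assumed) follows.

```python
# Sketch only: per-class recall ("accuracy per class") from a confusion matrix,
# e.g. for the instrument-count task. The labels below are placeholder data.
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 1, 1, 2, 2, 3]  # true number of visible instruments
y_pred = [0, 0, 0, 1, 2, 2, 1, 3]  # model predictions

cm = confusion_matrix(y_true, y_pred, labels=[0, 1, 2, 3])
per_class = cm.diagonal() / cm.sum(axis=1)  # correct predictions / samples of that class
for cls, acc in zip([0, 1, 2, 3], per_class):
    print(f"{cls} instruments: {acc:.0%} classified correctly")
```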
  10. Towards Endometriosis Recognition – Former Approach
     • Tool for supporting experts in annotating endometriosis
     • Annotating affected regions via the ENZIAN/rASRM score
     • Problems:
       • Machine learning would need too many examples for every single ENZIAN/rASRM category
       • Too much effort for the few expert annotators
     Reference: A. Leibetseder, B. Münzer, K. Schoeffmann and J. Keckstein, "Endometriosis Annotation in Endoscopic Videos," 2017 IEEE International Symposium on Multimedia (ISM), Taichung, 2017, pp. 364-365.
  11. Future Work: Region-based Endometriosis Detection
     • Purpose: supporting surgeons by suggesting suspicious regions
     • Goal: localized detection using 3 categories of disease: adhesions, endometriosis suspicion, endometriosis
     • Methodology:
       • Creating an endometriosis/adhesion database (expert annotations)
       • Training and evaluating suitable machine learning models (e.g. R-CNN); the resulting prediction model labels regions as no endometriosis, adhesions, endometriosis suspicion, or endometriosis
     (A rough detector sketch, under stated assumptions, follows below.)
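The slide names R-CNN-style models as a possible methodology. Below is a rough sketch of how a region-based detector for the three lesion categories (plus background) could be set up with torchvision's Faster R-CNN; the class list and pretrained starting point are assumptions about this future work, and the model would still have to be trained on the planned expert-annotated database.

```python
# Sketch only: a region-based detector for suspicious regions, built from
# torchvision's Faster R-CNN with a new box head for 3 lesion categories +
# background. It is untrained on medical data here, so its outputs are
# meaningless until fine-tuned on the planned endometriosis/adhesion database.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

CLASSES = ["background", "adhesions", "endometriosis_suspicion", "endometriosis"]

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=len(CLASSES))

# Inference on a single (here random) frame: the model returns bounding boxes,
# class labels, and confidence scores for suggested regions.
model.eval()
frame = torch.rand(3, 480, 854)  # stand-in for a preprocessed laparoscopic frame
with torch.no_grad():
    detections = model([frame])[0]
for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.5:
        print(CLASSES[int(label)], [round(v, 1) for v in box.tolist()], float(score))
```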
  12. Automatic Smoke Detection (Real-Time Support)
     • Detection quality: machine learning is more accurate (acc: 94%) than saturation analysis (acc: 81%)
     • Runtime: saturation analysis is faster (12 ms) than machine learning (150 ms)
     (A hedged saturation-analysis sketch follows below.)
     References:
     • Andreas Leibetseder, Manfred J. Primus, Stefan Petscharnig, and Klaus Schoeffmann. "Image-based Smoke Detection in Laparoscopic Videos". In Proceedings of Computer Assisted and Robotic Endoscopy and Clinical Image-Based Procedures: 4th International Workshop, CARE 2017, and 6th International Workshop, CLIP 2017, held in conjunction with MICCAI 2017, Quebec City, QC, Canada, September 14, 2017, pp. 70-87.
     • Leibetseder, A., Primus, M.J., Schoeffmann, K. (2018). Automatic Smoke Classification in Endoscopic Video. In: Schoeffmann, K. et al. (eds) MultiMedia Modeling. MMM 2018. Lecture Notes in Computer Science, vol 10705. Springer, Cham.
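The saturation-analysis baseline exploits the fact that smoke desaturates the endoscopic image. As a hedged illustration (the thresholds below are assumptions, not the values from the cited papers), a simple per-frame check could look like this:

```python
# Sketch only: naive saturation-based smoke indicator for endoscopic frames.
# Smoke tends to desaturate the image, so a large share of low-saturation
# pixels suggests smoke. Threshold values are illustrative assumptions.
import cv2
import numpy as np

def looks_smoky(frame_bgr, sat_threshold=60, pixel_ratio=0.5):
    """Return True if many pixels have low color saturation."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    saturation = hsv[:, :, 1]  # S channel, 0..255
    low_sat_share = float(np.mean(saturation < sat_threshold))
    return low_sat_share > pixel_ratio

cap = cv2.VideoCapture("laparoscopy_recording.mp4")  # assumed input file
ok, frame = cap.read()
if ok:
    print("smoke suspected" if looks_smoky(frame) else "no smoke detected")
cap.release()
```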
  13. Conclusions
     • High potential in full video documentation
     • Enables and aids additional usage scenarios:
       • Post-operative and real-time support
       • Facilitating education and training
       • Technical skill assessment
       • Improving case re-visitations
       • Aiding medical staff during interventions
     • Needs a system to organize and analyze videos and to detect important semantics/scenes
     • Machine learning can help a lot with this, but it also requires:
       • Many and diverse training examples
       • Computational power
       • Experts for creating correct training examples
  14. Thank You! Q/A
     Assoc.-Prof. PD Dr. Klaus Schöffmann
     Institut für Informationstechnologie, Alpen-Adria-Universität Klagenfurt
     www.EndoscopicVideo.com
