Abstract
We describe a detector of robotic instrument parts in image-guided surgery. The detector consists of a large ensemble of scale-variant, pose-dedicated, rigid appearance templates. The templates, which are equipped with pose-related keypoints and segmentation masks, allow for explicit pose estimation and segmentation of multiple end-effectors as well as fine-grained non-maximum suppression. We train the templates by grouping examples of end-effector articulations, imaged at various viewpoints, in the resulting space of instrument shapes. The proposed shape-based grouping forms tight clusters of pose-specific end-effector appearance. Experimental results show that the proposed method can effectively estimate the end-effector pose and delineate its boundary while being trained with moderately sized data clusters. We also show that matching such a large ensemble of templates takes less than one second on commodity hardware.
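The shape-based grouping step described in the abstract can be pictured with a short sketch: annotated keypoint configurations of end-effectors are normalized for translation and scale, flattened into shape vectors, and clustered so that each cluster gathers examples of one pose-specific appearance from which a rigid template could be trained. This is a minimal illustration under assumed choices (NumPy/scikit-learn, five keypoints per example, k-means with an arbitrary cluster count, translation/scale normalization); it is not the authors' implementation and does not use the paper's actual parameters.

```python
# Minimal sketch of shape-based grouping of end-effector keypoint
# configurations into pose-specific clusters. All concrete choices
# (keypoint count, k-means, cluster count) are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans


def normalize_shape(kpts):
    """Remove translation and scale from a (K, 2) keypoint array."""
    centered = kpts - kpts.mean(axis=0)
    scale = np.linalg.norm(centered) + 1e-8
    return centered / scale


def group_by_shape(keypoint_sets, n_clusters=20, seed=0):
    """Cluster keypoint configurations in shape space.

    keypoint_sets: list of (K, 2) arrays, one per training example.
    Returns one cluster label per example; each cluster would supply
    the training examples for one pose-specific rigid template.
    """
    shapes = np.stack([normalize_shape(k).ravel() for k in keypoint_sets])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    return km.fit_predict(shapes)


# Toy usage: 500 synthetic examples with 5 keypoints each.
rng = np.random.default_rng(0)
examples = [rng.normal(size=(5, 2)) for _ in range(500)]
labels = group_by_shape(examples, n_clusters=20)
print(np.bincount(labels))  # cluster sizes, i.e. per-template training sets
```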
Citations
- CrossRef: 0
- Web of Science: 0
- Scopus: 0
Authors (2): D. Węsierski, S. Cygert
Full text: not available in the portal
Details
- Category: Conference activity
- Type: conference proceedings indexed in Web of Science
- Title of issue: Computer Assisted and Robotic Endoscopy and Clinical Image-Based Procedures, pages 3-15
- Language: English
- Publication year: 2017
- Bibliographic description: Węsierski D., Cygert S.: Shape-Based Pose Estimation of Robotic Surgical Instruments. In: Computer Assisted and Robotic Endoscopy and Clinical Image-Based Procedures, 2017, pp. 3-15.
- DOI: 10.1007/978-3-319-67543-5_1
- Verified by: Gdańsk University of Technology