Research Article · Public Access
DOI: 10.1145/2968220.2968235

Eliciting Tacit Expertise in 3D Volume Segmentation

Published: 24 September 2016

Abstract

The output of 3D volume segmentation is crucial to a wide range of endeavors. Producing accurate segmentations often proves both inefficient and challenging, partly because of poor imaging data quality (low contrast and resolution), and partly because of ambiguity in the data that can only be resolved with higher-level knowledge of the structure and the context in which it resides. Automatic and semi-automatic approaches are improving, but in many cases they still fail or require substantial manual clean-up or intervention. Expert manual segmentation and review therefore remains the gold standard for many applications. Unfortunately, existing tools (both custom-made and commercial) are often designed around the underlying algorithm rather than around the best way to express higher-level intention. Our goal is to analyze manual (or semi-automatic) segmentation to gain a better understanding of both low-level behavior (perceptual tasks and actions) and high-level decision making. This understanding can be used to produce segmentation tools that are more accurate, efficient, and easier to use. Questioning or observation alone is insufficient to capture this information, so we use a hybrid capture protocol that blends observation, surveys, and eye tracking. We then developed and validated data coding schemes capable of discerning both low-level actions and overall task structures.
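To illustrate the kind of analysis such a coding scheme enables, here is a minimal sketch of tallying coded low-level actions from an observation session. The action codes and event data are purely hypothetical examples, not the authors' actual scheme:

```python
# Hypothetical sketch: summarizing a stream of coded low-level segmentation
# actions. The action names ("inspect_slice", "draw_contour", etc.) are
# illustrative assumptions, not the coding scheme developed in the paper.

from collections import Counter

# Each event: (timestamp in seconds, action code assigned by the coder).
events = [
    (0.0, "inspect_slice"),
    (1.2, "draw_contour"),
    (3.5, "draw_contour"),
    (5.1, "scroll_volume"),
    (6.0, "inspect_slice"),
    (7.4, "edit_contour"),
]

def summarize_actions(events):
    """Count how often each low-level action code occurs in a session."""
    return Counter(code for _, code in events)

summary = summarize_actions(events)
```

Frequency summaries like this are a common first step before looking at higher-level task structure, i.e., how runs of low-level actions group into larger decision-making phases.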


Cited By

  • Developing and Validating a Computer-Based Training Tool for Inferring 2D Cross-Sections of Complex 3D Structures. Human Factors: The Journal of the Human Factors and Ergonomics Society, 65(3):508-528, 2021. DOI: 10.1177/00187208211018110
  • Inferring cross-sections of 3D objects. Proceedings of the ACM Symposium on Applied Perception, pages 1-4, 2017. DOI: 10.1145/3119881.3119888


Published In

VINCI '16: Proceedings of the 9th International Symposium on Visual Information Communication and Interaction
September 2016, 173 pages
ISBN: 9781450341493
DOI: 10.1145/2968220

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. 3D volume segmentation
  2. conceptual framework

Qualifiers

  • Research-article
  • Research
  • Refereed limited


Acceptance Rates

VINCI '16 Paper Acceptance Rate: 14 of 42 submissions, 33%
Overall Acceptance Rate: 71 of 193 submissions, 37%

