Abstract
In this study, we constructed and tested the usability of a surgical area-measuring robotic-mechanical system that does not obstruct the movements of doctors, assistants, or nurses during surgery, under two operating lights in an operating room. The study revealed that using the robotic slider to move the camera up and down did not cause excessive vibration or inconsistent depth measurements before, during, or after the movement. For example, if a doctor moves the camera out of the way to position a microscope over the surgical area for microsurgery and then brings it back, the system can accurately retain the depth-image alignment.
1 Introduction
In recent years, many groups have conducted extensive research on surgical operation navigation systems. Often, they develop navigation systems in the field of orthopedic surgery because bones have few variations. Navigation systems have also been suggested for neurosurgery and otolaryngology [1, 2], which deal with fairly immobile organs that are surrounded by bones. Surgical navigation systems focusing on the kidney and liver are also being explored; these include endoscopic and/or laparoscopic surgery systems [3, 4] and robot surgery systems [5].
We have supported doctors by designing a sensor-based surgical operation navigator for the liver and brain [6,7,8,9,10,11,12,13]. For this, it was essential to accurately measure the surgical area with a depth camera to obtain a depth image. To accomplish this, last year we built a new surgical area-measuring robotic-mechanical system and assessed the correlation between the distance the robot traveled and the change in distance of the depth image [14,15,16].
For example, this robotic-mechanical system allows a surgeon to raise multiple cameras simultaneously to insert a microscope into the surgical area. Then, when the microsurgery is finished, the surgeon can lower the cameras back into the surgical area. The robot knows the distance that the group of cameras was raised or lowered, so it can maintain consistent depth-image precision before, during, and after the cameras are moved. The system captures images from multiple angles because, during long surgeries, the surgeon's head and arms block parts of the surgical area. A single camera proved insufficient for acquiring an accurate depth image, so we increased the number of cameras and controlled their infrared pattern emissions to acquire stable and accurate depth images.
The previous robotic-mechanical system was constructed with a pole next to the surgical area, mounted directly onto the operating bed, which obstructed various tasks performed during surgery. Therefore, in this study, we mounted the pole on the operating bed by the patient's feet and, using it as a base, designed and built a new robotic-mechanical system to support the group of cameras over the surgical area. Here we evaluate its measurement precision.
In Sect. 2, we compare the old and new surgical area-measuring robotic-mechanical systems. In Sect. 3, we evaluate the surgical area-imaging robotic-mechanical system in an operating room with two operating lights. Finally, in Sect. 4, we summarize the assessment test and discuss future work.
2 Comparison of the Old and New Robotic-Mechanical Systems
First, we introduce the specifications of the Intel RealSense SR300, and then we describe the old and new surgical area-measuring robotic-mechanical systems.
2.1 Intel RealSense SR300
In this study, we used three Intel RealSense SR300 cameras to acquire serial depth images of the surgical area (Fig. 1, Table 1).
2.2 Previous Surgical Area-Imaging Robotic-Mechanical System
We previously constructed a robotic-mechanical system that gives doctors an unobstructed view of the surgical area while freely raising and lowering a group of cameras (Fig. 2(a), (b), (c)) [14,15,16]. In this system, the three cameras alternate their infrared pattern emissions, and by receiving each pattern separately, the system acquires a stable depth image [16].
However, when the system was tested, we discovered a flaw. The poles that attach the robotic-mechanical system to the operating bed would often obstruct access to the patient. Therefore, to solve this problem, we built a robotic-mechanical system with the supporting pole located by the patient’s feet.
2.3 The New Model of the Surgical Area-Imaging Robotic-Mechanical System
We constructed a robotic-mechanical system supported by a pole that does not obstruct the doctor, assistant, or nurse from accessing the patient (Fig. 3(a), (b), (c)). We then evaluated the system's vibrations when the cameras are raised or lowered, using the change in the depth image. The test results are described in the next section.
3 Assessment of the Surgical Area-Imaging Robotic-Mechanical System in the Operating Room
We attached the new surgical area-imaging robotic-mechanical system to an operating bed in an operating room with two shadowless operating lights. We first verified that the center Intel RealSense SR300 depth camera could acquire a stable depth image with the operating lights off. Then, using the robotic slider, we raised and lowered the camera by approximately 50 mm, 100 mm, and 200 mm to verify that the depth image also changed by 50 mm, 100 mm, and 200 mm, respectively. Here, the depth at each time point was the average depth over a fixed set of 30 pixels in the depth image, sampled at 30 frames per second. We then evaluated the extent of the robotic-mechanical system's vibrations from the change in depth over time. Finally, we repeated the same experiment with the operating lights on, using the same evaluation.
3.1 Change in Depth Over Time (Without Operating Lights)
First, we raised the camera 50 mm, 100 mm, and 200 mm from the starting point (310 mm), and then we lowered it 200 mm, 100 mm, and 50 mm. The resulting graphs in Fig. 4 show that the increases and decreases in the depth measured by the top camera of the surgical area-imaging robotic-mechanical system are approximately equal.
3.2 Vibrations While Raising the Robotic-Mechanical System (Without Operating Lights)
The results for when we raised the camera 50 mm, 100 mm, and 200 mm are shown in Fig. 5(a), (b), (c). There was almost no vibration when the camera was raised 50 mm or 100 mm. When it was raised 200 mm, however, it vibrated for at most 1 s, after which the depth images before and after raising the camera could still be compared reliably. Moreover, typical surgical operations last 3–4 h and laparotomies take about 20 min, so this brief vibration has very little effect on surgical operation navigation.
3.3 Vibrations While Lowering the Robotic-Mechanical System (Without Operating Lights)
The results for when we lowered the camera 50 mm, 100 mm, and 200 mm are shown in Fig. 6(a), (b), (c). There was almost no vibration when the camera was lowered 50 mm or 100 mm; however, when it was lowered 200 mm, it vibrated as it started moving. Fortunately, the vibration lasted less than 30 frames (under 1 s at 30 frames per second), which is far shorter than the duration of a typical surgery.
3.4 Change in Depth Over Time (with Two Operating Lights)
First, we raised the camera 50 mm, 100 mm, and 200 mm from the starting point (310 mm), and then we lowered it 200 mm, 100 mm, and 50 mm. The resulting graph in Fig. 7 shows that the increases and decreases in the depth measured by the top camera of the surgical area-imaging robotic-mechanical system are approximately equal. The depth is more accurate when the camera views a flat plane with the operating lights on, which suggests the operating lights have a positive effect on sensing.
3.5 Vibrations While Raising the Robotic-Mechanical System (with Operating Lights)
The graphs for when we raised the camera 50 mm, 100 mm, and 200 mm are shown in Fig. 8(a), (b), (c). There was almost no vibration when the camera was raised 50 mm or 100 mm. When it was raised 200 mm, there were vibrations, but they lasted a shorter time than when the operating lights were off, and the depth was more constant after the vibrations stopped. This suggests the operating lights have a positive effect on sensing. From these results, the depth images before and after raising the camera can still be compared reliably. Typical surgical operations last 3–4 h and laparotomies take about 20 min, so this vibration has very little effect on surgical operation navigation.
3.6 Vibrations While Lowering the Robotic-Mechanical System (with Operating Lights)
The results for when we lowered the camera 50 mm, 100 mm, and 200 mm are shown in Fig. 9(a), (b), (c). There was almost no vibration when the camera was lowered 50 mm, 100 mm, or 200 mm, and the depth is more accurate when the camera views a flat plane with the operating lights on, which suggests the illumination from the operating lights has a positive effect on sensing.
4 Concluding Remarks
In this study, we built a new surgical area-measuring robotic-mechanical system whose supporting pole, attached to the operating bed, does not obstruct access to the patient. We then evaluated the usability of the system by graphing the change in depth measured by the top camera. Our results showed that, with both operating lights on, even when the robotic slider raised or lowered the camera, there were no abnormal vibrations and the system stably acquired a depth image of the target object.
In future studies, we will examine calculating depth by averaging the depths at 30 randomly selected pixels (instead of a fixed set of 30 pixels), which we believe will cancel out errors and yield better results. After that, we plan to vary the number of frames used to obtain a more stable average depth value (and more stable individual depth values).
We will also strengthen the theoretical formulas for moving the robotic slider and cameras up and down, based on the geometric relationship between the robotic-mechanical system and its targets (various organs) and between the positions of the three cameras and those targets. In addition, we will calibrate the relationship between the movement of the robot and the change in the average depth value. We believe this will allow us to measure the target object more precisely.
References
Matsumoto, N., et al.: A minimally invasive registration method using surface template-assisted marker positioning (STAMP) for image-guided otologic surgery. Otolaryngol. Head Neck Surg. 140(1), 96–102 (2009)
Hong, J., Hashizume, M.: An effective point-based registration tool for surgical navigation. Surg. Endosc. 24(4), 944–948 (2010)
Ieiri, S., et al.: Augmented reality navigation system for laparoscopic splenectomy in children based on preoperative CT image using optical tracking device. Pediatr. Surg. Int. 28(4), 341–346 (2012)
Mahmud, N., Cohen, J., Tsourides, K., Berzin, T.M.: Computer vision and augmented reality in gastrointestinal endoscopy. Gastroenterol. Rep. (Oxf.) 3(3), 179–184 (2015). https://doi.org/10.1093/gastro/gov027. Accessed 1 Jul 2015
Pessaux, P., Diana, M., Soler, L., Piardi, T., Mutter, D., Marescaux, J.: Towards cybernetic surgery: robotic and augmented reality-assisted liver segmentectomy. Langenbecks Arch. Surg. 400(3), 381–385 (2015)
Watanabe, K., et al.: Brain shift simulation controlled by directly captured surface points. In: Proceedings of the 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Sessions: Ignite_Theme 2_Fr2, Poster Session III, Orlando, FL, USA (2016)
Yano, D., Koeda, M., Onishi, K., Noborio, H.: Development of a surgical knife attachment with proximity indicators. In: Marcus, A., Wang, W. (eds.) DUXU 2017. LNCS, vol. 10289, pp. 608–618. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-58637-3_48
Watanabe, K., Yoshida, S., Yano, D., Koeda, M., Noborio, H.: A new organ-following algorithm based on depth-depth matching and simulated annealing, and its experimental evaluation. In: Marcus, A., Wang, W. (eds.) DUXU 2017. LNCS, vol. 10289, pp. 594–607. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-58637-3_47
Sengiku, A., et al.: Augmented reality navigation system for robot-assisted laparoscopic partial nephrectomy. In: Marcus, A., Wang, W. (eds.) DUXU 2017. LNCS, vol. 10289, pp. 575–584. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-58637-3_45
Onishi, K., Miki, Y., Okuda, K., Koeda, M., Noborio, H.: A study of guidance method for AR laparoscopic surgery navigation system. In: Marcus, A., Wang, W. (eds.) DUXU 2017. LNCS, vol. 10289, pp. 556–564. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-58637-3_43
Noborio, H., et al.: Fast surgical algorithm for cutting with liver standard triangulation language format using Z-buffers in graphics processing unit. In: Fujie, M. (ed.) Computer Aided Surgery, pp. 127–140. Springer, Tokyo (2016). https://doi.org/10.1007/978-4-431-55810-1_11
Noborio, H., Aoki, K., Kunii, T., Mizushino, K.: A potential function-based scalpel navigation method that avoids blood vessel groups during excision of cancerous tissue. In: Proceedings of the 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2016), pp. 6106–6112 (2016)
Noborio, H., Kunii, T., Mizushino, K.: Comparison of GPU-based and CPU-based algorithms for determining the minimum distance between a CUSA scalper and blood vessels. In: BIOSTEC 2016, pp. 128–136. The SCITEPRESS Digital Library (2016)
Watanabe, K., et al.: A mechanical system directly attaching beside a surgical bed for measuring surgical area precisely by depth camera. In: Proceedings of the 10th MedViz Conference and the 6th Eurographics Workshop on Visual Computing for Biology and Medicine (EG VCBM), pp. 105–108 (2016)
Watanabe, K., et al.: Capturing a brain shift directly by the depth camera Kinect v2. In: Proceedings of the 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Sessions: Ignite_Theme 4_Fr1, Poster Session II, Orlando, FL, USA (2016)
Nonaka, M., Watanabe, K., Noborio, H., Kayaki, M., Mizushino, K.: Capturing a surgical area using multiple depth cameras mounted on a robotic mechanical system. In: Marcus, A., Wang, W. (eds.) DUXU 2017. LNCS, vol. 10289, pp. 540–555. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-58637-3_42
Acknowledgment
This research has been partially supported by the Collaborative Research Fund for Graduate Schools (A) of the Osaka Electro-Communication University, and a Grant-in-Aid for Scientific Research of the Ministry of Education, Culture, Sports, Science and Technology (Research Project Number: JP26289069).
© 2018 Springer International Publishing AG, part of Springer Nature
Nonaka, M., Chikayama, Y., Kayaki, M., Koeda, M., Tachibana, K., Noborio, H. (2018). A Useful Robotic-Mechanical System for Measuring a Surgical Area Without Obstructing Surgical Operations by Some Surgeon. In: Kurosu, M. (ed.) Human-Computer Interaction. Interaction in Context. HCI 2018. Lecture Notes in Computer Science, vol. 10902. Springer, Cham. https://doi.org/10.1007/978-3-319-91244-8_4