Multi-Cue-Based Circle Detection and Its Application to Robust Extrinsic Calibration of RGB-D Cameras
Figure 1. Proposed method for extrinsic calibration of multiple RGB-D cameras. The green circles represent detected sphere regions. Refer to the text for more detail.
Figure 2. Color-based pixel clustering. The black circles represent pixel colors. The most distant color from a_m is a_d^(1), and the pixels are divided into two disjoint sets S_d^(1) and S_m^(1), the latter being the union of S_m^(3), S_d^(3), and S_d^(2). The black dotted lines represent the boundaries between different sets of pixels. Refer to the text for more detail. Best viewed in color.
Figure 3. Color-based pixel clustering example. (a) Input color image. (b) S_m^(1) and S_d^(1), colored in red and blue, respectively. (c) S_m^(2), S_d^(1), and S_d^(2), colored in red, blue, and green, respectively. (d) S_m^(3), S_d^(1), S_d^(2), and S_d^(3), colored in red, blue, green, and yellow, respectively. (e) S_m^(4), S_d^(1), S_d^(2), S_d^(3), and S_d^(4), colored in red, blue, green, yellow, and cyan, respectively.
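The clustering shown in Figures 2 and 3 proceeds by repeated two-way splits: at each level, the color farthest from the mean sphere color a_m seeds a new set S_d^(k), and the remaining pixels S_m^(k) are split again at the next level. Below is a minimal sketch of such a splitting loop, not the authors' implementation; the two-channel chromaticity representation and the assignment rule (nearest of a_m and a_d^(k)) are assumptions made for illustration, while the learned mean color (165.79, 146.02) is taken from the parameter table in Section 3.6.

```python
import numpy as np

def hierarchical_color_split(colors, mean_color, levels=4):
    """Repeated two-way splits of an (N, 2) array of pixel chromaticities.

    At each level, the color farthest from mean_color seeds a 'distant' set
    S_d^(k) (pixels closer to the seed than to mean_color); the remaining
    pixels form S_m^(k) and are split again. Returns index arrays
    [S_d^(1), ..., S_d^(levels), S_m^(levels)].
    """
    mean_color = np.asarray(mean_color, dtype=float)
    indices = np.arange(len(colors))
    clusters = []
    for _ in range(levels):
        if len(indices) == 0:
            break
        dist_to_mean = np.linalg.norm(colors[indices] - mean_color, axis=1)
        seed = colors[indices[np.argmax(dist_to_mean)]]        # a_d^(k): farthest from a_m
        dist_to_seed = np.linalg.norm(colors[indices] - seed, axis=1)
        to_seed = dist_to_seed < dist_to_mean                  # nearer to a_d^(k) than to a_m
        clusters.append(indices[to_seed])                      # S_d^(k)
        indices = indices[~to_seed]                            # S_m^(k), split at next level
    clusters.append(indices)                                   # final S_m^(levels)
    return clusters

# toy usage with the learned mean sphere color listed in Section 3.6
rng = np.random.default_rng(0)
pixels = rng.uniform(0, 255, size=(10000, 2)).astype(np.float32)
sets = hierarchical_color_split(pixels, mean_color=(165.79, 146.02), levels=4)
print([len(s) for s in sets])
```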
Figure 4. Edge pixels and connected components. (a) Edge image of Figure 3a. (b) Connected regions detected in S_m^(1) and S_d^(1), differently color-coded. (c) Connected regions detected in S_m^(2) and S_d^(2). (d) Connected regions detected in S_m^(3) and S_d^(3). (e) Connected regions detected in S_m^(4) and S_d^(4).
Figure 5. Multi-cue-based circle detection. Edge pixels in Figure 4a are drawn in black on the connected regions in Figure 4d. The blue dashed circles represent the search range from r − τ_MC to r + τ_MC for region boundary pixels. Refer to the text for more detail.
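Figure 5 illustrates how a circle hypothesis can be verified against the region cue: region boundary pixels are searched within an annulus from r − τ_MC to r + τ_MC around the hypothesized center. The snippet below is only a hedged sketch of such a support count; the actual cues and their combination are defined in the text, so the function and its default τ_MC value are illustrative.

```python
import numpy as np

def annulus_support(boundary_xy, center, radius, tau_mc=10.0):
    """Count boundary pixels whose distance to the hypothesized center lies in
    [radius - tau_mc, radius + tau_mc] (the blue dashed annulus in Figure 5)."""
    d = np.linalg.norm(boundary_xy - np.asarray(center, dtype=float), axis=1)
    return int((np.abs(d - radius) <= tau_mc).sum())

# toy usage: noisy points on a circle of radius 50 centered at (100, 100)
rng = np.random.default_rng(1)
angles = rng.uniform(0, 2 * np.pi, 500)
pts = np.stack([100 + 50 * np.cos(angles), 100 + 50 * np.sin(angles)], axis=1)
pts += rng.normal(0, 2, pts.shape)
print(annulus_support(pts, (100, 100), 50.0))
```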
Figure 6. Circle detection accuracy (styrofoam ball set). The error is the distance from the best circle center detected by a method to its manually recorded ground-truth center. For each method, the errors are sorted in ascending order, so the image indices do not match across methods. (b) is a rescaled version of (a), showing the number of detected circles with low errors. Best viewed in color. CHT, Circular Hough Transform.
Figure 7. Top four circles detected by the methods (styrofoam ball set). (a) Proposed method. (b) Kwon et al. [21]. (c) CHT [36]. (d) EDCircles [44]. (e) Color probability images [21]. In (a–d), the best circles are drawn in yellow and the remaining circles in green. Some circles in (a,b) have similar centers and radii, so fewer than four circles may appear visible. Best viewed in color.
Figure 8. Circle detection accuracy. (a) Gym ball set. (b) Basketball set. The error is the distance from the best circle center detected by a method to its manually recorded ground-truth center. For each method, the errors are sorted in ascending order, so the image indices do not match across methods. Best viewed in color.
Figure 9. Top four circles detected by the methods (gym ball set). (a) Proposed method. (b) Kwon et al. [21]. (c) CHT [36]. (d) EDCircles [44]. In (a–d), the best circles are drawn in yellow and the remaining circles in green. Some circles in (a,b) have similar centers and radii, so fewer than four circles may appear visible. Best viewed in color.
Figure 10. Top four circles detected by the methods (basketball set). (a) Proposed method. (b) Kwon et al. [21]. (c) CHT [36]. (d) EDCircles [44]. In (a–d), the best circles are drawn in yellow and the remaining circles in green. Some circles in (a,b) have similar centers and radii, so fewer than four circles may appear visible. Best viewed in color.
Figure 11. Examples of aligned and undistorted color and depth image pairs acquired by three Kinect v2 cameras. (a–f) The static set, collected by capturing still images of a static sphere. (g–l) The dynamic set, collected by capturing videos of a sphere carried by a person. The green empty circles represent sphere regions detected by the proposed method.
Figure 12. Our Kinect v2 camera setup.
Figure 13. (a,b) Examples of misdetected circles. In (c), the circle has been accurately detected; in (d), however, the circular region does not match the sphere region in the corresponding depth image because of the asynchronous nature of the Kinect v2 camera.
Figure 14. Aligned sphere centers (static set, N = 95). (a) Proposed method (pairwise). (b) Proposed method (bundle adjustment, BA). (c) Su et al. [20]. Refer to the text for more detail. Best viewed in color.
Figure 15. Aligned sphere centers (dynamic set, N = 164). (a) Proposed method (pairwise). (b) Proposed method (BA). (c) Su et al. [20]. (d) Su et al. [20] (inlier set). Refer to the text for more detail. Best viewed in color.
Figure 16. Number of inliers, i.e., the number of triples of corresponding sphere centers whose maximum pairwise distance is less than the error threshold. (a) Static set (N = 95). (b) Dynamic set (N = 164). Refer to the text for more detail. Best viewed in color.
Figure 17. Number of inliers according to the number of sphere centers N_F used for extrinsic calibration (proposed method). The inliers are triples of corresponding sphere centers whose maximum pairwise distance is less than the error threshold. (a) Static set (N = 95). (b) Dynamic set (N = 164). Refer to the text for more detail. Best viewed in color.
Figure 18. Rendering results (dynamic set). (a) Overview of the environment; 3D point clouds from different cameras are drawn in different colors. (b) Proposed method (BA). (c) Su et al. [20]. (d) Su et al. [20] (inlier set). (e) Proposed method (pairwise). (f) Proposed method (BA). (g) Su et al. [20]. (h) Su et al. [20] (inlier set). Refer to the text for more detail. Best viewed in color.
Figure 19. Examples of aligned and undistorted color and depth image pairs of the static basketball set. The green empty circles represent sphere regions detected by the proposed method. (a–c) Corresponding color images and (d–f) their corresponding depth images. (g–i) Another triple of corresponding color images and (j–l) their corresponding depth images.
Figure 20. Aligned sphere centers and number of inliers (static basketball set, N = 101). (a) Proposed method (BA). (b) Su et al. [20]. (c) Number of inliers. Refer to the text for more detail. Best viewed in color.
Figure 21. Rendering results (static basketball set). (a,c) Proposed method (BA). (b,d) Su et al. [20]. Refer to the text for more detail. Best viewed in color.
Abstract
1. Introduction
2. Related Work
3. Proposed Method
3.1. Robust Estimation
Algorithm 1: General MSAC procedure (the best cost is initialized to the maximum value of its data type, and the best model to an arbitrary value or vector).
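Algorithm 1 is the general MSAC procedure used throughout the paper. A generic MSAC loop of this kind (draw random minimal samples, score each hypothesis with a truncated squared loss, keep the hypothesis with the lowest total cost) can be sketched as follows; `fit_minimal` and `residuals` are placeholder callbacks, not the paper's notation.

```python
import numpy as np

def msac(data, fit_minimal, residuals, sample_size, n_samples, tau, seed=0):
    """Generic MSAC: minimize the sum of squared errors clipped at tau**2.

    fit_minimal(subset) -> model parameters, or None for a degenerate sample
    residuals(model, data) -> per-datum error magnitudes (1D array)
    """
    rng = np.random.default_rng(seed)
    best_model, best_cost = None, np.inf          # best cost starts at "infinity"
    for _ in range(n_samples):
        subset = data[rng.choice(len(data), size=sample_size, replace=False)]
        model = fit_minimal(subset)
        if model is None:
            continue
        err = residuals(model, data)
        cost = np.minimum(err ** 2, tau ** 2).sum()   # truncated quadratic loss
        if cost < best_cost:
            best_model, best_cost = model, cost
    return best_model, best_cost

# toy usage: robust 1D location estimate with 30% gross outliers
data = np.concatenate([np.random.default_rng(1).normal(5.0, 0.1, 70),
                       np.random.default_rng(2).uniform(-50, 50, 30)])
model, cost = msac(data, fit_minimal=lambda s: s.mean(),
                   residuals=lambda m, d: np.abs(d - m),
                   sample_size=1, n_samples=1000, tau=0.3)
print(round(float(model), 2))
```

In the paper, this loop is instantiated for circle fitting, sphere fitting, and pairwise pose estimation, with the error-clipping thresholds and sample counts listed in Section 3.6.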
3.2. Multi-Cue Based Circle Detection
Algorithm 2: Proposed circle detection algorithm (candidate circles over the set of all pixels in the input image are sorted in ascending order of cost, and the candidate with the least cost is selected).
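Algorithm 2 selects the best circle among candidates ranked by cost. Inside a robust, MSAC-style circle search over edge pixels, the minimal sample is three points, for which the circumcircle has a closed-form solution. The helper below shows that standard construction; it is a generic building block, not a transcription of the authors' algorithm.

```python
import numpy as np

def circle_from_three_points(p1, p2, p3):
    """Circumcircle through three 2D points; returns (center, radius) or None
    if the points are (nearly) collinear. Suitable as the minimal-sample fit
    inside an MSAC loop over edge pixels."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    a = 2.0 * np.array([p2 - p1, p3 - p1])                # linear system from the
    b = np.array([p2 @ p2 - p1 @ p1, p3 @ p3 - p1 @ p1])  # perpendicular bisectors
    if abs(np.linalg.det(a)) < 1e-9:
        return None
    center = np.linalg.solve(a, b)
    return center, float(np.linalg.norm(p1 - center))

print(circle_from_three_points((0, 1), (1, 0), (0, -1)))  # center ~(0, 0), radius ~1
```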
3.3. Sphere Fitting
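In this stage, a sphere is fit to the 3D points back-projected from the detected circular region; the parameter table in Section 3.6 lists a 2 cm error-clipping threshold and 10,000 MSAC samples for sphere fitting. The snippet below is a generic algebraic least-squares sphere fit that estimates center and radius jointly; whether the physical ball radius is fixed or estimated follows the paper's text, so treat this as an illustrative baseline only.

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit to an (N, 3) array of 3D points.
    Solves 2*c.x + (r^2 - |c|^2) = |x|^2 in the least-squares sense."""
    pts = np.asarray(points, dtype=float)
    a = np.hstack([2.0 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(a, b, rcond=None)
    center, w = sol[:3], sol[3]
    radius = float(np.sqrt(w + center @ center))
    return center, radius

# toy usage: noisy samples on a sphere of radius 0.12 m centered at (0.3, 0.1, 1.5)
rng = np.random.default_rng(2)
d = rng.normal(size=(500, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
pts = np.array([0.3, 0.1, 1.5]) + 0.12 * d + rng.normal(0, 0.002, (500, 3))
print(fit_sphere(pts))
```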
3.4. Pairwise Pose Estimation
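Corresponding sphere centers observed by two cameras determine the rigid transform between them. The closed-form SVD-based (Kabsch/Umeyama) solution below is the standard way to compute this transform from three or more correspondences and is a natural building block for a robust pairwise estimation stage; it is that standard construction, not a transcription of the paper's procedure.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ R @ src + t,
    for (N, 3) arrays of corresponding 3D points (Kabsch/Umeyama)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    h = (src - mu_s).T @ (dst - mu_d)                 # cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))            # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = mu_d - r @ mu_s
    return r, t

# toy usage: recover a known pose from 3 non-collinear correspondences
src = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 2.0]])
angle = np.deg2rad(30)
r_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ r_true.T + np.array([0.5, -0.2, 0.1])
r_est, t_est = rigid_transform(src, dst)
print(np.allclose(r_est, r_true), np.round(t_est, 3))
```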
3.5. Bundle Adjustment
Algorithm 3: Proposed extrinsic calibration algorithm (the Levenberg–Marquardt algorithm refines the solution that minimizes the cost, starting from an initial estimate).
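Algorithm 3 refines all camera poses jointly with the Levenberg–Marquardt algorithm, starting from an initial solution (in this pipeline, the pairwise estimates). The sketch below shows one way to set up such a refinement with SciPy's LM solver, minimizing the spread of corresponding sphere centers after transformation into a common frame; the axis-angle parameterization and the residual definition are illustrative choices, not necessarily the paper's.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def pack(poses):
    """Flatten a list of (R, t) pairs for cameras 1..M-1 into a parameter vector."""
    return np.concatenate([np.r_[Rotation.from_matrix(r).as_rotvec(), t]
                           for r, t in poses])

def unpack(x, n_cams):
    """Camera 0 is fixed as the reference frame; the other poses come from x."""
    poses = [(np.eye(3), np.zeros(3))]
    for i in range(n_cams - 1):
        v = x[6 * i:6 * i + 6]
        poses.append((Rotation.from_rotvec(v[:3]).as_matrix(), v[3:]))
    return poses

def residuals(x, centers):
    """centers: (n_frames, n_cams, 3) corresponding sphere centers.
    Residual = deviation of each transformed center from its per-frame mean."""
    poses = unpack(x, centers.shape[1])
    world = np.stack([centers[:, c] @ r.T + t for c, (r, t) in enumerate(poses)],
                     axis=1)
    return (world - world.mean(axis=1, keepdims=True)).ravel()

# toy usage: two cameras observing 20 sphere centers; camera 1 is rotated and shifted
rng = np.random.default_rng(3)
pts = rng.uniform(-1.0, 1.0, (20, 3)) + np.array([0.0, 0.0, 2.0])
r1 = Rotation.from_rotvec([0.0, 0.4, 0.0]).as_matrix()
t1 = np.array([0.8, 0.0, 0.1])
centers = np.stack([pts, (pts - t1) @ r1], axis=1)   # camera-1 coordinates of pts
x0 = pack([(np.eye(3), np.zeros(3))])                # crude initial guess (identity)
sol = least_squares(residuals, x0, args=(centers,), method='lm')
print(np.round(unpack(sol.x, 2)[1][1], 3))           # recovered translation ≈ t1
```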
3.6. Discussion on Parameter Settings
4. Experiments
4.1. Circle Detection Results
4.2. Extrinsic Calibration Results
4.3. Computation Time
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Shotton, J.; Fitzgibbon, A.; Cook, M.; Sharp, T.; Finocchio, M.; Moore, R.; Kipman, A.; Blake, A. Real-time human pose recognition in parts from single depth images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011; pp. 1297–1304. [Google Scholar]
- Sharp, T.; Keskin, C.; Robertson, D.; Taylor, J.; Shotton, J.; Kim, D.; Rhemann, C.; Leichter, I.; Vinnikov, A.; Wei, Y.; et al. Accurate, robust, and flexible real-time hand tracking. In Proceedings of the Annual ACM Conference on Human Factors in Computing Systems, Seoul, Korea, 18–23 April 2015; pp. 3633–3642. [Google Scholar]
- Newcombe, R.A.; Izadi, S.; Hilliges, O.; Molyneaux, D.; Kim, D.; Davison, A.J.; Kohli, P.; Shotton, J.; Hodges, S.; Fitzgibbon, A. KinectFusion: Real-time dense surface mapping and tracking. In Proceedings of the IEEE International Symposium on Mixed and Augmented Reality, Basel, Switzerland, 26–29 October 2011; pp. 127–136. [Google Scholar]
- Endres, F.; Hess, J.; Sturm, J.; Cremers, D.; Burgard, W. 3-D mapping with an RGB-D camera. IEEE Trans. Robot. 2013, 30, 177–187. [Google Scholar] [CrossRef]
- ASUS. Xtion PRO LIVE. 2011. Available online: https://www.asus.com/us/3D-Sensor/Xtion_PRO_LIVE/ (accessed on 29 March 2019).
- Microsoft. Kinect v2. 2015. Available online: https://support.xbox.com/en-US/xbox-on-windows/accessories/kinect-for-windows-v2-info (accessed on 29 March 2019).
- Intel. RealSense Camera SR300. 2016. Available online: https://www.mouser.com/pdfdocs/intel_realsense_camera_sr300.pdf (accessed on 29 March 2019).
- Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
- Bradski, G. The OpenCV Library. Dr. Dobb’s J. Softw. Tools 2000, 120, 122–125. [Google Scholar]
- Svoboda, T.; Martinec, D.; Pajdla, T. A convenient multicamera self-calibration for virtual environments. Presence 2005, 14, 407–422. [Google Scholar] [CrossRef]
- Reynolds, M.; Doboš, J.; Peel, L.; Weyrich, T.; Brostow, G.J. Capturing Time-of-Flight Data with Confidence. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011; pp. 1–8. [Google Scholar]
- Agrawal, M.; Davis, L.S. Camera calibration using spheres: A semi-definite programming approach. In Proceedings of the IEEE International Conference on Computer Vision, Nice, France, 13–16 October 2003; pp. 782–789. [Google Scholar]
- Zhang, H.; Wong, K.Y.K.; Zhang, G. Camera calibration from images of spheres. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 499–503. [Google Scholar] [CrossRef]
- Guan, J.; Deboeverie, F.; Slembrouck, M.; Van Haerenborgh, D.; Van Cauwelaert, D.; Veelaert, P.; Philips, W. Extrinsic calibration of camera networks using a sphere. Sensors 2015, 15, 18985–19005. [Google Scholar] [CrossRef]
- Shen, J.; Xu, W.; Luo, Y.; Su, P.C.; Cheung, S.C.S. Extrinsic calibration for wide-baseline RGB-D camera network. In Proceedings of the International Workshop on Multimedia Signal Processing, Jakarta, Indonesia, 22–24 September 2014. [Google Scholar]
- Ruan, M.; Huber, D. Calibration of 3D sensors using a spherical target. In Proceedings of the International Conference on 3D Vision, Tokyo, Japan, 8–11 December 2014; pp. 187–193. [Google Scholar]
- Staranowicz, A.N.; Brown, G.R.; Morbidi, F.; Mariottini, G.L. Practical and accurate calibration of RGB-D cameras using spheres. Comput. Vision Image Underst. 2015, 137, 102–114. [Google Scholar] [CrossRef]
- Staranowicz, A.N.; Ray, C.; Mariottini, G.L. Easy-to-use, general, and accurate multi-Kinect calibration and its application to gait monitoring for fall prediction. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Milan, Italy, 25–29 August 2015; pp. 4994–4998. [Google Scholar]
- Lee, J.H.; Kim, E.S.; Park, S.Y. Synchronization error compensation of multi-view RGB-D 3D modeling system. In Proceedings of the Asian Conference on Computer Vision Workshops, Taipei, Taiwan, 20–24 November 2016; pp. 162–174. [Google Scholar]
- Su, P.C.; Shen, J.; Xu, W.; Cheung, S.C.S.; Luo, Y. A fast and robust extrinsic calibration for RGB-D camera networks. Sensors 2018, 18, 235. [Google Scholar] [CrossRef]
- Kwon, Y.C.; Jang, J.W.; Choi, O. Automatic sphere detection for extrinsic calibration of multiple RGBD cameras. In Proceedings of the International Conference on Control, Automation and Systems, Daegwallyeong, Korea, 17–20 October 2018; pp. 1451–1454. [Google Scholar]
- Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
- Torr, P.H.S.; Zisserman, A. MLESAC: A new robust estimator with application to estimating image geometry. Comput. Vision Image Underst. 2000, 78, 138–156. [Google Scholar] [CrossRef]
- Foix, S.; Alenya, G.; Torras, C. Lock-in Time-of-Flight (ToF) cameras: A survey. IEEE Sens. J. 2011, 11, 1917–1926. [Google Scholar] [CrossRef]
- Khoshelham, K.; Elberink, S.O. Accuracy and resolution of kinect depth data for indoor mapping applications. Sensors 2012, 12, 1437–1454. [Google Scholar] [CrossRef]
- Kim, Y.S.; Kang, B.; Lim, H.; Choi, O.; Lee, K.; Kim, J.D.K.; Kim, C. Parametric model-based noise reduction for ToF depth sensors. Proc. SPIE 2012, 8290, 82900A. [Google Scholar]
- Choi, O.; Lee, S.; Lim, H. Inter-frame consistent multi-frequency phase unwrapping for Time-of-Flight cameras. Opt. Eng. 2013, 52, 057005. [Google Scholar] [CrossRef]
- Kim, Y.M.; Chan, D.; Theobalt, C.; Thrun, S. Design and calibration of a multi-view TOF sensor fusion system. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Anchorage, AK, USA, 23–28 June 2008. [Google Scholar]
- Herrera, C.D.; Kannala, J.; Heikkilä, J. Joint depth and color camera calibration with distortion correction. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2058–2064. [Google Scholar] [CrossRef]
- Jung, J.; Lee, J.Y.; Jeong, Y.; Kweon, I. Time-of-flight sensor calibration for a color and depth camera pair. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1501–1513. [Google Scholar] [CrossRef] [PubMed]
- Basso, F.; Menegatti, E.; Pretto, A. Robust intrinsic and extrinsic calibration of RGB-D cameras. IEEE Trans. Robot. 2018, 34, 1315–1332. [Google Scholar] [CrossRef]
- Yang, R.S.; Chan, Y.H.; Gong, R.; Nguyen, M.; Strozzi, A.G.; Delmas, P.; Gimel’farb, G.; Ababou, R. Multi-Kinect scene reconstruction: Calibration and depth inconsistencies. In Proceedings of the International Conference on Image and Vision Computing, Wellington, New Zealand, 27–29 November 2013; pp. 47–52. [Google Scholar]
- Ha, J.E. Extrinsic calibration of a camera and laser range finder using a new calibration structure of a plane with a triangular hole. Int. J. Control Autom. Syst. 2012, 10, 1240–1244. [Google Scholar] [CrossRef]
- Fernández-Moral, E.; González-Jiménez, J.; Rives, P.; Arévalo, V. Extrinsic calibration of a set of range cameras in 5 seconds without pattern. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 429–435. [Google Scholar]
- Perez-Yus, A.; Fernández-Moral, E.; Lopez-Nicolas, G.; Guerrero, J.J.; Rives, P. Extrinsic calibration of multiple RGB-D cameras from line observations. IEEE Robot. Autom. Lett. 2018, 3, 273–280. [Google Scholar] [CrossRef]
- Ballard, D.H. Generalizing the Hough transform to detect arbitrary shapes. Pattern Recognit. 1981, 13, 111–122. [Google Scholar] [CrossRef]
- Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
- Triggs, B.; McLauchlan, P.F.; Hartley, R.I.; Fitzgibbon, A.W. Bundle Adjustment—A Modern Synthesis. In Vision Algorithms: Theory and Practice; Triggs, B., Zisserman, A., Szeliski, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2000; pp. 298–372. [Google Scholar]
- Levenberg, K. A method for the solution of certain non-linear problems in least squares. Q. Appl. Math. 1944, 2, 164–168. [Google Scholar] [CrossRef]
- Marquardt, D. An algorithm for least-squares estimation of nonlinear parameters. SIAM J. Appl. Math. 1963, 11, 431–441. [Google Scholar] [CrossRef]
- Shapiro, L.; Stockman, G. Computer Vision; Pearson: Upper Saddle River, NJ, USA, 2001. [Google Scholar]
- Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, PAMI-8, 679–698. [Google Scholar] [CrossRef]
- Lachat, E.; Macher, H.; Landes, T.; Grussenmeyer, P. Assessment and calibration of an RGB-D camera (Kinect v2 sensor) towards a potential use for close-range 3D modeling. Remote Sens. 2015, 7, 13070–13097. [Google Scholar] [CrossRef]
- Akinlar, C.; Topal, C. EDCircles: A real-time circle detector with a false detection control. Pattern Recognit. 2013, 46, 725–740. [Google Scholar] [CrossRef]
- Barney Smith, E.H.; Lamiroy, B. Circle detection performance evaluation revisited. In Graphic Recognition. Current Trends and Challenges; Lamiroy, B., Dueire Lins, R., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 3–18. [Google Scholar]
- Lourakis, M. Levmar: Levenberg-Marquardt Nonlinear Least Squares Algorithms in C/C++. Available online: http://www.ics.forth.gr/~lourakis/levmar/ (accessed on 14 January 2019).
| Parameter | Stage or Meaning | Setting in This Paper | Recommended Settings |
|---|---|---|---|
| (or ) | Error-clipping value (threshold) of the robust loss function | | |
| | Circle fitting | 3 pixels | 2–4 pixels |
| | Circle detection | × | Adaptive |
| | Sphere fitting | 2 cm | 1–5 cm |
| | Pairwise pose estimation | cm | |
| | Bundle adjustment | 2 cm | 1–5 cm |
| | Number of total samples in MSAC | | |
| | Circle fitting | 1000 | 1000 |
| | Sphere fitting | 10,000 | 10,000 |
| | Pairwise pose estimation | 10,000 | 10,000 |
| | Mean sphere color | (165.79, 146.02) | Learned |
| K | Hierarchical segmentation | 30 | 30 |
| | Circle detection | 10 | 5–15 (a small value) |
| | Circle detection | 10% | Dependent on the purpose |
| | Circle detection | 10 pixels | 10 pixels |
| | Circle detection | Adaptive | |
| Method | Stage | Static Set (N = 95) | Dynamic Set (N = 164) |
|---|---|---|---|
| Proposed | Circle detection (per image) | 53.5 ms | 60.5 ms |
| Proposed | Sphere fitting (per region) | 327 ms | 313 ms |
| Proposed | Pairwise pose estimation (per camera pair) | 776 ms | 1.28 s |
| Proposed | Bundle adjustment | 29.6 s | 149 s |
| Su et al. [20] | Pairwise pose estimation (per camera pair) | 78.3 s | 82.3 s |
| Su et al. [20] | Bundle adjustment | 2.09 s | 3.51 s |
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Kwon, Y.C.; Jang, J.W.; Hwang, Y.; Choi, O. Multi-Cue-Based Circle Detection and Its Application to Robust Extrinsic Calibration of RGB-D Cameras. Sensors 2019, 19, 1539. https://doi.org/10.3390/s19071539