Abstract
Object matching across non-overlapping scenes observed by multiple cameras is a challenging task, owing to many factors: complex backgrounds, illumination changes, the pose of the observed object, the differing viewpoints and image resolutions of the cameras, shadows, and occlusions. Matching observations of an object whose appearance varies in such a context usually reduces to evaluating their similarity over carefully chosen image features. We observe that a given feature is typically robust to a particular kind of variation; SIFT, for example, is robust to changes in viewpoint and scale. Combining the strengths of a bag of such features should therefore yield better performance. Based on these observations, we propose an adaptive feature-fusion algorithm. The algorithm first evaluates the matching accuracy of four carefully chosen and well-validated features (color histogram, UV chromaticity, major color spectrum, and SIFT), using exponential models of entropy as the similarity measure. Second, an adaptive fusion algorithm combines this bag of features into a collaborative similarity measure. Our approach adaptively and dynamically reduces the appearance variation caused by multiple factors. Experimental results on human matching show higher robustness and matching accuracy than previous fusion methods.
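The two steps described above can be sketched in code. The following is a minimal Python illustration, not the paper's implementation: it assumes exp(-beta * D_KL) as one common instantiation of an exponential model of entropy (using the Kullback-Leibler divergence between feature histograms), and it weights each feature's similarity in proportion to its estimated matching accuracy. All function names, parameters, and numerical values are hypothetical.

import numpy as np

def feature_similarity(hist_a, hist_b, beta=1.0):
    # Exponential-of-entropy similarity between two feature histograms.
    # Assumption: exp(-beta * D_KL(p || q)); the paper's exact form may differ.
    eps = 1e-12
    p = hist_a / (hist_a.sum() + eps)
    q = hist_b / (hist_b.sum() + eps)
    kl = np.sum(p * np.log((p + eps) / (q + eps)))
    return float(np.exp(-beta * kl))

def adaptive_fusion(similarities, accuracies):
    # Fuse per-feature similarities with weights proportional to each
    # feature's estimated matching accuracy (hypothetical weighting scheme).
    w = np.asarray(accuracies, dtype=float)
    w = w / w.sum()
    return float(np.dot(w, similarities))

# Usage with the four features named in the abstract (color histogram,
# UV chromaticity, major color spectrum, SIFT); values are illustrative.
rng = np.random.default_rng(0)
sims = [feature_similarity(rng.random(64), rng.random(64)) for _ in range(3)]
sims.append(0.7)                  # e.g. a SIFT match score normalized to [0, 1]
accs = [0.80, 0.75, 0.85, 0.90]   # per-feature accuracies from a validation set
print(f"fused similarity: {adaptive_fusion(sims, accs):.3f}")

In this sketch, features whose measured accuracy is higher contribute more to the collaborative similarity score, which is the intuition behind the adaptive weighting.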
Acknowledgments
This work was supported by the National Natural Science Foundation of China under Grant 61105022, the Research Fund for the Doctoral Program of Higher Education of China under Grant 20110073120028, and the Jiangsu Provincial Natural Science Foundation of China under Grant BK2012296.
Cite this article
Liu, H., Lv, X., Zhu, T. et al. An Adaptive Feature-fusion Method for Object Matching over Non-overlapped Scenes. J Sign Process Syst 76, 77–89 (2014). https://doi.org/10.1007/s11265-013-0806-7