An Adaptive Feature-fusion Method for Object Matching over Non-overlapped Scenes

Journal of Signal Processing Systems

Abstract

Object matching across non-overlapped scenes in multi-camera networks is a challenging task, owing to many factors: complex backgrounds, illumination variation, the pose of the observed object, differences in viewpoint and image resolution between cameras, shadows, and occlusions. Matching an object's observations, whose appearances vary in such a context, usually reduces to evaluating their similarity over carefully chosen image features. We observe that a given feature is typically robust to a particular kind of variation; SIFT, for example, is robust to changes in viewpoint and scale. We therefore argue that combining the strengths of a set of such features should yield better performance. Based on these observations, we propose an adaptive feature-fusion algorithm. The algorithm first evaluates the matching accuracy of four carefully chosen and well-validated features, color histogram, UV chromaticity, major color spectrum, and SIFT, using exponential models of entropy as the similarity measure. Second, an adaptive fusion algorithm is presented that fuses this set of features into a collaborative similarity measure. Our approach adaptively and dynamically reduces the appearance variation of objects caused by multiple factors. Experimental results show that, applied to human matching, it achieves higher robustness and matching accuracy than previous fusion methods.
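To make the two-stage idea concrete, here is a minimal Python sketch of the fusion step. The exponential-entropy similarity is rendered as the exponential of a negative Kullback-Leibler divergence, one plausible reading of "exponential models of entropy", and the accuracy-proportional weights are a simplified stand-in for the paper's adaptive scheme; all function names and numbers are illustrative, not the authors' implementation.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence between two normalized histograms."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def exp_entropy_similarity(p, q):
    """Exponential-of-entropy style similarity: maps a divergence in
    [0, inf) to a similarity in (0, 1]. One plausible reading of the
    'exponential models of entropy' measure, not the paper's exact form."""
    return float(np.exp(-kl_divergence(p, q)))

def fuse_similarities(sims, accuracies):
    """Weight each feature channel's similarity by its normalized
    matching accuracy -- a simplified stand-in for the adaptive fusion."""
    w = np.asarray(accuracies, dtype=float)
    w = w / w.sum()
    return float(np.dot(w, np.asarray(sims, dtype=float)))

# Example: fuse four feature channels for one candidate pair.
sims = [0.72, 0.65, 0.80, 0.58]   # color hist., UV chroma., MCS, SIFT
accs = [0.61, 0.55, 0.70, 0.66]   # per-feature accuracies from validation
print(f"fused similarity: {fuse_similarities(sims, accs):.3f}")
```

In this toy run the major-color-spectrum channel, being both the most similar and the most reliable, contributes most to the fused score, which is the behavior an accuracy-weighted fusion is meant to produce.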




Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant 61105022, the Research Fund for the Doctoral Program of Higher Education of China under Grant 20110073120028, and the Jiangsu Provincial Natural Science Foundation of China under Grant BK2012296.

Author information

Corresponding author

Correspondence to Huanxi Liu.


About this article

Cite this article

Liu, H., Lv, X., Zhu, T. et al. An Adaptive Feature-fusion Method for Object Matching over Non-overlapped Scenes. J Sign Process Syst 76, 77–89 (2014). https://doi.org/10.1007/s11265-013-0806-7

