
Virtual interaction algorithm of cultural heritage based on multi feature fusion

Published: 01 January 2022

Abstract

In traditional virtual interaction algorithms for cultural heritage, the database used for interactive action recognition is too limited, which leads to low recognition accuracy, long recognition times, and other problems. This paper therefore introduces a multi-feature fusion method to optimize the cultural heritage virtual interaction algorithm. Kinect bone (skeleton) tracking is applied to identify the movements of the tracked subject: 20 joints of the human body are tracked, and interactive actions are recognized from fingertip candidate points. A multi-feature fusion database is established to support the judgment of recognized actions in the subsequent virtual interactive operation. Mean shift is used to derive the moving mean of the target's action position and thereby track the interacting object. The Euclidean distance formula is applied to the training samples of the multi-feature fusion database to realize the judgment of recognized actions and the corresponding virtual interaction. To verify its feasibility, the proposed algorithm is simulated on a virtual interactive ink-painting script from a cultural heritage museum, and a comparative experiment is designed. The experimental results show that the proposed algorithm outperforms traditional virtual interaction algorithms in both recognition accuracy and efficiency, which demonstrates the feasibility of the method.
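
The full text and source code are not included on this page, so the following Python/NumPy sketch is only a minimal illustration of the pipeline the abstract outlines: building a pose feature from the 20 tracked Kinect joints, tracking a position with a mean-shift moving mean, and judging the action by Euclidean distance against a feature database. The hip-centre normalisation, the Gaussian kernel, the reading of the Euclidean-distance step as nearest-neighbour matching, and the function names (joint_feature_vector, mean_shift_position, recognise_action) are assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal illustration only: the joint ordering, normalisation, kernel choice,
# and database layout below are assumptions, not the paper's implementation.

NUM_JOINTS = 20  # Kinect v1 skeleton tracking reports 20 body joints


def joint_feature_vector(joints):
    """Flatten 20 tracked (x, y, z) joint positions into one feature vector,
    centred on joint 0 (assumed to be the hip centre) so the pose feature is
    translation-invariant."""
    joints = np.asarray(joints, dtype=float)      # shape (20, 3)
    centred = joints - joints[0]                  # assumed hip-centre normalisation
    return centred.reshape(-1)                    # shape (60,)


def mean_shift_position(candidates, current, bandwidth=0.2, iterations=10):
    """Mean-shift update loop: move the tracked position toward the
    kernel-weighted mean of nearby candidate points (e.g. fingertip
    candidates), i.e. the 'moving mean' step described in the abstract."""
    candidates = np.asarray(candidates, dtype=float)
    pos = np.asarray(current, dtype=float)
    for _ in range(iterations):
        dist = np.linalg.norm(candidates - pos, axis=1)
        weights = np.exp(-(dist / bandwidth) ** 2)            # Gaussian kernel (assumed)
        new_pos = (weights[:, None] * candidates).sum(axis=0) / weights.sum()
        if np.linalg.norm(new_pos - pos) < 1e-4:              # converged
            break
        pos = new_pos
    return pos


def recognise_action(feature, database):
    """Nearest-neighbour matching: return the label of the database sample
    closest to the observed feature vector under Euclidean distance."""
    best_label, best_dist = None, float("inf")
    for label, samples in database.items():
        for sample in samples:
            d = np.linalg.norm(feature - np.asarray(sample, dtype=float))
            if d < best_dist:
                best_label, best_dist = label, d
    return best_label, best_dist
```

In this sketch, database is assumed to map each action label to a list of 60-dimensional vectors produced by joint_feature_vector, so recognise_action returns the label whose stored sample lies closest to the observed pose.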



        Published In

Journal of Computational Methods in Sciences and Engineering, Volume 22, Issue 1
        2022
        342 pages

        Publisher

IOS Press, Netherlands


Author Tags

Virtual interaction; action recognition; candidate points; multi-feature fusion; Kinect bone tracking; mean shift

        Qualifiers

        • Research-article
