CN116430991A - Exhibition hall digital person explanation method and system based on mixed reality and electronic equipment - Google Patents
- Publication number
- CN116430991A CN116430991A CN202310204280.7A CN202310204280A CN116430991A CN 116430991 A CN116430991 A CN 116430991A CN 202310204280 A CN202310204280 A CN 202310204280A CN 116430991 A CN116430991 A CN 116430991A
- Authority
- CN
- China
- Prior art keywords
- digital
- explanation
- exhibition
- person
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
  - G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    - G09F—DISPLAYING; ADVERTISING; SIGNS; LABELS OR NAME-PLATES; SEALS
      - G09F27/00—Combined visual and audible advertising or displaying, e.g. for public address
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06F—ELECTRIC DIGITAL DATA PROCESSING
      - G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
        - G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
          - G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T19/00—Manipulating 3D models or images for computer graphics
        - G06T19/006—Mixed reality
Abstract
The application provides a mixed-reality-based exhibition hall digital person explanation method and system, and an electronic device. The method is applied to a server and comprises the following steps: receiving a first viewing request sent by user equipment; generating a first digital lecturer based on the first viewing request; and displaying the first digital lecturer so that the first digital lecturer explains the exhibit. By receiving the first viewing request sent by the user equipment, generating the first digital lecturer according to that request, and finally displaying the first digital lecturer so that it explains the exhibit, the method and system require far less manpower and material resources than the prior art, strengthen the interaction between visitors and the lecturer, and thereby improve the user experience.
Description
Technical Field
The application relates to the field of digital technology, and in particular to a mixed-reality-based exhibition hall digital person explanation method, a mixed-reality-based exhibition hall digital person explanation system, and an electronic device.
Background
An exhibition hall is an important venue and window for displaying humanity's historical progress, cultural prosperity and technological innovation, and serves to spread knowledge, cultivate sentiment and educate the public.
In most exhibition halls, visitors learn about the exhibits in one of two ways. In the first, a human lecturer explains the exhibits; however, the lecturer's workload is so heavy that the quality of the explanation suffers, which degrades the visitors' viewing experience. In the second, the exhibition hall explains the exhibits through voice-broadcast equipment; this mode provides sound only, with no visible lecturer, so the interaction between visitor and lecturer is lost, the visit easily becomes tedious, and the sense of immersion is lacking.
Neither of these modes is therefore conducive to viewing the exhibition, and a mixed-reality-based exhibition hall digital person explanation method, system and electronic device are urgently needed.
Disclosure of Invention
The application provides a mixed-reality-based exhibition hall digital person explanation method and system, and an electronic device. A first viewing request sent by user equipment is received, a first digital lecturer is generated according to the first viewing request, and the first digital lecturer is then displayed so that it explains the exhibit. Compared with the prior art, no excessive manpower or material resources are required, the interaction between visitors and the lecturer is strengthened, and the user experience is further improved.
In a first aspect of the present application, a mixed-reality-based exhibition hall digital person explanation method is provided. Applied to a server, the method comprises:
receiving a first viewing request sent by user equipment;
generating a first digital lecturer based on the first viewing request;
and displaying the first digital lecturer so that the first digital lecturer explains the exhibit.
By adopting the above technical solution, the server receives the first viewing request from the user equipment, generates the corresponding first digital lecturer according to that request, and then displays the first digital lecturer so that it explains the exhibit. Compared with the prior art, this reduces the probability of a poor explanation caused by human factors, strengthens the visual and auditory interaction between lecturer and visitor, helps the visitor learn more about the exhibits in the hall, and improves the visitor's viewing experience.
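The three-step server flow above can be sketched as follows. This is a minimal illustration only: the exhibit identifiers, the script store, and the request-handling function are all assumptions, since the patent does not specify any concrete data model.

```python
from dataclasses import dataclass

@dataclass
class DigitalLecturer:
    """A generated digital lecturer bound to one exhibit (illustrative model)."""
    exhibit_id: str
    script: str

# Hypothetical store of pre-authored explanation scripts, keyed by exhibit ID.
EXHIBIT_SCRIPTS = {
    "exhibit-A": "Welcome. Exhibit A is introduced as follows ...",
}

def handle_viewing_request(exhibit_id: str) -> DigitalLecturer:
    """S110: receive the viewing request; S120: generate the lecturer.
    S130 (display on the MR device) would consume the returned object."""
    script = EXHIBIT_SCRIPTS.get(exhibit_id, "No explanation is available yet.")
    return DigitalLecturer(exhibit_id=exhibit_id, script=script)

lecturer = handle_viewing_request("exhibit-A")
print(lecturer.script)
```

In a real system the rendering step would of course involve a 3D model and an MR runtime; the sketch only captures the request-to-lecturer mapping.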
Optionally, control information of the user is obtained, wherein the control information comprises one or both of voice information and action information;
the control information is recognized, and a first control instruction is generated according to a preset relationship, the preset relationship being the correspondence between control information and control instructions;
and the first control instruction is sent to the first digital lecturer so as to control the first digital lecturer to explain the exhibit.
By adopting the above technical solution, the server first obtains and recognizes the user's control information, generates the first control instruction from the correspondence between control information and control instructions, and then sends the first control instruction to the first digital lecturer, thereby controlling it to explain the exhibit. The first digital lecturer can thus explain the exhibit automatically according to the user's voice and/or action information, achieving intelligent recognition, strengthening the interaction between lecturer and visitor, and improving the user experience.
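A toy version of the preset relationship between control information and control instructions might look like the following. The modality names, command vocabulary, and the voice-first ordering (the description later states that voice is recognized preferentially when both modalities are present) are illustrative assumptions.

```python
# Assumed preset relationship: (modality, recognized content) -> instruction.
PRESET_RELATION = {
    ("voice", "explain exhibit B"): "CMD_EXPLAIN_EXHIBIT_B",
    ("action", "wave"): "CMD_START_EXPLANATION",
}

def generate_first_instruction(control_info):
    """control_info: list of (modality, content) pairs.
    Voice entries are checked first, matching the stated priority rule."""
    ordered = sorted(control_info, key=lambda item: item[0] != "voice")
    for item in ordered:
        if item in PRESET_RELATION:
            return PRESET_RELATION[item]
    return None  # unrecognized control information
```

The dictionary lookup stands in for whatever speech- or gesture-recognition pipeline the server would actually run.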
Optionally, position information of the user is acquired;
explanation content corresponding to the position information is extracted, the explanation content comprising the narration for the exhibit and the explanation actions of the first digital lecturer;
and the explanation content is played through the first digital lecturer so that the first digital lecturer explains the exhibit.
By adopting the above technical solution, the server first acquires the user's position information and extracts the explanation content corresponding to it, the content comprising the narration for the exhibit and the explanation actions of the first digital lecturer; the content is then played through the first digital lecturer, so that the first digital lecturer explains the exhibit and, compared with the prior art, introduces the exhibits to the visitor more vividly.
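A minimal sketch of extracting explanation content from position information, assuming the hall floor is partitioned into zones. The zone boundary, zone names, narration text and action names are all invented for illustration; the patent does not specify the positioning technology.

```python
# Assumed zone -> content table; each entry holds the narration and the
# lecturer's explanation action for exhibits in that zone.
EXPLANATION_BY_ZONE = {
    "zone-1": {"narration": "Introduction to exhibit A ...", "action": "gesture_toward_exhibit"},
    "zone-2": {"narration": "Introduction to exhibit B ...", "action": "turn_and_point"},
}

def zone_of(x: float, y: float) -> str:
    """Map a user coordinate to a zone; the x < 10 split is an assumed floor plan."""
    return "zone-1" if x < 10.0 else "zone-2"

def explanation_for_position(x: float, y: float) -> dict:
    return EXPLANATION_BY_ZONE[zone_of(x, y)]
```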
Optionally, after the acquiring of the user's position information, the method further comprises:
judging whether the position information is the same as target position information, the target position information being the position information of a key exhibit of the exhibition hall;
and if the position information is the same as the target position information, sending a second control instruction to the first digital lecturer so as to control the first digital lecturer to broadcast a preset voice, the preset voice being used to prompt the user to behave properly.
By adopting the above technical solution, after obtaining the user's position information the server further judges whether it matches the position information of a key exhibit in the hall. When it does, the server sends the second control instruction to the first digital lecturer, controlling it to broadcast the preset voice that reminds the user to behave properly. This reduces the probability of a key exhibit being damaged by a visitor, while also improving the user's viewing experience.
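In practice "the same position" would likely mean "within some distance of the key exhibit", so one plausible reading of the check is a proximity test. The coordinates and the warning radius below are assumptions:

```python
import math

# Hypothetical key-exhibit coordinates (metres on the floor plan).
KEY_EXHIBIT_POSITIONS = [(4.0, 7.5), (12.0, 3.0)]

def near_key_exhibit(user_pos, key_positions=KEY_EXHIBIT_POSITIONS, radius=1.5):
    """True when the user stands within `radius` of any key exhibit.
    When True, the server would send the second control instruction so the
    lecturer plays the preset behaviour-reminder voice."""
    return any(math.dist(user_pos, k) <= radius for k in key_positions)
```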
Optionally, information on a plurality of exhibits is sent to the user equipment;
viewing-route selection information sent by the user equipment is received, and a preset first viewing route and a third control instruction are generated according to the viewing-route selection information;
the preset first viewing route is transmitted to the user equipment so that the user equipment displays it;
and the third control instruction is sent to the first digital lecturer to control the first digital lecturer to guide the user according to the preset first viewing route.
By adopting the above technical solution, the server sends information on a plurality of exhibits to the user equipment, so that the user can conveniently choose a viewing route suited to his or her own needs and return that choice to the server through the user equipment. After receiving the viewing-route selection information, the server generates the preset first viewing route and the third control instruction. It sends the route to the user equipment for display, and sends the third control instruction to the first digital lecturer so that the lecturer guides the user, making it convenient for the user to visit and learn about the exhibits along the route shown on the user equipment.
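The route-selection step might be modelled as below. The route catalogue, route keys, exhibit names and the instruction format are all hypothetical; the patent only fixes that a route and a third control instruction are produced from the selection information.

```python
# Assumed catalogue of preset viewing routes.
PRESET_ROUTES = {
    "highlights": ["Exhibit A", "Exhibit C"],
    "full-tour": ["Exhibit A", "Exhibit B", "Exhibit C", "Exhibit D"],
}

def plan_viewing(selection: str):
    """Returns the preset first viewing route (sent to the user equipment for
    display) and the third control instruction (sent to the digital lecturer)."""
    route = PRESET_ROUTES.get(selection, PRESET_ROUTES["full-tour"])
    third_instruction = {"cmd": "GUIDE_ALONG_ROUTE", "route": route}
    return route, third_instruction
```

Falling back to the full tour for an unrecognized selection is a design choice of this sketch, not something the patent prescribes.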
Optionally, a modification request sent by the user equipment is received;
a first feature information group is generated according to the user's modification request, the first feature information group being the parameter information group of a digital lecturer obtained after the modification request is recognized;
the first feature information group is looked up in a preset database; if a second feature information group corresponding to the first feature information group exists in the preset database, a second digital lecturer corresponding to the second feature information group is displayed so that the second digital lecturer explains the exhibit, the correspondence between feature information groups and digital lecturers being prestored in the preset database, and the first and second feature information groups being identical feature information groups.
By adopting the above technical solution, the server receives the modification request sent by the user equipment and generates the first feature information group from it, which makes it convenient for the server to assign a corresponding digital lecturer. The server first looks the first feature information group up in the preset database; when a corresponding second feature information group exists there, the second digital lecturer associated with it is displayed so that it explains the exhibit, satisfying the user's personalized requirements for the digital lecturer and further improving the user's viewing experience.
Optionally, if no second feature information group corresponding to the first feature information group exists in the preset database, a third digital lecturer is constructed from the first feature information group so that the third digital lecturer explains the exhibit.
By adopting the above technical solution, when the preset database contains no second feature information group corresponding to the first, a digital lecturer is constructed from the first feature information group, producing a third digital lecturer that explains the exhibit. This raises the user's interest in the exhibition and thereby improves the user experience.
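The lookup-or-construct logic of the last two optional steps can be captured in a few lines. Feature groups are modelled here as tuples of feature strings, and all feature names and lecturer identifiers are assumptions:

```python
# Assumed preset database: feature information group -> digital lecturer ID.
PRESET_DATABASE = {
    ("female", "formal", "soft-voice"): "second-lecturer-001",
}

def lecturer_for_features(feature_group, database=PRESET_DATABASE):
    key = tuple(feature_group)
    if key in database:
        # A matching second feature information group exists: reuse its lecturer.
        return database[key]
    # No match: construct a third digital lecturer from the first feature group
    # and remember it for future requests.
    new_id = "constructed-" + "-".join(key)
    database[key] = new_id
    return new_id
```

Caching the newly constructed lecturer back into the database is a natural extension, though the patent does not state whether constructed lecturers are persisted.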
Optionally, after the third control instruction is sent to the first digital lecturer to control it to guide the user according to the preset first viewing route, the method further comprises:
acquiring the user's real-time movement route;
calculating a difference value between the real-time movement route and the preset first viewing route;
and judging whether the difference value exceeds a preset difference range; if it does, sending a fourth control instruction to the first digital lecturer so as to control the first digital lecturer to correct the user's real-time movement route.
By adopting the above technical solution, after guiding the user along the preset first viewing route, the server also acquires the user's real-time movement route, calculates the difference between it and the preset route, and judges whether the difference exceeds the preset range. When it does, the server sends the fourth control instruction to the first digital lecturer so that the lecturer corrects the user's route, ensuring that the user tours according to his or her chosen preferences, preventing the user from missing explanation content, and further improving the viewing experience.
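The patent leaves the "difference value" metric open. One plausible choice, sketched below under that assumption, is the largest distance from any sampled user position to its nearest point on the planned route:

```python
import math

def route_deviation(actual_points, planned_points):
    """Assumed metric: worst-case distance from the user's sampled positions
    to the nearest point of the preset first viewing route."""
    return max(
        min(math.dist(a, p) for p in planned_points)
        for a in actual_points
    )

def needs_correction(actual_points, planned_points, allowed=2.0):
    """If the deviation exceeds the preset range (`allowed`, an assumed value),
    the server would send the fourth control instruction to the lecturer."""
    return route_deviation(actual_points, planned_points) > allowed
```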
In a second aspect of the application, a mixed-reality-based exhibition hall digital person explanation system is provided. The system is a server comprising a receiving module and a processing module, wherein
the receiving module is configured to receive a first viewing request sent by user equipment;
the processing module is configured to generate a first digital lecturer based on the first viewing request;
and the processing module is further configured to display the first digital lecturer so that the first digital lecturer explains the exhibit.
In a third aspect of the present application, an electronic device is provided, comprising a processor, a memory for storing instructions, and a user interface and a network interface both used for communicating with other devices, the processor being configured to execute the instructions stored in the memory so as to cause the electronic device to perform the method of any one of the preceding aspects.
In summary, the present application provides at least one of the following beneficial technical effects:
1. The server generates a first digital lecturer corresponding to the first viewing request received from the user equipment, and then displays it so that the lecturer explains the exhibit. Compared with the prior art, this reduces the probability of a poor explanation caused by human factors, strengthens the visual and auditory interaction between lecturer and visitor, helps the visitor learn more about the exhibits, and improves the visitor's viewing experience.
2. The server receives the modification request sent by the user equipment and generates a first feature information group from it, making it convenient to assign a corresponding digital lecturer. The server first looks the first feature information group up in the preset database; when a corresponding second feature information group exists there, the second digital lecturer associated with it is displayed so that it explains the exhibit, satisfying the user's personalized requirements for the digital lecturer and further improving the viewing experience.
Drawings
Fig. 1 is a flow chart of a mixed reality-based digital human explanation method for an exhibition hall according to an embodiment of the present application.
Fig. 2 is a schematic flow chart of a mixed reality based digital human interpretation method for an exhibition hall according to an embodiment of the present application.
Fig. 3 is another flow diagram of a mixed reality based digital human interpretation method for an exhibition hall according to an embodiment of the present application.
Fig. 4 is an exemplary schematic diagram of a viewing route according to an embodiment of the present application.
Fig. 5 is a block diagram of a mixed reality based digital human interpretation system for an exhibition in accordance with an embodiment of the present application.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Reference numerals illustrate: 51. a receiving module; 52. a processing module; 53. a transmitting module; 61. a processor; 62. a communication bus; 63. a user interface; 64. a network interface; 65. a memory.
Detailed Description
In order to make the technical solutions in the present specification better understood by those skilled in the art, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only some embodiments of the present application, but not all embodiments.
In the description of embodiments of the present application, words such as "such as" or "for example" are used to indicate examples, illustrations or descriptions. Any embodiment or design described herein with "such as" or "for example" should not be construed as preferred or more advantageous than other embodiments or designs. Rather, such words are intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, the term "and/or" is merely an association relationship describing an association object, and indicates that three relationships may exist, for example, a and/or B may indicate: a alone, B alone, and both A and B. In addition, unless otherwise indicated, the term "plurality" means two or more. For example, a plurality of exhibits refers to two or more exhibits. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating an indicated technical feature. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
Before describing embodiments of the present application, some terms referred to in the embodiments of the present application will be first defined and described.
MR: mixed reality technology introduces virtual scene information into the real environment and builds an interactive feedback loop among the real world, the virtual world and the user, strengthening the realism of the user's experience. It is characterized by realism, real-time interactivity and imaginative presentation. Mixed reality is a combination of technologies that provides not only new viewing methods but also new input methods, and the combination of the two drives innovation. MR glasses are one example.
With the gradual development of digital technology, and guided by the national digital-development strategy, exhibition halls are being applied ever more widely. At present, content guidance in domestic exhibition halls mainly takes the form of descriptive boards, intelligent voice guides and staff explanation, and each of these modes has limitations that lead to a poor user experience. Descriptive boards are mostly flat text and images with limited space, so much important information can only be summarized briefly; a visitor can only glance over them in passing, the viewing order and route may even be unreasonable, and it is hard for the visitor to form a complete picture in the mind or to feel any emotional resonance. An intelligent voice guide delivers sound only and cannot create visual impact or arouse the visitor's interest in the tour. Staff explanation is easily limited by working hours and professional level, so the publicity and education functions of the exhibition hall are greatly weakened and the user experience is poor.
To solve the above problems, the application provides a mixed-reality-based exhibition hall digital person explanation method. Referring to fig. 1, fig. 1 is a flow chart of the method according to an embodiment of the application. The method is applied to a server and comprises the following steps S110 to S130:
S110, receiving a first viewing request sent by user equipment.
Specifically, the user in the embodiments of the present application is a visitor to an exhibition hall. When a visitor arrives at the hall and wants to learn about the exhibits through a professional explanation, the visitor can scan an exhibit's label and/or wear user equipment such as MR glasses, so as to view the exhibit and listen to the explanation content through the user equipment.
At this point the server receives the first viewing request sent by the user through the user equipment. The server manages the exhibition hall and provides background services for the user equipment; it may be a single server, a server cluster composed of a plurality of servers, or a cloud computing service center, and may communicate with the user equipment over a wired or wireless network. The user equipment is the interaction device between the server and the user: a handheld electronic device running software compatible with MR technology. User equipment includes, but is not limited to, Android devices, devices running Apple's mobile operating system (iOS), personal computers (PCs), Web devices, mixed reality (MR) devices, and the like. For example, when a visitor arrives at an exhibition hall and wants to view a particular exhibit in depth, the visitor can scan the two-dimensional code on the exhibit's label with a mobile phone, thereby sending a viewing request to the server.
S120, generating a first digital lecturer based on the first viewing request.
Specifically, after receiving the first viewing request sent by the user equipment, the server generates a first digital lecturer according to it. The first digital lecturer may be generated, for example, by recording an explanation video of a real person and then recognizing the human skeleton and performing 3D modeling through image-scanning technology; the concrete implementation is chosen according to the specific situation and is not repeated here. The first digital lecturer is preloaded with the explanation information of all exhibits stored in the hall and can explain the relevant exhibits professionally and vividly.
S130, displaying the first digital lecturer so that the first digital lecturer explains the exhibit.
Specifically, after generating the first digital lecturer according to the first viewing request, the server displays it so that it explains the exhibit, making it convenient for visitors to learn about the exhibit. The lecturer's actions may be displayed, without limitation, on mixed reality equipment worn by the visitor or on user equipment held by the visitor; the lecturer's explanation voice may be output, without limitation, through an independent loudspeaker preset at each display in the hall, through the mixed reality equipment worn by the visitor, or through the user equipment held by the visitor.
For example, suppose a visitor to the XX exhibition hall develops a strong interest in exhibit A. The visitor can scan the two-dimensional code corresponding to exhibit A with a mobile phone, then watch the digital lecturer's introduction to exhibit A through the MR software on the phone while listening to the explanation voice through the phone's loudspeaker. Compared with the prior art, this reduces the probability of a poor explanation caused by human factors, lets the visitor understand the exhibit more deeply, allows an emotional connection to form between visitor and digital lecturer, and conveys the meaning of the exhibition more vividly.
In one possible implementation, referring to fig. 2, fig. 2 is a schematic flow chart of another exemplary mixed-reality-based exhibition hall digital person explanation method of the present application, which further comprises steps S210 to S230:
S210, acquiring control information of the user, the control information comprising one or both of voice information and action information.
S220, recognizing the control information and generating a first control instruction according to a preset relationship, the preset relationship being the correspondence between control information and control instructions.
S230, sending the first control instruction to the first digital lecturer to control the first digital lecturer to explain the exhibit.
Specifically, the server firstly acquires control information of the visitor, wherein the control information comprises one or two of voice information and action information; the server identifies the control information and generates a first control instruction according to the corresponding relation between the preset control information and the control instruction; and finally, the server sends a first control instruction to the first digital explanation person, so that the purpose that the server controls the first digital explanation person to explain the exhibited article is achieved.
The control information of the visitor and the control instructions of the server are in one-to-one correspondence, and this correspondence is constructed in advance; the voice information and/or the action information enables the server to recognize and generate the corresponding control instruction. When voice information and action information exist simultaneously, the server can recognize the voice information preferentially according to priority, which reduces the probability of recognition errors caused by non-standard visitor actions. The voice information can be a preset standard voice command, or can be fuzzy-matched intelligently by the server; the specific implementation is determined case by case and is not repeated here. The action information can be preset by exhibition hall staff, and after a visitor performs the action near a related exhibit, the digital lecturer explains that exhibit.
For example, when a visitor wants to learn more about exhibit B in a certain area of the exhibition hall, the server collects the voice information of that area through the microphone of the mobile phone. The voice information may be "Hello, digital lecturer X, help me explain exhibit B". After receiving the voice information, the server searches for the keywords "explain" and "exhibit B" in the voice information, and after the search is completed, the server sends a control instruction containing the explanation content of exhibit B to digital lecturer X to control digital lecturer X to explain exhibit B.
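The keyword-search step in the example above can be sketched as follows. This is an illustrative assumption, not the patent's implementation: the exhibit table, the function name, and the instruction tuple format are invented for the example.

```python
# Hypothetical sketch of the server's keyword matching: scan the recognized
# speech for the action keyword "explain" and a known exhibit name, then
# build a control instruction for the digital lecturer.
EXHIBITS = {
    "exhibit b": "explanation content for exhibit B",
    "exhibit d": "explanation content for exhibit D",
}

def build_control_instruction(transcript: str):
    """Return ("explain", exhibit_name, content), or None if no match."""
    text = transcript.lower()
    if "explain" not in text:
        return None                      # no action keyword recognized
    for name, content in EXHIBITS.items():
        if name in text:
            return ("explain", name, content)
    return None                          # action found, but unknown exhibit
```

A real deployment would place speech recognition and the intelligent fuzzy matching mentioned above in front of this exact-substring lookup.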
In one possible implementation, referring to fig. 3, fig. 3 is a schematic flow diagram of another exemplary mixed reality-based exhibition hall digital person explanation method according to the present application. The exhibition hall digital person explanation method based on mixed reality further comprises steps S310 to S330, wherein the steps are as follows:
S310, obtaining the position information of the user.
S320, extracting explanation contents corresponding to the position information according to the position information, wherein the explanation contents comprise explanation of the exhibits and explanation actions of the first digital explanation person.
S330, the first digital lecturer plays the explanation content, so that the first digital lecturer explains the exhibits.
Specifically, the server also acquires the position information of the user and extracts the corresponding explanation content according to the position information, wherein the explanation content comprises the explanation of the exhibits and the explanation actions of the first digital lecturer. Then, the server plays the explanation content through the first digital lecturer, so that the first digital lecturer explains the exhibits. The ways the server obtains the location information of the user include, but are not limited to, locating the user equipment, identifying the exhibit currently being viewed, and having the user equipment upload the user's position.
For example, in a museum-type exhibition hall, when a visitor arrives at the position of exhibit D, the server recognizes the position and then controls the digital lecturer to automatically play the explanation content related to exhibit D. The explanation content played by the digital lecturer may be, for example, "This exhibit is named XX; it dates from year XX, XX years before the current year, and was unearthed at XX in XX month of XX year", accompanied by the explanation actions of the digital lecturer.
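A minimal sketch of the position-triggered playback just described, under the assumption that positions are 2-D floor coordinates and an exhibit's content is selected when the visitor comes within a trigger radius (the radius, table layout, and names are invented for illustration):

```python
import math

# Illustrative exhibit table: floor position plus pre-authored explanation
# content (narration text and the lecturer's associated gesture track).
CONTENT_BY_EXHIBIT = {
    "exhibit D": {"pos": (12.0, 3.5), "content": "narration and gestures for exhibit D"},
}

def content_for_position(user_pos, trigger_radius=1.5):
    """Return (exhibit_name, content) if the user is near an exhibit, else None."""
    for name, info in CONTENT_BY_EXHIBIT.items():
        if math.dist(user_pos, info["pos"]) <= trigger_radius:
            return name, info["content"]
    return None
```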
In one possible implementation, while the digital lecturer explains an exhibit, the user can also ask the digital lecturer questions about the explanation content through the user equipment. For example, in a museum-type exhibition hall, the visitor Zhao Mou, listening through a mobile phone to the digital lecturer's explanation of the exhibit "XX painting", becomes interested in the painting's paper material. The visitor Zhao Mou may then send the voice question "What is the paper material of the XX painting?" to the server through the user equipment. After receiving the voice question, the server identifies it, searches a preset database, generates a control instruction containing the answer to the question, and sends the control instruction to the digital lecturer, so that the digital lecturer explains the answer. This realizes communication and exchange between visitors and digital lecturers, enriches the visitors' knowledge, helps visitors obtain a better exhibition experience, and further exerts the exhibition hall's functions of publicity and public education.
In one possible implementation manner, after the server obtains the location information of the user, the method specifically further includes: judging whether the position information is the same as the target position information, wherein the target position information is the position information of key exhibits in an exhibition hall; if the position information is the same as the target position information, a second control instruction is sent to the first digital explanation person so as to control the first digital explanation person to broadcast preset voice which is used for prompting the user to conduct standard behaviors.
Specifically, after the server obtains the position information of the user, the server also determines whether the position information is the same as the position information of a key exhibit. When the position information of the user is the same as the position information of the key exhibit, in order to ensure the safety of the key exhibit, the server sends a second control instruction to the first digital lecturer to control the first digital lecturer to broadcast the preset voice, thereby warning the visitor to behave properly. For example, the preset voice may be "Exhibit F in front of you is a key cultural relic of the museum; please do not touch it with your hands."
In one possible implementation, the server sends information on a plurality of exhibits to the user equipment; receives the viewing-route selection information sent by the user equipment, and generates a preset first viewing route and a third control instruction according to the viewing-route selection information; sends the preset first viewing route to the user equipment so that the user equipment displays it; and sends the third control instruction to the first digital lecturer to control the first digital lecturer to guide the user through the exhibition according to the preset first viewing route.
Specifically, the server also sends information on a plurality of exhibits to the user equipment, so that the visitor can select a viewing route according to the exhibit information. After the visitor selects a viewing route, the user equipment sends the viewing-route selection information to the server, and the server generates a preset first viewing route and a third control instruction accordingly. The preset first viewing route can be displayed on the user equipment for the user to view. The server sends the third control instruction to the first digital lecturer, so that the first digital lecturer guides the visitor along the preset first viewing route and explains the exhibits on the route in sequence.
In one possible implementation, the server may also recommend routes based on the personal preferences uploaded by the visitor through the user equipment, the personal preferences including preference levels for the exhibits. The server sorts the exhibits in the exhibition hall according to personal preference and, combining the positions of the exhibits, preferentially selects exhibits that are close to the user and that the user favors for explanation, thereby automatically recommending a personalized viewing route suitable for the user and further improving the user's viewing experience.
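The preference-plus-proximity ranking just described could look like the following sketch. The linear scoring formula and its weights are assumptions for illustration; the text above only says that preference levels and exhibit positions are combined.

```python
import math

def recommend_route(user_pos, exhibits, w_pref=1.0, w_dist=0.2):
    """Rank exhibits by preference level minus a distance penalty.

    exhibits: {name: {"pos": (x, y), "pref": preference level, e.g. 0-5}}
    Returns exhibit names in suggested visiting order (highest score first).
    """
    def score(item):
        _, info = item
        return w_pref * info["pref"] - w_dist * math.dist(user_pos, info["pos"])
    return [name for name, _ in sorted(exhibits.items(), key=score, reverse=True)]
```

Raising `w_dist` makes the route favor nearby exhibits; raising `w_pref` favors the visitor's declared favorites.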
In one possible implementation, the server's sending of the third control instruction to the first digital lecturer to control the first digital lecturer to guide the user according to the preset first viewing route specifically further includes: acquiring a real-time movement route of the user; calculating the difference between the real-time movement route and the preset first viewing route; and judging whether the difference exceeds a preset difference range, and if so, sending a fourth control instruction to the first digital lecturer to control the first digital lecturer to correct the real-time movement route of the user.
Specifically, the server also monitors the visitor's movement route in real time. First, the server acquires the visitor's real-time movement route and calculates the difference between the real-time movement route and the preset first viewing route. The server then judges whether the difference exceeds the preset difference range, thereby judging whether the visitor is following the preset first viewing route. When the difference exceeds the preset difference range, the server sends a fourth control instruction to the first digital lecturer to control the first digital lecturer to correct the visitor's real-time movement route. The correction may be the first digital lecturer broadcasting a preset correction voice, or the server sending prompt information to the user equipment, which displays it to remind the visitor of the movement route. This reduces the probability of the visitor taking a wrong route and improves the visitor's viewing experience.
For example, referring to fig. 4, fig. 4 is an exemplary schematic diagram of viewing-route selection according to an embodiment of the present application. As shown in the figure, the visitor's real-time movement route is A, the preset first viewing route is B, and the preset difference range is 0 m to 2 m. When the difference between point a and point b is 4 m, the server determines that the visitor's real-time movement route at point a deviates greatly from the preset first viewing route, generates the correction voice "Your position has deviated greatly; please walk with me", and controls the first digital lecturer to prompt the visitor to correct the route. When the difference between point y and point z is 1 m, the server determines that the deviation between the visitor's real-time movement route at point y and the preset first viewing route is small, and does not correct the visitor's route.
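The deviation check in this example can be sketched as below. Here the "difference" is taken to be the distance from the visitor's current position to the nearest vertex of the preset route — a simplifying assumption, since the text leaves the exact metric open.

```python
import math

def deviation(user_pos, route_points):
    """Distance from the visitor's position to the nearest route vertex."""
    return min(math.dist(user_pos, p) for p in route_points)

def needs_correction(user_pos, route_points, max_allowed=2.0):
    """True when the deviation exceeds the preset range (0 m to max_allowed)."""
    return deviation(user_pos, route_points) > max_allowed
```

With the figure's numbers, a 4 m deviation would trigger the fourth control instruction, while a 1 m deviation would not.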
In one possible implementation, the server receives a modification request sent by the user equipment; generates a first feature information group according to the user's modification request, wherein the first feature information group is the parameter information group of the digital lecturer obtained after identifying the user's modification request; and searches for the first feature information group in a preset database. If a second feature information group corresponding to the first feature information group exists in the preset database, the server displays the second digital lecturer corresponding to the second feature information group, so that the second digital lecturer explains the exhibit. The correspondence between feature information groups and digital lecturers is pre-stored in the preset database, and the first feature information group and the second feature information group are the same feature information group.
In one possible implementation, if a second feature information group corresponding to the first feature information group does not exist in the preset database, a third digital lecturer is constructed according to the first feature information group, so that the third digital lecturer explains the exhibit.
Specifically, the server may further receive a modification request sent by the visitor through the user equipment, where the modification request is used to modify the personalized image of the digital lecturer, and generate a first feature information group according to the user's modification request, where the first feature information group is the parameter information group of a digital lecturer meeting the visitor's request. Then, the server searches the preset database to judge whether a second feature information group corresponding to the first feature information group exists there. When the second feature information group exists in the preset database, the second digital lecturer corresponding to it is displayed, so that a second digital lecturer meeting the visitor's request explains for the visitor. When the second feature information group does not exist in the preset database, the server constructs a new digital lecturer according to the first feature information group; the construction may use image recognition technology or human skeleton modeling technology, and the specific implementation is set according to the specific situation and is not repeated here.
The statement that the first feature information group and the second feature information group are the same information group can be understood in the embodiments of the present application as follows: the first feature information group and the second feature information group are similar feature information groups, i.e., the similarity between the first feature information of the first feature information group and the second feature information of the second feature information group is calculated using Euclidean distance or a cosine similarity algorithm. The first feature information group contains one or more pieces of feature information. The correspondence between feature information groups and digital lecturers pre-stored in the preset database means that one feature information group corresponds to one digital lecturer. The feature information includes, but is not limited to, physiological features, physical features, wearing features, and the like.
For example, a visitor wants a digital lecturer with the image "height 180, weight 130, male, gray suit", so the first feature information group is "height 180, weight 130, male, gray suit". When the second feature information group "height 180, weight 130, male, dark gray suit" exists in the preset database, the digital lecturer H corresponding to that second feature information group is displayed, so that digital lecturer H explains the exhibits for the visitor.
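The Euclidean/cosine similarity matching mentioned above might be sketched as follows, under the assumption that each feature information group is encoded as a numeric vector; the encoding and the 0.95 threshold are invented for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def find_matching_lecturer(query_vec, database, threshold=0.95):
    """database: {lecturer_id: feature_vector}. Return the best match at or
    above the threshold, or None (in which case a new lecturer is built)."""
    best_id, best_sim = None, threshold
    for lecturer_id, vec in database.items():
        sim = cosine_similarity(query_vec, vec)
        if sim >= best_sim:
            best_id, best_sim = lecturer_id, sim
    return best_id
```

A `None` result corresponds to the branch where no second feature information group exists and a third digital lecturer must be constructed.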
The application also provides a mixed reality-based exhibition hall digital person explanation system. Referring to fig. 5, fig. 5 is a module schematic diagram of the mixed reality-based exhibition hall digital person explanation system in an embodiment of the application. The mixed reality-based exhibition hall digital person explanation system is a server, and the server comprises a receiving module 51, a processing module 52, and a sending module 53, wherein the receiving module 51 is used for receiving a first exhibition-viewing request sent by the user equipment; the processing module 52 is used for generating a first digital lecturer based on the first exhibition-viewing request; and the processing module 52 is further configured to display the first digital lecturer, so that the first digital lecturer explains the exhibit.
In one possible implementation, the processing module 52 obtains control information for the user, the control information including one or both of voice information and motion information; the processing module 52 identifies the control information, generates a first control instruction according to a preset relationship, and the preset relationship is a corresponding relationship between the control information and the control instruction; the transmitting module 53 transmits a first control instruction to the first digital lecturer to control the first digital lecturer to explain the exhibit.
In one possible implementation, processing module 52 obtains location information of the user; the processing module 52 extracts the explanation content corresponding to the position information according to the position information, wherein the explanation content comprises the explanation of the exhibits and the explanation actions of the first digital explanation person; the processing module 52 plays the explanation content through the first digital lecturer to enable the first digital lecturer to explain the exhibit.
In one possible implementation, after the processing module 52 obtains the position information of the user, the method specifically further includes: the processing module 52 judges whether the position information is the same as the target position information, the target position information being the position information of a key exhibit of the exhibition hall; if the position information is the same as the target position information, the sending module 53 sends a second control instruction to the first digital lecturer to control the first digital lecturer to broadcast a preset voice, where the preset voice is used to prompt the user to behave properly.
In one possible implementation, the sending module 53 sends information on a plurality of exhibits to the user equipment; the processing module 52 receives the viewing-route selection information sent by the user equipment and generates a preset first viewing route and a third control instruction according to it; the sending module 53 sends the preset first viewing route to the user equipment, so that the user equipment displays the preset first viewing route; and the sending module 53 sends the third control instruction to the first digital lecturer to control the first digital lecturer to guide the user through the exhibition according to the preset first viewing route.
In a possible implementation manner, the receiving module 51 receives a modification request sent by the user equipment; the processing module 52 generates a first characteristic information group according to the modification request of the user, wherein the first characteristic information group is a parameter information group of the digital interpreter obtained after the modification request of the user is identified; the processing module 52 searches the first feature information set in the preset database, if a second feature information set corresponding to the first feature information set exists in the preset database, the processing module 52 displays a second digital interpreter corresponding to the second feature information set, so that the second digital interpreter interprets the exhibited article, the corresponding relation between the feature information set and the digital interpreter is pre-stored in the preset database, and the first feature information set and the second feature information set are the same feature information set.
In one possible implementation, if the second feature information set corresponding to the first feature information set does not exist in the preset database, the processing module 52 constructs a third digital interpreter according to the first feature information set, so that the third digital interpreter interprets the exhibit.
In one possible implementation, the sending module 53's sending of the third control instruction to the first digital lecturer to control the first digital lecturer to guide the user according to the preset first viewing route specifically further includes: the processing module 52 obtains a real-time movement route of the user; the processing module 52 calculates the difference between the real-time movement route and the preset first viewing route; and the processing module 52 judges whether the difference exceeds a preset difference range, and if it does, the sending module 53 sends a fourth control instruction to the first digital lecturer to control the first digital lecturer to correct the user's real-time movement route.
The application further provides an electronic device, and referring to fig. 6, fig. 6 is a schematic structural diagram of the electronic device. The electronic device may include: at least one processor 61, at least one network interface 64, a user interface 63, a memory 65, at least one communication bus 62.
Wherein the communication bus 62 is used to enable connected communication between these components.
The user interface 63 may include a display screen (Display) and a camera (Camera); optionally, the user interface 63 may further include a standard wired interface and a standard wireless interface.
The network interface 64 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
The processor 61 may comprise one or more processing cores. The processor 61 connects various parts within the entire server using various interfaces and lines, and performs various functions of the server and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 65 and calling data stored in the memory 65. Optionally, the processor 61 may be implemented in hardware in at least one of digital signal processing (Digital Signal Processing, DSP), field-programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA). The processor 61 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and so on; the GPU is used for rendering and drawing the content to be displayed by the display screen; the modem is used to handle wireless communications. It will be appreciated that the modem may also not be integrated into the processor 61 and may instead be implemented by a separate chip.
The memory 65 may include random access memory (Random Access Memory, RAM) or read-only memory (Read-Only Memory, ROM). Optionally, the memory 65 includes a non-transitory computer-readable storage medium. The memory 65 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 65 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the above-described method embodiments, etc.; the data storage area may store the data involved in the above method embodiments. Optionally, the memory 65 may also be at least one storage device located remotely from the aforementioned processor 61. As shown in fig. 6, the memory 65, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and an application program of the mixed reality-based exhibition hall digital person explanation method.
In the electronic device shown in fig. 6, the user interface 63 is mainly used to provide an input interface for the user and to acquire data input by the user, while the processor 61 may be configured to invoke the application program of the mixed reality-based exhibition hall digital person explanation method stored in the memory 65, which, when executed by one or more processors, causes the electronic device to perform the method of one or more of the embodiments described above.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but those skilled in the art should understand that the present application is not limited by the order of actions described, as some steps may, in accordance with the present application, be performed in other orders or simultaneously. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the actions and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided herein, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the units is merely a division by logical function, and there may be other divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection shown or discussed between the components may be an indirect coupling or communication connection through some service interfaces, devices, or units, and may be electrical or in other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely exemplary embodiments of the present disclosure and is not intended to limit the scope of the present disclosure; equivalent changes and modifications made in accordance with the teachings of this disclosure fall within its scope. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following its general principles, including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the claims.
Claims (10)
1. The exhibition hall digital person explanation method based on mixed reality is characterized by being applied to a server, and comprises the following steps:
receiving a first exhibition looking request sent by user equipment;
generating a first digital lecturer based on the first exhibition request;
and displaying the first digital explanation person so that the first digital explanation person explains the exhibited article.
2. The mixed reality-based exhibition hall digital person explanation method of claim 1, further comprising:
acquiring control information of a user, wherein the control information comprises one or two of voice information and action information;
identifying the control information, and generating a first control instruction according to a preset relationship, wherein the preset relationship is a corresponding relationship between the control information and the control instruction;
and sending the first control instruction to the first digital explanation person so as to control the first digital explanation person to explain the exhibited article.
3. The mixed reality-based exhibition hall digital person explanation method of claim 1, further comprising:
acquiring position information of a user;
extracting explanation content corresponding to the position information according to the position information, wherein the explanation content comprises explanation of the exhibits and explanation actions of the first digital explanation person;
and playing the explanation content by the first digital explanation person so that the first digital explanation person explains the exhibits.
4. The method of claim 3, wherein, after the acquiring of the position information of the user, the method further comprises:
judging whether the position information is the same as target position information, wherein the target position information is the position information of key exhibits of the exhibition hall;
and if the position information is the same as the target position information, sending a second control instruction to the first digital explanation person so as to control the first digital explanation person to broadcast a preset voice, wherein the preset voice is used for prompting the user to standardize the behavior.
5. The mixed reality-based exhibition hall digital person explanation method of claim 1, further comprising:
transmitting information on a plurality of exhibits to user equipment;
receiving viewing-route selection information sent by the user equipment, and generating a preset first viewing route and a third control instruction according to the viewing-route selection information;
transmitting the preset first viewing route to the user equipment so that the user equipment displays the preset first viewing route;
and sending the third control instruction to the first digital lecturer to control the first digital lecturer to guide the user through the exhibition according to the preset first viewing route.
6. The mixed reality-based exhibition hall digital person explanation method of claim 1, further comprising:
receiving a modification request sent by user equipment;
generating a first characteristic information group according to the user modification request, wherein the first characteristic information group is a parameter information group of a digital interpreter obtained after the user modification request is identified;
Searching the first characteristic information group in a preset database, if a second characteristic information group corresponding to the first characteristic information group exists in the preset database, displaying a second digital explanation person corresponding to the second characteristic information group so that the second digital explanation person explains the exhibited article, wherein the corresponding relation between the characteristic information group and the digital explanation person is prestored in the preset database, and the first characteristic information group and the second characteristic information group are the same characteristic information group.
7. The exhibition hall digital person explanation method of claim 6, further comprising:
if no second characteristic information group corresponding to the first characteristic information group exists in the preset database, constructing a third digital lecturer according to the first characteristic information group so that the third digital lecturer explains the exhibit.
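The lookup-or-construct logic of claims 6 and 7 can be sketched as follows. The preset database is modelled here as a dictionary from feature tuples to digital lecturers, and every name (`get_lecturer`, the dict representation of a lecturer) is a hypothetical stand-in; the patent does not describe the database or the construction step in implementation terms.

```python
# Sketch of claims 6-7: look up an existing digital lecturer for the
# requested feature group, or construct a new one if none is prestored.

def get_lecturer(feature_group, database):
    """Return an existing digital lecturer, or build and cache a new one."""
    if feature_group in database:            # claim 6: matching group found
        return database[feature_group]
    lecturer = {"features": feature_group}   # claim 7: construct third lecturer
    database[feature_group] = lecturer       # store the new correspondence
    return lecturer
```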
8. The method of claim 5, wherein after the sending of the third control instruction to the first digital lecturer to control the first digital lecturer to guide the user along the preset first exhibition-viewing route, the method further comprises:
acquiring a real-time movement route of the user;
calculating a difference value between the real-time movement route and the preset first exhibition-viewing route;
and judging whether the difference value exceeds a preset difference range; if the difference value exceeds the preset difference range, sending a fourth control instruction to the first digital lecturer to control the first digital lecturer to correct the real-time movement route of the user.
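Claim 8 leaves the "difference value" between the two routes undefined. One plausible reading, sketched below, takes it as the largest distance from any sampled user position to the nearest waypoint of the preset route; both that metric and the threshold value are assumptions for illustration only.

```python
# Hypothetical deviation check for claim 8: compare the user's real-time
# positions against the preset route's waypoints.

import math

def route_deviation(actual_positions, preset_route):
    """Max distance from each actual position to its nearest preset waypoint."""
    return max(
        min(math.dist(p, q) for q in preset_route)
        for p in actual_positions
    )

def needs_correction(actual_positions, preset_route, threshold=2.0):
    """True if the fourth control instruction (route correction) should fire."""
    return route_deviation(actual_positions, preset_route) > threshold
```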
9. An exhibition hall digital person explanation system based on mixed reality, characterized in that the system is a server comprising a receiving module (51) and a processing module (52), wherein
the receiving module (51) is configured to receive a first exhibition request sent by a user equipment;
the processing module (52) is configured to generate a first digital lecturer based on the first exhibition request;
and the processing module (52) is further configured to display the first digital lecturer so that the first digital lecturer explains the exhibit.
10. An electronic device comprising a processor (61), a memory (65), a user interface (63) and a network interface (64), the memory (65) being adapted to store instructions, the user interface (63) and the network interface (64) being adapted to communicate with other devices, and the processor (61) being adapted to execute the instructions stored in the memory (65) to cause the electronic device to perform the method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310204280.7A CN116430991A (en) | 2023-03-06 | 2023-03-06 | Exhibition hall digital person explanation method and system based on mixed reality and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116430991A true CN116430991A (en) | 2023-07-14 |
Family
ID=87084431
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310204280.7A Pending CN116430991A (en) | 2023-03-06 | 2023-03-06 | Exhibition hall digital person explanation method and system based on mixed reality and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116430991A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117033800A (en) * | 2023-10-08 | 2023-11-10 | 法琛堂(昆明)医疗科技有限公司 | Intelligent interaction method and system for visual cloud exhibition system |
CN117591748A (en) * | 2024-01-18 | 2024-02-23 | 北京笔中文化科技产业集团有限公司 | Planning method and device for exhibition route and electronic equipment |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011020223A (en) * | 2009-07-16 | 2011-02-03 | Otsuka Bijutsu Zaidan | Exhibit explanation robot and exhibit explanation method using the robot |
CN105810132A (en) * | 2016-03-10 | 2016-07-27 | 广州欧科信息技术股份有限公司 | Intelligent tour guide system of museum |
CN109471440A (en) * | 2018-12-10 | 2019-03-15 | 北京猎户星空科技有限公司 | Robot control method, device, smart machine and storage medium |
CN111443853A (en) * | 2020-03-25 | 2020-07-24 | 北京百度网讯科技有限公司 | Digital human control method and device |
CN112070906A (en) * | 2020-08-31 | 2020-12-11 | 北京市商汤科技开发有限公司 | Augmented reality system and augmented reality data generation method and device |
CN113849069A (en) * | 2021-10-18 | 2021-12-28 | 深圳追一科技有限公司 | Image replacing method and device, storage medium and electronic equipment |
CN113901190A (en) * | 2021-10-18 | 2022-01-07 | 深圳追一科技有限公司 | Man-machine interaction method and device based on digital human, electronic equipment and storage medium |
CN113992929A (en) * | 2021-10-26 | 2022-01-28 | 招商银行股份有限公司 | Virtual digital human interaction method, system, equipment and computer program product |
CN115544234A (en) * | 2022-10-20 | 2022-12-30 | 深圳康佳电子科技有限公司 | User interaction method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113395533B (en) | Virtual gift special effect display method and device, computer equipment and storage medium | |
US20190196576A1 (en) | Virtual reality device and a virtual reality server | |
KR102053128B1 (en) | Live streaming image generating method and apparatus, live streaming service providing method and apparatus, live streaming system | |
US20210248803A1 (en) | Avatar display system in virtual space, avatar display method in virtual space, and computer program | |
CN112199002B (en) | Interaction method and device based on virtual role, storage medium and computer equipment | |
CN116430991A (en) | Exhibition hall digital person explanation method and system based on mixed reality and electronic equipment | |
KR101981774B1 (en) | Method and device for providing user interface in the virtual reality space and recordimg medium thereof | |
CN113411656B (en) | Information processing method, information processing device, computer equipment and storage medium | |
CN110908504B (en) | Augmented reality museum collaborative interaction method and system | |
CN110570698A (en) | Online teaching control method and device, storage medium and terminal | |
JPWO2008066093A1 (en) | Position-dependent information expression system, position-dependent information expression control device, and position-dependent information expression method | |
KR20030039019A (en) | Medium storing a Computer Program with a Function of Lip-sync and Emotional Expression on 3D Scanned Real Facial Image during Realtime Text to Speech Conversion, and Online Game, Email, Chatting, Broadcasting and Foreign Language Learning Method using the Same | |
CN109427219B (en) | Disaster prevention learning method and device based on augmented reality education scene conversion model | |
KR20130089921A (en) | Operating method and content providing system | |
CN111242704B (en) | Method and electronic equipment for superposing live character images in real scene | |
KR20150087763A (en) | system and method for providing collaborative contents service based on augmented reality | |
US20130229342A1 (en) | Information providing system, information providing method, information processing apparatus, method of controlling the same, and control program | |
CN105847316A (en) | Information sharing method and system, client and server | |
KR20190094874A (en) | Digital signage system for providing mixed reality content comprising three-dimension object and marker and method thereof | |
WO2022131148A1 (en) | Information processing device, information processing method, and information processing program | |
WO2023026546A1 (en) | Information processing device | |
CN112752159A (en) | Interaction method and related device | |
JP7050884B6 (en) | Information processing system, information processing method, information processing program | |
CN116437155A (en) | Live broadcast interaction method and device, computer equipment and storage medium | |
CN115086693A (en) | Virtual object interaction method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |