CN113961133A - Display control method and device for electronic equipment, electronic equipment and storage medium - Google Patents
Display control method and device for electronic equipment, electronic equipment and storage medium
- Publication number
- CN113961133A (application number CN202111257446.9A)
- Authority
- CN
- China
- Prior art keywords
- target
- information
- visitor
- target visitor
- display screen
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Geometry (AREA)
- Computer Vision & Pattern Recognition (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The disclosure relates to a display control method and apparatus for an electronic device, an electronic device, and a storage medium. The electronic device comprises a camera and a first display screen, and is arranged at a first target location. The method comprises the following steps: performing face recognition on a first image acquired by the camera to obtain a face recognition result corresponding to the first image; in response to the face recognition result indicating that the face in the first image belongs to a target visitor of the first target location, obtaining guidance information for the target visitor and obtaining decoration information corresponding to the target visitor; and controlling the first display screen to display the guidance information and the facial image of the target visitor to which the decoration information is added.
Description
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a display control method and apparatus for an electronic device, an electronic device, and a storage medium.
Background
In a scenario where a visitor visits a target location, controlling an electronic device arranged at the target location to display information for the visitor can improve the visitor's experience of visiting the target location. Improving the convenience of display control of such electronic devices is therefore of significant importance.
Disclosure of Invention
The present disclosure provides a display control technical scheme of an electronic device.
According to an aspect of the present disclosure, there is provided a display control method of an electronic device, the electronic device including a camera and a first display screen, and being provided at a first target location, the method including:
performing face recognition on a first image acquired by the camera to obtain a face recognition result corresponding to the first image;
in response to the face recognition result indicating that the face in the first image belongs to a target visitor of the first target place, obtaining guide information for the target visitor and obtaining decoration information corresponding to the target visitor;
and controlling the first display screen to display the guide information and the face image of the target visitor added with the decoration information.
By performing face recognition on the first image captured by the camera to obtain the face recognition result corresponding to the first image, obtaining guidance information for the target visitor and decoration information corresponding to the target visitor in response to the face recognition result indicating that the face in the first image belongs to a target visitor of the first target location, and controlling the first display screen to display the guidance information together with the facial image of the target visitor to which the decoration information is added, the electronic device can be controlled to automatically and accurately display guidance information for the target visitor based on the face recognition result of the image captured by the electronic device. This improves the convenience of display control of the electronic device and the utilization efficiency of the display screen resources arranged at the first target location. While visiting the first target location, the target visitor can obtain guidance information through face information alone; and while providing that guidance information, the electronic device also displays the facial image of the target visitor with the corresponding decoration information added. During visitor guidance, the target visitor can therefore experience information related to the first target location (such as enterprise culture and enterprise technology), which improves interactivity and immersion and further improves the visiting experience of the target visitor.
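For illustration only, the following Python sketch outlines this overall flow. The helper names (`recognize_face`, `get_guidance_info`, `get_decoration_info`, `add_decoration`) and the camera/screen interfaces are assumptions introduced for the example; they are not defined by the disclosure.

```python
# Minimal sketch of the disclosed flow; every helper used here is a hypothetical placeholder.
def display_control(camera, screen, visitor_db):
    first_image = camera.capture()                # first image acquired by the camera (assumed API)
    result = recognize_face(first_image)          # face recognition result for the first image
    if result is None or result.person_id not in visitor_db:
        return                                    # face does not belong to a target visitor
    visitor = visitor_db[result.person_id]        # target visitor of the first target location
    guidance = get_guidance_info(visitor)         # e.g. welcome words, travel prompt, time prompt
    decoration = get_decoration_info(visitor)     # e.g. a headwear sticker or face sticker
    decorated_face = add_decoration(result.face_crop, decoration)
    screen.show(guidance, decorated_face)         # display guidance + decorated facial image (assumed API)
```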
In a possible implementation manner, the controlling the first display screen to display the guidance information and the facial image of the target visitor to which the decoration information is added includes:
in response to detecting the intention of the target visitor to view the first display screen, controlling the first display screen to display the guide information and the face image of the target visitor after the decoration information is added.
In this implementation, by controlling the first display screen to display the guidance information and the face image of the target visitor after adding the decoration information in response to detecting the intention of the target visitor to view the first display screen, unnecessary information display can be reduced and utilization efficiency of display screen resources provided at the first target location can be further improved.
In one possible implementation, the method further includes:
determining that an intent of the target guest to view the first display screen is detected in response to at least one of:
the target visitor gazes at the first display screen;
the target visitor touches the first display screen;
the proportion of the face area of the target visitor in the first image is greater than or equal to a first preset threshold;
the time length of the target visitor staying in front of the first display screen is greater than or equal to a second preset threshold value;
the distance between the target visitor and the first display screen is smaller than or equal to a third preset threshold.
In this implementation, the intention of the target visitor to view the first display screen is determined to be detected in response to any of the above conditions: the target visitor gazing at the first display screen; the target visitor touching the first display screen; the proportion of the face area of the target visitor in the first image being greater than or equal to the first preset threshold; the length of time the target visitor stays in front of the first display screen being greater than or equal to the second preset threshold; or the distance between the target visitor and the first display screen being less than or equal to the third preset threshold. The first display screen is then controlled to display the guidance information and the facial image of the target visitor to which the decoration information is added only when such an intention is detected, which further reduces unnecessary information display and improves the utilization efficiency of the display screen resources at the first target location.
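A minimal sketch of such an intent check, assuming the per-condition measurements are already available and using illustrative threshold values (none of which are specified by the disclosure):

```python
def intent_to_view_detected(gazing_at_screen: bool,
                            touched_screen: bool,
                            face_area_ratio: float,
                            stay_duration_s: float,
                            distance_m: float,
                            ratio_threshold: float = 0.15,   # first preset threshold (illustrative)
                            stay_threshold_s: float = 2.0,   # second preset threshold (illustrative)
                            dist_threshold_m: float = 1.5    # third preset threshold (illustrative)
                            ) -> bool:
    """Return True if at least one of the listed conditions is satisfied."""
    return (gazing_at_screen
            or touched_screen
            or face_area_ratio >= ratio_threshold
            or stay_duration_s >= stay_threshold_s
            or distance_m <= dist_threshold_m)
```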
In one possible implementation, after the controlling the first display screen to display the guidance information and the facial image of the target visitor after adding the decoration information in response to detecting the intention of the target visitor to view the first display screen, the method further includes:
in response to detecting that the target visitor has left the first display screen, controlling the first display screen to stop displaying the guidance information and the facial image of the target visitor to which the decoration information is added.
In this implementation, by controlling the first display screen to stop displaying the guidance information and the facial image of the target visitor to which the decoration information is added in response to detecting that the target visitor has left the first display screen, staff at the first target location no longer need to manually confirm that the target visitor has left and manually replace the information displayed on the first display screen. This further improves the convenience of display control of the electronic device and the utilization efficiency of the display screen resources at the first target location.
In one possible implementation form of the method,
the guiding information comprises traveling prompting information aiming at the target visitor, wherein the traveling prompting information is used for guiding the target visitor to travel to a second target place, and the second target place is a place reserved for visiting by the target visitor in the first target place;
the obtaining of the guidance information for the target guest includes: and generating traveling prompt information aiming at the target visitor according to the position information of the second target place.
In this implementation, the travel prompt information for the target visitor is generated according to the location information of the second target location, so that the target visitor can be guided to smoothly reach the second target location that the target visitor has reserved to visit within the first target location. This saves the target visitor's time and improves the experience of visiting the first target location.
In one possible implementation, the generating a travel prompt message for the target visitor according to the location information of the second target location includes:
and generating travel prompt information from the first display screen to the second target place according to the position information of the second target place and the position information of the first display screen.
In this implementation, the travel prompt information from the first display screen to the second target location is generated according to the location information of the second target location that the target visitor has reserved to visit within the first target location and the location information of the first display screen, so that the target visitor can be guided to reach the second target location even more smoothly. This further saves the target visitor's time and improves the experience of visiting the first target location.
In one possible implementation, the guidance information includes a time alert for the target guest; wherein the time prompt message is used for prompting the reserved time of the target visitor, and/or the time prompt message is used for prompting the time interval between the current time and the reserved time.
In this implementation, the first display screen is controlled to display the reserved time of the target visitor and/or the time interval between the current time and the reserved time, which helps the target visitor arrange his or her time reasonably and further improves the experience of visiting the first target location.
In one possible implementation form of the method,
the guidance information includes welcome information for the target visitor;
the obtaining of the guidance information for the target guest includes: and according to the personal information and/or the access reservation information of the target visitor, welcome information aiming at the target visitor is generated.
In this implementation, the welcome information for the target visitor is generated according to the personal information and/or the visit reservation information of the target visitor, so that personalized, customized welcome information can be displayed for the target visitor. This improves the experience of the target visitor visiting the first target location, makes the visit go more smoothly, and saves the target visitor's time.
In one possible implementation, the welcome information includes an avatar for the target guest and a welcome word for the target guest.
In this implementation, the first display screen is controlled to display the avatar for the target visitor and the welcome words for the target visitor, which presents the effect of the avatar speaking the welcome words. Personifying the machine in this way makes the human-computer interaction better match people's interaction habits and feel more natural, so that the target visitor can feel the warmth of the interaction.
In a possible implementation manner, the controlling the first display screen to display the guidance information and the facial image of the target visitor to which the decoration information is added includes:
and controlling the first display screen to display the guide information in a first area, and displaying the face image of the target visitor added with the decoration information in a second area, wherein the avatar displayed in the first area faces the second area.
In this implementation, the first display screen is controlled to display the guidance information in a first area and to display the facial image of the target visitor to which the decoration information is added in a second area, with the avatar displayed in the first area facing the second area. This makes the human-computer interaction process more natural and the experience more engaging.
In a possible implementation manner, the obtaining decoration information corresponding to the target visitor includes:
and obtaining decoration information corresponding to the target visitor according to the personal information and/or the access reservation information of the target visitor.
In this implementation, the decoration information corresponding to the target visitor is obtained according to the personal information and/or the visit reservation information of the target visitor, so that the displayed information can reflect the personal information and/or the visit reservation information of the target visitor, which further improves the visiting experience of the target visitor.
In one possible implementation, the decoration information includes: a headwear sticker and/or a face sticker corresponding to the target visitor.
In this implementation, the first display screen is controlled to display the facial image of the target visitor with the headwear sticker and/or face sticker added, which makes the target visitor's visit to the first target location more engaging.
In a possible implementation manner, the obtaining decoration information corresponding to the target visitor includes:
in response to the target visitor having viewed a second display screen disposed at the first target location, obtaining decorative information different from the decorative information displayed by the second display screen for the target visitor, wherein the second display screen is disposed at a different location than the first display screen at the first target location.
In this implementation, in response to the target visitor having already viewed a second display screen provided at the first target location, decoration information different from the decoration information displayed for the target visitor by the second display screen is obtained. As a result, each time the target visitor views a display screen provided at the first target location, the facial image of the target visitor is displayed with different decoration information added, which makes the visit more engaging for the target visitor.
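The following sketch shows one way such per-screen decoration selection might be tracked. The history store and decoration identifiers are assumptions made for the example; the disclosure does not prescribe a data structure.

```python
def pick_decoration(visitor_id: str,
                    shown_history: dict,
                    available_decorations: list) -> str:
    """Pick a decoration this visitor has not yet been shown on another screen, if possible."""
    already_shown = shown_history.setdefault(visitor_id, set())
    for decoration in available_decorations:
        if decoration not in already_shown:
            already_shown.add(decoration)
            return decoration
    # every decoration has been shown before; reuse the first one as a fallback
    return available_decorations[0]

# e.g. pick_decoration("visitor-001", {}, ["headwear_sticker", "face_sticker", "bubble"])
```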
According to an aspect of the present disclosure, there is provided a display control apparatus of an electronic device, the electronic device including a camera and a first display screen, and the electronic device being provided at a first target location, the apparatus including:
the face recognition module is used for carrying out face recognition on a first image acquired by the camera to obtain a face recognition result corresponding to the first image;
an obtaining module, configured to, in response to the face recognition result indicating that the face in the first image belongs to a target visitor in the first target location, obtain guidance information for the target visitor, and obtain decoration information corresponding to the target visitor;
and the display control module is used for controlling the first display screen to display the guide information and the face image of the target visitor added with the decoration information.
In one possible implementation, the display control module is configured to:
in response to detecting the intention of the target visitor to view the first display screen, controlling the first display screen to display the guide information and the face image of the target visitor after the decoration information is added.
In one possible implementation, the apparatus further includes:
a determination module to determine that an intent of the target guest to view the first display screen is detected in response to at least one of:
the target visitor gazes at the first display screen;
the target visitor touches the first display screen;
the proportion of the face area of the target visitor in the first image is greater than or equal to a first preset threshold;
the time length of the target visitor staying in front of the first display screen is greater than or equal to a second preset threshold value;
the distance between the target visitor and the first display screen is smaller than or equal to a third preset threshold.
In one possible implementation, the display control module is further configured to:
in response to detecting that the target visitor has left the first display screen, control the first display screen to stop displaying the guidance information and the facial image of the target visitor to which the decoration information is added.
In one possible implementation form of the method,
the guiding information comprises traveling prompting information aiming at the target visitor, wherein the traveling prompting information is used for guiding the target visitor to travel to a second target place, and the second target place is a place reserved for visiting by the target visitor in the first target place;
the obtaining module is configured to: and generating traveling prompt information aiming at the target visitor according to the position information of the second target place.
In one possible implementation, the obtaining module is configured to:
and generating travel prompt information from the first display screen to the second target place according to the position information of the second target place and the position information of the first display screen.
In one possible implementation, the guidance information includes a time alert for the target guest; wherein the time prompt message is used for prompting the reserved time of the target visitor, and/or the time prompt message is used for prompting the time interval between the current time and the reserved time.
In one possible implementation form of the method,
the guidance information includes welcome information for the target visitor;
the obtaining module is configured to: and according to the personal information and/or the access reservation information of the target visitor, welcome information aiming at the target visitor is generated.
In one possible implementation, the welcome information includes an avatar for the target guest and a welcome word for the target guest.
In one possible implementation, the display control module is configured to:
and controlling the first display screen to display the guide information in a first area, and displaying the face image of the target visitor added with the decoration information in a second area, wherein the avatar displayed in the first area faces the second area.
In one possible implementation, the obtaining module is configured to:
and obtaining decoration information corresponding to the target visitor according to the personal information and/or the access reservation information of the target visitor.
In one possible implementation, the decoration information includes: a headwear sticker and/or a face sticker corresponding to the target visitor.
In one possible implementation, the obtaining module is configured to:
in response to the target visitor having viewed a second display screen disposed at the first target location, obtaining decorative information different from the decorative information displayed by the second display screen for the target visitor, wherein the second display screen is disposed at a different location than the first display screen at the first target location.
According to an aspect of the present disclosure, there is provided an electronic device including: one or more processors; a memory for storing executable instructions; wherein the one or more processors are configured to invoke the memory-stored executable instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiment of the present disclosure, face recognition is performed on the first image captured by the camera to obtain the face recognition result corresponding to the first image; in response to the face recognition result indicating that the face in the first image belongs to a target visitor of the first target location, guidance information for the target visitor and decoration information corresponding to the target visitor are obtained; and the first display screen is controlled to display the guidance information together with the facial image of the target visitor to which the decoration information is added. In this way, the electronic device can be controlled to automatically and accurately display guidance information for the target visitor based on the face recognition result of the image captured by the electronic device, which improves the convenience of display control of the electronic device and the utilization efficiency of the display screen resources arranged at the first target location. While visiting the first target location, the target visitor can obtain guidance information through face information alone; and while providing that guidance information, the electronic device also displays the facial image of the target visitor with the corresponding decoration information added. During visitor guidance, the target visitor can therefore experience information related to the first target location (such as enterprise culture and enterprise technology), which improves interactivity and immersion and further improves the visiting experience of the target visitor.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of a display control method of an electronic device provided by an embodiment of the present disclosure.
Fig. 2 shows a schematic diagram of an electronic device 200 provided by an embodiment of the present disclosure.
Fig. 3 is a schematic diagram illustrating that a first display screen displays guide information and a face image of a target visitor to which the decoration information is added in the display control method of the electronic device according to the embodiment of the disclosure.
Fig. 4 is another schematic diagram illustrating that the first display screen displays the guidance information and the face image of the target visitor to which the decoration information is added in the display control method of the electronic device according to the embodiment of the present disclosure.
Fig. 5 shows a block diagram of a display control apparatus of an electronic device provided in an embodiment of the present disclosure.
Fig. 6 illustrates a block diagram of an electronic device 800 provided by an embodiment of the disclosure.
Fig. 7 shows a block diagram of an electronic device 1900 provided by an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
In the related art, when a visitor visits a target location, staff at the target location need to manually add and display information for the visitor (e.g., the visitor's name, the subject of the meeting being attended, etc.). Moreover, the staff at the target location need to manually time when the display screen of the electronic device shows the information for each visitor; that is, the relevant information needs to be pushed to the display screen just before the visitor arrives.
This approach requires the staff at the target location to match the visiting times of different visitors with the information displayed for each of them (e.g., the visitors' names, the subjects of the meetings being attended, etc.), which is time-consuming, labor-intensive, and prone to error. In addition, after a visitor finishes the visit, the staff at the target location need to manually replace the information on the display screen. However, it is often difficult for the staff to confirm whether the visitor has left, which easily leads to a waste of resources.
The embodiments of the present disclosure provide a display control method and apparatus for an electronic device, an electronic device, and a storage medium. The method performs face recognition on a first image captured by a camera to obtain a face recognition result corresponding to the first image; in response to the face recognition result indicating that the face in the first image belongs to a target visitor of a first target location, obtains guidance information for the target visitor and decoration information corresponding to the target visitor; and controls a first display screen to display the guidance information together with the facial image of the target visitor to which the decoration information is added. In this way, the electronic device can be controlled to automatically and accurately display guidance information for the target visitor based on the face recognition result of the image captured by the electronic device, which improves the convenience of display control of the electronic device and the utilization efficiency of the display screen resources arranged at the first target location. While visiting the first target location, the target visitor can obtain guidance information through face information alone; and while providing that guidance information, the electronic device also displays the facial image of the target visitor with the corresponding decoration information added. During visitor guidance, the target visitor can therefore experience information related to the first target location (such as enterprise culture and enterprise technology), which improves interactivity and immersion and further improves the visiting experience of the target visitor.
The following describes a display control method of an electronic device according to an embodiment of the present disclosure in detail with reference to the accompanying drawings.
Fig. 1 shows a flowchart of a display control method of an electronic device provided in an embodiment of the present disclosure. As shown in fig. 1, the display control method of the electronic device includes steps S11 to S13.
In step S11, face recognition is performed on the first image acquired by the camera to obtain a face recognition result corresponding to the first image.
In step S12, in response to the face recognition result indicating that the face in the first image belongs to a target visitor of the first target location, guidance information for the target visitor is obtained, and decoration information corresponding to the target visitor is obtained.
In step S13, the first display screen is controlled to display the guidance information and the face image of the target visitor to which the decoration information is added.
In the embodiments of the present disclosure, the electronic device includes a camera and a first display screen, and the electronic device is disposed at a first target location. The first target location may be any location such as an enterprise, a college, an organization, or a residential community. The electronic device may be any electronic device at the first target location used to display information to the target visitor. For example, if the first target location is an enterprise, the electronic device may be an electronic device disposed at any position along a corridor, an electronic device disposed at the entrance of a conference room, or the like. As the target visitor moves within the first target location, information may be displayed to the target visitor through different electronic devices. Fig. 2 shows a schematic diagram of an electronic device 200 provided by an embodiment of the present disclosure. As shown in Fig. 2, the electronic device 200 may include a camera 210 and a first display screen 220. In one possible implementation, the display control method of the electronic device may be executed by a terminal device, a server, or another processing device. The terminal device may be the electronic device itself, a User Equipment (UE), a mobile device, a user terminal, a Personal Digital Assistant (PDA), a handheld device, a computing device, or a wearable device. In some possible implementations, the display control method of the electronic device may be implemented by a processor calling computer-readable instructions stored in a memory.
In the embodiment of the present disclosure, at least one electronic device may be provided at a first target location, and the display control method of the electronic device provided in the embodiment of the present disclosure is respectively performed for the at least one electronic device, so as to implement visitor guidance through the at least one electronic device. In a possible implementation manner, electronic devices may be respectively disposed at a plurality of locations of the first target location, and the display control method of the electronic device provided in the embodiments of the present disclosure is respectively performed for the plurality of electronic devices, so that when a target visitor is located at different locations of the first target location, guidance may be respectively obtained through the electronic devices disposed at different locations of the first target location.
In the disclosed embodiments, some or all of the visitors visiting the first target location may each be a target visitor. The target visitor may be any person who visits the first target location. For example, if the first target location is an enterprise, the target visitor may be a person from a partner of the enterprise, a person coming to the enterprise for an interview, a person performing an inspection of the enterprise (e.g., a security inspector), and so on. As another example, if the first target location is a residential community, the target visitor may be a relative of a resident of the community, a food delivery person, a courier, or the like.
In one possible implementation, the electronic device may control the camera to continuously capture video streams and/or images. In another possible implementation manner, the electronic device may control the camera to acquire images at a preset frequency. In another possible implementation manner, the electronic device may control the camera to capture a video stream and/or an image when detecting that the human body is close to the camera or the first display screen. For example, the electronic device may be provided with a distance sensor or a pyroelectric infrared sensor in the vicinity of the camera or the first display screen to detect whether a human body approaches the camera or the first display screen by the distance sensor or the pyroelectric infrared sensor.
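As a rough sketch of the proximity-triggered variant, assuming a distance sensor object with a `read_distance_m()` method and a camera object with a `capture()` method (both hypothetical interfaces, not defined by the disclosure):

```python
import time

def capture_when_person_near(distance_sensor, camera, threshold_m: float = 1.5, poll_s: float = 0.2):
    """Poll the (assumed) distance sensor and capture an image once a person is close enough."""
    while True:
        if distance_sensor.read_distance_m() <= threshold_m:  # human body near the camera/screen
            return camera.capture()
        time.sleep(poll_s)
```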
The first image may be any image or video frame captured by the camera. For example, the first image may be the most recently acquired image or video frame of the camera. In the embodiment of the present disclosure, face recognition may be performed based on one or more images acquired by the camera. In the case of performing face recognition based on a plurality of images, the plurality of images may include the first image and other images.
In a possible implementation manner, the face features in the first image can be extracted through a pre-trained first neural network, then the face features are compared with a pre-established face feature library, and a face recognition result corresponding to the first image is determined according to the comparison result. The first neural network can be trained in advance based on a face image set with face key point labeling information, so that the first neural network learns the capabilities of face positioning and face feature extraction. The key points of the face can include key points of forehead, eyes, eyebrows, mouth, nose, face contour and other parts. The number of face key points may be 21, 106, 240, etc. The structure of the first neural network is not limited in the embodiments of the present disclosure, and may include, for example and without limitation, a Back Propagation (BP) neural network, a convolutional neural network, a Radial Basis Function (RBF) neural network, a perceptron neural network, a linear neural network, a feedback neural network, and the like.
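For illustration, the comparison against the pre-established face feature library could be done with cosine similarity over feature vectors, as sketched below. The similarity measure, the threshold, and the feature format are assumptions for the example; the disclosure only requires that extracted features be compared with the library.

```python
import numpy as np

def match_against_library(face_feature: np.ndarray,
                          feature_library: dict,
                          threshold: float = 0.6):
    """Return the identity whose stored feature best matches, or None if no match clears the threshold.

    feature_library maps an identity (e.g. a visitor ID) to a reference feature vector
    produced by the same (pre-trained) feature extractor.
    """
    best_id, best_score = None, -1.0
    for identity, reference in feature_library.items():
        score = float(np.dot(face_feature, reference) /
                      (np.linalg.norm(face_feature) * np.linalg.norm(reference) + 1e-12))
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= threshold else None
```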
In the embodiments of the present disclosure, the face recognition result corresponding to the first image may include information indicating whether a face is recognized. For example, the face recognition result may be that a face is recognized or that no face is recognized. When the face recognition result indicates that a face is recognized, the face recognition result may further include identification information of the target user, where the target user represents the user in the first image, that is, the user currently in front of the camera of the electronic device. The identification information of the target user may be any information that can uniquely identify the target user, for example, the target user's number (ID), name, identity card number, or mobile phone number.
After the identification information of the target user is obtained, it may be matched against the identification information of the target visitors of the first target location, so as to determine whether the face in the first image belongs to a target visitor. Some or all of the visitors to the first target location may each be a target visitor. If the face in the first image belongs to a target visitor, guidance information for the target visitor and decoration information corresponding to the target visitor may be obtained.
In the embodiments of the present disclosure, in response to the face recognition result indicating that the face in the first image belongs to a target visitor of the first target location, guidance information for the target visitor is obtained, where the guidance information is the guidance information corresponding to that target visitor. The guidance information for different target visitors (i.e., the guidance information corresponding to different target visitors) may be different. Of course, the guidance information for multiple target visitors (i.e., the guidance information corresponding to multiple target visitors) may also be the same. For example, suppose target visitor V1, target visitor V2, and target visitor V3 are visitors attending meeting M1 at the first target location, and target visitor V4 and target visitor V5 are visitors attending meeting M2 at the first target location. Then the guidance information for V1, V2, and V3 may be the same, and the guidance information for V4 and V5 may be the same. For example, the guidance information for V1, V2, and V3 may all be "Please go to meeting room R1 to attend meeting M1", and the guidance information for V4 and V5 may all be "Please go to meeting room R2 to attend meeting M2".
In one possible implementation, the guidance information includes travel prompt information for the target visitor, where the travel prompt information is used to guide the target visitor to travel to a second target location, and the second target location is a location within the first target location that the target visitor has reserved to visit; and the obtaining of guidance information for the target visitor includes: generating travel prompt information for the target visitor according to the location information of the second target location. For example, if the first target location is an enterprise, the second target location that the target visitor has reserved to visit within the first target location may be a conference room, an exhibition hall, or the like. As another example, if the first target location is a residential community, the second target location that the target visitor has reserved to visit may be a particular room (e.g., Room X). In this implementation, the travel prompt information may be any information for guiding the target visitor to travel to the second target location, and it is determined at least according to the location information of the second target location. Generating the travel prompt information for the target visitor according to the location information of the second target location guides the target visitor to smoothly reach the second target location reserved for the visit, which saves the target visitor's time and improves the experience of visiting the first target location.
As an example of this implementation, the generating of the travel prompt message for the target visitor according to the location information of the second target location includes: and generating travel prompt information from the first display screen to the second target place according to the position information of the second target place and the position information of the first display screen.
In this example, according to the position information of the first display screen and the position information of the second destination location, route information from the position of the first display screen to the second destination location may be determined, so that at least one of a traveling direction, a traveling speed, a distance to be traveled, a remaining traveling time, and the like from the position of the first display screen to the second destination location may be determined. The travel prompt information from the first display screen to the second target place can be generated according to at least one of the travel direction, the travel speed, the distance to be traveled, the remaining travel time and the like from the position of the first display screen to the second target place. That is, the travel guidance information may be used to guide at least one of the travel direction, the travel speed, the distance to be traveled, the remaining travel time, and the like. Wherein, the traveling direction may include straight traveling, left turning, right turning, etc.; the distance to be traveled may include at least one of a remaining distance in the at least one direction of travel, a remaining distance to the second target site, and the like; the remaining travel time may include at least one of a remaining travel time in the at least one travel direction, a remaining travel time to the second destination location, and the like. For example, the travel prompt may include straight, 20 meters, 10 minutes to reach the second destination location, and so on.
In this example, travel prompt information from the first display screen to the second target location is generated according to the location information of the second target location that the target visitor has reserved to visit within the first target location and the location information of the first display screen. This guides the target visitor to reach the second target location more smoothly, further saving the target visitor's time and improving the experience of visiting the first target location.
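A toy sketch of generating such a travel prompt from the two positions, using straight-line distance and a fixed walking speed as simplifying assumptions (a real deployment would rely on the site's route planning; none of the numbers below come from the disclosure):

```python
from math import hypot

def travel_prompt(screen_position: tuple,
                  target_position: tuple,
                  walking_speed_m_s: float = 1.2) -> str:
    """Build a travel prompt from the first display screen to the second target location."""
    distance_m = hypot(target_position[0] - screen_position[0],
                       target_position[1] - screen_position[1])
    remaining_min = max(1, round(distance_m / walking_speed_m_s / 60))
    return (f"About {distance_m:.0f} m to the second target location, "
            f"roughly {remaining_min} min at walking pace.")

# e.g. travel_prompt((0.0, 0.0), (45.0, 30.0))
```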
As another example of this implementation, the generating of the travel prompt message for the target visitor from the location information of the second target location includes: and taking the position information of the second target place as the traveling prompt information aiming at the target visitor. That is, in this example, the travel prompt message for the target visitor may include location information for the second target location.
In one possible implementation, the guidance information includes time prompt information for the target visitor, where the time prompt information is used to prompt the reserved time of the target visitor and/or the time interval between the current time and the reserved time. As an example of this implementation, the time prompt information prompts the reserved time of the target visitor. As another example, the time prompt information prompts the time interval between the current time and the reserved time. As yet another example, the time prompt information prompts both the reserved time of the target visitor and the time interval between the current time and the reserved time. In this implementation, the first display screen is controlled to display the reserved time of the target visitor and/or the time interval between the current time and the reserved time, which helps the target visitor arrange his or her time reasonably and further improves the experience of visiting the first target location.
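A small sketch of composing such time prompt information from the reserved time, with wording chosen purely for illustration:

```python
from datetime import datetime
from typing import Optional

def time_prompt(reserved_time: datetime, now: Optional[datetime] = None) -> str:
    """Prompt the reserved time and the interval between the current time and the reserved time."""
    now = now or datetime.now()
    minutes = int((reserved_time - now).total_seconds() // 60)
    when = reserved_time.strftime("%H:%M")
    if minutes >= 0:
        return f"Your visit is scheduled for {when}, starting in about {minutes} minutes."
    return f"Your visit was scheduled for {when}, which was about {-minutes} minutes ago."
```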
In one possible implementation, the guidance information includes welcome information for the target visitor; and the obtaining of guidance information for the target visitor includes: generating welcome information for the target visitor according to the personal information and/or the visit reservation information of the target visitor. The personal information of the target visitor may include at least one of a name, gender, job title, organization information, and the like. The visit reservation information may include at least one of the purpose of the visit (e.g., a meeting, an interview, an inspection, etc.), the name of the meeting scheduled to be attended, the conference room information corresponding to that meeting, information about the second target location that the target visitor has reserved to visit within the first target location, and the like. In this implementation, the welcome information for the target visitor may include at least one of welcome words, a welcome image, a welcome animation, a welcome video, a welcome voice, etc. for the target visitor. That is, the form of the welcome information for the target visitor may include at least one of text, image, animation, video, voice, and the like.
As an example of this implementation, welcome information for the target visitor may be generated from the personal information of the target visitor. For example, if the name of the target visitor is Wang XX, the gender of the target visitor is female, and the name of the first target location is XX Technology, the welcome information for the target visitor may include text such as "Ms. Wang XX, welcome to XX Technology" or "Welcome, Ms. Wang XX". As another example, if the name of the target visitor is Zhang XX, the job title is professor, the affiliated organization is XX University, and the name of the first target location is XX Technology, the welcome information for the target visitor may include text such as "Professor Zhang XX of XX University, welcome to XX Technology" or "Welcome, Professor Zhang XX of XX University".
As another example of this implementation, welcome information for the target visitor may be generated according to the visit reservation information of the target visitor. For example, if the visit reservation information of the target visitor includes the name of the meeting reserved for attendance, "XX conference", and the conference room information corresponding to that meeting, "conference room R1", then the welcome information for the target visitor may include "Please go to conference room R1 to attend the XX conference", and so on.
As another example of this implementation, welcome information for the target visitor may be generated according to both the personal information and the visit reservation information of the target visitor. For example, if the name of the target visitor is Wang XX, the gender is female, the name of the first target location is XX Technology, the name of the meeting the target visitor has reserved to attend is "XX conference", and the corresponding conference room information is "conference room R1", the welcome information for the target visitor may include "Ms. Wang XX, welcome to XX Technology! Please go to conference room R1 to attend the XX conference", and so on. Fig. 3 is a schematic diagram illustrating the first display screen displaying the guidance information and the facial image of the target visitor to which the decoration information is added in the display control method of the electronic device according to an embodiment of the present disclosure. In the example shown in Fig. 3, the guidance information includes the welcome words "Ms. Wang XX, welcome to XX Technology! Please go to conference room R1 to attend the XX conference", and the decoration information includes a headwear sticker.
In this implementation, the welcome information for the target visitor is generated according to the personal information and/or the visit reservation information of the target visitor, so that personalized, customized welcome information can be displayed for the target visitor. This improves the experience of the target visitor visiting the first target location, makes the visit go more smoothly, and saves the target visitor's time.
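For illustration, welcome words could be composed from the available personal and/or reservation fields roughly as follows; the field names and wording are assumptions introduced for the example.

```python
from typing import Optional

def welcome_words(name: str,
                  honorific: Optional[str] = None,
                  meeting: Optional[str] = None,
                  room: Optional[str] = None,
                  site_name: str = "XX Technology") -> str:
    """Compose welcome words from personal information and/or visit reservation information."""
    salutation = f"{honorific} {name}" if honorific else name
    text = f"{salutation}, welcome to {site_name}!"
    if meeting and room:
        text += f" Please go to {room} to attend the {meeting}."
    return text

# e.g. welcome_words("Wang XX", honorific="Ms.", meeting="XX conference", room="conference room R1")
```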
As one example of this implementation, the welcome information includes an avatar for the target visitor and welcome words for the target visitor. In this example, the avatar may be a three-dimensional or two-dimensional virtual figure used to welcome the target visitor, and it may be in a cartoon style, a realistic style, or the like. For example, if the first target location is an enterprise, the avatar may be the mascot of the enterprise. The avatars for different target visitors (i.e., the avatars corresponding to different target visitors) may be different; of course, the avatars for multiple target visitors may also be the same. In this example, the personal information of the target visitor may be obtained, and the avatar for the target visitor may be determined according to that personal information. For example, preference information of the target visitor may be predicted from the personal information of the target visitor, and the avatar for the target visitor may be determined according to a pre-established correspondence between preference information and avatars. In this example, the first display screen is controlled to display the avatar for the target visitor and the welcome words for the target visitor, which presents the effect of the avatar speaking the welcome words. Personifying the machine in this way makes the human-computer interaction better match people's interaction habits and feel more natural, so that the target visitor can feel the warmth of the interaction.
In one example, the controlling the first display screen to display the guidance information and the face image of the target visitor with the decoration information added includes: controlling the first display screen to display the guidance information in a first area and to display the face image of the target visitor with the decoration information added in a second area, where the avatar displayed in the first area faces the second area. In this example, the first area and the second area may have no overlapping region or may partially overlap, which is not limited herein. For example, the first area is located on the left side of the first display screen and the second area is located on the right side of the first display screen; for another example, the first area is located on the right side of the first display screen and the second area is located on the left side of the first display screen. In this example, by controlling the first display screen to display the guidance information in the first area and to display the face image of the target visitor with the decoration information added in the second area, with the avatar in the first area facing the second area, the human-machine interaction process can be made more natural and the interest can be further improved. Fig. 4 is another schematic diagram illustrating the first display screen displaying the guidance information and the face image of the target visitor with the decoration information added in the display control method of the electronic device according to the embodiment of the present disclosure. In the example shown in Fig. 4, the first area is located at the lower left of the first display screen, the second area is located on the right side of the first display screen, and the avatar displayed in the first area faces the second area.
In the embodiment of the disclosure, in response to the face recognition result indicating that the face in the first image belongs to a target visitor of the first target place, decoration information corresponding to the target visitor is obtained, where the decoration information may represent any information used to decorate the face image of the target visitor. The decoration information can adorn or beautify the face image of the target visitor and can also be used to add interest. For example, the decoration information may be in the form of a sticker, a bubble, etc. The decoration information may include static decoration information and/or dynamic decoration information.
In the disclosed embodiments, the facial image of the target visitor may represent an image containing at least a portion of the target visitor's face. For example, the facial image of the target visitor may contain the entire face of the target visitor. As another example, the facial image of the target visitor may contain a substantial area of the target visitor's face. Of course, the facial image of the target visitor may also contain other parts of the target visitor's body. For example, the facial image of the target visitor may also contain the neck of the target visitor; as another example, the facial image of the target visitor may also include the neck and upper body of the target visitor; as another example, the face image of the target visitor may be a full body photograph of the target visitor. In the embodiment of the present disclosure, the face image of the target visitor may be a static image and/or a dynamic image, which is not limited herein.
In one possible implementation, the face image of the target visitor may be determined according to any one of the following: the latest face image that meets a preset quality condition among the face images of the target visitor collected by the camera; the first image; or a pre-stored face image of the target visitor. For example, the preset quality condition may include at least one of: the eyes are open, the proportion of the face region that is occluded is smaller than a preset proportion, the sharpness is greater than or equal to a preset sharpness, the brightness is greater than or equal to a preset brightness, and the like. Of course, a person skilled in the art can flexibly set the preset quality condition according to the requirements of the actual application scenario, which is not limited herein. In this implementation, the pre-stored face image of the target visitor may be a face image uploaded to the visitor system by the target visitor in advance, or may be a face image of the target visitor obtained in another way, which is not limited herein. As one example of this implementation, the face image of the target visitor is the latest face image, among the face images of the target visitor collected by the camera, that meets the preset quality condition. As another example of this implementation, the face image of the target visitor is that latest face image after beautification. As another example of this implementation, the face image of the target visitor is the first image. As another example of this implementation, the face image of the target visitor is the first image after beautification. As another example of this implementation, the face image of the target visitor is a pre-stored face image of the target visitor. As another example of this implementation, the face image of the target visitor is a pre-stored face image of the target visitor after beautification. By determining the face image of the target visitor in this way, a face image of the target visitor with better quality or a more recent acquisition time can be displayed, which can further improve the experience of visiting the first target place.
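As a hypothetical illustration of selecting the latest face image that meets a preset quality condition, the Python sketch below filters captured frames by eyes-open state, occlusion ratio, sharpness, and brightness; the FaceCapture fields and the threshold values are assumptions for illustration, not values prescribed by this disclosure.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FaceCapture:
    timestamp: float        # acquisition time of the frame
    eyes_open: bool         # whether both eyes are detected as open
    occluded_ratio: float   # fraction of the face region that is occluded
    sharpness: float        # e.g., a variance-of-Laplacian score
    brightness: float       # mean luminance of the face region

def pick_latest_qualified(captures: List[FaceCapture],
                          max_occlusion: float = 0.2,
                          min_sharpness: float = 100.0,
                          min_brightness: float = 60.0) -> Optional[FaceCapture]:
    """Return the most recent capture satisfying the preset quality condition."""
    qualified = [c for c in captures
                 if c.eyes_open
                 and c.occluded_ratio < max_occlusion
                 and c.sharpness >= min_sharpness
                 and c.brightness >= min_brightness]
    return max(qualified, key=lambda c: c.timestamp) if qualified else None
```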
In a possible implementation, the obtaining decoration information corresponding to the target visitor includes: obtaining decoration information corresponding to the target visitor according to the personal information and/or the visit reservation information of the target visitor. In this implementation, the personal information of the target visitor may represent any information related to the target visitor. The visit reservation information of the target visitor may include at least one of a visit purpose (e.g., attending a meeting, an interview, an examination, etc.), the location information of the first target place, the name of the first target place, and the like.
As an example of this implementation, the personal information of the target visitor may include a gender of the target visitor, and the decoration information corresponding to the target visitor may be decoration information corresponding to the gender of the target visitor.
As one example of this implementation, the personal information of the target visitor may include a birth date. If the day that the target visitor visits the first target place is the birthday of the target visitor, the decoration information corresponding to the target visitor may include birthday decoration information, such as a birthday cake sticker, a blessing sticker, and the like. In addition, the decoration information corresponding to the target visitor may also be decoration information corresponding to the age of the target visitor.
As an example of this implementation, the decoration information corresponding to the target visitor includes at least one of: a sticker of a landmark building of the first target place, a sticker of peripheral merchandise of the first target place, a sticker including the name of the country and/or the city to which the first target place belongs, a sticker including the name of the first target place, and a sticker including a slogan corresponding to the first target place. The location information of the first target place visited by the target visitor may include any information that can indicate the location of the first target place. For example, the location information of the first target place may include at least one of the country to which the first target place belongs, the city to which it belongs, the region to which it belongs, the longitude and latitude of the first target place, and the like. For example, the location information of the first target place is "Shanghai, China". The name of the first target place may include at least one of the name of the building corresponding to the first target place, the name of the enterprise to which the first target place belongs, and the like. For example, the name of the first target place may be "XX building" or "XX Technology".
In this implementation, the decoration information corresponding to the target visitor is obtained according to the personal information and/or the visit reservation information of the target visitor, so that the displayed information can reflect the personal information and/or the visit reservation information of the target visitor, and the visiting experience of the target visitor can be further improved.
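The Python sketch below illustrates, under assumed field names, how decoration information could be derived from the personal information (for example, gender and birthday) and the visit reservation information (for example, the city and name of the first target place); the sticker identifiers are hypothetical labels used only for this example.

```python
from datetime import date

def pick_decoration(personal_info: dict, reservation_info: dict, today: date) -> list:
    """Collect decoration items matching the visitor's profile and reservation."""
    decorations = []
    birthday = personal_info.get("birthday")  # a datetime.date, if known
    if birthday and (birthday.month, birthday.day) == (today.month, today.day):
        decorations.append("birthday_cake_sticker")
    if personal_info.get("gender") == "female":
        decorations.append("headwear_sticker_female")
    else:
        decorations.append("headwear_sticker_default")
    # Site-related stickers derived from the visit reservation information.
    decorations.append(f'landmark_sticker_{reservation_info.get("city", "unknown")}')
    decorations.append(f'name_sticker_{reservation_info.get("site_name", "site")}')
    return decorations
```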
As an example of this implementation, the decoration information includes: a headwear sticker and/or a face sticker corresponding to the target visitor. In this example, the first display screen is controlled to display a face image of the target visitor to which the headwear sticker and/or the face sticker are added, whereby the interest of the target visitor in accessing the first target place can be further increased.
In the embodiment of the disclosure, matching degrees between multiple preset sets of decoration information and the target visitor can be obtained, and the decoration information corresponding to the target visitor is determined according to the matching degrees. Any one of the multiple sets of decoration information may include one item or two or more items of decoration information. For example, the set of decoration information with the highest matching degree with the target visitor among the multiple sets may be determined as the decoration information corresponding to the target visitor. For another example, the set of decoration information with the highest matching degree with the target visitor among the candidate decoration information may be determined as the decoration information corresponding to the target visitor, where the candidate decoration information represents the decoration information, among the multiple sets, that has not yet been shown to the target visitor.
In a possible implementation, the obtaining decoration information corresponding to the target visitor includes: in response to the target visitor having viewed a second display screen disposed at the first target place, obtaining decoration information different from the decoration information that the second display screen displayed for the target visitor, where the second display screen is disposed at a different position of the first target place than the first display screen. In this implementation, the second display screen represents a display screen that the target visitor has already viewed during the current visit. By obtaining, in response to the target visitor having viewed the second display screen disposed at the first target place, decoration information different from the decoration information displayed for the target visitor by the second display screen, each time the target visitor views a display screen disposed at the first target place, the face image of the target visitor with different decoration information added is displayed to the target visitor, which further improves the interest for the target visitor.
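One possible way to combine the matching-degree selection described above with the requirement of not repeating decoration already shown on another screen is sketched below in Python; the data layout (set identifiers, score map, and the record of already-shown sets) is an assumption used only for illustration.

```python
def choose_decoration_set(decoration_sets: dict, scores: dict, shown: set) -> str:
    """Pick the highest-scoring decoration set that has not been shown yet.

    decoration_sets: set_id -> list of decoration items
    scores:          set_id -> matching degree with the target visitor
    shown:           ids of sets already displayed on other screens
    """
    candidates = [sid for sid in decoration_sets if sid not in shown]
    if not candidates:  # everything shown already: fall back to the best overall
        candidates = list(decoration_sets)
    return max(candidates, key=lambda sid: scores.get(sid, 0.0))
```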
In a possible implementation, after obtaining the face recognition result corresponding to the first image, the method further includes: in response to the face recognition result indicating that the face in the first image belongs to a target visitor of the first target place, obtaining a background image corresponding to the target visitor; and controlling the first display screen to use the background image as the background of the face image of the target visitor. In this implementation, the background image corresponding to the target visitor may be obtained according to the visit reservation information of the target visitor, so that the background image displayed on the first display screen can reflect information related to the current visit of the target visitor. For example, the background image may be an image containing a landmark building of the first target place. For example, a target image of the region where the face or body of the visitor is located can be extracted from the face image of the target visitor; the background image is then used as the bottom layer, the target image as the middle layer, and the decoration information as the top layer, and the layers are displayed in the second area of the first display screen. Here, the target image represents the image of the region where the face or body of the visitor is located in the face image.
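The layering described above (background image at the bottom, the extracted target image in the middle, and the decoration information on top) can be illustrated with a simple alpha-compositing sketch in Python using NumPy; the RGBA layout and the assumption that all layers share the same size are illustrative simplifications rather than requirements of the method.

```python
import numpy as np

def composite_layers(background: np.ndarray,
                     person_rgba: np.ndarray,
                     decoration_rgba: np.ndarray) -> np.ndarray:
    """Stack background (bottom), segmented person (middle), decoration (top).

    All inputs are H x W x 4 uint8 RGBA arrays of the same size; the person and
    decoration layers carry their own alpha masks (e.g., from segmentation).
    """
    def over(top: np.ndarray, bottom: np.ndarray) -> np.ndarray:
        alpha = top[..., 3:4].astype(np.float32) / 255.0
        rgb = top[..., :3] * alpha + bottom[..., :3] * (1.0 - alpha)
        out = bottom.copy()
        out[..., :3] = rgb.astype(np.uint8)
        return out

    return over(decoration_rgba, over(person_rgba, background))
```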
In a possible implementation, the controlling the first display screen to display the guidance information and the face image of the target visitor with the decoration information added includes: in response to detecting the intention of the target visitor to view the first display screen, controlling the first display screen to display the guidance information and the face image of the target visitor with the decoration information added. In this implementation, by controlling the first display screen to display this information only in response to detecting the intention of the target visitor to view the first display screen, unnecessary information display can be reduced and the utilization efficiency of the display screen resources provided at the first target place can be further improved.
As an example of this implementation, the method further comprises: determining that an intent of the target guest to view the first display screen is detected in response to at least one of: the target visitor gazes at the first display screen; the target visitor touches the first display screen; the proportion of the face area of the target visitor in the first image is greater than or equal to a first preset threshold; the time length of the target visitor staying in front of the first display screen is greater than or equal to a second preset threshold value; the distance between the target visitor and the first display screen is smaller than or equal to a third preset threshold.
In one example, an intent of a target visitor to view a first display screen may be detected in response to the target visitor gazing at the first display screen. In this example, the target visitor may be subjected to line-of-sight detection based on the video stream captured by the camera to determine whether the line of sight of the target visitor falls on the first display screen. If the sight line of the target visitor falls on the first display screen, the duration of the stay of the sight line of the target visitor on the first display screen can be further detected. If the duration of the sight line of the target visitor staying on the first display screen reaches the preset duration, the target visitor can be determined to watch the first display screen. For example, the preset time period may be 1 second, 0.5 second, or the like.
In another example, an intent of the target visitor to view the first display screen may be detected in response to the target visitor touching the first display screen. In this example, it may be determined that the target visitor touches the first display screen in response to detecting that the first display screen is touched and that the face in the first image belongs to the target visitor.
In another example, it may be determined that an intention of the target visitor to view the first display screen is detected in response to the proportion of the face area of the target visitor in the first image being greater than or equal to a first preset threshold. In this example, when the face in the first image belongs to the target visitor, the area of the face region of the target visitor in the first image may be determined, and the ratio of that area to the area of the first image may be computed to obtain the proportion of the face region of the target visitor in the first image.
In another example, it may be determined that an intent of the target visitor to view the first display screen is detected in response to the target visitor staying in front of the first display screen for a time greater than or equal to a second preset threshold.
In another example, an intent of the target visitor to view the first display screen may be determined to be detected in response to a distance between the target visitor and the first display screen being less than or equal to a third preset threshold.
In the above examples, it is determined that the intention of the target visitor to view the first display screen is detected in response to the target visitor gazing at the first display screen, the target visitor touching the first display screen, the proportion of the face area of the target visitor in the first image being greater than or equal to the first preset threshold, the duration of the target visitor staying in front of the first display screen being greater than or equal to the second preset threshold, or the distance between the target visitor and the first display screen being less than or equal to the third preset threshold. In response to detecting the intention of the target visitor to view the first display screen, the first display screen is controlled to display the guidance information and the face image of the target visitor with the decoration information added, thereby further reducing unnecessary information display and improving the utilization efficiency of the display screen resources of the first target place.
Of course, those skilled in the art may also flexibly set the manner of detecting the intention of the target guest to view the first display screen according to the requirements of the actual application scenario, which is not limited herein.
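For illustration, the viewing-intention conditions listed above can be combined with a simple OR rule, as in the Python sketch below; the concrete threshold values are hypothetical placeholders standing in for the first, second, and third preset thresholds.

```python
from dataclasses import dataclass

@dataclass
class ViewerState:
    gazing_at_screen: bool   # line of sight falls on the first display screen
    gaze_duration_s: float   # how long the gaze has stayed on the screen
    touched_screen: bool     # touch event while the face matches the visitor
    face_area_ratio: float   # face area / first-image area
    dwell_time_s: float      # time spent in front of the screen
    distance_m: float        # estimated distance to the screen

def intends_to_view(s: ViewerState,
                    gaze_hold_s: float = 0.5,
                    face_ratio_thr: float = 0.1,
                    dwell_thr_s: float = 2.0,
                    distance_thr_m: float = 1.5) -> bool:
    """Return True if any of the preset viewing-intention conditions is met."""
    return ((s.gazing_at_screen and s.gaze_duration_s >= gaze_hold_s)
            or s.touched_screen
            or s.face_area_ratio >= face_ratio_thr
            or s.dwell_time_s >= dwell_thr_s
            or s.distance_m <= distance_thr_m)
```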
As an example of this implementation, after the controlling, in response to detecting the intention of the target visitor to view the first display screen, the first display screen to display the guidance information and the face image of the target visitor with the decoration information added, the method further includes: in response to detecting that the target visitor leaves the first display screen, controlling the first display screen to stop displaying the guidance information and the face image of the target visitor with the decoration information added.
In this example, it may be determined that the target visitor is detected to have left the first display screen in response to at least one of: the proportion of the face area of the target visitor in the latest image acquired by the camera is smaller than a fourth preset threshold, where the fourth preset threshold may be smaller than or equal to the first preset threshold; the distance between the target visitor and the first display screen is greater than a fifth preset threshold, where the fifth preset threshold may be greater than or equal to the third preset threshold; the line of sight of the target visitor no longer falls on the first display screen; the face of the target visitor is no longer oriented toward the first display screen; the body of the target visitor is no longer oriented toward the first display screen. Of course, those skilled in the art may also flexibly set the way of detecting that the target visitor has left the first display screen according to the requirements of the actual application scenario, which is not limited herein.
In this example, by controlling the first display screen, in response to detecting that the target visitor has left, to stop displaying the guidance information and the face image of the target visitor with the decoration information added, a worker at the first target place does not need to confirm manually that the target visitor has left and then replace the information displayed on the first display screen. This further improves the convenience of display control of the electronic device and the utilization efficiency of the display screen resources of the first target place.
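Correspondingly, the leave conditions can be checked with a similar OR rule, as sketched below; the placeholder thresholds stand in for the fourth and fifth preset thresholds and are illustrative only.

```python
def has_left(face_area_ratio: float,
             distance_m: float,
             gaze_on_screen: bool,
             face_towards_screen: bool,
             body_towards_screen: bool,
             ratio_thr: float = 0.05,       # placeholder fourth preset threshold
             distance_thr_m: float = 2.0    # placeholder fifth preset threshold
             ) -> bool:
    """Return True if any of the leave conditions described above is met."""
    return (face_area_ratio < ratio_thr
            or distance_m > distance_thr_m
            or not gaze_on_screen
            or not face_towards_screen
            or not body_towards_screen)
```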
The embodiment of the disclosure can be applied to application scenes such as visitor guiding, visitor systems, route reminding, visitor all-in-one machines, visitor machines, positioning navigation, augmented reality, computer vision and the like.
The following describes a display control method of an electronic device according to an embodiment of the present disclosure with a specific application scenario. In this application scenario, the first target place is an enterprise whose name is XX Technology; the name of the target visitor V is Wang XX and her gender is female; the target visitor V visits the first target place to attend a meeting; the name of the meeting reserved for the target visitor V is "XX meeting"; the meeting room (i.e., the second target place) corresponding to the meeting reserved for the target visitor V is meeting room R1; and meeting room R1 is on the 12th floor of the first target place.
After the target visitor V arrives at the first target place, the visitor may check in and print a visitor sticker at a visitor machine provided at the first target place. The target visitor V may check in and print the visitor sticker via a visitor code (e.g., a two-dimensional code corresponding to the target visitor) or via the face. During the target visitor V's visit to the first target place, the visitor sticker can serve as an identity credential, which facilitates visitor management at the first target place.
After the target visitor V arrives at the first target place, the target visitor V can view information through an electronic device D1 arranged in the first-floor lobby of the first target place, where the electronic device D1 may be a visitor machine or another electronic device. The electronic device D1 includes a camera C1 and a display screen S1. The electronic device D1 performs face recognition on an image F1 acquired by the camera C1 to obtain a face recognition result corresponding to the image F1, and, in response to the face recognition result corresponding to the image F1 indicating that the face in the image F1 belongs to the target visitor V, obtains welcome information, travel prompt information, and decoration information corresponding to the target visitor V. For example, the display screen S1 may display the following information: an avatar for the target visitor V, the welcome words for the target visitor V "Ms. Wang XX, XX Technology welcomes you", the travel prompt information for the target visitor V "please go to meeting room R1 on the 12th floor to attend the XX meeting", and the face image of the target visitor V with decoration information A1 added.
After the target visitor V arrives at the 12th floor of the first target place, the target visitor V can view information through an electronic device D2 arranged on the 12th floor, where the electronic device D2 includes a camera C2 and a display screen S2. The electronic device D2 performs face recognition on an image F2 acquired by the camera C2 to obtain a face recognition result corresponding to the image F2, and, in response to the face recognition result corresponding to the image F2 indicating that the face in the image F2 belongs to the target visitor V, obtains welcome information, travel prompt information, and decoration information corresponding to the target visitor V. For example, the display screen S2 may display the following information: an avatar for the target visitor V, the welcome words for the target visitor V "Ms. Wang XX, XX Technology welcomes you", the travel prompt information for the target visitor V "to attend the XX meeting in meeting room R1, turn right after 20 meters ahead", and the face image of the target visitor V with decoration information A2 added.
After the target visitor V has walked some distance on the 12th floor, the target visitor V can view information through an electronic device D3 arranged on the 12th floor, where the electronic device D3 includes a camera C3 and a display screen S3. The electronic device D3 performs face recognition on an image F3 acquired by the camera C3 to obtain a face recognition result corresponding to the image F3, and, in response to the face recognition result corresponding to the image F3 indicating that the face in the image F3 belongs to the target visitor V, obtains welcome information, travel prompt information, and decoration information corresponding to the target visitor V. For example, the display screen S3 may display the following information: an avatar for the target visitor V, the welcome words for the target visitor V "Ms. Wang XX, XX Technology welcomes you", the travel prompt information "turn left ahead and then go straight for 20 meters; meeting room R1 is on your left-hand side", and the face image of the target visitor V with decoration information A3 added.
With this application scenario, the electronic devices arranged at different positions of the first target place can be automatically and accurately controlled to display the guidance information for the target visitor and the face image of the target visitor with the decoration information added, which improves the convenience of display control of the electronic devices. In this application scenario, at every step within the visited enterprise, the visitor can obtain "one-to-one" VIP service through the electronic devices provided at multiple locations of the visited enterprise.
It can be understood that the above method embodiments of the present disclosure can be combined with one another to form combined embodiments without departing from the principle and logic; due to space limitations, details are not repeated in the present disclosure. Those skilled in the art will appreciate that, in the above methods of the specific embodiments, the specific order of execution of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides a display control apparatus of an electronic device, a computer-readable storage medium, and a program, which can be used to implement any display control method of an electronic device provided by the present disclosure, and corresponding technical solutions and technical effects can be referred to in corresponding descriptions of the method section and are not described again.
Fig. 5 shows a block diagram of a display control apparatus of an electronic device provided in an embodiment of the present disclosure. The electronic equipment comprises a camera and a first display screen, and is arranged in a first target place. As shown in fig. 5, the display control apparatus of the electronic device includes:
a face recognition module 51, configured to perform face recognition on a first image acquired by the camera to obtain a face recognition result corresponding to the first image;
an obtaining module 52, configured to, in response to the face recognition result indicating that the face in the first image belongs to a target visitor at the first target location, obtain guidance information for the target visitor, and obtain decoration information corresponding to the target visitor;
and the display control module 53 is configured to control the first display screen to display the guidance information and the face image of the target visitor added with the decoration information.
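As a hypothetical illustration of how the face recognition module 51, the obtaining module 52, and the display control module 53 could cooperate, the following Python sketch wires three placeholder back ends together; the class, method, and attribute names are assumptions and do not prescribe any particular implementation.

```python
class DisplayControlApparatus:
    """Sketch of wiring the three modules together; the recognizer, info store,
    and screen back ends are placeholders injected by the caller."""

    def __init__(self, recognizer, info_store, screen):
        self.recognizer = recognizer  # plays the role of face recognition module 51
        self.info_store = info_store  # plays the role of obtaining module 52
        self.screen = screen          # plays the role of display control module 53

    def on_frame(self, first_image):
        result = self.recognizer.recognize(first_image)
        if result.is_target_visitor:
            guidance = self.info_store.guidance_for(result.visitor_id)
            decoration = self.info_store.decoration_for(result.visitor_id)
            face_image = self.info_store.face_image_for(result.visitor_id)
            self.screen.show(guidance, decorate(face_image, decoration))

def decorate(face_image, decoration):
    """Placeholder for overlaying the decoration information on the face image."""
    return (face_image, decoration)
```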
In one possible implementation manner, the display control module 53 is configured to:
in response to detecting the intention of the target visitor to view the first display screen, controlling the first display screen to display the guide information and the face image of the target visitor after the decoration information is added.
In one possible implementation, the apparatus further includes:
a determination module to determine that an intent of the target guest to view the first display screen is detected in response to at least one of:
the target visitor gazes at the first display screen;
the target visitor touches the first display screen;
the proportion of the face area of the target visitor in the first image is greater than or equal to a first preset threshold;
the time length of the target visitor staying in front of the first display screen is greater than or equal to a second preset threshold value;
the distance between the target visitor and the first display screen is smaller than or equal to a third preset threshold.
In one possible implementation manner, the display control module 53 is further configured to:
in response to detecting that the target visitor leaves the first display screen, control the first display screen to stop displaying the guide information and the face image of the target visitor with the decoration information added.
In one possible implementation form of the method,
the guiding information comprises traveling prompting information aiming at the target visitor, wherein the traveling prompting information is used for guiding the target visitor to travel to a second target place, and the second target place is a place reserved for visiting by the target visitor in the first target place;
the obtaining module 52 is configured to: and generating traveling prompt information aiming at the target visitor according to the position information of the second target place.
In one possible implementation, the obtaining module 52 is configured to:
and generating travel prompt information from the first display screen to the second target place according to the position information of the second target place and the position information of the first display screen.
In one possible implementation, the guidance information includes a time alert for the target guest; wherein the time prompt message is used for prompting the reserved time of the target visitor, and/or the time prompt message is used for prompting the time interval between the current time and the reserved time.
In one possible implementation form of the method,
the guidance information includes welcome information for the target visitor;
the obtaining module 52 is configured to: generate welcome information for the target visitor according to the personal information and/or the visit reservation information of the target visitor.
In one possible implementation, the welcome information includes an avatar for the target guest and a welcome word for the target guest.
In one possible implementation manner, the display control module 53 is configured to:
and controlling the first display screen to display the guide information in a first area, and displaying the face image of the target visitor added with the decoration information in a second area, wherein the avatar displayed in the first area faces the second area.
In one possible implementation, the obtaining module 52 is configured to:
and obtaining decoration information corresponding to the target visitor according to the personal information and/or the access reservation information of the target visitor.
In one possible implementation, the decoration information includes: a headwear sticker and/or a face sticker corresponding to the target visitor.
In one possible implementation, the obtaining module 52 is configured to:
in response to the target visitor having viewed a second display screen disposed at the first target location, obtaining decorative information different from the decorative information displayed by the second display screen for the target visitor, wherein the second display screen is disposed at a different location than the first display screen at the first target location.
In the embodiment of the present disclosure, face recognition is performed on a first image acquired by the camera to obtain a face recognition result corresponding to the first image; in response to the face recognition result indicating that the face in the first image belongs to a target visitor of the first target place, guidance information for the target visitor and decoration information corresponding to the target visitor are obtained; and the first display screen is controlled to display the guidance information and the face image of the target visitor with the decoration information added. The electronic device can thus be controlled, based on the face recognition result of the image it acquires, to automatically and accurately display the guidance information for the target visitor, which improves the convenience of display control of the electronic device and the utilization efficiency of the display screen resources arranged at the first target place. While visiting the first target place, the target visitor can obtain the guidance information for the target visitor through face information; and while the guidance information is being obtained through the electronic device, the electronic device also displays the face image of the target visitor with the corresponding decoration information added, so that during visitor guidance the target visitor can experience related information of the first target place (such as corporate culture and corporate technology), interactivity and immersion are improved, and the visiting experience of the target visitor can be further improved.
In some embodiments, functions or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementations and technical effects thereof may refer to the description of the above method embodiments, which are not described herein again for brevity.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-described method. The computer-readable storage medium may be a non-volatile computer-readable storage medium, or may be a volatile computer-readable storage medium.
Embodiments of the present disclosure also provide a computer program, which includes computer readable code, and when the computer readable code runs in an electronic device, a processor in the electronic device executes the above method.
The disclosed embodiments also provide a computer program product, including computer-readable code or a non-volatile computer-readable storage medium carrying computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device performs the above method.
An embodiment of the present disclosure further provides an electronic device, including: one or more processors; a memory for storing executable instructions; wherein the one or more processors are configured to invoke the memory-stored executable instructions to perform the above-described method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 6 illustrates a block diagram of an electronic device 800 provided by an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like terminal.
Referring to fig. 6, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800, the relative positioning of components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (Wi-Fi), a second generation mobile communication technology (2G), a third generation mobile communication technology (3G), a fourth generation mobile communication technology (4G), a long term evolution of universal mobile communication technology (LTE), a fifth generation mobile communication technology (5G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 7 shows a block diagram of an electronic device 1900 provided by an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 7, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the Apple graphical-user-interface-based operating system (Mac OS X™), the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The disclosure relates to the field of augmented reality. By acquiring image information of a target object in a real environment and detecting or identifying relevant features, states, and attributes of the target object by means of various vision-related algorithms, an AR effect combining the virtual and the real and matched with a specific application is obtained. For example, the target object may be a face, limb, gesture, or action associated with a human body, or a marker or sign associated with an object, or a sand table, display area, or display item associated with a venue or place. The vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key point extraction and tracking of objects, pose or depth detection of objects, and the like. The specific application may involve not only interactive scenarios related to real scenes or objects, such as navigation, explanation, reconstruction, and virtual-effect overlay display, but also special-effect processing related to people, such as interactive scenarios of makeup beautification, body beautification, special-effect display, and virtual model display. The detection or identification of the relevant features, states, and attributes of the target object can be realized through a convolutional neural network, which is a network model obtained by model training based on a deep learning framework.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized by utilizing the state information of the computer-readable program instructions, and the electronic circuitry can execute the computer-readable program instructions to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (16)
1. A display control method of an electronic device, wherein the electronic device comprises a camera and a first display screen, and the electronic device is disposed at a first target location, the method comprising:
performing face recognition on a first image acquired by the camera to obtain a face recognition result corresponding to the first image;
in response to the face recognition result indicating that the face in the first image belongs to a target visitor of the first target place, obtaining guide information for the target visitor and obtaining decoration information corresponding to the target visitor;
and controlling the first display screen to display the guide information and the face image of the target visitor added with the decoration information.
2. The method of claim 1, wherein the controlling the first display screen to display the guidance information and the facial image of the target visitor with the decoration information added comprises:
in response to detecting the intention of the target visitor to view the first display screen, controlling the first display screen to display the guide information and the face image of the target visitor after the decoration information is added.
3. The method of claim 2, further comprising:
determining that an intent of the target guest to view the first display screen is detected in response to at least one of:
the target visitor gazes at the first display screen;
the target visitor touches the first display screen;
the proportion of the face area of the target visitor in the first image is greater than or equal to a first preset threshold;
the time length of the target visitor staying in front of the first display screen is greater than or equal to a second preset threshold value;
the distance between the target visitor and the first display screen is smaller than or equal to a third preset threshold.
4. The method of claim 2 or 3, wherein, after the controlling the first display screen to display the guidance information and the face image of the target visitor with the decoration information added in response to detecting the intention of the target visitor to view the first display screen, the method further comprises:
in response to detecting that the target visitor has left the first display screen, controlling the first display screen to stop displaying the guidance information and the face image of the target visitor with the decoration information added.
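One possible (assumed) way to realize claim 4: clear the screen once the visitor is no longer detected. `Screen` is a stand-in interface, not the disclosed apparatus.

```python
class Screen:
    """Stand-in for the first display screen (assumed interface)."""
    def show(self, content) -> None: ...
    def clear(self) -> None: ...

def refresh(screen: Screen, visitor_present: bool, content) -> None:
    if visitor_present:
        screen.show(content)    # keep showing guidance + decorated face image
    else:
        screen.clear()          # visitor has left: stop displaying both
```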
5. The method according to any one of claims 1 to 4,
wherein the guidance information comprises travel prompt information for the target visitor, the travel prompt information is used for guiding the target visitor to travel to a second target location, and the second target location is a location within the first target location that the target visitor has reserved to visit;
the obtaining guidance information for the target visitor comprises: generating the travel prompt information for the target visitor according to position information of the second target location.
6. The method of claim 5, wherein the generating the travel prompt information for the target visitor according to the position information of the second target location comprises:
generating travel prompt information from the first display screen to the second target location according to the position information of the second target location and position information of the first display screen.
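A sketch of generating a travel prompt from the first display screen's position to the reserved location, in the spirit of claim 6. The 2-D floor coordinates, room name, and left/right heuristic are illustrative assumptions.

```python
import math

def travel_prompt(screen_xy: tuple, room_xy: tuple, room_name: str) -> str:
    dx = room_xy[0] - screen_xy[0]
    dy = room_xy[1] - screen_xy[1]
    distance_m = math.hypot(dx, dy)
    if abs(dx) > abs(dy):
        direction = "to your right" if dx > 0 else "to your left"
    else:
        direction = "straight ahead" if dy > 0 else "behind you"
    return f"{room_name} is about {distance_m:.0f} m {direction}."

# Example: travel_prompt((0, 0), (12, 3), "Meeting Room B")
# -> "Meeting Room B is about 12 m to your right."
```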
7. The method of any one of claims 1 to 6, wherein the guidance information comprises time prompt information for the target visitor, wherein the time prompt information is used for prompting the reserved time of the target visitor, and/or the time prompt information is used for prompting the time interval between the current time and the reserved time.
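A sketch of the time prompt of claim 7: the reserved time plus the interval between the current time and that reserved time. The reservation representation is an assumption.

```python
from datetime import datetime
from typing import Optional

def time_prompt(reserved_time: datetime, now: Optional[datetime] = None) -> str:
    now = now or datetime.now()
    minutes = max(0, int((reserved_time - now).total_seconds() // 60))  # interval until the reserved time
    return (f"Your visit is scheduled for {reserved_time:%H:%M}; "
            f"it starts in about {minutes} minutes.")
```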
8. The method according to any one of claims 1 to 7,
wherein the guidance information comprises welcome information for the target visitor;
the obtaining guidance information for the target visitor comprises: generating the welcome information for the target visitor according to personal information and/or visit reservation information of the target visitor.
9. The method of claim 8, wherein the welcome information comprises an avatar for the target visitor and welcome text for the target visitor.
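A sketch of building the welcome information of claims 8 and 9 from personal and visit-reservation data. The field names and the avatar asset are illustrative assumptions.

```python
from typing import Optional

def welcome_info(name: str, host: Optional[str], reserved_hhmm: Optional[str]) -> dict:
    text = f"Welcome, {name}!"
    if host and reserved_hhmm:
        text += f" {host} is expecting you at {reserved_hhmm}."
    return {
        "avatar": "assistant_avatar.png",   # assumed asset shown alongside the welcome text
        "welcome_text": text,
    }
```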
10. The method of claim 9, wherein the controlling the first display screen to display the guidance information and the face image of the target visitor with the decoration information added comprises:
controlling the first display screen to display the guidance information in a first area and to display the face image of the target visitor with the decoration information added in a second area, wherein the avatar displayed in the first area faces the second area.
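One way (purely illustrative) to describe the two-area layout of claim 10 as a declarative config: guidance and the avatar in a first area, the decorated face image in a second area, with the avatar oriented so that it faces the second area.

```python
LAYOUT = {
    "first_area": {
        "rect": (0.00, 0.00, 0.50, 1.00),       # x0, y0, x1, y1 as screen fractions (assumed)
        "content": ["guidance_info", "avatar"],
        "avatar_facing": "right",               # toward the second area
    },
    "second_area": {
        "rect": (0.50, 0.00, 1.00, 1.00),
        "content": ["decorated_face_image"],
    },
}
```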
11. The method according to any one of claims 1 to 10, wherein the obtaining of the decoration information corresponding to the target visitor comprises:
obtaining the decoration information corresponding to the target visitor according to personal information and/or visit reservation information of the target visitor.
12. The method of claim 11, wherein the decoration information comprises: a headwear sticker and/or a face sticker corresponding to the target visitor.
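A sketch of claims 11 and 12: choosing headwear and/or face stickers for the visitor from personal and/or reservation information. The catalogue and the matching rules are illustrative assumptions, not part of the disclosure.

```python
STICKER_CATALOGUE = {
    "vip":      {"headwear": "crown.png"},
    "birthday": {"headwear": "party_hat.png", "face": "confetti.png"},
    "default":  {"headwear": "cat_ears.png", "face": "blush.png"},
}

def pick_decoration(is_vip: bool = False, birthday_today: bool = False) -> dict:
    # Personal / reservation attributes map to a sticker set; fall back to a default.
    if is_vip:
        return STICKER_CATALOGUE["vip"]
    if birthday_today:
        return STICKER_CATALOGUE["birthday"]
    return STICKER_CATALOGUE["default"]
```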
13. The method according to any one of claims 1 to 12, wherein the obtaining of the decoration information corresponding to the target visitor comprises:
in response to the target visitor having viewed a second display screen disposed at the first target location, obtaining, for the target visitor, decoration information different from the decoration information displayed by the second display screen, wherein the second display screen is disposed at a different position from the first display screen within the first target location.
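A sketch of claim 13: if the visitor has already been shown a decoration on the second screen, pick a different one for the first screen. The shared "already shown" store is an assumption about how the screens could coordinate.

```python
def decoration_for_screen(visitor_id: str, catalogue: dict, shown: dict) -> dict:
    already = shown.setdefault(visitor_id, set())   # decoration names shown on other screens
    for name, sticker in catalogue.items():
        if name not in already:
            already.add(name)                       # remember what this visitor has now seen
            return sticker
    return next(iter(catalogue.values()))           # all used: fall back to any sticker
```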
14. A display control apparatus of an electronic device, wherein the electronic device comprises a camera and a first display screen, and the electronic device is disposed at a first target location, the apparatus comprising:
a face recognition module, configured to perform face recognition on a first image acquired by the camera to obtain a face recognition result corresponding to the first image;
an obtaining module, configured to, in response to the face recognition result indicating that the face in the first image belongs to a target visitor of the first target location, obtain guidance information for the target visitor and obtain decoration information corresponding to the target visitor;
and a display control module, configured to control the first display screen to display the guidance information and the face image of the target visitor with the decoration information added.
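A sketch of the three modules of the claimed apparatus (claim 14) wired together as plain Python objects; the module interfaces are illustrative assumptions.

```python
class DisplayControlApparatus:
    def __init__(self, face_recognition_module, obtaining_module, display_control_module):
        self.face_recognition_module = face_recognition_module   # runs face recognition on camera frames
        self.obtaining_module = obtaining_module                 # guidance + decoration lookup
        self.display_control_module = display_control_module     # drives the first display screen

    def handle(self, frame):
        visitor = self.face_recognition_module.recognize(frame)
        if visitor is None:
            return
        guidance = self.obtaining_module.guidance_for(visitor)
        decoration = self.obtaining_module.decoration_for(visitor)
        self.display_control_module.show(guidance, frame, decoration)
```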
15. An electronic device, comprising:
one or more processors;
a memory for storing executable instructions;
wherein the one or more processors are configured to invoke the executable instructions stored in the memory to perform the method of any one of claims 1 to 13.
16. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 13.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111257446.9A CN113961133A (en) | 2021-10-27 | 2021-10-27 | Display control method and device for electronic equipment, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113961133A true CN113961133A (en) | 2022-01-21 |
Family
ID=79467620
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111257446.9A Pending CN113961133A (en) | 2021-10-27 | 2021-10-27 | Display control method and device for electronic equipment, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113961133A (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106033539A (en) * | 2015-03-20 | 2016-10-19 | 上海宝信软件股份有限公司 | Meeting guiding method and system based on video face recognition |
WO2020243969A1 (en) * | 2019-06-06 | 2020-12-10 | 深圳市汇顶科技股份有限公司 | Facial recognition apparatus and method, and electronic device |
CN210605858U (en) * | 2019-11-11 | 2020-05-22 | 青岛皇甲信息技术有限公司 | Intelligent visitor reservation management system based on face recognition |
CN110992562A (en) * | 2019-11-25 | 2020-04-10 | 上海商汤智能科技有限公司 | Access control method and device, electronic equipment and storage medium |
CN111210061A (en) * | 2019-12-31 | 2020-05-29 | 咪咕文化科技有限公司 | Guidance method, apparatus, system, and computer-readable storage medium |
CN112668762A (en) * | 2020-12-17 | 2021-04-16 | 深圳市富思源智慧消防股份有限公司 | Visitor navigation method and system based on fire evacuation indication terminal and terminal |
CN112907804A (en) * | 2021-01-15 | 2021-06-04 | 北京市商汤科技开发有限公司 | Interaction method and device of access control machine, access control machine assembly, electronic equipment and medium |
CN113240846A (en) * | 2021-04-26 | 2021-08-10 | 联仁健康医疗大数据科技股份有限公司 | Visitor service management method and device, electronic equipment and storage medium |
Non-Patent Citations (1)
Title |
---|
TANG YIPING; YAN HAIDONG: "Research on Face Recognition Technology in an Unconstrained Environment" (非约束环境下人脸识别技术的研究), Journal of Zhejiang University of Technology (浙江工业大学学报), no. 02, 15 April 2010 (2010-04-15) *
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114742689A (en) * | 2022-04-02 | 2022-07-12 | 亿玛创新网络(天津)有限公司 | Watermark adding method, system, computer equipment and computer readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3841454B1 (en) | Multi-device mapping and collaboration in augmented-reality environments | |
WO2020134858A1 (en) | Facial attribute recognition method and apparatus, electronic device, and storage medium | |
US9979921B2 (en) | Systems and methods for providing real-time composite video from multiple source devices | |
CN111595349A (en) | Navigation method and device, electronic equipment and storage medium | |
US9912970B1 (en) | Systems and methods for providing real-time composite video from multiple source devices | |
CN113641442B (en) | Interaction method, electronic device and storage medium | |
CN105308641B (en) | Information processing apparatus, information processing method, and program | |
US20190045270A1 (en) | Intelligent Chatting on Digital Communication Network | |
US20190312917A1 (en) | Resource collaboration with co-presence indicators | |
Xie et al. | Helping helpers: Supporting volunteers in remote sighted assistance with augmented reality maps | |
CN113989469A (en) | AR (augmented reality) scenery spot display method and device, electronic equipment and storage medium | |
CN112633232A (en) | Interaction method and device based on sitting posture detection, equipment, medium and household equipment | |
CN113961133A (en) | Display control method and device for electronic equipment, electronic equipment and storage medium | |
CN113611152A (en) | Parking lot navigation method and device, electronic equipment and storage medium | |
CN113625874A (en) | Interaction method and device based on augmented reality, electronic equipment and storage medium | |
WO2023155477A1 (en) | Painting display method and apparatus, electronic device, storage medium, and program product | |
CN113887488A (en) | Display control method and device of display screen, electronic equipment and storage medium | |
WO2023173613A1 (en) | Attendance check method and apparatus, and electronic device, storage medium and program product | |
KR101962635B1 (en) | Method for controlling mobile terminal supplying service enabling user to record and recollect remembrance data based on location information | |
CN114371904B (en) | Data display method and device, mobile terminal and storage medium | |
CN110910281A (en) | Hotel room-returning handling method and device based on robot | |
CN113821744A (en) | Visitor guiding method and device, electronic equipment and storage medium | |
CN116499489A (en) | Man-machine interaction method, device, equipment and product based on map navigation application | |
JP6576181B2 (en) | Event support system | |
CN114296627A (en) | Content display method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |