CN114384848B - Interaction method, interaction device, electronic equipment and storage medium
- Publication number: CN114384848B (application CN202210042513.3A)
- Authority: CN (China)
- Prior art keywords: target, hand, state, target object, equipment
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/04—Programme control other than numerical control, i.e. in sequence controllers or logic controllers
- G05B19/042—Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
- G05B19/0423—Input/output
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/20—Pc systems
- G05B2219/25—Pc structure of the system
- G05B2219/25257—Microcontroller
Abstract
The disclosure relates to an interaction method, an interaction device, electronic equipment and a storage medium. The interaction method includes: acquiring a target image containing a target object and determining an operation area in the target image; identifying, in the target image, position information of a hand of the target object within the operation area and a state of the hand; determining a selected target device in a scene where the target object is located based on the position information of the hand within the operation area and the state of the hand; and controlling the target device to give feedback, the feedback indicating that the target device has been selected. Because the interaction is carried out through computer vision, the embodiments of the disclosure reduce the possibility that a control instruction disturbs others and can intuitively show the user which target device has been selected. In addition, the user does not need to touch an application program on an electronic device while selecting the target device, so the process of selecting the target device in the embodiments of the disclosure is more convenient.
Description
Technical Field
The disclosure relates to the field of human-computer interaction, and in particular to an interaction method, an interaction device, electronic equipment and a storage medium.
Background
With the development of the Internet of Things, more and more devices can be controlled over the Internet, for example lamps, air conditioners and switches. When multiple devices share the same space, the directivity of a user's control instructions is often not intuitive. For example, where several smart air conditioners are installed in an office area, a user may need to try several times in a mobile phone application and observe the working states of the air conditioners before confirming which one executed the instruction, which greatly complicates the process of controlling the devices.
Disclosure of Invention
The disclosure provides a technical solution for interaction.
According to an aspect of the present disclosure, there is provided an interaction method including: acquiring a target image containing a target object, and determining an operation area in the target image; identifying position information of a hand of the target object in the target image in the operation area and a state of the hand; determining a selected target device in a scene where the target object is located based on position information of a hand of the target object in the operation area and a state of the hand; and controlling the target equipment to perform feedback, wherein the feedback indicates that the target equipment is selected.
In a possible implementation manner, the determining, based on the position information of the hand of the target object in the operation area and the state of the hand, the selected target device in the scene where the target object is located includes: determining position information of the hand in the operation area in response to the state of the hand in the operation area being in a preset state; and determining the selected target equipment in the scene where the target object is located based on the position information of the hand of the target object in the operation area.
In a possible implementation manner, the determining, based on the position information of the hand of the target object in the operation area, the selected target device in the scene where the target object is located includes: determining a device region corresponding to the position information in the operation region based on the position information; wherein the device region corresponds to at least one device; and using the equipment corresponding to the equipment area as the selected target equipment in the scene where the target object is located.
In a possible implementation manner, the determining, based on the location information, a device area corresponding to the location information in the operation area includes: acquiring area coordinates of an equipment area corresponding to at least one piece of equipment; the region coordinates are mapped to the operation region to determine at least one device region in the operation region.
In a possible implementation manner, the interaction method further comprises: and controlling the target equipment to execute corresponding actions according to the state of the hand in the target image.
In a possible implementation manner, the controlling the target device to execute the corresponding action according to the state of the hand in the target image includes: determining an adjustment parameter of the target device based on a state parameter of the hand in the first state in response to the state of the hand being in the first state; and/or determining an adjustment time of the target device based on a duration of time the hand is in the first state.
In one possible embodiment, the adjustment parameters include: at least one of brightness, volume, wind speed, temperature, moving object position, and moving direction.
In a possible implementation manner, the target device is provided with a feedback component, and the controlling the target device to perform feedback includes: and controlling a feedback component corresponding to the target equipment to perform feedback.
In a possible implementation manner, the interaction method further comprises: and displaying the state of the hand corresponding to the execution action of the target equipment through a screen.
In a possible implementation manner, the interaction method further comprises: and displaying the working state corresponding to the target equipment through a screen.
According to an aspect of the present disclosure, there is provided an interaction device including: the target image acquisition module is used for acquiring a target image containing a target object and determining an operation area in the target image; an information identifying module for identifying position information of a hand of the target object in the target image in the operation area and a state of the hand; a target device determining module, configured to determine a selected target device in a scene where the target object is located, based on position information of a hand of the target object in the operation area and a state of the hand; and the target equipment control module is used for controlling the target equipment to carry out feedback, wherein the feedback indicates that the target equipment has been selected.
According to an aspect of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored in the memory to perform the interaction method described above.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described interaction method.
In the embodiments of the disclosure, a target image containing a target object is acquired and an operation area in the target image is determined, so that the interaction is carried out through computer vision and the possibility that a control instruction disturbs others is reduced; the position information of the hand of the target object within the operation area and the state of the hand are then identified in the target image; the selected target device in the scene where the target object is located is determined based on that position information and hand state; and finally the target device is controlled to give feedback, so that the selected target device is shown to the user intuitively. In addition, the user does not need to touch an application program on an electronic device while selecting the target device, so the process of selecting the target device in the embodiments of the disclosure is more convenient.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the technical aspects of the disclosure.
Fig. 1 shows a flow chart of an interaction method according to an embodiment of the present disclosure.
Fig. 2 shows a flow chart of an interaction method according to an embodiment of the present disclosure.
FIG. 3 illustrates a reference schematic diagram of a spatial scene prior to two-dimensional modeling in accordance with an embodiment of the present disclosure.
FIG. 4 illustrates a reference schematic diagram of a two-dimensional modeled spatial scene according to an embodiment of the disclosure.
Fig. 5 shows a block diagram of an interaction device according to an embodiment of the present disclosure.
Fig. 6 shows a block diagram of an electronic device, according to an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" is herein merely an association relationship describing an associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
In the related art, control of a smart device depends on a separate account and an application program, and when other people want to control the smart device, they usually do so through the original switch or through a specific device on which the application is installed. This arrangement has the following problems. 1. The directivity of a control instruction is not obvious: it is hard for a user to know which real smart device an original switch, or a virtual switch in the specific device, actually controls, and the user can often only find out by trying the switches repeatedly. 2. The user's interaction path is long: the user must first open a specific application program and then select the control-instruction menu of the target device within it, which is not convenient. The related art also includes voice-control schemes, but the directivity of control instructions remains ambiguous, and using voice control in a space shared by several people easily disturbs others.
In view of this, the embodiments of the present disclosure provide an interaction method: a target image containing a target object is acquired and an operation area in the target image is determined, so that the interaction is carried out through computer vision and the possibility that a control instruction disturbs others is reduced; the position information of the hand of the target object within the operation area and the state of the hand are then identified in the target image; the selected target device in the scene where the target object is located is determined based on that position information and hand state; and finally the target device is controlled to give feedback, so that the selected target device is shown to the user intuitively. In addition, the user does not need to touch an application program on an electronic device while selecting the target device, so the process of selecting the target device in the embodiments of the disclosure is more convenient.
For example, the above interaction method may be performed by an electronic device such as a terminal device or a server. The terminal device (also referred to as a computing device) may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, or the like, and the method may be implemented by a processor invoking computer-readable instructions stored in a memory. For example, the terminal device may be an intelligent control terminal in an Internet-of-Things scene, and the intelligent control terminal can send control instructions to the smart devices on the same Internet of Things.
Referring to fig. 1, fig. 1 shows a flowchart of an interaction method according to an embodiment of the disclosure, the interaction method including:
Step S100, a target image containing a target object is acquired, and an operation area in the target image is determined. For example, the target image may be collected by a camera, and the target object may be a target person, for example: a person in the foreground in the target image, or a designated person in the target image, etc. In one example, the operation region may be one effective region in which a corresponding operation is performed based on the hand of the target object, in other words, when the hand of the target object is in the operation region, various subsequent operations may be performed.
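As a minimal sketch (in Python, assuming OpenCV is available; the function names and the centered rectangular region are illustrative choices, not part of the disclosure), acquiring a target image and defining an operation area might look like this:

```python
import cv2  # OpenCV, assumed available for camera capture


def capture_target_image(camera_index: int = 0):
    """Grab one frame from a camera to use as the target image."""
    capture = cv2.VideoCapture(camera_index)
    ok, frame = capture.read()
    capture.release()
    if not ok:
        raise RuntimeError("failed to read a frame from the camera")
    return frame


def operation_area(frame, margin_ratio: float = 0.1):
    """Define the operation area as a centered rectangle inside the frame.

    The margin is an illustrative choice; the disclosure only requires the
    operation area to be an effective region for hand operations.
    """
    height, width = frame.shape[:2]
    x0, y0 = int(width * margin_ratio), int(height * margin_ratio)
    x1, y1 = int(width * (1 - margin_ratio)), int(height * (1 - margin_ratio))
    return (x0, y0, x1, y1)
```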
Step S200, identifying position information of a hand of the target object in the operation area and a state of the hand in the target image. For example, the target image may be detected by a gesture detection algorithm or a machine learning model in the related art, and then the position information and the state of the hand are determined.
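One way to obtain the hand position and a coarse state label is with an off-the-shelf hand-landmark detector; the sketch below uses MediaPipe Hands as an assumed example, and the single-finger heuristic in classify_state stands in for whatever gesture classifier an implementation actually uses.

```python
import cv2
import mediapipe as mp  # assumed hand-landmark library; any detector could be substituted

_hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)


def classify_state(landmarks) -> str:
    """Rough placeholder heuristic: 'point' if only the index finger is extended,
    otherwise 'open'. A real system would use a trained gesture classifier."""
    def extended(tip, pip):
        return landmarks[tip].y < landmarks[pip].y  # image y grows downward
    index_up = extended(8, 6)
    others_up = any(extended(t, p) for t, p in ((12, 10), (16, 14), (20, 18)))
    return "point" if index_up and not others_up else "open"


def detect_hand(frame):
    """Return ((x, y) index-fingertip pixel position, state label), or None if no hand."""
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    results = _hands.process(rgb)
    if not results.multi_hand_landmarks:
        return None
    height, width = frame.shape[:2]
    landmarks = results.multi_hand_landmarks[0].landmark
    tip = landmarks[8]  # index fingertip in MediaPipe's landmark numbering
    return (int(tip.x * width), int(tip.y * height)), classify_state(landmarks)
```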
Step S300, determining a selected target device in a scene where the target object is located based on the position information of the hand of the target object in the operation area and the state of the hand.
The target device may be a smart device connected, through the Internet of Things, to the electronic device executing the embodiments of the present application, for example: lamps, curtains, switches, air conditioners, speakers, televisions and the like. The target image can be collected by a camera, which may be built into the electronic device or be an external camera connected to it. For example, for device operational stability, at least one of the following constraints may be added: 1. The target image contains only one gesture. 2. At any one time, only one device can be effectively interacted with.
Referring to fig. 2, fig. 2 shows a flowchart of an interaction method according to an embodiment of the disclosure, in one possible implementation, step S300 may include:
Step S310, determining position information of the hand in the operation area in response to the state of the hand in the operation area being a preset state. For example, the preset state may be a single-selection state (for example, an index-finger click) for selecting one target device, a multi-selection state (for example, sliding the palm) for continuously selecting a plurality of target devices, or a click state (for example, a knuckle click) for intermittently selecting a plurality of target devices. Different preset states may be configured in advance to correspond to the single-selection, multi-selection and click modes, so that the user's operation mode can be determined by recognizing different gestures.
For example, to allow a plurality of target devices to be selected at one time, a dedicated multi-selection state may be set: when the hand is in the multi-selection state, different devices can be selected in one pass by moving the hand, and they can then be adjusted with the same operation gesture in a subsequent step. For example, if a projector is to be used in a conference room containing several smart curtains, the user can select all the curtains by moving the hand in the multi-selection state (for example, with the palm open) in front of the camera, so that the movement path of the multi-selection gesture passes through several device areas, and can then lower all the curtains with one operation gesture (i.e. the hand state corresponding to an action to be executed by the target device, described later), saving the user time in adjusting the devices.
For example, a dedicated click state may be set: moving the hand in the click state sequentially selects devices whose device areas are not adjacent, and they are then adjusted with the same operation gesture in a subsequent step. For example, in the conference-room scene, if device areas of other devices lie between the device areas of the smart curtains, the user cannot select the curtains continuously with the multi-selection gesture; instead, the user can select the curtains one by one by moving the hand in the click state (for example, a knuckle click), and then lower all of them with one operation gesture, which reduces the probability of misoperating other devices.
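As an illustration of how the three selection modes above might be wired to recognized hand states, the following sketch defines an assumed mapping (the gesture names are placeholders, not terms from the disclosure):

```python
from enum import Enum
from typing import Optional


class SelectionMode(Enum):
    SINGLE = "single"  # select exactly one target device
    MULTI = "multi"    # sweep across several adjacent device areas
    CLICK = "click"    # tap to pick several non-adjacent device areas


# Assumed mapping from recognized hand states to selection modes.
PRESET_STATE_TO_MODE = {
    "index_click": SelectionMode.SINGLE,
    "open_palm": SelectionMode.MULTI,
    "knuckle_click": SelectionMode.CLICK,
}


def selection_mode(hand_state: str) -> Optional[SelectionMode]:
    """Return the selection mode for a hand state, or None if the state is not
    one of the preset selection states."""
    return PRESET_STATE_TO_MODE.get(hand_state)
```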
Step S320, determining the selected target device in the scene where the target object is located based on the position information of the hand of the target object in the operation area. Illustratively, the position information may include coordinate information. For example, a coordinate system can be established based on the operation area and the center of the hand regarded as a coordinate point; the target device can then be determined by checking which device area the coordinate point falls into. In one example, this step may include: determining, based on the position information, a device area corresponding to the position information in the operation area, wherein each device area corresponds to at least one device; and using the device corresponding to that device area as the selected target device in the scene where the target object is located. In the embodiments of the present disclosure, the operation area may contain at least one device area, each corresponding to a target device, and the target device the user wants to operate can be determined by comparing the device areas with the position information of the hand, thereby enabling intelligent operation of the devices.
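A minimal sketch of the device-area lookup described above, assuming rectangular device areas expressed in operation-area pixel coordinates (the disclosure itself allows arbitrary region shapes):

```python
from typing import Dict, Optional, Tuple

Rect = Tuple[int, int, int, int]  # (x0, y0, x1, y1) in operation-area pixels


def device_at(position: Tuple[int, int], device_regions: Dict[str, Rect]) -> Optional[str]:
    """Return the id of the device whose area contains the hand position,
    or None if the position falls outside every device area."""
    x, y = position
    for device_id, (x0, y0, x1, y1) in device_regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return device_id
    return None


# Usage sketch with made-up regions for two devices.
regions = {"curtain_left": (40, 60, 180, 300), "tv": (200, 120, 420, 300)}
selected = device_at((250, 200), regions)  # -> "tv"
```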
Referring to fig. 3 and 4, fig. 3 illustrates a reference schematic diagram of a spatial scene before two-dimensional modeling according to an embodiment of the present disclosure, and fig. 4 illustrates a reference schematic diagram of the spatial scene after two-dimensional modeling according to an embodiment of the present disclosure. In the embodiment of the disclosure, a user may choose a viewing angle suitable for interacting with a plurality of devices in the current spatial scene (such as the photographing viewing angle in fig. 3), and the user or the manufacturer may model the plurality of devices in two dimensions (such as the television area and the several curtain areas in fig. 4) or three dimensions from that viewing angle; the placement space containing the device areas is modeled in two or three dimensions accordingly. For example, in two-dimensional modeling the placement space containing the device areas may appear as an ellipse or a polygon (as shown in fig. 4); in three-dimensional modeling it may appear as an ellipsoid, a polyhedron or the like. The specific shape of each device area is not limited here and may be determined according to actual modeling requirements, as long as each device area corresponds to one device. In one example, the placement space may be divided into a corresponding number of device areas according to the position of each device within it; in other words, the device areas may completely or partially fill the placement space, depending on the modeling requirements. The two- or three-dimensional placement space is then used as the operation area and mapped into the target image; the proportional relationship between the operation area and the target image is not limited in the embodiments of the present disclosure and can be determined by the user or developer according to the actual situation.
In one example, the device areas in the operation area may be determined by: acquiring the area coordinates of the device area corresponding to at least one device, and mapping the area coordinates to the operation area to determine at least one device area in the operation area. The area coordinates may be the coordinates of the boundary points of a device area in the placement space (for example, the coordinates of each vertex of a polygon); that is, the boundary points of the device areas can be mapped to the operation area so that the target image is divided into a plurality of device areas, after which it is determined in which device area the hand in the preset state is located. For example, when the first size of the two-dimensional spatial scene formed by the device areas does not match the second size of the operation area, the boundary-point coordinates of each device area may first be scaled, with the scaling ratio determined by the ratio of the second size to the first size, and the scaled boundary-point coordinates are then mapped onto the operation area to determine the device areas in the operation area. In one example, if the shape of the placement space differs from the shape of the operation area (for example, the placement space is a two-dimensional ellipse and the operation area is a rectangle), their centers may be aligned before establishing the correspondence between the device areas in the placement space and the operation area. In one example, the accuracy of the mapping may be improved by setting the shooting orientation of the camera so that the viewing angle of the two-dimensional spatial scene remains parallel to the viewing angle of the target image.
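The scaling step described above might, under the assumption of axis-aligned scaling between the modeled scene and the operation area, look like the following sketch:

```python
from typing import List, Tuple

Point = Tuple[float, float]


def map_region_to_operation_area(boundary: List[Point],
                                 scene_size: Tuple[float, float],
                                 area_size: Tuple[float, float]) -> List[Point]:
    """Scale a device area's boundary points from the modeled scene (first size)
    to the operation area (second size). A shared origin is assumed here; a real
    system might instead align the centers of the two spaces, as noted above."""
    sx = area_size[0] / scene_size[0]
    sy = area_size[1] / scene_size[1]
    return [(x * sx, y * sy) for x, y in boundary]


# Example: a polygonal curtain area modeled in a 1000x600 scene, mapped into a
# 640x360 operation area.
curtain = [(100, 50), (260, 50), (260, 400), (100, 400)]
mapped = map_region_to_operation_area(curtain, (1000, 600), (640, 360))
```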
For example, whether the hand is located in the device area may be determined according to a preset rule, for example, if the hand is in the index finger click state, it may be determined that the hand is located in a certain device area when the position of the index finger tip is within the device area.
Step S400, controlling the target device to give feedback; the feedback indicates that the target device has been selected.
In one possible implementation, step S400 may include: controlling a feedback component corresponding to the target device to give feedback. Illustratively, the feedback component may include a speaker, a screen, a light strip and the like; embodiments of the present disclosure are not limited in this respect. In one example, the feedback component may be a light-emitting component, and controlling the target device to give feedback may include controlling the light-emitting component of the target device to emit light in a mode different from that of the other devices. The light-emitting component may include a light strip surrounding the target device, and interaction feedback between the target device and the user can be achieved by controlling it independently. For example, after a device is determined to be the target device, its light strip may be lit continuously while the other devices' light strips stay off; or the target device's light strip may flash while the others stay off or remain continuously lit. In this way, even a user who does not know the exact location of each "device area" can clearly tell which device is currently selected from the feedback of the devices' light-emitting components.
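A sketch of the light-strip feedback, assuming the devices are reachable over MQTT; the topic layout and payload fields are invented for illustration and are not specified by the disclosure:

```python
import json

import paho.mqtt.publish as publish  # assumed transport; any device protocol would do


def show_selection(selected_id: str, all_device_ids, broker: str = "broker.local") -> None:
    """Light the selected device's light strip continuously and switch the others off."""
    for device_id in all_device_ids:
        payload = {"light_strip": "on" if device_id == selected_id else "off"}
        publish.single(f"home/{device_id}/feedback", json.dumps(payload), hostname=broker)
```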
In one example, existing components of the target device may also be reused for feedback, for example: the display screen of a television or its built-in speaker may serve as the feedback component, and the user can be prompted that the target device has been selected by lighting up the screen or playing a prompt tone.
In operation, the user can keep the hand in the preset state while observing the light emitted by each device's light-emitting component to determine which device is currently selected. If the selected device is not the one the user wants to operate, the user can move the hand to a different position, put it into the preset state again, and keep observing the light-emitting components until the intended device is selected.
The layout of the device areas, such as shown in fig. 4, may also be displayed, for example, through a display screen to assist the user in quickly selecting the device that he wants to operate.
In one example, the position of the hand may also be displayed in the layout of the device areas on the display screen, i.e., the user can confirm on the screen whether the target device was successfully selected in this operation. For example, if the light-emitting component is damaged, the user can still learn from the display screen whether the target device was selected.
According to the embodiments of the disclosure, a target image containing a target object is acquired and an operation area in the target image is determined, so that the interaction is carried out through computer vision and the possibility that a control instruction disturbs others is reduced; the position information of the hand of the target object within the operation area and the state of the hand are then identified; the selected target device in the scene where the target object is located is determined based on that position information and hand state; and finally the target device is controlled to give feedback, so that the selected target device is shown to the user intuitively. In addition, the user does not need to touch an application program on an electronic device while selecting the target device, so the process of selecting the target device in the embodiments of the disclosure is more convenient.
With continued reference to fig. 2, in one possible implementation, the interaction method may further include:
Step S500, controlling the target device to execute a corresponding action according to the state of the hand in the target image. For example, the target image may be analyzed by a gesture detection algorithm or a machine learning model in the related art to determine whether it contains a hand in a specific state, and thus which specific operation to perform on the target device; the specific states are not limited in the embodiments of the present disclosure. For example, if the electronic device has a display screen, it may display the hand states that correspond to the actions the target device can execute, prompting the user which hand states can operate the target device and lowering the threshold for using the interaction method. For example, while the target device is performing an action, its light-emitting component may be controlled to stay lit or to give breathing-light feedback to the user. In one example, when a hand in the specific state is no longer present in the target image, the feedback of the target device may be turned off (for example, by turning off the light-emitting component).
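A small dispatch sketch for step S500: operation-gesture states are mapped to device actions, and feedback is turned off when no operation gesture is present. The gesture names and the send_command helper are assumptions for illustration.

```python
from typing import Callable, Dict


def send_command(device_id: str, command: str, **params) -> None:
    """Placeholder for whatever transport (MQTT, HTTP, vendor SDK) reaches the device."""
    print(f"-> {device_id}: {command} {params}")


ACTION_HANDLERS: Dict[str, Callable[[str], None]] = {
    "rotate": lambda dev: send_command(dev, "adjust", parameter="volume"),
    "translate": lambda dev: send_command(dev, "adjust", parameter="brightness"),
    "grip_to_open": lambda dev: send_command(dev, "toggle_power"),
}


def execute_action(target_device: str, hand_state: str) -> None:
    """Run the action associated with the hand state, or clear feedback otherwise."""
    handler = ACTION_HANDLERS.get(hand_state)
    if handler is not None:
        handler(target_device)
    else:
        # No operation gesture in the target image: e.g. turn the light strip off.
        send_command(target_device, "feedback_off")
```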
In one possible implementation, step S500 may include: in response to the state of the hand being a first state, determining an adjustment parameter of the target device based on a state parameter of the hand in the first state; and/or determining an adjustment time of the target device based on the duration for which the hand stays in the first state. Illustratively, the first state may be a rotation state, a translation state, a grip state or the like. The adjustment parameters include at least one of brightness, volume, wind speed, temperature, moving-target position and moving direction. For example, brightness may include light brightness, display brightness and so on; volume includes playback volume; wind speed includes air-conditioner wind speed; temperature includes air-conditioner temperature. The moving-target position includes the shading position after a curtain is adjusted, the air-outlet position after an air conditioner is adjusted, the shooting position after a camera is adjusted, the illumination position after a lamp is adjusted, and so on. The moving direction includes the moving direction of a curtain, the rotation direction of an air conditioner's air-outlet baffle, the rotation direction of a camera, the rotation direction of a light-emitting component in a lamp, and so on. In other words, the target device may adjust its region of action by translating or rotating its own components during the motion. The embodiments of the disclosure are not limited in this respect. Several different first states are described below for reference.
In one possible implementation, step S500 may include: in response to the state of the hand being a rotation state, determining the adjustment amplitude of an adjustment parameter of the target device based on the rotation angle of the hand in the rotation state, and determining the adjustment time of the target device based on the duration for which the hand stays in the rotation state. The adjustment parameter may include at least one of light brightness, playback volume, air-conditioner wind speed and air-conditioner temperature. For example, the rotation state can be a hand state in which the forearm is kept still and the index finger rotates clockwise or anticlockwise. For example, the rotation angle may be detected based on a three-axis coordinate system in the related art (a horizontal axis, a vertical axis, and a circular axis running around them), in which case the rotation angle may be expressed as the distance the index fingertip travels along the circular axis (or as the swing angle of the index finger across the three axes). The rotation angle may also be detected based on a two-axis coordinate system (the horizontal and vertical axes of the image), in which case the rotation angle may be expressed as the angle of the index fingertip in that coordinate system; the embodiments of the disclosure do not limit how the rotation angle is determined. By relating the rotation angle and the duration, the embodiments of the disclosure make the adjustment process more intuitive and controllable when the adjustment parameter is a continuous variable.
For example, the rotation angle may be positively correlated with the adjustment amplitude of the adjustment parameter. For example, if the rotation angle is 50 degrees, the duration is 5 seconds, and the device is a smart speaker, then starting from the moment the user's finger comes to rest, the smart speaker increases (or decreases) the volume at 5 dB per second for 5 seconds. In the same scene, if the rotation angle is 90 degrees and the duration is 5 seconds, the smart speaker increases (or decreases) the volume at 10 dB per second for 5 seconds. These values are merely exemplary; the specific adjustment range may be set by the manufacturer or the user according to the actual situation.
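Interpreting the numbers above literally, a linear mapping from rotation angle to adjustment rate (an assumption; any positively correlated mapping satisfies the description) could be sketched as:

```python
def volume_rate_db_per_s(rotation_angle_deg: float) -> float:
    """Map rotation angle to an adjustment rate, positively correlated as in the
    example above: 50 degrees -> 5 dB/s, 90 degrees -> 10 dB/s (linear fit assumed)."""
    return 5.0 + (rotation_angle_deg - 50.0) * (10.0 - 5.0) / (90.0 - 50.0)


def total_volume_change_db(rotation_angle_deg: float, duration_s: float) -> float:
    """Total change applied over the time the hand stays in the rotation state."""
    return volume_rate_db_per_s(rotation_angle_deg) * duration_s


# 50 degrees held for 5 s -> 25 dB in total; 90 degrees held for 5 s -> 50 dB in total.
print(total_volume_change_db(50, 5), total_volume_change_db(90, 5))
```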
In one possible implementation, step S500 may include: in response to the state of the hand being a translation state, determining the adjustment amplitude of an adjustment parameter of the target device based on the distance the hand moves in the translation state, and determining the adjustment time of the target device based on the duration for which the hand stays in the translation state. For example, the translation state may be translating the index finger. The moving distance may be determined using the three-axis or two-axis coordinate system described above, which is not repeated here. By relating the moving distance and the duration, the embodiments of the disclosure make the adjustment of continuous-variable parameters more intuitive and controllable.
For example, the moving distance may be positively correlated with the adjustment amplitude of the adjustment parameter. For example, if the moving distance is 100 px (pixels), the duration is 5 seconds, and the device is a smart speaker, then starting from the moment the user's finger comes to rest, the smart speaker increases (or decreases) the volume at 5 dB per second for 5 seconds. In the same scene, if the moving distance is 200 px and the duration is 5 seconds, the smart speaker increases (or decreases) the volume at 10 dB per second for 5 seconds. These values are merely exemplary; the specific adjustment range may be set by the manufacturer or the user according to the actual situation.
In one possible implementation, step S500 may include: determining an adjustment distance of an adjustment parameter of the target device based on a dragging distance of the hand in the grip state in response to the state of the hand in the grip state; and determining the adjusting direction of the adjusting parameter based on the dragging direction of the hand in the gripping state. The above-mentioned adjustment parameters include: at least one of a curtain position, an air conditioner wind direction, a camera position and an illumination position. For example, the dragging distance and the dragging direction may be determined by the three-axis coordinate system or the two-axis coordinate system, which is not described herein.
For example, the drag distance may be positively correlated with the adjustment distance of the adjustment parameter, and the drag direction may correspond to the adjustment direction. For example, if the drag distance is 100 px, the drag direction is downward, and the device is a smart curtain, the curtain is lowered by 2 cm. In the same scene, if the drag distance is 200 px and the drag direction is toward the upper right, the curtain is raised by 4 cm.
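Taking the curtain example at face value (100 px of drag corresponding to 2 cm of travel, i.e. an assumed scale of 2 cm per 100 px), the drag-to-movement conversion might be sketched as:

```python
def curtain_move_cm(drag_distance_px: float, drag_direction: str) -> float:
    """Convert a drag in the grip state to curtain travel, using the assumed scale
    of 2 cm per 100 px. Positive values lower the curtain, negative values raise it."""
    distance_cm = drag_distance_px * 2.0 / 100.0
    return distance_cm if drag_direction == "down" else -distance_cm


print(curtain_move_cm(100, "down"))  # 2.0  -> lower the curtain by 2 cm
print(curtain_move_cm(200, "up"))    # -4.0 -> raise the curtain by 4 cm
```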
In one possible implementation, step S500 may include: the switch state of the target device is switched in response to the state of the hand changing from the grip state to the open state. For example: the device is an intelligent television, the switch state is powered off, the hand of a user is changed from the gripping state to the opening state, and the switch state of the intelligent television is changed from the power off to the power on.
The various possible implementations described above can be arranged and combined according to actual requirements. For example, the manufacturer can integrate the combined interaction method directly into the electronic device to reduce the user's operating difficulty, or the various possible implementations can be displayed on the screen of the electronic device so that the user can customize the interaction method to better fit actual needs; the embodiments of the disclosure are not limited in this respect.
Several scenes using the above interaction method are listed here for reference. 1. In an office scene, a user can conveniently control the air conditioner, lights, curtains, television and video-conference system from a chair through specific gestures; even when someone knocks on the door, the user can open it with a gesture (selecting the door with a selection gesture and opening it with the door-opening gesture) instead of spending time looking for the corresponding device in an application. 2. In a conference-room scene, when the projector is needed, the user can accomplish tasks that would otherwise require several people, such as projection remote control and drawing the curtains, with just a few gestures (selecting the projector, the curtains and so on with selection gestures and then operating each with the corresponding gesture), which greatly saves labor. 3. In a home scene, a user sitting on the sofa can conveniently control the lamps and the air conditioner through specific gestures, and can even adjust the air conditioner's wind direction, the brightness and color of the lights, the volume of the speaker, the on/off state of the game console and so on, making the human-computer interaction flow convenient.
It will be appreciated that the above-mentioned method embodiments of the present disclosure may be combined with each other to form combined embodiments without departing from their principles and logic; for brevity, these combinations are not described again in the present disclosure. It will also be appreciated by those skilled in the art that, in the above methods of the embodiments, the specific order of execution of the steps should be determined by their function and possible inherent logic.
In addition, the disclosure further provides an interaction device, an electronic device, a computer-readable storage medium and a program, all of which can be used to implement any of the interaction methods provided in the disclosure; for the corresponding technical solutions and descriptions, refer to the method sections, which are not repeated here.
Fig. 5 shows a block diagram of an interaction device according to an embodiment of the present disclosure, as shown in fig. 5, the interaction device 100 includes: the target image acquisition module 110 is configured to acquire a target image including a target object, and determine an operation region in the target image. The information identifying module 120 is configured to identify, in the target image, position information of a hand of the target object in the operation area and a state of the hand. The target device determining module 130 is configured to determine a selected target device in a scene where the target object is located based on the position information of the hand of the target object in the operation area and the state of the hand. A target device control module 140 to control the target device to feedback, the feedback indicating that the target device has been selected.
In a possible implementation manner, the determining, based on the position information of the hand of the target object in the operation area and the state of the hand, the selected target device in the scene where the target object is located includes: determining position information of the hand in the operation area in response to the state of the hand in the operation area being in a preset state; and determining the selected target equipment in the scene where the target object is located based on the position information of the hand of the target object in the operation area.
In a possible implementation manner, the determining, based on the position information of the hand of the target object in the operation area, the selected target device in the scene where the target object is located includes: determining a device region corresponding to the position information in the operation region based on the position information; wherein the device region corresponds to at least one device; and using the equipment corresponding to the equipment area as the selected target equipment in the scene where the target object is located.
In a possible implementation manner, the determining, based on the location information, a device area corresponding to the location information in the operation area includes: acquiring area coordinates of an equipment area corresponding to at least one piece of equipment; the region coordinates are mapped to the operation region to determine at least one device region in the operation region.
In a possible implementation manner, the interaction method further comprises: and controlling the target equipment to execute corresponding actions according to the state of the hand in the target image.
In a possible implementation manner, the controlling the target device to execute the corresponding action according to the state of the hand in the target image includes: determining an adjustment parameter of the target device based on a state parameter of the hand in the first state in response to the state of the hand being in the first state; and/or determining an adjustment time of the target device based on a duration of time the hand is in the first state.
In one possible embodiment, the adjustment parameters include: at least one of brightness, volume, wind speed, temperature, moving object position, and moving direction.
In a possible implementation manner, the target device is provided with a feedback component, and the controlling the target device to perform feedback includes: and controlling a feedback component corresponding to the target equipment to perform feedback.
In a possible implementation manner, the interaction method further comprises: and displaying the state of the hand corresponding to the execution action of the target equipment through a screen.
In a possible implementation manner, the interaction method further comprises: and displaying the working state corresponding to the target equipment through a screen.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementations thereof may refer to descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The disclosed embodiments also provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method. The computer readable storage medium may be a volatile or nonvolatile computer readable storage medium.
The embodiment of the disclosure also provides an electronic device, which comprises: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored in the memory to perform the above method.
Embodiments of the present disclosure also provide a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when run in a processor of an electronic device, performs the above method.
The electronic device may be provided as a terminal, or other form of device.
Fig. 6 shows a block diagram of an electronic device 800, according to an embodiment of the disclosure. For example, the electronic device 800 may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, or the like.
Referring to fig. 6, an electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen between the electronic device 800 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessment of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect an on/off state of the electronic device 800, a relative positioning of the components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in position of the electronic device 800 or a component of the electronic device 800, the presence or absence of a user's contact with the electronic device 800, an orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a photosensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communication between the electronic device 800 and other devices, either wired or wireless. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (Wi-Fi), a second-generation mobile communication technology (2G), a third-generation mobile communication technology (3G), a fourth-generation mobile communication technology (4G), Long Term Evolution (LTE) of the universal mobile communication technology, a fifth-generation mobile communication technology (5G), or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 804 including computer program instructions executable by processor 820 of electronic device 800 to perform the above-described methods.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: a portable computer disk, a hard disk, Random Access Memory (RAM), Read-Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM or flash memory), Static Random Access Memory (SRAM), a portable Compact Disc Read-Only Memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber optic cable), or an electrical signal transmitted through a wire.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to respective computing/processing devices, or to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives the computer readable program instructions from the network and forwards them for storage in a computer readable storage medium within the respective computing/processing device.
The computer program instructions for performing the operations of the present disclosure may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, Field Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), with state information of the computer readable program instructions, such that the electronic circuitry can execute the computer readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be implemented in hardware, software, or a combination thereof. In one alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, it is embodied as a software product, such as a Software Development Kit (SDK).
The descriptions of the embodiments above emphasize the differences between them; for parts that are identical or similar, the embodiments may be referred to one another, and such parts are not repeated here for brevity.
Those skilled in the art will appreciate that, in the methods of the specific embodiments described above, the order in which the steps are written does not imply a strict order of execution; the actual execution order should be determined by the functions of the steps and their possible internal logic.
If the technical solution of the present application involves personal information, a product applying the technical solution clearly states the personal information processing rules and obtains the individual's independent consent before processing the personal information. If the technical solution involves sensitive personal information, a product applying the technical solution obtains the individual's separate consent before processing the sensitive personal information and also satisfies the requirement of "explicit consent". For example, a clear and prominent sign may be placed at a personal information collection device, such as a camera, to inform people that they are entering the personal information collection range and that personal information will be collected; if an individual voluntarily enters the collection range, this is regarded as consent to the collection of their personal information. Alternatively, on a device that processes personal information, where obvious signs or messages are used to state the personal information processing rules, individual authorization may be obtained through a pop-up message, or the individual may be asked to upload their personal information. The personal information processing rules may include information such as the personal information processor, the purpose of processing, the processing method, and the types of personal information processed.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or their technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (10)
1. An interaction method, characterized in that the interaction method comprises:
acquiring a target image containing a target object, and determining an operation area in the target image;
identifying position information, in the operation area, of a hand of the target object in the target image, and a state of the hand;
determining a selected target device in a scene where the target object is located based on the position information of the hand of the target object in the operation area and the state of the hand;
controlling the target device to provide feedback, wherein the feedback indicates that the target device has been selected;
wherein the determining, based on the position information of the hand of the target object in the operation area and the state of the hand, the selected target device in the scene where the target object is located comprises:
determining position information of the hand in the operation area in response to the state of the hand in the operation area being in a preset state;
determining the selected target device in the scene where the target object is located based on the position information of the hand of the target object in the operation area;
wherein the determining, based on the position information of the hand of the target object in the operation area, the selected target device in the scene where the target object is located comprises:
determining, based on the position information, a device region corresponding to the position information in the operation area, wherein the device region corresponds to at least one device;
using the device corresponding to the device region as the selected target device in the scene where the target object is located;
wherein the determining, based on the position information, the device region corresponding to the position information in the operation area comprises:
acquiring region coordinates of a device region corresponding to at least one device;
mapping the region coordinates to the operation area to determine at least one device region in the operation area.
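By way of illustration only, the following is a minimal Python sketch of the device-selection flow recited in claim 1. The class and function names, the normalized region coordinates, and the preset hand state "pinch" are assumptions made for this example and are not features of the claim.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class DeviceRegion:
    device_id: str
    # Region coordinates already mapped into the operation area and
    # normalized to [0, 1]: (x_min, y_min, x_max, y_max). Illustrative only.
    bounds: Tuple[float, float, float, float]

def select_target_device(
    hand_position: Tuple[float, float],    # hand position within the operation area
    hand_state: str,                       # recognized state of the hand
    device_regions: List[DeviceRegion],
    preset_state: str = "pinch",           # assumed preset state
) -> Optional[str]:
    """Return the id of the selected target device, or None if nothing is selected."""
    # Only act when the hand is in the preset state.
    if hand_state != preset_state:
        return None
    # Find the device region that contains the hand position; the device
    # corresponding to that region is the selected target device.
    x, y = hand_position
    for region in device_regions:
        x0, y0, x1, y1 = region.bounds
        if x0 <= x <= x1 and y0 <= y <= y1:
            return region.device_id
    return None

# Example: a lamp region covers the left half of the operation area, a fan region the right half.
regions = [
    DeviceRegion("lamp", (0.0, 0.0, 0.5, 1.0)),
    DeviceRegion("fan", (0.5, 0.0, 1.0, 1.0)),
]
print(select_target_device((0.7, 0.4), "pinch", regions))  # -> fan
```

In this sketch the region coordinates of each device are assumed to have already been mapped into the operation area, which corresponds to the final mapping step of claim 1.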
2. The interaction method of claim 1, wherein the interaction method further comprises:
controlling the target device to perform a corresponding action according to the state of the hand in the target image.
3. The interaction method of claim 2, wherein the controlling the target device to perform the corresponding action according to the state of the hand in the target image comprises:
in response to the state of the hand being in a first state, determining an adjustment parameter of the target device based on a state parameter of the hand in the first state; and/or
determining an adjustment time of the target device based on a duration for which the hand is in the first state.
4. The interaction method of claim 3, wherein the adjustment parameter comprises at least one of: brightness, volume, wind speed, temperature, a moving object position, and a moving direction.
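The parameter adjustment described in claims 2 to 4 can be pictured with the short sketch below; treating the state parameter as a normalized hand-openness value and using a rate of 5 units per second are illustrative assumptions, not features of the claims.

```python
def compute_adjustment(state_param: float, duration_s: float,
                       rate_per_second: float = 5.0) -> float:
    """
    The adjustment applied to the target device grows with the hand's state
    parameter (assumed here to be a hand-openness value in [0, 1]) and with
    the duration for which the hand stays in the first state.
    """
    return rate_per_second * state_param * duration_s

# Example: hand 80% open, held in the first state for 2.5 s -> volume raised by 10 units.
new_volume = min(100.0, 40.0 + compute_adjustment(0.8, 2.5))
print(new_volume)  # 50.0
```

The same shape of computation would apply to the other adjustment parameters listed in claim 4, such as brightness, wind speed, or temperature.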
5. The interaction method according to any one of claims 1 to 4, wherein the target device is provided with a feedback component, and the controlling the target device to provide feedback comprises: controlling the feedback component corresponding to the target device to provide feedback.
6. The interaction method of any one of claims 1 to 4, wherein the interaction method further comprises: displaying, through a screen, the state of the hand corresponding to the action performed by the target device.
7. The interaction method of any one of claims 1 to 4, wherein the interaction method further comprises: displaying, through a screen, the working state corresponding to the target device.
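A brief sketch of the feedback and display behaviour of claims 5 to 7 follows; the IndicatorLight and SmartDevice classes, and the use of print as a stand-in for a screen, are assumptions introduced purely for illustration.

```python
class IndicatorLight:
    """Stand-in for the feedback component of claim 5 (for example, an LED on the device)."""
    def blink(self, times: int = 2) -> None:
        print(f"indicator light blinks {times} times")

class SmartDevice:
    def __init__(self, name: str, feedback: IndicatorLight):
        self.name = name
        self.feedback = feedback
        self.working_state = "standby"

    def acknowledge_selection(self) -> None:
        # Claim 5: feedback is produced by the feedback component of the selected device.
        self.feedback.blink()

    def render_status(self, hand_state: str) -> str:
        # Claims 6 and 7: show the recognized hand state and the device's
        # working state on a screen (here returned as a plain text line).
        return f"{self.name}: hand state = {hand_state}, working state = {self.working_state}"

fan = SmartDevice("fan", IndicatorLight())
fan.acknowledge_selection()
print(fan.render_status("pinch"))
```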
8. An interactive apparatus, characterized in that the interactive apparatus comprises:
a target image acquisition module, configured to acquire a target image containing a target object and determine an operation area in the target image;
an information identifying module, configured to identify position information, in the operation area, of a hand of the target object in the target image, and a state of the hand;
a target device determining module, configured to determine a selected target device in a scene where the target object is located based on the position information of the hand of the target object in the operation area and the state of the hand;
a target device control module, configured to control the target device to provide feedback, the feedback indicating that the target device has been selected;
wherein the determining, based on the position information of the hand of the target object in the operation area and the state of the hand, the selected target device in the scene where the target object is located comprises:
determining position information of the hand in the operation area in response to the state of the hand in the operation area being in a preset state;
determining the selected target device in the scene where the target object is located based on the position information of the hand of the target object in the operation area;
wherein the determining, based on the position information of the hand of the target object in the operation area, the selected target device in the scene where the target object is located comprises:
determining, based on the position information, a device region corresponding to the position information in the operation area, wherein the device region corresponds to at least one device;
using the device corresponding to the device region as the selected target device in the scene where the target object is located;
wherein the determining, based on the position information, the device region corresponding to the position information in the operation area comprises:
acquiring region coordinates of a device region corresponding to at least one device;
mapping the region coordinates to the operation area to determine at least one device region in the operation area.
9. An electronic device, comprising:
A processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to perform the interaction method of any of claims 1 to 7.
10. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the interaction method of any of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210042513.3A CN114384848B (en) | 2022-01-14 | 2022-01-14 | Interaction method, interaction device, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210042513.3A CN114384848B (en) | 2022-01-14 | 2022-01-14 | Interaction method, interaction device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114384848A CN114384848A (en) | 2022-04-22 |
CN114384848B true CN114384848B (en) | 2024-08-09 |
Family
ID=81201306
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210042513.3A Active CN114384848B (en) | 2022-01-14 | 2022-01-14 | Interaction method, interaction device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114384848B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117021117B (en) * | 2023-10-08 | 2023-12-15 | 电子科技大学 | Mobile robot man-machine interaction and positioning method based on mixed reality |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108377212A (en) * | 2017-01-30 | 2018-08-07 | 联发科技股份有限公司 | The control method and its electronic system of household electrical appliance |
CN112068698A (en) * | 2020-08-31 | 2020-12-11 | 北京市商汤科技开发有限公司 | Interaction method and device, electronic equipment and computer storage medium |
CN113486765A (en) * | 2021-06-30 | 2021-10-08 | 上海商汤临港智能科技有限公司 | Gesture interaction method and device, electronic equipment and storage medium |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101339489A (en) * | 2008-08-14 | 2009-01-07 | 炬才微电子(深圳)有限公司 | Human-computer interaction method, device and system |
CN101813922A (en) * | 2009-08-10 | 2010-08-25 | 李艳霞 | Intelligent control device with orientation identification function |
CN107493495B (en) * | 2017-08-14 | 2019-12-13 | 深圳市国华识别科技开发有限公司 | Interactive position determining method, system, storage medium and intelligent terminal |
KR20200113522A (en) * | 2019-03-25 | 2020-10-07 | 삼성전자주식회사 | Method for performing fucntion according to gesture input and electronic device performing thereof |
CN110471296B (en) * | 2019-07-19 | 2022-05-13 | 深圳绿米联创科技有限公司 | Device control method, device, system, electronic device and storage medium |
CN111949134A (en) * | 2020-08-28 | 2020-11-17 | 深圳Tcl数字技术有限公司 | Human-computer interaction method, device and computer-readable storage medium |
CN113190106B (en) * | 2021-03-16 | 2022-11-22 | 青岛小鸟看看科技有限公司 | Gesture recognition method and device and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN114384848A (en) | 2022-04-22 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||