CN106339093B - Pan-tilt head control method and device
- Publication number
- CN106339093B CN106339093B CN201610798151.5A CN201610798151A CN106339093B CN 106339093 B CN106339093 B CN 106339093B CN 201610798151 A CN201610798151 A CN 201610798151A CN 106339093 B CN106339093 B CN 106339093B
- Authority
- CN
- China
- Prior art keywords
- gesture
- angle
- control
- target
- preset
- Prior art date
- Legal status
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
Abstract
Embodiments of the invention provide a pan-tilt head control method and device, which solve the prior-art problems of long lag time and poor interaction experience when controlling a pan-tilt head. The method comprises: capturing an image through an image capture unit mounted on the pan-tilt head, and recognizing the captured image to obtain a recognition result, the recognition result indicating that a predetermined gesture exists in the captured image and that the predetermined gesture is a first predetermined gesture; starting a first control strategy corresponding to the first predetermined gesture based on the recognition result; obtaining a posture parameter of the first predetermined gesture from the captured image according to the first control strategy; and invoking a control instruction corresponding to the posture parameter, and executing the control instruction to control the pan-tilt head.
Description
Technical Field
The invention relates to the technical field of electronics, and in particular to a pan-tilt head control method and device.
Background
In the prior art, a pan-tilt head is controlled by pressing a key, clicking a virtual key, or operating a remote-control stick to set its movement direction and speed and to trigger shooting. Take clicking a virtual key in a graphical interactive interface with a mouse as an example: the user first focuses on the virtual-key area and clicks the key to drive the pan-tilt head, then shifts the view to the pan-tilt head itself to check whether it has reached the target, e.g., yawed 30 degrees. If the target has not been reached, the user clicks the virtual key again according to the current state of the pan-tilt head.
Therefore, with the prior-art control method, the pan-tilt head may have to be adjusted, observed, and readjusted many times before reaching the target. Moreover, during control, the user's gaze has to move back and forth between the control area and the pan-tilt head. The prior-art method therefore suffers from long lag time, poor interaction experience, and cumbersome operation.
Disclosure of Invention
Embodiments of the invention provide a pan-tilt head control method and device to solve the prior-art problems of long lag time and poor interaction experience when controlling a pan-tilt head.
In a first aspect, the present invention provides a pan-tilt head control method, comprising:
capturing an image through an image capture unit mounted on the pan-tilt head, and recognizing the captured image to obtain a recognition result, the recognition result indicating that a predetermined gesture exists in the captured image and that the predetermined gesture is a first predetermined gesture;
starting a first control strategy corresponding to the first predetermined gesture based on the recognition result;
obtaining a posture parameter of the first predetermined gesture from the captured image according to the first control strategy;
and invoking a control instruction corresponding to the posture parameter, and executing the control instruction to control the pan-tilt head.
Optionally, the posture parameter is a particle motion trajectory of the first predetermined gesture, and invoking the control instruction corresponding to the posture parameter comprises:
determining, according to the movement direction of the first predetermined gesture represented by the particle motion trajectory, a target yaw angle and/or a target pitch angle by which the pan-tilt head in a gesture control state needs to rotate, and invoking a first control instruction based on the target yaw angle and/or the target pitch angle so that the pan-tilt head rotates by the target yaw angle and/or the target pitch angle.
Optionally, determining the target yaw angle and/or the target pitch angle by which the pan-tilt head in the gesture control state needs to rotate comprises:
obtaining, from the movement distance of the first predetermined gesture represented by the particle motion trajectory, a first distance along the pitch axis and/or a second distance along the yaw axis;
and determining an angle corresponding to the first distance as the target yaw angle and/or an angle corresponding to the second distance as the target pitch angle.
Optionally, the posture parameter is a rigid-body motion trajectory of the first predetermined gesture, and invoking the control instruction corresponding to the posture parameter comprises:
determining, according to the rotation direction of the first predetermined gesture represented by the rigid-body motion trajectory in the plane of the yaw axis and the pitch axis, a target roll angle by which the pan-tilt head in the gesture control state needs to rotate, and invoking a second control instruction based on the target roll angle so that the pan-tilt head rotates by the target roll angle.
Optionally, determining the target roll angle by which the pan-tilt head in the gesture control state needs to rotate comprises:
determining, according to the rotation angle of the first predetermined gesture represented by the rigid-body motion trajectory in the plane of the yaw axis and the pitch axis, the rotation angle as the target roll angle.
Optionally, the posture parameter is the relative movement direction of the fingers of the first predetermined gesture, and invoking the control instruction corresponding to the posture parameter comprises:
invoking a third control instruction for zooming out a specific display object when the fingers move toward each other; or
invoking a fourth control instruction for zooming in the specific display object when the fingers move apart from each other.
Optionally, the posture parameter is the pointing direction of the first predetermined gesture, and invoking the control instruction corresponding to the posture parameter comprises:
determining, according to the pointing direction, that the pan-tilt head in the gesture control state needs to adjust its yaw angle and/or pitch angle, and invoking a fifth control instruction for adjusting the yaw angle and/or the pitch angle.
Optionally, the method further comprises:
determining whether the pan-tilt head is in the gesture control state;
and when the pan-tilt head is not in the gesture control state and detects a trigger gesture, controlling the pan-tilt head to enter the gesture control state.
In a second aspect, the present invention provides a pan-tilt head control device, comprising:
a recognition module, configured to capture an image through an image capture unit mounted on the pan-tilt head and to recognize the captured image to obtain a recognition result, the recognition result indicating that a predetermined gesture exists in the captured image and that the predetermined gesture is a first predetermined gesture;
a starting module, configured to start a first control strategy corresponding to the first predetermined gesture based on the recognition result;
an obtaining module, configured to obtain a posture parameter of the first predetermined gesture from the captured image according to the first control strategy;
and an invoking module, configured to invoke a control instruction corresponding to the posture parameter and to execute the control instruction to control the pan-tilt head.
Optionally, the posture parameter is a particle motion trajectory of the first predetermined gesture, and the invoking module is configured to determine, according to the movement direction of the first predetermined gesture represented by the particle motion trajectory, a target yaw angle and/or a target pitch angle by which the pan-tilt head in a gesture control state needs to rotate, and to invoke a first control instruction based on the target yaw angle and/or the target pitch angle so that the pan-tilt head rotates by the target yaw angle and/or the target pitch angle.
Optionally, the invoking module is configured to obtain, from the movement distance of the first predetermined gesture represented by the particle motion trajectory, a first distance along the pitch axis and/or a second distance along the yaw axis, and to determine an angle corresponding to the first distance as the target yaw angle and/or an angle corresponding to the second distance as the target pitch angle.
Optionally, the posture parameter is a rigid-body motion trajectory of the first predetermined gesture, and the invoking module is configured to determine, according to the rotation direction of the first predetermined gesture represented by the rigid-body motion trajectory in the plane of the yaw axis and the pitch axis, a target roll angle by which the pan-tilt head in the gesture control state needs to rotate, and to invoke a second control instruction based on the target roll angle so that the pan-tilt head rotates by the target roll angle.
Optionally, the invoking module is configured to determine, according to the rotation angle of the first predetermined gesture represented by the rigid-body motion trajectory in the plane of the yaw axis and the pitch axis, the rotation angle as the target roll angle.
Optionally, the posture parameter is the relative movement direction of the fingers of the first predetermined gesture, and the invoking module is configured to invoke a third control instruction for zooming out a specific display object when the fingers move toward each other, or to invoke a fourth control instruction for zooming in the specific display object when the fingers move apart from each other.
Optionally, the posture parameter is the pointing direction of the first predetermined gesture, and the invoking module is configured to determine, according to the pointing direction, that the pan-tilt head in the gesture control state needs to adjust its yaw angle and/or pitch angle, and to invoke a fifth control instruction for adjusting the yaw angle and/or the pitch angle.
Optionally, the device further comprises:
a judging module, configured to determine whether the pan-tilt head is in the gesture control state;
and a triggering module, configured to control the pan-tilt head to enter the gesture control state when the pan-tilt head is not in the gesture control state and detects a trigger gesture.
One or more of the technical solutions in the embodiments of the present application provide at least one or more of the following technical effects:
In the embodiments of the invention, an image is first captured through an image capture unit mounted on the pan-tilt head, and the captured image is recognized to obtain a recognition result indicating that a predetermined gesture exists in the captured image and that the predetermined gesture is a first predetermined gesture. A first control strategy corresponding to the first predetermined gesture is then started based on the recognition result. Further, a posture parameter of the first predetermined gesture is obtained from the captured image according to the first control strategy; a control instruction corresponding to the posture parameter is then invoked and executed to control the pan-tilt head. Because the corresponding instruction is invoked based on the user's first predetermined gesture and its posture, the pan-tilt head responds to the predetermined gesture in real time, and the user can control it with gestures alone. Moreover, since gesture control requires no switching of the viewing angle, the user can observe the real-time state of the pan-tilt head while adjusting the gesture, so that the pan-tilt head is adjusted promptly. The invention thus solves the technical problem of long lag time in pan-tilt head control and achieves real-time control of the pan-tilt head.
Drawings
FIG. 1 is a flowchart of a pan-tilt head control method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating a first predetermined gesture according to an embodiment of the present invention;
FIGS. 3a-3b are schematic diagrams illustrating a second predetermined gesture according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating a third predetermined gesture according to an embodiment of the present invention;
FIGS. 5a-5d are schematic diagrams illustrating a fourth predetermined gesture according to an embodiment of the present invention;
FIG. 6 is a schematic view of a user and a balance vehicle according to an embodiment of the present invention;
FIG. 7 is a schematic view of a pan-tilt head control device according to an embodiment of the present invention.
Detailed Description
Embodiments of the invention provide a pan-tilt head control method and device to solve the prior-art problems of long lag time and poor interaction experience when controlling a pan-tilt head.
To solve the above technical problems, the general idea of the solution provided by the present invention is as follows:
In the embodiments of the invention, an image is first captured through an image capture unit mounted on the pan-tilt head, and the captured image is recognized to obtain a recognition result indicating that a predetermined gesture exists in the captured image and that the predetermined gesture is a first predetermined gesture. A first control strategy corresponding to the first predetermined gesture is then started based on the recognition result. Further, a posture parameter of the first predetermined gesture is obtained from the captured image according to the first control strategy; a control instruction corresponding to the posture parameter is then invoked and executed to control the pan-tilt head. Because the corresponding instruction is invoked based on the user's first predetermined gesture and its posture, the pan-tilt head responds to the predetermined gesture in real time, and the user can control it with gestures alone. Moreover, since gesture control requires no switching of the viewing angle, the user can observe the real-time state of the pan-tilt head while adjusting the gesture, so that the pan-tilt head is adjusted promptly. The invention thus solves the technical problem of long lag time in pan-tilt head control and achieves real-time control of the pan-tilt head.
The technical solutions of the present invention are described in detail below with reference to the drawings and specific embodiments. It should be understood that the specific features in the embodiments and examples are intended to explain, not to limit, the technical solutions of the present application, and the technical features in the embodiments and examples may be combined with one another provided they do not conflict.
The term "and/or" herein merely describes an association between objects, indicating that three relationships are possible; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
A first aspect of the present invention provides a pan-tilt head control method. Referring to FIG. 1, which is a flowchart of the pan-tilt head control method according to an embodiment of the present invention, the method includes:
S101: capturing an image through an image capture unit mounted on the pan-tilt head, and recognizing the captured image to obtain a recognition result;
S102: starting a first control strategy corresponding to the first predetermined gesture based on the recognition result;
S103: obtaining a posture parameter of the first predetermined gesture from the captured image according to the first control strategy;
S104: invoking a control instruction corresponding to the posture parameter, and executing the control instruction to control the pan-tilt head.
Specifically, before S101, the image capture unit mounted on the pan-tilt head is started and placed in a capturing state. In S101, images are then captured over a plurality of consecutive frames, and each captured image is recognized as it is captured to obtain a recognition result. In a specific implementation, a predetermined gesture may or may not be recognized in the captured image. For convenience of description, it is assumed below that the recognition result in S101 indicates that a predetermined gesture exists in the captured image. In addition, one or more predetermined gestures may be defined in the embodiments of the present invention, as shown in FIG. 2, FIG. 3a, FIG. 3b, FIG. 4, and FIGS. 5a-5d; the currently captured predetermined gesture may be any one of them. For convenience of description, the currently captured predetermined gesture is referred to as the first predetermined gesture.
In the embodiments of the invention, to allow the user to apply different controls to the pan-tilt head with different predetermined gestures, a different control strategy is set for each predetermined gesture. A control strategy defines which posture parameters must be obtained, which control instructions are executed, and how they are executed to achieve the control effect corresponding to the predetermined gesture. For example, a strategy for controlling the yaw angle and/or pitch angle of the pan-tilt head may be set for the predetermined gesture shown in FIG. 2, and a strategy for controlling the roll angle may be set for the predetermined gestures shown in FIGS. 3a and 3b. Those of ordinary skill in the art may configure these strategies according to the actual implementation, and the present invention is not specifically limited in this regard.
In S102, since the predetermined gesture represented by the recognition result is the first predetermined gesture, the first control strategy corresponding to the first predetermined gesture is started. Different control strategies are started for different predetermined gestures, and the corresponding instruction sets are loaded. For example, if the first predetermined gesture is as shown in FIG. 2, an instruction set containing control instructions for yaw and/or pitch is loaded; if it is as shown in FIGS. 3a-3b, an instruction set containing control instructions for roll is loaded.
Then, in S103, the posture parameter of the first predetermined gesture is extracted from the captured image according to the posture parameters that the first control strategy indicates are required. Further, in S104, a control instruction corresponding to the posture parameter is invoked: the control instruction corresponding to the posture parameter is determined from the loaded instruction set, invoked, and executed, thereby controlling the pan-tilt head according to the current posture of the user's first predetermined gesture.
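As a rough illustration of this S101-S104 flow, the following Python sketch wires recognition, strategy selection, parameter extraction, and instruction dispatch together. It is a minimal sketch, not the patented implementation: the recognizer, the strategy table, and the Gimbal class are hypothetical stand-ins for the image capture unit, control strategies, and pan-tilt head.

```python
# Minimal sketch of the S101-S104 loop; all names are illustrative assumptions.
from typing import Callable, Optional

class Gimbal:
    def rotate(self, yaw: float = 0.0, pitch: float = 0.0, roll: float = 0.0):
        print(f"rotate yaw={yaw} pitch={pitch} roll={roll}")

def recognize_gesture(frame) -> Optional[str]:
    # S101: toy stand-in for feature matching against stored gesture templates
    return frame.get("gesture")

def palm_move_strategy(frame, gimbal: Gimbal):
    # S103: extract the posture parameter this strategy needs
    direction = frame["direction"]
    yaw = {"left": -20.0, "right": 20.0}.get(direction, 0.0)
    gimbal.rotate(yaw=yaw)  # S104: invoke and execute the control instruction

STRATEGIES: dict[str, Callable] = {"palm_move": palm_move_strategy}

def control_loop(frames, gimbal: Gimbal):
    for frame in frames:
        gesture_id = recognize_gesture(frame)   # S101: recognize
        if gesture_id in STRATEGIES:            # S102: start matching strategy
            STRATEGIES[gesture_id](frame, gimbal)

control_loop([{"gesture": "palm_move", "direction": "right"}], Gimbal())
# -> rotate yaw=20.0 pitch=0.0 roll=0.0
```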
As can be seen from the above, a corresponding instruction is invoked to control the pan-tilt head based on the user's first predetermined gesture and the posture of that gesture, so the pan-tilt head responds to the predetermined gesture in real time. Further, the user can control the pan-tilt head with gestures alone. Moreover, since gesture control requires no switching of the viewing angle, the user can observe the real-time state of the pan-tilt head while adjusting the gesture, so that the pan-tilt head is adjusted promptly. The invention thus solves the technical problem of long lag time in pan-tilt head control and achieves real-time control of the pan-tilt head.
Specifically, in S101, to determine whether a predetermined gesture exists in the captured image, hand recognition must be performed in the captured image; in other words, it must be recognized whether a hand exists in it. Feature matching is performed in the captured image against generic hand features. If an element matching the generic hand features is found in the captured image, it is determined that a hand exists and that the element is the hand. If a hand is recognized, it is further recognized whether the hand's posture is a predetermined gesture; if no hand is recognized in the captured image, there is no need to further check whether a predetermined gesture exists.
In the embodiments of the invention, the features of all predetermined gestures are stored in advance. A predetermined gesture may be any gesture, for example one of the four gestures shown in FIGS. 2 to 5d, and the present invention is not limited in this respect. The predetermined gesture features may be default settings; alternatively, the user may define predetermined gestures according to need and preference and have the image capture unit capture them, so that the pan-tilt head control device obtains and stores their features. Those skilled in the art may configure this according to practice, and the present invention is not specifically limited.
When a hand exists in the captured image, matching is performed in the captured image against the predetermined gesture features. Specifically, the hand element is matched against each predetermined gesture to obtain a matching degree. If the matching degree for every predetermined gesture is below a threshold, the hand's posture is not a predetermined gesture, and a result indicating that no predetermined gesture exists in the captured image is obtained. If the matching degree for one predetermined gesture reaches the threshold while those for the other predetermined gestures do not, the hand's posture is that predetermined gesture, namely the first predetermined gesture, and a recognition result indicating that the first predetermined gesture exists in the captured image is obtained. The threshold is, for example, 80%, 90%, or 95%.
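The threshold test above reduces to a few lines. This is a hedged sketch assuming the matching degrees have already been computed by an upstream matcher; the gesture ids and scores are illustrative only.

```python
# One gesture at or above the threshold (and no others) yields the first
# predetermined gesture; anything else means no predetermined gesture.
MATCH_THRESHOLD = 0.9  # e.g. 90%

def classify_gesture(scores: dict[str, float]):
    """scores: matching degree per stored predetermined gesture (0.0-1.0).

    Returns the matched gesture id, or None when no gesture qualifies."""
    above = [gid for gid, s in scores.items() if s >= MATCH_THRESHOLD]
    return above[0] if len(above) == 1 else None

print(classify_gesture({"palm": 0.95, "point": 0.40}))  # -> palm
print(classify_gesture({"palm": 0.55, "point": 0.40}))  # -> None
```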
In the embodiments of the present invention, predetermined gestures include static gestures and dynamic gestures. A static gesture means that the hand posture is held and kept still while controlling the pan-tilt head; a dynamic gesture means that the hand posture is held but the hand is not still. Different predetermined gestures achieve different control effects, and accordingly the control strategy of each predetermined gesture indicates different posture parameters to obtain and different control instructions to execute. Those skilled in the art may set the predetermined gestures and their specific control strategies according to the actual situation, and the present invention is not specifically limited.
Implementations of controlling the pan-tilt head are described below through several predetermined gestures; specific implementations include, but are not limited to, the following.
The first implementation:
In this case, the first predetermined gesture is a dynamic gesture. The hand posture may be arbitrary, for example a fist, a posture with the index finger extended and the other four fingers bent, or a posture with the five fingers open and the palm raised as shown in FIG. 2, as long as the hand can be treated as a particle when moving. For convenience of description, the posture shown in FIG. 2 is taken as an example below. In FIG. 2, the black gesture indicates the gesture at the current moment, and the gray gesture indicates the gesture before the current moment.
In the first implementation, the posture parameter that the first control strategy requires is the particle motion trajectory of the first predetermined gesture. Therefore, according to the first control strategy, the motion trajectory of the first predetermined gesture is extracted from the captured images. The motion trajectory in the embodiments of the invention includes the movement direction, and further includes the movement distance.
Then, the target yaw angle and/or target pitch angle by which the pan-tilt head in the gesture control state needs to rotate is determined according to the movement direction represented by the particle motion trajectory.
Specifically, in the embodiments of the invention, a mapping from movement direction to rotation direction and rotation angle is set in advance. The rotation direction includes rotation about the yaw axis and/or rotation about the pitch axis. The rotation angle, i.e., the values of the target yaw angle and target pitch angle, may be a default or may be determined from the specific motion state of the first predetermined gesture; the present invention is not specifically limited. The target yaw angle and/or target pitch angle corresponding to the movement direction is then determined from the movement direction represented by the particle motion trajectory and the mapping.
For example, assume the default rotation angle is 20 degrees. The mapping from movement direction to rotation angle is then: horizontal-left corresponds to a target yaw angle of -20°; horizontal-right to a target yaw angle of 20°; vertical-up to a target pitch angle of 20°; vertical-down to a target pitch angle of -20°; upper-left to a target yaw angle of -20° and a target pitch angle of 20°; upper-right to a target yaw angle of 20° and a target pitch angle of 20°; lower-left to a target yaw angle of -20° and a target pitch angle of -20°; and lower-right to a target yaw angle of 20° and a target pitch angle of -20°. A negative yaw angle indicates rotation to the left, a positive yaw angle rotation to the right, a negative pitch angle rotation downward, and a positive pitch angle rotation upward.
Then, if a horizontal-left movement direction is recognized from the captured image, it is determined that a target yaw angle of -20° is needed; if a vertical-up movement direction is recognized, a target pitch angle of 20° is needed; if a lower-right movement direction is recognized, a target yaw angle of 20° and a target pitch angle of -20° are needed.
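Under this default-angle variant, the mapping is a simple lookup table. The sketch below mirrors the document's 20-degree defaults; the direction labels are hypothetical names for the eight movement directions.

```python
# Direction-to-angle lookup for the default-angle variant described above.
DEFAULT_ANGLE = 20.0

# (target_yaw, target_pitch); negative yaw = left, negative pitch = down.
DIRECTION_MAP = {
    "left":        (-DEFAULT_ANGLE, 0.0),
    "right":       ( DEFAULT_ANGLE, 0.0),
    "up":          (0.0,  DEFAULT_ANGLE),
    "down":        (0.0, -DEFAULT_ANGLE),
    "upper_left":  (-DEFAULT_ANGLE,  DEFAULT_ANGLE),
    "upper_right": ( DEFAULT_ANGLE,  DEFAULT_ANGLE),
    "lower_left":  (-DEFAULT_ANGLE, -DEFAULT_ANGLE),
    "lower_right": ( DEFAULT_ANGLE, -DEFAULT_ANGLE),
}

yaw, pitch = DIRECTION_MAP["lower_right"]
print(yaw, pitch)  # -> 20.0 -20.0
```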
If the rotation angle is not a default but is set dynamically according to the motion state of the first predetermined gesture, the target yaw angle and/or target pitch angle by which the pan-tilt head needs to rotate is determined through the following process:
obtaining, from the movement distance of the first predetermined gesture represented by the particle motion trajectory, a first distance along the pitch axis and/or a second distance along the yaw axis;
and determining an angle corresponding to the first distance as the target yaw angle and/or an angle corresponding to the second distance as the target pitch angle.
First, the movement distance represented by the motion trajectory is obtained. The movement distance is then decomposed along directions parallel to the pitch axis and the yaw axis, yielding a first distance parallel to the pitch axis and a second distance parallel to the yaw axis. Of course, in a specific implementation, if the movement direction is horizontal, i.e., parallel to the pitch axis, the movement distance may be taken directly as the first distance without decomposition; likewise, if the movement direction is vertical, i.e., parallel to the yaw axis, the movement distance may be taken directly as the second distance.
Then, the target yaw angle and/or target pitch angle is determined from the preset angle per unit distance along the pitch-axis direction and along the yaw-axis direction.
For example, taking the imaging-plane coordinate system of the image capture unit as the reference frame, the pitch axis corresponds to the horizontal axis of the imaging plane and the yaw axis to its vertical axis. One unit length along the horizontal axis corresponds to a yaw angle of 1 degree, and one unit length along the vertical axis corresponds to a pitch angle of 1 degree. If the movement direction of the first predetermined gesture is recognized as upper-left, and decomposition yields a first distance of 10 unit lengths and a second distance of 9 unit lengths, the target yaw angle is determined to be 10 degrees and the target pitch angle 9 degrees.
Finally, based on the determined target yaw angle and/or target pitch angle, a first control instruction is invoked to rotate the pan-tilt head by the corresponding angle, so that the pan-tilt head rotates by the target yaw angle and/or the target pitch angle.
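The dynamic-angle variant can be sketched as follows. This assumes the gesture's displacement in the imaging plane has already been measured, and uses the document's example scale of 1 degree per unit length; signs are ignored here, as in the example above.

```python
# Decompose the gesture's image-plane displacement and convert to angles.
DEG_PER_UNIT = 1.0  # preset angle per unit length on each image axis

def angles_from_motion(dx: float, dy: float):
    """dx: displacement parallel to the pitch axis (horizontal image axis);
    dy: displacement parallel to the yaw axis (vertical image axis)."""
    first_distance = dx    # horizontal component -> target yaw angle
    second_distance = dy   # vertical component   -> target pitch angle
    return first_distance * DEG_PER_UNIT, second_distance * DEG_PER_UNIT

# The example above: upper-left movement of 10 and 9 unit lengths.
target_yaw, target_pitch = angles_from_motion(10.0, 9.0)
print(target_yaw, target_pitch)  # -> 10.0 9.0
```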
As can be seen from the above, through the particle motion of the first predetermined gesture, the user can adjust the yaw angle and/or pitch angle of the pan-tilt head in real time simply by holding the first predetermined gesture and moving it in the desired direction.
The second implementation:
In this case, the second predetermined gesture is a dynamic gesture. The hand posture may be arbitrary, for example a posture with the five fingers open, or a posture with the index finger extended and the other four fingers bent as shown in FIGS. 3a and 3b, as long as the hand moves as a rigid body during the movement. For convenience of description, the postures shown in FIGS. 3a and 3b are taken as an example below.
In the second implementation, the posture parameter that the first control strategy requires is the rigid-body motion trajectory of the first predetermined gesture. Therefore, according to the first control strategy, the rigid-body motion trajectory of the first predetermined gesture is extracted from the captured images. The rigid-body motion trajectory in the embodiments of the invention includes the rotation direction, and further includes the rotation angle.
Of course, in a specific implementation, to obtain the rigid-body motion trajectory conveniently, it may be obtained from the motion trajectory of a single reference point of the second predetermined gesture. For example, if the second predetermined gesture is as shown in FIGS. 3a and 3b, the reference point is the tip of the index finger; if it is a posture with the five fingers open, the reference point may be the tip of the thumb. Those skilled in the art may set this according to practice, and the present invention is not specifically limited.
Then, the target roll angle by which the pan-tilt head in the gesture control state needs to rotate is determined according to the rotation direction represented by the rigid-body motion trajectory. In the embodiments of the invention, the rotation of the first predetermined gesture is parallel to the plane of the yaw axis and the pitch axis; for convenience of description, it is treated as rotation occurring in that plane.
In the embodiments of the invention, a mapping between rotation direction and rotation angle is preset, the rotation direction being specifically the direction of rotation about the roll axis. The rotation angle, i.e., the value of the target roll angle, may be a default or may be determined from the specific motion state of the first predetermined gesture; the present invention is not specifically limited. The target roll angle is then determined from the rotation direction represented by the rigid-body motion trajectory and the mapping.
For example, assume the default rotation angle is 20 degrees. The mapping between rotation direction and rotation angle is then: the clockwise rotation direction shown in FIG. 3a corresponds to a target roll angle of 20°, and the counterclockwise rotation direction shown in FIG. 3b corresponds to a target roll angle of -20°. A negative roll angle indicates counterclockwise rotation of the pan-tilt head, and a positive roll angle clockwise rotation. The counterclockwise and clockwise directions here are as seen looking at the plane of the pitch axis and yaw axis from a point at infinity on the roll axis.
If the rotation angle is not a default but is set dynamically according to the motion state of the first predetermined gesture, the target roll angle by which the pan-tilt head needs to rotate is determined through the following process:
determining, according to the rotation angle of the first predetermined gesture represented by the rigid-body motion trajectory in the plane of the yaw axis and the pitch axis, the rotation angle as the target roll angle.
First, the rotation angle of the first predetermined gesture in the plane of the yaw axis and the pitch axis is recognized from the captured images. That rotation angle is then taken as the target roll angle. In other words, the pan-tilt head rolls by exactly as many degrees as the second predetermined gesture rotates.
For example, the captured images are recognized, and the rigid-body motion trajectory of the first predetermined gesture shown in FIG. 3a is found to be a clockwise rotation of 100°. According to the mapping, the clockwise rotation direction corresponds to clockwise rotation of the pan-tilt head; the rotation angle is 100°, so the target roll angle is 100°.
Finally, based on the determined target roll angle, a second control instruction is invoked to rotate the pan-tilt head by the corresponding angle, so that the pan-tilt head rotates by the target roll angle.
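One plausible way to measure the rotation angle of the reference point, consistent with the description above, is the signed angle it sweeps about a pivot in the image plane. The helper below is an illustrative sketch; the pivot and fingertip coordinates are assumed inputs from an upstream hand tracker, not part of the patent.

```python
import math

def roll_from_reference_point(p_start, p_end, pivot):
    """Signed rotation angle (degrees) of a reference point about a pivot.

    Positive = clockwise in image coordinates (y axis pointing down),
    matching the convention above that positive roll is clockwise."""
    a0 = math.atan2(p_start[1] - pivot[1], p_start[0] - pivot[0])
    a1 = math.atan2(p_end[1] - pivot[1], p_end[0] - pivot[0])
    delta = math.degrees(a1 - a0)
    # Normalize to (-180, 180] so small rotations keep their sign.
    return (delta + 180.0) % 360.0 - 180.0

target_roll = roll_from_reference_point((1.0, 0.0), (0.0, 1.0), (0.0, 0.0))
print(target_roll)  # -> 90.0: a quarter-turn of the fingertip = 90-degree roll
```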
As can be seen from the above, through the rigid-body motion of the first predetermined gesture, the user can adjust the roll angle of the pan-tilt head in real time simply by holding the first predetermined gesture and rotating it as required.
The third implementation:
In this case, the third predetermined gesture is also a dynamic gesture. The hand posture may be arbitrary, for example a posture with the index and middle fingers extended and the other three fingers bent, a posture with the thumb and index finger extended and the other three fingers bent as shown in FIG. 4, or a posture with the five fingers open, as long as at least two fingers can move relative to each other. For convenience of description, the posture shown in FIG. 4 is taken as an example below.
In the third implementation, the posture parameter that the first control strategy requires is the relative movement direction of the fingers of the first predetermined gesture. Therefore, according to the first control strategy, the relative movement direction of the fingers is extracted from the captured images.
The finger movement in the embodiments of the invention may be the relative movement of any two fingers, or of three, four, or even five fingers. The relative movement directions include at least two kinds: moving toward each other and moving apart.
In the embodiments of the invention, when the fingers move toward each other, a third control instruction for zooming out a specific display object on the display unit is invoked according to the first control strategy. Conversely, when the fingers move apart, a fourth control instruction for zooming in the specific display object is invoked.
Specifically, one or more display objects are shown on the display unit. The specific display object may be any one or more of them, in which case the user adjusts the display size of any display object through different relative finger movements. It may also be one or more preset display objects, such as pictures, maps, or lists, whose display size the user adjusts through relative finger movements; for example, if the specific display object is a picture and the user wants to enlarge it, this is achieved by moving the fingers apart. The specific display object may also be all display objects, in which case the user adjusts their display size through relative finger movements in different directions. Those skilled in the art may set this according to practice, and the present invention is not specifically limited.
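The gather/separate decision above reduces to tracking the spread of the fingertips between frames. The following sketch is an assumption-laden illustration: fingertip positions are assumed to come from an upstream tracker, and the deadband value is arbitrary.

```python
def finger_spread(tips):
    """Mean distance of each fingertip from the centroid of all tips."""
    cx = sum(x for x, _ in tips) / len(tips)
    cy = sum(y for _, y in tips) / len(tips)
    return sum(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in tips) / len(tips)

def zoom_instruction(tips_before, tips_after, deadband=1.0):
    """Return 'zoom_out' (fingers gathering), 'zoom_in' (separating), or None."""
    delta = finger_spread(tips_after) - finger_spread(tips_before)
    if delta < -deadband:
        return "zoom_out"   # third control instruction
    if delta > deadband:
        return "zoom_in"    # fourth control instruction
    return None

print(zoom_instruction([(0, 0), (10, 0)], [(3, 0), (7, 0)]))  # -> zoom_out
```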
As can be seen from the above, the user can adjust the display size of a display object on the pan-tilt head's display unit in real time through relative finger movements in different directions.
The fourth implementation:
In this case, the fourth predetermined gesture is a static gesture. The hand posture may be arbitrary, for example a posture with the index and middle fingers extended and the other three fingers bent, a posture with the index finger extended and the other four fingers bent as shown in FIGS. 5a to 5d, or a posture with the five fingers open, as long as it can indicate a direction. For convenience of description, the postures shown in FIGS. 5a to 5d are taken as an example below.
In the fourth implementation, the posture parameter that the first control strategy requires is the pointing direction of the first predetermined gesture. Therefore, according to the first control strategy, the direction indicated by the first predetermined gesture is extracted from the captured image.
The pointing direction in the embodiments of the invention is the current direction of a reference vector of the first predetermined gesture. For example, with the first predetermined gesture as shown in FIGS. 5a-5d, the reference vector is the index finger. When the first predetermined gesture is as shown in FIG. 5a, the index finger points upward, i.e., the current direction of the reference vector is upward, so the pointing direction is upward; in FIG. 5b the index finger points downward, so the pointing direction is downward; in FIG. 5c the index finger points left, so the pointing direction is left; in FIG. 5d the index finger points right, so the pointing direction is right.
Alternatively, for example, if the first predetermined gesture is five fingers open and the reference vector is set to the thumb, the current direction of the thumb is the pointing direction; or if the first predetermined gesture is the thumb extended with the other four fingers bent and the reference vector is the thumb, the current direction of the thumb is likewise the pointing direction. Further examples are not repeated here; those skilled in the art may set this according to practice, and the present invention is not specifically limited.
Further, according to the pointing direction, the target yaw angle and/or target pitch angle by which the pan-tilt head in the gesture control state needs to rotate is determined.
Specifically, in the embodiments of the invention, a mapping from pointing direction to rotation direction and rotation angle is set in advance. The rotation direction includes rotation about the yaw axis and/or rotation about the pitch axis. Since this predetermined gesture is static, the rotation angles, i.e., the target yaw angle and target pitch angle, are defaults. The target yaw angle and/or target pitch angle corresponding to the pointing direction is then determined from the pointing direction and the mapping.
For example, assume the default rotation angle is 20 degrees. The mapping from pointing direction to rotation angle is then: horizontal-left corresponds to a target yaw angle of -20°; horizontal-right to a target yaw angle of 20°; vertical-up to a target pitch angle of 20°; vertical-down to a target pitch angle of -20°; upper-left to a target yaw angle of -20° and a target pitch angle of 20°; upper-right to a target yaw angle of 20° and a target pitch angle of 20°; lower-left to a target yaw angle of -20° and a target pitch angle of -20°; and lower-right to a target yaw angle of 20° and a target pitch angle of -20°.
Then, if the pointing direction recognized from the captured image is upward as in FIG. 5a, it is determined that a target pitch angle of 20° is needed; if downward as in FIG. 5b, a target pitch angle of -20°; if left as in FIG. 5c, a target yaw angle of -20°; if right as in FIG. 5d, a target yaw angle of 20°.
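A minimal sketch of this static-pointing path is given below: the reference vector (assumed to be the index finger direction, measured elsewhere) is quantized to one of the four mapped directions and looked up against the 20-degree defaults. The function and label names are illustrative.

```python
import math

POINTING_MAP = {          # (target_yaw, target_pitch), defaults of 20 degrees
    "up":    (0.0,  20.0),
    "down":  (0.0, -20.0),
    "left":  (-20.0, 0.0),
    "right": ( 20.0, 0.0),
}

def pointing_direction(vx: float, vy: float) -> str:
    """Quantize a reference vector (image coords, y up) to up/down/left/right."""
    angle = math.degrees(math.atan2(vy, vx))  # 0 = right, 90 = up
    if -45.0 <= angle < 45.0:
        return "right"
    if 45.0 <= angle < 135.0:
        return "up"
    if -135.0 <= angle < -45.0:
        return "down"
    return "left"

yaw, pitch = POINTING_MAP[pointing_direction(0.1, 1.0)]  # finger pointing up
print(yaw, pitch)  # -> 0.0 20.0
```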
As can be seen from the above, by statically pointing in different directions, the user can control the yaw angle and/or pitch angle of the pan-tilt head in real time.
In a specific implementation, those skilled in the art may select any one of the above four implementations to control the pan-tilt head. The four implementations may also be combined in any way, so that the user can apply different controls to the pan-tilt head through different predetermined gestures.
Further, in combination with any of the above implementations, the method further includes:
determining whether the pan-tilt head is in the gesture control state;
and when the pan-tilt head is not in the gesture control state and detects a trigger gesture, controlling the pan-tilt head to enter the gesture control state.
Specifically, the gesture control state in the embodiments of the invention is the state in which the pan-tilt head is controlled according to the recognition result of a predetermined gesture and the posture of that gesture. To control the pan-tilt head by a predetermined gesture, the pan-tilt head must therefore enter the gesture control state; if it is not in that state, it does not respond even when a predetermined gesture exists in the captured image. Hence, it is necessary to determine whether the pan-tilt head is in the gesture control state.
Whether the pan-tilt head is in the gesture control state can be determined by querying its state. If the queried state is the gesture control state, the pan-tilt head is judged to be in the gesture control state; otherwise, it is judged not to be.
When the pan-tilt head is not in the gesture control state, it is brought into that state by detecting a trigger gesture. Specifically, this can be achieved through the following process:
when a trigger gesture is recognized from the captured image, determining whether the holding time of the trigger gesture reaches a preset time;
and when the holding time of the trigger gesture reaches the preset time, controlling the pan-tilt head to enter the gesture control state.
In the embodiments of the invention, the trigger gesture may be the same as any one of the predetermined gestures or different from all of them; the present invention is not specifically limited. The manner of recognizing the trigger gesture from the captured images is similar to that of recognizing a predetermined gesture, which has been described in detail above, so the process is not repeated here.
In a specific implementation, a user may make a trigger gesture by accident without intending the pan-tilt head to enter the gesture control state. To improve control accuracy, it is therefore necessary to determine whether the holding time of the trigger gesture reaches the preset time. The preset time is, for example, 2 s or 3 s; those skilled in the art may set it according to the actual situation, and the present invention is not specifically limited.
As for the holding time: since the image capture unit captures images at equal time intervals, the holding time can be calculated from the number of consecutive frames in which the trigger gesture is recognized. For example, if the image capture unit captures one frame every 1 ms and the trigger gesture is recognized in 1200 consecutive frames, the holding time of the trigger gesture is 1.2 s.
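The frame-count calculation above can be sketched as follows, using the document's example of one frame per millisecond and a 2 s preset time; the per-frame detection flags are assumed to come from the gesture recognizer.

```python
FRAME_INTERVAL_S = 0.001   # one frame every 1 ms, as in the example
PRESET_TIME_S = 2.0        # e.g. the 2 s threshold used in the embodiments

def trigger_held_long_enough(detections: list[bool]) -> bool:
    """detections: per-frame flags, True when the trigger gesture was seen."""
    run = 0
    for seen in detections:
        run = run + 1 if seen else 0                 # length of current streak
        if run * FRAME_INTERVAL_S >= PRESET_TIME_S:  # holding time reached
            return True
    return False

print(trigger_held_long_enough([True] * 1200))  # 1.2 s < 2 s -> False
print(trigger_held_long_enough([True] * 2000))  # 2.0 s      -> True
```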
When the holding time of the trigger gesture reaches the preset time, it is unlikely that the gesture was made by accident, so it is determined that the user intends to control the pan-tilt head, and the pan-tilt head is controlled to enter the gesture control state.
For example, assume the trigger gesture is the five fingers open and shaking slightly, and the preset time is 2 s. When this trigger gesture is recognized in the captured images, its holding time is calculated; when the holding time reaches 2 s, the pan-tilt head is controlled to enter the gesture control state.
As described above, controlling the pan-tilt head to enter the gesture control state only when the holding time of the trigger gesture reaches the preset time prevents accidental operation from putting it into that state and improves control accuracy.
Further, after the pan-tilt head enters the gesture control state, the user makes the corresponding predetermined gesture according to his or her control needs. Of course, to further improve control accuracy, a holding time may also be set for predetermined gestures, so as to avoid unnecessary control caused by a predetermined gesture made by accident.
How the pan-tilt head is controlled by predetermined gestures is described below through several embodiments; specific implementations include, but are not limited to, the following.
Embodiment one:
As shown in FIG. 6, the user stands in front of the balance vehicle and raises the left hand so that it remains within the capture range of the image capture unit. The pan-tilt head control device recognizes the trigger gesture of five fingers open and shaking slightly from the captured images. The pan-tilt head is not in the gesture control state at this time. Further, the calculated holding time of the trigger gesture reaches the preset time of 2 s, so the pan-tilt head is controlled to enter the gesture control state.
Then the user lowers the left hand, raises the right hand, and, holding the posture shown in FIG. 2, slides it horizontally to the right within the image capture range. By recognizing the captured images, it is determined that a predetermined gesture exists in them and is specifically the first predetermined gesture shown in FIG. 2, so the corresponding first control strategy is invoked.
According to the first control strategy, the particle motion trajectory of the first predetermined gesture is obtained. The trajectory indicates that the first predetermined gesture moves to the right, so it is determined that the pan-tilt head needs to rotate by the default target yaw angle of 20°. The first control instruction is then invoked and executed, and the pan-tilt head rotates right by a yaw angle of 20°.
Embodiment two:
As shown in FIG. 6, the user stands in front of the balance vehicle and raises the left hand so that it remains within the capture range of the image capture unit. The pan-tilt head control device recognizes the trigger gesture of five fingers open and shaking slightly from the captured images. The pan-tilt head is not in the gesture control state at this time. Further, the calculated holding time of the trigger gesture reaches the preset time of 2 s, so the pan-tilt head is controlled to enter the gesture control state.
Then the user lowers the left hand, raises the right hand, and, holding the posture shown in FIG. 3a, makes a clockwise rotational movement within the image capture range. By recognizing the captured images, it is determined that a predetermined gesture exists in them and is specifically the first predetermined gesture shown in FIG. 3a, so the corresponding first control strategy is invoked.
According to the first control strategy, the rigid-body motion trajectory of the first predetermined gesture is obtained. The trajectory indicates a clockwise rotation direction and a rotation angle of 40°, so the target roll angle of the pan-tilt head is determined to be a clockwise 40°. Further, the second control instruction is invoked and executed, and the pan-tilt head rotates clockwise about the roll axis by 40°.
Example three:
As shown in fig. 6, the user stands in front of the balance car, and lifts the left hand so that the left hand remains in the acquisition range of the image acquisition unit. The cradle head control device identifies a trigger gesture that the five fingers are opened and slightly shake from the acquired image. The pan-tilt is not in the gesture control state at this time. Further, the calculated holding time of the trigger gesture reaches the preset time 2S, so that the cradle head is controlled to enter a gesture control state.
the user then lowers the left hand and raises the right hand, holding the right hand in the position shown in figure 4 for movement of the index finger and thumb towards each other in the image capture range. By identifying the acquired image, it is determined that a predetermined gesture exists in the acquired image, and the predetermined gesture is specifically a first predetermined gesture as shown in fig. 4, so as to invoke a corresponding first control strategy.
According to the first control strategy, the relative movement directions of the fingers of the first predetermined gesture are obtained: the fingers gather toward each other. A third control instruction is therefore invoked and executed, and the picture currently displayed on the pan-tilt's display unit is zoomed out.
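For illustration only, a minimal sketch of classifying the relative finger movement as a pinch-in (invoke the third control instruction, zoom out) or pinch-out (invoke the fourth, zoom in). The threshold and names are hypothetical.

```python
# Illustrative sketch, not from the patent: classify the relative movement of
# index finger and thumb. Threshold and return labels are assumptions.

def zoom_command(dist_start, dist_end, threshold_px=10.0):
    """dist_start/dist_end: index-to-thumb distances (pixels) at the start
    and end of the gesture. Returns 'zoom_out', 'zoom_in', or None."""
    delta = dist_end - dist_start
    if delta < -threshold_px:
        return 'zoom_out'  # fingers gathered -> third control instruction
    if delta > threshold_px:
        return 'zoom_in'   # fingers separated -> fourth control instruction
    return None            # no clear pinch either way
```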
Example four:
As shown in fig. 6, the user stands in front of the balance car, and lifts the left hand so that the left hand remains in the acquisition range of the image acquisition unit. The cradle head control device identifies a trigger gesture that the five fingers are opened and slightly shake from the acquired image. The pan-tilt is not in the gesture control state at this time. Further, the calculated holding time of the trigger gesture reaches the preset time 2S, so that the cradle head is controlled to enter a gesture control state.
The user then lowers the left hand, raises the right hand in the pose shown in fig. 5b, and holds it stationary within the acquisition range. By recognizing the acquired image, the device determines that a predetermined gesture exists in the image, specifically the first predetermined gesture shown in fig. 5b, and invokes the corresponding first control strategy.
According to the first control strategy, the indicated direction of the first predetermined gesture is obtained: downward. The device therefore determines that the pan-tilt needs to rotate by the default target pitch angle of -20°. A fifth control instruction is then invoked and executed, and the pan-tilt rotates downward by a pitch angle of 20°.
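For illustration only, a minimal sketch mapping a recognized indicated direction to a default yaw or pitch adjustment. The 20° step matches this example; the mapping table itself is an assumption.

```python
# Illustrative sketch, not from the patent: map a recognized indicated
# direction to a default axis adjustment. The 20-degree step matches the
# example; the table itself is an assumption.

DEFAULT_STEP_DEG = 20.0

INDICATED_DIRECTION_TO_ADJUSTMENT = {
    'up':    ('pitch', +DEFAULT_STEP_DEG),
    'down':  ('pitch', -DEFAULT_STEP_DEG),  # this example: rotate down 20 deg
    'left':  ('yaw',   -DEFAULT_STEP_DEG),
    'right': ('yaw',   +DEFAULT_STEP_DEG),
}

def adjustment_for(direction):
    """Returns (axis, signed angle in degrees), or None if unrecognized."""
    return INDICATED_DIRECTION_TO_ADJUSTMENT.get(direction)
```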
Further, when the user no longer needs to control the pan-tilt, it may exit the gesture control state. In the embodiments of the invention the pan-tilt can be controlled to exit the gesture control state in various ways; several are listed below, and specific implementations are not limited to these.
The first method:
When the pan-tilt is in the gesture control state, judge whether the stillness time of the trigger gesture reaches the exit time; when it does, control the pan-tilt to exit the gesture control state.
Specifically, when the user wants the pan-tilt to enter the gesture control state, the trigger gesture is held for the preset time; when the user wants it to exit, the trigger gesture is held still for the exit time. The stillness time is obtained in the same manner as the holding time, so the details are not repeated here. The exit time may be, for example, 2 s or 3 s; the invention does not limit it. When the stillness time reaches the exit time, the device determines that the user wants to exit the gesture control state and controls the pan-tilt to exit it.
For example, assume that the exit time and the preset time are both 1 s. At time T1, the user remotely drives the balance car in front of him or her and wishes to control its pan-tilt. The user raises the right hand, makes the trigger gesture, and holds it. From the image acquired by the image acquisition unit, the pan-tilt control device identifies the trigger gesture and calculates that its holding time reaches 1 s, so it controls the pan-tilt to enter the gesture control state. After confirming that the pan-tilt has entered the gesture control state, the user controls its rotation through predetermined gestures. At a later time T2, once the pan-tilt has rotated to the desired pose, the user again holds the right hand still. The pan-tilt control device judges that the stillness time of the trigger gesture reaches 1 s and controls the pan-tilt to exit the gesture control state.
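For illustration only, a minimal sketch of the enter/exit timing described above: the pan-tilt enters the gesture control state after the trigger gesture is held for the preset time, and exits after the gesture stays still for the exit time. The per-frame flags are assumed to come from a gesture recognizer; all names are hypothetical.

```python
import time

# Illustrative sketch, not from the patent. Times match the example above.
PRESET_TIME = 1.0  # hold time to enter, seconds
EXIT_TIME = 1.0    # stillness time to exit, seconds

class GestureStateMachine:
    """Tracks entry into and exit from the gesture control state."""

    def __init__(self):
        self.in_control = False
        self._since = None  # start of the currently satisfied condition

    def update(self, trigger_present, gesture_still, now=None):
        """Call once per frame with flags from the gesture recognizer."""
        if now is None:
            now = time.monotonic()
        condition = gesture_still if self.in_control else trigger_present
        if condition:
            if self._since is None:
                self._since = now
            threshold = EXIT_TIME if self.in_control else PRESET_TIME
            if now - self._since >= threshold:
                self.in_control = not self.in_control  # enter or exit
                self._since = None
        else:
            self._since = None  # condition interrupted; restart the timer
        return self.in_control
```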
The second method:
When the pan-tilt is in the gesture control state, judge whether the predetermined gesture has changed into an exit gesture; when it has, control the pan-tilt to exit the gesture control state.
Specifically, in the embodiments of the invention the exit gesture differs from the trigger gesture. While the pan-tilt is in the gesture control state, the acquired image is recognized; if an exit gesture is recognized, the device determines that the predetermined gesture has changed into the exit gesture. The exit gesture is recognized in the same manner as the predetermined gesture. Once the change is detected, the device determines that the user wants to exit the gesture control state and controls the pan-tilt to exit it.
For example, assume that the preset time is 1 s. At time T3, the user remotely drives the balance car in front of him or her and wishes to control its pan-tilt, so the user raises the right hand, makes the five-finger-open gesture, and holds it. From the image acquired by the image acquisition unit, the pan-tilt control device identifies the trigger gesture and calculates that its holding time reaches 1 s, so it controls the pan-tilt to enter the gesture control state. After confirming that the pan-tilt has entered the gesture control state, the user begins controlling it. At a later time T4, once the pan-tilt has rotated to the desired pose, the user makes a fist with the right hand. The pan-tilt control device recognizes the exit gesture and controls the pan-tilt to exit the gesture control state.
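For illustration only, a minimal sketch of the exit-gesture check: while in the gesture control state, a recognized gesture distinct from the trigger gesture (here a fist) triggers the exit. Gesture labels are hypothetical.

```python
# Illustrative sketch, not from the patent: the exit gesture is any label
# distinct from the trigger gesture; a fist is assumed here.

TRIGGER_GESTURE = 'five_fingers_open'
EXIT_GESTURE = 'fist'

def should_exit(in_control, recognized_gesture):
    """True when the pan-tilt is in the gesture control state and the
    recognizer reports the exit gesture instead of the trigger gesture."""
    return in_control and recognized_gesture == EXIT_GESTURE
```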
The third method:
Judge whether the predetermined gesture exists in the acquired image;
when the predetermined gesture no longer exists in the acquired image, control the pan-tilt to exit the gesture control state.
Specifically, while the pan-tilt is in the gesture control state, the pan-tilt control device continuously recognizes the predetermined gesture. When the user's hand moves out of the acquisition range of the image acquisition unit, the predetermined gesture can no longer be recognized in the acquired image. The device then judges that the predetermined gesture does not exist in the image, determines that the user wants to exit the gesture control state, and controls the pan-tilt to exit it.
For example, assume that the preset time is 1 s. At time T5, the user remotely drives the balance car in front of him or her and wishes to control its pan-tilt, so the user raises the right hand, makes the five-finger-open gesture, and holds it. From the image acquired by the image acquisition unit, the pan-tilt control device identifies the trigger gesture and calculates that its holding time reaches 1 s, so it controls the pan-tilt to enter the gesture control state. After confirming that the pan-tilt has entered the gesture control state, the user begins controlling it. At a later time T6, after the pan-tilt's picture has been adjusted to the desired display size, the user lowers the hand and moves it out of the acquisition range. At time T6 the pan-tilt control device no longer recognizes the predetermined gesture, judges that the predetermined gesture does not exist, and controls the pan-tilt to exit the gesture control state.
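For illustration only, a minimal sketch of this third exit method: while in the gesture control state, exit once no predetermined gesture is recognized in the acquired image. The small grace period of missed frames is an added assumption to tolerate single-frame recognition failures.

```python
# Illustrative sketch, not from the patent: exit once the predetermined
# gesture is absent from the acquired images. The grace period of a few
# missed frames is an added assumption.

MISS_LIMIT = 5  # consecutive frames without any recognized gesture

def track_absence(miss_count, gesture_found):
    """Per-frame update. Returns (new_miss_count, should_exit)."""
    miss_count = 0 if gesture_found else miss_count + 1
    return miss_count, miss_count >= MISS_LIMIT
```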
Based on the same inventive concept as the pan/tilt control method in the foregoing embodiment, a second aspect of the present invention further provides a pan/tilt control apparatus, as shown in fig. 7, including:
The recognition module 101 is configured to acquire an image through an image acquisition unit mounted on the pan-tilt and to recognize the acquired image to obtain a recognition result, the recognition result indicating that a predetermined gesture exists in the acquired image and that the predetermined gesture is a first predetermined gesture;
The starting module 102 is configured to start a first control strategy corresponding to the first predetermined gesture based on the recognition result;
The obtaining module 103 is configured to obtain the attitude parameter of the first predetermined gesture from the acquired image according to the first control strategy;
The invoking module 104 is configured to invoke the control instruction corresponding to the attitude parameter and to execute it to control the pan-tilt.
Specifically, when the attitude parameter is a particle motion trajectory of the first predetermined gesture, the invoking module 104 is configured to determine, according to the motion direction of the first predetermined gesture represented by the particle motion trajectory, a target yaw angle and/or target pitch angle by which the pan-tilt in the gesture control state needs to rotate, and to invoke a first control instruction based on that angle so that the pan-tilt rotates by the target yaw angle and/or target pitch angle.
Further, the invoking module 104 is configured to obtain, from the movement distance of the first predetermined gesture represented by the particle motion trajectory, a first distance corresponding to the pitch axis and/or a second distance corresponding to the yaw axis, and to determine the angle corresponding to the first distance as the target yaw angle and/or the angle corresponding to the second distance as the target pitch angle. (A displacement along the horizontal pitch axis drives yaw, and a displacement along the vertical yaw axis drives pitch.)
Alternatively, the attitude parameter is a rigid body motion trajectory of the first predetermined gesture, and the invoking module 104 is configured to determine, according to the rotation direction of the first predetermined gesture represented by the rigid body motion trajectory within the plane of the yaw axis and the pitch axis, the target roll angle by which the pan-tilt in the gesture control state needs to rotate, and to invoke a second control instruction based on the target roll angle so that the pan-tilt rotates by that angle.
Further, the invoking module 104 is configured to determine, as the target roll angle, the rotation angle of the first predetermined gesture represented by the rigid body motion trajectory within the plane of the yaw axis and the pitch axis.
Alternatively, the attitude parameter is the relative movement direction of the fingers of the first predetermined gesture, and the invoking module 104 is configured to invoke a third control instruction for zooming out a specific display object when the fingers move toward each other, or a fourth control instruction for zooming in the specific display object when the fingers move apart.
Alternatively, the attitude parameter is the indicated direction of the first predetermined gesture, and the invoking module 104 is configured to determine, according to the indicated direction, that the pan-tilt in the gesture control state needs to adjust its yaw angle and/or pitch angle, and to invoke a fifth control instruction for adjusting the yaw angle and/or pitch angle.
Still further, the apparatus also comprises:
The judging module is configured to judge whether the pan-tilt is in the gesture control state;
The triggering module is configured to control the pan-tilt to enter the gesture control state when the pan-tilt is not in the gesture control state and a trigger gesture is detected.
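For illustration only, a minimal sketch (not part of the patent) of one way the recognition module 101, starting module 102, obtaining module 103, and invoking module 104 could be composed per frame. All class and method names are hypothetical.

```python
# Illustrative sketch, not part of the patent: one possible per-frame
# composition of the four modules. All class/method names are hypothetical.

class PanTiltControlApparatus:
    def __init__(self, recognizer, strategies, pan_tilt):
        self.recognizer = recognizer  # recognition module 101
        self.strategies = strategies  # starting module 102: gesture kind -> strategy
        self.pan_tilt = pan_tilt      # device driven by the invoked instructions

    def process_frame(self, image):
        gesture = self.recognizer.recognize(image)      # module 101
        if gesture is None:
            return
        strategy = self.strategies.get(gesture.kind)    # module 102: start strategy
        if strategy is None:
            return
        params = strategy.attitude_parameters(image, gesture)  # obtaining module 103
        instruction = strategy.instruction_for(params)         # invoking module 104
        instruction.execute(self.pan_tilt)
```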
The variations and specific examples of the pan-tilt control method in the embodiments of figs. 1 to 6 also apply to the pan-tilt control apparatus of this embodiment. Given the foregoing detailed description of the method, those skilled in the art can clearly understand how the apparatus is implemented, so the details are omitted here for brevity.
The technical solutions in the embodiments of the present application have at least the following technical effects:
In the embodiments of the invention, an image is first acquired through an image acquisition unit mounted on the pan-tilt, and the acquired image is recognized to obtain a recognition result indicating that a predetermined gesture exists in the image and is specifically a first predetermined gesture. A first control strategy corresponding to the first predetermined gesture is then started based on the recognition result; the attitude parameter of the first predetermined gesture is obtained from the acquired image according to the first control strategy; and the control instruction corresponding to the attitude parameter is invoked and executed to control the pan-tilt. The pan-tilt is thus controlled by instructions invoked from the user's first predetermined gesture and its attitude, so it can respond to predetermined gestures in real time, and the user can control it with gestures alone. Moreover, because gesture control requires no switching of the viewing angle, the user can observe the pan-tilt's real-time state while gesturing and adjust the gesture promptly so that the pan-tilt adjusts accordingly. The invention therefore solves the technical problem of long lag time in pan-tilt control and achieves the technical effect of controlling the pan-tilt in real time.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (12)
1. A pan-tilt control method, characterized by comprising:
acquiring an image through an image acquisition unit mounted on a pan-tilt, and recognizing the acquired image to obtain a recognition result, the recognition result indicating that a predetermined gesture exists in the acquired image and that the predetermined gesture is a first predetermined gesture;
starting, based on the recognition result, a first control strategy corresponding to the first predetermined gesture, wherein starting the first control strategy means invoking a corresponding instruction set according to the first predetermined gesture;
obtaining an attitude parameter of the first predetermined gesture from the acquired image according to the first control strategy;
invoking a control instruction corresponding to the attitude parameter according to the attitude parameter, wherein the control instruction corresponding to the attitude parameter is determined and invoked from the invoked instruction set;
and executing the control instruction to control the pan-tilt;
wherein, when the attitude parameter is a rigid body motion trajectory of the first predetermined gesture, invoking a control instruction corresponding to the attitude parameter according to the attitude parameter comprises:
determining, according to the rotation direction of the first predetermined gesture represented by the rigid body motion trajectory within the plane of the yaw axis and the pitch axis, a target roll angle by which the pan-tilt in a gesture control state needs to rotate, and invoking a second control instruction based on the target roll angle so that the pan-tilt rotates by the target roll angle;
and, when the attitude parameter is a relative movement direction of the fingers of the first predetermined gesture, invoking a control instruction corresponding to the attitude parameter according to the attitude parameter comprises:
invoking a third control instruction for zooming out a specific display object when the fingers move toward each other; or
invoking a fourth control instruction for zooming in the specific display object when the fingers move apart.
2. The method of claim 1, wherein the attitude parameter is a particle motion trajectory of the first predetermined gesture, and invoking a control instruction corresponding to the attitude parameter according to the attitude parameter comprises:
determining, according to the motion direction of the first predetermined gesture represented by the particle motion trajectory, a target yaw angle and/or a target pitch angle by which the pan-tilt in the gesture control state needs to rotate, and invoking a first control instruction based on the target yaw angle and/or target pitch angle so that the pan-tilt rotates by the target yaw angle and/or target pitch angle.
3. The method of claim 2, wherein determining the target yaw angle and/or target pitch angle by which the pan-tilt in the gesture control state needs to rotate comprises:
obtaining, from the movement distance of the first predetermined gesture represented by the particle motion trajectory, a first distance corresponding to the pitch axis and/or a second distance corresponding to the yaw axis;
and determining the angle corresponding to the first distance as the target yaw angle and/or the angle corresponding to the second distance as the target pitch angle.
4. The method of claim 1, wherein determining the target roll angle by which the pan-tilt in the gesture control state needs to rotate comprises:
determining, as the target roll angle, the rotation angle of the first predetermined gesture represented by the rigid body motion trajectory within the plane of the yaw axis and the pitch axis.
5. The method of claim 1, wherein the attitude parameter is an indicated direction of the first predetermined gesture, and invoking a control instruction corresponding to the attitude parameter according to the attitude parameter comprises:
determining, according to the indicated direction, that the pan-tilt in the gesture control state needs to adjust its yaw angle and/or pitch angle, and invoking a fifth control instruction for adjusting the yaw angle and/or pitch angle.
6. The method of any one of claims 2-5, further comprising:
judging whether the pan-tilt is in the gesture control state;
and controlling the pan-tilt to enter the gesture control state when the pan-tilt is not in the gesture control state and a trigger gesture is detected.
7. A pan-tilt control apparatus, characterized by comprising:
a recognition module, configured to acquire an image through an image acquisition unit mounted on a pan-tilt and to recognize the acquired image to obtain a recognition result, the recognition result indicating that a predetermined gesture exists in the acquired image and that the predetermined gesture is a first predetermined gesture;
a starting module, configured to start, based on the recognition result, a first control strategy corresponding to the first predetermined gesture, wherein starting the first control strategy means invoking a corresponding instruction set according to the first predetermined gesture;
an obtaining module, configured to obtain an attitude parameter of the first predetermined gesture from the acquired image according to the first control strategy;
and an invoking module, configured to invoke a control instruction corresponding to the attitude parameter according to the attitude parameter, wherein the control instruction corresponding to the attitude parameter is determined and invoked from the invoked instruction set, and to execute the control instruction to control the pan-tilt;
wherein, when the attitude parameter is a rigid body motion trajectory of the first predetermined gesture, the invoking module is configured to determine, according to the rotation direction of the first predetermined gesture represented by the rigid body motion trajectory within the plane of the yaw axis and the pitch axis, a target roll angle by which the pan-tilt in a gesture control state needs to rotate, and to invoke a second control instruction based on the target roll angle so that the pan-tilt rotates by the target roll angle;
and, when the attitude parameter is a relative movement direction of the fingers of the first predetermined gesture, the invoking module is configured to invoke a third control instruction for zooming out a specific display object when the fingers move toward each other, or a fourth control instruction for zooming in the specific display object when the fingers move apart.
8. The apparatus of claim 7, wherein the attitude parameter is a particle motion trajectory of the first predetermined gesture, and the invoking module is configured to determine, according to the motion direction of the first predetermined gesture represented by the particle motion trajectory, a target yaw angle and/or a target pitch angle by which the pan-tilt in the gesture control state needs to rotate, and to invoke a first control instruction based on the target yaw angle and/or target pitch angle so that the pan-tilt rotates by the target yaw angle and/or target pitch angle.
9. The apparatus of claim 8, wherein the invoking module is configured to obtain, from the movement distance of the first predetermined gesture represented by the particle motion trajectory, a first distance corresponding to the pitch axis and/or a second distance corresponding to the yaw axis, and to determine the angle corresponding to the first distance as the target yaw angle and/or the angle corresponding to the second distance as the target pitch angle.
10. The apparatus of claim 7, wherein the invoking module is configured to determine, as the target roll angle, the rotation angle of the first predetermined gesture represented by the rigid body motion trajectory within the plane of the yaw axis and the pitch axis.
11. The apparatus of claim 7, wherein the attitude parameter is an indicated direction of the first predetermined gesture, and the invoking module is configured to determine, according to the indicated direction, that the pan-tilt in the gesture control state needs to adjust its yaw angle and/or pitch angle, and to invoke a fifth control instruction for adjusting the yaw angle and/or pitch angle.
12. The apparatus of any one of claims 7-11, further comprising:
a judging module, configured to judge whether the pan-tilt is in the gesture control state;
and a triggering module, configured to control the pan-tilt to enter the gesture control state when the pan-tilt is not in the gesture control state and a trigger gesture is detected.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610798151.5A CN106339093B (en) | 2016-08-31 | 2016-08-31 | Cloud deck control method and device |
PCT/CN2017/097420 WO2018040906A1 (en) | 2016-08-31 | 2017-08-14 | Pan-tilt control method and device, and computer storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610798151.5A CN106339093B (en) | 2016-08-31 | 2016-08-31 | Cloud deck control method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106339093A CN106339093A (en) | 2017-01-18 |
CN106339093B true CN106339093B (en) | 2019-12-13 |
Family
ID=57822542
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610798151.5A Active CN106339093B (en) | 2016-08-31 | 2016-08-31 | Cloud deck control method and device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106339093B (en) |
WO (1) | WO2018040906A1 (en) |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106339093B (en) * | 2016-08-31 | 2019-12-13 | 纳恩博(北京)科技有限公司 | Cloud deck control method and device |
CN110337625B (en) * | 2018-03-29 | 2022-06-03 | 深圳市大疆创新科技有限公司 | Pan-tilt track planning method and device |
CN110337624A (en) * | 2018-05-31 | 2019-10-15 | 深圳市大疆创新科技有限公司 | Posture conversion method, posture display methods and clouds terrace system |
WO2020037617A1 (en) * | 2018-08-23 | 2020-02-27 | 深圳市大疆创新科技有限公司 | Gimbal control method, gimbal and gimbal control system |
CN110873563B (en) * | 2018-08-30 | 2022-03-08 | 杭州海康机器人技术有限公司 | Cloud deck attitude estimation method and device |
CN110337622A (en) * | 2018-08-31 | 2019-10-15 | 深圳市大疆创新科技有限公司 | Vertical tranquilizer control method, vertical tranquilizer and image acquisition equipment |
EP3889729A4 (en) * | 2018-11-28 | 2022-07-13 | SZ DJI Technology Co., Ltd. | Control method for gimbal, gimbal, mobile platform, and computer readable storage medium |
CN111443698A (en) * | 2018-12-28 | 2020-07-24 | 上海太昂科技有限公司 | Posture self-adjusting mobile balancing device and method, electronic terminal and storage medium |
CN112154652A (en) * | 2019-08-13 | 2020-12-29 | 深圳市大疆创新科技有限公司 | Control method and control device of handheld cloud deck, handheld cloud deck and storage medium |
CN111123986A (en) * | 2019-12-25 | 2020-05-08 | 四川云盾光电科技有限公司 | Control device for controlling two-degree-of-freedom turntable based on gestures |
CN111638730B (en) * | 2020-05-25 | 2023-07-25 | 浙江大华技术股份有限公司 | Double-cradle head control method and device, electronic equipment and storage medium |
CN111596693B (en) * | 2020-06-17 | 2023-05-26 | 中国人民解放军国防科技大学 | Ground target tracking control method and system for unmanned aerial vehicle based on pan-tilt camera |
WO2022021092A1 (en) * | 2020-07-28 | 2022-02-03 | 深圳市大疆创新科技有限公司 | Gimbal control method and apparatus, device, and computer-readable storage medium |
CN112274920B (en) * | 2020-11-24 | 2022-05-31 | 亓乐(北京)文化科技有限公司 | Virtual reality gesture control method, platform, server and readable storage medium |
CN113772599A (en) * | 2021-09-15 | 2021-12-10 | 湖南星邦智能装备股份有限公司 | Scissor-fork type aerial work platform and control system and method thereof |
CN114157806A (en) * | 2021-11-23 | 2022-03-08 | 深圳市商汤科技有限公司 | Holder control method and device, holder and medium |
CN114860068A (en) * | 2022-03-31 | 2022-08-05 | 联想(北京)有限公司 | Input method and electronic equipment |
CN114845056B (en) * | 2022-04-29 | 2023-06-06 | 清华大学 | Auxiliary photographing robot |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103092437A (en) * | 2012-12-13 | 2013-05-08 | 同济大学 | Portable touch interactive system based on image processing technology |
CN103442177A (en) * | 2013-08-30 | 2013-12-11 | 程治永 | PTZ video camera control system and method based on gesture identification |
CN104767873A (en) * | 2014-01-03 | 2015-07-08 | 哈曼国际工业有限公司 | Seamless content transfer |
CN105892668A (en) * | 2016-04-01 | 2016-08-24 | 纳恩博(北京)科技有限公司 | Equipment control method and device |
CN106249888A (en) * | 2016-07-28 | 2016-12-21 | 纳恩博(北京)科技有限公司 | A kind of cloud platform control method and device |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104808799A (en) * | 2015-05-20 | 2015-07-29 | 成都通甲优博科技有限责任公司 | Unmanned aerial vehicle capable of indentifying gesture and identifying method thereof |
CN106339093B (en) * | 2016-08-31 | 2019-12-13 | 纳恩博(北京)科技有限公司 | Cloud deck control method and device |
- 2016-08-31: CN application CN201610798151.5A filed, granted as CN106339093B (status: Active)
- 2017-08-14: WO application PCT/CN2017/097420 filed (WO2018040906A1, status: Application Filing)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103092437A (en) * | 2012-12-13 | 2013-05-08 | 同济大学 | Portable touch interactive system based on image processing technology |
CN103442177A (en) * | 2013-08-30 | 2013-12-11 | 程治永 | PTZ video camera control system and method based on gesture identification |
CN104767873A (en) * | 2014-01-03 | 2015-07-08 | 哈曼国际工业有限公司 | Seamless content transfer |
CN105892668A (en) * | 2016-04-01 | 2016-08-24 | 纳恩博(北京)科技有限公司 | Equipment control method and device |
CN106249888A (en) * | 2016-07-28 | 2016-12-21 | 纳恩博(北京)科技有限公司 | A kind of cloud platform control method and device |
Also Published As
Publication number | Publication date |
---|---|
WO2018040906A1 (en) | 2018-03-08 |
CN106339093A (en) | 2017-01-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106339093B (en) | Cloud deck control method and device | |
US20200104598A1 (en) | Imaging control method and device | |
CN104486543B (en) | System for controlling pan-tilt camera in touch mode of intelligent terminal | |
US9721396B2 (en) | Computer and computer system for controlling object manipulation in immersive virtual space | |
US20170316582A1 (en) | Robust Head Pose Estimation with a Depth Camera | |
WO2007088856A1 (en) | Automatic tracking device and automatic tracking method | |
US8416189B2 (en) | Manual human machine interface operation system and method thereof | |
WO2021035646A1 (en) | Wearable device and control method therefor, gesture recognition method, and control system | |
KR20150048623A (en) | Screen Operation Apparatus and Screen Operation Method Cross-Reference to Related Application | |
JP5509227B2 (en) | Movement control device, movement control device control method, and program | |
WO2014161306A1 (en) | Data display method, device, and terminal, and display control method and device | |
CN105930775B (en) | Facial orientation recognition methods based on sensitivity parameter | |
JP4802012B2 (en) | Camera control apparatus and camera control method | |
CN110489027B (en) | Handheld input device and display position control method and device of indication icon of handheld input device | |
US11720178B2 (en) | Information processing device, information processing method, and computer-readable recording medium | |
WO2021026789A1 (en) | Photographing method based on handheld gimbal, and handheld gimbal and storage medium | |
JP2023073252A (en) | Image display system, image display program, display control device, and image display method | |
CN113287296A (en) | Control method, handheld cloud deck, system and computer readable storage medium | |
JP6276954B2 (en) | Video surveillance system | |
KR101856547B1 (en) | Method for processing of signal of user and apparatus for performing the method | |
JP5884584B2 (en) | Information processing apparatus, menu selection program, and menu selection method | |
WO2022061541A1 (en) | Control method, handheld gimbal, system, and computer-readable storage medium | |
CN112445342A (en) | Display screen control method and device and electronic equipment | |
JP4364861B2 (en) | Information display device | |
CN113260942A (en) | Handheld holder control method, handheld holder, system and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |