CN115761602B - Intelligent video identification method for vehicle window control system - Google Patents
- Publication number
- CN115761602B CN115761602B CN202310021086.5A CN202310021086A CN115761602B CN 115761602 B CN115761602 B CN 115761602B CN 202310021086 A CN202310021086 A CN 202310021086A CN 115761602 B CN115761602 B CN 115761602B
- Authority
- CN
- China
- Prior art keywords
- arm
- suspected
- target
- area
- arm area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Image Analysis (AREA)
Abstract
The invention relates to the technical field of data identification, and in particular to an intelligent video identification method for a vehicle window control system, which comprises the following steps: acquiring a window control video of a window control area through a camera, and identifying the motion areas of the window control images in the window control video; determining each motion area as a suspected arm area; performing arm morphological feature identification on each suspected arm area in the suspected arm area set; performing arm movement possibility feature identification on each suspected arm area; determining the target arm movement possibility; determining the arm height index and the arm speed index corresponding to each suspected arm area; screening out the maximum arm height index and the maximum arm speed index; and controlling the lifting of the target window accordingly. The method obtains the window control video and performs the related data identification by means of pattern recognition, improves the efficiency and accuracy of window control, and is applied to window control.
Description
Technical Field
The invention relates to the technical field of data identification, in particular to an intelligent video identification method for a vehicle window control system.
Background
Window control is one of the functions most frequently used in everyday driving. A common window control method works as follows: a simple motor and lifting mechanism are operated through a window lift button mounted on the door, which controls the rotational speed and direction of the motor so as to raise or lower the window glass. Controlling the window in this way requires the coordinated use of the driver's hands and eyes, which is quite inconvenient. In view of this problem, a gesture-based window control method has been proposed, which comprises: obtaining a gesture image from a gesture monitoring video, performing gesture recognition on the gesture image, comparing the recognized gesture with preset template gestures, and controlling the window according to the comparison result.
However, when the window control is performed by using the gesture, the following technical problems often exist:
First, for various reasons, such as occlusion by long sleeves, the gesture in a gesture image may be incomplete. This often leads to inaccurate judgment of the gesture and therefore to wrong comparison results, so the accuracy of window control is low.
Second, because the window state includes fully open, fully closed, and multiple heights in between, gesture-based window control often requires multiple template gestures to distinguish the different window states. A user therefore often needs to recall or look up several template gestures, which wastes time and lowers the efficiency of window control; moreover, a template gesture recalled from memory may be inaccurate, which again lowers the accuracy of window control.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In order to solve the technical problem that the accuracy of vehicle window control is low, the invention provides a video intelligent identification method for a vehicle window control system.
The invention provides a video intelligent identification method for a vehicle window control system, which comprises the following steps:
the method comprises the steps of obtaining a car window control video of a car window control area, carrying out motion area identification on each frame of car window control image in the car window control video, and determining the car window control image as a target control image when the motion area is identified from the car window control image to obtain a target control image set;
when the area of a motion area in a target control image in the target control image set is larger than or equal to a preset area threshold, determining the motion area as a suspected arm area to obtain a suspected arm area set;
performing arm morphological feature identification on each suspected arm area in the suspected arm area set to obtain the arm possibility corresponding to the suspected arm area;
performing arm movement possibility feature identification on each suspected arm area in the suspected arm area set to obtain the arm movement possibility corresponding to the suspected arm area;
determining the movement possibility of a target arm according to the arm possibility and the arm movement possibility corresponding to each suspected arm area in the suspected arm area set;
when the movement possibility of the target arm is larger than a preset movement threshold, determining an arm height index and an arm speed index corresponding to each suspected arm area in the set of suspected arm areas;
screening out the maximum arm height index and the maximum arm speed index from the arm height indexes and the arm speed indexes corresponding to all suspected arm areas in the suspected arm area set, and respectively taking the maximum arm height index and the maximum arm speed index as a target height index and a target speed index;
and controlling the lifting of the target car window according to the target height index or the target speed index.
Further, the performing arm morphology feature identification on each suspected arm area in the set of suspected arm areas to obtain an arm possibility corresponding to the suspected arm area includes:
for each pixel point on the edge of the suspected arm area, performing fitting vectorization on the pixel point and an edge pixel point in a preset target neighborhood corresponding to the pixel point to obtain a fitting vector corresponding to the pixel point;
determining an included angle between a fitting vector corresponding to each pixel point on the edge of the suspected arm area and a preset target direction as a target included angle corresponding to the pixel point;
clustering all pixel points on the edge of the suspected arm area according to the target included angles corresponding to all the pixel points on the edge of the suspected arm area to obtain an edge pixel point cluster set corresponding to the suspected arm area;
determining clustering differences corresponding to the suspected arm areas according to target included angles corresponding to clustering centers of clustering included by all edge pixel points in the edge pixel point clustering set corresponding to the suspected arm areas;
for each pixel point on the edge of the suspected arm area, screening an edge pixel point closest to the pixel point from edge pixel point clusters where the pixel points are located in an edge pixel point cluster set corresponding to the suspected arm area, and using the edge pixel point as a target edge pixel point corresponding to the pixel point;
determining an absolute value of a difference value between each pixel point on the edge of the suspected arm area and a target included angle corresponding to a target edge pixel point corresponding to the pixel point as an edge difference corresponding to the pixel point;
determining the target length and the target width corresponding to the suspected arm area according to the position corresponding to each pixel point on the edge of the suspected arm area;
determining a first arm possibility and a second arm possibility corresponding to the suspected arm area according to cluster differences corresponding to the suspected arm area, a target length and a target width, a pre-obtained standard width-length ratio, edge differences corresponding to all pixel points on the edge of the suspected arm area, a target included angle corresponding to a cluster center included in an edge pixel point cluster where all pixel points on the edge of the suspected arm area are located, and a target included angle corresponding to all edge pixel points in the edge pixel point cluster where all pixel points on the edge of the suspected arm area are located;
and determining the product of the first arm possibility and the second arm possibility corresponding to the suspected arm area as the arm possibility corresponding to the suspected arm area.
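The fitting vectorization described above can be illustrated with a minimal sketch. The helper below (the name `fitting_angle` and the least-squares formulation are illustrative assumptions, not the patented implementation) fits a line through an edge pixel and the edge pixels in its neighborhood, and returns the angle between the fitted direction and a horizontal target direction.

```python
import math

def fitting_angle(points):
    """Least-squares line fit through an edge pixel and its neighborhood's
    edge pixels; returns the angle (radians, in [0, pi)) between the fitted
    direction and the horizontal target direction."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
    if sxx == 0:                      # fitted line is vertical
        return math.pi / 2
    return math.atan2(sxy, sxx) % math.pi
```

The resulting target included angles can then be clustered (the description's keyword list mentions k-means clustering) and compared to the cluster centers to obtain the edge differences.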
Further, the determining the first arm likelihood and the second arm likelihood corresponding to the suspected arm area includes:
determining a formula corresponding to the first arm possibility corresponding to the suspected arm area as follows:
wherein the terms of the formula are:
- the first arm possibility corresponding to the jth suspected arm area in the suspected arm area set, where j is the serial number of the suspected arm area in the set;
- the cluster difference corresponding to the jth suspected arm area;
- the number of pixel points on the edge of the jth suspected arm area;
- the edge difference corresponding to the ith pixel point on the edge of the jth suspected arm area, where i is the serial number of the pixel point on the edge;
- the number of edge pixel points in the edge pixel point cluster where the ith pixel point on the edge of the jth suspected arm area is located;
- the target included angle corresponding to the xth edge pixel point in the edge pixel point cluster where the ith pixel point on the edge of the jth suspected arm area is located;
- the target included angle corresponding to the cluster center of the edge pixel point cluster where the ith pixel point on the edge of the jth suspected arm area is located;
- a preset parameter index;
determining a formula corresponding to the second arm possibility corresponding to the suspected arm area as follows:
wherein the terms of the formula are:
- the second arm possibility corresponding to the jth suspected arm area in the suspected arm area set, where j is the serial number of the suspected arm area in the set;
- an absolute value function;
- the target width corresponding to the jth suspected arm area;
- the target length corresponding to the jth suspected arm area;
- U, the standard aspect ratio.
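The formula itself is not reproduced in the text above; one plausible instantiation of the second arm possibility, consistent with the listed terms (the absolute deviation of the region's width-to-length ratio from the standard aspect ratio U), is sketched below. The exponential form and the function name are assumptions for illustration only.

```python
import math

def second_arm_possibility(width, length, standard_ratio):
    # P2 is assumed to decrease as the region's width-to-length ratio
    # deviates from the standard aspect ratio U; exact form unknown.
    return math.exp(-abs(width / length - standard_ratio))
```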
Further, the performing arm movement possibility feature identification on each suspected arm area in the set of suspected arm areas to obtain an arm movement possibility corresponding to the suspected arm area includes:
establishing a coordinate system by taking the lower left corner of the vehicle window control image where the suspected arm area is located as an origin, and taking the coordinate system as a target coordinate system corresponding to the suspected arm area;
determining a position corresponding to a pixel point on the edge of the suspected arm area, which is closest to a target origin, as a first edge point position corresponding to the suspected arm area, wherein the target origin is an origin of a target coordinate system corresponding to the suspected arm area;
determining a second edge point position and a third edge point position corresponding to the suspected arm area according to a target arm point identification network which is trained in advance;
determining the position corresponding to the pixel point farthest from the target origin on the edge of the suspected arm area as the position of a fourth edge point corresponding to the suspected arm area;
and determining the arm movement possibility corresponding to the suspected arm area according to a reference arm area corresponding to the suspected arm area and a first edge point position, a second edge point position, a third edge point position and a fourth edge point position corresponding to the suspected arm area, wherein the reference arm area corresponding to the suspected arm area is a suspected arm area included in a previous frame of window control image of the window control image in which the suspected arm area is located.
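The first and fourth edge points are fully determined by distance to the target origin, which can be sketched as follows (the function name and the (x, y)-tuple representation of edge pixels are illustrative assumptions):

```python
import math

def first_and_fourth_edge_points(edge_points):
    """Pick the edge pixel nearest the target-coordinate origin (first
    edge point) and the one farthest from it (fourth edge point)."""
    dist = lambda p: math.hypot(p[0], p[1])
    return min(edge_points, key=dist), max(edge_points, key=dist)
```

The second and third edge points, by contrast, come from the pre-trained target arm point identification network and are not reproducible from this description alone.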
Further, the determining, according to a reference arm area corresponding to the suspected arm area and a first edge point position, a second edge point position, a third edge point position, and a fourth edge point position corresponding to the suspected arm area, an arm movement possibility corresponding to the suspected arm area includes:
determining a first moving distance corresponding to the suspected arm area according to a first edge point position corresponding to a reference arm area corresponding to the suspected arm area and a first edge point position corresponding to the suspected arm area;
determining a second moving distance, a third moving distance and a fourth moving distance corresponding to the suspected arm area according to a reference arm area corresponding to the suspected arm area and a second edge point position, a third edge point position and a fourth edge point position corresponding to the suspected arm area;
determining a first relative movement speed index corresponding to the suspected arm area according to a first movement distance and a second movement distance corresponding to the suspected arm area;
determining a second relative movement speed index corresponding to the suspected arm area according to the first movement distance, the second movement distance and the third movement distance corresponding to the suspected arm area;
determining a third relative movement speed index corresponding to the suspected arm area according to the first movement distance, the second movement distance, the third movement distance and the fourth movement distance corresponding to the suspected arm area;
and determining the sum of the first relative movement speed index, the second relative movement speed index and the third relative movement speed index corresponding to the suspected arm area as the arm movement possibility corresponding to the suspected arm area.
Further, the determining a third relative moving speed index corresponding to the suspected arm area according to the first moving distance, the second moving distance, the third moving distance, and the fourth moving distance corresponding to the suspected arm area includes:
acquiring target duration corresponding to the suspected arm area;
determining the difference between the fourth movement distance and the first movement distance corresponding to the suspected arm area as a first relative distance corresponding to the suspected arm area;
determining the difference between the fourth moving distance and the second moving distance corresponding to the suspected arm area as a second relative distance corresponding to the suspected arm area;
determining the difference between a fourth moving distance and a third moving distance corresponding to the suspected arm area as a third relative distance corresponding to the suspected arm area;
determining the average value of the first relative distance, the second relative distance and the third relative distance corresponding to the suspected arm area as the average value of the relative distances corresponding to the suspected arm area;
and determining the ratio of the relative distance mean value corresponding to the suspected arm area to the target duration as a third relative moving speed index corresponding to the suspected arm area.
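The computation in the steps above is fully specified and can be sketched directly; the function and parameter names are illustrative:

```python
def third_relative_speed_index(d1, d2, d3, d4, target_duration):
    # Mean of the fourth moving distance's offsets over the other three
    # moving distances, divided by the target duration.
    relative_mean = ((d4 - d1) + (d4 - d2) + (d4 - d3)) / 3.0
    return relative_mean / target_duration
```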
Further, the determining an arm height index and an arm speed index corresponding to each suspected arm area in the set of suspected arm areas includes:
determining a longitudinal coordinate included in a fourth coordinate corresponding to a fourth edge point position corresponding to the suspected arm area as an arm height index corresponding to the suspected arm area, wherein the fourth coordinate is a coordinate of the fourth edge point position corresponding to a target coordinate system;
and determining the ratio of the fourth moving distance corresponding to the suspected arm area to the target duration as an arm speed index corresponding to the suspected arm area.
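These two indices can be sketched as follows, assuming the fourth edge point is given as an (x, y) coordinate pair in the target coordinate system (names are illustrative):

```python
def arm_indices(fourth_edge_point, fourth_move_distance, target_duration):
    """Height index = y-coordinate of the fourth edge point in the
    target coordinate system; speed index = fourth moving distance
    divided by the target duration."""
    height_index = fourth_edge_point[1]
    speed_index = fourth_move_distance / target_duration
    return height_index, speed_index
```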
Further, the controlling the lifting of the target window according to the target height index or the target speed index includes:
when the target speed index is larger than a preset speed index threshold, controlling a target window to be completely opened or completely closed according to two edge pixel points corresponding to a fourth moving distance corresponding to the last suspected arm area in the suspected arm area set;
and when the target speed index is smaller than or equal to a speed index threshold value, controlling the height of the target car window according to the target height index.
Further, the controlling the target window to be completely opened or completely closed according to two edge pixel points corresponding to the fourth moving distance corresponding to the last suspected arm area in the suspected arm area set includes:
merging two edge pixel points corresponding to a fourth moving distance corresponding to the last suspected arm area into a preset standard coordinate system;
determining a target vector according to the positions of two edge pixel points corresponding to a fourth movement distance corresponding to the last suspected arm area under the standard coordinate system;
determining an included angle between the target vector and a preset longitudinal axis vector as a direction included angle, wherein the direction of the longitudinal axis vector is the same as the positive direction of the longitudinal axis of a standard coordinate system;
when the included angle of the directions is larger than 90 degrees, controlling the target car window to be completely opened;
and when the direction included angle is smaller than or equal to 90 degrees, controlling the target vehicle window to be completely closed.
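The direction-angle decision can be sketched as follows; the two edge pixel points are assumed to be given as (x, y) pairs in the standard coordinate system, and the string return values are illustrative:

```python
import math

def full_open_or_close(p_start, p_end):
    """Angle between the target vector (p_start -> p_end) and the positive
    y-axis vector (0, 1): greater than 90 degrees -> fully open the window,
    otherwise -> fully close it, per the rule described above."""
    vx, vy = p_end[0] - p_start[0], p_end[1] - p_start[1]
    norm = math.hypot(vx, vy)
    if norm == 0:
        raise ValueError("target vector is undefined for coincident points")
    angle = math.degrees(math.acos(vy / norm))  # angle vs. (0, 1)
    return "open" if angle > 90 else "close"
```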
Further, the controlling the height of the target window according to the target height index includes:
when the target height index is larger than or equal to a first target height acquired in advance, controlling the target car window to be completely opened;
when the target height index is smaller than or equal to a second target height acquired in advance, controlling the target window to be completely closed;
and when the target height index is smaller than the first target height and larger than the second target height, determining the product of the target proportion coefficient and the target height index which are acquired in advance as a target adjusting height, controlling the target window, and adjusting the height of the target window to the target adjusting height.
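A minimal sketch of this three-branch height control, with hypothetical parameter names for the first target height, second target height, and target proportion coefficient:

```python
def window_height_command(h, h_first, h_second, k):
    # h: target height index; h_first/h_second: pre-acquired first and
    # second target heights; k: pre-acquired target proportion coefficient.
    if h >= h_first:
        return ("fully_open", None)
    if h <= h_second:
        return ("fully_closed", None)
    return ("adjust", k * h)          # target adjusting height
```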
The invention has the following beneficial effects:
according to the intelligent video identification method for the vehicle window control system, the vehicle window control video is obtained in a pattern identification mode, relevant data identification is carried out, the problem that the vehicle window control efficiency and accuracy are low is solved, and the vehicle window control efficiency and accuracy are improved. Firstly, carrying out motion area identification on each frame of window control image in the acquired window control video, and screening the window control images with the motion areas identified out to obtain a target control image set. According to the invention, the vehicle window is controlled by identifying the motion state of the arm, and when the vehicle window control image has no motion area, the vehicle window control image is often free of the arm which moves, so that the target control image which possibly comprises the moving arm is screened out, the subsequent vehicle window control image without the motion area is not required to be identified, the calculated amount is reduced, and the occupation of calculation resources is reduced. Then, according to the area of the motion area, whether the motion area is a suspected arm area or not is judged, and whether the motion area is a motion arm area or not can be preliminarily judged. And performing arm morphological feature recognition and arm movement possible feature recognition on the suspected arm area to determine the movement possibility of the target arm, so that the accuracy of determining the movement possibility of the target arm can be improved. And then, when the movement possibility of the target arm is greater than a preset movement threshold, determining an arm height index and an arm speed index corresponding to each suspected arm area in the set of suspected arm areas. And determining the arm height index and the arm speed index, so that the target window can be conveniently controlled subsequently. 
And then, screening out the maximum arm height index and the maximum arm speed index from the arm height indexes and the arm speed indexes corresponding to the suspected arm areas in the suspected arm area set, and respectively using the maximum arm height index and the maximum arm speed index as a target height index and a target speed index. And finally, controlling the lifting of the target car window according to the target height index or the target speed index. Because the recognition of the motion state of the arm is not influenced when the shooting of the opponent is incomplete, compared with a gesture car window control method depending on gestures, the car window control method based on the gesture has the advantage that the car window control accuracy can be improved. Compared with a gesture car window control method needing to recall or check a plurality of template gestures, the car window control efficiency is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a flowchart of a video intelligent recognition method for a vehicle window control system according to the present invention;
FIG. 2 is a schematic view of a window control area according to the present invention;
FIG. 3 is a schematic view of a camera mounting location according to the present invention;
FIG. 4 is a schematic view of a fitting vector according to the present invention;
FIG. 5 is a schematic view of an arm state involved in window control according to the present invention;
fig. 6 is a schematic diagram of the moving distance according to the present invention.
Wherein the reference numerals include: a first rectangle 201, a second rectangle 202, a position 301, a pixel 401, a first edge pixel 402, a second edge pixel 403, a fitting line 404, a third edge pixel 601, a fourth edge pixel 602, a fifth edge pixel 603, a sixth edge pixel 604, a seventh edge pixel 605, an eighth edge pixel 606, a ninth edge pixel 607, and a tenth edge pixel 608.
Detailed Description
To further explain the technical means and effects of the present invention adopted to achieve the predetermined objects, the following detailed description of the embodiments, structures, features and effects of the technical solutions according to the present invention will be given with reference to the accompanying drawings and preferred embodiments. In the following description, different references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The invention provides a video intelligent identification method for a vehicle window control system, which comprises the following steps:
acquiring a vehicle window control video of a vehicle window control area, performing motion area identification on each frame of vehicle window control image in the vehicle window control video, and determining the vehicle window control image as a target control image to obtain a target control image set when the motion area is identified from the vehicle window control image;
when the area of a motion area in a target control image in the target control image set is larger than or equal to a preset area threshold, determining the motion area as a suspected arm area to obtain a suspected arm area set;
performing arm morphological feature identification on each suspected arm area in the suspected arm area set to obtain the arm possibility corresponding to the suspected arm area;
performing arm movement possibility feature identification on each suspected arm area in the suspected arm area set to obtain the arm movement possibility corresponding to the suspected arm area;
determining the movement possibility of the target arm according to the arm possibility and the arm movement possibility corresponding to each suspected arm area in the suspected arm area set;
when the movement possibility of the target arm is larger than a preset movement threshold, determining an arm height index and an arm speed index corresponding to each suspected arm area in the suspected arm area set;
screening out the maximum arm height index and the maximum arm speed index from the arm height indexes and the arm speed indexes corresponding to the suspected arm areas in the suspected arm area set, and respectively using the maximum arm height index and the maximum arm speed index as a target height index and a target speed index;
and controlling the lifting of the target car window according to the target height index or the target speed index.
The following steps are detailed:
referring to fig. 1, a flow diagram of some embodiments of a video intelligent recognition method for a vehicle window control system according to the present invention is shown. The intelligent video identification method for the vehicle window control system comprises the following steps:
step S1, obtaining a window control video of a window control area, carrying out motion area identification on each frame of window control image in the window control video, and determining the window control image as a target control image when the motion area is identified from the window control image to obtain a target control image set.
In some embodiments, a window control video of a window control area may be acquired, motion area identification may be performed on each frame of window control image in the window control video, and when a motion area is identified from the window control image, the window control image is determined as a target control image, resulting in a target control image set.
The window control area may be a preset area that takes a window as its starting position. As shown in fig. 2, the first rectangle 201 may characterize the interior region at the front of the vehicle; the first rectangle 201 is the larger rectangle in fig. 2. The line segment to the left of the first rectangle 201 may represent the vehicle door in which the window to be controlled is mounted. The second rectangle 202 may characterize the window control area; the second rectangle 202 is the smaller rectangle in fig. 2. The window control video is a video recorded in the window control area over a preset time period. For example, the duration corresponding to the preset time period may be 1 second, and it can be set according to the number of window control images required in actual conditions; for example, the number of window control images in the window control video may be 20. A window control image is an image constituting the window control video. A motion region is the region in which a moving object is located in a window control image; for example, the moving object may be a moving arm. Owing to the continuity of motion, the target control images in the target control image set tend to be consecutive frames, and the time interval between any two adjacent window control images may be equal.
It should be noted that the shorter the duration corresponding to the preset time period, the better the timeliness of the subsequent window control. However, because the motion state of the arm in the window control area must be judged subsequently, this duration should not be set too short, and it can be chosen according to the actual situation. Judging the motion state of the arm in the window control area makes it convenient to control the window according to that motion state. By its joints, the arm can be divided into the upper arm, the forearm, and the hand. The window control system may be a software system for window control.
As an example, this step may include the steps of:
in the first step, a vehicle window control video of a vehicle window control area is obtained.
For example, as shown in fig. 3, a window control video in the window control area may be recorded by a camera mounted at position 301.
And secondly, carrying out motion area identification on each frame of window control image in the window control video, and judging whether a motion area exists in the window control image.
For example, the motion region recognition may be performed on each frame of window control image in the window control video by a frame difference method, and it may be determined whether or not a motion region exists in the window control image.
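As a minimal sketch of this step (the difference threshold and frame sizes below are illustrative assumptions, not values fixed by the method): the frame difference method subtracts consecutive grayscale window control images and thresholds the absolute difference to mark the motion region.

```python
import numpy as np

def motion_region_mask(prev_frame, curr_frame, diff_threshold=25):
    """Frame difference method: mark pixels whose grayscale value changed
    by more than diff_threshold between two consecutive frames."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > diff_threshold  # boolean mask of the motion region

# Toy example: a bright 5x5 "object" moves one pixel to the right.
prev_frame = np.zeros((20, 20), dtype=np.uint8)
curr_frame = np.zeros((20, 20), dtype=np.uint8)
prev_frame[5:10, 5:10] = 200
curr_frame[5:10, 6:11] = 200
mask = motion_region_mask(prev_frame, curr_frame)
print(mask.any())  # True: a motion region exists in this window control image
```

A frame with no change between itself and the previous frame would yield an all-false mask, i.e. no motion region.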
And thirdly, when a motion area is identified from the window control image, determining the window control image as a target control image to obtain a target control image set.
Optionally, when there is no motion region in each frame of window control image in the window control video, there is often no need to adjust the window state. The window state may include: whether the window is open and the height at which the window is open.
And S2, when the area of the motion area in the target control image set is larger than or equal to a preset area threshold, determining the motion area as a suspected arm area to obtain a suspected arm area set.
In some embodiments, when the area of the motion region in the target control image set is greater than or equal to a preset area threshold, the motion region is determined as a suspected arm region, resulting in a suspected arm region set.
The area of the motion region can be represented by the number of pixel points in the motion region. The area threshold may be a preset area of a minimum arm region required for determining the arm state in the target control image. For example, the area threshold may correspond to the formula:

S = α × M

where S is the area threshold, α is a preset ratio that can be adjusted according to actual conditions, and M is the number of pixels in the target control image.
According to the area ratio of the motion area to the target control image, the area threshold value when the motion area is the arm area is determined, the accuracy of determining the area threshold value can be improved, and whether the motion area is the arm area or not can be preliminarily judged.
It should be noted that each suspected arm area is a motion area, and due to the continuity of the motion region, the images in which the suspected arm areas in the set of suspected arm areas are located are often images of consecutive frames.
Optionally, when the area of the motion region in each target control image in the target control image set is smaller than the area threshold, it is often indicated that the motion region is not an arm region, and the window state is often not required to be adjusted.
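The screening of step S2 can be sketched as follows, assuming the threshold form S = α × M with an illustrative ratio α (the actual ratio is preset and tuned to the application):

```python
import numpy as np

def screen_suspected_arm_regions(motion_masks, ratio=0.05):
    """Keep only motion regions whose pixel count is at least
    S = ratio * M, where M is the pixel count of the whole image."""
    suspected = []
    for mask in motion_masks:
        area_threshold = ratio * mask.size   # S = alpha * M
        if mask.sum() >= area_threshold:     # area = number of motion pixels
            suspected.append(mask)
    return suspected

small = np.zeros((10, 10), dtype=bool); small[0, 0] = True      # area 1 < S = 5
large = np.zeros((10, 10), dtype=bool); large[2:8, 2:8] = True  # area 36 >= S = 5
print(len(screen_suspected_arm_regions([small, large])))  # 1
```

Only the larger region survives the screening and would enter the suspected arm area set.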
And S3, performing arm morphological feature identification on each suspected arm area in the suspected arm area set to obtain the arm possibility corresponding to the suspected arm area.
In some embodiments, an arm morphology feature identification may be performed on each suspected arm area in the set of suspected arm areas to obtain an arm probability corresponding to the suspected arm area.
The arm likelihood may represent how likely the suspected arm area is an arm area.
As an example, this step may include the steps of:
the method comprises the following steps that firstly, for each pixel point on the edge of the suspected arm area, fitting vectorization is carried out on the pixel point and an edge pixel point in a preset target neighborhood corresponding to the pixel point, and a fitting vector corresponding to the pixel point is obtained.
Wherein, the edge pixel points are the pixel points on the edge of the suspected arm area. The target neighborhood may be a preset neighborhood. For example, the target neighborhood may be a 5 × 5 neighborhood.
For example, fitting and vectorizing the pixel point and the edge pixel point in the target neighborhood corresponding to the pixel point to obtain the fitting vector corresponding to the pixel point may include the following substeps:
the first substep is to fit the pixel point and the edge pixel point in the target neighborhood corresponding to the pixel point to obtain a fitting line segment corresponding to the pixel point.
And the second substep, determining the length of the fitting line segment corresponding to the pixel point as the size of the fitting vector corresponding to the pixel point, and determining the trend direction of the fitting line segment corresponding to the pixel point from left to right in the target neighborhood as the direction of the fitting vector corresponding to the pixel point.
For example, when the target neighborhood is an eight neighborhood, as shown in fig. 4, two edge pixels in the eight neighborhood of the pixel 401 are a first edge pixel 402 and a second edge pixel 403, respectively. And fitting the pixel point 401, the first edge pixel point 402 and the second edge pixel point 403 to obtain a fitting line segment 404 corresponding to the pixel point 401. The direction indicated by the arrow in fig. 4 may be a direction of the trend of the fitted line segment 404 from left to right in the target neighborhood.
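The first and second steps can be sketched as below. The least-squares fit via `np.polyfit` is an assumption (the text does not fix the fitting method); the fitting vector runs from the leftmost to the rightmost fitted point, matching the left-to-right trend direction described above.

```python
import numpy as np

def fitting_vector(points):
    """points: (x, y) coordinates of the pixel point and the edge pixel
    points in its target neighbourhood. Returns (magnitude, angle_deg) of
    the fitting vector: the fitted line segment, directed left to right."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    slope, intercept = np.polyfit(x, y, 1)          # fit y = slope * x + b
    x_left, x_right = x.min(), x.max()
    p_left = np.array([x_left, slope * x_left + intercept])
    p_right = np.array([x_right, slope * x_right + intercept])
    vec = p_right - p_left                          # left-to-right trend direction
    magnitude = np.linalg.norm(vec)                 # length of the fitted segment
    angle = np.degrees(np.arctan2(vec[1], vec[0]))  # angle vs. the horizontal direction
    return magnitude, angle

mag, ang = fitting_vector([(0, 0), (1, 1), (2, 2)])  # a 45-degree edge fragment
print(round(mag, 3), round(ang, 1))
```

With the target direction taken as the left-to-right horizontal, `ang` here is directly the target included angle of the second step.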
And secondly, determining an included angle between a fitting vector corresponding to each pixel point on the edge of the suspected arm area and a preset target direction as a target included angle corresponding to the pixel point.
Wherein the target direction may be a preset direction. For example, the target direction may be a horizontal direction from left to right.
And thirdly, clustering all the pixel points on the edge of the suspected arm area according to the target included angles corresponding to all the pixel points on the edge of the suspected arm area to obtain an edge pixel point cluster set corresponding to the suspected arm area.
For example, according to a target included angle corresponding to each pixel point on the edge of the suspected arm area, using K-means clustering, and setting K =2, clustering each pixel point on the edge of the suspected arm area to obtain two edge pixel point clusters, which are used as an edge pixel point cluster set corresponding to the suspected arm area.
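The K = 2 clustering of target included angles can be sketched with a plain one-dimensional 2-means (a minimal stand-in for the K-means clustering named above; the initialisation is an illustrative choice):

```python
import numpy as np

def two_means_1d(angles, iters=50):
    """Cluster the target included angles of the edge pixel points into
    K = 2 clusters with a simple 1-D k-means."""
    a = np.asarray(angles, dtype=float)
    centers = np.array([a.min(), a.max()])  # simple extreme-value initialisation
    for _ in range(iters):
        # assign each angle to the nearer cluster center
        labels = (np.abs(a - centers[0]) > np.abs(a - centers[1])).astype(int)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = a[labels == k].mean()
    return labels, centers

# Forearm/hand edge angles near 10 degrees, big-arm edge angles near 60 degrees.
angles = [8, 10, 12, 58, 60, 62]
labels, centers = two_means_1d(angles)
cluster_difference = abs(centers[0] - centers[1])  # gap between the two cluster centers
print(sorted(centers), round(cluster_difference, 1))
```

The two cluster centers land near the two dominant edge directions, and their absolute gap gives the cluster difference used in the fourth step.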
And fourthly, determining cluster difference corresponding to the suspected arm area according to a target included angle corresponding to a cluster center included by each edge pixel point cluster in the edge pixel point cluster set corresponding to the suspected arm area.
For example, the mean of the absolute values of the differences between the target included angles corresponding to the cluster centers included in each edge pixel cluster in the edge pixel cluster set corresponding to the suspected arm area may be determined as the cluster difference corresponding to the suspected arm area.
For another example, when the number of edge pixel clusters in the edge pixel cluster set is 2, the formula for determining the cluster difference corresponding to the suspected arm area may be:

Dj = |θj,1 − θj,2|

wherein Dj is the cluster difference corresponding to the jth suspected arm area in the suspected arm area set. j is the serial number of the suspected arm area in the suspected arm area set. |·| is the absolute value function. θj,1 is the target included angle corresponding to the cluster center included by the 1st edge pixel point cluster included by the edge pixel point cluster set corresponding to the jth suspected arm area. θj,2 is the target included angle corresponding to the cluster center included by the 2nd edge pixel point cluster included by that edge pixel point cluster set.
It should be noted that the larger the cluster difference corresponding to the suspected arm area is, the larger the difference between the clusters is, and the better the clustering effect on each pixel point on the edge of the suspected arm area is.
And fifthly, for each pixel point on the edge of the suspected arm area, screening an edge pixel point closest to the pixel point from an edge pixel point cluster where the pixel point is in the edge pixel point cluster set corresponding to the suspected arm area, and using the edge pixel point as a target edge pixel point corresponding to the pixel point.
For example, first, a pixel point on the edge of the suspected arm area may be used as the first pixel point. Then, the edge pixel point closest to the first pixel point can be screened from the edge pixel point cluster where the first pixel point is located in the edge pixel point cluster set corresponding to the suspected arm area, and the edge pixel point is used as the target edge pixel point corresponding to the first pixel point.
And sixthly, determining the absolute value of the difference value between each pixel point on the edge of the suspected arm area and the target included angle corresponding to the target edge pixel point corresponding to the pixel point as the edge difference corresponding to the pixel point.
For example, the formula for determining the edge difference corresponding to the pixel point can be:

ej,i = |θj,i − θ′j,i|

wherein ej,i is the edge difference corresponding to the ith pixel point on the edge of the jth suspected arm area in the suspected arm area set. j is the serial number of the suspected arm area in the suspected arm area set. i is the serial number of the pixel point on the edge of the suspected arm area. |·| is the absolute value function. θj,i is the target included angle corresponding to the ith pixel point on the edge of the jth suspected arm area in the suspected arm area set. θ′j,i is the target included angle corresponding to the target edge pixel point corresponding to the ith pixel point on the edge of the jth suspected arm area in the suspected arm area set.
It should be noted that the smaller the edge difference corresponding to the ith pixel point is, the smaller the edge direction change between the ith pixel point and its target edge pixel point is, and the closer the edge directions at the two points are; the target included angle may represent the edge direction.
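The fifth and sixth steps can be sketched as: for each edge pixel point, find the nearest other pixel in its own edge pixel point cluster and take the absolute difference of their target included angles as the edge difference.

```python
import numpy as np

def edge_differences(positions, angles, labels):
    """For each edge pixel point, locate the nearest edge pixel point in the
    same cluster (the target edge pixel point) and return the absolute
    angle difference as the edge difference."""
    pos = np.asarray(positions, dtype=float)
    ang = np.asarray(angles, dtype=float)
    lab = np.asarray(labels)
    diffs = []
    for i in range(len(pos)):
        same = np.where(lab == lab[i])[0]
        same = same[same != i]                      # exclude the pixel itself
        dists = np.linalg.norm(pos[same] - pos[i], axis=1)
        nearest = same[np.argmin(dists)]            # target edge pixel point
        diffs.append(abs(ang[i] - ang[nearest]))
    return diffs

positions = [(0, 0), (1, 0), (2, 0), (10, 10), (11, 10)]
angles = [10.0, 12.0, 11.0, 60.0, 63.0]
labels = [0, 0, 0, 1, 1]
print(edge_differences(positions, angles, labels))
```

Small edge differences within a cluster indicate a stable, consistently oriented edge segment.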
And seventhly, determining the target length and the target width corresponding to the suspected arm area according to the position corresponding to each pixel point on the edge of the suspected arm area.
For example, determining the target length and the target width corresponding to the suspected arm area according to the position corresponding to each pixel point on the edge of the suspected arm area may include the following sub-steps:
in the first substep, the distance between each pixel point on the edge of the suspected arm area is determined according to the corresponding position of each pixel point on the edge of the suspected arm area, so as to obtain a distance set corresponding to the suspected arm area.
And a second substep of screening the maximum distance from the distance set corresponding to the suspected arm area as the target length corresponding to the suspected arm area.
And a third substep of determining a target line segment corresponding to the suspected arm area according to two pixel points corresponding to the target length corresponding to the suspected arm area.
For example, when the distance between two pixels is the target length, the two pixels may be two pixels corresponding to the target length. Two pixel points corresponding to the target length corresponding to the suspected arm area can be connected, and the connected line segment is used as the target line segment corresponding to the suspected arm area.
And a fourth substep of determining a straight line perpendicular to the target line segment corresponding to the suspected arm region as a sliding straight line corresponding to the suspected arm region.
And a fifth substep of determining a target width corresponding to the suspected arm area according to the edge of the suspected arm area, the target line segment corresponding to the suspected arm area and the sliding straight line.
The target width corresponding to the suspected arm area may represent an average width of the suspected arm area.
For example, determining the target width corresponding to the suspected arm area according to the position corresponding to each pixel point on the edge of the suspected arm area, the target line segment corresponding to the suspected arm area, and the sliding straight line may include the following steps:
firstly, a sliding straight line is slid in a suspected arm area, so that the sliding straight line traverses the suspected arm area.
And then, when the number of intersection points between the sliding straight line and the edge of the suspected arm area is 1, determining the distance between the intersection points and the target line segment corresponding to the suspected arm area as the width distance.
Then, when the number of intersections between the sliding straight line and the edge of the suspected arm area is 2, the two intersections are respectively taken as a first intersection and a second intersection, an intersection between the sliding straight line and a target line segment corresponding to the suspected arm area is determined as a third intersection, when the first intersection and the second intersection are on the same side of the target line segment, the distance between the first intersection and the third intersection is determined as a first distance, the distance between the second intersection and the third intersection is determined as a second distance, and the absolute value of the difference between the first distance and the second distance is determined as a width distance. When the first intersection point and the second intersection point are on different sides of the target line segment, determining a distance between the first intersection point and the second intersection point as a width distance.
Then, when the number of the intersection points between the sliding straight line and the edge of the suspected arm area is greater than 2, the distance between the intersection points is determined, two intersection points with the farthest distance are screened out, and the width distance can be determined by referring to a method for determining the width distance when the number of the intersection points between the sliding straight line and the edge of the suspected arm area is 2.
And finally, determining the average value of the width distances as the target width corresponding to the suspected arm area.
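The seventh step can be sketched under simplifying assumptions: the target length is the largest pairwise distance between edge pixel points, and rather than sliding an explicit perpendicular line and classifying intersection cases, the sketch bins the edge points by their projection onto the target line segment and takes the perpendicular extent per bin as the width distance, averaging the results.

```python
import numpy as np

def target_length_and_width(edge_points, n_bins=10):
    pts = np.asarray(edge_points, dtype=float)
    # Target length: maximum distance between any two edge pixel points.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    length = d[i, j]
    # Unit vectors along and perpendicular to the target line segment.
    axis = (pts[j] - pts[i]) / length
    perp = np.array([-axis[1], axis[0]])
    along = (pts - pts[i]) @ axis
    across = (pts - pts[i]) @ perp
    # Approximate the sliding straight line: within each bin along the
    # segment, the width distance is the perpendicular spread of edge points.
    bins = np.clip((along / length * n_bins).astype(int), 0, n_bins - 1)
    widths = [across[bins == b].max() - across[bins == b].min()
              for b in range(n_bins) if np.any(bins == b)]
    return length, float(np.mean(widths))

# Axis-aligned 40 x 10 rectangle of edge points as a toy suspected arm area.
xs = np.arange(41)
edge = ([(x, 0) for x in xs] + [(x, 10) for x in xs]
        + [(0, y) for y in range(11)] + [(40, y) for y in range(11)])
length, width = target_length_and_width(edge)
print(round(length, 2), round(width, 2))
```

For this rectangle the target length is its diagonal, and the averaged width distance stays well below the length, so the width-to-length ratio can then be compared against the standard ratio in the eighth step.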
And eighthly, determining a first arm possibility and a second arm possibility corresponding to the suspected arm area according to the cluster difference, the target length and the target width corresponding to the suspected arm area, the pre-obtained standard width-to-length ratio, the edge difference corresponding to each pixel point on the edge of the suspected arm area, the target included angle corresponding to the cluster center included in the edge pixel point cluster where each pixel point on the edge of the suspected arm area is located, and the target included angle corresponding to each edge pixel point in the edge pixel point cluster where each pixel point on the edge of the suspected arm area is located.
Wherein, the standard width-to-length ratio can represent the width-to-length ratio of the arm. For example, the standard aspect ratio may be 0.25.
For example, the formula for determining the first arm likelihood corresponding to the suspected arm area may be:

F1j = Dj / ( (1/Nj) · Σi [ ej,i + (1/nj,i) · Σx |θj,i,x − θ̄j,i| ] + ε )

wherein F1j is the first arm likelihood corresponding to the jth suspected arm area in the suspected arm area set. j is the serial number of the suspected arm area in the suspected arm area set. Dj is the cluster difference corresponding to the jth suspected arm area in the suspected arm area set. Nj is the number of pixel points on the edge of the jth suspected arm area in the suspected arm area set, and i (running from 1 to Nj) is the serial number of the pixel point on the edge of the suspected arm area. ej,i is the edge difference corresponding to the ith pixel point on the edge of the jth suspected arm area in the suspected arm area set. nj,i is the number of the edge pixel points in the edge pixel point cluster where the ith pixel point on the edge of the jth suspected arm area is located, and x runs over these edge pixel points. θj,i,x is the target included angle corresponding to the xth edge pixel point in the edge pixel point cluster where the ith pixel point is located. θ̄j,i is the target included angle corresponding to the cluster center included by that edge pixel point cluster. ε is a preset parameter mainly used for preventing the denominator from being 0.
It should be noted that the larger the cluster difference corresponding to the jth suspected arm area is, the larger the difference between clusters is, and the better the clustering effect on each pixel point on the edge of the suspected arm area is. The smaller the edge difference corresponding to each pixel point on the edge of the suspected arm area is, the more stable the edge of the suspected arm area is, and the more likely it is to be an arm edge. The intra-cluster term measures the difference among the edge pixel points in the edge pixel point cluster where the ith pixel point is located: the larger it is, the worse the clustering effect of that cluster. Therefore, the smaller the edge differences and the intra-cluster differences are, and the larger the cluster difference is, the larger the first arm likelihood is, and the more likely the jth suspected arm area is to be an arm area.
For another example, the formula for determining the second arm likelihood corresponding to the suspected arm area may be:

F2j = exp( −|Wj / Lj − U| )

wherein F2j is the second arm likelihood corresponding to the jth suspected arm area in the suspected arm area set. j is the serial number of the suspected arm area in the suspected arm area set. |·| is the absolute value function. Wj is the target width corresponding to the jth suspected arm area in the suspected arm area set. Lj is the target length corresponding to the jth suspected arm area in the suspected arm area set. U is the standard width-to-length ratio.
It should be noted that the closer the ratio of the target width to the target length corresponding to the jth suspected arm area is to the standard width-to-length ratio U, the smaller the absolute value of their difference is, the larger the second arm likelihood corresponding to the jth suspected arm area is, and the more likely the jth suspected arm area is to be an arm area.
And a ninth step of determining the product of the first arm likelihood and the second arm likelihood corresponding to the suspected arm area as the arm likelihood corresponding to the suspected arm area.
For example, the formula for determining the arm likelihood corresponding to the suspected arm area may be:

Fj = F1j × F2j

wherein Fj is the arm likelihood corresponding to the jth suspected arm area in the suspected arm area set. j is the serial number of the suspected arm area in the suspected arm area set. F1j is the first arm likelihood corresponding to the jth suspected arm area in the suspected arm area set. F2j is the second arm likelihood corresponding to the jth suspected arm area in the suspected arm area set.
It should be noted that, because the shape of the arm is often different from the shapes of other objects, this step performs arm morphological feature identification on the suspected arm area to obtain the arm likelihood corresponding to the suspected arm area, which facilitates the subsequent judgment of whether the suspected arm area is an arm area. Moreover, the arm movement process is often a movement in which the big arm drives the forearm and the hand (the big arm contains the humerus, and the forearm contains the ulna and the radius). The forearm and the hand often move synchronously during the movement, so their edge directions are often consistent and the included angle between them is often small, while there is often an included angle between the big arm and the forearm. Therefore, the pixel points on the edge of the suspected arm area are clustered according to their target included angles: when clustering into two classes gives a good effect, the arm likelihood corresponding to the suspected arm area is often larger, and the suspected arm area is more likely to be an arm area. In addition, the smaller the edge difference corresponding to each pixel point on the edge of the suspected arm area is, the more stable the edge of the suspected arm area is, the more consistent its edge characteristics are, the larger the arm likelihood corresponding to the suspected arm area is, and the more likely the suspected arm area is to be an arm area.
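The quantities of this step can be combined into an arm likelihood as sketched below. The functional forms are illustrative assumptions consistent with the qualitative description (a first likelihood of the form cluster difference divided by the mean edge-instability terms plus ε, and a second likelihood exp(−|W/L − U|)), not a verbatim reproduction of the method's formulas.

```python
import math

def arm_likelihood(cluster_difference, edge_diffs, intra_cluster_devs,
                   width, length, standard_ratio=0.25, eps=1e-6):
    """Assumed combination: F1 rewards a large between-cluster angle gap and
    stable edges; F2 rewards a width-to-length ratio near the standard one;
    the arm likelihood is their product."""
    n = len(edge_diffs)
    denom = sum(e + d for e, d in zip(edge_diffs, intra_cluster_devs)) / n + eps
    f1 = cluster_difference / denom
    f2 = math.exp(-abs(width / length - standard_ratio))
    return f1 * f2

# An elongated two-cluster region shaped like an arm scores higher than a
# blob with noisy edges and a near-square aspect ratio.
armlike = arm_likelihood(50.0, [1, 2, 1, 2], [0.5, 0.5, 0.5, 0.5], 10, 40)
bloblike = arm_likelihood(5.0, [8, 9, 10, 9], [4, 4, 4, 4], 30, 35)
print(armlike > bloblike)  # True
```

The product form means a region must look arm-like both in edge structure and in aspect ratio to receive a high likelihood.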
And S4, performing arm movement possible feature recognition on each suspected arm area in the suspected arm area set to obtain the arm movement possibility corresponding to the suspected arm area.
In some embodiments, an arm movement possibility feature identification may be performed on each suspected arm area in the set of suspected arm areas to obtain an arm movement possibility corresponding to the suspected arm area.
The arm movement probability may represent how likely the suspected arm area is to be a moving arm area.
As an example, this step may comprise the steps of:
firstly, a coordinate system is established by taking the lower left corner of the window control image where the suspected arm area is located as an origin, and the coordinate system is used as a target coordinate system corresponding to the suspected arm area.
The target coordinate system may be a coordinate system in which the lower left corner of the window control image is used as an origin, the side below the window control image is used as a horizontal axis, and the side to the left of the window control image is used as a vertical axis, so that the window control image is located in the first quadrant.
For example, as shown in fig. 5, the area where the arm is located in fig. 5 may be a suspected arm area in the window control image. In the target coordinate system corresponding to the suspected arm area, the big arm included in the arm in fig. 5 is closest to the origin of the target coordinate system, and the hand included in the arm in fig. 5 is farthest from the origin of the target coordinate system. The target coordinate system corresponding to the suspected arm area may be a coordinate system in which the window control image is located in the first quadrant, with the lower left corner of the window control image where the suspected arm area is located as the origin, the edge below the window control image as the horizontal axis, and the edge to the left of the window control image as the vertical axis. Since fig. 5 shows the window control image cropped to the suspected arm area, the origin in fig. 5 may be at the lower left corner of the suspected arm area, that is, at the lower left corner of the area where the arm is located in fig. 5.
And secondly, determining the position corresponding to the pixel point closest to the target origin on the edge of the suspected arm area as the first edge point position corresponding to the suspected arm area.
And the target origin is the origin of a target coordinate system corresponding to the suspected arm area. The first edge point position may be characterized by coordinates of the first position pixel point in the target coordinate system. The first position pixel point may be a pixel point on the edge of the suspected arm region closest to the target origin. If the suspected arm area is an arm area, the first edge point position may represent a position on the edge of the forearm corresponding to a pixel point closest to the target origin.
And thirdly, determining a second edge point position and a third edge point position corresponding to the suspected arm area according to a target arm point identification network which is trained in advance.
The target arm point identification network may be a Back Propagation (BP) neural network. If the suspected arm area is an arm area, the second edge point position may represent the position of the junction of the big arm and the forearm, and the third edge point position may represent the position of the junction of the forearm and the hand.
For example, the suspected arm area may be input into a target arm point recognition network, and the second edge point position and the third edge point position corresponding to the suspected arm area may be determined through the target arm point recognition network.
Optionally, the training process of the target arm point recognition network may include the following sub-steps:
in the first substep, a set of sample arm regions is obtained.
The sample arm region in the sample arm region set may be an arm region in the sample arm region image. The sample arm area image and the sample arm area may correspond one to one. The sample arm region may be an arm region for which the first arm position and the second arm position are known. Wherein, the first arm position may indicate the location of the junction of the big arm and the forearm. The second arm position may indicate the location of the junction of the forearm and the hand. The coordinates corresponding to the first arm position and the second arm position may be coordinates in a sample coordinate system. The sample coordinate system may be a coordinate system in which the sample arm region image is located in the first quadrant, with the lower left corner of the sample arm region image as the origin, the edge below the sample arm region image as the horizontal axis, and the edge on the left of the sample arm region image as the vertical axis. The size of the sample arm area image may be the same as the size of the window control image.
In the second sub-step, the sample arm region set may be used as a training sample, the first arm position and the second arm position corresponding to each sample arm region in the sample arm region set are used as training labels, and the constructed target arm point identification network is trained to obtain a trained target arm point identification network.
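A minimal sketch of such an arm point identification network, assuming a tiny fully connected regression network (the text only specifies a BP, i.e. back-propagation, network): it maps a flattened region mask to four coordinates, the (x, y) of the second and third edge points. Layer sizes, the framework-free forward pass, and the random weights are illustrative; the training loop of the second sub-step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

class ArmPointNet:
    """Tiny BP-style regression network: region mask -> (x2, y2, x3, y3)."""
    def __init__(self, in_dim, hidden=32):
        self.w1 = rng.normal(0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0, 0.1, (hidden, 4))  # 4 outputs: two (x, y) points
        self.b2 = np.zeros(4)

    def forward(self, mask):
        x = mask.astype(float).ravel()
        h = np.tanh(x @ self.w1 + self.b1)   # hidden layer
        return h @ self.w2 + self.b2         # second and third edge point coordinates

net = ArmPointNet(in_dim=16 * 16)
mask = np.zeros((16, 16)); mask[4:12, 2:14] = 1  # toy suspected arm area mask
points = net.forward(mask)
print(points.shape)  # (4,)
```

After training on the sample arm regions and their labelled junction positions, the four outputs would estimate the two junction coordinates for a new suspected arm area.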
And fourthly, determining the position corresponding to the pixel point farthest from the target origin on the edge of the suspected arm area as the position of a fourth edge point corresponding to the suspected arm area.
And the fourth edge point position can be represented by the coordinates of the fourth position pixel point in the target coordinate system. The fourth position pixel point can be the pixel point farthest from the target origin on the edge of the suspected arm area. If the suspected arm area is an arm area, the fourth edge point location may represent the location on the edge of the hand corresponding to the pixel point farthest from the target origin.
And fifthly, determining the arm movement possibility corresponding to the suspected arm area according to the reference arm area corresponding to the suspected arm area and the first edge point position, the second edge point position, the third edge point position and the fourth edge point position corresponding to the suspected arm area.
The reference arm area corresponding to the suspected arm area may be a suspected arm area included in a previous frame of window control image of the window control image in which the suspected arm area is located.
For example, determining the arm movement possibility corresponding to the suspected arm area according to the reference arm area corresponding to the suspected arm area and the first edge point position, the second edge point position, the third edge point position and the fourth edge point position corresponding to the suspected arm area may include the following sub-steps:
a first substep of determining a first movement distance corresponding to the suspected arm area according to a first edge point position corresponding to a reference arm area corresponding to the suspected arm area and a first edge point position corresponding to the suspected arm area.
The first moving distance may be a distance between a first edge point position corresponding to a reference arm area corresponding to the suspected arm area and a first edge point position corresponding to the suspected arm area. If the suspected arm area is an arm area, the first movement distance may represent a movement distance of the large arm.
For example, the formula for determining the first movement distance corresponding to the suspected arm area may be:

d1,j = √( (xj − xj−1)² + (yj − yj−1)² )

wherein d1,j is the first movement distance corresponding to the jth suspected arm area in the suspected arm area set. j is the serial number of the suspected arm area in the suspected arm area set. xj is the abscissa included in the first edge point position corresponding to the jth suspected arm area in the suspected arm area set. yj is the ordinate included in the first edge point position corresponding to the jth suspected arm area in the suspected arm area set. xj−1 is the abscissa included in the first edge point position corresponding to the (j−1)th suspected arm area in the suspected arm area set. yj−1 is the ordinate included in the first edge point position corresponding to the (j−1)th suspected arm area in the suspected arm area set. The (j−1)th suspected arm area in the suspected arm area set may be the reference arm area corresponding to the jth suspected arm area.
And a second substep of determining a second movement distance, a third movement distance and a fourth movement distance corresponding to the suspected arm area according to the reference arm area corresponding to the suspected arm area and the second edge point position, the third edge point position and the fourth edge point position corresponding to the suspected arm area.
The second moving distance may be a distance between a second edge point position corresponding to a reference arm area corresponding to the suspected arm area and a second edge point position corresponding to the suspected arm area. The third movement distance may be a distance between a third edge point position corresponding to a reference arm region corresponding to the suspected arm region and a third edge point position corresponding to the suspected arm region. The fourth movement distance may be a distance between a fourth edge point position corresponding to a reference arm area corresponding to the suspected arm area and a fourth edge point position corresponding to the suspected arm area. If the suspected arm area is an arm area, the second movement distance may represent the movement distance at the junction of the big arm and the forearm, the third movement distance may represent the movement distance at the junction of the forearm and the hand, and the fourth movement distance may represent the movement distance of the hand.
For example, the determination manner of the second movement distance, the third movement distance, and the fourth movement distance may refer to the determination manner of the first movement distance.
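The movement distances of these substeps are Euclidean distances between corresponding edge point positions in the reference arm area (previous frame) and the current suspected arm area; a sketch with illustrative coordinates:

```python
import math

def movement_distance(point_prev, point_curr):
    """Euclidean distance between an edge point position in the reference
    arm area and the same edge point position in the current frame."""
    (x0, y0), (x1, y1) = point_prev, point_curr
    return math.hypot(x1 - x0, y1 - y0)

# First..fourth edge points of the reference arm area and of the current one.
prev_points = [(10, 5), (18, 12), (26, 18), (30, 22)]
curr_points = [(10, 6), (19, 14), (29, 22), (35, 28)]
d1, d2, d3, d4 = (movement_distance(p, c) for p, c in zip(prev_points, curr_points))
print(round(d1, 2), round(d4, 2))
```

In this toy frame pair the distances grow from the big arm outward to the hand, as expected when the big arm drives the forearm and hand.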
And a third substep of determining a first relative movement speed index corresponding to the suspected arm area according to the first movement distance and the second movement distance corresponding to the suspected arm area.
If the suspected arm area is the arm area, the first relative movement speed index can represent the movement speed of the joint of the large arm and the small arm relative to the large arm.
For example, the formula for determining the difference between the second movement distance and the first movement distance corresponding to the suspected arm area as the first relative movement speed index corresponding to the suspected arm area may be:
wherein $dR1 = D^2 - D^1$, where dR1 is the first relative movement speed index corresponding to the suspected arm area, $D^2$ is the second movement distance corresponding to the suspected arm area, and $D^1$ is the first movement distance corresponding to the suspected arm area.
For another example, a ratio of a first target difference corresponding to the suspected arm area to the target duration may be determined as a first relative movement speed index corresponding to the suspected arm area. The first target difference corresponding to the suspected arm area may be a difference between a second movement distance and a first movement distance corresponding to the suspected arm area. The target duration may be a duration between two adjacent window control images. For example, the formula for determining the first relative movement speed index corresponding to the suspected arm area may be:
wherein $dR1 = \frac{D^2 - D^1}{T}$, where dR1 is the first relative movement speed index corresponding to the suspected arm area, $D^2$ is the second movement distance corresponding to the suspected arm area, $D^1$ is the first movement distance corresponding to the suspected arm area, and T is the target duration.
And a fourth substep of determining a second relative movement speed index corresponding to the suspected arm area according to the first movement distance, the second movement distance and the third movement distance corresponding to the suspected arm area.
Wherein, if the suspected arm area is an arm area, the second relative movement speed index may characterize the movement rate of the junction of the small arm and the hand relative to the large arm and the junction of the large arm and the small arm.
For example, the formula for determining the second relative movement speed index corresponding to the suspected arm area may be:
wherein $dR2 = D^3 - \frac{D^1 + D^2}{2}$, where dR2 is the second relative movement speed index corresponding to the suspected arm area, $D^1$ is the first movement distance corresponding to the suspected arm area, $D^2$ is the second movement distance corresponding to the suspected arm area, and $D^3$ is the third movement distance corresponding to the suspected arm area.
For another example, the formula for determining the second relative movement speed index corresponding to the suspected arm area may be:
wherein $dR2 = \frac{1}{T}\left(D^3 - \frac{D^1 + D^2}{2}\right)$, where dR2 is the second relative movement speed index corresponding to the suspected arm area, $D^1$ is the first movement distance corresponding to the suspected arm area, $D^2$ is the second movement distance corresponding to the suspected arm area, $D^3$ is the third movement distance corresponding to the suspected arm area, and T is the target duration.
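The first and second relative movement speed indices can be sketched as follows. The difference form for dR1 follows the prose above; the mean-gap form for dR2 is an assumption by analogy with the step-based dR3 example later in the text, and the distances and 0.04 s target duration are hypothetical:

```python
def first_relative_speed_index(d1, d2, t=1.0):
    # dR1: rate of the large-arm/small-arm junction relative to the large arm,
    # as the difference of movement distances over the target duration T.
    return (d2 - d1) / t

def second_relative_speed_index(d1, d2, d3, t=1.0):
    # dR2: rate of the small-arm/hand junction relative to the large arm and
    # the large-arm/small-arm junction (assumed mean-gap form).
    return (d3 - (d1 + d2) / 2.0) / t

# Hypothetical distances (pixels) and a 25 fps frame interval (T = 0.04 s).
dr1 = first_relative_speed_index(2.0, 5.0, t=0.04)
dr2 = second_relative_speed_index(2.0, 5.0, 9.0, t=0.04)
```

With T left at its default of 1.0, the functions reduce to the duration-free example forms.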
And a fifth substep of determining a third relative movement speed index corresponding to the suspected arm area according to the first movement distance, the second movement distance, the third movement distance and the fourth movement distance corresponding to the suspected arm area.
If the suspected arm area is an arm area, the third relative movement speed index may represent the movement rate of the hand relative to the large arm, the junction of the large arm and the small arm, and the junction of the small arm and the hand.
For example, determining the third relative moving speed index corresponding to the suspected arm area according to the first moving distance, the second moving distance, the third moving distance and the fourth moving distance corresponding to the suspected arm area may include the following steps:
first, a target duration corresponding to the suspected arm area is obtained.
Wherein the target duration may be a duration between two adjacent window control images.
Next, the difference between the fourth movement distance and the first movement distance corresponding to the suspected arm area is determined as the first relative distance corresponding to the suspected arm area.
Then, the difference between the fourth movement distance and the second movement distance corresponding to the suspected arm area is determined as the second relative distance corresponding to the suspected arm area.
And then, determining the difference between the fourth moving distance and the third moving distance corresponding to the suspected arm area as a third relative distance corresponding to the suspected arm area.
And then, determining the average value of the first relative distance, the second relative distance and the third relative distance corresponding to the suspected arm area as the average value of the relative distances corresponding to the suspected arm area.
And finally, determining the ratio of the relative distance mean value corresponding to the suspected arm area to the target duration as a third relative moving speed index corresponding to the suspected arm area.
For another example, the formula for determining the third relative moving speed index corresponding to the suspected arm area may be:
wherein $dR3 = \frac{(D^4 - D^1) + (D^4 - D^2) + (D^4 - D^3)}{3}$, where dR3 is the third relative movement speed index corresponding to the suspected arm area, $D^1$ is the first movement distance corresponding to the suspected arm area, $D^2$ is the second movement distance corresponding to the suspected arm area, $D^3$ is the third movement distance corresponding to the suspected arm area, and $D^4$ is the fourth movement distance corresponding to the suspected arm area.
As shown in fig. 6, the first movement distance $D^1$ may be the distance between the third edge pixel point 601 and the fourth edge pixel point 602. The second movement distance $D^2$ may be the distance between the fifth edge pixel point 603 and the sixth edge pixel point 604. The third movement distance $D^3$ may be the distance between the seventh edge pixel point 605 and the eighth edge pixel point 606. The fourth movement distance $D^4$ may be the distance between the ninth edge pixel point 607 and the tenth edge pixel point 608.
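The step-based example above (the three relative distances against the fourth movement distance, averaged and divided by the target duration) can be sketched as follows; the distances and the 0.04 s target duration are assumed values:

```python
def third_relative_speed_index(d1, d2, d3, d4, t=1.0):
    # dR3: rate of the hand relative to the large arm and the two junctions,
    # as the mean of the three relative distances over the target duration T.
    relative_distances = [(d4 - d1), (d4 - d2), (d4 - d3)]
    return (sum(relative_distances) / len(relative_distances)) / t

# Hypothetical first..fourth movement distances (pixels), T = 0.04 s.
dr3 = third_relative_speed_index(2.0, 5.0, 9.0, 14.0, t=0.04)
```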
And a sixth substep of determining the sum of the first relative movement speed index, the second relative movement speed index and the third relative movement speed index corresponding to the suspected arm area as the arm movement possibility corresponding to the suspected arm area.
For example, the formula for determining the arm movement possibility corresponding to the suspected arm area may be:
wherein,the motion probability of the arm corresponding to the jth suspected arm area in the suspected arm area set is obtained. j is the serial number of the suspected arm area in the suspected arm area set.Is a first relative movement speed index corresponding to the jth suspected arm area in the suspected arm area set.And the second relative movement speed index is the second relative movement speed index corresponding to the jth suspected arm area in the suspected arm area set.Is the third relative moving speed index corresponding to the jth suspected arm area in the suspected arm area set.
It should be noted that, when the arm moves, the relative movement situations among the three parts of the arm, namely the large arm, the small arm and the hand, are often different: the degree of relative movement between the small arm and the hand is often small, while the degree of relative movement between the small arm and the large arm is often large. Therefore, comprehensively considering the degrees of relative movement among the large arm, the small arm and the hand can improve the accuracy of judging whether the suspected arm area is an arm area. The degree of movement can be characterized by a relative movement speed index. That is, the larger the first, second and third relative movement speed indices corresponding to the suspected arm area are, the more likely the suspected arm area is to be an arm area.
And S5, determining the movement possibility of the target arm according to the arm possibility and the arm movement possibility corresponding to each suspected arm area in the suspected arm area set.
In some embodiments, the target arm movement probability may be determined according to the arm possibility and the arm movement possibility corresponding to each suspected arm area in the suspected arm area set.
As an example, the formula for determining the likelihood of movement of the target arm may be:
wherein $FP = \frac{1}{N}\sum_{j=1}^{N} P_j F_j$, where FP is the target arm movement probability, N is the number of suspected arm regions in the set of suspected arm regions, j is the serial number of the suspected arm area in the suspected arm area set, $P_j$ is the arm probability corresponding to the jth suspected arm area in the suspected arm area set, and $F_j$ is the arm movement possibility corresponding to the jth suspected arm area in the suspected arm area set.
It should be noted that the larger the first, second and third relative movement speed indices corresponding to each suspected arm area are, the more likely the suspected arm area is to be an arm area, the more likely the vehicle window control video has recorded a moving arm, and the more likely the vehicle window needs to be controlled.
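The sixth substep and step S5 can be sketched together as follows; the averaged product of arm likelihood and arm movement possibility is an assumed aggregation form, and the input values are hypothetical:

```python
def arm_movement_possibility(dr1, dr2, dr3):
    # F_j: sum of the three relative movement speed indices (sixth substep).
    return dr1 + dr2 + dr3

def target_arm_movement_probability(arm_likelihoods, movement_possibilities):
    # FP: aggregate over the N suspected arm areas; an averaged product of
    # arm likelihood P_j and arm movement possibility F_j is assumed here.
    n = len(arm_likelihoods)
    return sum(p * f for p, f in zip(arm_likelihoods, movement_possibilities)) / n

# Two hypothetical suspected arm areas.
f1 = arm_movement_possibility(3.0, 4.0, 3.0)   # = 10.0
f2 = arm_movement_possibility(1.0, 1.5, 1.5)   # = 4.0
fp = target_arm_movement_probability([0.8, 0.6], [f1, f2])
```

The resulting FP is then compared against the preset motion threshold in step S6.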
And S6, when the movement possibility of the target arm is larger than a preset movement threshold, determining an arm height index and an arm speed index corresponding to each suspected arm area in the suspected arm area set.
In some embodiments, when the target arm movement probability is greater than a preset movement threshold, an arm height index and an arm speed index corresponding to each suspected arm area in the set of suspected arm areas may be determined.
The motion threshold may be a preset maximum target arm movement possibility allowed when the suspected arm area is not a moving arm area. For example, the motion threshold may be 5. The arm height index may characterize the height of the hand. The arm speed index may characterize the speed of the hand.
In practical cases, when the possibility of the movement of the target arm is less than or equal to the movement threshold, it is often indicated that the suspected arm area is not a moving arm area, and the window state is often not required to be adjusted.
As an example, this step may include the steps of:
and determining a vertical coordinate included in a fourth coordinate corresponding to a fourth edge point position corresponding to the suspected arm area as an arm height index corresponding to the suspected arm area.
And the fourth coordinate is a coordinate of the fourth edge point position corresponding to the target coordinate system.
And secondly, determining the ratio of the fourth moving distance corresponding to the suspected arm area to the target duration as an arm speed index corresponding to the suspected arm area.
Optionally, the fourth movement distance corresponding to the suspected arm area may be determined as the arm speed index corresponding to the suspected arm area.
It should be noted that, in the arm movement process, the movement speed of the hand is often the maximum speed in the arm movement process, and when the suspected arm area is the arm area, the fourth edge point position may represent the position corresponding to the pixel point on the edge of the hand farthest from the target origin, so the arm speed index corresponding to the suspected arm area may represent the maximum speed of the arm at the time corresponding to the suspected arm area. Secondly, because the height of the hand is often the maximum height in the arm movement process, the arm height index corresponding to the suspected arm area can represent the maximum height of the arm at the moment corresponding to the suspected arm area.
And S7, screening the maximum arm height index and the maximum arm speed index from the arm height indexes and the arm speed indexes corresponding to the suspected arm areas in the suspected arm area set, and respectively using the maximum arm height index and the maximum arm speed index as a target height index and a target speed index.
In some embodiments, the maximum arm height index and arm speed index may be selected from the arm height indexes and arm speed indexes corresponding to each suspected arm area in the set of suspected arm areas, and the maximum arm height index and the maximum arm speed index may be used as the target height index and the target speed index, respectively.
The target height index can represent the highest height of the arm in a time period corresponding to the vehicle window control video. The target speed index can represent the maximum speed of the arm in a time period corresponding to the vehicle window control video.
As an example, the largest arm height index may be screened from the arm height indexes corresponding to each suspected arm area in the set of suspected arm areas, and the largest arm height index may be used as the target height index. And screening out the maximum arm speed index from the arm speed indexes corresponding to all the suspected arm areas in the suspected arm area set as a target speed index.
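Steps S6 and S7 — determining the per-area height and speed indices and screening out the maxima — can be sketched as follows; the fourth edge points, distances and target duration are hypothetical:

```python
def arm_indices(fourth_edge_point, fourth_distance, t=1.0):
    # Arm height index: ordinate of the fourth edge point (the hand point
    # farthest from the target origin). Arm speed index: fourth movement
    # distance over the target duration T.
    _, y = fourth_edge_point
    return y, fourth_distance / t

# Hypothetical (fourth edge point, fourth movement distance) per suspected area.
areas = [((120, 310), 6.0), ((118, 355), 9.0), ((115, 340), 7.5)]
heights, speeds = zip(*(arm_indices(p, d, t=0.04) for p, d in areas))

target_height_index = max(heights)   # S7: screen out the maximum height index
target_speed_index = max(speeds)     # S7: screen out the maximum speed index
```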
And S8, controlling the lifting of the target car window according to the target height index or the target speed index.
In some embodiments, the lifting of the target window may be controlled according to the target height index or the target speed index.
The target window may be a window to be subjected to state adjustment.
As an example, this step may include the steps of:
and step one, when the target speed index is larger than a preset speed index threshold, controlling the target window to be completely opened or completely closed according to two edge pixel points corresponding to a fourth moving distance corresponding to the last suspected arm area in the suspected arm area set.
Wherein, the speed index threshold may be a preset speed index. For example, the speed index threshold may be 10. Two edge pixel points whose distance is the fourth movement distance may be regarded as the two edge pixel points corresponding to the fourth movement distance.
For example, according to two edge pixel points corresponding to the fourth movement distance corresponding to the last suspected arm area in the suspected arm area set, controlling the target window to be completely opened or completely closed may include the following sub-steps:
and in the first sub-step, two edge pixel points corresponding to the fourth movement distance corresponding to the last suspected arm area are combined to a preset standard coordinate system.
The standard coordinate system may be a preset two-dimensional coordinate system.
For example, merging two edge pixel points corresponding to the fourth movement distance corresponding to the last suspected arm area into a preset standard coordinate system may include the following steps:
first, the acquisition time corresponding to the image can be controlled according to the vehicle window where the two edge pixel points corresponding to the fourth moving distance corresponding to the last suspected arm area are located, and the two edge pixel points are respectively used as a first target edge point and a second target edge point. The acquisition time corresponding to the window control image may be the time for acquiring the window control image. The acquisition time corresponding to the vehicle window control image in which the second target edge point is located is closer to the current time than the acquisition time corresponding to the vehicle window control image in which the first target edge point is located.
Then, the first target edge point may be moved to a first coordinate of the standard coordinate system, and the second target edge point may be moved to a second coordinate of the standard coordinate system. Wherein the first coordinate may be the same as the coordinate of the first target edge point in the target coordinate system. The second coordinates may be the same as the coordinates of the second target edge point in the target coordinate system.
And a second substep of determining a target vector according to the positions of two edge pixel points corresponding to the fourth moving distance corresponding to the last suspected arm area under the standard coordinate system.
For example, determining the target vector according to the positions of the two edge pixel points corresponding to the fourth movement distance corresponding to the last suspected arm area in the standard coordinate system may include the following steps:
first, the acquisition time corresponding to the image can be controlled according to the vehicle window where the two edge pixel points corresponding to the fourth moving distance corresponding to the last suspected arm area are located, and the two edge pixel points are respectively used as a first target edge point and a second target edge point. The acquisition time corresponding to the window control image may be the time for acquiring the window control image. The acquisition time corresponding to the window control image with the second target edge point is closer to the current time than the acquisition time corresponding to the window control image with the first target edge point.
Then, the first target edge point and the second target edge point may be connected in a standard coordinate system, the length of the connected line segment may be determined as the size of the target vector, and the direction from the first target edge point to the second target edge point may be determined as the direction of the target vector.
And a third substep, determining an included angle between the target vector and a preset longitudinal axis vector as a direction included angle.
And the direction of the longitudinal axis vector is the same as the positive direction of the longitudinal axis of the standard coordinate system. The vertical axis vector may be a unit vector.
And a fourth substep, controlling the target window to be fully opened when the direction included angle is greater than 90 degrees.
When the included angle of the direction is larger than 90 degrees, the arm in the car window control video is usually considered to move upwards.
And a fifth sub-step of controlling the target window to be completely closed when the included angle is less than or equal to 90 degrees.
When the included angle is smaller than or equal to 90 degrees, the arm in the vehicle window control video is considered to move downwards.
It should be noted that when the target speed index is greater than the speed index threshold, the movement speed of the arm is often relatively fast. In general, when the window needs to be opened or closed urgently, the movement speed of the user's arm is often relatively fast; when opening the window, for example to swipe a card through the opened window, the movement direction of the arm is often upward, and when closing the window, the movement direction of the arm is often downward. Therefore, controlling the target vehicle window to be completely opened or closed according to the direction included angle conforms to user habits and can improve user experience.
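Substeps two to five can be sketched as follows. The decision rule is encoded exactly as stated (a direction included angle with the positive vertical axis greater than 90 degrees triggers full opening, otherwise full closing); the coordinates are hypothetical:

```python
import math

def control_by_direction(first_target_edge_point, second_target_edge_point):
    # Target vector from the first target edge point (earlier frame) to the
    # second target edge point (later frame) in the standard coordinate system.
    vx = second_target_edge_point[0] - first_target_edge_point[0]
    vy = second_target_edge_point[1] - first_target_edge_point[1]
    # Direction included angle: angle between the target vector and the
    # vertical-axis unit vector (0, 1), in degrees.
    angle = math.degrees(math.acos(vy / math.hypot(vx, vy)))
    # Per the fourth and fifth substeps: > 90 degrees -> fully open,
    # otherwise -> fully close.
    return "fully_open" if angle > 90 else "fully_close"
```

For example, two points stacked vertically with the second below the first give a 180-degree angle and a full-open command under this rule.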
And secondly, controlling the height of the target vehicle window according to the target height index when the target speed index is less than or equal to the speed index threshold value.
Wherein the height of the target window may be characterized by the height of the target window glass.
For example, according to the above target height indicator, controlling the height of the target window may comprise the sub-steps of:
and a first sub-step of controlling the target window to be fully opened when the target height index is greater than or equal to a first target height acquired in advance.
Wherein the first target height may be a pre-acquired height. For example, the first target height may be the ordinate of a position point at the highest height of the target window when the window control image is captured, the position point being in the target coordinate system. The highest height of the target window may be the height at which the target window is fully closed.
And a second sub-step of controlling the target window to be completely closed when the target height index is less than or equal to a second target height acquired in advance.
Wherein the second target height may be a pre-acquired height. For example, the second target height may be the ordinate of a position point at the lowest height of the target window, when the window control image is captured, in the target coordinate system. The lowest height of the target window may be the height of the target window when fully open.
And a third substep of determining the product of the target proportionality coefficient and the target height index as a target adjusting height, controlling the target window and adjusting the height of the target window to the target adjusting height when the target height index is smaller than the first target height and larger than the second target height.
The target proportionality coefficient may be the ratio of a preset reference window height to the ordinate corresponding to the reference window height in the target coordinate system. The reference window height may be a possible height at which the target window is open, i.e., a value in the interval from the second height to the first height, wherein the second height may be the height at which the target window is fully open and the first height may be the height at which the target window is fully closed. The ordinate corresponding to the reference window height may be the ordinate, in the target coordinate system, of a position point at the reference window height when the window control image is captured.
For example, the formula for determining the target adjustment height may be:
wherein $h = Q \cdot g$, where $h$ is the target adjustment height, Q is the target proportionality coefficient, and $g$ is the target height indicator.
It should be noted that when the target speed index is less than or equal to the speed index threshold, the movement speed of the arm is often relatively slow. In general, when the window is opened or closed less urgently, the window can be adjusted to the target adjustment height. The target adjustment height can represent the height of the arm relative to the window glass, so the height of the window can be determined directly from the height of the arm; the user does not need to memorize a template gesture corresponding to each height, which improves user experience.
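The three substeps of height control can be sketched as follows; the threshold heights and the proportionality coefficient are assumed values:

```python
def control_window_height(g, first_target_height, second_target_height, q):
    # g: target height index; q: target proportionality coefficient.
    if g >= first_target_height:
        return ("fully_open", None)      # substep one
    if g <= second_target_height:
        return ("fully_close", None)     # substep two
    return ("adjust", q * g)             # substep three: h = Q * g

# Hypothetical ordinates of the fully-closed and fully-open window positions,
# and a hypothetical proportionality coefficient.
cmd, height = control_window_height(300, first_target_height=480,
                                    second_target_height=60, q=0.1)
```

Here an index of 300 falls between the two thresholds, so the window is driven to the target adjustment height 0.1 × 300.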
According to the intelligent video identification method for the vehicle window control system, the vehicle window control video is obtained and identified by means of pattern recognition, which solves the problem of low vehicle window control efficiency and accuracy and improves both. Firstly, motion area identification is performed on each frame of window control image in the acquired window control video, and the window control images in which motion areas are identified are screened out to obtain a target control image set. According to the invention, the vehicle window is controlled by identifying the motion state of the arm; when a window control image has no motion area, it often contains no moving arm, so screening out the target control images that may include a moving arm means that subsequent identification need not be performed on window control images without motion areas, which reduces the amount of calculation and the occupation of computing resources. Then, whether the motion area is a suspected arm area is judged according to the area of the motion area, which allows a preliminary judgment of whether the motion area is a moving arm area. Arm morphological feature recognition and arm movement possibility feature recognition are performed on the suspected arm areas to determine the target arm movement possibility, which can improve the accuracy of determining the target arm movement possibility. Then, when the target arm movement possibility is greater than a preset movement threshold, an arm height index and an arm speed index corresponding to each suspected arm area in the set of suspected arm areas are determined, which facilitates subsequent control of the target window.
Then, the maximum arm height index and the maximum arm speed index are screened out from the arm height indexes and arm speed indexes corresponding to all the suspected arm areas in the suspected arm area set, and are respectively used as a target height index and a target speed index. Finally, the lifting of the target car window is controlled according to the target height index or the target speed index. Because recognition of the motion state of the arm is not affected when the hand is incompletely captured, the present method can improve the accuracy of window control compared with a window control method that depends on gestures. Compared with a gesture-based window control method that requires recalling or checking a plurality of template gestures, the window control efficiency is also improved.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; the modifications or substitutions do not make the essence of the corresponding technical solutions deviate from the technical solutions of the embodiments of the present application, and are included in the protection scope of the present application.
Claims (8)
1. A video intelligent identification method for a vehicle window control system is characterized by comprising the following steps:
the method comprises the steps of obtaining a vehicle window control video of a vehicle window control area, carrying out motion area identification on each frame of vehicle window control image in the vehicle window control video, and determining the vehicle window control image as a target control image to obtain a target control image set when the motion area is identified from the vehicle window control image;
when the area of a motion area in a target control image in the target control image set is larger than or equal to a preset area threshold, determining the motion area as a suspected arm area to obtain a suspected arm area set;
performing arm morphological feature identification on each suspected arm area in the suspected arm area set to obtain the arm possibility corresponding to the suspected arm area;
performing arm movement possible feature identification on each suspected arm area in the suspected arm area set to obtain arm movement possibility corresponding to the suspected arm area;
determining the movement possibility of a target arm according to the arm possibility and the arm movement possibility corresponding to each suspected arm area in the suspected arm area set;
when the movement possibility of the target arm is larger than a preset movement threshold, determining an arm height index and an arm speed index corresponding to each suspected arm area in the set of suspected arm areas;
screening out the maximum arm height index and the maximum arm speed index from the arm height indexes and the arm speed indexes corresponding to all suspected arm areas in the suspected arm area set, and respectively taking the maximum arm height index and the maximum arm speed index as a target height index and a target speed index;
controlling the lifting of the target car window according to the target height index or the target speed index;
the identifying of the morphological characteristics of the arm of each suspected arm area in the set of suspected arm areas to obtain the possibility of the arm corresponding to the suspected arm area includes:
for each pixel point on the edge of the suspected arm area, performing fitting vectorization on the pixel point and an edge pixel point in a preset target neighborhood corresponding to the pixel point to obtain a fitting vector corresponding to the pixel point;
determining an included angle between a fitting vector corresponding to each pixel point on the edge of the suspected arm area and a preset target direction as a target included angle corresponding to the pixel point;
clustering all pixel points on the edge of the suspected arm area according to the target included angles corresponding to all the pixel points on the edge of the suspected arm area to obtain an edge pixel point cluster set corresponding to the suspected arm area;
determining clustering differences corresponding to the suspected arm areas according to target included angles corresponding to clustering centers of all edge pixel point clusters in the edge pixel point clustering set corresponding to the suspected arm areas;
for each pixel point on the edge of the suspected arm area, screening an edge pixel point closest to the pixel point from an edge pixel point cluster where the pixel point is located in an edge pixel point cluster set corresponding to the suspected arm area, and using the edge pixel point as a target edge pixel point corresponding to the pixel point;
determining an absolute value of a difference value between each pixel point on the edge of the suspected arm area and a target included angle corresponding to a target edge pixel point corresponding to the pixel point as an edge difference corresponding to the pixel point;
determining the target length and the target width corresponding to the suspected arm area according to the position corresponding to each pixel point on the edge of the suspected arm area;
determining a first arm possibility and a second arm possibility corresponding to the suspected arm area according to cluster differences corresponding to the suspected arm area, a target length and a target width, a pre-obtained standard width-length ratio, edge differences corresponding to all pixel points on the edge of the suspected arm area, a target included angle corresponding to a cluster center included in an edge pixel point cluster where all pixel points on the edge of the suspected arm area are located, and a target included angle corresponding to all edge pixel points in the edge pixel point cluster where all pixel points on the edge of the suspected arm area are located;
determining the product of the first arm possibility and the second arm possibility corresponding to the suspected arm area as the arm possibility corresponding to the suspected arm area;
the performing arm movement possible feature identification on each suspected arm area in the suspected arm area set to obtain the arm movement possibility corresponding to the suspected arm area includes:
establishing a coordinate system by taking the lower left corner of the window control image where the suspected arm area is located as an origin, and taking the coordinate system as a target coordinate system corresponding to the suspected arm area;
determining a position corresponding to a pixel point on the edge of the suspected arm area, which is closest to a target origin, as a first edge point position corresponding to the suspected arm area, wherein the target origin is an origin of a target coordinate system corresponding to the suspected arm area;
determining a second edge point position and a third edge point position corresponding to the suspected arm area according to a target arm point identification network which is trained in advance;
determining the position corresponding to the pixel point farthest from the target origin on the edge of the suspected arm area as the position of a fourth edge point corresponding to the suspected arm area;
and determining the arm movement possibility corresponding to the suspected arm area according to a reference arm area corresponding to the suspected arm area and a first edge point position, a second edge point position, a third edge point position and a fourth edge point position corresponding to the suspected arm area, wherein the reference arm area corresponding to the suspected arm area is a suspected arm area included in a previous frame of window control image of the window control image in which the suspected arm area is located.
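The nearest/farthest selection of the first and fourth edge points above can be sketched directly; the second and third edge points come from the trained target arm point identification network and are not modeled here (the helper name is illustrative):

```python
def first_and_fourth_edge_points(edge_points):
    # First edge point: the edge pixel closest to the target origin.
    # Fourth edge point: the edge pixel farthest from the target origin.
    # Squared distance preserves the ordering, so no square root is needed.
    dist_sq = lambda p: p[0] ** 2 + p[1] ** 2
    return min(edge_points, key=dist_sq), max(edge_points, key=dist_sq)
```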
2. The method as claimed in claim 1, wherein the determining the first arm possibility and the second arm possibility corresponding to the suspected arm area comprises:
determining a formula corresponding to the first arm possibility corresponding to the suspected arm area as follows:
wherein $F^{1}_{j}$ is the first arm likelihood corresponding to the j-th suspected arm area in the suspected arm area set; $j$ is the serial number of the suspected arm area in the suspected arm area set; $C_{j}$ is the cluster difference corresponding to the j-th suspected arm area; $N_{j}$ is the number of pixel points on the edge of the j-th suspected arm area; $E_{ij}$ is the edge difference corresponding to the i-th pixel point on the edge of the j-th suspected arm area, $i$ being the serial number of the pixel point on the suspected arm area edge; $M_{ij}$ is the number of edge pixel points in the edge pixel point cluster in which the i-th pixel point on the edge of the j-th suspected arm area is located; $\theta_{ijx}$ is the target included angle corresponding to the x-th edge pixel point in that cluster; $\bar{\theta}_{ij}$ is the target included angle corresponding to the cluster center included in that cluster; and $b$ is a preset parameter index;
determining a formula corresponding to the second arm possibility corresponding to the suspected arm area as follows:
wherein $F^{2}_{j}$ is the second arm likelihood corresponding to the j-th suspected arm area in the suspected arm area set; $j$ is the serial number of the suspected arm area in the suspected arm area set; $|\cdot|$ is the absolute value function; $W_{j}$ is the target width corresponding to the j-th suspected arm area; $L_{j}$ is the target length corresponding to the j-th suspected arm area; and $U$ is the standard width-length ratio.
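Claim 2 enumerates the ingredients of both likelihoods, but the formula images themselves did not survive extraction. The sketch below is one plausible combination, an assumption rather than the patent's exact formulas: larger cluster, edge, and angle differences shrink the first likelihood, and deviation of the width-to-length ratio from the standard ratio shrinks the second, each via a decaying exponential:

```python
import math

def first_arm_possibility(cluster_diff, edge_diffs, cluster_angles, center_angles, b=1.0):
    # cluster_diff: cluster difference of the suspected arm area
    # edge_diffs[i]: edge difference of the i-th edge pixel
    # cluster_angles[i]: target included angles of all edge pixels in the
    #                    cluster containing pixel i
    # center_angles[i]: target included angle of that cluster's center
    # b: the preset parameter index
    n = len(edge_diffs)
    spread = sum(
        edge_diffs[i]
        + sum(abs(a - center_angles[i]) for a in cluster_angles[i]) / len(cluster_angles[i])
        for i in range(n)
    ) / n
    return math.exp(-b * (cluster_diff + spread))

def second_arm_possibility(width, length, standard_ratio):
    # Deviation of the width-to-length ratio from the standard ratio
    # lowers the likelihood; zero deviation gives likelihood 1.
    return math.exp(-abs(width / length - standard_ratio))
```

With both likelihoods in hand, the arm possibility of claim 1 is simply their product.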
3. The method according to claim 1, wherein the determining the arm movement possibility corresponding to the suspected arm area according to the reference arm area corresponding to the suspected arm area and the first edge point position, the second edge point position, the third edge point position and the fourth edge point position corresponding to the suspected arm area comprises:
determining a first moving distance corresponding to the suspected arm area according to a first edge point position corresponding to a reference arm area corresponding to the suspected arm area and a first edge point position corresponding to the suspected arm area;
determining a second moving distance, a third moving distance and a fourth moving distance corresponding to the suspected arm area according to a reference arm area corresponding to the suspected arm area and a second edge point position, a third edge point position and a fourth edge point position corresponding to the suspected arm area;
determining a first relative movement speed index corresponding to the suspected arm area according to a first movement distance and a second movement distance corresponding to the suspected arm area;
determining a second relative movement speed index corresponding to the suspected arm area according to the first movement distance, the second movement distance and the third movement distance corresponding to the suspected arm area;
determining a third relative movement speed index corresponding to the suspected arm area according to the first movement distance, the second movement distance, the third movement distance and the fourth movement distance corresponding to the suspected arm area;
and determining the sum of the first relative movement speed index, the second relative movement speed index and the third relative movement speed index corresponding to the suspected arm area as the arm movement possibility corresponding to the suspected arm area.
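A minimal sketch of the movement step above, assuming each moving distance is the Euclidean distance an edge point travels between the reference arm area (previous frame) and the current suspected arm area; the claim does not name the metric, so the Euclidean choice is an assumption:

```python
import math

def moving_distance(p_prev, p_curr):
    # Distance an edge point travels between the reference arm area and
    # the current suspected arm area (Euclidean distance assumed).
    return math.hypot(p_curr[0] - p_prev[0], p_curr[1] - p_prev[1])

def arm_movement_possibility(v1, v2, v3):
    # Claim 3: the arm movement possibility is the sum of the three
    # relative movement speed indices.
    return v1 + v2 + v3
```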
4. The method according to claim 3, wherein the determining a third relative movement speed index corresponding to the suspected arm area according to the first movement distance, the second movement distance, the third movement distance and the fourth movement distance corresponding to the suspected arm area comprises:
acquiring a target duration corresponding to the suspected arm area;
determining the difference between the fourth moving distance and the first moving distance corresponding to the suspected arm area as a first relative distance corresponding to the suspected arm area;
determining a difference between a fourth movement distance and a second movement distance corresponding to the suspected arm area as a second relative distance corresponding to the suspected arm area;
determining the difference between a fourth moving distance and a third moving distance corresponding to the suspected arm area as a third relative distance corresponding to the suspected arm area;
determining the average value of the first relative distance, the second relative distance and the third relative distance corresponding to the suspected arm area as the average value of the relative distances corresponding to the suspected arm area;
and determining the ratio of the relative distance mean value corresponding to the suspected arm area to the target duration as a third relative moving speed index corresponding to the suspected arm area.
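The arithmetic of claim 4 can be written out directly (the function name is illustrative):

```python
def third_relative_speed_index(d1, d2, d3, d4, target_duration):
    # Relative distances: the fourth moving distance minus each of the
    # first three moving distances; their mean divided by the target
    # duration gives the third relative movement speed index.
    relative_mean = ((d4 - d1) + (d4 - d2) + (d4 - d3)) / 3
    return relative_mean / target_duration
```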
5. The method according to claim 4, wherein the determining an arm height index and an arm speed index corresponding to each suspected arm area in the set of suspected arm areas comprises:
determining a longitudinal coordinate included in a fourth coordinate corresponding to a fourth edge point position corresponding to the suspected arm area as an arm height index corresponding to the suspected arm area, wherein the fourth coordinate is a coordinate of the fourth edge point position corresponding to a target coordinate system;
and determining the ratio of the fourth moving distance corresponding to the suspected arm area to the target duration as an arm speed index corresponding to the suspected arm area.
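Claim 5 reduces to two one-line computations; the sketch assumes edge points are (x, y) tuples in the target coordinate system:

```python
def arm_height_index(fourth_edge_point):
    # The arm height index is the ordinate (y coordinate) of the fourth
    # edge point in the target coordinate system.
    return fourth_edge_point[1]

def arm_speed_index(fourth_moving_distance, target_duration):
    # Ratio of the fourth moving distance to the target duration.
    return fourth_moving_distance / target_duration
```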
6. The intelligent video identification method for the vehicle window control system according to claim 3, wherein the controlling of the raising and lowering of the target vehicle window according to the target height index or the target speed index comprises:
when the target speed index is larger than a preset speed index threshold, controlling a target window to be completely opened or completely closed according to two edge pixel points corresponding to a fourth moving distance corresponding to the last suspected arm area in the suspected arm area set;
and when the target speed index is smaller than or equal to the speed index threshold, controlling the height of the target vehicle window according to the target height index.
7. The method according to claim 6, wherein the controlling of the complete opening or complete closing of the target window according to the two edge pixel points corresponding to the fourth moving distance corresponding to the last suspected arm area in the set of suspected arm areas comprises:
merging two edge pixel points corresponding to a fourth movement distance corresponding to the last suspected arm area into a preset standard coordinate system;
determining a target vector according to the positions of two edge pixel points corresponding to a fourth movement distance corresponding to the last suspected arm area under the standard coordinate system;
determining an included angle between the target vector and a preset longitudinal axis vector as a direction included angle, wherein the direction of the longitudinal axis vector is the same as the positive direction of a longitudinal axis of a standard coordinate system;
when the direction included angle is larger than 90 degrees, controlling the target vehicle window to be completely opened;
and when the direction included angle is smaller than or equal to 90 degrees, controlling the target vehicle window to be completely closed.
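The direction test of claim 7 can be sketched with a vector angle against the positive longitudinal (y) axis; the helper name and the "open"/"close" return values are illustrative assumptions:

```python
import math

def window_action(p_start, p_end):
    # Build the target vector from the two edge pixel points of the
    # fourth moving distance, then measure its included angle with the
    # longitudinal axis vector (0, 1) of the standard coordinate system.
    vx, vy = p_end[0] - p_start[0], p_end[1] - p_start[1]
    angle = math.degrees(math.acos(vy / math.hypot(vx, vy)))
    # Angle larger than 90 degrees (downward motion): open completely;
    # otherwise: close completely.
    return "open" if angle > 90 else "close"
```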
8. The intelligent video identification method for the vehicle window control system according to claim 6, wherein the step of controlling the height of the target vehicle window according to the target height index comprises the following steps:
when the target height index is larger than or equal to a first target height acquired in advance, controlling the target window to be completely opened;
when the target height index is smaller than or equal to a second target height acquired in advance, controlling the target window to be completely closed;
and when the target height index is smaller than the first target height and larger than the second target height, determining the product of the pre-acquired target proportion coefficient and the target height index as a target adjusting height, and controlling the target vehicle window to adjust its height to the target adjusting height.
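Claim 8's three-branch height control, as a sketch; the parameter names and the string return values are illustrative, and the first target height is assumed to be larger than the second, as the claim's branch ordering implies:

```python
def control_window_height(height_index, first_height, second_height, coeff):
    # first_height > second_height; coeff is the target proportion
    # coefficient acquired in advance.
    if height_index >= first_height:
        return "fully_open"
    if height_index <= second_height:
        return "fully_closed"
    # In between: scale the height index to obtain the target adjusting height.
    return coeff * height_index
```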
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310021086.5A CN115761602B (en) | 2023-01-07 | 2023-01-07 | Intelligent video identification method for vehicle window control system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115761602A CN115761602A (en) | 2023-03-07 |
CN115761602B true CN115761602B (en) | 2023-04-18 |
Family
ID=85348387
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021185652A1 (en) * | 2020-03-17 | 2021-09-23 | Brose Fahrzeugteile Se & Co. Kommanditgesellschaft, Bamberg | Entry arrangement for a vehicle, method and vehicle |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102402680B (en) * | 2010-09-13 | 2014-07-30 | 株式会社理光 | Hand and indication point positioning method and gesture confirming method in man-machine interactive system |
CN109426789B (en) * | 2017-08-31 | 2022-02-01 | 京东方科技集团股份有限公司 | Hand and image detection method and system, hand segmentation method, storage medium and device |
CN110936797B (en) * | 2019-12-02 | 2021-08-27 | 恒大恒驰新能源汽车科技(广东)有限公司 | Automobile skylight control method and electronic equipment |
WO2022012337A1 (en) * | 2020-07-11 | 2022-01-20 | 北京术锐技术有限公司 | Moving arm system and control method |
CN114610155A (en) * | 2022-03-22 | 2022-06-10 | 京东方科技集团股份有限公司 | Gesture control method and device, display terminal and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||