CN102662470B - Method and system for realizing eye-based operation - Google Patents
Method and system for realizing eye-based operation
- Publication number
- CN102662470B CN201210097796.8A CN201210097796A
- Authority
- CN
- China
- Prior art keywords
- image
- place
- distance
- point
- first position
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Links
Landscapes
- Image Analysis (AREA)
- Collating Specific Patterns (AREA)
Abstract
The invention discloses a method and system for realizing eye-based operation. The distance between the center of the user's iris and the user's eye-corner point is determined, and, according to a preset correspondence between distances and operation instructions, the operation instruction corresponding to that distance is executed. Because the invention lets the user operate by eye without wearing equipment such as a helmet, the user experience is good and the cost is low.
Description
Technical field
The present invention relates to the field of human-computer interaction technology, and in particular to a method and system for realizing eye-based operation.
Background technology
With the rapid development of computer technology and human-computer interaction technology, new interaction devices and interaction methods have gradually become a research hotspot in the field of human-computer interaction. Traditional computer input devices such as the keyboard and mouse share a common feature: they must be operated by hand. For some users, such as disabled people with limited limb mobility, traditional interaction devices therefore have clear limitations in naturalness and friendliness. Research into interaction devices suitable for different groups of users has thus become a trend in the development of human-computer interaction technology.
The visual system plays an important role in human perception; research has found that people obtain most of their information about the outside world through vision. A human-computer interaction system built on the human visual system would therefore be both effective and natural.
Human-computer interaction based on gaze-tracking technology is direct, friendly and concise. It mainly obtains the region a user is interested in from the principles of human eye movement, and related interactive systems are developed to realize control of a computer or its peripherals. Current gaze-tracking technology relies on contact measurement, which requires the user to wear a special helmet to detect eye movement. Wearing such a helmet for a long time greatly inconveniences the user, and contact equipment is expensive.
Summary of the invention
To solve the above technical problems, embodiments of the present invention provide a method and system for realizing eye-based operation without wearing a helmet. The technical scheme is as follows:
A method for realizing eye-based operation, comprising:
determining a first position, where the center of the user's iris is located, and a second position, of the user's eye-corner point;
calculating the distance between the iris center and the eye-corner point from the first position and the second position, this distance being the first distance;
executing, according to a preset correspondence between distances and operation instructions, the first operation instruction corresponding to the first distance.
Preferably, the step of determining the first position of the iris center and the second position of the eye-corner point comprises:
collecting a human-body image;
identifying the face image in the human-body image;
identifying the eye image in the face image;
determining, from the eye image, the first position of the iris center and the second position of the eye-corner point.
Preferably, the step of identifying the face image in the human-body image comprises:
in the YIQ color space, identifying the skin region in the human-body image according to the distribution interval of the I channel component of facial skin color, the skin region being framed with a rectangle and comprising the face image and the neck image;
keeping the short side at the top of the rectangle fixed, shortening the rectangle into a square, and identifying the image inside the square as the face image.
Preferably, the step of identifying the face image in the human-body image comprises:
in the YUV color space, identifying the skin region in the human-body image according to the distribution range of the phase angle θ of the facial skin-color hue, the skin region being framed with a rectangle and comprising the face image and the neck image;
keeping the short side at the top of the rectangle fixed, shortening the rectangle into a square, and identifying the image inside the square as the face image.
Preferably, the step of identifying the eye image in the face image comprises:
dividing the face image framed by the square into equal upper and lower halves;
dividing the upper half into equal left and right halves, the left half containing the right-eye region image and the right half containing the left-eye region image;
using a genetic algorithm to locate the right-eye region image in the left half and the left-eye region image in the right half.
Preferably, in the step of determining, from the eye image, the first position of the iris center and the second position of the eye-corner point, the method of determining the first position comprises:
determining at least three points on the iris edge from the contrast between the iris and the sclera in a binary image;
determining the first position of the iris center from the at least three points on the iris edge.
Preferably, in the step of determining, from the eye image, the first position of the iris center and the second position of the eye-corner point, the method of determining the second position comprises:
estimating the second position from the first position of the iris center, generating an estimated position;
correcting the estimated position using the corner response function of an improved Harris corner detection algorithm combined with a variance projection function, thereby determining the second position.
Preferably, the step of calculating, from the first position and the second position, the distance between the iris center and the eye-corner point, the distance being the first distance, comprises:
calculating, from the first position and the second position, multiple distances between the iris center and the eye-corner point within a preset time period;
taking the mean of the multiple distances, the mean being the first distance.
The present invention also provides a system for realizing eye-based operation, comprising a position determination module, a distance determination module and an instruction execution module,
the position determination module being configured to determine the first position of the user's iris center and the second position of the user's eye-corner point;
the distance determination module being configured to calculate, from the first position and the second position, the distance between the iris center and the eye-corner point, the distance being the first distance;
the instruction execution module being configured to execute, according to a preset correspondence between distances and operation instructions, the first operation instruction corresponding to the first distance.
Preferably, the position determination module comprises an image collection device, a face recognition module, an eye recognition module, an iris center identification module and an eye-corner point identification module,
the image collection device being configured to collect a human-body image;
the face recognition module being configured to identify the face image in the human-body image;
the eye recognition module being configured to identify the eye image in the face image;
the iris center identification module being configured to determine, from the eye image, the first position of the iris center;
the eye-corner point identification module being configured to determine, from the eye image, the second position of the eye-corner point.
By applying the above technical scheme, the method and system for realizing eye-based operation provided by the invention can determine the distance between the user's iris center and the user's eye-corner point and, according to the preset correspondence between distances and operation instructions, execute the operation instruction corresponding to that distance. Because the user need not wear equipment such as a helmet, the user experience is good and the cost is low.
Accompanying drawing explanation
To illustrate the embodiments of the present invention or the technical schemes of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments recorded in the present invention; those of ordinary skill in the art can obtain other drawings from these drawings without creative work.
Fig. 1 is a schematic flowchart of a method for realizing eye-based operation provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of another method for realizing eye-based operation provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of image division in a method for realizing eye-based operation provided by an embodiment of the present invention;
Figs. 4 to 8 are schematic flowcharts of further methods for realizing eye-based operation provided by embodiments of the present invention;
Fig. 9 is a schematic structural diagram of a system for realizing eye-based operation provided by an embodiment of the present invention;
Figs. 10 to 16 are schematic structural diagrams of further systems for realizing eye-based operation provided by embodiments of the present invention.
Embodiment
To help those skilled in the art better understand the technical scheme of the present invention, the technical scheme in the embodiments of the present invention is described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art, without creative work, on the basis of the embodiments of the present invention fall within the scope of protection of the present invention.
As shown in Fig. 1, a method for realizing eye-based operation provided by an embodiment of the present invention can comprise steps S1, S2 and S3.
S1. Determine the first position of the user's iris center and the second position of the user's eye-corner point.
It should be understood that the iris belongs to the middle layer of the eyeball and lies at the very front of the vascular tunic; its color is relatively dark and differs between people: in Europe and America it may be blue or brown, while in Asian countries such as China it is black or grey. The iris sits in front of the ciliary body and automatically adjusts the size of the pupil, regulating how much light enters the eye; the pupil lies at its center. The sclera is the white part of the eyeball, that is, the white of the eye.
Step S1 can comprise steps S11 and S12.
S11. Collect a human-body image.
Specifically, equipment such as a camera can be used to collect the human-body image. Preferably, the camera is a monocular camera whose position and shooting angle can be adjusted, so as to better collect the image.
S12. Identify the face image in the human-body image.
Specifically, step S12 can comprise steps S121 and S122.
S121. In the YIQ color space, identify the skin region in the human-body image according to the distribution interval of the I channel component of facial skin color; the skin region is framed with a rectangle and comprises the face image and the neck image.
It should be understood that the skin color of the neck is very close to that of the face, while below the neck there are generally clothes, whose color differs markedly from the skin color. A rectangle is therefore used to frame the face image and the neck image together.
Those skilled in the art will understand that the YIQ color space is commonly adopted by North American television systems and belongs to the NTSC (National Television Standards Committee) system. Here Y does not refer to yellow but to the luminance (brightness) of the color; Y is in fact the gray value of the image, while I and Q carry the chrominance, i.e. the attributes describing the hue and saturation of the image color. In the YIQ system, the Y component carries the luminance information and the I and Q components carry the color information: the I component represents the color change from orange to cyan, and the Q component represents the change from purple to yellow-green.
In the YIQ color space, facial skin color clusters compactly in the I channel component. Statistical analysis of the I channel data of images containing facial skin pixels shows that skin-color information is mainly distributed between 40 and 100.
Because the color of human lips is also distinctive, a preferred embodiment of the invention can further use the lip-color feature to determine the skin region. The chrominance of lips is distributed within a certain range in the YIQ color space. Analysis and statistics of the channel components of lip pixels in image samples containing only the lip region show that the optimal segmentation thresholds of the channels are Y ∈ [90, 210], I ∈ [20, 80] and Q ∈ [10, 28]. The distribution ranges of the skin-color I channel component and of the lip-color channel components in the YIQ space serve as one criterion of the face region segmentation algorithm.
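The YIQ skin criterion above can be sketched as follows. This is a minimal sketch assuming 8-bit RGB input and the standard NTSC conversion matrix; only the I ∈ [40, 100] skin interval comes from the text, and the function names are illustrative:

```python
def rgb_to_yiq(r, g, b):
    """Convert an 8-bit RGB pixel to the YIQ color space (NTSC matrix)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q

def is_skin_yiq(r, g, b, i_lo=40, i_hi=100):
    """Classify a pixel as skin if its I component falls in the stated interval."""
    _, i, _ = rgb_to_yiq(r, g, b)
    return i_lo <= i <= i_hi

def skin_mask(image):
    """Binary skin mask over a row-major list of RGB tuples."""
    return [[is_skin_yiq(*px) for px in row] for row in image]
```

The bounding rectangle of the `True` region of the mask would then be the frame containing the face and neck.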
Of course, step S121 can also be: in the YUV color space, identify the skin region in the human-body image according to the distribution range of the phase angle θ of the facial skin-color hue; the skin region is framed with a rectangle and comprises the face image and the neck image.
Those skilled in the art will understand that YUV is a color encoding method adopted by European television systems and is the color space used by the PAL and SECAM analog color television systems. In a modern color television system, a three-tube color camera or a color CCD camera captures the picture; the color image signal obtained is color-separated and amplified to yield RGB, which a matrix converter then turns into a luminance signal Y and two color-difference signals R−Y (i.e. V) and B−Y (i.e. U). Finally, the transmitter encodes the luminance and the two color-difference signals separately and sends them on the same channel. This representation of color is the so-called YUV color space. The importance of the YUV color space is that its luminance signal Y is separated from its chrominance signals U and V.
In the YUV color space, the luminance signal (Y) and the chrominance signals (U, V) are independent of each other; saturation is determined by the modulus Ch and hue is represented by the phase angle θ. The phase angle of the facial skin-color hue is distributed between 110 and 155 degrees, and that of lip color between 80 and 100 degrees. The distribution ranges of the skin-color and lip-color phase angles in the YUV space serve as another criterion of the face region segmentation algorithm, combined with the face region segmentation thresholds in the YIQ space for face detection. Skin-color segmentation is carried out according to the above threshold ranges; morphological opening and closing operations then remove isolated small blocks in the image; finally, region growing rectifies the face region, which is marked with a rectangular frame.
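The phase-angle criterion can be sketched as follows. The text does not give the exact definition of θ, so this sketch assumes θ = atan2(V, U) in degrees with the usual U = 0.492(B − Y), V = 0.877(R − Y) scaling; only the 110 to 155 degree skin range comes from the text:

```python
import math

def skin_hue_angle_deg(r, g, b):
    """Hue phase angle θ (degrees) of an 8-bit RGB pixel in the YUV space,
    computed as atan2(V, U) and normalized into [0, 360)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)  # B - Y, scaled
    v = 0.877 * (r - y)  # R - Y, scaled
    return math.degrees(math.atan2(v, u)) % 360

def is_skin_yuv(r, g, b, lo=110, hi=155):
    """Skin test: θ must fall in the stated distribution range."""
    return lo <= skin_hue_angle_deg(r, g, b) <= hi
```

A lip test would be the same function with the 80 to 100 degree range.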
S122. Keep the short side at the top of the rectangle fixed, shorten the rectangle into a square, and identify the image inside the square as the face image.
It should be understood that, under normal circumstances, the bottom of the rectangle marked after skin-color segmentation contains the skin region of the neck, while the top edge of the rectangle is the upper boundary of the face region. The top edge of the rectangle is therefore kept in place and the bottom edge is raised: the width remains constant and the height is reduced until it equals the width.
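This rectangle-to-square adjustment can be sketched in one line; the (x, y, w, h) rectangle convention with the origin at the top-left corner and y growing downward is an assumption:

```python
def face_square(rect):
    """Keep the top edge of the skin-region rectangle fixed and shrink its
    height until it equals its width, dropping the neck region at the bottom.
    rect is (x, y, w, h) with the origin at the top-left corner."""
    x, y, w, h = rect
    return (x, y, w, min(w, h))
```

The `min(w, h)` guards the unusual case of a rectangle already wider than it is tall, which is left unchanged.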
S13. Identify the eye image in the face image.
Step S13 can comprise:
S131. Divide the face image framed by the square into equal upper and lower halves.
It should be understood that the eye region is contained in the upper half of the face region; the square is therefore first divided from its middle into an upper half and a lower half of equal length and width.
S132. Divide the upper half into equal left and right halves, the left half containing the right-eye region image and the right half containing the left-eye region image.
Because the eyes lie in the upper half, the upper half is divided from its middle into a left half and a right half of equal length and width; the features of the person's left eye appear in the right half.
S133. Use a genetic algorithm to locate the right-eye region image in the left half and the left-eye region image in the right half.
The idea of searching for an optimal solution with a genetic algorithm is used to locate the left-eye region in the right half, and the right-eye region is located in the same way.
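The text does not detail the genetic algorithm, so the following is only a toy sketch of the idea of locating an eye region by evolutionary search, using window darkness as a stand-in fitness (the eye region tends to be the darkest patch of the half-face); a real implementation would use a template-based or feature-based fitness, and every name and parameter here is an assumption:

```python
import random

def locate_eye_ga(gray, win=3, pop_size=20, gens=30, seed=0):
    """Toy genetic search for the darkest win x win window in a gray-level
    half-face image. Individuals are (row, col) window origins; fitness is
    negative mean intensity, so darker windows are fitter."""
    rng = random.Random(seed)
    rows, cols = len(gray), len(gray[0])
    max_r, max_c = rows - win, cols - win

    def fitness(ind):
        r, c = ind
        total = sum(gray[r + i][c + j] for i in range(win) for j in range(win))
        return -total / (win * win)

    pop = [(rng.randint(0, max_r), rng.randint(0, max_c)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = (a[0], b[1])                 # one-point crossover
            if rng.random() < 0.3:               # small local mutation
                child = (min(max_r, max(0, child[0] + rng.randint(-1, 1))),
                         min(max_c, max(0, child[1] + rng.randint(-1, 1))))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

Run once on each half of the upper face region to obtain the two eye windows.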
S14. Determine, from the eye image, the first position of the iris center and the second position of the eye-corner point.
The method of determining the first position in step S14 can comprise:
S141. Determine at least three points on the iris edge from the contrast between the iris and the sclera in a binary image.
S142. Determine the first position of the iris center from the at least three points on the iris edge.
The shape of the human iris can be approximated as a circle, and by the geometric properties of a circle, once three points on the circumference are known the center of the circle can be determined.
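The center-from-three-boundary-points computation can be sketched with the standard circumcenter formula (the function name is illustrative):

```python
def circle_from_three_points(p1, p2, p3):
    """Center of the circle through three non-collinear boundary points,
    via the perpendicular-bisector (circumcenter) formula."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if d == 0:
        raise ValueError("points are collinear")
    s1, s2, s3 = x1**2 + y1**2, x2**2 + y2**2, x3**2 + y3**2
    ux = (s1 * (y2 - y3) + s2 * (y3 - y1) + s3 * (y1 - y2)) / d
    uy = (s1 * (x3 - x2) + s2 * (x1 - x3) + s3 * (x2 - x1)) / d
    return ux, uy
```

Applied to three iris-edge points from the binary image, this yields the first position.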
The method of determining the second position in step S14 can comprise:
S143. Estimate the second position from the first position of the iris center, generating an estimated position.
S144. Correct the estimated position using the corner response function of an improved Harris corner detection algorithm combined with a variance projection function, and determine the second position.
Specifically, the implementation of step S144 can comprise steps S144a, S144b, S144c, S144d and S144e.
S144a. Convert the color image of the eye region found by the genetic algorithm into a gray-level image using the color-to-gray conversion formula, then locate the eye-corner point coordinates in this local region as described below.
S144b. For each point in the eye-region gray-level image, compute the first derivatives Ix and Iy in the horizontal and vertical directions respectively, and multiply the derivative images together, as in formula (4.8):
A = Ix², B = Iy², C = Ix·Iy    (4.8)
S144c. Apply Gaussian filtering to the images obtained in the previous step using a Gaussian function. The autocorrelation matrix of the Harris corner detection algorithm can then be expressed as:
M = [A, C; C, B]
where A, B and C are the Gaussian-smoothed versions of Ix², Iy² and Ix·Iy.
The corner response function of the Harris corner detection algorithm is:
RF = AB − C² − k(A + B)²
However, the value of k in the above formula must be chosen from experience, and an improper value degrades the overall detection result. To avoid this error, another form is introduced as a replacement:
RF′ = (AB − C²) / (A + B + k′)
where k′ is a small integer.
Once the replacement response function is determined, the weighting factor function F of the variance projection function is also decided. F is expressed as in formula (4.13):
F = λ·w    (4.13)
where λ is a constant and w is the corner response value.
S144d. Using the above, the weighted projection functions in the horizontal and vertical directions are obtained, where I(x, y) is the gray value of the pixel at point (x, y).
S144e. Compute the weighted projection functions of the eye corner in the horizontal and vertical directions as above; they give the abscissa and ordinate of the eye corner.
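Steps S144b to S144e can be illustrated with the following sketch. It assumes central-difference derivatives, a 3x3 box filter standing in for the Gaussian filter, λ = 1, and peak-of-projection corner selection; only the RF′ = (AB − C²)/(A + B + k′) response comes from the text, and all function names are hypothetical:

```python
def corner_response(gray, kp=1.0):
    """Improved Harris response RF' = (AB - C^2) / (A + B + k') per pixel.
    A, B, C are smoothed versions of Ix^2, Iy^2 and Ix*Iy, with Ix and Iy
    central-difference derivatives clipped at the borders."""
    h, w = len(gray), len(gray[0])
    Ix = [[(gray[r][min(c + 1, w - 1)] - gray[r][max(c - 1, 0)]) / 2.0
           for c in range(w)] for r in range(h)]
    Iy = [[(gray[min(r + 1, h - 1)][c] - gray[max(r - 1, 0)][c]) / 2.0
           for c in range(w)] for r in range(h)]
    A0 = [[Ix[r][c] ** 2 for c in range(w)] for r in range(h)]
    B0 = [[Iy[r][c] ** 2 for c in range(w)] for r in range(h)]
    C0 = [[Ix[r][c] * Iy[r][c] for c in range(w)] for r in range(h)]

    def box(img, r, c):  # 3x3 mean around (r, c), a stand-in for the Gaussian
        vals = [img[rr][cc] for rr in range(max(0, r - 1), min(h, r + 2))
                for cc in range(max(0, c - 1), min(w, c + 2))]
        return sum(vals) / len(vals)

    rf = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            a, b, cc = box(A0, r, c), box(B0, r, c), box(C0, r, c)
            rf[r][c] = (a * b - cc * cc) / (a + b + kp)
    return rf

def corner_from_projections(gray, rf, lam=1.0):
    """Weight each pixel by F = lam * RF' and project: the peaks of the
    weighted horizontal and vertical projections give the corner's row
    (ordinate) and column (abscissa)."""
    h, w = len(gray), len(gray[0])
    row_proj = [sum(lam * rf[r][c] * gray[r][c] for c in range(w)) for r in range(h)]
    col_proj = [sum(lam * rf[r][c] * gray[r][c] for r in range(h)) for c in range(w)]
    return row_proj.index(max(row_proj)), col_proj.index(max(col_proj))
```

In the full method the search would be restricted to a small neighborhood of the estimated position from step S143.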
S2. Calculate the distance between the iris center and the eye-corner point from the first position and the second position, this distance being the first distance.
S21. From the first position and the second position, calculate multiple distances between the iris center and the eye-corner point within a preset time period.
S22. Take the mean of the multiple distances, the mean being the first distance.
Because the position of the iris varies unstably with changes in the external environment, a mapping based only on the iris-center-to-corner distance located in a single frame has low reliability. The preferred embodiment of the present invention therefore takes, as the first distance, the mean of the multiple distances measured within the preset time period, which improves reliability considerably. The practical meaning of this scheme is that the length of time the eye gazes at a region (for example a subregion of the screen) is used as the selection method: while the user gazes at a screen subregion, the invention continually processes the collected images, maps the iris-center-to-corner distance to the gazed subregion, and uses the mean distance over the period to perform the mapping. The preset time period can be 5 seconds: within this interval the gaze moves from one subregion to another, the mapping between the distance and the subregion can stabilize, and the error introduced by unconscious blinking is also avoided.
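The averaging step can be sketched as follows; the (timestamp, distance) sample format and the function name are assumptions, and only the 5-second window comes from the text:

```python
from statistics import mean

def first_distance(samples, window=5.0):
    """Mean iris-center-to-eye-corner distance over the samples falling in
    the last `window` seconds; `samples` is a time-ordered list of (t, d)
    pairs. Averaging smooths per-frame jitter and blink outliers."""
    t_end = samples[-1][0]
    return mean(d for t, d in samples if t_end - t <= window)
```

One such call per gaze period yields the first distance used in step S3.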
S3. Execute, according to the preset correspondence between distances and operation instructions, the first operation instruction corresponding to the first distance.
Specifically, a display screen, wall, blackboard or the like can first display pictures to guide the user to gaze at part of the displayed content. For example, the display shows two items: a water cup on the left and a toilet on the right. When the user gazes at the water cup, an instruction related to the cup can be executed, such as informing a nurse that the user needs to drink water. When the user gazes at the toilet, an instruction related to the toilet can be executed, such as informing a nurse that the user needs the toilet. Of course, many other concrete embodiments are possible and well known to those skilled in the art; they are not repeated here.
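A distance-to-instruction lookup of this kind can be sketched as follows; the band boundaries and instruction strings are purely illustrative, since the text only specifies that a preset correspondence between distances and operation instructions exists:

```python
def pick_instruction(distance, bands):
    """Map the averaged first distance to a preset operation instruction.
    `bands` is the preset correspondence: ((lo, hi), instruction) pairs,
    matched on lo <= distance < hi."""
    for (lo, hi), instruction in bands:
        if lo <= distance < hi:
            return instruction
    return None  # no band matched

# Illustrative correspondence table (values are made up).
BANDS = [((0.0, 15.0), "notify nurse: user needs water"),
         ((15.0, 30.0), "notify nurse: user needs the toilet")]
```

Each band corresponds to one gazed screen subregion, so the instruction set grows with the number of displayed items.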
The first operation instruction can be, for example, a request to change a dressing, a report of physical discomfort, or a request for water. It should be understood that the functions realized by operation instructions can be various, and the present invention does not limit them; likewise, the correspondence between distances and operation instructions can take many forms.
The method for realizing eye-based operation provided by the present invention can determine the distance between the user's iris center and the user's eye-corner point and, according to the preset correspondence between distances and operation instructions, execute the operation instruction corresponding to that distance. Because the user need not wear equipment such as a helmet, the user experience is good and the cost is low. Since eye-based operation spares the user the steps of operating with the limbs, the applicable population is also wider.
Corresponding to the above method embodiments, the present invention also provides a system for realizing eye-based operation.
As shown in Fig. 9, a system for realizing eye-based operation provided by an embodiment of the present invention comprises a position determination module 100, a distance determination module 200 and an instruction execution module 300.
The position determination module 100 is configured to determine the first position of the user's iris center and the second position of the user's eye-corner point.
As shown in Fig. 10, the position determination module 100 can comprise an image collection device 110, a face recognition module 120, an eye recognition module 130, an iris center identification module 140 and an eye-corner point identification module 150.
The image collection device 110 is configured to collect a human-body image.
Specifically, the image collection device 110 can be equipment such as a camera, preferably a monocular camera whose position and shooting angle can be adjusted, so as to better collect the image.
The face recognition module 120 is configured to identify the face image in the human-body image.
The eye recognition module 130 is configured to identify the eye image in the face image.
The iris center identification module 140 is configured to determine, from the eye image, the first position of the iris center.
The eye-corner point identification module 150 is configured to determine, from the eye image, the second position of the eye-corner point.
As shown in Fig. 11, the face recognition module 120 can comprise a first skin-color model module 121 and a face framing module 122.
The first skin-color model module 121 is configured to identify, in the YIQ color space and according to the distribution interval of the I channel component of facial skin color, the skin region in the human-body image; the skin region is framed with a rectangle and comprises the face image and the neck image.
Be understandable that, the colour of skin of neck is very close with the colour of skin of face, and is generally clothes below neck, larger with the colour-difference distance of the colour of skin.Use rectangle to carry out frame choosing facial image and neck image frame to be selected.
It will be appreciated by persons skilled in the art that YIQ chrominance space usually adopt by the television system of North America, belong to NTSC (NationalTelevisionStandardsCommittee) system.Here Y does not refer to yellow, and refers to the legibility (Luminance) of color, i.e. brightness (Brightness).Y is exactly the gray-scale value (Grayvalue) of image in fact, I and Q then refers to tone (Chrominance), i.e. the attribute of Description Image color and saturation degree.In YIQ system, the monochrome information of Y-component representative image, I, Q two components then carry colouring information, and I component representative changes from orange to the color of cyan, and Q component then represents and changes to yellowish green color from purple.
In the YIQ color space, facial skin color clusters compactly in the chrominance I channel. Statistical analysis of the I channel component of images containing facial skin pixels shows that the skin color information is mainly distributed between 40 and 100.
Because the color characteristics of human lips are also distinctive, a preferred embodiment of the present invention further uses lip color features to determine the human skin color image. The chromaticity of the lips falls within a certain range in the YIQ color space: analysis of the channel components of lip pixels in image samples containing only the lip region shows that the optimal segmentation thresholds are Y ∈ [90, 210], I ∈ [20, 80] and Q ∈ [10, 28]. The distribution ranges of the skin color I channel component and of the lip color channel components in the YIQ color space are used as one criterion of the face region segmentation algorithm.
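As an illustration of the segmentation criterion above, the following sketch converts RGB pixels to YIQ with the standard NTSC matrix and applies the stated thresholds (skin: I ∈ [40, 100]; lip: Y ∈ [90, 210], I ∈ [20, 80], Q ∈ [10, 28]). The conversion matrix and the assumption that the thresholds apply to YIQ values computed directly from 8-bit RGB are ours, not the patent's.

```python
import numpy as np

# Standard NTSC RGB -> YIQ matrix; we assume the patent's thresholds
# apply to YIQ values computed from 8-bit RGB without rescaling.
YIQ_MATRIX = np.array([
    [0.299,  0.587,  0.114],   # Y: luminance (gray value)
    [0.596, -0.274, -0.322],   # I: orange <-> cyan axis
    [0.211, -0.523,  0.312],   # Q: purple <-> yellow-green axis
])

def rgb_to_yiq(rgb):
    """Convert an (..., 3) RGB array to float YIQ channels."""
    return rgb.astype(np.float64) @ YIQ_MATRIX.T

def skin_mask(rgb):
    """Candidate skin pixels: I channel component in [40, 100]."""
    i = rgb_to_yiq(rgb)[..., 1]
    return (i >= 40) & (i <= 100)

def lip_mask(rgb):
    """Candidate lip pixels: Y in [90, 210], I in [20, 80], Q in [10, 28]."""
    y, i, q = np.moveaxis(rgb_to_yiq(rgb), -1, 0)
    return ((y >= 90) & (y <= 210) &
            (i >= 20) & (i <= 80) &
            (q >= 10) & (q <= 28))
```

The two masks can then be combined (e.g. with a logical OR) before the morphological cleanup that the text describes.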
The face frame modeling module 122 keeps the top short side of the rectangle fixed and shortens the rectangle until it becomes a square; the image within the square is identified as the facial image.
As shown in Figure 12, in other embodiments of the present invention the face recognition module 120 may comprise a second skin color model module 123 and the face frame modeling module 122.
The second skin color model module 123 is configured to identify, in the YUV color space and according to the distribution range of the phase angle θ of the facial skin color hue, the human skin color image within the human body image; the skin color image is frame-selected with a rectangle and comprises the facial image and a neck image.
Those skilled in the art will appreciate that YUV is a color encoding method adopted by European television systems (PAL); it is the color space used by the PAL and SECAM analog color television standards. In modern color television systems, a three-tube color camera or a color CCD camera captures the scene; the resulting color image signal is color-separated and amplitude-corrected to obtain RGB, which a matrix conversion circuit then turns into a luminance signal Y and two color difference signals R−Y (i.e., V) and B−Y (i.e., U); finally the transmitter encodes the luminance and the two chrominance signals separately and sends them on the same channel. This representation of color is the so-called YUV color space. The importance of the YUV color space is that its luminance signal Y is separated from its chrominance signals U and V.
In the YUV color space the luminance signal (Y) and the chrominance signals (U, V) are independent of each other; saturation is determined by the modulus Ch and hue is represented by the phase angle θ. The phase angle θ of the facial skin color hue is distributed between 110 and 155, and that of the lip color hue between 80 and 100. The distribution ranges of the skin and lip hue phase angles in the YUV color space serve as another criterion of the face region segmentation algorithm, used for face detection in combination with the face region segmentation thresholds in the YIQ color space. Skin color segmentation is performed with the above threshold ranges; morphological opening and closing operations then remove isolated small patches from the image; finally region growing is used to rectify the face region, which is marked with a rectangular frame.
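The hue phase angle criterion can be sketched as below. The patent does not spell out its angle convention, so we assume θ = atan2(V, U) expressed in degrees on [0, 360), with the saturation modulus Ch = √(U² + V²).

```python
import math

def hue_phase_angle(u, v):
    """Phase angle theta (degrees, in [0, 360)) of the chrominance
    vector (U, V); hue in the YUV space is represented by this angle."""
    return math.degrees(math.atan2(v, u)) % 360.0

def saturation(u, v):
    """Modulus Ch of the chrominance vector, determining saturation."""
    return math.hypot(u, v)

def is_skin_hue(u, v):
    """Facial skin hue criterion: theta between 110 and 155 degrees."""
    return 110.0 <= hue_phase_angle(u, v) <= 155.0

def is_lip_hue(u, v):
    """Lip hue criterion: theta between 80 and 100 degrees."""
    return 80.0 <= hue_phase_angle(u, v) <= 100.0
```

A pixel passing `is_skin_hue` in YUV and the I-channel test in YIQ would satisfy both criteria of the face region segmentation algorithm.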
The face frame modeling module 122 keeps the top short side of the rectangle fixed and shortens the rectangle until it becomes a square; the image within the square is identified as the facial image.
It can be understood that under normal circumstances the bottom of the rectangle marked after skin color segmentation contains the skin region of the neck, while the top of the rectangle is the upper boundary of the face region. Therefore the position of the top edge of the rectangle is kept unchanged and the bottom edge is raised, keeping the width constant and reducing the height until it equals the width.
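The rectangle-to-square adjustment above amounts to a one-line geometric rule. The sketch below assumes the usual image convention of a box given as (x, y, width, height) with y growing downward, so keeping the top edge fixed means leaving x and y unchanged.

```python
def shrink_to_square(x, y, w, h):
    """Keep the top short side fixed and raise the bottom edge until the
    height equals the width, so the square frames the face without the neck."""
    if h <= w:
        return (x, y, w, h)  # already no taller than wide: nothing to trim
    return (x, y, w, w)
```

Applied to a skin-color box of width 50 and height 80, the square face frame is the top 50 × 50 portion of that box.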
As shown in Figure 13, the eye recognition module 130 may comprise a first division module 131, a second division module 132 and a human eye locating module 133.
The first division module 131 is configured to divide the facial image frame-selected by the square into an equal upper half and lower half.
It can be understood that the eye region lies within the upper half of the face region; the square frame is therefore divided at its middle into an upper half region and a lower half region of equal length and width.
The second division module 132 is configured to divide the upper half into an equal left half and right half, wherein the image of the left half carries the right-eye region image and the image of the right half carries the left-eye region image.
Because the eyes lie in the upper half region, it is divided at its middle into a left half region and a right half region of equal length and width; the features of the subject's left eye appear in the right half region.
The human eye locating module 133 is configured to locate the right-eye region image in the left-half image and the left-eye region image in the right-half image, using a genetic algorithm in each case.
That is, a genetic algorithm searching for an optimal solution locates the left-eye region within the right half region, and the right-eye region is located in the same way.
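The two division steps above can be sketched on a NumPy image array as follows (assuming a square face crop; as the text notes, the subject's right eye appears in the left half of the image and vice versa):

```python
import numpy as np

def eye_search_regions(face):
    """Split a square face crop through its middle: keep only the upper
    half (where the eyes lie), then split it into left and right halves."""
    h, w = face.shape[:2]
    upper = face[: h // 2]
    left_half = upper[:, : w // 2]    # carries the subject's right eye
    right_half = upper[:, w // 2 :]   # carries the subject's left eye
    return left_half, right_half
```

Each returned half would then be handed to the genetic-algorithm search, which this sketch does not attempt to reproduce.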
Wherein, as shown in Figure 14, the iris center identification module 140 may comprise an edge point determination module 141 and a center determination module 142.
The edge point determination module 141 is configured to determine at least three points on the iris edge in a binary image, according to the color difference between the iris and the sclera of the eyeball.
The center determination module 142 is configured to determine the first position, at which the iris center is located, from the at least three points on the iris edge.
The shape of the human iris can be approximated as a circle, and by the geometric properties of a circle, its center can be obtained once three points on its circumference are known.
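This three-point construction is the standard circumcenter formula; a sketch (not tied to any particular edge detector) follows:

```python
def circle_center(p1, p2, p3):
    """Center of the circle passing through three non-collinear points,
    e.g. three detected iris-edge points, via the circumcenter formula."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2.0 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-12:
        raise ValueError("edge points are collinear; pick another triple")
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1) +
          (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3) +
          (x3**2 + y3**2) * (x2 - x1)) / d
    return (ux, uy)  # estimated iris center (the "first position")
```

For instance, the points (7, 3), (2, 8) and (−3, 3) all lie on the circle of radius 5 centered at (2, 3), and the function recovers that center.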
Wherein, as shown in Figure 15, the eye corner point identification module 150 may comprise an estimation module 151 and a correction module 152.
The estimation module 151 is configured to estimate the second position from the first position of the iris center, generating an estimated position.
The correction module 152 is configured to correct the estimated position using the corner response function of an improved Harris corner detection algorithm in combination with a variance projection function, thereby determining the second position.
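A rough sketch of the correction pipeline described here and detailed in the claims (directional first derivatives, their pointwise product, Gaussian smoothing, then projections whose peaks give the corner coordinates). Plain central differences and a [1, 2, 1] kernel stand in for the unspecified derivative operator and Gaussian, and simple absolute-value projections stand in for the patent's weighted projection function F, which is not reproduced in this text.

```python
import numpy as np

def _smooth121(a, axis):
    """Cheap Gaussian stand-in: [1, 2, 1]/4 kernel along one axis."""
    p = np.moveaxis(a, axis, 0)
    out = p.copy()
    out[1:-1] = (p[:-2] + 2.0 * p[1:-1] + p[2:]) / 4.0
    return np.moveaxis(out, 0, axis)

def locate_corner(gray):
    """Sketch of the corner localization steps: horizontal and vertical
    first derivatives, their pointwise product, smoothing, then the peaks
    of the column/row projections give the corner abscissa and ordinate."""
    g = gray.astype(np.float64)
    gx = np.zeros_like(g)
    gx[:, 1:-1] = (g[:, 2:] - g[:, :-2]) / 2.0   # d/dx (horizontal)
    gy = np.zeros_like(g)
    gy[1:-1, :] = (g[2:, :] - g[:-2, :]) / 2.0   # d/dy (vertical)
    prod = np.abs(gx * gy)                        # large only near corners
    prod = _smooth121(_smooth121(prod, 0), 1)     # Gaussian-filter stand-in
    col_proj = prod.sum(axis=0)                   # projection onto x
    row_proj = prod.sum(axis=1)                   # projection onto y
    return int(np.argmax(col_proj)), int(np.argmax(row_proj))
```

On a synthetic L-shaped step image with a single corner, the derivative product is nonzero only where horizontal and vertical edges meet, so the projection peaks land on that corner.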
The distance determination module 200 is configured to calculate the distance between the iris center and the eye corner point of the user from the first position and the second position; this distance is the first distance.
Wherein, as shown in Figure 16, the distance determination module 200 may comprise a distance obtaining module 210 and an averaging module 220.
The distance obtaining module 210 is configured to calculate, from the first position and the second position, a plurality of distances between the user's iris center and eye corner point within a preset time period.
The averaging module 220 is configured to obtain the mean value of the plurality of distances; this mean value is the first distance.
Because the position of the iris varies unstably with changes in the external environment, a mapping that relies on the iris-center-to-eye-corner distance from a single located frame is of low reliability. A preferred embodiment of the present invention therefore takes as the first distance the mean of a plurality of iris-center-to-eye-corner distances measured within a preset time period, which greatly improves reliability. The practical significance of this scheme is that the dwell time of the eye on a watched area (such as a certain region of the screen) serves as the selection criterion: within the period in which the eye fixates a sub-region, the invention continuously processes the captured images, maps the iris-center-to-eye-corner distance to the fixated screen sub-region, and uses the mean distance over the period to perform the mapping to the watched sub-region. The preset time period may be 5 seconds: within this interval the gaze moves from one fixated sub-region to another, the mapping between the iris-center-to-eye-corner distance and the sub-region can stabilize, and the errors introduced by involuntary blinking are avoided.
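The dwell-window averaging can be sketched as follows; the 5-second window comes from the text, while the distance bands and the commands in the example table are purely illustrative assumptions (the patent does not publish concrete values).

```python
import math

class GazeCommandMapper:
    """Collect iris-center-to-eye-corner distances over a dwell window
    (the text suggests 5 seconds) and map their mean -- the "first
    distance" -- to a preset operation instruction."""

    def __init__(self, bands):
        # bands: list of ((low, high), instruction) pairs -- illustrative only
        self.bands = bands
        self.samples = []

    def add_frame(self, iris_center, corner_point):
        """Record the distance measured in one processed frame."""
        dx = iris_center[0] - corner_point[0]
        dy = iris_center[1] - corner_point[1]
        self.samples.append(math.hypot(dx, dy))

    def first_distance(self):
        """Mean of the collected distances: robust to blinks and jitter."""
        return sum(self.samples) / len(self.samples)

    def instruction(self):
        """Look up the instruction whose band contains the first distance."""
        d = self.first_distance()
        for (low, high), cmd in self.bands:
            if low <= d < high:
                return cmd
        return None
```

A caller would reset the sample list each time the gaze moves to a new sub-region, then query `instruction()` once the dwell window elapses.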
The instruction execution module 300 is configured to execute, according to the preset correspondence between distances and operation instructions, the first operation instruction corresponding to the first distance.
The first operation instruction may be an instruction such as a request to change clothing, a notification of discomfort, or a request for water.
The system for realizing eye operation provided by the present invention can determine the distance between the user's iris center and eye corner point and, according to the preset correspondence between distances and operation instructions, execute the operation instruction corresponding to that distance. Because the present invention realizes eye operation without requiring the user to wear equipment such as a helmet, the user experience is good and the cost is low.
For convenience of description, the above device is described in terms of units divided by function. When implementing the present invention, the functions of the units may of course be realized in one or more pieces of software and/or hardware.
From the above description of the embodiments, those skilled in the art will clearly understand that the present invention may be realized by software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solution of the present invention, or the part of it that contributes over the prior art, may be embodied in the form of a software product. The computer software product may be stored in a storage medium, such as ROM/RAM, a magnetic disk or an optical disc, and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the method described in the embodiments, or in certain parts of the embodiments, of the present invention.
The embodiments in this specification are described in a progressive manner; for identical or similar parts the embodiments may be referred to mutually, and each embodiment focuses on its differences from the others. In particular, the system embodiment is described relatively simply because it is substantially similar to the method embodiment; for relevant details, refer to the description of the method embodiment. The system embodiment described above is merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the object of the solution of this embodiment, which those of ordinary skill in the art can understand and implement without creative effort.
The present invention is applicable to numerous general-purpose or special-purpose computing system environments or configurations, for example: personal computers, server computers, handheld or portable devices, laptop devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronic devices, network PCs, minicomputers, mainframe computers, and distributed computing environments including any of the above systems or devices.
The present invention may be described in the general context of computer-executable instructions, such as program modules. Generally, program modules include routines, programs, objects, components, data structures and the like that perform particular tasks or implement particular abstract data types. The present invention may also be practiced in distributed computing environments in which tasks are performed by remote processing devices connected through a communication network; in a distributed computing environment, program modules may be located in both local and remote computer storage media including storage devices.
It should be noted that, herein, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations.
The above are only specific embodiments of the present invention. It should be pointed out that those skilled in the art may make improvements and modifications without departing from the principles of the invention, and such improvements and modifications shall also be regarded as falling within the protection scope of the present invention.
Claims (6)
1. A method for realizing eye operation, characterized by comprising:
determining the first position, at which the center of the user's iris is located, and the second position, at which the user's eye corner point is located;
calculating the distance between the user's iris center and the user's eye corner point from the first position and the second position, the distance being a first distance;
executing, according to a preset correspondence between distances and operation instructions, the first operation instruction corresponding to the first distance;
wherein the step of determining the first position of the iris center and the second position of the eye corner point comprises:
capturing a human body image using a monocular camera;
identifying the facial image in the human body image;
identifying the eye image in the facial image;
determining, from the eye image, the first position of the iris center and the second position of the eye corner point;
wherein, in the step of determining the first position and the second position from the eye image, the method of determining the first position comprises:
determining at least three points on the iris edge in a binary image, according to the color difference between the iris and the sclera of the eyeball;
determining the first position of the iris center from the at least three points on the iris edge;
wherein, in the step of determining the first position and the second position from the eye image, the method of determining the second position comprises:
estimating the second position from the first position of the iris center, generating an estimated position;
correcting the estimated position using the corner response function of an improved Harris corner detection algorithm in combination with a variance projection function, thereby determining the second position, comprising:
first converting the color image of the eye region found by the genetic algorithm into a gray-scale image using the color-to-gray conversion formula, then locating the eye corner point coordinates in the local image of this small region by the following successive steps:
calculating, for each point of the eye region gray-scale image, its first derivatives in the horizontal and vertical directions, and multiplying the horizontally and vertically differentiated images together;
applying Gaussian filtering to the resulting image using a Gaussian function;
computing the weighted projection functions in the horizontal and vertical directions (the formulas appear as images in the original publication and are not reproduced here);
wherein I(x, y) is the intensity value of pixel (x, y); x_i denotes the set of pixels located at point i in the vertical direction and y_i the set of pixels located at point i in the horizontal direction; dx denotes differentiation with respect to x and dy differentiation with respect to y; the two sums denote the accumulation of the weighting factor function F in the horizontal and vertical directions, respectively; and i indexes the pixels of the image;
calculating the eye corner weighted projection functions in the horizontal and vertical directions by the above method to obtain the abscissa and ordinate of the eye corner.
2. The method according to claim 1, characterized in that the step of identifying the facial image in the human body image comprises:
in the YIQ color space, identifying the human skin color image within the human body image according to the distribution range of the I channel component of facial skin color, the skin color image being frame-selected with a rectangle and comprising the facial image and a neck image;
keeping the top short side of the rectangle fixed and shortening the rectangle until it becomes a square, the image within the square being identified as the facial image.
3. The method according to claim 1, characterized in that the step of identifying the facial image in the human body image comprises:
in the YUV color space, identifying the human skin color image within the human body image according to the distribution range of the phase angle θ of the facial skin color hue, the skin color image being frame-selected with a rectangle and comprising the facial image and a neck image;
keeping the top short side of the rectangle fixed and shortening the rectangle until it becomes a square, the image within the square being identified as the facial image.
4. The method according to claim 2 or 3, characterized in that the step of identifying the eye image in the facial image comprises:
dividing the facial image frame-selected by the square into an equal upper half and lower half;
dividing the upper half into an equal left half and right half, wherein the image of the left half carries the right-eye region image and the image of the right half carries the left-eye region image;
locating the right-eye region image in the left-half image and the left-eye region image in the right-half image, using a genetic algorithm in each case.
5. The method according to claim 1, characterized in that the step of calculating the distance between the iris center and the eye corner point from the first position and the second position, the distance being the first distance, comprises:
calculating, from the first position and the second position, a plurality of distances between the user's iris center and eye corner point within a preset time period;
obtaining the mean value of the plurality of distances, the mean value being the first distance.
6. A system for realizing eye operation, characterized by comprising: a position determination module, a distance determination module and an instruction execution module,
the position determination module being configured to determine the first position, at which the center of the user's iris is located, and the second position, at which the user's eye corner point is located;
the distance determination module being configured to calculate the distance between the user's iris center and eye corner point from the first position and the second position, the distance being a first distance;
the instruction execution module being configured to execute, according to a preset correspondence between distances and operation instructions, the first operation instruction corresponding to the first distance;
wherein the position determination module comprises: an image collecting device, a face recognition module, an eye recognition module, an iris center identification module and an eye corner point identification module,
the image collecting device being configured to capture a human body image using a monocular camera;
the face recognition module being configured to identify the facial image in the human body image;
the eye recognition module being configured to identify the eye image in the facial image;
the iris center identification module being configured to determine the first position of the iris center from the eye image, comprising: determining at least three points on the iris edge in a binary image according to the color difference between the iris and the sclera of the eyeball, and determining the first position of the iris center from the at least three points on the iris edge;
the eye corner point identification module being configured to determine the second position of the eye corner point from the eye image, comprising:
estimating the second position from the first position of the iris center, generating an estimated position;
correcting the estimated position using the corner response function of an improved Harris corner detection algorithm in combination with a variance projection function, thereby determining the second position, comprising:
first converting the color image of the eye region found by the genetic algorithm into a gray-scale image using the color-to-gray conversion formula, then locating the eye corner point coordinates in the local image of this small region by the following successive steps:
calculating, for each point of the eye region gray-scale image, its first derivatives in the horizontal and vertical directions, and multiplying the horizontally and vertically differentiated images together;
applying Gaussian filtering to the resulting image using a Gaussian function;
computing the weighted projection functions in the horizontal and vertical directions (the formulas appear as images in the original publication and are not reproduced here);
wherein I(x, y) is the intensity value of pixel (x, y); x_i denotes the set of pixels located at point i in the vertical direction and y_i the set of pixels located at point i in the horizontal direction; dx denotes differentiation with respect to x and dy differentiation with respect to y; the two sums denote the accumulation of the weighting factor function F in the horizontal and vertical directions, respectively; and i indexes the pixels of the image;
calculating the eye corner weighted projection functions in the horizontal and vertical directions by the above method to obtain the abscissa and ordinate of the eye corner.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210097796.8A CN102662470B (en) | 2012-04-01 | 2012-04-01 | A kind of method and system realizing eye operation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210097796.8A CN102662470B (en) | 2012-04-01 | 2012-04-01 | A kind of method and system realizing eye operation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102662470A CN102662470A (en) | 2012-09-12 |
CN102662470B true CN102662470B (en) | 2016-02-10 |
Family
ID=46771975
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210097796.8A Expired - Fee Related CN102662470B (en) | 2012-04-01 | 2012-04-01 | A kind of method and system realizing eye operation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102662470B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105487787A (en) * | 2015-12-09 | 2016-04-13 | 东莞酷派软件技术有限公司 | Terminal operation method and device based on iris recognition and terminal |
CN106990839B (en) * | 2017-03-21 | 2020-06-05 | 张文庆 | Eyeball identification multimedia player and implementation method thereof |
CN107291238B (en) * | 2017-06-29 | 2021-03-05 | 南京粤讯电子科技有限公司 | Data processing method and device |
CN107392152A (en) * | 2017-07-21 | 2017-11-24 | 青岛海信移动通信技术股份有限公司 | A kind of method and device for obtaining iris image |
CN107943527A (en) * | 2017-11-30 | 2018-04-20 | 西安科锐盛创新科技有限公司 | The method and its system of electronic equipment is automatically closed in sleep |
CN109213325B (en) * | 2018-09-12 | 2021-04-20 | 苏州佳世达光电有限公司 | Eye potential feature acquisition method and eye potential identification system |
CN110969084B (en) * | 2019-10-29 | 2021-03-05 | 深圳云天励飞技术有限公司 | Method and device for detecting attention area, readable storage medium and terminal equipment |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101893934A (en) * | 2010-06-25 | 2010-11-24 | 宇龙计算机通信科技(深圳)有限公司 | Method and device for intelligently adjusting screen display |
- 2012-04-01 CN CN201210097796.8A patent/CN102662470B/en not_active Expired - Fee Related
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101893934A (en) * | 2010-06-25 | 2010-11-24 | 宇龙计算机通信科技(深圳)有限公司 | Method and device for intelligently adjusting screen display |
Non-Patent Citations (3)
Title |
---|
加权方差投影在眼角定位中的应用;夏海英等;《中国图象图形学报》;20110228;第16卷(第2期);摘要 * |
基于肤色和模板匹配的的人眼定位;舒梅;《计算机工程与应用》;20090215(第2期);摘要,第2-3节 * |
视线跟踪系统研究与设计;高飞;《中国优秀硕士学位论文全文数据库 信息科技辑》;20070615;摘要,第1.2.2节,第3.1-3.2节 * |
Also Published As
Publication number | Publication date |
---|---|
CN102662470A (en) | 2012-09-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102662470B (en) | A kind of method and system realizing eye operation | |
CN101719015B (en) | Method for positioning finger tips of directed gestures | |
CN102081918B (en) | Video image display control method and video image display device | |
CN104809445B (en) | method for detecting fatigue driving based on eye and mouth state | |
CN102749991B (en) | A kind of contactless free space sight tracing being applicable to man-machine interaction | |
CN108549884A (en) | A kind of biopsy method and device | |
CN109558825A (en) | A kind of pupil center's localization method based on digital video image processing | |
CN104133548A (en) | Method and device for determining viewpoint area and controlling screen luminance | |
CN105739702A (en) | Multi-posture fingertip tracking method for natural man-machine interaction | |
CN102496002A (en) | Facial beauty evaluation method based on images | |
Tabrizi et al. | Open/closed eye analysis for drowsiness detection | |
CN102184016B (en) | Noncontact type mouse control method based on video sequence recognition | |
CN104123549A (en) | Eye positioning method for real-time monitoring of fatigue driving | |
Hammal et al. | Parametric models for facial features segmentation | |
CN110032932A (en) | A kind of human posture recognition method based on video processing and decision tree given threshold | |
CN109543518A (en) | A kind of human face precise recognition method based on integral projection | |
CN110287894A (en) | A kind of gesture identification method and system for ultra-wide angle video | |
Tian et al. | Real-time driver's eye state detection | |
Gu et al. | Hand gesture interface based on improved adaptive hand area detection and contour signature | |
CN104898971A (en) | Mouse pointer control method and system based on gaze tracking technology | |
CN102073878B (en) | Non-wearable finger pointing gesture visual identification method | |
CN102799855B (en) | Based on the hand positioning method of video flowing | |
Manaf et al. | Color recognition system with augmented reality concept and finger interaction: Case study for color blind aid system | |
CN102930259A (en) | Method for extracting eyebrow area | |
JPH07311833A (en) | Human face detecting device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20160210 Termination date: 20170401 |