CN102662470A - Method and system for implementation of eye operation - Google Patents
- Publication number: CN102662470A
- Authority: CN (China)
- Legal status: Granted
Abstract
The invention discloses a method and system for implementing eye-based operation. The distance between the center of a user's iris and a canthus (eye-corner) point of the user is determined, and the operation instruction corresponding to that distance is executed according to a preset correspondence between distances and operation instructions. With this method and system, eye-based operation is achieved without requiring the user to wear a helmet or similar device, so the user experience is good and the cost is low.
Description
Technical field
The present invention relates to the field of human-computer interaction, and in particular to a method and system for implementing eye-based operation.
Background technology
With the rapid development of computer technology and human-computer interaction technology, new interaction devices and interaction methods have gradually become a research focus in the field of human-computer interaction. Traditional computer input devices such as the keyboard and mouse share the characteristic that they must be operated by hand. For some users (for example, persons with limited limb mobility), traditional interaction devices are therefore limited in naturalness and friendliness. Developing interaction devices suited to different user groups has thus become a trend in the development of human-computer interaction technology.
The visual system plays an important role in human perception; research shows that most of the information people perceive about the outside world is obtained through vision. An interactive system built on the human visual system would therefore be both effective and natural.
Human-computer interaction based on eye-tracking technology is direct, friendly, and concise. It mainly determines the user's region of interest from the principles of eye movement, and related interactive systems are developed to control a computer or peripheral device. Current eye-tracking technology is contact-based: the user must wear a special helmet so that eye-movement information can be detected. Wearing such a helmet for long periods is a great inconvenience to the user, and contact-based equipment is expensive.
Summary of the invention
To solve the above technical problems, embodiments of the invention provide a method and system for implementing eye-based operation without the user having to wear a helmet. The technical scheme is as follows:
A method for implementing eye-based operation comprises:
determining a first position, at which the center of the user's iris is located, and a second position of a canthus point of the user;
calculating, from the first position and the second position, the distance between the iris center and the canthus point, this distance being a first distance;
executing, according to a preset correspondence between distances and operation instructions, a first operation instruction corresponding to the first distance.
Preferably, the step of determining the first position of the iris center and the second position of the canthus point comprises:
acquiring an image of the human body;
recognizing a face image within the body image;
recognizing an eye image within the face image;
determining, from the eye image, the first position of the iris center and the second position of the canthus point.
Preferably, the step of recognizing the face image within the body image comprises:
in the YIQ color space, recognizing the skin-color region of the body image according to the distribution range of the I-channel component of facial skin color, the skin-color region being framed with a rectangle and comprising the face image and a neck image;
keeping the upper short side of the rectangle fixed and shortening the rectangle into a square, the image within the square being recognized as the face image.
Preferably, the step of recognizing the face image within the body image comprises:
in the YUV color space, recognizing the skin-color region of the body image according to the distribution range of the phase angle θ of the facial skin-color hue, the skin-color region being framed with a rectangle and comprising the face image and a neck image;
keeping the upper short side of the rectangle fixed and shortening the rectangle into a square, the image within the square being recognized as the face image.
Preferably, the step of recognizing the eye image within the face image comprises:
dividing the face image framed by the square into equal upper and lower halves;
dividing the upper half into equal left and right halves, the left half containing the image of the subject's right eye region and the right half containing the image of the subject's left eye region;
locating the right eye region image in the left half and the left eye region image in the right half, using a genetic algorithm in each case.
Preferably, in the step of determining from the eye image the first position of the iris center and the second position of the canthus point, the method of determining the first position comprises:
determining at least three points on the iris edge from the color difference, in a binary image, between the iris and the sclera of the eyeball;
determining the first position of the iris center from the at least three iris-edge points.
Preferably, in the step of determining from the eye image the first position of the iris center and the second position of the canthus point, the method of determining the second position comprises:
estimating the second position from the first position of the iris center, generating an estimated position;
correcting the estimated position with the corner responsiveness function of an improved Harris corner detection algorithm, combined with a variance projection function, to determine the second position.
Preferably, the step of calculating, from the first position and the second position, the distance between the iris center and the canthus point (the first distance) comprises:
calculating, from the first position and the second position, a plurality of iris-center-to-canthus distances within a preset time period;
obtaining the mean of the plurality of distances, the mean being the first distance.
The invention also provides a system for implementing eye-based operation, comprising a position determination module, a distance determination module, and an instruction execution module, wherein:
the position determination module is configured to determine the first position of the user's iris center and the second position of the user's canthus point;
the distance determination module is configured to calculate, from the first position and the second position, the distance between the iris center and the canthus point, this distance being the first distance;
the instruction execution module is configured to execute, according to the preset correspondence between distances and operation instructions, the first operation instruction corresponding to the first distance.
Preferably, the position determination module comprises an image acquisition device, a face recognition module, an eye recognition module, an iris center recognition module, and a canthus point recognition module, wherein:
the image acquisition device is configured to acquire the body image;
the face recognition module is configured to recognize the face image within the body image;
the eye recognition module is configured to recognize the eye image within the face image;
the iris center recognition module is configured to determine, from the eye image, the first position of the iris center;
the canthus point recognition module is configured to determine, from the eye image, the second position of the canthus point.
With the above technical scheme, the method and system provided by the invention determine the distance between the user's iris center and canthus point and execute, according to the preset correspondence between distances and operation instructions, the operation instruction corresponding to that distance. Because eye-based operation is achieved without the user wearing a helmet or similar device, the user experience is good and the cost is low.
Description of drawings
To explain the technical schemes of the embodiments of the invention or of the prior art more clearly, the drawings needed in their description are introduced briefly below. The drawings described below are evidently only some embodiments of the invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a method for implementing eye-based operation provided by an embodiment of the invention;
Fig. 2 is a schematic flowchart of another method for implementing eye-based operation provided by an embodiment of the invention;
Fig. 3 is a schematic diagram of the image division in a method for implementing eye-based operation provided by an embodiment of the invention;
Figs. 4 to 8 are schematic flowcharts of further methods for implementing eye-based operation provided by embodiments of the invention;
Fig. 9 is a schematic structural diagram of a system for implementing eye-based operation provided by an embodiment of the invention;
Figs. 10 to 16 are schematic structural diagrams of further systems for implementing eye-based operation provided by embodiments of the invention.
Embodiments
To enable those skilled in the art to better understand the technical schemes of the invention, the schemes in the embodiments of the invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are evidently only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art from these embodiments without creative effort fall within the scope of protection of the invention.
As shown in Fig. 1, a method for implementing eye-based operation provided by an embodiment of the invention may comprise steps S1, S2, and S3:
S1: determine the first position, at which the center of the user's iris is located, and the second position of a canthus point of the user.
It will be understood that the iris belongs to the middle layer of the eyeball, lies at the front of the vascular tunic, and is comparatively dark. Iris color varies between individuals: in European and American countries it may be blue or brown, while in Asian countries such as China it is black or grey. The iris, in front of the ciliary body, automatically adjusts the size of the pupil and thereby regulates how much light enters the eye; the pupil lies at the center of the iris. The sclera is the white outer part of the eyeball, i.e. the white of the eye.
Step S1 may comprise steps S11 to S14:
S11: acquire a body image.
Specifically, a device such as a camera may be used to acquire the body image. Preferably the camera is a monocular camera whose position and shooting angle are adjustable, making acquisition of the body image easier.
S12: recognize the face image within the body image.
Specifically, step S12 may comprise steps S121 and S122:
S121: in the YIQ color space, recognize the skin-color region of the body image according to the distribution range of the I-channel component of facial skin color; the skin-color region is framed with a rectangle and comprises the face image and a neck image.
It will be understood that the skin color of the neck is very close to that of the face, while below the neck there is usually clothing, whose color differs far more from the skin color. Framing with a rectangle therefore captures the face and neck images well.
Those skilled in the art will appreciate that the YIQ color space is commonly used by North American television systems and belongs to the NTSC (National Television Standards Committee) standard. Here Y denotes not yellow but the luminance (brightness) of the color; in fact Y is the gray value of the image. I and Q carry the chrominance, i.e. the hue and saturation attributes of the image. In the YIQ system the Y component carries the luminance information while the I and Q components carry the color information: I represents the color change from orange to cyan, and Q the change from purple to yellow-green.
In the YIQ color space, facial skin color clusters compactly on the I channel. Statistical analysis of the I-channel data of images containing facial skin pixels shows that the skin-color information is mainly distributed between 40 and 100.
Because the color characteristics of the human lips are also distinctive, a preferred embodiment of the invention further uses the lip color to confirm the skin-color region. In the YIQ color space the chrominance of the lips is distributed within a certain range; statistical analysis of the channel components of lip pixels in image samples containing only the lip region gives optimal segmentation thresholds of Y ∈ [90, 210], I ∈ [20, 80], and Q ∈ [10, 28]. The distribution ranges of the skin-color I component and the lip-color components in the YIQ color space serve as one criterion of the face-region segmentation algorithm.
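By way of illustration (not part of the original disclosure), the YIQ threshold tests above can be sketched as follows. The function name and the RGB→YIQ conversion matrix are assumptions — the patent states only the threshold bands, so the standard NTSC conversion matrix is used here:

```python
import numpy as np

# Standard NTSC RGB -> YIQ conversion matrix (an assumption; the
# patent gives only the threshold bands, not the conversion).
RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],
    [0.596, -0.274, -0.322],
    [0.211, -0.523,  0.312],
])

def yiq_skin_and_lip_masks(rgb):
    """Boolean skin and lip masks using the thresholds in the text:
    skin: I in [40, 100]; lips: Y in [90, 210], I in [20, 80],
    Q in [10, 28]. rgb is an H x W x 3 array, channels in 0..255."""
    yiq = rgb.astype(float) @ RGB_TO_YIQ.T
    y, i, q = yiq[..., 0], yiq[..., 1], yiq[..., 2]
    skin = (i >= 40) & (i <= 100)
    lips = ((y >= 90) & (y <= 210) &
            (i >= 20) & (i <= 80) &
            (q >= 10) & (q <= 28))
    return skin, lips
```

A skin-toned pixel such as RGB (220, 150, 120) has I ≈ 51 and falls inside the skin band, while a saturated blue pixel falls outside both bands.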
Alternatively, step S121 may be: in the YUV color space, recognize the skin-color region of the body image according to the distribution range of the phase angle θ of the facial skin-color hue; the skin-color region is framed with a rectangle and comprises the face image and a neck image.
Those skilled in the art will appreciate that YUV is the color coding method used by European television systems and is the color space of the PAL and SECAM analog color television standards. In a modern color television system, a three-tube color camera or a color CCD camera captures the scene; the color signal obtained is color-separated and amplitude-corrected to yield RGB; a matrix circuit then derives the luminance signal Y and the two color-difference signals B−Y (i.e. U) and R−Y (i.e. V); finally the transmitter encodes the luminance and the two color-difference signals and sends them on the same channel. This representation is the so-called YUV color space; its importance lies in separating the luminance signal Y from the chrominance signals U and V.
In the YUV color space the luminance signal (Y) and the chrominance signals (U, V) are independent; saturation is determined by the modulus Ch and hue by the phase angle θ. The phase angle of the facial skin-color hue is distributed between 110 and 155, and that of the lip-color hue between 80 and 100. The distribution ranges of the skin-color and lip-color phase angles in the YUV color space serve as another criterion of the face-region segmentation algorithm and are combined with the segmentation thresholds in the YIQ color space for face detection. Skin-color segmentation is carried out with the above threshold ranges; morphological opening and closing operations then remove isolated small patches from the image; finally region growing rectifies the face region, which is marked with a box.
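An illustrative sketch of the phase-angle test (not from the patent). It assumes the analog definitions Y = 0.299R + 0.587G + 0.114B, U = B − Y, V = R − Y, with θ = atan2(V, U) in degrees; the patent states only the threshold bands:

```python
import numpy as np

def hue_phase_angle(rgb):
    """Per-pixel hue phase angle theta in degrees, in [0, 360).

    Assumes Y = 0.299R + 0.587G + 0.114B, U = B - Y, V = R - Y,
    theta = atan2(V, U); these formulas are an assumption.
    """
    r, g, b = (rgb[..., k].astype(float) for k in range(3))
    y = 0.299 * r + 0.587 * g + 0.114 * b
    theta = np.degrees(np.arctan2(r - y, b - y))
    return theta % 360.0

def skin_mask_yuv(rgb):
    """Skin test from the text: theta between 110 and 155 degrees."""
    theta = hue_phase_angle(rgb)
    return (theta >= 110) & (theta <= 155)
```

Under these assumptions, a skin-toned pixel such as RGB (220, 150, 120) yields θ ≈ 132°, inside the stated skin band.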
S122: keep the upper short side of the rectangle fixed and shorten the rectangle into a square; the image within the square is recognized as the face image.
It will be understood that the bottom of the box marked after skin-color segmentation generally contains the neck, while the top of the box is the upper boundary of the face region. The uppermost edge of the box is therefore kept fixed and the lower edge is raised: the width stays constant and the height is reduced until it equals the width.
S13: recognize the eye image within the face image.
Step S13 may comprise:
S131: divide the face image framed by the square into equal upper and lower halves.
It will be understood that the eye region is contained in the upper half of the face region; the box is therefore first divided from the middle into an upper half and a lower half of equal length and width.
S132: divide the upper half into equal left and right halves, the left half containing the image of the subject's right eye region and the right half containing the image of the subject's left eye region.
Because the eyes lie in the upper half, the upper half is divided from the middle into left and right halves of equal length and width; the features of the subject's left eye appear in the right half.
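The division in S131/S132 amounts to simple array slicing; the sketch below (illustrative only, names assumed) shows it for a square face patch:

```python
import numpy as np

def split_eye_search_regions(face):
    """Divide a square face patch as described in S131/S132.

    The upper half of the face holds the eyes; halving it
    left/right gives one search window per eye. Because the camera
    image mirrors the subject, the left window holds the subject's
    right eye and the right window the subject's left eye.
    """
    h, w = face.shape[:2]
    top = face[: h // 2]                  # S131: keep the upper half
    right_eye_win = top[:, : w // 2]      # S132: left half of the top
    left_eye_win = top[:, w // 2 :]       # S132: right half of the top
    return right_eye_win, left_eye_win
```

Each returned window has half the height and half the width of the face patch, matching the equal-halves division described above.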
S133: locate the right eye region image in the left half and the left eye region image in the right half, using a genetic algorithm in each case.
A genetic algorithm searches for the optimal solution within the right half to locate the subject's left eye region; the right eye region is located in the same way.
S14: determine, from the eye image, the first position of the iris center and the second position of the canthus point.
The method of determining the first position in step S14 may comprise:
S141: determine at least three points on the iris edge from the color difference, in a binary image, between the iris and the sclera of the eyeball.
S142: determine the first position of the iris center from the at least three iris-edge points.
The shape of the human iris can be approximated as a circle, and by the geometric properties of a circle, knowing three points on its circumference suffices to determine its center.
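The three-point construction in S141/S142 is the classical circumcenter computation; a minimal sketch (illustrative, not from the patent) solving the perpendicular-bisector equations:

```python
def circle_center(p1, p2, p3):
    """Center of the circle through three non-collinear points.

    Used here to recover the iris center from three iris-edge
    points, per steps S141/S142.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2.0 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-12:
        raise ValueError("points are collinear")
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    return ux, uy
```

For example, the three points (7, 3), (2, 8), (−3, 3) all lie on the circle of radius 5 centered at (2, 3), and the function returns (2.0, 3.0).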
The method of determining the second position in step S14 may comprise:
S143: estimate the second position from the first position of the iris center, generating an estimated position.
S144: correct the estimated position with the corner responsiveness function of an improved Harris corner detection algorithm, combined with a variance projection function, to determine the second position.
Specifically, the implementation of step S144 may comprise steps S144a to S144e.
S144a: the color image of the eye region found by the genetic algorithm is first converted into a gray-level image using the color-to-gray conversion formula; the local image of this small region is then processed as described below to locate the coordinates of the canthus point.
S144b: for each point of the eye-region gray image, compute the first derivatives in the horizontal and vertical directions, and multiply the horizontal and vertical derivative images together, as in formula (4.8):

I_x = ∂I/∂x, I_y = ∂I/∂y, I_x·I_y    (4.8)
S144c: filter the image obtained in the previous step with a Gaussian function.
The autocorrelation matrix of the Harris corner detection algorithm can be expressed as:

M = [A C; C B], where A = w ⊗ I_x², B = w ⊗ I_y², C = w ⊗ (I_x·I_y), and w is the Gaussian window.

The responsiveness function of the Harris corner detection algorithm is:

R_F = AB − C² − k(A + B)²
However, the value of k in the above formula must be judged from experience, and an unsuitable value degrades the whole detection result. To avoid this detection error, another form is introduced in its place, as shown below, where k′ is a small integer:

R_F = (AB − C²) / (A + B + k′)
Once the replacement responsiveness function is determined, the weighting factor function F of the variance projection function is also fixed; F is expressed as formula (4.13), where λ is a constant:

F = λ·w    (4.13)
S144d: the weighted projection functions in the horizontal and vertical directions are expressed as:

P_h(y) = Σ_x F(x, y)·I(x, y),  P_v(x) = Σ_y F(x, y)·I(x, y)

where I(x, y) is the gray value of the pixel at (x, y).
S144e: compute, as described above, the weighted projection functions of the eye corner in the horizontal and vertical directions to obtain the abscissa and the ordinate of the canthus point.
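An illustrative sketch of the k-free Harris responsiveness (not from the patent): it assumes the ratio form R = (AB − C²)/(A + B + k′), and a 3×3 box filter stands in for the Gaussian window:

```python
import numpy as np

def harris_response(gray, k_prime=1.0):
    """Harris corner responsiveness without the empirical k.

    A, B, C are smoothed products of the image derivatives (S144b);
    the ratio form with the small constant k' is an assumed stand-in
    for the patent's unspecified replacement function.
    """
    iy, ix = np.gradient(gray.astype(float))

    def smooth(img):
        # 3x3 box average; a Gaussian window would be used in S144c.
        pad = np.pad(img, 1, mode="edge")
        return sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    a, b, c = smooth(ix * ix), smooth(iy * iy), smooth(ix * iy)
    return (a * b - c * c) / (a + b + k_prime)
```

On a synthetic L-shaped corner the response peaks at the corner point, while flat regions and straight edges score (near) zero, which is what makes it usable for locating the canthus.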
S2: calculate, from the first position and the second position, the distance between the iris center and the canthus point; this distance is the first distance.
S21: from the first position and the second position, calculate a plurality of iris-center-to-canthus distances within a preset time period.
S22: obtain the mean of the plurality of distances; the mean is the first distance.
Because the position of the iris is unstable as the external environment changes, a mapping that relies solely on the iris-center-to-canthus distance of a single frame is less reliable. A preferred embodiment of the invention therefore takes the mean of a plurality of iris-center-to-canthus distances within a preset time period as the first distance, which greatly improves reliability. The practical meaning of this scheme is that gazing at a region (such as an area of the screen) for a certain length of time serves as the region-selection method: while the user gazes at a sub-region, the invention continuously processes the captured images, maps the iris-center-to-canthus distance to the gazed-at screen sub-region, and uses the mean distance over the period to determine the sub-region being watched. The preset time period may be 5 seconds: within this interval the gaze moves from one sub-region to another, the mapping between the iris-center-to-canthus distance and the sub-region can stabilize, and errors caused by involuntary blinking are also avoided.
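The averaging in S21/S22 can be sketched as follows (illustrative only; the class and method names are assumptions):

```python
class GazeDistanceAverager:
    """Average iris-center-to-canthus distances over a dwell window.

    A single frame is unreliable (the iris position shifts with the
    environment and with blinks), so every frame captured during the
    preset period, e.g. 5 s, contributes one distance, and the mean
    is used as the first distance (steps S21/S22).
    """
    def __init__(self):
        self._distances = []

    def add_frame(self, iris_center, canthus_point):
        dx = iris_center[0] - canthus_point[0]
        dy = iris_center[1] - canthus_point[1]
        self._distances.append((dx * dx + dy * dy) ** 0.5)

    def first_distance(self):
        if not self._distances:
            raise ValueError("no frames in the window")
        return sum(self._distances) / len(self._distances)
```

Two frames at Euclidean distances 5 and 10, for example, yield a first distance of 7.5.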
S3: execute, according to the preset correspondence between distances and operation instructions, the first operation instruction corresponding to the first distance.
Specifically, content can first be displayed via a screen, a wall, a blackboard, or the like to guide the user to gaze at part of it. For example, a display shows two items: on the left a water cup, on the right a toilet. When the user gazes at the cup on the left, an instruction related to the cup is executed, such as informing the nurse that the patient needs to drink water. When the user gazes at the toilet on the right, an instruction related to the toilet is executed, such as informing the nurse that the patient needs the toilet. Many other concrete embodiments are of course possible and well known to those skilled in the art; they are not repeated here.
The first operation instruction may be an operation instruction such as reporting discomfort, requesting a change of dressing, or requesting water. The functions realized by operation instructions can be varied, and the invention places no limitation on them; likewise, the correspondence between distances and operation instructions can take many forms, which the invention also does not limit.
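The preset correspondence of step S3 can be represented as a simple band lookup. The sketch below is illustrative only — the band values and instruction strings are assumptions, not taken from the patent:

```python
def make_instruction_lookup(bands):
    """Build a distance -> operation-instruction lookup.

    bands is the preset correspondence: (low, high, instruction)
    tuples over half-open ranges [low, high).
    """
    def lookup(first_distance):
        for low, high, instruction in bands:
            if low <= first_distance < high:
                return instruction
        return None
    return lookup

# Example preset for the two-region drink/toilet display above.
lookup = make_instruction_lookup([
    (0.0, 12.0, "inform nurse: patient needs to drink water"),
    (12.0, 25.0, "inform nurse: patient needs the toilet"),
])
```

A first distance of 5 then selects the drink instruction, 15 the toilet instruction, and a distance outside all bands selects nothing.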
The method for implementing eye-based operation provided by the invention determines the distance between the user's iris center and canthus point and executes, according to the preset correspondence between distances and operation instructions, the operation instruction corresponding to that distance. Because eye-based operation is achieved without the user wearing a helmet or similar device, the user experience is good and the cost is low. And since eye-based operation removes any need to operate with the limbs, it suits a wider range of users.
Corresponding to the method embodiments above, the invention also provides a system for implementing eye-based operation.
As shown in Fig. 9, a system for implementing eye-based operation provided by an embodiment of the invention comprises a position determination module 100, a distance determination module 200, and an instruction execution module 300.
As shown in Fig. 10, the position determination module 100 may comprise an image acquisition device 110, a face recognition module 120, an eye recognition module 130, an iris center recognition module 140, and a canthus point recognition module 150.
Specifically, the image acquisition device 110 may be a device such as a camera. Preferably the camera is a monocular camera whose position and shooting angle are adjustable, making acquisition of the body image easier.
The face recognition module 120 is configured to recognize the face image within the body image;
the eye recognition module 130 is configured to recognize the eye image within the face image;
the iris center recognition module 140 is configured to determine, from the eye image, the first position of the iris center;
the canthus point recognition module 150 is configured to determine, from the eye image, the second position of the canthus point.
As shown in Fig. 11, the face recognition module 120 may comprise a first skin-color recognition module 121 and a face framing module 122.
The first skin-color recognition module 121 is configured to recognize, in the YIQ color space, the skin-color region of the body image according to the distribution range of the I-channel component of facial skin color; the skin-color region is framed with a rectangle and comprises the face image and a neck image.
It is understandable that the colour of skin of the colour of skin of neck and people's face is very close, and the neck below is generally clothes, bigger with the colour-difference distance of the colour of skin.Using rectangle to carry out the frame choosing can well live facial image and the choosing of neck frames images.
It will be appreciated by persons skilled in the art that the YIQ chrominance space is adopted by the television system of North America usually, belong to NT SC (National Television Standards Committee) system.Here Y is not meant yellow, and is meant the legibility (Luminance) of color, i.e. brightness (Brightness).Y is exactly the gray-scale value (Gray value) of image in fact, and I and Q then are meant tone (Chrominance), promptly describe the attribute of image color and saturation degree.In the YIQ system, the monochrome information of Y component representative image, I, two components of Q then carry colouring information, the change color of I component representative from orange to cyan, Q component is then represented from purple to yellowish green change color.
In the YIQ color space, facial skin color clusters compactly on the chrominance I channel. Statistical analysis of the I-channel component of images containing facial skin-color pixels shows that the skin-color information is mainly distributed between 40 and 100.
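As a hedged illustration (not code from the patent), the RGB-to-YIQ conversion and the I-channel skin test described above can be sketched as follows; the NTSC conversion coefficients and the application of the [40, 100] range to 0-255 RGB input are assumptions based on the stated distribution:

```python
def rgb_to_yiq(r, g, b):
    """Convert one RGB pixel (0-255 per channel) to NTSC YIQ components."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q

def is_skin_yiq(r, g, b, i_low=40.0, i_high=100.0):
    """Skin test: the I-channel component falls in the stated [40, 100] range."""
    _, i, _ = rgb_to_yiq(r, g, b)
    return i_low <= i <= i_high
```

Applying this per-pixel test to the whole frame yields the binary skin mask that is then framed with the rectangle.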
Because the color characteristics of human lips are also distinctive, a preferred embodiment of the invention can further use the lip-color feature to confirm the skin-color image. The chrominance of the lips is distributed within a certain range in the YIQ color space: statistical analysis of each channel component of lip-color pixels in image samples containing only the lip region gives optimal segmentation thresholds of Y ∈ [90, 210], I ∈ [20, 80], and Q ∈ [10, 28]. The distribution ranges of the skin-color I-channel component and of the lip-color channel components in the YIQ color space are taken as one criterion of the face-region segmentation algorithm.
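The lip-color criterion above can be sketched in the same hedged style (the NTSC coefficients and 0-255 input scale are assumptions, as before):

```python
def is_lip_yiq(r, g, b):
    """Lip test using the thresholds stated above:
    Y in [90, 210], I in [20, 80], Q in [10, 28]."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return 90 <= y <= 210 and 20 <= i <= 80 and 10 <= q <= 28
```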
The face-frame modeling module 122 is used to keep the top short side of the rectangle fixed and shorten the rectangle into a square; the image within the square is identified as the face image.
As shown in Figure 12, in other embodiments of the invention the face recognition module 120 may comprise: a second skin-color identification module 123 and the face-frame modeling module 122.
The second skin-color identification module 123 is used to identify, in the YUV color space, the human skin-color image within the human-body image according to the distribution range of the phase angle θ of the facial skin-color hue; the skin-color image is framed with a rectangle and comprises the face image and the neck image.
Those skilled in the art will appreciate that YUV is a color encoding method adopted by European television systems (belonging to PAL) and is the color space used by the PAL and SECAM analog color television standards. In modern color television systems, a three-tube color camera or a color CCD camera is usually used to capture the image; the color image signal obtained is color-separated and amplitude-corrected to obtain RGB, which is then passed through a matrix conversion circuit to obtain the luminance signal Y and the two color-difference signals B−Y (i.e. U) and R−Y (i.e. V). Finally, the transmitter encodes the luminance signal and the two color-difference signals separately and sends them on the same channel. This method of representing color is the so-called YUV color space. The importance of the YUV color space is that its luminance signal Y is separated from the chrominance signals U and V.
In the YUV color space, the luminance signal (Y) and the chrominance signals (U, V) are independent; saturation is determined by the modulus Ch, and hue is represented by the phase angle θ. The phase angle θ of the facial skin-color hue is distributed between 110 and 155, and the phase angle θ of the lip-color hue is distributed between 80 and 100. The distribution ranges of the skin-color and lip-color phase angles θ in the YUV color space are taken as another criterion of the face-region segmentation algorithm, used for face detection in combination with the segmentation thresholds in the YIQ color space. Skin-color segmentation is performed according to the above threshold ranges; morphological opening and closing operations then remove small isolated blocks in the image; finally, a region-growing method rectifies the face region, and the face region is marked with a box.
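A hedged sketch of the phase-angle test follows. The exact U/V convention and angle reference are not specified by the patent, so the definitions Y = 0.299R + 0.587G + 0.114B, U = B − Y, V = R − Y, and θ = atan2(V, U) mapped to [0, 360) are assumptions for illustration:

```python
import math

def yuv_phase_angle(r, g, b):
    """Hue phase angle (degrees) in the YUV space.
    Assumed convention: U = B - Y, V = R - Y, theta = atan2(V, U) in [0, 360)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = b - y
    v = r - y
    return math.degrees(math.atan2(v, u)) % 360.0

def is_skin_yuv(r, g, b):
    """Skin test: phase angle within the stated [110, 155] degree range."""
    return 110.0 <= yuv_phase_angle(r, g, b) <= 155.0
```

Under this convention a typical skin tone such as RGB (230, 160, 120) lands near 136 degrees, inside the stated skin range.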
The face-frame modeling module 122 is used to keep the top short side of the rectangle fixed and shorten the rectangle into a square; the image within the square is identified as the face image.
It will be understood that the skin-color region at the bottom of the marked box after skin-color segmentation generally contains the neck, while the top of the box is the upper boundary of the face region. The position of the top edge of the box is therefore kept unchanged while the bottom edge is raised; the width remains constant, and the height is reduced until it equals the width.
As shown in Figure 13, the human-eye identification module 130 may comprise: a first division module 131, a second division module 132, and a human-eye locating module 133.
The first division module 131 is used to divide the face image framed by the square into an equal upper half and lower half;
It will be understood that the eye region is contained in the upper half of the face region; the box is therefore first divided from the middle into an upper-half region and a lower-half region of equal length and width.
Because the eyes are present in the upper-half region, the second division module 132 divides the upper-half region from the middle into a left-half region and a right-half region of equal length and width; the feature information of the left eye appears in the right-half region.
The human-eye locating module 133 is used to locate the right-eye region image in the left-half image using a genetic algorithm, and to locate the left-eye region image in the right-half image using a genetic algorithm.
The left-eye region is located by using a genetic algorithm to search the right-half region for the optimal solution; the right-eye region is located in the same way.
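The quartering of the face box described above can be sketched as follows (a minimal illustration; the (x, y, width, height) box convention is an assumption, and the genetic-algorithm search itself is omitted). Note that the left half of the image contains the subject's right eye and vice versa:

```python
def eye_search_regions(x, y, side):
    """Split a square face box (top-left corner (x, y), side length `side`)
    into the two upper quarter boxes searched for the eyes.
    Returns ((right-eye box), (left-eye box)) as (x, y, w, h) tuples."""
    half = side // 2
    right_eye_box = (x, y, half, half)        # upper-left quarter of the image
    left_eye_box = (x + half, y, half, half)  # upper-right quarter of the image
    return right_eye_box, left_eye_box
```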
Wherein, as shown in Figure 14, the iris-center identification module 140 may comprise: an edge-point determination module 141 and a center determination module 142.
The edge-point determination module 141 is used to determine at least three points on the iris edge according to the color difference between the iris and the sclera of the eyeball in a binary image;
The center determination module 142 exploits the fact that the shape of the human iris can be approximated as a circle: by the geometric properties of a circle, the center can be determined once three points on the circumference are known.
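The three-point center computation mentioned above is the standard circumcenter construction; a minimal sketch (not the patent's own code) is:

```python
def circle_center(p1, p2, p3):
    """Center of the circle passing through three non-collinear points,
    via the standard circumcenter determinant formula."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2.0 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-12:
        raise ValueError("the three points are collinear")
    s1 = x1 * x1 + y1 * y1
    s2 = x2 * x2 + y2 * y2
    s3 = x3 * x3 + y3 * y3
    ux = (s1 * (y2 - y3) + s2 * (y3 - y1) + s3 * (y1 - y2)) / d
    uy = (s1 * (x3 - x2) + s2 * (x1 - x3) + s3 * (x2 - x1)) / d
    return ux, uy
```

For example, three points on a circle centered at (5, 3) — (6, 3), (5, 4), (4, 3) — recover that center.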
Wherein, as shown in Figure 15, the canthus-point identification module 150 may comprise: an estimation module 151 and a correction module 152.
The estimation module 151 is used to estimate the second position using the first position at which the iris center is located, generating an estimated position;
The correction module 152 is used to correct the estimated position using the corner response function of an improved Harris corner detection algorithm combined with the variance projection function, thereby determining the second position.
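The variance projection function used in the correction can be sketched as follows (a minimal grayscale version; the improved Harris corner response itself is omitted). Peaks in the projections mark rows and columns of strong intensity variation, which helps refine the canthus location:

```python
def variance_projections(gray):
    """Horizontal and vertical variance projection functions of a
    grayscale region given as a list of equal-length rows."""
    h, w = len(gray), len(gray[0])
    # horizontal VPF: variance of the pixels in each row
    vpf_h = []
    for row in gray:
        m = sum(row) / w
        vpf_h.append(sum((v - m) ** 2 for v in row) / w)
    # vertical VPF: variance of the pixels in each column
    vpf_v = []
    for c in range(w):
        col = [gray[r][c] for r in range(h)]
        m = sum(col) / h
        vpf_v.append(sum((v - m) ** 2 for v in col) / h)
    return vpf_h, vpf_v
```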
The distance determination module 200 is used to calculate the distance between the center of the user's iris and the user's canthus point according to the first position and the second position; this distance is the first distance.
Wherein, as shown in Figure 16, the distance determination module 200 may comprise: a distance acquisition module 210 and an averaging module 220.
The distance acquisition module 210 is used to calculate, according to the first position and the second position, a plurality of distances between the iris center and the canthus point within a preset time period; the averaging module 220 is used to obtain the mean value of the plurality of distances, the mean value being the first distance.
Because the position of the iris varies with the external environment, relying solely on the distance between the iris center and the canthus point located in a single frame gives low reliability. The preferred embodiment of the invention therefore takes the mean of the plurality of distances between the iris center and the canthus point measured within the preset time period as the first distance, which substantially improves reliability. The practical significance of this scheme is as follows: gazing at a region (such as a certain area of the screen) for some length of time is treated as the method of region selection. While the eye fixates on a screen sub-region during this period, the invention continuously processes the captured images, maps the iris-center-to-canthus distance to the gazed screen sub-region, and takes the mean distance within the period to determine which sub-region of the computer screen the eye is watching. The preset time period may be 5 seconds: since the gaze moves from one sub-region to another within this interval, the mapping between the iris-center-to-canthus distance and the sub-region stabilizes over the period, and errors caused by unconscious blinking are also avoided.
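The averaging-and-lookup scheme above can be sketched as follows (a hedged illustration; the (lo, hi, instruction) table shape is an assumption, not the patent's data structure):

```python
def first_distance(samples):
    """Mean of the iris-center-to-canthus distances collected within one
    preset time window; this mean is the 'first distance'."""
    return sum(samples) / len(samples)

def instruction_for_distance(d, mapping):
    """Look up the preset operation instruction whose distance range
    [lo, hi) contains d. `mapping` is a list of (lo, hi, instruction)."""
    for lo, hi, instruction in mapping:
        if lo <= d < hi:
            return instruction
    return None
```

A usage sketch: with `mapping = [(0, 10, "request water"), (10, 20, "request dressing change")]`, a window of samples averaging 13 selects the second instruction.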
Wherein, the first operation instruction may be an operation instruction such as requesting a dressing change, indicating physical discomfort, or requesting a drink of water.
The eye-operation system provided by the invention can determine the distance between the center of the user's iris and the user's canthus point and, according to the preset correspondence between distances and operation instructions, execute the operation instruction corresponding to that distance. Because the invention realizes eye operation without requiring the user to wear equipment such as a helmet, the user experience is good and the cost is low.
For convenience of description, the device above is described as being divided into various units by function. Of course, when implementing the invention, the functions of the units may be realized in one or more pieces of software and/or hardware.
From the description of the above embodiments, those skilled in the art can clearly understand that the invention can be realized by software plus a necessary general hardware platform. Based on this understanding, the technical scheme of the invention — or the part of it that contributes over the prior art — can be embodied in the form of a software product. This computer software product can be stored in a storage medium such as ROM/RAM, magnetic disk, or optical disc, and includes instructions for causing a computing device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the embodiments, or in certain parts of the embodiments, of the invention.
The embodiments in this specification are described progressively; identical or similar parts of the embodiments may be understood by reference to one another, and each embodiment focuses on its differences from the others. In particular, since the system embodiment is basically similar to the method embodiment, its description is relatively simple; for relevant parts, refer to the description of the method embodiment. The system embodiment described above is merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
The invention can be used in numerous general-purpose or special-purpose computing system environments or configurations, for example: personal computers, server computers, handheld or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments including any of the above systems or devices, and the like.
The invention can be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and so on that perform particular tasks or implement particular abstract data types. The invention can also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including memory storage devices.
It should be noted that in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply that any such actual relationship or order exists between these entities or operations.
The above is merely an embodiment of the invention. It should be pointed out that those skilled in the art can make improvements and refinements without departing from the principles of the invention, and such improvements and refinements should also be regarded as falling within the protection scope of the invention.
Claims (10)
1. A method for realizing eye operation, characterized in that it comprises:
determining a first position, at which the center of a user's iris is located, and a second position of the user's canthus point;
calculating the distance between the iris center and said canthus point according to said first position and said second position, said distance being a first distance;
executing, according to a preset correspondence between distances and operation instructions, a first operation instruction corresponding to said first distance.
2. The method according to claim 1, characterized in that the step of determining the first position at which the iris center is located and the second position of the canthus point comprises:
collecting a human-body image;
identifying the face image in said human-body image;
identifying the eye image in said face image;
determining, according to said eye image, the first position at which the iris center is located and the second position of the canthus point.
3. The method according to claim 2, characterized in that the step of identifying the face image in said human-body image comprises:
in the YIQ color space, identifying the human skin-color image in said human-body image according to the distribution range of the I-channel component of facial skin color, said skin-color image being framed with a rectangle and comprising a face image and a neck image;
keeping the top short side of said rectangle fixed and shortening the rectangle into a square, the image within said square being identified as the face image.
4. The method according to claim 2, characterized in that the step of identifying the face image in said human-body image comprises:
in the YUV color space, identifying the human skin-color image in said human-body image according to the distribution range of the phase angle θ of the facial skin-color hue, said skin-color image being framed with a rectangle and comprising a face image and a neck image;
keeping the top short side of said rectangle fixed and shortening the rectangle into a square, the image within said square being identified as the face image.
5. The method according to claim 3 or 4, characterized in that the step of identifying the eye image in said face image comprises:
dividing the face image framed by said square into an equal upper half and lower half;
dividing said upper half into an equal left half and right half, wherein the image of said left half contains a right-eye region image and the image of said right half contains a left-eye region image;
locating the right-eye region image in the image of said left half using a genetic algorithm, and locating the left-eye region image in the image of said right half using a genetic algorithm.
6. The method according to claim 5, characterized in that, in the step of determining the first position at which the iris center is located and the second position of the canthus point according to said eye image, the method of determining said first position comprises:
determining at least three points on the iris edge according to the color difference between the iris and the sclera of the eyeball in a binary image;
determining the first position at which said iris center is located according to said at least three points on the iris edge.
7. The method according to claim 5, characterized in that, in the step of determining the first position at which the iris center is located and the second position of the canthus point according to said eye image, the method of determining said second position comprises:
estimating said second position using the first position at which said iris center is located, generating an estimated position;
correcting said estimated position using the corner response function of an improved Harris corner detection algorithm combined with a variance projection function, thereby determining said second position.
8. The method according to claim 1, characterized in that the step of calculating the distance between the iris center and said canthus point according to said first position and said second position, said distance being the first distance, comprises:
calculating, according to said first position and said second position, a plurality of distances between the iris center and said canthus point within a preset time period;
obtaining the mean value of said plurality of distances, said mean value being the first distance.
9. A system for realizing eye operation, characterized in that it comprises: a position determination module, a distance determination module, and an instruction execution module,
said position determination module being used to determine a first position, at which the center of a user's iris is located, and a second position of the user's canthus point;
said distance determination module being used to calculate the distance between the iris center and said canthus point according to said first position and said second position, said distance being a first distance;
said instruction execution module being used to execute, according to a preset correspondence between distances and operation instructions, a first operation instruction corresponding to said first distance.
10. The system according to claim 9, characterized in that said position determination module comprises: an image collecting device, a face recognition module, a human-eye identification module, an iris-center identification module, and a canthus-point identification module,
said image collecting device being used to collect a human-body image;
said face recognition module being used to identify the face image in said human-body image;
said human-eye identification module being used to identify the eye image in said face image;
said iris-center identification module being used to determine, according to said eye image, the first position at which the iris center is located;
said canthus-point identification module being used to determine, according to said eye image, the second position of the canthus point.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210097796.8A CN102662470B (en) | 2012-04-01 | 2012-04-01 | A kind of method and system realizing eye operation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210097796.8A CN102662470B (en) | 2012-04-01 | 2012-04-01 | A kind of method and system realizing eye operation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102662470A true CN102662470A (en) | 2012-09-12 |
CN102662470B CN102662470B (en) | 2016-02-10 |
Family
ID=46771975
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210097796.8A Expired - Fee Related CN102662470B (en) | 2012-04-01 | 2012-04-01 | A kind of method and system realizing eye operation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102662470B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105487787A (en) * | 2015-12-09 | 2016-04-13 | 东莞酷派软件技术有限公司 | Terminal operation method and device based on iris recognition and terminal |
CN106990839A (en) * | 2017-03-21 | 2017-07-28 | 张文庆 | A kind of eyeball identification multimedia player and its implementation |
CN107291238A (en) * | 2017-06-29 | 2017-10-24 | 深圳天珑无线科技有限公司 | A kind of data processing method and device |
CN107392152A (en) * | 2017-07-21 | 2017-11-24 | 青岛海信移动通信技术股份有限公司 | A kind of method and device for obtaining iris image |
CN107943527A (en) * | 2017-11-30 | 2018-04-20 | 西安科锐盛创新科技有限公司 | The method and its system of electronic equipment is automatically closed in sleep |
CN109213325A (en) * | 2018-09-12 | 2019-01-15 | 苏州佳世达光电有限公司 | Eye gesture method for collecting characteristics and eye gesture identification system |
CN110969084A (en) * | 2019-10-29 | 2020-04-07 | 深圳云天励飞技术有限公司 | Method and device for detecting attention area, readable storage medium and terminal equipment |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101893934A (en) * | 2010-06-25 | 2010-11-24 | 宇龙计算机通信科技(深圳)有限公司 | Method and device for intelligently adjusting screen display |
2012-04-01 — CN201210097796.8A granted as patent CN102662470B (status: Expired - Fee Related)
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101893934A (en) * | 2010-06-25 | 2010-11-24 | 宇龙计算机通信科技(深圳)有限公司 | Method and device for intelligently adjusting screen display |
Non-Patent Citations (3)
Title |
---|
Xia Haiying et al., "Application of weighted variance projection in eye-corner location", Journal of Image and Graphics (中国图象图形学报), vol. 16, no. 2, 28 February 2011 (2011-02-28) *
Shu Mei, "Human-eye location based on skin color and template matching", Computer Engineering and Applications (计算机工程与应用), no. 2, 15 February 2009 (2009-02-15) *
Gao Fei, "Research and design of a gaze-tracking system", China Master's Theses Full-text Database (Information Science and Technology series), 15 June 2007 (2007-06-15), pages 138-596 *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105487787A (en) * | 2015-12-09 | 2016-04-13 | 东莞酷派软件技术有限公司 | Terminal operation method and device based on iris recognition and terminal |
CN106990839A (en) * | 2017-03-21 | 2017-07-28 | 张文庆 | A kind of eyeball identification multimedia player and its implementation |
CN107291238A (en) * | 2017-06-29 | 2017-10-24 | 深圳天珑无线科技有限公司 | A kind of data processing method and device |
CN107291238B (en) * | 2017-06-29 | 2021-03-05 | 南京粤讯电子科技有限公司 | Data processing method and device |
CN107392152A (en) * | 2017-07-21 | 2017-11-24 | 青岛海信移动通信技术股份有限公司 | A kind of method and device for obtaining iris image |
CN107943527A (en) * | 2017-11-30 | 2018-04-20 | 西安科锐盛创新科技有限公司 | The method and its system of electronic equipment is automatically closed in sleep |
CN109213325A (en) * | 2018-09-12 | 2019-01-15 | 苏州佳世达光电有限公司 | Eye gesture method for collecting characteristics and eye gesture identification system |
CN109213325B (en) * | 2018-09-12 | 2021-04-20 | 苏州佳世达光电有限公司 | Eye potential feature acquisition method and eye potential identification system |
CN110969084A (en) * | 2019-10-29 | 2020-04-07 | 深圳云天励飞技术有限公司 | Method and device for detecting attention area, readable storage medium and terminal equipment |
CN110969084B (en) * | 2019-10-29 | 2021-03-05 | 深圳云天励飞技术有限公司 | Method and device for detecting attention area, readable storage medium and terminal equipment |
Also Published As
Publication number | Publication date |
---|---|
CN102662470B (en) | 2016-02-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102662470B (en) | A kind of method and system realizing eye operation | |
CN106250867B (en) | A kind of implementation method of the skeleton tracking system based on depth data | |
CN101090482B (en) | Driver fatigue monitoring system and method based on image process and information mixing technology | |
CN105247539B (en) | Stare the method for tracking | |
CN102081918B (en) | Video image display control method and video image display device | |
WO2017084204A1 (en) | Method and system for tracking human body skeleton point in two-dimensional video stream | |
CN108549884A (en) | A kind of biopsy method and device | |
CN104133548A (en) | Method and device for determining viewpoint area and controlling screen luminance | |
CN104951773A (en) | Real-time face recognizing and monitoring system | |
CN101201695A (en) | Mouse system for extracting and tracing based on ocular movement characteristic | |
CN101719015A (en) | Method for positioning finger tips of directed gestures | |
CN109558825A (en) | A kind of pupil center's localization method based on digital video image processing | |
CN105095840B (en) | Multi-direction upper nystagmus method for extracting signal based on nystagmus image | |
CN102184016B (en) | Noncontact type mouse control method based on video sequence recognition | |
CN102496002A (en) | Facial beauty evaluation method based on images | |
CN102456137A (en) | Sight line tracking preprocessing method based on near-infrared reflection point characteristic | |
CN110032932A (en) | A kind of human posture recognition method based on video processing and decision tree given threshold | |
CN116704553B (en) | Human body characteristic identification auxiliary system based on computer vision technology | |
Everingham et al. | Head-mounted mobility aid for low vision using scene classification techniques | |
CN106297492A (en) | A kind of Educational toy external member and utilize color and the method for outline identification programming module | |
CN109543518A (en) | A kind of human face precise recognition method based on integral projection | |
CN115171024A (en) | Face multi-feature fusion fatigue detection method and system based on video sequence | |
Zhang et al. | Photometric stereo for three-dimensional leaf venation extraction | |
KR101879229B1 (en) | Online lecture monitoring system and method using gaze tracking technology | |
Harangi et al. | Automatic detection of the optic disc using majority voting in a collection of optic disc detectors |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20160210 Termination date: 20170401 |