
US20010046311A1 - System for identifying individuals - Google Patents


Info

Publication number
US20010046311A1
US20010046311A1, US09/918,835, US91883501A
Authority
US
United States
Prior art keywords
identification
image
individuals
data
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US09/918,835
Other versions
US6404903B2 (en)
Inventor
Kenji Okano
Yuji Kuno
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oki Electric Industry Co Ltd
Original Assignee
Oki Electric Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP9165117A (JPH10340342A)
Priority claimed from JP9177664A (JPH117535A)
Priority claimed from JP18004797A (JPH1125270A)
Application filed by Oki Electric Industry Co Ltd
Priority to US09/918,835
Publication of US20010046311A1
Application granted
Publication of US6404903B2
Anticipated expiration
Status: Expired - Fee Related

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/98: Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • G06V10/987: Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns, with the intervention of an operator

Definitions

  • the present invention relates to a system for identifying individuals by comparing an input image with a dictionary image previously stored.
  • a technology is well known by which an individual is identified from physical information of human beings and other animals (the face, iris and fingerprints, for example). To cite an example, this technology is disclosed in U.S. Pat. No. 5,291,560.
  • the present invention adopts the following arrangement.
  • a system for identifying individuals in Claim 1 comprises
  • an image memory means for storing an input image of an individual as an object to be identified
  • a dictionary image memory means for storing dictionary images as a basis on which to extract data on the features of the collation objects
  • an individual identification means for analyzing the input image held in the image memory means and comparing it with said data on the features stored in the dictionary to thereby identify the individual;
  • an identification result analysis means for analyzing the identification result by the individuals identifying means and detecting that area of the region used for identification in the input image which does not agree with the dictionary data;
  • a result output control means for, according to identification result by the individuals identifying means, issuing an analyze command to the identification result analysis means and, according to analysis result by the identification result analysis means, issuing a command to display the discordant portion to the identification result analysis means, and deciding whether or not to display the input image and the dictionary image;
  • an image output control means for, according to a result of decision by the result output control means, controlling display of the discordant portion, the input image and the dictionary image;
  • a display for displaying an image outputted by the image output control means.
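The Claim 1 arrangement above can be sketched as a minimal matcher. This is an illustrative sketch only: the patent does not specify a code format, so iris codes are modeled here as bit strings, identification as a normalized Hamming-distance comparison against stored dictionary codes, and the 0.32 threshold is an assumed value, not one taken from the claims.

```python
# Minimal sketch of the Claim 1 identification step (all names assumed).
# Iris codes are modeled as equal-length bit strings.

def hamming_distance(code_a: str, code_b: str) -> float:
    """Fraction of bit positions at which two equal-length codes differ."""
    assert len(code_a) == len(code_b)
    diff = sum(1 for a, b in zip(code_a, code_b) if a != b)
    return diff / len(code_a)

def identify(input_code, dictionary, threshold=0.32):
    """Return (name, distance) of the closest dictionary entry,
    or (None, distance) if no entry is close enough."""
    name, dist = min(
        ((n, hamming_distance(input_code, c)) for n, c in dictionary.items()),
        key=lambda pair: pair[1],
    )
    return (name, dist) if dist <= threshold else (None, dist)
```

For example, `identify("1010", {"alice": "1010", "bob": "0101"})` returns `("alice", 0.0)`.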
  • the input image and the dictionary image are images of the iris of the human eye, and the individuals identifying means conducts identification by comparing iris codes.
  • the individuals identifying means conducts identification by comparing iris codes.
  • any arrangement other than that described above may be applied so long as it identifies an individual.
  • the identification result analysis means detects an area where the input image does not coincide with the dictionary image by comparing iris codes, for example.
  • the result output control means makes a decision to have at least the area where the discrepancy occurred displayed according to an analysis result of the identification result analysis means.
  • the result output control means makes a decision not to issue an analyze command to the identification result analysis means nor issue a display command to the image output control means.
  • if the input image agrees with the dictionary data to such an extent that the identification result of the individuals identifying means is larger than the predetermined threshold value, the person is identified as the correct person.
  • the system does not display analysis of the identification result nor display the input image. Therefore, unnecessary processing is not executed and a reduction of processing load can be expected.
  • the result output control means makes a decision not to issue an analyze command to the identification result analysis means nor issue a display command to the image output control means.
  • the invention of Claim 3 is that the analyze process and the image display process are not executed when the person is identified as a correct person or the person is identified as somebody else or the quality of the input image is judged to be inferior, and the analyze process and the image display process are executed only when the identity of the person is uncertain.
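The two-threshold gating described in Claims 2 and 3 can be summarized in a short decision function. The threshold values and the score convention (higher score means better agreement) are assumptions for illustration; only the uncertain band between the thresholds triggers analysis and image display.

```python
# Sketch of the Claims 2-3 gating (constants and names assumed).
T_ACCEPT = 0.75  # above this: identified as the correct person
T_REJECT = 0.40  # below this: somebody else, or an inferior image

def result_output_decision(score: float) -> str:
    if score >= T_ACCEPT:
        return "accept"            # no analysis, no display (Claim 2)
    if score < T_REJECT:
        return "reject"            # no analysis, no display (Claim 3)
    return "analyze_and_display"   # identity uncertain: run the analyzer
```

For instance, a score of 0.9 is accepted outright, while 0.5 falls in the uncertain band.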
  • the cases where the degree of agreement between the input image and the dictionary data is less than the second threshold value include a case of no agreement at all.
  • the identification result analysis means analyzes and judges the discordant portion between the input image and the dictionary image to be larger than the predetermined value
  • the result output control means judges that the input image, for the most part, differs from the dictionary image and makes a decision to cause the input image to be displayed.
  • the invention in Claim 4 is such that according to the result of the analyze process by the identification result analysis means, if the result output control means judges that the input image, for the most part, differs from the dictionary data, only the input image is displayed. In other words, in this case, the dictionary image is not displayed. Therefore, if the image is out of focus or blurred, the user can easily guess the cause of misjudgment by looking at the input image. Furthermore, even when an ill-intentioned person tries to use the system, security can be maintained.
  • a system for identifying individuals in Claim 5 comprises
  • an image memory means for storing an input image of an individual to be identified
  • a dictionary image memory means for storing dictionary images as a basis on which to extract data of the features of the collation objects
  • an individuals identifying means for analyzing the input image held in the image memory means and comparing the data on the features of the dictionary to thereby identify the individual;
  • an identification result analyzing means for analyzing the identification result by the individuals identifying means and detecting that area of the region used for identification in the input image which does not agree with the dictionary data;
  • a result output control means for making a decision not to display any image on the basis of a judgement that the input image, for the most part, differs from the dictionary data if the discordant portion is larger than a predetermined value in the analysis of the identification result analysis means.
  • the invention in Claim 5 is such that as the result of the analyze process by the identification result analysis means, if a decision is made that the input image, for the most part, differs from dictionary data, no image is displayed. In this case, for example, when an ill-intentioned person tries to access the system, security is preserved because no input image is displayed and it is least likely to be known how identification of an individual is carried out. This invention is suitable for cases where emphasis is placed on the preservation of security.
  • the result output control means makes a decision to cause both the input image and the discordant portion to be displayed if the analysis by the identification result analysis means judges that the input image partially differs from the dictionary image, that is, when the discordant portion is smaller than a predetermined value.
  • the invention in Claim 6 is such that both the discordant portion and the input image are displayed in cases of identification by the iris in which the input image is the correct person's but there is a partial difference between the input image and the dictionary data, as when the iris is hidden behind the eyelid or eyelashes or light is reflected by the eye to the camera.
  • the above images serve as an effective clue by which to guess the cause of misjudgment by the user. If an operator attends the system, he can make a final decision on the basis of the above images.
  • Claim 7 displays only the discordant portion, in contrast to Claim 6, which displays both the input image and the discordant portion. This offers the user an effective clue by which to guess the cause of the failed identification. If an operator attends the system, the displayed image serves as a basis on which he can make a final decision.
  • the system for identifying an individual is characterized in that the object to be identified is the iris in the eye.
  • the invention in Claim 8 is intended to identify individuals by collation of iris images.
  • the effect of this invention is that it is possible to guess a cause of misidentification that even the user finds hard to discover, such as the iris being hidden behind the eyelid or eyelashes or the reflection of light by the eye to the camera, which hampers identification even when an input image of the correct person is used.
  • a system for identifying individuals in Claim 9 comprises
  • a recognition dictionary for having stored in advance data on features of an object to be identified and additional information peculiar to the object obtained from the object to be identified;
  • an identification result decision means for, when having made a decision not to input additional information in a decision regarding whether or not to input additional information according to identification result by the individuals identifying means, outputting identification result by the individuals identifying means as a final result, or, when having made a decision to input additional information, issuing a command to obtain additional information and also a command to conduct a re-identification process, and making a decision to terminate the re-identification process according to a result of the re-identification process, and outputting a final identification result;
  • an additional information inputting means for obtaining additional information upon receiving a command to obtain it from the identification result decision means
  • a re-identification means for, on receiving a command to conduct a re-identification process from the identification result decision means, selecting the recognition dictionaries that contain all of the additional information acquired by the additional information inputting means, and outputting, as the result of re-identification, the dictionary whose data on the features is closest to that of the object under identification among the selected dictionaries.
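The Claim 9 re-identification step can be sketched as filtering the dictionaries by the additional information gathered so far and then picking the closest feature match. The field names, the single-number feature model, and the distance measure are all illustrative assumptions.

```python
# Sketch of the Claim 9 re-identification step (all names assumed).
# Each dictionary entry carries feature data and additional info
# such as sex or eye color.

def reidentify(input_features, dictionaries, additional_info):
    """Keep only dictionaries matching every item of additional
    information, then return the closest feature match (or None)."""
    candidates = [
        d for d in dictionaries
        if all(d["info"].get(k) == v for k, v in additional_info.items())
    ]
    if not candidates:
        return None
    return min(candidates, key=lambda d: abs(d["features"] - input_features))
```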
  • individuals as objects to be identified include human beings and animals, such as dogs, cats and horses; individuals of any other kind may also be identified.
  • among the methods for identifying individuals there is the iris identification process using the iris of the eye.
  • any other methods may be used.
  • additional information is about the external features of the object under identification, such as the distinction of sex, the color of the eyes, etc. Any other kind of information, such as audio information, may be used so long as it represents a feature that individuals commonly possess and that is peculiar to each individual.
  • the individuals identifying means conducts an identify process on an individual, the results of which are outputted to the identification result decision means.
  • the identification result decision means outputs this information as the final identification result.
  • otherwise, a re-identification command is issued together with a command to obtain additional information.
  • Additional information input means obtains additional information entered by the user or operator, and on the basis of this information, the re-identification means executes a re-identification process.
  • if, accordingly, the person could be identified, for example, the identification result decision means outputs a decision to terminate the re-identification process along with a final identification result.
  • a dictionary has a number of different items of additional information stored in advance, and the re-identification means conducts a re-identification process each time it receives one item of additional information obtained by additional information input means.
  • the invention in Claim 10 is such that the additional information comprises a plurality of items, such as the distinction of sex and the fur color, and the re-identification means conducts a re-identification process each time it receives one item of additional information. Therefore, the re-identification process can be carried out even before all items of information are supplied, and superfluous input operations need not be performed.
  • the identification result decision means makes a decision to terminate the re-identification process each time the re-identification means conducts a re-identification process.
  • the invention in Claim 11 is such that a decision on whether to terminate the re-identification process is made each time one item of additional information is input. Therefore, when the person has been identified from some item of additional information, no further re-identification process is necessary, so that the time spent on identification can be shortened.
  • the identification result decision means changes a criterion for re-identification at each execution of the re-identification process by the re-identification means.
  • the invention in Claim 12 has been made to enable the threshold value to be varied to ease the criterion, for example, at each re-identification process. Therefore, it becomes possible to enhance the probability of successful identification of an individual while suppressing a deterioration of identification accuracy.
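Claims 10 through 12 together describe an incremental loop: re-identify after each newly entered item of additional information, stop as soon as a match is found, and relax the acceptance criterion a little on every pass. A sketch under assumed names and constants:

```python
# Sketch combining Claims 10-12 (names and constants assumed).
# distance_to(d) gives the feature distance of dictionary d from
# the object under identification.

def incremental_reidentify(distance_to, dictionaries, info_stream,
                           threshold=0.30, relax_step=0.02):
    info = {}
    for key, value in info_stream:          # one item at a time (Claim 10)
        info[key] = value
        candidates = [d for d in dictionaries
                      if all(d["info"].get(k) == v for k, v in info.items())]
        best = min(candidates, key=distance_to, default=None)
        if best is not None and distance_to(best) <= threshold:
            return best                      # terminate early (Claim 11)
        threshold += relax_step              # ease the criterion (Claim 12)
    return None
```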
  • identification of an individual is by collation of iris images and additional information is about the external features of the individual.
  • the invention in Claim 13 is such that the process for identifying an individual is done by collation of iris images and the additional information concerns the external appearance of the individual under identification. Therefore, even if an obtained image of the eye is inferior in picture quality, identification may succeed. Because the additional information concerns external features, an operator who must confirm the entered information can do so easily and with fewer errors.
  • a system for identifying individuals in Claim 14 comprises
  • an image input unit for taking pictures of an individual as an object to be identified from different camera angles and inputting images of the object
  • an image analyzing means for analyzing the image and extracting the features of the external appearance of the object
  • an output unit for obtaining a name by which to identify an individual under identification according to the features.
  • the image analyzing means extracts the features by selecting a name of a feature corresponding to the image for each of the elements of external appearance out of the names of predetermined features allotted to variations of respective elements of the external appearance.
  • a data base of data representing the features of the external appearance of a plurality of individuals by the names of features, the data being associated with the names of the individuals when the data is accumulated, wherein the output unit is so arranged as to search the database, and obtain the names of individuals having the names of features corresponding to the names of features selected at the image analyzing means as the names of individuals under identification.
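The name-of-feature lookup described above can be sketched as a simple exact-match search over the database. The feature vocabulary and example entries below are hypothetical; the patent's embodiments use elements such as the coat color, the white patterns of the head, whirl locations, and white marks of the legs of horses.

```python
# Sketch of the Claim 14-16 lookup (database contents are hypothetical).
# Each individual is stored under feature *names* selected from a
# predetermined vocabulary, and the search returns every individual
# whose stored names match those extracted from the images.

DATABASE = {
    "Thunder": {"color": "chestnut", "head": "star",  "legs": "white sock"},
    "Blaze":   {"color": "bay",      "head": "blaze", "legs": "none"},
}

def search_by_features(extracted):
    """Return names of individuals whose feature names all match."""
    return [name for name, feats in DATABASE.items()
            if all(feats.get(k) == v for k, v in extracted.items())]
```

For example, `search_by_features({"head": "star"})` returns `["Thunder"]`.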
  • the individuals to be identified and the plurality of objects are animals, and wherein, the elements of the external appearance, which are used in the image analyzing means, the database and the output unit, are two or more items of the fur color, the white patterns of the head, the location of whirls, and white marks of the legs.
  • the image analyzing means extracts not less than two items of the location of a scar of an animal under identification and the condition of spots other than white marks, wherein the database is associated with the names of the plurality of animals, the location of scars or the condition of spots of each animal when data is accumulated, and wherein the output unit searches the database using the location of the scar or the condition of spots extracted from the image to thereby obtain the name of the animal to be identified.
  • a system for identifying individuals in Claim 19 comprises
  • a name input means for entering the name of an individual under identification
  • an image input unit for inputting images of an individual under identification by taking images of the object
  • an image analyzing means for analyzing the image and extracting the features of the external appearance of an individual under identification
  • a data base for accumulating data representing the features of the external appearance of individuals associated with the names of a plurality of individuals, and searching data corresponding to the names input through the name input means to thereby output the data;
  • a true/false decision unit for making a decision of whether an individual under identification is true or not by comparing the features found at said data base with the features extracted by said image analyzing means.
  • image analyzing means extracts the features by selecting a name of a feature corresponding to the image for each of the elements of external appearance out of the names of predetermined features allotted to variations of respective elements of the external appearance.
  • the data base is accumulated in such a way that items of data on the features of the external appearance of the plurality of individuals are associated with the names of the individuals, the items of data being represented by the names of the features, and the true/false decision unit makes a decision of whether an individual under identification is true or not by comparing names of the features found at the data base with names of the features extracted by the image analyzing means.
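The verification path of Claims 19 and 20 differs from the open search above: the claimed name retrieves the stored feature names, and the true/false decision compares them with the feature names extracted from the input images. All names in this sketch are assumptions.

```python
# Sketch of the Claim 19-20 true/false decision (names assumed).

def verify(claimed_name, extracted, database):
    """True if every extracted feature name matches the entry stored
    under the claimed name; False if any differs or the name is unknown."""
    stored = database.get(claimed_name)
    if stored is None:
        return False
    return all(stored.get(k) == v for k, v in extracted.items())
```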
  • the individuals under identification and the plurality of individuals are animals, and the elements of the external appearance, which are used in the image analyzing means, the database and the output unit, are two or more items of the fur color, the white patterns of the head, the location of whirls, and white marks of the legs.
  • the image analyzing means extracts not less than two items of the location of a scar and the condition of white marks of an animal under identification
  • the database associates the names of the plurality of animals with the condition of scars of each animal or the condition of spots when accumulating data
  • the true/false decision unit makes a decision of whether the animal under identification is true or not from the location of scars or the condition of spots extracted from the image.
  • the image input unit is formed by a plurality of cameras, disposed around the object under identification, for taking images of the object under identification.
  • the image input unit is formed by a camera movable around an individual under identification to take images of the object under identification.
  • FIG. 1 is a block diagram showing a first embodiment of the individual identification system of the present invention
  • FIG. 2 is a diagram for explaining an example of the recognition dictionary in the first embodiment of the present invention.
  • FIG. 3 is a diagram for explaining an example of the dictionary image memory means in the first embodiment of the individual identification system of the present invention
  • FIG. 4 is a diagram for explaining an example of division of the iris region in the first embodiment of the individual identification system of the present invention.
  • FIG. 5 is a diagram for explaining an example of calculation results of Hamming distances of different regions in the first embodiment of the individual identification system of the present invention
  • FIG. 6 is a diagram for explaining an example of the result of image display in the first embodiment of the individual identification system of the present invention.
  • FIG. 7 is a block diagram of a second embodiment of the individual identification system of the present invention.
  • FIG. 8 is a diagram for explaining the contents of the recognition dictionary in the second embodiment of the individual identification system of the present invention.
  • FIG. 9 is a diagram for explaining the contents of the identification result memory means in the second embodiment of the individual identification system of the present invention.
  • FIG. 10 is a diagram for explaining the re-identification in the second embodiment of the present invention.
  • FIG. 11 is a block diagram of the individual identification system according to a third embodiment of the present invention.
  • FIG. 12 is a diagram showing camera positions of the image input portion in FIG. 11;
  • FIG. 13 is a diagram showing a horse photographed by cameras 32 to 34 in FIG. 12;
  • FIG. 14 is a flowchart showing processes of color identification unit 21 in FIG. 11;
  • FIG. 15 is a diagram showing the locations where the color is evaluated
  • FIG. 16 is a diagram showing the names of the colors of horses
  • FIG. 17 is a flowchart showing processes of the head white marks identification unit 22 in FIG. 11;
  • FIG. 18 is a diagram showing the data extraction regions in FIG. 17;
  • FIG. 19 is a diagram showing the white patterns (Part 1) of the horse head
  • FIG. 20 is a diagram showing the white patterns (Part 2) of the horse head
  • FIG. 21 is a diagram showing the names of the white patterns of the horse head
  • FIG. 22 is a flowchart showing the processes of the whirls identification unit 23 in FIG. 11;
  • FIG. 23 is a diagram showing the locations where whirls are detected.
  • FIG. 24 is a diagram showing the names and locations of whirls of horses
  • FIG. 25 is a flowchart showing the processes of the leg white marks identification unit 24 in FIG. 11;
  • FIG. 26 is a diagram showing the data extraction regions in FIG. 25;
  • FIG. 27 is a diagram showing names of the white marks of the legs.
  • FIG. 28 is a diagram showing the horse names and the feature names accumulated in the database 31 in FIG. 11;
  • FIG. 29 is a block diagram of the individual identification system showing a fourth embodiment of the present invention.
  • FIG. 1 is a block diagram showing a first embodiment of the system for identifying individuals according to the present invention.
  • the system in FIG. 1 is a computer system and comprises image memory means 1, a recognition dictionary 2, dictionary image memory means 3, individuals identifying means 4, result output control means 5, identification result analysis means 6, image output control means 7, and display device 8.
  • description will be made of identification of an individual by collation of iris images.
  • the image memory means 1 is installed in an auxiliary memory device, such as a semiconductor memory or a disk, and performs a function to store input images of individuals as objects under identification.
  • the images are acquired through A/D conversion of images taken by the video camera, VTR, etc.
  • the images held in this image memory means 1 are used by the individuals identifying means 4 and the image output control means 7 .
  • the recognition dictionary 2 is installed in the auxiliary memory device, such as a semiconductor memory or a disk, and has data on features of objects under identification stored in advance.
  • FIG. 2 is an explanatory diagram of the recognition dictionary 2 .
  • the recognition dictionary 2 holds iris codes as data on features and user information (name, sex, etc.).
  • the dictionary image memory means 3 is mounted in the auxiliary memory device, such as a semiconductor memory or a disk, and holds the images that were input when the respective dictionaries were created and recorded in the recognition dictionary 2.
  • FIG. 3 is an explanatory diagram of the dictionary image memory means 3 .
  • the dictionary image memory means 3 holds images entered at the times of creating dictionaries of the recognition dictionary 2 , and the respective images correspond to the respective dictionaries stored in the recognition dictionary 2 on a one-to-one correspondence.
  • the individuals identifying means 4 analyzes an input image held in the image memory means 1 , and recognizes the iris by comparing it with feature data of the recognition dictionary 2 to thereby identify the individual. The results of the individuals identifying means 4 are utilized by the result output control means 5 .
  • the result output control means 5 issues an analyze command to the identification result analysis means 6 according to an identification result of the individuals identifying means 4 , and when having sent the analyze command, in response to an analysis result from the identification result analysis means 6 , makes a decision of whether or not to display the input image held in the image memory means 1 , the dictionary image stored in the dictionary image memory means 3 and an analysis result.
  • the result output control means 5 outputs a decision result to the image output control means 7.
  • the identification result analysis means 6 detects that area of the input image used for identification which does not agree with the dictionary image.
  • the image output control means 7 controls image display.
  • the display device 8 is formed by a CRT, a liquid crystal display, or the like, and displays the discordant portion.
  • the recognition dictionary 2 and the dictionary image memory means 3 each contain the users' iris codes and the images entered at the times of dictionary creation stored previously.
  • the method of generating iris codes in this case is a well-known one, and its description is omitted.
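Although iris-code generation is treated as well known and omitted, the comparison step the embodiment relies on, the Hamming distance between two binary codes, can be sketched with XOR and a bit count. Modeling the codes as Python integers is an assumption for illustration.

```python
# Hamming distance between two binary iris codes via XOR + popcount.
# Codes are modeled as n_bits-wide integers (an illustrative choice).

def iris_hamming(code_a: int, code_b: int, n_bits: int) -> float:
    """Normalized Hamming distance: fraction of differing bit positions."""
    return bin(code_a ^ code_b).count("1") / n_bits
```

For example, `iris_hamming(0b1010, 0b0110, 4)` returns `0.5` (two of four bits differ).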
  • the identification result of the individuals identifying means 4 is outputted to the result output control means 5 and the identification result analysis means 6 .
  • the operation of the result output control means 5 will be described.
  • the result output control means 5 makes a decision whether or not to analyze the identification result on the basis of the identification result of the individuals identifying means 4 .
  • the result output control means 5 uses the same threshold value that is used by the individuals identifying means 4 for identification of a person. If the minimum Hamming distance HD_MIN outputted by the individuals identifying means 4 is larger than the threshold value HD_TH, the result output control means 5 issues a command to the identification result analysis means 6 directing it to analyze the identification result. Analysis by the identification result analysis means 6 will be described later.
  • the result output control means 5 decides a result output mode according to the analysis result. For example, if the identification result analysis means 6 makes a decision that the iris codes differ for the most part, the result output control means 5 issues a command to the image memory means 1 directing it to display only the input image stored therein without displaying the dictionary image.
  • the user can thus tell whether the failure of identification is due to poor picture quality (blur, lack of focus, etc.), to the eyes being closed, and so on.
  • since the dictionary image is not displayed, security can be maintained even if a completely different person should use the system in bad faith.
  • if the identification result analysis means 6 decides that the iris codes differ partially, then since there is a possibility that the user is registered in the recognition dictionary 2, displaying the area where the iris codes differ will be helpful for conjecturing the cause of the failure to recognize a correct user. If an operator attends the system, then when the system fails to recognize a correct user, the operator can make a final decision according to the displayed image. Therefore, the result output control means 5 directs the image output control means 7 to output the ‘input image’, the ‘dictionary image’, and the ‘area where the iris codes disagree’ to the display device 8.
  • the identification result analysis means 6 detects the area in the input image used for identification which does not agree with the dictionary image on the basis of the identification result of the individuals identifying means 4 . Since identification of an individual by iris codes is performed in this embodiment, a decision is made whether or not codes differ in the whole region or whether or not some areas agree even though there are discordant portions. The identification result analysis means 6 also detects the locations of the coincident areas and the discordant portions.
  • FIG. 4 is an explanatory diagram of a case where the iris region is divided into 32 subdivisions.
  • the identification result analysis means 6 calculates, for the respective subdivisions, Hamming distances between the iris code of the dictionary finally selected by the individuals identifying means 4 and the iris code generated from the input image.
  • FIG. 5 shows an example of a calculation result of Hamming distances of the respective subdivisions.
  • the number N1 of subdivisions whose Hamming distances are larger than a predetermined threshold value TH1 is calculated.
  • if N1 is larger than a predetermined threshold value TH2, a decision is made that the iris codes differ on the whole.
  • if N1 is smaller than TH2, a decision is made that the iris codes differ partially.
  • the sub-region numbers at which the Hamming distances are larger than TH1 are outputted to the image output control means 7.
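The per-subdivision analysis can be sketched as follows. The 32 subdivisions and the 1-based sub-region numbering follow FIG. 4 and FIG. 6; the particular values of TH1 and TH2 and the bit-string code model are illustrative assumptions.

```python
# Sketch of the identification result analysis: split the iris codes
# into sub-regions, compute a per-region Hamming distance, collect the
# regions above TH1, and classify the mismatch as whole or partial.

TH1 = 0.40   # per-region distance above which a region is "discordant"
TH2 = 16     # discordant-region count above which codes "differ on the whole"

def analyze_regions(input_code: str, dict_code: str, n_regions: int = 32):
    size = len(input_code) // n_regions
    discordant = []
    for i in range(n_regions):
        a = input_code[i * size:(i + 1) * size]
        b = dict_code[i * size:(i + 1) * size]
        hd = sum(x != y for x, y in zip(a, b)) / size
        if hd > TH1:
            discordant.append(i + 1)   # 1-based region numbers, as in FIG. 6
    verdict = "whole" if len(discordant) > TH2 else "partial"
    return verdict, discordant
```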
  • the image output control means 7 displays images in response to a command from the result output control means 5 . More specifically, the image output control means 7 controls the display of input images held in the image memory means 1 and images that are stored in the dictionary image memory means 3 and that correspond to a dictionary selected finally by the individuals identifying means 4 . Furthermore, the image output control means 7 displays a detection result of discordant sub-divisions outputted from the identification result analysis means 6 .
  • FIG. 6 shows an example of an image display result.
  • the illustrated example is a case in which the sub-region numbers 1, 3, 13, 15, 17, 19, 27, 29 and 31 are judged to be discordant subdivisions when the iris region was divided as shown in FIG. 4 and analyzed.
  • the discordant sub-divisions are indicated by enclosing them with a solid line, but the mode of display is not limited to this method. Any mode of display may be used so long as the discordant sub-divisions can be brought to the user's attention.
  • a dictionary image is not displayed. (Therefore, the dictionary image memory means 3 is obviated.)
  • the result output control means 5 has heretofore used one threshold value in deciding whether or not to display images, but here uses a plurality of threshold values.
  • images held in the image memory means 1 and images stored in the dictionary image memory means 3 are used.
  • feature data such as iris codes
  • the pixels of multiple gray levels may be compared.
  • the input image, the dictionary image and the discordant portions are displayed to the user, and therefore the user can easily surmise the cause of faulty recognition.
  • if an operator attends the system, even when the system fails to identify a correct person, the operator can survey a displayed image and thereby make a final decision easily.
  • All operations of the first embodiment can be performed under control of a computer program, which performs the function of the individual identification system. Therefore, the individual identification system according to the present invention can be realized, for example, by recording the program on a recording medium, such as a floppy disc or a CD-ROM, and installing the program in a computer, by downloading the program from a network, or by another method of installing the program on a hard disk or the like.
  • a recording medium such as a floppy disc or a CD-ROM
  • additional information is used which includes features peculiar to objects under identification, such as the color of the eye, the length and the color of fur. Additional information is stored in a dictionary used for individual identification, along with the quantities of features used for identification (iris codes, for example).
  • FIG. 7 is a block diagram showing a second embodiment of the individual identification system according to the present invention.
  • the system shown in FIG. 7 comprises a recognition dictionary 10 , individuals identifying means 11 , identification result memory means 12 , identification result decision means 13 , additional information input means 14 , and re-identification means 15 .
  • in this second embodiment, description will be made of a case in which recognition of the iris of an animal under identification is performed.
  • the recognition dictionary 10 is mounted in an auxiliary memory device, such as a semiconductor memory or a disc, and holds iris codes as the feature quantities used for identification in the individuals identifying means 11 , together with additional information about external features peculiar to the object under identification.
  • FIG. 8 is a diagram for explaining the contents of the recognition dictionary 10 .
  • the recognition dictionary 10 holds iris codes and additional information about the distinction of sex, the color of the fur and the eye, and the tail.
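The contents of the recognition dictionary 10 described in FIG. 8 can be pictured as records pairing an iris code with the additional attributes. The field names and sample values below are illustrative assumptions, not the patent's actual data:

```python
# Illustrative layout of the recognition dictionary 10 (cf. FIG. 8).
# Field names and values are assumptions for the sake of the sketch.
recognition_dictionary = [
    {"id": 1, "iris_code": "0110...", "sex": "Female", "fur": "White",
     "eye": "Blue", "tail": "Long"},
    {"id": 2, "iris_code": "1011...", "sex": "Male", "fur": "Brown",
     "eye": "Brown", "tail": "Short"},
]

def matches(entry, additional):
    """True if a dictionary entry covers every piece of additional
    information supplied so far, e.g. {"sex": "Female"}."""
    return all(entry.get(k) == v for k, v in additional.items())
```

During re-identification, a predicate like `matches` would be used to narrow the dictionaries considered to those consistent with the additional information the operator has entered.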
  • the individuals identifying means 11 extracts feature data from the image of an individual under identification input from image input means, not shown, and compares the feature data with iris codes from the recognition dictionary 10 to thereby identify the individual, and outputs a result to the identification result memory means 12 and the identification result decision means 13 .
  • the identification result memory means 12 holds a result of comparison with dictionaries in the recognition dictionary 10 by the individuals identifying means 11 .
  • Hamming distances are obtained as results, so that Hamming distances of the respective dictionaries are stored in the result memory means 12 .
  • the identification result decision means 13 decides whether or not to perform re-identification according to an identification result of the individuals identifying means 11 . When the decision is not to input additional information, it outputs the identification result of the individuals identifying means 11 as the final result; when the decision is to input additional information, it issues a command to the additional information input means 14 directing it to acquire additional information, and issues another command to the re-identification means 15 directing it to perform a re-identification process. Furthermore, the identification result decision means 13 decides whether to terminate the re-identification process according to the inputted re-identification result, and also outputs a final identification result.
  • the additional information input means 14 urges the user or the operator to input additional information, and if additional information is supplied, outputs the additional information to the re-identification means 15 .
  • the additional information input means 14 is formed by a display device, such as a monitor, and input devices, such as a touch panel, a keyboard and a mouse.
  • the re-identification means 15 starts to run in response to a re-identification command from the identification result decision means 13 .
  • the re-identification means 15 selects those dictionaries (ID-Nos.) of the recognition dictionary 10 which cover all additional information input heretofore, reads the Hamming distances of the selected dictionaries as identification results from the identification result memory means 12 , selects the dictionary at the minimum Hamming distance, and outputs this value to the identification result decision means 13 .
  • the individuals identifying means 11 analyzes the image data to obtain feature quantities, compares them with the feature quantities registered in the recognition dictionary 10 , and thus identifies the individual.
  • a circumscribed circle of the pupil and a circumscribed circle of the iris are obtained from the image of the eye of the object under identification, the image being taken by a video camera. Then, polar coordinates are set with the two circles as references, and the iris region is divided into multiple subdivisions, which receive a filter process and a threshold value process and are outputted as codes of 0 and 1 (hereafter referred to as iris codes). The iris codes thus generated and the iris codes stored in the recognition dictionary 10 are compared to identify the individual.
  • a Hamming distance is calculated between the iris codes generated from the input image and the iris codes of the recognition dictionary 10 , and a dictionary which brings about the minimum Hamming distance is selected.
  • the Hamming distance at this time is designated as HDMIN.
  • HDMIN is smaller than a predetermined threshold value HDTH
  • the individual is judged to be a correct one.
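The identification step just described — selecting the dictionary at the minimum Hamming distance HDMIN and accepting it only below a threshold — may be sketched as follows. The threshold value and function names are illustrative assumptions:

```python
# Sketch of the baseline identification step (second embodiment).
# The threshold hdth stands in for HDTH; its value is a placeholder.

def hamming(a, b):
    """Fraction of differing bit positions between two iris codes."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def identify(input_code, dictionary, hdth=0.32):
    """dictionary: mapping of ID-No. -> stored iris code.
    Returns (accepted_id_or_None, HDMIN, all_distances); the per-dictionary
    distances correspond to what the identification result memory means 12
    would retain for later re-identification."""
    distances = {id_no: hamming(input_code, code)
                 for id_no, code in dictionary.items()}
    best_id = min(distances, key=distances.get)
    hdmin = distances[best_id]
    accepted = best_id if hdmin < hdth else None  # reject when HDMIN >= HDTH
    return accepted, hdmin, distances
```

Keeping `distances` for every dictionary, rather than only the winner, is what allows the later re-identification rounds to proceed without re-running the comparison.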
  • the minimum value HDMIN of Hamming distance during identification is used in the identification result decision means 13 .
  • the Hamming distance obtained by comparison with dictionaries of the recognition dictionary 10 is stored in the identification result memory means 12 .
  • FIG. 9 is a diagram for explaining the contents of the identification result memory means 12 .
  • a Hamming distance for each dictionary (ID-No. ) of the recognition dictionary 10 is stored.
  • Information stored in the identification result memory means 12 is used in the re-identification means 15 .
  • the identification result decision means 13 decides whether or not to input additional information on the basis of identification result of the individuals identifying means 11 .
  • the decision means 13 decides not to input additional information. If a decision is made not to input additional information, the identification result outputted by the individuals identifying means 11 is outputted as the final result.
  • the identification result decision means 13 uses the additional information input means 14 to prompt the operator to supply additional information, and directs the re-identification means 15 to perform the re-identification process.
  • FIG. 10 is a diagram for explaining the re-identification process.
  • the re-identification means 15 selects the dictionaries (ID-Nos.) matching this additional information, reads the Hamming distances of the selected dictionaries as identification results from the identification result memory means 12 , obtains the minimum value HDMIN1 of the Hamming distances, and outputs this value to the identification result decision means 13 .
  • a dictionary ID-No.
  • the identification result decision means 13 decides whether or not the obtained Hamming distance is smaller than the predetermined threshold value HDTH1 (HDTH1>HDTH). If it is smaller than the threshold value, the decision means outputs the dictionary which brings about the minimum value HDMIN1 as the final result, and decides that the individual has been identified as the one registered in the dictionary 10 . For example, in the example in FIG. 10, “Sex” was input as the first additional information, and the operator selected “Female.” If the individual is judged to be a registered individual, the identification process is finished.
  • the identification result decision means 13 directs the additional information input means 14 to input the next additional information, and also directs the re-identification means to perform the re-identification process.
  • the re-identification means 15 obtains an identification result HDMIN2 covering the initially-input additional information and additional information this time, and outputs HDMIN2 to the identification result decision means 13 .
  • FIG. 10 shows a case where the individual was not identified as the registered one.
  • the additional information input means 14 prompts the operator to input “the color of the fur” as shown in (B), and the operator selects “White.”
  • the identification result decision means 13 decides whether or not the HDMIN2 obtained by the re-identification process is smaller than the threshold value HDTH2 (HDTH2>HDTH1), if HDMIN2 is found smaller, outputs a dictionary, which brings about the minimum value HDMIN2, as the final result, and decides that the individual has been identified as the one registered in the dictionary. On the other hand, if HDMIN2 is larger than HDTH2, the decision means 13 directs the additional information input means 14 to input the next additional information, and also directs the re-identification means 15 to perform a re-identification process, so that the above-mentioned operation is repeated.
  • a more suitable dictionary is selected each time the operator inputs additional information, and a comparison is made between the minimum value of the Hamming distance obtained with the selected dictionary and a threshold value larger than the previous one, by which the individual is judged to be correct or false. Therefore, when the conditions are satisfied before the operator has input all of the additional information, the individual identification process is finished. When the above-mentioned minimum value is still not smaller than the preset threshold value after all additional information has been input, the individual is judged to be none of the registered individuals (not included in the recognition dictionary 10 ).
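The re-identification loop with its escalating thresholds can be sketched as below. The data shapes, question order, and threshold values are illustrative assumptions; the stored distances play the role of the identification result memory means 12:

```python
# Sketch of the re-identification loop (second embodiment): each round adds
# one piece of additional information, filters the stored Hamming distances
# to the matching dictionaries, and tests the new minimum against a
# progressively larger threshold (HDTH1 < HDTH2 < ...).

def re_identify(distances, entries, questions, answers, thresholds):
    """distances: ID-No. -> Hamming distance kept from identification;
    entries: ID-No. -> attribute dict; questions: attribute names asked in
    order ("sex", "fur", ...); answers: the operator's replies; thresholds:
    one threshold per round, each larger than the last.
    Returns the accepted ID-No., or None if no dictionary ever qualifies."""
    known = {}
    for attr, answer, hdth in zip(questions, answers, thresholds):
        known[attr] = answer  # accumulate additional information
        candidates = {i: d for i, d in distances.items()
                      if all(entries[i].get(k) == v for k, v in known.items())}
        if not candidates:
            continue          # no dictionary covers the information so far
        best = min(candidates, key=candidates.get)
        if candidates[best] < hdth:  # HDMINn below this round's threshold
            return best              # identified as a registered individual
    return None                      # not among the registered individuals
```

Note that only the stored distances are reused; the iris-code comparison itself is never repeated, which is the source of the processing saving the text mentions.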
  • the individual under identification is an animal, the operator inputs additional information.
  • the individual is a human being, however, it is possible for the person to input additional information by himself.
  • the operator manages the system to make sure that the input additional information is correct.
  • the individuals identifying means 11 need not execute the same operation during re-identification. Therefore, the amount of processing can be reduced to a minimum, which contributes to the improvement of the processing speed. As a result, a high-performance system for individual identification can be realized.
  • All operations in the second embodiment can be carried out by control of a computer program to perform the function of the individual identification system.
  • the individual identification system according to the present invention can be realized by recording the program on a recording medium, such as a floppy disc or a CD-ROM, and installing the program in a computer, by downloading the program from a network, or by another method of installing the program on a hard disk.
  • FIG. 11 is a block diagram of the individual identification system according to a third embodiment of the present invention.
  • This individual identification system is a system for identifying an individual racing horse, for example, and comprises an image input unit 19 , image analysis means 20 for analyzing an input image and extracting the features of the horse under identification, a horse name output unit 30 connected to the image analysis means 20 , and a database 31 for providing the features of each individual to the horse name analysis unit 30 .
  • the image input unit 19 accepts an image of a horse under identification, and is formed of one or more cameras.
  • the image analysis means 20 is formed of a CPU and a memory, and includes a hair color identifier 21 which identifies a hair color of a horse from an image input from the image input unit 19 .
  • the hair color identifier 21 is connected on its output side to a head white mark identifier 22 .
  • the head white mark identifier 22 is connected on its output side to a whirl identifier 23 .
  • the whirl identifier 23 is connected on its output side to a leg white mark identifier 24 .
  • a horse name output unit 30 is connected to the output side of the leg white mark identifier 24 of the image analysis means 20 .
  • the horse name output unit 30 outputs an identification result of the horse under identification.
  • FIGS. 12 ( a ) and 12 ( b ) show the locations of the cameras of the image input unit 19 in FIG. 11.
  • FIG. 12( a ) is a side view and
  • FIG. 12( b ) is a top view.
  • FIGS. 13 ( a ) and 13 ( b ) are views of the horse photographed by the cameras 32 and 34 in FIG. 12.
  • the cameras 32 to 34 of the image input unit 19 are arranged around the horse H as shown in FIGS. 12 ( a ) and 12 ( b ).
  • Various focal distances and apertures, which are the parameters of the cameras 32 to 34 , are provided to suit the distance to the horse H as the subject and the magnification.
  • the side cameras 32 located laterally of the horse H take pictures of the body and side view of the legs of the horse H as shown in FIG. 13( a ).
  • the front cameras 33 arranged in front of the horse H take pictures of the face, the chest and the front view of the legs.
  • the rear cameras 34 located at the rear of the horse H take pictures of the buttocks and the rear view of the legs of the horse H.
  • the image input unit 19 outputs the images of the horse under identification to the image analysis unit 20 .
  • FIG. 14 is a flowchart showing the processes of the hair color identifier 21 .
  • FIG. 15 is a diagram showing the locations where hair colors are evaluated.
  • FIG. 16 is a diagram showing the colors of the horse.
  • the hair color identifier 21 refers to color data 21 - 1 and hair color data 21 - 2 stored in memory, not shown, analyzes the images from the image input unit 19 by the processes S 1 to S 8 in FIG. 14, extracts color names defined according to variations of the hair colors as the external appearance data of the horse, and outputs them as a result of hair color identification.
  • the specific color names of horses are kurige (chestnut), tochi-kurige (tochi-chestnut), kage (dark brown), kuro-kage (blackish dark brown), aokage (bluish dark brown), aoge (bluish black), and ashige (white mixed with black or brown) as shown in FIG. 16.
  • the fur color identification process S 1 extracts color information of the belly region 41 of FIG. 15 from the supplied image, and compares it with color data 21 - 1 . In this comparison, a color closest to the region 41 is determined, and is extracted as an identification result of the fur color.
  • Horse fur color data such as yellowing brown, reddish brown, black, white, brown, etc. is stored as color data 21 - 1 .
  • the RGB colorimetric system using the red (R), green (G) and blue (B) components of the pixels, or the CIE-XYZ colorimetric system, may be used.
  • the identified colors in the processes S 1 to S 8 are compared with a combination of colors stored in hair color data 21 - 2 and the color of the horse H under identification is decided.
  • the color of the fur of the horse H is decided as “kurige”(chestnut), “tochi-kurige”(tochi-chestnut), “kage”(brown), “kurokage”(darker reddish brown), “aokage”(dark-bluish black), “aoge”(bluish black) or “ashige”(white mixed with black or brown).
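The nearest-color comparison of process S1 — matching the extracted belly-region color against the stored color data 21-1 — can be sketched as follows. The RGB values are rough illustrative placeholders, not the patent's stored data:

```python
# Sketch of the fur color identification process S1: take the mean color of
# the belly region 41 and pick the closest entry in the stored color data.
# The RGB values below are assumptions for illustration only.

COLOR_DATA = {                    # stands in for color data 21-1
    "yellowish brown": (180, 140, 60),
    "reddish brown":   (150, 75, 50),
    "black":           (20, 20, 20),
    "white":           (240, 240, 240),
    "brown":           (120, 80, 50),
}

def identify_fur_color(region_rgb):
    """region_rgb: mean (R, G, B) of the belly region. Returns the name of
    the closest stored color by squared Euclidean distance in RGB space."""
    def dist2(stored):
        return sum((a - b) ** 2 for a, b in zip(region_rgb, stored))
    return min(COLOR_DATA, key=lambda name: dist2(COLOR_DATA[name]))
```

The same comparison could equally be carried out in the CIE-XYZ colorimetric system mentioned in the text; only the coordinate values of the stored color data would change.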
  • FIG. 17 is a flowchart showing the processes of the head white mark identifier 22
  • FIG. 18 is a diagram showing the extraction regions of FIG. 17.
  • FIG. 19 is a diagram showing the white patterns (Part 1 ) of the horse's head
  • FIG. 20 is a diagram showing the white patterns (Part 2 ) of the horse's head.
  • FIG. 21 is a diagram showing the names of the white marks of the head. The processes of the head white mark identifier 22 will be described with reference to FIGS. 17 to 21 .
  • the head white mark identifier 22 refers to data on head white marks 22 - 1 stored in memory, not shown, extracts the white regions from the image taken by the front camera 33 out of the images from the image input unit 19 , analyzes the image in the processes S 11 to S 15 of FIG. 17, obtains the names defined according to variations of the white patterns of the head as the external appearance data of the head, and outputs them as an identification result of the head white pattern.
  • the names of white patterns shown in FIG. 21 are stored in the head white mark data 22 - 1 along with the locations and sizes of the white marks.
  • the forehead white mark identification process S 11 extracts the white region 51 in the forehead in FIG. 18 from the input image, and selects the name of the pattern by referring to the head white mark data 22 - 1 and the shape, the size, and the presence or absence of a pattern omission at the center portion.
  • the patterns are divided into “star”, “large star”, “curved star”, “shooting star” or the like according to the white pattern of the forehead of the horse H under identification.
  • the “large star” shown in FIG. 19 is selected as its pattern name, or if the pattern is small, the “star” or “small star” is selected.
  • the “large shooting star” in FIG. 19 is selected. Or, if the white pattern is curved, the pattern is classified as a “curved star.” If the central portion is missing, the pattern is called a “ring star.”
  • the white region 52 on the bridge in FIG. 18 is extracted from the input image, the width of the white region is measured, and a pattern name in FIG. 20 is selected by referring to head white mark data 22 - 1 .
  • the white region of the nose 53 in FIG. 18 is extracted from the input image, the size of the white region is measured, and a pattern name in FIG. 20 is selected by referring to head white pattern data 22 - 1 .
  • the white region of the lip 54 in FIG. 18 is extracted from the input image, the width of the pattern is measured, and a pattern name in FIG. 20 is selected by referring to head white mark data 22 - 1 .
  • the white region of the forehead-nose region in FIG. 18 is extracted from the input image, its size and width are measured, and a pattern name is selected from FIG. 20 by referring to head white mark data 22 - 1 .
  • FIG. 22 is a flowchart showing the processes of the whirl identifier 23 in FIG. 11.
  • FIGS. 23 ( a ) to 23 ( d ) are diagrams showing the locations for detecting whirl patterns.
  • FIG. 24 is a diagram showing the locations and the names of whirls of the horse.
  • the whirl identifier 23 refers to data on whirl patterns 23 - 1 and whirl location data 23 - 2 stored in memory, not shown, carries out the processes S 21 and S 22 in FIG. 22, and obtains whirl data of each horse from the images taken by the cameras 32 to 34 of the image input unit 19 .
  • Whirl pattern data for each fur color is stored in whirl pattern data 23 - 1
  • the names and locations of whirls in FIG. 24 are stored in whirl location data 23 - 2 .
  • the numbers of the whirl locations in FIG. 24 correspond to the numbers allocated to the portions of a horse in FIG. 23.
  • a whirl pattern for the fur color obtained by the fur color identifier 21 is selected from the whirl pattern data 23 - 1 , and the locations 61 to 79 where whirl patterns exist are extracted from the input image.
  • in the whirl location identification process S 22 , whirl names are obtained from whirl location data 23 - 2 according to the whirl locations 61 to 79 obtained in the whirl extraction process S 21 .
  • FIG. 25 is a flowchart showing the processes of the leg white mark identifier 24 in FIG. 11,
  • FIG. 26 is a diagram showing the extraction regions in FIG. 25, and
  • FIG. 27 is a diagram showing the names of white marks on the legs.
  • the leg white mark identifier 24 refers to the leg white mark data 24 - 1 stored in memory, not shown, extracts the white regions from the image from the image input unit 19 , analyzes the image by the processes S 31 and S 32 in FIG. 25, obtains the names defined according to variations of the leg white patterns as external appearance data of the horse, and outputs them as a result of leg white mark identification.
  • the names of white marks in FIG. 27 are stored in data on leg white marks 24 - 1 along with the location and size of the white mark.
  • the leg white mark identifier 24 extracts the white mark in the region 81 at the hoof in FIG. 26, measures its size and circumference length, and selects the appropriate name by referring to the leg white mark data 24 - 1 . For example, if the area of the white mark is small, “tiny white” is selected.
  • in the leg white mark identification process S 32 , the leg white mark identifier 24 extracts the white mark in the leg region 82 in FIG. 26, measures the size and the circumference length of the white region, and selects the appropriate name by referring to the leg white mark data 24 - 1 .
  • FIG. 28 is a diagram showing a horse's name and its feature stored in database 31 in FIG. 11.
  • the horse name output unit 30 searches the database 31 by a feature name selected by the image analysis means 20 .
  • a plurality of horse names are associated with the features of the horses when they are stored in the database 31 . Therefore, by searching the database 31 by the names of features selected by the image analysis means 20 , the name of the horse under identification can be obtained. For example, when the names of features selected and extracted by the portions 21 to 24 of the image analysis means 20 are “kurige”(chestnut), “shooting star nose-bridge white”, “shumoku” and “right front small white”, the name of the horse is obtained as “abcdef.”
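The name lookup just described — searching the database by the feature names selected by the image analysis means — can be sketched as a simple exact match over stored records. The horse names and feature strings below follow the example in the text where possible; the second record is a purely illustrative assumption:

```python
# Sketch of the horse name lookup (third embodiment): database 31 associates
# each horse name with its feature names; the horse name output unit 30
# searches for the record matching the extracted features. The "ghijkl"
# record and its features are invented for illustration.

DATABASE = {
    "abcdef": {"fur": "kurige", "head": "shooting star nose-bridge white",
               "whirl": "shumoku", "leg": "right front small white"},
    "ghijkl": {"fur": "ashige", "head": "large star",
               "whirl": "kuritsubo", "leg": "left hind long white"},
}

def find_horse(features):
    """features: feature names selected by the image analysis means.
    Returns the matching horse name, or None if no record matches."""
    for name, stored in DATABASE.items():
        if stored == features:
            return name
    return None
```

Because only short feature names are stored per horse, the storage cost per record is a handful of strings, in contrast to the prior art's multiple stored images.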
  • the individual identification system comprises an image input unit 19 for taking images of a horse H under identification and acquiring images of the horse, image analysis means 20 for extracting the external features peculiar to the horse, such as the fur color, head white marks, whirls and leg white marks, from the images, and a horse name output unit 30 for obtaining the name of the horse H by searching the database 31 . Therefore, it becomes possible to prevent the features from being overlooked and to preclude wrong decisions, such as misjudgment, so that steady identification of horses can be performed without relying on the skill of the operator.
  • since the features are assigned feature names and compared, the features of the horses can be easily stored in database 31 , and the measurements of horses that were previously required can be obviated. Furthermore, a far smaller storage capacity is required of database 31 than in the prior art, in which a plurality of images of a horse were stored.
  • FIG. 29 is a block diagram of the individual identification system according to a fourth embodiment of the present invention.
  • This individual identification system comprises the image input unit 19 , the image analysis means 20 and the database 31 like in the third embodiment, and further includes a horse name input unit 90 and a true/false decision unit 91 , which are not used in the preceding embodiments.
  • the true/false decision unit 91 is provided in place of the horse name output unit 30 in the third embodiment, and is connected to the output side of the image analysis means 20 , and also to the output side of the database 31 .
  • the horse name input unit 90 is adapted to supply the horse names to the database 31 .
  • the name of the horse to be identified is input to the horse name input unit 90 .
  • the horse name input unit 90 sends the supplied horse name to the database 31 directing it to search the names of features corresponding to the name.
  • the image input unit 19 and the image analysis means 20 , by the same processes as in the third embodiment, select the names of features as the external appearance data of the horse H, thereby extracting the features.
  • the true/false decision unit 91 compares the feature names of the horse H fetched from the database 31 with the feature names supplied from the image analysis means 20 , and if they coincide or are mutually close, outputs information that the horse H under identification in the process of picture taking corresponds to the horse whose name was input to the horse name input unit 90 . Or if they do not coincide, the true/false decision unit 91 outputs information that the horse does not correspond to the horse the name of which was input to the horse name input unit 90 .
  • the individual identification system, which includes the image input unit 19 and the image analysis means 20 and to which the horse name input unit 90 and the true/false decision unit 91 are further added, can decide whether or not the horse under identification is the named horse by comparing the features from the input image with the features obtained by inputting the horse's name when the name is known. As a result, the time required for searching data for identification of the horse can be reduced substantially.
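The true/false decision of the fourth embodiment — comparing the claimed horse's stored features against the features observed in the image — can be sketched as below. The text says the features must "coincide or [be] mutually close"; the minimum-match count used here is an illustrative assumption for that closeness criterion:

```python
# Sketch of the true/false decision unit 91 (fourth embodiment): compare the
# feature names fetched from database 31 for the claimed horse name against
# the feature names produced by the image analysis means. The min_matches
# tolerance is an assumed stand-in for "coincide or are mutually close".

def true_false_decision(claimed_features, observed_features, min_matches=3):
    """Both arguments map attribute names ("fur", "head", "whirl", "leg")
    to feature names. Returns True if the horse being photographed is
    judged to be the horse whose name was input, False otherwise."""
    agreeing = sum(1 for k, v in claimed_features.items()
                   if observed_features.get(k) == v)
    return agreeing >= min_matches
```

Looking up only the single claimed record and running this one comparison is what gives the fourth embodiment its speed advantage over searching the whole database.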
  • the fur colors, the head white patterns, whirl locations, or the leg white patterns are extracted, which represent the physical features of the horse H, and the names of the features are stored in the database 31 .
  • some arrangement may be done such that the scars and other marks may be extracted by image analysis and searched on the database 31 .
  • database 31 is organized such that a horse's name is obtained from the external physical features of the horse, but data used for retrieval of a horse's name may include the registration number, date of birth, blood type, pedigree registration, breed (e.g., thoroughbred male), producer, producer's address, producing district, father's name, mother's name, etc.
  • the individual identification system is formed of an image input unit for inputting images of an individual under identification, which are taken from different angles; image analysis means for extracting the features of the individual under identification obtained by image analysis of the input images; and an output unit provided with a database. Therefore, identification of an individual, a horse for example, can be performed steadily without overlooking the features.
  • the individual identification system comprises the image input unit, the image analysis means, the name input means and the true/false decision unit, so that a decision can be made as to whether the individual under identification is true or false.

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Collating Specific Patterns (AREA)

Abstract

A system for identifying individuals, which easily reveals the cause of misjudgment when it occurred, wherein individual identification means 4 identifies an individual by comparing an input image with a recognition dictionary 2, wherein identification result analysis means 6 analyzes an identification result of the individual identification means 4, and detects that portion of a region, used for identification, of the input image, which differs from the recognition dictionary 2, wherein result output control means 5, if an analysis result of the identification result analysis means 6 is that the input image differs from the dictionary image for the most part, directs only the input image to be displayed, and if the identification result is that the input image partially differs from the dictionary image, directs at least the input image and also the discordant portion to be displayed.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention [0001]
  • The present invention relates to a system for identifying individuals by comparing an input image with a dictionary image previously stored. [0002]
  • 2. Related Art [0003]
  • A technology is well known by which an individual is identified from physical information (about the face, iris and fingerprints, for example) of human beings and other animals. To cite an example, this technology is disclosed in U.S. Pat. No. 5,291,560. [0004]
  • In the technology revealed in the above literature, the image of the iris of an eye of a human being is converted into codes of 0 and 1, and the codes are stored as a dictionary. In an actual identification process, an input iris image is converted into codes of 0 and 1, and the codes are compared with codes stored in the dictionary to establish the identity of an individual. [0005]
  • By the conversion of an input image into codes of 0 and 1, the amount of processing can be reduced, which leads to savings of storage capacity. In the identification of an individual by fingerprints, too, it is ordinary to detect the features from an input image, and store only data on the features as a dictionary. Likewise, in the identification of an individual by a face image, the amount of information is reduced by the use of mosaic images or the facial features. [0006]
  • In the conventional technology mentioned above, however, the input image is converted into codes of 0 and 1, or into information about the facial features. Therefore, if the system should make an error in understanding the data, it is difficult for a user without specialized technical knowledge to find the cause of the problem, and the system will be a burden for the user to deal with. [0007]
  • To take an identification system using the face as an example, if a smiling face is registered when a dictionary is created and a serious face is input in the actual operation of the system, the system may mistake this as a discrepancy in the comparison step. On the other hand, if days passed from the registration till the identification operation and the user forgot his or her facial expression on the day of registration, it is impossible to conduct identification by simply displaying the registered image and the input image for comparison, with the result that it is impossible to decide whether the identification itself is correct or wrong. [0008]
  • In a system which has the images at the time of dictionary creation stored in advance, if a misidentification occurred, even when the stored image at issue is displayed, it takes time to find, from the image of the whole face, the region that was conducive to the misidentification. [0009]
  • Accordingly, it has long been desired that a system should be developed which enables the user to find the cause of the misjudgment even when misidentification occurred. [0010]
  • With a system for identifying individuals by the prior art mentioned above, when this system is applied to identification of animals, such as dogs, cats or horses, a problem arises as follows. [0011]
  • Animals are different from human beings, and you cannot expect them to behave in a manner favorable to the system. For example, when taking a picture of the iris of the eye with a video camera, a person stands still, squarely facing the camera. An animal, however, does not look at the camera, and its face is moving most of the time. Therefore, in the identification of animals, the images taken are inferior in picture quality, which is often responsible for a failure of identification. For this reason, it was necessary to take images repeatedly. [0012]
  • In this respect, it has been expected that a system for identifying individuals should be created which can identify an individual even when the image obtained does not have a good picture quality. [0013]
  • To solve this problem, the present invention adopts the following arrangement. [0014]
  • SUMMARY OF THE INVENTION
  • <Arrangement in [0015] Claim 1>
  • A system for identifying individuals in [0016] Claim 1 comprises
  • an image memory means for storing an input image of an individual as an object to be identified; [0017]
  • a dictionary for having data on the features of collation objects stored in advance; [0018]
  • a dictionary image memory means for storing dictionary images as a basis on which to extract data on the features of the collation objects; [0019]
  • an individual identification means for analyzing the input image held in the image memory means and comparing it with said data on the features stored in the dictionary to thereby identify the individual; [0020]
  • an identification result analysis means for analyzing the identification result by the individuals identifying means and detecting that area of the region used for identification in the input image which does not agree with the dictionary data; [0021]
  • a result output control means for, according to the identification result by the individuals identifying means, issuing an analyze command to the identification result analysis means and, according to the analysis result by the identification result analysis means, issuing a command to display the discordant portion to the image output control means, and deciding whether or not to display the input image and the dictionary image; [0022]
  • an image output control means for, according to a result of decision by the result output control means, controlling display of the discordant portion, the input image and the dictionary image; and [0023]
  • a display for displaying an image outputted by the image output control means. [0024]
  • <Description of [0025] Claim 1>
  • The input image and the dictionary image are images of the iris of the human eye, and the individuals identifying means conducts identification by comparing iris codes. However, any arrangement other than that described above may be applied so long as it identifies an individual. [0026]
  • The identification result analysis means detects an area where the input image does not coincide with the dictionary image by comparing iris codes, for example. The result output control means makes a decision to have at least the portion where the discrepancy occurred displayed according to an analysis result of the identification result analysis means. [0027]
  • By this operation, even when the identification of an individual failed to establish the identity of the person, the portion where the discrepancy occurred is displayed, so that the user can easily guess the cause of the misjudgment. [0028]
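  • As one concrete illustration of the analysis above, the discordant portion can be located by comparing the two iris codes region by region. The following Python sketch is illustrative only, not the claimed implementation: the flat bit-tuple representation, the equal-size region scheme and the 0.3 mismatch threshold are all assumptions.

```python
# Hypothetical sketch: find the iris-code regions where the input code
# disagrees with the dictionary code. Codes are equal-length tuples of
# 0/1 bits, split here into equal-sized regions for simplicity.

def discordant_regions(input_code, dict_code, num_regions, threshold=0.3):
    """Return indices of regions whose bit-mismatch rate exceeds threshold."""
    assert len(input_code) == len(dict_code)
    region_len = len(input_code) // num_regions
    discordant = []
    for r in range(num_regions):
        start = r * region_len
        a = input_code[start:start + region_len]
        b = dict_code[start:start + region_len]
        mismatch_rate = sum(x != y for x, y in zip(a, b)) / region_len
        if mismatch_rate > threshold:
            discordant.append(r)  # this region would be highlighted for the user
    return discordant
```

  • A display step could then highlight only the returned region indices on the input image, giving the user the clue discussed above.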
  • <Arrangement of [0029] Claim 2>
  • If the input image agrees with the dictionary data to such an extent that the identification result of the individuals identifying means is larger than a predetermined threshold value in [0030] Claim 1, the result output control means makes a decision not to issue an analyze command to the identification result analysis means nor issue a display command to the image output control means.
  • <Description of [0031] Claim 2>
  • A case where the input image agrees with the dictionary data to such an extent that the identification result of the individuals identifying means is larger than the predetermined threshold value means that the person is identified as a correct person. In this case, the system does not display analysis of the identification result nor display the input image. Therefore, unnecessary processing is not executed and a reduction of processing load can be expected. [0032]
  • <Arrangement of [0033] Claim 3>
  • In a system for identifying individuals according to [0034] Claim 1, if the degree of agreement between the input image and the dictionary data does not reach the predetermined first threshold value and is also smaller than a second threshold value, which is smaller than the first threshold value, the result output control means makes a decision not to issue an analyze command to the identification result analysis means nor issue a display command to the image output control means.
  • <Description of [0035] Claim 3>
  • The invention of [0036] Claim 3 is that the analyze process and the image display process are not executed when the person is identified as a correct person or the person is identified as somebody else or the quality of the input image is judged to be inferior, and the analyze process and the image display process are executed only when the identity of the person is uncertain. The cases where the degree of agreement between the input image and the dictionary data is less than the second threshold value include a case of no agreement at all.
  • By this arrangement, unnecessary processing can be omitted and security can be secured even if a different person in bad faith attempts to use the system. [0037]
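  • The decision logic of Claims 2 and 3 can be pictured as a three-way split on the degree of agreement. The sketch below assumes a match score where larger means better agreement; the threshold values and labels are illustrative assumptions, not the claimed values.

```python
def decide_output(agreement, first_threshold=0.8, second_threshold=0.4):
    """Three-way decision on a match score (larger = better agreement).

    Clear accepts and clear rejects skip the analyze/display steps;
    only the uncertain middle band triggers them.
    """
    if agreement >= first_threshold:
        return "accept"                # identified as the correct person
    if agreement < second_threshold:
        return "reject"                # somebody else, or poor image quality
    return "analyze_and_display"       # identity uncertain: analyze the result
```

  • Skipping the analysis for the clear reject case is also what keeps an ill-intentioned user from learning how the identification is carried out.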
  • <Arrangement of [0038] Claim 4>
  • In a system for identifying individuals according to any of [0039] Claims 1 to 3, if the identification result analysis means analyzes and judges the discordant portion between the input image and the dictionary image to be larger than a predetermined value, the result output control means judges that the input image, for the most part, differs from the dictionary image and makes a decision to cause only the input image to be displayed.
  • <Description of [0040] Claim 4>
  • The invention in [0041] Claim 4 is such that according to the result of the analyze process by the identification result analysis means, if the result output control means judges that the input image, for the most part, differs from the dictionary data, only the input image is displayed. In other words, in this case, the dictionary image is not displayed. Therefore, if the image is out of focus or blurred, the user can easily guess the cause of misjudgment by looking at the input image. Furthermore, even when an ill-intentioned person tries to use the system, security can be maintained.
  • <Arrangement of [0042] Claim 5>
  • A system for identifying individuals in [0043] Claim 5 comprises
  • an image memory means for storing an input image of an individual to be identified; [0044]
  • a dictionary for having data on the features of collation objects stored in advance; [0045]
  • a dictionary image memory means for storing dictionary images as a basis on which to extract data of the features of the collation objects; [0046]
  • an individuals identifying means for analyzing the input image held in the image memory means and comparing it with the data on the features in the dictionary to thereby identify the individual; [0047]
  • an identification result analyzing means for analyzing the identification result by the individuals identifying means and detecting that area of the region used for identification in the input image which does not agree with the dictionary data; and [0048]
  • a result output control means for making a decision not to display any image on the basis of a judgement that the input image, for the most part, differs from the dictionary data if the discordant portion is larger than a predetermined value in the analysis of the identification result analysis means. [0049]
  • <Description of [0050] Claim 5>
  • The invention in [0051] Claim 5 is such that as the result of the analyze process by the identification result analysis means, if a decision is made that the input image, for the most part, differs from dictionary data, no image is displayed. In this case, for example, when an ill-intentioned person tries to access the system, security is preserved because no input image is displayed and it is least likely to be known how identification of an individual is carried out. This invention is suitable for cases where emphasis is placed on the preservation of security.
  • <Arrangement of [0052] Claim 6>
  • In a system for identifying individuals according to any of [0053] Claims 1 to 5, the result output control means makes a decision to cause both the input image and the discordant portion to be displayed if it is judged in the analysis by the identification result analysis means that the input image partially differs from the dictionary image, that is, when the discordant portion is smaller than a predetermined value.
  • <Description of [0054] Claim 6>
  • The invention in [0055] Claim 6 is such that both the discordant portion and the input image are displayed in cases of identification by the iris in which the input image is a correct person's but there is a partial difference between the input image and the dictionary data, as when the iris is hidden behind the eyelid or eyelashes or light is reflected by the eye into the camera. Thus, the above images serve as an effective clue by which the user can guess the cause of the misjudgment. If an operator attends the system, he can make a final decision on the basis of the above images.
  • <Arrangement of [0056] Claim 7>
  • The system for identifying individuals according to any of [0057] Claims 1 to 5, wherein the result output control means makes a decision to cause only the discordant portion to be displayed on the ground that the input image partially differs from the dictionary data when the discordant portion is smaller than a predetermined value in the analysis by the identification result analysis means.
  • <Description of [0058] Claim 7>
  • The invention of [0059] Claim 7 displays only the discordant portion, in contrast to Claim 6, which displays both the input image and the discordant portion. This invention offers an effective clue by which the user can guess the cause of a failure to obtain a correct identification. If an operator attends the system, the displayed portion serves as a basis on which he makes a final decision.
  • <Arrangement of [0060] Claim 8>
  • In a system according to any of [0061] Claims 1 to 7, the system for identifying an individual is characterized in that the object to be identified is the iris in the eye.
  • <Description of [0062] Claim 8>
  • The invention in [0063] Claim 8 is intended to identify individuals by collation of iris images. The effect of this invention is that it is possible to guess causes of misidentification that even the user finds hard to know, such as the iris being hidden behind the eyelid or eyelashes, or the reflection of light by the eye into the camera, which hampers identification even when an input image of the correct person is used.
  • <Arrangement of [0064] Claim 9>
  • A system for identifying individuals in [0065] Claim 9 comprises
  • a recognition dictionary for having stored in advance data on features of an object to be identified and additional information peculiar to the object obtained from the object to be identified; [0066]
  • an individuals identifying means for identifying an individual by comparing the data on features of the object to be identified with data on features of the dictionary data; [0067]
  • an identification result decision means for, when having made a decision not to input additional information in a decision regarding whether or not to input additional information according to identification result by the individuals identifying means, outputting identification result by the individuals identifying means as a final result, or, when having made a decision to input additional information, issuing a command to obtain additional information and also a command to conduct a re-identification process, and making a decision to terminate the re-identification process according to a result of the re-identification process, and outputting a final identification result; [0068]
  • an additional information inputting means for obtaining the arriving additional information upon receiving a command to obtain additional information from the identification result decision means; and [0069]
  • a re-identification means for, on receiving a command to conduct a re-identification process from the identification result decision means, selecting the identification dictionaries consistent with all of the additional information acquired by the additional information inputting means, and outputting, as the result of re-identification, the dictionary having the closest value to the data on the features of the object under identification among the selected dictionaries. [0070]
  • <Description of [0071] Claim 9>
  • Individuals as objects to be identified are human beings and animals, such as dogs, cats and horses. Individuals of any other kinds may also be identified. Among the methods for identifying individuals, there is the iris identification process using the iris of the eye; however, any other method may be used. Furthermore, the additional information is about the external features of the object under identification, such as the distinction of sex, the color of the eye, etc. Any other kind of information, such as audio information, may be used so long as it is a feature that each individual possesses and that distinguishes individuals. [0072]
  • The individuals identifying means conducts an identify process on an individual, the results of which are outputted to the identification result decision means. When the person is identified as a correct person according to the identification result of the individuals identifying means, the identification result decision means outputs this information as the final identification result. On the other hand, if the person is not identified as a correct person, a re-identification command is issued together with a command to obtain additional information. The additional information input means obtains the additional information entered by the user or operator, and on the basis of this information, the re-identification means executes a re-identification process. Using the re-identification result of the re-identification means, the identification result decision means, if the person could thereby be identified, for example, outputs a decision to terminate the re-identification process along with a final identification result. [0073]
  • Therefore, even if the identification of the person failed in the individual identification process, re-identification is performed focusing on certain dictionaries by using additional information, so that identification of an individual is still possible even when an obtainable image has a poor picture quality. [0074]
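  • A minimal Python sketch of this re-identification flow follows. It assumes each dictionary entry pairs an iris code with an attribute record; the fractional Hamming distance and the data layout are illustrative assumptions, not the literal implementation.

```python
# Hypothetical sketch of re-identification using additional information:
# keep only the dictionaries consistent with every known attribute, then
# pick the one whose iris code is closest to the input code.

def hamming(a, b):
    """Fractional Hamming distance between equal-length bit tuples."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def re_identify(input_code, dictionaries, additional_info):
    candidates = [d for d in dictionaries
                  if all(d["info"].get(k) == v
                         for k, v in additional_info.items())]
    if not candidates:
        return None  # no dictionary matches the attributes entered so far
    return min(candidates, key=lambda d: hamming(input_code, d["code"]))
```

  • For example, entering the single attribute "sex: female" narrows the search to the dictionaries of female animals before the closest iris code is chosen, which is why a poor-quality image can still yield an identification.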
  • <Arrangement of [0075] Claim 10>
  • In a system for identifying individuals according to [0076] Claim 9, a dictionary has a number of different items of additional information stored in advance, and the re-identification means conducts a re-identification process each time it receives one item of additional information obtained by additional information input means.
  • <Description of [0077] Claim 10>
  • The invention in [0078] Claim 10 is such that the additional information is a plurality of items of information, such as the distinction of sex and the fur color, and the re-identification means conducts a re-identification process each time it receives one item of additional information. Therefore, the re-identification process can be carried out even if not all items of information are supplied, and superfluous input operations need not be performed.
  • <Arrangement of [0079] Claim 11>
  • In a system for identifying individuals according to [0080] Claim 10, the identification result decision means makes a decision to terminate the re-identification process each time the re-identification means conducts a re-identification process.
  • <Description of [0081] Claim 11>
  • The invention in [0082] Claim 11 is such that a decision of whether to terminate the re-identification process is made each time one item of additional information is input. Therefore, when the person has been identified by the input of some item of additional information, no further re-identification process is necessary, so that the time spent for identification can be shortened.
  • <Arrangement of [0083] Claim 12>
  • In a system for identifying individuals according to [0084] Claim 11, the identification result decision means changes a criterion for re-identification at each execution of the re-identification process by the re-identification means.
  • <Description of [0085] Claim 12>
  • The invention in [0086] Claim 12 has been made to enable the threshold value to be varied to ease the criterion, for example, at each re-identification process. Therefore, it becomes possible to enhance the probability of successful identification of an individual while suppressing a deterioration of identification accuracy.
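  • One simple way to realize the varied criterion of Claim 12 is to raise the acceptable Hamming distance slightly at each round, with a cap so that accuracy does not degrade without bound. The step size and cap below are purely illustrative assumptions.

```python
def criterion_for_round(base_hd_th, round_index, step=0.02, max_th=0.40):
    """Hamming-distance threshold for a given re-identification round.

    The criterion is eased (the threshold raised) a little each round,
    but capped so identification accuracy is not given up entirely.
    """
    return min(base_hd_th + step * round_index, max_th)
```

  • Each arriving item of additional information also shrinks the candidate set, so the eased criterion is applied to ever fewer dictionaries, which is what suppresses the deterioration of accuracy.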
  • <Arrangement of [0087] Claim 13>
  • In a system for identifying individuals according to any of [0088] Claims 9 to 12, identification of an individual is by collation of iris images and the additional information is about the external features of the individual.
  • <Description of [0089] Claim 13>
  • The invention in [0090] Claim 13 is such that the process for identifying an individual is done by collation of iris images and the additional information is about the external appearance of the individual under identification. Therefore, even if an obtained image of the eye is inferior in picture quality, identification may succeed. Because the additional information concerns external features, even when the operator is supposed to confirm the entered additional information, he can confirm it easily and with fewer errors.
  • <Arrangement of [0091] Claim 14>
  • A system for identifying individuals in [0092] Claim 14 comprises
  • an image input unit for taking pictures of an individual as an object to be identified from different camera angles and inputting images of the object; [0093]
  • an image analyzing means for analyzing the image and extracting the features of the external appearance of the object; and [0094]
  • an output unit for obtaining a name by which to identify an individual under identification according to the features. [0095]
  • <Arrangement of [0096] Claim 15>
  • In a system for identifying individuals according to [0097] Claim 14, the image analyzing means extracts the features by selecting a name of a feature corresponding to the image for each of the elements of external appearance out of the names of predetermined features allotted to variations of respective elements of the external appearance.
  • <Arrangement of [0098] Claim 16>
  • In a system for identifying individuals according to [0099] Claim 15, there is further provided a data base of data representing the features of the external appearance of a plurality of individuals by the names of features, the data being associated with the names of the individuals when the data is accumulated, wherein the output unit is so arranged as to search the database, and obtain the names of individuals having the names of features corresponding to the names of features selected at the image analyzing means as the names of individuals under identification.
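  • The lookup in Claim 16 amounts to a search for the individuals whose stored feature names all match the names the image analyzing means selected. A toy Python sketch follows; the dictionary layout and the horse attributes in the test data are illustrative assumptions.

```python
def find_by_features(database, selected):
    """database: {individual name: {appearance element: feature name}}.

    Return every individual whose stored feature names match all of the
    feature names selected by the image analyzer."""
    return [name for name, feats in database.items()
            if all(feats.get(element) == feature_name
                   for element, feature_name in selected.items())]
```

  • Because the features are reduced to predetermined names rather than raw pixels, the search is an exact match over a small vocabulary, which is what makes the database practical.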
  • <Arrangement of [0100] Claim 17>
  • In a system for identifying individuals according to [0101] Claim 15 or 16, the individuals to be identified and the plurality of objects are animals, and wherein the elements of the external appearance used in the image analyzing means, the database and the output unit are two or more items of the fur color, the white patterns of the head, the location of whirls, and the white marks of the legs.
  • <Arrangement of [0102] Claim 18>
  • In a system for identifying individuals according to [0103] Claim 17, the image analyzing means extracts not less than two items of the location of a scar of an animal under identification and the condition of spots other than white marks, wherein the database is associated with the names of the plurality of animals, the location of scars or the condition of spots of each animal when data is accumulated, and wherein the output unit searches the database using the location of the scar or the condition of spots extracted from the image to thereby obtain the name of the animal to be identified.
  • <Arrangement of [0104] Claim 19>
  • A system for identifying individuals in [0105] Claim 19 comprises
  • a name input means for entering the name of an individual under identification; [0106]
  • an image input unit for inputting images of an individual under identification by taking images of the object; [0107]
  • an image analyzing means for analyzing the image and extracting the features of the external appearance of an individual under identification; [0108]
  • a data base for accumulating data representing the features of the external appearance of individuals associated with the names of a plurality of individuals, and searching data corresponding to the names input through the name input means to thereby output the data; [0109]
  • a true/false decision unit for making a decision of whether an individual under identification is true or not by comparing the features found at said data base with the features extracted by said image analyzing means. [0110]
  • <Arrangement of [0111] Claim 20>
  • In a system for identifying individuals according to [0112] Claim 19, image analyzing means extracts the features by selecting a name of a feature corresponding to the image for each of the elements of external appearance out of the names of predetermined features allotted to variations of respective elements of the external appearance.
  • <Arrangement of [0113] Claim 21>
  • In a system for identifying individuals according to [0114] Claim 20, the data base is accumulated in such a way that items of data on the features of the external appearance of the plurality of individuals are associated with the names of the individuals, the items of data being represented by the names of the features, and the true/false decision unit makes a decision of whether an individual under identification is true or not by comparing names of the features found at the data base with names of the features extracted by the image analyzing means.
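  • The true/false decision of Claim 21 then reduces to comparing the feature names stored in the database under the claimed name with the feature names just extracted from the images. A hedged sketch, with the same illustrative data layout as assumed above:

```python
def is_true_individual(database, claimed_name, extracted):
    """True only if every extracted feature name matches the feature
    names stored in the database under the claimed individual's name."""
    stored = database.get(claimed_name)
    if stored is None:
        return False  # unknown name: cannot verify
    return all(stored.get(element) == feature_name
               for element, feature_name in extracted.items())
```

  • In contrast to the search of Claim 16, the name is given in advance, so only one database record needs to be consulted.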
  • <Arrangement of [0115] Claim 22>
  • In a system for identifying individuals according to [0116] Claim 20 or 21, the individuals under identification and the plurality of individuals are animals, and the elements of the external appearance, which are used in the image analyzing means, the database and the output unit, are two or more items of the fur color, the white patterns of the head, the location of whirls, and white marks of the legs.
  • <Arrangement of [0117] Claim 23>
  • In a system for identifying individuals according to [0118] Claim 20 or 21, the image analyzing means extracts not less than two items of the location of a scar and the condition of white marks of an animal under identification, the database associates the names of the plurality of animals with the condition of scars of each animal or the condition of spots when accumulating data, and the true/false decision unit makes a decision of whether the animal under identification is true or not from the location of scars or the condition of spots extracted from the image.
  • <Arrangement of [0119] Claim 24>
  • In a system for identifying individuals according to any of [0120] Claims 14 to 23, the image input unit is formed by a plurality of cameras, disposed around the object under identification, for taking images of the object under identification.
  • <Arrangement of [0121] Claim 25>
  • In a system for identifying individuals according to any of [0122] Claims 14 to 23, the image input unit is formed by a camera movable around an individual under identification to take images of the object under identification.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram showing a first embodiment of the individual identification system of the present invention; [0123]
  • FIG. 2 is a diagram for explaining an example of the recognition dictionary in the first embodiment of the present invention; [0124]
  • FIG. 3 is a diagram for explaining an example of the dictionary image memory means in the first embodiment of the individual identification system of the present invention; [0125]
  • FIG. 4 is a diagram for explaining an example of division of the iris region in the first embodiment of the individual identification system of the present invention; [0126]
  • FIG. 5 is a diagram for explaining an example of calculation results of Hamming distances of different regions in the first embodiment of the individual identification system of the present invention; [0127]
  • FIG. 6 is a diagram for explaining an example of the result of image display in the first embodiment of the individual identification system of the present invention; [0128]
  • FIG. 7 is a block diagram of a second embodiment of the individual identification system of the present invention; [0129]
  • FIG. 8 is a diagram for explaining the contents of the recognition dictionary in the second embodiment of the individual identification system of the present invention; [0130]
  • FIG. 9 is a diagram for explaining the contents of the identification result memory means in the second embodiment of the individual identification system of the present invention; and [0131]
  • FIG. 10 is a diagram for explaining the re-identification in the second embodiment of the present invention; [0132]
  • FIG. 11 is a block diagram of the individual identification system according to a third embodiment of the present invention; [0133]
  • FIG. 12 is a diagram showing camera positions of the image input portion in FIG. 11; [0134]
  • FIG. 13 is a diagram showing a horse photographed by [0135] cameras 32 to 34 in FIG. 12;
  • FIG. 14 is a flowchart showing processes of [0136] color identification unit 21 in FIG. 11;
  • FIG. 15 is a diagram showing the locations where the color is evaluated; [0137]
  • FIG. 16 is a diagram showing the names of the colors of horses; [0138]
  • FIG. 17 is a flowchart showing processes of the head white marks [0139] identification unit 22 in FIG. 11;
  • FIG. 18 is a diagram showing the data extraction regions in FIG. 17; [0140]
  • FIG. 19 is a diagram showing the white patterns (Part [0141] 1) of the horse head;
  • FIG. 20 is a diagram showing the white patterns (Part [0142] 2) of the horse head;
  • FIG. 21 is a diagram showing the names of the white patterns of the horse head; [0143]
  • FIG. 22 is a flowchart showing the processes of the [0144] whirls identification unit 23 in FIG. 11;
  • FIG. 23 is a diagram showing the locations where whirls are detected; [0145]
  • FIG. 24 is a diagram showing the names and locations of whirls of horses; [0146]
  • FIG. 25 is a flowchart showing the processes of the leg white marks [0147] identification unit 24 in FIG. 11;
  • FIG. 26 is a diagram showing the data extraction regions in FIG. 25; [0148]
  • FIG. 27 is a diagram showing names of the white marks of the legs; [0149]
  • FIG. 28 is a diagram showing the horse names and the feature names accumulated in the [0150] database 31 in FIG. 11; and
  • FIG. 29 is a block diagram of the individual identification system showing a fourth embodiment of the present invention.[0151]
  • DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • Embodiments of the present invention will be described in detail with reference to the accompanying drawings. [0152]
  • <[0153] Embodiment 1>
  • <Composition>[0154]
  • FIG. 1 is a block diagram showing a first embodiment of the system for identifying individuals according to the present invention. [0155]
  • The system in FIG. 1 is a computer system and comprises image memory means [0156] 1, a recognition dictionary 2, dictionary image memory means 3, individuals identifying means 4, result output control means 5, identification result analysis means 6, image output control means 7, and display device 8. In this embodiment, description will be made of identification of an individual by collation of iris images.
  • The image memory means [0157] 1 is installed in an auxiliary memory device, such as a semiconductor memory or a disk, and performs the function of storing input images of individuals as objects under identification. The images are acquired through A/D conversion of images taken by a video camera, VTR, etc. The images held in this image memory means 1 are used by the individuals identifying means 4 and the image output control means 7.
  • The [0158] recognition dictionary 2 is installed in the auxiliary memory device, such as a semiconductor memory or a disk, and has data on features of objects under identification stored in advance.
  • FIG. 2 is an explanatory diagram of the [0159] recognition dictionary 2.
  • As illustrated, the [0160] recognition dictionary 2 holds iris codes as data on features and user information (name, sex, etc.).
  • The dictionary image memory means [0161] 3 is mounted in the auxiliary memory device, such as a semiconductor memory or a disk, and holds the images from when the respective dictionaries were created and recorded in the recognition dictionary 2.
  • FIG. 3 is an explanatory diagram of the dictionary image memory means [0162] 3.
  • As illustrated, the dictionary image memory means [0163] 3 holds the images entered at the times of creating the dictionaries of the recognition dictionary 2, and the respective images correspond to the respective dictionaries stored in the recognition dictionary 2 in a one-to-one correspondence.
  • The [0164] individuals identifying means 4 analyzes an input image held in the image memory means 1, and recognizes the iris by comparing it with feature data of the recognition dictionary 2 to thereby identify the individual. The results of the individuals identifying means 4 are utilized by the result output control means 5.
  • The result output control means [0165] 5 issues an analyze command to the identification result analysis means 6 according to an identification result of the individuals identifying means 4, and, when it has sent the analyze command, makes a decision, in response to the analysis result from the identification result analysis means 6, of whether or not to display the input image held in the image memory means 1, the dictionary image stored in the dictionary image memory means 3 and the analysis result. The result output control means 5 outputs the decision result to the image output control means 7.
  • On the basis of an identification result from the individuals identifying means 4, the identification result analysis means 6 detects the area of the input image, used for identification, that does not agree with the dictionary image.
  • On the basis of a decision result from the result output control means 5, the image output control means 7 controls image display. The display device 8 is formed by a CRT, a liquid crystal display device, or the like, and displays the discordant portion.
  • <Operation>
  • First of all, suppose that the recognition dictionary 2 and the dictionary image memory means 3 respectively contain the users' iris codes and the images entered at the times of dictionary creation, stored previously. The method of generating iris codes in this case is a well-known one, and its description is omitted.
  • Also suppose that, under the above condition, an image has been input and held in the image memory means 1. The individuals identifying means 4 analyzes the input image held in the image memory means 1 and compares it with the recognition dictionary 2. This process will be described briefly. From the image of the eye held in the image memory means 1, a circumscribed circle of the pupil and a circumscribed circle of the iris are obtained. Then, polar coordinates are set relative to the two circles as references, and the iris region is divided into subdivisions, which are subjected to a filter process and a threshold process and output as codes consisting of 0s and 1s (hereafter referred to as iris codes). Identification of an individual is performed by comparing the codes thus generated with the codes stored in the recognition dictionary 2.
  • To cite an example, in the literature mentioned in the paragraph on the prior art, Hamming distances between the iris code created from the input image and the iris codes of the recognition dictionary 2 are calculated, and the dictionary that brings about the smallest Hamming distance is selected. (The Hamming distance at this time is called HDMIN.) When this Hamming distance is smaller than a preset threshold value (HDTH), a decision is made that the person has been identified. The minimum value HDMIN of Hamming distances during identification is used in the result output control means 5.
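  • The matching step described above can be sketched as follows. This is an illustrative sketch, not the patented implementation: the dictionary layout, the function names, and the threshold value `hd_th` are assumptions made for illustration.

```python
def hamming_distance(code_a, code_b):
    """Fraction of positions at which two equal-length bit sequences differ."""
    assert len(code_a) == len(code_b)
    diff = sum(1 for a, b in zip(code_a, code_b) if a != b)
    return diff / len(code_a)

def identify(input_code, dictionary, hd_th=0.32):
    """Return (user_info, HDMIN) for the closest dictionary entry,
    or (None, HDMIN) when no entry is below the threshold."""
    best_user, hd_min = None, float("inf")
    for user_info, iris_code in dictionary.items():
        hd = hamming_distance(input_code, iris_code)
        if hd < hd_min:
            best_user, hd_min = user_info, hd
    if hd_min < hd_th:
        return best_user, hd_min   # person identified
    return None, hd_min            # identification failed; HDMIN still reported
```

  • Note that HDMIN is returned even on failure, since the result output control means 5 uses it to decide whether to trigger the analysis step.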
  • The identification result of the individuals identifying means 4 is output to the result output control means 5 and the identification result analysis means 6. The operation of the result output control means 5 will be described. The result output control means 5 decides whether or not to analyze the identification result on the basis of the identification result of the individuals identifying means 4. For example, in this decision making, the result output control means 5 uses the same threshold value that is used by the individuals identifying means 4 for identification of a person. If the Hamming distance HDMIN output by the individuals identifying means 4 is larger than the threshold value HDTH, the result output control means 5 issues a command to the identification result analysis means 6 directing it to analyze the identification result. Analysis by the identification result analysis means 6 will be described later.
  • When the identification result analysis means 6 finishes the analysis of the identification result, the result output control means 5 decides a result output mode according to the analysis result. For example, if the identification result analysis means 6 decides that the iris codes differ for the most part, the result output control means 5 issues a command to the image memory means 1 directing it to display only the input image stored therein, without displaying the dictionary image.
  • Because the input image is displayed, the user can get information about whether the failure of identification is due to poor picture quality (blur, out of focus, etc.), to his closing the eyes, and so on. In this case, since the dictionary image is not displayed, security can be maintained even if a completely different person in bad faith should use the system.
  • Even when the identification result analysis means 6 decides that the iris codes differ only partially, there is a possibility that the user is registered in the recognition dictionary 2; displaying the area where the iris codes differ is therefore helpful for conjecturing the cause of the failure to recognize a correct user. If an operator attends the system, the operator can make a final decision according to the displayed image when the system fails to recognize a correct user. Therefore, the result output control means 5 directs the image output control means 7 to output the ‘input image’, the ‘dictionary image’, and the ‘area where the iris codes disagree’ to the display device 8.
  • The operation of the identification result analysis means 6 will be described in detail.
  • The identification result analysis means 6 detects the area of the input image, used for identification, that does not agree with the dictionary image, on the basis of the identification result of the individuals identifying means 4. Since identification of an individual by iris codes is performed in this embodiment, a decision is made whether the codes differ over the whole region or whether some areas agree even though there are discordant portions. The identification result analysis means 6 also detects the locations of the coincident areas and the discordant portions.
  • If it is found by making these decisions that the iris codes do not agree on the whole, there is a possibility that the person under identification is totally different from the person registered in the dictionary, or that the input image has poor picture quality. On the other hand, if the iris codes disagree only in a limited area, one possibility is that a part of the iris is hidden behind the eyelid or eyelashes; in other words, though the person is the correct person, a wrong decision was made. In such a case, both the input image and the dictionary image can be displayed to clarify the locations of the coincident areas and discordant portions, from which the user can surmise the cause of the erroneous recognition. Again, if an operator attends the system, he can make a final decision.
  • Description will now be made of the method of deciding whether the iris codes differ for the most part or only partially according to a result of iris recognition.
  • FIG. 4 is an explanatory diagram of a case where the iris region is divided into 32 subdivisions.
  • The number of subdivisions may be decided according to the purpose of use, etc.
  • The identification result analysis means 6 calculates, for the respective subdivisions, the Hamming distances between the iris codes of the dictionary finally selected by the individuals identifying means 4 and those generated from the input image.
  • FIG. 5 shows an example of a calculation result of the Hamming distances of the respective subdivisions.
  • Then, the number N1 of those Hamming distances larger than a predetermined threshold value TH1 is calculated. When N1 is larger than a predetermined threshold value TH2, a decision is made that the iris codes differ on the whole. In contrast, when N1 is smaller than TH2, a decision is made that the iris codes differ only partially. When a decision of partial difference is made, the sub-region numbers at which the Hamming distances are larger than TH1 are output to the image output control means 7.
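  • The whole-versus-partial decision above can be sketched as follows. This is a minimal sketch assuming a list of per-subdivision Hamming distances; the concrete values of TH1 and TH2 are assumptions for illustration, not values taken from the patent.

```python
def analyze_subdivisions(hd_per_subdivision, th1=0.40, th2=16):
    """Return ('whole', []) when many subdivisions disagree, or
    ('partial', discordant_numbers) listing the disagreeing subdivisions."""
    # sub-region numbers (1-based) whose Hamming distance exceeds TH1
    discordant = [i for i, hd in enumerate(hd_per_subdivision, start=1)
                  if hd > th1]
    n1 = len(discordant)
    if n1 > th2:
        return "whole", []          # iris codes differ for the most part
    return "partial", discordant    # only these subdivisions disagree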
  • The operation of the image output control means 7 will next be described.
  • The image output control means 7 displays images in response to a command from the result output control means 5. More specifically, the image output control means 7 controls the display of the input images held in the image memory means 1 and of the images that are stored in the dictionary image memory means 3 and that correspond to the dictionary finally selected by the individuals identifying means 4. Furthermore, the image output control means 7 displays the detection result of the discordant subdivisions output from the identification result analysis means 6.
  • FIG.[0187] 6 shows an example of an image display result.
  • The illustrated example is a case in which the [0188] sub-region numbers 1, 3, 13, 15, 17, 19, 27, 29 and 31 are judged to be discordant subdivisions when the iris region was divided as shown in FIG. 4 and analyzed.
  • As is clear from this case, when comparing the dictionary image with the input image, the user or the operator needs to compare only those subdivisions where discordance occurred, so that the reason for the failure of recognition can be surmised easily. In this example, the subdivisions where discordance occurred are obviously those that are hidden by the eyelid. Because the images agree in the other subdivisions, even if the system could not judge the person to be a correct person, the operator can identify him as a correct person. In the case of FIG. 6, the discordant subdivisions are indicated by enclosure with a solid line, but the mode of display is not limited to this method. Any mode of display may be used so long as the discordant subdivisions can be notified to the user.
  • Some possible methods are as follows.
  • Discordant subdivisions lying successively are enclosed by a single line as a single region.
  • An image showing only the discordant subdivisions is displayed.
  • The discordant subdivisions are highlighted (smoothing of brightness values, for example).
  • To show an illustrative example of individual identification, a case using the iris has been described. However, this embodiment may be applied to any individual identification process for human beings or animals that uses analysis of images.
  • A modified embodiment of the present invention will be described in the following.
  • In a certain type of application (one where an operator attends the system at all times, for example), the input image and the dictionary image are always displayed. (In this case, the result output control means can be dispensed with.)
  • When the analysis result of the identification result analysis means 6 is that the codes differ for the most part, neither the dictionary image nor the input image is displayed. This is used in a case where emphasis is placed on the preservation of security against a totally different person in bad faith trying to use the system.
  • A dictionary image is not displayed. (In this case, the dictionary image memory means 3 can be dispensed with.)
  • The result output control means 5 has heretofore used one threshold value in deciding whether or not to display images, but here uses a plurality of threshold values.
  • When a decision is made using two threshold values, for example:
  • HDMIN<HDTH1→No image displayed
  • HDTH1≦HDMIN<HDTH2→The input image, the dictionary image and the discordant area are displayed
  • HDTH2≦HDMIN→Only the input image is displayed
  • For the analysis by the identification result analysis means 6, the images held in the image memory means 1 and the images stored in the dictionary image memory means 3 are used. For example, in the first embodiment, feature data, such as iris codes, is compared. However, in the comparison between an input image and a dictionary image, the pixels of multiple gray levels may be compared.
  • <Effects>
  • As has been described, according to the first embodiment, even when a correct person could not be identified in individual identification, the input image, the dictionary image and the discordant portions are displayed to the user, and therefore the user can easily surmise the cause of the faulty recognition. When there is an operator attending the system, even if the system fails to identify a correct person, the operator can survey the displayed image and thereby make a final decision easily. Thus, it is possible to realize an individual identification system that is very easy for the user or the operator to operate.
  • An input image and a dictionary image are displayed selectively after the identification result is analyzed, so that security can be maintained. In addition, since images are not always displayed, a reduction in the amount of processing can also be expected.
  • All operations of the first embodiment can be performed under control of a computer program which performs the function of the individual identification system. Therefore, the individual identification system according to the present invention can be realized, for example, by recording the program on a recording medium, such as a floppy disc or a CD-ROM, and installing the program in a computer, by downloading the program from a network, or by another method such as installing the program on a hard disk or the like.
  • <Embodiment 2>
  • According to a second embodiment of the present invention, additional information is used which includes features peculiar to the objects under identification, such as the color of the eye and the length and color of the fur. The additional information is stored in a dictionary used for individual identification, along with the feature quantities used for identification (iris codes, for example). When identification fails, additional information is input sequentially, and the identification process is performed again using only the dictionaries whose conditions conform to the conditions of the input additional information. In the re-identification, the threshold values for a decision are varied in sequential order.
  • <Arrangement>
  • FIG.[0212] 7 is a block diagram showing a second embodiment of the individual identification system according to the present invention. The system shown in FIG. 1 comprises a recognition dictionary 10, individuals identifying means 11, identification result memory means 12, identification result decision means 13, additional information input means 14, and re-identification means 15. Regarding this second embodiment, description will be made of a case in which the recognition of the iris of an animal under identification is performed.
  • The recognition dictionary 10 resides in an auxiliary memory device, such as a semiconductor memory or a disc, and holds iris codes as the feature quantities used for identification in the individuals identifying means 11, together with additional information about external features peculiar to the object under identification.
  • FIG. 8 is a diagram for explaining the contents of the recognition dictionary 10.
  • As illustrated, the recognition dictionary 10 holds iris codes and additional information about the sex, the color of the fur and the eye, and the tail.
  • The individuals identifying means 11 extracts feature data from the image of an individual under identification input from image input means, not shown, compares the feature data with the iris codes of the recognition dictionary 10 to thereby identify the individual, and outputs a result to the identification result memory means 12 and the identification result decision means 13.
  • The identification result memory means 12 holds the results of the comparison with the dictionaries in the recognition dictionary 10 by the individuals identifying means 11. In the case of this embodiment, Hamming distances are obtained as the results, so the Hamming distances of the respective dictionaries are stored in the identification result memory means 12.
  • The identification result decision means 13 decides whether or not to perform re-identification according to an identification result of the individuals identifying means 11. When the decision is not to input additional information, it outputs the identification result of the individuals identifying means 11 as the final result; when the decision is to input additional information, it issues a command to the additional information input means 14 directing it to acquire additional information, and issues another command to the re-identification means 15 directing it to perform a re-identification process. Furthermore, the identification result decision means 13 outputs a decision to terminate the re-identification process according to the input re-identification result, and also outputs a final identification result.
  • The additional information input means 14 urges the user or the operator to input additional information, and if additional information is supplied, outputs the additional information to the re-identification means 15. The additional information input means 14 is formed by a display device, such as a monitor, and input devices, such as a touch panel, a keyboard and a mouse.
  • The re-identification means 15 starts to run in response to a re-identification command from the identification result decision means 13. On receiving each item of additional information from the additional information input means 14, the re-identification means 15 selects the dictionaries (ID-No.) of the recognition dictionary 10 that cover all the additional information input heretofore, reads the Hamming distances of the selected dictionaries as identification results from the identification result memory means 12, selects the dictionary at the minimum Hamming distance, and outputs this value to the identification result decision means 13.
  • <Operation>
  • When image data of an individual under identification is input to this system, the individuals identifying means 11 analyzes the image data to thereby obtain feature quantities, compares them with the feature quantities registered in the recognition dictionary 10, and thus identifies the individual.
  • In the identification process, a circumscribed circle of the pupil and a circumscribed circle of the iris are obtained from the image of the eye of the object under identification, the image being taken by a video camera. Then, polar coordinates are set with respect to the two circles as references, and the iris region is divided into multiple subdivisions, which are subjected to a filter process and a threshold value process and output as codes of 0s and 1s (hereafter referred to as iris codes). The iris codes thus generated and the iris codes stored in the recognition dictionary 10 are compared to identify the individual.
  • For example, in the literature mentioned when the prior art was referred to, a Hamming distance is calculated between the iris codes generated from the input image and the iris codes of the recognition dictionary 10, and the dictionary that brings about the minimum Hamming distance is selected. (The Hamming distance at this time is designated as HDMIN.) When this Hamming distance is lower than a predetermined threshold value (HDTH), the individual is judged to be a correct one. The minimum value HDMIN of the Hamming distances during identification is used in the identification result decision means 13. The Hamming distances obtained by the comparison with the dictionaries of the recognition dictionary 10 are stored in the identification result memory means 12.
  • FIG. 9 is a diagram for explaining the contents of the identification result memory means 12.
  • As shown in FIG. 9, a Hamming distance for each dictionary (ID-No.) of the recognition dictionary 10 is stored. The information stored in the identification result memory means 12 is used in the re-identification means 15.
  • Description will then be made of a decision about identification results and the re-identification process.
  • The identification result decision means 13 decides whether or not to input additional information on the basis of the identification result of the individuals identifying means 11. When the minimum value HDMIN of the Hamming distances output by the individuals identifying means 11 is smaller than a predetermined threshold value HDTH, the decision means 13 decides not to input additional information. If a decision is made not to input additional information, the identification result output by the individuals identifying means 11 is output as the final result.
  • In contrast, when the minimum value HDMIN of the Hamming distances is higher than the predetermined threshold value HDTH, the identification result decision means 13 uses the additional information input means 14 to prompt the operator to supply additional information, and directs the re-identification means 15 to perform the re-identification process.
  • FIG. 10 is a diagram for explaining the re-identification process.
  • Each time additional information is input to the additional information input means 14, the re-identification means 15 selects the dictionaries (ID-No.) matching this additional information, reads the Hamming distances of the selected dictionaries as identification results from the identification result memory means 12, obtains the minimum value HDMIN1 of the Hamming distances, and outputs this value to the identification result decision means 13.
  • The identification result decision means 13 decides whether or not the obtained Hamming distance is smaller than a predetermined threshold value HDTH1 (HDTH1>HDTH), and if it is smaller than the threshold value, outputs the dictionary that brings about the minimum value HDMIN1 as the final result, and decides that the individual has been identified as the one registered in the recognition dictionary 10. For example, in the example in FIG. 10, “Sex” was input as the first additional information, and the operator selected “Female.” If the individual is judged to be a registered individual, the identification process is finished.
  • On the other hand, if the HDMIN1 obtained in the re-identification process in the re-identification means 15, which covers the first additional information, is larger than HDTH1, the identification result decision means 13 directs the additional information input means 14 to input the next additional information, and also directs the re-identification means 15 to perform the re-identification process. When the additional information is input, as with the first item, the re-identification means 15 obtains an identification result HDMIN2 covering both the initially input additional information and the additional information input this time, and outputs HDMIN2 to the identification result decision means 13. For example, in the example of FIG. 10, (A) shows a case where the individual was not identified as the registered one. Suppose that the additional information input means 14 then prompts the operator to input “the color of the fur” as shown in (B), and the operator selects “White.”
  • The identification result decision means 13 decides whether or not the HDMIN2 obtained by the re-identification process is smaller than a threshold value HDTH2 (HDTH2>HDTH1); if HDMIN2 is found smaller, it outputs the dictionary that brings about the minimum value HDMIN2 as the final result, and decides that the individual has been identified as the one registered in the dictionary. On the other hand, if HDMIN2 is larger than HDTH2, the decision means 13 directs the additional information input means 14 to input the next additional information, and also directs the re-identification means 15 to perform a re-identification process, so that the above-mentioned operation is repeated.
  • As has been described, a more suitable dictionary is selected each time the operator inputs additional information, and a comparison is made between the minimum Hamming distance obtained with the selected dictionaries and a threshold value larger than the previously set value, by which the individual is judged correct or false. Therefore, when the conditions are satisfied before the operator has input all of the additional information, the individual identification process is finished. When the above-mentioned minimum value is still not smaller than the preset threshold value after all additional information has been input, the individual is judged not to be any of the individuals registered (not included in the recognition dictionary 10).
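  • The re-identification loop described above can be sketched as follows. This is an illustrative sketch, not the patented implementation: the attribute names, the data layout, and the threshold schedule are assumptions made for illustration. Note that the first-pass Hamming distances are reused from storage, as held by the identification result memory means 12, rather than recomputed.

```python
def re_identify(stored_hd, dictionary_attrs, prompts, thresholds):
    """stored_hd:        {id_no: hamming_distance} from the first pass
       dictionary_attrs: {id_no: {attr: value}} additional info per dictionary
       prompts:          (attr, value) answers supplied one per round
       thresholds:       one relaxed threshold per round, HDTH1 < HDTH2 < ...
       Returns the identified id_no, or None if no dictionary qualifies."""
    known = {}
    for (attr, value), hd_th in zip(prompts, thresholds):
        known[attr] = value
        # keep only dictionaries consistent with ALL answers given so far
        candidates = [id_no for id_no, attrs in dictionary_attrs.items()
                      if all(attrs.get(a) == v for a, v in known.items())]
        if not candidates:
            continue
        best = min(candidates, key=lambda id_no: stored_hd[id_no])
        if stored_hd[best] < hd_th:
            return best            # identified on this round; stop early
    return None                    # not any registered individual
```

  • Because the check runs after every answer, the loop can terminate before all additional information has been entered, mirroring the early-finish behavior described above.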
  • In the foregoing second embodiment, the individual under identification is an animal, and the operator inputs the additional information. When the individual is a human being, however, it is possible for the person to input the additional information by himself. In this case, preferably, the operator manages the system to make sure that the input additional information is correct.
  • <Effects>
  • According to the second embodiment, even if the decision about an individual's identity fails, the user or the operator is urged to input additional information, a dictionary is selected by using the input additional information, and an identity decision is made after the threshold value for the decision is changed (relaxed little by little). Thus, even when the system fails in the first identity decision, it is still possible to make an identity decision on the individual.
  • Each time an item of additional information is input, a decision is made by executing a re-identification process. Therefore, even before all the additional information is input, it is possible to make a decision on an individual's identity. Accordingly, redundant input operations need not be performed, with the result that the time spent on a decision can be reduced.
  • Furthermore, because the identification result of the individuals identifying means 11 is stored in the identification result memory means 12, the individuals identifying means 11 need not execute the same operation during re-identification. Therefore, the amount of processing can be reduced to a minimum, which contributes to the improvement of the processing speed. As a result, a high-performance system for individual identification can be realized.
  • All operations in the second embodiment can be carried out under control of a computer program that performs the function of the individual identification system.
  • Accordingly, the individual identification system according to the present invention can be materialized by recording the program on a recording medium such as a floppy disc or a CD-ROM and installing the program in a computer, by downloading the program from a network, or by another method such as installing it on a hard disc.
  • <Embodiment 3>
  • FIG. 11 is a block diagram of the individual identification system according to a third embodiment of the present invention.
  • This individual identification system is a system for identifying an individual racing horse, for example, and comprises an image input unit 19, image analysis means 20 for analyzing an input image and extracting the features of the horse under identification, a horse name output unit 30 connected to the image analysis means 20, and a database 31 for providing the features of each individual to the horse name output unit 30.
  • The image input unit 19 accepts an image of a horse under identification, and is formed of one or more cameras. The image analysis means 20 is formed of a CPU and a memory, and includes a hair color identifier 21 which identifies the hair color of a horse from an image input from the image input unit 19. The hair color identifier 21 is connected on its output side to a head white mark identifier 22. The head white mark identifier 22 is connected on its output side to a whirl identifier 23. The whirl identifier 23 is connected on its output side to a leg white mark identifier 24. The horse name output unit 30 is connected to the output side of the leg white mark identifier 24 of the image analysis means 20. The horse name output unit 30 outputs an identification result of the horse under identification.
  • The operation of the individual identification system will be described with reference to FIGS. 12 to 28.
  • FIGS. 12(a) and 12(b) show the locations of the cameras of the image input unit 19 in FIG. 11. FIG. 12(a) is a side view and FIG. 12(b) is a top view. FIGS. 13(a) and 13(b) are views of the horse photographed by the cameras 32 and 34 in FIG. 12.
  • The cameras 32 to 34 of the image input unit 19 are arranged around the horse H as shown in FIGS. 12(a) and 12(b). Various focal distances and apertures, which are the parameters of the cameras 32 to 34, are set to suit the distance to the horse H as the subject and the magnification. The side cameras 32 located laterally of the horse H take pictures of the body and the side view of the legs of the horse H as shown in FIG. 13(a). The front cameras 33 arranged in front of the horse H take pictures of the face, the chest and the front view of the legs. The rear cameras 34 located at the rear of the horse H take pictures of the buttock and the rear view of the legs of the horse H. FIGS. 12(a) and 12(b) show a case of using a plurality of cameras 32 to 34, but the images corresponding to FIGS. 13(a) and 13(b) may instead be taken by moving one camera. The image input unit 19 outputs the images of the horse under identification to the image analysis means 20.
  • FIG. 14 is a flowchart showing the processes of the hair color identifier 21. FIG. 15 is a diagram showing the locations where hair colors are evaluated. FIG. 16 is a diagram showing the colors of the horse.
  • The hair color identifier 21 refers to color data 21-1 and hair color data 21-2 stored in memory, not shown, analyzes the images from the image input unit 19 by the processes S1 to S8 in FIG. 14, extracts the color names defined according to variations of the hair colors as the external appearance data of the horse, and outputs them as a result of hair color identification.
  • The specific color names of horses are kurige (chestnut), tochi-kurige (tochi-chestnut), kage (dark brown), kuro-kage (blackish dark brown), aokage (bluish dark brown), aoge (bluish black), and ashige (white mixed with black or brown), as shown in FIG. 16. The fur color identification process S1 extracts color information of the belly region 41 of FIG. 15 from the supplied image, and compares it with the color data 21-1. In this comparison, the color closest to that of the region 41 is determined, and is extracted as an identification result of the fur color. Horse fur color data, such as yellowish brown, reddish brown, black, white, brown, etc., is stored as the color data 21-1. For the color representation method of the color data 21-1, it is possible to use the RGB colorimetric system, using the red (R), green (G) and blue (B) components of the pixels, or the CIE-XYZ colorimetric system.
  • In the long hair color identification process S2, after the fur color identification process S1, color information of the mane region 42 and the tail region 43 is extracted from the input image, and, as in the fur color identification process S1, the color of the long hair (mane and tail) of the horse H is extracted as an identification result. In the four-leg lower portion fur color identification process S3, color information of the leg regions 44, 45 in FIG. 15 is extracted from the input image, and, as in the fur color identification process S1, the fur color of the legs of the horse H is extracted as an identification result. In the eye peripheral region fur color identification process S4, color information of the fur of the surrounding region of the eye in FIG. 15 is extracted from the input image, and, as in the fur color identification process S1, the fur color in the surrounding region of the eye of the horse H is extracted as an identification result. In the underarm hair identification process S5, color information of the underarm region 47 in FIG. 15 is extracted from the input image, and, as in the fur color identification process S1, the hair color of the underarm of the horse H is extracted as an identification result. In the belly fur color identification process S6, color information of the belly region 48 in FIG. 15 is extracted from the input image, and, as in the fur color identification process S1, the fur color of the belly of the horse H is extracted as an identification result. In the nose peripheral region fur color identification process S7, color information of the peripheral region 49 of the nose in FIG. 15 is extracted from the input image, and, as in the fur color identification process S1, the fur color of the peripheral region of the nose of the horse H is extracted as an identification result.
  • In the hair color decision process S[0254] 8, the colors identified in the processes S1 to S7 are compared with the combinations of colors stored in the hair color data 21-2, and the fur color of the horse H under identification is decided. By this decision, the fur color of the horse H is decided as “kurige” (chestnut), “tochi-kurige” (tochi-chestnut), “kage” (dark brown), “kuro-kage” (blackish dark brown), “aokage” (dark bluish black), “aoge” (bluish black) or “ashige” (white mixed with black or brown).
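The hair color decision process S8 amounts to matching the colors identified in S1 to S7 against stored combinations in the hair color data 21-2. A simplified sketch, keyed on only three of the identified colors; the combinations below are assumptions for illustration, not the actual contents of the hair color data:

```python
# Hypothetical hair color data 21-2: (fur, mane/tail, legs) -> coat name.
HAIR_COLOR_DATA = {
    ("reddish brown", "reddish brown", "reddish brown"): "kurige",
    ("reddish brown", "black",         "black"):         "kage",
    ("black",         "black",         "black"):         "aoge",
}

def decide_coat_name(fur, long_hair, legs, table=HAIR_COLOR_DATA):
    """Pick the coat name whose stored combination matches the most
    identified colors (a best-match rule standing in for the patent's
    unspecified comparison)."""
    def score(combo):
        return sum(a == b for a, b in zip(combo, (fur, long_hair, legs)))
    best = max(table, key=score)
    return table[best]
```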
  • FIG. 17 is a flowchart showing the processes of the head [0255] white mark identifier 22, and FIG. 18 is a diagram showing the extraction regions of FIG. 17. FIG. 19 is a diagram showing the white patterns (Part 1) of the horse's head, and FIG. 20 is a diagram showing the white patterns (Part 2) of the horse's head. FIG. 21 is a diagram showing the names of the white marks of the head. The processes of the head white mark identifier 22 will be described with reference to FIGS. 17 to 21.
  • The head [0256] white mark identifier 22 refers to the head white mark data 22-1 stored in memory, not shown, extracts the white regions from the image taken by the front camera 33 out of the images from the image input unit 19, analyzes the image in the processes S11 to S15 of FIG. 17, obtains the names defined according to variations of the white patterns of the head as the external appearance data of the head, and outputs them as an identification result of the head white pattern. The names of the white patterns shown in FIG. 21 are stored in the head white mark data 22-1 along with the locations and sizes of the white marks.
  • The forehead white mark identification process S[0257] 11 extracts the white region 51 in the forehead in FIG. 18 from the input image, and selects the name of the pattern by referring to the head white mark data 22-1 according to the shape, the size, and the presence or absence of a pattern omission at the center portion. The patterns are divided into “star”, “large star”, “curved star”, “shooting star” or the like according to the white pattern of the forehead of the horse H under identification. When the pattern has a round shape, its size is large, the number is one, and no omission of the central portion is found in measurement, the “large star” shown in FIG. 19 is selected as its pattern name, or if the pattern is small, the “star” or “small star” is selected. If the pattern is not round and has a tailpiece, the “large shooting star” in FIG. 19 is selected. Or, if the white pattern is curved, the pattern is classified as a “curved star.” If the central portion is missing, the pattern is called a “ring star.”
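The rule-based selection of a forehead pattern name described above can be sketched as follows, assuming the shape and size measurements of the white region 51 have already been made. The boolean inputs and the decision order are an interpretation of the text, not a disclosed implementation:

```python
def classify_forehead_mark(is_round, is_large, has_tailpiece,
                           is_curved, center_missing):
    """Select a forehead white-mark name (process S11) from measured
    shape properties, following the decision rules in the text."""
    if center_missing:          # central portion missing -> "ring star"
        return "ring star"
    if is_curved:               # curved white pattern
        return "curved star"
    if has_tailpiece:           # not round, with a tailpiece
        return "large shooting star" if is_large else "shooting star"
    if is_round:                # round shape, sized by measurement
        return "large star" if is_large else "star"
    return "unknown"
```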
  • In the nose bridge white mark identification process S[0258] 12, the white region 52 on the bridge in FIG. 18 is extracted from the input image, the width of the white region is measured, and a pattern name in FIG. 20 is selected by referring to the head white mark data 22-1. In the nose white mark identification process S13, the white region of the nose 53 in FIG. 18 is extracted from the input image, the size of the white region is measured, and a pattern name in FIG. 20 is selected by referring to the head white mark data 22-1. In the lip white mark identification process S14, the white region of the lip 54 in FIG. 18 is extracted from the input image, the width of the pattern is measured, and a pattern name is selected from FIG. 20 by referring to the head white mark data 22-1. In the forehead-nose white mark identification process S15, the white region of the forehead-nose region in FIG. 18 is extracted from the input image, its size and width are measured, and a pattern name is selected from FIG. 20 by referring to the head white mark data 22-1.
  • FIG. 22 is a flowchart showing the processes of the [0259] whirl identifier 23 in FIG. 11. FIGS. 23(a) to 23(d) are diagrams showing the locations for detecting whirl patterns. FIG. 24 is a diagram showing the locations and the names of whirls of the horse.
  • The [0260] whirl identifier 23 refers to data on whirl pattern 23-1 and whirl location data 23-2 stored in memory, not shown, carries out the processes S21 and S22 in FIG. 22, and obtains whirl data of each horse from the images taken by the cameras 11 to 13 of the image input unit 10.
  • Whirl pattern data for each fur color is stored in whirl pattern data [0261] 23-1, and the names and locations of whirls in FIG. 24 are stored in whirl location data 23-2. The numbers of the whirl locations in FIG. 24 correspond to the numbers allocated to the portions of a horse in FIG. 23.
  • In the whirl extraction process S[0262] 21 in FIG. 22, a whirl pattern for the fur color obtained by the fur color identifier 21 is selected from the whirl pattern data 23-1, and the locations 61 to 79 where there are whirl patterns are extracted from the input image. In the whirl location identification process S22, whirl names are obtained from the whirl location data 23-2 according to the whirl locations 61 to 79 obtained in the whirl extraction process S21.
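Process S22 is essentially a lookup from detected whirl locations to whirl names in the whirl location data 23-2. A sketch with placeholder entries; the numbers and names below are assumptions, not the actual entries of FIG. 24:

```python
# Hypothetical whirl location data 23-2: location number -> whirl name.
WHIRL_LOCATION_DATA = {
    61: "forehead whirl",
    62: "cheek whirl",
    79: "flank whirl",
}

def identify_whirls(detected_locations, data=WHIRL_LOCATION_DATA):
    """Process S22 sketch: translate the whirl locations found in
    process S21 into whirl names, skipping unknown locations."""
    return [data[loc] for loc in detected_locations if loc in data]
```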
  • FIG. 25 is a flowchart showing the processes of the leg [0263] white mark identifier 24 in FIG. 11, FIG. 26 is a diagram showing the extraction regions in FIG. 25, and FIG. 27 is a diagram showing the names of white marks on the legs.
  • The leg [0264] white mark identifier 24 refers to the leg white mark data 24-1 stored in memory, not shown, extracts the white regions from the image from the image input unit 19, analyzes the image by the processes S31 and S32 in FIG. 25, obtains the names defined according to variations of the leg white patterns as external appearance data of the horse, and outputs them as a result of leg white mark identification. The names of the white marks in FIG. 27 are stored in the leg white mark data 24-1 along with the locations and sizes of the white marks.
  • In the hoof white mark identification process S[0265] 31 in FIG. 25, the leg white mark identifier 24 extracts the white mark in the region 81 at the hoof in FIG. 26, measures its size and circumference length, and selects the appropriate name from the leg white mark data 24-1. For example, if the area of the white mark is small, “tiny white” is selected. In the leg white mark identification process S32, the leg white mark identifier 24 extracts the white mark in the leg region 82 in FIG. 26, measures the size and the circumference length of the white region, and selects the appropriate name from the leg white mark data 24-1.
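The size-based selection in the hoof white mark identification process S31 can be sketched as simple thresholding on the measured white area. The thresholds, and all names other than “tiny white” (which the text gives as an example), are assumptions:

```python
def classify_hoof_mark(white_area_ratio):
    """Process S31 sketch: pick a hoof white-mark name from the measured
    area of the white region (as a fraction of the hoof region).
    Thresholds are illustrative assumptions."""
    if white_area_ratio == 0:
        return None             # no white mark present
    if white_area_ratio < 0.1:
        return "tiny white"     # small area, as in the text's example
    if white_area_ratio < 0.5:
        return "small white"    # assumed intermediate name
    return "white"              # assumed name for a large mark
```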
  • FIG. 28 is a diagram showing a horse's name and its feature stored in [0266] database 31 in FIG. 11.
  • The horse [0267] name output unit 30 searches the database 31 by a feature name selected by the image analysis means 20. A plurality of horse names are associated with the features of the horses when they are stored in the database 31. Therefore, by searching the database 31 by the names of features selected by the image analysis means 20, the name of the horse under identification can be obtained. For example, when the names of features selected and extracted by the portions 21 to 24 of the image analysis means 20 are “kurige”(chestnut), “shooting star nose-bridge white”, “shumoku” and “right front small white”, the name of the horse is obtained as “abcdef.”
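Searching the database 31 by feature names can be sketched as a match over stored feature sets. The entries below are illustrative, with “abcdef” and its features taken from the example in the text and the second entry invented for contrast:

```python
# Hypothetical database 31: horse name -> set of feature names.
DATABASE = {
    "abcdef": {"kurige", "shooting star nose-bridge white",
               "shumoku", "right front small white"},
    "ghijkl": {"ashige", "large star", "left hind small white"},
}

def find_horse(feature_names, db=DATABASE):
    """Return the names of horses whose stored feature sets exactly
    match the features selected by the image analysis means 20."""
    wanted = set(feature_names)
    return [name for name, feats in db.items() if feats == wanted]
```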
  • The individual identification system comprises an [0268] image input unit 19 for taking images of a horse H under identification and acquiring images of the horse, image analysis means 20 for extracting the external features peculiar to the horse, such as the fur color, head white mark, whirls and leg white mark, from the image, and a horse name output unit 30 for obtaining the name of the horse H by searching the database 31. Therefore, it becomes possible to prevent the features from being overlooked, and to preclude wrong decisions, such as misjudgment, so that steady identification of horses can be performed without relying on the skill of the operator.
  • Since the features are assigned feature names and compared, the features of the horses can be easily stored in [0269] database 31 and measurements of horses can be obviated, which were required previously. Furthermore, a far smaller storage capacity is required of database 31 than in the prior art in which a plurality of images of a horse were stored.
  • <[0270] Embodiment 4>
  • FIG. 29 is a block diagram of the individual identification system according to a fourth embodiment of the present invention. [0271]
  • This individual identification system comprises the [0272] image input unit 19, the image analysis means 20 and the database 31 like in the third embodiment, and further includes a horse name input unit 90 and a true/false decision unit 91, which are not used in the preceding embodiments. The true/false decision unit 91 is provided in place of the horse name output unit 30 in the third embodiment, and is connected to the output side of the image analysis means 20, and also to the output side of the database 31. The horse name input unit 90 is adapted to supply the horse names to the database 31.
  • In the third embodiment, if the name of the horse H under identification is not known, by extracting the features from the images taken, the name of the horse is found. In the individual identification system according to the fourth embodiment, when the name of the horse to be identified is already known, while pictures are being taken, a decision is made whether or not this horse coincides with the horse H under identification. [0273]
  • Regarding this individual identification system, description will be made of the process in which a decision is made of whether or not this horse corresponds to the horse H under identification. [0274]
  • First of all, the name of the horse to be identified is input to the horse [0275] name input unit 90. The horse name input unit 90 sends the supplied horse name to the database 31 directing it to search the names of features corresponding to the name.
  • On the other hand, the [0276] image input unit 19 and the image analysis means 20 select, by the same processes as in the third embodiment, the names of features as the external appearance data of the horse H, thereby extracting the features.
  • The true/[0277] false decision unit 91 compares the feature names of the horse H fetched from the database 31 with the feature names supplied from the image analysis means 20, and if they coincide or are mutually close, outputs information that the horse H under identification in the process of picture taking corresponds to the horse whose name was input to the horse name input unit 90. Or if they do not coincide, the true/false decision unit 91 outputs information that the horse does not correspond to the horse the name of which was input to the horse name input unit 90.
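The comparison performed by the true/false decision unit 91 can be sketched as a feature-set match with a closeness threshold. The threshold is an assumption for illustration; the text says only that the feature names must coincide or be mutually close:

```python
def verify_horse(db_features, observed_features, min_matches=3):
    """True/false decision sketch: compare the feature names fetched
    from the database 31 with those extracted by the image analysis
    means 20, and accept when enough of them match.  The min_matches
    threshold is an assumed stand-in for 'mutually close'."""
    matches = len(set(db_features) & set(observed_features))
    return matches >= min_matches
```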
  • As has been described, the individual identification system according to the fourth embodiment, which includes the [0278] image input unit 19 and the image analysis means 20 and which further adds the horse name input unit 90 and the true/false decision unit 91, can decide whether or not the horse under identification is the horse whose name was input, by comparing the features extracted from the input image with the features obtained from the database by inputting the horse's name when the name is known. As a result, the time required for searching data for identification of the horse can be reduced substantially.
  • The present invention is not limited to the above embodiments but various modifications are possible. Possible modifications are as follows. [0279]
  • (1) In the image analysis means [0280] 20 according to the above embodiments, the fur colors, the head white patterns, the whirl locations, and the leg white patterns, which represent the physical features of the horse H, are extracted, and the names of the features are stored in the database 31. However, an arrangement may be made such that scars and other marks are extracted by image analysis and searched for in the database 31.
  • (2) In the third embodiment, [0281] database 31 is organized such that a horse's name is obtained from the external physical features of the horse, but data used for retrieval of a horse's name may include the registration number, date of birth, blood type, pedigree registration, breed (e.g., thoroughbred male), producer, producer's address, producing district, father's name, mother's name, etc.
  • (3) In the foregoing embodiments, description has been made of the individual identification system for a horse H, but this invention may obviously be applied to other animals so long as the animal has similar external features to those of the horses, and may further be applied to things other than animals by using other external features and names. [0282]
  • As has been described, according to the third embodiment, the individual identification system is formed by the image input unit for inputting images of an individual under identification, which are taken from different angles; image analysis means for extracting the features of the individual under identification obtained by image analysis of the input image; and an output unit added with a database. Therefore, identification of an individual, a horse for example, can be performed steadily without overlooking the features. [0283]
  • According to the fourth embodiment, the individual identification system comprises the image input unit, the image analysis means, the name input means and the true/false decision unit, so that a decision can be made as to whether the individual under identification is true or false. [0284]

Claims (25)

What is Claimed is:
1. A system for identifying individuals, comprising:
an image memory means for storing an input image of an individual to be identified;
a dictionary for having data on the features of collation objects stored in advance;
a dictionary image memory means for storing dictionary images as a basis on which to extract data on the features of said collation objects;
an individuals identifying means for analyzing the input image held in said image memory means and comparing said data on the features stored in said dictionary to thereby identify the individual;
an identification result analyzing means for analyzing the identification result by said individuals identifying means and detecting that area of the region used for identification in said input image which does not agree with the dictionary data;
a result output control means for, according to identification result by said individuals identifying means, issuing an analyze command to said identification result analyzing mean and, according to analysis result by said identification result analyzing means, issuing a command to display said area of disagreement to said identification result analyzing means, and deciding whether or not to display said input image and said dictionary image;
an image output control means for, according to a result of decision by said result output control means, controlling display of said area of disagreement, said input image and said dictionary image; and
a display for displaying an image output by said image output control means.
2. A system for identifying individuals according to
claim 1
, wherein when said input image agrees with said dictionary data to such an extent that the identification result of said individuals identifying means is larger than a preset threshold value, said result output control means makes a decision not to issue an analyze command to said identification result analyzing means nor issue a display command to said image output control means.
3. A system for identifying individuals according to
claim 1
, wherein said result output control means makes a decision not to issue an analyze command to said identification result analyzing means nor issue a display command to said image output control means when said input image agrees with said dictionary data to such an extent that the identification result of said individuals identifying means is larger than a preset first threshold value but smaller than a second threshold value.
4. A system for identifying individuals according to any of
claims 1
to
3
, wherein when said identification result analyzing means analyzes the area of disagreement between said input image and said dictionary data as larger than a preset value, said result output control means concludes that said input image generally differs from said dictionary data and makes a decision to cause only said input image to be displayed.
5. A system for identifying individuals, comprising:
an image memory means for storing an input image of an individual to be identified;
a dictionary for having data on the features of collation objects stored in advance;
a dictionary image memory means for storing dictionary images as a basis on which to extract data of the features of said collation objects;
an individuals identifying means for analyzing said input image held in said image memory means and comparing said data on the features stored in said dictionary to thereby identify the individual;
an identification result analyzing means for analyzing the identification result by said individuals identifying means and detecting that area of the region used for identification in said input image which does not agree with said dictionary data; and
a result output control means for making a decision not to display any image on the basis of a judgement that said input image generally differs from said dictionary data if said area of disagreement is larger than a predetermined value in the analysis of said identification result analyzing means.
6. A system for identifying individuals according to any of
claims 1
to
5
, wherein there is provided result output control means for making a decision to cause said input image and said area of disagreement to be displayed on the basis of a judgement that the input image partially differs from said dictionary data if the area of disagreement is smaller than a predetermined value in the analysis of said identification result analyzing means.
7. A system for identifying individuals according to any of
claims 1
to
5
, wherein there is provided result output control means for making a decision to cause only said area of disagreement to be displayed on the basis of a judgement that the input image partially differs from said dictionary data if the area of disagreement is smaller than a predetermined value in the analysis of said identification result analyzing means.
8. A system for identifying individuals according to any of
claims 1
to
7
, wherein said object to be identified is an iris in the eye.
9. A system for identifying individuals, comprising:
a recognition dictionary for having stored in advance data on features of an object to be identified and additional information peculiar to said object obtained from said object to be identified;
an individuals identifying means for identifying an individual by comparing said data on features of said object to be identified with data on features of said dictionary data;
an identification result deciding means for, when having made a decision not to input said additional information in a decision regarding whether to input said additional information according to identification result by said individuals identifying means, outputting identification result by said individuals identifying means as a final result, or, when having made a decision to input said additional information, issuing a command to obtain said additional information and also a command to conduct a re-identify process, and making a decision to terminate said re-identify process according to a result of the re-identify process, and outputting a final identification result;
an additional information inputting means for obtaining arriving additional information upon receiving said additional information from said identification result deciding means; and
a re-identifying means for, on receiving a command to conduct a re-identify process from said identification result deciding means, selecting said identification dictionaries containing all of said additional information acquired by said additional information inputting means, and outputting as the result of re-identification a dictionary having a closest possible value to data on the features of said object under identification among selected dictionaries.
10. A system for identifying individuals according to
claim 9
, wherein said recognition dictionary stores a plurality of items of additional information in advance, and wherein said re-analyzing means conducts said re-identify process at each reception of one item of said additional information obtained by said additional information input means.
11. A system for identifying individuals according to
claim 10
, wherein said identification result deciding means makes a decision to terminate said re-identify process at each re-identify process by said re-identify means.
12. A system for identifying individuals according to
claim 11
, wherein said identification result deciding means changes a criterion for re-identification at each re-identify process by said re-identify means.
13. A system for identifying individuals according to any of
claims 9
to
12
, wherein said identification of individuals is by identification of the iris in the eye, and wherein said additional information is about external features of an individual.
14. A system for identifying individuals, comprising:
an image input unit for taking pictures of an individual as an object to be identified from different camera angles and inputting images of said object;
an image analyzing means for analyzing said image and extracting the features of the external appearance of said object; and
an output unit for obtaining a name by which to identify an individual under identification according to said features.
15. A system for identifying individuals according to
claim 14
, wherein said image analyzing means extracts said features by selecting a name of a feature corresponding to said image for each of the elements of external appearance out of the names of predetermined features allotted to variations of respective elements of said external appearance.
16. A system for identifying individuals according to
claim 15
, further comprising a data base of data representing the features of the external appearance of a plurality of individuals by said names of features, said data being associated with the names of said individuals when said data is accumulated, wherein said output unit is so arranged as to search said data base, and obtain the names of individuals having the names of features corresponding to the names of features selected at said image analyzing means as the names of individuals under identification.
17. A system for identifying individuals according to
claim 15
or
16
, wherein said individuals to be identified and said plurality of objects are animals, and wherein, the elements of the external appearance, which are used in said image analyzing means, said data base and said output unit, are two or more items of the fur color, the white patterns of the head, the location of whirls, and white marks of the legs.
18. A system for identifying individuals according to
claim 17
, wherein said image analyzing means extracts not less than two items of the location of a scar of an animal under identification and the condition of spots other than white marks, wherein said data base is associated with the names of said plurality of animals, the location of scars or the condition of spots of each animal when data is accumulated, and wherein said output unit searches the data base using the location of the scar or the condition of spots extracted from said image to thereby obtain the name of the animal to be identified.
19. A system for identifying individuals, comprising:
a name input means for entering the name of an individual under identification;
an image input unit for inputting images of an individual under identification by taking images of said object;
an image analyzing means for analyzing said image and extracting the features of the external appearance of an individual under identification;
a data base for accumulating data representing the features of the external appearance of individuals associated with the names of a plurality of individuals, and searching data corresponding to the names input through said name input means to thereby output said data;
a true/false decision unit for making a decision of whether an individual under identification is true or not by comparing the features found at said data base with the features extracted by said image analyzing means.
20. A system for identifying individuals according to
claim 19
, wherein said image analyzing means extracts said features by selecting a name of a feature corresponding to said image for each of the elements of external appearance out of the names of predetermined features allotted to variations of respective elements of said external appearance.
21. A system for identifying individuals according to
claim 20
, wherein said data base is accumulated in such a way that items of data on the features of the external appearance of said plurality of individuals are associated with said names of the individuals, said items of data being represented by said names of the features, and wherein said true/false decision unit makes a decision of whether an individual under identification is true or not by comparing said names of the features found at said data base with said names of the features extracted by said image analyzing means.
22. A system for identifying individuals according to
claim 20
or
21
, wherein said individuals under identification and said plurality of individuals are animals, and wherein the elements of the external appearance, which are used in said image analyzing means, said data base and said output unit, are two or more items of the fur color, the white patterns of the head, the location of whirls, and white marks of the legs.
23. A system for identifying individuals according to
claim 20
or
21
, wherein said image analyzing means extracts not less than two items of the location of a scar and the condition of white marks of an animal under identification, wherein said data base associates the names of said plurality of animals with the condition of scars of each animal or the condition of spots when accumulating data, and wherein said true/false decision unit makes a decision of whether the animal under identification is true or not from the location of scars or the condition of spots extracted from said image.
24. A system for identifying individuals according to any of
claims 14
to
23
, wherein said image input unit is formed by a plurality of cameras, disposed around the object under identification, for taking images of said object under identification.
25. A system for identifying individuals according to any of
claims 14
to
23
, wherein said image input unit is formed by a camera movable around an individual under identification to take images of said object under identification.
US09/918,835 1997-06-06 2001-08-01 System for identifying individuals Expired - Fee Related US6404903B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/918,835 US6404903B2 (en) 1997-06-06 2001-08-01 System for identifying individuals

Applications Claiming Priority (9)

Application Number Priority Date Filing Date Title
JP9-165117 1997-06-06
JP165117/1997 1997-06-06
JP9165117A JPH10340342A (en) 1997-06-06 1997-06-06 Individual identification device
JP9-177664 1997-06-18
JP9177664A JPH117535A (en) 1997-06-18 1997-06-18 Individual identification device
JP9-180047 1997-07-04
JP18004797A JPH1125270A (en) 1997-07-04 1997-07-04 Individual identifying device
US09/090,905 US6373968B2 (en) 1997-06-06 1998-06-05 System for identifying individuals
US09/918,835 US6404903B2 (en) 1997-06-06 2001-08-01 System for identifying individuals

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/090,905 Division US6373968B2 (en) 1997-06-06 1998-06-05 System for identifying individuals

Publications (2)

Publication Number Publication Date
US20010046311A1 true US20010046311A1 (en) 2001-11-29
US6404903B2 US6404903B2 (en) 2002-06-11

Family

ID=27322434

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/090,905 Expired - Fee Related US6373968B2 (en) 1997-06-06 1998-06-05 System for identifying individuals
US09/918,835 Expired - Fee Related US6404903B2 (en) 1997-06-06 2001-08-01 System for identifying individuals

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/090,905 Expired - Fee Related US6373968B2 (en) 1997-06-06 1998-06-05 System for identifying individuals

Country Status (1)

Country Link
US (2) US6373968B2 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040208114A1 (en) * 2003-01-17 2004-10-21 Shihong Lao Image pickup device, image pickup device program and image pickup method
US20040228528A1 (en) * 2003-02-12 2004-11-18 Shihong Lao Image editing apparatus, image editing method and program
US20050185843A1 (en) * 2004-02-20 2005-08-25 Fuji Photo Film Co., Ltd. Digital pictorial book system, pictorial book searching method, and machine readable medium storing thereon pictorial book searching program
WO2007011395A2 (en) * 2004-10-19 2007-01-25 Sri International Method and apparatus for person identification
US7277561B2 (en) 2000-10-07 2007-10-02 Qritek Co., Ltd. Iris identification
US20070297673A1 (en) * 2006-06-21 2007-12-27 Jonathan Yen Nonhuman animal integument pixel classification
US20110019912A1 (en) * 2005-10-27 2011-01-27 Jonathan Yen Detecting And Correcting Peteye

Families Citing this family (57)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6700998B1 (en) * 1999-04-23 2004-03-02 Oki Electric Industry Co, Ltd. Iris registration unit
US20020113687A1 (en) * 2000-11-03 2002-08-22 Center Julian L. Method of extending image-based face recognition systems to utilize multi-view image sequences and audio information
KR100374708B1 (en) * 2001-03-06 2003-03-04 에버미디어 주식회사 Non-contact type human iris recognition method by correction of rotated iris image
US7184580B2 (en) * 2001-07-24 2007-02-27 Laurence Hamid Fingerprint scar recognition method and apparatus
US20040199781A1 (en) * 2001-08-30 2004-10-07 Erickson Lars Carl Data source privacy screening systems and methods
CN100345163C (en) * 2002-09-13 2007-10-24 松下电器产业株式会社 Iris coding method, personal identification method, iris code registration device, iris identification device, and iris identification program
US20040064709A1 (en) * 2002-09-30 2004-04-01 Heath James G. Security apparatus and method
JP2004206688A (en) * 2002-12-12 2004-07-22 Fuji Photo Film Co Ltd Face recognition method, face image cutting out method, and imaging apparatus
US7187787B2 (en) * 2003-03-14 2007-03-06 Intelitrac, Inc. Method and apparatus for facial identification enhancement
US20060222212A1 (en) * 2005-04-05 2006-10-05 Yingzi Du One-dimensional iris signature generation system and method
US8260008B2 (en) 2005-11-11 2012-09-04 Eyelock, Inc. Methods for performing biometric recognition of a human eye and corroboration of same
JP4442571B2 (en) * 2006-02-10 2010-03-31 ソニー株式会社 Imaging apparatus and control method thereof
US8364646B2 (en) 2006-03-03 2013-01-29 Eyelock, Inc. Scalable searching of biometric databases using dynamic selection of data subsets
WO2008039252A2 (en) 2006-05-15 2008-04-03 Retica Systems, Inc. Multimodal ocular biometric system
US8604901B2 (en) * 2006-06-27 2013-12-10 Eyelock, Inc. Ensuring the provenance of passengers at a transportation facility
US8121356B2 (en) 2006-09-15 2012-02-21 Identix Incorporated Long distance multimodal biometric system and method
WO2008033784A2 (en) * 2006-09-15 2008-03-20 Retica Systems, Inc. Long distance multimodal biometric system and method
WO2008091401A2 (en) 2006-09-15 2008-07-31 Retica Systems, Inc Multimodal ocular biometric system and methods
EP2076871A4 (en) 2006-09-22 2015-09-16 Eyelock Inc Compact biometric acquisition system and method
US7970179B2 (en) * 2006-09-25 2011-06-28 Identix Incorporated Iris data extraction
US8280120B2 (en) 2006-10-02 2012-10-02 Eyelock Inc. Fraud resistant biometric financial transaction system and method
JP4745207B2 (en) * 2006-12-08 2011-08-10 株式会社東芝 Facial feature point detection apparatus and method
WO2008131201A1 (en) 2007-04-19 2008-10-30 Global Rainmakers, Inc. Method and system for biometric recognition
US8953849B2 (en) 2007-04-19 2015-02-10 Eyelock, Inc. Method and system for biometric recognition
US20120239458A9 (en) * 2007-05-18 2012-09-20 Global Rainmakers, Inc. Measuring Effectiveness of Advertisements and Linking Certain Consumer Activities Including Purchases to Other Activities of the Consumer
US8212870B2 (en) 2007-09-01 2012-07-03 Hanna Keith J Mirror system and method for acquiring biometric data
US9036871B2 (en) 2007-09-01 2015-05-19 Eyelock, Inc. Mobility identity platform
US9117119B2 (en) 2007-09-01 2015-08-25 Eyelock, Inc. Mobile identity platform
US8553948B2 (en) * 2007-09-01 2013-10-08 Eyelock, Inc. System and method for iris data acquisition for biometric identification
US9002073B2 (en) 2007-09-01 2015-04-07 Eyelock, Inc. Mobile identity platform
WO2009102940A1 (en) * 2008-02-14 2009-08-20 The International Performance Registry, Llc System and method for animal identification using iris images
WO2009158662A2 (en) 2008-06-26 2009-12-30 Global Rainmakers, Inc. Method of reducing visibility of illumination while acquiring high quality imagery
JP4710978B2 (en) * 2009-01-09 2011-06-29 ソニー株式会社 Object detection apparatus, imaging apparatus, object detection method, and program
WO2010106644A1 (en) * 2009-03-17 2010-09-23 富士通株式会社 Data collating device and program
US8195044B2 (en) 2009-03-30 2012-06-05 Eyelock Inc. Biometric camera mount system
US20110119141A1 (en) * 2009-11-16 2011-05-19 Hoyos Corporation Siccolla Identity Verification Architecture and Tool
EP2509043B1 (en) * 2009-12-25 2014-08-13 Rakuten, Inc. Image generation device, image generation method, image generation program, and recording medium
US10043229B2 (en) 2011-01-26 2018-08-07 Eyelock Llc Method for confirming the identity of an individual while shielding that individual's personal data
EP2676223A4 (en) 2011-02-17 2016-08-10 Eyelock Llc Efficient method and system for the acquisition of scene imagery and iris imagery using a single sensor
US20120268241A1 (en) 2011-04-19 2012-10-25 Eyelock Inc. Biometric chain of provenance
WO2013028700A2 (en) 2011-08-22 2013-02-28 Eyelock Inc. Systems and methods for capturing artifact free images
JP6044079B2 (en) * 2012-02-06 2016-12-14 ソニー株式会社 Information processing apparatus, information processing method, and program
US9633263B2 (en) 2012-10-09 2017-04-25 International Business Machines Corporation Appearance modeling for object re-identification using weighted brightness transfer functions
US9495526B2 (en) 2013-03-15 2016-11-15 Eyelock Llc Efficient prevention of fraud
BR112016014692A2 (en) 2013-12-23 2017-08-08 Eyelock Llc SYSTEM FOR EFFICIENT IRIS RECOGNITION IN TERMS OF POWER
CN105981047A (en) 2014-01-06 2016-09-28 眼锁有限责任公司 Methods and apparatus for repetitive iris recognition
GB201408948D0 (en) * 2014-05-20 2014-07-02 Scanimal Trackers Ltd ID information for identifying an animal
KR102255351B1 (en) * 2014-09-11 2021-05-24 삼성전자주식회사 Method and apparatus for iris recognition
WO2016040836A1 (en) 2014-09-12 2016-03-17 Eyelock Llc Methods and apparatus for directing the gaze of a user in an iris recognition system
WO2016081609A1 (en) 2014-11-19 2016-05-26 Eyelock Llc Model-based prediction of an optimal convenience metric for authorizing transactions
BR112017015375A2 (en) 2015-01-20 2018-01-16 Eyelock Llc high quality infrared iris image acquisition and visible lens acquisition system
EP3269082B1 (en) 2015-03-12 2020-09-09 Eyelock Llc Methods and systems for managing network activity using biometrics
US10311299B2 (en) 2015-12-21 2019-06-04 Eyelock Llc Reflected optic camera module for iris recognition in a computing device
CA3024128A1 (en) 2016-05-18 2017-11-23 Eyelock, Llc Iris recognition methods and systems based on an iris stochastic texture model
US10534969B2 (en) 2017-02-24 2020-01-14 Eyelock Llc Systems and methods for providing illumination for iris biometric acquisition
CA3015802C (en) 2017-08-31 2021-06-22 Eyelock, Llc Systems and methods of biometric acquisition using positive optical distortion
US20200320334A1 (en) * 2019-04-05 2020-10-08 Ent. Services Development Corporation Lp Systems and methods for digital image-based object authentication

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4641349A (en) * 1985-02-20 1987-02-03 Leonard Flom Iris recognition system
US5291560A (en) 1991-07-15 1994-03-01 Iri Scan Incorporated Biometric personal identification system based on iris analysis
JP3436293B2 (en) * 1996-07-25 2003-08-11 沖電気工業株式会社 Animal individual identification device and individual identification system
US6229905B1 (en) 1997-03-26 2001-05-08 Oki Electric Industry Co., Ltd. Animal identification based on irial granule analysis
US6215891B1 (en) 1997-03-26 2001-04-10 Oki Electric Industry Co., Ltd. Eye image recognition method, eye image selection method, and system therefor
US6144754A (en) * 1997-03-28 2000-11-07 Oki Electric Industry Co., Ltd. Method and apparatus for identifying individuals
US6285780B1 (en) * 1997-03-28 2001-09-04 Oki Electric Industry Co., Ltd. Apparatus for identifying individual animals and image processing method
US6151403A (en) * 1997-08-29 2000-11-21 Eastman Kodak Company Method for automatic detection of human eyes in digital images

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7277561B2 (en) 2000-10-07 2007-10-02 Qritek Co., Ltd. Iris identification
US20040208114A1 (en) * 2003-01-17 2004-10-21 Shihong Lao Image pickup device, image pickup device program and image pickup method
US20040228528A1 (en) * 2003-02-12 2004-11-18 Shihong Lao Image editing apparatus, image editing method and program
US20050185843A1 (en) * 2004-02-20 2005-08-25 Fuji Photo Film Co., Ltd. Digital pictorial book system, pictorial book searching method, and machine readable medium storing thereon pictorial book searching program
US7450767B2 (en) * 2004-02-20 2008-11-11 Fujifilm Corporation Digital pictorial book system, pictorial book searching method, and machine readable medium storing thereon pictorial book searching program
WO2007011395A2 (en) * 2004-10-19 2007-01-25 Sri International Method and apparatus for person identification
WO2007011395A3 (en) * 2004-10-19 2007-09-20 Stanford Res Inst Int Method and apparatus for person identification
US20070242858A1 (en) * 2004-10-19 2007-10-18 Aradhye Hrishikesh B Method and apparatus for person identification
US7792333B2 (en) 2004-10-19 2010-09-07 Sri International Method and apparatus for person identification
US20110019912A1 (en) * 2005-10-27 2011-01-27 Jonathan Yen Detecting And Correcting Peteye
US20070297673A1 (en) * 2006-06-21 2007-12-27 Jonathan Yen Nonhuman animal integument pixel classification
US8064694B2 (en) * 2006-06-21 2011-11-22 Hewlett-Packard Development Company, L.P. Nonhuman animal integument pixel classification

Also Published As

Publication number Publication date
US6373968B2 (en) 2002-04-16
US6404903B2 (en) 2002-06-11
US20010040985A1 (en) 2001-11-15

Similar Documents

Publication Publication Date Title
US6373968B2 (en) System for identifying individuals
EP0989517B1 (en) Determining the position of eyes through detection of flashlight reflection and correcting defects in a captured frame
US8027521B1 (en) Method and system for robust human gender recognition using facial feature localization
US5450504A (en) Method for finding a most likely matching of a target facial image in a data base of facial images
US7881524B2 (en) Information processing apparatus and information processing method
US7856122B2 (en) Method and device for collating biometric information
KR100996066B1 (en) Face-image registration device, face-image registration method, face-image registration program, and recording medium
JP4543423B2 (en) Method and apparatus for automatic object recognition and collation
US20070122005A1 (en) Image authentication apparatus
US6185337B1 (en) System and method for image recognition
US20100205177A1 (en) Object identification apparatus and method for identifying object
Boehnen et al. A fast multi-modal approach to facial feature detection
US20050129331A1 (en) Pupil color estimating device
EP2557524A1 (en) Method for automatic tagging of images in Internet social networks
US20110013845A1 (en) Optimal subspaces for face recognition
JP2003346149A (en) Face collating device and bioinformation collating device
Monwar et al. Pain recognition using artificial neural network
JPH10269358A (en) Object recognition device
CN110929570B (en) Iris rapid positioning device and positioning method thereof
US12056955B2 (en) Information providing device, information providing method, and storage medium
Bustamin et al. A portable cattle tagging based on muzzle pattern
Subasic et al. Expert system segmentation of face images
CN116092157A (en) Intelligent facial tongue diagnosis method, system and intelligent equipment
US10726259B2 (en) Image processing method and system for iris recognition
JPH10340342A (en) Individual identification device

Legal Events

Date Code Title Description
FPAY Fee payment

Year of fee payment: 4

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20100611