WO2016157827A1 - Determination system, determination method, and determination program - Google Patents
Determination system, determination method, and determination program
- Publication number
- WO2016157827A1 (PCT/JP2016/001641)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- registration
- image
- unit
- determination
- identification information
- Prior art date
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B42—BOOKBINDING; ALBUMS; FILES; SPECIAL PRINTED MATTER
- B42D—BOOKS; BOOK COVERS; LOOSE LEAVES; PRINTED MATTER CHARACTERISED BY IDENTIFICATION OR SECURITY FEATURES; PRINTED MATTER OF SPECIAL FORMAT OR STYLE NOT OTHERWISE PROVIDED FOR; DEVICES FOR USE THEREWITH AND NOT OTHERWISE PROVIDED FOR; MOVABLE-STRIP WRITING OR READING APPARATUS
- B42D25/00—Information-bearing cards or sheet-like structures characterised by identification or security features; Manufacture thereof
- B42D25/20—Information-bearing cards or sheet-like structures characterised by identification or security features; Manufacture thereof characterised by a particular use or purpose
- B42D25/26—Entrance cards; Admission tickets
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B42—BOOKBINDING; ALBUMS; FILES; SPECIAL PRINTED MATTER
- B42D—BOOKS; BOOK COVERS; LOOSE LEAVES; PRINTED MATTER CHARACTERISED BY IDENTIFICATION OR SECURITY FEATURES; PRINTED MATTER OF SPECIAL FORMAT OR STYLE NOT OTHERWISE PROVIDED FOR; DEVICES FOR USE THEREWITH AND NOT OTHERWISE PROVIDED FOR; MOVABLE-STRIP WRITING OR READING APPARATUS
- B42D25/00—Information-bearing cards or sheet-like structures characterised by identification or security features; Manufacture thereof
- B42D25/30—Identification or security features, e.g. for preventing forgery
Definitions
- the present invention relates to a determination system that can determine the authenticity of an object or the authenticity of an owner by using object authentication technology.
- Authenticated items such as ID cards and concert tickets are subject to counterfeiting and theft. Therefore, a system that can confirm the authenticity of an object has been developed.
- An example of a system that can determine the authenticity of an object is described in Patent Document 1.
- the system of Patent Document 1 includes an input unit, a verification data generation unit, a verification reference data storage unit, and a verification unit, and operates as follows.
- the input means inputs the image data for verification displayed on the ID card to be verified.
- the collation data generation means converts the input image data into binary one-dimensional information.
- the collation reference data storage means stores binary one-dimensional information serving as a collation reference.
- the collation unit collates the binary one-dimensional information generated by the collation data generation unit with the binary one-dimensional information stored in the collation reference data storage unit.
- the system as described above has a problem that it can notify whether the object is authentic, but cannot notify whether the owner of the object is authentic.
- an object of the present invention is to provide a determination system capable of notifying the authenticity of an object and the authenticity of the owner of the object.
- the first system of the present invention includes: a storage unit that stores registration identification information of a registration object and a registered image of a specific part of the registration object in association with each other; an acquisition unit that acquires a designated image of a location designated by a user in an image obtained by imaging a presentation object, and acquires, from the storage unit, the registered image associated with the same registration identification information as the presentation identification information of the presentation object; a determination unit that determines whether the acquired designated image and the registered image match; and a notification unit that notifies the determination result of the determination unit.
- the present invention it is possible to provide a determination system capable of notifying the authenticity of an object and the authenticity of the owner of the object.
- The remaining drawings include a flowchart showing an example of the registration process of the second embodiment, a flowchart showing an example of the determination process of the second embodiment, a block diagram showing an example of the configuration of the third embodiment, a flowchart showing an example of the registration process of the third embodiment, a block diagram showing an example of the configuration of the fourth embodiment, and an example of the information stored in the storage unit.
- FIG. 1 is a block diagram showing a configuration example of the first embodiment.
- the determination system 2000 includes a reception unit 20, a transmission unit 21, a storage unit 22, an acquisition unit 23, a determination unit 24, and a notification unit 25.
- the storage unit 22 acquires the registration identification information of the registration object received by the reception unit 20 and the registered image of the specific part of the registration object. Then, the storage unit 22 stores the acquired registration identification information and the registered image of the specific part of the registration object in association with each other.
- the registration identification information may be, for example, an identification number of an object to be registered, a manufacturing number, information about an owner, information about an article, and the like. Further, the registration identification information may not be information for specifying one registration object, but may be information that can narrow down one or a plurality of registration objects from all the registration objects.
- the registration identification information may be input by a user of the system using an input device (not shown), or may be read from a registration object by a reading device (not shown). In that case, the receiving unit 20 receives registration identification information from an input device (not shown) or a reading device (not shown). Alternatively, the registration identification information may be input by a user of the system via an input unit (not shown) in the determination system 2000. In that case, the storage unit 22 acquires registration identification information from an input unit (not shown).
- the registered image may be generated from an image of a registration object captured by an imaging device (not shown).
- a processing unit (not shown) generates a registered image by cutting out a specific portion from an image obtained by capturing the registration target.
- the receiving unit 20 then receives the registered image from the imaging device (not shown).
- the registered image may be generated by an imaging device (not shown) imaging a specific part of the registration target. The specific location may be specified by the user or the system.
- FIG. 2 shows an example of the registration object and the specific part.
- FIG. 2 shows an image obtained by capturing a concert ticket 400 that is a registration object.
- the registered image may be an image generated by cutting out the specific portion 40 from the image obtained by capturing the concert ticket 400.
- the registered image may be an image obtained by capturing the specific portion 40 in the concert ticket 400.
- “ID 2100” printed on the concert ticket 400 is an identification number of the concert ticket and corresponds to registration identification information. Therefore, the storage unit 22 acquires the registration identification information “2100” received by the receiving unit 20 and a registered image that is an image of the specific location 40, and stores them in association with each other.
- this system notifies the owner of the registration target object of the location of the specific part of the registration target object.
- the notification of the position of the specific location may be performed by a display unit (not shown).
- a display unit may mark and display the position of a specific location on an image obtained by imaging a registration target.
- the storage unit 22 stores the registration identification information “2100” and the registered image with the file name “bbb” in association with each other. That is, the registered image which is the image of the specific location 40 is stored in the storage unit 22 as the file name “bbb”.
- the storage unit 22 may store two or more registered images in association with one piece of registration identification information. That is, the storage unit 22 may acquire one piece of registration identification information received by the receiving unit 20 and two or more registered images, and store them in association with each other. In the example of FIG. 3, the storage unit 22 stores the registration identification information “2101”, a registered image with the file name “ccc1”, and an image with the file name “ccc2” in association with each other.
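- As a purely illustrative sketch (not part of the patent disclosure), the association kept by the storage unit 22 can be pictured as a simple mapping from registration identification information to one or more registered image files; all names below are hypothetical.

```python
# Hypothetical in-memory stand-in for the storage unit 22 (cf. FIG. 3):
# registration identification information -> registered image file names.
registration_store = {
    "2099": ["aaa"],
    "2100": ["bbb"],
    "2101": ["ccc1", "ccc2"],  # two registered images linked to one ID
}

def store_registration(store, registration_id, image_files):
    """Associate registration identification information with registered images."""
    store.setdefault(registration_id, []).extend(image_files)

def registered_images_for(store, presentation_id):
    """Return the registered images linked to the same ID as the presentation ID."""
    return store.get(presentation_id, [])
```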
- the file name may be set by a processing unit (not shown) when the storage unit 22 stores a registered image, or may be set by a user of this system.
- the obtaining unit 23 obtains a designated image at a location designated by the user among images obtained by capturing the presentation object. That is, the acquisition unit 23 acquires the designated image received by the reception unit 20.
- the designated image is an image of a part designated by the user among images obtained by capturing the presentation object.
- FIG. 4 shows an example of the presentation object and the location designated by the user.
- FIG. 4 shows an image obtained by capturing a concert ticket 500 that is an object to be presented. An image generated by cutting out a part designated by the user from the image obtained by capturing the concert ticket 500 is the designated image. If the location designated by the user (hereinafter referred to as the designated location) is the designated location 50, the designated image is an image of the designated location 50 in the image obtained by capturing the concert ticket 500. Similarly, if the user has designated the designated location 51, the designated image is an image of the designated location 51 in the image obtained by capturing the concert ticket 500.
- the specified image may be generated by a generation unit (not shown) by cutting out a portion specified by the user from an image obtained by capturing the presentation target object.
- a generation unit (not illustrated) acquires the position information of the location designated by the user, which is received by the reception unit 20. Furthermore, the generation unit (not shown) acquires the image obtained by capturing the presentation object received by the reception unit 20. Then, the generation unit (not shown) generates the designated image by cutting out the designated location from that image.
- the acquisition unit 23 acquires a designated image from a generation unit (not shown).
- the acquisition unit 23 acquires a registration image associated with the same registration identification information as the presentation identification information of the presentation object from the storage unit 22. Specifically, the acquisition unit 23 acquires the presentation identification information received by the reception unit 20. Then, the acquisition unit 23 acquires a registration image associated with the same registration identification information as the acquired presentation identification information from the storage unit 22.
- the presentation identification information may be anything as long as it can identify the presentation object. That is, the presentation identification information may be the same information as the registration identification information. That is, the identification number of an object to be presented, a manufacturing number, information about an owner, information about an article, and the like may be used. Further, the presentation identification information may not be information for specifying one presentation object, but may be information that can narrow down one or more presentation objects from all the presentation objects.
- the presentation identification information may be input by a user of the system using an input device (not shown), or may be read from a presentation object by a reading device (not shown). In that case, the receiving unit 20 receives the presentation identification information from an input device (not shown) or a reading device (not shown). Alternatively, the presentation identification information may be input by a user of the system via an input unit (not shown) in the determination system 2000. In that case, the acquisition unit 23 acquires presentation identification information from an input unit (not shown).
- the acquisition unit 23 acquires the presentation identification information “2100” received by the reception unit 20. Then, the acquisition unit 23 acquires from the storage unit 22 a registered image associated with the registration identification information “2100” that is the same as the presentation identification information. That is, when the storage unit 22 stores the information illustrated in FIG. 3, the acquisition unit 23 acquires the image with the file name “bbb” associated with the registration identification information “2100” from the storage unit 22.
- the determination unit 24 determines whether the designated image acquired by the acquisition unit 23 matches the registered image acquired by the acquisition unit 23.
- Various existing object authentication techniques can be used for the determination by the determination unit 24.
- the determination unit 24 can determine whether images match by a method using pattern matching, a method using graph matching, a method using the Euclidean distance between feature amounts, and the like.
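- The following sketch shows one simple way such a match could be computed, using the Euclidean distance between pixel vectors; it is only an assumption for illustration, and a practical system would use more robust features or pattern matching as noted above.

```python
import numpy as np

def images_match(designated_img, registered_img, threshold=2500.0):
    """Rudimentary match check between two grayscale image arrays.

    Both arguments are assumed to be numpy arrays of the same shape.
    The Euclidean distance between their flattened pixel vectors is
    compared against a threshold chosen for illustration only.
    """
    a = designated_img.astype(np.float64).ravel()
    b = registered_img.astype(np.float64).ravel()
    return float(np.linalg.norm(a - b)) <= threshold
```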
- the determination unit 24 determines that the designated image and the registered image match, it can be said that the position of the designated portion and the specific portion are the same, and the presentation object and the registration object are the same. This is because even if the presentation object and the registration object are the same, if the position of the designated place and the specific place are different, the images to be judged are different and do not match. Moreover, even if the position of the designated place and the specific place is the same, if the presentation object and the registration object are not the same, the determination target images are different and do not match. Therefore, when the determination unit 24 determines that the specified image and the registered image match, the position of the specified location and the specific location are the same, and the presentation object and the registration object are the same.
- the owner of the registration object and the owner of the presentation object are also the same. This is because the present system notifies the owner of the registration object of the position of the specific location in the registration object, so only the owner of the registration object knows the position of the specific location. Therefore, when the determination unit 24 determines that they match, it can be said that the owner of the presentation object is the true owner (the owner at the time of registration) and that the presentation object is the true object (the object at the time of registration).
- the determination unit 24 determines, for example, whether the designated image of the designated location 50 in the concert ticket 500 matches the registered image (file name “bbb”) of the specific location 40 in the concert ticket 400. When the determination unit 24 determines that they match, the designated location 50 and the specific location 40 are at the same position in the concert ticket. Furthermore, the concert ticket 400 and the concert ticket 500 are the same.
- the determination unit 24 likewise determines whether the designated image of the designated location 51 in the concert ticket 500 matches the registered image with the file name “bbb”.
- when the determination unit 24 determines that they do not match, the position of the designated location 51 and that of the specific location 40 in the concert ticket are not the same, or the concert ticket 400 and the concert ticket 500 are not the same.
- the notification unit 25 notifies the determination result of the determination unit 24. Specifically, the notification unit 25 acquires a determination result from the determination unit 24. Then, the acquired determination result is notified. For example, the notification unit 25 notifies the determination result to a display (not shown) via the transmission unit 21. In the case of coincidence determination, for example, a display (not shown) may display such as “successful collation. Both owner and object are true”. In the case of a determination that they do not match, for example, a display (not shown) may display “Matching failure. Owner or object is false”.
- the notification by the notification unit 25 is not limited to that displayed on the display.
- the notification unit 25 may notify the determination result by sending audio to a speaker (not shown) via the transmission unit 21.
- the notification unit 25 may synthesize the voice with a voice synthesis unit (not shown).
- FIG. 5 is a flowchart showing the registration process of the first embodiment.
- the registration process is a process of the present system until the storage unit 22 stores the registration identification information of the registration target object and the registration image of the specific part of the registration target object in association with each other.
- the storage unit 22 acquires the registration identification information of the registration object received by the reception unit 20 and the registered image of the specific part of the registration object. Then, the storage unit 22 stores the acquired registration identification information and the registered image of the specific part of the registration object in association with each other (S1).
- FIG. 6 is a flowchart showing the determination process of the first embodiment.
- the determination process is the processing of the present system from when the acquisition unit 23 acquires the designated image and the registered image until the notification by the notification unit 25 is performed.
- the acquisition unit 23 acquires the designated image received by the reception unit 20.
- the designated image is an image of the location designated by the user in the image obtained by capturing the presentation object. Furthermore, the acquisition unit 23 acquires the registered image associated with the same registration identification information as the presentation identification information of the presentation object from the storage unit 22. Specifically, the acquisition unit 23 acquires the presentation identification information received by the reception unit 20. Then, the acquisition unit 23 acquires the registered image associated with the same registration identification information as the acquired presentation identification information from the storage unit 22.
- the determination unit 24 determines whether the designated image acquired by the acquisition unit 23 matches the registered image acquired by the acquisition unit 23 (S3).
- the notification unit 25 acquires the determination result from the determination unit 24 and notifies the acquired determination result. In the case of the coincidence determination result, the notification unit 25 notifies the determination result that coincides (S4). The notification unit 25 notifies the determination result to a display (not shown) via the transmission unit 21. In the case of coincidence determination, for example, a display (not shown) may display such as “successful collation. Both owner and object are true”. On the other hand, in the case of a determination result that does not match, the notification unit 25 notifies the determination result that it does not match (S5). As in the case of matching determination, the notification unit 25 notifies the determination result to a display (not shown) via the transmission unit 21. A display (not shown) displays the determination result. In the case of a determination that they do not match, for example, a display (not shown) may display “Matching failure. Owner or object is false”.
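- As a hedged sketch of how steps S3 to S5 could be strung together (reusing the hypothetical helpers sketched earlier; the image loader and the notify callback are assumptions of this sketch, not elements of the disclosure):

```python
def determination_process(store, presentation_id, designated_img, load_image, notify):
    """Illustrative flow corresponding to FIG. 6: compare, then notify."""
    files = registered_images_for(store, presentation_id)
    if not files:
        notify("Matching failure. Owner or object is false.")
        return False
    registered_img = load_image(files[0])               # fetch the registered image
    if images_match(designated_img, registered_img):    # S3: do the images match?
        notify("Successful collation. Both owner and object are true.")  # S4
        return True
    notify("Matching failure. Owner or object is false.")                # S5
    return False
```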
- the first embodiment may have the configuration shown in FIG.
- a determination system 1000 illustrated in FIG. 7 includes a storage unit 100, an acquisition unit 101, a determination unit 102, and a notification unit 103. That is, the first embodiment may not include the reception unit 20 and the transmission unit 21. Furthermore, each component in the determination system 1000 may be realized by any hardware and in any combination.
- the storage unit 100 stores the registration identification information of the registration object and the registration image of the specific part of the registration object in association with each other.
- the acquisition unit 101 acquires a designated image of a location designated by the user in an image obtained by capturing the presentation object. Furthermore, the acquisition unit 101 acquires the registered image associated with the same registration identification information as the presentation identification information of the presentation object from the storage unit 100.
- the determination unit 102 determines whether the designated image acquired by the acquisition unit 101 matches the registered image.
- the notification unit 103 notifies the determination result of the determination unit 102.
- FIG. 8 is an example of a hardware configuration of the determination system 1001.
- the communication control unit 8 communicates with an external device via a network.
- the RAM 9 (Random Access Memory) has a capacity for storing various data necessary for realizing the present embodiment.
- the large-capacity storage unit 10 is a storage medium that stores data such as a database necessary for realizing the present embodiment and an application program executed by the CPU 11 (Central Processing Unit) in a nonvolatile manner.
- the CPU 11 is a processor for arithmetic control, and implements each functional unit of the present invention by executing a program. Specifically, the CPU 11 implements the acquisition unit 101, the determination unit 102, and the notification unit 103 of the present invention by executing a program.
- the large-capacity storage unit 10 can realize the storage unit 100.
- the CPU 11 can realize the acquisition unit 101, the determination unit 102, and the notification unit 103.
- the association between each hardware configuration and each component is not limited to that shown in FIG. 8.
- the acquisition unit 101, the determination unit 102, and the notification unit 103 may be realized by different CPUs.
- FIG. 9 is a flowchart showing the operation of the determination system 1000.
- the storage unit 100 stores the registration identification information of the registration object and the registration image of the specific part of the registration object in association with each other (S6).
- the acquisition unit 101 acquires a designated image at a location designated by the user among images obtained by capturing the presentation target object. Furthermore, the acquisition unit 101 acquires a registration image associated with the same registration identification information as the presentation identification information of the presentation object from the storage unit 100 (S7).
- the determination unit 102 determines whether the designated image acquired by the acquisition unit 101 matches the registered image (S8).
- the notification unit 103 notifies the determination result of the determination unit 102 (S9).
- the determination unit 102 determines whether or not the designated image of the location designated by the user in the image obtained by capturing the presentation object matches the registered image. When the determination unit 102 determines that they match, it can be said that the location designated by the user matches the specific location, and that the registration object and the presentation object are the same. Therefore, this system can notify the authenticity of the object and the authenticity of the owner of the object.
- in the second embodiment, the storage unit 22 stores the registration position information of the specific location in the registration object and the registration identification information in association with each other. That is, the storage unit 22 stores the registration position information in association with the registration identification information and the registered image.
- An example of information stored in the storage unit 22 is shown in FIG. 10. In FIG. 10, for example, the storage unit 22 stores the registration identification information “2099”, a registered image with the file name “aaa”, and the registration position information (4, 2), (6, 2), (4, 4), (6, 4) in association with each other.
- the registered position information is position information of a specific location in the registration object, and may be generated by a processing unit (not shown).
- a processing unit (not shown) first acquires an image obtained by imaging the registration object from an imaging unit (not shown).
- a processing unit (not shown) generates position information of a specific portion in the image obtained by capturing the registration target object. For example, in FIG. 2, a specific portion is shown in an image obtained by capturing a concert ticket 400 that is a registration target.
- a processing unit (not shown) acquires an image obtained by imaging the concert ticket 400 from an imaging device (not shown).
- when the specific location 40 is designated, the processing unit (not shown) generates the coordinates of the specific location 40 using the lower-left corner of the concert ticket 400 as the origin of the xy coordinates. That is, in FIG. 2, the processing unit (not shown) generates coordinates whose four end points (x, y) are (10, 5), (12, 5), (10, 7), (12, 7). Thereby, the processing unit (not shown) generates the registration position information of the specific location 40 as (10, 5), (12, 5), (10, 7), (12, 7). Then, the processing unit (not shown) stores the generated registration position information in the storage unit 22. Note that the position of the origin on the registration object may be arbitrarily determined by the system designer or user.
- the acquisition unit 23 acquires the registration position information associated with the same registration identification information as the presentation identification information from the storage unit 22. For example, as in the example of the first embodiment, it is assumed that the presentation identification information is “2100”. Then, the acquisition unit 23 refers to, for example, the information of FIG. 10 stored in the storage unit 22, and acquires the registration position information (10, 5), (12, 5), (10, 7), (12, 7) associated with the registration identification information “2100” that is the same as the presentation identification information.
- the acquisition unit 23 acquires specified position information of a specified image in an image obtained by imaging the presentation target.
- the designated image is an image of a designated portion designated by the user among images obtained by capturing the presentation target object.
- the designated position information may be generated by a processing unit (not shown), as in the case of the registration position information. That is, a processing unit (not shown) first acquires an image obtained by imaging the presentation object from an imaging unit (not shown). Then, the processing unit (not shown) generates position information of the designated location in the image obtained by capturing the presentation object.
- when the location designated by the user is the designated location 50, the processing unit (not shown) generates (10, 5), (12, 5), (10, 7), (12, 7) as the designated position information. On the other hand, when the location designated by the user is the designated location 51, the processing unit (not shown) generates (1, 1), (3, 1), (1, 3), (3, 3) as the designated position information. Then, the acquisition unit 23 acquires the designated position information from the processing unit (not shown).
- the determination unit 24 determines whether the registered position information acquired by the acquisition unit 23 matches the specified position information. Specifically, the determination unit 24 acquires the registered position information and the designated position information from the acquisition unit 23 and determines whether or not they match. Then, in the case of matching determination, the determination unit 24 further determines whether or not the designated image and the registered image match.
- the determination that the registered position information and the designated position information match does not require that the position information be completely the same.
- the determination unit 24 may determine that there is a match when the difference in position information between the registered position information and the specified position information is a predetermined value or less.
- the predetermined value may be arbitrarily set by a user or designer of the system.
- the determination unit 24 acquires, for example, the registration position information (10, 5), (12, 5), (10, 7), (12, 7) and the designated position information (10, 5), (12, 5), (10, 7), (12, 7), and determines that they match. Then, the determination unit 24 further determines whether or not the designated image matches the registered image.
- the determination as to whether or not the designated image matches the registered image is the same as in the first embodiment, and a description thereof will be omitted.
- on the other hand, the determination unit 24 determines, for example, that the registration position information (10, 5), (12, 5), (10, 7), (12, 7) and the designated position information (1, 1), (3, 1), (1, 3), (3, 3) do not match. In this case, the determination unit 24 may determine that the designated image and the registered image do not match without actually comparing the designated image with the registered image.
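- A minimal sketch of this two-stage check (coordinate comparison within a predetermined value, then image comparison only on success) might look like the following; the tolerance value is an assumption.

```python
def positions_match(registered_pos, designated_pos, tolerance=1.0):
    """Compare two rectangles given as four (x, y) corner points each."""
    return all(
        abs(rx - dx) <= tolerance and abs(ry - dy) <= tolerance
        for (rx, ry), (dx, dy) in zip(registered_pos, designated_pos)
    )

registered = [(10, 5), (12, 5), (10, 7), (12, 7)]   # specific location 40
designated_51 = [(1, 1), (3, 1), (1, 3), (3, 3)]    # designated location 51

print(positions_match(registered, registered))      # True: go on to compare the images
print(positions_match(registered, designated_51))   # False: notify a mismatch directly
```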
- a processing unit (not shown) may convert an image obtained by imaging a registration object or an image obtained by imaging a presentation object into a predetermined size when generating the registration position information or the designated position information. That is, the processing unit (not shown) may convert the size of each image so that the scales of the captured objects are unified.
- when an imaging unit (not shown) captures the registration object or the presentation object, the user of this system may capture each image so that the scale of each captured object is unified. For example, the user may keep the distance from the imaging unit to each object constant, or may keep the imaging magnification of the imaging unit constant. Thereby, the actual positions indicated by the coordinates of the registration position information and the designated position information are unified.
- FIG. 11 is a flowchart showing the registration process of the second embodiment.
- the operation of the second embodiment differs from the first embodiment in that it has S10. Since other operations are the same as those in the first embodiment, description thereof will be omitted.
- the storage unit 22 acquires registration position information from a processing unit (not shown), and stores the acquired registration position information and registration identification information in association with each other. Note that the storage of the registration identification information and registration image in S1 and the storage of registration position information in S10 may be performed at the same timing or at different timings. The storage unit 22 may store the registration position information and the registration identification information, and then store the registration identification information and the registered image in association with each other.
- FIG. 12 is a flowchart showing the determination process of the second embodiment.
- the operation of the second embodiment differs from the first embodiment in that it has S11 and S12. Since other operations are the same as those in the first embodiment, description thereof will be omitted.
- the acquisition unit 23 acquires registration position information associated with the same registration identification information as the presentation identification information received by the reception unit 20 from the storage unit 22. Furthermore, the acquisition unit 23 acquires specified position information of a specified image in an image obtained by imaging the presentation target object from a processing unit (not illustrated) (S11).
- the determination unit 24 acquires the registered position information and the specified position information acquired by the acquisition unit 23. Then, the determination unit 24 determines whether or not the acquired registered position information matches the specified position information (S12). In the case of matching, the determination unit 24 determines whether the designated image matches the registered image (S3). In the case of the determination that they do not match in S12, the notification unit 25 notifies the determination result that they do not match (S5).
- when the registration position information and the designated position information do not match, the determination unit 24 may pass to the notification unit 25 a determination result different from that in the case where the designated image and the registered image do not match.
- thereby, the notification unit 25 may perform a notification different from that in the case where the designated image and the registered image do not match.
- the notification unit 25 may, for example, notify that the owner does not match when the registration position information and the designated position information do not match. Specifically, the notification unit 25 may perform a notification such as “Collation failure. Owner is false.” Further, the notification unit 25 may notify that the object does not match when the designated image and the registered image do not match. Specifically, the notification unit 25 may notify “Collation failure. Object is false.”
- the determination unit 24 determines the match between the designated image and the registered image only when the registration position information matches the designated position information. Therefore, in the present system, when the registration position information and the designated position information do not match, it is not necessary to authenticate the images, so the processing load is reduced and the determination result can be notified to the user earlier. Furthermore, when the registration position information and the designated position information do not match, the notification unit 25 can make a notification different from the case where the designated image and the registered image do not match. Therefore, the user of this system can know whether the object is false or the owner of the object is false.
- FIG. 13 is a block diagram illustrating a configuration example of the third embodiment.
- the third embodiment is different from the first embodiment in that it includes a designation unit 26 and a generation unit 27.
- the designation unit 26 acquires an image obtained by capturing the registration object received by the reception unit 20. Then, the designation unit 26 designates a specific location from the acquired image.
- the image of the specific location is an image including information that can identify the registration target.
- An image of the registration target may be captured by an imaging unit (not shown). In that case, the receiving unit 20 receives an image of the registration target from an imaging unit (not shown).
- the information that can identify the registration object may be, for example, a pattern that is generated on the surface of a product or part when it is manufactured.
- the pattern may be a pattern composed of colors or a pattern composed of irregularities on the surface of a product or the like.
- An image including information that can identify a registration object is an image including a pattern generated on the surface of the product or part when the product or part is manufactured.
- an image including information that can identify a registration object may be, for example, an image obtained by imaging a region made of a material containing particles such as lame, a rough region that has been textured or sandblasted, a region made of metal, or a region made of fibers such as cloth.
- the image including information that can identify the registration target object may be an image that is captured after manufacturing a product or a part, and that is an image of scratches, distortions, stains, and the like unique to the registration target object.
- FIG. 2 shows an image obtained by capturing a concert ticket 400 that is a registration object.
- the concert ticket 400 includes regions 41, 42, and 43 made of a material containing glitter particles.
- the designation unit 26 first detects areas made of a material containing glitter particles. That is, the designation unit 26 detects the areas 41, 42, and 43. Then, the designation unit 26 randomly selects one of the detected areas and designates it as the specific location.
- the features of the areas detected by the designation unit 26 may be arbitrarily determined by the system designer or user. Further, the selection of an area by the designation unit 26 need not be random. For example, when a plurality of areas made of a material containing glitter particles are detected, the designation unit 26 may select the area most suitable for identifying the registration object.
- the specification of the specific part by the specification unit 26 may be performed by the user.
- the user inputs an arbitrary specific portion from an image obtained by capturing the registration target using an input unit (not shown) such as a mouse or a keyboard.
- the designation unit 26 acquires information on a specific part input by the user from an input unit (not shown), and designates the specific part.
- the generation unit 27 cuts out the image of the specific location designated by the designation unit 26 from the image obtained by capturing the registration object, and generates the registered image. Further, the generation unit 27 generates position information of the registered image (hereinafter referred to as registered image position information) in the image obtained by imaging the registration object. Specifically, in the present embodiment, the registered image position information is (10, 5), (12, 5), (10, 7), (12, 7). Then, the generation unit 27 stores the generated registered image position information in the storage unit 22.
- for example, the generation unit 27 cuts out, from the image obtained by capturing the registration object, the portion whose end points are the four points with coordinates (x, y) of (10, 5), (12, 5), (10, 7), (12, 7), and generates the registered image. Then, the generation unit 27 stores the generated registered image in the storage unit 22.
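- A possible sketch of this cutting-out step is given below using the Pillow library; the file name, the units-to-pixel scale, and the assumption that the document's coordinates (origin at the lower-left corner of the object) map directly onto pixels are all hypothetical.

```python
from PIL import Image  # Pillow, assumed to be available for this sketch

def cut_out_region(image_path, corners, units_per_pixel=1.0):
    """Cut out the region whose end points are the given (x, y) corners.

    The corners use the document's coordinate system (origin at the
    lower-left corner of the object), so the y axis is flipped before
    calling crop(), which expects top-left-origin pixel coordinates.
    """
    img = Image.open(image_path)
    xs = [x / units_per_pixel for x, _ in corners]
    ys = [y / units_per_pixel for _, y in corners]
    left, right = min(xs), max(xs)
    top, bottom = img.height - max(ys), img.height - min(ys)
    return img.crop((int(left), int(top), int(right), int(bottom)))

# e.g. cut_out_region("ticket_400.png", [(10, 5), (12, 5), (10, 7), (12, 7)])
```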
- FIG. 14 is a flowchart showing the registration process of the third embodiment.
- the designation unit 26 acquires an image obtained by capturing the registration object received by the receiving unit 20 (S13). Then, the designation unit 26 designates a specific location from the acquired image (S14).
- the generation unit 27 cuts out an image of a specific location designated by the designation unit 26 from an image obtained by capturing the registration target object, and generates a registration image (S15).
- the storage unit 22 stores the registered image generated by the generation unit 27 and the registration identification information received by the reception unit 20 in association with each other (S1).
- the designation unit 26 may receive the registration identification information from the receiving unit 20 together with the image obtained by capturing the registration object. Then, the generation unit 27 may store the registration identification information acquired from the designation unit 26 in the storage unit 22 together with the registered image.
- because the designation unit 26 designates the specific location, this system can set the specific location for each registration object individually. Since the system can thereby set, as the specific location, a location containing information that can identify the registration object, the accuracy of the determination by the determination unit 24 is high. As a result, the user of this system can know more accurately the authenticity of the object and the authenticity of the owner of the object.
- FIG. 15 is a block diagram illustrating a configuration example of the fourth embodiment.
- the fourth embodiment differs from the first embodiment in that it includes a display control unit 28 and a display unit 29.
- the storage unit 22 stores information that associates the object category, the position information of the region, and the priority in the region. In other words, the storage unit 22 acquires and stores information associated with the object category, the position information of the region, and the priority in the region received by the receiving unit 20.
- An example of information stored in the storage unit 22 is shown in FIG.
- the table of FIG. 16 associates the object category, the position information of the area, and the priority in the area. For example, a region having four endpoints whose coordinates (x, y) are (0, 0), (3, 0), (0, 2), and (3, 2) has a priority of 1. On the other hand, a region having four endpoints whose coordinates (x, y) are (3, 0), (11, 0), (3, 2), and (11, 2) has a priority of 2.
- the priority is a value indicating a degree suitable for a determination process by the determination unit 24.
- the priority is a numerical value representing the level of discriminating power for identifying a registered object and the difficulty of deterioration over time.
- a region having high discriminating power shows a large image difference between different registration objects, so when the presentation object and the registration object are different, there is a high possibility that the determination unit 24 determines that they do not match.
- areas made of a material containing particles such as lame, rough areas that have been textured or sandblasted, areas made of metal, and areas made of fibers such as cloth have high discriminating power, so their priority can be set high.
- even if the presentation object and the registration object are the same, a region subject to deterioration over time produces a difference between the image at the time of registration and the image at the time of determination, so the determination unit 24 is likely to determine that they do not match. For example, an area at a corner of an object, an edge area, or an area that easily deteriorates due to structural use is likely to deteriorate over time, and therefore may be given a low priority.
- the object category is a category for classifying registered objects.
- the object category may be a general name of an article to be registered, such as a concert ticket, an air ticket, a passport, or a license.
- the object category may be a more detailed category than the general name of the article of the registration object.
- the target object category may be a category in which the registration target object is classified by style or manufacturer, such as a concert ticket “type A” or a concert ticket “manufactured by XX”.
- the designer or user of this system may transmit information relating the object category, area position information, and priority in the area to the determination apparatus 202 from an unillustrated apparatus. That is, the information shown in FIG. 16 may be received by the receiving unit 20 from a device (not shown). Alternatively, a processing unit (not shown) in the determination apparatus 202 evaluates the level of discriminating power for identifying the registered object as described above or the difficulty of aging deterioration from the area of the image obtained by imaging the object, Accordingly, priority may be assigned to each area.
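- A hypothetical stand-in for the table of FIG. 16 is sketched below; only the two example regions quoted in the text are filled in, and the structure itself is an assumption made for illustration.

```python
# object category -> list of (region corner points, priority) pairs (cf. FIG. 16)
priority_table = {
    "concert ticket": [
        ([(0, 0), (3, 0), (0, 2), (3, 2)], 1),
        ([(3, 0), (11, 0), (3, 2), (11, 2)], 2),
    ],
}

def priorities_for(category):
    """Return the region/priority pairs stored for an object category."""
    return priority_table.get(category, [])
```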
- the display unit 29 displays an image obtained by capturing the registration target object.
- the display unit 29 displays the priority by superimposing the priority on an area indicated by the position information associated with the priority among the images to be displayed.
- the display by the display unit 29 is controlled by the display control unit 28.
- the display control unit 28 acquires an image captured by the receiving unit 20 that is an image of the registration target. Then, the display control unit 28 acquires the object category of the registration object received by the receiving unit 20.
- the object category may be input by a user of the system using an input device (not shown). In that case, the receiving unit 20 receives the object category input by the user from an input device (not shown).
- the user may input the object category when inputting the registration identification information.
- the user may input the object category “concert ticket” using an input device (not shown) when inputting the registration identification information (ID) “2100” of the concert ticket 400 that is the registration object. .
- the user may input the object category at a timing different from the input of the registration identification information.
- the display control unit 28 acquires, from the storage unit 22, the position information of the areas associated with the acquired object category and the priority in each area. Then, the display control unit 28 causes the display unit 29 to display the priority associated with each area superimposed on that area of the image obtained by capturing the registration object.
- An example of display by the display unit 29 is shown in FIG. In FIG. 17, priority is superimposed on an image obtained by imaging a registration target. For example, as shown in FIG. 16, a region having four endpoints whose coordinates (x, y) are (0, 0), (3, 0), (0, 2), (3, 2) has a priority of 1. Therefore, the display control unit 28 causes the display unit 29 to display “1” in the area.
- the user inputs an arbitrary specific portion from an image obtained by capturing the registration target object using an input unit (not shown) such as a mouse or a keyboard with reference to the display by the display unit 29.
- the designation unit 26 acquires information on a specific location input by the user from an input unit (not shown) and designates the specific location.
- FIG. 18 is a flowchart showing the operation of the registration process in the fourth embodiment.
- the storage unit 22 acquires and stores the information, received by the receiving unit 20, that associates the object category, the position information of the regions, and the priority in each region (S17).
- the display control unit 28 acquires an image obtained by capturing the registration object received by the receiving unit 20. Then, the display control unit 28 acquires the object category of the registration object received by the receiving unit 20 (S18). Further, the display control unit 28 acquires, from the storage unit 22, the position information of the areas associated with the acquired object category and the priority in each area. Then, the display control unit 28 causes the display unit 29 to display the priority associated with each area superimposed on that area of the image obtained by capturing the registration object (S19). The user refers to the display by the display unit 29 and inputs an arbitrary specific location from the image obtained by imaging the registration object using an input unit (not shown) such as a mouse or a keyboard (S20).
- the designation unit 26 acquires information regarding the specific location input by the user and designates the specific location.
- a processing unit (not shown) generates a registered image by cutting out a specific portion designated by the designation unit 26 from an image obtained by capturing the registration target.
- the storage unit 22 stores a registration image generated by a processing unit (not shown) in association with registration identification information (S1).
- since the display unit 29 of the present system displays the priority superimposed on the image obtained by capturing the registration object, the user of this system can grasp which locations are suitable for the determination process and designate the specific location accordingly.
- the display unit 29 and the display control unit 28 may not be provided.
- An example of the operation of the designation unit 26 in that case will be described below.
- the designation unit 26 designates the specific part so that the sum of the priorities associated with the area included in the specific part is equal to or greater than a predetermined threshold. Specifically, the designation unit 26 first acquires an image obtained by capturing the registration target received by the reception unit 20 and a target category of the registration target. Then, the specifying unit 26 acquires the position information of the area associated with the acquired object category and the priority in the area from the storage unit 22.
- the designation unit 26 designates one or a plurality of specific locations from the image obtained by capturing the registration target object. At this time, the designation unit 26 designates the specific location so that the sum of the priorities associated with the area included in the specific location is equal to or greater than a predetermined threshold.
- the predetermined threshold can be arbitrarily set by a designer or user of this system.
- FIG. 19 shows the designation of a specific part by the designation unit 26.
- FIG. 19 shows an image obtained by capturing a concert ticket 400 that is a registration target and the priority.
- the predetermined threshold is 5.
- the number of specific parts designated by the designation unit 26 is two.
- the candidate location 52, which is a candidate for the specific location, includes an area having a priority of 3.
- when a candidate location includes two or more areas associated with different priorities, the designation unit 26 may adopt the priority associated with the area occupying the largest proportion of the candidate location.
- for example, the priority associated with the area occupying the largest proportion of the candidate location 54 is 2. That is, the designation unit 26 may set the priority of the candidate location 54 to 2.
- the method for determining the priority when the candidate location includes two or more areas associated with different priorities is not limited to this.
- the designating unit 26 may employ an average value of two or more different priorities, or may calculate the priorities according to the proportion of each area occupied by candidate locations.
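- Two of the policies mentioned above (largest proportion and proportion-weighted average) could be sketched as follows; the overlap fractions are assumed to be supplied by whatever routine intersects the candidate location with the priority areas.

```python
def candidate_priority(overlaps, policy="largest"):
    """Derive one priority for a candidate location spanning several areas.

    'overlaps' is a list of (priority, fraction_of_candidate_covered) pairs.
    """
    if policy == "largest":
        # adopt the priority of the area occupying the largest proportion
        return max(overlaps, key=lambda pf: pf[1])[0]
    # otherwise: proportion-weighted average of the priorities
    total = sum(f for _, f in overlaps)
    return sum(p * f for p, f in overlaps) / total

print(candidate_priority([(2, 0.7), (1, 0.3)]))                  # -> 2
print(candidate_priority([(2, 0.7), (1, 0.3)], policy="avg"))    # -> 1.7
```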
- since the designation unit 26 designates the specific location so that the priority is equal to or higher than the threshold, a specific location suitable for the determination process by the determination unit 24 is designated. Therefore, the accuracy of the determination by the determination unit 24 is further improved, and the time and effort needed for the user to designate a specific location are reduced.
- FIG. 20 is a block diagram showing a configuration example of the fifth embodiment.
- FIG. 20 shows a functional unit block diagram showing components and a hardware configuration example for realizing them.
- the receiving unit 20 and the transmitting unit 21 may be realized by the communication control unit 1.
- the acquisition unit 23, the determination unit 24, the notification unit 25, the designation unit 26, the generation unit 27, and the display control unit 28 may be realized by the CPU 2.
- the storage unit 22 may be realized by the large-capacity storage unit 3.
- the display unit 29 may be realized by the display 4.
- the storage unit 22 may be in a device other than the determination device 203.
- the present invention can be applied, for example, to an apparatus that discriminates the authenticity of an object for which the authenticity of itself and its owner is unknown.
Landscapes
- Inspection Of Paper Currency And Valuable Securities (AREA)
- Image Analysis (AREA)
Abstract
Disclosed is a determination system etc. with which it is possible to report the authenticity of a target object and the authenticity of an owner of the target object. This determination system is provided with: a storage means that stores registration identification information for an object to be registered and a registration image of a specified section of the object to be registered in association with one another; an acquisition means that acquires a designated image of a section of an image of an object to be presented, said section being designated by the user, and acquires, from the storage means, the registration image that is associated with the same registration identification information as the presentation identification information for the object to be presented; a determination means that determines whether or not the acquired designated image and the registration image match; and a reporting means that reports the determination result of the determination means.
Description
The present invention relates to a determination system that can determine the authenticity of an object or the authenticity of an owner by using object authentication technology.
Authenticated items such as ID cards and concert tickets are subject to counterfeiting and theft. Therefore, a system that can confirm the authenticity of an object has been developed.
An example of a system that can determine the authenticity of an object is described in Patent Document 1. The system of Patent Document 1 includes an input unit, a verification data generation unit, a verification reference data storage unit, and a verification unit, and operates as follows. The input means inputs the image data for verification displayed on the ID card to be verified. The collation data generation means converts the input image data into binary one-dimensional information. The collation reference data storage means stores binary one-dimensional information serving as a collation reference. The collation unit collates the binary one-dimensional information generated by the collation data generation unit with the binary one-dimensional information stored in the collation reference data storage unit.
However, the system as described above has a problem that it can notify whether the object is authentic, but cannot notify whether the owner of the object is authentic.
Therefore, an object of the present invention is to provide a determination system capable of notifying the authenticity of an object and the authenticity of the owner of the object.
The first system of the present invention includes: a storage unit that stores registration identification information of a registration object and a registered image of a specific part of the registration object in association with each other; an acquisition unit that acquires a designated image of a location designated by a user in an image obtained by imaging a presentation object, and acquires, from the storage unit, the registered image associated with the same registration identification information as the presentation identification information of the presentation object; a determination unit that determines whether the acquired designated image and the registered image match; and a notification unit that notifies the determination result of the determination unit.
According to the present invention, it is possible to provide a determination system capable of notifying the authenticity of an object and the authenticity of the owner of the object.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. However, the components described in the following embodiments are exemplifications, and are not intended to limit the technical scope of the present invention.
(First embodiment)
(Configuration of the first embodiment)
A configuration example of the first embodiment will be described. FIG. 1 is a block diagram showing a configuration example of the first embodiment.
The determination system 2000 includes a reception unit 20, a transmission unit 21, a storage unit 22, an acquisition unit 23, a determination unit 24, and a notification unit 25.
The storage unit 22 acquires the registration identification information of the registration object and the registered image of a specific location of the registration object, both received by the reception unit 20. The storage unit 22 then stores the acquired registration identification information and the registered image of the specific location in association with each other.
The registration identification information may be, for example, an identification number of the registration object, a serial number, information about its owner, or information about the article. The registration identification information need not uniquely identify a single registration object; it is sufficient that it can narrow all registration objects down to one or several candidates. The registration identification information may be entered by a user of the present system through an input device (not shown), or may be read from the registration object by a reading device (not shown). In that case, the reception unit 20 receives the registration identification information from the input device (not shown) or the reading device (not shown). Alternatively, the registration identification information may be entered by a user of the present system through an input unit (not shown) in the determination system 2000. In that case, the storage unit 22 acquires the registration identification information from the input unit (not shown).
The registered image may be generated from an image of the registration object captured by an imaging device (not shown). In that case, a processing unit (not shown) generates the registered image by cutting out the specific location from the captured image of the registration object, and the reception unit 20 receives the registered image from the imaging device (not shown). Alternatively, the registered image may be generated by the imaging device (not shown) capturing only the specific location of the registration object. The specific location may be designated by the user or by the system.
FIG. 2 shows an example of a registration object and a specific location. FIG. 2 shows an image obtained by capturing a concert ticket 400, which is a registration object. The registered image may be an image generated by cutting out the specific location 40 from the captured image of the concert ticket 400, or an image obtained by capturing only the specific location 40 of the concert ticket 400. “ID 2100” printed on the concert ticket 400 is the identification number of the concert ticket and corresponds to the registration identification information. The storage unit 22 therefore acquires the registration identification information “2100” received by the reception unit 20 and the registered image, which is the image of the specific location 40, and stores them in association with each other.
The present system notifies the owner of the registration object of the position of the specific location on the registration object. The notification of the position of the specific location may be performed by a display unit (not shown). For example, as shown in FIG. 2, the display unit (not shown) may display the captured image of the registration object with the position of the specific location marked on it.
FIG. 3 shows an example of the information stored in the storage unit 22. As shown in FIG. 3, the storage unit 22 stores, for example, the registration identification information “2100” in association with the registered image having the file name “bbb”. That is, the registered image, which is the image of the specific location 40, is stored in the storage unit 22 under the file name “bbb”. The storage unit 22 may also store two or more registered images in association with one piece of registration identification information; that is, the storage unit 22 may acquire one piece of registration identification information and two or more registered images received by the reception unit 20 and store them in association with each other. In FIG. 3, for example, the storage unit 22 stores the registration identification information “2101” in association with the registered image having the file name “ccc1” and the image having the file name “ccc2”. The file names may be set by a processing unit (not shown) when the storage unit 22 stores the registered images, or by a user of the present system.
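As an illustration of the association shown in FIG. 3, the storage unit 22 could hold a simple mapping from registration identification information to one or more registered-image file names. The following Python sketch is not part of the embodiment; the class and method names (RegistrationStore, register, lookup) are hypothetical.

    # Minimal sketch of the association of FIG. 3; names are hypothetical.
    class RegistrationStore:
        def __init__(self):
            # registration identification information -> list of registered-image file names
            self._records = {}

        def register(self, registration_id, image_files):
            # One piece of registration identification information may be associated
            # with two or more registered images.
            self._records.setdefault(registration_id, []).extend(image_files)

        def lookup(self, presentation_id):
            # Return the registered images associated with identical identification information.
            return self._records.get(presentation_id, [])

    store = RegistrationStore()
    store.register("2100", ["bbb"])
    store.register("2101", ["ccc1", "ccc2"])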
The acquisition unit 23 acquires, from an image obtained by capturing the presentation object, a designated image of a location designated by the user. That is, the acquisition unit 23 acquires the designated image received by the reception unit 20. The designated image is the image of the location designated by the user within the captured image of the presentation object.
FIG. 4 shows an example of a presentation object and a location designated by the user. FIG. 4 shows an image obtained by capturing a concert ticket 500, which is a presentation object. The designated image is the image generated by cutting out the location designated by the user from the captured image of the concert ticket 500. If the location designated by the user (hereinafter referred to as the designated location) is the designated location 50, the designated image is the part of the captured image of the concert ticket 500 corresponding to the designated location 50. Similarly, if the user designates the designated location 51, the designated image is the part of the captured image of the concert ticket 500 corresponding to the designated location 51.
The designated image may be generated by a generation unit (not shown) cutting out the location designated by the user from the captured image of the presentation object. In that case, the generation unit (not shown) acquires the position information of the location designated by the user, received by the reception unit 20, and also acquires the captured image of the presentation object received by the reception unit 20. The generation unit (not shown) then cuts out, from the captured image of the presentation object, the part corresponding to the acquired position information to generate the designated image. In this case, the acquisition unit 23 acquires the designated image from the generation unit (not shown).
The acquisition unit 23 further acquires, from the storage unit 22, the registered image associated with registration identification information identical to the presentation identification information of the presentation object. Specifically, the acquisition unit 23 acquires the presentation identification information received by the reception unit 20, and then acquires from the storage unit 22 the registered image associated with registration identification information identical to the acquired presentation identification information.
The presentation identification information may be any information that can identify the presentation object. In other words, the presentation identification information may be the same kind of information as the registration identification information, such as an identification number of the presentation object, a serial number, information about its owner, or information about the article. The presentation identification information need not uniquely identify a single presentation object; it is sufficient that it can narrow all presentation objects down to one or several candidates. The presentation identification information may be entered by a user of the present system through an input device (not shown), or may be read from the presentation object by a reading device (not shown). In that case, the reception unit 20 receives the presentation identification information from the input device (not shown) or the reading device (not shown). Alternatively, the presentation identification information may be entered by a user of the present system through an input unit (not shown) in the determination system 2000. In that case, the acquisition unit 23 acquires the presentation identification information from the input unit (not shown).
For example, when the presentation object is the concert ticket 500 shown in FIG. 4, the acquisition unit 23 acquires the presentation identification information “2100” received by the reception unit 20. The acquisition unit 23 then acquires from the storage unit 22 the registered image associated with the registration identification information “2100”, which is identical to the presentation identification information. That is, when the storage unit 22 stores the information shown in FIG. 3, the acquisition unit 23 acquires from the storage unit 22 the image with the file name “bbb” associated with the registration identification information “2100”.
The determination unit 24 determines whether the designated image acquired by the acquisition unit 23 matches the registered image acquired by the acquisition unit 23. Various existing object authentication techniques can be used for this determination. For example, the determination unit 24 can determine whether the images match by a method using pattern matching, a method using graph matching, or a method using the Euclidean distance between feature amounts.
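The embodiment leaves the matching method open (pattern matching, graph matching, Euclidean distance between feature amounts, and so on). As an illustration only, the sketch below stands in for the determination unit 24 by computing a plain pixel-wise Euclidean distance between two equally sized grayscale images; the function name and threshold value are assumptions, not part of the embodiment.

    import numpy as np

    def images_match(designated, registered, threshold=0.1):
        # Both inputs are assumed to be grayscale images of the same shape with
        # pixel values scaled to [0, 1]; the threshold is an illustrative value.
        if designated.shape != registered.shape:
            return False
        distance = np.linalg.norm(designated - registered) / np.sqrt(designated.size)
        return distance <= threshold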
When the determination unit 24 determines that the designated image and the registered image match, it can be concluded that the designated location and the specific location are at the same position and that the presentation object and the registration object are the same. This is because, even if the presentation object and the registration object are the same, the images to be compared differ and do not match when the designated location and the specific location are at different positions. Likewise, even if the designated location and the specific location are at the same position, the images to be compared differ and do not match when the presentation object and the registration object are not the same. Therefore, when the determination unit 24 determines that the designated image and the registered image match, the designated location and the specific location are at the same position, and the presentation object and the registration object are the same.
When the designated location and the specific location are at the same position, it is highly probable that the owner of the registration object and the owner of the presentation object are the same person. This is because the present system notifies the owner of the registration object of the position of the specific location, so the owner of the registration object knows where the specific location is. Therefore, when the determination unit 24 determines that the images match, the owner of the presentation object can be regarded as the true owner (the owner at the time of registration), and the presentation object can be regarded as the true object (the object at the time of registration).
For example, when the user designates the designated location 50 shown in FIG. 4, the determination unit 24 determines whether the designated image of the designated location 50 on the concert ticket 500 matches the registered image of the specific location 40 on the concert ticket 400 (the registered image with the file name “bbb”). When the determination unit 24 determines that they match, the designated location 50 and the specific location 40 are at the same position on the concert ticket, and the concert ticket 400 and the concert ticket 500 are the same ticket.
On the other hand, when the user designates the designated location 51 shown in FIG. 4, the determination unit 24 determines whether the designated image of the designated location 51 on the concert ticket 500 matches the registered image with the file name “bbb”. When the determination unit 24 determines that they do not match, either the designated location 51 and the specific location 40 are not at the same position on the concert ticket, or the concert ticket 400 and the concert ticket 500 are not the same ticket, or both the positions differ and the tickets are not the same.
The notification unit 25 notifies the determination result of the determination unit 24. Specifically, the notification unit 25 acquires the determination result from the determination unit 24 and notifies it. For example, the notification unit 25 notifies a display (not shown) of the determination result via the transmission unit 21. When the images match, the display (not shown) may show a message such as “Verification succeeded. Both the owner and the object are genuine.” When they do not match, the display (not shown) may show a message such as “Verification failed. The owner or the object is not genuine.”
The notification by the notification unit 25 is not limited to display on a screen. For example, the notification unit 25 may notify the determination result by causing a speaker (not shown) to output sound via the transmission unit 21. The sound may be synthesized by a speech synthesis unit (not shown) under the control of the notification unit 25.
Operation of the first embodiment
Next, the operation of the first embodiment will be described. FIG. 5 is a flowchart showing the registration process of the first embodiment. The registration process is the processing performed by the present system up to the point where the storage unit 22 stores the registration identification information of the registration object and the registered image of the specific location of the registration object in association with each other.
The storage unit 22 acquires the registration identification information of the registration object and the registered image of the specific location of the registration object received by the reception unit 20, and stores the acquired registration identification information and registered image in association with each other (S1).
FIG. 6 is a flowchart showing the determination process of the first embodiment. The determination process is the processing performed by the present system from when the acquisition unit 23 acquires the registration identification information or the registered image until the notification by the notification unit 25 is performed.
The acquisition unit 23 acquires the designated image received by the reception unit 20. The designated image is the image of the location designated by the user within the captured image of the presentation object. The acquisition unit 23 further acquires, from the storage unit 22, the registered image associated with registration identification information identical to the presentation identification information of the presentation object. Specifically, the acquisition unit 23 acquires the presentation identification information received by the reception unit 20, and then acquires from the storage unit 22 the registered image associated with registration identification information identical to the acquired presentation identification information (S2).
The determination unit 24 determines whether the designated image acquired by the acquisition unit 23 matches the registered image acquired by the acquisition unit 23 (S3).
The notification unit 25 acquires the determination result from the determination unit 24 and notifies it. When the result is a match, the notification unit 25 notifies that the images match (S4); for example, the notification unit 25 notifies a display (not shown) via the transmission unit 21, and the display may show a message such as “Verification succeeded. Both the owner and the object are genuine.” When the result is a mismatch, the notification unit 25 notifies that the images do not match (S5); as in the case of a match, the notification unit 25 notifies the display (not shown) via the transmission unit 21, which displays the determination result, for example with a message such as “Verification failed. The owner or the object is not genuine.”
The first embodiment may also have the configuration shown in FIG. 7. A determination system 1000 shown in FIG. 7 includes a storage unit 100, an acquisition unit 101, a determination unit 102, and a notification unit 103. That is, the first embodiment need not include the reception unit 20 and the transmission unit 21. Each component in the determination system 1000 may be realized by any hardware and in any combination.
In this case, the storage unit 100 stores the registration identification information of the registration object and the registered image of the specific location of the registration object in association with each other. The acquisition unit 101 acquires, from an image obtained by capturing the presentation object, the designated image of the location designated by the user, and further acquires from the storage unit 100 the registered image associated with registration identification information identical to the presentation identification information of the presentation object. The determination unit 102 determines whether the designated image acquired by the acquisition unit 101 matches the registered image. The notification unit 103 notifies the determination result of the determination unit 102.
FIG. 8 shows an example of the hardware configuration of a determination system 1001. A communication control unit 8 communicates with external devices via a network. A RAM 9 (Random Access Memory) is used by a CPU 11 as a work area for temporary storage and has enough capacity to store the various data necessary for realizing the present embodiment. A large-capacity storage unit 10 is a storage medium that stores, in a non-volatile manner, data such as databases necessary for realizing the present embodiment and application programs executed by the CPU 11 (Central Processing Unit). The CPU 11 is a processor for arithmetic control and realizes each functional unit of the present invention by executing programs. Specifically, by executing programs, the CPU 11 realizes the acquisition unit 101, the determination unit 102, and the notification unit 103 of the present invention.
As shown in FIG. 8, the large-capacity storage unit 10 can realize the storage unit 100, and the CPU 11 can realize the acquisition unit 101, the determination unit 102, and the notification unit 103. The correspondence between the hardware components and the functional components is not limited to that shown in FIG. 8; for example, the acquisition unit 101, the determination unit 102, and the notification unit 103 may be realized by different CPUs.
FIG. 9 is a flowchart showing the operation of the determination system 1000. The storage unit 100 stores the registration identification information of the registration object and the registered image of the specific location of the registration object in association with each other (S6). The acquisition unit 101 acquires, from an image obtained by capturing the presentation object, the designated image of the location designated by the user, and further acquires from the storage unit 100 the registered image associated with registration identification information identical to the presentation identification information of the presentation object (S7). The determination unit 102 determines whether the designated image acquired by the acquisition unit 101 matches the registered image (S8). The notification unit 103 notifies the determination result of the determination unit 102 (S9).
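A minimal sketch of the determination part of this flow (S7 to S9) is given below. It reuses the hypothetical RegistrationStore and images_match helpers sketched earlier; load_image and notify are assumed callables supplied by the surrounding system, and none of these names come from the embodiment itself.

    def determination_flow(store, presentation_id, designated_image, load_image, notify):
        # S7: acquire the registered image associated with identical identification information.
        registered_files = store.lookup(presentation_id)
        if not registered_files:
            notify("Verification failed. No registration found.")
            return
        registered_image = load_image(registered_files[0])
        # S8: determine whether the designated image and the registered image match.
        if images_match(designated_image, registered_image):
            # S9: notify the determination result.
            notify("Verification succeeded. Both the owner and the object are genuine.")
        else:
            notify("Verification failed. The owner or the object is not genuine.")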
According to the first embodiment, the determination unit 102 determines whether the designated image of the location designated by the user within the captured image of the presentation object matches the registered image. When the determination unit 102 determines that they match, the location designated by the user coincides with the specific location and the registration object and the presentation object are the same. The present system can therefore notify both the authenticity of the object and the authenticity of the owner of the object.
(Second embodiment)
(Configuration of the second embodiment)
A configuration example of the second embodiment will be described. The configuration of the second embodiment is shown in FIG. 1, as in the first embodiment. However, the components of the second embodiment differ in that they perform the following operations in addition to those of the first embodiment.
The storage unit 22 stores registered position information of the specific location on the registration object in association with the registration identification information. That is, the storage unit 22 stores the registered position information in association with the registration identification information and the registered image. FIG. 10 shows an example of the information stored in the storage unit 22. As shown in FIG. 10, the storage unit 22 stores, for example, the registration identification information “2099”, the registered image with the file name “aaa”, and the registered position information (4, 2), (6, 2), (4, 4), (6, 4) in association with one another.
The registered position information is position information of the specific location on the registration object and may be generated by a processing unit (not shown). In that case, the processing unit (not shown) first acquires a captured image of the registration object from an imaging unit (not shown) and then generates position information of the specific location within the captured image of the registration object. For example, FIG. 2 shows the specific location on an image obtained by capturing the concert ticket 400, which is a registration object. The processing unit (not shown) first acquires the captured image of the concert ticket 400 from an imaging device (not shown). When the specific location 40 is designated, the processing unit (not shown) generates the coordinates of the specific location 40, taking the lower left corner of the concert ticket 400 as the origin of an xy coordinate system. That is, from FIG. 2, the processing unit (not shown) generates coordinates whose four end points (x, y) are (10, 5), (12, 5), (10, 7), and (12, 7), and thus generates the registered position information of the specific location 40 as (10, 5), (12, 5), (10, 7), (12, 7). The processing unit (not shown) then causes the storage unit 22 to store the generated registered position information. The position of the origin on the registration object may be set arbitrarily by the designer or a user of the system.
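As an illustration of how such registered position information could be derived, the sketch below builds the four corner coordinates of a rectangular specific location from its lower-left corner, width, and height, with the lower left corner of the registration object as the origin; the function name is hypothetical and not part of the embodiment.

    # Illustrative only: four corner coordinates of a rectangular specific location.
    def corner_coordinates(x, y, width, height):
        return [(x, y), (x + width, y), (x, y + height), (x + width, y + height)]

    # Specific location 40 of the concert ticket 400: (10, 5), (12, 5), (10, 7), (12, 7)
    print(corner_coordinates(10, 5, 2, 2))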
The acquisition unit 23 acquires, from the storage unit 22, the registered position information associated with the registration identification information identical to the presentation identification information. For example, as in the example of the first embodiment, assume that the presentation identification information is “2100”. The acquisition unit 23 then refers to, for example, the information of FIG. 10 stored in the storage unit 22 and acquires the registered position information (10, 5), (12, 5), (10, 7), (12, 7) associated with the registration identification information “2100”, which is identical to the presentation identification information.
The acquisition unit 23 further acquires designated position information of the designated image within the captured image of the presentation object. As described above, the designated image is the image of the designated location specified by the user within the captured image of the presentation object. Like the registered position information, the designated position information may be generated by a processing unit (not shown). That is, the processing unit (not shown) first acquires the captured image of the presentation object from an imaging unit (not shown) and then generates position information of the designated location within the captured image of the presentation object.
When the location designated by the user is the designated location 50 in FIG. 4, the processing unit (not shown) generates (10, 5), (12, 5), (10, 7), (12, 7) as the designated position information. On the other hand, when the location designated by the user is the designated location 51, the processing unit (not shown) generates (1, 1), (3, 1), (1, 3), (3, 3) as the designated position information. The acquisition unit 23 then acquires the designated position information from the processing unit (not shown).
The determination unit 24 determines whether the registered position information and the designated position information acquired by the acquisition unit 23 match. Specifically, the determination unit 24 acquires the registered position information and the designated position information from the acquisition unit 23 and determines whether they match. When they match, the determination unit 24 further determines whether the designated image and the registered image match. Here, the determination that the registered position information and the designated position information match does not require that the position information be exactly the same. For example, the determination unit 24 may determine that they match when the difference between the registered position information and the designated position information is equal to or less than a predetermined value. The predetermined value may be set arbitrarily by a user or the designer of the system.
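A sketch of such a tolerance-based position comparison follows; the corner-point representation and the tolerance value are assumptions for illustration, not part of the embodiment.

    # Positions are treated as matching when every corresponding corner coordinate
    # differs by at most a predetermined tolerance.
    def positions_match(registered_corners, designated_corners, tolerance=1.0):
        return all(
            abs(rx - dx) <= tolerance and abs(ry - dy) <= tolerance
            for (rx, ry), (dx, dy) in zip(registered_corners, designated_corners)
        )

    positions_match([(10, 5), (12, 5), (10, 7), (12, 7)],
                    [(10, 5), (12, 5), (10, 7), (12, 7)])   # True
    positions_match([(10, 5), (12, 5), (10, 7), (12, 7)],
                    [(1, 1), (3, 1), (1, 3), (3, 3)])        # False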
For example, when the user designates the designated location 50, the determination unit 24 acquires the registered position information (10, 5), (12, 5), (10, 7), (12, 7) and the designated position information (10, 5), (12, 5), (10, 7), (12, 7), and determines that they match. The determination unit 24 then further determines whether the designated image and the registered image match. The determination of whether the designated image matches the registered image is the same as in the first embodiment, and its description is therefore omitted.
On the other hand, when the location designated by the user is the designated location 51 shown in FIG. 4, the determination unit 24 determines that the registered position information (10, 5), (12, 5), (10, 7), (12, 7) and the designated position information (1, 1), (3, 1), (1, 3), (3, 3) do not match. In this case, the determination unit 24 may conclude that the result is a mismatch without determining whether the designated image and the registered image match.
When generating the registered position information or the designated position information, the processing unit (not shown) may convert the captured image of the registration object or the captured image of the presentation object to a predetermined size. That is, the processing unit (not shown) may resize the images so that the scales of the captured objects are unified. Alternatively, when an imaging unit (not shown) captures the registration object or the presentation object, a user of the present system may capture the images so that the scales of the captured objects are unified; for example, the user may keep the distance from the imaging unit to each object constant or keep the imaging magnification of the imaging unit constant. In this way, the positions indicated by the coordinates of the registered position information and the designated position information correspond to the same actual positions on each object.
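One possible way to unify the scale, sketched below under the assumption that OpenCV is available and that the pixel width of the imaged object is known, is to rescale every captured image so that the object has a fixed pixel width before position information is generated; this is only an illustration, not the embodiment's prescribed method.

    import cv2

    def normalize_scale(image, object_width_px, target_width_px=1000):
        # Rescale so that the imaged object is target_width_px pixels wide.
        scale = target_width_px / float(object_width_px)
        new_size = (int(image.shape[1] * scale), int(image.shape[0] * scale))
        return cv2.resize(image, new_size)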
Operation of the second embodiment
Next, the operation of the second embodiment will be described. FIG. 11 is a flowchart showing the registration process of the second embodiment. The operation of the second embodiment differs from that of the first embodiment in that it includes S10. The other operations are the same as in the first embodiment, and their description is therefore omitted.
The storage unit 22 acquires the registered position information from the processing unit (not shown) and stores the acquired registered position information in association with the registration identification information. The storage of the registration identification information and the registered image in S1 and the storage of the registered position information in S10 may be performed at the same time or at different times. The storage unit 22 may also store the registered position information and the registration identification information first, and then store the registration identification information and the registered image in association with each other.
FIG. 12 is a flowchart showing the determination process of the second embodiment. The operation of the second embodiment differs from that of the first embodiment in that it includes S11 and S12. The other operations are the same as in the first embodiment, and their description is therefore omitted.
First, the acquisition unit 23 acquires, from the storage unit 22, the registered position information associated with the registration identification information identical to the presentation identification information received by the reception unit 20. The acquisition unit 23 further acquires, from the processing unit (not shown), the designated position information of the designated image within the captured image of the presentation object (S11). The determination unit 24 acquires the registered position information and the designated position information acquired by the acquisition unit 23 and determines whether they match (S12). When they match, the determination unit 24 determines whether the designated image and the registered image match (S3). When they do not match in S12, the notification unit 25 notifies that the result is a mismatch (S5).
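The order of checks in FIG. 12 can be sketched as follows, reusing the hypothetical positions_match and images_match helpers from the earlier sketches; the notification strings are illustrative and anticipate the differentiated notifications described next.

    def determine(registered_pos, designated_pos, registered_img, designated_img):
        # S12: compare position information first and skip the image comparison on mismatch.
        if not positions_match(registered_pos, designated_pos):
            return "Verification failed. The owner is not genuine."
        # S3: compare the designated image with the registered image.
        if not images_match(designated_img, registered_img):
            return "Verification failed. The object is not genuine."
        return "Verification succeeded. Both the owner and the object are genuine."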
When the registered position information and the designated position information do not match, the determination unit 24 may pass to the notification unit 25 a determination result different from the one used when the designated image and the registered image do not match. The notification unit 25 may then issue a different notification when the registered position information and the designated position information do not match than when the designated image and the registered image do not match. For example, when the registered position information and the designated position information do not match, the notification unit 25 may notify that the owner did not match, such as “Verification failed. The owner is not genuine.” When the designated image and the registered image do not match, the notification unit 25 may notify that the object did not match, such as “Verification failed. The object is not genuine.”
According to the second embodiment, the determination unit 24 determines whether the designated image and the registered image match only when the registered position information and the designated position information match. Therefore, when the registered position information and the designated position information do not match, the present system does not need to perform image authentication, which reduces the processing load and allows the determination result to be reported to the user more quickly. Furthermore, the notification unit 25 can issue a different notification when the registered position information and the designated position information do not match than when the designated image and the registered image do not match, so a user of the present system can learn whether it is the object or the owner of the object that is not genuine.
Third embodiment
Configuration of the third embodiment
A configuration example of the third embodiment will be described. FIG. 13 is a block diagram showing a configuration example of the third embodiment. The third embodiment differs from the first embodiment in that it includes a designation unit 26 and a generation unit 27.
The designation unit 26 acquires the captured image of the registration object received by the reception unit 20, and designates one or more specific locations within the captured image of the registration object. Here, the image of the specific location is an image containing information that can identify the registration object. The image of the registration object may be captured by an imaging unit (not shown); in that case, the reception unit 20 receives the image of the registration object from the imaging unit (not shown).
The information that can identify the registration object may be, for example, a pattern that arises on the surface of a product or part when it is manufactured. The pattern may be a pattern of colors or a pattern of surface irregularities of the product or the like. An image containing information that can identify the registration object is an image containing such a pattern arising on the surface of the product or part during manufacture. Specifically, it may be an image capturing, for example, an area made of a material containing particles such as glitter, a rough area that has been given a satin or sandblasted finish, an area made of metal, or an area made of fibers such as cloth. The image containing information that can identify the registration object may also be an image capturing scratches, distortions, stains, or the like unique to the registration object that have arisen after the product or part was manufactured.
The designation of a specific location by the designation unit 26 will be described with reference to FIG. 2. FIG. 2 shows an image obtained by capturing the concert ticket 400, which is a registration object. The concert ticket 400 has areas 41, 42, and 43 made of a material containing glitter particles. The designation unit 26 first detects the areas made of the material containing glitter particles, that is, the areas 41, 42, and 43. The designation unit 26 then randomly selects the area 41 from the three detected areas and designates the specific location 40, which includes at least part of the selected area 41. As shown in FIG. 2, when the lower left corner of the concert ticket 400 is the origin of the xy coordinates, the designation unit 26 designates, as the specific location, the area whose four end points have the coordinates (x, y) of (10, 5), (12, 5), (10, 7), and (12, 7).
The characteristics of the areas detected by the designation unit 26 (material containing glitter particles, satin finish, and so on), the number of specific locations, the area of each specific location, and the position of the origin on the registration object may be set arbitrarily by the designer or a user of the system. The selection of an area by the designation unit 26 need not be random; for example, when a plurality of areas made of a material containing glitter particles are detected, the designation unit 26 may select the area best suited to identifying the registration object.
The designation of the specific location by the designation unit 26 may also be performed by the user. In that case, the user inputs an arbitrary specific location within the captured image of the registration object using an input unit (not shown) such as a mouse or a keyboard. The designation unit 26 acquires the information about the specific location input by the user from the input unit (not shown) and designates that specific location.
The generation unit 27 cuts out the image of the specific location designated by the designation unit 26 from the captured image of the registration object and generates the registered image. The generation unit 27 also generates position information of the registered image within the captured image of the registration object (hereinafter referred to as registered-image position information). In the present embodiment, the registered-image position information is specifically (10, 5), (12, 5), (10, 7), (12, 7). The generation unit 27 therefore cuts out, from the captured image of the registration object, the area whose four end points have the coordinates (x, y) of (10, 5), (12, 5), (10, 7), and (12, 7), and generates the registered image. The generation unit 27 then causes the storage unit 22 to store the generated registered image.
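A minimal sketch of this cropping step follows, assuming the captured image is held as a NumPy array indexed in pixels; mapping the embodiment's ticket coordinates to pixel indices is outside the sketch, and all names are hypothetical.

    import numpy as np

    def generate_registered_image(captured, x0, y0, x1, y1):
        # Cut out the designated specific location and keep its position information.
        registered_image = captured[y0:y1, x0:x1].copy()
        registered_position = [(x0, y0), (x1, y0), (x0, y1), (x1, y1)]
        return registered_image, registered_position

    captured = np.zeros((300, 500, 3), dtype=np.uint8)   # stand-in for a captured ticket image
    patch, position = generate_registered_image(captured, 100, 50, 120, 70)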
Operation of the third embodiment
Next, the operation of the third embodiment will be described. FIG. 14 is a flowchart showing the registration process of the third embodiment.
First, the designation unit 26 acquires the captured image of the registration object received by the reception unit 20 (S13). The designation unit 26 then designates one or more specific locations within the captured image of the registration object (S14). The generation unit 27 cuts out the image of the specific location designated by the designation unit 26 from the captured image of the registration object and generates the registered image (S15). The storage unit 22 stores the registered image generated by the generation unit 27 in association with the registration identification information received by the reception unit 20 (S1).
The designation unit 26 may receive the registration identification information from the reception unit 20 together with the captured image of the registration object. The generation unit 27 may then cause the storage unit 22 to store the registration identification information acquired from the designation unit 26 together with the registered image.
According to the third embodiment, the present system can use the location designated by the designation unit 26 as the specific location, and can therefore designate a specific location for each registration object. Because the system can choose, as the specific location, a location containing information that can identify the registration object, the accuracy of the determination by the determination unit 24 is high. As a result, a user of the present system can learn the authenticity of the object and the authenticity of the owner of the object more accurately.
Fourth embodiment
Configuration of the fourth embodiment
A configuration example of the fourth embodiment will be described. FIG. 15 is a block diagram showing a configuration example of the fourth embodiment. The fourth embodiment differs from the first embodiment in that it includes a display control unit 28 and a display unit 29.
In addition to the first embodiment, the storage unit 22 stores information that associates an object category, position information of a region, and the priority of that region. That is, the storage unit 22 acquires and stores the information, received by the receiving unit 20, in which the object category, the position information of the region, and the priority of the region are associated with one another. An example of the information stored in the storage unit 22 is shown in FIG. 16. The table of FIG. 16 associates the object category, the position information of each region, and the priority of that region. For example, the region whose four end points have the coordinates (x, y) of (0, 0), (3, 0), (0, 2) and (3, 2) has a priority of 1, while the region whose four end points have the coordinates (x, y) of (3, 0), (11, 0), (3, 2) and (11, 2) has a priority of 2.
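For illustration only, the stored association can be thought of as a small lookup keyed by object category; the type and field names below, and the category string, are assumptions rather than part of FIG. 16.

```python
from dataclasses import dataclass

@dataclass
class RegionPriority:
    endpoints: list[tuple[int, int]]  # four corner coordinates (x, y) of the region
    priority: int                     # suitability for the determination process

# Hypothetical contents mirroring the example values given for FIG. 16.
PRIORITY_TABLE: dict[str, list[RegionPriority]] = {
    "concert ticket": [
        RegionPriority([(0, 0), (3, 0), (0, 2), (3, 2)], priority=1),
        RegionPriority([(3, 0), (11, 0), (3, 2), (11, 2)], priority=2),
    ],
}
```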
Here, the priority is a value indicating how suitable a region is as a target of the determination process by the determination unit 24. For example, the priority is a numerical expression of how well a region can discriminate between registration objects and how resistant it is to deterioration over time. A region with high discriminative power shows large image differences between different registration objects, so when the presentation object and the registration object differ, the determination unit 24 is likely to determine that they do not match. For example, a region made of a material containing particles such as glitter, a rough region finished by satin or sandblast processing, a region made of metal, or a region made of fibers such as cloth has high discriminative power and may therefore be given a high priority. Conversely, a region that deteriorates easily over time shows differences between the image at registration and the image at determination even for the same object, so the determination unit 24 may determine that they do not match even when the presentation object and the registration object are identical. For example, a region at a corner of the object, a region at its edge, or a region that structurally wears with use deteriorates easily over time and may therefore be given a low priority.
The object category is a category for classifying registration objects. The object category may be the general name of the article to be registered, such as a concert ticket, an airline ticket, a passport, or a driver's license. Alternatively, the object category may be more detailed than the general name of the article. For example, the object category may classify registration objects by style or manufacturer, such as concert ticket "type A" or concert ticket "manufactured by XX".
Note that the designer or user of the system may transmit, from a device not shown, information associating the object category, the position information of the region, and the priority of the region to the determination device 202. That is, the information shown in FIG. 16 may be received by the receiving unit 20 from a device not shown. Alternatively, a processing unit (not shown) in the determination device 202 may evaluate, for each region of the image obtained by capturing the object, the discriminative power for identifying the registration object or the resistance to deterioration over time described above, and assign a priority to each region accordingly.
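As one way to picture the automatic assignment just mentioned, a processing unit could score each region with a simple texture measure; the use of local standard deviation as a stand-in for discriminative power and the thresholds below are assumptions for illustration, not the evaluation the text prescribes.

```python
import numpy as np

def assign_priority(region: np.ndarray) -> int:
    """Assign a coarse priority to a grayscale image region.

    Higher texture variation is used as a rough proxy for discriminative
    power; the thresholds are illustrative only.
    """
    roughness = float(region.std())
    if roughness > 40.0:
        return 3
    if roughness > 15.0:
        return 2
    return 1
```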
The display unit 29 displays the image obtained by capturing the registration object. In addition, within the displayed image, the display unit 29 superimposes each priority on the region indicated by the position information with which that priority is associated. The display by the display unit 29 is controlled by the display control unit 28.
The display control unit 28 first acquires the image, received by the receiving unit 20, obtained by capturing the registration object. The display control unit 28 then acquires the object category of the registration object received by the receiving unit 20. The object category may be input by a user of the system using an input device (not shown); in that case, the receiving unit 20 receives the object category input by the user from the input device (not shown). The user may input the object category when inputting the registration identification information. For example, when inputting the registration identification information (ID) "2100" of the concert ticket 400 that is the registration object, the user may input the object category "concert ticket" using the input device (not shown). The user may also input the object category at a timing different from the input of the registration identification information.
The display control unit 28 then acquires, from the storage unit 22, the position information of the regions associated with the acquired object category and the priorities of those regions. The display control unit 28 causes the display unit 29 to display the image of the registration object with the priority associated with each region superimposed on that region. An example of the display by the display unit 29 is shown in FIG. 17. In FIG. 17, the priorities are superimposed on the image obtained by capturing the registration object. For example, as FIG. 16 shows, the region whose four end points have the coordinates (x, y) of (0, 0), (3, 0), (0, 2) and (3, 2) has a priority of 1, so the display control unit 28 causes the display unit 29 to display "1" in that region. Referring to the display on the display unit 29, the user inputs an arbitrary specific location in the image of the registration object using an input unit (not shown) such as a mouse or a keyboard.
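The superimposed display can be sketched as drawing each priority value onto its region. The drawing library (Pillow) and the (endpoints, priority) input format are assumptions made for illustration.

```python
from PIL import Image, ImageDraw

def overlay_priorities(image: Image.Image,
                       regions: list[tuple[list[tuple[int, int]], int]]) -> Image.Image:
    """Return a copy of the registration image with each region's priority drawn on it.

    `regions` is a list of (endpoints, priority) pairs, where `endpoints` holds
    the four corner coordinates of a region as in the FIG. 16 example.
    """
    annotated = image.copy()
    draw = ImageDraw.Draw(annotated)
    for endpoints, priority in regions:
        xs = [x for x, _ in endpoints]
        ys = [y for _, y in endpoints]
        draw.rectangle((min(xs), min(ys), max(xs), max(ys)), outline="red")
        # Write the priority value roughly at the center of the region.
        draw.text(((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2),
                  str(priority), fill="red")
    return annotated
```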
The designation unit 26 acquires, from the input unit (not shown), the information on the specific location input by the user and designates that specific location.
Operation of Fourth Embodiment
Next, the operation of the fourth embodiment will be described. FIG. 18 is a flowchart showing the registration process in the fourth embodiment.
The storage unit 22 acquires and stores the information, received by the receiving unit 20, in which the object category, the position information of the region, and the priority of the region are associated with one another (S17).
The display control unit 28 acquires the image, received by the receiving unit 20, obtained by capturing the registration object, and acquires the object category of the registration object received by the receiving unit 20 (S18). The display control unit 28 further acquires, from the storage unit 22, the position information of the regions associated with the acquired object category and the priorities of those regions. The display control unit 28 then causes the display unit 29 to display the image of the registration object with the priority associated with each region superimposed on that region (S19). Referring to the display on the display unit 29, the user inputs an arbitrary specific location in the image of the registration object using an input unit (not shown) such as a mouse or a keyboard (S20). The designation unit 26 acquires the information on the specific location input by the user and designates that specific location. A processing unit (not shown) cuts out the specific location designated by the designation unit 26 from the image of the registration object and generates a registered image. The storage unit 22 stores the registered image generated by the processing unit (not shown) in association with the registration identification information (S1).
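The final storing step (S1) and the later lookup by the acquisition unit both key on the registration identification information. A minimal in-memory stand-in for the storage unit 22 is sketched below; the class and method names, and the ID value, are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class StorageUnit:
    records: dict[str, bytes] = field(default_factory=dict)

    def store(self, registration_id: str, registered_image: bytes) -> None:
        # S1: keep the registered image under its registration identification information.
        self.records[registration_id] = registered_image

    def lookup(self, presentation_id: str) -> bytes | None:
        # Later, the registered image is fetched with the presentation identification
        # information, which must equal the registration identification information.
        return self.records.get(presentation_id)

storage = StorageUnit()
storage.store("2100", b"...cropped registered image bytes...")
assert storage.lookup("2100") is not None
```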
According to the fourth embodiment, the display unit 29 of the system displays the priorities superimposed on the image obtained by capturing the registration object. The user of the system can therefore identify and designate, among the regions of the registration object, a region that is suitable as a specific location. This makes it easier for the user to select a specific location and improves the accuracy of the determination by the determination unit 24.
Note that the fourth embodiment does not have to include the display unit 29 and the display control unit 28. An example of the operation of the designation unit 26 in that case is described below.
The designation unit 26 designates specific locations such that the sum of the priorities associated with the regions included in the specific locations is equal to or greater than a predetermined threshold. Specifically, the designation unit 26 first acquires the image, received by the receiving unit 20, obtained by capturing the registration object and the object category of the registration object. The designation unit 26 then acquires, from the storage unit 22, the position information of the regions associated with the acquired object category and the priorities of those regions.
The designation unit 26 further designates one or more specific locations from the image obtained by capturing the registration object. In doing so, the designation unit 26 designates the specific locations such that the sum of the priorities associated with the regions included in the specific locations is equal to or greater than the predetermined threshold. The predetermined threshold can be set arbitrarily by the designer or user of the system.
FIG. 19 illustrates the designation of specific locations by the designation unit 26. FIG. 19 shows an image obtained by capturing the concert ticket 400, which is the registration object, together with the priorities. Here, assume that the predetermined threshold is 5 and that the designation unit 26 designates two specific locations.
The candidate location 52, which is a candidate for a specific location, includes a region with a priority of 3, and the candidate location 53 also includes a region with a priority of 3. The designation unit 26 therefore calculates the sum of the priorities associated with the regions included in the candidate locations 52 and 53 as 3 + 3 = 6. Since the sum of the priorities associated with the regions included in the specific locations is 5 or more, the designation unit 26 can designate the candidate locations 52 and 53 as the specific locations. More specifically, the designation unit 26 refers to the information of FIG. 16 acquired from the storage unit 22 and designates two specific locations such that the sum of the priorities associated with the regions they include is 5 or more. When there are multiple combinations of specific locations that satisfy this condition, the designation unit 26 may select one of them at random.
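A compact way to picture this rule is to enumerate combinations of candidate locations and keep those whose priorities sum to at least the threshold; the candidate values and helper below are illustrative assumptions that mirror the FIG. 19 example.

```python
import itertools
import random

def choose_specific_locations(candidates: dict[str, int], count: int = 2,
                              threshold: int = 5) -> tuple[str, ...]:
    """Pick `count` candidate locations whose priorities sum to at least `threshold`.

    `candidates` maps a candidate-location name to its priority.
    """
    valid = [combo for combo in itertools.combinations(candidates, count)
             if sum(candidates[name] for name in combo) >= threshold]
    if not valid:
        raise ValueError("no combination of candidate locations reaches the threshold")
    return random.choice(valid)  # choose at random among the valid combinations

# Candidate locations 52 and 53 both have priority 3; location 54 is treated as 2.
print(choose_specific_locations({"52": 3, "53": 3, "54": 2}))
```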
When a candidate location includes two or more regions associated with different priorities, as the candidate location 54 does, the designation unit 26 may adopt the priority associated with the region that occupies the largest proportion of the candidate location. In the candidate location 54, the priority associated with the region occupying the largest proportion is 2, so the designation unit 26 may treat the priority associated with the regions included in the candidate location 54 as 2. The way of determining the priority when a candidate location includes two or more regions with different priorities is not limited to this. For example, the designation unit 26 may adopt the average of the two or more priorities, or may calculate the priority according to the proportion of the candidate location that each region occupies.
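The three options named above (priority of the region with the largest share, plain average, share-weighted average) can be written down directly; the overlap fractions passed in are assumptions for illustration.

```python
def candidate_priority(overlaps: dict[int, float], mode: str = "largest") -> float:
    """Combine the priorities of the regions overlapping a candidate location.

    `overlaps` maps a region's priority to the fraction of the candidate
    location that the region covers, e.g. {2: 0.7, 3: 0.3}.
    """
    if mode == "largest":    # priority of the region occupying the largest share
        return max(overlaps, key=overlaps.get)
    if mode == "average":    # plain average of the distinct priorities
        return sum(overlaps) / len(overlaps)
    if mode == "weighted":   # average weighted by each region's share
        return sum(p * share for p, share in overlaps.items()) / sum(overlaps.values())
    raise ValueError(f"unknown mode: {mode}")

print(candidate_priority({2: 0.7, 3: 0.3}))  # -> 2, as in the candidate location 54 example
```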
In this way, because the designation unit 26 designates specific locations such that the priority is equal to or greater than the threshold, specific locations suitable for the determination process by the determination unit 24 are designated. The accuracy of the determination by the determination unit 24 is therefore further improved, and the effort required of the user to designate specific locations is reduced.
Fifth Embodiment
Configuration of Fifth Embodiment
A configuration example of the fifth embodiment will be described. FIG. 20 is a block diagram showing a configuration example of the fifth embodiment. FIG. 20 shows a block diagram of the functional components together with an example of the hardware configuration that realizes them.
The receiving unit 20 and the transmitting unit 21 may be realized by the communication control unit 1. The acquisition unit 23, the determination unit 24, the notification unit 25, the designation unit 26, the generation unit 27, and the display control unit 28 may be realized by the CPU 2. The storage unit 22 may be realized by the large-capacity storage unit 3. The display unit 29 may be realized by the display 4.
Each component and the hardware are the same as in the above-described embodiments, so their description is omitted. Note that the correspondence between the hardware configuration and the components is not limited to that shown in FIG. 20 and may be realized by any combination. For example, the storage unit 22 may be located in a device other than the determination device 203.
Each of the embodiments described above is merely an example embodying the present invention, and various modifications can be made within the scope of the gist of the present invention described in the claims.
This application claims priority based on Japanese Patent Application No. 2015-069328 filed on March 30, 2015, the entire disclosure of which is incorporated herein.
The present invention can be applied, for example, to a device that determines the authenticity of an object whose own authenticity and whose owner's authenticity are unknown.
DESCRIPTION OF SYMBOLS
1 Communication control unit
2 CPU
3 Large-capacity storage unit
4 Display
20 Receiving unit
21 Transmitting unit
22 Storage unit
23 Acquisition unit
24 Determination unit
25 Notification unit
26 Designation unit
27 Generation unit
28 Display control unit
29 Display unit
40 Specific location
41 Region
42 Region
43 Region
50 Designated location
51 Designated location
52 Candidate location
53 Candidate location
54 Candidate location
201 Determination device
202 Determination device
203 Determination device
400 Concert ticket
500 Concert ticket
1000 Determination system
1001 Determination system
2000 Determination system
2001 Determination system
2002 Determination system
2003 Determination system
Claims (7)
- A determination system comprising: storage means for storing registration identification information of a registration object and a registered image of a specific location of the registration object in association with each other; acquisition means for acquiring, from an image obtained by capturing a presentation object, a designated image of a location designated by a user, and for acquiring, from the storage means, the registered image associated with the registration identification information identical to presentation identification information of the presentation object; determination means for determining whether the acquired designated image and the registered image match; and notification means for notifying a determination result of the determination means.
- The determination system according to claim 1, wherein the storage means further stores registered position information of the specific location in the registration object in association with the registration identification information; the acquisition means further acquires, from the storage means, the registered position information associated with the registration identification information identical to the presentation identification information, and acquires designated position information of the designated image in the image obtained by capturing the presentation object; and the determination means further determines whether the acquired registered position information and the designated position information match, and, when they match, determines whether the designated image and the registered image match.
- The determination system according to claim 1 or 2, further comprising: designation means for designating one or more specific locations from an image obtained by capturing the registration object; and generation means for cutting out an image of the specific location from the image obtained by capturing the registration object and generating the registered image, wherein the image of the specific location includes information capable of identifying the registration object.
- The determination system according to any one of claims 1 to 3, further comprising: designation means for designating one or more specific locations from an image obtained by capturing the registration object; and display means for displaying the image obtained by capturing the registration object, wherein the display means superimposes a priority on a region, of the displayed image, indicated by position information with which the priority is associated.
- The determination system according to claim 3 or 4, wherein the designation means further designates the specific locations such that a sum of priorities associated with regions included in the specific locations is equal to or greater than a predetermined threshold.
- A determination method comprising: storing registration identification information of a registration object and a registered image of a specific location of the registration object in association with each other; acquiring, from an image obtained by capturing a presentation object, a designated image of a location designated by a user; acquiring, from the storage, the registered image associated with the registration identification information identical to presentation identification information of the presentation object; determining whether the acquired designated image and the registered image match; and notifying a result of the determination.
- A determination program causing a computer to execute: storage processing of storing registration identification information of a registration object and a registered image of a specific location of the registration object in association with each other; acquisition processing of acquiring, from an image obtained by capturing a presentation object, a designated image of a location designated by a user, and acquiring, from the storage, the registered image associated with the registration identification information identical to presentation identification information of the presentation object; determination processing of determining whether the acquired designated image and the registered image match; and notification processing of notifying a result of the determination.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017509259A JP6481754B2 (en) | 2015-03-30 | 2016-03-22 | Judgment system, judgment method, judgment program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015-069328 | 2015-03-30 | ||
JP2015069328 | 2015-03-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016157827A1 true WO2016157827A1 (en) | 2016-10-06 |
Family
ID=57004912
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/001641 WO2016157827A1 (en) | 2015-03-30 | 2016-03-22 | Determination system, determination method, and determination program |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP6481754B2 (en) |
WO (1) | WO2016157827A1 (en) |
- 2016
  - 2016-03-22 WO PCT/JP2016/001641 patent/WO2016157827A1/en active Application Filing
  - 2016-03-22 JP JP2017509259A patent/JP6481754B2/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001216395A (en) * | 2000-02-03 | 2001-08-10 | Michimasa Hatana | Authentication system using possessed paper money and application of the system |
JP2004118643A (en) * | 2002-09-27 | 2004-04-15 | Hitachi Ltd | Auxiliary device for id card authenticity determination |
JP2010530095A (en) * | 2007-06-01 | 2010-09-02 | カーベーアー−ジオリ ソシエテ アノニム | Authentication of security documents, especially banknotes |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106920318A (en) * | 2017-03-07 | 2017-07-04 | 深圳怡化电脑股份有限公司 | The discrimination method and device of a kind of bank note |
Also Published As
Publication number | Publication date |
---|---|
JPWO2016157827A1 (en) | 2018-01-11 |
JP6481754B2 (en) | 2019-03-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP2020161149A (en) | Method, system, and program for determining authenticity | |
JP2010526361A (en) | Fast fingerprint identification method by indexing feature point pairs | |
KR20150024421A (en) | Verification method, verification system, verification device, and program therefor | |
JP6245880B2 (en) | Information processing apparatus, information processing method, and program | |
JP5765749B2 (en) | Individual identification information generation apparatus, article determination apparatus, article determination system and method | |
CN110738204B (en) | Certificate area positioning method and device | |
JP6810392B2 (en) | Individual identification device | |
KR101274098B1 (en) | Certification system and method for the original using physical feature information and computer readable recoding medium for performing it | |
EP2736012A1 (en) | Object identification system and program | |
CN112465517A (en) | Anti-counterfeiting verification method and device and computer readable storage medium | |
JP6236825B2 (en) | Vending machine sales product recognition apparatus, sales product recognition method, and computer program | |
CN112017352B (en) | Certificate authentication method, device, equipment and readable storage medium | |
JP6481754B2 (en) | Judgment system, judgment method, judgment program | |
JP6541226B2 (en) | Information terminal device and program | |
JPWO2011010705A1 (en) | Marker generation device, marker generation detection system, marker generation detection device, marker, marker generation method and program thereof | |
CN111932281A (en) | Anti-counterfeiting detection method and device | |
JP6555338B2 (en) | Determination system, determination method, and determination program | |
JP2015045919A (en) | Image recognition method and robot | |
JP2005228150A (en) | Image verification device | |
JP7130423B2 (en) | Parts information management system and parts information management program | |
US10902584B2 (en) | Detection of surface irregularities in coins | |
JP2009151445A (en) | Subarea detection device, object identification apparatus, and program | |
JP6204634B1 (en) | Shape discrimination device, shape discrimination method, and shape discrimination program | |
JP2017091252A (en) | Information input device and information input program | |
JP2016018403A (en) | Image processing device, image processing system, image processing method, and image processing program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16771719 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2017509259 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 16771719 Country of ref document: EP Kind code of ref document: A1 |