CN118574003A - Article surface texture image acquisition system and method - Google Patents
Article surface texture image acquisition system and method
- Publication number
- CN118574003A (application CN202410621197.4A)
- Authority
- CN
- China
- Prior art keywords
- image acquisition
- image
- target
- surface texture
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
  - H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
  - H04N23/60—Control of cameras or camera modules
  - H04N23/67—Focus control based on electronic image sensor signals
  - H04N23/675—Focus control based on electronic image sensor signals comprising setting of focusing regions
  - H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
  - G06V10/00—Arrangements for image or video recognition or understanding
  - G06V10/10—Image acquisition
  - G06V10/12—Details of acquisition arrangements; Constructional details thereof
  - G06V10/40—Extraction of image or video features
  - G06V10/54—Extraction of image or video features relating to texture
Abstract
One or more embodiments of the present disclosure provide an article surface texture image acquisition system and method. The system comprises: a height determining module configured to determine a target height according to the thickness of a target article to be imaged, where the target height is the sum of the article thickness and an initial height, and the initial height is the distance between the image acquisition device and the surface of the conveying device at which the image acquisition device both satisfies the shooting conditions required for the target article and is sharply focused on the conveyor surface; a height adjustment control module configured to control an attitude control device to adjust the image acquisition device to the target height; and an image acquisition control module configured to control the image acquisition device, at the target height, to capture images of the target article placed on the conveying device so as to obtain a surface texture image of the target article.
Description
Technical Field
One or more embodiments of the present disclosure relate to the field of image acquisition technology, and in particular, to a system and a method for acquiring a texture image of a surface of an article.
Background
To anchor the uniqueness and authenticity of a commodity, an anti-counterfeiting label or anti-counterfeiting mark unrelated to the commodity itself is generally attached as an auxiliary anti-counterfeiting product. However, illegal activities such as product diversion and imitation can still take place by forging or destroying the anti-counterfeiting label.
In the related art, to address the problem that a third-party anti-counterfeiting label cannot be attached to some commodities, or that the label may be damaged, a method has been proposed that uses the texture of the commodity itself as its unique identity (ID). The mainstream micro-texture acquisition schemes are implemented by modifying high-definition scanners, and acquisition devices built this way are designed for low-frequency manual acquisition. They therefore face the following challenge when acquiring commodity micro-textures: during production, a factory may switch between commodities of different specifications, for example producing ceramic tiles of different sizes in different periods. Because the sampling area changes with the size, the acquisition system has to be adjusted; in the related art, after the commodity size changes, the acquisition system must be manually readjusted and reassembled to the specific scale and then trial runs must be repeated, which is inefficient.
Disclosure of Invention
In view of this, one or more embodiments of the present disclosure provide the following technical solutions:
according to a first aspect of one or more embodiments of the present specification, there is provided an article surface texture image acquisition system comprising:
a height determining module configured to determine a target height according to the thickness of a target article to be imaged; the target height is the sum of the thickness of the target article and an initial height, and the initial height is the distance between the image acquisition device and the surface of the conveying device at which the image acquisition device both satisfies the shooting conditions required for the target article and is sharply focused on the conveyor surface;
a height adjustment control module configured to control an attitude control device to adjust the image acquisition device to the target height;
and an image acquisition control module configured to control the image acquisition device, at the target height, to capture images of the target article placed on the conveying device so as to obtain a surface texture image of the target article.
According to a second aspect of one or more embodiments of the present disclosure, there is provided a method for acquiring a texture image of a surface of an article, including:
determining a target height according to the thickness of a target article to be imaged; the target height is the sum of the thickness of the target article and an initial height, and the initial height is the distance between the image acquisition device and the surface of the conveying device at which the image acquisition device both satisfies the shooting conditions required for the target article and is sharply focused on the conveyor surface;
controlling an attitude control device to adjust the image acquisition device to the target height;
and controlling the image acquisition device, at the target height, to capture images of the target article placed on the conveying device so as to obtain a surface texture image of the target article.
According to a third aspect of embodiments of the present specification, there is also provided an electronic device comprising a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of the second aspect described above by executing the executable instructions.
According to a fourth aspect of embodiments of the present specification, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method of the second aspect.
According to a fifth aspect of embodiments of the present specification, there is also provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the second aspect described above.
The technical solutions provided by the embodiments of this specification may have the following beneficial effects:
As can be seen from the above embodiments, the present disclosure sets an initial height for the image acquisition device such that, at this initial height, the device simultaneously satisfies the shooting conditions required for the target article and is sharply focused on the surface of the conveying device; the shooting parameters required by the image acquisition device are thereby fixed. The control device then obtains the thickness of the target article and generates a height adjustment instruction accordingly, so that the attitude control device adjusts the image acquisition device to the target height. This guarantees that the distance between the image acquisition device and the surface of the target article equals the distance between the device and the conveyor surface at the initial height, so the device still satisfies the shooting conditions and remains in focus without changing its shooting parameters. After the adjustment to the target height, a sharp surface texture image of the target article that meets the shooting parameters can be acquired even without refocusing. The height of the image acquisition device is thus adapted to the article thickness while the acquisition quality stays stable, avoiding the low efficiency and high complexity of manual adjustment as well as the unstable acquisition quality that manual adjustment may introduce, and improving image acquisition efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
FIG. 1 is a block diagram of an article surface texture image acquisition system provided in an exemplary embodiment.
FIG. 2 is a block diagram of an article surface texture image acquisition hardware system provided in an exemplary embodiment.
Fig. 3 is a schematic structural diagram of an article surface texture image acquisition hardware system according to an exemplary embodiment.
Fig. 4 is a schematic diagram of a positional relationship between an image capturing device and a conveying device in an article surface texture image capturing hardware system according to an exemplary embodiment.
Fig. 5 is a schematic diagram of a positional relationship among an image capturing device, a target object, and a conveying device in another object surface texture image capturing hardware system according to an exemplary embodiment.
Fig. 6 is a flowchart of a method for acquiring a texture image of a surface of an object according to an exemplary embodiment.
Fig. 7 is a schematic diagram of an apparatus according to an exemplary embodiment.
Fig. 8 is a schematic hardware architecture of an article verification system according to an exemplary embodiment.
Fig. 9 is a schematic structural diagram of an image capturing device for a verifier according to an exemplary embodiment.
Fig. 10 is a schematic structural diagram of a photographing assembly in a verifier image capturing apparatus according to an exemplary embodiment.
Fig. 11 is a schematic view of a scenario for identifying authenticity of an article based on texture characteristics of the article according to an exemplary embodiment.
Fig. 12a is a schematic diagram of a texture feature extracted in the event of excessive pixel precision provided by an exemplary embodiment.
Fig. 12b is a schematic diagram of a texture feature extracted in the case of too small a pixel precision, as provided by an exemplary embodiment.
Fig. 12c is a schematic diagram of a texture feature extracted with proper pixel accuracy provided by an exemplary embodiment.
Fig. 13 is a schematic diagram of a plot of sum of squares error of integrated similarity versus pixel accuracy, as provided by an example embodiment.
Detailed Description
To anchor the uniqueness and authenticity of a commodity, an anti-counterfeiting label or anti-counterfeiting mark unrelated to the commodity itself is generally attached as an auxiliary anti-counterfeiting product. However, illegal activities such as product diversion and imitation can still take place by forging or destroying the anti-counterfeiting label or mark.
In the related art, to address the problem that a third-party anti-counterfeiting mark cannot be attached to some commodities, or that the mark may be damaged, a method has been proposed that uses the texture of the commodity itself as its unique identity (ID). The mainstream micro-texture acquisition schemes are implemented by modifying high-definition scanners, and acquisition devices built this way are designed for low-frequency manual acquisition. When scanning different articles, and especially different types of articles with different thicknesses, the height of the acquisition device has to be adjusted manually if the precision of the captured micro-texture image is to remain fixed; because this adjustment is manual, it is strongly affected by the operator, and the probability that the scanning precision of the adjusted device drifts increases considerably.
The acquisition devices in the related art therefore face the following challenge when acquiring commodity micro-textures: switching the type of commodity whose micro-texture is to be acquired involves manually readjusting and reassembling the acquisition system, which can change the precision of the system and lead to inconsistent imaging precision.
FIG. 1 is a block diagram of an article surface texture image acquisition system provided in an exemplary embodiment. As shown in fig. 1, the object surface texture image acquisition system includes:
a height determining module 11, configured to determine a target height according to the thickness of a target article to be imaged; the target height is the sum of the thickness of the target article and an initial height, and the initial height is the distance between the image acquisition device and the surface of the conveying device at which the image acquisition device both satisfies the shooting conditions required for the target article and is sharply focused on the conveyor surface;
a height adjustment control module 12, configured to control an attitude control device to adjust the image acquisition device to the target height;
an image acquisition control module 13, configured to control the image acquisition device, at the target height, to capture images of the target article placed on the conveying device so as to obtain a surface texture image of the target article.
FIG. 2 is a block diagram of the device connectivity of an article surface texture image acquisition hardware system according to an exemplary embodiment. The article surface texture image acquisition system described above may be applied to the control device 21 of the hardware system shown in FIG. 2, which includes: a control device 21, an image acquisition device 22, and an attitude control device 23.
The height determining module 11 in the control device 21 may determine a height adjustment instruction according to the thickness of the target article to be imaged and send the instruction to the attitude control device 23; the height adjustment instruction instructs the attitude control device 23 to adjust the image acquisition device 22 to the target height.
The height adjustment control module 12 may control the attitude control device 23 by sending it the height adjustment instruction; in response to the received instruction, the attitude control device 23 adjusts the image acquisition device 22 to the target height.
After determining that the image acquisition device 22 has been adjusted to the target height, the control device 21 may, through the image acquisition control module 13, control the image acquisition device 22 to capture, at the target height, images of the target article placed on the conveying device so as to obtain a surface texture image of the target article.
In this embodiment, the control device 21 may be an upper computer acting as an OPC (Object Linking and Embedding for Process Control) server, and the upper computer may include, but is not limited to, one or more of the following: personal computers, workstations, industrial PCs, servers, embedded systems, single-chip microcomputers, smart mobile terminals, and the like.
Optionally, the thickness of the target article may be measured in advance, for example by an ultrasonic thickness gauge or a laser ranging sensor, or measured manually; once obtained, the corresponding thickness information may be sent to the control device 21.
After determining the thickness of the target article, the control device 21 may determine the target height based on the thickness and the initial height at which the image acquisition device 22 satisfies the shooting conditions required for the target article and is sharply focused on the surface of the conveying device, and then generate the height adjustment instruction. The shooting conditions may include the pixel precision required for the captured image and may further include, but are not limited to, an article brightness condition and an article area condition: the brightness condition at least ensures that the target article is bright enough to be captured sharply, and the area condition at least ensures that the area of the target article can be fully covered by the image acquisition device 22. In image processing, photography, and display, pixel precision (or pixel accuracy) commonly refers to the physical size in the real world represented by each pixel of an image. The conveying device may be a conveyor belt that carries the target article, with the belt surface in contact with the bottom of the article. Once the target article is placed on the conveyor, the height difference between the image acquisition device 22 and the article surface is the initial height minus the thickness; to avoid changing the shooting conditions of the image acquisition device 22 and to avoid refocusing, a height adjustment instruction indicating the sum of the article thickness and the initial height is generated. The control device 21 may then transmit the height adjustment instruction to the attitude control device 23.
The attitude control device 23 may be a device that at least adjusts the height of the image acquisition device 22. FIG. 4 is a schematic diagram of the positional relationship between the image acquisition device and the conveying device in an article surface texture image acquisition hardware system according to an exemplary embodiment, and FIG. 5 is a schematic diagram of the positional relationship among the image acquisition device, the target article, and the conveying device in another such hardware system. The attitude control device 23 may first reset the height of the image acquisition device 22 to the initial height h1 shown in FIG. 4 and then, based on the height adjustment instruction, raise it by the thickness h2 of the target article, finally bringing the image acquisition device 22 to the target height h3 = h1 + h2 shown in FIG. 5. Alternatively, the attitude control device 23 may first determine the real-time height of the image acquisition device 22, compute the difference between the real-time height and the target height, and adjust the height by this difference, which likewise brings the image acquisition device 22 to the target height. FIG. 3 is a schematic structural diagram of an article surface texture image acquisition hardware system according to an exemplary embodiment. As shown in FIG. 3, the attitude control device 23 may include an attitude control PLC (Programmable Logic Controller) 331 and a lifting motor 332, and the image acquisition device 22 may be a line scan camera 32. The attitude control PLC 331 is communicatively connected to the control device 21 and can therefore obtain the height adjustment instruction from it; the PLC 331 is also drivingly connected to the lifting motor 332 and can drive the motor according to the instruction so that the line scan camera 32 is adjusted to the target height.
After the image acquisition device 22 has been adjusted to the target height, it can capture, at that height, images of the target article placed on the conveying device to obtain a surface texture image of the target article.
For example, as shown in FIG. 3, with the attitude control device 23 comprising the attitude control PLC 331 and the lifting motor 332 and the line scan camera 32 serving as the image acquisition device 22, the upper computer 31 may serve as the control device 21. The upper computer 31 acquires the thickness h2 of the target article to be imaged. When the initial height of the line scan camera 32 is h1 and the lifting height is 0, the camera is sharply focused on the plane of the conveyor belt and satisfies a specific pixel precision, for example 12.9 um/px (8192 px across a physical width of 105 mm, i.e. physical size per pixel). After the height of the line scan camera 32 is reset (initial height h1, horizontal offset 0), the attitude control PLC 331 drives the lifting motor so that the height of the image acquisition device 22 is raised to h3 = h1 + h2. In addition, the line scan camera 32 may be replaced with an area scan camera that satisfies the following condition: when imaging an article moving at the target speed, it can capture the complete image of the article without distortion, the target speed being the highest running speed of the conveying device.
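The height adaptation in this example can be illustrated with a minimal Python sketch; the helper and device names (for example move_to_height) are assumptions for illustration only and do not correspond to any concrete PLC interface.

```python
# Minimal sketch of the adaptive height adjustment; helper and device names are assumed.
def target_height(initial_height_mm: float, article_thickness_mm: float) -> float:
    """h3 = h1 + h2: keep the camera-to-surface distance equal to the calibrated one,
    so focus and pixel precision (e.g. 12.9 um/px in the example above) are unchanged."""
    return initial_height_mm + article_thickness_mm

def adjust_camera(plc, initial_height_mm: float, article_thickness_mm: float) -> None:
    h3 = target_height(initial_height_mm, article_thickness_mm)
    plc.move_to_height(h3)  # hypothetical attitude-control PLC call

# Example with illustrative numbers: camera calibrated at h1 = 120 mm, 5 mm thick tile.
print(target_height(120.0, 5.0))  # -> 125.0 mm
```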
In this embodiment, an initial height is set for the image acquisition device such that, at this initial height, the device simultaneously satisfies the shooting conditions required for the target article and is sharply focused on the surface of the conveying device, thereby fixing the shooting parameters required by the image acquisition device. The control device then obtains the thickness of the target article and generates a height adjustment instruction accordingly, so that the attitude control device adjusts the image acquisition device to the target height. This keeps the distance between the image acquisition device and the surface of the target article equal to the distance between the device and the conveyor surface at the initial height, so the device still satisfies the shooting conditions and remains in focus without changing its shooting parameters; after the adjustment, a sharp surface texture image of the target article that meets the shooting parameters can be acquired even without refocusing. The height of the image acquisition device is thus adapted to the article thickness while the acquisition quality stays stable, avoiding the low efficiency, high complexity, and potentially unstable acquisition quality of manual adjustment, and improving image acquisition efficiency.
As an alternative embodiment, the article surface texture image acquisition system further comprises: an offset information determining module, configured to determine a target region on the surface of the target article from which images need to be acquired and offset information of the target region relative to a preset position on the conveying device, and to determine horizontal movement information of the image acquisition device according to the target region and the offset information; the horizontal movement information comprises the maximum and/or minimum x-axis coordinate of the area occupied by the target region in a target coordinate system, which takes the preset position as its origin, the direction in the conveyor surface perpendicular to the conveyor's moving direction as its x-axis, and the moving direction of the conveying device as its y-axis;
and a horizontal adjustment control module, configured to control the attitude control device to move the image acquisition device to the position indicated by the horizontal movement information so that the field of view of the image acquisition device covers the target region.
That is, the offset information determining module in the control device 21 is further configured to determine the target region on the surface of the target article from which images need to be acquired and the offset information of that region relative to the preset position on the conveying device, to determine a horizontal movement instruction for the image acquisition device 22 according to the target region and the offset information, and to transmit the horizontal movement instruction to the attitude control device 23.
The attitude control device 23 is further used to move the image acquisition device 22 in the direction and by the distance indicated by the horizontal movement instruction so that the field of view of the image acquisition device 22 covers the target region.
In this embodiment, the target region of the target article may be measured in advance, for example by determining the area of the article surface that requires image acquisition through image recognition, lidar, or similar means. In general the target region is a rectangular area whose sides are aligned with the width direction or the length direction of the conveying device, where the width direction is the direction in the conveyor surface perpendicular to the moving direction, and the length direction is the moving direction of the conveying device. The target region may also be obtained by manual measurement. In some alternative embodiments, the target region may be the entire side of the target article facing the image acquisition device 22, or a specific part of that side. The target region may be characterized by coordinates to obtain the corresponding target region information, for example by taking a point on the target region as the origin and expressing the region in coordinates. Since a captured image is generally rectangular, when the target region is not a rectangle whose sides are aligned with the width and length directions of the conveying device, the circumscribed rectangle of the target region, whose sides are aligned with those directions, can be determined and used as the target region information.
Further, the conveying device may be provided with a limiting device that fixes the position of the target article relative to the conveyor; different limiting devices may be provided for different types of articles so that each article sits at a position suitable for image acquisition, for example so that the area occupied by the article does not extend beyond the corresponding area of the conveyor. Once the target article is constrained by the limiting device, its offset relative to the conveyor plane is fixed, so offset information of the target region relative to a preset position on the conveying device can be obtained; the preset position may be the center line of the conveyor along its moving direction or an edge of the conveyor. The offset information may describe the positional offset between a target position on the target region and the preset position, the target position being, for instance, the origin used for the target region information; in other words, the offset information is the translation that converts the coordinates of the target region from its own coordinate system into the coordinate system of the conveying device. To simplify later calculation, when the target position is the center of the target region the preset position may also be the center of the conveyor, and when the target position is the left edge of the target region the preset position may be the left edge of the conveyor; the positional roles of the target position on the region and of the preset position on the conveyor may also differ, and further examples are omitted here. In general the field of view of the image acquisition device 22 covers the full width of the conveyor; if it cannot, it should at least cover the full width of the target region.
After the target region and the offset information are determined, the offset information determining module can determine the area occupied by the target region on the conveying device and, from it, the horizontal movement information. Optionally, the occupied-area information of the target region on the conveyor is computed from the target region information and the offset information, and the maximum and/or minimum x-axis coordinate is then determined from the occupied-area information. The horizontal movement information describing the required movement is thereby obtained, such that the image acquisition device 22 can capture a complete image of the target region, and the corresponding horizontal movement instruction is generated for transmission to the attitude control device 23.
After the horizontal movement instruction is determined, the horizontal adjustment control module can control the attitude control device by sending it the horizontal movement instruction, moving the image acquisition device to the position indicated by the horizontal movement information so that its field of view covers the target region. Which coordinate the horizontal movement information indicates may follow the position labeling convention of the image acquisition device 22: if the device 22 is referenced by the minimum x-axis position of the image it captures, the indicated position may be the minimum x-axis coordinate; if it is referenced by the maximum x-axis position, the indicated position may be the maximum x-axis coordinate. In response to the horizontal movement instruction, the attitude control device 23 may first reset the horizontal position of the image acquisition device 22 to an initial horizontal position and then adjust the horizontal position according to the instruction, so that after moving to the target horizontal position the field of view of the device 22 covers the target region. The initial horizontal position may be the optimal horizontal position for imaging the preset position, or a horizontal position that yields good acquisition results. Alternatively, the control device 21 may determine the real-time offset of the image acquisition device 22 relative to the initial horizontal position, derive a real-time movement instruction from this offset and the offset information, and transmit it to the attitude control device 23, which can then adjust the horizontal position of the device 22 directly by the indicated direction and distance without first resetting it. Optionally, as shown in FIG. 3, when the attitude control device 23 includes the attitude control PLC 331 and a translation motor 333 and the image acquisition device 22 is the line scan camera 32, the PLC 331 is drivingly connected to the translation motor 333 and can drive it according to the horizontal movement instruction so that the line scan camera 32 is moved to the target horizontal position.
For example, as shown in FIG. 3, with the attitude control device 23 comprising the attitude control PLC 331 and the translation motor 333 and the line scan camera 32 serving as the image acquisition device 22, the control device 21 may be the upper computer 31. The upper computer 31 may determine in advance the target region R of the target article to be acquired, where R_tl denotes the upper-left corner of R and R_br its lower-right corner. When the target article is constrained by the limiting device, its offset information relative to the conveying device is Offset (Offset_x, Offset_y), so the position of the area occupied by the region to be acquired on the conveyor is R_tl + Offset to R_br + Offset, where Offset_x is the widthwise offset of the target article relative to the conveyor plane and Offset_y its lengthwise offset. After the horizontal position of the line scan camera 32 is reset (i.e. set to the initial horizontal position, denoted horizontal position 0), and given that the line scan camera 32 is referenced by the minimum x-axis position of the image it can capture, the attitude control PLC 331 drives the translation motor 333 to shift the line scan camera 32 horizontally to the target horizontal position R_tl_x + Offset_x, completing the horizontal attitude adjustment of the line scan camera 32. Further, after the height and horizontal position of the line scan camera 32 have been adjusted, its current position can be electromagnetically locked to avoid displacement.
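The horizontal positioning arithmetic in this example can be illustrated with a minimal Python sketch; the data layout and the numeric values are assumptions made only for illustration.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    tl_x: float  # upper-left corner, x (width direction of the conveyor)
    tl_y: float  # upper-left corner, y (moving direction of the conveyor)
    br_x: float  # lower-right corner, x
    br_y: float  # lower-right corner, y

def occupied_region(target_region: Rect, offset_x: float, offset_y: float) -> Rect:
    """Shift the target region R by Offset to get its footprint on the conveyor."""
    return Rect(target_region.tl_x + offset_x, target_region.tl_y + offset_y,
                target_region.br_x + offset_x, target_region.br_y + offset_y)

def horizontal_target(target_region: Rect, offset_x: float) -> float:
    """Camera referenced by the minimum x of its image: move to R_tl_x + Offset_x."""
    return target_region.tl_x + offset_x

# Illustrative numbers only: a 300 mm wide, 400 mm long region offset 50 mm / 20 mm.
R = Rect(tl_x=0.0, tl_y=0.0, br_x=300.0, br_y=400.0)
footprint = occupied_region(R, offset_x=50.0, offset_y=20.0)
print(footprint.tl_x, footprint.br_x)        # 50.0 350.0: x-extent on the conveyor
print(horizontal_target(R, offset_x=50.0))   # 50.0: target horizontal camera position
```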
With the method of this embodiment, the horizontal position of the image acquisition device 22 can be adjusted according to the target region of the target article and the offset information of that region relative to the preset position on the conveying device. Even if the target region is not currently at the optimal horizontal acquisition position of the image acquisition device 22, the offset information allows the horizontal position to be corrected so that the device 22 captures the texture image of the target region from the optimal acquisition position.
As an alternative embodiment, the object surface texture image acquisition system further comprises:
a speed acquisition module, configured to acquire the speed information, detected by the speed detection device 24, of the conveying device that conveys the target article;
an acquisition frequency control module, configured to determine an image acquisition frequency according to the speed information and to control the image acquisition device 22 to acquire the surface texture image at that frequency.
As shown in fig. 2, the article surface texture image acquisition hardware system further includes: a speed detecting device 24;
The speed detecting device 24 is configured to detect speed information of the conveying device for conveying the target article, and transmit the speed information to the control device 21;
the control device 21 is further configured to determine an image acquisition frequency according to the speed information, and send the image acquisition frequency to the image acquisition device 22;
the image acquisition device 22 is further configured to acquire the surface texture image at the image acquisition frequency.
In this embodiment, the speed detection device 24 may employ one or more of the following: a linear velocity encoder, a photoelectric encoder, a laser displacement encoder, and the like. The speed information of the conveying device while it conveys the target article can be determined by the speed detection device 24, which is communicatively connected to the control device 21 so that the control device 21 can obtain the speed information from it. The control device 21 can also determine the pixel precision of the image acquisition device 22 and then derive the image acquisition frequency from the speed information and the pixel precision; that is, the physical distance covered by each capture of the image acquisition device 22 equals the speed indicated by the speed information multiplied by the acquisition period corresponding to the acquisition frequency. After the image acquisition frequency is obtained, it is sent to the image acquisition device 22 so that the device can capture texture images of the article at that frequency. In this way the image acquisition device 22 acquires the surface texture image of the target article accurately, avoiding missing or duplicated surface texture content that would affect later texture-based authentication.
For example, as shown in FIG. 3, when the control device 21 is the upper computer 31, the speed detection device 24 is the linear velocity encoder 34, the conveying device is a conveyor belt, and the image acquisition device 22 is the line scan camera 32, the line frequency M of the line scan camera (i.e. its image acquisition frequency) may be computed from the pulse count Q output by the linear velocity encoder 34 within 1 s, and the resulting line frequency may be written into the line scan camera controller through the upper computer 31. Let the circumference of the linear velocity encoder wheel be D, the total number of pulses per revolution of the encoder be T, and the camera acquisition precision be P (lines per unit length); then M = D·P·Q / T. Since D·P/T is a constant, denoted K, the line frequency of the line scan camera 32 is simply Q·K, obtained by multiplying the real-time pulse count by the coefficient K.
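The relation M = D·P·Q/T can be illustrated with a short Python sketch; the numeric values below are assumptions for illustration only.

```python
def line_frequency(pulses_per_second: float,
                   wheel_circumference_mm: float,
                   pulses_per_revolution: float,
                   pixels_per_mm: float) -> float:
    """M = D * P * Q / T.

    D/T is the belt travel per encoder pulse and Q is pulses per second,
    so D*Q/T is the belt speed; multiplying by P (lines per mm) gives the
    lines per second the camera must capture to keep pixel precision fixed.
    """
    k = wheel_circumference_mm * pixels_per_mm / pulses_per_revolution  # constant K = D*P/T
    return pulses_per_second * k

# Illustrative numbers: 200 mm wheel, 2000 pulses/rev, 12.9 um/px -> ~77.5 lines/mm.
P = 1.0 / 0.0129
print(round(line_frequency(pulses_per_second=5000, wheel_circumference_mm=200.0,
                           pulses_per_revolution=2000, pixels_per_mm=P)))  # lines/s
```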
As an alternative embodiment, the object surface texture image acquisition system further comprises:
a delayed shooting information determining module, configured to determine delayed shooting information upon receiving a trigger signal generated by the object detection device when the target article on the conveying device reaches a preset position; the delayed shooting information comprises a shooting delay and a shooting duration, the shooting delay being calculated from the distance between the image acquisition area of the image acquisition device and the preset position and from the moving speed of the conveying device, and the shooting duration being calculated from the length of the target article along the moving direction and the moving speed;
and a delayed shooting control module, configured to control the image acquisition device to perform delayed shooting according to the shooting delay and shooting duration indicated by the delayed shooting information.
As shown in fig. 2, the article surface texture image acquisition hardware system further includes: an object detection device 25;
the object detection device 25 is configured to generate a trigger signal when the target article on the conveying device reaches the preset position and to send the trigger signal to the control device 21;
the control device 21 is further configured to send a delayed shooting instruction to the image acquisition device 22 upon receiving the trigger signal; the delayed shooting instruction carries the delayed shooting information, which comprises a shooting delay and a shooting duration, the shooting delay being calculated from the distance between the image acquisition area of the image acquisition device 22 and the preset position and from the moving speed of the conveying device, and the shooting duration being calculated from the length of the target article along the moving direction and the moving speed;
the image acquisition device 22 is configured to perform delayed shooting according to the shooting delay and shooting duration indicated by the delayed shooting instruction.
In this embodiment, the object detection device 25 may be any device that detects whether an article is present on the conveyor; optionally it may include, but is not limited to, one or more of the following: a laser trigger, an image recognition device, and the like. Its detection area is the preset position on the conveyor, so when the object detection device 25 detects an article at the preset position it may generate a trigger signal indicating that an article is present there and transmit the signal to the control device 21 over the communication channel between them.
After receiving the trigger signal, the control device 21 may calculate the shooting delay T1 from the distance d1 between the image acquisition area of the image acquisition device 22 and the preset position and the moving speed v of the conveying device, that is, T1 = d1 / v. It may likewise calculate the shooting duration T2 from the length d2 of the target article along the moving direction and the moving speed v, that is, T2 = d2 / v. From the shooting delay and shooting duration it derives the delayed shooting instruction for controlling the image acquisition device 22 and finally sends that instruction to the device 22.
Upon receiving the delayed shooting instruction, the image acquisition device 22 starts capturing after the shooting delay has elapsed and stops capturing when the shooting duration has passed.
For example, as shown in FIG. 3, when the object detection device 25 is the laser trigger 35 and the image acquisition device 22 is the line scan camera 32, suppose the laser trigger 35 is a distance d ahead of the line scan acquisition area along the belt and the average conveyor speed over 1 s is v (v = D·Q/T). Then the line scan camera 32 needs to start capturing after a delay of t = (d + R_tl_y) / v seconds, capture for c = (R_br_y - R_tl_y) / v seconds, assemble the image captured during those c seconds into a file, and return the file to the control device 21 for uploading.
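This timing calculation can be illustrated with a short Python sketch; the helper names and numbers are illustrative assumptions only.

```python
def shooting_delay_s(trigger_to_scanline_mm: float, region_tl_y_mm: float,
                     belt_speed_mm_s: float) -> float:
    """t = (d + R_tl_y) / v: wait until the top edge of the target region
    reaches the camera's scan line after the laser trigger fires."""
    return (trigger_to_scanline_mm + region_tl_y_mm) / belt_speed_mm_s

def shooting_duration_s(region_tl_y_mm: float, region_br_y_mm: float,
                        belt_speed_mm_s: float) -> float:
    """c = (R_br_y - R_tl_y) / v: keep capturing while the region passes by."""
    return (region_br_y_mm - region_tl_y_mm) / belt_speed_mm_s

# Illustrative numbers: trigger 150 mm before the scan line, 400 mm long region, 500 mm/s belt.
print(shooting_delay_s(150.0, 0.0, 500.0))      # 0.3 s delay before capture starts
print(shooting_duration_s(0.0, 400.0, 500.0))   # 0.8 s of line-scan capture
```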
With the method of this embodiment, the image acquisition device 22 can be controlled precisely: governed by the determined shooting delay and shooting duration, it captures only images related to the target article and avoids acquiring a large amount of useless imagery.
As an alternative embodiment, the object surface texture image acquisition system further comprises:
an initialization module, configured to, when the shooting conditions include pixel precision, determine the target pixel precision required for image acquisition, determine the initial height from the target pixel precision and the device parameters of the image acquisition device, control the attitude control device to adjust the image acquisition device to that initial height, and acquire the adjustment-completion information generated by the attitude control device so as to trigger the image acquisition device to focus.
That is, the control device 21 implements the following through the initialization module: when the shooting conditions include pixel precision, it determines the target pixel precision required for image acquisition, determines the initial height from the target pixel precision and the device parameters of the image acquisition device 22, and transmits an initialization adjustment instruction corresponding to the initial height to the attitude control device 23.
The attitude control device 23 is configured to adjust the image acquisition device 22 to the initial height according to the received initialization adjustment instruction and to feed the generated adjustment-completion information back to the control device 21, which then triggers a focus instruction to the image acquisition device 22.
The image acquisition device 22 is further configured to focus according to the focus instruction.
In this embodiment, when the shooting conditions include pixel precision, the desired target pixel precision may be entered into the control device 21 manually in advance. After obtaining the target pixel precision, the control device 21 may determine the initial height from the target pixel precision and the device parameters of the image acquisition device 22; optionally it may calculate the initial height from parameters such as the focal length and pixel size of the image acquisition device 22 and use it as the default height of the device. Once the initial height is determined, an initialization adjustment instruction corresponding to it may be generated and transmitted to the attitude control device 23 over the communication channel between them.
After obtaining the initialization adjustment instruction, the attitude control device 23 can adjust the height of the image acquisition device 22 to the initial height indicated in the instruction and, once the adjustment is done, generate adjustment-completion information indicating that the image acquisition device 22 has been set to the initial height and feed it back to the control device 21. After the adjustment, the current position of the image acquisition device 22 can be electromagnetically locked to prevent displacement.
After acquiring the adjustment-completion information, the control device 21 may generate a focus instruction for the image acquisition device 22 and send it over the communication channel between them; upon receiving the focus instruction, the image acquisition device 22 focuses accordingly, so that it can focus on, and sharply capture, an object located at the initial height from itself.
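No formula for computing the initial height is given here; as one illustrative assumption only (a thin-lens relation, not necessarily the calculation used by the control device 21), the initial height can be related to the target pixel precision and lens parameters as sketched below, with made-up example values for focal length and sensor pixel pitch.

```python
def initial_height_mm(focal_length_mm: float,
                      sensor_pixel_pitch_um: float,
                      target_pixel_precision_um: float) -> float:
    """Thin-lens sketch (an assumption, not the method of this disclosure):
    magnification m = sensor_pixel_pitch / object_pixel_precision, and the
    object distance is f * (1 + 1/m); lens-to-mount offsets are ignored."""
    m = sensor_pixel_pitch_um / target_pixel_precision_um
    return focal_length_mm * (1.0 + 1.0 / m)

# Example: 35 mm lens, 5 um sensor pixels, 12.9 um/px target precision (illustrative).
print(round(initial_height_mm(35.0, 5.0, 12.9), 1))  # ~125.3 mm object distance
```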
As an alternative embodiment, the object surface texture image acquisition system further comprises: and the illumination control module is used for controlling the illumination equipment to carry out lateral light source illumination on the target object under the condition that the preset photographing requirement is determined to be met.
As an alternative embodiment, the article surface texture image acquisition hardware system further comprises: an illumination device; the illumination device is used for illuminating the target object by a side light source. In this embodiment, the illumination device may be a light strip disposed in a rectangular frame, and the light strip also encloses a rectangular shape, and further, the relative position between the illumination device and the image capturing device 22 is protected and fixed, alternatively, the image capturing device 22 may be fixed under the platform, and the rectangular frame is also fixed around the platform, and the illumination device is also located under the platform. The preset position is illuminated for each light strip towards the preset position towards which the image acquisition device 22 is directed, and in a side illumination manner. Therefore, when shooting the target object, the target object can be illuminated by the side light source. And, after the illumination device illuminates the target object, the image acquired by the target object can clearly identify the texture of the target object, alternatively, as shown in fig. 3, the angle between the lateral light source and the plane of the conveying device can be between 25 ° and 40 °, and preferably, the angle between the lateral light source and the plane of the conveying device can be 30 °. Therefore, overexposure caused by direct irradiation of the target object can be avoided, and further the situation that the surface texture of the target object cannot be clearly shot can be avoided.
The control device 21 implements the following method by the above-described illumination control module: it is first determined whether the current situation meets a preset photographing requirement, for example, whether an object to be photographed, such as a target object, exists currently, or whether a photographing instruction is received currently, and so on. Under the condition that the preset photographing requirement is met, the illumination equipment can be controlled to conduct lateral light source illumination on the target object.
Furthermore, the article surface texture image acquisition hardware system may operate in a dark-field environment, where the line-scan light source arrangement constitutes dark-field illumination. Dark-field illumination brings the following beneficial effects. Contrast enhancement: because the target object is illuminated from the side, any light reflected or scattered from its surface makes the sample appear very bright against a dark background. Detail highlighting: dark-field illumination can reveal fine details of the target object. Reduced interference from direct light: since no light travels directly from the light source into the objective lens, the image of the target object is not overexposed, and light diffusion and glare are reduced.
As an alternative embodiment, the object surface texture image acquisition system further comprises:
a storage control module, which is used for storing the surface texture image in the storage device after the surface texture image is acquired from the image acquisition device.
As shown in fig. 2, the article surface texture image acquisition hardware system further includes: a storage device 26; the control device 21 implements the following method by the above-described storage control module: after the surface texture image from the image acquisition device 22 is acquired, the surface texture image is stored in the storage device 26. The storage device 26 may be a separate device from the control device 21 or may be a device integrated into the control device 21, so that after the control device 21 acquires the surface texture image, the surface texture image may be stored in the storage device 26 through a communication channel with the storage device 26, so that the surface texture image may be acquired directly from the storage device 26 at a later stage, and a complete surface texture image corresponding to the target object may be obtained through processing.
In the related art, an area-array camera module is used to collect texture images, which has the following drawbacks. 1. The acquisition range is limited and the cost is high: mainstream large-area array camera modules are almost exclusively CMOS modules, and even the largest of them has a limited acquisition range and cannot necessarily capture the whole of the desired area; moreover, the price of a CMOS camera module rises roughly exponentially with its size. 2. The acquisition quality is low: mainstream large-area array camera modules are CMOS, and an inherent characteristic of CMOS is the rolling-shutter ("jello") effect when shooting objects moving at high speed, which seriously affects microscopic texture imaging.
To overcome the above technical problems, as an alternative embodiment, the image acquisition device 22 is a line-scan camera. The resolution and number of line-scan cameras used in the image acquisition device 22 are not limited here; typically, one or more line-scan cameras may be used to capture images side by side across the width of the conveying device, and their number and resolution depend on the maximum width of the microscopic texture of the article to be acquired.
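As a rough illustration of the sizing consideration above, the following sketch estimates how many line-scan cameras of a given line resolution would be needed to cover a conveying device of a given width at a target pixel precision; the overlap value and all names are assumptions, not figures from the patent.

```python
# Illustrative estimate (assumed relationship) of the number of line-scan cameras
# needed to cover the conveying device's width, allowing some overlap between
# adjacent cameras so their images can be stitched.
import math

def cameras_needed(conveyor_width_mm: float,
                   target_pixel_precision_mm: float,
                   pixels_per_line: int,
                   overlap_pixels: int = 200) -> int:
    width_in_pixels = conveyor_width_mm / target_pixel_precision_mm
    effective_pixels_per_camera = pixels_per_line - overlap_pixels
    return max(1, math.ceil((width_in_pixels - overlap_pixels) / effective_pixels_per_camera))

# Example: 600 mm wide conveyor, 20 um per pixel, 16k-pixel line-scan cameras.
print(cameras_needed(600.0, 0.020, 16384))  # 2
```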
Fig. 6 is a flowchart of a method for acquiring a surface texture image of an article according to an exemplary embodiment. The article surface texture image acquisition hardware system comprises a control device, an image acquisition device and a gesture control device, and the method is applied to the control device and comprises:
Step S601, determining a target height according to the thickness of a target object to be subjected to image acquisition; the target height is the sum of the thickness of the target object and an initial height, and the initial height is the distance between the image acquisition device and the surface of the conveying device at which the image acquisition device meets the shooting conditions required for the target object and is clearly focused on the surface of the conveying device;
Step S602, controlling the gesture control device to adjust the image acquisition device to a target height;
Step S603, controlling the image acquisition device at the target height to perform image acquisition on the target object placed on the conveying device, so as to obtain a surface texture image of the target object.
As an alternative embodiment, the method for acquiring texture image of the surface of an article further comprises the following steps:
Determining a target area of the surface of the target object to be subjected to image acquisition and offset information of the target area relative to a preset position on the conveying device, and determining horizontal movement information of the image acquisition equipment according to the target area and the offset information; the horizontal movement information comprises an x-axis maximum coordinate and/or a minimum coordinate of an occupied area of the target area under a target coordinate system, wherein the target coordinate system takes a preset position as an origin, takes the direction of the surface of the conveying device, which is perpendicular to the moving direction of the conveying device, as an x-axis and takes the moving direction of the conveying device as a y-axis;
Controlling the gesture control device to move the image acquisition device to the position indicated by the horizontal movement information, so that the field of view of the image acquisition device covers the target area.
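The following sketch illustrates, under assumed names and units, how the horizontal movement information could be derived from the target area and its offset relative to the preset position in the target coordinate system described above; it is not the patent's implementation.

```python
# Sketch (illustrative names): the target coordinate system takes the preset
# position as origin, x perpendicular to the conveying direction, y along it;
# only a translation along x is needed to cover the target area.
from dataclasses import dataclass

@dataclass
class TargetArea:
    x_min: float  # mm, in the target area's own reference frame
    x_max: float

def horizontal_movement(area: TargetArea, offset_x: float) -> dict:
    """offset_x: x offset of the target area relative to the preset position (origin)."""
    x_min = area.x_min + offset_x
    x_max = area.x_max + offset_x
    return {
        "x_min": x_min,                               # minimum x coordinate of the occupied area
        "x_max": x_max,                               # maximum x coordinate of the occupied area
        "camera_center_x": (x_min + x_max) / 2.0,     # illustrative: center the field of view here
    }

print(horizontal_movement(TargetArea(x_min=-40.0, x_max=40.0), offset_x=15.0))
# {'x_min': -25.0, 'x_max': 55.0, 'camera_center_x': 15.0}
```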
As an optional embodiment, the method for acquiring a texture image of a surface of an article further includes:
acquiring speed information, detected by a speed detection device, of the conveying device that conveys the target object;
And determining the image acquisition frequency according to the speed information, and controlling the image acquisition equipment to acquire the surface texture image according to the image acquisition frequency.
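A minimal sketch of the assumed relationship between conveying speed and line acquisition frequency follows; the patent only states that the image acquisition frequency is determined from the speed information, so the one-pixel-per-line rule used here is an illustrative assumption.

```python
# Sketch (assumed rule): choose the line-scan frequency so that the conveying
# device moves exactly one object-space pixel between consecutive lines, which
# keeps the acquired image free of stretching or compression along the motion axis.

def line_rate_hz(conveyor_speed_mm_s: float, pixel_precision_mm: float) -> float:
    return conveyor_speed_mm_s / pixel_precision_mm

# Example: 200 mm/s belt speed, 20 um per pixel -> 10 kHz line rate.
print(line_rate_hz(200.0, 0.020))  # 10000.0
```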
As an optional embodiment, the method for acquiring a texture image of a surface of an article further includes:
Determining delay shooting information under the condition that a trigger signal generated by object detection equipment when a target object on a conveying device reaches a preset position is received; the time-delay shooting information comprises shooting time delay and shooting time length, wherein the shooting time delay is calculated according to the interval distance between an image acquisition area of the image acquisition equipment and a preset position and the moving speed of the conveying device, and the shooting time length is calculated according to the moving direction length and the moving speed of the target object;
And controlling the image acquisition equipment to carry out time-delay shooting according to the shooting time delay and shooting time length indicated by the time-delay shooting information.
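The following sketch works through the delay-shooting arithmetic described above under assumed names; the safety margin is an illustrative addition, not part of the patent.

```python
# Sketch of the delay-shooting arithmetic: the shooting delay covers the travel
# time from the preset (trigger) position to the camera's acquisition area, and
# the shooting duration covers the time the object needs to pass through it.

def delay_shooting_info(gap_mm: float, object_length_mm: float, speed_mm_s: float,
                        margin_s: float = 0.05) -> dict:
    return {
        "shooting_delay_s": gap_mm / speed_mm_s,
        "shooting_duration_s": object_length_mm / speed_mm_s + margin_s,  # margin is illustrative
    }

# Example: acquisition area 300 mm downstream of the trigger, 120 mm long object, 200 mm/s.
print(delay_shooting_info(300.0, 120.0, 200.0))
# {'shooting_delay_s': 1.5, 'shooting_duration_s': 0.65}
```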
As an optional embodiment, the method for acquiring a texture image of a surface of an article further includes:
In the case that the shooting conditions include pixel precision, determining a target pixel precision required for image acquisition, determining the initial height according to the target pixel precision and the device parameters of the image acquisition device, controlling the gesture control device to adjust the image acquisition device to the initial height, and acquiring adjustment completion information generated by the gesture control device so as to trigger the image acquisition device to focus.
As an optional embodiment, the method for acquiring a texture image of a surface of an article further includes:
After the surface texture image from the image acquisition device is acquired, the surface texture image is stored in a storage device.
The specific implementation manner of the method for acquiring the texture image on the surface of the article in any of the above embodiments may refer to the implementation manner of the corresponding portion in the embodiment of the system shown in fig. 1, and will not be described herein again.
As an alternative embodiment, the method further comprises:
and under the condition that the preset photographing requirement is met, controlling the illumination equipment to conduct lateral light source illumination on the target object.
The specific implementation manner of the control method in this embodiment may refer to the implementation manner of the corresponding portion in the system embodiment shown in fig. 1, and will not be described herein.
Fig. 7 is a schematic block diagram of an apparatus according to an exemplary embodiment. Referring to fig. 7, at the hardware level, the apparatus includes a processor 702, an internal bus 704, a network interface 706, a memory 708, and a non-volatile storage 710, and may of course also include hardware required for other functions. One or more embodiments of the present specification may be implemented in software, for example by the processor 702 reading a corresponding computer program from the non-volatile storage 710 into the memory 708 and then running it. Of course, in addition to a software implementation, one or more embodiments of the present specification do not exclude other implementations, such as a logic device or a combination of software and hardware; that is, the execution subject of the following processing flow is not limited to logic units, but may also be hardware or a logic device. The article surface texture image acquisition system shown in fig. 1 can be applied to the apparatus shown in fig. 7 to implement the technical solution of the present specification.
To solve the above-described problems in the related art, embodiments of the present disclosure propose an article authentication scheme based on a surface texture image in order to authenticate an article based on the surface texture image of the article (including microscopic texture features of the surface of the article). This scheme is described below with reference to the drawings.
Embodiments of the present specification provide a surface texture image-based item verification system, the system comprising:
The image acquisition module is used for respectively acquiring corresponding pre-registered surface texture images for N pre-registered objects in a registration stage, wherein N is a positive integer; and acquiring a surface texture image to be verified for the object to be verified in the verification stage;
the article registration module is used for acquiring and maintaining pre-registration texture description information respectively extracted from N pre-registration surface texture images acquired by the image acquisition module in the registration stage;
And the article verification module is used for acquiring texture description information to be verified extracted from the surface texture image to be verified in a verification stage, and determining whether the article to be verified is one of the N preregistered articles according to the comparison result between the texture description information to be verified and the N preregistered texture description information.
In the case of N=1, the scheme can realize 1:1 verification of the article to be verified against the pre-registered article; in the case of N>1, the scheme can realize 1:N verification of the article to be verified against the N pre-registered articles: not only can it be determined whether the article to be verified is a pre-registered article, but also exactly which pre-registered article it is.
In addition, the N preregistered articles participating in the comparison can be set according to actual needs so as to realize corresponding verification results. For example, if the pre-registered articles corresponding to the N pre-registered surface texture images involved in the comparison are compliant products produced by a certain manufacturer, it can be determined whether the article to be verified is one of the products through verification, that is, whether the article to be verified is a compliant product produced by the manufacturer is verified, so that the authenticity verification of the article is completed. For another example, if the preregistered articles corresponding to the N preregistered surface texture images respectively involved in the comparison are a lot of articles sold by a certain brand party authorized by a certain store (the brand party can also authorize other stores to sell corresponding articles), whether the article to be verified is one of the lot of articles can be determined through verification, namely whether the article to be verified is transferred from other stores is verified, and therefore whether the article is in a channel is verified. Of course, the method can also be applied to other scenes, and will not be described in detail.
As can be seen from the functions of the respective modules described above, the item verification scheme implemented by the item verification system comprises two phases, namely a registration phase (for pre-registered items) and a verification phase (for items to be verified). The image acquisition module and the article registration module cooperate to complete the registration phase, while the image acquisition module and the article verification module cooperate to complete the verification phase.
It should be noted that, each functional module in the article verification system (such as the image acquisition module, the article registration module, the article verification module, and each sub-module described below) may be implemented by the foregoing article surface texture image acquisition system, where relevant codes of the image acquisition function may be executed in the article surface texture image acquisition system, so as to implement the foregoing image acquisition module; the server may run relevant codes of the article registration function to implement the registration module and the like, which are not described in detail.
As an alternative embodiment, as shown in fig. 8, a hardware architecture schematic of the article verification system is also provided. The item verification system may include a network 80, a server 81, an item surface texture image acquisition hardware system 82 of any of the preceding embodiments, a verifier device 83, and the like. The verifier device 83 may include a mobile phone 831, a handheld camera 832, and the like, where the handheld camera 832 is configured to collect a corresponding surface texture image to be verified for the item 85 to be verified.
The server 81 may be a physical server comprising a separate host, or the server 81 may be a virtual server carried by a cluster of hosts. During operation, the server 81 may run a server-side program of an application to implement functions related to the application, such as a service platform for providing an item verification service when the server 81 runs a program of the item verification service. For example, the method can be used for maintaining the article description information of the preregistered article and the corresponding preregistered texture description information thereof, and comparing and matching the information of the texture description information to be verified and the preregistered texture description information.
The upper computer 31 and the mobile phone 831 are only some of the types of electronic device that a user may employ. In fact, the user may also use electronic devices such as tablet devices, notebook computers, personal digital assistants (PDAs), wearable devices (e.g., smart glasses, smart watches, etc.), and so on, to which one or more embodiments of the present specification are not limited. During operation, the electronic device may run a client-side program of an application to implement the functions associated with that application, for example acting as a client of the item verification service when it runs the program of the item verification service. A registrar client may run on the upper computer 31, through which the relevant user (hereinafter referred to as the registrar user) can complete registration of an item to be registered (for simplicity of description, the items before and after registration are collectively referred to as pre-registered items in the embodiments of the present specification, it being understood that the item before registration is an item to be registered and becomes a registered item after registration); the mobile phone 831 can run a verifier client, through which the relevant user (hereinafter referred to as the verifier user) can verify an item to be verified. The client of the aforementioned item verification service can be launched and run on the electronic device. The client-side program may be a native application installed on the electronic device, or may be an applet, a quick app, or another similar form. Of course, when web technology such as HTML5 is used, the relevant functions may be implemented through pages presented by a browser, where the browser may be a stand-alone browser application or a browser module embedded in some application.
It should be understood that the article surface texture image acquisition hardware system 82 shown in fig. 8 includes the upper computer 31, the line-scan camera 32 and the motor module (i.e. the gesture control PLC 331, the lifting motor 332 and the translation motor 333), and the verifier device 83 includes the mobile phone 831 and the handheld camera 832; the article surface texture image acquisition hardware system 82 and the verifier device 83 can each be regarded as a small system formed by the corresponding devices, and the following embodiments describe the article verification scheme using this form as an example. However, besides the above composition, either the article surface texture image acquisition hardware system 82 or the verifier device 83 may instead be a single apparatus having both the image acquisition function and the associated processing logic. For example, the verifier device 83 may be a handheld electronic device equipped with a high-precision camera on which the aforementioned verifier client runs, so that the verifier user can operate the client to acquire the corresponding surface texture image to be verified for the article to be verified and complete the verification of the article through interaction with the client, which is not described in detail.
As for the network 80 used for interaction between electronic devices such as the upper computer 31 and the mobile phone 831 on the one hand and the server 81 on the other, wired or wireless communication may be selected according to the communication modes supported by the corresponding electronic devices, which is not limited in this specification. For example, the upper computer 31 may support both wired and wireless communication and may use either as needed, whereas the mobile phone 831 typically supports only wireless communication and therefore communicates over a wireless network. In addition, it should be noted that in fig. 8 the lightning symbols representing the interactions among the upper computer 31, the line-scan camera 32 and the motor module are not connected to the network 80; this is only to make the integral relationship among the three and the article surface texture image acquisition hardware system 82 easier to see, and is not meant to imply that their interaction cannot go through the network 80. In fact, the upper computer 31, the line-scan camera 32 and the gesture control device 23 may all have network connection functions and interact through the network 80, which is not repeated.
When a surface texture image serves as a pre-registered surface texture image for registration, corresponding pre-registered texture description information needs to be extracted from it in order to register the pre-registered article.
Illustratively, in the foregoing article verification system, the article registration module may be implemented by the server 81 shown in fig. 8, the image acquisition module may be implemented by the article surface texture image acquisition hardware system 82 shown in fig. 8 to acquire the pre-registered surface texture image during the registration phase, the verifier device 83 shown in fig. 8 to acquire the surface texture image to be verified during the verification phase, and the article verification module may be implemented by the verifier device 83 shown in fig. 8, and a process of implementing the article verification scheme of the system will be described below by taking this as an example.
In one embodiment, the pre-registered texture description information may be extracted by the server. For example, the article surface texture image acquisition hardware system (such as the upper computer 31) may send the acquired N pre-registered surface texture images to the server; each image may be sent to the server as soon as it is acquired, or the images may be sent once after all N have been acquired, or they may be sent in batches of a preset number, for example every 10 or 100 images when N=1000 or when N is not known in advance, which is not repeated. Upon receiving a pre-registered surface texture image uploaded by the article surface texture image acquisition hardware system, the server may extract the corresponding pre-registered texture description information from it. The server may extract the information as each image is received, or may extract from multiple images in batch in line with how the images were sent, which is not limited in the embodiments of the present specification. It can be understood that the image feature extraction process can be implemented with a pre-trained feature extraction model; given that running such a model often requires considerable resources (such as storage and computing resources), uploading the images to a server with strong computing capability enables efficient extraction of the pre-registered texture description information and keeps the client on the article surface texture image acquisition hardware system side lightweight.
In another embodiment, the pre-registered texture description information can also be extracted by the object surface texture image acquisition hardware system. For example, the object surface texture image acquisition hardware system may extract corresponding pre-registered texture description information from the acquired N pre-registered surface texture images, and send the pre-registered texture description information to the server; correspondingly, the server can receive the preregistered texture description information sent by the article surface texture image acquisition hardware system. Similar to the previous embodiment, when the article surface texture image acquisition hardware system uploads the pre-registered texture description information extracted by itself, the pre-registered texture description information may be uploaded one by one or in batches, which is not repeated.
The preregistered texture description information described in the embodiments of the present specification may include various forms. Taking as an example pre-registered texture description information of any pre-registered surface texture image, it may include pre-registered keypoint description information (e.g., number, keypoint coordinates, etc.) of each pre-registered keypoint identified from the image; and in the case that the image is divided into a plurality of preregistered subgraphs according to a preset size, the preregistered texture description information may further include preregistered subgraph location information of each preregistered subgraph and preregistered aggregate feature vectors respectively corresponding to each preregistered subgraph. The specific role of the above information may be described in detail in the following embodiments of the method for verifying an article based on a surface texture image applied to an article verification system, which are not described herein. When the server maintains the preregistered texture description information, the information such as the vector can be stored in a vector database so as to facilitate subsequent retrieval.
In an embodiment, the embodiment of the present disclosure further proposes a surface texture image-based article verification method applied to an article verification system. As previously described, the system includes an item surface texture image acquisition hardware system, a server, and a verifier device. The method comprises the steps of a registration phase for a pre-registered item and a verification phase for an item to be verified as follows:
Registration phase for preregistered items: the registrar equipment respectively acquires corresponding pre-registered surface texture images aiming at N pre-registered articles, wherein N is a positive integer; the server acquires and maintains preregistered texture description information respectively extracted from the acquired N preregistered surface texture images.
Verification phase for the item to be verified: the verifier equipment acquires a surface texture image to be verified aiming at an object to be verified; before verifying the object to be verified, the verifier device may collect a surface texture image of the object to be verified for the object to facilitate subsequent extraction of corresponding texture description information for comparison. The server or the verifier device acquires texture description information to be verified, which is extracted from the surface texture image to be verified, and determines whether the object to be verified is one of the N pre-registered objects according to the comparison result between the texture description information to be verified and the N pre-registered texture description information.
In addition, in addition to acquiring and maintaining preregistered texture description information, the server may also acquire and maintain item description information of preregistered items corresponding to the preregistered surface texture images. For example, the article description information of any article may include an article unique Identifier (ID), and may further include related information such as a brand, a manufacturer, and a date of manufacture, which will not be described in detail. In this regard, when maintaining preregistered texture description information extracted from any preregistered surface texture image, the server may maintain a mapping relationship between the article description information of the preregistered article corresponding to the image and the preregistered texture description information, e.g., may maintain the two types of information with the unique identification of the article as an information index, so as to enable quick retrieval during subsequent comparison.
It can be understood that the server successfully maintains the preregistered texture description information (or the preregistered texture description information and the corresponding article description information) of the N preregistered articles, i.e. the registration of the N preregistered articles is completed. It will be appreciated that the preregistration texture description information of each preregistration item corresponds to the corresponding item one by one, and the preregistration texture description information of any item can be regarded as a micro fingerprint of the item and can be used for comparing whether the item to be verified is a preregistration item or not later.
The verification process is described below with reference to the accompanying drawings:
In an embodiment, the verifier device may include a verifier control device and a verifier image acquisition device. As shown in fig. 8, the verifier image acquisition device is configured to acquire the corresponding surface texture image to be verified for the article to be verified and to send the acquired image to the verifier control device; the verifier control device is arranged to interact with the server 81 and with the verifier image acquisition device.
In an embodiment, the verifier image acquisition device may be a portable device, so that the verifier operates the device to acquire a corresponding surface texture image to be verified for the item to be verified. The device may be made up of a plurality of components, such as a camera component and a shade component, the complete side view of which may be seen in fig. 9. Wherein, the shading component can comprise a shell and a luminous part (such as a plurality of LED lamp beads or luminous lamp bands, etc.), wherein, a hollow part is formed in the shell for placing the articles to be verified; the light-emitting part can be arranged on the inner wall of the shell, so that light rays emitted by the light-emitting part irradiate towards the inside of the view angle of the camera module (such as the position of the bottom of the hollow part for placing the object to be verified), and a high-quality lighting effect is realized. Through the structure of the shell and the light source, a dark field can be constructed for the object to be shot, so that the quality of the acquired image is improved.
The shooting assembly comprises a camera module and a connection mechanism: the camera module completes the image acquisition, and the connection mechanism connects with the matching connection mechanism at the corresponding position on the shading assembly so that the two assemblies fit closely together. The connection mechanism may take any form, such as a buckle, a screw and nut, or internal and external threads, which is not limited in the embodiments of the present specification. It can be understood that these two mutually matching connection modes allow the same shooting assembly to be used with shading assemblies of different sizes, so that images can be acquired for articles to be verified of different sizes, improving the universality of the verifier image acquisition device.
Fig. 10 shows a bottom view, a top view and a side view of the shooting assembly. As shown in the bottom view, a light-control connector is provided on the camera module housing, through which a cable can connect the light sources in the light-emitting module. As shown in the top view and side view, a control device interface (e.g., a USB Type-C interface) may be provided on the shooting assembly so that it can be connected to the mobile phone 831 shown in fig. 8 through this interface using a cable (e.g., a USB Type-C connection). Of course, the shooting assembly may also be provided with a wireless communication module so as to establish a wireless connection with the mobile phone 831, thereby avoiding the inconvenience a wired connection may cause. As shown in the side view, a transparent glass cover plate is arranged below the camera module to prevent dust, water drops and the like at the shooting site from contaminating the camera, avoiding damage to the camera and guaranteeing image quality. The main board and heat-conducting plate above the camera module are used to keep the camera module operating normally. The 60 mm and 40 mm dimensions shown in the bottom, top and side views of fig. 10 are merely examples of the shooting assembly's dimensions; the shooting assembly is not limited to these dimensions and can be sized flexibly according to the actual article size and other specific conditions, which is not limited in the embodiments of the present specification.
As described above, the verifier device may include a verifier control device and a verifier image acquisition device, where the verifier image acquisition device is configured to acquire a surface texture image to be verified for an article to be verified, and send the acquired image to the verifier control device; correspondingly, the verifier control device is arranged to receive the image. As shown in fig. 8, the verifier control device and the verifier image capturing device may be a mobile phone 831 and a hand-held camera 832, respectively, wherein a user may use the hand-held camera 832 to capture a surface texture image to be verified for an item to be verified and send it to the mobile phone 831.
In one embodiment, the verifier image acquisition device may acquire the surface texture image to be verified for the article to be verified in a number of ways. For example, the verifier image acquisition device can complete the image acquisition independently, without control by other devices, so that the acquisition of the surface texture image to be verified does not require intervention by the verifier control device: the verifier user can adjust the shooting parameters of the handheld camera 832 by hand to complete focusing and then shoot manually to obtain the surface texture image to be verified; or, where the handheld camera 832 itself is provided with control logic, the verifier user may operate the camera to complete the image acquisition automatically.
For another example, the verifier image acquisition device may receive an image acquisition instruction sent by the verifier control device and then acquire the surface texture image to be verified according to the instruction; in this case, the verifier image acquisition device needs to be controlled by the verifier control device to complete the image acquisition (that is, it acquires the image in accordance with the image acquisition instruction). It can be appreciated that, compared with the verifier image acquisition device, whose core function is image acquisition, the verifier control device can implement richer software functions and can therefore also exert richer and more precise control over the verifier image acquisition device.
As shown in fig. 8, the verifier user may operate the handheld camera 832 on its own to acquire the surface texture image to be verified, or, when the mobile phone 831 is connected to the handheld camera 832 (e.g. via the Type-C wired connection shown in fig. 9 or fig. 10), may control the handheld camera 832 through the mobile phone 831 to acquire the image.
In an embodiment, whether or not the verifier image acquisition device performs the image acquisition under the control of the verifier control device, the verifier control device may be configured to display operation prompt information for the verifier image acquisition device, so as to prompt the verifier user to operate the verifier image acquisition device in a preset manner when acquiring the surface texture image to be verified. The operation prompt information may include focusing position indication information (instructing the user to focus the device on the position of the article to be verified), camera parameter indication information (such as aperture, shutter and exposure time, instructing the user to set the camera parameters accurately so as to acquire a higher-quality image), light source parameters (such as brightness and illumination angle, instructing the user to create a better lighting scene for the article to be verified), the number of shots (several images may be shot in succession and the best one selected as the surface texture image to be verified), and so on, which is not repeated. By operating the verifier image acquisition device in the preset manner indicated by the operation prompt information, the verifier user can acquire a high-quality surface texture image to be verified, laying a data foundation for obtaining an accurate comparison result later.
After the surface texture image to be verified is acquired, the verifier image acquisition device may provide the image to the verifier control device, for example, the image may be sent based on wired connection or wireless connection, or the image may be transferred through a storage medium such as a usb disk, an SD card, or the like.
Further, the verifier control device may further send the received surface texture image to be verified to the server, so that the server may extract corresponding texture description information to be verified from the received image. Or the verifier control device can also extract corresponding texture description information to be verified from the received surface texture image to be verified and send the corresponding texture description information to the server, and the server can directly acquire the description information and quickly start comparison. As shown in fig. 8, after the mobile phone 831 obtains the surface texture image to be verified, the corresponding texture description information to be verified can be directly extracted locally, or can be sent to the server 81, so that the server extracts the corresponding texture description information to be verified.
As described above, the verifier device may extract the texture description information to be verified by itself and send it to the server. It should be noted, however, that this applies to the case in which the server performs the information comparison (that is, compares the texture description information to be verified with the N pieces of pre-registered texture description information); if the subsequent information comparison is completed by the verifier device itself, the verifier device does not need to send the information to the server.
Once image acquisition and information extraction in the verification process are completed, the server or the verifier device can perform the information comparison, that is, compare the texture description information to be verified extracted in the verification process with the N pieces of pre-registered texture description information maintained by the server (extracted in the registration process), so as to determine, from the comparison result, whether the article to be verified is one of the N pre-registered articles.
In an embodiment, the texture description information described in the embodiments of the present specification may be an overall texture feature vector extracted directly from the whole surface texture image. In this case, indexes such as the vector distance or vector similarity between the overall texture feature vector to be verified and a given pre-registered overall texture feature vector can be calculated to determine the degree of similarity between the surface texture image to be verified and each pre-registered surface texture image; if the similarity corresponding to any pre-registered surface texture image is greater than a similarity threshold, it can be determined that this image matches the surface texture image to be verified, i.e. the pre-registered article corresponding to this image and the article to be verified are the same article, and the remaining pre-registered articles need not be compared.
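As a minimal sketch of this whole-image variant, the following snippet compares a query vector against pre-registered vectors by cosine similarity and stops at the first match above a threshold; the vectors, identifiers and threshold value are illustrative assumptions.

```python
# Whole-image comparison sketch: cosine similarity with early exit on first match.
import numpy as np

def match_overall(query_vec, registered, sim_threshold=0.85):
    """registered: mapping from item id to its pre-registered overall feature vector."""
    q = query_vec / np.linalg.norm(query_vec)
    for item_id, vec in registered.items():
        sim = float(np.dot(q, vec / np.linalg.norm(vec)))
        if sim > sim_threshold:
            return item_id   # matched: same article as this pre-registered item
    return None              # no match: not one of the N pre-registered items

# Example with random 128-d vectors (illustrative only).
rng = np.random.default_rng(0)
db = {"item-001": rng.normal(size=128), "item-002": rng.normal(size=128)}
print(match_overall(db["item-002"] + 0.01 * rng.normal(size=128), db))  # item-002
```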
In another embodiment, the texture description information may be local texture features extracted for a plurality of small regions in the image. In this case, each local texture feature in the surface texture image to be verified can be cross-compared with each local texture feature in every pre-registered surface texture image to determine the final comparison result. However, since a surface texture image may contain a massive amount of micro-texture, the amount of computation in this comparison approach is generally large and the computing-power demand placed on the verifier (such as a commodity buyer) is high, making it difficult for ordinary buyers to apply; moreover, verification efficiency is low, making batch verification hard to achieve.
To this end, the embodiments of the present specification propose an image comparison scheme comprising a coarse-screening stage, a fine-screening stage and a comparison stage. In at least one of the coarse-screening and fine-screening stages, feature vectors are filtered based on geometric consistency constraint conditions, which reduces the amount of computation and improves verification efficiency while preserving verification accuracy. Geometric consistency refers to the property that the interrelationship between two or more geometric objects remains unchanged under geometric transformation; in short, it describes the ability of a set of geometric objects to preserve their shape and relative positions after a transformation such as rotation, translation or scaling. In this scheme, the relative positional relationship between different sub-graphs (and/or different key points) is checked through geometric consistency constraints.
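The following snippet illustrates the geometric consistency idea in its simplest form, a distance constraint between two matched positions that should be preserved under rotation and translation; the tolerance value is an assumption for illustration.

```python
# Geometric consistency (distance constraint) sketch: for two candidate matches,
# the distance between the two positions in the image to be verified should be
# close to the distance between their counterparts in the pre-registered image,
# since rotation and translation preserve distances.
import math

def pair_is_distance_consistent(p1_query, p2_query, p1_reg, p2_reg, tol=5.0) -> bool:
    d_query = math.dist(p1_query, p2_query)
    d_reg = math.dist(p1_reg, p2_reg)
    return abs(d_query - d_reg) <= tol

print(pair_is_distance_consistent((10, 10), (50, 40), (110, 5), (150, 35)))  # True
```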
In an embodiment, the texture description information may include sub-graph position information, key point description information and an aggregate feature vector, that is, the pre-registered texture description information includes pre-registered sub-graph position information, pre-registered key point description information and pre-registered aggregate feature vector, and the texture description information to be verified includes sub-graph position information to be verified, key point description information to be verified and aggregate feature vector to be verified. In this scenario, it is necessary to briefly explain the extraction method of the texture description information:
The article surface texture image acquisition hardware system or the server divides each pre-registered surface texture image according to a preset size to obtain pre-registered sub-graphs, and for each pre-registered sub-graph: the position information of the pre-registered sub-graph in the pre-registered surface texture image to which it belongs (such as its coordinates relative to the image origin) can be determined, the pre-registered key points in the pre-registered sub-graph can be identified, and, based on the identification result, the pre-registered key point description information of each pre-registered key point (such as pre-registered key point position information representing the position of the key point in the image, and a pre-registered key point feature vector representing the micro-texture features of the key point and its nearby local area) and the pre-registered aggregate feature vector corresponding to the pre-registered sub-graph (representing the micro-texture features of the pre-registered sub-graph as a whole) can be extracted. Specifically, for any pre-registered sub-graph, a pre-registered key point feature vector and a credibility value are determined for each pre-registered key point identified in the sub-graph (the credibility value of a pre-registered key point is proportional to the accuracy/credibility of its identification result; that is, the larger the credibility value, the more likely it is that the key point actually exists in the pre-registered sub-graph); then at least a part of the pre-registered key points, namely those whose credibility value is not smaller than a score threshold, is determined, and the pre-registered key point feature vectors corresponding to this part of the key points are aggregated to obtain the pre-registered aggregate feature vector corresponding to the pre-registered sub-graph. The server may then store, for each pre-registered sub-graph obtained by dividing each pre-registered surface texture image, the pre-registered sub-graph position information, the pre-registered key point description information and the pre-registered aggregate feature vector corresponding to that sub-graph in association in a vector database. It can be understood that, for any pre-registered surface texture image, the above information (i.e. a set of pre-registered sub-graph position information, pre-registered key point description information and pre-registered aggregate feature vectors) corresponding to each pre-registered sub-graph obtained by dividing the image together forms the pre-registered texture description information extracted from the image.
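A schematic sketch of this registration-side extraction flow follows, with a placeholder key point detector standing in for the trained model and with illustrative sub-graph size, threshold and feature dimensions; it shows the structure of the flow rather than the patent's actual algorithm.

```python
# Registration-side extraction sketch: divide the image into fixed-size sub-graphs,
# keep key points whose credibility is at least the score threshold, and aggregate
# the kept key-point feature vectors (here by a simple mean) per sub-graph.
import numpy as np

SUB_SIZE = 256          # illustrative preset sub-graph size
SCORE_THRESHOLD = 0.5   # illustrative credibility threshold

def detect_keypoints(subgraph):
    """Placeholder for the trained detector: returns (coords, feature_vectors, credibility)."""
    n = 8
    coords = np.random.randint(0, SUB_SIZE, size=(n, 2))
    feats = np.random.rand(n, 128).astype(np.float32)
    conf = np.random.rand(n)
    return coords, feats, conf

def extract_preregistered_description(image):
    records = []
    h, w = image.shape[:2]
    for y in range(0, h - SUB_SIZE + 1, SUB_SIZE):
        for x in range(0, w - SUB_SIZE + 1, SUB_SIZE):
            sub = image[y:y + SUB_SIZE, x:x + SUB_SIZE]
            coords, feats, conf = detect_keypoints(sub)
            keep = conf >= SCORE_THRESHOLD
            if not keep.any():
                continue
            agg = feats[keep].mean(axis=0)            # aggregate feature vector of the sub-graph
            records.append({
                "sub_position": (x, y),               # sub-graph position in the full image
                "keypoints": coords[keep] + (x, y),   # key-point positions in the full image
                "keypoint_vectors": feats[keep],
                "aggregate_vector": agg / np.linalg.norm(agg),
            })
    return records

records = extract_preregistered_description(np.random.rand(512, 512))
print(len(records), records[0]["aggregate_vector"].shape)  # e.g. 4 (128,)
```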
Similarly, in the verification process, the server or the verifier device may intercept the sub-graph to be verified from the surface texture image to be verified through the sliding window with the preset size, and then for each sub-graph to be verified: determining position information of the sub-image to be verified in the surface texture image to be verified (such as coordinates of the sub-image to be verified in the image relative to an image origin), identifying key points to be verified in the sub-image to be verified, and extracting key point description information to be verified of each key point to be verified (such as position information of the key points to be verified for representing positions of the key points to be verified in the image and key point feature vectors to be verified for representing micro-texture features of the key points to be verified and local areas nearby) and aggregate feature vectors to be verified (for representing micro-texture features of the whole sub-image to be verified) of the sub-image to be verified based on identification results.
Further, for each aggregate feature vector to be verified: a group of pre-registered aggregate feature vectors with the highest vector similarity to the aggregate feature vector to be verified is retrieved from the vector database, and the aggregate feature vector to be verified forms an aggregate vector pair with each feature vector in that group. Then, according to the position information of each sub-graph to be verified and each pre-registered sub-graph in the surface texture image to which it belongs, target aggregate vector pairs whose corresponding sub-graphs satisfy the geometric consistency constraint condition are screened out from the aggregate vector pairs formed for all the aggregate feature vectors to be verified, and at least a part of the pre-registered surface texture images corresponding to the target aggregate vector pairs are recalled. The geometric consistency constraint condition may include a distance constraint condition (constraining the distance between two sub-graph positions), an angle constraint condition (constraining the included angle formed by the lines connecting three sub-graph positions), a combined constraint condition (constraining both the distance from a sub-graph's coordinates to the origin and the angle of the line connecting two sub-graphs relative to the origin), and the like. Finally, the key point description information to be verified corresponding to the key points to be verified identified in the surface texture image to be verified, and the pre-registered key point description information corresponding to the pre-registered key points identified in each recalled pre-registered surface texture image, are acquired, and whether the article to be verified is one of the N pre-registered articles is determined according to the comparison result between the key point description information to be verified and the pre-registered key point description information.
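The coarse-screening stage just described can be sketched as follows, with a brute-force top-k search standing in for the vector database and a simple distance constraint standing in for the full set of geometric consistency conditions; all names and thresholds are illustrative.

```python
# Coarse-screening sketch: pair each aggregate vector to be verified with its k most
# similar pre-registered aggregate vectors, filter the pairs by a distance-based
# geometric consistency check on sub-graph positions, then recall the pre-registered
# images behind the surviving (target) pairs.
import numpy as np
from itertools import combinations

def top_k_pairs(query_aggs, reg_aggs, k=5):
    """Vectors are assumed L2-normalized; returns (query_idx, reg_idx) pairs."""
    pairs = []
    for qi, qv in enumerate(query_aggs):
        sims = reg_aggs @ qv
        for ri in np.argsort(-sims)[:k]:
            pairs.append((qi, int(ri)))
    return pairs

def geometric_filter(pairs, query_pos, reg_pos, reg_image_ids, tol=30.0):
    """Keep pairs that take part in at least one distance-consistent pair of pairs,
    comparing only pairs that point into the same pre-registered image."""
    kept = set()
    for (q1, r1), (q2, r2) in combinations(pairs, 2):
        if reg_image_ids[r1] != reg_image_ids[r2]:
            continue
        d_q = np.linalg.norm(np.subtract(query_pos[q1], query_pos[q2]))
        d_r = np.linalg.norm(np.subtract(reg_pos[r1], reg_pos[r2]))
        if abs(d_q - d_r) <= tol:   # distance constraint condition
            kept.update([(q1, r1), (q2, r2)])
    return list(kept)

def recall_images(target_pairs, reg_image_ids):
    """Recall the pre-registered surface texture images behind the target pairs."""
    return sorted({reg_image_ids[r] for _, r in target_pairs})
```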
This embodiment proposes filtering the feature vectors using vector similarity and geometric consistency constraints in the coarse screening stage: for the to-be-verified aggregate feature vector of each sub-image extracted from each sub-image of the to-be-verified surface texture image, a group of pre-registered aggregate feature vectors with highest similarity are screened from the pre-registered aggregate feature vectors stored in a vector database and used for forming an aggregate vector pair with the to-be-verified aggregate feature vector, then target aggregate vector pairs, corresponding to the sub-images, meeting geometric consistency constraint conditions are screened from all the aggregate vector pairs according to the position information of the sub-images in the surface texture image, and at least one part of the pre-registered surface texture images corresponding to the target aggregate vector pairs are recalled. Further, in the fine screening stage and the comparison stage, for the part of the preregistered surface texture image recalled in the coarse screening stage, whether the preregistered surface texture image matched with the surface texture image to be verified exists or not is determined according to a comparison result between the key point description information to be verified and preregistered key point description information (corresponding to the preregistered key point in the part of the preregistered surface texture image) so as to complete the comparison.
The embodiments of the present specification verify the article to be verified by comparing the surface texture image to be verified with the pre-registered surface texture images, and the verification process relies on the following general rule: a surface texture image acquired from an article reflects the microscopic texture of the article's surface, and for any two identical surface texture images and any two sub-graphs taken from them respectively, if the positions of the two sub-graphs in their respective images are the same (or close), their surface textures will also be the same (or close). The scheme characterizes the overall micro-texture of a sub-graph through the aggregate feature vector corresponding to that sub-graph, which can be understood as follows. Considering the aggregate feature vectors to be verified obtained from the sub-graphs to be verified of the article to be verified, and the pre-registered aggregate feature vectors obtained from the pre-registered sub-graphs of at least one pre-registered article: if the article to be verified is not a pre-registered article, no pre-registered surface texture image will match the surface texture image to be verified, so with high probability the sub-graphs to be verified and the pre-registered sub-graphs will not satisfy the geometric consistency constraint condition (which constrains the relative positions of the sub-graphs). Moreover, even if the article to be verified is a certain pre-registered article (i.e. the two are the same article), so that the pre-registered surface texture image acquired from that article matches the surface texture image to be verified, the positions of the sub-graphs in their respective images are still not exactly the same, because the pre-registered surface texture image is divided into pre-registered sub-graphs while the sub-graphs to be verified are cut from the surface texture image to be verified; therefore only some of the aggregate vector pairs satisfy the above general rule with respect to the positions of their corresponding sub-graphs. It follows that, among all the aggregate vector pairs before coarse screening, only a small portion satisfy the general rule with respect to the positions of their corresponding sub-graphs.
In contrast, the scheme of this embodiment screens all the aggregate vector pairs in the coarse-screening stage using the vector similarity and geometric consistency constraint conditions, so that a large number of aggregate vector pairs whose corresponding sub-graphs do not satisfy the general rule can be filtered out at an early stage, reducing the processing pressure of the subsequent fine-screening and comparison stages and greatly reducing the overall amount of computation in the verification process. The scheme therefore not only lowers the computing-power requirement that the verification process places on the verifier (such as a commodity buyer) and thus the difficulty of applying the scheme, but also improves verification efficiency because the amount of computation is reduced, which is helpful for verifying large numbers of articles in batches.
In another embodiment, the key point description information may include key point position information and key point feature vectors corresponding to the key points, that is, the pre-registered key point description information includes pre-registered key point position information and pre-registered key point feature vectors, and the to-be-verified key point description information includes to-be-verified key point position information and to-be-verified key point feature vectors. Based on this, when the verifier device or the server determines whether the to-be-verified item is one of the N pre-registered items according to the comparison result between the to-be-verified key point description information and the pre-registered key point description information, the following manner may be implemented:
First, key point vector pairs are determined, that is, for each acquired key point feature vector to be verified: a group of pre-registered key point feature vectors with the highest vector similarity to the key point feature vector to be verified is retrieved from the acquired pre-registered key point feature vectors, and the key point feature vector to be verified forms a key point vector pair with each feature vector in that group, where the retrieved pre-registered key point feature vectors correspond to a group of pre-registered surface texture images among the at least part of the pre-registered surface texture images that were recalled. Then, from the key point vector pairs formed for all the key point feature vectors to be verified, the target key point vector pairs whose corresponding key points satisfy the geometric consistency constraint condition are screened out. Finally, it is determined whether, among the pre-registered surface texture images corresponding to the target key point vector pairs, there is an image that matches the surface texture image to be verified; if no pre-registered surface texture image matches the surface texture image to be verified, the article to be verified is judged not to be one of the N pre-registered articles, and if a pre-registered surface texture image matches the surface texture image to be verified, the article to be verified is the pre-registered article corresponding to that matched pre-registered surface texture image.
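The final decision step can be sketched as a vote over the geometrically consistent key point vector pairs, as below; the matching criterion (a minimum number of consistent pairs per pre-registered image) and its threshold are assumptions for illustration, not the patent's stated rule.

```python
# Fine-screening decision sketch: count, per pre-registered image, the target key
# point vector pairs that point into it, and declare a match only when the count
# reaches an (assumed) minimum number of consistent pairs.
from collections import Counter

def decide_match(target_keypoint_pairs, reg_image_of_keypoint, min_matches=20):
    """target_keypoint_pairs: (query_kp_idx, reg_kp_idx) pairs that survived the
    geometric consistency screening; reg_image_of_keypoint maps a pre-registered
    key-point index to its pre-registered surface texture image id."""
    votes = Counter(reg_image_of_keypoint[r] for _, r in target_keypoint_pairs)
    if not votes:
        return None
    best_image, count = votes.most_common(1)[0]
    return best_image if count >= min_matches else None  # None -> not a pre-registered item
```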
As described above, the pre-registration aggregate feature vector corresponding to the pre-registration surface texture image needs to be extracted in the registration stage, and the aggregate feature vector corresponding to the surface texture image needs to be extracted in the verification stage. The aggregate feature vector corresponding to any one surface texture image can be extracted based on the feature extraction model. The feature extraction model comprises a local feature extraction module and a global feature extraction module, wherein the local feature extraction module is used for identifying key points from an input image and extracting corresponding key point description information, and the global feature extraction module is used for generating corresponding aggregate feature vectors according to at least part of the key point description information extracted by the local feature extraction module. The model may be deployed with an image feature extraction Network for extracting image features, such as ResNet (Residual Network), VGG (Visual Geometry Group), inception (Google Inception Network, also called GoogLeNet), and the model may be pre-trained on a large number of texture datasets (such as STEX, KTH-TIPS2, brodatz, visTexture, kylbergTextureDataset, and the like) through a contrast learning method such as MoCo or SimCLR.
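The following PyTorch sketch shows one plausible shape for such a feature extraction model, with a ResNet backbone, a local head producing per-location descriptors and confidence scores, and a global module aggregating the confident descriptors into an aggregate feature vector; the architecture, dimensions and threshold are assumptions rather than the patent's actual network.

```python
# Schematic feature-extraction model: local module (descriptors + scores) and
# global module (confidence-weighted aggregation into one aggregate vector).
import torch
import torch.nn as nn
import torchvision.models as models

class TextureFeatureExtractor(nn.Module):
    def __init__(self, desc_dim: int = 128, score_threshold: float = 0.5):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # B x 512 x H' x W'
        self.desc_head = nn.Conv2d(512, desc_dim, kernel_size=1)        # per-location descriptor
        self.score_head = nn.Conv2d(512, 1, kernel_size=1)              # per-location confidence
        self.score_threshold = score_threshold

    def forward(self, x: torch.Tensor):
        fmap = self.features(x)
        desc = nn.functional.normalize(self.desc_head(fmap), dim=1)
        score = torch.sigmoid(self.score_head(fmap))
        # Global module: confidence-weighted aggregation of confident local descriptors.
        mask = (score >= self.score_threshold).float()
        weights = score * mask
        agg = (desc * weights).sum(dim=(2, 3)) / weights.sum(dim=(2, 3)).clamp(min=1e-6)
        return desc, score, nn.functional.normalize(agg, dim=1)

# Example: a batch containing one 256x256 sub-graph.
model = TextureFeatureExtractor()
local_desc, local_score, aggregate_vec = model(torch.randn(1, 3, 256, 256))
print(aggregate_vec.shape)  # torch.Size([1, 128])
```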
As can be seen from this embodiment, for the at least part of pre-registered surface texture images recalled in the coarse screening stage, in the fine screening stage the pre-registered key point position information and pre-registered key point feature vectors of the pre-registered key points identified in each pre-registered surface texture image are first acquired, together with the to-be-verified key point position information and to-be-verified key point feature vectors of the key points identified in the to-be-verified surface texture image. Then, for each acquired to-be-verified key point feature vector, a group of pre-registered key point feature vectors with the highest vector similarity to it is retrieved from the acquired pre-registered key point feature vectors and used to form key point vector pairs with that to-be-verified key point feature vector, and target key point vector pairs whose corresponding key points satisfy the geometric consistency constraint condition are screened out from all the key point vector pairs. In the comparison stage, it is determined whether an image matching the to-be-verified surface texture image exists among the pre-registered surface texture images corresponding to the target key point vector pairs, so as to complete the comparison.
The embodiments of this specification verify the to-be-verified article by comparing the to-be-verified surface texture image with the pre-registered surface texture images, and the reliability of the comparison result rests on the following general rule: a surface texture image acquired from an article reflects the microscopic texture of the article's surface, and for any two matching surface texture images and any two key points identified from those two images respectively, if the positions of the two key points in their corresponding images are the same (or close), then the local textures at the two key points are the same (or close). The scheme characterizes the local micro-texture at a key point through the key point feature vector corresponding to that key point, which can be understood as follows. Consider the to-be-verified key point feature vectors of the key points identified in the to-be-verified surface texture image and the pre-registered key point feature vectors of the key points identified in each pre-registered surface texture image. If the to-be-verified article is not a pre-registered article, then no pre-registered surface texture image matches the to-be-verified surface texture image, so with high probability the to-be-verified key points and the pre-registered key points do not satisfy the geometric consistency constraint condition (which constrains the relative positions of key points). If, on the other hand, the to-be-verified article is a certain pre-registered article (i.e., the two are the same article), then the pre-registered surface texture image acquired from that pre-registered article matches the to-be-verified surface texture image; however, because the key points identified in the to-be-verified surface texture image and those identified in the pre-registered surface texture image do not occupy exactly the same positions in their corresponding images, only part of the key point vector pairs have corresponding key points whose positions satisfy the above general rule. It can be seen that, among the at least part of pre-registered surface texture images recalled in the coarse screening stage, there may still be pre-registered surface texture images in which the key point positions do not satisfy the above general rule.
To this end, for the key point vectors of the at least part of pre-registered surface texture images recalled in the coarse screening stage, the scheme of this embodiment performs a finer round of screening in the fine screening stage using the vector similarity and geometric consistency constraint conditions, so that key point vector pairs whose corresponding key points do not satisfy the above general rule can be screened out, which further reduces the overall amount of computation in the subsequent comparison process. On the basis of the fine screening embodiment described above, this scheme can further reduce the computing power required of the verifier for the comparison process, and the comparison efficiency is higher.
As described above, when maintaining the pre-registered texture description information extracted from any pre-registered surface texture image, the server can maintain a mapping relationship between that information and the article description information of the article corresponding to the image. On this basis, when the to-be-verified article is determined to be one of the N pre-registered articles, the verifier device may output the article description information of the pre-registered article obtained according to the mapping relationship, so that the verifier user can accurately know which pre-registered article the to-be-verified article is, thereby realizing accurate identification and positioning of the article.
As can be seen from the foregoing embodiments, the N pre-registered articles can be registered based on the pre-registered surface texture images and the pre-registered texture description information through the cooperation of the article surface texture image acquisition hardware system, the server and the verifier device in the article verification system; and through the cooperation between the verifier device and the server, whether the to-be-verified article is the same article as one of the N registered articles, i.e., whether the to-be-verified article is one of the N pre-registered articles, can be verified based on the to-be-verified surface texture image and the to-be-verified texture description information. Since the texture description information extracted from the surface texture image of any article can reflect the micro-texture characteristics of the article's surface, the N pieces of pre-registered texture description information maintained by the server during registration respectively reflect the micro-texture characteristics of the surfaces of the N pre-registered articles, and the to-be-verified texture description information extracted during verification reflects the micro-texture characteristics of the surface of the to-be-verified article. The scheme therefore realizes verification of the to-be-verified article based on the micro-texture characteristics of the to-be-verified article and of the N pre-registered articles.
It can be understood that this scheme only requires image acquisition and relies on the microscopic texture characteristics of the article itself, so the article needs no special treatment and no anti-counterfeiting accessories such as two-dimensional codes or passive sensing labels need to be affixed to it, thereby avoiding affecting or even damaging the appearance of the article and helping to reduce the circulation cost of the article.
As an alternative embodiment, a method for determining pixel precision is also provided, which may be implemented as follows:
Fig. 11 is a schematic view of a scenario for identifying the authenticity of an article based on its texture characteristics, according to an exemplary embodiment. As shown in fig. 11, the scenario contains an article A to be verified, whose authenticity is in doubt, and a trusted article B, which can be understood as the genuine counterpart of the article to be verified. For example, the article A to be verified appears to be a cowhide bag, but the user suspects that it is made of a non-cowhide material; the trusted article B is then a genuine cowhide bag. To verify the authenticity of the article A to be verified, a surface texture image a of the article A and a surface texture image b of the trusted article B are acquired respectively. The surface texture features of the article A are then extracted from the surface texture image a, and the surface texture features of the trusted article B are extracted from the surface texture image b. Whether the surface texture features of the article A match those of the trusted article B is then compared: if they match, the article A to be verified is genuine; if they do not match, it is fake.
In the process of acquiring surface texture images of an article, pixel precision plays a critical role: it directly determines the accuracy and fineness of texture feature extraction. Taking tile P and tile Q as an example, fig. 12a is a schematic diagram of texture features extracted when the pixel precision is too large, provided by an exemplary embodiment. As shown in fig. 12a, when the pixel precision is too large, the surface texture images of tile P and tile Q acquired at that precision yield surface texture features that reflect only the macroscopic texture information of the tiles and cannot reflect the microscopic texture information of the articles. As a result, based on the macroscopic texture information shown in fig. 12a, the erroneous recognition result that tile P and tile Q are the same tile is obtained.
Fig. 12b is a schematic diagram of texture features extracted when the pixel precision is too small, provided by an exemplary embodiment. In fig. 12b, block 1201 represents a single pixel in the case where the pixel precision is too small. Clearly, when such a single pixel is used to collect texture feature information of tile P and tile Q, the effective texture feature information contained in the single pixel is too little, which again leads to the erroneous conclusion that tile P and tile Q are the same tile.
Fig. 12c is a schematic diagram of texture features extracted with appropriate pixel precision, provided by an exemplary embodiment. In fig. 12c, block 1201 represents a single pixel in the case where the pixel precision is too small, and block 1202 represents a single pixel in the case of appropriate pixel precision. Block 1202 contains more valid texture features than block 1201, on the basis of which tile P and tile Q can be distinguished as different tiles.
In summary, in a scenario of identifying the authenticity of an article based on its texture features, it is important to select a pixel precision suited to the article. In the related art, however, the pixel precision for acquiring surface texture images is often set based on experience, so a pixel precision suitable for each article cannot be selected accurately, and the accuracy of authenticity identification is consequently low. In view of this, the present specification proposes a method for determining pixel precision, which determines a pixel precision suitable for surface texture image acquisition for each article.
As an alternative embodiment, a method for determining pixel precision is also provided, and the process may include the following steps:
1202: For M target articles made of the same material, respectively acquire the surface texture images of each target article collected at each of N candidate pixel precisions, where M and N are positive integers greater than 1.
The surface texture image of an article may be used to reveal the microscopic or macroscopic geometric features of the article's surface (in other words, the surface texture image may contain the microscopic or macroscopic texture features of the article). Because articles made of different materials differ in their texture features, the surface texture image of an article needs to be acquired at a pixel precision matched to the article's material, so that accurate texture features can be extracted from it. Therefore, the present disclosure provides a method for determining, according to the material from which an article is made, a target pixel precision suitable for surface texture image acquisition of articles made of that material.
The target articles in this embodiment may be any articles, provided that the M target articles are made of the same material, for example litchi-grain leather, cowhide or ceramic tile. For each of the M target articles, the surface texture images collected at each of the N candidate pixel precisions are acquired. There are many ways to acquire the surface texture image of a target article; for example, a camera may be used to capture it directly. The pixel precision is determined by factors such as the focal length, angle of view and aperture of the camera lens, so for each of the N candidate pixel precisions, a camera lens matched to that candidate pixel precision can be selected, focusing performed with that lens, and, once focusing succeeds, the surface texture image of the target article captured, thereby acquiring the surface texture image at the corresponding candidate pixel precision. Obviously, this approach requires repeatedly changing lenses and refocusing when capturing images at different candidate pixel precisions.
In one embodiment, the surface texture image of each target article collected at the smallest of the N candidate pixel precisions may be acquired first. For example, a camera lens matched to the smallest candidate pixel precision may be used to capture the surface texture image of each target article (referred to as the "initial surface texture image"). The initial surface texture image of each target article is then downsampled to obtain that article's surface texture images corresponding to the remaining N-1 candidate pixel precisions. In the field of image processing, downsampling generally refers to obtaining a new image by reducing the number of pixels of an image; for a fixed imaged area, the pixel precision value of the downsampled image therefore increases, which is why the initial surface texture image of the target article needs to be acquired at the smallest candidate pixel precision. For example, suppose the candidate pixel precisions are 2 um/px, 5 um/px, 10 um/px and 20 um/px. In acquiring the surface texture images of the target articles, the initial surface texture images of the M target articles are first collected at the 2 um/px candidate pixel precision, and the initial surface texture image of each target article is then downsampled to obtain three surface texture images corresponding to the 5 um/px, 10 um/px and 20 um/px candidate pixel precisions respectively. Many algorithms can implement the downsampling, such as interpolation-based downsampling, hierarchical downsampling and approximation-based downsampling; interpolation algorithms may include, but are not limited to, bilinear interpolation, trilinear interpolation and nearest-neighbor interpolation, and this specification does not limit the specific downsampling algorithm.
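A minimal sketch of this derivation is shown below, assuming the initial image was captured at 2 um/px and using OpenCV bilinear resizing as one admissible downsampling choice; the function name and values are illustrative.

```python
# Derive coarser-precision surface texture images from the initial image
# captured at the smallest candidate pixel precision.
import cv2

def derive_precision_set(initial_image, base_precision_um=2.0,
                         target_precisions_um=(5.0, 10.0, 20.0)):
    images = {base_precision_um: initial_image}
    h, w = initial_image.shape[:2]
    for p in target_precisions_um:
        scale = base_precision_um / p          # e.g. 2/20 = 0.1 of the original size
        new_size = (max(1, int(w * scale)), max(1, int(h * scale)))
        # Bilinear interpolation is one admissible downsampling algorithm here.
        images[p] = cv2.resize(initial_image, new_size,
                               interpolation=cv2.INTER_LINEAR)
    return images
```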
With this surface texture image acquisition approach, only one lens is needed to capture the initial surface texture image, and all other surface texture images can be derived from it, so there is no need to repeatedly change lenses or refocus during the whole acquisition process, which greatly improves the efficiency of acquiring the surface texture images of the target articles. Moreover, downsampling preserves the main image features/image information of the initial surface texture image as far as possible, so the surface texture images obtained by downsampling remain valid images containing the articles' texture features, which is beneficial to the accuracy of the subsequent image similarity analysis.
In an embodiment, if the M target articles are print-like articles, after the surface texture images of the target articles are obtained, the surface texture images of each target article may be aligned at the macroscopic level, so as to prevent image deformation, tilt and the like from interfering with the subsequent similarity calculation.
1204: For each of the N candidate pixel precisions: perform similarity calculation on the surface texture images collected at that candidate pixel precision for every two of the M target articles to obtain a group of initial similarities, and perform similarity aggregation on the group of initial similarities to obtain the integrated similarity corresponding to that candidate pixel precision.
Through the image acquisition process of step 1202, there are M surface texture images at each candidate pixel precision. For each candidate pixel precision, similarity calculation is performed on every two of the surface texture images at that precision to obtain a group of initial similarities.
In an embodiment, the similarity calculation between every two surface texture images may be performed with an image quality evaluation algorithm, whose principle is to compute similarity over image quality evaluation indices of the images. The image quality evaluation indices may include, but are not limited to, image contrast, brightness, image structure information and peak signal-to-noise ratio. There are many image quality evaluation algorithms, such as PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index). The PSNR algorithm computes image similarity by comparing the peak signal-to-noise ratio of the images, while the SSIM algorithm computes image similarity by comparing the contrast, brightness and structural information of the images. Those skilled in the art can select an image quality evaluation algorithm according to actual requirements, and this specification is not limited in this respect. This similarity calculation approach requires no model training and its calculation process is simple and efficient, making it particularly suitable for scenarios in which image similarities are computed in batches.
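As a concrete illustration, the sketch below computes SSIM over every pair of same-precision grayscale images with scikit-image; it is a minimal example of the training-free route described above, not the embodiment's prescribed implementation.

```python
# Pairwise SSIM between the M surface texture images captured at one candidate
# pixel precision; SSIM is symmetric, so each unordered pair is computed once.
from itertools import combinations
from skimage.metrics import structural_similarity

def pairwise_ssim(images):
    """images: list of equally sized grayscale uint8 arrays at one precision."""
    return [structural_similarity(images[i], images[j], data_range=255)
            for i, j in combinations(range(len(images)), 2)]
```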
In addition, the similarity between every two surface texture images can also be calculated by comparing image feature vectors. For the M surface texture images at each candidate pixel precision, feature extraction is performed on each image to obtain its image features. This can be implemented with feature extraction algorithms such as the gray-level co-occurrence matrix (GLCM) or the local binary pattern (LBP), or with a pre-trained image feature extraction model. An image feature extraction network for extracting image features, such as ResNet (Residual Network), VGG (Visual Geometry Group) or Inception (also referred to as GoogLeNet), is deployed in the image feature extraction model; when such a model is used, its input is a surface texture image and its output is the image features of that image. Image features may include texture features, shape descriptors, edges, corner points and the like. The image features of each surface texture image are then consolidated into a vector representation to obtain the image feature vector corresponding to that image. The image feature vector amounts to a mathematical summary of the image features, simplifying complex image feature information into a data structure that can be operated on mathematically, which facilitates subsequent calculation. For example, the similarity of the image feature vectors of every two images may be calculated by cosine similarity, Euclidean distance or the like to obtain a group of initial similarities, which this specification does not limit. Because image features better reflect the high-level semantic information of an image, this similarity calculation approach can further improve the accuracy of the calculated initial similarities.
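The sketch below illustrates this feature-vector route with a classic LBP histogram standing in for the image feature extraction model (a learned network could be substituted), followed by cosine similarity; it is an example under these assumptions rather than the embodiment's required pipeline.

```python
# Embed each grayscale image as an LBP histogram and compare embeddings by
# cosine similarity.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_embedding(gray, points=8, radius=1):
    codes = local_binary_pattern(gray, points, radius, method="uniform")
    # "uniform" LBP codes take values 0 .. points + 1, hence points + 2 bins.
    hist, _ = np.histogram(codes, bins=points + 2,
                           range=(0, points + 2), density=True)
    return hist

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
```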
According to the permutation-and-combination principle, the group of initial similarities corresponding to each candidate pixel precision contains M(M-1) initial similarities. Similarity aggregation is then performed on the group of initial similarities corresponding to each candidate pixel precision to obtain the integrated similarity corresponding to that candidate pixel precision. The purpose of similarity aggregation is to consolidate the attributes or characteristics of the group of initial similarities corresponding to a candidate pixel precision, so that the aggregated integrated similarity more comprehensively and accurately reflects the overall degree of similarity of the surface texture images of the M target articles collected at that candidate pixel precision. There are many specific ways to aggregate similarities: for example, a weighted average of the group of initial similarities may be taken as the integrated similarity, or the maximum/minimum value in the group may be taken as the integrated similarity; this specification is not limited in this respect.
In one embodiment, the integrated similarity obtained by aggregating a group of initial similarities may be any of the following statistics: the mean of the group of initial similarities, the median, the maximum value, a weighted mean, or the minimum value. Taking the integrated similarity as the mean of a group of initial similarities as an example, its calculation principle can refer to formula (1):
$$\mathrm{Similarity}_p=\frac{\sum_{i=1}^{M}\sum_{j=1,\,j\neq i}^{M}\mathrm{sim}\!\left(I_i^{\,p},\,I_j^{\,p}\right)}{M(M-1)}\tag{1}$$

In formula (1), p denotes any one of the N candidate pixel precisions, Similarity_p denotes the integrated similarity corresponding to candidate pixel precision p, M denotes the number of target articles, i and j denote the i-th and j-th of the M target articles respectively, I_i^p denotes the surface texture image of the i-th target article collected at candidate pixel precision p, I_j^p denotes the surface texture image of the j-th target article collected at candidate pixel precision p, and sim(I_i^p, I_j^p) denotes the initial similarity obtained by performing similarity calculation on the surface texture images of the i-th and j-th target articles collected at candidate pixel precision p. The numerator of formula (1) is the sum of the group of initial similarities corresponding to candidate pixel precision p.
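Formula (1) translates directly into code; the sketch below averages all ordered pairwise similarities at one candidate pixel precision, with sim_fn standing for whichever initial-similarity measure (SSIM, cosine similarity of feature vectors, etc.) is chosen — the names are illustrative.

```python
# Integrated similarity at one candidate pixel precision: the mean of the
# M*(M-1) ordered pairwise initial similarities, as in formula (1).
def integrated_similarity(images, sim_fn):
    m = len(images)
    total = sum(sim_fn(images[i], images[j])
                for i in range(m) for j in range(m) if i != j)
    return total / (m * (m - 1))
```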
1206: Examine the rate of change of the integrated similarities corresponding to the N candidate pixel precisions in ascending order of pixel precision, and determine the candidate pixel precision at which the rate of change first changes abruptly as the target pixel precision suitable for surface texture image acquisition of articles made of the same material.
The change in the rate of change of the integrated similarities corresponding to the N candidate pixel precisions is analyzed in ascending order of pixel precision, and the candidate pixel precision at which the rate of change first changes abruptly is determined as the target pixel precision, which is suitable for surface texture image acquisition of articles made of the same material as the target articles. First, an abrupt change in the rate of change of similarity typically occurs only when the comparison is based on a large amount of valid texture feature information; if the collected surface texture images contain little valid texture feature information, the rate of change of similarity varies smoothly. Therefore, when the rate of change of the integrated similarity changes abruptly, it indicates that surface texture images collected at the candidate pixel precision after the abrupt change contain enough valid texture feature information, and in that case this information can serve as an identity mark for identifying the authenticity of an article. For example, if the rate of change of the integrated similarity changes abruptly as the candidate pixel precision goes from 5 um/px to 8 um/px, then 5 um/px is the candidate pixel precision before the abrupt change and 8 um/px is the candidate pixel precision after it. Second, the candidate pixel precision corresponding to the first abrupt change of the rate of change is selected as the target pixel precision because, with the pixel precisions arranged in ascending order, the precision corresponding to the first abrupt change is the smaller one. Collecting the surface texture image of an article at a smaller pixel precision allows the microscopic texture information of the article to be extracted from the image; compared with macroscopic texture information, microscopic texture information focuses more on the fine surface structure and details of the article and is more helpful in distinguishing different articles.
In the above embodiment, the surface texture images of M target articles made of the same material are collected at each of the N candidate pixel precisions, and the target pixel precision suited to those target articles is determined according to the similarities between the surface texture images, so that surface texture images collected at the target pixel precision contain enough surface texture feature information to effectively distinguish different target articles made of the same material. Using this surface texture feature information as the identity mark of an article can therefore greatly improve the accuracy of article authenticity identification.
In an embodiment, the analysis of how the rate of change of the integrated similarities corresponding to the N candidate pixel precisions varies, taken in ascending order of pixel precision, may be implemented with the K-means elbow method. The K-means elbow method is used to determine the optimal number of clusters for the K-means clustering algorithm; specifically, it identifies an inflection point by analyzing how the sum of squared errors (SSE), or the intra-cluster error, changes as the number of clusters increases. The inflection point indicates that the rate at which the error decreases slows down significantly, so the number of clusters corresponding to the inflection point can be regarded as the optimal number of clusters.
In this embodiment, when the K-means elbow method is used to analyze the change in the rate of change of the integrated similarities corresponding to the N candidate pixel precisions, the sum of squared errors of the integrated similarity corresponding to each candidate pixel precision is calculated first; the calculation principle can refer to the description of the K-means elbow method in the related art and is not repeated here. A curve of the sum of squared errors of the integrated similarity versus candidate pixel precision is then generated in ascending order of pixel precision. Fig. 13 is a schematic diagram of such a curve provided by an exemplary embodiment. As shown in fig. 13, the inflection point in the curve indicates that the rate of change of the sum of squared errors of the integrated similarity changes abruptly between 10 um/px and 20 um/px, where 10 um/px is the candidate pixel precision before the abrupt change and 20 um/px is the candidate pixel precision after it. Thus, the candidate pixel precision corresponding to the first inflection point in the curve shown in fig. 13 (i.e., 20 um/px) may be determined as the target pixel precision suitable for surface texture image acquisition of articles made of the same material as the target articles.
With this approach, by generating the curve of the sum of squared errors of the integrated similarity versus candidate pixel precision, the first abrupt change in the rate of change of the integrated similarity appears intuitively and clearly as the first inflection point of the curve. No complex computation is required, which reduces the computational load on the system while improving the efficiency and accuracy of determining the target pixel precision.
In an embodiment, the change in the rate of change of the integrated similarity may also be characterized by the second derivative of the integrated similarity. Specifically, the integrated similarities corresponding to the N candidate pixel precisions are arranged in ascending order to form an integrated similarity sequence, and the second derivative of this sequence is then obtained, giving a corresponding second-derivative sequence. Since the integrated similarity sequence is a numerical sequence, taking its second derivative in practice means taking the second-order difference of the discrete function corresponding to the sequence (i.e., the second-derivative sequence is in fact a second-order difference sequence). The first-order difference of the integrated similarity sequence may be computed first, each element of the first-order difference sequence being the difference between two adjacent integrated similarities; the difference operation is then applied again to the first-order difference sequence to obtain the second-order difference sequence, each element of which is the difference between two adjacent elements of the first-order difference sequence. For example, if the integrated similarity sequence is {0.10, 0.15, 0.50, 0.58, 0.60}, the first-order difference operation gives the first-order difference sequence {0.05, 0.35, 0.08, 0.02}, and applying the difference operation again gives the second-order difference sequence {0.30, -0.27, -0.06}. The candidate pixel precision corresponding to the first second derivative in the second-derivative sequence that exceeds a preset threshold is then determined as the target pixel precision.
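The worked example above maps onto two applications of numpy.diff; the threshold value and the way a second-difference index is mapped back to a candidate pixel precision are not fixed by the text, so both are marked as assumptions in the sketch.

```python
# Second-order difference of the integrated similarity sequence, following the
# worked example: {0.10, 0.15, 0.50, 0.58, 0.60}.
import numpy as np

precisions = [2, 5, 10, 20, 40]                 # um/px, ascending (illustrative)
similarities = np.array([0.10, 0.15, 0.50, 0.58, 0.60])

first_diff = np.diff(similarities)              # [0.05, 0.35, 0.08, 0.02]
second_diff = np.diff(first_diff)               # [0.30, -0.27, -0.06]

threshold = 0.2                                 # preset threshold (assumed value)
k = int(np.argmax(second_diff > threshold))     # first element above the threshold
# Assumption: second_diff[k] is driven by the jump into similarities[k + 2], so the
# corresponding candidate precision is read off as precisions[k + 2] here.
target_precision = precisions[k + 2]            # -> 10 um/px for these numbers
```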
Since the first derivative of the integrated similarity reflects its rate of change, and the second derivative represents the rate of change of the first derivative, the second derivative reflects how the rate of change of the integrated similarity itself changes. Therefore, when a second derivative of the integrated similarity is greater than the preset threshold, it indicates that the rate of change of the integrated similarity has changed abruptly, and the candidate pixel precision corresponding to the first second derivative exceeding the preset threshold can accordingly be determined as the target pixel precision. This approach uses the physical meaning of the second derivative (namely, that it reflects the change in the rate of change of the original numerical sequence) to determine the target pixel precision clearly and precisely, which is beneficial to the accuracy of the determined target pixel precision.
For an article made of any material, after a target pixel precision suitable for surface texture image acquisition of the article has been determined, surface texture features of the article may be extracted based on that target pixel precision. As an exemplary embodiment, there is also provided a method of extracting the surface texture features of an article, which may include the following steps:
And determining the target pixel precision according to the material from which the object to be processed is made, wherein the target pixel precision is determined according to the aforementioned method for determining pixel precision.
As mentioned above, since the texture features of articles made of different materials differ considerably, the surface texture image of an article needs to be acquired at a pixel precision matched to the article's material. For an object to be processed on which texture feature extraction is to be performed, the target pixel precision can therefore be determined according to the material from which it is made; the specific determination process may refer to the foregoing embodiments and is not repeated here. Of course, after the target pixel precision applicable to articles made of a given material has been determined according to the method of the foregoing embodiments, the association between that target pixel precision and the material may be stored in a database, so that whenever the texture features of an article made of the same material need to be analyzed, the target pixel precision associated with the material is retrieved directly from the database and the surface texture image of the article is acquired at that precision.
And acquiring a surface texture image of the object to be processed according to the target pixel precision.
And extracting the characteristics of the surface texture image of the object to be processed to obtain the surface texture characteristic information of the object to be processed.
The surface texture feature information of an article includes microscopic or macroscopic geometric structure characteristics of the article's surface, such as its material, arrangement pattern, roughness and directionality. There are various ways to extract the surface texture feature information from the surface texture image of the article, such as the gray-level co-occurrence matrix, the local binary pattern, texture spectrum analysis (e.g., the fast Fourier transform, FFT), or neural networks for feature extraction (e.g., convolutional neural networks, CNNs); those skilled in the art can select a suitable feature extraction approach according to actual requirements, and this specification is not limited in this respect.
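As one concrete option among those listed, the sketch below extracts a few GLCM statistics with scikit-image (version 0.19+ spells these functions graycomatrix/graycoprops); the chosen distances, angles and properties are illustrative.

```python
# GLCM-based texture descriptors for a grayscale uint8 surface texture image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture_features(gray_uint8):
    glcm = graycomatrix(gray_uint8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    # Average each property over the chosen distances and angles.
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```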
In the above embodiment, the surface texture image of the object to be processed is acquired at the target pixel precision matched to its material, so that the surface texture image contains sufficient, valid surface texture feature information, based on which the object to be processed can be effectively distinguished from other articles.
In addition, for scenarios such as article anti-counterfeiting, article identification and article tracing, this specification also provides a method for registering the surface texture features of an article and an article verification method. In such scenarios there are generally a registering party and a verifying party. The registering party can register the surface texture feature information of an article in a database after the article is produced, before delivery or before sale, so that this information serves as the article's identity mark and guarantees its authenticity. After receiving an article, the verifying party, if in doubt about its authenticity, can verify the surface texture feature information of the received article and identify its authenticity according to the verification result.
As an exemplary embodiment, there is also provided a method for registering the surface texture features of an article, which may include the following steps:
And acquiring the surface texture feature information of the object to be registered, the surface texture feature information being extracted according to the aforementioned method for extracting the surface texture features of an article.
The item to be registered may be any item that has been produced/has not been shipped/has not yet been sold.
And storing the surface texture characteristic information of the object to be registered into a database, so that the surface texture characteristic information is used as an authenticity verification standard corresponding to the object to be registered.
Storing the surface texture feature information of the object to be registered in the database completes the registration of that information. The surface texture feature information stored in the database can serve as the authenticity verification standard (i.e., the identity mark) corresponding to the article, enabling a verifying party to verify the article's authenticity.
As an exemplary embodiment, there is also provided a method for verifying the authenticity of an article, which may include the following steps:
And acquiring the surface texture feature information of the object to be verified, the surface texture feature information being extracted according to the aforementioned method for extracting the surface texture features of an article.
The item to be verified is an item whose authenticity is in doubt. When the verifying party verifies the authenticity of the object to be verified, the verifying party needs to acquire the surface texture feature information of the object to be verified, and specifically, the surface texture feature information of the object to be verified can be extracted according to the method of the foregoing embodiment, which is not described herein again.
And determining the authenticity of the to-be-verified object according to whether the surface texture characteristic information of the to-be-verified object is matched with the surface texture characteristic information of the pre-registered trusted object.
The verifying party may verify the authenticity of the object to be verified locally, or may upload its surface texture feature information to a cloud server so that the cloud server performs the verification. Whether verification is performed locally or in the cloud, the surface texture feature information of the object to be verified needs to be compared with that of a pre-registered trusted article during verification. A trusted article can be understood as the genuine counterpart of the object to be verified; its surface texture feature information can be registered in the database through the registration method described above, so that during verification it is retrieved from the database for comparison.
If the comparison result shows that the surface texture feature information of the object to be verified matches that of the trusted article, the object to be verified is judged to have passed verification, i.e., it is genuine. If the comparison result shows that the surface texture feature information of the object to be verified does not match that of the trusted article, the object to be verified is judged to have failed verification, i.e., it is a counterfeit.
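A minimal sketch of this decision step is given below, assuming the surface texture feature information has been reduced to fixed-length vectors; the cosine measure and the 0.85 threshold are illustrative assumptions, not values given in the text.

```python
# Compare the feature vector of the object to be verified with the registered
# trusted feature vector and apply a similarity threshold.
import numpy as np

def verify(feature_to_verify, registered_feature, threshold=0.85):
    sim = float(np.dot(feature_to_verify, registered_feature) /
                (np.linalg.norm(feature_to_verify) *
                 np.linalg.norm(registered_feature) + 1e-12))
    return sim >= threshold   # True -> genuine, False -> counterfeit
```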
The foregoing embodiments collect the surface texture image of an article at a target pixel precision suited to the article and extract from it accurate surface texture feature information that can distinguish different articles, so that this information serves as the article's identity mark and guarantees its authenticity. Because the surface texture feature information of an article is difficult to imitate, this approach can effectively distinguish genuine articles from counterfeits and greatly improves the anti-counterfeiting effect.
Based on the same conception as the above method, the present specification also provides an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to implement the steps of the method according to any of the embodiments described above by executing the executable instructions.
Based on the same conception as the above method, the present specification also provides a computer readable storage medium having stored thereon computer instructions which when executed by a processor perform the steps of the method according to any of the above embodiments.
Based on the same conception as the above method, the present specification also provides a computer program product comprising a computer program/instruction which, when executed by a processor, implements the steps of the method according to any of the embodiments described above.
Claims (17)
1. An article surface texture image acquisition system, comprising:
The height determining module is used for determining the target height according to the thickness of the target object to be subjected to image acquisition; the target height is the sum of the thickness of the target object and the initial height, and the initial height is the interval distance between the image acquisition equipment and the surface of the conveying device under the condition that the image acquisition equipment meets the shooting condition required by the target object and the surface of the conveying device is focused clearly;
The height adjustment control module is used for controlling the attitude control equipment to adjust the image acquisition equipment to the target height;
And the image acquisition control module is used for controlling the image acquisition equipment to acquire the image of the target object placed on the conveying device at the target height so as to obtain a surface texture image of the target object.
2. The system of claim 1, wherein the system further comprises:
The offset information determining module is used for determining a target area on the surface of the target object, which needs to be subjected to image acquisition, and offset information of the target area relative to a preset position on the conveying device, and determining horizontal movement information of the image acquisition equipment according to the target area and the offset information; the horizontal movement information comprises an x-axis maximum coordinate and/or a minimum coordinate of an occupied area of the target area under a target coordinate system, wherein the target coordinate system takes the preset position as an original point, takes the direction of the surface of the conveying device, which is perpendicular to the movement direction of the conveying device, as an x-axis and takes the movement direction of the conveying device as a y-axis;
and the horizontal adjustment control module is used for controlling the attitude control device to move the image acquisition device to the position indicated by the horizontal movement information so that the field angle of the image acquisition device covers the target area.
3. The system of claim 1, wherein the system further comprises:
The speed acquisition module is used for acquiring speed information of the conveying device for conveying the target object, which is detected by the speed detection equipment;
And the acquisition frequency control module is used for determining the image acquisition frequency according to the speed information and controlling the image acquisition equipment to acquire the surface texture image according to the image acquisition frequency.
4. The system of claim 1, wherein the system further comprises:
The delay shooting information determining module is used for determining delay shooting information under the condition that a trigger signal generated when the object detection equipment reaches a preset position on the target object on the conveying device is received; the time-delay shooting information comprises shooting time delay and shooting time length, wherein the shooting time delay is calculated according to the interval distance between an image acquisition area of the image acquisition equipment and the preset position and the moving speed of the conveying device, and the shooting time length is calculated according to the moving direction length of the target object and the moving speed;
And the delay shooting control module is used for controlling the image acquisition equipment to carry out delay shooting according to the shooting delay and shooting duration indicated by the delay shooting information.
5. The system of claim 1, wherein the system further comprises:
the initial module is used for determining target pixel precision required by image acquisition under the condition that the shooting condition comprises pixel precision, determining the initial height according to the target pixel precision and equipment parameters of the image acquisition equipment, controlling the attitude control equipment to adjust the image acquisition equipment to the initial height, and acquiring adjustment completion information generated by the attitude control equipment so as to trigger and control the image acquisition equipment to focus.
6. The system of claim 1, wherein the system further comprises:
And the storage control module is used for storing the surface texture image into a storage device after acquiring the surface texture image from the image acquisition device.
7. The system of claim 1, wherein the system further comprises:
And the illumination control module is used for controlling the illumination equipment to carry out lateral light source illumination on the target object under the condition that the preset photographing requirement is determined to be met.
8. The method for acquiring the texture image of the surface of the article is characterized by comprising the following steps of:
Determining the target height according to the thickness of the target object to be subjected to image acquisition; the target height is the sum of the thickness of the target object and the initial height, and the initial height is the interval distance between the image acquisition equipment and the surface of the conveying device under the condition that the image acquisition equipment meets the shooting condition required by the target object and the surface of the conveying device is focused clearly;
Controlling an attitude control device to adjust the image acquisition device to a target height;
and controlling the image acquisition equipment to acquire images of the target object placed on the conveying device at the target height so as to obtain a surface texture image of the target object.
9. The method of claim 8, wherein the method further comprises:
Determining a target area of the surface of the target object, which needs to be subjected to image acquisition, and offset information of the target area relative to a preset position on the conveying device, and determining horizontal movement information of the image acquisition equipment according to the target area and the offset information; the horizontal movement information comprises an x-axis maximum coordinate and/or a minimum coordinate of an occupied area of the target area under a target coordinate system, wherein the target coordinate system takes the preset position as an original point, takes the direction of the surface of the conveying device, which is perpendicular to the movement direction of the conveying device, as an x-axis and takes the movement direction of the conveying device as a y-axis;
and controlling the attitude control device to move the image acquisition device to the position indicated by the horizontal movement information so that the field angle of the image acquisition device covers the target area.
10. The method of claim 8, wherein the method further comprises:
acquiring speed information of the conveying device for conveying the target object, which is detected by a speed detection device;
And determining the image acquisition frequency according to the speed information, and controlling the image acquisition equipment to acquire the surface texture image according to the image acquisition frequency.
11. The method of claim 8, wherein the method further comprises:
Under the condition that a trigger signal generated by object detection equipment when a target object on the conveying device reaches a preset position is received, delay shooting information is determined; the time-delay shooting information comprises shooting time delay and shooting time length, wherein the shooting time delay is calculated according to the interval distance between an image acquisition area of the image acquisition equipment and the preset position and the moving speed of the conveying device, and the shooting time length is calculated according to the moving direction length of the target object and the moving speed;
and controlling the image acquisition equipment to carry out time-delay shooting according to the shooting time delay and the shooting time length indicated by the time-delay shooting information.
12. The method of claim 8, wherein the method further comprises:
And under the condition that the shooting condition comprises pixel precision, determining target pixel precision required by image acquisition, determining the initial height according to the target pixel precision and equipment parameters of the image acquisition equipment, controlling the attitude control equipment to adjust the image acquisition equipment to the initial height, and acquiring adjustment completion information generated by the attitude control equipment so as to trigger and control the image acquisition equipment to focus.
13. The method of claim 8, wherein the method further comprises:
after the surface texture image from the image acquisition device is acquired, the surface texture image is stored into a storage device.
14. The method of claim 8, wherein the method further comprises:
And under the condition that the preset photographing requirement is met, controlling the illumination equipment to conduct lateral light source illumination on the target object.
15. An electronic device, comprising: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to implement the steps of the method according to any of claims 8 to 14 by executing the executable instructions.
16. A computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method of any of claims 8 to 14.
17. A computer program product comprising computer programs/instructions which, when executed by a processor, implement the steps of the method of any of claims 8 to 14.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410621197.4A CN118574003A (en) | 2024-05-17 | 2024-05-17 | Article surface texture image acquisition system and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410621197.4A CN118574003A (en) | 2024-05-17 | 2024-05-17 | Article surface texture image acquisition system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118574003A true CN118574003A (en) | 2024-08-30 |
Family
ID=92466279
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410621197.4A Pending CN118574003A (en) | 2024-05-17 | 2024-05-17 | Article surface texture image acquisition system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118574003A (en) |
-
2024
- 2024-05-17 CN CN202410621197.4A patent/CN118574003A/en active Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11062118B2 (en) | Model-based digital fingerprinting | |
JP6708973B2 (en) | Judgment method, judgment system, judgment device, and program thereof | |
US11023762B2 (en) | Independently processing plurality of regions of interest | |
Raghavendra et al. | Exploring the usefulness of light field cameras for biometrics: An empirical study on face and iris recognition | |
WO2016086343A1 (en) | System and method for personal identification based on multimodal biometric information | |
Takahashi et al. | Fibar: Fingerprint imaging by binary angular reflection for individual identification of metal parts | |
WO2016010720A1 (en) | Multispectral eye analysis for identity authentication | |
JP2013522754A (en) | Iris recognition apparatus and method using a plurality of iris templates | |
WO2016086341A1 (en) | System and method for acquiring multimodal biometric information | |
US11341348B2 (en) | Hand biometrics system and method using digital fingerprints | |
Wang et al. | Fit-sphere unwrapping and performance analysis of 3D fingerprints | |
JP6810392B2 (en) | Individual identification device | |
CN112232163B (en) | Fingerprint acquisition method and device, fingerprint comparison method and device, and equipment | |
CN112016525A (en) | Non-contact fingerprint acquisition method and device | |
CN111191644B (en) | Identity recognition method, system and device | |
CN112232155A (en) | Non-contact fingerprint identification method and device, terminal and storage medium | |
US11080511B2 (en) | Contactless rolled fingerprints | |
CN112232159B (en) | Fingerprint identification method, device, terminal and storage medium | |
US11164337B2 (en) | Autocalibration for multiple cameras using near-infrared illuminators | |
US11450140B2 (en) | Independently processing plurality of regions of interest | |
CN112232157B (en) | Fingerprint area detection method, device, equipment and storage medium | |
Noh et al. | Empirical study on touchless fingerprint recognition using a phone camera | |
Di Martino et al. | Liveness detection using implicit 3D features | |
CN118574003A (en) | Article surface texture image acquisition system and method | |
CN118537586A (en) | Article verification method and system based on surface texture image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |