CN119090568A - Store authenticity verification method, device, storage medium and computer equipment - Google Patents
- Publication number
- CN119090568A (application CN202410902025.4A)
- Authority
- CN
- China
- Prior art keywords
- detection
- image
- store
- target image
- authenticity verification
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0282—Rating or review of business operators or products
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
Abstract
The application discloses a store authenticity verification method, device, storage medium, and computer equipment. The method comprises: obtaining a target image of a store and first store information; performing detection processing for a plurality of image detection items on the target image to obtain a detection result for each image detection item, wherein the image detection items comprise at least one of shop front detection, moire detection, screenshot detection, or watermark detection; identifying second store information in the target image when the detection results of the image detection items meet preset conditions; and determining an authenticity verification result of the store according to the similarity between the first store information and the second store information. The method reduces manual intervention in the review process, lowers labor and time costs, and greatly improves both the efficiency and the accuracy of merchant store authenticity verification.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and apparatus for verifying authenticity of a store, a storage medium, and a computer device.
Background
Merchant onboarding review is an important component of a third-party payment institution's risk control system, and is essential for building a healthy, safe, and efficient payment ecosystem. During merchant onboarding review, a great deal of time and labor is consumed distinguishing the authenticity of store photos provided by users, and the accuracy of store photo review is low.
Disclosure of Invention
In view of the above, the application provides a method, a device, a storage medium and computer equipment for verifying the authenticity of a store, which realize automatic and intelligent authenticity verification of store information submitted by a user.
According to an aspect of the present application, there is provided an authenticity verification method of a store, including:
acquiring a target image of a store and first store information;
Performing detection processing of an image detection item on the target image to obtain a detection result of the image detection item, wherein the image detection item comprises at least one of shop front detection, moire detection, screenshot detection or watermark detection;
identifying second shop information in the target image under the condition that the detection result of the image detection item meets a preset condition;
And determining an authenticity verification result of the store according to the similarity between the first store information and the second store information.
Optionally, the image detection item includes shop front detection, and the detecting process of the image detection item on the target image specifically includes:
performing mirror inversion processing on the target image to obtain a flipped image;
performing text recognition processing on the target image and the flipped image respectively, and determining the confidence of a first text in the target image and the confidence of a second text in the flipped image;
replacing the target image with the flipped image when the confidence of the first text is smaller than the confidence of the second text;
inputting the target image into a door head detection model to obtain a detection result indicating whether the target image contains a shop door head, wherein the door head detection model is trained on a first sample image and door head labels, and the first sample image comprises shop door head images.
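The orientation choice described in the steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the OCR engine producing the per-text-box confidences is assumed and not shown, and `choose_orientation` is a hypothetical helper name.

```python
def choose_orientation(orig_confidences, flipped_confidences):
    """Pick the image orientation whose recognized text is more plausible.

    Each argument is a list of per-text-box OCR confidence scores for the
    original image and its mirror-flipped copy, respectively.
    Returns "original" or "flipped".
    """
    def mean(xs):
        return sum(xs) / len(xs) if xs else 0.0

    # Mirror the claim's comparison: keep the original unless the flipped
    # copy reads with strictly higher average confidence.
    if mean(orig_confidences) < mean(flipped_confidences):
        return "flipped"
    return "original"
```

The image that wins this comparison is then forwarded to the door head detection model.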
Optionally, the image detection item includes moire detection, and the detecting process of the image detection item on the target image specifically includes:
acquiring a moire image and a shop template image;
extracting moire features contained in the moire image;
performing perspective transformation processing on the shop template image according to the moire features to form a second sample image and edge mapping of the moire features in the second sample image;
training a moire recognition model according to the second sample image and the edge mapping;
inputting the target image into a moire recognition model to obtain the probability that the target image contains moire;
And determining the detection result of the moire detection according to the probability.
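The perspective transformation step above warps the shop template image according to the extracted moire features. As a minimal sketch of its geometric core (the homography matrix and how it is estimated from the moire features are assumed, not specified by the patent):

```python
def perspective_transform(points, h):
    """Apply a 3x3 homography matrix `h` to a list of 2D points.

    This is the point-mapping at the heart of a perspective transform:
    (x, y) -> ((h00*x + h01*y + h02) / w, (h10*x + h11*y + h12) / w)
    where w = h20*x + h21*y + h22.
    """
    out = []
    for x, y in points:
        w = h[2][0] * x + h[2][1] * y + h[2][2]
        out.append((
            (h[0][0] * x + h[0][1] * y + h[0][2]) / w,
            (h[1][0] * x + h[1][1] * y + h[1][2]) / w,
        ))
    return out
```

In practice a library routine (e.g. an OpenCV-style `warpPerspective`) would warp the whole template image rather than individual points; the snippet only illustrates the mapping itself.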
Optionally, the image detection item includes screenshot detection, and the detecting process of the image detection item on the target image specifically includes:
reading image file format information associated with the target image, and determining a detection result of the screenshot detection according to a reading completion state of the image file format information, wherein the reading completion state comprises reading success or reading failure, and the image file format information comprises at least one of shooting time, device model, focal length parameter, or aperture parameter.
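The idea behind this check is that camera photos normally carry capture metadata while screenshots do not. A hedged sketch (the step of reading EXIF fields from the file, e.g. via an EXIF library, is assumed; here the fields arrive as a plain dict, and the field names are illustrative EXIF tag names):

```python
def screenshot_check_from_metadata(exif):
    """Decide the screenshot-detection result from image metadata.

    `exif` is a dict of EXIF-style fields already read from the file.
    If none of the capture-related fields could be read, the read is
    treated as failed and a screenshot is suspected.
    """
    capture_fields = ("DateTimeOriginal", "Model", "FocalLength", "FNumber")
    present = [f for f in capture_fields if exif.get(f)]
    return "screenshot_suspected" if not present else "camera_photo"
```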
Optionally, the image detection item includes screenshot detection, and the detecting process of the image detection item on the target image specifically includes:
And determining a detection result of the screenshot detection according to the size relation between the resolution of the target image and the preset screenshot resolution.
Optionally, the image detection item includes screenshot detection, and the detecting process of the image detection item on the target image specifically includes:
calculating an aspect ratio similarity between the aspect ratio of the target image and a plurality of preset aspect ratios by using a least common multiple method, and determining a detection result of the screenshot detection according to the relation between the calculated similarity and a preset similarity threshold.
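One way to read this claim: reduce the image's width:height by their greatest common divisor and compare against a preset list of device screen ratios. The sketch below makes assumptions the patent does not disclose — the preset ratio list, the tolerance value, and the use of gcd reduction are all illustrative.

```python
from math import gcd

# Illustrative preset screen ratios; the patent's actual list is not given.
PRESET_RATIOS = [(16, 9), (18, 9), (4, 3), (2, 1)]

def reduced_ratio(width, height):
    """Reduce width:height by their gcd, e.g. 1920x1080 -> (16, 9)."""
    g = gcd(width, height)
    return width // g, height // g

def looks_like_screenshot(width, height, tolerance=0.02):
    """Flag the image if its aspect ratio is within `tolerance` of any
    preset screen ratio (a common heuristic for device screenshots)."""
    ratio = width / height
    return any(abs(ratio - w / h) <= tolerance for w, h in PRESET_RATIOS)
```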
Optionally, the image detection item includes screenshot detection, and the detecting process of the image detection item on the target image specifically includes:
inputting the target image into an interface element identification model to obtain interface element characteristics in the target image, and determining a detection result of screenshot detection according to the interface element characteristics, wherein the interface element identification model is obtained by training according to a third sample image and an interface element label, and the interface element characteristics comprise information prompt bar characteristics, application program identification characteristics and screen boundary characteristics.
Optionally, before the inputting the target image into the interface element recognition model, the method further includes:
inputting the third sample image into a feature extraction model to obtain a global feature vector and a local feature vector of the third sample image, wherein the feature extraction model comprises a global feature extraction channel and a local feature extraction channel;
determining a channel attention weight according to the global feature vector and the local feature vector;
and updating the third sample image according to the channel attention weight.
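The fusion that turns the global and local feature vectors into a channel attention weight is not spelled out in the claims. A common softmax-gating sketch, offered purely as one plausible reading (the additive fusion and the softmax are assumptions, not the patent's method):

```python
from math import exp

def channel_attention_weights(global_vec, local_vec):
    """Derive per-channel attention weights from a global and a local
    feature vector of equal length.

    The two vectors are fused additively per channel, then normalized
    with a softmax so the weights sum to 1.
    """
    fused = [g + l for g, l in zip(global_vec, local_vec)]
    exps = [exp(v) for v in fused]
    total = sum(exps)
    return [e / total for e in exps]
```

The resulting weights would then reweight the third sample image's channels before it is fed to the interface element recognition model.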
Optionally, the image detection item includes watermark detection, and the detecting process of the image detection item on the target image specifically includes:
Inputting the target image into a watermark detection model to obtain a detection result that the target image contains a watermark, wherein the watermark detection model is obtained by training according to a fourth sample image and a watermark label, and the fourth sample image comprises a watermark image.
Optionally, the determining the authenticity verification result of the store according to the similarity between the first store information and the second store information specifically includes:
Determining that the authenticity verification result is passed when the similarity between the first store information and the second store information is greater than a similarity threshold;
transmitting the first store information to a review node when the similarity between the first store information and the second store information is less than or equal to a similarity threshold;
determining that the authenticity verification result is passed in response to a confirmation instruction fed back by the review node;
and determining that the authenticity verification result is not passed when no confirmation instruction from the review node is received within a preset period.
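The decision branches above can be sketched as a single function. The threshold value is illustrative, and `reviewer_confirmed=None` stands in for "no confirmation instruction received within the preset period" (the actual asynchronous review-node messaging is assumed and not shown):

```python
def verdict(similarity, threshold=0.8, reviewer_confirmed=None):
    """Combine the similarity comparison with the manual review branch.

    similarity > threshold        -> pass automatically
    otherwise, reviewer confirms  -> pass
    otherwise (timeout / refusal) -> fail
    """
    if similarity > threshold:
        return "pass"
    # Below threshold: the first store information goes to the review node.
    return "pass" if reviewer_confirmed else "fail"
```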
Optionally, the method further comprises:
and determining that the authenticity verification result is not passed when the detection result of any image detection item does not meet the preset condition.
According to another aspect of the present application, there is provided an authenticity verification device for a store, comprising:
the acquisition module is used for acquiring target images of the stores and first store information;
The image verification module is used for carrying out detection processing of an image detection item on the target image to obtain a detection result of the image detection item, wherein the image detection item comprises at least one of shop front detection, moire detection, screenshot detection or watermark detection;
The identification module is used for identifying second shop information in the target image under the condition that the detection result of the image detection item meets the preset condition;
And the information verification module is used for determining an authenticity verification result of the store according to the similarity between the first store information and the second store information.
Optionally, the image detection item includes shop front detection, and the apparatus further includes:
the image flipping module is configured to perform mirror inversion processing on the target image to obtain a flipped image;
the recognition module is further configured to perform text recognition processing on the target image and the flipped image respectively, and to determine the confidence of a first text in the target image and the confidence of a second text in the flipped image;
the image verification module is specifically configured to replace the target image with the flipped image when the confidence of the first text is smaller than that of the second text, and to input the target image into a door head detection model to obtain a detection result indicating whether the target image contains a shop door head, wherein the door head detection model is trained on a first sample image and door head labels, and the first sample image comprises shop door head images.
Optionally, the image detection item includes moire detection, and the apparatus further includes:
a first feature processing module, configured to extract moire features contained in the moire image, and to perform perspective transformation processing on the shop template image according to the moire features to form a second sample image and an edge mapping of the moire features in the second sample image;
the training module is used for training a moire recognition model according to the second sample image and the edge mapping;
the image verification module is specifically used for acquiring a moire image and a shop template image, inputting the target image into a moire recognition model to obtain the probability that the target image contains moire, and determining the detection result of moire detection according to the probability.
Optionally, the image detection item includes screenshot detection, and the image verification module is specifically configured to read image file format information associated with the target image and determine a detection result of the screenshot detection according to a reading completion state of the image file format information, wherein the reading completion state comprises reading success or reading failure, and the image file format information comprises at least one of shooting time, device model, focal length parameter, or aperture parameter.
Optionally, the image detection item includes screenshot detection, and the image verification module is specifically configured to determine a detection result of the screenshot detection according to a size relationship between a resolution of the target image and a preset screenshot resolution.
Optionally, the image detection item includes screenshot detection, and the image verification module is specifically configured to calculate an aspect ratio similarity between the aspect ratio of the target image and a plurality of preset aspect ratios by using a least common multiple method, and to determine a detection result of the screenshot detection according to the relation between the calculated similarity and a preset similarity threshold.
Optionally, the image detection item includes screenshot detection, and the image verification module is specifically configured to input the target image into an interface element identification model to obtain an interface element feature in the target image, and determine a detection result of the screenshot detection according to the interface element feature, where the interface element identification model is obtained by training according to a third sample image and an interface element tag, and the interface element feature includes an information prompt bar feature, an application identification feature and a screen boundary feature.
Optionally, the apparatus further comprises:
The second feature processing module is used for inputting the third sample image into a feature extraction model to obtain a global feature vector and a local feature vector of the third sample image, wherein the feature extraction model comprises a global feature extraction channel and a local feature extraction channel;
And the updating module is used for updating the third sample image according to the channel attention weight.
Optionally, the image detection item includes watermark detection, and the image verification module is specifically configured to input the target image into a watermark detection model to obtain a detection result that the target image includes a watermark, where the watermark detection model is obtained by training according to a fourth sample image and a watermark label, and the fourth sample image includes a watermark image.
Optionally, the information verification module is specifically configured to: determine that the authenticity verification result is passed when the similarity between the first store information and the second store information is greater than a similarity threshold; send the first store information to a review node when the similarity between the first store information and the second store information is less than or equal to the similarity threshold; determine that the authenticity verification result is passed in response to a confirmation instruction fed back by the review node; determine that the authenticity verification result is not passed when no confirmation instruction from the review node is received within a preset period; and determine that the authenticity verification result is not passed when the detection result of any image detection item does not meet the preset condition.
According to still another aspect of the present application, there is provided a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the above-described store authenticity verification method.
According to still another aspect of the present application, there is provided a computer device comprising a storage medium, a processor and a computer program stored on the storage medium and executable on the processor, the processor implementing the steps of the above-described store authenticity verification method when executing the program.
By means of the technical scheme, when the authenticity of store information submitted by a user needs to be verified, the system acquires the target image of the store submitted by the user and the first store information related to the store. The system respectively verifies the authenticity of the target image from multiple aspects such as whether the target image is a store door head image, whether the target image is a live-action shot, whether the target image is a device screenshot, and whether the target image is a transfer image through at least one detection item of store door head detection, moire detection, screenshot detection and watermark detection. After the detection result of the image detection item meets the preset condition, namely the verification is passed, the system can judge that the target image is a shop live-action image shot by the user. At this time, text recognition may be performed on the target image to obtain second store information exhibited by the target image. And comparing the second shop information in the image with the first shop information submitted by the user, if the comparison is passed, determining that the user is a real merchant, and executing a subsequent flow. Therefore, a double authentication mechanism of the images and the information is realized through an automatic shop image verification and information verification process. On one hand, manual intervention in the auditing process is reduced, the labor cost and the time cost are reduced, and the authenticity verification efficiency of the merchant store is greatly improved. On the other hand, the intelligent authenticity verification is carried out from multiple aspects by combining multiple detection technologies, so that the merchant authenticity verification accuracy is greatly improved.
The foregoing is merely an overview of the technical solution of the present application. To allow the technical means of the application to be understood more clearly and implemented according to the contents of the specification, and to make the above and other objects, features, and advantages of the application more readily apparent, specific embodiments of the application are set forth below.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
fig. 1 is a schematic flow chart of a method for verifying authenticity of a store according to an embodiment of the present application;
fig. 2 shows a block diagram of a store authenticity verification device according to an embodiment of the present application.
Detailed Description
The application will be described in detail hereinafter with reference to the drawings in conjunction with embodiments. It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other.
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the application.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless expressly stated otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wireless connection or wireless coupling. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
Exemplary embodiments according to the present application will now be described in more detail with reference to the accompanying drawings. These exemplary embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. It should be appreciated that these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of these exemplary embodiments to those skilled in the art.
In this embodiment, there is provided a method for verifying authenticity of a store, as shown in fig. 1, the method including:
step 101, acquiring a target image of a store and first store information;
The first store information is store-related information submitted by the user, such as the store name, address, and contact details.
The store authenticity verification method provided by the embodiment of the application can be applied to a terminal, a server, or software running in a terminal or server. In some embodiments, the terminal may be a smart phone, tablet computer, notebook computer, desktop computer, or the like. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, big data, and artificial intelligence platforms. The software may be an application implementing the store authenticity verification method, but is not limited to the above forms.
Step 102, performing detection processing of image detection items on the target image to obtain detection results of the image detection items;
The image detection items comprise at least one of shop front detection, moire detection, screenshot detection, or watermark detection. Shop front detection detects whether the target image is a shop front image containing store information. Moire detection detects whether the target image contains moire patterns that affect clarity, so as to judge whether the target image was obtained by photographing a device screen and whether it is clear. Screenshot detection detects whether the target image is a device screenshot. Watermark detection detects whether the target image contains a watermark, so as to judge whether the target image is a reposted image.
In this embodiment, when the authenticity of store information submitted by a user needs to be verified, the system obtains the target image of the store submitted by the user and the first store information related to the store. Through at least one of shop front detection, moire detection, screenshot detection, and watermark detection, the system verifies the authenticity of the target image from multiple aspects: whether it is a shop front image, whether it is a real-scene shot, whether it is a device screenshot, and whether it is a reposted image. If the detection results of the image detection items meet the preset conditions, the target image is judged to be a real-scene store image shot by the user. Automatic image detection removes the workload of manual review, and combining multiple detection technologies to verify authenticity intelligently from multiple aspects can greatly improve merchant authenticity verification accuracy.
In a practical application scenario, for shop front detection, the detection processing on the target image specifically comprises: performing mirror inversion processing on the target image to obtain a flipped image; performing text recognition processing on the target image and the flipped image respectively, and determining the confidence of a first text in the target image and the confidence of a second text in the flipped image; replacing the target image with the flipped image when the confidence of the first text is smaller than the confidence of the second text; and inputting the target image into a door head detection model to obtain a detection result indicating whether the target image contains a shop door head.
The door head detection model is obtained through training according to a first sample image and a door head label, and the first sample image comprises a shop door head image. For example, the door head detection model is a model trained based on YOLOv a target detection algorithm. Specifically, a dataset comprising real scene picture data, network portal photograph data and open source data is collected, an input picture is uniformly scaled to a uniform size, random clipping, overturning, rotation, translation, brightness, contrast, deduplication, mixup data enhancement, mosaic data enhancement and the like are used for data enhancement pretreatment, an annotation tool is used for manually annotating the dataset after pretreatment, wherein the annotation comprises an image and corresponding annotation, the annotation designates the position and the category of an object in the picture, and the annotation uses a YOLO format (class, x, y, w, h). Dividing the marked data into a training set, a verification set and a test set according to a set proportion so as to construct a training data set. Training is carried out by using YOLOv, training set image data and corresponding labels are input into an initial model, the model predicts a boundary box and class probability, loss is calculated according to prediction and labels, and the weight of the model is updated in a back propagation mode so as to minimize the loss until the set number of training iterations is reached. Different learning rates, optimization algorithms, and other super parameters are tried to ensure that the model converges effectively, fine tuning and optimization of the model. And then, the performance of the trained model on the independent verification data set is performed so as to avoid over fitting, and the model is continuously optimized according to the verification result. 
After model training is completed, the performance of the model is tested using the test set; whether the model needs further optimization is judged according to the mAP (mean average precision) of the test results, and if the mAP meets the requirement, the door head detection model is output for use.
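The train/verification/test division "according to a set proportion" mentioned above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the 8:1:1 ratio and fixed seed are assumptions for reproducibility.

```python
import random

def split_dataset(samples, ratios=(0.8, 0.1, 0.1), seed=42):
    """Shuffle annotated samples and split them into train/val/test
    subsets according to the set proportion."""
    assert abs(sum(ratios) - 1.0) < 1e-9, "ratios must sum to 1"
    items = list(samples)
    random.Random(seed).shuffle(items)   # deterministic shuffle for reproducibility
    n = len(items)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test
```

For example, 100 annotated samples would be divided into 80 training, 10 verification and 10 test samples.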
In this embodiment, the confidence of the text in the same text box is compared between the target image and the flipped image, determining which of the two viewing angles better conforms to text semantics and word order, so as to determine whether the target image is a mirror image. When the confidence of the first text is greater than or equal to that of the second text, the target image reads more confidently; the target image submitted by the user can be judged to be non-mirrored, and the subsequent door head detection can proceed directly. Otherwise, when the confidence of the first text is smaller than that of the second text, the text obtained after flipping reads more confidently and better conforms to natural language, so the target image submitted by the user can be judged to be a mirror image; at this point the flipped image replaces the target image, and door head detection is performed on the updated target image. In this way, the mirror-image problem is discovered in time, text recognition errors caused by image orientation are eliminated using the flipping result, and accurate region information is provided for subsequent analysis and processing. Even if the user submits a mirrored picture, the verification system can correct it, which relaxes the requirements on the target image submitted by the user and facilitates user operation.
Taking a merchant network-access verification scenario as an example: merchants generally shoot with a mobile phone, and during shooting, mirrored pictures are often produced because the front camera or certain configurations are used. Since many characters become non-existent characters or other characters after flipping, the recognized text can deviate seriously from the actual meaning. For this purpose, after reading the store picture (origin_image) submitted by the merchant, a copy may be made and flipped to obtain a flipped picture (flip_image). Text box detection, text recognition within each detected box, and calculation of the average recognition confidence over all text boxes are then carried out in parallel on both the store picture and the flipped picture. For example, the first texts "electric car" and "123456789" are recognized in the store picture with confidences 0.9991 and 0.9043 respectively, and the second texts "east electric" and "ertyufgvh" are recognized in the flipped picture with confidences 0.3077 and 0.2268 respectively. The confidence average of all first texts is 0.9517 and the confidence average of all second texts is 0.26725. Comparing the two averages, the confidence average of the first texts is larger, indicating that the original store picture is not mirrored, and the original picture is sent to the door head detection model. Whether the current picture contains a door head is then judged based on the information output by the door head detection model.
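The orientation decision above reduces to comparing two confidence averages. A minimal sketch (the OCR engine itself is assumed and only its (text, confidence) output is modeled here):

```python
def mean_confidence(recognitions):
    """Average OCR confidence over all detected text boxes.
    `recognitions` is a list of (text, confidence) pairs."""
    return sum(conf for _, conf in recognitions) / len(recognitions)

def pick_orientation(orig_recognitions, flipped_recognitions):
    """Return 'flipped' only when the flipped copy reads more
    confidently, i.e. the submitted picture is a mirror image;
    ties favor the original picture."""
    a = mean_confidence(orig_recognitions)
    b = mean_confidence(flipped_recognitions)
    return "original" if a >= b else "flipped"
```

With the example values from the text (0.9991/0.9043 versus 0.3077/0.2268), the averages are 0.9517 and 0.26725, so the original picture is kept and sent on to door head detection.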
Specifically, if a door head exists and the confidence is greater than a preset threshold, the current picture is considered to be a door head picture; otherwise, the current picture is considered not to be a door head picture, the relevant merchant information is reported, business staff perform a recheck, the result is returned to the initiator after confirmation, and the processing terminates.
Aiming at moire detection, the method comprises the steps of: obtaining a moire image and a shop template image; extracting moire features contained in the moire image; performing perspective transformation processing on the shop template image according to the moire features to form a second sample image and an edge map of the moire features in the second sample image; training a moire recognition model according to the second sample image and the edge map; inputting the target image into the moire recognition model to obtain the probability that the target image contains moire; and determining the detection result of the moire detection according to the probability.
Moire is an optical phenomenon commonly seen in image processing, printing, textiles, photography and other fields. It is an interference pattern resulting from the superposition of two or more periodic structures (e.g., pixel arrays, gratings, lines, grids or corrugations) whose frequencies are close or slightly different. When these repeated patterns overlap without aligning properly, the human eye or sensor perceives a new, often irregular, periodic pattern of stripes or waves, which is known as moire.
Further, the moire image may be a moire pattern on a solid-color background to facilitate segmentation of the moire. The color difference between the background and the moire can be exploited to isolate the moire interference; a segmentation algorithm then filters out the remaining background, and finally a screening algorithm performs fine-grained moire extraction.
In this embodiment, using a mask-overlay concept, the store template image is perspective-transformed according to the moire features in the moire image to generate a second sample image with both moire and store background, and an edge map of the moire features in the second sample image is constructed. The second sample image serves as the sample and the edge map as the label for training the moire recognition model. The probability that the target image contains moire can then be determined by the moire recognition model; when the probability is greater than a preset probability, the target image is determined to contain moire, i.e., to be a non-live-action picture taken of an electronic screen. Thus, on the basis of accurate and automatic moire detection, the method can simulate actual camera shooting and preserve the characteristics of the background and the moire to the greatest extent, so that the finally synthesized second sample image not only corresponds to the moire layer at pixel level but also approximates an actual shop picture. High-quality, large-scale, low-cost moire training data can be prepared rapidly, which improves the training efficiency of the moire recognition model while reducing the training difficulty.
Illustratively, for a given store template image B ∈ [0,255]^(m×n×3) and original moire feature layer M, the perspective-transformed moire feature layer is multiplied pixel by pixel with the store template image to synthesize a second sample image I ∈ [0,255]^(m×n×3) and a corresponding moire edge map D ∈ [0,255]^(m×n), thereby enabling the second sample image to preserve the texture and color features of M. Using the second sample image I and the corresponding moire edge map D as training data, feature values of the moire at different granularities, sizes and positions are sufficiently captured using a high-level encoder (High-Level Encoder), a low-level encoder (Low-Level Encoder) and a spatial encoder (Spatial Encoder). Three corresponding loss functions are also provided for the three encoders, effectively guiding feature extraction and model generalization, further improving model training efficiency and moire detection precision.
Wherein, the calculation formula of I is as follows: I = B ⊙ Tra(M, t), with ⊙ denoting pixel-by-pixel multiplication,
where Tra(·) represents a perspective transformation operation, and t represents a perspective transformation matrix for computing the target mask of M on a rectangular region of size [m, n]; t may be varied empirically to stretch, squeeze, rotate, shear and reflect, thereby mimicking different camera poses and shooting distances.
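Given an already perspective-transformed moire layer Tra(M, t) (the warp itself would be done with an image library and is taken as given here), the pixel-by-pixel synthesis of I can be sketched as an integer product renormalized back to [0, 255]. The renormalization by 255 is an assumption; the patent does not spell out the scaling.

```python
import numpy as np

def synthesize_moire_sample(template, warped_moire):
    """Pixel-by-pixel multiplication of the store template image B with
    the perspective-transformed moire layer Tra(M, t). Both inputs are
    assumed to be uint8 arrays of identical shape; treating each as a
    value in [0, 255], the product is renormalized by dividing by 255."""
    prod = template.astype(np.uint16) * warped_moire.astype(np.uint16)
    return (prod // 255).astype(np.uint8)   # back to [0, 255]
```

For instance, a pure-white template pixel (255) combined with a mid-gray moire pixel (128) yields 128, so the moire texture shows through while dark template regions stay dark, preserving the texture and color features of M.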
Aiming at screenshot detection, the detection processing of the target image is specifically implemented as follows:
In a first mode, image file format information associated with the target image is read, and the detection result of the screenshot detection is determined according to the reading completion state of the image file format information.
Wherein the reading completion state includes success or failure of reading, and the exchangeable image file format (EXIF) information includes at least one of shooting time, device model, focal length parameter and aperture parameter.
In this embodiment, if the image file format information associated with the target image is not successfully read, indicating that the target image lacks shooting-related metadata or that the information is incomplete, the target image may be roughly judged to be a screenshot. Otherwise, if the target picture contains EXIF data, the target picture can be judged to be a photographed image.
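The decision logic of this first mode can be sketched as below. The EXIF reading itself (e.g. via an image library's metadata API) is assumed to have already produced a tag dictionary or failed; the specific tag names checked are illustrative, not taken from the patent.

```python
def screenshot_suspect_from_exif(exif):
    """Mode one: a picture whose EXIF metadata is missing, or lacks every
    shooting-related field, is roughly judged to be a screenshot.
    `exif` is a tag-name -> value dict, or None when reading failed."""
    shooting_tags = ("DateTimeOriginal", "Model", "FocalLength", "ApertureValue")
    if not exif:
        return True           # read failed or empty: suspect screenshot
    return not any(tag in exif for tag in shooting_tags)
```

In practice the dictionary could be populated from Pillow's `Image.getexif()` before applying this check.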
In a second mode, the detection result of the screenshot detection is determined according to the size relation between the resolution of the target image and a preset screenshot resolution.
Wherein, for a mobile device, the preset screenshot resolution may be set to the device screen resolution, which is typically less than 720x1280.
In this embodiment, whether the image is a screenshot may be determined by comparing the resolution of the target image with the device screen resolution. If the resolution of the target image is smaller than the preset screenshot resolution, the target image can be judged to be screenshot.
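A minimal sketch of this second mode follows; comparing total pixel counts is one interpretation of "smaller than the preset screenshot resolution" (the patent does not say whether width, height or area is compared), and the 720x1280 default mirrors the value above.

```python
def screenshot_suspect_by_resolution(width, height, preset=(720, 1280)):
    """Mode two: an image whose resolution is smaller than the preset
    screenshot resolution (here the device screen resolution) is
    judged to be a screenshot. Compares total pixel counts."""
    preset_w, preset_h = preset
    return width * height < preset_w * preset_h
```

A 540x960 upload would be flagged as a screenshot suspect, while a 3024x4032 camera photo would not.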
In a third mode, the similarity between the aspect ratio of the target image and each of a plurality of preset aspect ratios is calculated using a least-common-multiple method, and the detection result of the screenshot detection is determined according to the magnitude relation between the similarity and a preset similarity.
The preset aspect ratios of conventional screens lie substantially between 2:1 and 21:9, i.e., 2:1, 19:9, 16:9, 19.5:9, 15.8:9, 21:9, etc.
In this embodiment, the aspect ratio of the target image is compared with the conventional aspect ratios one by one, and the maximum similarity is taken as the aspect-ratio similarity. If the aspect-ratio similarity is greater than the preset similarity, the aspect ratio of the target image is considered consistent with a conventional aspect ratio, and the image is highly likely to be a screenshot.
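The comparison in this third mode can be sketched as follows. As the patent's "least common multiple method" is not defined, this sketch substitutes a simpler assumed metric, the ratio of the smaller to the larger aspect ratio, which is 1.0 on an exact match and decreases as the ratios diverge; the preset list is taken from the text above.

```python
PRESET_RATIOS = ["2:1", "19:9", "16:9", "19.5:9", "15.8:9", "21:9"]

def aspect_ratio_similarity(width, height, presets=PRESET_RATIOS):
    """Mode three: compare the image's aspect ratio against each preset
    ratio and return the maximum similarity (in [0, 1])."""
    r = max(width, height) / min(width, height)   # orientation-independent
    best = 0.0
    for preset in presets:
        a, b = (float(x) for x in preset.split(":"))
        p = max(a, b) / min(a, b)
        best = max(best, min(r, p) / max(r, p))   # 1.0 when ratios match
    return best
```

A 1080x1920 image matches 16:9 exactly (similarity 1.0) and would be flagged as screenshot-like, while a square 1:1 picture falls well below a high similarity threshold.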
In a fourth mode, the target image is input into an interface element recognition model to obtain interface element features in the target image, and the detection result of the screenshot detection is determined according to the interface element features.
The interface element identification model is obtained through training according to a third sample image and an interface element label, and interface element characteristics comprise information prompt bar characteristics, application program identification characteristics and screen boundary characteristics.
In this embodiment, a screenshot will typically contain screen-specific content such as status bars, navigation bars and application interface elements, which do not appear in a normal photo; even after cropping, the edges may retain subtle pixel traces or unnatural screen boundaries. A deep learning model can automatically learn and extract interface element features from the third sample image in advance, effectively capturing and modeling complex nonlinear feature relations in the image. The interface element recognition model then judges whether the target image contains interface element features, and if so, the target image is judged to be a screenshot.
For example, the convolutional neural network EfficientNet may be used as the interface element recognition model framework.
The method for verifying the authenticity of the store further comprises the steps of inputting a third sample image into a feature extraction model to obtain a global feature vector and a local feature vector of the third sample image, determining a channel attention weight according to the global feature vector and the local feature vector, and updating the third sample image according to the channel attention weight.
The feature extraction model comprises a global feature extraction channel and a local feature extraction channel.
In this embodiment, global and local feature vectors of the third sample image are extracted based on an attention mechanism: the global feature vector captures information of the whole image, while the local feature vector captures details and local features in the image. The third sample image is updated through the channel attention weight obtained from the global and local feature vectors. The contribution of different channels to the features can thus be adjusted dynamically, keeping only the channels most useful for the current image; dynamically weighting the feature map in the channel and spatial dimensions strengthens the network's attention to key information, improving the performance and interpretability of the model, particularly when processing unseen images. It also reduces computation on useless feature channels, lowering the model's computation and storage requirements.
Illustratively, GAP (Global Average Pooling) and GMP (Global Max Pooling) operations are performed on the original third sample image respectively, compressing it into a 1×1×C vector, where C is the number of channels, to capture global and local features across channels. The resulting global and local feature vectors each pass through an independent fully connected layer to learn the correlation among channels, and the output of the fully connected layers is normalized by a Sigmoid function to obtain channel attention weights between 0 and 1. The Sigmoid output is applied to each channel of the original third sample image, and the feature maps after channel attention and spatial attention are multiplied, yielding a weighted third sample image that accounts for both channel importance and spatial-position importance.
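The GAP/GMP channel-attention step can be sketched numerically as below. This is a simplified version, assuming one fully connected layer per pooling branch (the text describes two independent layers) and omitting the spatial-attention stage.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w_gap, w_gmp):
    """feat: H x W x C feature map. GAP and GMP squeeze it into two
    C-dimensional vectors, each passed through its own fully connected
    layer (weight matrices w_gap, w_gmp of shape C x C); the Sigmoid of
    their sum gives per-channel weights in (0, 1) that rescale the
    original channels."""
    gap = feat.mean(axis=(0, 1))                 # global average pooling
    gmp = feat.max(axis=(0, 1))                  # global max pooling
    weights = sigmoid(gap @ w_gap + gmp @ w_gmp)  # 1 x 1 x C attention
    return feat * weights                         # broadcast over H and W
```

With an all-ones feature map and identity weights, every channel is scaled by sigmoid(2), illustrating how the weights stay strictly between 0 and 1.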
Aiming at watermark detection, the method specifically comprises the step of inputting the target image into a watermark detection model to obtain a detection result of whether the target image contains a watermark.
The watermark detection model is obtained through training according to a fourth sample image and a watermark label, wherein the fourth sample image comprises a watermark image.
In this embodiment, the watermark detection model can detect whether the target image contains a watermark. If a watermark is detected in the target image, the target image may be judged to have possibly been tampered with or reprinted, which improves the reliability and credibility of the information.
Illustratively, the watermark detection model includes a YOLOV detection model and a large language model. Network pictures usually carry a watermark identifier, so the authenticity of the current target picture can be judged by checking whether the picture contains a watermark identifier. The specific implementation flow is basically consistent with the door head detection described above and also uses a YOLOV detection model, except that the training data is a large-scale, manually annotated public watermark dataset; the training process is not repeated here. Further, when the YOLOV model detects a watermark in the picture but the confidence is below a preset confidence, indicating uncertainty as to whether the blurred portion of the picture contains a watermark, the large language model is used for detection. The powerful semantic understanding and reasoning capabilities of the large model can analyze the content in the picture and give a corresponding result, achieving a high-precision detection effect based on the large language model.
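The detector-then-LLM cascade above can be sketched as a small decision function. Both the detector output shape and the 0.6 threshold are assumptions for illustration; the LLM check is modeled as an opaque callable.

```python
def watermark_verdict(detection, llm_check, confidence_threshold=0.6):
    """Cascade: trust the YOLO-style detector when it is confident;
    otherwise fall back to a large-language-model check on the
    ambiguous picture. `detection` is None (nothing found) or a
    (label, confidence) pair; `llm_check` is a callable returning
    True when the LLM judges a watermark to be present."""
    if detection is None:
        return False                        # no watermark candidate at all
    _, confidence = detection
    if confidence >= confidence_threshold:
        return True                         # detector is certain enough
    return llm_check()                      # ambiguous case: ask the LLM
```

This keeps the cheap detector on the hot path and reserves the expensive LLM call for low-confidence detections only.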
It is worth mentioning that, considering the differing requirements for the authenticity of store information in different scenarios, the image detection items to be executed can be matched to the application scenario. For example, in scenarios such as merchant network access, merchant inspection and customer risk control, where merchants must be strictly vetted, all four detections (shop door head detection, moire detection, screenshot detection and watermark detection) can be performed. In a customer management scenario, the aim is only to determine whether the merchant's information has changed, so only shop door head detection and moire detection are needed to ensure the accuracy of information recognition in the picture.
Step 103, identifying second shop information in the target image when the detection result of the image detection item meets the preset condition;
In this embodiment, after the detection results of the image detection items meet the preset conditions, that is, the verification passes, the system may determine that the target image is a live-action image of the store photographed by the user. At this point, text recognition may be performed on the target image to obtain the second store information shown in it. Otherwise, if the detection result of any image detection item does not meet the preset condition, the authenticity verification result is determined to be failed. Triggering the subsequent store information recognition function only once the image is confirmed authentic and the text can be accurately recognized uses image detection as a first layer of authenticity verification, optimizing the verification process, improving verification efficiency and ensuring the recognition accuracy of the store information.
Illustratively, the door head portion in the target image is segmented to obtain a picture of the door head region. The door head picture is recognized with an OCR algorithm and the recognition results are concatenated. The OCR recognition result is input into a UIE (unified information extraction framework) model to obtain the merchant name. The UIE extraction model is implemented based on the PaddleNLP tool library and supports automatic extraction of structured information from unstructured or semi-structured text, mainly covering tasks such as entity recognition, relation extraction, event extraction, sentiment analysis and comment extraction. The UIE extraction model may be pre-trained based on the PaddleNLP tool library and fine-tuned with real-scene data to adapt to the current scenario.
And 104, determining an authenticity verification result of the store according to the similarity between the first store information and the second store information.
According to the method for verifying the authenticity of the store, when the authenticity of store information submitted by a user needs to be verified, the system acquires the target image of the store submitted by the user and the first store information related to the store. Through at least one of shop door head detection, moire detection, screenshot detection and watermark detection, the system verifies the authenticity of the target image from multiple aspects: whether the target image is a store door head image, whether it is a live-action shot, whether it is a device screenshot, and whether it is a reprinted image. After the detection results of the image detection items meet the preset conditions, that is, the verification passes, the system can judge that the target image is a live-action store image shot by the user. At this point, text recognition may be performed on the target image to obtain the second store information shown in it. The second store information in the image is then compared with the first store information submitted by the user; if the comparison passes, the user is determined to be a real merchant and the subsequent flow is executed. A double verification mechanism of image and information is thereby realized through the automated store image verification and information verification process. On one hand, manual intervention in the auditing process is reduced, labor and time costs are lowered, and the efficiency of merchant store authenticity verification is greatly improved. On the other hand, intelligent authenticity verification from multiple aspects, combining multiple detection technologies, greatly improves the accuracy of merchant authenticity verification.
In a specific application scenario, step 104, namely determining the authenticity verification result of the store according to the similarity between the first store information and the second store information, specifically comprises: determining that the authenticity verification result is passed when the similarity between the first store information and the second store information is greater than a similarity threshold; transmitting the first store information to a rechecking node when the similarity is less than or equal to the similarity threshold; determining that the authenticity verification result is passed in response to a confirmation instruction fed back by the rechecking node; and determining that the authenticity verification result is failed when no confirmation instruction fed back by the rechecking node is received within a preset period.
The similarity threshold value can be reasonably set according to the required verification precision, and the embodiment of the application is not particularly limited.
In this embodiment, if the similarity between the first store information and the second store information is greater than the similarity threshold, the authenticity verification result may be determined as passed, indicating that the store information provided by the user and that recognized from the image are highly similar, so the store information provided by the user is likely authentic. Otherwise, if the similarity is smaller, the first store information is sent to the rechecking node, where staff can manually recheck and verify it to ensure its authenticity. After the staff confirm that the first store information is true, a confirmation instruction is issued and the system determines the verification result as passed. If the staff doubt the first store information, they may issue a return instruction or take no action, so that after the system detects that no confirmation instruction has been fed back by the rechecking node within the preset period, the authenticity verification result is determined as failed. This realizes automated verification of store information credibility, reduces the workload of manual review, avoids false verification results caused by errors in automatic recognition and judgment, and increases the rigor and credibility of the verification process. In addition, manual review is triggered only when the system judges that the similarity does not meet the requirement, which reduces data transmission channels, lowers the risk of data leakage, and strengthens data security and privacy protection.
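The pass/recheck/fail flow above can be sketched as follows. The patent does not fix a text-similarity metric, so this sketch assumes a normalized edit-based ratio from the standard library; the recheck node is modeled as a callable that returns True on a confirmation instruction and False when none arrives within the preset period.

```python
from difflib import SequenceMatcher

def store_name_similarity(first, second):
    """Assumed metric: normalized edit-based similarity in [0, 1]."""
    return SequenceMatcher(None, first, second).ratio()

def verify_store(first_info, second_info, threshold, await_confirmation):
    """Pass directly when similarity clears the threshold; otherwise
    hand the first store information to the recheck node, whose
    confirmation (or silence within the preset period) decides."""
    if store_name_similarity(first_info, second_info) > threshold:
        return "passed"
    return "passed" if await_confirmation(first_info) else "failed"
```

Identical names pass automatically; mismatched names pass or fail depending on the manual confirmation outcome.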
It should be noted that, the sequence number of each step in the above embodiment does not mean the sequence of execution sequence, and the execution sequence of each process should be determined by its function and internal logic, and should not limit the implementation process of the embodiment of the present application in any way.
Further, as shown in fig. 2, as a specific implementation of the above-mentioned store authenticity verification method, an embodiment of the present application provides a store authenticity verification device 200, where the store authenticity verification device 200 includes an acquisition module 201, an image verification module 202, an identification module 203, and an information verification module 204.
Wherein, the acquisition module 201 is used for acquiring a target image of a store and first store information;
The image verification module 202 is configured to perform detection processing of an image detection item on a target image to obtain a detection result of the image detection item, where the image detection item includes at least one of shop front detection, moire detection, screenshot detection, or watermark detection;
an identifying module 203, configured to identify second store information in the target image if the detection result of the image detection item meets a preset condition;
The information verification module 204 is configured to determine an authenticity verification result of the store according to the similarity between the first store information and the second store information.
Further, the image detection items comprise shop door head detection, and the store authenticity verification device 200 further comprises an image flipping module (not shown in the figure) used for performing mirror inversion processing on the target image to obtain a flipped image; the identification module 203 is further used for performing text recognition processing on the target image and the flipped image respectively, to determine the confidence of a first text in the target image and the confidence of a second text in the flipped image; the image verification module 202 is specifically used for replacing the target image with the flipped image when the confidence of the first text is smaller than the confidence of the second text, and inputting the target image into a door head detection model to obtain a detection result of whether the target image contains a shop door head, wherein the door head detection model is obtained by training according to the first sample image and a door head label, and the first sample image comprises a shop door head image.
Further, the image detection items comprise moire detection, and the store authenticity verification device 200 further comprises a first feature processing module (not shown in the figure) used for extracting the moire features contained in the moire image and performing perspective transformation processing on the store template image according to the moire features to form a second sample image and an edge map of the moire features in the second sample image; a training module (not shown in the figure) used for training a moire recognition model according to the second sample image and the edge map; the image verification module 202 is specifically used for obtaining the moire image and the store template image, inputting the target image into the moire recognition model to obtain the probability that the target image contains moire, and determining the detection result of the moire detection according to the probability.
Further, the image detection item includes screenshot detection, and the image verification module 202 is specifically configured to read image file format information associated with the target image, and determine a detection result of the screenshot detection according to a reading completion state of the image file format information, where the reading completion state includes success or failure of reading, and the image file format information includes at least one of a shooting time, a device model, a focal length parameter, and an aperture parameter.
Further, the image detection items include screenshot detection, and the image verification module 202 is specifically configured to determine a detection result of the screenshot detection according to a size relationship between a resolution of the target image and a preset screenshot resolution.
Further, the image detection items include screenshot detection, and the image verification module 202 is specifically configured to calculate the similarity of the aspect ratio between the aspect ratio of the target image and a plurality of preset aspect ratios by using a least common multiple method, and determine a detection result of the screenshot detection according to the magnitude relation between the similarity and the preset similarity.
Further, the image detection item includes screenshot detection, and the image verification module 202 is specifically configured to input the target image into an interface element recognition model to obtain an interface element feature in the target image, and determine a detection result of the screenshot detection according to the interface element feature, where the interface element recognition model is obtained by training according to a third sample image and an interface element tag, and the interface element feature includes an information prompt bar feature, an application program identification feature, and a screen boundary feature.
Further, the store authenticity verification device 200 further comprises a second feature processing module (not shown in the figure), wherein the second feature processing module is used for inputting the third sample image into a feature extraction model to obtain a global feature vector and a local feature vector of the third sample image, the feature extraction model comprises a global feature extraction channel and a local feature extraction channel, channel attention weights are determined according to the global feature vector and the local feature vector, and an updating module (not shown in the figure) is used for updating the third sample image according to the channel attention weights.
Further, the image detection item includes watermark detection, and the image verification module 202 is specifically configured to input the target image into a watermark detection model to obtain a detection result that the target image includes a watermark, where the watermark detection model is obtained by training according to a fourth sample image and a watermark label, and the fourth sample image includes a watermark image.
Further, the information verification module 204 is specifically configured to: determine that the authenticity verification result is passed if the similarity between the first store information and the second store information is greater than a similarity threshold; send the first store information to the rechecking node if the similarity is less than or equal to the similarity threshold; determine that the authenticity verification result is passed in response to a confirmation instruction fed back by the rechecking node; and determine that the authenticity verification result is failed if no confirmation instruction fed back by the rechecking node is received within a preset period. The image verification module 202 is further configured to determine that the authenticity verification result is failed if the detection result of any one of the image detection items does not meet the preset condition.
For specific limitations on the store authenticity verification device, reference may be made to the above limitations on the store authenticity verification method, which are not repeated here. The respective modules in the above store authenticity verification device may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware in, or independent of, a processor in the computer device, or may be stored as software in a memory of the computer device, so that the processor can call and execute the operations corresponding to the above modules.
Based on the method shown in fig. 1, correspondingly, the embodiment of the application also provides a readable storage medium, on which a computer program is stored, which when executed by a processor, implements the method for verifying the authenticity of the store shown in fig. 1.
Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the respective implementation scenarios of the present application.
Based on the method shown in fig. 1 and the virtual device embodiment shown in fig. 2, in order to achieve the above object, an embodiment of the present application further provides a computer device, which may specifically be a personal computer, a server, a network device, or the like, where the computer device includes a storage medium and a processor, the storage medium is used to store a computer program, and the processor is used to execute the computer program to implement the method for verifying authenticity of a store shown in fig. 1.
Optionally, the computer device may also include a user interface, a network interface, a camera, radio frequency (RF) circuitry, sensors, audio circuitry, a Wi-Fi module, and the like. The user interface may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and optionally may also include a USB interface, a card reader interface, etc. The network interface may optionally include a standard wired interface, a wireless interface (e.g., a Bluetooth interface or a Wi-Fi interface), etc.
It will be appreciated by those skilled in the art that the computer device structure provided in the present embodiment does not constitute a limitation on the computer device, which may include more or fewer components, combine certain components, or arrange the components differently.
The storage medium may also include an operating system and a network communication module. The operating system is a program that manages and controls the hardware and software resources of the computer device, supporting the execution of the information processing program and other software and/or programs. The network communication module is used to implement communication among the components within the storage medium, as well as communication with other hardware and software in the entity device.
Through the description of the above embodiments, it is clear to those skilled in the art that the method of the present application may be implemented by software plus a necessary general hardware platform, or by hardware: acquiring the target image and the first store information of the store; performing detection processing of an image detection item on the target image to obtain a detection result of the image detection item, where the image detection item includes at least one of store door head detection, moire detection, screenshot detection, or watermark detection; if the detection result of the image detection item meets a preset condition, identifying the second store information in the target image; and determining the authenticity verification result of the store according to the similarity between the first store information and the second store information.

According to the embodiment of the application, when the authenticity of store information submitted by a user needs to be verified, the system acquires a target image of the store submitted by the user and first store information related to the store. Through at least one of store door head detection, moire detection, screenshot detection, and watermark detection, the system verifies the authenticity of the target image from multiple aspects: whether it is a store door head image, whether it is a live-action shot, whether it is a device screenshot, and whether it is a reposted image. Once the detection result of the image detection item meets the preset condition, that is, the verification is passed, the system can judge that the target image is a live-action store image shot by the user. At this point, text recognition may be performed on the target image to obtain the second store information shown in the image.
The second store information in the image is then compared with the first store information submitted by the user; if the comparison passes, the user is determined to be a real merchant and the subsequent process is executed. In this way, a double verification mechanism of image and information is realized through an automated store image verification and information verification process. On the one hand, manual intervention in the review process is reduced, lowering labor and time costs and greatly improving the efficiency of merchant store authenticity verification. On the other hand, multiple detection technologies are combined to perform intelligent authenticity verification from multiple aspects, greatly improving the accuracy of merchant authenticity verification.
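The overall double-verification flow described above can be sketched end to end. The detector, OCR, and similarity functions here are placeholder stand-ins wired in as parameters, not implementations from the patent.

```python
# Image detection items run first; only if all pass is the text compared.
DETECTION_ITEMS = ("door_head", "moire", "screenshot", "watermark")

def run_detections(image, detectors):
    # Each detector returns True when the image passes that item.
    return {name: detectors[name](image) for name in DETECTION_ITEMS}

def verify(image, first_info, detectors, ocr, similarity, threshold=0.8):
    results = run_detections(image, detectors)
    if not all(results.values()):
        return False                 # any failed item -> not passed
    second_info = ocr(image)         # recognize store info shown in the image
    return similarity(first_info, second_info) > threshold

# Toy stand-ins for demonstration:
detectors = {name: (lambda img: True) for name in DETECTION_ITEMS}
ocr = lambda img: "ACME Coffee"
sim = lambda a, b: 1.0 if a == b else 0.0
print(verify("img.jpg", "ACME Coffee", detectors, ocr, sim))  # True
```

Running the image detection items before the (typically more expensive) OCR and comparison step also means obviously fraudulent submissions are rejected early, which matches the cost-reduction argument made above.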
Those skilled in the art will appreciate that the drawings are merely schematic illustrations of preferred implementation scenarios, and that the modules or flows in the drawings are not necessarily required to practice the application. Those skilled in the art will appreciate that the modules in the apparatus of an implementation scenario may be distributed in the apparatus according to the description of that scenario, or may be changed correspondingly to be located in one or more apparatuses different from those of the implementation scenario. The modules of the above implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above serial numbers are merely for description and do not represent the advantages or disadvantages of the implementation scenarios. The foregoing disclosure is merely illustrative of some embodiments of the application; the application is not limited thereto, and modifications may be made by those skilled in the art without departing from the scope of the application.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410902025.4A CN119090568A (en) | 2024-07-05 | 2024-07-05 | Store authenticity verification method, device, storage medium and computer equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN119090568A true CN119090568A (en) | 2024-12-06 |
Family
ID=93696431
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410902025.4A Pending CN119090568A (en) | 2024-07-05 | 2024-07-05 | Store authenticity verification method, device, storage medium and computer equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN119090568A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||