CN112633181A - Data processing method, system, device, equipment and medium - Google Patents
- Publication number
- CN112633181A (application CN202011563272.4A)
- Authority
- CN
- China
- Prior art keywords
- image
- target
- target object
- sub
- specified wavelength
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
Abstract
Embodiments of the present disclosure relate to methods, apparatuses, devices, and media for data processing. The method comprises: acquiring a target image associated with a target object, the target image presenting speckle generated by the target object reflecting laser light of a specified wavelength; determining a target sub-image associated with a specific part of the target object from the target image; and performing liveness detection on the target object based on the target sub-image. In this way, it is possible to accurately and efficiently determine whether the target object is a living user.
Description
Technical Field
Embodiments of the present disclosure relate generally to data processing, and more particularly to data processing methods, systems, apparatuses, electronic devices, and computer storage media.
Background
Currently, face recognition technology is widely used in various fields, such as finance, security, consumption, and medical care. However, with the widespread application of face recognition technology, attacks on face recognition systems are becoming more and more common. Typical attack means include photo copying, video editing, and the like. To ensure the security and reliability of face recognition systems, liveness detection technology has gradually been applied to them to defend against these diversified attack means. However, conventional liveness detection techniques perform poorly.
Disclosure of Invention
According to an embodiment of the present disclosure, a data processing scheme is provided.
In a first aspect of the disclosure, a data processing method is provided. The method comprises: acquiring a target image associated with a target object, the target image presenting speckle generated by the target object reflecting laser light of a specified wavelength; determining a target sub-image associated with a specific part of the target object from the target image; and performing liveness detection on the target object based on the target sub-image.
In a second aspect of the disclosure, a data processing system is provided. The system comprises: a laser emitter for emitting laser light of a specified wavelength toward a target object; a laser receiver for generating a target image associated with the target object, the target image presenting speckle generated by the target object reflecting the laser light of the specified wavelength; and a controller configured to perform the method according to the first aspect of the present disclosure.
In a third aspect of the disclosure, an apparatus for data processing is provided. The apparatus includes: an acquisition module configured to acquire a target image associated with a target object, the target image presenting speckle generated by the target object reflecting laser light of a specified wavelength; a determination module configured to determine a target sub-image associated with a specific part of the target object from the target image; and a detection module configured to perform liveness detection on the target object based on the target sub-image.
In a fourth aspect of the present disclosure, an electronic device is provided. The electronic device includes: one or more processors; and memory for storing one or more programs that, when executed by the one or more processors, cause the electronic device to implement the method according to the first aspect of the disclosure.
In a fifth aspect of the present disclosure, a computer readable medium is provided, on which a computer program is stored which, when executed by a processor, performs the method according to the first aspect of the present disclosure.
In a sixth aspect of the present disclosure, a detection method is provided. The method comprises: emitting laser light of a specified wavelength toward a target object; receiving a target image associated with the target object, the target image presenting speckle generated by the target object reflecting the laser light of the specified wavelength; and identifying, based on the target image, whether a reflecting part of the target object is of a specific material.
It should be understood that this summary is not intended to identify key or essential features of the embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters designate like or similar elements, and wherein:
FIG. 1 illustrates a schematic diagram of an exemplary environment in which embodiments of the present disclosure can be implemented;
FIG. 2 illustrates a flow diagram of a data processing method according to some embodiments of the present disclosure;
FIG. 3 illustrates a schematic diagram of an example of acquiring a target image, according to some embodiments of the present disclosure;
FIG. 4 illustrates a schematic diagram of an example of determining a target sub-image, in accordance with some embodiments of the present disclosure;
FIG. 5 illustrates a block diagram of an apparatus for data processing, in accordance with some embodiments of the present disclosure; and
FIG. 6 illustrates a block diagram of an electronic device capable of implementing embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
In describing embodiments of the present disclosure, the term "include" and its variants should be interpreted as open-ended, i.e., "including but not limited to". The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The terms "first," "second," and the like may refer to different or the same objects. Other explicit and implicit definitions may also be included below.
As described above, liveness detection technology is adopted in face recognition systems to prevent malicious attacks. Conventional liveness detection techniques can be classified into motion liveness detection and silent liveness detection. In motion liveness detection, the user is required to make certain interactive motions so that the user can be identified as a real living user. In silent liveness detection, the user is not required to make any interactive action but is identified as a real living user or a false attack object through an algorithm.
Conventionally, silent liveness detection techniques mainly include monocular liveness detection, binocular liveness detection, glare (colored-light) liveness detection, and depth liveness detection. Monocular liveness detection may employ RGB (red green blue) imaging, recognizing a living user from an RGB image formed under natural light. However, monocular liveness detection has a low defense rate against attack objects of the paper-mask and screen-display types, and its defense capability under natural light is poor.
Binocular liveness detection utilizes black-and-white grayscale images formed by infrared imaging. Because infrared imaging is reflection imaging, a screen-display attack object cannot be imaged, so the defense capability against screen-display attacks is good. However, the defense capability of binocular liveness detection against the paper-mask type is not significantly improved. In addition, since binocular liveness detection requires an additional infrared camera, it increases hardware costs.
Glare liveness detection performs liveness detection by illuminating the target object with natural light of different colors. This technique is flawed because its accuracy is affected by illumination: in general, it is difficult for glare liveness detection to work accurately under strong or dim light. Moreover, during glare liveness detection the light color of each illumination is visible, so a malicious attacker can craft an attack according to the light colors; security and confidentiality are therefore low.
Further, depth liveness detection may utilize a depth imaging technique, such as 3D (three-dimensional) structured light imaging. Since a real living user is stereoscopic while a paper-mask or screen-display attack object is planar, real living users and false attack objects can be distinguished by obtaining the overall depth information of the target object.
In addition to their respective drawbacks, these conventional liveness detection techniques have slow recognition speed, because operations such as frame extraction require high-quality images of the target object. In view of the above, conventional methods cannot perform liveness detection in a simple, fast, and accurate manner.
To this end, embodiments of the present disclosure provide a scheme for data processing. In this scheme, a target image associated with a target object may be acquired. The target image presents speckle, which is generated by the target object reflecting laser light of a specified wavelength. Further, a target sub-image associated with a specific part of the target object may be determined from the target image, and liveness detection may be performed on the target object based on the target sub-image.
In this way, liveness detection can be performed simply, quickly, and accurately using the speckle generated when the target object reflects laser light of the specified wavelength. Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
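By way of illustration only, the following minimal Python sketch mirrors the three steps of this scheme end to end. Every function body here is a synthetic placeholder standing in for the hardware and models described below (the helper names are ours, not defined by this disclosure); random data merely keeps the sketch runnable.

```python
import numpy as np

def acquire_speckle_image() -> np.ndarray:
    # Placeholder for the laser receiver described with reference to FIG. 3;
    # a real system would deliver a frame of reflected-laser speckle.
    return np.random.default_rng(0).random((480, 640))

def determine_target_sub_image(target_image: np.ndarray) -> np.ndarray:
    # Placeholder for the sub-image determination described with FIG. 4;
    # a central crop stands in for a detected face region.
    h, w = target_image.shape
    return target_image[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]

def perform_liveness_detection(sub_image: np.ndarray) -> bool:
    # Placeholder for the trained detection model described with FIG. 2;
    # the fixed threshold is purely illustrative.
    return float(sub_image.mean()) > 0.5

target_image = acquire_speckle_image()                       # step 210
target_sub_image = determine_target_sub_image(target_image)  # step 220
is_live = perform_liveness_detection(target_sub_image)       # step 230
print("living user" if is_live else "attack object")
```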
FIG. 1 illustrates a schematic diagram of an exemplary environment 100 in which embodiments of the present disclosure can be implemented. The environment 100 includes a controller 110. The controller 110 may contain at least a processor, memory, and other components typically found in a general purpose computer to implement the functions of computing, storage, communication, control, and the like. For example, the controller 110 may be a smart phone, a tablet computer, a personal computer, a desktop computer, a notebook computer, a server, a mainframe, a distributed computing system, and so forth.
In the environment 100, the controller 110 is configured to acquire a target image 130 associated with a target object 120. The target image 130 presents speckle, which is generated by the target object 120 reflecting laser light of a specified wavelength. Conventionally, laser speckle technology is mainly applied to industrial product inspection, for example, detecting the roughness of a mobile phone shell, the surface roughness of a mechanical part, and the like. When a monochromatic, highly coherent beam such as laser light illuminates an object, the surface of the object becomes a source of secondary wavelets, and the reflected light exhibits a fine granular structure that bears no obvious relationship to the macroscopic properties of the object. Surfaces of different roughness and different materials reflect or scatter light composed of different wavelets. Because the scattering angles, and hence the wavelets, differ for objects of different roughness and materials, optical interference among a large number of wavelets with random phase differences produces a speckle image. Laser speckle technology thus enables fast acquisition of characteristic data of an object's surface from speckle images.
In view of this, since the skin surface roughness of a living user differs significantly from that of other materials, the speckle image of a living user will differ from the speckle image of an attack object (e.g., an object in a printed photograph, a copied photograph, a re-shot video, an edited video, etc.). In this case, it is possible to distinguish a real living user from a false attack object by irradiating the target object 120 with laser light of a specified wavelength and using the different speckle produced by light waves scattered from surfaces of different materials, such as skin, plain paper, a screen, and the like. Specifically, laser light of a specified wavelength may be emitted toward the target object 120, and a target image associated with the target object 120 may be received, the target image presenting speckle generated by the target object reflecting the laser light of the specified wavelength. Thus, it is possible to recognize whether the reflecting part of the target object 120 is of a specific material (for example, skin) based on the target image, and thereby to perform liveness detection on the target object 120.
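As an illustrative aside, one classical scalar summary of how strongly a surface shapes a speckle field is the speckle contrast K = std(I)/mean(I): fully developed speckle from a rough surface such as skin has K near 1, while a strong uniform (specular) component from a smoother surface such as a screen pushes K toward 0. The synthetic sketch below demonstrates the statistic; it is a textbook property of speckle fields offered for intuition, not a step claimed by this disclosure.

```python
import numpy as np

def speckle_contrast(intensity: np.ndarray) -> float:
    # Speckle contrast K = std(I) / mean(I).
    return float(intensity.std() / (intensity.mean() + 1e-12))

rng = np.random.default_rng(42)

# Synthetic stand-ins, illustrative only: fully developed speckle intensity
# follows a negative-exponential distribution, while the "smooth" surface
# adds a dominant constant component that washes the speckle out.
rough_surface = rng.exponential(scale=1.0, size=(256, 256))
smooth_surface = 4.0 + 0.3 * rng.exponential(scale=1.0, size=(256, 256))

print(f"rough  K = {speckle_contrast(rough_surface):.2f}")   # close to 1.0
print(f"smooth K = {speckle_contrast(smooth_surface):.2f}")  # much lower
```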
Further, in some embodiments, a target sub-image 140 associated with a specific part (e.g., the face, the palm, etc.) of the target object 120 may be determined from the target image 130 to improve the efficiency of liveness detection. In particular, the controller 110 may determine the target sub-image 140 associated with the specific part of the target object 120 from the target image 130. The target sub-image 140 is a speckle image of the specific part of the target object 120. As described above, the speckle images produced by objects of different roughness and different materials differ. Thus, the speckle image of a specific part of a real living user will differ from that of a false attack object (e.g., an object in a printed photograph, a re-shot video, an edited video, etc.). In view of this, the controller 110 may perform liveness detection on the target object 120 based on the target sub-image 140 to generate a detection result 150 indicating whether the target object 120 is a living user.
Hereinafter, the operation of the controller 110 will be described in conjunction with FIGS. 2-4. FIG. 2 illustrates a flow diagram of a data processing method 200 according to some embodiments of the present disclosure. The method 200 may be implemented by the controller 110 as shown in FIG. 1. Alternatively, the method 200 may be implemented by an entity other than the controller 110. It should be understood that method 200 may also include additional steps not shown and/or may omit steps shown, as the scope of the present disclosure is not limited in this respect.
At 210, the controller 110 acquires the target image 130 associated with the target object 120. The target image 130 exhibits speckle, which is generated by the target object 120 reflecting laser light of a specified wavelength. FIG. 3 illustrates a schematic diagram of an example 300 of acquiring a target image, according to some embodiments of the present disclosure.
As shown in FIG. 3, the laser emitter 310 may be used to emit laser light of a specified wavelength toward the target object 120. Specifically, in some embodiments, the laser emitter 310 may include a power supply, a laser generating device, a filtering device, a lens, and the like. The power supply provides power to the laser emitter 310. The laser generating device may generate initial laser light, which includes a portion at the specified wavelength and a portion not at the specified wavelength. The filtering device may filter out the portion of the initial laser light that is not at the specified wavelength, so as to generate the laser light of the specified wavelength. The lens may adjust the emission direction of the laser light of the specified wavelength so that it is emitted toward the target object 120.
As described above, when laser light of the specified wavelength irradiates the surface of the target object 120, the surface reflects or scatters light composed of different wavelets. Optical interference among a large number of wavelets with random phase differences generates interference fringes. Since the roughness of a living user's skin surface differs significantly from that of other materials, a real living user can be distinguished from a false attack object by irradiating a specific part (for example, the face) of the target object with laser light of the specified wavelength and using the different interference fringes produced by light waves scattered from surfaces of different materials, such as skin, plain paper, a screen, and the like. Thus, the laser receiver 320 may receive these interference fringes and generate the target image 130 associated with the target object 120.
In certain embodiments, the laser receiver 320 may include a photomultiplier, a data collector, and an image generator. The photomultiplier may receive the interference fringes generated by the target object 120 reflecting the laser light of the specified wavelength and amplify them, for example at a specified ratio, to a level recognizable by the data collector. The data collector may receive the amplified signal and convert it into a digital signal. The image generator may receive the digital signal and generate the target image 130 based on it. Thus, the controller 110 may acquire the target image 130.
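Purely as a sketch of this receiver-side data path, the snippet below reshapes digitized samples into a frame, assuming the data collector delivers 12-bit samples in row order; neither the bit depth nor the sample layout is specified by this disclosure.

```python
import numpy as np

def digital_signal_to_image(samples: np.ndarray, width: int, height: int) -> np.ndarray:
    # Toy image generator: reshape row-ordered ADC samples into a 2D frame
    # and normalize to 8 bits for downstream processing (assumed layout).
    frame = samples[: width * height].reshape(height, width).astype(np.float64)
    frame -= frame.min()
    if frame.max() > 0:
        frame /= frame.max()
    return (frame * 255).astype(np.uint8)

# Example with synthetic 12-bit ADC output (illustrative only).
rng = np.random.default_rng(1)
adc_samples = rng.integers(0, 4096, size=640 * 480)
target_image = digital_signal_to_image(adc_samples, width=640, height=480)
print(target_image.shape, target_image.dtype)  # (480, 640) uint8
```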
Referring back to FIG. 2, at 220, the controller 110 determines the target sub-image 140 associated with the specific part of the target object 120 from the target image 130. In some embodiments, an RGB image associated with the target object 120 may be utilized to help determine the target sub-image 140, since various detection algorithms exist for RGB images. FIG. 4 illustrates a schematic diagram of an example 400 of determining a target sub-image, according to some embodiments of the present disclosure.
As shown in FIG. 4, in some embodiments, the controller 110 may acquire a reference image 420 associated with the target object 120 captured by the camera 410. For example, the camera 410 may be an RGB camera and the reference image 420 may be an RGB image. The reference image 420 may be captured at or near the same time as the target image 130. Alternatively, the reference image 420 may be captured at a different time than the target image 130, as the present disclosure is not limited in this respect.
The controller 110 may detect a reference sub-image 430 containing the specific part in the reference image 420. For example, the controller 110 may utilize any suitable face detection algorithm to determine a region of the reference image 420 that contains the face of the target object 120 and use that region as the reference sub-image 430. For example, the face detection algorithm may be an MTCNN (Multi-Task Convolutional Neural Network) model, a FaceBoxes model, or the like.
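As one concrete possibility only (the disclosure requires nothing more specific than a suitable face detection algorithm), the open-source `mtcnn` Python package could produce the reference sub-image from an RGB reference image. The sketch below assumes that package's `detect_faces` interface, which returns a bounding box per detected face.

```python
import numpy as np
from mtcnn import MTCNN  # pip install mtcnn (assumed third-party dependency)

def detect_reference_sub_image(reference_image: np.ndarray):
    # Return (face_crop, (x1, y1, x2, y2)) for the most confident detection,
    # or None if no face is found. Expects an RGB uint8 array.
    detector = MTCNN()
    faces = detector.detect_faces(reference_image)
    if not faces:
        return None
    best = max(faces, key=lambda face: face["confidence"])
    x, y, w, h = best["box"]
    return reference_image[y : y + h, x : x + w], (x, y, x + w, y + h)
```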
Further, the controller 110 may determine a portion corresponding to the reference sub-image 430 from the target image 130 as the target sub-image 140. That is, the controller 110 may map the reference sub-image 430 to the target image 130 to locate the portion of the target image 130 associated with the specific part.
In some embodiments, the controller 110 may determine the coordinates of reference pixel points of the reference sub-image 430 in the reference image 420. For example, the controller 110 may determine the upper-left and lower-right coordinates of the reference sub-image 430. The controller 110 may map the coordinates to corresponding coordinates in the target image 130. For example, the controller 110 may transform, translate, scale, etc., the coordinates to map to corresponding coordinates in the target image 130. Thus, the controller 110 may determine the portion indicated by the corresponding coordinates from the target image 130 as the target sub-image 140.
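A minimal sketch of this mapping follows, under the simplifying assumption that the RGB camera and the laser receiver are co-aligned so the transform reduces to per-axis scaling plus a fixed offset; a calibrated homography would replace this in practice, and the disclosure does not fix the transform.

```python
def map_box_to_target(box, ref_size, target_size, offset=(0.0, 0.0)):
    # Map an (x1, y1, x2, y2) box from reference-image coordinates to
    # target-image coordinates via per-axis scaling plus a fixed offset.
    ref_w, ref_h = ref_size
    tgt_w, tgt_h = target_size
    sx, sy = tgt_w / ref_w, tgt_h / ref_h
    ox, oy = offset
    x1, y1, x2, y2 = box
    return (int(x1 * sx + ox), int(y1 * sy + oy),
            int(x2 * sx + ox), int(y2 * sy + oy))

# Example: a face box found in a 1920x1080 RGB reference frame, mapped into
# a 640x480 speckle image (zero offset assumed for co-aligned optics).
print(map_box_to_target((600, 200, 900, 560), (1920, 1080), (640, 480)))
```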
Referring back to FIG. 2, at 230, the controller 110 performs liveness detection on the target object 120 based on the target sub-image 140. Thus, the controller 110 can determine whether the target object 120 is a living user or a non-living attack object (e.g., an object in a printed photograph, a re-shot video, an edited video, etc.).
In certain embodiments, the controller 110 may apply the target sub-image 140 to a trained detection model to perform liveness detection on the target object 120. For example, the detection model may be any suitable convolutional neural network model, such as a ResNet (Residual Network) model, an Inception model, or the like.
The trained detection model is trained based on a set of training images, which include real training images and false training images. A real training image presents speckle generated by a living user reflecting laser light of the specified wavelength. A false training image presents speckle generated by a non-living user reflecting laser light of the specified wavelength. Each training image may also be labeled as being associated with a living user or a non-living user. Thus, the trained detection model can accurately classify the target object 120 as a living user or a non-living attack object.
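As a hedged sketch of one way such a model could be trained, the snippet below fine-tunes a ResNet-18 as a binary live/spoof classifier on labeled speckle crops. The single-channel input, class encoding, and hyperparameters are all our assumptions; the disclosure states only that real and false speckle training images carry live/non-live labels.

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet-18 backbone (torchvision >= 0.13 API), adapted for grayscale
# speckle sub-images and two classes: 0 = attack object, 1 = living user.
model = models.resnet18(weights=None)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    # One optimization step on a batch of (N, 1, H, W) speckle sub-images.
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with synthetic tensors (illustrative only).
images = torch.randn(8, 1, 224, 224)
labels = torch.randint(0, 2, (8,))
print(f"loss: {train_step(images, labels):.3f}")
```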
In this way, liveness detection can be performed simply, quickly, and accurately using the speckle generated when the target object reflects laser light of the specified wavelength.
FIG. 5 illustrates a block diagram of an apparatus 500 for data processing according to some embodiments of the present disclosure. For example, the apparatus 500 may be provided in the controller 110. As shown in FIG. 5, the apparatus 500 includes: an acquisition module 510 configured to acquire a target image associated with a target object, the target image presenting speckle generated by the target object reflecting laser light of a specified wavelength; a determination module 520 configured to determine a target sub-image associated with a specific part of the target object from the target image; and a detection module 530 configured to perform liveness detection on the target object based on the target sub-image.
In some embodiments, the determination module 520 includes: a reference image acquisition module configured to acquire a reference image associated with the target object captured by a camera; a reference sub-image detection module configured to detect a reference sub-image containing the specific part in the reference image; and a target sub-image determination module configured to determine a portion corresponding to the reference sub-image from the target image as the target sub-image.
In some embodiments, the target sub-image determination module comprises: a coordinate determination module configured to determine coordinates of reference pixel points of the reference sub-image in the reference image; a mapping module configured to map the coordinates to corresponding coordinates in the target image; and a sub-image determination module configured to determine a portion indicated by the corresponding coordinates from the target image as a target sub-image.
In some embodiments, the detection module 530 includes: a liveness detection module configured to apply the target sub-image to the trained detection model to perform liveness detection on the target object.
In some embodiments, the trained detection model is trained based on a set of training images including a real training image that presents speckle generated by a living user reflecting laser light of the specified wavelength and a false training image that presents speckle generated by a non-living user reflecting laser light of the specified wavelength.
FIG. 6 illustrates a schematic block diagram of an electronic device 600 that may be used to implement embodiments of the present disclosure. Device 600 may be used to implement apparatus 500 of FIG. 5. As shown, device 600 includes a Central Processing Unit (CPU) 601 that may perform various appropriate actions and processes in accordance with computer program instructions stored in a Read Only Memory (ROM) 602 or loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 can also be stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The various processes and processing described above, such as method 200, may be performed by the processing unit 601. For example, in some embodiments, the method 200 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into RAM 603 and executed by CPU 601, one or more steps of method 200 described above may be performed. Alternatively, in other embodiments, CPU 601 may be configured to perform method 200 in any other suitable manner (e.g., by way of firmware).
The present disclosure may be methods, apparatus, systems, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for carrying out various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or digital signals transmitted over electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, in order to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (16)
1. A method of data processing, comprising:
acquiring a target image associated with a target object, the target image presenting speckle generated by the target object reflecting laser light of a specified wavelength;
determining a target sub-image associated with a specific part of the target object from the target image; and
performing liveness detection on the target object based on the target sub-image.
2. The method of claim 1, wherein determining the target sub-image comprises:
acquiring a reference image associated with the target object captured by a camera;
detecting a reference sub-image containing the specific part in the reference image; and
determining a portion corresponding to the reference sub-image from the target image as the target sub-image.
3. The method of claim 2, wherein determining, from the target image, a portion corresponding to the reference sub-image as the target sub-image comprises:
determining coordinates of reference pixel points of the reference sub-image in the reference image;
mapping the coordinates to corresponding coordinates in the target image; and
determining the portion indicated by the corresponding coordinates from the target image as the target sub-image.
4. The method of claim 1, wherein performing liveness detection on the target object comprises:
applying the target sub-image to a trained detection model to perform liveness detection on the target object.
5. The method of claim 4, wherein the trained detection model is trained based on a set of training images including a real training image that presents speckle generated by a living user reflecting laser light of the specified wavelength and a false training image that presents speckle generated by a non-living user reflecting laser light of the specified wavelength.
6. A data processing system comprising:
a laser emitter configured to emit laser light of a specified wavelength toward a target object;
a laser receiver to generate a target image associated with the target object, the target image presenting speckle generated by the target object reflecting the laser light of the specified wavelength; and
a controller configured to perform the method of any one of claims 1-5.
7. The system of claim 6, wherein the laser emitter is configured to:
generate initial laser light including a portion at the specified wavelength and a portion not at the specified wavelength;
filter out the portion of the initial laser light that is not at the specified wavelength to generate the laser light of the specified wavelength; and
adjust the emission direction of the laser light of the specified wavelength to emit the laser light of the specified wavelength toward the target object.
8. The system of claim 6, wherein the laser receiver is configured to:
receive interference fringes generated by the target object reflecting the laser light of the specified wavelength;
amplify the interference fringes;
convert the amplified interference fringes into a digital signal; and
generate the target image based on the digital signal.
9. An apparatus for data processing, comprising:
an acquisition module configured to acquire a target image associated with a target object, the target image presenting speckle generated by the target object reflecting laser light of a specified wavelength;
a determination module configured to determine a target sub-image associated with a specific part of the target object from the target image; and
a detection module configured to perform liveness detection on the target object based on the target sub-image.
10. The apparatus of claim 9, wherein the determination module comprises:
a reference image acquisition module configured to acquire a reference image associated with the target object captured by a camera;
a reference sub-image detection module configured to detect a reference sub-image containing the specific part in the reference image; and
a target sub-image determination module configured to determine a portion corresponding to the reference sub-image from the target image as the target sub-image.
11. The apparatus of claim 10, wherein the target sub-image determination module comprises:
a coordinate determination module configured to determine coordinates of reference pixel points of the reference sub-image in the reference image;
a mapping module configured to map the coordinates to corresponding coordinates in the target image; and
a sub-image determination module configured to determine the portion indicated by the corresponding coordinates from the target image as the target sub-image.
12. The apparatus of claim 9, wherein the detection module comprises:
a liveness detection module configured to apply the target sub-image to a trained detection model to perform liveness detection on the target object.
13. The apparatus of claim 12, wherein the trained detection model is trained based on a set of training images including a real training image that presents speckle generated by a living user reflecting laser light of the specified wavelength and a false training image that presents speckle generated by a non-living user reflecting laser light of the specified wavelength.
14. An electronic device, the electronic device comprising:
one or more processors; and
memory storing one or more programs that, when executed by the one or more processors, cause the electronic device to implement the method of any of claims 1-5.
15. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-5.
16. A method of detection, comprising:
emitting laser light of a specified wavelength toward a target object;
receiving a target image associated with the target object, the target image presenting speckle generated by the target object reflecting the laser light of the specified wavelength; and
identifying, based on the target image, whether a reflecting part of the target object is of a specific material.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011563272.4A CN112633181B (en) | 2020-12-25 | 2020-12-25 | Data processing method, system, device, equipment and medium |
PCT/CN2021/123759 WO2022134754A1 (en) | 2020-12-25 | 2021-10-14 | Data processing method, system, device, equipment, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011563272.4A CN112633181B (en) | 2020-12-25 | 2020-12-25 | Data processing method, system, device, equipment and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112633181A (en) | 2021-04-09 |
CN112633181B (en) | 2022-08-12 |
Family
ID=75325116
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011563272.4A Active CN112633181B (en) | 2020-12-25 | 2020-12-25 | Data processing method, system, device, equipment and medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112633181B (en) |
WO (1) | WO2022134754A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113435378A (en) * | 2021-07-06 | 2021-09-24 | 中国银行股份有限公司 | Living body detection method, device and system |
WO2022134754A1 (en) * | 2020-12-25 | 2022-06-30 | 北京嘀嘀无限科技发展有限公司 | Data processing method, system, device, equipment, and storage medium |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11967184B2 (en) * | 2021-05-21 | 2024-04-23 | Ford Global Technologies, Llc | Counterfeit image detection |
US11636700B2 (en) | 2021-05-21 | 2023-04-25 | Ford Global Technologies, Llc | Camera identification |
US11769313B2 (en) | 2021-05-21 | 2023-09-26 | Ford Global Technologies, Llc | Counterfeit image detection |
CN117011950B (en) * | 2023-08-29 | 2024-02-02 | 国政通科技有限公司 | Living body detection method and device |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108764052B (en) * | 2018-04-28 | 2020-09-11 | Oppo广东移动通信有限公司 | Image processing method, image processing device, computer-readable storage medium and electronic equipment |
CN109145653B (en) * | 2018-08-01 | 2021-06-25 | Oppo广东移动通信有限公司 | Data processing method and device, electronic equipment and computer readable storage medium |
CN110059638A (en) * | 2019-04-19 | 2019-07-26 | 中控智慧科技股份有限公司 | A kind of personal identification method and device |
CN112633181B (en) * | 2020-12-25 | 2022-08-12 | 北京嘀嘀无限科技发展有限公司 | Data processing method, system, device, equipment and medium |
- 2020-12-25: CN application CN202011563272.4A granted as patent CN112633181B (status: Active)
- 2021-10-14: PCT application PCT/CN2021/123759 filed, published as WO2022134754A1 (status: Application Filing)
Patent Citations (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1245947A (en) * | 1998-05-22 | 2000-03-01 | 夏普公司 | Image processing device |
CN201378323Y (en) * | 2009-03-20 | 2010-01-06 | 公安部第一研究所 | Multi-modal combining identity authentication device |
CN106061373A (en) * | 2011-01-28 | 2016-10-26 | 巴伊兰大学 | Method and system for non-invasively monitoring biological or biochemical parameters of individual |
US20160106327A1 (en) * | 2014-10-15 | 2016-04-21 | Samsung Electronics Co., Ltd. | Apparatus and method for acquiring bio-information |
US20180165512A1 (en) * | 2015-06-08 | 2018-06-14 | Beijing Kuangshi Technology Co., Ltd. | Living body detection method, living body detection system and computer program product |
CN205405544U (en) * | 2016-02-27 | 2016-07-27 | 南京福瑞林生物科技有限公司 | Living body fingerprint recognition device |
US20180060639A1 (en) * | 2016-08-24 | 2018-03-01 | Samsung Electronics Co., Ltd. | Apparatus and method using optical speckle |
CN107316272A (en) * | 2017-06-29 | 2017-11-03 | 联想(北京)有限公司 | Method and its equipment for image procossing |
CN107820005A (en) * | 2017-10-27 | 2018-03-20 | 广东欧珀移动通信有限公司 | Image processing method, device and electronic installation |
CN108509888A (en) * | 2018-03-27 | 2018-09-07 | 百度在线网络技术(北京)有限公司 | Method and apparatus for generating information |
CN108495113A (en) * | 2018-03-27 | 2018-09-04 | 百度在线网络技术(北京)有限公司 | control method and device for binocular vision system |
CN108668078A (en) * | 2018-04-28 | 2018-10-16 | Oppo广东移动通信有限公司 | Image processing method, device, computer readable storage medium and electronic equipment |
CN108804895A (en) * | 2018-04-28 | 2018-11-13 | Oppo广东移动通信有限公司 | Image processing method, device, computer readable storage medium and electronic equipment |
CN108833887A (en) * | 2018-04-28 | 2018-11-16 | Oppo广东移动通信有限公司 | Data processing method, device, electronic equipment and computer readable storage medium |
US20190354746A1 (en) * | 2018-05-18 | 2019-11-21 | Beijing Sensetime Technology Development Co., Ltd | Method and apparatus for detecting living body, electronic device, and storage medium |
CN111344559A (en) * | 2018-12-26 | 2020-06-26 | 合刃科技(深圳)有限公司 | Defect detection method and defect detection system |
CN110942060A (en) * | 2019-10-22 | 2020-03-31 | 清华大学 | Material identification method and device based on laser speckle and modal fusion |
CN111476143A (en) * | 2020-04-03 | 2020-07-31 | 华中科技大学苏州脑空间信息研究院 | Device for acquiring multi-channel image, biological multi-parameter and identity recognition |
CN111639522A (en) * | 2020-04-17 | 2020-09-08 | 北京迈格威科技有限公司 | Living body detection method, living body detection device, computer equipment and storage medium |
CN112016525A (en) * | 2020-09-30 | 2020-12-01 | 墨奇科技(北京)有限公司 | Non-contact fingerprint acquisition method and device |
Non-Patent Citations (2)
Title |
---|
Zhao Jianlin: "Advanced Optics" (《高等光学》), 30 September 2002 *
Deng Qianwen et al.: "Living face detection method based on near-infrared and visible-light binocular vision" (基于近红外与可见光双目视觉的活体人脸检测方法), Journal of Computer Applications (《计算机应用》) *
Also Published As
Publication number | Publication date |
---|---|
WO2022134754A1 (en) | 2022-06-30 |
CN112633181B (en) | 2022-08-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112633181B (en) | Data processing method, system, device, equipment and medium | |
US11948282B2 (en) | Image processing apparatus, image processing method, and storage medium for lighting processing on image using model data | |
US10937167B2 (en) | Automated generation of pre-labeled training data | |
US9392262B2 (en) | System and method for 3D reconstruction using multiple multi-channel cameras | |
Treibitz et al. | Turbid scene enhancement using multi-directional illumination fusion | |
KR102524982B1 (en) | Apparatus and method for applying noise pattern to image processed bokeh | |
EP3561777B1 (en) | Method and apparatus for processing a 3d scene | |
CN107463659B (en) | Object searching method and device | |
US20160245641A1 (en) | Projection transformations for depth estimation | |
CN112270745B (en) | Image generation method, device, equipment and storage medium | |
US20180249095A1 (en) | Material characterization from infrared radiation | |
JP2014165617A (en) | Digital watermark embedding method and digital watermark detecting method | |
JP2015175644A (en) | Ranging system, information processing device, information processing method, and program | |
CN115131419A (en) | Image processing method for forming Tyndall light effect and electronic equipment | |
JP2014535101A (en) | Method and apparatus for facilitating detection of text in an image | |
Czajkowski et al. | Two-edge-resolved three-dimensional non-line-of-sight imaging with an ordinary camera | |
KR20150101343A (en) | Video projection system | |
JP2018196426A (en) | Pore detection method and pore detection device | |
Riaz et al. | Single image dehazing with bright object handling | |
EP3850834B1 (en) | Generating a representation of an object from depth information determined in parallel from images captured by multiple cameras | |
Bremner et al. | Impact of resolution, colour, and motion on object identification in digital twins from robot sensor data | |
CN114627521A (en) | Method, system, equipment and storage medium for judging living human face based on speckle pattern | |
JP2006177937A (en) | Distance measuring device and distance measurement method | |
Jung et al. | Color image enhancement using depth and intensity measurements of a time-of-flight depth camera | |
CN113450391A (en) | Method and equipment for generating depth map |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||