
CN114170432A - Image processing method, image identification method and related device - Google Patents

Image processing method, image identification method and related device

Info

Publication number
CN114170432A
Authority
CN
China
Prior art keywords
image
image data
target
type
marked
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111405835.1A
Other languages
Chinese (zh)
Inventor
冯国安
黄剑勇
贾欣欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Orbbec Inc
Original Assignee
Orbbec Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Orbbec Inc filed Critical Orbbec Inc
Priority to CN202111405835.1A
Publication of CN114170432A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an image processing method, an image identification method and a related device. Before transmitting image data, a camera performs parallax matching between a target type image acquired by an image acquisition sensor and a reference image of the same image type, yielding image data with a marked target pixel region. After receiving the image data, the terminal extracts the characteristic information of the target pixel region, compares it for consistency against the mark information of the marked image data, and identifies the type of the currently received image data from the comparison result. By identifying the image type through image marking and characteristic comparison, the scheme of the application effectively reduces the computation required for image type identification and improves its efficiency.

Description

Image processing method, image identification method and related device
Technical Field
The present application relates to the field of computer vision, and in particular, to an image processing method, an image recognition method, and a related apparatus.
Background
With the continuous development of science and technology, infrared cameras are increasingly widely applied in the field of computer vision. In practical applications, an infrared camera can acquire two types of images, namely a structured light image and a grayscale image, and then transmits the acquired images to a terminal application platform for image processing.
At present, a terminal application platform needs to adaptively apply the image processing algorithm corresponding to each image type, so it must recognize the type of a received image before processing it. In the related art, multiple images are usually computed over and the image type is recognized by comparing multiple calculation results. This approach involves a large amount of computation, heavily consumes processor and memory resources, and makes image type recognition inefficient, so it cannot meet the requirement for rapid recognition in practical applications.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image identification method and a related device, which can at least solve the problems of large calculation amount and low image type identification efficiency of an image type identification mode adopted in the related technology.
A first aspect of an embodiment of the present application provides an image processing method, which is applied to a camera, and includes:
acquiring image data acquired by an image acquisition sensor; wherein the image data comprises the following types of images: a structured light image and/or a grayscale image;
when the image data comprises a target type image, acquiring a reference image corresponding to the target type image;
performing parallax matching processing on the target type image in the image data and the reference image to obtain marked image data; the marked image data comprises a target image area provided with marking information, and the marking information is used by the terminal to identify the image type;
and sending the image data to be sent to the terminal.
A second aspect of the embodiments of the present application provides an image recognition method, which is applied to a terminal, and includes:
receiving image data sent by a camera; wherein the image data comprises the following types of images: a structured light image and/or a grayscale image, a target pixel region of the target type image having marking information;
acquiring characteristic information of the target pixel region of the image data;
performing consistency comparison on the characteristic information based on the marking information;
and identifying the type of the current image data according to the comparison result.
A third aspect of the embodiments of the present application provides an image processing apparatus, which is applied to a camera, and includes:
the first acquisition module is used for acquiring image data acquired by the image acquisition sensor; wherein the image data comprises the following types of images: a structured light image and/or a grayscale image;
a second obtaining module, configured to obtain a reference image corresponding to a target type image when the image data includes the target type image;
the processing module is used for performing parallax matching processing on the target type image in the image data and the reference image to obtain marked image data; the marked image data comprises a target image area provided with marking information, and the marking information is used by the terminal to identify the image type;
and the sending module is used for sending the image data to be sent to the terminal.
A fourth aspect of the embodiments of the present application provides an image recognition apparatus, which is applied to a terminal, and includes:
the receiving module is used for receiving image data sent by the camera; wherein the image data comprises the following types of images: a structured light image and/or a grayscale image, a target pixel region of the target type image having marking information;
a third obtaining module, configured to obtain feature information of the target pixel region of the image data;
the comparison module is used for carrying out consistency comparison on the characteristic information based on the marking information;
and the identification module is used for identifying the type of the current image data according to the comparison result.
A fifth aspect of an embodiment of the present application provides an electronic device, including a memory and a processor, where the processor is configured to execute a first computer program or a second computer program stored on the memory; when executing the first computer program, the processor implements the steps in the image processing method provided by the first aspect of the embodiment of the present application, and when executing the second computer program, the processor implements the steps in the image recognition method provided by the second aspect of the embodiment of the present application.
A sixth aspect of the embodiments of the present application provides a computer-readable storage medium, on which a first computer program or a second computer program is stored, the first computer program, when being executed by a processor, implementing the steps in the image processing method provided by the first aspect of the embodiments of the present application, and the second computer program, when being executed by the processor, implementing the steps in the image recognition method provided by the second aspect of the embodiments of the present application.
As can be seen from the above, according to the image processing method, the image recognition method and the related device provided in the present application, before transmitting image data, the camera performs parallax matching between a target type image acquired by the image acquisition sensor and a reference image of the same image type to obtain image data with a marked target pixel region; after receiving the image data, the terminal extracts the characteristic information of the target pixel region, compares it for consistency against the mark information of the marked image data, and identifies the type of the currently received image data from the comparison result. By identifying the image type through image marking and characteristic comparison, the scheme effectively reduces the computation required for image type identification and improves its efficiency.
Drawings
Fig. 1 is a schematic basic flow chart of an image processing method according to a first embodiment of the present application;
fig. 2 is a schematic diagram illustrating a first pixel region of an image being marked according to a first embodiment of the present application;
fig. 3 is a schematic view of disparity matching according to a first embodiment of the present application;
fig. 4 is a schematic basic flow chart of an image recognition method according to a first embodiment of the present application;
fig. 5 is a schematic flowchart of a refinement of an image interaction method according to a second embodiment of the present application;
fig. 6 is a schematic diagram of program modules of an image processing apparatus according to a third embodiment of the present application;
fig. 7 is a schematic diagram of program modules of an image recognition apparatus according to a third embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present application.
Detailed Description
In order to make the objects, features and advantages of the present application more apparent and understandable, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
At present, after image data is acquired by a camera, it generally needs to be transmitted to a terminal for image processing. Taking an infrared camera as an example, during image transmission between the infrared camera and the terminal, structured light images (i.e., depth images) and grayscale images (i.e., infrared images) are generally transmitted to the receiving end alternately; typically, if the current frame received by the terminal is a structured light image, the next frame is a grayscale image. In practical applications, when structured light images and grayscale images are used for image processing (such as face recognition), the two types of images correspond to different algorithms: a grayscale image must be fed into a grayscale-based image processing algorithm, and a structured light image into a structured-light-based one. If an image is fed into an image processing algorithm that does not match its type, an erroneous processing result is obtained, so the terminal needs to distinguish the type of each received image to ensure the accuracy of subsequent image processing.
In order to solve the problems of heavy computation and low efficiency of image type recognition in the related art, a first embodiment of the present application provides an image processing method applied to a camera. The camera includes an image capture sensor (i.e., an IR sensor) and a camera chip; the camera of this embodiment is preferably an infrared camera. As shown in fig. 1, the image processing method includes the following steps:
step 101, acquiring image data acquired by an image acquisition sensor.
Specifically, the image data of the present embodiment may include the following types of images: a structured light image and/or a grayscale image. In practical application, the image acquisition sensor acquires the corresponding type of image by controlling floodlight and structured light projection; that is, the image acquisition sensor can sense the type of the image it acquires. It should be noted that, in this embodiment, one image capture sensor may be used to alternately capture structured light images and grayscale images, or multiple image capture sensors may be used to capture structured light images and grayscale images respectively. In addition, when a single image capture sensor captures images alternately, it is not limited to one-by-one alternation; for example, two frames of structured light images may alternate with one frame of grayscale image, and this embodiment is not limited in this respect.
Step 102, when the image data comprises a target type image, acquiring a reference image corresponding to the target type image.
In particular, in practical applications, the target type image may be one or more of the image types captured by the image sensor. In a preferred implementation of this embodiment, of the two types of images collected by the image sensor of an infrared camera, one type undergoes subsequent image processing that creates a distinguishing feature relative to the other type, so that both types can be identified with only a single image marking action.
Preferably, the target type image of this embodiment may be a structured light image; in practical applications, the structured light image may be a speckle image, i.e., an image with fixed characteristic spots. Of course, in other implementations the target type image may also be a grayscale image, as determined by practical use requirements.
In some implementations of this embodiment, the step of acquiring the reference image corresponding to the target type image includes: obtaining a calibrated image of the same type as the target type image, and erasing the target pixel area of the calibrated image to obtain a reference image whose target pixel area is marked as a null value.
First, it should be noted that the target pixel region of this embodiment is preferably the top-row pixel region of the image. In practical application, the IR sensor of the infrared camera is a photosensitive element and the incident light is never perfectly uniform; that is, noise is present during image acquisition, and because of this noise a naturally captured image will practically never have a first row whose exposure pixels are all 0. An all-zero first row therefore serves as a reliable mark.
Fig. 2 is a schematic diagram of the first-row pixel region of an image according to this embodiment. As shown in fig. 2(a), the first-row pixel region of this embodiment may be the complete first row of pixels; as shown in fig. 2(b), it may also be a partial region within the complete first row.
In addition, in one preferred implementation of this embodiment, a laboratory calibration image of the same type as the target type image is pre-stored in the camera. The target pixel area of this calibrated image is then erased, marking the pixel values of the target pixel area as null, that is, producing a run of consecutive zero-valued pixels. Because the reference image carries this specific mark information after the marking processing, the parallax image obtained by performing parallax matching between the reference image and the unmarked target type image correspondingly carries the mark information as well.
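As a minimal sketch of the erasing variant above — the array shape, dtype, and the `make_marked_reference` helper are illustrative assumptions, not details from the patent — a reference image with a null-marked first-row region could be produced as follows:

```python
import numpy as np

def make_marked_reference(calibrated: np.ndarray, mark_cols: slice) -> np.ndarray:
    """Erase the target pixel region (here: part of the first row) of a
    calibrated image, marking it as a null value (a run of zeros)."""
    reference = calibrated.copy()
    reference[0, mark_cols] = 0
    return reference

# Synthetic stand-in for a lab-calibrated speckle image (values kept nonzero
# so the zero mark is unambiguous).
calibrated = np.random.default_rng(0).integers(1, 256, size=(4, 8), dtype=np.uint16)
reference = make_marked_reference(calibrated, slice(0, 4))
print(reference[0, :4])  # → [0 0 0 0]
```

All pixels outside the marked region are left untouched, so the reference remains usable for ordinary parallax matching.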
In other implementations of this embodiment, the step of obtaining the reference image corresponding to the target type image includes: obtaining a calibrated image of the same type as the target type image, and marking a super-large value in the target pixel area of the calibrated image to obtain the reference image.
Different from the previous implementation, which erases pixels, this implementation stores a super-large value in the target pixel area of the calibrated image, where a super-large value is understood as a pixel value larger than a preset threshold.
It should also be understood that, in this embodiment, the reference image may be obtained by completing the marking process at the laboratory stage and storing the marked image in the camera in advance, so that during actual image processing the camera can call the marked reference image directly from memory.
And 103, performing parallax matching processing on the target type image and the reference image in the image data to obtain marked image data.
Specifically, parallax matching in this embodiment may be understood as applying an arithmetic operation between the target type image and the corresponding reference image; the result of this operation is a parallax image, and the marked image data includes the parallax image. In this embodiment, the target image area of the parallax image in the marked image data carries the mark information used by the terminal for image type identification.
In some implementations of this embodiment, continuing from the implementation in which the target pixel region of the reference image is marked with a super-large value, the step of performing parallax matching processing on the target type image in the image data and the reference image to obtain marked image data includes: performing a subtraction operation between the target type image in the image data and the target pixel area of the reference image to obtain marked image data.
Accordingly, in this implementation, the target pixel region of the parallax image corresponding to the target type image in the marked image data is marked as null.
Fig. 3 is a schematic view of the disparity matching provided by this embodiment. Here the target pixel region is a partial pixel region of the first row; in the drawing, a is the mark information of the reference image, b is the mark information of the parallax image, and the operation between the target type image and the reference image is subtraction.
It should also be noted that in other implementations of this embodiment, the reference image is simply a calibrated image of the same type as the target image; that is, the reference image is not subjected to the marking process. Accordingly, the specific ways of performing parallax matching processing on the target type image and the reference image in the image data to obtain marked image data include, but are not limited to, the following two:
In a first mode, the target image area of the target type image in the image data is marked to obtain a marked target type image; parallax matching processing is then performed on the marked target type image and the reference image to obtain marked image data.
In a second mode, parallax matching processing is performed on the target type image in the image data and the reference image to obtain a parallax image; the target image area of the parallax image is then marked to obtain marked image data.
Specifically, to obtain marked image data after parallax matching, this embodiment may also perform parallax matching using a marked target type image and an unmarked reference image; the resulting parallax image likewise carries the specific mark. Alternatively, neither the target type image nor the reference image is marked, and the target image area of the parallax image itself is marked directly; a parallax image with the specific mark is obtained in the same way.
And step 104, sending the image data to be sent to a terminal.
Specifically, in practical application, this embodiment marks one of the two types of images acquired by the image acquisition sensor while leaving the non-target type image unmarked; that is, during image transmission, the transmitted images include both marked and unmarked images. After receiving the images sent by the camera, the terminal identifies images without the corresponding mark as the other type, based on the agreed image marking convention. It should be noted that, in a preferred implementation of this embodiment, data transmission between the camera and the terminal may be implemented over a Mobile Industry Processor Interface (MIPI). It should also be understood that although the image acquisition sensor may acquire images in a single-camera single-frame manner or a two-camera synchronous manner, in practical applications the acquired images are output frame by frame.
In some implementations of this embodiment, the image processing method further includes: acquiring the image data output sequence of the current image interaction process, and transmitting the image data output sequence to the terminal.
Specifically, in practical applications, the image interaction process between the camera and the terminal is a continuous process in which hundreds of frames may be transmitted, usually alternating according to a predetermined rule. A typical image data output sequence of the camera may be ABABAB…, meaning the two types of images are output alternately one by one; another typical output sequence may be AABAABAAB…, i.e., two frames of type A followed by one frame of type B.
Correspondingly, the first embodiment of the present application further provides an image recognition method applied to a terminal; the terminal may preferably be a door lock. It should be understood that the camera may be a component of the terminal or an external device independent of the terminal, and the two may communicate in a wired or wireless manner, which is not limited herein.
As shown in fig. 4, which is a basic flowchart of the image recognition method provided in this embodiment, the image recognition method includes the following steps:
step 401, receiving image data sent by a camera;
step 402, acquiring characteristic information of a target pixel area of image data;
step 403, comparing consistency of the characteristic information based on the marking information;
and step 404, identifying the type of the current image data according to the comparison result.
In the present embodiment, the image data includes the following types of images: a structured light image and/or a grayscale image, with the target pixel region of the target type image carrying marking information. After receiving the current frame of image data, the terminal extracts features from its target pixel region and compares this feature information for consistency against the mark information of the parallax image obtained by the camera through parallax matching of the target type image. If the comparison is consistent, the current frame is the target type image; if it is inconsistent, the current frame is the other type. Thus, once the camera has marked one type of image, the terminal application platform can quickly and efficiently distinguish whether a received image is a structured light image or a grayscale image.
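A minimal sketch of this terminal-side check — the function name, the "all zeros" mark convention, and the array shapes are assumptions for illustration, not details fixed by the patent:

```python
import numpy as np

def identify_frame_type(frame: np.ndarray, mark_cols: slice) -> str:
    """Extract the feature information (pixel values of the target pixel
    region) and compare it for consistency with the agreed mark (all zeros).
    Consistent -> target type; inconsistent -> the other type."""
    feature = frame[0, mark_cols]
    return "structured_light" if bool(np.all(feature == 0)) else "grayscale"

marked = np.full((4, 8), 7, dtype=np.uint16)
marked[0, :4] = 0                                # marked disparity frame
plain = np.full((4, 8), 7, dtype=np.uint16)      # unmarked frame; sensor noise keeps its first row nonzero
print(identify_frame_type(marked, slice(0, 4)))  # → structured_light
print(identify_frame_type(plain, slice(0, 4)))   # → grayscale
```

The comparison touches only a handful of pixels per frame, which is the source of the efficiency gain over whole-image computation.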
Further, in some implementations of this embodiment, the image recognition method further includes: receiving the image data output sequence of the current image interaction process sent by the camera, and determining the image data receiving order according to that output sequence. Correspondingly, after the step of identifying the type of the current image data according to the comparison result, the method further includes: determining the types of subsequently received image data according to the type of the current image data and the image data receiving order.
Specifically, image transmission between the camera and the terminal is usually alternated according to a predetermined rule. Once the terminal has received and identified the current frame, it can directly determine the types of subsequently received image data by combining the current frame type with the corresponding image data receiving order. For example, if the receiving order is ABABAB… and the current frame type is A, the next frame type is B. In the subsequent image interaction, the camera then no longer needs to mark every frame, and the terminal no longer needs to perform feature extraction and comparison on each received image, which further improves the efficiency of image processing and recognition.
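The sequence-based shortcut described above can be sketched as follows; the pattern-string encoding ('A'/'B') and the `subsequent_types` helper are hypothetical illustrations, not part of the patent:

```python
from itertools import cycle, islice

def subsequent_types(pattern: str, current_type: str, current_index: int, n: int) -> list:
    """Given the agreed output pattern (e.g. 'AB' or 'AAB'), the identified
    type of the current frame, and its position in the stream, predict the
    next n frame types without further marking or feature comparison."""
    assert pattern[current_index % len(pattern)] == current_type
    start = (current_index + 1) % len(pattern)
    return list(islice(cycle(pattern), start, start + n))

# Pattern AAB: two frames of type A, then one frame of type B.
print(subsequent_types("AAB", "A", 0, 5))  # → ['A', 'B', 'A', 'A', 'B']
print(subsequent_types("AB", "B", 1, 3))   # → ['A', 'B', 'A']
```

After a single marked frame anchors the terminal's position in the pattern, every later frame type follows from the pattern alone.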
Based on the technical scheme of the embodiments of the application, before transmitting image data, the camera performs parallax matching between a target type image acquired by the image acquisition sensor and a reference image of the same image type to obtain image data with a marked target pixel region; after receiving the image data, the terminal extracts the characteristic information of the target pixel region, compares it for consistency against the mark information of the marked image data, and identifies the type of the currently received image data from the comparison result. By identifying the image type through image marking and characteristic comparison, the scheme effectively reduces the computation required for image type identification and improves its efficiency.
Fig. 5 shows a refined image interaction method provided in a second embodiment of the present application, applied to an image interaction system including an infrared camera and a terminal. The image interaction method includes:
step 501, the infrared camera acquires image data acquired by the image acquisition sensor.
Specifically, the image data of the present embodiment includes the following types of images: a structured light image and/or a grayscale image. These may be acquired alternately by one infrared camera or separately by multiple infrared cameras.
Step 502, when the image data comprises a target type image, the infrared camera acquires a calibrated image of the same type as the target type image, and marks a super-large value in the target pixel area of the calibrated image to obtain a reference image.
The target type image of the present embodiment may be any one of a structured light image and a grayscale image.
In step 503, the infrared camera performs subtraction operation on the target type image in the image data and the target pixel area of the reference image to obtain image data with target mark information.
Specifically, the target pixel area of the parallax image corresponding to the target type image in the image data marked by the infrared camera is marked as null. In this embodiment, the operation between the target type image and the reference image is subtraction: the parallax image is obtained by subtracting the pixel values of the reference image from those of the target type image. When the target pixel region of the reference image stores a super-large value, the subtraction yields a large negative value, which is filtered to zero, so the target pixel region of the resulting parallax image forms a mark of consecutive zero pixel values.
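The subtraction and zero-filtering described in this step can be sketched as follows; the 60000 "super-large" constant, array shapes, and pixel ranges are illustrative assumptions, not values from the patent:

```python
import numpy as np

SUPER_LARGE = 60000  # assumed to exceed any real pixel value of the sensor

def disparity_match(target: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Subtract the reference image from the target type image; negative
    results (pixel minus super-large value) are filtered to zero, so the
    target pixel region of the disparity image becomes a run of zeros."""
    diff = target.astype(np.int32) - reference.astype(np.int32)
    return np.clip(diff, 0, None).astype(np.uint16)

rng = np.random.default_rng(1)
target = rng.integers(100, 256, size=(4, 8), dtype=np.uint16)   # target type image
reference = rng.integers(0, 50, size=(4, 8), dtype=np.uint16)   # stand-in calibrated image
reference[0, :4] = SUPER_LARGE  # super-large mark in the target pixel region
disparity = disparity_match(target, reference)
print(disparity[0, :4])  # → [0 0 0 0]
```

The marking thus falls out of the normal disparity computation for free: no extra per-pixel pass is needed on the camera side.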
And step 504, the infrared camera sends the image data to be sent and the whole image data output sequence of the current image interaction process to the terminal.
Specifically, over the whole image interaction process, a typical image data output sequence of the infrared camera may be ABABAB…, meaning the two types of images are output alternately one by one; another typical output sequence may be AABAABAAB…, i.e., two frames of type A followed by one frame of type B.
And 505, the terminal acquires the characteristic information of the target pixel area of the received current frame image data.
Specifically, the infrared camera transmits images to the terminal frame by frame; the target pixel region is usually the first-row pixel region of the image, and the characteristic information of this embodiment is the pixel value.
Step 506, the terminal compares the consistency of the characteristic information based on the target mark information.
And step 507, the terminal identifies the type of the current frame image data according to the comparison result.
In this embodiment, the terminal compares the feature information of the target pixel region with the flag information of the parallax image obtained by performing parallax matching on the target type image by the infrared camera, and if the comparison is consistent, it indicates that the current frame image data is the target type image, and if the comparison is inconsistent, it indicates that the current frame image is the other type image.
Step 508, the terminal determines an image data receiving sequence according to the image data output sequence of the infrared camera, and determines the types of subsequently received image data according to the type of the current frame image data and the image data receiving sequence.
Specifically, image interaction between the infrared camera and the terminal is usually carried out alternately according to a predetermined rule. Once the terminal has received and identified the current frame, it can directly distinguish the types of subsequently received image data by combining the current frame type with the corresponding image data receiving sequence. For the rest of the image interaction process, the infrared camera therefore does not need to mark every frame, and the terminal does not need to perform feature extraction and comparison on each received image, which further improves the efficiency of image processing and identification.
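The sequence-based shortcut above can be sketched as follows, assuming the output sequence repeats periodically (e.g. "AB" or "AAB"); the function name is hypothetical:

```python
from itertools import cycle, islice

def types_after(output_sequence: str, current_type: str, n: int) -> list:
    """Given a repeating output sequence such as "AB" or "AAB" and the type
    of the frame just identified, predict the types of the next n frames."""
    # Align with the identified frame's position in the period; if the type
    # occurs more than once per period this picks its first slot, which is
    # a simplifying assumption of this sketch.
    start = output_sequence.index(current_type) + 1
    return list(islice(cycle(output_sequence), start, start + n))
```

For example, with sequence "AB" and a current frame identified as type A, the next frames are predicted as B, A, B, … without any further marking or feature comparison.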
It should be understood that the numbering of the steps in this embodiment does not imply an execution order; the execution order of the steps is determined by their functions and internal logic, and should not uniquely limit the implementation of the embodiments of the present application.
Fig. 6 shows an image processing apparatus according to a third embodiment of the present application. The image processing apparatus may be used to implement the image processing method in the foregoing embodiments, and mainly includes:
a first obtaining module 601, configured to obtain image data acquired by an image acquisition sensor; wherein the image data comprises the following types of images: a structured light image and/or a grayscale image;
a second obtaining module 602, configured to obtain a reference image corresponding to the target type image when the image data includes the target type image;
the processing module 603 is configured to perform parallax matching processing on the target type image and the reference image in the image data to obtain marked image data; wherein the target pixel area of the marked image data is provided with marking information, and the marking information is used by the terminal to identify the image type;
a sending module 604, configured to send image data to be sent to a terminal.
In some implementations of this embodiment, the target pixel region is the first-row pixel region of the image.
In some implementations of this embodiment, the second obtaining module is specifically configured to: obtain a calibrated image of the same type as the target type image; and erase the target pixel area of the calibrated image to obtain a reference image whose target pixel area is marked as a null value.
In other implementations of this embodiment, the second obtaining module is specifically configured to: obtain a calibrated image of the same type as the target type image; and write an extremely large value into the target pixel area of the calibrated image to obtain the reference image.
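The two reference-image constructions handled by the second obtaining module can be sketched as follows; NumPy, the first-row target pixel area, and the specific sentinel value are illustrative assumptions rather than the patent's implementation:

```python
import numpy as np

SENTINEL = np.iinfo(np.uint16).max  # an "extremely large" pixel value (65535)

def make_reference_null(calibrated: np.ndarray) -> np.ndarray:
    """Erase the target pixel area (here: the first row) to a null value."""
    ref = calibrated.copy()
    ref[0, :] = 0  # target pixel area marked as null
    return ref

def make_reference_sentinel(calibrated: np.ndarray) -> np.ndarray:
    """Write an extremely large value into the target pixel area, so that
    a later subtraction yields large negatives there."""
    ref = calibrated.copy()
    ref[0, :] = SENTINEL
    return ref
```

Either variant leaves every pixel outside the target area unchanged, so ordinary parallax matching is unaffected.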
Further, in some implementations of this embodiment, the processing module is specifically configured to: perform a subtraction operation between the target type image in the image data and the target pixel area of the reference image to obtain marked image data, in which the target pixel area of the parallax image corresponding to the target type image is marked as a null value.
In some implementations of this embodiment, the reference image is a calibrated image of the same type as the target type image. Correspondingly, the processing module is specifically configured to: mark the target pixel area of the target type image in the image data to obtain a marked target type image, and perform parallax matching processing on the marked target type image and the reference image to obtain marked image data. Alternatively, the processing module is specifically configured to: perform parallax matching processing on the target type image and the reference image in the image data to obtain a parallax image, and mark the target pixel area of the parallax image to obtain marked image data.
In some implementations of this embodiment, the first obtaining module of this embodiment is further configured to: and acquiring the image data output sequence of the current image interaction process. Correspondingly, the sending module is further configured to: and transmitting the image data output sequence to the terminal.
It should be noted that the image processing method of the first embodiment can be implemented based on the image processing apparatus provided in this embodiment. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working process of the image processing apparatus described in this embodiment may refer to the corresponding process in the foregoing method embodiment and is not repeated here.
Fig. 7 shows an image recognition apparatus according to a third embodiment of the present application. The image recognition apparatus may be used to implement the image recognition method in the foregoing embodiments, and mainly includes:
a receiving module 701, configured to receive image data sent by a camera; wherein the image data comprises the following types of images: a structured light image and/or a grayscale image, a target pixel region of the target type image having marking information;
a third obtaining module 702, configured to obtain feature information of a target pixel region of the image data;
a comparison module 703, configured to perform a consistency comparison on the feature information based on the marking information;
an identifying module 704, configured to identify the type of the current image data according to the comparison result.
In some implementations of this embodiment, the image recognition apparatus further includes a determining module. The receiving module is further configured to: receive the image data output sequence of the current image interaction process sent by the camera. The determining module is configured to: determine an image data receiving sequence according to the image data output sequence; and determine the types of subsequently received image data according to the type of the current image data and the image data receiving sequence.
It should be noted that the image recognition method of the first embodiment can be implemented based on the image recognition apparatus provided in this embodiment. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working process of the image recognition apparatus described in this embodiment may refer to the corresponding process in the foregoing method embodiment and is not repeated here.
Based on the technical solution of the embodiments of the present application, before transmitting image data, the camera performs parallax matching between the target type image acquired by the image acquisition sensor and a reference image of the same image type, obtaining image data whose target pixel area is marked. After receiving the image data, the terminal acquires the feature information of the target pixel area, compares it for consistency with the marking information, and identifies the type of the currently received image data according to the comparison result. By identifying image types through marking and feature comparison, the scheme effectively reduces the computation required for image type identification and improves its efficiency.
Fig. 8 shows an electronic device according to a fourth embodiment of the present application. The electronic device may be configured to implement the image processing method in the foregoing embodiments, and mainly includes:
a memory 801, a processor 802, and a computer program 803 stored on the memory 801 and executable on the processor 802, the memory 801 and the processor 802 being communicatively coupled. The processor 802, when executing the computer program 803, implements the image processing method in the foregoing embodiments. Wherein the number of processors may be one or more.
The memory 801 may be a high-speed Random Access Memory (RAM) or a non-volatile memory, such as disk storage. The memory 801 is used to store executable program code, and the processor 802 is coupled to the memory 801.
Further, an embodiment of the present application also provides a computer-readable storage medium, where the computer-readable storage medium may be provided in the electronic device in the foregoing embodiments, and the computer-readable storage medium may be the memory in the foregoing embodiment shown in fig. 8.
The computer-readable storage medium stores a computer program which, when executed by a processor, implements the image processing method in the foregoing embodiments. Further, the computer-readable storage medium may be any of various media that can store program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a RAM, a magnetic disk, or an optical disk.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of modules is merely a division of logical functions, and an actual implementation may have another division, for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
Modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a readable storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned readable storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
It should be noted that, for simplicity, the foregoing method embodiments are described as a series of combinations of acts, but those skilled in the art should understand that the present application is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also understand that the embodiments described in the specification are preferred embodiments, and that the acts and modules involved are not necessarily required by the present application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The image processing method, the image recognition method, and the related apparatus provided by the present application have been described above. Those skilled in the art will recognize that, based on the ideas of the embodiments of the present application, there may be variations in the specific implementations and the scope of application.

Claims (13)

1. An image processing method applied to a camera, comprising:
acquiring image data acquired by an image acquisition sensor; wherein the image data comprises the following types of images: a structured light image and/or a grayscale image;
when the image data comprises a target type image, acquiring a reference image corresponding to the target type image;
performing parallax matching processing on the target type image and the reference image in the image data to obtain marked image data; wherein a target pixel area of the marked image data is provided with marking information, and the marking information is used by the terminal to identify the image type;
and sending the image data to be sent to the terminal.
2. The image processing method according to claim 1, wherein the target pixel region is a first-row pixel region of an image.
3. The image processing method according to claim 1, wherein the step of acquiring a reference image corresponding to the target type image comprises:
obtaining a calibrated image with the same type as the target type image;
and erasing the target pixel area of the calibrated image to obtain a reference image whose target pixel area is marked as a null value.
4. The image processing method according to claim 1, wherein the step of acquiring a reference image corresponding to the target type image comprises:
obtaining a calibrated image with the same type as the target type image;
and writing an extremely large value into the target pixel area of the calibrated image to obtain the reference image.
5. The image processing method according to claim 4, wherein the step of performing disparity matching processing on the target type image and the reference image in the image data to obtain labeled image data comprises:
performing subtraction operation on the target type image in the image data and the target pixel area of the reference image to obtain marked image data; wherein the target pixel region of the disparity image corresponding to the target type image in the marked image data is marked as null.
6. The image processing method according to claim 1, wherein the reference image is a calibrated image of the same type as the target type image;
the step of performing disparity matching processing on the target type image and the reference image in the image data to obtain marked image data includes:
marking the target pixel area of the target type image in the image data to obtain a marked target type image;
performing parallax matching processing on the marked target type image and the reference image to obtain marked image data;
or, performing parallax matching processing on the target type image and the reference image in the image data to obtain a parallax image;
and marking the target pixel area of the parallax image to obtain marked image data.
7. The image processing method according to any one of claims 1 to 6, further comprising:
acquiring an image data output sequence of a current image interaction process;
and transmitting the image data output sequence to the terminal.
8. An image recognition method is applied to a terminal, and is characterized by comprising the following steps:
receiving image data sent by a camera; wherein the image data comprises the following types of images: a structured light image and/or a grayscale image, a target pixel region of the target type image having marking information;
acquiring characteristic information of the target pixel region of the image data;
performing consistency comparison on the characteristic information based on the marking information;
and identifying the type of the current image data according to the comparison result.
9. The image recognition method according to claim 8, further comprising:
receiving an image data output sequence of a current image interaction process sent by the camera;
determining an image data receiving sequence according to the image data output sequence;
after the step of identifying the type of the current image data according to the comparison result, the method further comprises the following steps:
and determining the type of the image data which is continuously received according to the type of the current image data and the receiving sequence of the image data.
10. An image processing apparatus applied to a camera, comprising:
the first acquisition module is used for acquiring image data acquired by the image acquisition sensor; wherein the image data comprises the following types of images: a structured light image and/or a grayscale image;
a second obtaining module, configured to obtain a reference image corresponding to a target type image when the image data includes the target type image;
the processing module is used for performing parallax matching processing on the target type image and the reference image in the image data to obtain marked image data; wherein a target pixel area of the marked image data is provided with marking information, and the marking information is used by the terminal to identify the image type;
and the sending module is used for sending the image data to be sent to the terminal.
11. An image recognition device applied to a terminal, comprising:
the receiving module is used for receiving image data sent by the camera; wherein the image data comprises the following types of images: a structured light image and/or a grayscale image, a target pixel region of the target type image having marking information;
a third obtaining module, configured to obtain feature information of the target pixel region of the image data;
the comparison module is used for carrying out consistency comparison on the characteristic information based on the marking information;
and the identification module is used for identifying the type of the current image data according to the comparison result.
12. An electronic device comprising a memory and a processor, wherein:
the processor is operable to execute a first computer program or a second computer program stored on the memory;
the processor, when executing the first computer program, implements the steps of the method of any one of claims 1 to 7, and, when executing the second computer program, implements the steps of the method of claim 8 or 9.
13. A computer-readable storage medium on which a first computer program or a second computer program is stored, wherein the first computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 7, and the second computer program, when executed by a processor, implements the steps of the method of claim 8 or 9.
CN202111405835.1A 2021-11-24 2021-11-24 Image processing method, image identification method and related device Pending CN114170432A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111405835.1A CN114170432A (en) 2021-11-24 2021-11-24 Image processing method, image identification method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111405835.1A CN114170432A (en) 2021-11-24 2021-11-24 Image processing method, image identification method and related device

Publications (1)

Publication Number Publication Date
CN114170432A true CN114170432A (en) 2022-03-11

Family

ID=80480707

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111405835.1A Pending CN114170432A (en) 2021-11-24 2021-11-24 Image processing method, image identification method and related device

Country Status (1)

Country Link
CN (1) CN114170432A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101339607A (en) * 2008-08-15 2009-01-07 北京中星微电子有限公司 Human face recognition method and system, human face recognition model training method and system
CN101751663A (en) * 2008-12-08 2010-06-23 财团法人工业技术研究院 Image segmentation marking method taking area features of pixel as base and system thereof
CN103536305A (en) * 2012-07-11 2014-01-29 通用电气公司 Systems and methods for performing image type recognition
CN104778687A (en) * 2015-03-26 2015-07-15 北京奇虎科技有限公司 Image matching method and device
CN110399763A (en) * 2018-04-24 2019-11-01 深圳奥比中光科技有限公司 Face identification method and system
CN110648375A (en) * 2018-06-26 2020-01-03 微软技术许可有限责任公司 Image colorization based on reference information
CN111476729A (en) * 2020-03-31 2020-07-31 北京三快在线科技有限公司 Target identification method and device
CN113111872A (en) * 2021-06-16 2021-07-13 智道网联科技(北京)有限公司 Training method and device of image recognition model, electronic equipment and storage medium
CN113256570A (en) * 2021-05-10 2021-08-13 郑州轻工业大学 Visual information processing method, device, equipment and medium based on artificial intelligence

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114781846A (en) * 2022-04-13 2022-07-22 国家电网有限公司技术学院分公司 Evaluation method and system for overhead transmission line defect identification training
CN115131741A (en) * 2022-08-30 2022-09-30 江苏时代新能源科技有限公司 Method and device for detecting code carving quality, computer equipment and storage medium
CN115131741B (en) * 2022-08-30 2023-09-22 江苏时代新能源科技有限公司 Method, device, computer equipment and storage medium for detecting quality of code

Similar Documents

Publication Publication Date Title
CN111340864B (en) Three-dimensional scene fusion method and device based on monocular estimation
CN110705405B (en) Target labeling method and device
US10740431B2 (en) Apparatus and method of five dimensional (5D) video stabilization with camera and gyroscope fusion
CN110580428A (en) image processing method, image processing device, computer-readable storage medium and electronic equipment
CN109727275B (en) Object detection method, device, system and computer readable storage medium
US20200043198A1 (en) Camera calibration method and apparatus, electronic device, and computer-readable storage medium
CN109327626B (en) Image acquisition method and device, electronic equipment and computer readable storage medium
CN114170432A (en) Image processing method, image identification method and related device
CN113822942B (en) Method for measuring object size by monocular camera based on two-dimensional code
CN110121031B (en) Image acquisition method and device, electronic equipment and computer readable storage medium
CN112819722A (en) Infrared image face exposure method, device, equipment and storage medium
CN113673584A (en) Image detection method and related device
CN113705426A (en) Face checking method, device, server and readable storage medium
CN109068060B (en) Image processing method and device, terminal device and computer readable storage medium
CN112802081A (en) Depth detection method and device, electronic equipment and storage medium
CN112364693B (en) Binocular vision-based obstacle recognition method, device, equipment and storage medium
CN109120846B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN112750157B (en) Depth image generation method and device
CN107633498B (en) Image dark state enhancement method and device and electronic equipment
EP4332910A1 (en) Behavior detection method, electronic device, and computer readable storage medium
CN112819953B (en) Three-dimensional reconstruction method, network model training method, device and electronic equipment
CN113538337B (en) Detection method, detection device and computer readable storage medium
CN113240723A (en) Monocular depth estimation method and device and depth evaluation equipment
CN115484369A (en) Video frame delay time determination method, device, medium, and remote driving system
CN110213457B (en) Image transmission method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination