
CN113780286A - Object recognition method and device, storage medium and electronic device - Google Patents


Info

Publication number
CN113780286A
CN113780286A
Authority
CN
China
Prior art keywords
image
determining
target
transformation
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111139080.5A
Other languages
Chinese (zh)
Inventor
敦婧瑜
薛佳乐
张湾湾
李轶锟
江歆霆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202111139080.5A
Publication of CN113780286A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides an object recognition method and apparatus, a storage medium, and an electronic device. The method includes: inputting an image to be processed into a first target model and determining M transformation parameters of the image to be processed output by the first target model, where M is a natural number greater than 1; transforming the image to be processed using the M transformation parameters to obtain a target image; and identifying a target object in the target image. The invention solves the object recognition problem in the related art and achieves fast, easy-to-use image correction.

Description

Object recognition method and device, storage medium and electronic device
Technical Field
The embodiments of the invention relate to the field of image processing, and in particular to an object recognition method and apparatus, a storage medium, and an electronic device.
Background
As an important information carrier of a vehicle, the license plate plays an important role in traffic scenes, traffic-management scenes, and many other application scenarios. In a standard application scenario, the license plate generally has good imaging quality and an ideal shooting angle. In some cases, however, constrained by installation conditions or the scene, the license plate may be rotated, beveled, or otherwise severely deformed. Such deformation greatly reduces the license plate recognition rate. Therefore, before recognition the license plate needs to be corrected to obtain a better recognition result, so that services based on license plate information can subsequently be completed.
In the prior art, however, image correction is based on traditional image-processing methods: it places high demands on image quality and license plate detection accuracy, is easily disturbed, is difficult to optimize, cannot be processed end to end, and its time consumption grows with the processing size. Moreover, only rotation correction can be completed; bevel correction cannot. Rough character-position information must first be obtained by running recognition once, and recognition must be performed again after correction, which further increases time consumption.
In view of the above technical problems, no effective solution has been proposed in the related art.
Disclosure of Invention
The embodiment of the invention provides an object identification method and device, a storage medium and an electronic device, which are used for at least solving the problem of image correction in the related art.
According to an embodiment of the present invention, there is provided an object recognition method including: inputting an image to be processed into a first target model, and determining M transformation parameters of the image to be processed output by the first target model, wherein M is a natural number greater than 1; transforming the image to be processed by using the M transformation parameters to obtain a target image; a target object in the target image is identified.
According to another embodiment of the present invention, there is provided an object recognition apparatus including: a first input module, configured to input an image to be processed into a first target model, and determine M transformation parameters of the image to be processed output by the first target model, where M is a natural number greater than 1; the first transformation module is used for transforming the image to be processed by utilizing the M transformation parameters to obtain a target image; and the first identification module is used for identifying the target object in the target image.
In an exemplary embodiment, the first input module includes: a first input unit, configured to input the image to be processed into N convolutional layers in the first target model and determine image feature information of the image to be processed output by the N convolutional layers, where N is a natural number greater than 1; and a second input unit, configured to input the image feature information into a full connection layer in the first target model and determine the M transformation parameters.
In an exemplary embodiment, the apparatus further includes: a first determining module, configured to determine a first rotation matrix using the determined rotation angle of the sample image; a second determining module, configured to determine a first scaling parameter based on the first rotation matrix; a third determining module, configured to determine a first transformation matrix based on the first rotation matrix and the first scaling parameter, where the first transformation matrix is used to control rotation and scaling of the sample image; a fourth determining module, configured to determine a tilt correction matrix using the determined tilt angle of the sample image; a fifth determining module, configured to determine a second scaling parameter based on the tilt correction matrix, where the second scaling parameter is used to scale a value of an x-axis of the sample image; a sixth determining module, configured to determine a second transformation matrix based on the first transformation matrix, the tilt correction matrix, and the second scaling parameter, where the second transformation matrix is used to control beveling, scaling, and rotation of the sample image.
In an exemplary embodiment, the apparatus further includes: a first calculation module, configured to calculate the first scaling parameter and the second scaling parameter according to the rotation angle and the tilt angle after determining a second transformation matrix based on the first transformation matrix, the tilt correction matrix, and the second scaling parameter; a seventh determining module, configured to determine a third transformation matrix based on the first scaling parameter, the second scaling parameter, and the second transformation matrix.
In an exemplary embodiment, the apparatus further includes: an eighth determining module, configured to, after the third transformation matrix is determined based on the first scaling parameter, the second scaling parameter, and the second transformation matrix, translate the sample image with its center point as the origin, translate it back to the original coordinate system, and determine the translation-transformed coordinates of the sample image.
In an exemplary embodiment, the apparatus further includes: a ninth determining module, configured to determine a translation-transformed coordinate of the sample image, and then determine the target transformation matrix based on a target translation parameter of the sample image, the translation-transformed coordinate, and a third transformation matrix.
In an exemplary embodiment, the apparatus further includes: a first constraint module, configured to constrain the numerical range of the M transformation parameters with a preset activation function after the image to be processed is input into the first target model and the M transformation parameters of the image to be processed output by the first target model are determined.
In an exemplary embodiment, the first transforming module includes: a second determining unit, configured to determine a target transformation matrix of the image to be processed by using the M transformation parameters; and the first transformation unit is used for transforming the image to be processed based on the target transformation matrix to obtain the target image.
According to a further embodiment of the present invention, there is also provided a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the invention, the image to be processed is input into the first target model and M transformation parameters of the image to be processed output by the first target model are determined, where M is a natural number greater than 1; the image to be processed is transformed using the M transformation parameters to obtain a target image; and a target object in the target image is identified. This achieves the purpose of quickly correcting the image, thereby solving the object recognition problem in the related art and providing fast, easy-to-use image correction.
Drawings
Fig. 1 is a block diagram of the hardware structure of a mobile terminal running an object recognition method according to an embodiment of the present invention;
FIG. 2 is a flow chart of an object identification method according to an embodiment of the invention;
FIG. 3 is an overall flow diagram of a target model according to an embodiment of the invention;
FIG. 4 is a schematic illustration of object recognition according to an embodiment of the present invention;
fig. 5 is a block diagram of a structure of an object recognition apparatus according to an embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings in conjunction with the embodiments.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided in the embodiments of the present application may be executed on a mobile terminal, a computer terminal, or a similar computing device. Taking execution on a mobile terminal as an example, fig. 1 is a block diagram of the hardware structure of a mobile terminal running an object recognition method according to an embodiment of the present invention. As shown in fig. 1, the mobile terminal may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 104 for storing data, and may further include a transmission device 106 for communication functions and an input/output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and does not limit the structure of the mobile terminal. For example, the mobile terminal may include more or fewer components than shown in fig. 1, or have a different configuration.
The memory 104 may be used for storing computer programs, for example, software programs and modules of application software, such as computer programs corresponding to the object recognition method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer programs stored in the memory 104, so as to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In the present embodiment, an object recognition method is provided. Fig. 2 is a flowchart of an object recognition method according to an embodiment of the present invention; as shown in fig. 2, the flow includes the following steps:
step S202, inputting an image to be processed into a first target model, and determining M transformation parameters of the image to be processed output by the first target model, wherein M is a natural number greater than 1;
step S204, transforming the image to be processed by utilizing M transformation parameters to obtain a target image;
in step S206, a target object in the target image is identified.
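The transformation in step S204 can be sketched as an inverse-mapping affine warp: for each pixel of the target image, the target transformation matrix maps the coordinate back into the source image and the nearest source pixel is sampled. This is a minimal illustration only; the 3x3 homogeneous-matrix convention and nearest-neighbor sampling are assumptions, not the patent's disclosed implementation.

```python
import numpy as np

def warp_affine(image, matrix):
    """Transform a grayscale `image` with a 3x3 homogeneous target
    transformation matrix using inverse mapping and nearest-neighbor
    sampling. Out-of-range samples are filled with 0."""
    h, w = image.shape
    inv = np.linalg.inv(matrix)          # target pixel -> source pixel
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            sx, sy, _ = inv @ np.array([x, y, 1.0])
            sx, sy = int(round(sx)), int(round(sy))
            if 0 <= sx < w and 0 <= sy < h:
                out[y, x] = image[sy, sx]
    return out

plate = np.arange(12.0).reshape(3, 4)    # toy "image to be processed"
identity = np.eye(3)
print(np.array_equal(warp_affine(plate, identity), plate))  # True
```

With the identity matrix the warp returns the input unchanged; a matrix with a translation or rotation component produces the corrected target image.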
This embodiment applies to, but is not limited to, object recognition scenarios, for example recognizing the correct license plate number from a license plate photo captured in the traffic field.
In this embodiment, the image to be processed includes, but is not limited to, an image acquired by a camera or a video camera, for example, a license plate image.
In this embodiment, for example, as shown in fig. 3, the first target model may include a plurality of convolutional layers serving as a feature extraction layer for extracting feature information of the license plate image; a full connection layer after the convolutional layers outputs M transformation parameters, for example 2 transformation parameters; a spatial transformation layer then transforms the image using a target transformation matrix built from the 2 learned transformation parameters (the initial values of the parameters are given by a formula rendered as an image in the original), and the transformed image is used as the input of a license plate recognition network for license plate recognition.
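The architecture just described (convolutional feature extraction followed by a full connection layer that regresses the transformation parameters) can be sketched as follows. This is a stand-in, not the patented model: the box-filter "convolution", the layer count, and the random fully connected weights are all assumptions, since the patent discloses neither the weights nor the exact layer shapes.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(image, n_layers=3):
    """Stand-in for the N convolutional layers: each 'layer' is a fixed
    3x3 average filter followed by ReLU. Purely illustrative."""
    x = image.astype(float)
    for _ in range(n_layers):
        x = (x[:-2, :-2] + x[1:-1, :-2] + x[2:, :-2] +
             x[:-2, 1:-1] + x[1:-1, 1:-1] + x[2:, 1:-1] +
             x[:-2, 2:] + x[1:-1, 2:] + x[2:, 2:]) / 9.0
        x = np.maximum(x, 0.0)          # ReLU
    return x.ravel()

def fully_connected(features, m=2):
    """Full connection layer regressing the M transformation parameters
    (here M = 2, e.g. a rotation angle and a tilt angle); random weights
    stand in for the trained ones."""
    w = rng.standard_normal((m, features.size)) / np.sqrt(features.size)
    return np.tanh(w @ features)        # bounded raw parameters

image = rng.random((32, 96))            # hypothetical license plate crop
params = fully_connected(extract_features(image))
print(params.shape)                     # (2,)
```

The number of outputs of the full connection layer equals the number of transformation parameters, matching the description above.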
The execution subject of the above steps may be a terminal, but is not limited thereto.
Through the above steps, the image to be processed is input into the first target model and M transformation parameters of the image to be processed output by the first target model are determined, where M is a natural number greater than 1; the image to be processed is transformed using the M transformation parameters to obtain a target image; and a target object in the target image is identified. This achieves the purpose of quickly correcting the image, thereby solving the object recognition problem in the related art and providing fast, easy-to-use image correction.
In one exemplary embodiment, inputting the image feature information into the full connection layer in the first target model and determining the M transformation parameters output by the full connection layer includes:
S1, determining a first rotation matrix using the determined rotation angle α of the sample image, where the first rotation matrix is given by a formula rendered as an image in the original;
S2, determining a first scaling parameter based on the first rotation matrix (the formulas are rendered as images in the original), where W denotes the width of the sample image and H denotes the height of the sample image;
S3, determining a first transformation matrix based on the first rotation matrix and the first scaling parameter (formula rendered as an image in the original), where the first transformation matrix is used to control the rotation and scaling of the sample image;
S4, determining a tilt correction matrix using the determined tilt angle β of the sample image (formula rendered as an image in the original);
S5, determining a second scaling parameter based on the tilt correction matrix (formula rendered as an image in the original), where β is less than 90 degrees and the second scaling parameter is used to scale the values on the x-axis of the sample image;
S6, determining a second transformation matrix based on the first transformation matrix, the tilt correction matrix, and the second scaling parameter, where the second transformation matrix is used to control the beveling, scaling, and rotation of the sample image; the second transformation matrix is given by a formula rendered as an image in the original.
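The matrix pipeline in steps S1-S6 can be illustrated under standard 2-D conventions. Because the patent's actual formulas appear only as images, the rotation form, the shear used for tilt correction, and the composition order below are all assumptions:

```python
import numpy as np

def rotation_matrix(alpha):
    # assumed standard 2-D rotation for the "first rotation matrix" (S1)
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[c, -s], [s, c]])

def tilt_correction_matrix(beta):
    # assumed x-shear undoing a tilt of beta (beta < 90 degrees) (S4)
    return np.array([[1.0, -np.tan(beta)], [0.0, 1.0]])

def second_transformation_matrix(alpha, beta, s1, s2):
    """Compose beveling, scaling and rotation as in steps S1-S6.
    s1 is the first (isotropic) scaling parameter; s2 scales only the
    x-axis. The composition order is an assumption."""
    first = (s1 * np.eye(2)) @ rotation_matrix(alpha)      # S3: rotate + scale
    x_scale = np.diag([s2, 1.0])                           # S5: x-axis scaling
    return x_scale @ tilt_correction_matrix(beta) @ first  # S6: compose
```

With zero angles and unit scaling parameters the composed matrix reduces to the identity, which is a quick sanity check on the composition.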
in an exemplary embodiment, after determining the second transformation matrix based on the first transformation matrix, the tilt correction matrix, and the second scaling parameter, the method further comprises:
calculating the first scaling parameter and the second scaling parameter according to the rotation angle and the tilt angle; the worked example, including a constant intermediate term and the formulas for both scaling parameters, is rendered as images in the original;
determining a third transformation matrix based on the first scaling parameter, the second scaling parameter, and the second transformation matrix (formula rendered as an image in the original).
in an exemplary embodiment, after determining the third transformation matrix based on the first scaling parameter, the second scaling parameter, and the second transformation matrix, the method further comprises:
S1, with the center point of the sample image as the origin, translating and transforming the sample image and then translating it back to the original coordinate system to determine the translation-transformed coordinates of the sample image (the example parameter values and the translation formulas are rendered as images in the original);
in one exemplary embodiment, after determining the translation transformed coordinates of the sample image, the method further comprises:
S1, determining the target transformation matrix based on the target translation parameter of the sample image, the translation-transformed coordinates, and the third transformation matrix (formula rendered as an image in the original).
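The center-origin translation and the final target matrix can be sketched in homogeneous coordinates: move the image center to the origin, apply the linear part (for example the third transformation matrix), translate back, and add the target translation. The helper name, the 3x3 form, and the placement of the target translation parameters (tx, ty) are assumptions, since the patent's formulas are rendered only as images:

```python
import numpy as np

def about_center(linear_2x2, width, height, tx=0.0, ty=0.0):
    """Build a 3x3 homogeneous target matrix that applies `linear_2x2`
    about the image center and then translates by (tx, ty)."""
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    to_origin = np.array([[1, 0, -cx], [0, 1, -cy], [0, 0, 1]], float)
    back = np.array([[1, 0, cx + tx], [0, 1, cy + ty], [0, 0, 1]], float)
    linear = np.eye(3)
    linear[:2, :2] = linear_2x2
    return back @ linear @ to_origin

# with no target translation, the center is a fixed point of the warp
M = about_center(np.array([[0.0, -1.0], [1.0, 0.0]]), 96, 32)
center = np.array([(96 - 1) / 2.0, (32 - 1) / 2.0, 1.0])
print(M @ center)  # maps the center to itself
```

This is why the translation back to the original coordinate system matters: without it, rotation would pivot about the image corner instead of the center.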
in one exemplary embodiment, inputting an image to be processed into a first target model, and determining M transformation parameters of the image to be processed output by the first target model, includes:
s1, inputting the image to be processed into N convolutional layers in the first target model, and determining the image characteristic information of the image to be processed output by the N convolutional layers, wherein N is a natural number greater than 1;
and S2, inputting the image characteristic information into a full connection layer in the first target model, and determining M transformation parameters output by the full connection layer.
In this embodiment, the output of the full connection layer is the transformation parameters; the number of outputs of the full connection layer equals the number of parameters.
In an exemplary embodiment, after inputting the image to be processed into the first target model and determining M transformation parameters of the image to be processed output by the first target model, the method further includes:
and S1, constraining the numerical range of the M transformation parameters by using a preset activation function.
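A bounded activation such as tanh is one plausible way to realize this constraint; the patent names neither the activation function nor the bound, so both choices below are assumptions:

```python
import numpy as np

def constrain_params(raw_params, bound=np.pi / 2):
    """Constrain the raw full connection layer outputs to a preset
    numerical range with an activation function. tanh scaled by `bound`
    is an assumption; the patent only says a 'preset activation
    function' is used to constrain the M transformation parameters."""
    return bound * np.tanh(np.asarray(raw_params, dtype=float))

print(constrain_params([0.0, 100.0, -100.0]))
```

Bounding the regressed angles keeps the spatial transformation layer numerically stable early in training, when the raw outputs can be arbitrarily large.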
In this embodiment, the spatial transform layer corrects the image to be processed as shown in fig. 4.
In an exemplary embodiment, after the image to be processed is corrected by using the feature information of the image to be processed in the spatial transformation layer in the first target model, the method further includes:
s1, inputting the target image into a second target model to identify the object in the target image.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, an object recognition apparatus is further provided, and the object recognition apparatus is used for implementing the foregoing embodiments and preferred embodiments, which have already been described and are not described again. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 5 is a block diagram of a structure of an object recognition apparatus according to an embodiment of the present invention, as shown in fig. 5, the apparatus including:
a first input module 52, configured to input an image to be processed into a first target model, and determine M transformation parameters of the image to be processed output by the first target model, where M is a natural number greater than 1;
a first transformation module 54, configured to transform the image to be processed by using the M transformation parameters to obtain a target image;
the first recognition module 56 is configured to recognize a target object in the target image.
In an exemplary embodiment, the first input module includes:
a first input unit, configured to input the image to be processed into N convolutional layers in the first target model, and determine image feature information of the image to be processed output by the N convolutional layers, where N is a natural number greater than 1;
and a second input unit, configured to input the image feature information into a full connection layer in the first target model and determine the M transformation parameters.
In an exemplary embodiment, the apparatus further includes:
a first determining module, configured to determine a first rotation matrix using the determined rotation angle of the sample image;
a second determining module, configured to determine a first scaling parameter based on the first rotation matrix;
a third determining module, configured to determine a first transformation matrix based on the first rotation matrix and the first scaling parameter, where the first transformation matrix is used to control rotation and scaling of the sample image;
a fourth determining module, configured to determine a tilt correction matrix using the determined tilt angle of the sample image;
a fifth determining module, configured to determine a second scaling parameter based on the tilt correction matrix, where the second scaling parameter is used to scale a value of an x-axis of the sample image;
a sixth determining module, configured to determine a second transformation matrix based on the first transformation matrix, the tilt correction matrix, and the second scaling parameter, where the second transformation matrix is used to control beveling, scaling, and rotation of the sample image.
In an exemplary embodiment, the apparatus further includes:
a first calculation module, configured to calculate the first scaling parameter and the second scaling parameter according to the rotation angle and the tilt angle after determining a second transformation matrix based on the first transformation matrix, the tilt correction matrix, and the second scaling parameter;
a seventh determining module, configured to determine a third transformation matrix based on the first scaling parameter, the second scaling parameter, and the second transformation matrix.
In an exemplary embodiment, the apparatus further includes:
an eighth determining module, configured to, after the third transformation matrix is determined based on the first scaling parameter, the second scaling parameter, and the second transformation matrix, translate the sample image with its center point as the origin, translate it back to the original coordinate system, and determine the translation-transformed coordinates of the sample image.
In an exemplary embodiment, the apparatus further includes:
a ninth determining module, configured to determine a translation-transformed coordinate of the sample image, and then determine the target transformation matrix based on a target translation parameter of the sample image, the translation-transformed coordinate, and a third transformation matrix.
In an exemplary embodiment, the apparatus further includes:
the first constraint module is used for inputting an image to be processed into a first target model, and after M transformation parameters of the image to be processed output by the first target model are determined, the numerical range of the M transformation parameters is constrained by using a preset activation function.
In an exemplary embodiment, the first transforming module includes:
a second determining unit, configured to determine a target transformation matrix of the image to be processed by using the M transformation parameters;
and the first transformation unit is used for transforming the image to be processed based on the target transformation matrix to obtain the target image.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Embodiments of the present invention also provide a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above-mentioned method embodiments when executed.
In the present embodiment, the above-described computer-readable storage medium may be configured to store a computer program for executing the above steps.
In an exemplary embodiment, the computer-readable storage medium may include, but is not limited to: various media capable of storing a computer program, such as a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
In an exemplary embodiment, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
In an exemplary embodiment, the processor may be configured to execute the above steps by a computer program.
For specific examples in this embodiment, reference may be made to the examples described in the above embodiments and exemplary embodiments, and details of this embodiment are not repeated herein.
It will be apparent to those skilled in the art that the various modules or steps of the invention described above may be implemented using a general purpose computing device, they may be centralized on a single computing device or distributed across a network of computing devices, and they may be implemented using program code executable by the computing devices, such that they may be stored in a memory device and executed by the computing device, and in some cases, the steps shown or described may be performed in an order different than that described herein, or they may be separately fabricated into various integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (11)

1. An object recognition method, comprising:
inputting an image to be processed into a first target model, and determining M transformation parameters of the image to be processed output by the first target model, wherein M is a natural number greater than 1;
transforming the image to be processed by using the M transformation parameters to obtain a target image;
identifying a target object in the target image.
2. The method of claim 1, wherein inputting an image to be processed into a first object model, determining M transformation parameters of the image to be processed output by the first object model, comprises:
inputting the image to be processed into N convolutional layers in the first target model, and determining image characteristic information of the image to be processed output by the N convolutional layers, wherein N is a natural number greater than 1;
inputting the image characteristic information into a full connection layer in the first target model, and determining the M transformation parameters output by the full connection layer.
3. The method of claim 2, wherein inputting the image feature information into the fully connected layer in the first target model and determining the M transformation parameters output by the fully connected layer comprises:
determining a first rotation matrix by using the determined rotation angle of the sample image;
determining a first scaling parameter based on the first rotation matrix;
determining a first transformation matrix based on the first rotation matrix and the first scaling parameter, wherein the first transformation matrix is used to control the rotation and scaling of the sample image;
determining a tilt correction matrix by using the determined tilt angle of the sample image;
determining a second scaling parameter based on the tilt correction matrix, wherein the second scaling parameter is used to scale the x-axis values of the sample image;
determining a second transformation matrix based on the first transformation matrix, the tilt correction matrix, and the second scaling parameter, wherein the second transformation matrix is used to control the tilting (shearing), scaling, and rotation of the sample image.
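Claim 3 composes rotation, scaling, and tilt (shear) correction into a single transform. One plausible reading of that construction is sketched below in homogeneous 3x3 coordinates; this is an illustrative interpretation, not the patented implementation, and all function names and the exact composition order are assumptions:

```python
import math

def rotation_matrix(theta):
    """3x3 homogeneous rotation by theta radians (first-transform component)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def tilt_correction_matrix(phi):
    """Shear along x by tan(phi), undoing an in-plane tilt of phi radians."""
    return [[1.0, math.tan(phi), 0.0],
            [0.0, 1.0,           0.0],
            [0.0, 0.0,           1.0]]

def scaling_matrix(sx, sy):
    """Anisotropic scale; claim 3's second scaling parameter acts on x only."""
    return [[sx,  0.0, 0.0],
            [0.0, sy,  0.0],
            [0.0, 0.0, 1.0]]

def matmul3(a, b):
    """Product of two 3x3 matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def second_transformation_matrix(theta, phi, s1, s2):
    """Compose x-scale * shear * (uniform scale * rotation):
    controls the tilting, scaling, and rotation named in claim 3."""
    first = matmul3(scaling_matrix(s1, s1), rotation_matrix(theta))
    return matmul3(scaling_matrix(s2, 1.0),
                   matmul3(tilt_correction_matrix(phi), first))
```

With zero angles and unit scales the composition collapses to the identity, which is a quick sanity check on the matrix order.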
4. The method of claim 3, wherein after determining a second transformation matrix based on the first transformation matrix, the tilt correction matrix, and the second scaling parameter, the method further comprises:
calculating the first scaling parameter and the second scaling parameter according to the rotation angle and the inclination angle;
determining a third transformation matrix based on the first scaling parameter, the second scaling parameter, and the second transformation matrix.
5. The method of claim 4, wherein after determining the third transformation matrix based on the first scaling parameter, the second scaling parameter, and the second transformation matrix, the method further comprises:
taking the center point of the sample image as the origin, translating the sample image, applying the transformation, and then translating the sample image back to the original coordinate system, thereby determining the translation transformation coordinates of the sample image.
6. The method of claim 5, wherein after determining the translation transformation coordinates of the sample image, the method further comprises:
determining a target transformation matrix based on the target translation parameter of the sample image, the translation transformation coordinates, and the third transformation matrix.
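Claims 5 and 6 describe anchoring the transform at the image center (translate the center to the origin, transform, translate back) and then folding in a learned translation. Under that reading, the composition might look like the following sketch; the helper names are hypothetical and the matrix layout matches ordinary homogeneous coordinates, not necessarily the patent's internal representation:

```python
def matmul3(a, b):
    """Product of two 3x3 matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translation_matrix(tx, ty):
    """3x3 homogeneous translation by (tx, ty)."""
    return [[1.0, 0.0, tx],
            [0.0, 1.0, ty],
            [0.0, 0.0, 1.0]]

def about_center(matrix, cx, cy):
    """Conjugate a transform so it acts about the image center (cx, cy):
    T(c) * M * T(-c), as claim 5 describes."""
    return matmul3(translation_matrix(cx, cy),
                   matmul3(matrix, translation_matrix(-cx, -cy)))

def target_transformation_matrix(third_matrix, cx, cy, tx, ty):
    """Fold the target translation parameters (claim 6) into the
    center-anchored third transformation matrix."""
    return matmul3(translation_matrix(tx, ty),
                   about_center(third_matrix, cx, cy))
```

For example, a 180-degree rotation conjugated about center (2, 2) maps the corner (0, 0) to (4, 4), which is the behavior one expects from rotating about the center rather than the origin.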
7. The method according to claim 1, wherein after inputting the image to be processed into the first target model and determining the M transformation parameters of the image to be processed output by the first target model, the method further comprises:
constraining the numerical range of the M transformation parameters by using a preset activation function.
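Claim 7's range constraint is exactly what a bounded activation provides. A minimal sketch using tanh follows; the patent does not name the activation here, so tanh and the interval mapping are assumptions for illustration:

```python
import math

def constrain(raw_params, low, high):
    """Map unbounded regression outputs into [low, high].

    tanh squashes each value into (-1, 1); the result is then
    rescaled linearly to the target interval.
    """
    return [low + (high - low) * (math.tanh(p) + 1.0) / 2.0
            for p in raw_params]
```

For instance, bounding a predicted rotation angle to roughly [-pi/4, pi/4] keeps the downstream warp from degenerating when the network output is extreme.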
8. The method according to claim 1, wherein transforming the image to be processed by using the M transformation parameters to obtain the target image comprises:
determining a target transformation matrix of the image to be processed by using the M transformation parameters;
transforming the image to be processed based on the target transformation matrix to obtain the target image.
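Applying a target transformation matrix to an image, as in claim 8, is typically done by inverse mapping: for each output pixel, invert the matrix to find the source pixel to sample. A nearest-neighbor sketch on a plain 2-D list follows; it is illustrative only (a production system would use an optimized warp such as OpenCV's warpAffine), and the matrix layout is assumed to be a standard 3x3 homogeneous affine:

```python
def invert_affine(m):
    """Invert a 3x3 affine matrix [[a, b, tx], [c, d, ty], [0, 0, 1]]."""
    a, b, tx = m[0]
    c, d, ty = m[1]
    det = a * d - b * c
    ia, ib, ic, id_ = d / det, -b / det, -c / det, a / det
    return [[ia, ib, -(ia * tx + ib * ty)],
            [ic, id_, -(ic * tx + id_ * ty)],
            [0.0, 0.0, 1.0]]

def warp(image, m, fill=0):
    """Warp a row-major 2-D list of pixels by affine matrix m,
    sampling the source with nearest-neighbor interpolation."""
    h, w = len(image), len(image[0])
    inv = invert_affine(m)
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Map the output pixel back to its source location.
            sx = inv[0][0] * x + inv[0][1] * y + inv[0][2]
            sy = inv[1][0] * x + inv[1][1] * y + inv[1][2]
            ix, iy = round(sx), round(sy)
            if 0 <= ix < w and 0 <= iy < h:
                out[y][x] = image[iy][ix]
    return out
```

The identity matrix reproduces the input, and a unit x-translation shifts content one pixel right while filling the exposed column, which exercises both the inverse mapping and the bounds check.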
9. An object recognition apparatus, comprising:
a first input module, configured to input an image to be processed into a first target model and determine M transformation parameters of the image to be processed output by the first target model, wherein M is a natural number greater than 1;
a first transformation module, configured to transform the image to be processed by using the M transformation parameters to obtain a target image;
a first identification module, configured to identify a target object in the target image.
10. A computer-readable storage medium, in which a computer program is stored, wherein the computer program, when executed by a processor, implements the method of any one of claims 1 to 8.
11. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 8.
CN202111139080.5A 2021-09-27 2021-09-27 Object recognition method and device, storage medium and electronic device Pending CN113780286A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111139080.5A CN113780286A (en) 2021-09-27 2021-09-27 Object recognition method and device, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN113780286A (en) 2021-12-10

Family

ID=78853924

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111139080.5A Pending CN113780286A (en) 2021-09-27 2021-09-27 Object recognition method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN113780286A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117714894A (en) * 2023-08-03 2024-03-15 荣耀终端有限公司 Target identification method, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018219054A1 (en) * 2017-06-02 2018-12-06 杭州海康威视数字技术股份有限公司 Method, device, and system for license plate recognition
CN109145927A (en) * 2017-06-16 2019-01-04 杭州海康威视数字技术股份有限公司 The target identification method and device of a kind of pair of strain image
CN111462245A (en) * 2020-01-09 2020-07-28 华中科技大学 Zoom camera attitude calibration method and system based on rectangular structure

Similar Documents

Publication Publication Date Title
CN113869293B (en) Lane line recognition method and device, electronic equipment and computer readable medium
CN112541484B (en) Face matting method, system, electronic device and storage medium
CN111310710A (en) Face detection method and system
CN111507894B (en) Image stitching processing method and device
CN111598176B (en) Image matching processing method and device
CN110717864B (en) Image enhancement method, device, terminal equipment and computer readable medium
CN113780286A (en) Object recognition method and device, storage medium and electronic device
CN111428732B (en) YUV image recognition method, system and computer equipment
CN110659638A (en) License plate recognition method and device, computer equipment and storage medium
CN110660091A (en) Image registration processing method and device and photographing correction operation system
EP4075381B1 (en) Image processing method and system
CN111429529B (en) Coordinate conversion calibration method, electronic equipment and computer storage medium
CN113284077B (en) Image processing method, device, communication equipment and readable storage medium
CN110874814B (en) Image processing method, image processing device and terminal equipment
CN111127529A (en) Image registration method and device, storage medium and electronic device
CN111656759A (en) Image color correction method and device and storage medium
CN112580638B (en) Text detection method and device, storage medium and electronic equipment
CN112419459B (en) Method, apparatus, computer device and storage medium for baking model AO mapping
CN113112531B (en) Image matching method and device
CN115393213A (en) Image visual angle transformation method and device, electronic equipment and readable storage medium
CN114757846A (en) Image correction method and device, storage medium and electronic device
CN112308809B (en) Image synthesis method, device, computer equipment and storage medium
CN112733565A (en) Two-dimensional code coarse positioning method, equipment and storage medium
CN112669346A (en) Method and device for determining road surface emergency
CN112601029A (en) Video segmentation method, terminal and storage medium with known background prior information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination