
CN113132645A - Image acquisition method, device, equipment and storage medium - Google Patents

Image acquisition method, device, equipment and storage medium

Info

Publication number
CN113132645A
CN113132645A
Authority
CN
China
Prior art keywords
image
pair
determining
acquisition parameters
image acquisition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911418618.9A
Other languages
Chinese (zh)
Other versions
CN113132645B (en)
Inventor
王玉波
祝玉宝
陈晓青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Uniview Technologies Co Ltd
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Uniview Technologies Co Ltd filed Critical Zhejiang Uniview Technologies Co Ltd
Priority to CN201911418618.9A priority Critical patent/CN113132645B/en
Publication of CN113132645A publication Critical patent/CN113132645A/en
Application granted granted Critical
Publication of CN113132645B publication Critical patent/CN113132645B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 - Circuitry for compensating brightness variation in the scene
    • H04N23/71 - Circuitry for evaluating the brightness variation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Exposure Control For Cameras (AREA)

Abstract

The embodiment of the invention discloses an image acquisition method, an image acquisition device, image acquisition equipment and a storage medium. The method comprises the following steps: if a moving target exists in a first image of the current image pair, determining a target area of a second image according to difference information of the first image and the background image and the second image in the current image pair; determining image acquisition parameters of a third image and a fourth image in a next image pair according to the brightness information of the target area in the second image and the image acquisition parameters of the first image; and if the moving target does not exist in the first image, determining image acquisition parameters of a third image and a fourth image in the next image pair according to the brightness information of the first image and the image acquisition parameters of the second image. According to the embodiment of the application, the exposure parameter adjustment mode is determined in a targeted manner under the conditions that the moving target exists and the moving target does not exist, so that the accuracy and the adaptability of exposure of the moving object are improved.

Description

Image acquisition method, device, equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of image acquisition, in particular to an image acquisition method, an image acquisition device, image acquisition equipment and a storage medium.
Background
The face detection rate, the face recognition rate and the quality of face snapshot images are important indexes for measuring face detection, face recognition and face snapshot technologies. A traditional face camera detects whether a face exists in the captured image, performs subject metering on the face region using a configured face metering mode, and adjusts the exposure according to the metering result, so as to improve the face detection rate, the recognition rate and the face snapshot image quality and make the face reach the best exposure effect.
For a scene with uniform illumination, the above approach can meet the exposure requirements of basic face snapshot. However, if the camera is in a scene with uneven illumination, such as a wide dynamic scene, the following problems may occur: (1) when outdoor illumination is strong, the face is completely overexposed and cannot be detected; (2) when outdoor illumination is strong and the indoor environment is dark, a person walking from dark to bright and then from bright to dark causes the face brightness to change continuously. The traditional face metering method cannot adapt its exposure adjustment to such scenes, which greatly degrades the face snapshot and the exposure effect of the whole image.
Disclosure of Invention
The embodiment of the invention provides an image acquisition method, an image acquisition device, image acquisition equipment and a storage medium, which are used for adjusting image acquisition parameters according to the detection of a moving target and improving the image acquisition quality and the exposure adaptability.
In a first aspect, an embodiment of the present invention provides an image acquisition method, where the method includes:
if a moving target exists in a first image of a current image pair, determining a target area of a second image according to difference information of the first image and a background image and the second image of the current image pair; wherein the background image is determined from a previous frame image in an image pair acquired prior to the current image pair;
determining image acquisition parameters of a third image and a fourth image in a next image pair according to the brightness information of the target area in the second image and the image acquisition parameters of the first image;
and if the moving target does not exist in the first image, determining image acquisition parameters of a third image and a fourth image in a next image pair according to the brightness information of the first image and the image acquisition parameters of the second image.
In a second aspect, an embodiment of the present invention provides an image capturing apparatus, including:
the target area determining module is used for determining a target area of a second image according to difference information of the first image and a background image and the second image in the current image pair if a moving target exists in the first image of the current image pair; wherein the background image is determined from a previous frame image in an image pair acquired prior to the current image pair;
the first image acquisition parameter determining module is used for determining image acquisition parameters of a third image and a fourth image in a next image pair according to the brightness information of the target area in the second image and the image acquisition parameters of the first image;
and the second image acquisition parameter determining module is used for determining the image acquisition parameters of a third image and a fourth image in a next image pair according to the brightness information of the first image and the image acquisition parameters of the second image if the moving target does not exist in the first image.
In a third aspect, an embodiment of the present invention further provides an apparatus, where the apparatus includes:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the image acquisition method according to any one of the embodiments of the present invention.
In a fourth aspect, the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the image capturing method according to any one of the embodiments of the present invention.
In the embodiment of the invention, when the moving target exists, the exposure parameter is adjusted according to the brightness information of the target area in the second image, and when the moving target does not exist, the exposure parameter is adjusted according to the brightness information of the first image, so that the exposure parameter adjusting mode is determined in a targeted manner under the conditions that the moving target exists and does not exist, and the accuracy and the adaptability of the exposure of the moving object are improved.
Drawings
Fig. 1 is a flowchart of an image acquisition method according to an embodiment of the present invention;
fig. 2 is a flowchart of an image capturing method according to another embodiment of the present invention;
FIG. 3 is a first schematic diagram of an image output according to yet another embodiment of the present invention;
FIG. 4 is a second schematic diagram of an image output according to yet another embodiment of the present invention;
FIG. 5 is a diagram illustrating a relationship between weight and sharpness according to another embodiment of the present invention;
fig. 6 is a schematic structural diagram of an image capturing device according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an apparatus according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Fig. 1 is a flowchart of an image capturing method according to an embodiment of the present invention. The image acquisition method provided by the embodiment can be suitable for adjusting the image acquisition parameters, and typically, the embodiment of the invention can be suitable for adjusting the image acquisition parameters to acquire images according to the detected brightness of the moving target area when the illumination is not uniform or strong. The method may be specifically performed by an image acquisition apparatus, which may be implemented by means of software and/or hardware, which may be integrated in a device. Referring to fig. 1, the method of the embodiment of the present invention specifically includes:
s110, if a moving target exists in a first image of a current image pair, determining a target area of a second image according to difference information of the first image and a background image and the second image of the current image pair; wherein the background image is determined from a previous frame image in an image pair acquired prior to the current image pair;
wherein the image acquisition parameter may be an exposure parameter. The current image pair may be the image pair most recently acquired by the image acquirer, and the image pair may be two consecutive frames of images acquired by the image acquirer. The moving object may be an object newly appearing in the first image, or may be an object whose position in the first image is different from that in the background image. In the embodiment of the application, if the exposure parameter is not adjusted after being stabilized, the image acquisition parameter of the first image in the current image pair is the same as the image acquisition parameter of the previous image in the previous image pair, and then the previous image in the previous image pair is used as the background image to detect the moving target of the first image in the current image pair. The previous image pair may be an image pair adjacent to the current image pair that was acquired by the image acquirer prior to acquiring the current image pair. If the first image in the current image pair is the second frame image, the previous frame image in the previous image pair may be the first frame image. For example, if the current image pair is the second frame image and the third frame image, the previous frame image in the previous image pair is the first frame image. If the current image pair is the third frame image and the fourth frame image, the previous image pair is the first frame image and the second frame image, and the previous image in the previous image pair is the first frame image. And if the current image pair is the seventh frame image and the eighth frame image, the previous image pair is the fifth frame image and the sixth frame image, and the previous image in the previous image pair is the fifth frame image. In the embodiment of the present application, the previous frame image in the image pair may be used as a global image for detecting a moving target, and the next frame image may be used as a local image for displaying the moving target, so as to adjust the exposure parameter according to the brightness information of the moving target. The first image in the current image pair and the previous frame image in the previous image pair may be global images in each image pair.
Specifically, when the illumination brightness changes greatly or the illumination is strong, the acquired image often generates overexposure, the target area in the image cannot be accurately detected, and the exposure parameter of the next image cannot be adjusted according to the brightness of the target area. Therefore, in the embodiment of the application, the target area of the moving target in the second image is accurately determined according to the difference information of the first image and the background image, so that the exposure parameter is determined according to the brightness information of the target area.
In this embodiment of the present application, determining a target region of a second image according to difference information between a first image and a background image and the second image in the current image pair includes: determining the target area coordinates of the moving target in the first image according to the difference information of the first image and the background image; and determining a region corresponding to the target region coordinates in the second image as the target region in the second image.
For example, since the first image and the second image are two adjacent frames of images, the acquisition time interval is short, and the displacement of the moving object is small, the target area coordinates in the first image can be used as the target area coordinates of the moving object in the second image. And determining the target area coordinate of the moving target in the first image according to the difference information, and mapping the target area coordinate into the second image correspondingly, thereby acquiring the area corresponding to the target area coordinate in the second image as the target area in the second image.
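As a minimal sketch of this mapping step (assuming OpenCV and NumPy; the function name, the fixed difference threshold and the single-bounding-box simplification are illustrative choices, not details fixed by the disclosure):

```python
import cv2
import numpy as np

def target_region_in_second_image(first_img, background_img, second_img,
                                  diff_threshold=25):
    """Locate the moving target in the first (global) image by differencing
    against the background image, then map the same coordinates onto the
    second (local) image of the pair."""
    # Difference information between the first image and the background image
    diff = cv2.absdiff(first_img, background_img)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY) if diff.ndim == 3 else diff
    _, mask = cv2.threshold(gray, diff_threshold, 255, cv2.THRESH_BINARY)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None  # no moving target found

    # Target-area coordinates in the first image
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))

    # The two frames of a pair are adjacent in time, so the same coordinates
    # are reused directly as the target area of the second image.
    return second_img[y:y + h, x:x + w], (x, y, w, h)
```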
And S120, determining image acquisition parameters of a third image and a fourth image in a next image pair according to the brightness information of the target area in the second image and the image acquisition parameters of the first image.
The third image may be a previous frame image in the next image pair, and the fourth image may be a next frame image in the next image pair.
In an embodiment of the present invention, determining image capturing parameters of a third image and a fourth image in a next image pair according to brightness information of a target region in the second image and image capturing parameters of the first image includes: taking the image acquisition parameter of the first image as the image acquisition parameter of the third image in the next image pair; and determining the image acquisition parameters of the fourth image in the next image pair according to the brightness information of the target area in the second image and the corresponding relation between the brightness information and the image acquisition parameters.
For example, for a global image, because a moving target exists at present, in order to ensure accuracy of detecting the moving target and avoid that brightness of the image changes due to exposure parameter adjustment, so that the moving target cannot be determined according to difference information, in the embodiment of the present application, an exposure parameter of the global image is not adjusted, that is, an exposure parameter of a first image is used as an exposure parameter of a third image, so that exposure parameters of two images are ensured to be consistent, and existing difference information is difference information generated by moving the moving target.
For the local image, if a moving target exists, the exposure parameter of the local image needs to be determined according to the brightness information of the target area of the moving target in the second image, so that the local image is subjected to image acquisition according to the exposure parameter, and the local image suitable for current illumination is obtained. And determining an exposure parameter corresponding to the brightness information of the target area as the exposure parameter of the fourth image, so that the exposure parameter of the fourth image is adjusted, and even if the second image has an overexposure condition, the target area and the brightness information can be accurately acquired according to the first image and the background image, so that the exposure parameter is adjusted. In addition, even if a moving target exists, the exposure parameter of the target area can be adjusted by tracking the movement of the target, so that an image with high image quality is acquired under the current illumination environment.
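A sketch of this adjustment for the moving-target case is shown below. The brightness-to-exposure correspondence table and its sample values are placeholders: the patent only states that such a correspondence exists, not what it contains.

```python
import numpy as np

# Hypothetical correspondence between target-area brightness and exposure time.
BRIGHTNESS_LEVELS = np.array([20, 60, 110, 160, 220])      # mean luminance (0-255)
EXPOSURE_TIMES_MS = np.array([33.0, 16.0, 8.0, 4.0, 1.0])  # shorter when brighter

def exposure_for_brightness(mean_brightness):
    """Interpolate an exposure time for the measured brightness."""
    return float(np.interp(mean_brightness, BRIGHTNESS_LEVELS, EXPOSURE_TIMES_MS))

def params_for_next_pair_with_target(first_img_exposure, target_region_brightness):
    """Moving-target branch: the third (global) image keeps the first image's
    exposure, while the fourth (local) image is re-exposed for the target area."""
    third_img_exposure = first_img_exposure
    fourth_img_exposure = exposure_for_brightness(target_region_brightness)
    return third_img_exposure, fourth_img_exposure
```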
And S130, if the moving target does not exist in the first image, determining image acquisition parameters of a third image and a fourth image in a next image pair according to the brightness information of the first image and the image acquisition parameters of the second image.
In this embodiment of the present application, determining image capturing parameters of a third image and a fourth image in a next image pair according to the brightness information of the first image and the image capturing parameters of the second image includes: taking the image acquisition parameter of the second image as the image acquisition parameter of a fourth image of a next image pair; and determining the image acquisition parameters of the third image in the next image pair according to the brightness information of the first image and the corresponding relation between the brightness information and the image acquisition parameters.
For example, when there is no moving target, the local image does not need to be used for detecting or extracting a moving target; therefore, the exposure parameter of the local image is not adjusted, and the image acquisition parameter of the second image is used as the exposure parameter of the local image in the next image pair. For the global image, whether the exposure parameter needs to be adjusted can be determined according to the brightness information of the global image, so that the quality of the global image is improved. The two branches are summarized in the sketch below.
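Pulling the two branches together, a compact sketch of the per-pair decision, reusing the hypothetical helpers sketched above:

```python
def decide_next_pair_params(first_img, second_img, background_img,
                            first_exposure, second_exposure):
    """Return (third_image_exposure, fourth_image_exposure) for the next pair."""
    located = target_region_in_second_image(first_img, background_img, second_img)
    if located is not None:
        # Moving target: keep the global exposure, re-meter the local image
        # from the target area mapped into the second image.
        target_patch, _ = located
        return params_for_next_pair_with_target(first_exposure,
                                                target_patch.mean())
    # No moving target: keep the local exposure, re-meter the global image
    # from the brightness of the whole first image.
    return exposure_for_brightness(first_img.mean()), second_exposure
```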
In the embodiment of the invention, when the moving target exists, the exposure parameter is adjusted according to the brightness information of the target area in the second image, and when the moving target does not exist, the exposure parameter is adjusted according to the brightness information of the first image, so that the exposure parameter adjusting mode is determined in a targeted manner under the conditions that the moving target exists and does not exist, and the accuracy and the adaptability of the exposure of the moving object are improved.
Fig. 2 is a flowchart of an image capturing method according to another embodiment of the present invention. For further optimization of the embodiments, details which are not described in detail in the embodiments are described in the embodiments. Referring to fig. 2, the image capturing method provided in this embodiment may include:
and S210, acquiring a current image pair acquired by the image acquisition device.
In the embodiment of the present application, the image output by the image collector may adopt a time division multiplexing mechanism, as shown in fig. 3, that is, odd frames and even frames are output in sequence, and the odd frames are used as global images and the even frames are used as local images, or the odd frames are used as local images and the even frames are used as global images. The DOL-WDR (digital overlap-Wide Dynamic Range) technique may also be adopted to output the global image and the local image in an interleaved and overlapped manner, as shown in fig. 4.
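As a small illustration of the time-division output mode (the DOL-WDR interleaving happens inside the sensor and is not sketched), a frame stream could be demultiplexed as follows; the parity convention is configurable, as the text notes:

```python
def split_global_local(frames, odd_is_global=True):
    """Demultiplex a time-division stream: odd frames as global images and
    even frames as local images, or the other way round."""
    global_imgs, local_imgs = [], []
    for idx, frame in enumerate(frames, start=1):   # 1-based frame numbering
        is_odd = (idx % 2 == 1)
        (global_imgs if is_odd == odd_is_global else local_imgs).append(frame)
    return global_imgs, local_imgs
```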
S220, judging whether the image acquisition parameter of the first image in the current image pair is the same as the image acquisition parameter of the previous image in the previous image pair; if yes, go to S230; if not, go to S260.
Specifically, if the image acquisition parameter of the first image in the current image pair is the same as the image acquisition parameter of the previous frame image in the previous image pair, the exposure parameters have not been adjusted, and S230 is executed to perform moving target detection, so as to improve the detection accuracy. If they are not the same, the exposure parameters have been adjusted, so S260 is executed and moving target detection is not performed.
In another embodiment of the present invention, if the exposure parameters are adjusted, the background image is not updated and the moving object detection is not performed. If the exposure parameters are not adjusted after being stabilized, the image acquisition parameters of the first image in the current image pair are the same as the image acquisition parameters of the previous image in the previous image pair, and then the previous image in the previous image pair is used as a background image to detect the moving target of the first image in the current image pair.
S230, detecting a moving target of the first image according to the difference information of the first image and the background image, and judging whether the moving target exists or not; if so, executing S240-S250; if not, go to S260.
For example, if the image acquisition parameters of the two images are the same, but a part of the area is different, for example, brightness is different, it indicates that the object in the area moves, and the position in the two images is different. Therefore, when the image acquisition parameter of the first image in the current image pair is the same as the image acquisition parameter of the previous image in the previous image pair, that is, the exposure parameter is not adjusted, it is determined whether difference information exists between the first image and the background image, so as to detect whether a moving target exists.
In another embodiment of the present invention, the moving object detection of the first image includes: comparing the brightness information of the first image with the brightness information of the background image to determine brightness difference information; if the brightness difference information is larger than the brightness difference information threshold value and the perimeter is larger than a preset perimeter threshold value or the area is larger than a preset area threshold value in the first image, determining that a moving target exists in the first image; otherwise, determining that the moving target does not exist in the first image.
Illustratively, the brightness information of each pixel point in the first image is compared with the brightness information of the corresponding pixel point in the background image, and if the brightness difference value of the pixel point is greater than a preset brightness difference information threshold value, the pixel point is taken as a moving target pixel point. And judging whether the perimeter of the area of the target pixel point is larger than a preset perimeter threshold or not, or whether the area is larger than an area threshold or not, if so, indicating that a moving target exists, so that the accuracy of determining the area of the moving target is ensured, and the problem that the difference generated by exposure is wrongly judged as the moving target is solved. And taking the area formed by the moving target pixel points as a target area. And if the target area exists, determining that the moving target exists in the first image, otherwise, determining that the moving target does not exist in the first image.
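A sketch of this detection rule, assuming OpenCV; the threshold values below are placeholder defaults rather than values from the patent:

```python
import cv2

def has_moving_target(first_img, background_img,
                      brightness_diff_threshold=25,
                      perimeter_threshold=80,
                      area_threshold=400):
    """Pixel-wise brightness comparison against the background image, followed
    by a perimeter/area check so that small exposure differences are not
    mistaken for motion."""
    gray_first = cv2.cvtColor(first_img, cv2.COLOR_BGR2GRAY)
    gray_bg = cv2.cvtColor(background_img, cv2.COLOR_BGR2GRAY)

    diff = cv2.absdiff(gray_first, gray_bg)
    _, mask = cv2.threshold(diff, brightness_diff_threshold, 255, cv2.THRESH_BINARY)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        perimeter = cv2.arcLength(contour, True)
        area = cv2.contourArea(contour)
        if perimeter > perimeter_threshold or area > area_threshold:
            return True   # changed region is large enough to be a moving target
    return False
```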
In another embodiment of the present invention, after performing moving target detection on the first image according to the difference information between the first image and the background image, the method further includes: fusing the first image, or a first background image obtained by removing the moving target from the first image, with the background image to obtain a new background image.
Illustratively, if the moving target is detected to exist, after the target area is determined according to the difference information, the first image, or a first background image obtained by removing the moving target from the first image, is fused with the background image to obtain a new background image, so as to update the background image.
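One plausible way to realize this fusion is a running average that preserves the old background inside the masked target area; the patent does not fix the fusion rule, so the update factor and the masking below are assumptions:

```python
import numpy as np

def update_background(background_img, first_img, target_mask=None, alpha=0.05):
    """Fuse the first image (with the moving target masked out, if a mask is
    given) into the background image using a running average."""
    first = first_img.astype(np.float32)
    bg = background_img.astype(np.float32)
    fused = (1.0 - alpha) * bg + alpha * first
    if target_mask is not None:
        # Keep the old background wherever the moving target was detected.
        fused[target_mask > 0] = bg[target_mask > 0]
    return fused.astype(background_img.dtype)
```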
S240, determining a target area of a second image according to the difference information of the first image and the background image and the second image in the current image pair.
And S250, determining image acquisition parameters of a third image and a fourth image in a next image pair according to the brightness information of the target area in the second image and the image acquisition parameters of the first image.
And S260, determining image acquisition parameters of a third image and a fourth image in a next image pair according to the brightness information of the first image and the image acquisition parameters of the second image.
In yet another embodiment of the present application, the brightness information is determined by: dividing a target area in the first image or the second image into at least two sub-areas; determining the sharpness of the at least two sub-areas and determining a weight value corresponding to the sharpness; and determining the brightness information of the target area in the first image or the second image according to the brightness and the weight values of the at least two sub-areas.
Illustratively, the target area in the first image or the second image is divided into M × N blocks, the brightness and the sharpness evaluation index of each block are counted respectively, and the sharpness values of the blocks are then ranked: the higher the sharpness evaluation value, the higher the corresponding weight and the greater its influence on the finally calculated brightness result. The relationship between the sharpness value and the weight value may be as shown in fig. 5. The first sharpness threshold and the second sharpness threshold may be set according to actual conditions, for example, to the values corresponding to one quarter and three quarters of the maximum sharpness value. The brightness information is calculated according to the following formula:
$$L_{Avg} = \frac{\sum_{i=1}^{M}\sum_{j=1}^{N} \omega_{i,j} \, L_{i,j}}{\sum_{i=1}^{M}\sum_{j=1}^{N} \omega_{i,j}}$$
where $L_{i,j}$ represents the brightness of a block, $\omega_{i,j}$ is the weight generated from the sharpness evaluation value of that block, $L_{Avg}$ represents the overall brightness value of the image, and i and j denote the row and column in which the block is located.
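A sketch of this block-wise weighting, using Laplacian variance as the sharpness evaluation value and a piecewise-linear weight mapping between the two thresholds; both of these specific choices are assumptions, since the patent only requires that sharper blocks receive larger weights:

```python
import cv2
import numpy as np

def weighted_region_brightness(region, m=4, n=4):
    """Split the target region into m x n blocks, weight each block's mean
    brightness by a sharpness score, and return the weighted brightness L_Avg."""
    gray = cv2.cvtColor(region, cv2.COLOR_BGR2GRAY) if region.ndim == 3 else region
    h, w = gray.shape
    bh, bw = h // m, w // n

    brightness = np.zeros((m, n))
    sharpness = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            block = gray[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            brightness[i, j] = block.mean()
            sharpness[i, j] = cv2.Laplacian(block, cv2.CV_64F).var()

    # First/second sharpness thresholds at one quarter and three quarters of
    # the maximum sharpness value, as in the example above; weights rise
    # linearly between them (mapping shape is an assumption).
    max_s = sharpness.max()
    low_t, high_t = 0.25 * max_s, 0.75 * max_s
    weights = np.clip((sharpness - low_t) / max(high_t - low_t, 1e-6), 0.1, 1.0)

    return float((weights * brightness).sum() / weights.sum())
```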
And S270, controlling the image collector to collect the third image and the fourth image according to the image collecting parameters.
And S280, fusing the third image and the fourth image to obtain a fused image and outputting the fused image.
For example, the global image, the local image, or the fused image of the global image and the local image can be selectively output according to the user configuration. If the user selects separate output, the corresponding image is output directly; if the user selects fused output, the global image and the local image are fused, for example using a fusion algorithm based on weighted average, a fusion algorithm based on principal component analysis, or a fusion algorithm based on pyramid transformation.
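Of the fusion options listed, the weighted-average variant is the simplest; a sketch follows, where the fixed 0.5 weight is an arbitrary example:

```python
import numpy as np

def fuse_global_local(global_img, local_img, weight=0.5):
    """Weighted-average fusion of the global and local images; PCA- or
    pyramid-based fusion could be substituted."""
    g = global_img.astype(np.float32)
    l = local_img.astype(np.float32)
    fused = weight * g + (1.0 - weight) * l
    return np.clip(fused, 0, 255).astype(np.uint8)
```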
According to the technical scheme of the embodiment of the invention, when the moving target exists, the exposure parameter of the local image is adjusted according to the brightness information, the global image is not adjusted, and when the moving target does not exist, the local image is not adjusted, and the global image is adjusted according to the brightness information, so that the linkage of motion detection and exposure parameter adjustment is realized, the extracted moving target is more accurate, and the accuracy and the adaptability of exposure of a moving object in a complex scene are improved. And the image fused with the global image and the local image can be obtained, so that the functions of intelligently identifying the moving face or the vehicle and the like can be realized. By means of multi-exposure control, two paths of stably exposed images or videos of the global image and the local image are obtained, and the problem that the overall brightness is unstable due to photometric switching of the global image and the local image of one path of video in the traditional scheme is solved.
Fig. 6 is a schematic structural diagram of an image capturing device according to an embodiment of the present invention. The device can be suitable for adjusting the image acquisition parameters, and typically, the embodiment of the invention can be suitable for adjusting the image acquisition parameters to acquire images according to the detected brightness of the moving target area when the illumination is uneven or strong. The apparatus may be implemented by software and/or hardware, and the apparatus may be integrated in a device. Referring to fig. 6, the apparatus specifically includes:
a target area determining module 310, configured to determine, if a moving target exists in a first image of a current image pair, a target area of a second image according to difference information between the first image and a background image and the second image of the current image pair;
a first image acquisition parameter determining module 320, configured to determine image acquisition parameters of a third image and a fourth image in a next image pair according to brightness information of a target region in the second image and the image acquisition parameters of the first image;
the second image capturing parameter determining module 330 is configured to, after detecting the moving object in the first image, determine image capturing parameters of a third image and a fourth image in a next image pair according to the brightness information of the first image and the image capturing parameters of the second image if the moving object does not exist in the first image.
In this embodiment, the target area determining module 310 includes:
the target area coordinate determining unit is used for determining the target area coordinate of the moving target in the first image according to the difference information of the first image and the background image;
a corresponding region determining unit for determining a region in the second image corresponding to the target region coordinates as the target region in the second image.
In an embodiment of the present application, the first image acquisition parameter determining module 320 includes:
the first adjusting unit is used for taking the image acquisition parameter of the first image as the image acquisition parameter of the third image in the next image pair;
and the second adjusting unit is used for determining the image acquisition parameters of the fourth image in the next image pair according to the brightness information of the target area in the second image and the corresponding relation between the brightness information and the image acquisition parameters.
In an embodiment of the present application, the apparatus further includes:
the moving target detection module is used for detecting a moving target of a first image according to difference information between the first image and a background image if image acquisition parameters of the first image in a current image pair are the same as those of any image in a previous image pair; wherein the background image is determined from a previous frame image in an image pair acquired prior to the current image pair;
and the parameter determining module is used for determining the image acquisition parameters of a third image and a fourth image in a next image pair according to the brightness information of the first image and the image acquisition parameters of the second image if the image acquisition parameters of the first image in the current image pair are different from the image acquisition parameters of the previous frame image in the previous image pair.
In an embodiment of the present application, the moving object detecting module includes:
the contrast unit is used for comparing the brightness information of the first image with the brightness information of the background image to determine brightness difference information;
a first result determining unit, configured to determine that a moving target exists in the first image if the luminance difference information is greater than the luminance difference information threshold, and the perimeter is greater than a preset perimeter threshold or the area is greater than a preset area threshold;
and the second result determining unit is used for determining that the moving target does not exist in the first image if the above conditions are not met.
In an embodiment of the present application, the second image acquisition parameter determining module 330 and/or the parameter determining module includes:
the third adjusting unit is used for taking the image acquisition parameter of the second image as the image acquisition parameter of the fourth image of the next image pair;
and the fourth adjusting unit is used for determining the image acquisition parameters of the third image in the next image pair according to the brightness information of the first image and the corresponding relation between the brightness information and the image acquisition parameters.
In the embodiment of the present application, the luminance information is determined by:
a sub-region dividing unit for dividing a target region in the first image or the second image into at least two sub-regions;
the weight value determining module is used for determining the sharpness of the at least two sub-areas and determining the weight values corresponding to the sharpness;
and the brightness information determining module is used for determining the brightness information of the target area in the first image or the second image according to the brightness and the weight values of the at least two sub-areas.
In an embodiment of the present application, the apparatus further includes:
and the new background image determining module is used for fusing the first image or the first background image obtained by removing the moving target from the first image with the background image to obtain a new background image after the moving target detection is carried out on the first image according to the difference information between the first image and the background image.
In an embodiment of the present application, the apparatus further includes:
the image acquisition module is used for controlling the image acquisition device to acquire the third image and the fourth image according to the image acquisition parameters;
and the fusion output module is used for fusing the third image and the fourth image to obtain a fused image for output.
According to the technical scheme of the embodiment of the invention, when the moving target exists, the exposure parameter is adjusted according to the brightness information of the target area in the second image, and when the moving target does not exist, the exposure parameter is adjusted according to the brightness information of the first image, so that the exposure parameter adjusting mode is determined in a targeted manner under the conditions that the moving target exists and the moving target does not exist, and the accuracy and the adaptability of the exposure of the moving object are improved.
Fig. 7 is a schematic structural diagram of an apparatus according to an embodiment of the present invention. FIG. 7 illustrates a block diagram of an exemplary device 412 suitable for use in implementing embodiments of the present invention. The device 412 shown in fig. 7 is only an example and should not impose any limitation on the functionality or scope of use of embodiments of the present invention.
As shown in fig. 7, the apparatus 412 includes: one or more processors 416; the memory 428 is configured to store one or more programs that, when executed by the one or more processors 416, enable the one or more processors 416 to implement the image capturing method provided by the embodiments of the present invention, including:
if a moving target exists in a first image of a current image pair, determining a target area of a second image according to difference information of the first image and a background image and the second image of the current image pair; wherein the background image is determined from a previous frame image in an image pair acquired prior to the current image pair;
determining image acquisition parameters of a third image and a fourth image in a next image pair according to the brightness information of the target area in the second image and the image acquisition parameters of the first image;
and if the moving target does not exist in the first image, determining image acquisition parameters of a third image and a fourth image in a next image pair according to the brightness information of the first image and the image acquisition parameters of the second image.
The device 412 is embodied in the form of a general-purpose computing device. The components of device 412 may include, but are not limited to: one or more processors or processing units 416, a system memory 428, and a bus 418 that couples the various system components (including the system memory 428 and the processors 416).
Bus 418 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Device 412 typically includes a variety of computer system readable storage media. These storage media may be any available storage media that can be accessed by device 412 and includes both volatile and nonvolatile storage media, removable and non-removable storage media.
The system memory 428 may include computer system readable storage media in the form of volatile memory, such as Random Access Memory (RAM)430 and/or cache memory 432. The device 412 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 434 may be used to read from and write to non-removable, nonvolatile magnetic storage media (not shown in FIG. 7, commonly referred to as "hard drives"). Although not shown in FIG. 7, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical storage medium) may be provided. In these cases, each drive may be connected to bus 418 by one or more data storage media interfaces. Memory 428 can include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 440 having a set (at least one) of program modules 442 may be stored, for instance, in memory 428; such program modules 442 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data, and each of these examples, or some combination thereof, may comprise an implementation of a network environment. Program modules 442 generally perform the functions and/or methodologies of the described embodiments of the invention.
The device 412 may also communicate with one or more external devices 414 (e.g., keyboard, pointing device, display 426, etc.), with one or more devices that enable a user to interact with the device 412, and/or with any devices (e.g., network card, modem, etc.) that enable the device 412 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 422. Also, the device 412 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) through the network adapter 420. As shown, network adapter 420 communicates with the other modules of device 412 over bus 418. It should be appreciated that although not shown in FIG. 7, other hardware and/or software modules may be used in conjunction with device 412, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processor 416 performs various functional applications and data processing by running programs stored in the system memory 428, for example, implementing the image acquisition method provided by the embodiments of the present invention.
One embodiment of the present invention provides a storage medium containing computer-executable instructions that, when executed by a computer processor, perform a method of image acquisition comprising:
if a moving target exists in a first image of a current image pair, determining a target area of a second image according to difference information of the first image and a background image and the second image of the current image pair; wherein the background image is determined from a previous frame image in an image pair acquired prior to the current image pair;
determining image acquisition parameters of a third image and a fourth image in a next image pair according to the brightness information of the target area in the second image and the image acquisition parameters of the first image;
and if the moving target does not exist in the first image, determining image acquisition parameters of a third image and a fourth image in a next image pair according to the brightness information of the first image and the image acquisition parameters of the second image.
The computer storage medium of the embodiments of the present invention may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the invention, the computer readable storage medium may be any tangible medium that can contain, or store, a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or device. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. An image acquisition method, characterized in that the method comprises:
if a moving target exists in a first image of a current image pair, determining a target area of a second image according to difference information of the first image and a background image and the second image of the current image pair; wherein the background image is determined from a previous frame image in an image pair acquired prior to the current image pair;
determining image acquisition parameters of a third image and a fourth image in a next image pair according to the brightness information of the target area in the second image and the image acquisition parameters of the first image;
and if the moving target does not exist in the first image, determining image acquisition parameters of a third image and a fourth image in a next image pair according to the brightness information of the first image and the image acquisition parameters of the second image.
2. The method of claim 1, wherein determining the target region of the second image according to the difference information between the first image and the background image and the second image in the current image pair comprises:
determining the target area coordinates of the moving target in the first image according to the difference information of the first image and the background image;
and determining a region corresponding to the target region coordinates in the second image as the target region in the second image.
3. The method of claim 1, further comprising:
if the image acquisition parameters of a first image in a current image pair are the same as the image acquisition parameters of a previous image in a previous image pair, detecting a moving target of the first image according to the difference information of the first image and a background image;
and if the image acquisition parameters of the first image in the current image pair are different from the image acquisition parameters of the previous image in the previous image pair, determining the image acquisition parameters of the third image and the fourth image in the next image pair according to the brightness information of the first image and the image acquisition parameters of the second image.
4. The method of claim 3, wherein performing moving object detection on the first image comprises:
comparing the brightness information of the first image with the brightness information of the background image to determine brightness difference information;
if the brightness difference information is larger than the brightness difference information threshold value and the perimeter is larger than a preset perimeter threshold value or the area is larger than a preset area threshold value in the first image, determining that a moving target exists in the first image;
otherwise, determining that no moving target exists in the first image;
after detecting the moving object of the first image according to the difference information between the first image and the background image, the method further includes:
fusing the first image, or a first background image obtained by removing the moving target from the first image, with the background image to obtain a new background image.
5. The method of claim 1, wherein determining image acquisition parameters for a third image and a fourth image in a next image pair based on brightness information of a target region in the second image and image acquisition parameters of the first image comprises:
taking the image acquisition parameter of the first image as the image acquisition parameter of the third image in the next image pair;
and determining the image acquisition parameters of the fourth image in the next image pair according to the brightness information of the target area in the second image and the corresponding relation between the brightness information and the image acquisition parameters.
6. The method according to any one of claims 1-3, wherein determining image acquisition parameters for a third image and a fourth image in a next image pair based on the brightness information of the first image and the image acquisition parameters of the second image comprises:
taking the image acquisition parameter of the second image as the image acquisition parameter of a fourth image of a next image pair;
and determining the image acquisition parameters of the third image in the next image pair according to the brightness information of the first image and the corresponding relation between the brightness information and the image acquisition parameters.
7. The method according to any of claims 1-5, wherein the luminance information is determined by:
dividing a target area in the first image or the second image into at least two sub-areas;
determining the sharpness of at least two sub-areas and determining a weight value corresponding to the sharpness;
and determining the brightness information of the target area in the first image or the second image according to the brightness and the weight value of the at least two sub-areas.
8. An image acquisition apparatus, characterized in that the apparatus comprises:
the target area determining module is used for determining a target area of a second image according to difference information of the first image and a background image and the second image in the current image pair if a moving target exists in the first image of the current image pair; wherein the background image is determined from a previous frame image in an image pair acquired prior to the current image pair;
the first image acquisition parameter determining module is used for determining image acquisition parameters of a third image and a fourth image in a next image pair according to the brightness information of the target area in the second image and the image acquisition parameters of the first image;
and the second image acquisition parameter determining module is used for determining the image acquisition parameters of a third image and a fourth image in a next image pair according to the brightness information of the first image and the image acquisition parameters of the second image if the moving target does not exist in the first image.
9. An apparatus, characterized in that the apparatus comprises: one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the image acquisition method as recited in any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out an image acquisition method as set forth in any one of claims 1 to 7.
CN201911418618.9A 2019-12-31 2019-12-31 Image acquisition method, device, equipment and storage medium Active CN113132645B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911418618.9A CN113132645B (en) 2019-12-31 2019-12-31 Image acquisition method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911418618.9A CN113132645B (en) 2019-12-31 2019-12-31 Image acquisition method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113132645A (en) 2021-07-16
CN113132645B (en) 2022-09-06

Family

ID=76769367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911418618.9A Active CN113132645B (en) 2019-12-31 2019-12-31 Image acquisition method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113132645B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008206111A (en) * 2007-02-23 2008-09-04 Victor Co Of Japan Ltd Photographing apparatus and photographing method
US20090231449A1 (en) * 2008-03-11 2009-09-17 Zoran Corporation Image enhancement based on multiple frames and motion estimation
CN106603933A (en) * 2016-12-16 2017-04-26 中新智擎有限公司 Exposure method and apparatus
CN107147823A (en) * 2017-05-31 2017-09-08 广东欧珀移动通信有限公司 Exposure method, device, computer-readable recording medium and mobile terminal

Also Published As

Publication number Publication date
CN113132645B (en) 2022-09-06

Similar Documents

Publication Publication Date Title
US6226388B1 (en) Method and apparatus for object tracking for automatic controls in video devices
CN109218628B (en) Image processing method, image processing device, electronic equipment and storage medium
US10070053B2 (en) Method and camera for determining an image adjustment parameter
CN107886048A (en) Method for tracking target and system, storage medium and electric terminal
US20190230269A1 (en) Monitoring camera, method of controlling monitoring camera, and non-transitory computer-readable storage medium
CN108550258B (en) Vehicle queuing length detection method and device, storage medium and electronic equipment
CN108600736B (en) Terminal light sensation calibration method and device, terminal and storage medium
WO2021037285A1 (en) Light measurement adjustment method, apparatus, device, and storage medium
US11107237B2 (en) Image foreground detection apparatus and method and electronic device
CN107993256A (en) Dynamic target tracking method, apparatus and storage medium
US12081876B2 (en) Method for determining photographing mode, electronic device and storage medium
EP2817959B1 (en) Vision system comprising an image sensor and means for analysis and reducing loss of illumination towards periphery of the field of view using multiple frames
WO2022179016A1 (en) Lane detection method and apparatus, device, and storage medium
CN110933304B (en) Method and device for determining to-be-blurred region, storage medium and terminal equipment
CN113132645B (en) Image acquisition method, device, equipment and storage medium
CN110636222B (en) Photographing control method and device, terminal equipment and storage medium
CN113052019A (en) Target tracking method and device, intelligent equipment and computer storage medium
CN109218620B (en) Photographing method and device based on ambient brightness, storage medium and mobile terminal
JPWO2018179119A1 (en) Video analysis device, video analysis method, and program
CN111917986A (en) Image processing method, medium thereof, and electronic device
CN112887593B (en) Image acquisition method and device
CN115018742B (en) Target detection method, device, storage medium, electronic equipment and system
CN112883944B (en) Living body detection method, model training method, device, storage medium and equipment
CN113301324B (en) Virtual focus detection method, device, equipment and medium based on camera device
JP2004208209A (en) Device and method for monitoring moving body

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant