
US9270893B2 - Monitoring method and camera - Google Patents

Monitoring method and camera

Info

Publication number
US9270893B2
US9270893B2 (application US14/133,105)
Authority
US
United States
Prior art keywords
camera
control area
image control
settings
view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US14/133,105
Other versions
US20140168432A1 (en)
Inventor
Johan Nystrom
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Axis AB
Original Assignee
Axis AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Axis AB
Priority to US14/133,105
Assigned to AXIS AB (assignor: NYSTROM, JOHAN)
Publication of US20140168432A1
Application granted
Publication of US9270893B2
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N5/235
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/673Focus control based on electronic image sensor signals based on contrast or high frequency components of image signals, e.g. hill climbing method
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N5/23212
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/188Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position

Definitions

  • the present invention relates to a monitoring camera and a method for controlling camera settings.
  • Monitoring cameras are used in many different applications, both indoors and outdoors, to monitor a variety of environments. In order to receive useful images of good quality from such a camera, it is important to use appropriate settings that are correctly adapted to the present conditions of the monitored scene and which allow an operator of the camera to see any events of interest within the monitored scene. Some cameras are able to monitor different parts of a scene by changing or moving a field of view e.g. by panning, tilting or zooming the camera. For such cameras it may be even more challenging to find the correct settings since e.g. the lighting conditions of the monitored environment may change as the camera moves.
  • a method of controlling camera settings for a monitoring camera arranged to monitor a scene comprises the steps of accessing data defining a camera settings control area within the scene, determining if there is an overlapping region between a current field of view of the camera and the camera settings control area, and, if there is an overlapping region, controlling camera settings based on the overlapping region.
  • the camera settings may include at least one of: focus settings, exposure settings, white balance settings, IR cut filter settings, iris control settings and settings for an illumination unit.
  • the brightness of the image may also be influenced by an IR cut filter mode, e.g. if it is on or off, and the iris control, e.g. in the form of f-number. Additionally or as an alternative, the brightness may also be improved by increasing or decreasing illumination from an illumination unit. Controlling white balance and gain settings based on the overlapping region will also improve the image quality of that region.
  • the step of controlling camera settings may comprise automatically controlling camera settings based on image data captured by the monitoring camera in the overlapping region. This would e.g. mean that an autofocus algorithm is instructed to only use image data from the overlapping region to control the autofocus setting. In other words, image data from the overlapping region is used as input to automatic procedures for determining camera settings which are used for the whole image within the current field of view.
  • the step of controlling camera settings may additionally or alternatively comprise accessing data relating to predefined camera settings values related to the camera settings control area, and controlling camera settings according to the predefined camera settings values. This may be useful when a user would like to specifically set e.g. a focus distance to a certain value once a camera settings control area is within the current field of view. It could e.g. be used when there are trees standing in front of an entrance, and the focus distance may then be set to the distance to the entrance, making sure that an autofocus algorithm does not use the trunks of the trees to set the focus distance.
  • camera settings which influence the entire image may in this way be set in a way that makes sure that the overlapping region is depicted with good image quality.
  • the step of accessing data may comprise accessing data defining a selection of camera settings to be controlled, and the step of controlling camera settings may comprise controlling the defined selection of camera settings.
  • the step of accessing data defining a camera settings control area may comprise accessing data defining the camera settings control area in a coordinate system for the scene, and the step of determining if there is an overlapping region may comprise comparing coordinates of the camera settings control area to coordinates of the current field of view in the coordinate system for the scene.
  • the step of defining a camera settings control area may comprise defining a first and a second camera settings control area, the step of determining an overlapping region may comprise determining if there is a first and a second overlapping region corresponding to the first and the second camera settings control area, respectively, and, if there is a first and second overlapping region, the step of controlling camera settings may comprise selecting one of the first and the second overlapping region based on region properties for the first and the second overlapping region, and controlling camera settings based on the selected overlapping region, wherein the region properties include at least one of: size of region, amount of detected motion in region, and a priority setting for the camera settings control area corresponding to the respective region.
  • a monitoring camera arranged to monitor a scene is provided which comprises:
  • a field of view altering unit arranged to alter a field of view of the camera such that the camera is able to capture images of different parts of a scene
  • a data input arranged to receive data defining a camera settings control area within the scene
  • an overlap determining unit arranged to determine if there is an overlapping region between a current field of view of the camera and the camera settings control area
  • a camera settings control unit arranged to control camera settings based on the overlapping region.
  • the field of view altering unit may comprise a motor arranged to alter the field of view of the camera in at least one of a pan, tilt or zoom direction.
  • the pan, tilt or zoom may be controlled in any suitable manner, e.g. by receiving input from a user via a joystick or by receiving input relating to a field of view altering scheme, such as a guard tour.
  • the data input may be arranged to receive data defining a selection of camera settings to be controlled, and the camera settings control unit may be arranged to control the defined selection of camera settings.
  • the data defining a camera settings control area and/or the data defining a selection of camera settings to be controlled may be based on user input.
  • the user input may e.g. be via a graphical user interface where the user draws shapes around any area in the current view of the scene which is to be used as a camera settings control area.
  • the user may also be allowed to move the current field of view of the camera during set-up of the camera settings control areas to be able to define another camera settings control area for another part of the scene.
  • the selection of camera settings to be controlled based on one or more of the camera settings control areas may e.g. be selected by ticking a box in a user interface or by selecting from a drop-down list. It would be possible to select some manual settings, e.g. a certain focus distance, and some automatic settings, e.g. exposure.
  • the data input may be arranged to receive data defining the camera settings control area in a coordinate system for the scene, and the overlap determining unit may be arranged to compare coordinates of the camera settings control area to coordinates of the current field of view.
  • the data input may be arranged to receive data defining a first and a second camera settings control area, and the overlap determining unit may be arranged to determine if there is a first and a second overlapping region corresponding to the first and the second camera settings control area, respectively, and, if there is a first and second overlapping region, the camera settings control unit may be arranged to select one of the first and the second overlapping region based on region properties for the first and the second overlapping region, and to control camera settings based on the selected overlapping region, wherein the region properties include at least one of: size of region, amount of detected motion in region, and a priority setting for the camera settings control area corresponding to the respective region.
  • a computer-readable recording medium having recorded thereon a program for implementing the herein described method when executed on a device having processing capabilities.
  • FIG. 1 shows a monitored scene
  • FIG. 2 shows a monitoring camera
  • FIG. 3 illustrates a method according to the invention.
  • FIG. 1 illustrates a scene 1 with an office building 3 , a number of cars 5 driving by, trees 7 standing next to the building 3 and a car park 9 where some cars are parked and which is lit by a lamp post 11 .
  • the scene is monitored by a monitoring camera 13 , details of which are shown in FIG. 2 .
  • the camera 13 is able to move or shift, or in other words alter, change or adjust, its field of view by panning or tilting, or by zooming using some type of zoom mechanism such as an adjustable zoom lens.
  • a current field of view of the camera 13 is illustrated by the rectangle 15 .
  • the camera 13 covers varying parts of the scene 1 .
  • the camera 13 may change how much of the scene 1 is covered; i.e. when zooming in using a telephoto setting, a smaller part of the scene 1 will be covered, and when zooming out to a wide-angle setting, a larger part of the scene 1 will be covered.
  • the camera may be a so-called PTZ (pan-tilt-zoom) camera, but it may also be capable of altering its field of view in only one of the pan, tilt or zoom “dimensions”, or in any two of those.
  • the camera 13 is usually mounted in a set position and has a movable field of view; it is therefore able to determine the position of its current field of view within a coordinate system for the scene 1.
  • a coordinate system may sometimes be denoted a common coordinate system or an absolute coordinate system.
  • the camera settings used when capturing images are dynamically adapted based on the captured image in order to achieve a good image quality. Different algorithms may be used for this. If motion is detected in part of the current field of view, the camera settings may be adjusted to give a good image quality in that part of the field of view. Another common algorithm is based on detecting contrast in the image and then choosing the focus distance which gives the largest contrast for the overall field of view.
  • the current field of view may cover a part of the scene where some portion of the field of view is given inappropriate attention in the settings. This may e.g. happen if motion is detected in an “uninteresting” part of the scene covered by the current field of view, and may cause more important parts of the covered scene to e.g. become blurry and less useful for surveillance purposes.
  • Such a situation may occur when the current field of view 15 is positioned as shown in FIG. 1 .
  • moving cars are present in the lower part of the field of view, but the most important part to watch is actually around the entrance 17 to the office building 3 .
  • If the camera were to use the fact that motion is detected in the lower part of the currently captured image, which is also closer to the camera than the office building 3, and set e.g. focus and exposure based on that part, the area around the entrance 17 may be captured with less sharpness and additionally be too dark or too bright.
  • If the camera were to use a contrast-aided method for setting autofocus, it may be the case that the tree trunks of the trees 7 standing next to the building 3 are given too much attention. As the trees 7 in this case are closer to the camera than the building, the office building 3 may in this case also become slightly out of focus, meaning that the real area of interest, the entrance 17, will not be as sharp as desired.
  • a number of camera settings control areas 19 have been defined within the scene 1. These areas may be set up at installation of the camera, or later on based on the needs of a user, to cover places of specific interest in the scene, such as in this case the entrance 17 to the office building 3.
  • the camera settings control areas 19 are defined in a manner which allows the camera to determine the relation and a possible overlap between its current field of view and the camera settings control areas.
  • the camera settings control areas may be defined in the coordinate system for the scene. In other words, the camera settings control areas are defined in what could be denoted a PTZ-aware manner.
  • When at least part of a camera settings control area 19 is covered by the current field of view of the camera, the camera will control its camera settings based on this part, illustrated by the shadowed region 21 in FIG. 1.
  • the control of the camera settings could be done by using image data captured by the monitoring camera in the overlapping region as input to one or more automatic algorithms for determining camera settings.
  • Another option is to use predefined values for one or more camera settings as soon as a camera settings control area associated with one or more such predefined camera settings values is overlapped by the current field of view.
  • two camera settings control areas 19 have been defined, a first one for the entrance to the office building 3 and another one for the car park 9 . It may be noted that the camera settings control area 19 which covers part of the car park 9 has been set up to not cover the lamp on the lamp post 11 , thereby making sure that an automatic exposure setting is not adjusted based on image data around the possibly very bright lamp at the lamp post 11 .
  • the camera will check, by comparing the position or coordinates of the current field of view with the position or coordinates of any camera settings control areas, if any such areas are encountered.
  • the coordinates of the field of view and the camera settings control areas may be defined in the coordinate system for the scene to start with, or they may be translated into a common coordinate system for the purpose of the comparison.
  • the camera settings will be controlled based on the part of the image which corresponds to the overlapping region 21 of the camera settings control area 19 and the current field of view 15.
  • the larger of the overlapping regions may be used to control the camera settings.
  • Another option is to detect if motion is present within any of the overlapping regions, and select the region where motion is detected. If motion is detected in more than one overlapping region, it would be possible to either quantify the amount of motion detected and choose the region with the most motion, or to simply fall back to selecting the larger overlapping region.
  • Yet another option is to allow priorities to be set for the different camera settings control areas and select the overlapping region which is associated with the camera settings control area with the highest priority. It could also be possible in some instances to combine the image data captured in both the covered camera settings control areas to control some camera settings.
  • the set-up of the camera settings control areas may be defined by a user, e.g. by input to a graphical user interface or by selecting coordinates. It would also be possible to perform some type of intelligent image analysis assisted set-up, e.g. where image elements indicating doors are detected and suggested to the user as possible areas of interest.
  • the camera settings to be controlled may be at least one of focus settings, exposure settings, white balance settings, IR cut filter settings, iris control settings and settings for an illumination unit, and which of these to be controlled may be chosen beforehand. It may also be possible to select some of the camera settings to be controlled automatically based on image data captured in the overlapping region, and some to be set to a respective predefined value which typically is determined to be appropriate for that camera setting and the area or fixed object in the scene which is covered by that camera settings control area.
  • an illumination unit may either be integrated in the camera or provided as a separate unit, mounted next to or at a distance from the camera.
  • Such an illumination unit may e.g. comprise a number of LEDs and may e.g. provide visible or infra-red illumination.
  • each camera settings control area may be associated with one or more camera settings to be controlled, so that one camera settings control area would be used only for focus purposes and another only for exposure settings.
  • the camera settings to control may be selected by a user; one convenient option would be to use a graphical user interface and let the user check different boxes in a list of available camera settings to control, either applying to all defined camera settings control areas, or with different choices for the different camera settings control areas.
  • FIG. 2 illustrates in more detail the monitoring camera 13 arranged to monitor the scene 1 .
  • the camera 13 may e.g. be a digital network camera adapted for video surveillance of the scene 1 .
  • the camera 13 captures images of the part of the scene 1 covered by the current field of view 15 of the camera 13 .
  • the camera 13 comprises a field of view altering unit 23 arranged to alter the field of view 15 of the camera 13 .
  • the field of view altering unit 23 may e.g. comprise one or more devices such as motors which can change one or more of the pan, tilt or zoom settings of the camera, by moving the viewing direction of the camera or changing the setting of a zoom lens.
  • the camera may also be capable of digitally panning, tilting or zooming to alter the field of view 15 .
  • a data input 25 is arranged to receive data defining a camera settings control area 19 within the scene 1 . This data may e.g. be in the form of coordinates in a coordinate system for the scene 1 .
  • An overlap determining unit 27 is provided to determine if there is an overlapping region 21 between a current field of view 15 of the camera 13 and a camera settings control area 19, e.g. by comparing the position of the current field of view and the position of the camera settings control area 19.
  • the positions may be expressed in coordinates in a coordinate system for the scene 1 .
  • the camera 13 further comprises a camera settings control unit 29 arranged to control camera settings based on the overlapping region 21 .
  • the camera settings may be set solely based on this overlapping region, ignoring the remaining image. It could also be possible that other parts of the image are used to some extent, e.g. by using appropriate weighting.
  • the automatic exposure settings may include gain and exposure time.
  • the brightness may also be influenced by letting the automatic iris control set an f-number for an iris for the camera lens based on image data from the overlapping region. In case the camera is equipped with a filter for infra-red light, i.e. an IR cut filter, the position or state (usually on or off) of such a filter may also be set based on the image data in the overlapping region.
  • the white balance may also be controlled based on the overlapping region, more or less ignoring any negative effects that could have on the image quality, typically the colors, of the remaining image captured in the current field of view.
  • Another camera setting which could be controlled is a setting for an illumination unit. This unit may then be controlled to make sure that the overlapping region is properly lit. Depending on the properties of the illumination unit it may e.g. be possible to control the direction of the illumination or the strength of the illumination, e.g. by switching on or off part of the illumination or by controlling the strength of illumination from a dimmable light source. The direction of the illumination could also be controlled by mechanically turning the unit to steer the illumination in the desired direction. It may also be possible to control the type of illumination, e.g. IR illumination or visible light.
  • FIG. 3 illustrates a method 300 for monitoring a scene by a monitoring camera.
  • the camera accesses data defining a camera settings control area within the scene, which area may have been previously defined e.g. by a user.
  • the camera determines if an overlapping region exists between the current field of view, FOV, and the camera settings control area. This may e.g. be done by comparing coordinates for the current field of view with coordinates for the camera settings control area, where the coordinates are expressed in a coordinate system for the scene. The determining may also be performed by an external processing unit connected to the camera.
  • camera settings for the monitoring camera are controlled in step 307 based on the overlapping region. If it is determined that no overlap exists, camera settings may instead be controlled in a conventional manner. It would also be possible to determine if the current field of view overlaps more than one camera settings control area. This could be done either in sequence after the first determination or in parallel. Data regarding the further camera settings control areas would then be accessed by the camera prior to the respective determination.
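The flow described in the bullets above (access the control-area data, determine whether an overlapping region exists, control settings in step 307, or fall back to conventional control) can be sketched as follows. This is an editorial illustration, not part of the patent; the `Rect` type and the camera interface names (`control_from_region`, `control_conventionally`) are hypothetical, and coordinates are assumed to lie in the common coordinate system for the scene.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Rect:
    """Axis-aligned rectangle in the coordinate system for the scene."""
    left: float
    top: float
    right: float
    bottom: float

    def intersect(self, other: "Rect") -> Optional["Rect"]:
        """Overlapping region of two rectangles, or None if disjoint."""
        left, top = max(self.left, other.left), max(self.top, other.top)
        right = min(self.right, other.right)
        bottom = min(self.bottom, other.bottom)
        if left < right and top < bottom:
            return Rect(left, top, right, bottom)
        return None

def control_step(fov: Rect, control_areas, camera) -> None:
    """One pass of the method: determine any overlap between the current
    field of view and the camera settings control areas, control settings
    from the first overlap found (step 307), and fall back to conventional
    control when no area is overlapped."""
    for area in control_areas:
        region = fov.intersect(area)
        if region is not None:
            camera.control_from_region(region)
            return
    camera.control_conventionally()
```

With several control areas the loop simply uses the first overlap it finds; the text above also describes selecting among multiple overlapping regions by size, detected motion or priority.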

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Burglar Alarm Systems (AREA)

Abstract

A monitoring camera arranged to monitor a scene and a method of controlling camera settings for a monitoring camera arranged to monitor a scene wherein images are captured of different parts of the scene by altering a field of view of the camera, wherein data defining a camera settings control area within the scene is accessed, and it is determined if there is an overlapping region between a current field of view of the camera and the camera settings control area, and if there is an overlapping region, camera settings are controlled based on the overlapping region.

Description

TECHNICAL FIELD
The present invention relates to a monitoring camera and a method for controlling camera settings.
BACKGROUND
Monitoring cameras are used in many different applications, both indoors and outdoors, to monitor a variety of environments. In order to receive useful images of good quality from such a camera, it is important to use appropriate settings that are correctly adapted to the present conditions of the monitored scene and which allow an operator of the camera to see any events of interest within the monitored scene. Some cameras are able to monitor different parts of a scene by changing or moving a field of view e.g. by panning, tilting or zooming the camera. For such cameras it may be even more challenging to find the correct settings since e.g. the lighting conditions of the monitored environment may change as the camera moves.
SUMMARY OF THE INVENTION
In view of the above, it is thus an object of the present invention to overcome or at least mitigate the above-mentioned challenges and to provide an improved way of controlling camera settings.
According to a first aspect of the invention a method of controlling camera settings for a monitoring camera arranged to monitor a scene, the monitoring camera being arranged to capture images of different parts of the scene by altering a field of view of the camera, comprises the steps of
accessing data defining a camera settings control area within the scene,
determining if there is an overlapping region between a current field of view of the camera and the camera settings control area, and
if there is an overlapping region, controlling camera settings based on image data captured by the monitoring camera in the overlapping region.
In this way it is possible to ensure that the most important parts of a scene are depicted with good image quality. This improves the usefulness of the captured video and makes it more likely that interesting details will be noticed correctly.
The camera settings may include at least one of: focus settings, exposure settings, white balance settings, IR cut filter settings, iris control settings and settings for an illumination unit.
It may also be possible to control other settings. When controlling focus settings based on the overlapping region, it is ensured that the image captured in that region will be sharp. When controlling exposure settings it is ensured that the brightness of the image captured in the overlapping region is correct. The brightness of the image may also be influenced by an IR cut filter mode, e.g. if it is on or off, and the iris control, e.g. in the form of f-number. Additionally or as an alternative, the brightness may also be improved by increasing or decreasing illumination from an illumination unit. Controlling white balance and gain settings based on the overlapping region will also improve the image quality of that region.
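As a rough illustration of exposure control driven only by the overlapping region, the sketch below meters the mean luminance inside the region and scales the exposure time toward a mid-grey target. The metering scheme is an assumption made for the sketch (an 8-bit grayscale image as nested lists, proportional control, a target value of 118); a real camera would combine exposure time, gain and iris.

```python
def region_mean_luminance(image, region):
    """Mean pixel value inside the overlapping region of an 8-bit
    grayscale image; region is given as (row0, col0, row1, col1)."""
    r0, c0, r1, c1 = region
    pixels = [image[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    return sum(pixels) / len(pixels)

def adjust_exposure(exposure_time, image, region, target=118.0):
    """Scale the exposure time so the overlapping region's brightness
    approaches a mid-grey target, ignoring the rest of the image."""
    mean = region_mean_luminance(image, region)
    if mean == 0:
        return exposure_time * 2.0  # region fully dark: open up
    return exposure_time * (target / mean)
```

Restricting the metering to the region is what ensures the entrance, rather than e.g. a bright lamp elsewhere in the field of view, ends up correctly exposed.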
The step of controlling camera settings may comprise automatically controlling camera settings based on image data captured by the monitoring camera in the overlapping region. This would e.g. mean that an autofocus algorithm is instructed to only use image data from the overlapping region to control the autofocus setting. In other words, image data from the overlapping region is used as input to automatic procedures for determining camera settings which are used for the whole image within the current field of view.
The step of controlling camera settings may additionally or alternatively comprise accessing data relating to predefined camera settings values related to the camera settings control area, and controlling camera settings according to the predefined camera settings values. This may be useful when a user would like to specifically set e.g. a focus distance to a certain value once a camera settings control area is within the current field of view. It could e.g. be used when there are trees standing in front of an entrance, and the focus distance may then be set to the distance to the entrance, making sure that an autofocus algorithm does not use the trunks of the trees to set the focus distance. Once again, camera settings which influence the entire image may in this way be set in a way that makes sure that the overlapping region is depicted with good image quality.
The step of accessing data may comprise accessing data defining a selection of camera settings to be controlled, and the step of controlling camera settings may comprise controlling the defined selection of camera settings.
In this way the settings which have the most impact on image quality for a certain scene may be selected, and other which are less important in some circumstances may be set in a conventional manner. This gives further possibilities to adapt to a specific use case.
The step of accessing data defining a camera settings control area may comprise accessing data defining the camera settings control area in a coordinate system for the scene, and the step of determining if there is an overlapping region may comprise comparing coordinates of the camera settings control area to coordinates of the current field of view in the coordinate system for the scene.
This is a convenient and computationally inexpensive way of performing the comparison. Another option would be to translate the coordinates of the camera settings control areas to a relative coordinate system of each current field of view.
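A possible sketch of this comparison: the current pan/tilt/zoom state is mapped to a field-of-view rectangle in an angular coordinate system for the scene, and that rectangle is intersected with a control-area rectangle defined in the same system. The function names and the rectangle convention (left, top, right, bottom, in degrees) are illustrative assumptions, not taken from the patent.

```python
def fov_rect(pan_deg, tilt_deg, h_fov_deg, aspect=16 / 9):
    """Current field of view as (left, top, right, bottom) in a
    pan/tilt-angle coordinate system for the scene.  h_fov_deg is the
    horizontal angle of view, which shrinks as the camera zooms in."""
    v_fov_deg = h_fov_deg / aspect
    return (pan_deg - h_fov_deg / 2, tilt_deg - v_fov_deg / 2,
            pan_deg + h_fov_deg / 2, tilt_deg + v_fov_deg / 2)

def overlap_region(fov, area):
    """Intersection of two (left, top, right, bottom) rectangles in the
    scene coordinate system, or None when they do not overlap."""
    left, top = max(fov[0], area[0]), max(fov[1], area[1])
    right, bottom = min(fov[2], area[2]), min(fov[3], area[3])
    if left < right and top < bottom:
        return (left, top, right, bottom)
    return None
```

Because both rectangles live in the same scene coordinate system, the overlap test reduces to four comparisons per control area, which is why the check is cheap enough to run on every field-of-view change.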
The step of defining a camera settings control area may comprise defining a first and a second camera settings control area, and the step of determining an overlapping region may comprise determining if there is a first and a second overlapping region corresponding to the first and the second camera settings control area, respectively, and, if there is a first and second overlapping region, the step of controlling camera settings may comprise selecting one of the first and the second overlapping region based on region properties for the first and the second overlapping region, and controlling camera settings based on the selected overlapping region, wherein the region properties include at least one of: size of region, amount of detected motion in region, a priority setting for the camera settings control area corresponding to the respective region.
In this way it is possible to adapt the captured image to correctly depict more than one important area in the monitored scene, such as when two entrances to a building are present within the scene. The selection of a region based on different region properties provides a convenient and easily implemented way of selecting which region to prioritize for the camera settings control.
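One way such a selection could be implemented, assuming each overlapping region carries the three properties named above, is a single ranking key. The ordering used here (priority first, then amount of detected motion, then region size) is one of several possibilities consistent with the text, not a scheme mandated by it; the dict keys are hypothetical names.

```python
def select_region(regions):
    """Pick the overlapping region to base the camera settings on.
    Each region is a dict with 'size' (area of the region), 'motion'
    (amount of detected motion in the region) and 'priority' (the
    priority setting of its camera settings control area)."""
    return max(regions, key=lambda r: (r["priority"], r["motion"], r["size"]))
```

For example, a small high-priority region around an entrance would here win over a larger, busier region covering a car park.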
According to another aspect of the invention a monitoring camera arranged to monitor a scene is provided which comprises
a field of view altering unit arranged to alter a field of view of the camera such that the camera is able to capture images of different parts of a scene,
a data input arranged to receive data defining a camera settings control area within the scene,
an overlap determining unit arranged to determine if there is an overlapping region between a current field of view of the camera and the camera settings control area, and
a camera settings control unit arranged to control camera settings based on the overlapping region.
The field of view altering unit may comprise a motor arranged to alter the field of view of the camera in at least one of a pan, tilt or zoom direction.
The pan, tilt or zoom may be controlled in any suitable manner, e.g. by receiving input from a user via a joystick or by receiving input relating to a field of view altering scheme, such as a guard tour.
The data input may be arranged to receive data defining a selection of camera settings to be controlled, and the camera settings control unit may be arranged to control the defined selection of camera settings.
The data defining a camera settings control area and/or the data defining a selection of camera settings to be controlled may be based on user input. The user input may e.g. be via a graphical user interface where the user draws shapes around any areas in the current view of the scene which are to be used as camera settings control areas. The user may also be allowed to move the current field of view of the camera during set-up of the camera settings control areas to be able to define another camera settings control area for another part of the scene. The selection of camera settings to be controlled based on one or more of the camera settings control areas may e.g. be made by ticking a box in a user interface or by selecting from a drop-down list. It would also be possible to select some manual settings, e.g. a certain focus distance, and some automatic settings, e.g. exposure, which should be applied when a certain camera settings control area is overlapped by the current field of view. In this way it is ensured that the focus is always correctly adapted to a certain area or fixed object within the scene, such as an entrance, while the exposure may be set based on the current level of light within that area.
The data input may be arranged to receive data defining the camera settings control area in a coordinate system for the scene, and the overlap determining unit may be arranged to compare coordinates of the camera settings control area to coordinates of the current field of view.
The data input may be arranged to receive data defining a first and a second camera settings control area, and the overlap determining unit may be arranged to determine if there is a first and a second overlapping region corresponding to the first and the second camera settings control area, respectively, and, if there is a first and second overlapping region, the camera settings control unit may be arranged to select one of the first and the second overlapping region based on region properties for the first and the second overlapping region, and to control camera settings based on the selected overlapping region, wherein the region properties include at least one of: size of region, amount of detected motion in region, a priority setting for the camera settings control area corresponding to the respective region.
According to another aspect of the invention a computer-readable recording medium is provided having recorded thereon a program for implementing the herein described method when executed on a device having processing capabilities.
These two latter aspects of the invention provide corresponding advantages to the first aspect of the invention.
A further scope of applicability of the present invention will become apparent from the detailed description given below. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the invention, are given by way of illustration only, since various changes and modifications within the scope of the invention will become apparent to those skilled in the art from this detailed description.
Hence, it is to be understood that this invention is not limited to the particular component parts of the device described or steps of the methods described as such device and method may vary. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting. It must be noted that, as used in the specification and the appended claims, the articles “a,” “an,” “the,” and “said” are intended to mean that there are one or more of the elements unless the context clearly dictates otherwise. Thus, for example, a reference to “a sensor” or “the sensor” may include several devices, and the like. Furthermore, the word “comprising” does not exclude other elements or steps.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention will now be described in more detail by way of example and with reference to the accompanying schematic drawings, in which:
FIG. 1 shows a monitored scene,
FIG. 2 shows a monitoring camera, and
FIG. 3 illustrates a method according to the invention.
DETAILED DESCRIPTION OF EMBODIMENTS
FIG. 1 illustrates a scene 1 with an office building 3, a number of cars 5 driving by, trees 7 standing next to the building 3 and a car park 9 where some cars are parked and which is lit by a lamp post 11. The scene is monitored by a monitoring camera 13, details of which are shown in FIG. 2. The camera 13 is able to move or shift, or in other words alter, change or adjust, its field of view by panning or tilting, or by zooming using some type of zoom mechanism such as an adjustable zoom lens. A current field of view of the camera 13 is illustrated by the rectangle 15.
By altering its field of view 15, the camera 13 covers varying parts of the scene 1. The camera 13 may change how much of the scene 1 is covered; i.e. when zooming in using a telephoto setting, a smaller part of the scene 1 will be covered, and when zooming out to a wide-angle setting, a larger part of the scene 1 will be covered. The camera may be a so called PTZ camera, but it may also be capable of altering its field of view in only one of the pan, tilt or zoom “dimensions”, or in any two of those. It may be noted that the camera 13 is usually mounted in a set position and has a movable field of view, and it is therefore able to determine the position of its current field of view within a coordinate system for the scene 1. Such a coordinate system may sometimes be denoted a common coordinate system or an absolute coordinate system.
As the camera alters its field of view to cover different parts of the scene, the camera settings used when capturing images, such as autofocus, automatic exposure, automatic white balance settings, automatic IR cut filter settings, automatic iris control settings and possibly also settings for a camera controlled illumination unit, are dynamically adapted based on the captured image in order to achieve a good image quality. Different algorithms may be used for this. If motion is detected in part of the current field of view, the camera settings may be adjusted to give a good image quality in that part of the field of view. Another common algorithm is based on detecting contrast in the image and then choosing the focus distance which gives the largest contrast for the overall field of view.
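The contrast-based approach mentioned above can be illustrated by a toy sketch: score each candidate focus distance by the contrast of the image captured at that distance and pick the sharpest. The gradient-based score below is one common choice and is an assumption for this sketch, not an algorithm prescribed by this document.

```python
def contrast_score(image):
    """Sum of squared neighbor differences over a 2D luminance array;
    sharper images give larger scores."""
    score = 0
    for y in range(len(image) - 1):
        for x in range(len(image[0]) - 1):
            score += (image[y][x + 1] - image[y][x]) ** 2  # horizontal gradient
            score += (image[y + 1][x] - image[y][x]) ** 2  # vertical gradient
    return score

def best_focus(images_by_distance):
    """Pick the focus distance whose captured image has the largest contrast."""
    return max(images_by_distance, key=lambda d: contrast_score(images_by_distance[d]))
```

A real autofocus loop would evaluate the score only over a region of interest rather than the full frame, which is exactly what restricting the input to an overlapping region achieves.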
However, as the inventor has realized, in some instances, when using an automatic method for adapting the camera settings, the current field of view may cover a part of the scene where some portion of the field of view is given inappropriate attention in the settings. This may e.g. happen if motion is detected in an “uninteresting” part of the scene covered by the current field of view, and may cause more important parts to e.g. become blurry and less useful for surveillance purposes.
Such a situation may occur when the current field of view 15 is positioned as shown in FIG. 1. In this case moving cars are present in the lower part of the field of view, but the most important part to watch is actually around the entrance 17 to the office building 3. If the camera were to use the fact that motion is detected in the lower part of the currently captured image, which also is closer to the camera than the office building 3, and set e.g. focus and exposure based on that part, the area around the entrance 17 may be captured with less sharpness and additionally be too dark or too bright.
If the camera were to use a contrast-aided method for setting autofocus, it may be the case that the tree trunks of the trees 7 standing next to the building 3 are given too much attention. As the trees 7 in this case are closer to the camera than the building, the office building 3 may in this case also become slightly out of focus, meaning that the real area of interest, the entrance 17, will not be as sharp as desired.
To improve this situation, a number of camera settings control areas 19 have been defined within the scene 1. These areas may be set up at installation of the camera or later on, based on the needs of a user, to cover places of specific interest in the scene, such as in this case the entrance 17 to the office building 3. The camera settings control areas 19 are defined in a manner which allows the camera to determine the relation and a possible overlap between its current field of view and the camera settings control areas. For this purpose, the camera settings control areas may be defined in the coordinate system for the scene. In other words, the camera settings control areas are defined in what could be denoted a PTZ-aware manner.
When at least part of a camera settings control area 19 is covered by the current field of view of the camera, the camera will control its camera settings based on this part, illustrated by the shadowed region 21 in FIG. 1. This means that when a place of specific interest is at least partly covered by a current field of view of the camera, the camera uses camera settings which will give image data having a good quality in the image parts which depict that place. The control of the camera settings could be done by using image data captured by the monitoring camera in the overlapping region as input to one or more automatic algorithms for determining camera settings. Another option is to use predefined values for one or more camera settings as soon as a camera settings control area being associated with one or more such predefined camera settings values is overlapped by the current field of view.
It may be noted that it would also be possible to set a minimum size of the overlapping region for it to be used to control camera settings. In other words, if only a very small overlap exists, it would be possible to control the camera settings in the normal way, e.g. based on the entire captured image.
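The minimum-size check could, purely as a sketch, compare the overlap area against a fraction of the control area. The 5% threshold, and the choice of measuring the overlap relative to the control area rather than the field of view, are assumptions made here for illustration.

```python
def rect_size(r):
    """Area of a (left, top, right, bottom) rectangle; 0 if degenerate."""
    return max(0, r[2] - r[0]) * max(0, r[3] - r[1])

def overlap_is_significant(region, control_area, min_ratio=0.05):
    """Use the overlapping region for settings control only if it covers at
    least min_ratio of the camera settings control area; otherwise the camera
    would fall back to conventional control. region may be None (no overlap)."""
    return region is not None and rect_size(region) >= min_ratio * rect_size(control_area)
```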
In the scene illustrated in FIG. 1, two camera settings control areas 19 have been defined, a first one for the entrance to the office building 3 and another one for the car park 9. It may be noted that the camera settings control area 19 which covers part of the car park 9 has been set up to not cover the lamp on the lamp post 11, thereby making sure that an automatic exposure setting is not adjusted based on image data around the possibly very bright lamp on the lamp post 11.
As the camera field of view 15 moves across the scene, the camera will check, by comparing the position or coordinates of the current field of view with the position or coordinates of any camera settings control areas, if any such areas are encountered. The coordinates of the field of view and the camera settings control areas may be defined in the coordinate system for the scene to start with, or they may be translated into a common coordinate system for the purpose of the comparison. As soon as the current field of view is determined to overlap or cover a camera settings control area, the camera settings will be controlled based on the part of the image which corresponds to the overlapping region 21 of the camera settings control area 19 and the current field of view 15.
In case two or more camera settings control areas are overlapped, partly or entirely, by the current field of view, a number of different options exist to solve the situation. As one alternative, the larger of the overlapping regions may be used to control the camera settings. Another option is to detect if motion is present within any of the overlapping regions, and select the region where motion is detected. If motion is detected in more than one overlapping region, it would be possible to either quantify the amount of motion detected and choose the region with the most motion detected, or to simply fall back to selecting the larger overlapping region.
Yet another option is to allow priorities to be set for the different camera settings control areas and select the overlapping region which is associated with the camera settings control area with the highest priority. It could also be possible in some instances to combine the image data captured in both the covered camera settings control areas to control some camera settings.
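The alternatives above (priority, then detected motion, then size) can be combined into a single ordering. The sketch below assumes each candidate overlapping region has been annotated with a quantified motion amount and the priority of its control area; the dictionary layout and key names are illustrative assumptions.

```python
def rect_size(r):
    """Area of a (left, top, right, bottom) rectangle."""
    return max(0, r[2] - r[0]) * max(0, r[3] - r[1])

def select_region(candidates):
    """candidates: list of dicts with keys 'region' (left, top, right, bottom),
    'motion' (quantified detected motion) and 'priority' (of the corresponding
    camera settings control area). The highest priority wins; ties are broken
    by the most detected motion, then by the largest overlapping region."""
    return max(candidates,
               key=lambda c: (c["priority"], c["motion"], rect_size(c["region"])))
```

Tuple comparison in `max` gives the lexicographic ordering described in the text; the camera settings would then be controlled based on the region returned.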
Turning to the set-up of the camera settings control areas in more detail, these may be defined by a user, e.g. by input to a graphical user interface or by selecting coordinates. It would also be possible to perform some type of intelligent image analysis assisted set-up, e.g. where image elements indicating doors are detected and suggested to the user as possible areas of interest.
In addition to defining the camera settings control areas, it may also be possible to define which camera settings should be controlled by the image data captured in any overlapping region of a camera settings control area. As noted above, the camera settings to be controlled may be at least one of focus settings, exposure settings, white balance settings, IR cut filter settings, iris control settings and settings for an illumination unit, and which of these are to be controlled may be chosen beforehand. It may also be possible to select some of the camera settings to be controlled automatically based on image data captured in the overlapping region and some to be set to a respective predefined value which typically is determined to be appropriate for that camera setting and the area or fixed object in the scene which is covered by that camera settings control area.
In case settings for an illumination unit are included in the camera settings to control, it may be noted that such an illumination unit may either be integrated in the camera or provided as a separate unit, mounted next to or at a distance from the camera. Such an illumination unit may e.g. comprise a number of LEDs and may e.g. provide visible or infra-red illumination.
It would also be possible to allow each camera settings control area to be associated with one or more camera settings to be controlled, so that one camera settings control area would be used only for focus purposes and another only for exposure settings. The camera settings to control may be selected by a user; one convenient option would be to use a graphical user interface and let the user check different boxes in a list of available camera settings to control, either applying to all defined camera settings control areas, or with different choices for the different camera settings control areas.
FIG. 2 illustrates in more detail the monitoring camera 13 arranged to monitor the scene 1. The camera 13 may e.g. be a digital network camera adapted for video surveillance of the scene 1. The camera 13 captures images of the part of the scene 1 covered by the current field of view 15 of the camera 13.
The camera 13 comprises a field of view altering unit 23 arranged to alter the field of view 15 of the camera 13. The field of view altering unit 23 may e.g. comprise one or more devices such as motors which can change one or more of the pan, tilt or zoom settings of the camera, by moving the viewing direction of the camera or changing the setting of a zoom lens. The camera may also be capable of digitally panning, tilting or zooming to alter the field of view 15. A data input 25 is arranged to receive data defining a camera settings control area 19 within the scene 1. This data may e.g. be in the form of coordinates in a coordinate system for the scene 1.
An overlap determining unit 27 is provided to determine if there is an overlapping region 21 between a current field of view 15 of the camera 13 and a camera settings control area 19, e.g. by comparing the position of the current field of view and the position of the camera settings control area 19. The positions may be expressed in coordinates in a coordinate system for the scene 1.
The camera 13 further comprises a camera settings control unit 29 arranged to control camera settings based on the overlapping region 21. The camera settings may be set solely based on this overlapping region, ignoring the remaining image. It could also be possible that other parts of the image are used to some extent, e.g. by using appropriate weighting.
This may e.g. mean that the autofocus algorithm is instructed to only use the image data from the overlapping region 21 when determining an appropriate focus distance, and, at least to some extent, ignore other parts of the captured image which as a result may become blurry. It may also mean that the automatic exposure is set so that the overlapping region 21 is properly exposed, i.e. not too dark and not too bright, even if this would cause other parts of the captured image to become too dark or too bright. The automatic exposure settings may include gain and exposure time. The brightness may also be influenced by letting the automatic iris control set an f-number for an iris for the camera lens based on image data from the overlapping region. In case the camera is equipped with a filter for infra-red light, i.e. an IR cut filter, the position or state (usually on or off) of such a filter may also be set based on the image data in the overlapping region. The white balance may also be controlled based on the overlapping region, more or less ignoring any negative effects that could have on the image quality, typically the colors, of the remaining image captured in the current field of view.
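The weighting idea (basing a setting mainly, but not exclusively, on the overlapping region) could look like the following luminance-metering sketch. The 90/10 weight split and the pixel-rectangle region format are assumptions for illustration, not values from this document.

```python
def weighted_mean_luminance(image, region, region_weight=0.9):
    """image: 2D list of luminance values; region: (left, top, right, bottom)
    pixel rectangle, assumed non-empty. Returns a metering value dominated by
    the overlapping region, with a small contribution from the rest of the frame."""
    inside, outside = [], []
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            in_region = region[0] <= x < region[2] and region[1] <= y < region[3]
            (inside if in_region else outside).append(value)
    mean_in = sum(inside) / len(inside)
    mean_out = sum(outside) / len(outside) if outside else mean_in
    return region_weight * mean_in + (1 - region_weight) * mean_out
```

An automatic exposure loop would then adjust gain and exposure time to bring this metering value toward a target, rather than metering the whole frame uniformly.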
Another camera setting which could be controlled is a setting for an illumination unit. This unit may then be controlled to make sure that the overlapping region is properly lit. Depending on the properties of the illumination unit it may e.g. be possible to control the direction of the illumination or the strength of the illumination, e.g. by switching on or off part of the illumination or by controlling the strength of illumination from a dimmable light source. The direction of the illumination could also be controlled by mechanically turning the unit to steer the illumination in the desired direction. It may also be possible to control the type of illumination, e.g. IR illumination or visible light.
FIG. 3 illustrates a method 300 for monitoring a scene by a monitoring camera. In step 301 the camera accesses data defining a camera settings control area within the scene, which area may have been previously defined e.g. by a user. In step 303 the camera determines if an overlapping region exists between the current field of view, FOV, and the camera settings control area. This may e.g. be done by comparing coordinates for the current field of view with coordinates for the camera settings control area, where the coordinates are expressed in a coordinate system for the scene. The determining may also be performed by an external processing unit connected to the camera.
If it is determined that an overlapping region exists, camera settings for the monitoring camera are controlled in step 307 based on the overlapping region. If it is determined that no overlap exists, camera settings may instead be controlled in a conventional manner. It would also be possible to determine if the current field of view overlaps more than one camera settings control area. This could be done either in sequence after the first determination or in parallel. Data regarding the further camera settings control areas would then be accessed by the camera prior to the respective determination.
It will be appreciated that a person skilled in the art can modify the above-described embodiments in many ways and still use the advantages of the invention as shown in the embodiments above.
Thus, the invention should not be limited to the shown embodiments but should only be defined by the appended claims.

Claims (14)

The invention claimed is:
1. A method of controlling camera settings for a camera arranged to capture images of a scene, the method comprising:
detecting a current field of view of the camera within the scene;
accessing data that indicates a first image control area and a second image control area of the camera within the scene;
determining, by circuitry of the camera, whether the current field of view overlaps with both the first image control area and the second image control area; and
adjusting, when the current field of view overlaps with both the first image control area and the second image control area, image capture settings of the camera by selecting one of a first overlapping region and a second overlapping region based on region properties for the first and second overlapping regions, and adjusting the image capture settings based on the selected overlapping region, wherein
the first overlapping region is a portion of the current field of view that overlaps with the first image control area,
the second overlapping region is a portion of the current field of view that overlaps with the second image control area, and
the region properties include at least one of: size of region, amount of detected motion in region, a priority setting for the image control area corresponding to the respective region.
2. The method of claim 1, wherein the image capture settings include at least one of:
focus settings, exposure settings, white balance settings, IR cut filter mode and iris control settings.
3. The method of claim 1, wherein the adjusting of the image capture settings comprises automatically adjusting the image capture settings according to image data captured by the camera in the selected overlapping region.
4. The method of claim 1, wherein the adjusting of the image capture settings comprises:
accessing data relating to predefined camera settings values related to the first and second image control areas, and
adjusting the image capture settings according to the predefined camera settings values.
5. The method of claim 1, further comprising selecting image capture settings for the one of the first and second image capture control areas, wherein
the adjusting of the image capture settings comprises controlling the selected image capture settings.
6. The method of claim 1, wherein
the data indicates both the first image control area and the second image control area according to a coordinate system for the scene, and
the adjusting of the image capture settings comprises comparing, in the coordinate system for the scene, coordinates of both the first image control area and the second image control area to coordinates of the current field of view.
7. A camera arranged to monitor a scene, the camera comprising:
circuitry that
detects a current field of view of the camera within the scene;
receives data that indicates a first image control area and a second image control area of the camera within the scene;
determines whether the current field of view overlaps with both the first image control area and the second image control area; and
adjusts, when the current field of view overlaps with both the first image control area and the second image control area, image capture settings of the camera by selecting one of a first overlapping region and a second overlapping region based on region properties for the first and second overlapping regions, and adjusting the image capture settings based on the selected overlapping region, wherein
the first overlapping region is a portion of the current field of view that overlaps with the first image control area,
the second overlapping region is a portion of the current field of view that overlaps with the second image control area, and
the region properties include at least one of: size of region, amount of detected motion in region, a priority setting for the image control area corresponding to the respective region.
8. The camera of claim 7, further comprising a motor, wherein
the circuitry controls the motor to adjust the field of view of the camera in at least one of a pan, tilt or zoom direction.
9. The camera of claim 7, wherein the circuitry automatically adjusts the image capture settings according to image data captured by the camera in the selected overlapping region.
10. The camera of claim 7, wherein
the circuitry receives data relating to predefined camera settings values related to the first and second image control areas, and
the circuitry adjusts the image capture settings according to the predefined values.
11. The camera of claim 7, wherein the data that indicates the first and second image control areas is based on user input.
12. The camera of claim 7, wherein
the data indicates both the first image control area and the second image control area according to a coordinate system for the scene, and
the circuitry compares coordinates of both the first image control area and the second image control area to coordinates of the current field of view in the coordinate system for the scene.
13. A non-transitory computer-readable recording medium storing executable instructions that, when executed by circuitry of a camera arranged to capture images of a scene, cause the circuitry to:
detect a current field of view of the camera within the scene;
access data that indicates a first image control area and a second image control area of the camera within the scene;
determine whether the current field of view overlaps with both the first image control area and the second image control area; and
adjust, when the current field of view overlaps with both the first image control area and the second image control area, image capture settings of the camera by selecting one of a first overlapping region and a second overlapping region based on region properties for the first and second overlapping regions, and adjusting the image capture settings based on the selected overlapping region, wherein
the first overlapping region is a portion of the current field of view that overlaps with the first image control area,
the second overlapping region is a portion of the current field of view that overlaps with the second image control area, and
the region properties include at least one of: size of region, amount of detected motion in region, a priority setting for the image control area corresponding to the respective region.
14. The method of claim 1, wherein
the camera includes a light source, and
the image capture settings include lighting settings for the light source.
US14/133,105 2012-12-18 2013-12-18 Monitoring method and camera Active US9270893B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/133,105 US9270893B2 (en) 2012-12-18 2013-12-18 Monitoring method and camera

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
EP12197677.3A EP2747414B1 (en) 2012-12-18 2012-12-18 Monitoring method and camera
EP12197677 2012-12-18
EP12197677.3 2012-12-18
US201261746021P 2012-12-26 2012-12-26
US14/133,105 US9270893B2 (en) 2012-12-18 2013-12-18 Monitoring method and camera

Publications (2)

Publication Number Publication Date
US20140168432A1 US20140168432A1 (en) 2014-06-19
US9270893B2 true US9270893B2 (en) 2016-02-23

Family

ID=47427249

Family Applications (1)

Application Number Title Priority Date Filing Date
US14/133,105 Active US9270893B2 (en) 2012-12-18 2013-12-18 Monitoring method and camera

Country Status (6)

Country Link
US (1) US9270893B2 (en)
EP (1) EP2747414B1 (en)
JP (1) JP5546676B2 (en)
KR (1) KR101578499B1 (en)
CN (1) CN103873765B (en)
TW (1) TWI539227B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6431257B2 (en) 2013-10-21 2018-11-28 キヤノン株式会社 NETWORK SYSTEM, NETWORK DEVICE MANAGEMENT METHOD, NETWORK DEVICE, ITS CONTROL METHOD AND PROGRAM, AND MANAGEMENT SYSTEM
KR102360453B1 (en) 2015-04-10 2022-02-09 삼성전자 주식회사 Apparatus And Method For Setting A Camera
TWI601423B (en) 2016-04-08 2017-10-01 晶睿通訊股份有限公司 Image capture system and sychronication method thereof
US10447910B2 (en) * 2016-08-04 2019-10-15 International Business Machines Corporation Camera notification and filtering of content for restricted sites
US10970915B2 (en) * 2017-01-06 2021-04-06 Canon Kabushiki Kaisha Virtual viewpoint setting apparatus that sets a virtual viewpoint according to a determined common image capturing area of a plurality of image capturing apparatuses, and related setting method and storage medium
JP7165140B2 (en) 2017-05-10 2022-11-02 グラバンゴ コーポレイション Tandem Camera Array for Efficient Placement
EP3422068B1 (en) * 2017-06-26 2019-05-01 Axis AB A method of focusing a camera
US10506217B2 (en) * 2017-10-09 2019-12-10 Facebook Technologies, Llc Head-mounted display tracking system
EP3550823A1 (en) * 2018-04-05 2019-10-09 EVS Broadcast Equipment SA Automatic control of robotic camera for capturing a portion of a playing field
JP7146444B2 (en) * 2018-05-11 2022-10-04 キヤノン株式会社 Control device, control method and program
US20210201431A1 (en) * 2019-12-31 2021-07-01 Grabango Co. Dynamically controlled cameras for computer vision monitoring
JP2023048309A (en) * 2021-09-28 2023-04-07 オムロン株式会社 Setting device, setting method and setting program

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5517236A (en) * 1994-06-22 1996-05-14 Philips Electronics North America Corporation Video surveillance system
US20060250505A1 (en) 2005-05-05 2006-11-09 Gennetten K D Method for achieving correct exposure of a panoramic photograph
US20110228119A1 (en) 2010-03-18 2011-09-22 Canon Kabushiki Kaisha Image pickup apparatus having masking function
US20120151601A1 (en) * 2010-07-06 2012-06-14 Satoshi Inami Image distribution apparatus
US20130021433A1 (en) * 2011-07-21 2013-01-24 Robert Bosch Gmbh Overview configuration and control method for ptz cameras
US8531557B2 (en) * 2010-04-30 2013-09-10 Canon Kabushiki Kaisha Method, apparatus and system for performing a zoom operation

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006005624A (en) * 2004-06-17 2006-01-05 Olympus Corp Imaging device
CN101404725B (en) * 2008-11-24 2010-07-21 华为终端有限公司 Camera, camera set, its control method, apparatus and system
JP5209587B2 (en) * 2009-10-22 2013-06-12 大成建設株式会社 Surveillance camera system, surveillance camera, and light environment adjustment method
JP2013030921A (en) * 2011-07-27 2013-02-07 Canon Inc Imaging device, control method therefor, and program

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5517236A (en) * 1994-06-22 1996-05-14 Philips Electronics North America Corporation Video surveillance system
US20060250505A1 (en) 2005-05-05 2006-11-09 Gennetten K D Method for achieving correct exposure of a panoramic photograph
US20110228119A1 (en) 2010-03-18 2011-09-22 Canon Kabushiki Kaisha Image pickup apparatus having masking function
US8692904B2 (en) * 2010-03-18 2014-04-08 Canon Kabushiki Kaisha Image pickup apparatus having masking function
US8531557B2 (en) * 2010-04-30 2013-09-10 Canon Kabushiki Kaisha Method, apparatus and system for performing a zoom operation
US20120151601A1 (en) * 2010-07-06 2012-06-14 Satoshi Inami Image distribution apparatus
US20130021433A1 (en) * 2011-07-21 2013-01-24 Robert Bosch Gmbh Overview configuration and control method for ptz cameras

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
European Search Report issued Jun. 6, 2013, in European Application No. 12197677.3.

Also Published As

Publication number Publication date
EP2747414A1 (en) 2014-06-25
US20140168432A1 (en) 2014-06-19
JP5546676B2 (en) 2014-07-09
JP2014121092A (en) 2014-06-30
KR101578499B1 (en) 2015-12-18
KR20140079332A (en) 2014-06-26
TWI539227B (en) 2016-06-21
CN103873765A (en) 2014-06-18
CN103873765B (en) 2018-09-25
TW201428411A (en) 2014-07-16
EP2747414B1 (en) 2015-04-08

Similar Documents

Publication Title
US9270893B2 (en) Monitoring method and camera
US8899849B2 (en) Camera apparatus and method of controlling camera apparatus
US9781334B2 (en) Control method, camera device and electronic equipment
US8922673B2 (en) Color correction of digital color image
KR101658212B1 (en) Method of selecting an optimal viewing angle position for a camera
US10587809B2 (en) Continuous shooting device, continuous shooting method and continuous shooting control method using preliminary and calculated parameter values
TW200917828A (en) Image pickup apparatus and focusing condition displaying method
JP2010004465A (en) Stereoscopic image photographing system
JP2015103852A (en) Image processing apparatus, imaging apparatus, image processing apparatus control method, image processing apparatus control program, and storage medium
JP2017063245A (en) Imaging device
US20140078326A1 (en) Focus control device, method for controlling focus and image pickup apparatus
JP4310309B2 (en) Optical device and method for controlling optical device
JP2013205675A (en) Imaging apparatus
CN115442941A (en) Method, device, camera, equipment and storage medium for controlling lamplight
US10652460B2 (en) Image-capturing apparatus and image-capturing control method
JP5217843B2 (en) Composition selection apparatus, composition selection method and program
KR101276877B1 (en) Method for controlling gain each area of monitoring camera
JP4438065B2 (en) Shooting control system
JP2019193196A (en) Imaging apparatus and control method of the same
US20090102932A1 (en) Black Card Controlling Method and Electronic Device Thereof
JP5384173B2 (en) Auto focus system
CN111385441B (en) Waterproof camera system with adjustable position and waterproof camera control method
US20210061229A1 (en) Imaging device, method of controlling imaging device, and storage medium
KR101514684B1 (en) Control method of closed circuit television and control apparatus thereof
JP2016200742A (en) Imaging apparatus

Legal Events

Code  Title / Description
AS    Assignment
      Owner name: AXIS AB, SWEDEN
      Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NYSTROM, JOHAN;REEL/FRAME:032406/0477
      Effective date: 20131219
STCF  Information on status: patent grant
      Free format text: PATENTED CASE
MAFP  Maintenance fee payment
      Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
      Year of fee payment: 4
MAFP  Maintenance fee payment
      Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
      Year of fee payment: 8