WO2018096208A1 - Imaging device and method
- Publication number: WO2018096208A1
- Application: PCT/FI2017/050804
- Authority: WIPO (PCT)
- Prior art keywords: region, interest, scene, seam, overlap region
Classifications
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B37/00—Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe
- G03B37/04—Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe with cameras or projectors providing touching or overlapping fields of view
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
- H04N23/951—Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/16—Image acquisition using multiple overlapping images; Image stitching
Brief description of figures
- Figure 1 is a schematic diagram of a camera according to an embodiment
- Figure 2 is a schematic diagram showing certain elements of a camera according to an embodiment
- Figure 3 schematically shows a scene being recorded by a camera, according to an embodiment
- Figure 4 schematically shows a scene being recorded by a camera having multiple lens assemblies, according to an embodiment
- Figure 5 shows visual output of a camera, for the purposes of explanation
- Figure 6 shows visual output of a camera, for the purposes of explanation
- Figure 7 shows visual output of a camera, for the purposes of explanation
- Figure 8 shows visual output of a camera, for the purposes of explanation
- Figure 9 shows visual output of a camera, for the purposes of explanation
- Figure 10 schematically shows a scene being recorded by a camera, according to an embodiment, the scene comprising regions of interest
- Figures 11A and 11B schematically show movement of a seam in response to interaction with a region of interest, according to an embodiment
- Figures 12A and 12B schematically show movement of a seam in response to interaction with a region of interest, according to an embodiment
- Figures 13A and 13B schematically show movement of a seam based on relative priorities of regions of interest, according to an embodiment
- Figure 14 schematically shows adjustment of a shape of a seam line, according to an embodiment
- Figure 15 shows a flow chart of a method according to an embodiment.
Detailed description
- Figure 1 is a schematic plan view of a camera device suitable for practising embodiments of the invention.
- the camera device 100 comprises at least two image receiving devices.
- the at least two image receiving devices comprise lenses or lens assemblies 102, 104, 106 and 108.
- the lenses may be wide angle lenses.
- Each lens is capable of receiving image data of a scene.
- the "scene" may be considered to be a location at which the camera device 100 is located.
- Each lens can record an image of the scene or a portion of the scene, which images can then be joined or stitched together to provide a panoramic or 360° view of the scene.
- Further lenses may also be provided at other locations on the camera 100, in addition to those shown in Figure 1. For example, further lenses may be placed on the bottom and/or top of the camera to enable a spherical image of the scene to be recorded.
- Each lens or lens assembly 102 to 108 may also be termed a camera or camera module, or image receiving device or image receiving means, and these terms are used interchangeably herein. Therefore it may be considered that the camera device 100 may be made up of a number of constituent camera devices or modules 102, 104, 106, 108. Parameters of the camera can be adjusted during use. For example parameters such as focal length, aperture, zoom, white balance (or any one of or any combination thereof) can be adjusted for each lens assembly. In some embodiments these adjustments may be carried out in real time as a scene is being recorded. In some embodiments these parameters are adjusted automatically in response to determining conditions of a scene (e.g. distance from or movement of an object being recorded, light level etc.).
- these parameters may be adjusted manually, either on the camera device 100 or remotely, for example via a connected computing device.
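By way of illustration only, the adjustable per-lens parameters described above might be modelled as follows in Python; the class, field names and default values are assumptions made for this sketch, not taken from the patent.

```python
from dataclasses import dataclass, replace

@dataclass
class LensParams:
    """Adjustable parameters for one lens assembly (illustrative values)."""
    focal_length_mm: float = 8.0
    aperture_f: float = 2.8
    zoom: float = 1.0
    white_balance_k: int = 5600

def adapt_to_light_level(params: LensParams, light_level: float) -> LensParams:
    # Hypothetical automatic adjustment: open the aperture as the scene darkens.
    if light_level < 0.2:
        return replace(params, aperture_f=max(1.4, params.aperture_f / 2))
    return params
```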
- the camera device is shown as having four lens assemblies, it will of course be appreciated that more or fewer lens assemblies may be provided. According to some embodiments two or more lens assemblies are provided.
- The camera device may be mounted for movement thereof. Viewing Figure 1, the camera device can be configured for movement along the X and/or Y axes. The camera may also be mounted so that its vertical position can be adjusted, e.g. along a Z axis into the paper when viewing Figure 1.
- the camera device may also be able to rotate.
- The camera device may be configured for rotation about a central axis 101.
- The camera device 100 may be configured for clockwise and/or anti-clockwise rotation when viewing Figure 1.
- the camera device may also be configured to pitch and/or tilt. To this end a suitable mounting may be provided for the camera device to enable any one of or any combination of the above movements.
- FIG. 2 is a schematic diagram showing certain elements of the camera 100.
- a lens module (or lens assembly or camera) is shown at 102.
- The lens module 102 is in communication with an image sensor module 110.
- The image sensor 110 is in communication with processor 112 and memory 114.
- The camera 100 also comprises an interface 116 for interfacing with one or more external apparatus.
- The interface 116 may be operable to communicatively connect the camera 100 to an external computing device, such as a computer which can control aspects of the camera 100.
- The interface 116 can be a wired and/or wireless interface.
- The camera 100 also comprises a power module 118 for powering the camera 100.
- The power module 118 may comprise a battery pack.
- The battery pack may be detachable from the camera 100.
- The camera 100 may additionally or alternatively comprise a power interface 120 for interfacing with an external power supply, such as a mains supply.
- The power interface 120 may be operative to directly power the camera 100 and/or to charge the power module 118.
- It will be understood that Figure 2 is by way of example only and that the camera 100 may also include other optional features and/or modules.
- the camera may comprise a display module for displaying to an operator images that have been or are being recorded by the camera.
- a compression module may also be provided for compressing image data, for example for compressing image data as it is received.
- It will be understood that the inclusion in Figure 2 of a single lens 102 is for ease of explanation only, and that further lenses (e.g. lenses 104, 106, 108) may be incorporated within the hardware of the camera 100.
- Figure 3 shows a typical approach to stitching 360° video.
- the camera device is shown generally at 300 and comprises lens assemblies (or cameras) 302, 304, 306 and 308.
- the field of view of the lens 302 is represented by arrow 322.
- the other lens assemblies 304 to 308 similarly record images of a portion or segment of the scene 332 at the location of the camera 300.
- Portion 329 of Figure 3 shows an "opened out" or panoramic view of the images obtained by the camera.
- the region of the scene captured by lens 302 is again shown with arrow 322.
- a region of the scene captured by lens 304 is represented by arrow 324.
- a region of the scene captured by lens 308 is shown by arrow 326.
- a region of the scene captured by lens 306 is represented by arrows 328 and 330.
- There are regions of overlap between the images recorded by the different lenses. For example, there is an overlap region 334 between the image portions 322 and 324.
- the overlap regions comprise regions of the scene that have been captured by two (or more) cameras.
- There is an overlap region 336 between the portions 322 and 326, an overlap region 337 between image portions 330 and 326, and an overlap region 339 between image portions 324 and 328.
- overlap regions may also be referred to as merge regions, convergence regions, blend regions etc., and these terms may be used interchangeably throughout.
- the terms regions, portions, sections, segments etc. may also be used interchangeably.
- Overlap region 337 comprises seam line 309. Overlap region 336 comprises seam line 307. Overlap region 334 comprises seam line 303. Overlap region 339 comprises seam line 305.
- These seam lines may also be referred to as seams, stitch lines or stitching, or join lines etc.
- The seam lines represent positions where two images meet. In some embodiments the seam lines are virtual positions which represent where two neighbouring images meet. Therefore in some embodiments a seam between two images represents a point (or points) which a processor (e.g. image processor 112) considers to be where two images are joined or stitched together. Of course there may still be a region of overlap, but the seam may be considered a virtual or notional position that is considered (e.g. by the processor) to be the join between the two images.
- The seam lines are not visible to a viewer of the stitched image. That is, the seam lines shown in the Figures may be considered virtual positions; the seams are shown in Figure 3 in a visible fashion for the purpose of explanation.
- Although the overlap regions are shown as vertical bands, it will be understood that this is by way of example only. In other embodiments the overlap regions, and likewise the seams, may be oriented differently, depending upon the orientation of the cameras. For cameras placed side by side the overlap regions and seams may be vertical or substantially vertical; for cameras stacked one upon the other they may be horizontal or substantially horizontal. An image may also comprise a combination of horizontal and vertical overlap regions and seams, for example where the overall camera device comprises a combination of side-by-side and stacked cameras. The overlap regions and seams may also be at any angle between horizontal and vertical.
- the image data in the overlap regions may be blended to minimise any differences between the images.
- a blending algorithm may be used. Blending may enable a smooth transition between adjacent images. Blending may help to lessen the impact of, for example, exposure differences between adjacent images.
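A minimal sketch of the cross-fade style of blending referred to here, assuming two already-aligned overlap strips of equal shape; the linear alpha ramp and function name are illustrative assumptions, not the patent's own algorithm.

```python
import numpy as np

def cross_fade(left_strip: np.ndarray, right_strip: np.ndarray) -> np.ndarray:
    """Linearly blend two aligned (H, W, C) overlap strips from left to right."""
    w = left_strip.shape[1]
    alpha = np.linspace(0.0, 1.0, w).reshape(1, w, 1)  # weight of the right image
    blended = (1.0 - alpha) * left_strip + alpha * right_strip
    return blended.astype(left_strip.dtype)
```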
- Nevertheless, features or aspects in the image may be distorted. This may be the case in particular in the overlap region(s) of the image.
- the distortion may be more visible and disturbing depending on the kind of content which lies within the overlap region.
- the distortion may be particularly disturbing or distracting if content is moving whilst in the overlap region, since the human eye is naturally drawn towards motion.
- A scene may comprise one or more regions of interest (ROIs).
- Aspects other than the region of interest may be considered background regions.
- For example, where the scene is a lecture, the region of interest may be the lecturer, and background image data may include the wall behind the lecturer.
- Where the scene is a concert, the region of interest may be the lead singer, and further regions of interest may include other band members.
- Background regions may include aspects of the staging and spectators etc.
- Although in these examples the regions of interest are human beings, it will of course be understood that the invention is not limited as such.
- The regions of interest could additionally or alternatively be animals, cars, planes etc. Generally speaking, therefore, a region of interest can be considered to be a region or element upon which a viewer is expected or predicted to focus.
- Convergence may be considered to be the distance at which an image from two adjacent sensors is perfectly aligned. Subjects closer than a convergence point may be obstructed or occluded, whereas subjects further away than the convergence point may be increased in size or doubled. In an example set-up, a convergence value of 1 represents infinity, and 1.06 is about 1 metre. The usable range may lie between these values. It is also to be noted that these values may depend heavily upon the camera set-up and the distances between cameras.
- Distortions may be caused by camera calibration parameters. Distortions may also be caused by the blending process, which may result in imperfect blending between the images. For example, as mentioned above, colours between two adjacent images may not be blended smoothly, which may make the overall image look discontinuous in nature. Therefore in some embodiments blending algorithms may be used which adjust the colours between images and adjust the seam lines to minimise the visibility of seams between images. Nevertheless, blending may cause image distortions due to parallax effects from unwanted motion of the optical centre, misregistration errors due to camera calibration parameter errors, radial distortion etc.
- Figure 4 shows an example scene including a region of interest 440.
- the region of interest 440 is a person.
- the person 440 is being recorded by two different cameras or lenses, in this case a first camera 402 and a second camera 404.
- the image frame captured by camera 402 is shown at 422, and the image frame captured by camera 404 is shown at 424.
- The image frames 422 and 424 overlap at an overlap region 434, and this overlap region 434 is susceptible to artifacts.
- the artifacts may comprise image distortion, and/or blending imperfections etc.
- the person 440 is within the overlap region 434, and therefore a viewer's attention will be directed to the region that is most susceptible to errors. It may be considered that the region of interest 440 is interacting with the overlap region 434.
- Figure 5 shows another example.
- the scene 501 is a lecture theatre.
- the region of interest is a lecturer 540.
- Background image data, shown generally at 542 includes the audience and structural items of the lecture theatre.
- the scene is being recorded using a multi-camera or multi lens assembly as previously described.
- the overlap regions are in the form of and/or result in vertical banding 544, 546, 548 and 550.
- the lecturer 540 is outside of the overlap regions and therefore there is little or no distortion on the lecturer or region of interest 540.
- the inventors of the present invention have thus realised that it would be desirable to be able to keep regions of interest outside of overlap or convergence regions of an image.
- Embodiments may also enable determining or choosing a width of the overlap region(s).
- In Figure 7 the convergence region 546 is relatively narrow. As shown, this has caused significant distortion of the lecturer 540, making him appear unnaturally narrow.
- Figure 8 shows the same scene as Figure 7, except a relatively large convergence region 546 is provided within which the lecturer 540 is located.
- the band 546 completely overlays the lecturer 540, but the actual distortion of the physical characteristics of the lecturer 540 is less than when compared with Figure 7.
- Figure 9 shows an example where the lecturer 540 is relatively far away from the camera, and is in a relatively broad convergence region 548. Again this may be undesirable since the visibility of the lecturer 540 is at least somewhat diminished by virtue of his distance from the camera, and the visual quality of the lecturer 540 is further diminished by virtue of being fully contained within visible band 548.
- Distortion effects may also be present in images due to the parallax effect of objects being present in different depth layers of the image.
- region of interest objects are located in a different depth layer than background aspects of an image.
- distortion tends to be more visible and disturbing the closer the object is to the camera.
- FIG. 10 shows a scenario similar to that depicted in Figure 3.
- the camera 1000 comprises four lens assemblies 1002, 1004, 1006 and 1008, each recording image data of portions of a scene 1032 as previously described.
- Convergence region 1036 is a convergence of images captured by cameras 1002 and 1008.
- Convergence region 1036 comprises a seam 1009.
- Convergence region 1034 is formed by the convergence of image data between cameras 1002 and 1004.
- Convergence region 1034 comprises a seam 1003.
- Convergence region 1037 is formed by the convergence of image data from camera 1004 and camera 1006. Convergence region 1037 comprises a seam 1005. A convergence region 1039 is also formed by convergence between image data captured by camera 1006 and camera 1008. Convergence region 1039 comprises a seam 1007. In the scene 1032 there are three regions of interest, represented by circles 1040, 1041 and 1043. As discussed previously it has been recognised by the present inventors that it would be desirable that regions of interest do not cross the seam lines in the convergence regions between adjacent images.
- region of interest 1040 is outside convergence region 1039 (and likewise outside of any other convergence regions), and therefore does not overlap or conflict with convergence region 1039 or seam 1007. Accordingly there is no need to adjust the position of seam 1007 (or of any other seam).
- a seam is positioned in the middle of its associated convergence region.
- The seam runs parallel to and is equidistant from the long edges of the band, by default.
- a seam may be positioned so as to conform to the shape of the edges of the convergence region in which it is located, and to lie mid-way or substantially mid-way between the outer edges of the convergence region. For example if a convergence region is curved, then the seam may similarly be curved.
- ROI 1041 is positioned in convergence region 1036. This overlapping between the ROI and convergence region is detected, for example by a suitable tracking algorithm.
- The position of seam 1009 is shifted in the convergence region 1036 so that it is not crossed by the region of interest 1041. That is, the seam is shifted from its first, default position to a second position that is different from the default position.
- Region of interest 1043 is only partially located within convergence region 1037, but the seam 1005 is in any case moved away from its default position to minimise the chance of a collision with region of interest 1043.
- the seam is moved from its default position in response to a determination that at least a portion of a region of interest is located in an associated convergence/overlap region of the respective seam.
- the tracking algorithm and seam control algorithms can predict whether an ROI is going to or is likely to interact with an overlap region and/or its associated seam.
- the algorithm(s) may cause adaptation of the overlap region e.g. by adjusting the position of the seam, before the interaction takes place. This may be before the ROI enters the overlap region. Therefore in some embodiments it may be considered that an overlap region is caused to be adapted in response to detecting interaction or potential interaction between the region of interest and the overlap region.
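Such a prediction could be as simple as the following sketch, which assumes the ROI is tracked as a one-dimensional position and velocity along the panorama axis and extrapolates at constant velocity; all names and the horizon parameter are illustrative assumptions.

```python
def predicts_interaction(roi_x: float, roi_vx: float,
                         region_left: float, region_right: float,
                         horizon_s: float = 1.0) -> bool:
    """Return True if the ROI is in, or is predicted to enter, the overlap
    region within the next `horizon_s` seconds, assuming constant velocity."""
    future_x = roi_x + roi_vx * horizon_s
    swept_lo, swept_hi = min(roi_x, future_x), max(roi_x, future_x)
    # The swept interval intersects the overlap region iff neither interval
    # lies wholly to one side of the other.
    return swept_hi >= region_left and swept_lo <= region_right
```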
- an amount by which a seam is moved from its default position may be dependent upon an amount by which the region of interest overlaps or conflicts with the associated convergence region, and/or dependent upon a distance from the region of interest to the seam.
- the region of interest 1041 is fully contained within convergence region 1036, whereas region of interest 1043 is only partially overlapping with convergence region 1037.
- Accordingly, the seam line 1009 is moved by a greater extent than the seam line 1005 with respect to their default (mid-way) positions.
- the movement of the seam may be controlled by an automatic seam control algorithm.
- The seam control algorithm is in some embodiments configured to place the seam at the centre of its available space. "Available space" may be considered to be the space in a convergence region that is not occupied by an ROI. This provides the widest possible space for cross-fade transition.
- In Figure 11A a region of interest 1140 is adjacent to, but outside of, convergence region 1139. Therefore the seam line 1107 is positioned in the middle of the convergence region 1139, i.e. in the middle of the available space.
- The convergence region 1139 may be considered to have a width X. In this example the distances Y and Z are each equal to X/2.
- In Figure 11B the region of interest 1140 has entered the convergence region 1139, thus occupying a portion of the convergence region. The seam 1107 is then positioned centrally in the remaining available space (i.e. that space not occupied by ROI 1140), which has a width W, such that each of Y' and Z' is equal to W/2.
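The centre-of-available-space rule of Figures 11A and 11B can be expressed in a simplified one-dimensional model along the panorama axis; treating the ROI and overlap region as intervals is an assumption made for illustration.

```python
from typing import Optional, Tuple

def place_seam(region: Tuple[float, float],
               roi: Optional[Tuple[float, float]] = None) -> float:
    """Place the seam centrally in the available space of an overlap region.

    With no ROI inside the region the seam sits mid-way across its width X
    (Y = Z = X/2); when an ROI occupies part of the region the seam is centred
    in the larger free interval beside the ROI (Y' = Z' = W/2)."""
    left, right = region
    if roi is None:
        return (left + right) / 2.0
    roi_left, roi_right = roi
    if roi_right < left or roi_left > right:  # ROI outside the region
        return (left + right) / 2.0
    free_left = max(0.0, roi_left - left)
    free_right = max(0.0, right - roi_right)
    if free_left >= free_right:
        return left + free_left / 2.0
    return right - free_right / 2.0
```

For instance, with a region (0, 100) and an ROI at (20, 40), the seam is placed at 70, the midpoint of the larger free interval.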
- movement vectors may be associated with the tracked ROI(s) so that movement of the ROI(s) can be monitored. Accordingly the seam(s) can be moved away from the moving ROI(s).
- In some embodiments a seam is caused to move in a direction that is the same as a direction of movement of a tracked ROI. This is shown for example in Figure 12A, where a tracked ROI 1240 is moving in the direction of Arrow C (i.e. from left to right when viewing Figure 12A). In response, the seam 1207 is also moved in the direction of Arrow C, away from region of interest 1240.
- a direction of movement (or movement vector) of a seam is configured to follow or mimic a direction of movement (or movement vector) of an associated ROI. This enables the seam to avoid (or attempt to avoid) the ROI as the ROI approaches the seam, and also enables the seam to maximise the available space as the ROI retreats from the seam.
- priorities may be attached to or associated with ROI(s) which are being monitored.
- the seam control algorithm may be configured to prioritise avoiding ROIs with higher priorities. For example if the camera is recording a rock band at a concert, then the singer may be considered an ROI having a higher priority than the drummer, since it is likely that a viewer's focus will be on the lead singer. This may especially be the case if there is no 3-D position information available (for example only angle information is available).
- Figure 13 shows two ROIs 1340 and 1341, both of which are either in or at least partially in convergence region 1339.
- the first ROI 1340 has a priority P1
- The second ROI 1341 has a second priority P0.
- The priority P1 is greater or higher than the priority P0, i.e. ROI 1340 is of a higher priority than ROI 1341.
- The seam control algorithm determines to move or position the seam such that it avoids region of interest 1340 as a priority, even though this may mean that seam 1307 has to cross ROI 1341, as shown in the example of Figure 13.
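One way to realise this prioritisation is to score candidate seam positions by the total priority of the ROIs they would cross, as in the sketch below; the interval representation and sampling scheme are assumptions made for illustration.

```python
from typing import List, Tuple

def place_seam_by_priority(region_left: float, region_right: float,
                           rois: List[Tuple[float, float, float]],
                           samples: int = 200) -> float:
    """Pick the seam position whose total crossed-ROI priority is smallest.

    `rois` holds (left, right, priority) tuples. Crossing a high-priority ROI
    costs more, so if the seam cannot avoid every ROI it crosses the one with
    the lowest priority (cf. seam 1307 crossing ROI 1341 in Figure 13)."""
    best_x, best_cost = region_left, float("inf")
    for i in range(samples + 1):
        x = region_left + (region_right - region_left) * i / samples
        cost = sum(p for lo, hi, p in rois if lo <= x <= hi)
        if cost < best_cost:
            best_x, best_cost = x, cost
    return best_x
```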
- the shape and/or orientation of the seam may be adjusted so as to avoid one or more regions of interest.
- The seam line does not have to be a straight line.
- the seam could be curved, zigzagged, stepped etc. (and/or shifted or transitioned between these shapes) so as to avoid one or more ROIs.
- Figure 14 shows an example where the shape of the seam 1407 is adjusted to a curve so as to avoid both ROIs 1440 and 1441.
- The shape of the seam line 1407 may be adjusted in real-time and in a fluid manner so as to avoid the regions of interest 1440 and 1441. Therefore in some embodiments it may be considered that the shape of the seam is adjusted in a continuous manner. In other embodiments the shape of the seam may be adjusted in a step-change manner.
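A simplified sketch of such a shaped seam computes one x-coordinate per image row and detours around ROI bounding boxes; the box format and margin are assumptions, and a real implementation might additionally smooth the resulting path so that the shape changes in the fluid manner described above.

```python
from typing import List, Tuple

def shaped_seam(height: int, default_x: int,
                roi_boxes: List[Tuple[int, int, int, int]],
                margin: int = 10) -> List[int]:
    """Return a per-row seam x-coordinate bending around (top, bottom, left,
    right) ROI boxes: rows covered by a box push the seam just past whichever
    box edge is nearer to the default straight-line path."""
    path = [default_x] * height
    for top, bottom, left, right in roi_boxes:
        if abs(default_x - left) <= abs(default_x - right):
            detour = left - margin   # pass on the left side of the ROI
        else:
            detour = right + margin  # pass on the right side of the ROI
        for y in range(max(0, top), min(height, bottom)):
            path[y] = detour
    return path
```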
- a similar effect may be achieved by adjusting the orientation of a seam.
- A straight seam could be tilted at an angle to deliver the same effect of avoiding ROIs 1440 and 1441.
- the automatic seam control algorithm may therefore provide an improved visual output. Additionally, the automatic seam control algorithm means that there is no need to manually adjust the seam.
- the seam may be "jumped" from a first position to a second position to avoid crossing with the ROI. That is the seam may move from a first position to a second, different position in a step change.
- When it is determined that the region of interest crossing the seam is inevitable, the convergence point and width of the convergence region may be adjusted to mitigate this. The adjusting may take into account location and parameters as follows:
- adaptive blending and stitching algorithms may be applied separately to the region of interest and the background, in order to minimise distortions caused by depth differences
- The width of the convergence region may also be affected by the size of the ROI(s) being tracked, for example the horizontal and/or vertical size of the regions of interest being tracked (a sketch of such a width rule follows this list).
- a shape and line detection algorithm may be run so as to optimise/minimise distortions caused by adjusting the other parameters.
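As a sketch of the width rule mentioned in the list above, the overlap width might be interpolated from the ROI's distance to the camera; the near/far distances and pixel widths below are illustrative assumptions only.

```python
def convergence_width(roi_distance_m: float,
                      near_m: float = 1.0, far_m: float = 10.0,
                      min_px: int = 40, max_px: int = 240) -> int:
    """Choose an overlap/convergence width in pixels from the ROI's distance.

    Nearby subjects suffer stronger parallax, so a wider blend region is used
    for them and a narrower one for distant subjects (cf. Figures 7 to 9)."""
    clamped = min(max(roi_distance_m, near_m), far_m)
    t = (clamped - near_m) / (far_m - near_m)  # 0 = nearest, 1 = farthest
    return int(round(max_px + t * (min_px - max_px)))
```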
- the ROI(s) can be initially selected in a number of different ways.
- In some embodiments the ROI(s) are determined/selected in a manual manner. For example a user may specify, for example using a cursor on a display, the identified regions of interest. Those ROIs can then be tracked by a tracking algorithm. Additionally or alternatively, trackers may be physically placed on the ROI(s) so that they are then subsequently tracked by the tracking algorithm. For example RFID tags may be manually placed on ROI(s) such that those ROI(s) are then tracked by the system.
- Magnetometers, LIDAR and/or computer vision algorithms may also be used.
- a GPS tracker may also be used. For example a GPS on an ROI's mobile device may be used.
- adjusting the overlap region and/or seam is carried out by adjusting parameter(s) of the lens assemblies/cameras.
- the parameter(s) are physical parameter(s).
- the parameters may comprise one or more of or any combination of: orientation; focal length; aperture; zoom; white balance; frame rate. These parameters may be adjusted in one or more lens assemblies of the camera. Adjustments may be carried out differently in different lens assemblies. These adjustments may be made in real time.
- the orientation of a lens assembly may be adjusted relative to another lens assembly in the camera.
- Such adjustment may comprise adjusting a physical position of a lens assembly. Adjusting the physical position may comprise movement in space, such as translation and/or rotation.
- the entire camera assembly may be moved, whilst the lens assemblies remain in fixed positions relative to each other. For example in order to shift a seam line away from an ROI then the entire camera assembly may be moved e.g. by translation and/or rotation thereof.
- the initial determination/selection of ROI(s) may be carried out automatically. For example a region detection algorithm may be used to select ROI(s) automatically. Additionally or alternatively visual object recognition may be used to determine the ROI(s).
- Visual objects can be anything from human faces to dogs, cats, cars etc. Developments in deep learning algorithms mean that thousands of different categories can be identified. Furthermore, new categories can be added over time, and the system may be trained to learn new categories during operation or following software updates. Accordingly there is no practical limitation on the number of objects that can be recognised by a visual object recognition algorithm.
- A tracker algorithm can be used to track motion of the ROI.
- The tracking algorithm then tracks the ROI throughout the frames and can continuously provide output coordinates indicating the spatial location of the ROI in every frame.
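The per-frame coordinate output described here could, in the simplest case, come from a toy nearest-centroid tracker such as the one sketched below; this is a stand-in for a real tracking algorithm, and the box format and class name are assumptions.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x, y, width, height)

class RoiTracker:
    """Toy nearest-centroid tracker: re-associates the ROI with the closest
    detector box in each new frame and reports its coordinates."""

    def __init__(self, initial_box: Box) -> None:
        self.box = initial_box

    def update(self, candidate_boxes: List[Box]) -> Box:
        cx = self.box[0] + self.box[2] / 2.0
        cy = self.box[1] + self.box[3] / 2.0
        self.box = min(
            candidate_boxes,
            key=lambda b: (b[0] + b[2] / 2.0 - cx) ** 2
                          + (b[1] + b[3] / 2.0 - cy) ** 2,
        )
        return self.box
```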
- The decision-making mechanism for the convergence regions (e.g. the stitching algorithm) tries to avoid placing the ROI(s) in a position such that they overlap with the convergence regions. Accordingly, viewers are provided with the best possible visual experience without needing a manual operator to adjust camera settings constantly during use.
- the algorithms may use at least the following rules:
- The width of the convergence/overlap regions of adjacent images is chosen according to the distance of the ROI(s) to the camera.
- the distance may be inferred from different information sources, for example depth or size of the tracked ROI(s).
- the blending and/or stitching algorithms can be adaptive.
- the blending and/or stitching algorithms can be applied individually to ROI(s) and the background, which generally appear in different depth layers.
- The shape of the ROI(s) can be deduced by adopting a shape detector algorithm (e.g. a deep learning algorithm), and using this information the algorithm may optimise the parameters such that straight lines in the image remain as straight as possible, and curves in the image remain as smooth as possible.
- the optimisation process can be achieved with either (a) a grid search algorithm where all possible parameters are tried and the best result is chosen, or (b) a deep learning based algorithm which would automatically fit the shape constraints and consequently choose the best parameters.
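Option (a) amounts to an exhaustive sweep over candidate stitching parameters. A minimal sketch follows; the parameter grid and the quality score (for example, how straight detected lines remain in the stitched output) are supplied by the caller and are assumptions of this illustration.

```python
from itertools import product
from typing import Any, Callable, Dict, List

def grid_search(param_grid: Dict[str, List[Any]],
                score: Callable[[Dict[str, Any]], float]) -> Dict[str, Any]:
    """Try every combination of candidate stitching parameters and return the
    one that maximises the caller-supplied quality score."""
    names = list(param_grid)
    best, best_score = {}, float("-inf")
    for values in product(*(param_grid[name] for name in names)):
        candidate = dict(zip(names, values))
        candidate_score = score(candidate)
        if candidate_score > best_score:
            best, best_score = candidate, candidate_score
    return best
```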
- the invention provides algorithms to decide on convergence points/regions, blending and convergence widths and may adapt or select the stitching and/or blending algorithms for regions of interest and the background separately, which may eliminate the need for constant manual supervision during a live broadcast. Although adapting the convergence regions may have some visual effect, this is outweighed by the advantages of reduced distortion of the ROI(s) in the image.
- Figure 15 is a flow chart showing a method according to an embodiment.
- In step S1 first image data of a first portion of a scene is recorded.
- In step S2 second image data of a second portion of the scene is recorded.
- The second portion of the scene at least partially overlaps with the first portion of the scene at an overlap region. Steps S1 and S2 may occur in parallel.
- In step S3 a region of interest in the scene is monitored.
- In step S4, in response to detecting, by a processor, interaction or potential interaction between the region of interest and the overlap region, the overlap region is caused, by the processor, to be adapted.
- An appropriately adapted computer program code product or products may be used for implementing the embodiments, when loaded on an appropriate data processing apparatus.
- the program code product for providing the operation may be stored on, provided and embodied by means of an appropriate carrier medium.
- An appropriate computer program can be embodied on a computer readable record medium.
- a possibility is to download the program code product via a data network.
- the various embodiments may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Embodiments of the inventions may thus be practiced in various components such as integrated circuit modules.
- the design of integrated circuits is by and large a highly automated process. Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.
Abstract
A method comprising: recording image data of a scene (1032); the recording image data of a scene comprising recording first image data of a first portion of the scene and recording second image data of a second portion of the scene, the second portion of the scene at least partially overlapping with the first portion of the scene at an overlap region (1034, 1036, 1037, 1039); monitoring a region of interest (1040, 1041, 1043) in the scene; and in response to detecting, by a processor, interaction or potential interaction between the region of interest and the overlap region, causing, by the processor, the overlap region to be adapted.
Description
Imaging device and method
Field
The present invention relates to an imaging device, method and computer program. More particularly, the present invention relates to an imaging device, method and computer program where two or more adjacent images can be joined together to provide a panoramic, 360 degree or spherical image.
Background It is known to use an imaging device such as a camera to join two or more images together to provide a wide angle, panoramic or 360 degree view of a scene. This can be achieved by taking two or more images or photographs in turn, and then joining them together to provide a panoramic view. It is also known to record images from two or more separate imaging devices (which may be part of the same overall imaging device), and then joining those separate images in a similar fashion. This latter method may be used for obtaining panoramic motion picture images, for example for live streaming of videos. Whichever way is used to obtain the images, the process of joining them together may be referred to as "stitching".
Stitching can be a computationally heavy process. Furthermore, stitching camera images may have issues on image edges or seams. In a studio environment the stitching can be achieved using complex algorithms such as optical flows. In a live video streaming case it is usually not possible for such algorithms to be used, so typically the camera images will just be cross-faded from their midpoints. This may result in visible artifacts or imperfections in the images, especially in regions with high disparity between the objects obtained from different cameras.
Summary
In a first aspect there is provided a method comprising: recording image data of a scene; the recording image data of a scene comprising recording first image data of a first portion of the scene and recording second image data of a second
portion of the scene, the second portion of the scene at least partially overlapping with the first portion of the scene at an overlap region; monitoring a region of interest in the scene; and in response to detecting, by a processor, interaction or potential interaction between the region of interest and the overlap region, causing, by the processor, the overlap region to be adapted.
According to some embodiments the monitoring a region of interest comprises monitoring a position of the region of interest in the scene in relation to the overlap region.
According to some embodiments the detecting interaction between the region of interest and the overlap region comprises detecting a location of the region of interest being at least partially within the overlap region.
According to some embodiments the overlap region comprises a seam between the first and second portions of the scene.
According to some embodiments the causing the overlap region to be adapted comprises adjusting a property of the seam.
According to some embodiments, the adjusting a property of the seam is carried out in order to prevent or to attempt to prevent the region of interest crossing the seam.
According to some embodiments the adjusting a property of the seam comprises moving the seam from a first position to a second position, the first position being different to the second position.
According to some embodiments, the moving the seam from the first position to the second position is carried out in a step-change.
According to some embodiments the adjusting a property of the seam comprises adjusting a shape of the seam.
According to some embodiments, the seam comprises a straight line.
According to some embodiments, the seam comprises a curved portion.
According to some embodiments, the moving the seam from a first position to a second position comprises causing the seam to move away from the region of interest.
According to some embodiments, the overlap region comprises a width. According to some embodiments, when there is no interaction between the region of interest and the overlap region the seam is configured to be positioned centrally in the width of the overlap region.
According to some embodiments the causing the overlap region to be adjusted comprises adjusting a width of the overlap region. According to some embodiments, when it is determined that the region of interest crossing the seam is inevitable, the method comprises adjusting a
convergence point and/or the width of the overlap region.
According to some embodiments there are two or more regions of interest.
According to some embodiments the method comprises associating a priority value to each of the two or more regions of interest, and the causing the overlap region to be adapted comprises a consideration of the relative priorities of each region of interest in the scene.
According to some embodiments, causing the seam to move away from the region of interest comprises prioritising movement of the seam away from a region of interest of a relatively higher priority.
According to some embodiments, monitoring the region of interest comprises receiving information from an external source.
According to some embodiments, the external source comprises one or more of: an RFID device; a magnetometer; a LIDAR device. According to some embodiments, the external source is attached to the region of interest.
According to some embodiments, the monitoring a region of interest comprises using a detection algorithm to automatically detect the region of interest.
According to some embodiments the detection algorithm comprises a deep learning algorithm.
According to some embodiments the monitoring a region of interest comprises monitoring movement of the region of interest. According to some embodiments the monitoring movement of the region of interest comprises monitoring a distance and/or angle of the region of interest relative to a fixed point.
According to some embodiments the image data comprises a plurality of image layers. According to some embodiments the region of interest is comprised in a first layer of the plurality of image layers.
According to some embodiments a background of the scene is comprised in a second layer of the plurality of image layers that is different from the first layer.
According to some embodiments the causing the overlap region to be adapted is carried out differently in respective different layers.
According to some embodiments the method comprises applying an image optimization algorithm.
According to some embodiments the image optimization algorithm comprises a grid-search algorithm configured to choose optimum parameters from a plurality of available parameters.
According to some embodiments the image optimization algorithm comprises a deep learning algorithm.
According to some embodiments the first image data is recorded by a first lens assembly, and the second image data is recorded by a second lens assembly. According to some embodiments the first and second lens assemblies are comprised in a same camera device.
According to some embodiments the camera device comprises at least one further lens assembly.
According to some embodiments the camera device is configured to record and output panoramic and/or 360 degree and/or spherical image data.
According to some embodiments the recording image data comprises recording motion picture image data. According to some embodiments the method further comprises recording audio data of the scene.
In a second aspect there is provided a computer program comprising computer executable instructions which when run on one or more processors perform the method of any of the first aspect.
In a third aspect there is provided an apparatus comprising at least one processor; and at least one memory including computer program code; the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to: record first image data of a first portion of a scene and record second image data of a second portion of the scene, the second portion of the scene at least partially overlapping with the first portion of the scene at an overlap region; monitor a region of interest in the scene; and in response to detecting, by the processor, interaction or potential interaction between the region of interest and the overlap region, cause, by the processor, the overlap region to be adapted.
In a fourth aspect there is provided an apparatus comprising: means for recording first image data of a first portion of a scene and means for recording second image data of a second portion of the scene, the second portion of the scene at least partially overlapping with the first portion of the scene at an overlap region; means for monitoring a region of interest in the scene; and means for causing the overlap region to be adapted in response to detecting interaction or potential interaction between the region of interest and the overlap region.
According to some embodiments the monitoring a region of interest comprises monitoring a position of the region of interest in the scene in relation to the overlap region.
According to some embodiments the detecting interaction between the region of interest and the overlap region comprises detecting a location of the region of interest being at least partially within the overlap region.
According to some embodiments the overlap region comprises a seam between the first and second portions of the scene.
According to some embodiments the causing the overlap region to be adapted comprises adjusting a property of the seam.
According to some embodiments, the adjusting a property of the seam is carried out in order to prevent or to attempt to prevent the region of interest crossing the seam.
According to some embodiments the adjusting a property of the seam comprises moving the seam from a first position to a second position, the first position being different to the second position.
According to some embodiments, the moving the seam from the first position to the second position is carried out in a step-change.
According to some embodiments the adjusting a property of the seam comprises adjusting a shape of the seam.
According to some embodiments, the seam comprises a straight line.
According to some embodiments, the seam comprises a curved portion. According to some embodiments, the moving the seam from a first position to a second position comprises causing the seam to move away from the region of interest.
According to some embodiments, the overlap region comprises a width.
According to some embodiments, when there is no interaction between the region of interest and the overlap region the seam is configured to be positioned centrally in the width of the overlap region.
According to some embodiments the causing the overlap region to be adjusted comprises adjusting a width of the overlap region.
According to some embodiments, when it is determined that the region of interest crossing the seam is inevitable, the method comprises adjusting a convergence point and/or the width of the overlap region.
According to some embodiments there are two or more regions of interest. According to some embodiments the apparatus is configured to associate a priority value to each of the two or more regions of interest, and the causing the overlap region to be adapted comprises a consideration by the apparatus of the relative priorities of each region of interest in the scene.
According to some embodiments, causing the seam to move away from the region of interest comprises prioritising movement of the seam away from a region of interest of a relatively higher priority.
According to some embodiments, monitoring the region of interest comprises receiving information at the apparatus from an external source.
According to some embodiments, the external source comprises one or more of: an RFID device; a magnetometer; a LIDAR device.
According to some embodiments, the external source is attached to the region of interest.
According to some embodiments, the monitoring a region of interest comprises using a detection algorithm to automatically detect the region of interest. According to some embodiments the detection algorithm comprises a deep learning algorithm.
According to some embodiments the monitoring a region of interest comprises monitoring movement of the region of interest.
According to some embodiments the monitoring movement of the region of interest comprises monitoring a distance and/or angle of the region of interest relative to a fixed point.
According to some embodiments the fixed point comprises a fixed point of the apparatus.
According to some embodiments the apparatus is configured to process the image data as a plurality of image layers.
According to some embodiments the region of interest is comprised in a first layer of the plurality of image layers. According to some embodiments a background of the scene is comprised in a second layer of the plurality of image layers that is different from the first layer.
According to some embodiments the apparatus is configured to cause the overlap region to be adapted differently in respective different layers.
According to some embodiments the apparatus is configured to apply an image optimization algorithm.
According to some embodiments the image optimization algorithm comprises a grid-search algorithm configured to choose optimum parameters from a plurality of available parameters.
According to some embodiments the image optimization algorithm comprises a deep learning algorithm.
According to some embodiments the apparatus comprises a first lens assembly to record the first image data, and a second lens assembly for recording the second image data.
According to some embodiments the apparatus comprises a camera device, the camera device comprising the first and second lens assemblies.
According to some embodiments the camera device comprises at least one further lens assembly.
According to some embodiments the camera device is configured to record and output panoramic and/or 360 degree and/or spherical image data. According to some embodiments the recording image data comprises recording motion picture image data.
According to some embodiments the apparatus is further configured to record audio data of the scene.
Brief description of drawings
Figure 1 is a schematic diagram of a camera according to an embodiment;
Figure 2 is a schematic diagram showing certain elements of a camera according to an embodiment;
Figure 3 schematically shows a scene being recorded by a camera, according to an embodiment;
Figure 4 schematically shows a scene being recorded by a camera having multiple lens assemblies, according to an embodiment;
Figure 5 shows visual output of a camera, for the purposes of explanation;
Figure 6 shows visual output of a camera, for the purposes of explanation;
Figure 7 shows visual output of a camera, for the purposes of explanation;
Figure 8 shows visual output of a camera, for the purposes of explanation;
Figure 9 shows visual output of a camera, for the purposes of explanation;
Figure 10 schematically shows a scene being recorded by a camera, according to an embodiment, the scene comprising regions of interest;
Figures 11A and 11B schematically show movement of a seam in response to interaction with a region of interest, according to an embodiment;
Figures 12A and 12B schematically show movement of a seam in response to interaction with a region of interest, according to an embodiment;
Figures 13A and 13B schematically show movement of a seam based on relative priorities of regions of interest, according to an embodiment;
Figure 14 schematically shows adjustment of a shape of a seam line, according to an embodiment;
Figure 15 shows a flow chart of a method according to an embodiment.
Detailed description of drawings
Figure 1 is a schematic plan view of a camera device suitable for practising embodiments of the invention. The camera device 100 comprises at least two image receiving devices. In Figure 1 the at least two image receiving devices comprise lenses or lens assemblies 102, 104, 106 and 108. In some embodiments the lenses may be wide angle lenses. Each lens is capable of receiving image data of a scene. The "scene" may be considered to be a location at which the camera device 100 is located. Each lens can record an image of the scene or a portion of the scene, which images can then be joined or stitched together to provide a panoramic or 360° view of the scene. Further lenses may also be provided at other locations on the camera 100, in addition to those shown in Figure 1. For example further lenses may be placed on the bottom and/or top of the camera to enable a spherical image of the scene to be recorded.
Each lens or lens assembly 102 to 108 may also be termed a camera or camera module, or image receiving device or image receiving means, and these terms are used interchangeably herein. Therefore it may be considered that the camera device 100 may be made up of a number of constituent camera devices or modules 102, 104, 106, 108. Parameters of the camera can be adjusted during use. For example parameters such as focal length, aperture, zoom, white balance (or any one of or any combination thereof) can be adjusted for each lens assembly. In some embodiments these adjustments may be carried out in real time as a scene is being recorded. In some embodiments these parameters are adjusted automatically in response to determining conditions of a scene (e.g. distance from or movement of an object being recorded, light level etc.). Additionally or alternatively these parameters may be adjusted manually, either on the camera device 100 or remotely, for example via a connected computing device. Although in Figure 1 the camera device is shown as having four lens assemblies, it will of course be appreciated that more or fewer lens assemblies may be provided. According to some embodiments two or more lens assemblies are provided.
In some embodiments the camera device is mounted for movement thereof. Viewing Figure 1 the camera device can be configured for movement along the X and/or Y axes. The camera may also be mounted so that its vertical position can be adjusted e.g. along a Z axis into the paper when viewing Figure 1. The camera device may also be able to rotate. For example the camera device may be configured for rotation about a central axis 101. For example the camera device 100 may be configured for clockwise and/or anti-clockwise rotation when viewing Figure 1. The camera device may also be configured to pitch and/or tilt. To this end a suitable mounting may be provided for the camera device to enable any one of or any combination of the above movements.
Figure 2 is a schematic diagram showing certain elements of the camera 100. A lens module (or lens assembly or camera) is shown at 102. The lens module 102 is in communication with an image sensor module 110. The image sensor 110 is in communication with processor 112 and memory 114. The camera 100 also comprises an interface 116 for interfacing with one or more external apparatus. For example the interface 116 may be operable to communicatively connect the camera 100 to an external computing device, such as a computer which can control aspects of the camera 100. The interface 116 can be a wired and/or wireless interface. The camera 100 also comprises a power module 118 for powering the camera 100. The power module 118 may comprise a battery pack. The battery pack may be detachable from the camera 100. The camera 100 may additionally or alternatively comprise a power interface 120 for interfacing with an external power supply, such as a mains supply. The power interface 120 may be operative to directly power the camera 100 and/or to charge the power module 118.
It will be understood that Figure 2 is by way of example only and that the camera 100 may also include other optional features and/or modules. For example the camera may comprise a display module for displaying to an operator images that have been or are being recorded by the camera. A compression module may also be provided for compressing image data, for example for compressing image data as it is received. It will also be understood that the inclusion in Figure 2 of a single lens 102 is for ease of explanation only, and that further lenses (e.g. lenses 104, 106, 108) may be incorporated within the hardware of the camera 100.
Figure 3 shows a typical approach to stitching 360° video. In Figure 3 the camera device is shown generally at 300 and comprises lens assemblies (or cameras) 302, 304, 306 and 308. The field of view of the lens 302 is represented by arrow 322. The other lens assemblies 304 to 308 similarly record images of a portion or segment of the scene 332 at the location of the camera 300.
Portion 329 of Figure 3 shows an "opened out" or panoramic view of the images obtained by the camera. The region of the scene captured by lens 302 is again shown with arrow 322. A region of the scene captured by lens 304 is represented by arrow 324. A region of the scene captured by lens 308 is shown by arrow 326. A region of the scene captured by lens 306 is represented by arrows 328 and 330. As shown there are regions of overlap between the images recorded by the different lenses. For example there is an overlap region 334 between the image portions 322 and 324. In other words, the overlap regions comprise regions of the scene that have been captured by two (or more) cameras.
Likewise there is an overlap region 336 between the portions 322 and 326. There is also an overlap region 337 between image portions 330 and 326, and an overlap region 339 between image portions 324 and 328. These overlap regions may also be referred to as merge regions, convergence regions, blend regions etc., and these terms may be used interchangeably throughout. The terms regions, portions, sections, segments etc. may also be used interchangeably.
In each overlap region there is a seam line. Overlap region 337 comprises seam line 309. Overlap region 336 comprises seam line 306. Overlap region 334 comprises seam line 303. Overlap region 339 comprises seam line 305. These seam lines may also be referred to as seams, stitch lines or stitching, or join lines etc. The seam lines represent positions where two images meet. In some embodiments the seam lines are virtual positions which represent where two neighbouring images meet. Therefore in some embodiments a seam between two images represents a point (or points) which a processor (e.g. image processor 112) considers to be where two images are joined or stitched together. Of course there may still be a region of overlap, but the seam may be considered a virtual or notional position that is considered (e.g. by a processor) to be the point(s) at which two images meet. In at least some embodiments the seam lines are not visible to a viewer of the stitched image. That is the seam lines shown in the Figures may be considered virtual positions. The seams are shown in Figure 3 in a visible fashion for the purpose of explanation.
Although in the Figures the overlap regions are shown as vertical bands, it will be understood that this is by way of example only. In other embodiments the overlap regions may be oriented differently. The orientation may be dependent upon the orientation of the cameras. For example, for cameras placed side by side then the overlap regions may be vertical or substantially vertical. Similarly, this applies to the seams. For cameras stacked one upon the other then the convergence regions may be horizontal or substantially horizontal. Similarly, this applies to the seams. An image may also comprise a combination of horizontal and vertical overlap regions. Similarly, this applies to the seams. This may be the case for example where the overall camera device comprises a combination of side by side and stacked cameras. Of course, the overlap regions may also be at any angle between horizontal and vertical. Similarly, this applies to the seams.
The image data in the overlap regions may be blended to minimise any differences between the images. To this end a blending algorithm may be used. Blending may enable a smooth transition between adjacent images. Blending may help to lessen the impact of, for example, exposure differences between adjacent images.
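By way of illustration only, since no particular blending algorithm is mandated here, a minimal linear cross-fade sketch, assuming the two overlap strips have already been geometrically aligned and cropped to the same size, may look as follows:

```python
import numpy as np

def blend_overlap(left_strip: np.ndarray, right_strip: np.ndarray) -> np.ndarray:
    """Linearly cross-fade two aligned overlap strips of identical shape
    (H x W x C float arrays), fading from the left image to the right."""
    height, width = left_strip.shape[:2]
    # Column weights fall from 1.0 to 0.0 across the overlap width, giving
    # a smooth transition that hides exposure differences between images.
    alpha = np.linspace(1.0, 0.0, width).reshape(1, width, 1)
    return alpha * left_strip + (1.0 - alpha) * right_strip
```

A wider overlap strip gives this fade more room, which is why the width of the overlap region matters, as discussed below.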
When stitching images, in particular in a live stitching case (i.e. the stitching of images in real time as they are being recorded) features or aspects in the image may be distorted. This may be the case in particular in the overlap region(s) of the image. The distortion may be more visible and disturbing depending on the kind of content which lies within the overlap region. In particular the distortion may be particularly disturbing or distracting if content is moving whilst in the overlap region, since the human eye is naturally drawn towards motion.
Within a scene that is being recorded there are typically aspects or features that a viewer wants to concentrate on. Such aspects may be considered regions of interest (ROIs). What constitutes a region of interest (ROI) may depend upon the scene in question. For example, if the scene is a lecture being delivered by a lecturer in a lecture theatre, then the person giving the lecture may be considered to be the region of interest. Aspects other than the region of interest may be considered background regions. For example background image data may include the wall behind the lecturer. In a rock concert the region of interest may be the lead singer, and further regions of interest may include other band members. Background regions may include aspects of the staging and spectators etc. Although in these examples the regions of interest are human beings, it will of course be understood that the invention is not limited as such. For example the regions of interest could additionally or alternatively be animals, cars, planes etc. Therefore generally speaking a region of interest can be considered to be a region or element upon which a viewer is expected or predicted to focus.
"Convergence" may be considered to be the distance at which an image from two adjacent sensors is perfectly aligned. Subjects closer than a convergence point may be obstructed or occluded, whereas subjects further away than the convergence point may be increased in size or doubled. In an example set up, a convergence value
of 1 represents infinity, and 1 .06 is about 1 metre. The usable range may lie between these values. It is also to be noted that these values may depend heavily upon the camera set-up and distances between cameras.
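As a purely illustrative model consistent with the two example values above (1 at infinity, about 1.06 at one metre), the convergence parameter may be taken to vary inversely with subject distance; the constant `c` below is an assumed rig-specific term, not a value from this disclosure:

```python
def convergence_value(distance_m: float, c: float = 0.06) -> float:
    """Illustrative mapping: 1.0 at infinity, roughly 1.06 at one metre.
    `c` depends on the camera set-up and the distances between cameras."""
    return 1.0 + c / distance_m
```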
Distortions may be caused by camera calibration parameters. Distortions may also be caused by the blending process which may result in imperfect blending between the images. For example, as mentioned above, colours between two adjacent images may not be blended smoothly, which may make the overall image look discontinuous in nature. Therefore in some embodiments blending algorithms may be used which adjust the colours between images and adjust the seam lines to minimise the visibility of seams between images. Nevertheless, blending may cause image distortions due to parallax effects of unwanted motion of the optical centre, misregistration errors due to camera calibration parameter errors, radial distortion etc.
Figure 4 shows an example scene including a region of interest 440. In this example the region of interest 440 is a person. The person 440 is being recorded by two different cameras or lenses, in this case a first camera 402 and a second camera 404. The image frame captured by camera 402 is shown at 422, and the image frame captured by camera 404 is shown at 424. There is a region of overlap 434 between the two image frames 422 and 424. As explained above this overlap region 434 is susceptible to artifacts. For example the artifacts may comprise image distortion, and/or blending imperfections etc. In Figure 4 the person 440 is within the overlap region 434, and therefore a viewer's attention will be directed to the region that is most susceptible to errors. It may be considered that the region of interest 440 is interacting with the overlap region 434.
Figure 5 shows another example. In this example the scene 501 is a lecture theatre. The region of interest is a lecturer 540. Background image data, shown generally at 542 includes the audience and structural items of the lecture theatre. The scene is being recorded using a multi-camera or multi lens assembly as previously described. In this example the overlap regions are in the form of and/or result in vertical banding 544, 546, 548 and 550. In the example of Figure 5 the lecturer 540 is outside of the overlap regions and therefore there is little or no distortion on the lecturer or region of interest 540.
In Figure 6 the lecturer 540 has moved such that he is in convergence region or band 548. This undesirably obscures the image of the lecturer 540.
The inventors of the present invention have thus realised that it would be desirable to be able to keep regions of interest outside of overlap or convergence regions of an image.
Embodiments may also enable determining or choosing a width of the overlap region(s).
In Figure 7 the region of interest 540 is relatively close to the camera. Furthermore the convergence region 546 is relatively narrow. As shown this has caused significant distortion of the lecturer 540, making him appear unnaturally narrow.
Figure 8 shows the same scene as Figure 7, except a relatively large convergence region 546 is provided within which the lecturer 540 is located. In Figure 8 the band 546 completely overlays the lecturer 540, but the actual distortion of the physical characteristics of the lecturer 540 is less than when compared with Figure 7.
Figure 9 shows an example where the lecturer 540 is relatively far away from the camera, and is in a relatively broad convergence region 548. Again this may be undesirable since the visibility of the lecturer 540 is at least somewhat diminished by virtue of his distance from the camera, and the visual quality of the lecturer 540 is further diminished by virtue of being fully contained within visible band 548.
Generally speaking, where overlap regions are relatively narrow then the area of the image which is subject to disturbance is correspondingly made relatively small, but the intensity of disturbances within the convergence region may be increased. On the other hand, a relatively wide overlap region allows a smoother transition in the blending between the images, but necessarily also covers a larger portion of the image. Therefore it will be understood that the widths of the overlap regions need to be chosen carefully, in addition to the locations of the convergence points, seams etc.
Distortion effects may also be present in images due to the parallax effect of objects being present in different depth layers of the image. Generally, region of interest objects are located in a different depth layer than background aspects of an image. Also distortion tends to be more visible and disturbing the closer the object is to the camera.
Currently the location of overlap regions and their widths are typically selected in a manual fashion by a human operator operating the camera. Therefore the systems do not take into account objects of interest or their motion within a scene. Furthermore current systems do not use any depth information of an image in determining where to locate overlap regions.
Figure 10 shows a scenario similar to that depicted in Figure 3. In Figure 10 the camera 1000 comprises four lens assemblies 1002, 1004, 1006 and 1008, each recording image data of portions of a scene 1032 as previously described. Convergence region 1036 is a convergence of images captured by cameras 1002 and 1008. Convergence region 1036 comprises a seam 1009. Convergence region 1034 is formed by the convergence of image data between cameras 1002 and 1004. Convergence region 1034 comprises a seam 1003. Convergence region 1037 is formed by the convergence of image data from camera 1004 and camera 1006. Convergence region 1037 comprises a seam 1005. A convergence region 1039 is also formed by convergence between image data captured by camera 1006 and camera 1008. Convergence region 1039 comprises a seam 1007. In the scene 1032 there are three regions of interest, represented by circles 1040, 1041 and 1043. As discussed previously it has been recognised by the present inventors that it would be desirable that regions of interest do not cross the seam lines in the convergence regions between adjacent images.
In this example, and as shown in the upper region of Figure 10, region of interest 1040 is outside convergence region 1039 (and likewise outside of any other convergence regions), and therefore does not overlap or conflict with convergence region 1039 or seam 1007. Accordingly there is no need to adjust the position of seam 1007 (or of any other seam).
In some embodiments, by default a seam is positioned in the middle of its associated convergence region. For example where the convergence region comprises a band region, the seam runs parallel to and is equidistant from the long edges of the band, by default. Generally speaking, by default, a seam may be positioned so as to conform to the shape of the edges of the convergence region in which it is located, and to lie mid-way or substantially mid-way between the outer edges of the convergence region. For example if a convergence region is curved, then the seam may similarly be curved.
By contrast to ROI 1040, ROI 1041 is positioned in convergence region 1036. This overlapping between the ROI and convergence region is detected, for example by a suitable tracking algorithm. In response, the position of seam 1009 is shifted in the convergence region 1036 so that it is not crossed by the region of interest 1041. That is the seam is shifted from its first, default position to a second position that is different from the default position.
Region of interest 1043 is partially located within convergence region 1037. Although it may be the case that the region of interest may not cross the default position of seam 1005 in this case, the seam 1005 is in any case moved away from its default position to minimise the chance of a collision with region of interest 1043. Thus according to some embodiments the seam is moved from its default position in response to a determination that at least a portion of a region of interest is located in an associated convergence/overlap region of the respective seam.
According to some embodiments the tracking algorithm and seam control algorithms (which may be part of the same overall algorithm) can predict whether an ROI is going to or is likely to interact with an overlap region and/or its associated seam. The algorithm(s) may cause adaptation of the overlap region e.g. by adjusting the position of the seam, before the interaction takes place. This may be before the ROI enters the overlap region. Therefore in some embodiments it may be considered that an overlap region is caused to be adapted in response to detecting interaction or potential interaction between the region of interest and the overlap region.
According to some embodiments an amount by which a seam is moved from its default position may be dependent upon an amount by which the region of interest overlaps or conflicts with the associated convergence region, and/or dependent upon a distance from the region of interest to the seam. The more the ROI overlaps the convergence region, and/or the closer the ROI is to the seam, the greater distance the seam may be moved from its default position. For example the region of interest 1041 is fully contained within convergence region 1036, whereas region of interest 1043 is only partially overlapping with convergence region 1037. Thus the seam line 1009 is moved by a greater extent than the seam line 1005 with respect to their default (mid-way) positions.
In some embodiments the movement of the seam may be controlled by an automatic seam control algorithm. The seam control algorithm is in some embodiments configured to place the seam at the centre of its available space. "Available space" may be considered to be the space in a convergence region that is not occupied by an ROI. This provides the widest possible space for cross fade transition.
This is shown for example in Figures 11A and 11B. In Figure 11A a region of interest 1140 is adjacent to, but outside of, convergence region 1139. Therefore the seam line 1107 is positioned in the middle of the convergence region 1139 i.e. in the middle of the available space. The convergence region 1139 may be considered to have a width X. In this example the distances Y and Z are each equal to X/2. In Figure 11B the region of interest 1140 has entered the convergence region 1139, thus occupying a portion of the convergence region. The remaining available space (i.e. that space not occupied by ROI 1140) is represented by W. In Figure 11B each of Y' and Z' are equal to W/2.
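A minimal sketch of that centering rule, assuming the convergence band and the ROI can each be reduced to a horizontal interval; where the ROI leaves free space on both sides of the band, this sketch centres the seam in the larger gap:

```python
def seam_position(band: tuple, roi: tuple = None) -> float:
    """Centre the seam in the space of the convergence band that is not
    occupied by the ROI, as in Figures 11A and 11B.

    `band` is the (x0, x1) extent of the convergence region; `roi` is the
    ROI's horizontal extent, or None when no ROI intersects the band.
    """
    x0, x1 = band
    if roi is None:
        return (x0 + x1) / 2.0                    # default: mid-way (X/2)
    r0, r1 = max(roi[0], x0), min(roi[1], x1)     # clip the ROI to the band
    left_gap, right_gap = r0 - x0, x1 - r1
    # Use the wider remaining gap (width W in Figure 11B) and centre in it.
    if left_gap >= right_gap:
        return x0 + left_gap / 2.0                # W/2 from the band edge
    return r1 + right_gap / 2.0
```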
According to some embodiments movement vectors may be associated with the tracked ROI(s) so that movement of the ROI(s) can be monitored. Accordingly the seam(s) can be moved away from the moving ROI(s).
In some embodiments a seam is caused to move in a direction that is the same as a direction of movement of a tracked ROI. This is shown for example in Figure 12A where a tracked ROI 1240 is moving in the direction of Arrow C i.e. is moving from left to right when viewing Figure 12A. In response to this the seam 1207 is also moved in the direction of Arrow C, away from region of interest 1240.
In Figure 12B the ROI 1240 is moving in the direction of Arrow D, away from seam 1207. Accordingly the seam 1207 is also moved in the direction of Arrow D so as to maximise use of the (dynamically changing) available space. If region of interest 1240 were to change direction again such that it moved in a direction back towards the seam 1207, then the seam 1207 would also change its direction of movement so as to avoid the region of interest 1240. Therefore it may be considered that in some embodiments a direction of movement (or movement vector) of a seam is configured to follow or mimic a direction of movement (or movement vector) of an associated ROI. This enables the seam to avoid (or attempt to avoid) the ROI as the ROI approaches the seam, and also enables the seam to maximise the available space as the ROI retreats from the seam.
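As a sketch of that following behaviour, the gain constant and the per-frame update form below are assumptions rather than values from this disclosure:

```python
def update_seam(seam_x: float, roi_velocity_x: float,
                band: tuple, gain: float = 0.5) -> float:
    """Move the seam in the same direction as the tracked ROI's movement
    vector (arrows C and D in Figures 12A and 12B), clamped to the band."""
    x0, x1 = band
    candidate = seam_x + gain * roi_velocity_x   # follow the ROI's direction
    return min(max(candidate, x0), x1)           # keep the seam in the band
```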
In some embodiments priorities may be attached to or associated with ROI(s) which are being monitored. For example the seam control algorithm may be configured to prioritise avoiding ROIs with higher priorities. For example if the camera is recording a rock band at a concert, then the singer may be considered an ROI having a higher priority than the drummer, since it is likely that a viewer's focus will be on the lead singer. This may especially be the case if there is no 3-D position information available (for example only angle information is available).
This is shown for example in Figures 13A and 13B, which show two ROIs 1340 and 1341, both of which are either in or at least partially in convergence region 1339. The first ROI 1340 has a priority P1, and the second ROI 1341 has a second priority P0. In this example the priority P1 is greater or higher than the priority P0 i.e. ROI 1340 is of a higher priority than ROI 1341. Accordingly the seam control algorithm determines to move or position the seam such that it avoids region of interest 1340 as a priority, even though this may mean that seam 1307 has to cross ROI 1341, as shown in the example of Figure 13B.
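One way to realise that prioritisation, sketched below under the assumption that each ROI can be reduced to a priority and a horizontal extent, is to score candidate seam positions by the summed priorities of the ROIs each position would cross and keep the cheapest candidate; the candidate grid and penalty form are illustrative choices:

```python
import numpy as np

def choose_seam(band: tuple, rois: list, n_candidates: int = 101) -> float:
    """Pick the seam x-position with the lowest crossing penalty, so that
    higher-priority ROIs (P1 > P0 in Figures 13A/13B) are avoided first.

    `band` is (x0, x1); `rois` is a list of (priority, r0, r1) tuples
    giving each ROI's priority and horizontal extent.
    """
    x0, x1 = band
    candidates = np.linspace(x0, x1, n_candidates)

    def penalty(x: float) -> float:
        # Crossing a high-priority ROI costs more than a low-priority one.
        return sum(p for p, r0, r1 in rois if r0 <= x <= r1)

    return float(min(candidates, key=penalty))
```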
If more detailed information is available (for example positional information of the ROIs), then in some implementations the shape and/or orientation of the seam may be adjusted so as to avoid one or more regions of interest. In some embodiments the seamline does not have to be a straight line. For example the seam could be curved, zigzagged, stepped etc. (and/or shifted or transitioned between these shapes) so as to avoid one or more ROIs.
Figure 14 shows an example where the shape of the seam 1407 is adjusted to a curve so as to avoid both ROIs 1440 and 1441. The shape of the seam line 1407 may be adjusted in real-time and in a fluid manner so as to avoid the regions of interest 1440 and 1441. Therefore in some embodiments it may be considered that the shape of the seam is adjusted in a continuous manner. In other embodiments the shape of the seam may be adjusted in a step-change manner.
A similar effect may be achieved by adjusting the orientation of a seam. Using Figure 14 as an example, instead a straight seam could be tilted at an angle to deliver the same effect of avoiding ROIs 1440 and 1441 .
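A sketch of such a bending seam, assuming the ROIs are available as bounding boxes, is given below; it produces a stepped polyline that detours around each box, which a production system would smooth into the kind of curve shown in Figure 14:

```python
def curved_seam(band: tuple, roi_boxes: list, rows: int) -> list:
    """Per-row seam x-positions that bend around ROI bounding boxes.

    `band` is (x_lo, x_hi); `roi_boxes` holds (x0, y0, x1, y1) boxes;
    `rows` is the number of image rows the seam spans.
    """
    x_lo, x_hi = band
    default_x = (x_lo + x_hi) / 2.0
    seam = []
    for y in range(rows):
        x = default_x
        for bx0, by0, bx1, by1 in roi_boxes:
            if by0 <= y <= by1 and bx0 <= x <= bx1:
                # Detour to whichever side of the box has more free space.
                x = bx0 if (bx0 - x_lo) >= (x_hi - bx1) else bx1
        seam.append(min(max(x, x_lo), x_hi))
    return seam
```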
The automatic seam control algorithm may therefore provide an improved visual output. Additionally, the automatic seam control algorithm means that there is no need to manually adjust the seam.
According to some embodiments, if it is inevitable that a seam will cross with a region of interest (for example the ROI is walking through the entire convergence region), then the seam may be "jumped" from a first position to a second position to avoid crossing with the ROI. That is the seam may move from a first position to a second, different position in a step change.
If a seam has to cross an ROI then the convergence point and width of the convergence region may be adjusted to mitigate for this. This adjusting may take into account locations and parameters as follows (a sketch combining some of these heuristics follows the list):
• adaptive blending and stitching algorithms may be applied separately to the region of interest and the background, in order to minimise distortions caused by depth differences
• locations of the ROI(s), e.g. horizontal and/or vertical positions thereof. This may directly affect the convergence point
• width of the convergence region. This may be affected by distance of the ROI(s) from the camera (i.e. depth data)
• width of the convergence region may also be affected by the size of the ROI(s) being tracked. For example this may be horizontal and/or vertical size of the regions of interest being tracked
• a shape and line detection algorithm may be run so as to optimise/minimise distortions caused by adjusting the other parameters.
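A minimal sketch combining the distance and size heuristics from the list above; every constant is an illustrative tuning parameter, not a value from this disclosure:

```python
def convergence_width(roi_distance_m: float, roi_size_px: float,
                      base_px: float = 64.0, size_gain: float = 0.5,
                      min_px: float = 8.0) -> float:
    """Heuristic width for a convergence region: narrower when the ROI
    is far from the camera, wider when the tracked ROI is large."""
    width = base_px / max(roi_distance_m, 0.1)   # distance term (depth data)
    width += size_gain * roi_size_px             # size of the tracked ROI
    return max(min_px, width)
```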
As previously discussed the ROI(s) can be initially selected in a number of different ways. In some embodiments, the ROI(s) are determined/selected in a manual manner. For example a user may specify, for example using a cursor on a display, the identified regions of interest. Those ROIs can then be tracked by a tracking algorithm. Additionally or alternatively trackers may be physically placed on the ROI(s) so that they are then subsequently tracked by the tracking algorithm. For example RFID tags may be manually placed on ROI(s) such that those ROI(s) are then tracked by the system.
Magnetometers, LIDAR and/or computer vision algorithms may also be used. A GPS tracker may also be used. For example a GPS on an ROI's mobile device may be used.
In some embodiments adjusting the overlap region and/or seam is carried out by adjusting parameter(s) of the lens assemblies/cameras. In some embodiments the parameter(s) are physical parameter(s). For example the parameters may comprise one or more of or any combination of: orientation; focal length; aperture; zoom; white balance; frame rate. These parameters may be adjusted in one or more lens assemblies of the camera. Adjustments may be carried out differently in different lens assemblies. These adjustments may be made in real time. The orientation of a lens assembly may be adjusted relative to another lens assembly in the camera. Such adjustment may comprise adjusting a physical position of a lens assembly. Adjusting the physical position may comprise movement in space, such as translation and/or rotation. In some embodiments the entire camera assembly may be moved, whilst the lens assemblies remain in fixed positions relative to each other. For example in order to shift a seam line away from an ROI then the entire camera assembly may be moved e.g. by translation and/or rotation thereof.
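The adjustable per-lens state might be grouped as below; the field list mirrors the parameters named above, and the default values are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class LensParams:
    """Per-lens-assembly parameters that may be adjusted in real time
    while a scene is being recorded (illustrative defaults only)."""
    orientation_deg: float = 0.0     # relative to the camera body
    focal_length_mm: float = 8.0
    aperture_f: float = 2.8
    zoom: float = 1.0
    white_balance_k: int = 5600
    frame_rate_fps: float = 30.0
```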
The initial determination/selection of ROI(s) may be carried out automatically. For example a region detection algorithm may be used to select ROI(s) automatically. Additionally or alternatively visual object recognition may be used to determine the ROI(s). Visual objects can be anything from human faces to dogs, cats, cars etc. Developments in deep learning algorithms mean that thousands of different categories can be identified. Furthermore, new categories can also be added in time, and also the system may be trained to learn new categories during operation or following software updates. Accordingly there is no practical limitation on the number of objects that can be recognised by a visual object recognition algorithm.
Once an ROI is determined/chosen based on any of the above described input methods, a tracker algorithm can be used to track motion of the ROI. During a live stream, the tracking algorithm would then track the ROI throughout the frames and can continuously provide output coordinates to indicate the spatial location of the ROI in every frame. As discussed above, since the ROI(s) are assumed to be the regions with the utmost visual importance, the decision making mechanism for the convergence regions (e.g. stitching algorithm) tries to avoid placing the ROI(s) in a position such that they overlap with the convergence regions. Accordingly, viewers are provided with the best possible visual experience without needing a manual operator to adjust camera settings constantly during use.
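Sketching that live loop, with `track_roi` standing in for whichever tracker is used (its interface here is an assumption) and `seam_position` as in the sketch following Figure 11B above:

```python
def live_stitch_loop(frames, track_roi, band: tuple, seam_position):
    """Per-frame control loop: track the ROI, and re-place the seam
    whenever the ROI interacts with the convergence band.

    `track_roi(frame)` is assumed to return the ROI's (r0, r1) horizontal
    extent for that frame; `band` is the (x0, x1) extent of the band.
    """
    x0, x1 = band
    for frame in frames:
        r0, r1 = track_roi(frame)            # spatial location in this frame
        if r0 < x1 and r1 > x0:              # ROI overlaps the band
            seam_x = seam_position(band, (r0, r1))
        else:
            seam_x = (x0 + x1) / 2.0         # no interaction: default seam
        yield frame, seam_x                  # downstream stitching uses seam_x
```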
As discussed at least in part above, the algorithms may use at least the following rules:
(1) If possible, convergence/overlap regions of adjacent images are placed away from boundaries of the ROI(s).
(2) Widths of the convergence/overlap regions of adjacent images are chosen according to the distance of the ROI(s) to the camera. The distance may be inferred from different information sources, for example depth or size of the tracked ROI(s). Generally speaking, the further away the object is from the camera, the narrower the convergence/overlap region needs to be.
(3) The blending and/or stitching algorithms can be adaptive. The blending and/or stitching algorithms can be applied individually to ROI(s) and the background, which generally appear in different depth layers.
(4) The shape of the ROI(s) can be deduced by adopting a shape detector algorithm (e.g. a deep learning algorithm), and using this information the algorithm may optimise the parameters in a way so that straight lines in the image remain as straight as possible, and curves in the image remain as smooth as possible.
(5) The optimisation process can be achieved with either (a) a grid search algorithm where all possible parameters are tried and the best result is chosen (sketched below), or (b) a deep learning based algorithm which would automatically fit the shape constraints and consequently choose the best parameters.
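A sketch of option (a), assuming a `distortion_score` callable that renders a stitch with the given parameters and measures distortion (for example, how much straight lines bend across the seam):

```python
import itertools

def grid_search(distortion_score, seam_positions, widths, convergence_points):
    """Rule (5a): try every combination of the candidate parameters and
    keep the combination with the lowest distortion score."""
    best_score, best_params = float("inf"), None
    for params in itertools.product(seam_positions, widths, convergence_points):
        score = distortion_score(*params)
        if score < best_score:
            best_score, best_params = score, params
    return best_params
```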
Accordingly the invention provides algorithms to decide on convergence points/regions, blending and convergence widths and may adapt or select the stitching and/or blending algorithms for regions of interest and the background separately, which may eliminate the need for constant manual supervision during a live broadcast. Although adapting the convergence regions may have some visual effect, this is outweighed by the advantages of reduced distortion of the ROI(s) in the image.
Figure 15 is a flow chart showing a method according to an embodiment.
At step S1 first image data of a first portion of a scene is recorded.
At step S2 second image data of a second portion of the scene is recorded. The second portion of the scene at least partially overlaps with the first portion of the scene at an overlap region. Steps S1 and S2 may occur in parallel.
At step S3 a region of interest in the scene is monitored.
At step S4, in response to detecting, by a processor, interaction or potential interaction between the region of interest and the overlap region, the overlap region is caused, by the processor, to be adapted.
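The four steps may be skeletonised as below; each callable is a hypothetical stand-in for the hardware and algorithms described above:

```python
def method_of_figure_15(record_first, record_second, monitor_roi,
                        detect_interaction, adapt_overlap):
    """One pass through steps S1 to S4 of Figure 15."""
    first = record_first()                 # S1: first image data of the scene
    second = record_second()               # S2: overlapping second image data
    roi = monitor_roi(first, second)       # S3: monitor the region of interest
    if detect_interaction(roi):            # S4: interaction or potential
        adapt_overlap(roi)                 #     interaction: adapt the region
    return first, second
```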
An appropriately adapted computer program code product or products may be used for implementing the embodiments, when loaded on an appropriate data processing apparatus. The program code product for providing the operation may be stored on, provided and embodied by means of an appropriate carrier medium. An appropriate computer program can be embodied on a computer readable record medium. A possibility is to download the program code product via a data network. In general, the various embodiments may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Embodiments of the inventions may thus be practiced in various components such as integrated circuit modules. The design of integrated circuits is by and large a highly automated process. Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.
It is also noted herein that while the above describes exemplifying embodiments of the invention, there are several variations and modifications which may be made to the disclosed solution without departing from the scope of the present invention.
Claims
1. A method comprising: recording image data of a scene; the recording image data of a scene comprising recording first image data of a first portion of the scene and recording second image data of a second portion of the scene, the second portion of the scene at least partially overlapping with the first portion of the scene at an overlap region; monitoring a region of interest in the scene; and in response to detecting, by a processor, interaction or potential interaction between the region of interest and the overlap region, causing, by the processor, the overlap region to be adapted.
2. A method according to claim 1 , wherein the monitoring a region of interest comprises monitoring a position of the region of interest in the scene in relation to the overlap region.
3. A method according to claim 1 or claim 2, wherein the detecting interaction between the region of interest and the overlap region comprises detecting a location of the region of interest being at least partially within the overlap region.
4. A method according to any preceding claim, wherein the overlap region comprises a seam between the first and second portions of the scene.
5. A method according to any preceding claim, wherein the causing the overlap region to be adapted comprises adjusting a property of the seam.
6. A method according to claim 5, wherein the adjusting a property of the seam comprises moving the seam from a first position to a second position, the first position being different to the second position.
7. A method according to claim 5 or claim 6, wherein the adjusting a property of the seam comprises adjusting a shape of the seam.
8. A method as set forth in any preceding claim, wherein the causing the overlap region to be adjusted comprises adjusting a width of the overlap region.
9. A method as set forth in any preceding claim, wherein there are two or more regions of interest.
10. A method as set forth in claim 9, wherein the method comprises associating a priority value to each of the two or more regions of interest, and the causing the overlap region to be adapted comprises a consideration of the relative priorities of each region of interest in the scene.
11. A method as set forth in any preceding claim, wherein the monitoring a region of interest comprises monitoring movement of the region of interest.
12. A method according to any preceding claim, wherein the image data comprises a plurality of image layers.
13. A method according to claim 12, wherein the causing the overlap region to be adapted is carried out differently in respective different layers.
14. A method according to any preceding claim, wherein the first image data is recorded by a first lens assembly, and the second image data is recorded by a second lens assembly.
15. A computer program comprising computer executable instructions which when run on one or more processors perform the method of any of claims 1 to 14.
16. An apparatus comprising at least one processor; and at least one memory including computer program code; the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to: record first image data of a first portion of a scene and record second image data of a second portion of the scene, the second portion of the scene at least partially overlapping with the first portion of the scene at an overlap region; monitor a region of interest in the scene; and in response to detecting, by the processor, interaction or potential interaction between the region of interest and the overlap region, cause, by the processor, the overlap region to be adapted.
17. An apparatus according to claim 16, wherein the monitoring a region of interest comprises monitoring a position of the region of interest in the scene in relation to the overlap region.
18. An apparatus according to claim 16 or claim 17, wherein the detecting interaction between the region of interest and the overlap region comprises detecting a location of the region of interest being at least partially within the overlap region.
19. An apparatus according to any of claims 16 to 18, wherein the overlap region comprises a seam between the first and second portions of the scene.
20. An apparatus according to any of claims 16 to 19, wherein the causing the overlap region to be adapted comprises adjusting a property of the seam.
21. An apparatus according to claim 20, wherein the adjusting a property of the seam comprises moving the seam from a first position to a second position, the first position being different to the second position.
22. An apparatus according to claim 20 or claim 21, wherein the adjusting a property of the seam comprises adjusting a shape of the seam.
23. An apparatus as set forth in any of claims 16 to 22, wherein the causing the overlap region to be adjusted comprises adjusting a width of the overlap region.
24. An apparatus as set forth in any of claims 16 to 23, wherein there are two or more regions of interest.
25. An apparatus as set forth in claim 24, wherein the apparatus is configured to associate a priority value to each of the two or more regions of interest, and the causing the overlap region to be adapted comprises a consideration by the apparatus of the relative priorities of each region of interest in the scene.
26. An apparatus as set forth in any of claims 16 to 25, wherein the monitoring a region of interest comprises monitoring movement of the region of interest.
27. An apparatus according to any of claims 16 to 26, wherein the apparatus is configured to process the image data as a plurality of image layers.
28. An apparatus according to claim 27, wherein the apparatus is configured to cause the overlap region to be adapted differently in respective different layers.
29. An apparatus according to any of claims 16 to 28, wherein the apparatus comprises a first lens assembly to record the first image data, and a second lens assembly for recording the second image data.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1620037.0 | 2016-11-28 | ||
GBGB1620037.0A GB201620037D0 (en) | 2016-11-28 | 2016-11-28 | Imaging device and method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018096208A1 true WO2018096208A1 (en) | 2018-05-31 |
Family
ID=58073317
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/FI2017/050804 WO2018096208A1 (en) | 2016-11-28 | 2017-11-22 | Imaging device and method |
Country Status (2)
Country | Link |
---|---|
GB (1) | GB201620037D0 (en) |
WO (1) | WO2018096208A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112369017A (en) * | 2018-07-11 | 2021-02-12 | 诺基亚技术有限公司 | Method and apparatus for virtual reality content stitching control with network-based media processing |
WO2022156537A1 (en) * | 2021-01-22 | 2022-07-28 | 上海涛影医疗科技有限公司 | Scanning imaging system and method based on slit x-rays |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114274117B (en) * | 2021-11-16 | 2024-09-20 | 深圳市普渡科技有限公司 | Robot, obstacle-based robot interaction method, device and medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0837428A2 (en) * | 1996-10-17 | 1998-04-22 | Sharp Kabushiki Kaisha | Picture image forming apparatus |
US6148118A (en) * | 1995-07-05 | 2000-11-14 | Minolta Co., Ltd. | Image processing apparatus capable of reproducing large-sized document |
US20020122113A1 (en) * | 1999-08-09 | 2002-09-05 | Foote Jonathan T. | Method and system for compensating for parallax in multiple camera systems |
US20080278518A1 (en) * | 2007-05-08 | 2008-11-13 | Arcsoft (Shanghai) Technology Company, Ltd | Merging Images |
US20150278988A1 (en) * | 2014-04-01 | 2015-10-01 | Gopro, Inc. | Image Taping in a Multi-Camera Array |
US9466109B1 (en) * | 2015-06-30 | 2016-10-11 | Gopro, Inc. | Image stitching in a multi-camera array |
US20160307350A1 (en) * | 2015-04-14 | 2016-10-20 | Magor Communications Corporation | View synthesis - panorama |
US20170140791A1 (en) * | 2015-11-12 | 2017-05-18 | Intel Corporation | Multiple camera video image stitching by placing seams for scene objects |
- 2016-11-28: GB application GBGB1620037.0A filed (GB201620037D0, not_active, Ceased)
- 2017-11-22: PCT application PCT/FI2017/050804 filed (WO2018096208A1, active, Application Filing)
Also Published As
Publication number | Publication date |
---|---|
GB201620037D0 (en) | 2017-01-11 |
Legal Events

Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17873536; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 17873536; Country of ref document: EP; Kind code of ref document: A1