
CN111212267A - Partitioning method of panoramic image and server - Google Patents

Partitioning method of panoramic image and server

Info

Publication number
CN111212267A
CN111212267A
Authority
CN
China
Prior art keywords
area
processed
latitude value
image
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010047913.4A
Other languages
Chinese (zh)
Inventor
任子健
史东平
国廷峰
吴连朋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Hisense Media Network Technology Co Ltd
Original Assignee
Qingdao Hisense Media Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Hisense Media Network Technology Co Ltd filed Critical Qingdao Hisense Media Network Technology Co Ltd
Priority to CN202010047913.4A priority Critical patent/CN111212267A/en
Publication of CN111212267A publication Critical patent/CN111212267A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/816Monomedia components thereof involving special video data, e.g 3D video
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The embodiments of the present application provide a method and a server for partitioning a panoramic image. The method includes: dividing the panoramic image along the transverse direction to obtain a set of regions; selecting a reference region from the set, with the remaining regions serving as regions to be processed; dividing the reference region into c reference image blocks of equal size along the longitudinal direction; and, taking the pixel density of the reference image blocks as the benchmark, cutting each region to be processed whose latitude value is greater than that of the reference region, so that the pixel density of the resulting image blocks when projected onto the spherical surface equals the pixel density of the reference image blocks when projected onto the spherical surface.

Description

Partitioning method of panoramic image and server
Technical Field
The application relates to the technical field of display equipment, in particular to a panoramic image blocking method and a server.
Background
Panoramic video is a new multimedia form developed from 360-degree panoramic images: a series of static panoramic images played in sequence becomes a dynamic panoramic video. Panoramic video is generally produced by shooting in all directions with a professional panoramic camera and stitching the per-direction video images together in software; a dedicated player then projects the planar video into a 360-degree panoramic form, presenting the viewer with a spatial field of view that covers 360 degrees horizontally and 180 degrees vertically. The viewer can interact with the video content through head movement, eye movement, remote-control operation and the like, obtaining an immersive, on-the-scene experience. As a new heterogeneous multimedia service, a panoramic video service stream contains multiple data types such as audio, video, text, interaction and control signaling, and has diversified QoS (Quality of Service) requirements.
In recent years, in order to reduce the bandwidth required for panoramic video transmission, reduce data redundancy and support higher video resolutions, an FOV (field of view) transmission scheme has often been adopted. The FOV scheme transmits panoramic video pictures differentially according to the viewing angle, focusing on high-quality transmission of the picture within the current viewing-angle region. In general, the panoramic video is divided spatially into blocks, each block is encoded at multiple bit rates to generate multiple video streams, the terminal requests the video streams of the blocks corresponding to the user's viewpoint position, and finally the terminal decodes the streams, merges the blocks and presents the result to the user.
The FOV transmission scheme requires the panoramic video to be cut into several blocks; the conventional approach is to divide the panoramic video uniformly in the horizontal and vertical directions (as shown in fig. 1). For the Equirectangular Projection (ERP) format, currently the most widely used panoramic video projection format with the richest content resources, the closer a latitude line is to the two poles, the more redundant sampling points it carries, so the pixel density of the whole image is unevenly distributed. When the video is finally back-projected onto the sphere for playback, this produces the undesirable effect of sharp high-latitude areas and blurry low-latitude areas.
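As an editorial illustration of this redundancy (this sketch is not part of the original patent text; the chosen latitudes are arbitrary examples), the oversampling factor of an ERP row relative to the equator grows as 1/cos(latitude):

    import math

    # In an ERP image every row carries the same number of pixels, but the
    # latitude circle it maps to shrinks as cos(latitude), so the effective
    # oversampling factor relative to the equator is 1 / cos(latitude).
    for lat in (0, 30, 60, 80):
        print(f"latitude {lat:2d} deg -> oversampling x{1 / math.cos(math.radians(lat)):.2f}")
    # latitude  0 deg -> oversampling x1.00
    # latitude 30 deg -> oversampling x1.15
    # latitude 60 deg -> oversampling x2.00
    # latitude 80 deg -> oversampling x5.76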
Disclosure of Invention
The application aims to provide a partitioning method of a panoramic image and a server, so as to solve the technical problems in the prior art.
A first aspect of the embodiments of the present application provides a method for partitioning a panoramic image, applied at a server side, the method including:
dividing the panoramic image along the transverse direction to obtain an area set; each region in the region set corresponds to a latitude value, the latitude value is obtained according to the corresponding relation between the region and a spherical surface, and the spherical surface is a carrier of the panoramic image in a three-dimensional scene;
selecting a reference area from the area set, with the remaining areas as to-be-processed areas, and dividing the reference area in equal proportion along the longitudinal direction to obtain c reference image blocks, wherein the latitude value corresponding to the reference area is smaller than the latitude value corresponding to at least one of the to-be-processed areas;
and cutting the to-be-processed area with the latitude value larger than that of the reference area by taking the pixel density of the reference image block as a reference, so that the pixel density corresponding to the image block obtained after cutting projected on the spherical surface is equal to the pixel density corresponding to the reference image block projected on the spherical surface.
A second aspect of the embodiments of the present application provides a server, including:
the transverse cutting unit is configured to divide the panoramic image along the transverse direction to obtain an area set; each region in the region set corresponds to a latitude value, the latitude value is obtained according to the corresponding relation between the region and a spherical surface, and the spherical surface is a carrier of the panoramic image in a three-dimensional scene;
the selecting unit is configured to select a reference area from the area set, with the remaining areas as to-be-processed areas, and divide the reference area in equal proportion along the longitudinal direction to obtain c reference image blocks, wherein the latitude value corresponding to the reference area is smaller than the latitude value corresponding to at least one of the to-be-processed areas;
the longitudinal cutting unit is configured to cut the to-be-processed area with the latitude value larger than that of the reference area by taking the pixel density of the reference image block as a reference, so that the pixel density corresponding to the image block obtained after cutting projected on the spherical surface is equal to the pixel density corresponding to the reference image block projected on the spherical surface.
The embodiments of the present application provide a method and a server for partitioning a panoramic image. The method includes: dividing the panoramic image along the transverse direction to obtain a set of regions; selecting a reference region from the set, with the remaining regions serving as regions to be processed; dividing the reference region into c reference image blocks of equal size along the longitudinal direction; and cutting each region to be processed whose latitude value is greater than that of the reference region, taking the pixel density of the reference image blocks as the benchmark, so that the pixel density of the resulting image blocks when projected onto the spherical surface equals that of the reference image blocks. Compared with the prior-art approach of dividing the panoramic image uniformly at the same interval, the partitioning method of the present application applies different partitioning to regions at different latitudes: regions whose latitude value exceeds that of the reference region are cut into image blocks whose projected pixel density on the sphere equals that of the reference image blocks. With this method, the pixel density of the cut panoramic image is distributed uniformly when projected onto the sphere, improving the user experience. Furthermore, the method avoids redundant image data in high-latitude areas, thereby reducing data redundancy, improving subsequent coding efficiency and lowering resource occupancy during transmission.
Drawings
In order to describe the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a conventional uniform blocking method;
fig. 2 is a schematic diagram of an application scenario shown in an embodiment of the present application;
fig. 3 is a flowchart illustrating a method for partitioning a panoramic image according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram illustrating a corresponding relationship between a panoramic image and a spherical surface according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a panoramic image shown in an embodiment of the present application;
FIG. 6 is a schematic diagram of a panoramic image shown in an embodiment of the present application;
FIG. 7 is a schematic diagram of a panoramic image shown in an embodiment of the present application;
FIG. 8 is a schematic diagram of a panoramic image shown in an embodiment of the present application;
FIG. 9 is a schematic diagram of a panoramic image shown in an embodiment of the present application;
FIG. 10 is a schematic diagram of a panoramic image shown in an embodiment of the present application;
FIG. 11 is a schematic diagram of a panoramic image shown in an embodiment of the present application;
fig. 12 is a schematic diagram of a server according to an embodiment of the present application.
Detailed Description
For ease of understanding, some of the concepts related to the present application are illustratively presented for reference.
The embodiments of the present application are applicable to the Equirectangular Projection (ERP) panoramic video projection format and to projection formats derived from it.
The panoramic video referred to in the present application is VR panoramic video, also called 360-degree panoramic video or 360 video: video shot in all directions with multiple cameras, which the user can freely pan up, down, left and right while watching. A panoramic video consists of multiple frames of panoramic images, and the technical solution of the embodiments of the present application partitions these panoramic images into blocks.
The 3D panoramic video referred to in the present application consists of two 360-degree panoramic video streams, one displayed to the left eye and one to the right eye; the content shown to the two eyes in the same frame differs slightly, so that the user perceives a 3D effect when watching.
The method and the device of the present application can be used for the processing performed before encoding a panoramic video, or part of one, and packaging the encoded code stream; corresponding operations and processing take place on both the server and the terminal.
Fig. 2 is a schematic diagram of an application scenario shown in an embodiment of the present application. As shown in fig. 2, the network architecture of the present application may include a server 300 and a terminal 200. The server 300 communicates with a shooting device, which can capture a 360-degree panoramic video and transmit it to the server 300. The server can pre-process the panoramic video before encoding, perform encoding or transcoding, package the encoded code stream into a transmittable file, and transmit the file to a terminal or a content distribution network. The server can also select the content to be transmitted according to information fed back by the terminal (such as the user's viewing angle). The terminal 200 may be any network-connected electronic device such as VR glasses, a mobile phone, a tablet computer, a television or a computer. The terminal 200 receives the data transmitted by the server 300 and performs transcoding, decoding, display and so on.
To solve the problems caused by dividing the longitude-latitude (equirectangular) image uniformly during image processing, namely wasted coding and transmission bandwidth and the limited decoding capability and speed of the decoding end, the present application provides the following method for partitioning the panoramic image.
Encoding pre-processing:
Because the resolution of the original panoramic image is very high, transmitting it directly to a terminal means a large amount of data for the server to send, which inevitably increases the pressure on network transmission bandwidth; on the terminal side, if the decoding performance is low, buffering takes a long time or the terminal cannot decode the panoramic image at all. Therefore, in the embodiments of the present application, the original panoramic image is first cut into blocks. As shown in fig. 3, the specific cutting process may include:
s101, the server divides the panoramic image along the transverse direction to obtain an area set; each region in the region set corresponds to a latitude value, the latitude value is obtained according to the corresponding relation between the region and a spherical surface, and the spherical surface is a carrier of the panoramic image in a three-dimensional scene;
taking the example that a server obtains a panoramic image of a video according to the video acquired by shooting equipment as shown in fig. 4, the left side in the figure is a spherical surface which is a carrier of the panoramic image in a three-dimensional scene; the right rectangular image is the panoramic image; the corresponding relation between the longitudinal direction of the panoramic image and the spherical surface latitude lines can be seen, and in the embodiment of the application, the corresponding latitude value of the panoramic image is obtained according to the corresponding relation between the panoramic image and the spherical surface.
In a feasible embodiment, the server cuts the panoramic image into 6 regions: region 1, region 2, region 3, region 1 ', region 2 ' and region 3 ', which form a collection of regions, each region corresponding to a latitude value.
Typically the regions are rectangular. Take region A as an example: its lower border line corresponds to one latitude value and its upper border line to another. The latitude value of a region may therefore be the latitude value corresponding to the upper border line or the latitude value corresponding to the lower border line. In a possible embodiment, the latitude value of a region may also be the latitude value corresponding to the central axis of the region in the transverse direction.
Specifically, referring to fig. 5, for region A shown in fig. 5, the latitude value corresponding to the upper border line is 18 degrees north latitude, the latitude value corresponding to the lower border line is 18 degrees south latitude, and the latitude value corresponding to the central axis in the transverse direction is 0 degrees.
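As an illustration of step S101, the following Python sketch (an editorial addition; the function name and the choice of returning upper-edge, lower-edge and central-axis latitudes are assumptions, not part of the patent) computes the latitude values associated with each of the r transverse regions:

    def row_latitudes(r):
        """Latitude values (degrees) of each of the r transverse regions of an
        equirectangular panoramic image: (upper edge, lower edge, central axis).
        Row 1 touches 90 degrees north, row r touches 90 degrees south."""
        rows = []
        for row in range(1, r + 1):
            upper = 90 - 180 * (row - 1) / r
            lower = 90 - 180 * row / r
            centre = 90 - 180 * row / r + 90 / r   # central-axis latitude, used later as lat_r'_m
            rows.append((upper, lower, centre))
        return rows

    # Example: 6 regions, as in the embodiment of fig. 5
    for i, (up, low, ctr) in enumerate(row_latitudes(6), start=1):
        print(f"region {i}: upper {up:+.1f}, lower {low:+.1f}, central axis {ctr:+.1f}")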
S102, selecting a reference area from the area set, with the remaining areas as to-be-processed areas, and dividing the reference area in equal proportion along the longitudinal direction to obtain c reference image blocks, wherein the latitude value corresponding to the reference area is smaller than the latitude value corresponding to at least one of the to-be-processed areas;
A reference region is selected from the region set. In the specific selection process, the latitude values corresponding to the regions may be arranged in ascending or descending order, and a region that does not correspond to the maximum latitude value is then selected as the reference region.
In a feasible embodiment, the region set includes 6 regions (the latitude value of each region is marked by the latitude line corresponding to one of its border lines):
Area 3, with a corresponding latitude value of 90 degrees north latitude;
Area 3', with a corresponding latitude value of 90 degrees south latitude;
Area 2, with a corresponding latitude value of 60 degrees north latitude;
Area 2', with a corresponding latitude value of 60 degrees south latitude;
Area 1, with a corresponding latitude value of 30 degrees north latitude;
Area 1', with a corresponding latitude value of 30 degrees south latitude.
Any one of the region 1, the region 1 ', the region 2, and the region 2' among the above 6 regions may be used as a reference region.
S103, cutting the to-be-processed area with the latitude value larger than that of the reference area by taking the pixel density of the reference image block as a reference, so that the pixel density corresponding to the image block obtained after cutting projected on the spherical surface is equal to the pixel density corresponding to the reference image block projected on the spherical surface.
Specifically, when the latitude value of a to-be-processed area is smaller than or equal to that of the reference area, the to-be-processed area is divided equally along the longitudinal direction into c image blocks; when the latitude value of a to-be-processed area is larger than that of the reference area, it is cut with the pixel density of the reference image block as the reference.
In a feasible embodiment, the panoramic image is transversely cut into 6 areas, namely area 1, area 1', area 2, area 2', area 3 and area 3'. The server first selects area 2 as the reference area; accordingly, area 1, area 1', area 2', area 3 and area 3' are the areas to be processed. The server divides area 2 equally along the longitudinal direction to obtain 12 reference image blocks, and then reads the latitude value corresponding to each area to be processed. The latitude values of area 1, area 1' and area 2' are smaller than or equal to that of the reference area, so area 1, area 1' and area 2' are each divided equally into 12 image blocks along the longitudinal direction. The latitude values of area 3 and area 3' are greater than that of area 2, so these areas are cut on the basis of the pixel density of area 2, so that the pixel density of the image blocks obtained by cutting, when projected on the spherical surface, equals that of the reference image blocks projected on the spherical surface.
The specific cutting process is as follows:
The resolution of the panoramic image is m × n; the panoramic image is divided transversely into r areas, so each area contains m × n/r pixels. A reference area is selected and the remaining areas are areas to be processed; mapping the reference area and the areas to be processed onto the sphere gives the result shown on the right of fig. 6. For ease of distinction, in the solution of the embodiments of the present application, an area whose latitude value is greater than that of the reference area is called a first to-be-processed area, and an area whose latitude value is smaller than or equal to that of the reference area is called a second to-be-processed area.
Because the pixel redundancy of the image is low near the equator and high near the two poles, using the same cutting mode for areas at different latitude values would mean encoding and transmitting every area at the same resolution: transmission bandwidth would be wasted, and the redundant pixels would place high demands on the decoding capability of the decoding end and slow down decoding. For this reason, the technical solution of the present application uses different division methods for the first to-be-processed areas and the second to-be-processed areas.
Specifically, a reference area is selected from the areas, and the first to-be-processed areas are then down-sampled (i.e., compression-sampled) with the reference area as the benchmark. This reduces the redundant pixels of the high-latitude areas before encoding and thus reduces the bandwidth required; at the same time, down-sampling reduces the number of pixels that must be encoded and transmitted, lowers the decoding-capability requirement and decoding complexity of the decoding end, and improves the decoding speed.
For a second to-be-processed area, its latitude value is less than or equal to that of the reference area, so its pixel redundancy is also less than or equal to that of the reference area; in the scheme of the embodiments of the present application it is therefore cut in the same manner as the reference area. The two cutting methods are described in detail below with reference to specific examples.
(1) The cutting mode of the second area to be processed is as follows:
and in response to that the latitude value of the to-be-processed area is smaller than or equal to the latitude value of the reference area, performing equal proportion segmentation on the to-be-processed area along the longitudinal direction to obtain c image blocks.
Fig. 7 shows the cut panoramic image according to a preferred embodiment. The panoramic image is transversely cut into 6 regions: region 1, region 1', region 2, region 2', region 3 and region 3'. Region 2 is selected as the reference region. The latitude values corresponding to region 1, region 1' and region 2' are less than or equal to the latitude value corresponding to region 2, so region 1, region 1' and region 2' are all second to-be-processed regions; each of them can therefore be cut into 12 image blocks in the same cutting manner as region 2, and the resulting image blocks have equal areas. The cut image can be seen in fig. 7.
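The equal-proportion division used for the reference region and for the second to-be-processed regions can be sketched as follows (an editorial Python/NumPy illustration; the array shape and the divisibility assumption are illustrative only):

    import numpy as np

    def split_equally(region, c):
        """Split one transverse region (an H x W x 3 array) into c equal-width
        image blocks along the longitudinal (column) direction."""
        w = region.shape[1]
        assert w % c == 0, "for simplicity, assume the width is divisible by c"
        step = w // c
        return [region[:, i * step:(i + 1) * step] for i in range(c)]

    # e.g. a 640 x 3840 region cut into 12 blocks of 640 x 320 each
    blocks = split_equally(np.zeros((640, 3840, 3), dtype=np.uint8), 12)
    print(len(blocks), blocks[0].shape)   # 12 (640, 320, 3)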
(2) The cutting mode of the first area to be processed is as follows:
cutting the area to be processed by taking the pixel density of the reference image block as a reference, so that the pixel density corresponding to the image block obtained after cutting, when projected on the spherical surface, is equal to the pixel density corresponding to the reference image block projected on the spherical surface.
The cutting method of the first to-be-processed region is described in detail below with reference to specific embodiments:
In a possible embodiment, the region near the equator of the sphere may be selected as the reference region, and the remaining regions are the regions to be processed. Specifically, the cutting process is as follows: first, the number of blocks c_r' is chosen so that the resolution res_r' of the c_r' image blocks into which the processed (down-sampled) region is equally divided is equal to the resolution res_r of the reference image block; then, the server divides the processed region in equal proportion along the longitudinal direction to obtain the c_r' image blocks.
In the technical solution of the embodiments of the present application, the optimal blocking is the one that minimizes the difference in pixel density among all blocks. Since this difference manifests along the latitude direction, the ratio of the circumferences of the latitude circles is used as the basis for blocking.
The above cutting process is described in detail with reference to specific examples.
Firstly, the server calculates a perimeter ratio according to the reference perimeter and the perimeter to be processed;
in a possible embodiment, the ratio of the spherical circumference of the lateral central axis of the region to be processed (also referred to as the spherical circumference l _ r in the present application) to the spherical circumference of the lateral central axis of the reference region (also referred to as the spherical circumference l _ r in the present application) may be used to determine the down-sampling multiple.
The specific calculation process is as follows:
in the technical solution shown in the embodiment of the present application, different calculation methods are adopted to calculate the latitude value of the reference area and the latitude value of the area to be processed, specifically:
(1) In a possible embodiment, the reference perimeter may be calculated according to the following formula:
l_r = 2 * π * R * cos(lat_m); where R is the radius of the sphere and lat_m is the latitude value of the reference area.
Optionally, the area with the smallest latitude value can be selected as the reference area;
calculating a latitude value of the reference area according to the following formula:
lat_m = 90/r when r is even, and lat_m = 0 when r is odd (r being the number of areas into which the panoramic image is divided);
the calculation process of lat _ m is described in detail below with reference to specific examples:
In a feasible embodiment, the panoramic image is transversely divided into 8 regions, as shown in fig. 8; the region corresponding to the 4th row is the reference region, and the latitude value of the reference region is lat_m = 11.25;
In a possible embodiment, the panoramic image is transversely cut into 7 regions, as shown in fig. 9; the region corresponding to the 4th row is the reference region, whose latitude value lat_m is 0.
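The reference latitude values in the two examples above can be reproduced with the small sketch below (an editorial addition; the closed form 90/r for even r and 0 for odd r is inferred from the worked values, since the original formula image is not reproduced here):

    def reference_latitude(r):
        """Central-axis latitude lat_m (degrees) of the row nearest the equator,
        used when the area with the smallest latitude value is the reference.
        Inferred closed form: 90/r for an even number of rows, 0 for an odd number."""
        return 90.0 / r if r % 2 == 0 else 0.0

    print(reference_latitude(8))   # 11.25, as in fig. 8
    print(reference_latitude(7))   # 0.0, as in fig. 9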
(2) In a possible embodiment, the perimeter to be processed may be calculated according to the following formula:
l_r' = 2 * π * R * cos(lat_r'_m); where R is the radius of the sphere and lat_r'_m is the latitude value of the r'-th row to-be-processed area.
Alternatively, the latitude value lat _ r '_ m of the to-be-processed area in the r' th row may be calculated according to the following formula:
lat_r'_m=90-180*r'/r+90/r=90*(r-2r'+1)/r;
the calculation process of lat _ r' _ m is described in detail below with reference to specific examples:
in a possible embodiment, the panoramic image is transversely divided into 8 regions, and specifically, refer to fig. 8, where:
the area corresponding to the 1 st row is an area to be processed, and the latitude value of the area to be processed is as follows:
lat_r'_m=90-180*1/8+90/8;
the area corresponding to the 2 nd row is the area to be processed, and the latitude value of the area to be processed is as follows:
lat_r'_m=90-180*2/8+90/8;
the area corresponding to the 3 rd row is an area to be processed, and the latitude value of the area to be processed is as follows:
lat_r'_m=90-180*3/8+90/8;
the area corresponding to the 5 th row is a to-be-processed area, and the latitude value of the to-be-processed area is as follows:
lat_r'_m=90-180*5/8+90/8;
the area corresponding to the 6 th row is a to-be-processed area, and the latitude value of the to-be-processed area is as follows:
lat_r'_m=90-180*6/8+90/8;
the area corresponding to the 7 th row is a to-be-processed area, and the latitude value of the to-be-processed area is as follows:
lat_r'_m=90-180*7/8+90/8;
the area corresponding to the 8 th row is an area to be processed, and the latitude value of the area to be processed is as follows:
lat_r'_m=90-180*8/8+90/8.
In a possible embodiment, the panoramic image is transversely cut into 7 regions, and specifically, refer to fig. 9, where:
the area corresponding to the 1st row is a to-be-processed area, and the latitude value of the to-be-processed area is: lat_r'_m=90-180*1/7+90/7;
the area corresponding to the 2nd row is a to-be-processed area, and the latitude value of the to-be-processed area is: lat_r'_m=90-180*2/7+90/7;
the area corresponding to the 3rd row is a to-be-processed area, and the latitude value of the to-be-processed area is: lat_r'_m=90-180*3/7+90/7;
the area corresponding to the 5th row is a to-be-processed area, and the latitude value of the to-be-processed area is: lat_r'_m=90-180*5/7+90/7;
the area corresponding to the 6th row is a to-be-processed area, and the latitude value of the to-be-processed area is: lat_r'_m=90-180*6/7+90/7;
the area corresponding to the 7th row is a to-be-processed area, and the latitude value of the to-be-processed area is: lat_r'_m=90-180*7/7+90/7.
the multiple of downsampling is l _ r/l _ r'.
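The down-sampling multiple l_r/l_r' can be computed as in the following sketch (an editorial illustration; the unit sphere radius and the example latitudes are assumptions):

    import math

    def circumference(lat_deg, radius=1.0):
        """Spherical circumference of the latitude circle at lat_deg:
        l = 2 * pi * R * cos(lat)."""
        return 2 * math.pi * radius * math.cos(math.radians(lat_deg))

    def downsampling_multiple(lat_m, lat_rp_m):
        """Ratio l_r / l_r' of the reference circumference to the circumference of
        the r'-th to-be-processed row; rows nearer the poles get a larger multiple."""
        return circumference(lat_m) / circumference(lat_rp_m)

    # 8-row example: reference row lat_m = 11.25, 6th-row central axis = -33.75
    print(round(downsampling_multiple(11.25, -33.75), 3))   # ~1.18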
Then, the server performs down-sampling on the corresponding to-be-processed area according to the perimeter ratio to obtain a processed area;
the regions obtained by downsampling the regions to be processed may refer to the regions in the panoramic image shown in fig. 10;
and then, the server divides the processed area in equal proportion along the longitudinal direction to obtain c _ r' image blocks.
Wherein, the derivation process of c _ r' is as follows:
equally cutting the reference region along the longitudinal direction to obtain c reference image blocks, the resolution of each reference image block being res = (m/c) × (n/r);
obtaining m'/c_r' = m/c according to res_r' = (m'/c_r') × (n/r) = res = (m/c) × (n/r); wherein m/c is the number of pixels of the reference image block in the transverse direction, and n/r is the number of pixels of the reference image block in the longitudinal direction; m'/c_r' is the number of pixels of the to-be-processed image block in the transverse direction, and n/r is the number of pixels of the to-be-processed image block in the longitudinal direction;
according to the tile pixel density pd _ r' being equal to the reference tile pixel density pd _ r;
thus, from pd_r' = pd_r, the following can be obtained:
c_r' = c * l_r' / l_r;
wherein,
pd_r' = (m/c_r') / l_r',
pd_r = (m/c) / l_r.
l_r and l_r' can be calculated as in the above embodiment.
In a feasible embodiment, the area with the minimum latitude value is selected as the reference area; in that case
lat_m = 90/r when r is even, and lat_m = 0 when r is odd,
where r is the number of regions into which the panoramic image is cut, and lat_r'_m = 90 - 180*r'/r + 90/r = 90*(r - 2r' + 1)/r.
It can then be calculated that
c_r' = c * cos(lat_r'_m) / cos(lat_m).
In the present application, the server may store the calculation logic of c_r' in advance and call it directly during subsequent cutting.
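The stored calculation logic for c_r' can be sketched as follows (an editorial illustration; the function name and the ceiling rounding applied outside the reference row are assumptions consistent with the worked example below):

    import math

    def blocks_for_row(c, lat_m, lat_rp_m):
        """Number of blocks c_r' for a to-be-processed row:
        c_r' = c * l_r' / l_r = c * cos(lat_r'_m) / cos(lat_m), rounded up."""
        ratio = math.cos(math.radians(lat_rp_m)) / math.cos(math.radians(lat_m))
        return math.ceil(c * ratio)

    # 6th row of an 8-row image with 16 reference blocks: lat_r'_m = -33.75
    print(blocks_for_row(16, 11.25, -33.75))   # 14, matching the example below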
The following describes in detail the way of calculating the number of blocks to be cut into the region to be processed with reference to a specific example.
In a feasible embodiment, the panoramic image is divided into 8 regions, in which the region corresponding to the 4th (or 5th) row is the reference region; the reference region is divided equally into 16 image blocks (the divided image can be seen in fig. 10). The region of the 6th row is divided on the basis of the reference image blocks, and the number of its blocks is calculated as follows:
c_r' = c * cos(lat_r'_m) / cos(lat_m) = 16 * cos(-33.75°) / cos(11.25°) ≈ 13.6, which is rounded up to 14.
The result of the cutting can be seen in fig. 10. The original panoramic image is divided into 8 rows in total; the reference rows (the 4th and 5th rows) are each divided into 16 blocks (columns). Block counts are rounded up in the blocking method of this patent, and as can be seen from the figure, the 1st and 8th rows are each divided into 4 blocks (columns), the 2nd and 7th rows into 10 blocks (columns), and the 3rd and 6th rows into 14 blocks. The blocks are then sampled at the resolution of the reference blocks, and the final partitioning result is shown in fig. 10. The total number of video blocks after partitioning the panoramic image by this method is 88, compared with 128 blocks for the traditional uniform blocking method: the number of blocks is reduced by about one third, and data redundancy is greatly reduced while the visual display quality is maintained.
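The block counts and the 88-block total quoted above can be reproduced with the following sketch (an editorial illustration; treating the two middle rows as reference rows and rounding the other rows up are assumptions consistent with the figures):

    import math

    def block_layout(r, c):
        """Blocks per row for an r-row panoramic image whose minimum-latitude
        row(s) form the reference, each reference row being cut into c blocks."""
        lat_m = 90.0 / r if r % 2 == 0 else 0.0
        counts = []
        for rp in range(1, r + 1):
            lat = 90 - 180 * rp / r + 90.0 / r           # central-axis latitude lat_r'_m
            if abs(lat) <= lat_m + 1e-9:                 # reference row(s): keep c blocks
                counts.append(c)
            else:
                ratio = math.cos(math.radians(lat)) / math.cos(math.radians(lat_m))
                counts.append(math.ceil(c * ratio))
        return counts

    layout = block_layout(8, 16)
    print(layout, sum(layout))   # [4, 10, 14, 16, 16, 14, 10, 4] 88  (uniform blocking: 8 * 16 = 128)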
Example 2:
in a possible embodiment, a corresponding area near the equator of the sphere may be selected as a reference area (i.e., the area with the smallest latitude value is selected as the reference area), and the remaining areas may be selected as the areas to be processed. The method for calculating the number of the cut blocks of the area to be processed comprises the following steps: the server firstly divides the area to be processed in equal proportion along the longitudinal direction to obtain c _ r' image blocks to be processed; and then, downsampling the image blocks to be processed according to the perimeter ratio so as to enable the pixel density of the processed image blocks to be equal to that of the reference image block.
The calculation process of c _ r' may refer to the above embodiments. The processor divides the area to be processed in equal proportion along the longitudinal direction to obtain c _ r' image blocks to be processed.
In a feasible embodiment, the panoramic image is divided into 8 regions, in which the region corresponding to the 4th (or 5th) row is the reference region; the reference region is divided equally into 16 image blocks. The region of the 6th row is divided on the basis of the reference image blocks, and the number of its blocks is calculated as follows:
c_r' = 16 * cos(-33.75°) / cos(11.25°) ≈ 13.6, which is rounded up to 14.
the resulting cut image can be seen in fig. 11.
And then, downsampling the image blocks to be processed according to the perimeter ratio so as to enable the pixel density of the processed image blocks to be equal to that of the reference image block. The resulting cut image can be seen in fig. 10.
In a feasible embodiment, the server may also down-sample the to-be-processed image block based on the pixel density of the reference image block, so that the pixel density of the processed image block is equal to the pixel density of the reference image block.
Wherein the pixel density of the reference image block is: pd_r = (m/c)/l_r = m/(c * 2 * π * R * cos(lat_m));
and downsampling the image block to be processed by taking pd _ r as a reference. The resulting cut image can be seen in fig. 10.
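A sketch of the per-block down-sampling in this second example is given below (an editorial illustration; the image width, the rounding to whole pixels and the comparison with a 480-pixel reference block are assumptions):

    import math

    def processed_blocks(m, c, lat_m, lat_rp_m):
        """Number of blocks c_r' for a to-be-processed row and the width in pixels
        of each block after down-sampling it by the perimeter ratio l_r / l_r'."""
        ratio = math.cos(math.radians(lat_rp_m)) / math.cos(math.radians(lat_m))   # l_r' / l_r
        c_rp = math.ceil(c * ratio)                    # blocks in this row
        original_width = m / c_rp                      # block width before down-sampling
        return c_rp, round(original_width * ratio)     # block width after down-sampling

    # 7680-pixel-wide image, 16 reference blocks (480 px each), 6th row of 8 rows
    print(processed_blocks(7680, 16, 11.25, -33.75))   # (14, 465): close to the 480-px reference block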
The image blocking method for panoramic images described above is applied at a server. The method applies different blocking to regions at different latitudes: a to-be-processed region whose latitude value is greater than that of the reference region is cut into image blocks such that the pixel density of the resulting blocks when projected onto the sphere equals that of the reference image blocks projected onto the sphere. With the blocking method disclosed in the embodiments of the present application, the pixel density of the cut panoramic image is distributed uniformly when projected onto the sphere, improving the user experience. Furthermore, the method avoids redundant image data in high-latitude areas, thereby reducing data redundancy, improving subsequent coding efficiency and lowering resource occupancy during transmission.
A second aspect of the embodiment of the present application shows a server, please refer to fig. 12, where the server includes:
the transverse cutting unit 21 is configured to divide the panoramic image in the transverse direction to obtain a region set; each region in the region set corresponds to a latitude value, the latitude value is obtained according to the corresponding relation between the region and a spherical surface, and the spherical surface is a carrier of the panoramic image in a three-dimensional scene;
the selecting unit 22 is configured to select a reference region from the region set, where the remaining regions are to-be-processed regions, and divide the reference region in an equal proportion along a longitudinal direction to obtain c reference image blocks, where a latitude value corresponding to the reference region is at least smaller than a latitude value corresponding to one of the to-be-processed regions;
the longitudinal cutting unit 23 is configured to cut the to-be-processed area with the latitude value larger than that of the reference area, with the pixel density of the reference image block as a reference, so that the pixel density corresponding to the image block projected on the spherical surface obtained by cutting is equal to the pixel density corresponding to the reference image block projected on the spherical surface.
The steps of a method or algorithm described in connection with the disclosure herein may be embodied in hardware or in software instructions executed by a processor. The software instructions may consist of corresponding software modules that may be stored in Random Access Memory (RAM), flash memory, Read-Only Memory (ROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. Additionally, the ASIC may reside in a core network interface device. Of course, the processor and the storage medium may also reside as discrete components in a core network interface device.
Those skilled in the art will recognize that in one or more of the examples described above, the functions described herein may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The objects, technical solutions and advantages of the present application have been described in further detail in the above embodiments. It should be understood that the above are only specific embodiments of the present application and are not intended to limit its scope of protection; any modification, equivalent substitution, improvement and the like made on the basis of the technical solutions of the present application shall fall within the scope of protection of the present application.

Claims (12)

1. A method for partitioning a panoramic image is applied to a server side, and is characterized by comprising the following steps:
dividing the panoramic image along the transverse direction to obtain an area set; each region in the region set corresponds to a latitude value, the latitude value is obtained according to the corresponding relation between the region and a spherical surface, and the spherical surface is a carrier of the panoramic image in a three-dimensional scene;
selecting a reference area from the area set, wherein the rest areas are to-be-processed areas, and performing equal proportion segmentation on the reference area along the longitudinal direction to obtain c reference image blocks, wherein the latitude value corresponding to the reference area is at least smaller than that corresponding to one of the to-be-processed areas;
and in response to that the latitude value of the to-be-processed area is larger than that of the reference area, cutting the to-be-processed area by taking the pixel density of the reference image block as a reference, so that the pixel density corresponding to the image block projected on the spherical surface obtained after cutting is equal to the pixel density corresponding to the reference image block projected on the spherical surface.
2. The method of claim 1, further comprising:
and in response to that the latitude value of the to-be-processed area is smaller than or equal to the latitude value of the reference area, performing equal proportion segmentation on the to-be-processed area along the longitudinal direction to obtain c image blocks.
3. The method of claim 2, wherein the latitude value corresponding to the region is a latitude value corresponding to the central axis of the region on a spherical surface in a lateral direction.
4. The method of claim 3, wherein the pixel density is a pixel value contained in a unit distance in a latitudinal direction when the image is projected on a spherical surface.
5. The method of claim 4, wherein the projection of the reference region lateral central axis on the sphere is a reference perimeter; the projection of the central axis on the spherical surface in the transverse direction of the area to be processed is the perimeter to be processed;
in response to the latitude value of the to-be-processed area being greater than the latitude value of the reference area, the step of cutting based on the pixel density of the reference image block includes:
according to the perimeter ratio, carrying out down-sampling on the corresponding to-be-processed area to obtain a processed area, wherein the perimeter ratio is the ratio of the reference perimeter to the to-be-processed perimeter;
and performing equal proportion division on the processed area along the longitudinal direction to obtain c _ r' image blocks.
6. The method of claim 4, wherein the projection of the reference region lateral central axis on the sphere is a reference perimeter; the projection of the central axis on the spherical surface in the transverse direction of the area to be processed is the perimeter to be processed;
in response to the latitude value of the to-be-processed area being greater than the latitude value of the reference area, the step of cutting based on the pixel density of the reference image block includes:
dividing the area to be processed in equal proportion along the longitudinal direction to obtain c _ r' image blocks to be processed;
and downsampling the image blocks to be processed according to the perimeter ratio so that the pixel density of the processed image blocks is equal to that of the reference image block, wherein the perimeter ratio is the ratio of the reference perimeter to the perimeter to be processed.
7. The method according to claim 5 or 6, wherein the calculation of c _ r' is as follows:
the resolution res_r' of the image blocks into which the to-be-processed area is divided is equal to the resolution res_r of the reference image block; namely
(m'/c_r') × (n/r) = (m/c) × (n/r), so that m'/c_r' = m/c;
wherein m is the number of pixels of the panoramic image in the transverse direction, n is the number of pixels of the panoramic image in the longitudinal direction, r is the number of areas of the panoramic image, m' is the number of pixels of the to-be-processed area in the transverse direction, m/c is the number of pixels of the reference image block in the transverse direction, and n/r is the number of pixels of the reference image block in the longitudinal direction; m'/c_r' is the number of pixels of the to-be-processed image block in the transverse direction, and n/r is the number of pixels of the to-be-processed image block in the longitudinal direction;
from pd_r' = pd_r the value is obtained as
c_r' = c * l_r' / l_r,
with
pd_r' = (m/c_r') / l_r' and
pd_r = (m/c) / l_r;
wherein pd_r' is the image block pixel density, pd_r is the reference image block pixel density, l_r is the reference perimeter, and l_r' is the perimeter to be processed.
8. The method of claim 7, wherein l _ r is calculated according to the following formula;
l_r = 2 * π * R * cos(lat_m), where lat_m is the latitude value of the reference area and R is the spherical radius;
calculating l _ r' according to the following formula;
l_r' = 2 * π * R * cos(lat_r'_m), where lat_r'_m is the latitude value of the r'-th row to-be-processed area.
9. The method according to claim 8, wherein the latitude values lat _ r '_ m of the r' rows of the areas to be processed are calculated according to the following formula;
lat_r'_m=90-180*r'/r+90/r=90*(r-2r'+1)/r.
10. the method according to claim 9, characterized in that the area with the smallest latitude value is selected as the reference area;
lat_m = 90/r when r is even, and lat_m = 0 when r is odd;
r is the number of regions in which the panoramic image is cut.
11. The method of claim 10,
c_r' = c * cos(lat_r'_m) / cos(lat_m).
12. a server, comprising;
the transverse cutting unit is configured to divide the panoramic image along the transverse direction to obtain an area set; each region in the region set corresponds to a latitude value, the latitude value is obtained according to the corresponding relation between the region and a spherical surface, and the spherical surface is a carrier of the panoramic image in a three-dimensional scene;
the selecting unit is configured to select a reference area from the area set, wherein the rest areas are to-be-processed areas, and divide the reference area in an equal proportion along the longitudinal direction to obtain c reference image blocks, wherein the latitude value corresponding to the reference area is at least smaller than the latitude value corresponding to one of the to-be-processed areas;
the longitudinal cutting unit is configured to cut the to-be-processed area with the latitude value larger than that of the reference area by taking the pixel density of the reference image block as a reference, so that the pixel density corresponding to the image block obtained after cutting projected on the spherical surface is equal to the pixel density corresponding to the reference image block projected on the spherical surface.
CN202010047913.4A 2020-01-16 2020-01-16 Partitioning method of panoramic image and server Pending CN111212267A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010047913.4A CN111212267A (en) 2020-01-16 2020-01-16 Partitioning method of panoramic image and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010047913.4A CN111212267A (en) 2020-01-16 2020-01-16 Partitioning method of panoramic image and server

Publications (1)

Publication Number Publication Date
CN111212267A true CN111212267A (en) 2020-05-29

Family

ID=70787285

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010047913.4A Pending CN111212267A (en) 2020-01-16 2020-01-16 Partitioning method of panoramic image and server

Country Status (1)

Country Link
CN (1) CN111212267A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022088022A1 (en) * 2020-10-30 2022-05-05 深圳市大疆创新科技有限公司 Three-dimensional image processing method and apparatus, and movable platform and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030086003A1 (en) * 2001-10-04 2003-05-08 Tadaharu Koga Video data processing apparatus and method, data distributing apparatus and method, data receiving apparatus and method, storage medium, and computer program
CN108810427A (en) * 2017-05-02 2018-11-13 北京大学 The method and device of panoramic video content representation based on viewpoint
CN109327699A (en) * 2017-07-31 2019-02-12 华为技术有限公司 A kind of processing method of image, terminal and server
CN109547766A (en) * 2017-08-03 2019-03-29 杭州海康威视数字技术股份有限公司 A kind of panorama image generation method and device
CN109792487A (en) * 2016-10-04 2019-05-21 诺基亚技术有限公司 For Video coding and decoded device, method and computer program

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030086003A1 (en) * 2001-10-04 2003-05-08 Tadaharu Koga Video data processing apparatus and method, data distributing apparatus and method, data receiving apparatus and method, storage medium, and computer program
CN109792487A (en) * 2016-10-04 2019-05-21 诺基亚技术有限公司 For Video coding and decoded device, method and computer program
CN108810427A (en) * 2017-05-02 2018-11-13 北京大学 The method and device of panoramic video content representation based on viewpoint
CN109327699A (en) * 2017-07-31 2019-02-12 华为技术有限公司 A kind of processing method of image, terminal and server
CN109547766A (en) * 2017-08-03 2019-03-29 杭州海康威视数字技术股份有限公司 A kind of panorama image generation method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MATT YU: "Content Adaptive Representations of Omnidirectional Videos for Cinematic Virtual Reality", Immersive ME '15 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022088022A1 (en) * 2020-10-30 2022-05-05 深圳市大疆创新科技有限公司 Three-dimensional image processing method and apparatus, and movable platform and storage medium

Similar Documents

Publication Publication Date Title
US11700352B2 (en) Rectilinear viewport extraction from a region of a wide field of view using messaging in video transmission
KR102013403B1 (en) Spherical video streaming
US11341715B2 (en) Video reconstruction method, system, device, and computer readable storage medium
CN112204993B (en) Adaptive panoramic video streaming using overlapping partitioned segments
CN106131531B (en) Method for processing video frequency and device
US11483475B2 (en) Adaptive panoramic video streaming using composite pictures
WO2019073117A1 (en) An apparatus, a method and a computer program for volumetric video
US11095936B2 (en) Streaming media transmission method and client applied to virtual reality technology
EP3434021B1 (en) Method, apparatus and stream of formatting an immersive video for legacy and immersive rendering devices
CN110933461B (en) Image processing method, device, system, network equipment, terminal and storage medium
US11270413B2 (en) Playback apparatus and method, and generation apparatus and method
US11202099B2 (en) Apparatus and method for decoding a panoramic video
CN110351492B (en) Video data processing method, device and medium
CN111212267A (en) Partitioning method of panoramic image and server
KR102183895B1 (en) Indexing of tiles for region of interest in virtual reality video streaming
CN108271068B (en) Video data processing method and device based on streaming media technology
CN111866485A (en) Stereoscopic picture projection and transmission method, device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20200529