CN112511833A - Reproducing apparatus - Google Patents
Reproducing apparatus
- Publication number: CN112511833A
- Application number: CN202011216551.3A
- Authority: CN (China)
- Legal status: Pending
Classifications
- H04N21/234363 — Reformatting of video elementary streams by altering the spatial resolution, e.g. for clients with a lower screen resolution
- H04N21/435 — Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
- H04N19/156 — Availability of hardware or computational resources, e.g. encoding based on power-saving criteria
- H04N19/157 — Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
- H04N19/70 — Syntax aspects related to video coding, e.g. related to compression standards
- H04N19/80 — Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
- H04N21/23614 — Multiplexing of additional data and video streams
- H04N21/45455 — Input to filtering algorithms applied to a region of the image
- H04N21/4728 — End-user interface for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
- H04N21/84 — Generation or processing of descriptive data, e.g. content descriptors
- H04N19/167 — Position within a video image, e.g. region of interest [ROI]
- H04N21/236 — Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data
Abstract
The present technology relates to a reproduction apparatus including: a decoding unit that decodes encoded video data or encoded audio data, and decodes a plurality of pieces of zoom region information each specifying a region to be zoomed; a zoom region selection unit that selects one or more pieces of zoom region information from the plurality of pieces of zoom region information; and a data processing unit that, based on the selected zoom region information, performs a cropping process on the video data obtained by the decoding, or, based on the selected zoom region information and the position of a sound source object, performs an audio conversion process on the audio data obtained by the decoding.
Description
The present application is a divisional application of the application with Chinese national application number 201580053817.8, international filing date of September 28, 2015, and national-phase entry date of April 1, 2017, entitled "Encoding apparatus and method, reproducing apparatus and method, and program".
Technical Field
The present technology relates to an encoding device, an encoding method, a reproduction device, a reproduction method, and a program, and more particularly, to an encoding device, an encoding method, a reproduction device, a reproduction method, and a program that enable each reproduction device to reproduce appropriate content in a simplified manner.
Background
In recent years, high-resolution video content known as 4K or 8K has become common. Such 4K or 8K video content is often produced with a wide viewing angle in mind, that is, for reproduction on a large screen.

In addition, because 4K or 8K video content has a high resolution, the resolution remains sufficient even when only a part of the picture is cropped out; such video content can therefore be cropped and reproduced (for example, see Non-Patent Document 1).
Reference list
Non-patent document
Non-patent document 1: FDR-AX100, [Online], [searched September 24, 2014], Internet <URL: http://www.sony.net/Products/di/en-us/Products/j4it/index.html>
Disclosure of Invention
Problems to be solved by the invention
Meanwhile, video reproduction devices have diversified, and reproduction on a wide range of screen sizes, from large screens down to smartphones (multifunctional mobile phones), must be considered. At present, however, the same content is simply enlarged or reduced to match each screen size.

As noted above, 4K or 8K video content is often produced with large-screen reproduction in mind. It is therefore not appropriate to reproduce such content as-is on a reproduction device with a relatively small screen, such as a tablet personal computer (PC) or a smartphone.

Consequently, to provide content suited to reproduction devices whose screen sizes, screen shapes, and the like differ from one another, content matching each screen size, screen shape, and the like would have to be prepared separately.
The present technology takes these circumstances into consideration, and enables each reproduction device to reproduce appropriate content in a simplified manner.
Solution to the problem
A reproduction apparatus according to a first aspect of the present technology includes: a decoding unit that decodes the encoded video data or the encoded audio data; a zoom region selection unit that selects one or more pieces of zoom region information from a plurality of pieces of zoom region information that specify a region to be zoomed; and a data processing unit that performs a cropping process on video data obtained by the decoding or performs an audio conversion process on audio data obtained by the decoding based on the selected zoom region information.
In the plurality of pieces of zoom region information, zoom region information specifying a region for each type of reproduction target apparatus may be included.
In the plurality of pieces of zoom region information, zoom region information specifying a region for each rotation direction of the reproduction target apparatus may be included.
In the plurality of pieces of zoom region information, zoom region information specifying a region for each specific video object may be included.
The zoom region selection unit may be caused to select the zoom region information according to an operation input by the user.
The zoom region selection unit may be caused to select the zoom region information based on information relating to the reproduction apparatus.
The zoom region selection unit may be caused to select the zoom region information by using at least any one of information indicating a type of the reproduction apparatus and information indicating a rotation direction of the reproduction apparatus as the information relating to the reproduction apparatus.
The reproduction method or program according to the first aspect of the present technology includes the steps of: decoding the encoded video data or the encoded audio data; selecting one or more pieces of zoom region information from a plurality of pieces of zoom region information that specify a region to be zoomed; and performing a cropping process on the video data obtained by the decoding or performing an audio conversion process on the audio data obtained by the decoding, based on the selected zoom region information.
According to a first aspect of the present technique, encoded video data or encoded audio data is decoded; selecting one or more pieces of zoom region information from a plurality of pieces of zoom region information that specify a region to be zoomed; and performing a cropping process on the video data obtained by the decoding or performing an audio conversion process on the audio data obtained by the decoding, based on the selected zoom region information.
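As an illustration of this decode, select, and process flow, the following minimal Python sketch stands in for the decoding unit, the zoom region selection unit, and the data processing unit. All names here (ZoomRegion, crop_video, select_region) are hypothetical, and the dummy frame replaces a real decoder's output; this is a sketch of the idea, not the actual implementation.

```python
from dataclasses import dataclass

@dataclass
class ZoomRegion:
    x0: int  # start-point X
    y0: int  # start-point Y
    x1: int  # end-point X
    y1: int  # end-point Y

def crop_video(frame, region):
    """Cropping process: keep only the pixels inside the zoom region."""
    return [row[region.x0:region.x1] for row in frame[region.y0:region.y1]]

def select_region(regions, device_type):
    """Zoom region selection unit: pick a region suited to the device."""
    return regions.get(device_type, next(iter(regions.values())))

# A decoded 8x8 dummy frame and two candidate zoom regions.
frame = [[(x, y) for x in range(8)] for y in range(8)]
regions = {"smartphone": ZoomRegion(2, 2, 6, 6), "tv": ZoomRegion(0, 0, 8, 8)}
cropped = crop_video(frame, select_region(regions, "smartphone"))
print(len(cropped), len(cropped[0]))   # 4 4
```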
An encoding device according to a second aspect of the present technology includes: an encoding unit that encodes video data or encodes audio data; and a multiplexer generating a bitstream by multiplexing the encoded video data or the encoded audio data with a plurality of pieces of zoom region information specifying a region to be zoomed.
The encoding method or program according to the second aspect of the present technology includes the steps of: encoding video data or encoding audio data; and generating a bitstream by multiplexing the encoded video data or the encoded audio data with a plurality of pieces of zoom region information specifying regions to be zoomed.
According to a second aspect of the present technique, video data is encoded or audio data is encoded; and generating a bitstream by multiplexing the encoded video data or the encoded audio data with a plurality of pieces of zoom region information specifying regions to be zoomed.
Effects of the invention
According to the first and second aspects of the present technology, each reproduction apparatus can reproduce appropriate content in a simplified manner.
Note that the effects of the present technology are not limited to those described herein, but may be any effects described in the present disclosure.
Drawings
Fig. 1 is a diagram showing an example of the configuration of an encoding apparatus.
Fig. 2 is a diagram showing a configuration of encoded content data.
Fig. 3 is a diagram showing zoom region information.
Fig. 4 is a diagram showing the syntax of the zoom area information presence flag.
Fig. 5 is a diagram showing the syntax of the zoom region information.
Fig. 6 is a diagram showing the syntax of the zoom region information.
Fig. 7 is a diagram showing the syntax of the zoom region information.
Fig. 8 is a diagram showing the syntax of the zoom region information.
Fig. 9 is a diagram showing the syntax of the zoom region information.
Fig. 10 is a diagram showing the syntax of the zoom region information.
Fig. 11 is a diagram showing zoom region information.
Fig. 12 is a diagram showing zoom region information.
Fig. 13 is a diagram showing the syntax of the zoom region information.
Fig. 14 is a diagram showing the syntax of the zoom area information presence flag and the like.
Fig. 15 is a diagram showing the syntax of the zoom region information.
Fig. 16 is a diagram showing the syntax of the zoom region auxiliary information and the like.
Fig. 17 is a diagram showing a zoom specification.
Fig. 18 is a diagram showing an example of reproduced content.
Fig. 19 is a flowchart showing the encoding process.
Fig. 20 is a diagram showing an example of the configuration of the reproduction apparatus.
Fig. 21 is a flowchart showing the reproduction processing.
Fig. 22 is a diagram showing an example of the configuration of the reproduction apparatus.
Fig. 23 is a flowchart showing the reproduction processing.
Fig. 24 is a diagram showing an example of the configuration of the reproduction apparatus.
Fig. 25 is a flowchart showing the reproduction processing.
Fig. 26 is a diagram showing an example of the configuration of the reproduction apparatus.
Fig. 27 is a flowchart showing the reproduction processing.
Fig. 28 is a diagram showing an example of the configuration of a computer.
Detailed Description
Hereinafter, embodiments to which the present technology is applied will be described with reference to the drawings.
< first embodiment >
< example of configuration of encoding apparatus >
The present technology enables reproduction devices such as a TV receiver and a smart phone having display screen sizes different from each other to reproduce appropriate content such as content suitable for such reproduction devices in a simplified manner. The content described here may be, for example, content formed of video and audio or content formed of either of video and audio. Hereinafter, the description will be continued using an example of the case of content formed of a video and audio accompanying the video.
Fig. 1 is a diagram showing an example of the configuration of an encoding apparatus according to the present technology.
The encoding device 11 encodes content generated by a content producer, and outputs a bit stream (code string) in which the resulting encoded data is stored.
The encoding device 11 includes: a video data encoding unit 21; an audio data encoding unit 22; a metadata encoding unit 23; a multiplexer 24; and an output unit 25.
In the present example, video data of video and audio data of audio constituting a content are supplied to the video data encoding unit 21 and the audio data encoding unit 22, respectively, and metadata of the content is supplied to the metadata encoding unit 23.
The video data encoding unit 21 encodes the video data of the supplied content, and supplies the encoded video data obtained as a result thereof to the multiplexer 24. The audio data encoding unit 22 encodes the audio data of the supplied content, and supplies the encoded audio data obtained as a result thereof to the multiplexer 24.
The metadata encoding unit 23 encodes the metadata of the supplied content, and supplies the encoded metadata obtained as a result thereof to the multiplexer 24.
The multiplexer 24 generates a bitstream by multiplexing the encoded video data supplied from the video data encoding unit 21, the encoded audio data supplied from the audio data encoding unit 22, and the encoded metadata supplied from the metadata encoding unit 23, and supplies the generated bitstream to the output unit 25. The output unit 25 outputs the bit stream supplied from the multiplexer 24 to a reproducing apparatus or the like.
Note that, hereinafter, the bit stream output from the output unit 25 will also be referred to as encoded content data.
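As a rough illustration of this data flow, the following Python sketch stands in for the three encoding units and the multiplexer 24. The payload tagging and the length-prefixed framing are assumptions made for the example; the actual bitstream layout is described in the next section.

```python
def encode(payload: str, kind: str) -> bytes:
    # Stand-in for the video/audio/metadata encoding units: tag the payload.
    return kind.encode() + b":" + payload.encode()

def multiplex(*chunks: bytes) -> bytes:
    # Multiplexer 24: length-prefix each encoded stream and concatenate.
    return b"".join(len(c).to_bytes(4, "big") + c for c in chunks)

bitstream = multiplex(encode("video frames", "V"),
                      encode("audio frames", "A"),
                      encode("zoom regions", "M"))
print(len(bitstream))   # total bitstream size in bytes
```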
< encoded content data >
The content encoded by the encoding device 11 is generated with cropped reproduction in mind. In other words, the content producer generates the content assuming that either the content is reproduced as-is, or a partial area of the entire area of the video constituting the content is cropped and reproduced.

For example, the content producer selects, as a zoom region, a partial area to be cropped and reproduced, i.e., an area to be cropped out, zoomed, and reproduced, from the entire area of the video (image) constituting the content.

Note that a zoom region may be freely determined by the content producer, for example, so as to suit the viewing angle or the like of a particular reproduction device. Further, a zoom region may be determined based on a zoom purpose, for example, zooming in on and tracking a specific object, such as a singer or a player, within the video of the content.
In this way, in the case where several zoom regions are specified to the content by the producer side, in the bitstream output from the encoding apparatus 11, that is, in the encoded content data, zoom region information specifying the zoom regions is stored as metadata. At this time, when it is desired to specify the zoom area for each predetermined time unit, the zoom area information may be stored in the encoded content data for each time unit described above.
More specifically, for example, as shown in fig. 2, in the case where the content is stored in the bitstream for each frame, the zoom area information may be stored in the bitstream for each frame.
In the example shown in fig. 2, a header section HD in which header information and the like are stored is arranged at the beginning of the bit stream, i.e., the encoded content data, and a data section DA in which encoded video data and encoded audio data are stored is arranged after the header section HD.
In the header section HD, a video header section PHD in which header information on video constituting the content is stored, an audio header section AHD in which header information on audio constituting the content is stored, and a meta header section MHD in which header information on meta data of the content is stored are provided.
Further, in the meta header section MHD, a zoom region information header section ZHD in which information relating to the zoom region information is stored is provided. For example, in the zoom region information header section ZHD, a zoom area information presence flag or the like indicating whether or not zoom region information is stored in the data section DA is stored.
Further, in the data section DA, a data section in which data of the encoded content is stored for each frame of the content is provided. In the present example, a data section DAF-1 in which data of a first frame is stored is provided at the beginning of the data section DA, and a data section DAF-2 in which data of a second frame of content is stored is provided after the data section DAF-1. In addition, here, the data sections of the third frame and the subsequent frames are not shown in the drawing. Hereinafter, in the case where the data section DAF-1 or the data section DAF-2 of each frame does not need to be particularly distinguished from each other, each of the data section DAF-1 and the data section DAF-2 will be simply referred to as a data section DAF.
A video information data section PD-1 in which encoded video data is stored, an audio information data section AD-1 in which encoded audio data is stored, and a meta information data section MD-1 in which encoded meta data is stored are disposed in the data section DAF-1 of the first frame.
For example, in the meta information data section MD-1, position information of a video object and a sound source object included in the first frame of the content, and the like are included. In addition, a zoom region information data section ZD-1 in which the encoded zoom region information in the encoded metadata is stored is set within the meta information data section MD-1. Position information, zoom area information, and the like of the video object and the sound source object are set as metadata of the content.
Also, similarly to the data section DAF-1, a video information data section PD-2 in which encoded video data is stored, an audio information data section AD-2 in which encoded audio data is stored, and a meta information data section MD-2 in which encoded meta data is stored are provided in the data section DAF-2. In addition, in the meta information data section MD-2, a zoom region information data section ZD-2 in which the encoded zoom region information is stored is provided.
Further, hereinafter, when the video information data sections PD-1 and PD-2 need not be particularly distinguished from each other, each will be referred to simply as a video information data section PD; likewise, the audio information data sections AD-1 and AD-2 as an audio information data section AD, the meta information data sections MD-1 and MD-2 as a meta information data section MD, and the zoom region information data sections ZD-1 and ZD-2 as a zoom region information data section ZD.
Note that fig. 2 describes an example in which the video information data section PD, the audio information data section AD, and the meta information data section MD are provided in each data section DAF. However, the meta information data section MD may instead be provided in both or only one of the video information data section PD and the audio information data section AD. In this case, the zoom region information is stored in the zoom region information data section ZD of the meta information data section MD provided within the video information data section PD or the audio information data section AD.
Similarly, although an example is described in which the video header section PHD, the audio header section AHD, and the meta header section MHD are provided in the header section HD, the meta header section MHD may be provided in both or either of the video header section PHD and the audio header section AHD.
In addition, in the case where the zoom area information in each frame of the content is the same, the zoom area information may be configured to be stored in the header section HD. In this case, it is not necessary to provide the zoom area information data section ZD in each data section DAF.
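The nesting described above can be pictured with the following sketch, which models the fig. 2 layout as nested records. The dictionary representation and the field values are illustrative assumptions; only the hierarchy (header section HD with PHD/AHD/MHD/ZHD, and one data section DAF per frame with PD/AD/MD/ZD) comes from the text.

```python
header_HD = {
    "PHD": b"video header",
    "AHD": b"audio header",
    "MHD": {"ZHD": {"hasZoomAreaInfo": 1}},   # meta header carries the zoom header
}
data_DA = [
    {   # one data section DAF per frame
        "PD": b"encoded video",               # video information data section
        "AD": b"encoded audio",               # audio information data section
        "MD": {"ZD": [{"X0": 0, "Y0": 0, "X1": 3840, "Y1": 2160}]},
    },
]
bitstream = {"HD": header_HD, "DA": data_DA}
print(bitstream["DA"][0]["MD"]["ZD"][0])      # zoom region of the first frame
```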
< specific example of zoom region information 1>
Subsequently, a more specific example of the zoom area information will be described.
The above-mentioned zoom region information is information specifying a zoom region, i.e., a region to be zoomed; more specifically, it indicates the position of the zoom region. For example, the zoom region shown in fig. 3 can be specified using the coordinates of its center position, the coordinates of its start point and end point, its vertical width, its horizontal width, and the like.
In the case shown in fig. 3, the area of the entire video (image) of the content is the original area OR, and one rectangular scaling area ZE is specified within the original area OR. In the present example, the width of the zoom region ZE in the lateral direction (horizontal direction) of the drawing is a horizontal width XW, and the width of the zoom region ZE in the longitudinal direction (vertical direction) of the drawing is a vertical width YW.
Here, in the figure, a point in the XY coordinate system having the lateral direction (horizontal direction) as the X direction and the longitudinal direction (vertical direction) as the Y direction will be represented as coordinates (X, Y).
Now, when the coordinates of the point P11 at the center position of the zoom region ZE are (XC, YC), the zoom region ZE can be specified using these center coordinates (XC, YC) together with the horizontal width XW and the vertical width YW of the zoom region ZE. Accordingly, the center coordinates (XC, YC), the horizontal width XW, and the vertical width YW can be set as the zoom region information.

In addition, for example, in the case where the zoom region ZE is a rectangular region, the upper-left vertex P12 of the zoom region ZE in the drawing may be set as a start point and the lower-right vertex P13 as an end point, and the zoom region ZE may then be specified using the coordinates (X0, Y0) of the start point (vertex P12) and the coordinates (X1, Y1) of the end point (vertex P13). Therefore, the coordinates (X0, Y0) of the start point and the coordinates (X1, Y1) of the end point can also be set as the zoom region information.
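The two parameterizations describe the same rectangle and can be converted into each other, as the following small sketch shows (integer coordinates and even widths are assumed for simplicity):

```python
def center_to_corners(xc, yc, xw, yw):
    # Start point (top-left) and end point (bottom-right) from centre + widths.
    return (xc - xw // 2, yc - yw // 2), (xc + xw // 2, yc + yw // 2)

def corners_to_center(x0, y0, x1, y1):
    # Centre coordinates and widths from start/end points.
    return (x0 + x1) // 2, (y0 + y1) // 2, x1 - x0, y1 - y0

start, end = center_to_corners(xc=3840, yc=2160, xw=1920, yw=1080)
print(start, end)                       # (2880, 1620) (4800, 2700)
print(corners_to_center(*start, *end))  # (3840, 2160, 1920, 1080)
```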
Suppose, more specifically, that the coordinates (X0, Y0) of the start point and the coordinates (X1, Y1) of the end point are set as the zoom region information. In this case, for example, the zoom area information presence flag shown in fig. 4 may be stored in the above-described zoom region information header section ZHD, and, according to the value of this flag, the zoom region information shown in fig. 5 may be stored in each zoom region information data section ZD.
Fig. 4 is a diagram showing the syntax of the zoom area information presence flag. In the present example, "hasZoomAreaInfo" denotes the zoom area information presence flag, and its value is either "0" or "1".

Here, in the case where the value of the zoom area information presence flag hasZoomAreaInfo is "0", it indicates that zoom region information is not included in the encoded content data. In contrast, in the case where the value of the zoom area information presence flag hasZoomAreaInfo is "1", it indicates that zoom region information is included in the encoded content data.
In addition, in the case where the value of the zoom area information presence flag hasZoomAreaInfo is "1", the zoom area information is stored in the zoom area information data section ZD of each frame. For example, the zoom region information is stored in the zoom region information data section ZD in the syntax shown in fig. 5.
In fig. 5, "ZoomAreaX0" and "ZoomAreaY0" respectively represent the X coordinate X0 and the Y coordinate Y0 of the start point of the zoom region ZE. In addition, "ZoomAreaX1" and "ZoomAreaY1" respectively represent the X coordinate X1 and the Y coordinate Y1 of the end point of the zoom region ZE.
For example, in the case where the video of the content to be encoded is 8K video, each of the values of "ZoomAreaX 0" and "ZoomAreaX 1" is set to one of values 0 to 7679, and each of the values of "ZoomAreaY 0" and "ZoomAreaY 1" is set to one of values 0 to 4319.
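As an illustration, the following sketch packs and unpacks the four fig. 5 fields as fixed-width bit fields. The 13-bit field width is an assumption chosen because the ranges 0 to 7679 and 0 to 4319 both fit in 13 bits; the codec's actual binarization is not reproduced here.

```python
def pack_fields(values, width=13):
    # Concatenate fixed-width unsigned fields into one integer, MSB first.
    bits = 0
    for v in values:
        bits = (bits << width) | v
    return bits

def unpack_fields(bits, count, width=13):
    # Extract `count` fixed-width fields, restoring the original order.
    out = []
    for _ in range(count):
        out.append(bits & ((1 << width) - 1))
        bits >>= width
    return out[::-1]

packed = pack_fields([100, 200, 4100, 2300])   # ZoomAreaX0/Y0/X1/Y1
print(unpack_fields(packed, 4))                # [100, 200, 4100, 2300]
```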
< specific example of zoom region information 2>
In addition, for example, also in the case where the center coordinates (XC, YC), the horizontal width XW, and the vertical width YW are set as the zoom region information, the zoom area information presence flag hasZoomAreaInfo shown in fig. 4 is stored in the zoom region information header section ZHD. When the value of the zoom area information presence flag hasZoomAreaInfo is "1", the zoom region information is stored in the zoom region information data section ZD of each frame. In this case, the zoom region information is stored in the zoom region information data section ZD in the syntax shown in fig. 6, for example.

In the case of fig. 6, "ZoomAreaXC" and "ZoomAreaYC" respectively denote the X coordinate XC and the Y coordinate YC of the center coordinates (XC, YC) of the zoom region ZE.
In addition, "ZoomAreaXW" and "ZoomAreaYW" represent the horizontal width XW and the vertical width YW of the zoom region ZE, respectively.
Also in this example, for example, in the case where the video of the content to be encoded is an 8K video, each of the values of "ZoomAreaXC" and "ZoomAreaXW" is set to one of values 0 to 7679, and each of the values of "ZoomAreaYC" and "ZoomAreaYW" is set to one of values 0 to 4319.
< specific example of zoom region information 3>
In addition, for example, in the case where the zoom region is specified using the center coordinates (XC, YC), the horizontal width XW, and the vertical width YW, and the horizontal width XW and the vertical width YW are set to fixed values, only the difference of the center coordinates (XC, YC) may be stored as zoom region information in the zoom region information data section ZD.
In this case, for example, in the zoom area information data section ZD-1 provided in the data section DAF-1 of the first frame, the zoom area information shown in fig. 6 is stored. In addition, in the zoom area information data section ZD provided in the data section DAF of each of the second frame and the subsequent frames, the zoom area information is stored in the syntax shown in fig. 7.
In the case of fig. 7, "nbits", "ZoomAreaXCshift", and "ZoomAreaYCshift" are stored as the zoom region information. "nbits" is bit number information indicating the number of bits of information of each of "ZoomAreaXCshift" and "ZoomAreaYCshift".
In addition, "ZoomAreaXCshift" represents a difference between XC, which is an X coordinate of center coordinates (XC, YC), and a predetermined reference value. For example, the reference value of the coordinate XC may be an X coordinate of a center coordinate (XC, YC) in the first frame or an X coordinate of a center coordinate (XC, YC) in the previous frame of the current frame.
"ZoomAreaYCshift" denotes a difference of YC as a Y coordinate of the center coordinate (XC, YC) from a predetermined reference value. For example, the reference value of the coordinate YC may be a Y coordinate of the center coordinate (XC, YC) in the first frame or a Y coordinate of the center coordinate (XC, YC) in the previous frame of the current frame, similar to the reference value of the coordinate XC.
Such "zoomarea xcshift" and "zoomarea ycshift" represent the amount of movement from the reference value of the center coordinates (XC, YC).
Note that, for example, in the case where the reference value of the center coordinates (XC, YC) is known on the reproduction side of the content, or in the case where the reference value of the center coordinates (XC, YC) is stored in the zoom region information header section ZHD or the like, the zoom region information shown in fig. 7 may be stored in the zoom region information data section ZD of each frame.
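A sketch of this differential scheme follows. The signed representation and the way nbits is derived from the largest shift are assumptions made for illustration, not the actual binarization.

```python
def encode_shift(xc, yc, ref_xc, ref_yc):
    # Send only the movement of the centre coordinate relative to a reference.
    dx, dy = xc - ref_xc, yc - ref_yc
    nbits = max(abs(dx), abs(dy)).bit_length() + 1   # +1 bit for the sign
    return nbits, dx, dy

def decode_shift(ref_xc, ref_yc, dx, dy):
    # Recover the centre coordinate from the reference and the shifts.
    return ref_xc + dx, ref_yc + dy

nbits, dx, dy = encode_shift(3856, 2150, ref_xc=3840, ref_yc=2160)
print(nbits, dx, dy)                     # 6 16 -10
print(decode_shift(3840, 2160, dx, dy))  # (3856, 2150)
```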
< specific example of zoom region information 4>
In addition, for example, in the case where the zoom region is specified using the center coordinates (XC, YC), the horizontal width XW, and the vertical width YW, and the center coordinates (XC, YC) are set to a fixed value, only the difference, that is, the amount of change in the horizontal width XW and the vertical width YW may be stored as zoom region information in the zoom region information data section ZD.
In this case, for example, in the zoom area information data section ZD-1 provided in the data section DAF-1 of the first frame, the zoom area information shown in fig. 6 is stored. In addition, in the zoom area information data section ZD set in the data section DAF set in each of the second frame and the subsequent frame, the zoom area information is stored in the syntax shown in fig. 8.
In fig. 8, "nbits", "zoomareaxshift", and "ZoomAreaYWshift" are stored as the zoom region information. "nbits" is bit number information indicating the number of bits of information of each of "zoomareaxshift" and "ZoomAreaYWshift".
In addition, "ZoomAreaXWshift" represents the amount of change from a predetermined reference value of the horizontal width XW. For example, the reference value of the horizontal width XW may be the horizontal width XW in the first frame or the horizontal width XW of the previous frame of the current frame.
"ZoomAreaYWshift" represents the amount of change from the reference value of the vertical width YW. For example, the reference value of the vertical width YW may be the vertical width YW in the first frame or the vertical width YW of the previous frame of the current frame, similar to the reference value of the horizontal width XW.
Note that, for example, in the case where the reference values of the horizontal width XW and the vertical width YW are known on the reproduction side of the content, or in the case where the reference values of the horizontal width XW and the vertical width YW are stored in the zoom region information header section ZHD or the like, the zoom region information shown in fig. 8 may be stored in the zoom region information data section ZD of each frame.
< specific example of zoom region information 5>
In addition, for example, in the case where the zoom region is specified using the center coordinates (XC, YC), the horizontal width XW, and the vertical width YW, as in the case of fig. 7 and 8, the difference of the center coordinates (XC, YC), the horizontal width XW, and the vertical width YW may be stored as zoom region information in the zoom region information data section ZD.
In this case, for example, in the zoom region information data section ZD-1 set in the data section DAF-1 of the first frame, the zoom region information shown in fig. 6 is stored. In addition, in the zoom region information data section ZD provided in the data section DAF of each of the second and subsequent frames, the zoom region information is stored in the syntax shown in fig. 9.
In the case of fig. 9, "nbits", "ZoomAreaXCshift", "ZoomAreaYCshift", "ZoomAreaXWshift", and "ZoomAreaYWshift" are stored as the zoom region information.
"nbits" is bit number information indicating the number of bits of information of each of "ZoomAreaXCshift", "ZoomAreaYCshift", "zoomareaxwsshift", and "zoomayreywshift".
As in the case of fig. 7, "ZoomAreaXCshift" and "ZoomAreaYCshift" represent differences from the reference values of the X coordinate and the Y coordinate of the center coordinate (XC, YC), respectively.
In addition, "ZoomAreaXWshift" and "ZoomAreaYWshift" represent the amount of change from the reference values of the horizontal width XW and the vertical width YW, respectively, as in the case of fig. 8.
Here, reference values of the center coordinates (XC, YC), the horizontal width XW, and the vertical width YW may be set to the center coordinates (XC, YC), the horizontal width XW, and the vertical width YW in the first frame or a frame previous to the current frame. Further, in the case where the reference values of the center coordinates (XC, YC), the horizontal width XW, and the vertical width YW are known on the reproduction side of the content, or in the case where the reference values are stored in the zoom region information header section ZHD, the zoom region information shown in fig. 9 may be stored in the zoom region information data section ZD of each frame.
< specific example of zoom region information 6>
In addition, by combining the examples shown in fig. 6 to 9 described above, for example, the zoom region information may be stored in each zoom region information data section ZD in the syntax shown in fig. 10.
In this case, the zoom area information presence flag hasZoomAreaInfo shown in fig. 4 is stored in the zoom region information header section ZHD. Further, when the value of the zoom area information presence flag hasZoomAreaInfo is "1", the zoom region information is stored in the zoom region information data section ZD of each frame. For example, the zoom region information is stored in the zoom region information data section ZD in the syntax shown in fig. 10.
In the case shown in fig. 10, encoding mode information indicating a format describing the zoom region information (more specifically, information specifying the position of the zoom region) among the formats shown in fig. 6 to 9 is arranged at the beginning of the zoom region information. In fig. 10, "mode" indicates coding mode information.
Here, the value of the coding mode information mode is set to one of values 0 to 3.
For example, in the case where the value of the encoding mode information mode is "0", as shown in "case 0" in the drawing and below, "ZoomAreaXC" representing the coordinate XC, "ZoomAreaYC" representing the coordinate YC, "ZoomAreaXW" representing the horizontal width XW, and "ZoomAreaYW" representing the vertical width YW are stored as the zoom region information, similarly to the example shown in fig. 6.
On the other hand, in the case where the value of the encoding mode information mode is "1", as shown in "case 1" in the figure and below, "nbits" representing the bit number information, "ZoomAreaXCshift" representing the difference of the coordinates XC, and "ZoomAreaYCshift" representing the difference of the coordinates YC are stored as the zoom area information, similarly to the example shown in fig. 7.
In the case where the value of the coding mode information mode is "2", as shown in "case 2" in the figure and below, "nbits" indicating bit number information, "ZoomAreaXWshift" indicating the amount of change in the horizontal width XW, and "ZoomAreaYWshift" indicating the amount of change in the vertical width YW are stored as the zoom region information, similarly to the example shown in fig. 8.
Further, in the case where the value of the encoding mode information mode is "3", as shown in "case 3" in the drawing and below, "nbits" representing the bit number information, "ZoomAreaXCshift" representing the difference of the coordinates XC, "ZoomAreaYCshift" representing the difference of the coordinates YC, "ZoomAreaXWshift" representing the variation of the horizontal width XW, and "ZoomAreaYWshift" representing the variation of the vertical width YW are stored as the zoom area information, similarly to the example shown in fig. 9.
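A reader for this mode-switched syntax might look like the following sketch. Here, read is a hypothetical stand-in that pulls one syntax element from the bitstream, and the field order mirrors the "case 0" to "case 3" description above.

```python
def read_zoom_area_info(read):
    mode = read("mode")                       # encoding mode information, 0..3
    if mode == 0:   # absolute: centre + widths (fig. 6)
        return {k: read(k) for k in
                ("ZoomAreaXC", "ZoomAreaYC", "ZoomAreaXW", "ZoomAreaYW")}
    nbits = read("nbits")                     # bit number information
    fields = {1: ("ZoomAreaXCshift", "ZoomAreaYCshift"),             # fig. 7
              2: ("ZoomAreaXWshift", "ZoomAreaYWshift"),             # fig. 8
              3: ("ZoomAreaXCshift", "ZoomAreaYCshift",
                  "ZoomAreaXWshift", "ZoomAreaYWshift")}[mode]       # fig. 9
    return {"nbits": nbits, **{k: read(k) for k in fields}}

# Feed the reader from a scripted sequence of syntax elements.
stream = iter([1, 6, 16, -10])                # mode=1, nbits=6, dx, dy
print(read_zoom_area_info(lambda name: next(stream)))
```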
< specific example of zoom region information 7>
In addition, although the example in which the coordinate information is stored as the zoom region information is described above, the angle information specifying the zoom region may be stored as the zoom region information in each zoom region information data section ZD.
For example, as shown in fig. 11, a point which is located at the same height as the center position CP of the original area OR and is spaced apart from the center position CP by a predetermined distance to the front side in fig. 11 is set as the viewing point WP which is a reference when viewing the content. In addition, it is assumed that the positional relationship between the center position CP and the viewpoint WP is constantly the same positional relationship regardless of the frame of the content. Note that in fig. 11, the same reference numerals are assigned to portions corresponding to the case shown in fig. 3, and a description thereof will not be presented as appropriate.
In fig. 11, a straight line connecting the center position CP and the viewing point WP is set as a straight line L11. In addition, the middle point of the left side of the zoom region ZE in the drawing is set as a point P21, and a straight line connecting the point P21 and the viewing point WP is set as a straight line L12. Further, the angle formed by the straight line L11 and the straight line L12 is set as the horizontal angle φ_left.

Similarly, the middle point of the right side of the zoom region ZE in the drawing is set as a point P22, and a straight line connecting the point P22 and the viewing point WP is set as a straight line L13. The angle formed by the straight line L11 and the straight line L13 is set as the horizontal angle φ_right.

In addition, a point on the right side of the zoom region ZE having the same Y coordinate as the center position CP is set as a point P23, and a straight line connecting the point P23 and the viewing point WP is set as a straight line L14. Further, the upper-right vertex of the zoom region ZE in the drawing is set as a point P24, a straight line connecting the point P24 and the viewing point WP is set as a straight line L15, and the angle formed by the straight line L14 and the straight line L15 is set as the pitch angle θ_top.

Similarly, the lower-right vertex of the zoom region ZE in the drawing is set as a point P25, a straight line connecting the point P25 and the viewing point WP is set as a straight line L16, and the angle formed by the straight line L14 and the straight line L16 is set as the pitch angle θ_bottom.

In this case, the zoom region ZE can be specified using the horizontal angle φ_left, the horizontal angle φ_right, the pitch angle θ_top, and the pitch angle θ_bottom. Accordingly, these angles can be stored as zoom region information in each zoom region information data section ZD shown in fig. 2. Alternatively, the amounts of change of some or all of the horizontal angle φ_left, the horizontal angle φ_right, the pitch angle θ_top, and the pitch angle θ_bottom may be set as the zoom region information.
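Assuming the viewing point WP lies at a distance d directly in front of the center position CP, and treating horizontal and vertical offsets independently (a simplification of the geometry of fig. 11), the four angles can be derived as in the following sketch:

```python
import math

def region_angles(x0, y0, x1, y1, cx, cy, d):
    """Angles (degrees) to the zoom region edges as seen from WP."""
    phi_left     = math.degrees(math.atan2(cx - x0, d))  # to the left edge
    phi_right    = math.degrees(math.atan2(x1 - cx, d))  # to the right edge
    theta_top    = math.degrees(math.atan2(cy - y0, d))  # up to the top edge
    theta_bottom = math.degrees(math.atan2(y1 - cy, d))  # down to the bottom edge
    return phi_left, phi_right, theta_top, theta_bottom

# 8K frame, centred 4K zoom region, viewer at one screen-width distance.
print(region_angles(1920, 1080, 5760, 3240, cx=3840, cy=2160, d=7680))
```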
< specific example of zoom region information 8>
In addition, for example, as shown in fig. 12, angle information determined based on the positional relationship among the center position CP, the point P11 located at the center position of the zoom region ZE, and the viewpoint WP may be set as the zoom region information. Note that in fig. 12, the same reference numerals are assigned to portions corresponding to the cases shown in fig. 3 or fig. 11, and a description thereof will not be presented as appropriate.
In fig. 12, a straight line connecting a point P11 located at the center position of the zoom region ZE and the viewing point WP is set as a straight line L21. Further, a point having the same X coordinate as the point P11 located at the center position of the zoom region ZE and having the same Y coordinate as the center position CP of the original region OR is set as a point P31, and a straight line connecting the point P31 and the viewing point WP is set as a straight line L22.
Further, the middle point on the upper side of the zoom region ZE in the drawing is set to a point P32, the straight line connecting the point P32 and the viewing point WP is set to a straight line L23, the middle point on the lower side of the zoom region ZE in the drawing is set to a point P33, and the straight line connecting the point P33 and the viewing point WP is set to a straight line L24.
Further, the angle formed by the straight line L12 and the straight line L13 is set as the horizontal view angle φ_W, and the angle formed by the straight line L11 and the straight line L22 is set as the horizontal angle φ_C. In addition, the angle formed by the straight line L23 and the straight line L24 is set as the vertical view angle θ_W, and the angle formed by the straight line L21 and the straight line L22 is set as the pitch angle θ_C.

Here, the horizontal angle φ_C and the pitch angle θ_C respectively represent the horizontal angle and the pitch angle, as seen from the viewing point WP, of the point P11 located at the center of the zoom region ZE.

At this time, the zoom region ZE can be specified using the horizontal view angle φ_W, the horizontal angle φ_C, the vertical view angle θ_W, and the pitch angle θ_C. Accordingly, these angles, or the amounts of change in these angles, can be stored as zoom region information in each zoom region information data section ZD shown in fig. 2.
In this case, for example, the zoom area information presence flag hasZoomAreaInfo shown in fig. 4 is stored in the zoom region information header section ZHD. Further, when the value of the zoom area information presence flag hasZoomAreaInfo is "1", the zoom region information is stored in the zoom region information data section ZD of each frame. For example, the zoom region information is stored in the zoom region information data section ZD in the syntax shown in fig. 13.
In the case shown in fig. 13, encoding mode information indicating one format among a plurality of formats in which the zoom region information (more specifically, information of the position of the zoom region) is described is arranged at the beginning of the zoom region information.
In fig. 13, "mode" denotes coding mode information, and the value of the coding mode information mode is set to one of values 0 to 3.
For example, in the case where the value of the encoding mode information mode is "0", as shown in "case 0" and below in the figure, "ZoomAreaAZC" representing the horizontal angle φ_C, "ZoomAreaELC" representing the pitch angle θ_C, "ZoomAreaAZW" representing the horizontal view angle φ_W, and "ZoomAreaELW" representing the vertical view angle θ_W are stored as the zoom region information.

In the case where the value of the encoding mode information mode is "1", as shown in "case 1" and below in the figure, "nbits" representing the bit number information, "ZoomAreaAZCshift" representing the shift of the horizontal angle φ_C, and "ZoomAreaELCshift" representing the shift of the pitch angle θ_C are stored as the zoom region information.

Here, the bit number information nbits indicates the number of bits of each of "ZoomAreaAZCshift" and "ZoomAreaELCshift".

In addition, "ZoomAreaAZCshift" is set to the difference between the horizontal angle φ_C of the current frame and the horizontal angle φ_C of the previous frame, a predetermined reference horizontal angle φ_C, or the like; "ZoomAreaELCshift" is set to the difference between the pitch angle θ_C of the current frame and the pitch angle θ_C of the previous frame, a predetermined reference pitch angle θ_C, or the like.

In the case where the value of the encoding mode information mode is "2", as shown in "case 2" and below in the figure, "nbits" representing the bit number information, "ZoomAreaAZWshift" representing the amount of change in the horizontal view angle φ_W, and "ZoomAreaELWshift" representing the amount of change in the vertical view angle θ_W are stored as the zoom region information.

Here, the bit number information nbits indicates the number of bits of each of "ZoomAreaAZWshift" and "ZoomAreaELWshift".

In addition, "ZoomAreaAZWshift" is set to the difference between the horizontal view angle φ_W of the current frame and the horizontal view angle φ_W of the previous frame, a predetermined reference horizontal view angle φ_W, or the like; "ZoomAreaELWshift" is set to the difference between the vertical view angle θ_W of the current frame and the vertical view angle θ_W of the previous frame, a predetermined reference vertical view angle θ_W, or the like.

In the case where the value of the encoding mode information mode is "3", as shown in "case 3" and below in the figure, "nbits" representing the bit number information, "ZoomAreaAZCshift" representing the shift of the horizontal angle φ_C, "ZoomAreaELCshift" representing the shift of the pitch angle θ_C, "ZoomAreaAZWshift" representing the amount of change in the horizontal view angle φ_W, and "ZoomAreaELWshift" representing the amount of change in the vertical view angle θ_W are stored as the zoom region information.

In this case, the bit number information nbits indicates the number of bits of each of "ZoomAreaAZCshift", "ZoomAreaELCshift", "ZoomAreaAZWshift", and "ZoomAreaELWshift".

Note that the configuration of the zoom region information is not limited to the example shown in fig. 13; only "ZoomAreaAZC", "ZoomAreaELC", "ZoomAreaAZW", and "ZoomAreaELW" may be set as the zoom region information. Further, both or only one of the pair "ZoomAreaAZCshift" and "ZoomAreaELCshift" and the pair "ZoomAreaAZWshift" and "ZoomAreaELWshift" may be set as the zoom region information.
< specific example of zoom region information 9>
In addition, although the case where only one piece of zoom region information exists is described above, a plurality of pieces of zoom region information may be stored in the zoom region information data section ZD. In other words, a plurality of zoom regions may be specified for one piece of content, and zoom region information for each of those zoom regions may be stored in the zoom region information data section ZD.

In this case, for example, the information is stored in the zoom region information header section ZHD in the syntax shown in fig. 14, and the zoom region information is stored in the zoom region information data section ZD of each frame in the syntax shown in fig. 15.
In the example shown in fig. 14, "hasZoomAreaInfo" indicates the zoom area information presence flag. In the case where the value of the zoom area information presence flag is "1", "numZoomAreas" is stored after the zoom area information presence flag hasZoomAreaInfo.

Here, "numZoomAreas" represents zoom region number information indicating the number of pieces of zoom region information described in the zoom region information data section ZD, that is, the number of zoom regions set for the content. In the present example, the value of the zoom region number information numZoomAreas is one of the values 0 to 15.
In the encoded content data, zoom region information, more specifically, information specifying the position of each zoom region, is stored in the zoom region information data section ZD for a number of zoom regions equal to the value of the zoom region number information numZoomAreas plus 1.
Accordingly, for example, in the case where the value of the zoom region number information numZoomAreas is "0", information specifying the position of one zoom region is stored in the zoom region information data section ZD.
In addition, in the case where the value of the zoom area information presence flag hasZoomAreaInfo is "1", the zoom area information is stored in the zoom area information data section ZD. The zoom region information is described in the zoom region information data section ZD in the syntax shown in fig. 15, for example.
In the example shown in fig. 15, the zoom area information corresponding to the number indicated by the zoom area number information numZoomAreas is stored.
In fig. 15, "mode [ idx ]" denotes coding mode information of a scaling region specified by an index idx, and the value of the coding mode information mode [ idx ] is set to one of values 0 to 3. Note that the index idx is each value of 0 to numzoomarreas.
For example, in the case where the value of the encoding mode information mode [ idx ] is "0", as shown in "case 0" and below in the drawing, "ZoomAreaXC [ idx ] that represents the coordinate XC," ZoomAreaYC [ idx ] that represents the coordinate YC, "ZoomAreaXW [ idx ] that represents the horizontal width XW," ZoomAreaXW [ idx ] that represents the vertical width YW, "ZoomAreaYW [ idx ]" that represents the vertical width YW are stored as the zoom region information of the zoom region specified by the index idx.
In addition, when the value of the encoding mode information mode [ idx ] is "1", as shown in "case 1" and below in the figure, "nbits" which is bit number information, "ZoomAreaXCshift [ idx ] which represents the difference of the coordinates XC, and" ZoomAreaYCshift [ idx ] which represents the difference of the coordinates YC "are stored as the zoom region information of the zoom region specified by the index idx. Here, the number-of-bits information nbits represents the number of bits of information of each of "ZoomAreaXCshift [ idx ]" and "ZoomAreaYCshift [ idx ]".
In the case where the value of the coding mode information mode [ idx ] is "2", as shown in "case 2" and below in the figure, "nbits" indicating bit number information, "zoomareatawshift [ idx ] indicating a variation of the horizontal width XW, and" zoomareatywshift [ idx ] indicating a variation of the vertical width YW are stored as the zoom area information of the zoom area specified by the index idx. Here, the bit number information nbits represents the number of bits of information of each of "ZoomAreaXWshift [ idx ]" and "ZoomAreaYWshift [ idx ]".
Further, in the case where the value of the encoding mode information mode [ idx ] is "3", as shown in "case 3" and below in the figure, "nbits" which is bit number information, "zoomarea xcshift [ idx ] which represents the difference of the coordinates XC," zoomarea ycshift [ idx ] which represents the difference of the coordinates YC, "zoomarea xshift [ idx ] which represents the variation of the horizontal width XW, and" zoomarea xshift [ idx ] which represents the variation of the vertical width YW are stored as the zoom region information of the zoom region specified by the index idx. Here, the digit information nbits represents the digit of information of each of "zoomarea xcshift [ idx ]", "zoomarea ycshift [ idx ]", "zoomarea xwshift [ idx ]" and "zoomarea ywshift [ idx ]".
In the example shown in fig. 15, coding mode information mode [ idx ] and scaling region information corresponding to the number of scaling regions are stored in the scaling region information data section ZD.
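As a rough illustration of the fig. 15 semantics described above, the following sketch reads the per-frame zoom region information for each index idx, switching on the coding mode information mode[idx]. This is a minimal sketch, not the actual syntax: the BitReader helper and all field widths (a 2-bit mode, a 4-bit nbits, 16-bit absolute values) are assumptions made for illustration, since fig. 15 itself is not reproduced here.

```python
class BitReader:
    """Tiny MSB-first bit reader over a byte string (illustrative helper)."""

    def __init__(self, data: bytes):
        self.data, self.pos = data, 0

    def read(self, nbits: int) -> int:
        value = 0
        for _ in range(nbits):
            byte = self.data[self.pos // 8]
            value = (value << 1) | ((byte >> (7 - self.pos % 8)) & 1)
            self.pos += 1
        return value


def read_zoom_area_data(reader: BitReader, num_zoom_areas: int) -> list:
    """Read the zoom region information of one frame for every index idx."""
    areas = []
    # Per the text above, the stored numZoomAreas value is the number of
    # zoom regions minus 1, so numZoomAreas = 0 still means one region.
    for idx in range(num_zoom_areas + 1):
        mode = reader.read(2)  # coding mode information mode[idx]: 0 to 3
        info = {"mode": mode}
        if mode == 0:
            # case 0: absolute centre coordinates and widths.
            for field in ("ZoomAreaXC", "ZoomAreaYC", "ZoomAreaXW", "ZoomAreaYW"):
                info[field] = reader.read(16)
        else:
            nbits = reader.read(4)  # bit width of each differential field
            if mode in (1, 3):  # cases 1 and 3 carry coordinate differences
                info["ZoomAreaXCshift"] = reader.read(nbits)
                info["ZoomAreaYCshift"] = reader.read(nbits)
            if mode in (2, 3):  # cases 2 and 3 carry width/height changes
                info["ZoomAreaXWshift"] = reader.read(nbits)
                info["ZoomAreaYWshift"] = reader.read(nbits)
        areas.append(info)
    return areas
```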
Note that, alternatively, the zoom region information may be composed of only the coordinate XC and the coordinate YC, only the horizontal angle φC and the pitch angle θC, only the difference of the coordinate XC and the difference of the coordinate YC, or only the difference of the horizontal angle φC and the difference of the pitch angle θC.
In this case, the horizontal width XW and the vertical width YW, or the horizontal viewing angle φW and the vertical viewing angle θW, can be set on the reproduction side. At this time, the horizontal width XW and the vertical width YW, or the horizontal viewing angle φW and the vertical viewing angle θW, may be set automatically in the reproduction-side apparatus or may be specified by the user.
In such an example, for example, in the case where the content is video and audio of a ball game, coordinates XC and coordinates YC representing the position of the ball are set as zoom region information, and a horizontal width XW and a vertical width YW that are fixed or specified by the user are used on the reproduction-side apparatus.
< zoom region assistance information >
In addition, in the zoom region header section ZHD, supplementary information, such as an ID indicating the reproduction target apparatus or the zoom purpose and other text information, may be included as the zoom region auxiliary information.
In this case, in the zoom area header section ZHD, the zoom area information presence flag hasZoomAreaInfo and zoom area auxiliary information are stored, for example, in the syntax shown in fig. 16.
In the example shown in fig. 16, the zoom region information presence flag hasZoomAreaInfo is arranged at the beginning, and in the case where the value of the zoom region information presence flag hasZoomAreaInfo is "1", each piece of information such as the zoom region auxiliary information is stored thereafter.
In other words, in the present example, after the zoom region information presence flag hasZoomAreaInfo, the zoom region number information "numZoomAreas" representing the number of pieces of zoom region information described in the zoom region information data section ZD is stored. Here, the value of the zoom region number information numZoomAreas is set to one of the values 0 to 15.
In addition, after the zoom region number information numZoomAreas, information on each zoom region specified by the index idx is arranged, corresponding to the number represented by the zoom region number information numZoomAreas. Here, the index idx is set to each value from 0 to numZoomAreas.
In other words, "hasext zoomareainfo [ idx ]" after the zoom area number information numZoomAreas represents an auxiliary information flag indicating whether or not the zoom area auxiliary information of the zoom area specified by the index idx is stored. Here, the value of the side information flag hasExtZoomAreaInfo [ idx ] is set to one of "0" and "1".
In the case where the value of the side information flag hasExtZoomAreaInfo [ idx ] is "0", the zoom area side information indicating the zoom area specified by the index idx is not stored in the zoom area header section ZHD. In contrast, in the case where the value of the side information flag hasExtZoomAreaInfo [ idx ] is "1", the zoom area side information indicating the zoom area specified by the index idx is stored in the zoom area header section ZHD.
In the case where the value of the auxiliary information flag hasExtZoomAreaInfo [ idx ] is "1", after the auxiliary information flag hasExtZoomAreaInfo [ idx ], a specification ID representing the specification of the zoom area specified by the index idx is arranged.
In addition, "haszoomareacommensurability" indicates a supplemental information flag indicating whether new supplemental information other than the specification ID exists for the zoom area specified by the index idx, for example, text information including a description of the zoom area, and the like.
For example, if the value of the supplemental information flag haszoomareacommensurability is "0", it indicates that there is no supplemental information. In contrast, in the case where the value of the supplementary information flag haszoomage commensuration is "1", it indicates that the supplementary information exists, and after the supplementary information flag haszoomage commensuration, "nbytes" as byte number information and "zoomage commensuration [ idx ] as supplementary information are arranged.
Here, the byte count information nbytes indicates the byte count of the information of the supplemental information zoomage public [ idx ]. In addition, the supplemental information zoomareacommensurability [ idx ] is set as text information describing the zoom area specified by the index idx.
More specifically, for example, it is assumed that the content is composed of live video and audio thereof, and the zoom area specified by the index idx is a zoom area for continuously zooming a singer as a video object. In this case, for example, text information such as "singer zoom" is set as the supplementary information zoomar reacommandary [ idx ].
In the zoom area header section ZHD, settings of the following items corresponding to the number indicated using the zoom area number information numZoomAreas are stored as necessary: an auxiliary information flag hasExtZoomareaInfo [ idx ], a ZoomareaSpecificationdID [ idx ] as a specification ID, a supplementary information flag hasZoomareaComplementary, byte number information bytes, and supplementary information ZoomareaComplementary [ idx ]. However, for a zoom area whose side information flag hasExtZoomAreaInfo [ idx ] has a value of "0", the zoomagereaspossindid [ idx ], the supplementary information flag haszoomage commensurability, the byte count information nbytes, and the supplementary information zoomage commensurability [ idx ] are not stored. Similarly, for a zoom area whose supplementary information flag haszoomage commensurability has a value of "0", byte number information nbytes and supplementary information zoomage commensurability [ idx ] are not stored.
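The header layout just described can be summarized with a similar sketch, reusing the BitReader helper from the previous example and the identifier spellings used above. The field widths here (1-bit flags, a 4-bit numZoomAreas, 8-bit specification IDs and byte counts) are again illustrative assumptions rather than the widths of fig. 16.

```python
def read_zoom_area_header(reader: "BitReader") -> dict:
    """Parse a zoom region header section ZHD shaped like fig. 16."""
    header = {"hasZoomAreaInfo": reader.read(1)}
    if not header["hasZoomAreaInfo"]:
        return header  # no zoom region information in this stream
    num = reader.read(4)  # zoom region number information: 0 to 15
    header["numZoomAreas"] = num
    header["areas"] = []
    for idx in range(num + 1):  # idx takes each value from 0 to numZoomAreas
        area = {"hasExtZoomAreaInfo": reader.read(1)}
        if area["hasExtZoomAreaInfo"]:
            area["ZoomAreaSpecifiedID"] = reader.read(8)
            area["hasZoomAreaCommentary"] = reader.read(1)
            if area["hasZoomAreaCommentary"]:
                nbytes = reader.read(8)  # byte count of the commentary text
                area["ZoomAreaCommentary"] = bytes(
                    reader.read(8) for _ in range(nbytes)
                ).decode("utf-8", errors="replace")
        header["areas"].append(area)
    return header
```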
In addition, ZoomAreaSpecifiedID[idx] as the specification ID is information representing a zoom specification, such as the reproduction target apparatus assumed for the zoom region and the zoom purpose, and, for example, as shown in fig. 17, a zoom specification is set for each value of ZoomAreaSpecifiedID[idx].
In the present example, for example, in the case where the value of ZoomAreaSpecifiedID[idx] is "1", the zoom region of the zoom specification indicated by the specification ID is a zoom region assuming that the reproduction target apparatus is a projector.
In addition, in the case where the value of ZoomAreaSpecifiedID[idx] is 2 to 4, the zoom regions of the zoom specifications indicated by these values are zoom regions assuming that the reproduction target apparatus is a television receiver having a screen of more than 50 inches, of 30 to 50 inches, or of less than 30 inches, respectively.
In this way, in the example shown in fig. 17, zoom region information whose ZoomAreaSpecifiedID[idx] value is one of "1" to "4" is information indicating a zoom region set for each type of reproduction target apparatus.
In addition, for example, in the case where the value of ZoomAreaSpecifiedID[idx] is "7", the zoom region of the zoom specification indicated by the specification ID is a zoom region assuming that the reproduction target apparatus is a smartphone and the rotation direction of the smartphone is the vertical direction.
Here, the rotation direction of the smartphone being the vertical direction means that the orientation of the smartphone is vertical when the user views the content, that is, the longitudinal direction of the display screen of the smartphone is the vertical (upward/downward) direction from the perspective of the user. Therefore, for example, in the case where the value of ZoomAreaSpecifiedID[idx] is "7", the zoom region is considered to be a region that is long in the vertical direction.
In addition, for example, in the case where the value of ZoomAreaSpecifiedID[idx] is "8", the zoom region of the zoom specification indicated by the specification ID is a zoom region assuming that the reproduction target apparatus is a smartphone and the rotation direction of the smartphone is the horizontal direction. In this case, for example, the zoom region is considered to be a region that is long in the horizontal direction.
In this way, in the example shown in fig. 17, zoom region information whose ZoomAreaSpecifiedID[idx] value is one of "5" to "8" is information indicating a zoom region set for the type of reproduction target apparatus and the rotation direction of the reproduction target apparatus.
For example, in the case where the value of ZoomAreaSpecifiedID[idx] is "9", the zoom region of the zoom specification indicated by the specification ID is a zoom region having a predetermined zoom purpose set by the content producer. Here, the predetermined zoom purpose is, for example, displaying a specific zoom view, such as an enlarged display of a predetermined video object.
Thus, for example, in the case where the value "9" of ZoomAreaSpecifiedID[idx] indicates a zoom specification for continuously zooming in on a singer, the supplementary information ZoomAreaCommentary[idx] for that index idx is set to text information such as "singer zoom". The user can learn the content of the zoom specification represented by each specification ID based on the specification ID, information associated with the specification ID, the supplementary information of the specification ID, and the like.
In this way, in the example shown in fig. 17, zoom region information whose ZoomAreaSpecifiedID[idx] value is one of "9" to "15" is information indicating an arbitrary zoom region freely set on the content producer side, for example, a zoom region set for a specific video object.
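A reproduction-side implementation might summarize fig. 17 in a table such as the following. Values 5 and 6 are not spelled out in the text above and are marked as assumptions; values 9 to 15 are freely assigned by the content producer.

```python
# Illustrative mapping of ZoomAreaSpecifiedID values to zoom specifications,
# following the description of fig. 17 above.
ZOOM_SPECIFICATIONS = {
    1: "projector",
    2: "television receiver, screen over 50 inches",
    3: "television receiver, screen 30 to 50 inches",
    4: "television receiver, screen under 30 inches",
    # 5 and 6: assumed to be further device/rotation entries (not described above)
    7: "smartphone, rotation direction vertical (portrait)",
    8: "smartphone, rotation direction horizontal (landscape)",
    # 9 to 15: arbitrary zoom purposes set by the content producer, described
    # by the supplementary information ZoomAreaCommentary[idx], e.g.:
    9: "producer-defined zoom view (e.g. singer zoom)",
}
```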
As described above, by setting one or more zoom regions for one content, for example, as shown in fig. 18, content matching the preference of the user or content suitable for each reproduction device can be provided in a simplified manner.
In fig. 18, an image Q11 shows the video (image) of predetermined content. The content is that of a live concert, and the image Q11 is a wide-angle image in which the live performers, namely the singer M11, the guitarist M12, and the bassist M13, are shown together with the entire stage, the audience, and the like.
With respect to the image Q11 constituting such content, the content producer sets one or more zoom regions in accordance with the zoom specification or zoom purpose of the reproduction target apparatus.
For example, in order to display a zoom view in which the singer M11 as a video object is enlarged, in the case where an area centered on the singer M11 in the image Q11 is set as a zoom region, the image Q12 can be reproduced as content on the reproduction side.
Similarly, for example, in order to display a zoom view in which the guitarist M12 as a video object is enlarged, in the case where an area centered on the guitarist M12 in the image Q11 is set as a zoom region, the image Q13 can be reproduced as content on the reproduction side.
In addition, for example, by selecting a plurality of zoom regions on the reproduction side and configuring one screen in which those zoom regions are aligned, the image Q14 can be reproduced as content on the reproduction side.
In the present example, the image Q14 is composed of an image Q21 of a zoom region slightly smaller than the angle of view of the image Q11, an image Q22 of a zoom region in which the singer M11 is enlarged, an image Q23 of a zoom region in which the guitarist M12 is enlarged, and an image Q24 of a zoom region in which the bassist M13 is enlarged. That is, the image Q14 has a multi-screen configuration. In the case where a plurality of zoom regions are set in advance on the content provider side, the content can be reproduced on the content reproduction side in a multi-screen configuration such as that of the image Q14 by selecting a plurality of zoom regions.
In addition, for example, in a case where a viewing angle half that of the image Q11, that is, a region including the center of the image Q11 and having approximately half the area of the entire image Q11, is set as a zoom region in consideration of a reproducing apparatus, such as a tablet PC, whose display screen is not very large, the image Q15 can be reproduced as content on the reproduction side. In the present example, each performer can still be displayed in a sufficiently large size even on a reproducing apparatus whose display screen is not very large.
In addition, for example, in the case where a relatively narrow horizontally long region including the center of the image Q11 within the image Q11 is set as a zoom region in consideration of a smartphone whose rotational direction is the horizontal direction, that is, the display screen is in a horizontally long state, the image Q16 can be reproduced as content on the reproduction side.
For example, in the case where an area long in the vertical direction near the center of the image Q11 is set as a zoom area in consideration of a smartphone whose rotation direction is the vertical direction, that is, whose display screen is in a state long in the vertical direction, the image Q17 may be reproduced as content on the reproduction side.
In the image Q17, the singer M11, one of the performers, is displayed in an enlarged manner. In the present example, since a small, vertically long display screen is assumed, displaying one performer in an enlarged manner is more appropriate for the reproduction target apparatus than displaying all the performers arranged in the horizontal direction, and therefore such a zoom region is set.
In addition, for example, in a case where the angle of view is set to be slightly smaller than that of the image Q11, that is, in a case where a relatively large area including the center of the image Q11 within the image Q11 is set as a zoom area, in consideration of a case where the reproducing apparatus has a relatively large display screen such as a large-sized television receiver, the image Q18 can be reproduced as content on the reproducing side.
As described above, by setting the zoom region on the content provider side and generating the encoded content data including the zoom region information indicating the zoom region on the reproduction side, the user who is a person viewing the content can select to directly reproduce the content or to perform zoom reproduction, that is, clip reproduction, based on the zoom region information.
Specifically, in the case where there are a plurality of pieces of zoom region information, the user can select zoom reproduction according to specific zoom region information among the plurality of pieces of zoom region information.
In addition, in the case where the zoom region auxiliary information is stored in the encoded content data, on the reproduction side, a zoom region suitable for the reproduction apparatus or the preference of the user can be selected by referring to the auxiliary information on the zoom specification, such as the reproduction target apparatus, the zoom purpose, and the zoom content. The selection of the zoom region may be specified by the user or may be performed automatically by the reproducing apparatus.
< description of encoding process >
Next, a specific operation of the encoding device 11 will be described.
When video data and audio data constituting a content and metadata of the content are supplied from the outside, the encoding apparatus 11 performs an encoding process and outputs encoded content data. Hereinafter, the encoding process performed by the encoding apparatus 11 will be described with reference to a flowchart shown in fig. 19.
In step S11, the video data encoding unit 21 encodes the video data of the supplied content, and supplies the encoded video data obtained as a result thereof to the multiplexer 24.
In step S12, the audio data encoding unit 22 encodes the audio data of the supplied content, and supplies the encoded audio data obtained as a result thereof to the multiplexer 24.
In step S13, the metadata encoding unit 23 encodes the metadata of the supplied content, and supplies the encoded metadata obtained as a result thereof to the multiplexer 24.
Here, the above-described zoom region information is included in the metadata to be encoded, for example. The zoom region information may be, for example, any of the pieces of information described with reference to figs. 5 to 10, 13, 15, and the like.
In addition, the metadata encoding unit 23 also encodes header information of the zoom area information, such as a zoom area information presence flag hasZoomAreaInfo, zoom area number information numZoomAreas, and zoom area auxiliary information, as needed, and provides the encoded header information to the multiplexer 24.
In step S14, the multiplexer 24 generates a bitstream by multiplexing the encoded video data supplied from the video data encoding unit 21, the encoded audio data supplied from the audio data encoding unit 22, and the encoded metadata supplied from the metadata encoding unit 23, and supplies the generated bitstream to the output unit 25. At this time, the multiplexer 24 also stores the encoded header information of the zoom area information supplied from the metadata encoding unit 23 in the bitstream.
Thus, for example, the encoded content data shown in fig. 2 can be obtained as a bit stream. Note that the configuration of the zoom region header section ZHD of the encoded content data may be, for example, any configuration, such as the configurations shown in fig. 4, fig. 14, or fig. 16.
In step S15, the output unit 25 outputs the bit stream supplied from the multiplexer 24, and the encoding process ends.
As described above, the encoding apparatus 11 encodes metadata including zoom region information along with content, thereby generating a bitstream.
In this way, by generating a bitstream including zoom area information for specifying a zoom area without preparing content for each reproduction device, content matching the user's taste or content suitable for each reproduction device can be provided in a simplified manner.
In other words, the content producer can provide the content, which is considered to be optimal for the user's taste, the screen size of the reproducing apparatus, the rotational direction of the reproducing apparatus, and the like, in a simplified manner, only by specifying the zoom area without preparing the content for each taste or each reproducing apparatus.
In addition, on the reproduction side, by selecting a zoom area and cutting the content as needed, it is possible to view the content optimal for the preference of the user, the screen size of the reproduction apparatus, the rotation direction of the reproduction apparatus, and the like.
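The flow of steps S11 to S15 can be sketched as follows. The encode_* and multiplex helpers are placeholders standing in for real video/audio/metadata codecs and a real container format, not the actual processing of the encoding apparatus 11.

```python
def encode_video(video_data) -> bytes:        # step S11
    return b"video:" + repr(video_data).encode()

def encode_audio(audio_data) -> bytes:        # step S12
    return b"audio:" + repr(audio_data).encode()

def encode_metadata(metadata) -> bytes:       # step S13; zoom region info is carried here
    return b"meta:" + repr(metadata).encode()

def multiplex(*streams: bytes) -> bytes:      # step S14
    return b"|".join(streams)

def encoding_process(video_data, audio_data, metadata) -> bytes:
    """Sketch of the encoding process of fig. 19; the returned bytes play
    the role of the bitstream output in step S15."""
    return multiplex(encode_video(video_data),
                     encode_audio(audio_data),
                     encode_metadata(metadata))
```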
< example of configuration of reproduction apparatus >
Next, a reproducing apparatus that receives a bit stream, i.e., encoded content data, output from the encoding apparatus 11 and reproduces the content will be described.
Fig. 20 is a diagram showing an example of the configuration of a reproduction apparatus according to an embodiment of the present technology.
In the present example, a display device 52 that displays information when a zoom region is selected, a video output device 53 that outputs a video of a content, and an audio output device 54 that outputs an audio of the content are connected to the reproduction device 51 as necessary.
Note that the display device 52, the video output device 53, and the audio output device 54 may be provided in the reproduction device 51. In addition, the display device 52 and the video output device 53 may be the same device.
The reproduction apparatus 51 includes: a content data decoding unit 61; a zoom region selection unit 62; a video data decoding unit 63; a video segmentation unit 64; an audio data decoding unit 65; and an audio conversion unit 66.
The content data decoding unit 61 receives the bit stream, i.e., the encoded content data, transmitted from the encoding apparatus 11, and separates the encoded video data, the encoded audio data, and the encoded metadata from the encoded content data.
The content data decoding unit 61 supplies the encoded video data to the video data decoding unit 63, and supplies the encoded audio data to the audio data decoding unit 65.
The content data decoding unit 61 obtains metadata by decoding the encoded metadata, and supplies the obtained metadata to each unit of the reproduction apparatus 51 as necessary. In addition, in the case where the zoom region information is included in the metadata, the content data decoding unit 61 supplies the zoom region information to the zoom region selection unit 62. Further, in the case where the zoom region auxiliary information is stored in the bitstream, the content data decoding unit 61 reads the zoom region auxiliary information, decodes the zoom region auxiliary information as necessary, and supplies the resultant zoom region auxiliary information to the zoom region selecting unit 62.
The zoom region selection unit 62 selects one piece of zoom region information from the one or more pieces of zoom region information supplied from the content data decoding unit 61, and supplies the selected zoom region information as selected zoom region information to the video division unit 64 and the audio conversion unit 66. In other words, in the zoom region selection unit 62, the zoom region is selected based on the zoom region information supplied from the content data decoding unit 61.
For example, in the case where the zoom region auxiliary information is supplied from the content data decoding unit 61, the zoom region selection unit 62 supplies the zoom region auxiliary information to the display device 52 to be displayed on the display device 52. In this way, for example, supplementary information such as the purpose and content of the zoom area, a specification ID indicating the zoom specification such as a reproduction target apparatus or the like, information based on the specification ID, and text information is displayed on the display device 52 as zoom area auxiliary information.
Then, the user checks the zoom area assistance information displayed on the display device 52, and selects a desired zoom area by operating an input unit not shown in the drawing. The zoom region selection unit 62 selects a zoom region based on a signal according to an operation of the user supplied from the input unit, and outputs selection zoom region information representing the selected zoom region. In other words, zoom region information of a zoom region specified by a user is selected, and the selected zoom region information is output as selection zoom region information.
Note that the selection of the zoom regions may be performed using any method, such as generating information indicating the position and size of each zoom region from the zoom region information by the zoom region selection unit 62, and displaying the information on the display device 52, and the user selecting the zoom region based on the display.
Note that in the case where selection of a zoom region is not performed, that is, in the case where reproduction of the original content is selected, the selection zoom region information is set to information indicating that clipping or the like is not performed.
Further, for example, in a case where the reproduction apparatus 51 has recorded reproduction device information indicating the type of the own device such as a smart phone or a television receiver in advance, the zoom region information (zoom region) may be selected by using the reproduction device information.
In this case, for example, the zoom region selection unit 62 obtains the reproduction apparatus information and selects the zoom region information by using the obtained reproduction apparatus information and zoom region auxiliary information.
More specifically, the zoom region selection unit 62 selects, from among the specification IDs serving as the zoom region auxiliary information, the specification ID indicating that the reproduction target apparatus is an apparatus of the type indicated by the reproduction apparatus information. Then, the zoom region selection unit 62 sets the zoom region information corresponding to the selected specification ID, that is, the zoom region information having the same index idx as the selected specification ID, as the selected zoom region information.
In addition, for example, in the case where the reproduction apparatus 51 is a mobile apparatus such as a smartphone or a tablet PC, the zoom region selection unit 62 may obtain direction information or the like indicating the rotation direction of the reproduction apparatus 51 from a gyro sensor not shown in the figure, and select the zoom region information by using the direction information.
In this case, for example, the zoom region selection unit 62 selects, from among the specification IDs serving as the zoom region auxiliary information, the specification ID indicating that the reproduction target apparatus is an apparatus of the type indicated by the reproduction apparatus information and that the assumed rotation direction is the direction indicated by the obtained direction information. Then, the zoom region selection unit 62 sets the zoom region information corresponding to the selected specification ID as the selected zoom region information. In this way, zoom region information of the zoom region best suited to the current state is selected both in a state where the user uses the reproduction apparatus 51 vertically (a vertically long screen) and in a state where the user uses it horizontally (a horizontally long screen).
Note that, in addition to this, the zoom area information may be selected using only one of the reproduction apparatus information and the direction information, or the zoom area information may be selected using any other information related to the reproduction apparatus 51.
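A minimal sketch of such automatic selection follows: a specification ID is chosen from the zoom region auxiliary information using the reproduction apparatus information and, for mobile devices, the direction information. The device_type and rotation strings and the lookup table are hypothetical; the actual correspondence is the one defined by fig. 17.

```python
from typing import Optional

# Hypothetical table from (device type, rotation direction) to the
# ZoomAreaSpecifiedID expected for that combination (cf. fig. 17).
DEVICE_TO_SPEC_ID = {
    ("projector", None): 1,
    ("smartphone", "vertical"): 7,
    ("smartphone", "horizontal"): 8,
}

def select_zoom_area(areas: list, device_type: str,
                     rotation: Optional[str] = None) -> Optional[int]:
    """Return the index idx of the zoom region whose specification ID
    matches this reproduction apparatus, or None to reproduce the
    original content without clipping."""
    wanted = DEVICE_TO_SPEC_ID.get((device_type, rotation))
    if wanted is None:
        return None
    for idx, area in enumerate(areas):
        if area.get("ZoomAreaSpecifiedID") == wanted:
            return idx
    return None
```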
The video data decoding unit 63 decodes the encoded video data supplied from the content data decoding unit 61, and supplies the video data obtained as a result thereof to the video dividing unit 64.
The video dividing unit 64 clips (divides) the zoom region indicated by the selection zoom region information supplied from the zoom region selecting unit 62 from the video (image) based on the video data supplied from the video data decoding unit 63, and outputs the video data obtained as a result thereof to the video output device 53.
Note that in the case where the selection zoom area information is information indicating that clipping is not to be performed, the video division unit 64 does not perform clipping processing on the video data, and outputs the video data directly to the video output device 53 as zoomed video data.
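As a rough sketch of the division itself, and assuming, as in the examples above, that a zoom region is given by center coordinates XC, YC and widths XW, YW, the clipping amounts to cutting one rectangle out of each decoded frame. A real implementation would operate on frames in a video framework rather than on nested lists.

```python
def crop_zoom_area(frame: list, xc: int, yc: int, xw: int, yw: int) -> list:
    """Cut the zoom region centred on (XC, YC), of width XW and height YW,
    out of one decoded frame given as a list of pixel rows."""
    top, left = yc - yw // 2, xc - xw // 2
    return [row[left:left + xw] for row in frame[top:top + yw]]
```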
The audio data decoding unit 65 decodes the encoded audio data supplied from the content data decoding unit 61, and supplies audio data obtained as a result thereof to the audio conversion unit 66.
The audio conversion unit 66 performs audio conversion processing on the audio data supplied from the audio data decoding unit 65 based on the selection zoom region information supplied from the zoom region selection unit 62, and supplies the resulting scaled audio data to the audio output device 54.
Here, the audio conversion process is conversion for audio reproduction suitable for scaling video of the content.
For example, in accordance with the clipping processing for the zoom region, that is, the division processing for the zoom region, the distance from an object inside the video to the viewpoint serving as a reference changes. Therefore, for example, in the case where the audio data is object-based audio, the audio conversion unit 66 converts, based on the selection zoom region information, the position information of each object supplied as metadata from the content data decoding unit 61 through the audio data decoding unit 65. In other words, the audio conversion unit 66 moves the position of the object serving as a sound source, that is, changes the distance to the object, based on the selection zoom region information.
Then, the audio conversion unit 66 performs rendering processing based on the audio data in which the position of the object has been moved, and supplies the scaled audio data obtained as a result thereof to the audio output device 54, thereby reproducing audio.
Note that such audio conversion processing is described in detail in, for example, PCT/JP2014/067508 or the like.
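As a conceptual illustration only, remapping the horizontal angle of a sound-source object so that the zoom region spans the reproduction screen could look like the following; the linear mapping and the default screen width are assumptions, and the actual conversion is the one described in PCT/JP2014/067508.

```python
def remap_object_azimuth(object_az: float, zoom_center_az: float,
                         zoom_width: float, screen_width: float = 60.0) -> float:
    """Move an object's horizontal angle (in degrees) so that a zoom
    region of angular width zoom_width, centred on zoom_center_az,
    fills a reproduction screen of angular width screen_width."""
    return (object_az - zoom_center_az) * (screen_width / zoom_width)
```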
In addition, in the case where the selection zoom area information is information indicating that clipping is not to be performed, the audio conversion unit 66 does not perform audio conversion processing on the audio data, and directly outputs the audio data as the zoom audio data to the audio output device 54.
< description of reproduction processing >
Subsequently, the operation of the reproducing apparatus 51 will be described.
When receiving the encoded content data output from the encoding apparatus 11, the reproducing apparatus 51 performs reproduction processing in which the received encoded content data is decoded, and reproduces the content. Hereinafter, the reproduction processing performed by the reproduction apparatus 51 will be described with reference to a flowchart shown in fig. 21.
In step S41, the content data decoding unit 61 separates the encoded video data, the encoded audio data, and the encoded metadata from the received encoded content data, and decodes the encoded metadata.
Then, the content data decoding unit 61 supplies the encoded video data to the video data decoding unit 63, and supplies the encoded audio data to the audio data decoding unit 65. In addition, the content data decoding unit 61 supplies metadata obtained by decoding to each unit of the reproduction apparatus 51 as necessary.
At this time, the content data decoding unit 61 supplies the zoom region information obtained as metadata to the zoom region selection unit 62. In addition, in the case where zoom region auxiliary information serving as header information of the metadata is stored in the encoded content data, the content data decoding unit 61 reads the zoom region auxiliary information and supplies it to the zoom region selection unit 62. For example, the supplementary information ZoomAreaCommentary[idx], ZoomAreaSpecifiedID[idx] as the specification ID, and the like are read as the zoom region auxiliary information.
In step S42, the zoom region selection unit 62 selects a piece of zoom region information from the zoom region information supplied from the content data decoding unit 61, and supplies the selected zoom region information to the video division unit 64 and the audio conversion unit 66 according to the selection result.
For example, when the zoom region information is selected, the zoom region selection unit 62 supplies the zoom region auxiliary information to the display device 52 to be displayed on the display device 52, and selects the zoom region information based on a signal supplied by an operation input of a user who sees the display.
In addition, as described above, the zoom region information may be selected by using not only the zoom region auxiliary information and the operation input from the user, but also the reproducing apparatus information or the direction information.
In step S43, the video data decoding unit 63 decodes the encoded video data supplied from the content data decoding unit 61, and supplies the video data obtained as a result thereof to the video dividing unit 64.
In step S44, the video dividing unit 64 divides (crops) the zoom region indicated by the selected zoom region information supplied from the zoom region selection unit 62 for video based on the video data supplied from the video data decoding unit 63. In this way, scaled video data for reproducing the video of the scaling region indicated by the selection scaling region information is obtained.
The video dividing unit 64 supplies the scaled video data obtained by the division to the video output device 53, thereby reproducing the video of the cropped content. The video output device 53 reproduces (displays) a video based on the scaled video data supplied from the video dividing unit 64.
In step S45, the audio data decoding unit 65 decodes the encoded audio data supplied from the content data decoding unit 61, and supplies the audio data obtained as a result thereof to the audio conversion unit 66.
In step S46, the audio conversion unit 66 performs audio conversion processing on the audio data supplied from the audio data decoding unit 65 based on the selection zoom region information supplied from the zoom region selection unit 62. In addition, the audio conversion unit 66 supplies the scaled audio data obtained through the audio conversion process to the audio output device 54, thereby outputting audio. The audio output device 54 reproduces the audio of the content on which the audio conversion process is performed based on the scaled audio data supplied from the audio conversion unit 66, and the reproduction process ends.
Note that, more specifically, the processing of steps S43 and S44 and the processing of steps S45 and S46 are executed in parallel with each other.
As described above, the reproducing apparatus 51 selects appropriate zoom region information, performs clipping of the video data and audio conversion processing on the audio data based on the selected zoom region information according to the result of the selection, and reproduces the content.
In this way, by selecting the zoom region information, it is possible to reproduce content which is appropriately cut and has converted audio, such as content matching the preference of the user or content suitable for the size of the display screen of the reproduction apparatus 51, the rotation direction of the reproduction apparatus 51, and the like, in a simplified manner. In addition, in the case where the user selects a zoom region based on the zoom region assistance information presented by the display device 52, the user can select a desired zoom region in a simplified manner.
Note that in the reproduction processing described with reference to fig. 21, although the case where both the cropping of the video constituting the content and the audio conversion processing of the audio constituting the content are performed based on the selection zoom area information is described, only one of them may be performed.
In addition, also in the case where the content is composed of only video or audio, clipping or audio conversion processing is still performed on such video or audio, and the video or audio can be reproduced.
For example, also in the case where the content is composed of only audio, by selecting zoom region information representing a region to be zoomed and changing the distance or the like from the sound source object by audio conversion processing according to the selected zoom region information, reproduction of the content suitable for the preference of the user, the reproduction apparatus, or the like can be realized.
< second embodiment >
< example of configuration of reproduction apparatus >
Note that, although the example in which the zoom region is clipped from the video of the content according to one piece of selection zoom region information by the video division unit 64 is described above, it may be configured to select a plurality of zoom regions and output such a plurality of zoom regions in a multi-screen arrangement.
In this case, for example, the reproduction apparatus 51 is configured as shown in fig. 22. Note that in fig. 22, the same reference numerals are assigned to portions corresponding to the case shown in fig. 20, and a description thereof will not be presented as appropriate.
The reproduction apparatus 51 shown in fig. 22 includes: a content data decoding unit 61; a zoom region selection unit 62; a video data decoding unit 63; a video segmentation unit 64; a video arrangement unit 91; an audio data decoding unit 65; and an audio conversion unit 66.
The configuration of the reproduction apparatus 51 shown in fig. 22 differs from that of the reproduction apparatus 51 shown in fig. 20 in that the video arrangement unit 91 is newly provided at the stage subsequent to the video division unit 64; otherwise, the configuration is the same as that shown in fig. 20.
In the present example, the zoom region selection unit 62 selects one or more pieces of zoom region information, and supplies such zoom region information to the video segmentation unit 64 as selection zoom region information. In addition, the zoom region selection unit 62 selects a piece of zoom region information, and supplies the zoom region information to the audio conversion unit 66 as the selected zoom region information.
Note that, as in the case of the reproducing apparatus 51 shown in fig. 20, the selection of the zoom region information by the zoom region selection unit 62 may be performed in accordance with an input operation by the user, or may be performed based on zoom region auxiliary information, reproducing device information, direction information, or the like.
Further, the zoom region information as the selection zoom region information provided to the audio conversion unit 66 may be selected in accordance with an input operation by the user, or may be zoom region information arranged at a predetermined position, such as a start position of the encoded content data. In addition to this, the zoom region information may be zoom region information having a representative zoom region such as a zoom region having a maximum size.
The video division unit 64 clips the zoom region represented by each of the one or more pieces of selection zoom region information supplied from the zoom region selection unit 62 from the video (image) based on the video data supplied from the video data decoding unit 63, thereby generating the zoom video data of each zoom region. In addition, the video dividing unit 64 supplies the scaled video data of each scaling region obtained by the clipping to the video arranging unit 91.
Note that the video division unit 64 may directly supply the video data that is not clipped to the video arrangement unit 91 as one piece of scaled video data.
The video arrangement unit 91 generates, based on the one or more pieces of zoomed video data supplied from the video division unit 64, multi-screen video data in which the videos are arranged on a plurality of screens, and supplies the generated multi-screen video data to the video output device 53. Here, the video reproduced based on the multi-screen video data is, for example, a video similar to the image Q14 shown in fig. 18, in which the videos (images) of the selected zoom regions are arranged in alignment.
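A minimal sketch of such an arrangement, assuming every cropped zoom video frame is a list of equally sized pixel rows, tiles the selected regions into one multi-screen frame as in the image Q14:

```python
def arrange_multi_screen(zoom_frames: list, columns: int = 2) -> list:
    """Tile cropped zoom region frames (all of equal size, each a list of
    pixel rows) into one multi-screen frame, row of tiles by row of tiles."""
    tile_rows = [zoom_frames[i:i + columns]
                 for i in range(0, len(zoom_frames), columns)]
    multi_screen = []
    for tile_row in tile_rows:
        for y in range(len(tile_row[0])):
            # Concatenate the y-th pixel row of every tile in this tile row.
            multi_screen.append(sum((tile[y] for tile in tile_row), []))
    return multi_screen
```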
In addition, the audio conversion unit 66 performs audio conversion processing on the audio data supplied from the audio data decoding unit 65 based on the selection zoom region information supplied from the zoom region selection unit 62, and supplies the zoom audio data obtained as a result thereof to the audio output device 54 as audio data of representative audio of the multi-screen arrangement. In addition, the audio conversion unit 66 may directly supply the audio data supplied from the audio data decoding unit 65 to the audio output device 54 as audio data (zoom audio data) of representative audio.
< description of reproduction processing >
Next, the reproduction processing performed by the reproduction apparatus 51 shown in fig. 22 will be described with reference to the flowchart shown in fig. 23. Note that the processing of step S71 is similar to that of step S41 shown in fig. 21, and therefore the description thereof is omitted.
In step S72, the zoom region selection unit 62 selects one or more pieces of zoom region information from the zoom region information supplied from the content data decoding unit 61, and supplies the selected zoom region information to the video segmentation unit 64 according to the selection result.
Note that the processing of selecting zoom region information described herein is basically similar to the processing of step S42 shown in fig. 21, except that the number of selected zoom region information is different.
In addition, the zoom region selection unit 62 selects zoom region information of one representative zoom region from the zoom region information supplied from the content data decoding unit 61, and supplies the selected zoom region information to the audio conversion unit 66 according to the selection result. Here, the selection scaling region information provided to the audio conversion unit 66 is the same as one of the one or more pieces of selection scaling region information provided to the video division unit 64.
When the zoom region information is selected, thereafter, the processes of steps S73 and S74 are performed, and decoding of the encoded video data and cropping of the zoom region from the video are performed. However, such processing is similar to that of steps S43 and S44 shown in fig. 21, and thus the description thereof is omitted. However, in step S74, for each of the one or more pieces of selection zoom region information, clipping (division) of a zoom region indicated by the selection zoom region information from the video based on video data is performed, and the zoom video data of each zoom region is supplied to the video arranging unit 91.
In step S75, the video arrangement unit 91 performs video arrangement processing based on the one or more pieces of scaled video data supplied from the video division unit 64. In other words, the video arrangement unit 91 generates multi-screen video data based on one or more pieces of zoom video data, and supplies the generated multi-screen video data to the video output device 53, thereby reproducing the video of each zoom region of the content. The video output device 53 reproduces (displays) videos arranged in a plurality of screens based on the multi-screen video data supplied from the video arrangement unit 91. For example, in a case where a plurality of zoom regions are selected, the content is reproduced in a multi-screen configuration similar to the image Q14 shown in fig. 18.
When the video arrangement processing is performed, thereafter, the processing of steps S76 and S77 is performed, and the reproduction processing ends. However, this processing is similar to the processing of steps S45 and S46 shown in fig. 21, and thus the description thereof is omitted.
As described above, the reproducing apparatus 51 selects one or more pieces of zoom region information, performs clipping of the video data and audio conversion processing of the audio data based on the selected zoom region information according to the result of the selection, and reproduces the content.
In this way, by selecting one or more pieces of zoom region information, appropriate content, such as content matching the preference of the user or content suitable for the size of the display screen of the reproduction apparatus 51, or the like, can be reproduced in a simplified manner. In particular, in the case where a plurality of pieces of zoom region information are selected, a content video can be reproduced in a multi-screen display that matches the user's taste or the like.
In addition, in the case where the user selects a zoom region based on the zoom region assistance information presented by the display device 52, the user can select a desired zoom region in a simplified manner.
< third embodiment >
< example of configuration of reproduction apparatus >
In addition, in the case where the above-described content is transmitted through a network, the reproduction-side device may be configured to efficiently receive data necessary only for reproduction of the selected zoom area. In this case, for example, the reproduction apparatus is configured as shown in fig. 24. Note that in fig. 24, the same reference numerals are assigned to portions corresponding to the case shown in fig. 20, and a description thereof will not be presented as appropriate.
In the case shown in fig. 24, the reproduction apparatus 121 that reproduces content receives the provision of desired encoded video data and encoded audio data from the content data distribution server 122 in which content and metadata are recorded. In other words, the content data distribution server 122 records the content and the metadata of the content in an encoded state or an unencoded state, and distributes the content in response to a request from the reproduction apparatus 121.
In the present example, the reproduction apparatus 121 includes: a communication unit 131; a metadata decoding unit 132; a video/audio data decoding unit 133; a zoom region selection unit 62; a video data decoding unit 63; a video segmentation unit 64; an audio data decoding unit 65; and an audio conversion unit 66.
The communication unit 131 transmits and receives various types of data to and from the content data distribution server 122 via the network.
For example, the communication unit 131 receives encoded metadata from the content data distribution server 122 and supplies the received encoded metadata to the metadata decoding unit 132, or receives encoded video data and encoded audio data from the content data distribution server 122 and supplies the received data to the video/audio data decoding unit 133. Further, the communication unit 131 transmits the selection zoom area information supplied from the zoom area selection unit 62 to the content data distribution server 122.
The metadata decoding unit 132 obtains metadata by decoding the encoded metadata supplied from the communication unit 131, and supplies the obtained metadata to each unit of the reproduction apparatus 121 as necessary.
In addition, in the case where the zoom region information is included in the metadata, the metadata decoding unit 132 supplies the zoom region information to the zoom region selection unit 62. Further, in the case of receiving the zoom region assistance information from the content data distribution server 122, the metadata decoding unit 132 supplies the zoom region assistance information to the zoom region selection unit 62.
When the encoded video data and the encoded audio data are supplied from the communication unit 131, the video/audio data decoding unit 133 supplies the encoded video data to the video data decoding unit 63, and supplies the encoded audio data to the audio data decoding unit 65.
< description of reproduction processing >
Subsequently, the operation of the reproducing apparatus 121 will be described.
The reproduction apparatus 121 requests the content data distribution server 122 to transmit the encoded metadata. Then, when the encoded metadata is transmitted from the content data distribution server 122, the reproduction apparatus 121 reproduces the content by performing reproduction processing. Hereinafter, the reproduction processing performed by the reproduction apparatus 121 will be described with reference to a flowchart shown in fig. 25.
In step S101, the communication unit 131 receives the encoded metadata transmitted from the content data distribution server 122, and supplies the received metadata to the metadata decoding unit 132. Note that, more specifically, the communication unit 131 also receives header information of metadata such as zoom region number information and zoom region auxiliary information from the content data distribution server 122 as necessary, and supplies the received header information to the metadata decoding unit 132.
In step S102, the metadata decoding unit 132 decodes the encoded metadata supplied from the communication unit 131, and supplies the metadata obtained by the decoding to each unit of the reproduction apparatus 121 as necessary. In addition, the metadata decoding unit 132 supplies the zoom region information obtained as the metadata to the zoom region selection unit 62, and also supplies the zoom region auxiliary information to the zoom region selection unit 62 in the case where the zoom region auxiliary information exists as the header information of the metadata.
In this way, in the case where metadata is obtained, subsequently, zoom area information is selected by performing the process of step S103. However, the processing of step S103 is similar to the processing of step S42 shown in fig. 21, and therefore, the description thereof is omitted. However, in step S103, the selection zoom region information obtained by selecting the zoom region information is supplied to the video dividing unit 64, the audio converting unit 66, and the communication unit 131.
In step S104, the communication unit 131 transmits the selected zoom area information supplied from the zoom area selection unit 62 to the content data distribution server 122 via the network.
The content data distribution server 122 having received the selection zoom area information performs cropping (division) of the zoom area indicated by the selection zoom area information with respect to the video data of the recorded content, thereby generating the zoomed video data. The zoom video data obtained in this way is video data that reproduces only the zoom area indicated by the selection zoom area information in the entire video of the original content.
The content data distribution server 122 transmits encoded video data obtained by encoding the scaled video data and encoded audio data obtained by encoding the audio data, which constitute the content, to the reproduction apparatus 121.
Note that, in the content data distribution server 122, the zoom video data of each zoom area may be prepared in advance. In addition, in the content data distribution server 122, regarding the audio data constituting the content, although all the audio data is generally encoded and the encoded audio data is output regardless of the selected zoom region, it may be configured to output only a part of the encoded audio data in the audio data. For example, in the case where the audio data constituting the content is audio data of each object, only the audio data of the object within the zoom region indicated by the selection zoom region information may be encoded and transmitted to the reproducing apparatus 121.
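On the reproduction side, the exchange of steps S104 and S105 might look like the following sketch. The server URL, endpoint, and JSON payload are invented for illustration; the text does not specify a transport protocol.

```python
import json
import urllib.request

SERVER = "http://content-server.example/content/42"  # hypothetical URL

def fetch_zoomed_content(selected_idx: int) -> bytes:
    """Send the selection zoom region information to the server (step S104)
    and receive the encoded video data cropped to that zoom region together
    with the encoded audio data (step S105)."""
    request = urllib.request.Request(
        SERVER + "/stream",
        data=json.dumps({"zoom_area_index": selected_idx}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return response.read()
```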
In step S105, the communication unit 131 receives the encoded video data and the encoded audio data transmitted from the content data distribution server 122, and supplies the encoded video data and the encoded audio data to the video/audio data decoding unit 133. In addition, the video/audio data decoding unit 133 supplies the encoded video data supplied from the communication unit 131 to the video data decoding unit 63, and supplies the encoded audio data supplied from the communication unit 131 to the audio data decoding unit 65.
When the encoded video data and the encoded audio data are obtained, thereafter, the processes of steps S106 to S109 are performed, and the reproduction process ends. However, such processing is similar to that of steps S43 to S46 shown in fig. 21, and thus the description thereof is omitted.
However, since the video data obtained by the video data decoding unit 63 decoding the encoded video data is zoomed video data that has already been cropped, the video division unit 64 basically does not perform the clipping processing. Only in the case where additional cropping is required does the video division unit 64 crop the zoomed video data supplied from the video data decoding unit 63 based on the selection zoom region information supplied from the zoom region selection unit 62.
In this way, when the content is reproduced by the video output device 53 and the audio output device 54 based on the scaled video data and the scaled audio data, the content according to the selected scaling region, for example, the content shown in fig. 18, is reproduced.
As described above, the reproducing apparatus 121 selects appropriate zoom region information, transmits the selected zoom region information to the content data distribution server 122 according to the selection result, and receives encoded video data and encoded audio data.
In this way, by receiving encoded video data and encoded audio data according to the selection zoom region information, it is possible to reproduce appropriate content, such as content matching the preference of the user or content suitable for the size of the display screen of the reproducing apparatus 121, the rotational direction of the reproducing apparatus 121, and the like, in a simplified manner. Further, only data in the content that needs to be reproduced can be efficiently obtained.
< fourth embodiment >
< example of configuration of reproduction apparatus >
In addition, the above describes an example in which the zoom region information is included in the encoded content data. However, the content may also be cropped and reproduced according to zoom region information disclosed on a network such as the internet or zoom region information recorded on a predetermined recording medium, that is, zoom region information separated from the content. In this case, for example, cropped reproduction can be performed by obtaining zoom region information generated not only by the content producer but also by a third party other than the content producer, that is, another user.
In a case where the content and the metadata including the zoom region information are obtained separately in this way, the reproducing apparatus is configured, for example, as shown in fig. 26. Note that, in fig. 26, the same reference numerals are assigned to portions corresponding to those shown in fig. 20, and description thereof will be omitted as appropriate.
The reproduction apparatus 161 shown in fig. 26 includes: a metadata decoding unit 171; a content data decoding unit 172; a zoom region selection unit 62; a video data decoding unit 63; a video segmentation unit 64; an audio data decoding unit 65; and an audio conversion unit 66.
The metadata decoding unit 171 obtains encoded metadata including the zoom region information, for example, from a device on a network, a recording medium connected to the reproducing apparatus 161, or the like, and decodes the obtained encoded metadata.
In addition, the metadata decoding unit 171 supplies the metadata obtained by decoding the encoded metadata to each unit of the reproducing apparatus 161 as necessary, and supplies the zoom region information included in the metadata to the zoom region selection unit 62. Further, the metadata decoding unit 171 obtains header information of the metadata, such as the zoom region auxiliary information, together with the encoded metadata as necessary, and supplies the obtained header information to the zoom region selection unit 62.
The content data decoding unit 172 obtains the encoded video data and the encoded audio data of the content, for example, from a device on a network, a recording medium connected to the reproducing apparatus 161, or the like. In addition, the content data decoding unit 172 supplies the obtained encoded video data to the video data decoding unit 63 and supplies the obtained encoded audio data to the audio data decoding unit 65. Note that, in the present example, the encoded video data and encoded audio data, and the encoded metadata, may be obtained from sources, such as devices or recording media, that are different from each other.
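As one hedged illustration of obtaining such metadata separately from the content, zoom region information published on a network might be fetched as follows; the URL and the JSON layout are hypothetical, since this description defines no distribution format.

```python
import json
import urllib.request


def fetch_zoom_region_metadata(url: str) -> list:
    # Fetch zoom region metadata that the content producer, or a third
    # party such as another user, has published separately from the
    # content. The layout {"zoom_regions": [{"left": ..., "top": ...,
    # "right": ..., "bottom": ...}, ...]} is assumed for this sketch.
    with urllib.request.urlopen(url) as response:
        payload = json.load(response)
    return payload["zoom_regions"]
```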
< description of reproduction processing >
Subsequently, the operation of the reproducing apparatus 161 will be described.
When instructed to reproduce the content, the reproducing apparatus 161 performs reproduction processing of obtaining the encoded metadata and the encoded content, and reproduces the content. Hereinafter, the reproduction processing performed by the reproduction apparatus 161 will be described with reference to a flowchart shown in fig. 27.
In step S131, the metadata decoding unit 171 obtains the encoded metadata including the zoom region information, for example, from a device on the network, a recording medium connected to the reproducing apparatus 161, or the like. Note that the encoded metadata may be obtained in advance, before the reproduction processing starts.
In step S132, the metadata decoding unit 171 decodes the obtained encoded metadata, and supplies the metadata obtained as a result thereof to each unit of the reproduction apparatus 161 as necessary. In addition, the metadata decoding unit 171 supplies the zoom region information included in the metadata to the zoom region selection unit 62, and also supplies header information of the metadata, such as zoom region auxiliary information, to the zoom region selection unit 62 as necessary.
When the metadata has been obtained by decoding, the process of step S133 is performed, and zoom region information is selected. However, the process of step S133 is similar to the process of step S42 shown in fig. 21, and therefore its description is omitted.
In step S134, the content data decoding unit 172 obtains the encoded video data and the encoded audio data of the content, for example, from a device on the network, a recording medium connected to the reproducing apparatus 161, or the like. In addition, the content data decoding unit 172 supplies the obtained encoded video data to the video data decoding unit 63, and supplies the obtained encoded audio data to the audio data decoding unit 65.
When the encoded video data and the encoded audio data of the content have been obtained in this way, the processes of steps S135 to S138 are then performed, and the reproduction process ends. This processing is similar to that of steps S43 to S46 shown in fig. 21, and thus its description is omitted.
As described above, the reproducing apparatus 161 obtains the encoded video data and encoded audio data of the content separately from the encoded metadata including the zoom region information. Then, the reproducing apparatus 161 selects appropriate zoom region information, performs cropping of the video data and audio conversion processing of the audio data based on the selected zoom region information, and reproduces the content.
In this way, by obtaining the encoded metadata including the zoom region information separately from the encoded video data and the encoded audio data, it is possible to crop and reproduce a zoom region set not only by the content producer but also by other users and the like.
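Putting the units of the reproducing apparatus 161 together, the overall flow could be sketched as below. Every decoder and processing step is injected as a callable so the fragment stays self-contained; the helper names merely mirror the units of fig. 26 and are not defined by this description.

```python
def reproduce_separately_obtained_content(
        encoded_metadata, encoded_video, encoded_audio,
        decode_metadata, decode_video, decode_audio,
        choose_region, crop_video, convert_audio):
    # Metadata and content are decoded independently of each other.
    regions = decode_metadata(encoded_metadata)  # metadata decoding unit 171
    region = choose_region(regions)              # zoom region selection unit 62
    frames = decode_video(encoded_video)         # video data decoding unit 63
    audio = decode_audio(encoded_audio)          # audio data decoding unit 65
    # Cropping and audio conversion are driven by the selected region.
    frames = crop_video(frames, region)          # video segmentation unit 64
    audio = convert_audio(audio, region)         # audio conversion unit 66
    return frames, audio
```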
Meanwhile, the series of processes described above may be executed by hardware or by software. In a case where the series of processes is executed by software, a program constituting the software is installed on a computer. Here, the computer includes a computer built into dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs therein.
Fig. 28 is a block diagram showing an example of a hardware configuration of a computer that executes the above-described series of processes by using a program.
In the computer, a Central Processing Unit (CPU) 501, a Read Only Memory (ROM) 502, and a Random Access Memory (RAM) 503 are interconnected by a bus 504.
In addition, an input/output interface 505 is connected to the bus 504. An input unit 506, an output unit 507, a recording unit 508, a communication unit 509, and a drive 510 are connected to the input/output interface 505.
The input unit 506 is configured by a keyboard, a mouse, a microphone, an imaging device, and the like. The output unit 507 is configured by a display, a speaker, and the like. The recording unit 508 is configured by a hard disk, a nonvolatile memory, and the like. The communication unit 509 is configured by a network interface or the like. The drive 510 drives a removable medium 511 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
In the computer configured as described above, the CPU 501 loads a program recorded in the recording unit 508 into the RAM 503 through the input/output interface 505 and the bus 504, for example, and executes the loaded program, whereby the series of processes described above is performed.
For example, the program executed by the computer (CPU 501) may be provided by being recorded on the removable medium 511 as a packaged medium or the like. In addition, the program may be provided through a wired or wireless transmission medium such as a local area network, the internet, or digital satellite broadcasting.
In the computer, by loading the removable medium 511 into the drive 510, the program can be installed to the recording unit 508 through the input/output interface 505. Further, the program may be received by the communication unit 509 through a wired or wireless transmission medium and installed to the recording unit 508. Further, the program may also be installed in advance to the ROM 502 or the recording unit 508.
Note that the program executed by the computer may be a program that performs processing in time series according to the order described in this specification, or a program that performs processing in parallel or at necessary timing, for example, when called.
In addition, the embodiments of the present technology are not limited to the above-described embodiments, and various changes may be made within a scope not departing from the concept of the present technology.
For example, the present technology may employ a configuration of cloud computing in which one function is shared by a plurality of apparatuses through a network and is processed together by all the apparatuses.
In addition, each step described in the above-described flowcharts may be performed not only by one apparatus but also by a plurality of apparatuses in a shared manner.
Further, in the case where a plurality of processes are included in one step, the plurality of processes included in one step may be executed not only by one apparatus but also by a plurality of apparatuses in a shared manner.
In addition, the present technology may adopt the following configuration.
[1]
A reproduction apparatus comprising:
a decoding unit that decodes the encoded video data or the encoded audio data;
a zoom region selection unit that selects one or more pieces of zoom region information from a plurality of pieces of zoom region information that specify a region to be zoomed; and
a data processing unit that performs a cropping process on video data obtained by the decoding or performs an audio conversion process on audio data obtained by the decoding, based on the selected zoom region information.
[2]
The reproduction apparatus according to [1], wherein zoom region information specifying the region for each type of reproduction target device is included in the plurality of pieces of zoom region information.
[3]
The reproduction apparatus according to [1], wherein zoom region information specifying the region for each rotation direction of a reproduction target device is included in the plurality of pieces of zoom region information.
[4]
The reproduction apparatus according to [1], wherein zoom region information specifying a region for each specific video object is included in the plurality of pieces of zoom region information.
[5]
The reproduction apparatus according to [1], wherein the zoom region selection unit selects the zoom region information according to an operation input by a user.
[6]
The reproduction apparatus according to [1], wherein the zoom region selection unit selects the zoom region information based on information relating to the reproduction apparatus.
[7]
The reproduction apparatus according to [6], wherein the zoom region selection unit selects the zoom region information by using, as the information relating to the reproduction apparatus, at least one of information representing the type of the reproduction apparatus and information representing the rotation direction of the reproduction apparatus (an illustrative sketch of this selection follows configuration [12] below).
[8]
A reproduction method comprising the steps of:
decoding the encoded video data or the encoded audio data;
selecting one or more pieces of zoom region information from a plurality of pieces of zoom region information that specify a region to be zoomed; and
based on the selected zoom region information, a cropping process is performed on video data obtained by decoding or an audio conversion process is performed on audio data obtained by decoding.
[9]
A program for causing a computer to execute a process comprising the steps of:
decoding the encoded video data or the encoded audio data;
selecting one or more pieces of zoom region information from a plurality of pieces of zoom region information that specify a region to be zoomed; and
based on the selected zoom region information, a cropping process is performed on video data obtained by decoding or an audio conversion process is performed on audio data obtained by decoding.
[10]
An encoding apparatus comprising:
an encoding unit that encodes video data or encodes audio data; and
a multiplexer that generates a bitstream by multiplexing the encoded video data or the encoded audio data with a plurality of pieces of zoom region information specifying a region to be zoomed.
[11]
An encoding method comprising the steps of:
encoding video data or encoding audio data; and
a bitstream is generated by multiplexing encoded video data or encoded audio data with a plurality of pieces of zoom region information specifying regions to be zoomed.
[12]
A program for causing a computer to execute a process comprising the steps of:
encoding video data or encoding audio data; and
a bitstream is generated by multiplexing encoded video data or encoded audio data with a plurality of pieces of zoom region information specifying regions to be zoomed.
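As a purely illustrative reading of configurations [5] to [7], the zoom region selection could be realized along the following lines; the field names device_type and rotation, and the dictionary representation of each piece of zoom region information, are assumptions made for this sketch.

```python
from typing import List, Optional


def select_zoom_region(regions: List[dict],
                       device_type: Optional[str] = None,
                       rotation: Optional[str] = None,
                       user_choice: Optional[int] = None) -> Optional[dict]:
    # An explicit operation input by the user takes priority, as in [5].
    if user_choice is not None:
        return regions[user_choice]
    # Otherwise select based on information relating to the reproduction
    # apparatus, as in [6] and [7].
    for region in regions:
        if device_type is not None and region.get("device_type") != device_type:
            continue
        if rotation is not None and region.get("rotation") != rotation:
            continue
        return region
    return None  # no match: reproduce the entire video without zooming
```

For example, a smartphone held vertically might call select_zoom_region(regions, device_type="smartphone", rotation="vertical") to obtain the region prepared for that device and orientation.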
List of reference numerals
11 encoding apparatus
21 video data encoding unit
22 audio data encoding unit
23 metadata encoding unit
24 multiplexer
25 output unit
51 reproducing apparatus
61 content data decoding unit
62 zoom region selection unit
63 video data decoding unit
64 video segmentation unit
65 audio data decoding unit
66 audio conversion unit
Claims (1)
1. A reproduction apparatus comprising:
a decoding unit that decodes the encoded video data or the encoded audio data, and decodes a plurality of pieces of zoom region information that specify a region to be zoomed;
a zoom region selection unit that selects one or more pieces of zoom region information from a plurality of pieces of zoom region information that specify a region to be zoomed; and
a data processing unit that performs a cropping process on video data obtained by decoding based on the selected zoom region information, or performs an audio conversion process on audio data obtained by decoding based on a position of a sound source object and the selected zoom region information.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014208594 | 2014-10-10 | ||
JP2014-208594 | 2014-10-10 | ||
CN201580053817.8A CN106797499A (en) | 2014-10-10 | 2015-09-28 | Code device and method, transcriber and method and program |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201580053817.8A Division CN106797499A (en) | 2014-10-10 | 2015-09-28 | Code device and method, transcriber and method and program |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112511833A true CN112511833A (en) | 2021-03-16 |
Family
ID=55653028
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011216551.3A Pending CN112511833A (en) | 2014-10-10 | 2015-09-28 | Reproducing apparatus |
CN202210679653.1A Pending CN115243075A (en) | 2014-10-10 | 2015-09-28 | Reproducing apparatus and reproducing method |
CN201580053817.8A Pending CN106797499A (en) | 2014-10-10 | 2015-09-28 | Code device and method, transcriber and method and program |
CN202210683302.8A Pending CN115209186A (en) | 2014-10-10 | 2015-09-28 | Reproducing apparatus and reproducing method |
Country Status (5)
Country | Link |
---|---|
US (4) | US10631025B2 (en) |
EP (2) | EP3829185B1 (en) |
JP (3) | JP6565922B2 (en) |
CN (4) | CN112511833A (en) |
WO (1) | WO2016056411A1 (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002171529A (en) * | 2000-11-30 | 2002-06-14 | Matsushita Electric Ind Co Ltd | Video encoder and method, recording medium, and decoder |
JP2008199370A (en) * | 2007-02-14 | 2008-08-28 | Nippon Hoso Kyokai <Nhk> | Digital-broadcasting program display device, and digital-broadcasting program display program |
US20090251594A1 (en) * | 2008-04-02 | 2009-10-08 | Microsoft Corporation | Video retargeting |
JP2010232814A (en) * | 2009-03-26 | 2010-10-14 | Nikon Corp | Video editing program, and video editing device |
CN102244807A (en) * | 2010-06-02 | 2011-11-16 | 微软公司 | Microsoft Corporation |
US20110299832A1 (en) * | 2010-06-02 | 2011-12-08 | Microsoft Corporation | Adaptive video zoom |
JP2012060575A (en) * | 2010-09-13 | 2012-03-22 | Canon Inc | Video processing device and control method therefor |
Non-Patent Citations (1)
Title |
---|
R. OLDFIELD, B. SHIRLEY AND N. CULLEN: "Demo paper: Audio object extraction for live sports broadcast", 2013 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO WORKSHOPS (ICMEW), 3 October 2013 (2013-10-03), pages 1 *
Also Published As
Publication number | Publication date |
---|---|
WO2016056411A1 (en) | 2016-04-14 |
CN115243075A (en) | 2022-10-25 |
CN106797499A (en) | 2017-05-31 |
JPWO2016056411A1 (en) | 2017-07-20 |
CN115209186A (en) | 2022-10-18 |
US11917221B2 (en) | 2024-02-27 |
JP6992789B2 (en) | 2022-01-13 |
US20200221146A1 (en) | 2020-07-09 |
JP7409362B2 (en) | 2024-01-09 |
US20180242030A1 (en) | 2018-08-23 |
US11330310B2 (en) | 2022-05-10 |
EP3206408B1 (en) | 2020-12-30 |
EP3206408A4 (en) | 2018-04-25 |
EP3829185A1 (en) | 2021-06-02 |
US20220256216A1 (en) | 2022-08-11 |
JP6565922B2 (en) | 2019-08-28 |
EP3829185B1 (en) | 2024-04-10 |
US10631025B2 (en) | 2020-04-21 |
JP2021185720A (en) | 2021-12-09 |
EP3206408A1 (en) | 2017-08-16 |
US20240146981A1 (en) | 2024-05-02 |
JP2019186969A (en) | 2019-10-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7409362B2 (en) | Reproduction device and method, and program | |
JP6501933B2 (en) | XML document generation apparatus, generation method, information processing apparatus, information processing method, and program | |
US10257638B2 (en) | Audio object processing based on spatial listener information | |
CN109155874B (en) | Method, apparatus and computer program for adaptive streaming of virtual reality media content | |
EP2624551A1 (en) | Content transmitting device, content transmitting method, content reproduction device, content reproduction method, program, and content delivery system | |
CN103125123A (en) | Playback device, playback method, integrated circuit, broadcasting system, and broadcasting method | |
JP2018513583A (en) | Audio video file live streaming method, system and server | |
CN106303663B (en) | live broadcast processing method and device and live broadcast server | |
JP2012015990A (en) | Video processing apparatus and control method thereof | |
JP6860485B2 (en) | Information processing equipment, information processing methods, and programs | |
JP2022501902A (en) | Image processing methods, devices, systems, network equipment, terminals and computer programs | |
WO2019155930A1 (en) | Transmission device, transmission method, processing device, and processing method | |
CN111903135A (en) | Information processing apparatus, information processing method, and program | |
CN111903136B (en) | Information processing apparatus, information processing method, and computer-readable storage medium | |
WO2019004073A1 (en) | Image placement determination device, display control device, image placement determination method, display control method, and program | |
JP4017436B2 (en) | 3D moving image data providing method and display method thereof, providing system and display terminal, execution program of the method, and recording medium recording the execution program of the method | |
US20230156257A1 (en) | Information processing apparatus, information processing method, and storage medium | |
WO2004030375A1 (en) | Image data creation device, image data reproduction device, image data creation method, image data reproduction method, recording medium containing image data or image processing program, and image data recording device | |
JP2018142934A (en) | Video distribution system | |
SA516371004B1 (en) | Multi-Layer Video File Format Designs |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||