
CN118414837A - Point cloud data transmitting device, method executed in transmitting device, point cloud data receiving device, and method executed in receiving device - Google Patents

Point cloud data transmitting device, method executed in transmitting device, point cloud data receiving device, and method executed in receiving device Download PDF

Info

Publication number
CN118414837A
Authority
CN
China
Prior art keywords
point cloud
temporal
track
value
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202280081528.9A
Other languages
Chinese (zh)
Inventor
Hendry Hendry
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LG Electronics Inc
Original Assignee
LG Electronics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Electronics Inc filed Critical LG Electronics Inc
Priority claimed from PCT/KR2022/020139 (WO2023113405A1)
Publication of CN118414837A
Legal status: Pending

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Provided are a point cloud data transmitting apparatus, a method performed in the transmitting apparatus, a point cloud data receiving apparatus, and a method performed in the receiving apparatus. The method performed in the point cloud data receiving device disclosed herein comprises the steps of: acquiring temporal scalability information about a point cloud in a three-dimensional space based on a G-PCC file; and reconstructing the three-dimensional point cloud based on the temporal scalability information, wherein the temporal scalability information comprises a first syntax element indicating a temporal level identifier for samples in a temporal level track, and the value of the temporal level identifier may be represented as a discrete value.

Description

Point cloud data transmitting device, method executed in transmitting device, point cloud data receiving device, and method executed in receiving device
Technical Field
The present disclosure relates to methods and apparatus for processing point cloud content.
Background
Point cloud content is expressed as a point cloud, which is a set of points belonging to a coordinate system representing a three-dimensional space. The point cloud content may represent three-dimensional media and be used to provide various services such as virtual reality (VR), augmented reality (AR), mixed reality (MR), and self-driving services. Since tens of thousands to hundreds of thousands of points are required to express point cloud content, a method of efficiently processing a large amount of point data is required.
Disclosure of Invention
Technical problem
The present disclosure provides an apparatus and method for efficiently processing point cloud data. The present disclosure provides a point cloud data processing method and apparatus for resolving latency and encoding/decoding complexity.
In addition, the present disclosure provides an apparatus and method supporting temporal scalability in the carriage of geometry-based point cloud compression (G-PCC) data.
In addition, the present disclosure proposes an apparatus and method for efficiently storing a G-PCC bitstream in a single track of a file or dividing it across multiple tracks, and for providing signaling of the G-PCC bitstream, in order to provide a point cloud content service.
In addition, the present disclosure proposes an apparatus and method for processing file storage techniques to support efficient access to stored G-PCC bitstreams.
In addition, the present disclosure proposes an apparatus and method for defining interleaving between samples belonging to different temporal levels when supporting temporal scalability.
The technical problems solved by the present disclosure are not limited to the above technical problems, and other technical problems not described herein will be apparent to those skilled in the art from the following description.
Technical proposal
According to an embodiment of the present disclosure, a method performed by a receiving device of point cloud data includes the steps of: acquiring temporal scalability information of a point cloud in a three-dimensional space based on a G-PCC file; and reconstructing the three-dimensional point cloud based on the temporal scalability information. The temporal scalability information may include a first syntax element indicating an identifier of a temporal level for samples in a temporal level track, and the identifier value of the temporal level may be represented as a discrete value.
According to an embodiment of the present disclosure, the identifier values of the temporal levels may be discrete values spaced at equal intervals, and the interval may be 1.
According to embodiments of the present disclosure, samples having identifiers of different temporal levels may be included in the respective temporal level tracks.
According to an embodiment of the present disclosure, the temporal level tracks may include a first temporal level track and a second temporal level track, and when the second temporal level track is the next track after the first temporal level track, the second temporal level track may include samples whose temporal level identifiers are greater than the maximum temporal level identifier value of the first temporal level track.
According to an embodiment of the present disclosure, the second temporal level track may include samples having an identifier value obtained by adding 1 to the maximum temporal level identifier value of the first temporal level track.
According to an embodiment of the present disclosure, the temporal scalability information may further include a second syntax element indicating whether there are a plurality of temporal level tracks, a first value of the second syntax element may indicate that there is only one temporal level track, and a second value of the second syntax element may indicate that there are a plurality of temporal level tracks.
According to an embodiment of the present disclosure, the first value may be 0.
According to an embodiment of the present disclosure, the second value may be 1.
According to embodiments of the present disclosure, the temporal level track may include only samples of consecutive temporal levels.
According to embodiments of the present disclosure, different temporal level tracks may include only samples of different temporal levels.
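For illustration only (not part of the disclosed embodiments), the track constraints described above — temporal level identifiers expressed as discrete values with an interval of 1, no identifier shared between tracks, and each subsequent temporal level track starting at the previous track's maximum identifier plus 1 — could be checked as in the following sketch. The function name and the per-track set representation are assumptions made for this example.

```python
from typing import List, Set

def check_temporal_level_tracks(tracks: List[Set[int]]) -> bool:
    """Check the temporal-level-track constraints described above.

    `tracks` is a hypothetical representation: one set of temporal level
    identifiers per temporal level track, in track order.
    """
    expected_next = None
    seen: Set[int] = set()
    for level_ids in tracks:
        ordered = sorted(level_ids)
        # Identifiers within a track must be consecutive discrete values (interval 1).
        if any(b - a != 1 for a, b in zip(ordered, ordered[1:])):
            return False
        # Tracks must not share temporal level identifiers with each other.
        if seen & level_ids:
            return False
        # The next track must start at the previous track's maximum identifier + 1.
        if expected_next is not None and ordered[0] != expected_next:
            return False
        seen |= level_ids
        expected_next = ordered[-1] + 1
    return True

# Example: track 1 carries levels {0, 1}, track 2 carries {2, 3} -> valid.
print(check_temporal_level_tracks([{0, 1}, {2, 3}]))   # True
print(check_temporal_level_tracks([{0, 1}, {3, 4}]))   # False (gap between tracks)
```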
According to an embodiment of the present disclosure, a method performed by a transmitting apparatus of point cloud data may include the steps of: determining whether temporal scalability is applied to the point cloud data in the three-dimensional space; and generating a G-PCC file including the temporal scalability information and the point cloud data. The temporal scalability information may include a first syntax element indicating an identifier of a temporal level for samples in a temporal level track, and the identifier value of the temporal level may be represented as a discrete value.
According to an embodiment of the present disclosure, a receiving device of point cloud data may include a memory and at least one processor. The at least one processor may obtain temporal scalability information for a point cloud in a three-dimensional space based on a G-PCC file, and reconstruct the three-dimensional point cloud based on the temporal scalability information. The temporal scalability information may include a first syntax element indicating an identifier of a temporal level for samples in a temporal level track, and the identifier value of the temporal level may be represented as a discrete value.
According to an embodiment of the present disclosure, a transmitting device of point cloud data may include a memory and at least one processor. The at least one processor may determine whether temporal scalability is applied to point cloud data in a three-dimensional space, and generate a G-PCC file including temporal scalability information and the point cloud data. The temporal scalability information may include a first syntax element indicating an identifier of a temporal level for samples in a temporal level track, and the identifier value of the temporal level may be represented as a discrete value.
According to an embodiment of the present disclosure, a computer readable medium storing a G-PCC bitstream or file is disclosed. The G-PCC bit stream or file may be generated by a method performed by a transmitting apparatus of the point cloud data.
According to an embodiment of the present disclosure, a method of transmitting a G-PCC bitstream or file is disclosed. The G-PCC bit stream or file may be generated by a method performed by a transmitting apparatus of the point cloud data.
Advantageous effects
The apparatus and method according to the embodiments of the present disclosure may process point cloud data with high efficiency.
Apparatus and methods according to embodiments of the present disclosure may provide high quality point cloud services.
Apparatuses and methods according to embodiments of the present disclosure may provide point cloud content for providing general services such as VR services and self-driving services.
Apparatuses and methods according to embodiments of the present disclosure may provide temporal scalability for efficiently accessing a desired component among G-PCC components.
Apparatus and methods according to embodiments of the present disclosure may define interleaving between samples belonging to different temporal levels when supporting temporal scalability.
Apparatuses and methods according to embodiments of the present disclosure may improve encoding/decoding efficiency and speed by defining a value scheme for the temporal level identifier that clearly defines the track structure of multi-track G-PCC content.
Apparatuses and methods according to embodiments of the present disclosure may support temporal scalability so that data may be manipulated at a high level consistent with network capabilities or decoder capabilities, and thus the performance of the point cloud content providing system can be improved.
Apparatuses and methods according to embodiments of the present disclosure may divide a G-PCC bitstream into one or more tracks in a file and store them.
Apparatuses and methods according to embodiments of the present disclosure may enable smooth and progressive playback by reducing an increase in playback complexity.
Drawings
Fig. 1 is a block diagram illustrating an example of a system for providing point cloud content according to an embodiment of the present disclosure.
Fig. 2 is a block diagram illustrating an example of a process of providing point cloud content according to an embodiment of the present disclosure.
Fig. 3 illustrates an example of a process of acquiring point cloud video according to an embodiment of the present disclosure.
Fig. 4 illustrates an example of a point cloud encoding apparatus according to an embodiment of the present disclosure.
Fig. 5 illustrates an example of voxels (voxel) according to an embodiment of the disclosure.
Fig. 6 illustrates an example of an octree (octree) and an occupancy code (occupancy code) according to embodiments of the present disclosure.
Fig. 7 illustrates an example of neighbor patterns (neighbor patterns) according to embodiments of the present disclosure.
Fig. 8 illustrates an example of a point configuration in terms of LOD distance values according to an embodiment of the present disclosure.
Fig. 9 illustrates an example of points of respective LODs according to an embodiment of the present disclosure.
Fig. 10 is a block diagram illustrating an example of a point cloud decoding apparatus according to an embodiment of the present disclosure.
Fig. 11 is a block diagram illustrating another example of a point cloud decoding apparatus according to an embodiment of the present disclosure.
Fig. 12 is a block diagram illustrating another example of a transmitting apparatus according to an embodiment of the present disclosure.
Fig. 13 is a block diagram illustrating another example of a receiving apparatus according to an embodiment of the present disclosure.
Fig. 14 illustrates an example of a structure capable of interworking with a method/apparatus for transmitting and receiving point cloud data according to an embodiment of the present disclosure.
Fig. 15 illustrates an example in which a bounding box is spatially partitioned into one or more 3D blocks according to an embodiment of the present disclosure.
Fig. 16 illustrates an example of a structure of a bitstream according to an embodiment of the present disclosure.
Fig. 17 illustrates an example of a file including a single track according to an embodiment of the present disclosure.
Fig. 18 illustrates an example of a file including a plurality of tracks according to an embodiment of the present disclosure.
Fig. 19 to 23 illustrate examples of temporal scalability information according to an embodiment of the present disclosure.
Fig. 24 is a flowchart of a method performed by a receiving device or a transmitting device of point cloud data according to an embodiment of the present disclosure.
Fig. 25 is a flowchart of a method performed by a transmitting apparatus of point cloud data according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that those skilled in the art to which the present disclosure pertains can easily implement them. The present disclosure may be embodied in many different forms and is not limited to the embodiments described herein.
In describing the present disclosure, a detailed description of known functions and configurations may be omitted when it may obscure the subject matter of the present disclosure. In the drawings, parts irrelevant to the description of the present disclosure are omitted, and like reference numerals are given to like parts.
In this disclosure, when one component is "connected," "coupled," or "linked" to another component, it can include not only a direct connection, but also an indirect connection in which the other component exists. In addition, when an element is referred to as being "comprising" or "having" another element, it is intended that the inclusion of the other element is not excluded unless stated to the contrary.
In this disclosure, terms such as first, second, etc. are used solely for the purpose of distinguishing one component from another and not limitation of the order or importance of the components unless otherwise specified. Accordingly, within the scope of the present disclosure, a first component in one embodiment may be referred to as a second component in another embodiment, and similarly, a second component in one embodiment may be referred to as a first component in another embodiment.
In this disclosure, components that are distinguished from each other are used to clearly explain their features and do not necessarily mean separating the components. That is, multiple components may be integrated to form one hardware or software unit, or one component may be distributed to form multiple hardware or software units. Accordingly, such integrated or distributed embodiments are included within the scope of the present disclosure, even though not specifically mentioned.
The components described in the various embodiments are not necessarily essential components in this disclosure, some of which may be optional components. Thus, embodiments consisting of a subset of the components described in one embodiment are also included within the scope of the present disclosure. In addition, embodiments that include other components in addition to those described in the various embodiments are also included within the scope of the present disclosure.
The present disclosure relates to encoding and decoding of point cloud related data, and unless terms used in the present disclosure are redefined in the present disclosure, these terms may have a general meaning commonly used in the art to which the present disclosure pertains.
In this disclosure, the terms "/" and "," should be interpreted as indicating "and/or". For example, the expressions "A/B" and "A, B" may mean "A and/or B". Further, "A/B/C" and "A, B, C" may mean "at least one of A, B and/or C".
In this disclosure, the term "or" should be interpreted as indicating "and/or". For example, the expression "a or B" may include 1) only "a", 2) only "B", and/or 3) both "a and B". In other words, in the present disclosure, the term "or" should be interpreted as indicating "additionally or alternatively".
The present disclosure relates to compression of point cloud related data. The various methods or embodiments of the present disclosure may be applied to the point cloud compression or point cloud coding (PCC) standard of the Moving Picture Experts Group (MPEG) (e.g., the G-PCC or V-PCC standard) or to a next-generation video/image coding standard.
In this disclosure, a "point cloud" may refer to a set of points located in a three-dimensional space. Also, in the present disclosure, "point cloud content" is expressed as a point cloud, and may refer to "point cloud video/image". Hereinafter, the 'point cloud video/image' is referred to as 'point cloud video'. The point cloud video may include one or more frames, and one frame may be a still image or picture (picture). Thus, the point cloud video may include point cloud images/frames/pictures, and may be referred to as any of "point cloud images", "point cloud frames", and "point cloud pictures".
In this disclosure, "point cloud data" may refer to data or information related to individual points in a point cloud. The point cloud data may include geometry and/or attributes. In addition, the point cloud data may also include metadata. The point cloud data may be referred to as "point cloud content data" or "point cloud video data" or the like. In addition, the point cloud data may be referred to as "point cloud content", "point cloud video", "G-PCC data", and the like.
In the present disclosure, a point cloud object corresponding to point cloud data may be represented in a frame shape based on a coordinate system, and the frame shape based on the coordinate system may be referred to as a bounding box. That is, the bounding box may be a cuboid capable of accommodating all points of the point cloud, and may be a cuboid including the source point cloud frame.
In the present disclosure, the geometry includes the position (or position information) of each point, and the position may be expressed as parameters (e.g., x-axis values, y-axis values, and z-axis values) representing a three-dimensional coordinate system (e.g., a coordinate system composed of x-axis, y-axis, and z-axis). The geometry may be referred to as "geometry information".
In the present disclosure, the attributes may include attributes of the respective points, and the attributes may include one or more of texture information, color (RGB or YCbCr), reflectivity (r), transparency, and the like of the respective points. The attribute may be referred to as "attribute information". The metadata may include various data related to acquisition in an acquisition process to be described later.
Overview of a Point cloud content providing System
Fig. 1 illustrates an example of a system for providing point cloud content (hereinafter, referred to as a 'point cloud content providing system') according to an embodiment of the present disclosure. Fig. 2 illustrates an example of a process in which the point cloud content providing system provides point cloud content.
As shown in fig. 1, the point cloud content providing system may include a transmitting apparatus 10 and a receiving apparatus 20. The point cloud content providing system may perform the acquisition process S20, the encoding process S21, the transmission process S22, the decoding process S23, the rendering process S24, and/or the feedback process S25 shown in fig. 2 through the operations of the transmitting apparatus 10 and the receiving apparatus 20.
The transmitting apparatus 10 acquires point cloud data and outputs a bitstream through a series of processes (e.g., an encoding process) on the acquired point cloud data (source point cloud data) in order to provide point cloud content. Here, the point cloud data may be output in the form of a bitstream through the encoding process. In some embodiments, the transmitting apparatus 10 may transmit the output bitstream to the receiving apparatus 20 in the form of a file or streaming (segments) through a digital storage medium or a network. The digital storage medium may include a variety of storage media such as USB, SD, CD, DVD, Blu-ray disc, HDD, and SSD. The receiving device 20 may process (e.g., decode or reconstruct) the received data (e.g., encoded point cloud data) into the source point cloud data and render it. The point cloud content may be provided to a user through these processes, and the present disclosure may provide various embodiments required to efficiently perform this series of processes.
As illustrated in fig. 1, the transmitting apparatus 10 may include: the acquisition unit 11, the encoding unit 12, the encapsulation processing unit 13, and the transmission unit 14, and the reception apparatus 20 may include: a receiving unit 21, a decapsulation processing unit 22, a decoding unit 23, and a rendering unit 24.
The acquisition unit 11 may perform a process S20 of acquiring a point cloud video through a capturing, synthesizing, or generating process. Thus, the acquisition unit 11 may be referred to as a 'point cloud video acquisition unit'.
Point cloud data (geometric and/or attributes, etc.) of a plurality of points may be generated through the acquisition process (S20). Further, through the acquisition process (S20), metadata related to the acquisition of the point cloud video may be generated. Also, mesh data (e.g., triangle data) indicating connection information between the point clouds may be generated through the acquisition process (S20).
The metadata may include initial viewing orientation metadata. The initial viewing orientation metadata may indicate whether the point cloud data is data representing a front or a back. Metadata may be referred to as "auxiliary data," which is metadata of a point cloud.
The acquired point cloud video may include a polygon file format or a Stanford triangle format (PLY) file. Because the point cloud video has one or more frames, the acquired point cloud video may include one or more PLY files. The PLY file may include point cloud data for each point.
In order to acquire the point cloud video (or point cloud data), the acquisition unit 11 may be composed of a combination of a camera device capable of acquiring depth (depth information) and an RGB camera capable of extracting color information corresponding to the depth information. Here, the camera device capable of acquiring depth information may be a combination of an infrared pattern projector and an infrared camera. In addition, the acquisition unit 11 may be composed of a laser radar (LiDAR), and the laser radar may use a radar system for measuring the position coordinates of the reflector by measuring the time required for the laser pulse to be emitted and returned after being reflected.
The acquisition unit 11 may extract the shape of geometry composed of points in a three-dimensional space from the depth information, and may extract attributes representing the color or reflectance of the respective points from the RGB information.
As a method of extracting (or capturing, acquiring, etc.) point cloud video (or point cloud data), there may be an inward-facing method of capturing a central object and an outward-facing method of capturing an external environment. Fig. 3 shows an example of an inward facing method and an outward facing method. Fig. 3 (a) shows an example of an inward-facing method, and fig. 3 (b) shows an example of an outward-facing method.
As illustrated in (a) of fig. 3, when core objects such as characters, players, things, or actors are configured as point cloud content that a user can freely view in 360 degrees in a VR/AR environment, the inward-facing method may be used. As illustrated in (b) of fig. 3, when the current surrounding environment as seen from a vehicle is configured as point cloud content (such as in autonomous driving), the outward-facing method may be used. When point cloud content is configured from a plurality of cameras, a camera calibration process may be performed before capturing the content in order to set a global coordinate system between the cameras. A method of synthesizing an arbitrary point cloud video based on the captured point cloud video may also be utilized.
On the other hand, in the case of providing point cloud video for a virtual space generated by a computer, capturing by a real camera may not be performed. In this case, post-processing may be required to improve the quality of the captured point cloud content. For example, in the acquisition process (S20), the maximum/minimum depth value may be adjusted within a range provided by the camera apparatus, but post-processing for removing an unwanted region (e.g., background) or point data of the unwanted region, or post-processing for identifying a connected space and filling a space hole (spatial hole) may be performed. As another example, post-processing may be performed to integrate point cloud data extracted from cameras sharing a spatial coordinate system into a single content by a transformation process of transforming each point into a global coordinate system based on the position coordinates of each camera. In this way, a wide range of point cloud contents can be generated, or point cloud contents having a high density of points can be acquired.
The encoding unit 12 may perform an encoding process of encoding data (e.g., geometry, attribute, and/or metadata, and/or mesh data, etc.) generated by the acquisition unit 11 into one or more bitstreams (S21). Thus, the encoding unit 12 may be referred to as a 'point cloud video encoder'. The encoding unit 12 may encode the data generated by the acquisition unit 11 in series or in parallel.
The encoding process S21 performed by the encoding unit 12 may be geometry-based point cloud compression (G-PCC). The encoding unit 12 may perform a series of processes such as prediction, transformation, quantization, and entropy encoding for compression and encoding efficiency.
The encoded point cloud data may be output in the form of a bit stream. Based on the G-PCC process, the encoding unit 12 may segment the point cloud data into geometry and attributes and encode them as described below. In this case, the output bitstream may include a geometry bitstream including the encoded geometry and an attribute bitstream including the encoded attributes. In addition, the output bitstream may further include one or more of a metadata bitstream including metadata, an auxiliary bitstream including auxiliary data, and a mesh data bitstream including mesh data. The encoding process (S21) will be described in more detail below. The bitstream including the encoded point cloud data may be referred to as a 'point cloud bitstream' or a 'point cloud video bitstream'.
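As a purely illustrative sketch of the bitstream composition described above, the sub-bitstreams could be grouped in a simple container as follows; the class and field names are hypothetical and do not reflect the actual G-PCC bitstream syntax.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EncodedPointCloudBitstream:
    """Hypothetical container mirroring the sub-bitstreams described above."""
    geometry: bytes                      # encoded geometry bitstream
    attributes: bytes                    # encoded attribute bitstream
    metadata: Optional[bytes] = None     # optional metadata bitstream
    auxiliary: Optional[bytes] = None    # optional auxiliary data bitstream
    mesh: Optional[bytes] = None         # optional mesh data bitstream
```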
The encapsulation processing unit 13 may perform a process of encapsulating one or more bitstreams output from the encoding unit 12 in the form of a file or segments. Thus, the encapsulation processing unit 13 may be referred to as a 'file/segment encapsulation module'. Although the figure shows an example in which the encapsulation processing unit 13 is configured as a component/module separate from the transmission unit 14, in some embodiments the encapsulation processing unit 13 may be included in the transmission unit 14.
The encapsulation processing unit 13 may encapsulate the data in a file format such as the ISO Base Media File Format (ISOBMFF) or process the data in the form of DASH segments or the like. In some embodiments, the encapsulation processing unit 13 may include metadata in the file format. The metadata may be included in boxes at various levels of, for example, the ISOBMFF file format, or as data in a separate track within the file. In some implementations, the encapsulation processing unit 13 may encapsulate the metadata itself into a file. The metadata processed by the encapsulation processing unit 13 may be received from a metadata processing unit not shown in the figure. The metadata processing unit may be included in the encoding unit 12 or may be configured as a separate component/module.
The transmission unit 14 may perform a transmission process of applying processing according to a file format (processing for transmission) to the encapsulated point cloud bitstream (S22). The transmission unit 14 may transmit the bitstream or a file/segment including the bitstream to the receiving unit 21 of the receiving apparatus 20 through a digital storage medium or a network. Thus, the transmission unit 14 may be referred to as a 'transmitter' or a 'communication module'.
The transmission unit 14 may process the point cloud data according to any transport protocol. Here, 'processing the point cloud data according to any transport protocol' may be 'processing for transmission'. The processing for transmission may include processing for delivery through a broadcast network, processing for delivery through broadband, and the like. In some embodiments, the transmission unit 14 may receive not only the point cloud data but also metadata from the metadata processing unit, and may perform processing for transmission on the metadata. In some implementations, the processing for transmission may be performed by a transmission processing unit, and the transmission processing unit may be included in the transmission unit 14 or configured as a component/module separate from the transmission unit 14.
The receiving unit 21 may receive a bit stream transmitted by the transmitting apparatus 10 or a file/segment including the bit stream. Depending on the transmitted channel, the receiving unit 21 may receive a bitstream or a file/segment including a bitstream through a broadcast network or may receive a bitstream or a file/segment including a bitstream through a broadband. Alternatively, the receiving unit 21 may receive the bitstream or a file/segment including the bitstream through a digital storage medium.
The receiving unit 21 may perform processing on the received bitstream or a file/segment including the bitstream according to a transmission protocol. The receiving unit 21 may perform the inverse of the transmission process (process for transmission) to correspond to the process for transmission performed by the transmitting apparatus 10. The receiving unit 21 may transmit encoded point cloud data among the received data to the decapsulation processing unit 22, and may transmit metadata to the metadata parsing unit. The metadata may take the form of a signaling table. In some embodiments, the inverse of the processing for transmission may be performed in the receive processing unit. Each of the reception processing unit, the decapsulation processing unit 22, and the metadata parsing unit may be included in the reception unit 21, or may be configured as a component/module separate from the reception unit 21.
The decapsulation processing unit 22 may decapsulate the point cloud data in the file format (i.e., a bitstream in the file format) received from the reception unit 21 or the reception processing unit. Thus, the decapsulation processing unit 22 may be referred to as a 'file/segment decapsulation module'.
The decapsulation processing unit 22 may obtain a point cloud bitstream or a metadata bitstream by decapsulating the file according to ISOBMFF or the like. In some implementations, the metadata (metadata bitstream) can be included in the point cloud bitstream. The acquired point cloud bitstream may be transmitted to the decoding unit 23, and the acquired metadata bitstream may be transmitted to the metadata processing unit. The metadata processing unit may be included in the decoding unit 23 or may be configured as a separate component/module. The metadata obtained by the decapsulation processing unit 22 may take the form of boxes or tracks in the file format. The decapsulation processing unit 22 may receive metadata required for decapsulation from the metadata processing unit, if necessary. The metadata may be transmitted to the decoding unit 23 and used in the decoding process (S23), or may be transmitted to the rendering unit 24 and used in the rendering process (S24).
The decoding unit 23 may receive the bitstream and perform an operation corresponding to the operation of the encoding unit 12, thereby performing a decoding process of decoding the point cloud bitstream (encoded point cloud data) (S23). Thus, the decoding unit 23 may be referred to as a 'point cloud video decoder'.
The decoding unit 23 may segment the point cloud data into geometry and attributes and decode them. For example, the decoding unit 23 may reconstruct (decode) the geometry from the geometry bitstream included in the point cloud bitstream, and restore (decode) the attributes based on the reconstructed geometry and the attribute bitstream included in the point cloud bitstream. A three-dimensional point cloud video/image may be reconstructed based on the position information according to the reconstructed geometry and the attributes (such as color or texture) according to the decoded attributes. The decoding process (S23) will be described in more detail below.
The rendering unit 24 may perform a rendering process S24 of rendering the reconstructed point cloud video. Therefore, the rendering unit 24 may be referred to as a 'renderer'.
The rendering process S24 may refer to a process of rendering and displaying point cloud content in a 3D space. The rendering process S24 may perform rendering according to a desired rendering method based on the position information and the attribute information of the points decoded through the decoding process.
Points of the point cloud content may be rendered as vertices having a specific thickness, cubes having a specific minimum size centered at the vertex positions, or circles centered at the vertex positions. The rendered video may be displayed by a display unit. The user may view all or part of the rendered result through a VR/AR display or a general-purpose display.
The feedback process S25 may include a process of transmitting various feedback information acquired during the rendering process S24 or the display process to other components in the transmitting apparatus 10 or the receiving apparatus 20. The feedback process S25 may be performed by one or more of the components included in the reception apparatus 20 of fig. 1, or may be performed by one or more of the components shown in fig. 10 and 11. In some embodiments, the feedback process S25 may be performed by a 'feedback unit' or a 'sensing/tracking unit'.
Interactivity in point cloud content consumption may be provided through the feedback process (S25). In some embodiments, in the feedback process S25, head orientation information, viewport information indicating an area the user is currently viewing, and the like may be fed back. In some implementations, the user may interact with content implemented in a VR/AR/MR/self-driving environment. In this case, in the feedback process S25, information related to the interaction may be transmitted to the transmitting apparatus 10 or to the service provider. In some embodiments, the feedback process (S25) may not be performed.
The head orientation information may refer to information about the head position, angle, movement, etc. of the user. Based on this information, information about the region in the point cloud video that the user is currently viewing (i.e., viewport information) can be calculated.
The viewport information may be information about an area in the point cloud video that the user is currently viewing. A viewpoint (viewpoint) is a point in a point cloud video that a user is watching, and may refer to a center point of a viewport region. That is, the viewport is a region centered on the viewpoint, and the size and shape of the region can be determined by a field of view (FOV). By gaze (gaze) analysis using viewport information, it is possible to check how the user consumes the point cloud video, which region of the point cloud video the user gazes on, and so on. The gaze analysis may be performed at the receiving side (receiving device) and transmitted to the transmitting side (transmitting device) through a feedback channel. Devices such as VR/AR/MR displays may extract viewport regions based on the user's head position/orientation, vertical or horizontal FOV supported by the device, and so forth.
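As a rough illustration (not from the disclosure), a check of whether a point lies within a viewport region derived from a viewpoint direction and a field of view could look like the following sketch; the function and parameter names are assumptions, and the symmetric angular test is a simplification.

```python
import math

def in_viewport(point, eye, view_dir, h_fov_deg, v_fov_deg):
    """Return True if `point` falls inside a viewport centered on `view_dir`.

    `eye` is the viewer position, `view_dir` is a unit vector toward the
    viewpoint (viewport center), and the FOV angles bound the region.
    This is a simplified angular test, not the actual specification.
    """
    dx, dy, dz = (p - e for p, e in zip(point, eye))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist == 0:
        return True
    # Angle between the viewing direction and the direction to the point.
    dot = (dx * view_dir[0] + dy * view_dir[1] + dz * view_dir[2]) / dist
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    # Accept points within the larger half-FOV (a coarse, symmetric bound).
    return angle <= max(h_fov_deg, v_fov_deg) / 2

print(in_viewport((0, 0, 5), (0, 0, 0), (0, 0, 1), 90, 60))  # True
```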
In some embodiments, the feedback information may be transmitted not only to the transmitting side (transmitting apparatus) but also consumed at the receiving side (receiving apparatus). That is, the decoding process, rendering process, and the like of the receiving side (receiving apparatus) may be performed using the feedback information.
For example, the receiving device 20 may use the head orientation information and/or viewport information to preferentially decode and render only point cloud video of the region that the user is currently viewing. In addition, the receiving unit 21 may receive all point cloud data or point cloud data indicated by orientation information and/or viewport information based on the orientation information and/or viewport information. Also, the decapsulation processing unit 22 may decapsulate all the point cloud data or decapsulate the point cloud data indicated by the orientation information and/or viewport information based on the orientation information and/or viewport information. Also, the decoding unit 23 may decode all the point cloud data, or decode the point cloud data indicated by the orientation information and/or the viewport information based on the orientation information and/or the viewport information.
Overview of Point cloud encoding apparatus
Fig. 4 illustrates an example of a point cloud encoding apparatus 400 according to an embodiment of the present disclosure. The point cloud encoding apparatus 400 of fig. 4 may correspond in configuration and function to the encoding unit 12 of fig. 1.
As shown in fig. 4, the point cloud encoding apparatus 400 may include: a coordinate system transformation unit 405, a geometry quantization unit 410, an octree analysis unit 415, an approximation unit 420, a geometry encoding unit 425, a reconstruction unit 430, a color transformation unit 435, an attribute transformation unit 440, a RAHT transformation unit 445, an LOD generation unit 450, a lifting unit 455, an attribute quantization unit 460, and/or an attribute encoding unit 465.
The point cloud data acquired by the acquisition unit 11 may undergo a process of adjusting the quality (e.g., lossless, lossy, near lossless) of the point cloud content according to the network situation or application. In addition, the individual points of the acquired point cloud content may be transmitted without loss, but in this case, real-time streaming is impossible because the size of the point cloud content is large. Therefore, in order to smoothly provide the point cloud content, a process of reconstructing the point cloud content according to the maximum target bit rate is required.
The process of adjusting the quality of the point cloud content may be a process of reconstructing and encoding position information (position information included in the geometric information) or color information (color information included in the attribute information) of the point. The process of reconstructing and encoding the position information of the points may be referred to as geometric encoding, and the process of reconstructing and encoding the attribute information associated with the respective points may be referred to as attribute encoding.
The geometric encoding may include: geometric quantization process, voxelization process, octree analysis process, approximation process, geometric encoding process, and/or coordinate system transformation process. Moreover, the geometric coding may also include a geometric reconstruction process. The attribute encoding may include: color transform process, attribute transform process, prediction transform process, lifting transform process, RAHT transform process, attribute quantization process, attribute encoding process, etc.
Geometric coding
The coordinate system transformation process may correspond to a process of transforming a coordinate system of the point positions. Therefore, the coordinate system transformation process may be referred to as 'transforming coordinates'. The coordinate system transformation process may be performed by the coordinate system transformation unit 405. For example, the coordinate system transformation unit 405 may transform the position of a point from a global space coordinate system into position information in a three-dimensional space (e.g., a three-dimensional space expressed in coordinate systems of an X-axis, a Y-axis, and a Z-axis). The positional information in the 3D space according to the embodiment may be referred to as 'geometric information'.
The geometric quantization process may correspond to a process of quantizing position information of points, and may be performed by the geometric quantization unit 410. For example, the geometric-quantization unit 410 may find the position information having the smallest (x, y, z) value among the position information of the points, and subtract the position information having the smallest (x, y, z) position from the position information of each point. In addition, the geometric quantization unit 410 may multiply the subtracted value by a preset quantization scale value (scale value) and then adjust (decrease or increase) the result to a value close to an integer, thereby performing the quantization process.
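A minimal sketch of the quantization step just described (subtract the minimum position, multiply by a quantization scale value, and round to integers), assuming NumPy arrays of point positions; this is an illustration, not the normative process.

```python
import numpy as np

def quantize_positions(positions: np.ndarray, scale: float) -> np.ndarray:
    """Quantize point positions as described above.

    positions: (N, 3) array of x, y, z values.
    scale: preset quantization scale value.
    """
    min_xyz = positions.min(axis=0)                      # position with the smallest (x, y, z)
    shifted = positions - min_xyz                        # subtract the minimum position
    return np.round(shifted * scale).astype(np.int64)    # scale and round to integers

points = np.array([[1.2, 3.4, 0.5], [2.7, 0.1, 4.9]])
print(quantize_positions(points, scale=10.0))
```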
The voxelization process may correspond to a process of matching geometric information quantized by the quantization process to a specific voxel existing in the 3D space. The voxelization process may also be performed by the geometric quantization unit 410. The geometric quantization unit 410 may perform octree-based voxelization based on the position information of the points so as to reconstruct the respective points to which the quantization process is applied.
Fig. 5 shows an example of voxels according to an embodiment of the disclosure. A voxel may refer to a space for storing information about points existing in 3D, similar to a pixel, which is the minimum unit having information about a 2D image/video. The term voxel is a compound word combining volume and pixel. As illustrated in fig. 5, a voxel refers to a three-dimensional cubic space formed by dividing a three-dimensional space of 2^depth × 2^depth × 2^depth into units (unit = 1.0) based on the respective axes (x-axis, y-axis, and z-axis). A voxel may estimate spatial coordinates from its positional relationship with a voxel group and, like a pixel, may have color or reflectance information.
There may not be (match) only one point in a voxel. That is, information related to a plurality of points may exist in one voxel. Alternatively, information related to a plurality of points included in one voxel may be integrated into one point information. Such adjustment may be performed selectively. When one voxel is integrated and expressed as one point information, the position value of the center point of the voxel may be set based on the position values of points existing in the voxel, and an attribute transformation process related thereto needs to be performed. For example, the attribute transformation process may be adjusted to the position value of a point included in a voxel or the center point of the voxel, as well as the average value of the color or reflectivity of neighboring points within a particular radius.
Octree analysis unit 415 may use octree to efficiently manage regions/locations of voxels. Fig. 6 (a) shows an example of an octree according to an embodiment of the present disclosure. In order to efficiently manage the space of the two-dimensional image, if the entire space is divided based on the x-axis and the y-axis, four spaces are created, and when each of the four spaces is divided based on the x-axis and the y-axis, four spaces are created for each small space. The region may be partitioned until the leaf nodes become pixels, and a quadtree (quadtree) may be used as a data structure that is efficiently managed according to the size and location of the region.
As such, the present disclosure may apply the same method to efficiently manage a 3D space according to the position and size of the space. However, as shown in the middle of fig. 6 (a), since the z-axis is added, 8 spaces can be created when the three-dimensional space is divided based on the x-axis, the y-axis, and the z-axis. In addition, as shown on the right side of fig. 6 (a), when each of the 8 spaces is divided again based on the x-axis, the y-axis, and the z-axis, 8 spaces may be created for each small space.
The octree analysis unit 415 may divide the region until leaf nodes become voxels, and may use an octree data structure capable of managing eight child node regions for efficient management according to the size and location of the region.
Since the voxels reflecting the point positions are managed using the octree, the total volume of the octree should be set from (0, 0, 0) to (2^d, 2^d, 2^d). 2^d is set to a value constituting the smallest bounding box surrounding all points of the point cloud, and d is the depth of the octree. The equation for calculating the d value may be as in the following Equation 1, in which $(x_n^{int}, y_n^{int}, z_n^{int})$ is the position value of the point to which the quantization process is applied.
[Equation 1]
$$d = \left\lceil \log_2\!\left(\max\!\left(x_n^{int}, y_n^{int}, z_n^{int},\; n = 1, \dots, N\right) + 1\right) \right\rceil$$
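Under the reconstruction of Equation 1 above, the octree depth can be computed from the quantized positions as in this illustrative sketch.

```python
import math

def octree_depth(quantized_positions) -> int:
    """Compute depth d per Equation 1: d = ceil(log2(max coordinate + 1))."""
    max_coord = max(max(p) for p in quantized_positions)
    return math.ceil(math.log2(max_coord + 1))

print(octree_depth([(12, 100, 4), (63, 7, 130)]))  # 8, since 2^8 = 256 > 131
```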
The octree may be expressed as an occupancy code, and (b) of fig. 6 shows an example of an occupancy code according to an embodiment of the present disclosure. The octree analysis unit 415 may express the occupancy of a node as 1 when at least one point is included in the node, and as 0 when no point is included.
Each node is represented by an 8-bit bitmap indicating the occupancy of its 8 child nodes. For example, since the occupancy code of the node corresponding to the second depth (depth 1) in fig. 6 (b) is 00100001, the spaces (voxels or regions) corresponding to the third and eighth child nodes may each include at least one point. Also, since the occupancy code of the child nodes (leaf nodes) of the third node is 10000111, the spaces corresponding to the first, sixth, seventh, and eighth leaf nodes among those leaf nodes may include at least one point. In addition, since the occupancy code of the child nodes (leaf nodes) of the eighth node is 01001111, the spaces corresponding to the second, fifth, sixth, seventh, and eighth leaf nodes among those leaf nodes may include at least one point.
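A simplified sketch of deriving the 8-bit occupancy bitmap of one node from the quantized point positions it contains; the child-bit ordering chosen here is illustrative only, not the normative one.

```python
def occupancy_code(points, node_origin, node_size):
    """Build an 8-bit occupancy code for one octree node.

    Each child octant gets one bit, set to 1 if at least one point
    falls inside it. Bit ordering is illustrative only.
    """
    half = node_size // 2
    code = 0
    for x, y, z in points:
        # Determine which octant (child index 0..7) the point falls into.
        child = (((x - node_origin[0]) >= half) << 2) \
              | (((y - node_origin[1]) >= half) << 1) \
              | ((z - node_origin[2]) >= half)
        code |= 1 << child
    return format(code, '08b')

print(occupancy_code([(1, 1, 1), (6, 6, 6)], (0, 0, 0), 8))  # two octants occupied
```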
The geometric encoding process may correspond to a process of performing entropy encoding on the occupied code. The geometry encoding process may be performed by the geometry encoding unit 425. The geometric coding unit 425 may perform entropy coding on the occupied codes. The generated occupied codes may be encoded immediately or may be encoded through an intra/inter (intra/inter) encoding process to improve compression efficiency. The receiving device 20 may reconstruct the octree by occupying the codes.
On the other hand, where a particular region has no or few points, it may be inefficient to voxel all regions. That is, since there are few points in a particular region, it may not be necessary to construct the entire octree. For this case, an early termination method may be required.
For a specific region (a specific node that does not correspond to a leaf node), instead of dividing the node into 8 sub-nodes, the point cloud encoding apparatus 400 may directly transmit the positions of the points in the region, or may reconstruct the positions of the points within the region on a voxel basis using a surface model.
The mode for directly transmitting the positions of the respective points for the specific node may be a direct mode. The point cloud encoding apparatus 400 may check whether a condition for enabling the direct mode is satisfied.
The conditions for enabling the direct mode are: 1) the option of using the direct mode should be enabled, 2) the specific node should not correspond to a leaf node, 3) the number of points within the specific node should be below a threshold, and 4) the total number of points to be directly transmitted should not exceed a limit value.
When all of the above conditions are satisfied, the point cloud encoding apparatus 400 may directly entropy-encode and transmit the position values of the points in the specific node through the geometry encoding unit 425.
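A sketch, under hypothetical parameter names, of the direct-mode eligibility check listed above (direct mode enabled, the node is not a leaf, the point count is below a threshold, and the overall direct-coded point budget is not exceeded).

```python
def direct_mode_eligible(enabled: bool, is_leaf: bool, num_points_in_node: int,
                         threshold: int, total_direct_points: int, limit: int) -> bool:
    """Check the four direct-mode conditions described above (names are illustrative)."""
    return (enabled
            and not is_leaf
            and num_points_in_node < threshold
            and total_direct_points + num_points_in_node <= limit)

print(direct_mode_eligible(True, False, 2, threshold=4, total_direct_points=10, limit=100))  # True
```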
The mode of reconstructing the positions of points in a specific region on a voxel basis using a surface model may be the triangle soup (trisoup) mode. The trisoup mode may be performed by the approximation unit 420. The approximation unit 420 may determine a specific level of the octree and, from the determined level, reconstruct the positions of points in the node region on a voxel basis using the surface model.
The point cloud encoding apparatus 400 may selectively apply the trisoup mode. Specifically, when the trisoup mode is applied, the point cloud encoding apparatus 400 may specify a level (specific level) at which the trisoup mode is applied. For example, when the specified level is equal to the depth (d) of the octree, the trisoup mode may not be applied. That is, the specified level should be less than the octree depth value.
The three-dimensional cube region of the specified level of nodes is referred to as a block, and a block may include one or more voxels. The blocks or voxels may correspond to bricks (brick). Each block may have 12 edges and approximation unit 420 may check whether each edge is adjacent to an occupied voxel with a point. Each edge may be adjacent to a plurality of occupied voxels. The specific positions of the edges adjacent to the voxel are called vertices, and when a plurality of occupied voxels are adjacent to one edge, the approximation unit 420 may determine the average position of these positions as vertices.
When the vertex exists, the point cloud encoding apparatus 400 may entropy encode the start point (x, y, z) of the edge, the direction vector (Δx, Δy, Δz) of the edge, and the position value of the vertex (relative position value within the edge) by the geometric encoding unit 425.
The geometry reconstruction process may correspond to a process of generating a reconstructed geometry by reconstructing an octree and/or an approximated octree. The geometric reconstruction process may be performed by the reconstruction unit 430. The reconstruction unit 430 may perform the geometric reconstruction process by triangle reconstruction, upsampling, voxelization, etc.
When the trisoup mode is applied in the approximation unit 420, the reconstruction unit 430 may reconstruct triangles based on the start point of each edge, the direction vector of the edge, and the position values of the vertices. To this end, the reconstruction unit 430 may calculate the centroid value $(\mu_x, \mu_y, \mu_z)$ of the vertices as shown in Equation 2 below, subtract the centroid value from each vertex value as shown in Equation 3 below to derive the subtracted values $(\bar{x}_i, \bar{y}_i, \bar{z}_i)$, and then derive the values $(\sigma_x^2, \sigma_y^2, \sigma_z^2)$ obtained by adding all squares of the subtracted values, as shown in Equation 4 below.
[Equation 2]
$$\begin{bmatrix} \mu_x \\ \mu_y \\ \mu_z \end{bmatrix} = \frac{1}{n} \sum_{i=1}^{n} \begin{bmatrix} x_i \\ y_i \\ z_i \end{bmatrix}$$
[Equation 3]
$$\begin{bmatrix} \bar{x}_i \\ \bar{y}_i \\ \bar{z}_i \end{bmatrix} = \begin{bmatrix} x_i \\ y_i \\ z_i \end{bmatrix} - \begin{bmatrix} \mu_x \\ \mu_y \\ \mu_z \end{bmatrix}$$
[Equation 4]
$$\begin{bmatrix} \sigma_x^2 \\ \sigma_y^2 \\ \sigma_z^2 \end{bmatrix} = \sum_{i=1}^{n} \begin{bmatrix} \bar{x}_i^2 \\ \bar{y}_i^2 \\ \bar{z}_i^2 \end{bmatrix}$$
Also, the reconstruction unit 430 may obtain a minimum value of the added values, and may perform a projection process along an axis having the minimum value.
For example, when the x element is smallest, the reconstruction unit 430 may project the respective vertices along the x-axis based on the center of the block, i.e., project them onto the (y, z) plane. When the value derived by projecting a vertex onto the (y, z) plane is (a_i, b_i), the reconstruction unit 430 may obtain a θ value through atan2(b_i, a_i) and align the vertices based on the θ values.
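An illustrative sketch of the projection and alignment just described: drop the axis with the smallest value (here assumed to be x, as in the example above), project the vertices onto the remaining (y, z) plane relative to the block center, and sort them by the angle obtained with atan2.

```python
import math

def sort_vertices_by_angle(vertices, block_center):
    """Project vertices onto the (y, z) plane (x dropped, as in the example above)
    and sort them by angle around the block center."""
    def angle(v):
        a = v[1] - block_center[1]   # a_i: projected first coordinate
        b = v[2] - block_center[2]   # b_i: projected second coordinate
        return math.atan2(b, a)      # theta value used for alignment
    return sorted(vertices, key=angle)

verts = [(0, 1, 0), (0, 0, 1), (0, -1, 0), (0, 0, -1)]
print(sort_vertices_by_angle(verts, (0, 0, 0)))
```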
Triangles may then be generated by combining the aligned vertices according to their number, as shown in Table 1 below. For example, if there are 4 vertices (n = 4), two triangles (1, 2, 3) and (3, 4, 1) may be formed. The first triangle (1, 2, 3) consists of the first, second, and third of the aligned vertices, and the second triangle (3, 4, 1) consists of the third, fourth, and first vertices.
TABLE 1
Triangles formed from vertices ordered 1, …, n
The reconstruction unit 430 may perform an upsampling process for voxelization by adding points in the middle along the sides of the triangle. The reconstruction unit 430 may generate the additional points based on the upsampling factor and the width of the block. These points may be referred to as refinement vertices. The reconstruction unit 430 may voxel the refined vertices and the point cloud encoding device 400 may perform attribute encoding based on the voxel-ized position values.
In some implementations, the geometric coding unit 425 may increase the compression efficiency by applying context adaptive arithmetic coding. The geometric coding unit 425 may directly entropy-code the occupied code using arithmetic codes. In some implementations, the geometric coding unit 425 adaptively performs coding (intra-coding) based on occupancy of neighboring nodes or adaptively performs coding (inter-coding) based on occupancy codes of previous frames. Here, a frame may refer to a set of point cloud data that is generated simultaneously. Intra-coding and inter-coding are optional processes and may therefore be omitted.
The compression efficiency may vary according to how many neighbor nodes are referenced; using more bits makes the encoding process more complicated, but biasing the statistics to one side can improve compression efficiency. For example, with a 3-bit context, encoding may be performed by dividing into 2^3 = 8 cases. Since the number of cases into which the coding is divided affects implementation complexity, an appropriate trade-off between complexity level and compression efficiency must be found.
In the case of intra-coding, the geometry-coding unit 425 may first use occupancy of neighbor nodes to obtain values of neighbor modes. Fig. 7 shows an example of a neighbor mode.
Fig. 7 (a) shows a cube corresponding to a node (a centrally located cube) and six cubes sharing at least one surface with the cubes (neighbor nodes). The nodes shown in the graph are nodes of the same depth. The numbers shown in the figures represent weights (1, 2,4, 8, 16, 32, etc.) associated with six nodes, respectively. The respective weights are given sequentially according to the positions of the neighboring nodes.
Fig. 7 (b) shows neighbor mode values. The neighbor mode value is the sum of values multiplied by the weights of occupied neighbor nodes (neighbor nodes with points). Thus, the neighbor mode value may have a value ranging from 0 to 63. When the neighbor mode value is 0, it indicates that there is no node (occupied node) having a point among neighbor nodes of the node. When the neighbor mode value is 63, it indicates that the neighbor nodes are all occupied nodes. In fig. 7 (b), since the neighbor nodes to which weights 1,2, 4, and 8 are assigned are occupied nodes, the neighbor mode value is 15, which is the sum of 1,2, 4, and 8.
The geometric coding unit 425 may perform coding according to neighbor mode values. For example, when the neighbor mode value is 63, the geometric coding unit 425 may perform 64 types of coding. In some implementations, the geometric coding unit 425 may reduce coding complexity by changing neighbor mode values, and the change in neighbor mode values may be performed, for example, based on a table changing 64 to 10 or 6.
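As a simple illustration of the neighbor occupancy pattern described above, the following C++ sketch (an informal example; names are illustrative and not taken from any reference implementation) computes the neighbor mode value as the weighted sum of the occupied face-adjacent neighbors and reproduces the value 15 of fig. 7 (b).

#include <array>
#include <cstdint>
#include <iostream>

// The neighbor mode value is the sum of the weights (1, 2, 4, 8, 16, 32) of
// the six face-adjacent neighbor nodes that are occupied.
uint32_t NeighborModeValue(const std::array<bool, 6>& occupied) {
    static const uint32_t kWeights[6] = {1, 2, 4, 8, 16, 32};
    uint32_t mode = 0;
    for (int i = 0; i < 6; ++i) {
        if (occupied[i]) mode += kWeights[i];
    }
    return mode;  // ranges from 0 (no occupied neighbor) to 63 (all occupied)
}

int main() {
    // Neighbors with weights 1, 2, 4 and 8 are occupied, as in fig. 7 (b).
    std::array<bool, 6> occupied = {true, true, true, true, false, false};
    std::cout << NeighborModeValue(occupied) << '\n';  // prints 15
}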
Attribute encoding
Attribute encoding may correspond to a process of encoding attribute information based on the reconstructed geometry and the geometry (source geometry) before the coordinate system transformation. Since the properties may depend on geometry, the reconstructed geometry may be used for property encoding.
As described above, the attributes may include color, reflectivity, and the like. The same attribute encoding method may be applied to information or parameters included in the attribute. The color has three elements, the reflectivity has one element, and each element can be treated independently.
The attribute encoding may include: color transform process, attribute transform process, prediction transform process, lifting transform process, RAHT transform process, attribute quantization process, attribute encoding process, etc. The predictive transform process, the lifting transform process, and the RAHT transform process may be selectively used, or a combination of one or more of these processes may be used.
The color transformation process may correspond to a process of transforming a format of a color in an attribute into another format. The color conversion process may be performed by the color conversion unit 435. That is, the color conversion unit 435 may convert the colors in the attributes. For example, the color conversion unit 435 may perform an encoding operation for converting colors in the attribute from RGB to YCbCr. In some implementations, the operation of the color conversion unit 435 (i.e., the color conversion process) may optionally be applied according to the color values included in the attributes.
As described above, when one or more points exist in one voxel, the positions of those points are set to the center of the voxel so that they are merged and represented as a single piece of point information for the voxel. Thus, a process of transforming the attribute values associated with the points may be required. Also, the attribute transformation process may be performed even when the trisoup mode is used.
The attribute transformation process may correspond to a process of transforming an attribute based on a location where geometric encoding was not performed and/or a reconstructed geometry. For example, the attribute transformation process may correspond to a process of transforming an attribute of a point having a position based on the position of the point included in a voxel. The attribute transformation process may be performed by the attribute transformation unit 440.
The attribute transformation unit 440 may calculate a center position value of the voxel and an average value of attribute values of neighbor points within a specific radius. Alternatively, the attribute transformation unit 440 may apply weights to the attribute values according to the distances from the center position, and calculate an average value of the attribute values to which the weights are applied. In this case, each voxel has a location and a calculated attribute value.
K-D trees or Morton codes may be utilized when searching for neighbor points that exist within a particular location or radius. The K-D tree is a binary search tree and supports a data structure that can manage points based on their locations so that a nearest neighbor search (NNS) can be performed quickly. The Morton code may be generated by interleaving the bits of the 3D position information (x, y, z) of all points. For example, when (x, y, z) is (5, 9, 1) and each value is expressed in binary, it becomes (0101, 1001, 0001), and when the bits are interleaved per bit index in the order of z, y, and x, the result is 010001000111, which is 1095. That is, 1095 becomes the Morton code value of (5, 9, 1). The points are sorted based on their Morton codes, and a nearest neighbor search (NNS) may be performed through a depth-first traversal procedure.
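The Morton code computation above can be illustrated with the following C++ sketch (an informal example; it interleaves only 4 bits per component, which is sufficient for the (5, 9, 1) example, and the function name is illustrative).

#include <cstdint>
#include <iostream>

// Interleave the bits of (x, y, z) so that, for each bit index, the z bit,
// y bit and x bit appear in that order (z is the most significant of the
// three). Only 4 bits per component are used here.
uint32_t MortonCode4(uint32_t x, uint32_t y, uint32_t z) {
    uint32_t code = 0;
    for (int bit = 3; bit >= 0; --bit) {
        code = (code << 1) | ((z >> bit) & 1u);
        code = (code << 1) | ((y >> bit) & 1u);
        code = (code << 1) | ((x >> bit) & 1u);
    }
    return code;
}

int main() {
    // x=5 (0101), y=9 (1001), z=1 (0001) -> 010001000111b = 1095
    std::cout << MortonCode4(5, 9, 1) << '\n';  // prints 1095
}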
After the attribute transformation process, there may be a case where Nearest Neighbor Search (NNS) is required even in another transformation process for attribute encoding. In this case, a K-D tree or Morton code may be utilized.
The predictive conversion process may correspond to a process of predicting an attribute value of a current point (a point corresponding to a prediction target) based on attribute values of one or more points (neighbor points) adjacent to the current point. The predictive conversion process may be performed by a level-of-detail (LOD) generation unit 450.
The predictive transform is a method of applying the LOD transform technique, and the LOD generating unit 450 may calculate and set LOD values of respective points based on LOD distance values of the respective points.
Fig. 8 shows an example of a point configuration according to LOD distance values. In fig. 8, based on the direction of the arrow, the first graph represents the original point cloud content, the second graph represents the distribution of points of the lowest LOD, and the seventh graph represents the distribution of points of the highest LOD. As illustrated in fig. 8, the points of the lowest LOD may be sparsely distributed, while the points of the highest LOD may be densely distributed. That is, as the LOD increases, the spacing (or distance) between points may become shorter.
Each point present in the point cloud may be separated for each LOD, and the configuration of points for each LOD may include points belonging to LODs below the LOD value. For example, a configuration of points with LOD level 2 may include all points belonging to LOD level 1 and LOD level 2.
Fig. 9 shows an example of the dot configuration of each LOD. The upper diagram of fig. 9 shows examples (P0 to P9) of points in the point cloud content distributed in the three-dimensional space. The original order of fig. 9 indicates the order of the points P0 to P9 before LOD generation, and the LOD-based order of fig. 9 indicates the order of the points generated according to LOD.
As illustrated in fig. 9, the points may be rearranged for each LOD, and a high LOD may include points belonging to a low LOD. For example, LOD0 may include P0, P5, P4, and P2, and LOD1 may include the points of LOD0 plus P1, P6, and P3. Furthermore, LOD2 may include the points of LOD0, the points of LOD1, and P9, P8, and P7.
The LOD generation unit 450 may generate a predictor for each point for the predictive transform. Thus, when there are N points, N predictors can be generated. The predictor may calculate and set a weight value (= 1/distance) based on the LOD value of each point, the index information of the neighbor points, and the distance values to the neighbor points. Here, a neighbor point may be a point existing within the distance from the current point set for each LOD.
In addition, the predictor may multiply the attribute value of the neighbor point by a 'set weight value', and set a value obtained by averaging the attribute values multiplied by the weight value as a predicted attribute value of the current point. The attribute quantization process may be performed on a residual attribute value obtained by subtracting a predicted attribute value of the current point from an attribute value of the current point.
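The predictive transform described above can be illustrated with the following C++ sketch (an informal example; it assumes the weighted average is normalized by the sum of the weights and that distances are nonzero; the struct and function names are illustrative).

#include <iostream>
#include <vector>

// The predicted attribute of the current point is the average of the
// neighbor attributes weighted by 1/distance; only the residual
// (attribute - prediction) is passed on to attribute quantization.
struct Neighbor {
    double distance;   // distance from the current point (assumed > 0)
    double attribute;  // e.g. reflectance, or one color channel
};

double PredictAttribute(const std::vector<Neighbor>& neighbors) {
    double weightedSum = 0.0, weightSum = 0.0;
    for (const Neighbor& n : neighbors) {
        double w = 1.0 / n.distance;  // weight = 1/distance, as set in the predictor
        weightedSum += w * n.attribute;
        weightSum += w;
    }
    return weightedSum / weightSum;  // weighted average of neighbor attributes
}

int main() {
    std::vector<Neighbor> neighbors = {{1.0, 100.0}, {2.0, 80.0}, {4.0, 60.0}};
    double current = 90.0;
    double predicted = PredictAttribute(neighbors);
    double residual = current - predicted;  // residual passed to attribute quantization
    std::cout << predicted << ' ' << residual << '\n';
}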
The lifting transform process may correspond to a process of reconstructing points into a set of levels of detail through the LOD generation process, just as the predictive transform process. The lifting transformation process may be performed by the lifting unit 455. The lifting transform process may further include a process of generating a predictor for each point, a process of setting the calculated LOD in the predictor, a process of registering neighbor points, and a process of setting weights according to distances between the current point and the neighbor points.
The difference between the lifting transform process and the predictive transform process is that the lifting transform process may be a method of cumulatively applying weights to attribute values. The method of cumulatively applying the weight to the attribute value may be as follows.
1) An array QW (quantization weight) for storing weight values of respective points may exist alone. The initial value of all elements of the QW is 1.0. The values obtained by multiplying the weight of the predictor of the current point by the QW value of the predictor index of the neighbor node (neighbor point) registered in the predictor are added.
2) To calculate the predicted attribute value, a value obtained by multiplying the attribute value of the point by the weight is subtracted from the existing attribute value. This process may be referred to as a lift prediction process.
3) Temporary arrays, called 'update weights (updateweight)' and 'update', are generated and the elements in the array are initialized to 0.
4) For all predictors, the calculated weights are also multiplied by the weights stored in the QW to derive new weights that are cumulatively added to the updated weights as indexes of the neighbor nodes, and a value obtained by multiplying the new weights by attribute values of the indexes of the neighbor nodes is cumulatively added to the update.
5) For all predictors, the updated attribute value is divided by the weight of the update weight of the predictor index and the result is added to the existing attribute value. This process may be referred to as a promotion update process.
6) For all predictors, the attribute values updated by the boost update process are multiplied by weights (stored in QWs) updated by the boost prediction process, the result (multiplied values) is quantized, and then the quantized values are entropy encoded.
The RAHT transform process may correspond to a method of predicting attribute information of higher-level nodes using attribute information associated with lower-level nodes of the octree. That is, the RAHT transform process may correspond to an attribute-information intra coding method based on octree backward scanning. The RAHT transform process may be performed by the RAHT transform unit 445.
The RAHT transform unit 445 may scan the entire region from the voxels and perform the RAHT transform process up to the root node while merging (summing) the voxels into larger blocks at each step. Since the RAHT transform unit 445 performs the RAHT transform process only on occupied nodes, in the case of an unoccupied empty node, the RAHT transform process may be performed on the node of the level immediately above it.
When the average attribute value of the voxels at level l is denoted $g_{l,x,y,z}$, $g_{l-1,x,y,z}$ can be calculated from $g_{l,2x,y,z}$ and $g_{l,2x+1,y,z}$. When the weights of $g_{l,2x,y,z}$ and $g_{l,2x+1,y,z}$ are $w_1 = w_{l,2x,y,z}$ and $w_2 = w_{l,2x+1,y,z}$, respectively, the RAHT transform matrix shown in the following equation 5 can be obtained.
[Equation 5]
$\begin{bmatrix} g_{l-1,x,y,z} \\ h_{l-1,x,y,z} \end{bmatrix} = T_{w_1 w_2} \begin{bmatrix} g_{l,2x,y,z} \\ g_{l,2x+1,y,z} \end{bmatrix}, \quad T_{w_1 w_2} = \frac{1}{\sqrt{w_1+w_2}} \begin{bmatrix} \sqrt{w_1} & \sqrt{w_2} \\ -\sqrt{w_2} & \sqrt{w_1} \end{bmatrix}$
In equation 5, $g_{l-1,x,y,z}$ is a low-pass value and can be used in the merging process at the next higher level. $h_{l-1,x,y,z}$ is a high-pass coefficient, and the high-pass coefficients in the respective steps may be quantized and entropy-encoded. The weights may be calculated by $w_{l-1,x,y,z} = w_{l,2x,y,z} + w_{l,2x+1,y,z}$. The root node may be generated from the last $g_{1,0,0,0}$ and $g_{1,0,0,1}$, as shown in the following equation 6.
[Equation 6]
$\begin{bmatrix} gDC \\ h_{0,0,0,0} \end{bmatrix} = T_{w_{1000} w_{1001}} \begin{bmatrix} g_{1,0,0,0} \\ g_{1,0,0,1} \end{bmatrix}$
In equation 6, the gDC value may also be quantized and entropy-encoded like the high-pass coefficients.
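For illustration, the following C++ sketch (an informal example, not taken from any reference implementation) applies the single merging step of equation 5 to two occupied sibling nodes: two low-pass values with their weights are combined into one low-pass value for the next higher level and one high-pass coefficient to be quantized and entropy-encoded.

#include <cmath>
#include <iostream>

// One RAHT merging step following equation 5.
struct RahtPair {
    double lowPass;   // g_{l-1,x,y,z}, kept for the next (higher) level
    double highPass;  // h_{l-1,x,y,z}, quantized and entropy-coded
    double weight;    // w_{l-1,x,y,z} = w1 + w2
};

RahtPair RahtMerge(double g1, double w1, double g2, double w2) {
    double a = std::sqrt(w1), b = std::sqrt(w2);
    double norm = std::sqrt(w1 + w2);
    RahtPair out;
    out.lowPass  = (a * g1 + b * g2) / norm;   // first row of T_{w1 w2}
    out.highPass = (-b * g1 + a * g2) / norm;  // second row of T_{w1 w2}
    out.weight   = w1 + w2;
    return out;
}

int main() {
    // Two sibling voxels with equal weights: the low-pass value is a scaled
    // average and the high-pass value captures their difference.
    RahtPair p = RahtMerge(10.0, 1.0, 14.0, 1.0);
    std::cout << p.lowPass << ' ' << p.highPass << ' ' << p.weight << '\n';
}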
The attribute quantization process may correspond to a process of quantizing the attributes output from the RAHT transform unit 445, the LOD generation unit 450, and/or the lifting unit 455. The attribute quantization process may be performed by the attribute quantization unit 460. The attribute encoding process may correspond to a process of encoding the quantized attributes and outputting an attribute bitstream. The attribute encoding process may be performed by the attribute encoding unit 465.
For example, when the LOD generating unit 450 calculates the predicted attribute value of the current point, the attribute quantizing unit 460 may quantize the residual attribute value obtained by subtracting the predicted attribute value of the current point from the attribute value of the current point. Table 2 shows an example of the attribute quantization process of the present disclosure.
TABLE 2
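Since the contents of Table 2 are not reproduced here, the following C++ sketch shows only a generic uniform quantizer with rounding as an informal illustration of attribute quantization of the residual; the exact rounding rule of Table 2 may differ, and the names used are illustrative.

#include <cmath>
#include <cstdint>
#include <iostream>

// Minimal sketch of quantizing a residual attribute value with a
// quantization step qs (plain rounding; illustrative only).
int64_t QuantizeResidual(double residual, double qs) {
    return static_cast<int64_t>(std::llround(residual / qs));
}

int main() {
    double residual = 17.0;  // attribute value minus predicted attribute value
    double qs = 4.0;         // quantization step
    std::cout << QuantizeResidual(residual, qs) << '\n';  // prints 4
}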
The attribute encoding unit 465 may directly entropy-encode the attribute value (unquantized attribute value) of the current point if there are no neighbor points in the predictors of the respective points. In contrast, when there is a neighbor point in the predictor of the current point, the attribute encoding unit 465 may entropy encode the quantized residual attribute value.
As another example, when a value obtained by multiplying the attribute value updated through the lifting update process by the weight (stored in QW) updated through the lifting prediction process is output from the lifting unit 455, the attribute quantization unit 460 may quantize the result (the multiplied value), and the attribute encoding unit 465 may entropy-encode the quantized value.
Overview of Point cloud decoding apparatus
Fig. 10 illustrates an example of a point cloud decoding apparatus 1000 according to an embodiment of the present disclosure. The point cloud decoding apparatus 1000 of fig. 10 may correspond in configuration and function to the decoding unit 23 of fig. 1.
The point cloud decoding apparatus 1000 may perform a decoding process based on data (bit stream) transmitted from the transmitting device 10. The decoding process may include a process of reconstructing (decoding) the point cloud video by performing an operation corresponding to the above-described encoding operation on the bitstream.
As illustrated in fig. 10, the decoding process may include a geometry decoding process and an attribute decoding process. The geometry decoding process may be performed by the geometry decoding unit 1010, and the attribute decoding process may be performed by the attribute decoding unit 1020. That is, the point cloud decoding apparatus 1000 may include a geometry decoding unit 1010 and an attribute decoding unit 1020.
The geometry decoding unit 1010 may reconstruct geometry from the geometry bitstream, and the attribute decoder 1020 may reconstruct attributes based on the reconstructed geometry and the attribute bitstream. Also, the point cloud decoding apparatus 1000 may reconstruct a three-dimensional point cloud video (point cloud data) based on the position information according to the reconstructed geometry and the attribute information according to the reconstructed attribute.
Fig. 11 illustrates a specific example of a point cloud decoding apparatus 1100 according to another embodiment of the present disclosure. As illustrated in fig. 11, the point cloud decoding apparatus 1100 includes: a geometry decoding unit 1105, an octree synthesis unit 1110, an approximate synthesis unit 1115, a geometry reconstruction unit 1120, a coordinate system inverse transformation unit 1125, an attribute decoding unit 1130, an attribute dequantization unit 1135, a RAHT transform unit 1150, an LOD generation unit 1140, an inverse lifting unit 1145, and/or a color inverse transform unit 1155.
The geometry decoding unit 1105, the octree synthesis unit 1110, the approximate synthesis unit 1115, the geometry reconstruction unit 1120, and the coordinate system inverse transformation unit 1125 may perform geometry decoding. The geometry decoding may be performed as an inverse of the geometry encoding described with reference to figs. 1 to 9. The geometry decoding may include direct coding and trisoup geometry decoding. Direct coding and trisoup geometry decoding may be selectively applied.
The geometry decoding unit 1105 may decode the received geometry bitstream based on arithmetic coding. The operation of the geometry decoding unit 1105 may correspond to an inverse of the operation performed by the geometry coding unit 425.
The octree synthesis unit 1110 may generate octrees by obtaining an occupied code (or information on geometry obtained as a result of decoding) from the decoded geometry bitstream. The operation of the octree synthesis unit 1110 may correspond to an inverse of the operation performed by the octree analysis unit 415.
When trisoup geometry encoding is applied, the approximate synthesis unit 1115 may synthesize a surface based on the decoded geometry and/or the generated octree.
The geometry reconstruction unit 1120 may reconstruct the geometry based on the surface and the decoded geometry. When direct coding is applied, the geometry reconstruction unit 1120 may directly import and add the position information of the points to which direct coding was applied. In addition, when trisoup geometry encoding is applied, the geometry reconstruction unit 1120 may reconstruct the geometry by performing a reconstruction operation (e.g., triangle reconstruction, upsampling, voxelization, etc.). The reconstructed geometry may include a point cloud picture or frame that does not include attributes.
The coordinate system inverse transformation unit 1125 may acquire the positions of the points by transforming the coordinate system based on the reconstructed geometry. For example, the coordinate system inverse transformation unit 1125 may inverse-transform the positions of the points from a three-dimensional space (e.g., a three-dimensional space expressed by an x-axis, y-axis, and z-axis coordinate system) into position information of a global spatial coordinate system.
The attribute decoding unit 1130, the attribute dequantization unit 1135, the RAHT transform unit 1150, the LOD generation unit 1140, the inverse lifting unit 1145, and/or the color inverse transform unit 1155 may perform attribute decoding. The attribute decoding may include: RAHT transform decoding, predictive transform decoding, and lifting transform decoding. The three decoding methods described above may be used selectively, or a combination of one or more of them may be used.
The attribute decoding unit 1130 may decode the attribute bitstream based on arithmetic coding. For example, when there are no neighbor points in the predictors of the respective points and thus the attribute value of the current point is directly entropy-encoded, the attribute decoding unit 1130 may decode the attribute value (unquantized attribute value) of the current point. As another example, when there is a neighbor point in the predictor of the current point and thus the quantized residual attribute value is entropy-encoded, the attribute decoding unit 1130 may decode the quantized residual attribute value.
The attribute dequantization unit 1135 may dequantize the decoded attribute bitstream, or information about the attribute obtained as a result of decoding, and output the dequantized attributes (or attribute values). For example, when the quantized residual attribute value is output from the attribute decoding unit 1130, the attribute dequantization unit 1135 may dequantize the quantized residual attribute value to output the residual attribute value. The dequantization process may be selectively applied based on whether the attribute was quantized in the point cloud encoding device 400. That is, when there are no neighbor points in the predictors of the respective points and the attribute value of the current point was therefore directly encoded, the attribute decoding unit 1130 may output the unquantized attribute value of the current point, and the dequantization process may be skipped. Table 3 shows an example of the attribute dequantization process of the present disclosure.
TABLE 3
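Since the contents of Table 3 are likewise not reproduced here, the following C++ sketch shows the inverse of the uniform quantizer sketched for Table 2 as an informal illustration of attribute dequantization; the exact rule of Table 3 may differ, and the names used are illustrative.

#include <cstdint>
#include <iostream>

// Minimal sketch of dequantizing a residual attribute value: the quantized
// residual is scaled back by the quantization step and added to the
// predicted attribute value (illustrative only).
double DequantizeResidual(int64_t quantized, double qs) {
    return static_cast<double>(quantized) * qs;
}

int main() {
    int64_t quantized = 4;
    double qs = 4.0;
    double predicted = 83.0;  // predicted attribute value of the current point
    double reconstructed = predicted + DequantizeResidual(quantized, qs);
    std::cout << reconstructed << '\n';  // prints 99
}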
The RAHT transform unit 1150, the LOD generation unit 1140, and/or the inverse lifting unit 1145 may process the reconstructed geometry and the dequantized attributes. The RAHT transform unit 1150, the LOD generation unit 1140, and/or the inverse lifting unit 1145 may selectively perform decoding operations corresponding to the encoding operations of the point cloud encoding device 400.
The color inverse transform unit 1155 may perform inverse transform encoding to inverse transform the color values (or textures) included in the decoded attribute. The operation of the color inverse transformation unit 1155 may be selectively performed based on whether the color transformation unit 435 is operated.
Fig. 12 shows another example of a transmitting apparatus according to an embodiment of the present disclosure. As illustrated in fig. 12, the transmitting apparatus may include: a data input unit 1205, a quantization processing unit 1210, a voxelization processing unit 1215, an octree occupation code generation unit 1220, a surface model processing unit 1225, an intra/inter coding processing unit 1230, an arithmetic encoder 1235, a metadata processing unit 1240, a color conversion processing unit 1245, an attribute conversion processing unit 1250, a prediction/lifting/RAHT conversion processing unit 1255, an arithmetic encoder 1260, and a transmission processing unit 1265.
The function of the data input unit 1205 may correspond to an acquisition process performed by the acquisition unit 11 of fig. 1. That is, the data input unit 1205 may acquire the point cloud video and generate point cloud data of a plurality of points. Geometric information (position information) in the point cloud data can be generated in the form of a geometric bit stream by the quantization processing unit 1210, the voxelization processing unit 1215, the octree occupation code generation unit 1220, the surface model processing unit 1225, the intra/inter encoding processing unit 1230, and the arithmetic encoder 1235. The attribute information in the point cloud data may be generated in the form of an attribute bit stream by the color transform processing unit 1245, the attribute transform processing unit 1250, the prediction/lifting/RAHT transform processing unit 1255, and the arithmetic encoder 1260. The geometric bit stream, the attribute bit stream, and/or the metadata bit stream may be transmitted to the reception apparatus through the processing of the transmission processing unit 1265.
In particular, the function of the quantization processing unit 1210 may correspond to the quantization process performed by the geometric quantization unit 410 of fig. 4 and/or the function of the coordinate system transformation unit 405. The function of the voxelization processing unit 1215 may correspond to the voxelization process performed by the geometric quantization unit 410 of fig. 4, and the function of the octree occupation code generation unit 1220 may correspond to the function performed by the octree analysis unit 415 of fig. 4. The function of the surface model processing unit 1225 may correspond to the function performed by the approximation unit 420 of fig. 4, and the function of the intra/inter encoding processing unit 1230 and the function of the arithmetic encoder 1235 may correspond to the function performed by the geometry encoding unit 425. The functions of the metadata processing unit 1240 may correspond to the functions of the metadata processing unit described with reference to fig. 1.
In addition, the function of the color conversion processing unit 1245 may correspond to the function performed by the color conversion unit 435 of fig. 4, and the function of the attribute conversion processing unit 1250 may correspond to the function performed by the attribute conversion unit 440 of fig. 4. The function of the prediction/lifting/RAHT transform processing unit 1255 may correspond to the functions performed by the RAHT transform unit 445, the LOD generating unit 450, and the lifting unit 455 of fig. 4, and the function of the arithmetic encoder 1260 may correspond to the function of the attribute encoding unit 465 of fig. 4. The functions of the transmission processing unit 1265 may correspond to the functions performed by the transmission unit 14 and/or the encapsulation processing unit 13 of fig. 1.
Fig. 13 shows another example of a receiving apparatus according to an embodiment of the present disclosure. As illustrated in fig. 13, the receiving apparatus includes: the receiving unit 1305, the receiving processing unit 1310, the arithmetic decoder 1315, the metadata parser 1335, the octree reconstruction processing unit 1320 based on an occupied code, the surface model processing unit 1325, the inverse quantization processing unit 1330, the arithmetic decoder 1340, the inverse quantization processing unit 1345, the prediction/lifting/RAHT inverse transform processing unit 1350, the color inverse transform processing unit 1355, and the renderer 1360.
The function of the receiving unit 1305 may correspond to the function performed by the receiving unit 21 of fig. 1, and the function of the receiving processing unit 1310 may correspond to the function performed by the decapsulation processing unit 22 of fig. 1. That is, the receiving unit 1305 may receive the bit stream from the transmission processing unit 1265, and the receiving processing unit 1310 may extract the geometric bit stream, the attribute bit stream, and/or the metadata bit stream through the decapsulation process. The geometric bitstream may be generated as a reconstructed position value (position information) by the arithmetic decoder 1315, the octree reconstruction processing unit 1320, the surface model processing unit 1325, and the inverse quantization processing unit 1330 based on the occupied codes. The attribute bitstream may be generated into a reconstructed attribute value by an arithmetic decoder 1340, an inverse quantization processing unit 1345, a prediction/lifting/RAHT inverse transform processing unit 1350, and a color inverse transform processing unit 1355. The metadata bitstream may be generated as reconstructed metadata (or metadata information) by a metadata parser 1335. The location values, attribute values, and/or metadata may be rendered in a renderer 1360 to provide an experience such as VR/AR/MR/autopilot to the user.
Specifically, the function of the arithmetic decoder 1315 may correspond to the function performed by the geometry decoding unit 1105 of fig. 11, and the function of the octree reconstruction unit 1320 based on the occupation code may correspond to the function performed by the octree synthesis unit 1110 of fig. 11. The function of the surface model processing unit 1325 may correspond to the function performed by the approximate synthesis unit of fig. 11, and the function of the inverse quantization processing unit 1330 may correspond to the function performed by the geometric reconstruction unit 1120 and/or the coordinate system inverse transformation unit 1125 of fig. 11. The functions of the metadata parser 1335 may correspond to the functions performed by the metadata parser described with reference to fig. 1.
In addition, the function of the arithmetic decoder 1340 may correspond to the function performed by the attribute decoding unit 1130 of fig. 11, and the function of the inverse quantization processing unit 1345 may correspond to the function of the attribute inverse quantization unit 1135 of fig. 11. The functions of the prediction/lifting/RAHT inverse-transform processing unit 1350 may correspond to the functions performed by the RAHT transform unit 1150, the LOD generation unit 1140, and the inverse-lifting unit 1145 of fig. 11, and the functions of the inverse-color transform processing unit 1355 may correspond to the functions performed by the inverse-color transform unit 1155 of fig. 11.
Fig. 14 illustrates an example of a structure capable of interworking with a method/apparatus for transmitting and receiving point cloud data according to an embodiment of the present disclosure.
The structure of fig. 14 illustrates a configuration in which at least one of a server (AI server), a robot, an autonomous vehicle, an XR device, a smart phone, a home appliance, and/or an HMD is connected to a cloud network. Robots, autonomous vehicles, XR devices, smart phones or household appliances may be referred to as devices. Additionally, the XR device may correspond to a point cloud data device (PCC) according to an embodiment, or may interwork with a PCC device.
A cloud network may refer to a network that forms part of or resides within a cloud computing infrastructure. Here, the cloud network may be configured using a 3G network, a 4G or long term evolution (Long Term Evolution, LTE) network, or a 5G network.
The server may be connected to at least one of a robot, an autonomous vehicle, an XR device, a smart phone, a home appliance, and/or an HMD through a cloud network, and may facilitate at least a portion of the processing of the connected devices.
The HMD may represent one of the types of XR devices and/or PCC devices in which an embodiment may be implemented. An HMD type device according to an embodiment may include: a communication unit, a control unit, a memory unit, an I/O unit, a sensor unit and a power supply unit.
<PCC+XR>
The XR/PCC device may be implemented by an HMD, HUD provided in a vehicle, TV, mobile phone, smart phone, computer, wearable device, home appliance, digital signage, vehicle, stationary or mobile robot, etc. by applying PCC and/or XR technology.
The XR/PCC device may acquire information about surrounding space or real objects by analyzing 3D point cloud data or image data acquired through various sensors or from external devices to generate position (geometry) data and attribute data of the 3D points, and render and output XR objects to be output. For example, the XR/PCC device may output an XR object corresponding to the identified object that includes additional information about the identified object.
< PCC+XR+Mobile Phone >
The XR/PCC device may be implemented by a mobile phone or the like by applying PCC technology. The mobile phone may decode and display the point cloud content based on PCC technology.
< PCC+autopilot+XR >
Autonomous vehicles may be implemented by mobile robots, vehicles, unmanned aerial vehicles, etc. by applying PCC technology and XR technology. An autonomous vehicle applying XR/PCC technology may refer to an autonomous vehicle equipped with a unit for providing XR images, or an autonomous vehicle subject to control/interaction within an XR image. In particular, autonomous vehicles subject to control/interaction within the XR image are distinguished from the XR device and may interwork with each other.
An autonomous vehicle equipped with a unit for providing an XR/PCC image may acquire sensor information from a sensor comprising a camera and output an XR/PCC image generated based on the acquired sensor information. For example, an autonomous vehicle has a HUD and may provide an XR/PCC object corresponding to a real object or an object in a screen to a passenger by outputting an XR/PCC image.
In this case, when the XR/PCC object is output to the HUD, at least a portion of the XR/PCC object may be output so as to overlap the actual object at which the passenger's gaze is directed. On the other hand, when the XR/PCC object is output to a display provided inside the autonomous vehicle, at least a portion of the XR/PCC object may be output to overlap the object in the screen. For example, an autonomous vehicle may output XR/PCC objects corresponding to objects such as lanes, other vehicles, traffic lights, traffic signs, two-wheeled vehicles, pedestrians, and buildings.
VR, AR, MR, and/or PCC techniques according to embodiments may be applied to various devices. That is, VR technology is a display technology that provides an object or background of the real world only as a CG image. AR technology, on the other hand, refers to a technology that displays a virtual CG image on top of an image of a real object. MR technology is similar to AR technology in that it shows virtual objects mixed and combined with the real world. However, in AR technology, the distinction between a real object and a virtual object composed of CG images is clear, and the virtual object is used in a form that supplements the real object, whereas in MR technology the virtual object is treated as equivalent to the real object, unlike in AR technology. More specifically, the hologram service is an example of applying the MR technology described above. VR, AR, and MR technologies may be integrated and referred to as XR technology.
Spatial segmentation
The point cloud data (i.e., G-PCC data) may represent a volumetric encoding (volumetric encoding) of a point cloud consisting of a sequence of frames (point cloud frames). Each point cloud frame may include: the number of points, the location of the points, and the properties of the points. The number of points, the location of the points, and the nature of the points may vary from frame to frame. Each point cloud frame may refer to a set of three-dimensional points specified by zero or more attributes and cartesian coordinates (x, y, z) of the three-dimensional points in a particular time instance. Here, the cartesian coordinates (x, y, z) of the three-dimensional point may be a position or a geometry.
In some implementations, the present disclosure may also perform a spatial segmentation process that segments the point cloud data into one or more 3D blocks prior to encoding the point cloud data. A 3D block may refer to all or part of the 3D space occupied by the point cloud data. The 3D block may be one or more of a tile group, a tile, a slice, a Coding Unit (CU), a Prediction Unit (PU), or a Transform Unit (TU).
Tiles corresponding to 3D blocks may refer to all or part of the 3D space occupied by the point cloud data. Also, slices corresponding to 3D blocks may refer to all or part of the 3D space occupied by the point cloud data. A tile may be partitioned into one or more slices based on the number of points included in the tile. A tile may be a set of slices with bounding box information. The bounding box information for each tile may be specified in a tile list (or tile parameter set, TILE PARAMETER SET, TPS). A tile may overlap another tile in a bounding box. A slice may be a data unit that performs encoding independently or a data unit that performs decoding independently. That is, a slice may be a collection of points that may be encoded or decoded independently. In some implementations, a slice may be a series of syntax elements that represent part or all of an encoded point cloud frame. Each slice may include an index identifying the tile to which the slice belongs.
The spatially segmented 3D blocks may be processed independently or non-independently. For example, spatially partitioned 3D blocks may each be encoded or decoded independently or non-independently, and may each be transmitted or received independently or non-independently. In addition, the spatially partitioned 3D blocks may each be quantized or dequantized independently or non-independently, and transformed or inverse-transformed independently or non-independently. In addition, the spatially partitioned 3D blocks may be rendered independently or non-independently. For example, encoding or decoding may be performed in units of slices or in units of tiles. In addition, quantization or dequantization may be performed differently for each tile or slice, and transformation or inverse transformation may also be performed differently for each tile or slice.
In this way, when the point cloud data is spatially divided into one or more 3D blocks and the spatially divided 3D blocks are independently or non-independently processed, a process of processing the 3D blocks is performed in real time and is performed with low latency. Further, random access and parallel encoding or parallel decoding in a three-dimensional space occupied by the point cloud data can be enabled, and errors accumulated in the encoding or decoding process can be prevented.
Fig. 15 illustrates an example in which a bounding box (i.e., point cloud data) is spatially partitioned into one or more 3D blocks. As illustrated in fig. 15, the entire bounding box of the point cloud data has three tiles, namely, tile #0, tile #1, and tile #2. Also, tile #0 may be partitioned into two slices, slice #0 and slice #1. In addition, tile #1 may be subdivided into two slices, namely slice #2 and slice #3. Also, tile #2 may be subdivided into one slice, namely slice #4.
When the point cloud data is partitioned into one or more 3D blocks, information for decoding some of the point cloud data corresponding to a specific tile or a specific slice may be required. Furthermore, to support spatial access (or partial access) to point cloud data, information related to the 3D spatial region may be required. Here, the spatial access may refer to extracting only necessary partial point cloud data from the entire point cloud data from the file. The signaling information may include: information for decoding some point cloud data, information related to a 3D spatial region for supporting spatial access, etc. For example, the signaling information may include: 3D bounding box information, 3D spatial region information, tile information, and/or tile manifest information.
The signaling information may be stored and signaled in samples, sample entries, sample groups, track groups, or individual metadata tracks in the track. In some implementations, signaling information may be signaled in units of a sequence parameter set (sequence PARAMETER SET, SPS) for sequence-level signaling, a geometry parameter set (geometry PARAMETER SET, GPS) for signaling of geometry coding information, an attribute parameter set (attribute PARAMETER SET, APS) for signaling of attribute coding information, a tile parameter set (TILE PARAMETER SET, TPS) (or tile list) for tile-level signaling, and so on. In addition, signaling information may be signaled in units of coding units such as slices or tiles.
Bit stream
Fig. 16 illustrates an example of a structure of a bitstream according to an embodiment of the present disclosure.
When the geometry bitstream, the attribute bitstream, and/or the signaling bitstream are comprised of one bitstream (or G-PCC bitstream), the bitstream may include one or more sub-bitstreams.
As illustrated in fig. 16, the bitstream may include: one or more SPS, one or more GPS, one or more APS (APS0 and APS1), one or more TPS, and/or one or more slices (slice 0, …, slice n). Since a tile is a slice group that includes one or more slices, a bitstream may include one or more tiles. The TPS may include information about each tile (e.g., information such as coordinate values, height, and/or size of bounding boxes), and each slice may include a geometry bitstream Geom0 and/or one or more attribute bitstreams Attr0 and Attr1. For example, slice 0 may include a geometry bitstream Geom00 and/or one or more attribute bitstreams Attr00 and Attr10.
The geometry bitstream in each slice may be composed of a geometry slice header (geom_slice_header) and geometry slice data (geom_slice_data). The geometry slice header may include: identification information (geom_parameter_set_id) of the parameter set included in the GPS, a tile identifier (geom_tile_id), a slice identifier (geom_slice_id), and/or information (geomBoxOrigin, geom_box_log2_scale, geom_max_node_size_log2, geom_num_points) on the data included in the geometry slice data (geom_slice_data). geomBoxOrigin is geometry box origin information indicating the box origin of the geometry slice data, geom_box_log2_scale is information indicating the log scale of the geometry slice data, geom_max_node_size_log2 is information indicating the size of the root geometry octree node, and geom_num_points is information related to the number of points of the geometry slice data. The geometry slice data may include geometry information (or geometry data) of the point cloud data in the slice.
Each attribute bit stream in each slice may include an attribute slice header (attr_slice_header) and attribute slice data (attr_slice_data). The attribute slice header may include information about the attribute slice data, and the attribute slice data may include attribute information (or attribute data) of the point cloud data in the slice. When there are multiple attribute bitstreams in one slice, each attribute bitstream may include different attribute information. For example, one attribute bit stream may include attribute information corresponding to colors, while another attribute bit stream may include attribute information corresponding to reflectivity.
GPCC entry information structure
The syntax structure of the G-PCC entry information box (GPCCEntryInfoBox) may be defined as follows.
class GPCCEntryInfoBox extends Box('gpsb'){
GPCCEntryInfoStruct();
}
In the above syntax structure, the GPCCEntryInfoBox with box type 'gpsb' may include GPCCEntryInfoStruct(). The syntax of GPCCEntryInfoStruct() can be defined as follows.
aligned(8)class GPCCEntryInfoStruct{
unsigned int(1)main_entry_flag;
unsigned int(1)dependent_on;
if (dependent_on) { // non-entry
unsigned int(16)dependency_id;
}
}
GPCCEntryInfoStruct() may include main_entry_flag and dependent_on. main_entry_flag may indicate whether the track is an entry point for decoding the G-PCC bit stream. dependent_on indicates whether its decoding depends on other tracks. If dependent_on is present in a sample entry, dependent_on may indicate that decoding of samples in the track depends on other tracks. GPCCEntryInfoStruct() may also include dependency_id if the value of dependent_on is 1. dependency_id may indicate an identifier of a track for decoding the related data. If dependency_id is present in a sample entry, dependency_id may indicate an identifier of the track carrying the G-PCC sub-bit stream on which decoding of the samples in the track depends. If dependency_id is present in a sample group, dependency_id may indicate an identifier of the samples carrying the G-PCC sub-bit stream on which decoding of the related samples depends.
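As an informal illustration of the fields described above, the following C++ sketch reads a GPCCEntryInfoStruct with a minimal MSB-first bit reader (the BitReader helper is hypothetical and not part of any library): main_entry_flag (1 bit), dependent_on (1 bit), and, when dependent_on is set, dependency_id (16 bits).

#include <cstddef>
#include <cstdint>
#include <iostream>

// Minimal MSB-first bit reader used only for this sketch.
class BitReader {
public:
    BitReader(const uint8_t* data, size_t size) : data_(data), size_(size) {}
    uint32_t ReadBits(int n) {
        uint32_t value = 0;
        for (int i = 0; i < n; ++i) {
            size_t byte = bitPos_ >> 3;
            int shift = 7 - int(bitPos_ & 7);
            uint32_t bit = byte < size_ ? (data_[byte] >> shift) & 1u : 0u;
            value = (value << 1) | bit;
            ++bitPos_;
        }
        return value;
    }
private:
    const uint8_t* data_;
    size_t size_;
    size_t bitPos_ = 0;
};

int main() {
    // main_entry_flag = 0, dependent_on = 1, dependency_id = 7 (example data).
    const uint8_t raw[] = {0x40, 0x01, 0xC0};
    BitReader br(raw, sizeof(raw));
    bool mainEntry = br.ReadBits(1);                               // main_entry_flag
    bool dependentOn = br.ReadBits(1);                             // dependent_on
    uint16_t dependencyId = dependentOn ? uint16_t(br.ReadBits(16)) : 0;  // dependency_id
    std::cout << mainEntry << ' ' << dependentOn << ' ' << dependencyId << '\n';  // prints 0 1 7
}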
G-PCC component information structure
The syntax structure of the G-PCC component type box (GPCCComponentTypeBox) may be defined as follows.
aligned(8)class GPCCComponentTypeBox extends FullBox('gtyp',version=0,0){
GPCCComponentTypeStruct();
}
The GPCCComponentTypeBox with box type 'gtyp' may include GPCCComponentTypeStruct(). The syntax of GPCCComponentTypeStruct() can be defined as follows.
aligned(8)class GPCCComponentTypeStruct{
unsigned int(8)numOfComponents;
for(i=0;i<numOfComponents;i++){
unsigned int(8)gpcc_type;
if(gpcc_type==4)
unsigned int(8)AttrIdx;
}
// additional fields
}
numOfComponents may indicate the number of G-PCC components signaled in GPCCComponentTypeStruct. gpcc_type may be included in GPCCComponentTypeStruct through a loop repeated according to the value of numOfComponents. The loop may be repeated, incrementing i by 1, from i = 0 to (numOfComponents - 1). gpcc_type may indicate the type of the G-PCC component. For example, if gpcc_type has a value of 2, it may indicate a geometry component, and if it is 4, it may indicate an attribute component. If gpcc_type has a value of 4, i.e., when it indicates an attribute component, the loop may also include AttrIdx. AttrIdx may indicate an identifier of the attribute signaled in the SPS. A G-PCC component type box (GPCCComponentTypeBox) may be included in the sample entries of multiple tracks. If a G-PCC component type box (GPCCComponentTypeBox) is present in a sample entry of a track that carries part or all of the G-PCC bit stream, GPCCComponentTypeStruct() may indicate one or more G-PCC component types carried by the respective track. GPCCComponentTypeBox including GPCCComponentTypeStruct(), or GPCCComponentTypeStruct() itself, may be referred to as G-PCC component information.
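As an informal illustration of the syntax above, the following C++ sketch parses a GPCCComponentTypeStruct from a byte buffer (the reader code is hypothetical and not part of any library; only the fields shown in the syntax are read).

#include <cstddef>
#include <cstdint>
#include <iostream>
#include <stdexcept>
#include <vector>

// gpcc_type == 2 signals a geometry component; gpcc_type == 4 signals an
// attribute component that also carries AttrIdx.
struct GpccComponent {
    uint8_t type = 0;     // gpcc_type
    uint8_t attrIdx = 0;  // only meaningful when type == 4
};

std::vector<GpccComponent> ParseComponentTypeStruct(const uint8_t* data, size_t size) {
    size_t pos = 0;
    auto readU8 = [&]() -> uint8_t {
        if (pos >= size) throw std::runtime_error("truncated GPCCComponentTypeStruct");
        return data[pos++];
    };
    std::vector<GpccComponent> components(readU8());  // numOfComponents
    for (GpccComponent& c : components) {
        c.type = readU8();                       // gpcc_type
        if (c.type == 4) c.attrIdx = readU8();   // AttrIdx, attribute components only
    }
    return components;
}

int main() {
    // Example buffer advertising one geometry (type 2) and one attribute (type 4, AttrIdx 0) component.
    const uint8_t raw[] = {2, 2, 4, 0};
    auto components = ParseComponentTypeStruct(raw, sizeof(raw));
    std::cout << components.size() << '\n';  // prints 2
}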
Sample group
The encapsulation processing unit referred to in this disclosure may generate a sample group by grouping one or more samples. The encapsulation processing unit, metadata processing unit, or signaling processing unit referred to in this disclosure may signal signaling information associated with a sample group in a sample, sample group, or sample entry. That is, sample set information associated with a sample set may be added to a sample, sample set, or sample entry. The sample set information may be 3D bounding box sample set information, 3D region sample set information, 3D tile manifest sample set information, and the like.
Rail set
The encapsulation processing unit mentioned in this disclosure may generate a track group by grouping one or more tracks. The encapsulation processing unit, metadata processing unit, or signaling processing unit referred to in this disclosure may signal signaling information associated with a track group in a sample, track group, or sample entry. That is, track group information associated with a track group may be added to a sample, track group, or sample entry. The track group information may be 3D bounding box track group information, point cloud composition track group information, spatial region track group information, 3D tile list track group information, and the like.
Sample entry
Fig. 17 is a diagram for explaining an ISOBMFF-based file including a single track. Fig. 17 (a) illustrates an example of a layout of an ISOBMFF-based file including a single track, and fig. 17 (b) illustrates an example of a sample structure of the mdat box when a G-PCC bitstream is stored in a single track of a file. Fig. 18 is a diagram for explaining an ISOBMFF-based file including a plurality of tracks. Fig. 18 (a) illustrates an example of a layout of an ISOBMFF-based file including a plurality of tracks, and fig. 18 (b) illustrates an example of a sample structure of the mdat box when a G-PCC bitstream is stored in multiple tracks of the file.
The stsd box (SampleDescriptionBox) included in the moov box of the file may include a sample entry for storing a single track of the G-PCC bitstream. SPS, GPS, APS, the tile manifest may be included in the sample entry in the moov box or in the sample in the mdat box in the file. Also, a geometric slice and zero or more attribute slices may be included in the sample of the mdat box in the file. When the G-PCC bit stream is stored in a single track of a file, each sample may contain multiple G-PCC components. That is, each sample may be composed of one or more TLV encapsulation structures. Sample entries for a single track may be defined as follows.
Sample entry type: 'gpe1', 'gpeg'
Container: SampleDescriptionBox
Mandatory: a 'gpe1' or 'gpeg' sample entry is mandatory
Quantity: one or more sample entries may be present
Sample entry types 'gpe1' or 'gpeg' are mandatory, and there may be one or more sample entries. The G-PCC track may use a VolumetricVisualSampleEntry with sample entry type 'gpe1' or 'gpeg'. The sample entry of the G-PCC track may include a G-PCC decoder configuration box (GPCCConfigurationBox), and the G-PCC decoder configuration box may include a G-PCC decoder configuration record (GPCCDecoderConfigurationRecord()). GPCCDecoderConfigurationRecord() may include at least one of the following: configurationVersion, profile_idc, profile_compatibility_flags, level_idc, numOfSetupUnitArrays, SetupUnitType, completeness, numOfSetupUnit, or setupUnit. The setupUnit array field included in GPCCDecoderConfigurationRecord() may include a TLV encapsulation structure including one SPS.
If the sample entry type is 'gpe1', then all parameter sets (e.g., SPS, GPS, APS, tile manifest) may be included in the setupUnit array. If the sample entry type is 'gpeg', the above-described parameter sets may be included in the setupUnit array (i.e., the sample entry) or in the stream (i.e., the samples). An example of the syntax of a G-PCC sample entry (GPCCSampleEntry) with sample entry type 'gpe1' is as follows.
aligned(8)class GPCCSampleEntry()
extends VolumetricVisualSampleEntry('gpe1'){
GPCCConfigurationBox config; // mandatory
3DBoundingBoxInfoBox();
CubicRegionInfoBox();
TileInventoryBox();
}
The G-PCC sample entry (GPCCSampleEntry) with sample entry type 'gpe1' may include: GPCCConfigurationBox, 3DBoundingBoxInfoBox(), CubicRegionInfoBox(), and TileInventoryBox(). 3DBoundingBoxInfoBox() may indicate 3D bounding box information of the point cloud data related to the samples carried by the track. CubicRegionInfoBox() may indicate information about one or more spatial regions of the point cloud data carried by the samples in the track. TileInventoryBox() may indicate 3D tile inventory information of the point cloud data carried by the samples in the track.
As illustrated in fig. 17 (b), the sample may include a TLV encapsulation structure containing geometric slices. Additionally, the samples may include TLV encapsulation structures containing one or more parameter sets. Additionally, the samples may include TLV encapsulation structures containing one or more attribute slices.
As illustrated in fig. 18 (a), when the G-PCC bit stream is carried by multiple tracks of an ISOBMFF-based file, individual geometric slices or attribute slices may be mapped to individual tracks. For example, a geometric slice may be mapped to track 1 and an attribute slice may be mapped to track 2. The track carrying the geometric slice (track 1) may be referred to as a geometric track or a G-PCC geometric track, and the track carrying the property slice (track 2) may be referred to as a property track or a G-PCC property track. In addition, the geometric trajectory may be defined as a volumetric visual trajectory carrying the geometric slice, and the property trajectory may be defined as a volumetric visual trajectory carrying the property slice.
The track carrying the portion of the G-PCC bit stream comprising both the geometry slices and the attribute slices may be referred to as a multiplexed track. Where the geometry slices and the attribute slices are stored on separate tracks, each sample in a track may include at least one TLV encapsulation structure carrying data of a single G-PCC component. In this case, each sample does not contain both geometry and attributes, and does not contain multiple attributes. The multi-track encapsulation of the G-PCC bit stream may enable a G-PCC player to efficiently access one of the G-PCC components. When the G-PCC bit stream is carried by multiple tracks, in order for the G-PCC player to effectively access one of the G-PCC components, the following conditions need to be satisfied.
A) When a G-PCC bitstream composed of TLV encapsulation structures is carried by multiple tracks, the track carrying the geometric bitstream (or geometric slice) becomes an entry point.
B) In the sample entry, a new box is added to indicate the role of the stream included in the track. The new box may be the aforementioned G-PCC component type box (GPCCComponentTypeBox). That is, GPCCComponentTypeBox may be included in the sample entries of the multiple tracks.
C) Track references are introduced from tracks carrying only G-PCC geometry bitstreams to tracks carrying G-PCC attribute bitstreams.
GPCCComponentTypeBox may include GPCCComponentTypeStruct (). If GPCCComponentTypeBox is present in a sample entry of a track that carries part or all of the G-PCC bit stream, GPCCComponentTypeStruct () may specify the type (e.g., geometry, properties) of one or more G-PCC components carried by the respective track. For example, if the value of gpcc _type field included in GPCCComponentTypeStruct () is 2, it may indicate a geometric component, and if it is 4, it may indicate an attribute component. In addition, when the value of the gpcc _type field indicates 4 (i.e., attribute component), a AttrIdx field indicating an attribute identifier signaled to SPS () may be further included.
In case the G-PCC bit stream is carried by multiple tracks, the syntax of the sample entry may be defined as follows.
Sample entry type: 'gpe1', 'gpeg', 'gpc1' or 'gpcg'
A container: sampleDescriptionBox A
Mandatory: the 'gpc1' or 'gpcg' sample entries are mandatory
The amount is as follows: there may be one or more sample entries
Sample entry types 'gpe1', 'gpeg', 'gpc1', or 'gpcg' are mandatory, and one or more sample entries may be present. Multiple tracks (e.g., geometry or attribute tracks) may use a VolumetricVisualSampleEntry with sample entry type 'gpe1', 'gpeg', 'gpc1', or 'gpcg'. In the 'gpe1' sample entry, all parameter sets may be present in the setupUnit array. In the 'gpeg' sample entry, the parameter sets are present in the array or in the stream. In the 'gpe1' or 'gpeg' sample entries, GPCCComponentTypeBox should not be present. In the 'gpc1' sample entry, the SPS, GPS, and tile inventory may be present in the SetupUnit array of the track carrying the G-PCC geometry bitstream. All relevant APSs may be present in the SetupUnit array of the track carrying the G-PCC attribute bitstream. In the 'gpcg' sample entry, the SPS, GPS, APS, or tile manifest may be present in the array or in the stream. In the 'gpc1' or 'gpcg' sample entries, GPCCComponentTypeBox should be present.
An example of the syntax of the G-PCC sample entry is as follows.
compressorname (i.e., codingname) of the base class VolumetricVisualSampleEntry may indicate the name of the compressor used, with the recommended value "\013GPCC encoding". In "\013GPCC encoding", the first byte (octal 13 or decimal 11, represented by \013) is the number of remaining bytes, and may indicate the number of bytes of the remaining string. config may include G-PCC decoder configuration information. info may indicate the G-PCC component information carried in the respective track. info may indicate the component tiles carried in the track, and may also indicate the attribute name, index, and attribute type of the G-PCC component carried in the G-PCC attribute track.
Sample format
When the G-PCC bit stream is stored in a single track, the syntax of the sample format is as follows.
aligned(8)class GPCCSample
{
unsigned int GPCCLength = sample_size; // size of the sample
for (i = 0; i < GPCCLength; ) // to the end of the sample
{
tlv_encapsulation gpcc_unit;
i+=(1+4)+gpcc_unit.tlv_num_payload_bytes;
}
}
In the above syntax, each sample (GPCCSample) corresponds to a single point cloud frame and may be composed of one or more TLV encapsulation structures belonging to the same presentation time. Each TLV encapsulation structure may include a single type of TLV payload. In addition, a sample may be independent (e.g., a sync sample). GPCCLength indicates the length of the sample, and gpcc_unit may include an instance of a TLV encapsulation structure containing a single G-PCC component (e.g., a geometry slice).
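The loop in the sample syntax above can be illustrated with the following C++ sketch (an informal example; it assumes the 4-byte tlv_num_payload_bytes field is big-endian, and the struct and function names are illustrative), which splits one G-PCC sample into its TLV encapsulation structures.

#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

// Each TLV unit occupies 1 byte of type, 4 bytes of payload length, then the
// payload itself, so the loop advances by (1 + 4) + tlv_num_payload_bytes.
struct TlvUnit {
    uint8_t type = 0;          // tlv_type (e.g. geometry slice, attribute slice)
    const uint8_t* payload = nullptr;
    uint32_t payloadBytes = 0; // tlv_num_payload_bytes
};

std::vector<TlvUnit> SplitGpccSample(const uint8_t* sample, size_t gpccLength) {
    std::vector<TlvUnit> units;
    size_t i = 0;
    while (i + 5 <= gpccLength) {           // need at least the 5-byte TLV header
        TlvUnit u;
        u.type = sample[i];
        u.payloadBytes = (uint32_t(sample[i + 1]) << 24) | (uint32_t(sample[i + 2]) << 16) |
                         (uint32_t(sample[i + 3]) << 8)  |  uint32_t(sample[i + 4]);
        if (i + 5 + u.payloadBytes > gpccLength) break;  // truncated unit, stop
        u.payload = sample + i + 5;
        units.push_back(u);
        i += (1 + 4) + u.payloadBytes;      // same increment as in the sample syntax
    }
    return units;
}

int main() {
    // One TLV unit: type 2 with a 3-byte payload (example data).
    const uint8_t sample[] = {2, 0, 0, 0, 3, 0xAA, 0xBB, 0xCC};
    auto units = SplitGpccSample(sample, sizeof(sample));
    std::cout << units.size() << ' ' << int(units[0].type) << ' ' << units[0].payloadBytes << '\n';  // prints 1 2 3
}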
When the G-PCC bit stream is stored in multiple tracks, each sample may correspond to a single point cloud frame, and samples that contribute to the same point cloud frame in different tracks may have to have the same presentation time. Each sample may be composed of one or more G-PCC units of the G-PCC component indicated in the GPCCComponentInfoBox of the sample entry and zero or more G-PCC units carrying a parameter set or a tile manifest. When a G-PCC unit including a parameter set or a tile manifest is present in a sample, that G-PCC unit may need to appear before the G-PCC units of the G-PCC component. Each sample may include one or more G-PCC units containing an attribute data unit, and zero or more G-PCC units carrying a parameter set. In the case where the G-PCC bit stream is stored in multiple tracks, the syntax and semantics of the sample format may be the same as those described above for the case where the G-PCC bit stream is stored in a single track.
Subsamples
In the receiving apparatus, since the geometry slice is decoded first and the attribute slice needs to be decoded based on the decoded geometry, when each sample is composed of a plurality of TLV encapsulation structures, it is necessary to access each TLV encapsulation structure in the samples. In addition, if a sample is made up of multiple TLV encapsulations, each of the multiple TLV encapsulations may be stored as a sub-sample. The subsamples may be referred to as G-PCC subsamples. For example, if a sample includes a parameter set TLV envelope containing a parameter set, a geometry TLV envelope containing a geometry slice, and an attribute TLV envelope containing an attribute slice, the parameter set TLV envelope, the geometry TLV envelope, and the attribute TLV envelope may be stored as sub-samples, respectively. In this case, the type of TLV encapsulation structure carried by the sub-samples may be required in order to be able to access the individual G-PCC components in the samples.
When the G-PCC bit stream is stored in a single track, the G-PCC sub-samples may include only one TLV encapsulation structure. One SubSampleInformationBox may be present in the sample table box (SampleTableBox, stbl) of the moov box, or may be present in the track fragment box (TrackFragmentBox, traf) of the respective movie fragment box (MovieFragmentBox, moof). If SubSampleInformationBox is present, an 8-bit type value of the TLV encapsulation structure may be included in the 32-bit codec_specific_parameters field of the sub-sample entry in SubSampleInformationBox. If the TLV encapsulation structure includes an attribute payload, a 6-bit value of the attribute index may be included in the 32-bit codec_specific_parameters field of the sub-sample entry in SubSampleInformationBox. In some implementations, the type of each sub-sample may be identified by parsing the codec_specific_parameters field of the sub-sample entry in SubSampleInformationBox. SubSampleInformationBox's codec_specific_parameters may be defined as follows.
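The codec_specific_parameters definition referred to above is carried in a figure and is not reproduced here. The following is a hedged sketch of one possible layout of the 32-bit field, consistent with the field descriptions that follow; the widths of payloadType (8 bits) and attrIdx (6 bits) come from the text above, while the widths assumed for tile_data and tile_id are illustrative only.
// Hedged sketch of the 32-bit codec_specific_parameters layout; tile_data/tile_id widths are assumptions.
unsigned int(8)  payloadType;  // tlv_type of the TLV encapsulation structure in the sub-sample
unsigned int(6)  attrIdx;      // attribute identifier when payloadType indicates an attribute payload
unsigned int(1)  tile_data;    // 1: data unit(s) of one G-PCC tile; 0: parameter sets, tile manifest, or frame boundary marker
unsigned int(17) tile_id;      // index of the associated G-PCC tile in the tile manifest (width chosen to fill 32 bits)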
In the above sub-sample syntax, payloadType may indicate the tlv_type of the TLV encapsulation structure in the sub-sample. For example, if payloadType has a value of 4, an attribute slice may be indicated. attrIdx may indicate an identifier of the attribute information of the TLV encapsulation structure including the attribute payload in the sub-sample. attrIdx may be identical to the ash_attr_sps_attr_idx of the TLV encapsulation structure that includes the attribute payload in the sub-sample. tile_data may indicate whether the sub-sample includes the data of one tile or not. When the tile_data value is 1, it may indicate that the sub-sample includes a TLV encapsulation structure containing a geometry data unit or an attribute data unit corresponding to one G-PCC tile. When the tile_data value is 0, it may indicate that the sub-sample includes a TLV encapsulation structure containing parameter sets, a tile manifest, or a frame boundary marker. tile_id may indicate the index of the G-PCC tile associated with the sub-sample in the tile manifest.
When storing the G-PCC bit stream in multiple tracks (in the case of multi-track encapsulation of G-PCC data in ISOBMFF), if sub-samples are present, only a SubSampleInformationBox with flags equal to 1 may be required to be present in the SampleTableBox, or in the TrackFragmentBox of each MovieFragmentBox. In the case where the G-PCC bit stream is stored in multiple tracks, the syntax elements and semantics may be the same as those defined for flags = 1 in the case where the G-PCC bit stream is stored in a single track.
Reference between tracks
When the G-PCC bit stream is carried in multiple tracks (i.e., when the G-PCC geometry bit stream and the attribute bit stream are carried in different (separate) tracks), a track reference tool may be used for connecting the tracks. One or more TrackReferenceTypeBoxes may be added to the TrackReferenceBox in the TrackBox of the G-PCC track. A TrackReferenceTypeBox may contain an array of track_IDs specifying the tracks to which the G-PCC track refers.
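For reference, the generic ISOBMFF track reference structure that this mechanism builds on is shown below; the specific four-character reference types used to link G-PCC geometry and attribute tracks are not restated here.
aligned(8) class TrackReferenceBox extends Box('tref') {
}
aligned(8) class TrackReferenceTypeBox(unsigned int(32) reference_type)
    extends Box(reference_type) {
    unsigned int(32) track_IDs[];  // track_IDs of the tracks referenced by the G-PCC track
}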
In some implementations, the present disclosure may provide apparatus and methods for supporting temporal scalability in the carriage of G-PCC data (hereinafter, which may be referred to as a G-PCC bit stream, an encapsulated G-PCC bit stream, or a G-PCC file). In addition, the present disclosure may propose an apparatus and method for providing a point cloud content service that efficiently stores a G-PCC bit stream in a single track in a file, or separately in multiple tracks, and provides signaling for it. In addition, the present disclosure proposes an apparatus and method for processing a file storage technique to support efficient access to a stored G-PCC bit stream.
Temporal scalability
Temporal scalability may refer to a function that allows the possibility of extracting one or more subsets of independently encoded frames. Moreover, temporal scalability may refer to a function of dividing G-PCC data into a plurality of different temporal levels and independently processing individual G-PCC frames belonging to the different temporal levels. If temporal scalability is supported, the G-PCC player (or the transmitting device and/or receiving device of the present disclosure) may efficiently access a desired component (target component) among the G-PCC components. In addition, if temporal scalability is supported, since the G-PCC frames are processed independently of each other, temporal scalability support at the system level can be expressed as a more flexible temporal sub-hierarchy. In addition, if temporal scalability is supported, a system (point cloud content providing system) processing G-PCC data may manipulate data at a high level to match network capabilities or decoder capabilities, so that the performance of the point cloud content providing system may be improved.
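To make the notion of extraction concrete, the following C sketch (with hypothetical structure and field names) filters a sequence of frames down to those at or below a target temporal level, yielding a lower-frame-rate subsequence that can still be decoded independently.
```c
#include <stddef.h>

/* Hypothetical frame record: each G-PCC frame carries the temporal level it belongs to. */
typedef struct {
    unsigned temporal_level_id;
    /* ... coded frame payload ... */
} GpccFrame;

/* Keep only frames whose temporal level does not exceed the target level,
 * producing a lower-frame-rate subsequence that remains independently decodable. */
size_t extract_temporal_subset(const GpccFrame *in, size_t n,
                               unsigned target_level, GpccFrame *out)
{
    size_t m = 0;
    for (size_t i = 0; i < n; i++)
        if (in[i].temporal_level_id <= target_level)
            out[m++] = in[i];
    return m;
}
```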
Description of the embodiments
If temporal scalability is supported, G-PCC content may be carried in multiple tile tracks and information about temporal scalability may be signaled. The information about temporal scalability (hereinafter referred to as 'temporal scalability information') may include information about a temporal level and information about a temporal level track. Here, the information about the temporal level may include information about a temporal level identifier. This identifier information may be a list of temporal level identifiers and may be expressed as a syntax element such as temporal_level_id. That is, the temporal scalability information may include information about a temporal level (hereinafter referred to as 'temporal level information').
The G-PCC time level may be a subset of G-PCC bit stream frames constituting a sub-sequence having a frame rate less than the actual bit stream sequence. Each G-PCC frame may be associated with a particular time level, the frame frequency of the frames of each time level may be fixed, and each time level may be identified by a time level identifier (i.e., a time identifier).
Furthermore, the temporal scalability information may include information regarding whether the G-PCC content is stored in a plurality of tracks. In other words, the temporal scalability information may include information indicating whether a plurality of temporal level tracks exist, and may be expressed as a syntax element such as multiple_temporal_level_tracks_flag. That is, the temporal scalability information may include information about a temporal level track (hereinafter referred to as 'temporal level track information').
As an example, the temporal scalability information may be carried using a box present in a G-PCC track or tile base track and a box present in a tile track, that is, a box for temporal scalability information (hereinafter referred to as a 'temporal scalability information box' or 'scalability information box'). The box present in the G-PCC track or tile base track carrying the temporal scalability information may be GPCCScalabilityInfoBox, and the box present in the tile track may be GPCCTileScalabilityInfoBox. GPCCTileScalabilityInfoBox may be present in each tile track associated with the tile base track in which GPCCScalabilityInfoBox is present. In addition, as an example, temporal scalability information may be included in the above-described decoder configuration information. As an example, the decoder configuration information may be carried in the G-PCC decoder configuration box GPCCDecoderConfigurationBox and may be included and carried in the G-PCC decoder configuration record GPCCDecoderConfigurationRecord of the decoder configuration box.
The tile base track may be a track with sample entry type 'gpeb' or 'gpcb'. On the other hand, when GPCCScalabilityInfoBox is present in a track having the sample entry type 'gpe1', 'gpeg', 'gpc1', 'gpcg', 'gpcb', or 'gpeb', it may indicate that temporal scalability is supported, and may provide information about the temporal levels present in the track. Such a box may not be present in a track if temporal scalability is not used. Moreover, if all frames are signaled at a single temporal level, such a box may not be present in tracks with sample entry types 'gpe1', 'gpeg', 'gpc1', 'gpcg', 'gpcb', or 'gpeb'. On the other hand, such a box may not be present in a track with the sample entry type 'gpt1'. A G-PCC track that includes a box for temporal scalability information (i.e., a temporal scalability information box) in the sample entry may be expressed as a temporal level track.
Furthermore, when temporal scalability is supported, it is not clear from the current temporal scalability information whether interleaving between samples belonging to different temporal levels is allowed.
In addition, when the information on a plurality of temporal level tracks that may be included in the temporal scalability information, that is, the temporal level track information (for example, information indicating whether the G-PCC content is carried in a plurality of tracks, that is, information on the presence of a plurality of temporal level tracks in a G-PCC file), has a first value (for example, 1), it may indicate that G-PCC bit stream frames may be grouped into a plurality of temporal level tracks. On the other hand, if it has a second value (e.g., 0), it may indicate that all temporal level samples are present in a single track. However, even if the temporal level track information has the first value, it is not clearly defined whether this means that there are a plurality of temporal levels for all tracks. That is, it is not clearly defined whether the first value applies to all tracks (whether every track has multiple temporal levels).
In addition, for the temporal level information (e.g., information about a temporal level identifier such as temporal_level_id) that may be included in the temporal scalability information, it is unclear whether the values of the temporal identifiers should be continuous or discrete. Thus, when processing G-PCC content in multiple tracks, problems may occur in defining the track structure, and an undesirable situation may occur in terms of signaling efficiency.
To solve the above-described problems, according to the present disclosure, the temporal scalability information may be further specified to define whether interleaving between samples of different temporal levels is allowed, and what the temporal level track information indicates may be specified. In addition, a relationship between the temporal level identifiers (i.e., temporal identifiers) of tracks may be defined, and whether the values of the temporal level identifiers are discrete may be defined.
Hereinafter, the technology proposed by the present disclosure will be described in detail using embodiments.
Embodiment 1-time level information and sample interleaving
As an example, time level information and sample interleaving will be described with reference to the syntax expressed in fig. 19 to 22.
As described above, temporal level information may be included in the temporal scalability information and signaled.
Referring to figs. 19 to 22, when multiple_temporal_level_tracks_flag, which may be included as the temporal level track information, has a first value (e.g., 1), it may indicate that multiple temporal level tracks exist in the G-PCC file. On the other hand, when it has a second value (e.g., 0), it may indicate that all temporal level samples are present in a single track. Further, when the temporal scalability information box is present in a track (e.g., a tile base track) having a sample entry type of 'gpcb' or 'gpeb', if multiple_temporal_level_tracks_flag has the first value, it may indicate that all tracks associated with the track include samples of all temporal levels. On the other hand, if it has the second value, it may indicate that there are one or more tracks associated with the track that do not include samples of all temporal levels.
The frame_rate_present_flag may indicate whether average frame rate information exists. A first value of frame_rate_present_flag (e.g., 1) may indicate that average frame rate information is present, and a second value of frame_rate_present_flag (e.g., 0) may indicate that average frame rate information is not present.
num_temporal_levels, which may be included in the temporal level information, may indicate the number of temporal levels present in the samples of each track. When the sample entry type is 'gpcb' or 'gpeb', num_temporal_levels may indicate the maximum number of temporal levels into which the G-PCC frames are grouped, and the minimum value may be 1.
level_idc may indicate the level code of the i-th temporal level.
frame_rate may indicate the average frame rate of the temporal level, in units of frames per 256 seconds. When the value of frame_rate is 0, this may indicate an unspecified average frame rate.
The value of the temporal identifier indicated by the information about the temporal identifier (e.g., temporal_level_id) that may be included in the temporal level information may increase discretely, and the information about the temporal identifier may be signaled separately. Here, the interval between temporal identifiers may be fixed to a specific value. For example, the temporal identifier value may be increased or decreased at intervals equal to an integer value a. For example, in the case of a temporal level whose temporal identifier is x, the identifier of the next temporal level may be x+a, the identifier of the level after that may be x+2a, and a may be any integer. For example, a may be 1. As an example, the temporal identifier may directly indicate a temporal level. For example, when there are two or more tracks and the highest temporal identifier of the samples of one track is x, if an application needs to process samples of a temporal level higher than those of that track, it needs to find a track including samples having a temporal identifier such as x+1.
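The syntax shown in figs. 19 to 22 is not reproduced in this text; the sketch below illustrates one plausible layout of a temporal scalability information box carrying the fields just described. The box code 'gsci' and the field bit widths are assumptions for illustration only.
// Hedged sketch; the 'gsci' code and the field widths are assumptions.
aligned(8) class GPCCScalabilityInfoBox extends FullBox('gsci', 0, 0) {
    unsigned int(1)  multiple_temporal_level_tracks_flag;
    unsigned int(1)  frame_rate_present_flag;
    bit(6)           reserved = 0;
    unsigned int(8)  num_temporal_levels;
    for (i = 0; i < num_temporal_levels; i++) {
        unsigned int(8)  temporal_level_id;   // discrete values, e.g. x, x+a, x+2a, ...
        unsigned int(8)  level_idc;           // level code of the i-th temporal level
        if (frame_rate_present_flag)
            unsigned int(16) frame_rate;      // frames per 256 seconds; 0 = unspecified
    }
}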
Meanwhile, referring to fig. 20, for temporal identifier signaling, only information on the lowest temporal identifier (e.g., lowest_temporal_id) may be signaled. In this case, the remaining temporal identifiers may be derived based on the information indicating the lowest temporal identifier and other information indicating the number of temporal levels. In this case, the lowest temporal identifier may be a predefined value, or may be 0, and the interval between temporal identifier values may be a predefined value. Alternatively, the temporal identifier of the lowest temporal level is not limited to being equal to 0, but may be any number within the range allowed by the number of bits allocated for signaling the temporal identifier.
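A hedged sketch of the derivation just described: given the signaled lowest temporal identifier, the index of a level, and a fixed step between levels (1 in the simplest case), the remaining identifiers follow directly. The function and parameter names are hypothetical.
```c
/* Derive the i-th temporal level identifier from the signaled lowest identifier,
 * assuming a fixed step between consecutive levels (step = 1 in the simplest case). */
unsigned derive_temporal_level_id(unsigned lowest_temporal_id,
                                  unsigned level_index, unsigned step)
{
    return lowest_temporal_id + level_index * step;
}
```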
On the other hand, when there are two or more tracks and there is a reference between the tracks, i.e., when a second track references a first track, the second track may be associated with the first track such that the second track includes samples of the temporal level next to the highest temporal level of the first track. Conversely, when the first track references the second track, the association may likewise mean that the second track includes samples of the temporal level next to the highest temporal level of the first track. That is, a reference between tracks may indicate that there is a relationship between the temporal identifiers of the tracks. In other words, when a reference between tracks occurs, one track may include samples having temporal identifiers that are greater than or less than those of the track it references. For example, if one track TrackB is the next temporal level track of another track TrackA, TrackB may include samples whose temporal identifiers are equal to the highest temporal identifier of TrackA plus 1. Meanwhile, specific information (or a syntax element) indicating the track reference may be further defined and signaled separately.
Referring to fig. 21, information specifying whether a track includes the lowest temporal level (e.g., has_base_temporal_level_id) may be included in the temporal level information and separately signaled. As an example, the information may be a flag, and if the value is a first value (e.g., 1), it may indicate that the track includes samples belonging to the lowest/base temporal level. When the value is a second value (e.g., 0), it may indicate that the track does not include samples belonging to the lowest/base temporal level. Meanwhile, track reference information (e.g., 'tsrf') indicating that one track references another track may be defined.
Referring to fig. 22, when temporal scalability is supported and coded G-PCC data is stored in a plurality of tracks, a track may include only samples belonging to consecutive temporal levels. In other words, when temporal scalability is applied, one track may include samples belonging to consecutive temporal levels. That is, as described above, the temporal identifier values representing the temporal levels may themselves be discrete, but a track may include samples belonging to temporal levels in a consecutive order. For example, if the temporal identifiers corresponding to the temporal levels are 0, 1, 2, 3, the first track may not include only the samples with temporal identifiers 0 and 2, and the second track may not include only the samples with temporal identifiers 1 and 3; this may be based on the sample entry type of the tracks. For example, only samples belonging to consecutive temporal levels may be included if the sample entry type is 'gpe1', 'gpeg', 'gpc1', or 'gpcg' and one or more temporal level tracks are present. That is, when one track includes samples corresponding to temporal identifiers from n to k, there may be no other track including samples corresponding to temporal identifiers from n to k. In this case, interleaving between temporal levels may not be allowed. A sketch of these constraints is given after this paragraph.
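The constraints described above, namely that each track carries a contiguous run of temporal levels and that no temporal level appears in more than one track, can be checked with a sketch like the following; the structures and names are hypothetical.
```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical per-track record of the temporal identifiers carried in its samples,
 * stored in ascending order. */
typedef struct {
    unsigned temporal_ids[32];
    size_t   count;
} TrackTemporalInfo;

/* A track satisfies the constraint if its temporal identifiers form one contiguous run. */
static bool is_contiguous(const TrackTemporalInfo *t, unsigned step)
{
    for (size_t i = 1; i < t->count; i++)
        if (t->temporal_ids[i] != t->temporal_ids[i - 1] + step)
            return false;
    return true;
}

/* Two temporal level tracks must not share any temporal level. */
static bool are_exclusive(const TrackTemporalInfo *a, const TrackTemporalInfo *b)
{
    for (size_t i = 0; i < a->count; i++)
        for (size_t j = 0; j < b->count; j++)
            if (a->temporal_ids[i] == b->temporal_ids[j])
                return false;
    return true;
}
```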
Thus, by defining rules for the temporal identifier values, the track structure of multi-track G-PCC content can be clearly defined, image encoding/decoding efficiency and speed can be improved, and information can be efficiently signaled within the maximum number of bits of the temporal identifier through the way in which the temporal identifier value increases. Further, a track containing samples of a temporal level higher than the maximum temporal level of one track can be referenced and set as the next temporal level track, thereby improving image encoding/decoding efficiency and speed.
Embodiment 2-temporal level track information
The time-level track information will be described with reference to fig. 23.
As an example, frame_rate_present_flag, num_temporal_levels, level_idc, frame_rate, etc. are as described above.
Meanwhile, when the value of the temporal level track information (e.g., multiple_temporal_level_tracks_flag) is a first value (e.g., 0), it may indicate that the G-PCC content is not stored in two or more tracks, that is, is stored in only one track. On the other hand, if the value is a second value (e.g., 1), it may indicate that the G-PCC content may be stored in one or more tracks.
Thus, according to the present disclosure, when the temporal level track information indicates that the G-PCC bit stream frames are grouped into a plurality of temporal level tracks, it clearly indicates whether there are a plurality of temporal levels for all tracks, and thus encoding and decoding efficiency can be improved.
Embodiment 3-encoding and decoding Process
Fig. 24 illustrates an example of a method performed by a receiving apparatus of point cloud data, and fig. 25 illustrates an example of a method performed by a transmitting apparatus of point cloud data. As an example, the reception device or the transmission device may include the reception device or the transmission device described in the present disclosure with reference to the drawings, and may be the same as the reception device or the transmission device assumed in the description of the above-described embodiments. That is, it is apparent that the receiving apparatus performing the method of fig. 24 and the transmitting apparatus performing the method of fig. 25 may also implement the other embodiments described above.
As an example, referring to fig. 24, the receiving device may acquire time scalability information of a point cloud in a three-dimensional space based on a G-PCC file (S2401). The G-PCC file may be obtained by transmission from a transmitting device. Thereafter, the receiving device may reconstruct the 3D point cloud based on the temporal scalability information (S2402), and the temporal scalability information may include a first syntax element for a temporal level identifier of the samples in the temporal level track, and a value of the temporal level identifier may be expressed as a discrete value.
As another example, referring to fig. 25, the transmitting device may determine whether to apply temporal scalability to point cloud data in a three-dimensional space (S2501), and may generate a G-PCC file including temporal scalability information and point cloud data. Here, the temporal scalability information may include a first syntax element for a temporal level identifier of a sample in the temporal level track, and a value of the temporal level identifier may be expressed as a discrete value.
For example, the first syntax element may be temporal_level_id. As an example, the identifier values of the temporal levels may be discrete values with the same interval between values, and the interval may be 1. Meanwhile, samples of different temporal level identifiers may be included in each temporal level track. That is, a temporal level of the samples included in one track may not be included in another track. On the other hand, the temporal level tracks may include two or more temporal level tracks (e.g., a first temporal level track (first track) and a second temporal level track (second track)). When the second temporal level track is the next track to the first temporal level track, the second temporal level track may include samples whose temporal level identifiers are greater than the identifier value of the maximum temporal level of the first temporal level track. For example, when the identifier value of the maximum temporal level of the first temporal level track is x, the identifier value of the maximum temporal level of the second temporal level track may be x+a, where a may be 1; at this time, the first track (first temporal level track) may reference the second track (second temporal level track), and the second track may also reference the first track. Meanwhile, the temporal scalability information may further include a second syntax element (e.g., multiple_temporal_level_tracks_flag) indicating whether there are a plurality of temporal level tracks; a first value of the second syntax element may indicate that there is one temporal level track, and a second value of the second syntax element may indicate that there are a plurality of temporal level tracks. For example, the first value may be 0 and the second value may be 1. Meanwhile, a temporal level track may include only samples of consecutive temporal levels. That is, in one temporal level track, when the interval between temporal identifiers is an arbitrary integer a, the track may include only samples whose temporal identifiers are x, x+a, x+2a, and so on. Furthermore, only samples of different temporal levels may be included between temporal level tracks. That is, samples of a particular temporal level may be contained only in a particular track, and may not be contained in other tracks. That is, the tracks may be mutually exclusive with respect to the temporal levels of their samples. In this case, references may be made between tracks, but interleaving between temporal levels may not be allowed.
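As a rough illustration of the receiving-side flow of figs. 24 and 25, the following hedged C sketch selects the temporal level tracks that contribute samples up to a target temporal level before reconstruction; all structure and function names are hypothetical.
```c
#include <stddef.h>

/* Hypothetical view of one temporal level track after parsing the scalability information box. */
typedef struct {
    unsigned lowest_temporal_id;
    unsigned highest_temporal_id;
    /* ... handle to the track's samples ... */
} TemporalLevelTrack;

/* Select every track that contributes samples at or below the target temporal level.
 * Returns the number of selected track indices written to 'selected'. */
size_t select_tracks(const TemporalLevelTrack *tracks, size_t n,
                     unsigned target_level, size_t *selected)
{
    size_t m = 0;
    for (size_t i = 0; i < n; i++)
        if (tracks[i].lowest_temporal_id <= target_level)
            selected[m++] = i;
    return m;
}
```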
According to embodiments of the present disclosure, image encoding/decoding efficiency may be improved by specifying semantics of temporal scalability information.
The scope of the present disclosure includes software or machine-executable instructions (e.g., operating system, application programs, firmware, programs, etc.) that cause the operation of the methods according to various embodiments to be performed on a device or computer, as well as non-transitory computer-readable media in which such software, instructions, etc., are stored and which are executable on a device or computer.
INDUSTRIAL APPLICABILITY
Embodiments according to the present disclosure may be used to provide point cloud content. Further, embodiments according to the present disclosure may be used to encode/decode point cloud data.

Claims (13)

1. A method performed by a receiving device of point cloud data, the method comprising the steps of:
Obtaining time scalability information of a point cloud in a three-dimensional space based on the G-PCC file; and
Reconstructing a three-dimensional point cloud based on the temporal scalability information,
Wherein the temporal scalability information comprises a first syntax element of a temporal level identifier for samples in a temporal level track, and
Wherein the value of the time level identifier is represented as a discrete value.
2. The method of claim 1, wherein the value of the temporal level identifier is a discrete value of an interval having the same value, and the interval is 1.
3. The method of claim 2, wherein samples of different time level identifiers are included in each time level track.
4. The method according to claim 2,
Wherein the time level tracks include a first time level track and a second time level track, and
Wherein when the second temporal level track is the next track to the first temporal level track, the second temporal level track comprises a sample of temporal level identifiers that are greater than the value of the maximum temporal level identifier of the first temporal level track.
5. The method of claim 4, wherein the second temporal level track comprises a sample having an identifier value obtained by adding 1 to the value of the maximum temporal level identifier of the first temporal level track.
6. The method according to claim 1,
Wherein the temporal scalability information further comprises a second syntax element for whether a plurality of temporal level tracks exist,
Wherein the first value of the second syntax element indicates that only one temporal level track exists, an
Wherein a second value of the second syntax element indicates that there are a plurality of temporal level tracks.
7. The method of claim 6, wherein the first value is 0.
8. The method of claim 6, wherein the second value is 1.
9. The method of claim 1, wherein the temporal level track comprises only samples of consecutive temporal levels.
10. The method of claim 1, wherein the temporal level of the samples included in the first temporal level track is different from the temporal level of the samples included in the second temporal level track.
11. A method performed by a transmitting device of point cloud data, the method comprising the steps of:
Determining whether temporal scalability is applied to the point cloud data in the three-dimensional space; and
Generating a G-PCC file by including temporal scalability information and the point cloud data,
Wherein the temporal scalability information comprises a first syntax element of an identifier of a temporal level for a sample in a temporal level track, and
Wherein the identifier value of the temporal level is represented as a discrete value.
12. A receiving device of point cloud data, the receiving device comprising:
A memory; and
At least one processor,
Wherein the at least one processor is configured to:
Acquiring time scalability information of point clouds in a three-dimensional space; and
Reconstructing a three-dimensional point cloud based on the temporal scalability information,
Wherein the temporal scalability information comprises a first syntax element of an identifier of a temporal level for a sample in a temporal level track, and
Wherein the identifier value of the temporal level is represented as a discrete value.
13. A transmitting apparatus of point cloud data, the transmitting apparatus comprising:
A memory; and
At least one processor,
Wherein the at least one processor is configured to:
Determining whether temporal scalability is applied to the point cloud data in the three-dimensional space; and
Generating a G-PCC file by including temporal scalability information and the point cloud data,
Wherein the temporal scalability information comprises a first syntax element of an identifier of a temporal level for a sample in a temporal level track, and
Wherein the identifier value of the temporal level is represented as a discrete value.