
EP4451692A1 - Transmission device for point cloud data, method performed by said transmission device, reception device for point cloud data, and method performed by said reception device - Google Patents

Transmission device for point cloud data, method performed by said transmission device, reception device for point cloud data, and method performed by said reception device Download PDF

Info

Publication number
EP4451692A1
Authority
EP
European Patent Office
Prior art keywords
track
pcc
point cloud
information
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP22907910.8A
Other languages
German (de)
French (fr)
Inventor
Hendry Hendry
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LG Electronics Inc
Original Assignee
LG Electronics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Electronics Inc filed Critical LG Electronics Inc
Publication of EP4451692A1
Legal status: Pending

Classifications

    • H ELECTRICITY > H04 ELECTRIC COMMUNICATION TECHNIQUE > H04N PICTORIAL COMMUNICATION, e.g. TELEVISION > H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/2353 Processing of additional data specifically adapted to content descriptors, e.g. coding, compressing or processing of metadata
    • H04N 21/85406 Content authoring involving a specific file format, e.g. MP4 format
    • H04N 21/8146 Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics
    • H04N 21/816 Monomedia components thereof involving special video data, e.g. 3D video
    • H04N 21/84 Generation or processing of descriptive data, e.g. content descriptors

Definitions

  • the present disclosure relates to a method and device for processing point cloud content.
  • Point cloud content is expressed as a point cloud which is a set of points belonging to a coordinate system representing a three-dimensional space.
  • the point cloud content may represent three-dimensional media and is used to provide various services such as virtual reality (VR), augmented reality (AR), mixed reality (MR) and self-driving services. Since tens of thousands to hundreds of thousands of point data are required to express point cloud content, a method of efficiently processing a vast amount of point data is required.
  • the present disclosure provides a device and method for efficiently processing point cloud data.
  • the present disclosure provides a point cloud data processing method and device for solving latency and encoding/decoding complexity.
  • the present disclosure provides a device and method for supporting 'tele' sample grouping according to the number of temporal levels in a geometry track.
  • the present disclosure proposes a device and method for a file storage technique to support efficient access to a stored G-PCC bitstream.
  • a method performed by a reception device of point cloud data may comprise obtaining a geometry-based point cloud compression (G-PCC) file including the point cloud data, obtaining temporal level number information indicating the number of temporal levels in a track within the G-PCC file from the G-PCC file, and obtaining sample group information of samples in the track from the G-PCC file based on the temporal level number information.
  • the sample group information may be obtained based on the temporal level number information indicating that there is a plurality of temporal levels in the track.
  • a method performed by a transmission device of point cloud data may comprise generating temporal level number information indicating the number of temporal levels in a track within a geometry-based point cloud compression (G-PCC) file, generating sample group information which is information on grouping of samples in the track based on the number of temporal levels, and generating the G-PCC file including the temporal level number information.
  • the sample group information may be generated based on there being a plurality of temporal levels.
  • a reception device of point cloud data may comprise a memory and at least one processor.
  • the at least one processor may obtain a geometry-based point cloud compression (G-PCC) file including the point cloud data, obtain temporal level number information indicating the number of temporal levels in a track within the G-PCC file from the G-PCC file, and obtain sample group information of samples in the track from the G-PCC file based on the temporal level number information.
  • the sample group information may be obtained based on the temporal level number information indicating that there is a plurality of temporal levels in the track.
  • a transmission device of point cloud data may comprise a memory and at least one processor.
  • the at least one processor may generate temporal level number information indicating the number of temporal levels in a track within a geometry-based point cloud compression (G-PCC) file, generate sample group information which is information on grouping of samples in the track based on the number of temporal levels and generate the G-PCC file including the temporal level number information.
  • the sample group information may be generated based on there being a plurality of temporal levels.
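  • as an illustration of the conditional signalling described in the bullets above, the following sketch obtains 'tele' sample group information only when a track carries a plurality of temporal levels; the field and accessor names (num_temporal_levels, get_sample_group_description) are hypothetical, not the normative G-PCC file-format syntax.

```python
# Illustrative sketch only: the field/accessor names below are assumptions,
# not the normative G-PCC ISOBMFF syntax.

def read_track_sample_groups(track):
    """Obtain 'tele' sample-group info only when the track is multi-temporal."""
    num_temporal_levels = track.decoder_config.num_temporal_levels  # hypothetical field
    if num_temporal_levels > 1:
        # Samples are grouped by temporal level via the 'tele' grouping type.
        return track.get_sample_group_description('tele')  # hypothetical accessor
    # A single temporal level: no 'tele' grouping is signalled, saving bits.
    return None
```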
  • the device and method according to embodiments of the present disclosure may process point cloud data with high efficiency.
  • the device and method according to embodiments of the present disclosure may provide point cloud content for providing universal services such as a VR service and a self-driving service.
  • the device and method according to embodiments of the present disclosure may provide temporal scalability for effectively accessing a desired component among G-PCC components.
  • the device and method according to the embodiments of the present disclosure can reduce bits by performing 'tele' sample grouping when a track having a plurality of temporal levels is present.
  • the device and method according to the embodiments of the present disclosure can reduce bits by performing 'tele' sample grouping when a geometry track having a plurality of temporal levels is present.
  • when a component is “connected”, “coupled” or “linked” to another component, it may include not only a direct connection relationship but also an indirect connection relationship in which another component exists therebetween.
  • when it is said that a component “includes” or “has” another component, this indicates that other components are not excluded, but may be further included unless specially described.
  • first, second, etc. are used only for the purpose of distinguishing one component from other components, and, unless otherwise specified, the order or importance of the components is not limited. Accordingly, within the scope of the present disclosure, a first component in one embodiment may be referred to as a second component in another embodiment, and, similarly, a second component in one embodiment may be referred to as a first component in another embodiment.
  • components that are distinguished from each other are for clearly explaining features thereof, and do not necessarily mean that the components are separated. That is, a plurality of components may be integrated to form one hardware or software unit, or one component may be distributed to form a plurality of hardware or software units. Accordingly, even if not specifically mentioned, such integrated or distributed embodiments are also included in the scope of the present disclosure.
  • components described in various embodiments do not necessarily mean essential components, and some thereof may be optional components. Accordingly, an embodiment composed of a subset of components described in one embodiment is also included in the scope of the present disclosure. In addition, embodiments including other components in addition to components described in various embodiments are also included in the scope of the present disclosure.
  • the present disclosure relates to encoding and decoding of point cloud-related data, and terms used in the present disclosure may have general meanings commonly used in the technical field to which the present disclosure belongs unless they are newly defined in the present disclosure.
  • the term “/” and “,” should be interpreted to indicate “and/or.”
  • the expression “A/B” and “A, B” may mean “A and/or B.”
  • “A/B/C” and “A, B, C” may mean “at least one of A, B, and/or C.”
  • the term “or” should be interpreted to indicate “and/or.”
  • the expression “A or B” may comprise 1) only “A”, 2) only “B”, and/or 3) both “A and B”.
  • the term “or” should be interpreted to indicate "additionally or alternatively.”
  • the present disclosure relates to compression of point cloud-related data.
  • Various methods or embodiments of the present disclosure may be applied to a point cloud compression or point cloud coding (PCC) standard (e.g., G-PCC or V-PCC standard) of a moving picture experts group (MPEG) or a next-generation video/image coding standard.
  • a "point cloud” may mean a set of points located in a three-dimensional space.
  • point cloud content is expressed as a point cloud, and may mean a "point cloud video/image”.
  • the 'point cloud video/image' is referred to as a 'point cloud video'.
  • a point cloud video may include one or more frames, and one frame may be a still image or a picture. Accordingly, the point cloud video may include a point cloud image/frame/picture, and may be referred to as any one of a "point cloud image", a "point cloud frame", and a "point cloud picture".
  • point cloud data may mean data or information related to each point in the point cloud.
  • Point cloud data may include geometry and/or attribute.
  • the point cloud data may further include metadata.
  • the point cloud data may be referred to as “point cloud content data” or “point cloud video data” or the like.
  • point cloud data may be referred to as "point cloud content”, “point cloud video”, “G-PCC data”, and the like.
  • a point cloud object corresponding to point cloud data may be represented in a box shape based on a coordinate system, and the box shape based on the coordinate system may be referred to as a bounding box. That is, the bounding box may be a rectangular cuboid capable of accommodating all points of the point cloud, and may be a cuboid including a source point cloud frame.
  • geometry includes the position (or position information) of each point, and the position may be expressed by parameters (e.g., an x-axis value, a y-axis value, and a z-axis value) representing a three-dimensional coordinate system (e.g., a coordinate system consisting of an x-axis, y-axis, and z-axis).
  • the geometry may be referred to as "geometric information”.
  • the attribute may include properties of each point, and the properties may include one or more of texture information, color (RGB or YCbCr), reflectance (r), transparency, etc. of each point.
  • the attribute may be referred to as "attribute information”.
  • Metadata may include various data related to acquisition in an acquisition process to be described later.
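  • as a minimal illustration of the geometry/attribute split described above, the following sketch models a point as a position (geometry) plus per-point properties (attributes); the field names and default values are illustrative only.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Point:
    # Geometry: the position of the point in a three-dimensional coordinate system.
    x: float
    y: float
    z: float
    # Attributes: per-point properties such as color (RGB) and reflectance.
    rgb: Tuple[int, int, int] = (0, 0, 0)
    reflectance: float = 0.0

# A point cloud frame is a set (here, a list) of such points.
frame: List[Point] = [Point(1.0, 2.0, 3.0, rgb=(255, 0, 0), reflectance=0.5)]
```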
  • FIG. 1 illustrates an example of a system for providing point cloud content (hereinafter, referred to as a 'point cloud content provision system') according to embodiments of the present disclosure.
  • FIG. 2 illustrates an example of a process in which the point cloud content provision system provides point cloud content.
  • the point cloud content provision system may include a transmission device 10 and a reception device 20.
  • the point cloud content provision system may perform an acquisition process S20, an encoding process S21, a transmission process S22, a decoding process S23, a rendering process S24 and/or a feedback process S25 shown in FIG. 2 by operation of the transmission device 10 and the reception device 20.
  • the transmission device 10 acquires point cloud data and outputs a bitstream through a series of processes (e.g., encoding process) for the acquired point cloud data (source point cloud data), in order to provide point cloud content.
  • the point cloud data may be output in the form of a bitstream through an encoding process.
  • the transmission device 10 may transmit the output bitstream in the form of a file or streaming (streaming segment) to the reception device 20 through a digital storage medium or a network.
  • the digital storage medium may include a variety of storage media such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD.
  • the reception device 20 may process (e.g., decode or reconstruct) the received data (e.g., encoded point cloud data) into source point cloud data and render it.
  • the point cloud content may be provided to the user through these processes, and the present disclosure may provide various embodiments necessary to effectively perform a series of these processes.
  • the transmission device 10 may include an acquisition unit 11, an encoding unit 12, an encapsulation processing unit 13 and a transmission unit 14, and the reception device 20 may include a reception unit 21, a decapsulation processing unit 22, a decoding unit 23, and a rendering unit 24.
  • the acquisition unit 11 may perform a process S20 of acquiring a point cloud video through a capturing, synthesizing or generating process. Accordingly, the acquisition unit 11 may be referred to as a 'point cloud video acquisition unit'.
  • Point cloud data (geometry and/or attribute, etc.) for a plurality of points may be generated by the acquisition process (S20). Also, through the acquisition process (S20), metadata related to the acquisition of the point cloud video may be generated. Also, mesh data (e.g., triangular data) indicating connection information between point clouds may be generated by the acquisition process (S20).
  • the metadata may include initial viewing orientation metadata.
  • the initial viewing orientation metadata may indicate whether the point cloud data is data representing the front or the back.
  • the metadata may be referred to as "auxiliary data" that is metadata for the point cloud.
  • the acquired point cloud video may be composed of files in the Polygon File Format (PLY, also known as the Stanford Triangle Format). Since the point cloud video has one or more frames, the acquired point cloud video may include one or more PLY files.
  • the PLY file may include point cloud data of each point.
  • the acquisition unit 11 may be composed of a combination of camera equipment capable of acquiring depth (depth information) and RGB cameras capable of extracting color information corresponding to the depth information.
  • the camera equipment capable of acquiring the depth information may be a combination of an infrared pattern projector and an infrared camera.
  • the acquisition unit 11 may be composed of a LiDAR, and the LiDAR may measure the position coordinates of a reflector by measuring the time required for an emitted laser pulse to return after being reflected.
  • the acquisition unit 11 may extract a shape of geometry composed of points in a three-dimensional space from the depth information, and may extract an attribute representing a color or reflectance of each point from the RGB information.
  • as a method of extracting (or capturing, acquiring, etc.) a point cloud video, there may be an inward-facing method of capturing a central object and an outward-facing method of capturing an external environment.
  • the encoding unit 12 may perform the encoding process (S21) of encoding the data (e.g., geometry, attribute and/or metadata, and/or mesh data, etc.) generated by the acquisition unit 11 into one or more bitstreams. Accordingly, the encoding unit 12 may be referred to as a 'point cloud video encoder'. The encoding unit 12 may encode the data generated by the acquisition unit 11 in series or in parallel.
  • the encoding process S21 performed by the encoding unit 12 may be geometry-based point cloud compression (G-PCC).
  • the encoding unit 12 may perform a series of procedures such as prediction, transform, quantization, and entropy coding for compression and coding efficiency.
  • the encoded point cloud data may be output in the form of a bitstream.
  • the encoding unit 12 may partition the point cloud data into geometry and attribute and encode them as described below.
  • the output bitstream may include a geometry bitstream including the encoded geometry and an attribute bitstream including the encoded attribute.
  • the output bitstream may further include one or more of a metadata bitstream including metadata, an auxiliary bitstream including auxiliary data, and a mesh data bitstream including mesh data.
  • the encoding process (S21) will be described in more detail below.
  • a bitstream including the encoded point cloud data may be referred to as a 'point cloud bitstream' or a 'point cloud video bitstream'.
  • the encapsulation processing unit 13 may perform a process of encapsulating one or more bitstreams output from the encoding unit 12 in the form of a file or a segment. Accordingly, the encapsulation processing unit 13 may be referred to as a 'file/segment encapsulation module'. Although the drawing shows an example in which the encapsulation processing unit 13 is composed of a separate component/module in relation to the transmission unit 14, the encapsulation processing unit 13 may be included in the transmission unit 14 in some embodiments.
  • the encapsulation processing unit 13 may encapsulate the data in a file format such as ISO Base Media File Format (ISOBMFF) or process the data in the form of DASH segments or the like.
  • the encapsulation processing unit 13 may include metadata in a file format. Metadata may be included, for example, in boxes of various levels in the ISOBMFF file format, or as data in a separate track within the file.
  • the encapsulation processing unit 13 may encapsulate the metadata itself into a file.
  • the metadata processed by the encapsulation processing unit 13 may be transmitted from a metadata processing unit not shown in the drawing.
  • the metadata processing unit may be included in the encoding unit 12 or may be configured as a separate component/module.
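  • for illustration, the following sketch serializes a basic ISOBMFF box (a 32-bit big-endian size field covering the header, a four-character type code, and a payload); wrapping the encoded bitstream in an 'mdat' box is shown as a simplification, and the actual G-PCC carriage layout (sample entries, parameter boxes, tracks) is omitted.

```python
import struct

def make_box(box_type: bytes, payload: bytes) -> bytes:
    """Serialize a basic ISOBMFF box: 32-bit size (header included) + fourcc + payload."""
    assert len(box_type) == 4
    return struct.pack('>I', 8 + len(payload)) + box_type + payload

# Simplified example: media data box carrying an encoded point cloud bitstream.
mdat = make_box(b'mdat', b'...encoded G-PCC bitstream...')
```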
  • the transmission unit 14 may perform the transmission process (S22) of applying processing (processing for transmission) according to a file format to the 'encapsulated point cloud bitstream'.
  • the transmission unit 14 may transmit the bitstream or a file/segment including the bitstream to the reception unit 21 of the reception device 20 through a digital storage medium or a network. Accordingly, the transmission unit 14 may be referred to as a 'transmitter' or a 'communication module'.
  • the transmission unit 14 may process point cloud data according to an arbitrary transmission protocol.
  • 'processing the point cloud data according to the arbitrary transmission protocol' may be 'processing for transmission'.
  • the processing for transmission may include processing for transmission through a broadcast network, processing for transmission through a broadband, and the like.
  • the transmission unit 14 may receive not only point cloud data but also metadata from the metadata processing unit, and may perform processing for transmission on the transmitted metadata.
  • the processing for transmission may be performed by the transmission processing unit, and the transmission processing unit may be included in the transmission unit 14 or configured as a component/module separate from the transmission unit 14.
  • the reception unit 21 may receive the bitstream transmitted by the transmission device 10 or a file/segment including the bitstream. Depending on the transmitted channel, the reception unit 21 may receive a bitstream or a file/segment including the bitstream through a broadcast network, or may receive a bitstream or a file/segment including the bitstream through a broadband. Alternatively, the reception unit 21 may receive a bitstream or a file/segment including the bitstream through a digital storage medium.
  • the reception unit 21 may perform processing according to a transmission protocol on the received bitstream or the file/segment including the bitstream.
  • the reception unit 21 may perform a reverse process of transmission processing (processing for transmission) to correspond to processing for transmission performed by the transmission device 10.
  • the reception unit 21 may transmit the encoded point cloud data among the received data to the decapsulation processing unit 22 and may transmit metadata to a metadata parsing unit.
  • the metadata may be in the form of a signaling table. In some embodiments, the reverse process of the processing for transmission may be performed in the reception processing unit.
  • Each of the reception processing unit, the decapsulation processing unit 22, and the metadata parsing unit may be included in the reception unit 21 or may be configured as a component/module separate from the reception unit 21.
  • the decapsulation processing unit 22 may decapsulate the point cloud data (i.e., a bitstream in a file format) in a file format received from the reception unit 21 or a reception processing unit. Accordingly, the decapsulation processing unit 22 may be referred to as a 'file/segment decapsulation module'.
  • the decapsulation processing unit 22 may acquire a point cloud bitstream or a metadata bitstream by decapsulating files according to ISOBMFF or the like.
  • metadata (metadata bitstream) may be included in the point cloud bitstream.
  • the acquired point cloud bitstream may be transmitted to the decoding unit 23, and the acquired metadata bitstream may be transmitted to the metadata processing unit.
  • the metadata processing unit may be included in the decoding unit 23 or may be configured as a separate component/module.
  • the metadata obtained by the decapsulation processing unit 22 may be in the form of a box or track in a file format. If necessary, the decapsulation processing unit 22 may receive metadata required for decapsulation from the metadata processing unit.
  • the metadata may be transmitted to the decoding unit 23 and used in the decoding process (S23), or may be transmitted to the rendering unit 24 and used in the rendering process (S24).
  • the decoding unit 23 may receive the bitstream and perform operation corresponding to the operation of the encoding unit 12, thereby performing the decoding process (S23) of decoding the point cloud bitstream (encoded point cloud data). Accordingly, the decoding unit 23 may be referred to as a 'point cloud video decoder'.
  • the decoding unit 23 may partition the point cloud data into geometry and attribute and decode them. For example, the decoding unit 23 may reconstruct (decode) geometry from a geometry bitstream included in the point cloud bitstream, and restore (decode) attribute based on the reconstructed geometry and an attribute bitstream included in the point cloud bitstream. A three-dimensional point cloud video/image may be reconstructed based on position information according to the reconstructed geometry and attribute (such as color or texture) according to the decoded attribute.
  • the decoding process (S23) will be described in more detail below.
  • the rendering unit 24 may perform the rendering process S24 of rendering the reconstructed point cloud video. Accordingly, the rendering unit 24 may be referred to as a 'renderer'.
  • the rendering process S24 may refer to a process of rendering and displaying point cloud content in a 3D space.
  • the rendering process S24 may perform rendering according to a desired rendering method based on the position information and attribute information of the points decoded through the decoding process.
  • the feedback process S25 may include a process of transmitting various feedback information that may be acquired during the rendering process S24 or the display process to the transmission device 10 or to other components in the reception device 20.
  • the feedback process S25 may be performed by one or more of the components included in the reception device 20 of FIG. 1 or may be performed by one or more of the components shown in FIGS. 10 and 11.
  • the feedback process S25 may be performed by a 'feedback unit' or a 'sensing/tracking unit'.
  • FIG. 3 illustrates an example of a point cloud encoding apparatus 300 according to embodiments of the present disclosure.
  • the point cloud encoding apparatus 300 of FIG. 3 may correspond to the encoding unit 12 of FIG. 1 in terms of the configuration and function.
  • the point cloud encoding apparatus 300 may include a coordinate system transform unit 305, a geometry quantization unit 310, an octree analysis unit 315, an approximation unit 320, a geometry encoding unit 325, a reconstruction unit 330, an attribute transform unit 340, a RAHT transform unit 345, an LOD generation unit 350, a lifting unit 355, an attribute quantization unit 360, an attribute encoding unit 365, and/or a color transform unit 335.
  • the point cloud data acquired by the acquisition unit 11 may undergo processes of adjusting the quality of the point cloud content (e.g., lossless, lossy, near-lossless) according to the network situation or application.
  • each point of the acquired point cloud content may be transmitted without loss, but, in that case, real-time streaming may not be possible because the size of the point cloud content is large. Therefore, in order to provide the point cloud content smoothly, a process of reconstructing the point cloud content according to a maximum target bitrate is required.
  • Processes of adjusting the quality of the point cloud content may be processes of reconstructing and encoding the position information (position information included in the geometry information) or color information (color information included in the attribute information) of the points.
  • a process of reconstructing and encoding position information of points may be referred to as geometry coding, and a process of reconstructing and encoding attribute information associated with each point may be referred to as attribute coding.
  • Geometry coding may include a geometry quantization process, a voxelization process, an octree analysis process, an approximation process, a geometry encoding process, and/or a coordinate system transform process. Also, geometry coding may further include a geometry reconstruction process. Attribute coding may include a color transform process, an attribute transform process, a prediction transform process, a lifting transform process, a RAHT transform process, an attribute quantization process, an attribute encoding process, and the like.
  • the coordinate system transform process may correspond to a process of transforming a coordinate system for positions of points. Therefore, the coordinate system transform process may be referred to as 'transform coordinates'.
  • the coordinate system transform process may be performed by the coordinate system transform unit 305.
  • the coordinate system transform unit 305 may transform the positions of the points from the global space coordinate system to position information in a three-dimensional space (e.g., a three-dimensional space expressed in coordinate system of the X-axis, Y-axis, and Z-axis).
  • Position information in the 3D space may be referred to as 'geometric information'.
  • the geometry quantization process may correspond to a process of quantizing the position information of points, and may be performed by the geometry quantization unit 310.
  • the geometry quantization unit 310 may find position information having minimum (x, y, z) values among the position information of the points, and subtract the minimum (x, y, z) values from the position information of each point.
  • the geometry quantization unit 310 may multiply the subtracted value by a preset quantization scale value, and then round the result to the nearest integer value, thereby performing the quantization process (as sketched below).
  • the voxelization process may correspond to a process of matching geometry information quantized through the quantization process to a specific voxel present in a 3D space.
  • the voxelization process may also be performed by the geometry quantization unit 310.
  • the geometry quantization unit 310 may perform octree-based voxelization based on position information of the points, in order to reconstruct each point to which the quantization process is applied.
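  • the quantization and voxelization steps described above can be sketched as follows; the scale value and the use of NumPy arrays are illustrative assumptions.

```python
import numpy as np

def quantize_positions(points: np.ndarray, scale: float) -> np.ndarray:
    """Subtract the minimum (x, y, z), multiply by the scale, round to integers."""
    shifted = points - points.min(axis=0)               # remove the minimum position
    return np.round(shifted * scale).astype(np.int64)   # quantize to an integer grid

def voxelize(quantized: np.ndarray) -> set:
    """Map quantized positions to occupied voxels (duplicates merge into one voxel)."""
    return {tuple(p) for p in quantized}
```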
  • the geometry encoding process may correspond to a process of performing entropy coding on the occupancy code, i.e., the code indicating, with one bit per child node, which of the eight child nodes of each octree node are occupied by points (as sketched below).
  • the geometry encoding process may be performed by the geometry encoding unit 325.
  • the geometry encoding unit 325 may perform entropy coding on the occupancy code.
  • the generated occupancy code may be immediately encoded or may be encoded through an intra/inter coding process to increase compression efficiency.
  • the reception device 20 may reconstruct the octree through the occupancy code.
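  • for illustration, an occupancy code can be built as in the following sketch, which sets one bit per occupied child octant of a node; the bit-to-octant mapping shown is an assumption, and a real encoder would recurse over the octree rather than scan all voxels.

```python
def occupancy_code(node_origin, half, occupied_voxels) -> int:
    """8-bit code: bit i is set if the i-th child octant of the node contains a point."""
    code = 0
    for i in range(8):
        # Assumed octant ordering: bit 2 -> x, bit 1 -> y, bit 0 -> z.
        ox = node_origin[0] + ((i >> 2) & 1) * half
        oy = node_origin[1] + ((i >> 1) & 1) * half
        oz = node_origin[2] + (i & 1) * half
        if any(ox <= x < ox + half and oy <= y < oy + half and oz <= z < oz + half
               for (x, y, z) in occupied_voxels):
            code |= 1 << i
    return code
```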
  • instead of partitioning a node corresponding to a specific area (a specific area that does not correspond to a leaf node) into 8 sub-nodes (child nodes), the point cloud encoding apparatus 300 may directly transmit the positions of points only for that specific area, or may reconstruct the positions of points within the specific area based on the voxel using a surface model.
  • a mode for directly transmitting the position of each point for a specific node may be a direct mode.
  • the point cloud encoding apparatus 300 may check whether conditions for enabling the direct mode are satisfied.
  • the conditions for enabling the direct mode are: 1) the option to use the direct mode shall be enabled, 2) the specific node shall not correspond to a leaf node, 3) the number of points within the specific node shall be below a threshold, and 4) the total number of points to be directly transmitted shall not exceed a limit value (see the sketch below).
  • the point cloud encoding apparatus 300 may entropy-code and transmit the position value of the point directly for the specific node through the geometry encoding unit 325.
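  • the four enabling conditions above can be summarized as a single predicate, as in this sketch; the threshold and limit values are configuration parameters shown here as illustrative arguments.

```python
def direct_mode_eligible(node, direct_mode_enabled: bool,
                         point_threshold: int, points_sent: int, limit: int) -> bool:
    """Check the four direct-mode conditions listed above (values are illustrative)."""
    return (direct_mode_enabled                            # 1) option enabled
            and not node.is_leaf                           # 2) not a leaf node
            and len(node.points) < point_threshold         # 3) few points in the node
            and points_sent + len(node.points) <= limit)   # 4) total limit not exceeded
```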
  • a mode in which a position of a point in a specific area is reconstructed based on a voxel using a surface model may be a trisoup mode.
  • the trisoup mode may be performed by the approximation unit 320.
  • the approximation unit 320 may determine a specific level of the octree and reconstruct the positions of points in the node area based on the voxel using the surface model from the determined specific level.
  • the point cloud encoding apparatus 300 may selectively apply the trisoup mode. Specifically, the point cloud encoding apparatus 300 may designate a level (specific level) to which the trisoup mode is applied, when the trisoup mode is used. For example, when the specified specific level is equal to the depth (d) of the octree, the trisoup mode may not be applied. That is, the designated specific level shall be less than the depth value of the octree.
  • a three-dimensional cubic area of nodes of the designated specific level is called a block, and one block may include one or more voxels.
  • a block or voxel may correspond to a brick.
  • Each block may have 12 edges, and the approximation unit 320 may check whether each edge is adjacent to an occupied voxel having a point. Each edge may be adjacent to several occupied voxels.
  • a specific position of an edge adjacent to a voxel is called a vertex, and, when a plurality of occupied voxels are adjacent to one edge, the approximation unit 320 may determine the average of their positions as the vertex.
  • the point cloud encoding apparatus 300 may entropy-code the starting point (x, y, z) of the edge, the direction vector (Δx, Δy, Δz) of the edge, and the position value of the vertex (relative position value within the edge) through the geometry encoding unit 325, when a vertex is present.
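  • for illustration, the vertex determination described above reduces to averaging the intersection positions along an edge, as in this sketch (positions are expressed as relative offsets within the edge, an assumed convention).

```python
def edge_vertex(intersections: list):
    """Average the relative positions (0..1 along the edge) of all occupied-voxel
    intersections on one edge to obtain the single vertex for that edge."""
    return sum(intersections) / len(intersections) if intersections else None
```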
  • the geometry reconstruction process may correspond to a process of generating a reconstructed geometry by reconstructing an octree and/or an approximated octree.
  • the geometry reconstruction process may be performed by the reconstruction unit 330.
  • the reconstruction unit 330 may perform a geometry reconstruction process through triangle reconstruction, up-sampling, voxelization, and the like.
  • the reconstruction unit 330 may reconstruct a triangle based on the starting point of the edge, the direction vector of the edge and the position value of the vertex.
  • the reconstruction unit 330 may perform an upsampling process for voxelization by adding points in the middle along the edge of the triangle.
  • the reconstruction unit 330 may generate additional points based on an upsampling factor and the width of the block. These points may be called refined vertices.
  • the reconstruction unit 330 may voxelize the refined vertices, and the point cloud encoding apparatus 300 may perform attribute coding based on the voxelized position value.
  • the geometry encoding unit 325 may increase compression efficiency by applying context adaptive arithmetic coding.
  • the geometry encoding unit 325 may directly entropy-code the occupancy code using the arithmetic code.
  • the geometry encoding unit 325 may adaptively perform encoding based on occupancy of neighbor nodes (intra coding), or based on the occupancy code of a previous frame (inter coding).
  • the frame may mean a set of point cloud data generated at the same time.
  • Intra coding and inter coding are optional processes and thus may be omitted.
  • Attribute coding may correspond to a process of coding attribute information based on reconstructed geometry and geometry before coordinate system transform (source geometry). Since the attribute may be dependent on the geometry, the reconstructed geometry may be utilized for attribute coding.
  • the attribute may include color, reflectance, and the like.
  • the same attribute coding method may be applied to information or parameters included in the attribute. Color has three elements, reflectance has one element, and each element can be processed independently.
  • Attribute coding may include a color transform process, an attribute transform process, a prediction transform process, a lifting transform process, a RAHT transform process, an attribute quantization process, an attribute encoding process, and the like.
  • the prediction transform process, the lifting transform process, and the RAHT transform process may be selectively used, or a combination of one or more thereof may be used.
  • the color transform process may correspond to a process of transforming the format of the color in the attribute into another format.
  • the color transform process may be performed by the color transform unit 335. That is, the color transform unit 335 may transform the color in the attribute.
  • the color transform unit 335 may perform a coding operation for transforming the color in the attribute from RGB to YCbCr.
  • the operation of the color transform unit 335, that is, the color transform process, may be optionally applied according to a color value included in the attribute.
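  • as an illustration of the RGB-to-YCbCr transform mentioned above, the following sketch uses the BT.601 full-range matrix (as in JFIF); this is one common choice and is not necessarily the matrix mandated by the G-PCC specification.

```python
def rgb_to_ycbcr(r: int, g: int, b: int) -> tuple:
    """BT.601 full-range RGB -> YCbCr (JFIF-style); an assumed, common choice."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.4187 * g - 0.0813 * b + 128
    return round(y), round(cb), round(cr)
```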
  • position values for points existing in a voxel may be set to the center point of the voxel so that they are integrated into one piece of point information for the voxel. Accordingly, a process of transforming the values of attributes related to those points may be required. Also, even when the trisoup mode is performed, the attribute transform process may be performed.
  • the attribute transform unit 340 may calculate the central position value of the voxel and an average value of the attribute values of neighbor points within a specific radius. Alternatively, the attribute transform unit 340 may apply a weight according to a distance from the central position to the attribute values and calculate an average value of the attribute values to which the weight is applied. In this case, each voxel has a position and a calculated attribute value.
  • the prediction transform process may correspond to a process of predicting an attribute value of a current point based on attribute values of one or more points (neighbor points) adjacent to the current point (a point corresponding to a prediction target).
  • the prediction transform process may be performed by a level-of-detail (LOD) generation unit 350.
  • LOD level-of-detail
  • Prediction transform is a method to which the LOD transform technique is applied, and the LOD generation unit 350 may calculate and set the LOD value of each point based on the LOD distance value of each point.
  • the LOD generation unit 350 may generate a predictor for each point for prediction transform. Accordingly, when there are N points, N predictors may be generated.
  • the neighbor points may be points existing within a distance set for each LOD from the current point.
  • the predictor may multiply the attribute values of neighbor points by set weight values, and set the average of the weighted attribute values as the predicted attribute value of the current point (as sketched below).
  • An attribute quantization process may be performed on a residual attribute value obtained by subtracting the predicted attribute value of the current point from the attribute value of the current point.
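  • the prediction described above can be sketched as a distance-weighted average of neighbor attributes; the inverse-squared-distance weight below is a common choice assumed here for illustration, the exact weighting rule being defined by the codec.

```python
def predict_attribute(current_pos, neighbors):
    """Weighted average of neighbor attributes; neighbors is a non-empty list of
    (position, attribute) pairs. Weight: inverse squared distance (assumed)."""
    num = 0.0
    den = 0.0
    for pos, attr in neighbors:
        d2 = sum((a - b) ** 2 for a, b in zip(current_pos, pos))
        w = 1.0 / d2 if d2 > 0 else 1e9   # inverse-squared-distance weight
        num += w * attr
        den += w
    return num / den

# The residual passed to quantization is: attribute - predicted attribute.
```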
  • the lifting transform process may correspond to a process of reconstructing points into a set of detail levels through the LOD generation process, like the prediction transform process.
  • the lifting transform process may be performed by the lifting unit 355.
  • the lifting transform process may also include a process of generating a predictor for each point, a process of setting the calculated LOD in the predictor, a process of registering neighbor points, and a process of setting a weight according to distances between the current point and the neighbor points.
  • the RAHT transform process may correspond to a method of predicting attribute information of nodes at a higher level using attribute information associated with nodes at a lower level of the octree. That is, the RAHT transform process may correspond to an attribute information intra coding method through octree backward scan.
  • the RAHT transform process may be performed by the RAHT transform unit 345.
  • the RAHT transform unit 345 scans the entire area in the voxel, and may perform the RAHT transform process up to the root node while summing (merging) the voxel into a larger block at each step. Since the RAHT transform unit 345 performs a RAHT transform process only on an occupied node, in the case of an empty node that is not occupied, the RAHT transform process may be performed on a node at a higher level immediately above it.
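  • the core merging step of the RAHT transform described above can be sketched as a weight-adaptive Haar butterfly over two occupied sibling nodes; this is a simplified illustration of the per-pair transform, not a complete RAHT implementation.

```python
import math

def raht_pair(a1: float, w1: int, a2: float, w2: int):
    """Weight-adaptive Haar step for two occupied siblings: returns the low-pass
    (DC) coefficient, propagated upward with the merged weight w1 + w2, and the
    high-pass (AC) coefficient, which is quantized and entropy-coded."""
    s = math.sqrt(w1 + w2)
    dc = (math.sqrt(w1) * a1 + math.sqrt(w2) * a2) / s
    ac = (-math.sqrt(w2) * a1 + math.sqrt(w1) * a2) / s
    return dc, ac, w1 + w2
```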
  • the attribute quantization process may correspond to a process of quantizing the attribute output from the RAHT transform unit 345, the LOD generation unit 350, and/or the lifting unit 355.
  • the attribute quantization process may be performed by the attribute quantization unit 360.
  • the attribute encoding process may correspond to a process of encoding a quantized attribute and outputting an attribute bitstream.
  • the attribute encoding process may be performed by the attribute encoding unit 365.
  • FIG. 4 illustrates an example of a point cloud decoding apparatus 400 according to an embodiment of the present disclosure.
  • the point cloud decoding apparatus 400 of FIG. 4 may correspond to the decoding unit 23 of FIG. 1 in terms of configuration and function.
  • the point cloud decoding apparatus 400 may perform a decoding process based on data (bitstream) transmitted from the transmission device 10.
  • the decoding process may include a process of reconstructing (decoding) a point cloud video by performing operation corresponding to the above-described encoding operation on the bitstream.
  • the decoding process may include a geometry decoding process and an attribute decoding process.
  • the geometry decoding process may be performed by a geometry decoding unit 410
  • an attribute decoding process may be performed by an attribute decoding unit 420. That is, the point cloud decoding apparatus 400 may include the geometry decoding unit 410 and the attribute decoding unit 420.
  • the geometry decoding unit 410 may reconstruct geometry from a geometry bitstream, and the attribute decoding unit 420 may reconstruct attribute based on the reconstructed geometry and the attribute bitstream. Also, the point cloud decoding apparatus 400 may reconstruct a three-dimensional point cloud video (point cloud data) based on position information according to the reconstructed geometry and attribute information according to the reconstructed attribute.
  • FIG. 5 illustrates a specific example of a point cloud decoding apparatus 500 according to another embodiment of the present disclosure.
  • the point cloud decoding apparatus 500 includes a geometry decoding unit 505, an octree synthesis unit 510, an approximation synthesis unit 515, a geometry reconstruction unit 520, a coordinate system inverse transform unit 525, an attribute decoding unit 530, an attribute dequantization unit 535, a RAHT transform unit 550, an LOD generation unit 540, an inverse lifting unit 545, and/or a color inverse transform unit 555.
  • the geometry decoding unit 505, the octree synthesis unit 510, the approximation synthesis unit 515, the geometry reconstruction unit 520 and the coordinate system inverse transform unit 525 may perform geometry decoding.
  • Geometry decoding may be performed as a reverse process of the geometry coding described with reference to FIGS. 1 to 3.
  • Geometry decoding may include direct coding and trisoup geometry decoding. Direct coding and trisoup geometry decoding may be selectively applied.
  • the geometry decoding unit 505 may decode the received geometry bitstream based on arithmetic coding. Operation of the geometry decoding unit 505 may correspond to a reverse process of operation performed by the geometry encoding unit 325.
  • the octree synthesis unit 510 may generate an octree by obtaining an occupancy code from the decoded geometry bitstream (or information on a geometry obtained as a result of decoding). Operation of the octree synthesis unit 510 may correspond to a reverse process of operation performed by the octree analysis unit 315.
  • the approximation synthesis unit 515 may synthesize a surface based on the decoded geometry and/or the generated octree, when trisoup geometry encoding is applied.
  • the geometry reconstruction unit 520 may reconstruct geometry based on the surface and the decoded geometry. When direct coding is applied, the geometry reconstruction unit 520 may directly retrieve and add the position information of points to which direct coding is applied. In addition, when trisoup geometry encoding is applied, the geometry reconstruction unit 520 may reconstruct the geometry by performing reconstruction operation, for example, triangle reconstruction, up-sampling, voxelization operation and the like.
  • the reconstructed geometry may include a point cloud picture or frame that does not include attributes.
  • the coordinate system inverse transform unit 525 may acquire positions of points by transforming the coordinate system based on the reconstructed geometry. For example, the coordinate system inverse transform unit 525 may inversely transform the positions of points from a three-dimensional space (e.g., a three-dimensional space expressed by the coordinate system of X-axis, Y-axis, and Z-axis, etc.) to position information of the global space coordinate system.
  • the attribute decoding unit 530, the attribute dequantization unit 535, the LOD generation unit 540, and/or the inverse lifting unit 545 may perform attribute decoding.
  • Attribute decoding may include RAHT transform decoding, predictive transform decoding, and lifting transform decoding.
  • the above three types of decoding may be selectively used, or a combination of one or more types of decoding may be used.
  • the attribute decoding unit 530 may decode an attribute bitstream based on arithmetic coding. For example, when there is no neighbor point in the predictor of each point and thus the attribute value of the current point is directly entropy-encoded, the attribute decoding unit 530 may decode the attribute value (non-quantized attribute value) of the current point. As another example, when there are neighbor points in the predictor of the current point and thus the quantized residual attribute value is entropy-encoded, the attribute decoding unit 530 may decode the quantized residual attribute value.
  • the attribute dequantization unit 535 may dequantize the decoded attribute bitstream or information on the attribute obtained as a result of decoding, and output dequantized attributes (or attribute values). For example, when the quantized residual attribute value is output from the attribute decoding unit 530, the attribute dequantization unit 535 may dequantize the quantized residual attribute value to output the residual attribute value.
  • the dequantization process may be selectively applied based on whether the attribute is quantized in the point cloud encoding apparatus 300. That is, when there is no neighbor point in the predictor of each point and thus the attribute value of the current point is directly encoded, the attribute decoding unit 530 may output the attribute value of the current point that is not quantized, and the dequantization process may be skipped.
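  • the reconstruction path described above amounts to adding the dequantized residual to the prediction, as in the following sketch (a uniform quantization step qstep is an illustrative assumption).

```python
def reconstruct_attribute(predicted: float, quantized_residual: int, qstep: float) -> float:
    """Dequantize the residual and add it to the prediction. When a point was
    coded directly (no neighbors), the decoded value is used as-is instead."""
    return predicted + quantized_residual * qstep
```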
  • the RAHT transform unit 550, the LOD generation unit 540, and/or the inverse lifting unit 545 may process the reconstructed geometry and dequantized attributes.
  • the RAHT transform unit 550, the LOD generation unit 540, and/or the inverse lifting unit 545 may selectively perform decoding operation corresponding to the encoding operation of the point cloud encoding apparatus 300.
  • the color inverse transform unit 555 may perform inverse transform coding for inversely transforming a color value (or texture) included in the decoded attributes. Operation of the color inverse transform unit 555 may be selectively performed based on whether the color transform unit 335 operates.
  • FIG. 6 illustrates an example of a structure capable of interworking with a method/device for transmitting and receiving point cloud data according to embodiments of the present disclosure.
  • FIG. 6 illustrates a configuration in which at least one of a server (AI server), a robot, a self-driving vehicle, an XR device, a smartphone, a home appliance and/or an HMD is connected to a cloud network.
  • the robot, the self-driving vehicle, the XR device, the smartphone, or the home appliance may be referred to as a device.
  • the XR device may correspond to a point cloud compression (PCC) device according to embodiments or may interwork with the PCC device.
  • the cloud network may refer to a network that forms part of the cloud computing infrastructure or exists within the cloud computing infrastructure.
  • the cloud network may be configured using a 3G network, a 4G or Long Term Evolution (LTE) network, or a 5G network.
  • the server may be connected to at least one of the robot, the self-driving vehicle, the XR device, the smartphone, the home appliance, and/or the HMD through a cloud network, and may help at least a part of processing of the connected devices.
  • the HMD may represent one of the types in which an XR device and/or the PCC device according to embodiments may be implemented.
  • the HMD type device according to the embodiments may include a communication unit, a control unit, a memory unit, an I/O unit, a sensor unit, and a power supply unit.
  • the XR/PCC device may be implemented by a HMD, a HUD provided in a vehicle, a TV, a mobile phone, a smartphone, a computer, a wearable device, a home appliance, a digital signage, a vehicle, a fixed robot or a mobile robot, etc., by applying PCC and/or XR technology.
  • the XR/PCC device may obtain information on a surrounding space or a real object by analyzing 3D point cloud data or image data acquired through various sensors or from an external device to generate position (geometric) data and attribute data for 3D points, and may render and output an XR object.
  • the XR/PCC device may output an XR object including additional information on the recognized object in correspondence with the recognized object.
  • the XR/PCC device may be implemented by a mobile phone or the like by applying PCC technology.
  • a mobile phone can decode and display point cloud content based on PCC technology.
  • the self-driving vehicle may be implemented by a mobile robot, a vehicle, an unmanned aerial vehicle, etc. by applying PCC technology and XR technology.
  • the self-driving vehicle to which the XR/PCC technology is applied may mean a self-driving vehicle equipped with a unit for providing an XR image or a self-driving vehicle which is subjected to control/interaction within the XR image.
  • the self-driving vehicle which is subjected to control/interaction within the XR image is distinguished from the XR device, and they may interwork with each other.
  • the self-driving vehicle equipped with a unit for providing an XR/PCC image may acquire sensor information from sensors including a camera, and output an XR/PCC image generated based on the acquired sensor information.
  • the self-driving vehicle has a HUD and may provide a passenger with an XR/PCC object corresponding to a real object or an object in a screen by outputting an XR/PCC image.
  • when the XR/PCC object is output to the HUD, at least a portion of the XR/PCC object may be output so as to overlap the actual object to which a passenger's gaze is directed.
  • when the XR/PCC object is output to a display provided inside the self-driving vehicle, at least a portion of the XR/PCC object may be output to overlap the object in the screen.
  • the self-driving vehicle may output XR/PCC objects corresponding to objects such as a lane, other vehicles, traffic lights, traffic signs, two-wheeled vehicles, pedestrians, and buildings.
  • VR technology is display technology that provides objects or backgrounds in the real world only as CG images.
  • AR technology refers to technology that shows a virtual CG image on top of an actual object image.
  • MR technology is similar to the AR technology described above in that it shows virtual objects mixed and combined with the real world.
  • in AR technology, the distinction between real objects and virtual objects made of CG images is clear, and virtual objects are used in a form that complements real objects, whereas, in MR technology, virtual objects are regarded as equivalent to real objects, unlike in AR technology.
  • an example of applying the MR technology described above is a hologram service.
  • VR, AR and MR technologies may be integrated and referred to as XR technology.
  • Point cloud data may represent volumetric encoding of a point cloud consisting of a sequence of frames (point cloud frames).
  • Each point cloud frame may include the number of points, the positions of the points, and the attributes of the points. The number of points, the positions of the points, and the attributes of the points may vary from frame to frame.
  • Each point cloud frame may mean a set of three-dimensional points specified by zero or more attributes and Cartesian coordinates (x, y, z) of three-dimensional points in a particular time instance.
  • the Cartesian coordinates (x, y, z) of the three-dimensional points may be a position or a geometry.
  • the present disclosure may further perform a space partition process of partitioning the point cloud data into one or more 3D blocks before encoding the point cloud data.
  • the 3D block may mean whole or part of a 3D space occupied by the point cloud data.
  • the 3D block may be one or more of a tile group, a tile, a slice, a coding unit (CU), a prediction unit (PU), or a transform unit (TU).
  • a tile corresponding to a 3D block may mean whole or part of the 3D space occupied by the point cloud data.
  • a slice corresponding to a 3D block may mean whole or part of a 3D space occupied by the point cloud data.
  • a tile may be partitioned into one or more slices based on the number of points included in one tile.
  • a tile may be a group of slices with bounding box information. The bounding box information of each tile may be specified in a tile inventory (or a tile parameter set (TPS)).
  • a tile may overlap another tile in the bounding box.
  • a slice may be a unit of data on which encoding is independently performed, or a unit of data on which decoding is independently performed.
  • a slice may be a set of points that may be independently encoded or decoded.
  • a slice may be a series of syntax elements representing part or whole of a coded point cloud frame.
  • Each slice may include an index for identifying a tile to which the slice belongs.
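  • as an illustration of the point-count-based partitioning described above, the following sketch splits a tile into slices that can each be encoded or decoded independently; the maximum-points threshold is an illustrative parameter.

```python
def split_tile_into_slices(tile_points: list, max_points_per_slice: int) -> list:
    """Partition a tile into slices by point count so each slice can be
    encoded/decoded independently (threshold value is illustrative)."""
    return [tile_points[i:i + max_points_per_slice]
            for i in range(0, len(tile_points), max_points_per_slice)]
```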
  • the spatially partitioned 3D blocks may be processed independently or non-independently.
  • spatially partitioned 3D blocks may be encoded or decoded independently or non-independently, respectively, and may be transmitted or received independently or non-independently, respectively.
  • the spatially partitioned 3D blocks may be quantized or dequantized independently or non-independently, and may be transformed or inversely transformed independently or non-independently, respectively.
  • spatially partitioned 3D blocks may be rendered independently or non-independently.
  • encoding or decoding may be performed in units of slices or units of tiles.
  • quantization or dequantization may be performed differently for each tile or slice, and may be performed differently for each transformed or inversely transformed tile or slice.
  • by performing space partition, the process of processing the 3D blocks may be performed in real time and with low latency.
  • random access and parallel encoding or parallel decoding in a three-dimensional space occupied by point cloud data may be enabled, and errors accumulated in the encoding or decoding process may be prevented.
  • FIG. 7 is a block diagram illustrating an example of a transmission device 700 for performing a space partition process according to embodiments of the present disclosure.
  • the transmission device 700 may include a space partition unit 705 for performing a space partition process, a signaling processing unit 710, a geometry encoder 715, an attribute encoder 720, and an encapsulation processing unit 725 and/or a transmission processing unit 730.
  • the space partition unit 705 may perform a space partition process of partitioning the point cloud data into one or more 3D blocks based on a bounding box and/or a sub-bounding box.
  • point cloud data may be partitioned into one or more tiles and/or one or more slices.
  • the point cloud data may be partitioned into one or more tiles, and each partitioned tile may be further partitioned into one or more slices, through a space partition process.
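  • as a non-normative illustration of the space partition process described above, the following Python sketch partitions points into cubic tiles by bounding box and then splits each tile into slices by a maximum point count; the Point type, the tile_size parameter, and the MAX_POINTS_PER_SLICE limit are hypothetical names chosen for the sketch, not elements of the G-PCC specification.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Point:
        x: float
        y: float
        z: float
        attrs: Tuple[float, ...] = ()

    MAX_POINTS_PER_SLICE = 1_100_000  # assumed limit, chosen for illustration

    def partition(points: List[Point], tile_size: float):
        # Group points into cubic tiles keyed by their quantized position.
        tiles = {}
        for p in points:
            key = (int(p.x // tile_size), int(p.y // tile_size), int(p.z // tile_size))
            tiles.setdefault(key, []).append(p)
        # Split each tile into slices small enough for independent coding.
        slices = []
        for tile_idx, tile_points in enumerate(tiles.values()):
            for start in range(0, len(tile_points), MAX_POINTS_PER_SLICE):
                slices.append((tile_idx, tile_points[start:start + MAX_POINTS_PER_SLICE]))
        return slices  # each slice carries the index of the tile it belongs to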
  • the signaling processing unit 710 may generate and/or process (e.g., entropy-encode) signaling information and output it in the form of a bitstream.
  • a bitstream (in which signaling information is encoded) output from the signaling processing unit is referred to as a 'signaling bitstream'.
  • the signaling information may include information for space partition or information on space partition. That is, the signaling information may include information related to the space partition process performed by the space partition unit 705.
  • when the point cloud data is partitioned into one or more 3D blocks, information for decoding some point cloud data corresponding to a specific tile or a specific slice among the point cloud data may be required.
  • information related to 3D spatial areas may be required.
  • the spatial access may mean extracting, from a file, only the partial point cloud data needed out of the entire point cloud data.
  • the signaling information may include information for decoding some point cloud data, information related to 3D spatial areas for supporting spatial access, and the like.
  • the signaling information may include 3D bounding box information, 3D spatial area information, tile information, and/or tile inventory information.
  • the signaling information may be provided from the space partition unit 705, the geometry encoder 715, the attribute encoder 720, the encapsulation processing unit 725, and/or the transmission processing unit 730.
  • the signaling processing unit 710 may provide the feedback information fed back from the reception device 800 of FIG. 8 to the space partition unit 705, the geometry encoder 715, the attribute encoder 720, the encapsulation processing unit 725 and/or the transmission processing unit 730.
  • the signaling information may be stored and signaled in a sample in a track, a sample entry, a sample group, a track group, or a separate metadata track.
  • the signaling information may be signaled in units of sequence parameter sets (SPSs) for signaling of a sequence level, geometry parameter sets (GPSs) for signaling of geometry coding information, and attribute parameter sets (APSs) for signaling of attribute coding information, tile parameter sets (TPSs) (or tile inventory) for signaling of a tile level, etc.
  • the signaling information may be signaled in units of coding units such as slices or tiles.
  • positions (position information) of the 3D blocks may be output to the geometry encoder 715, and attributes (attribute information) of the 3D blocks may be output to the attribute encoder 720.
  • the geometry encoder 715 may construct an octree based on the position information, encode the constructed octree, and output a geometry bitstream. Also, the geometry encoder 715 may reconstruct the octree and/or the approximated octree and output it to the attribute encoder 720. The reconstructed octree may be reconstructed geometry. The geometry encoder 715 may perform all or some of operations performed by the coordinate system transform unit 305, the geometry quantization unit 310, the octree analysis unit 315, the approximation unit 320, the geometry encoding unit 325 and/or the reconstruction unit 330 of FIG. 3 .
  • the attribute encoder 720 may output an attribute bitstream by encoding an attribute based on the reconstructed geometry.
  • the attribute encoder 720 may perform all or some of operations performed by the attribute transform unit 340, the RAHT transform unit 345, the LOD generation unit 350, the lifting unit 355, the attribute quantization unit 360, the attribute encoding unit 365 and/or the color transform unit 335 of FIG. 3 .
  • the encapsulation processing unit 725 may encapsulate one or more input bitstreams into a file or segment. For example, the encapsulation processing unit 725 may encapsulate each of the geometry bitstream, the attribute bitstream, and the signaling bitstream, or multiplex and encapsulate the geometry bitstream, the attribute bitstream, and the signaling bitstream. In some embodiments, the encapsulation processing unit 725 may encapsulate a bitstream (G-PCC bitstream) consisting of a sequence of a type-length-value (TLV) structure into a file. TLV (or TLV encapsulation) structures constituting the G-PCC bitstream may include a geometry bitstream, an attribute bitstream, a signaling bitstream, and the like.
  • the G-PCC bitstream may be generated by the encapsulation processing unit 725 or generated by the transmission processing unit 730.
  • the encapsulation processing unit 725 may perform all or some of operations performed by the encapsulation processing unit 13 of FIG. 1 .
  • the transmission processing unit 730 may process an encapsulated bitstream or a file/segment according to an arbitrary transport protocol.
  • the transmission processing unit 730 may perform all or some of operations performed by the transmission unit 14 and the transmission processing unit described with reference to FIG. 1 .
  • FIG. 8 is a block diagram illustrating an example of a reception device 800 according to embodiments of the present disclosure.
  • the reception device 800 may perform operations corresponding to the operations of the transmission device 700 for performing space partition.
  • the reception device 800 may include a reception processing unit 805, a decapsulation processing unit 810, a signaling processing unit 815, a geometry decoder 820, an attribute decoder 825, and/or a post-processing unit 830.
  • the reception processing unit 805 may receive a file/segment in which a G-PCC bitstream is encapsulated, a G-PCC bitstream, or a bitstream, and may process it according to a transport protocol.
  • the reception processing unit 805 may perform all or some of operations performed by the reception unit 21 and the reception processing unit described with reference to FIG. 1 .
  • the decapsulation processing unit 810 may obtain a G-PCC bitstream by performing a reverse process of operations performed by the encapsulation processing unit 725.
  • the decapsulation processing unit 810 may obtain a G-PCC bitstream by decapsulating the file/segment.
  • the decapsulation processing unit 810 may obtain and output a signaling bitstream to the signaling processing unit 815, obtain and output a geometry bitstream to the geometry decoder 820, and obtain and output an attribute bitstream to the attribute decoder 825.
  • the decapsulation processing unit 810 may perform all or some of operations performed by the decapsulation processing unit 22 of FIG. 1 .
  • the signaling processing unit 815 may parse and decode signaling information by performing a reverse process of operations performed by the signaling processing unit 710.
  • the signaling processing unit 815 may parse and decode signaling information from a signaling bitstream.
  • the signaling processing unit 815 may provide the decoded signaling information to the geometry decoder 820, the attribute decoder 825, and/or the post-processing unit 830.
  • the geometry decoder 820 may reconstruct geometry from the geometry bitstream by performing a reverse process of operations performed by the geometry encoder 715.
  • the geometry decoder 820 may reconstruct geometry based on signaling information (parameters related to the geometry).
  • the reconstructed geometry may be provided to the attribute decoder 825.
  • the attribute decoder 825 may reconstruct attributes from the attribute bitstream by performing a reverse process of the operations performed by the attribute encoder 720.
  • the attribute decoder 825 may reconstruct the attribute based on the signaling information (parameters related to the attribute) and the reconstructed geometry.
  • the post-processing unit 830 may reconstruct point cloud data based on the reconstructed geometry and the reconstructed attribute. Reconstruction of point cloud data may be performed through a process of matching the reconstructed geometry with the reconstructed attribute. In some embodiments, when the reconstructed point cloud data is in units of tiles and/or slices, the post-processing unit 830 may reconstruct the bounding box of the point cloud data, by performing a reverse process of the space partition process of the transmission device 700 based on signaling information.
  • the post-processing unit 830 may reconstruct part of the bounding box, by combining some slices and/or some tiles based on the signaling information.
  • some slices and/or some tiles used to reconstruct the bounding box may be slices and/or some tiles related to a 3D spatial area in which spatial access is desired.
  • the G-PCC bitstream may mean a bitstream of point cloud data consisting of a sequence of TLV structures.
  • the TLV structure may be referred to as "TLV encapsulation structure", “G-PCC TLV encapsulation structure”, or "G-PCC TLV structure”.
  • Each TLV encapsulation structure may be composed of a TLV type (TLV TYPE), a TLV length (TLV LENGTH), and/or a TLV payload (TLV PAYLOAD).
  • the TLV type may be type information of the TLV payload
  • the TLV length may be length information of the TLV payload
  • the TLV payload may be a payload (or payload bytes).
  • tlv_type may indicate type information of the TLV payload
  • tlv_num_payload_bytes may indicate length information of the TLV payload
  • tlv_payload_byte[i] may indicate the TLV payload.
  • tlv_payload_byte[i] may be signaled by the value of tlv_num_payload_bytes, and i may increase by 1 from 0 to (tlv_num_payload_bytes - 1).
  • TLV payloads may include an SPS, a GPS, one or more APSs, a tile inventory, a geometry slice, one or more attribute slices, and one or more metadata slices.
  • the TLV payload of each TLV encapsulation structure may include an SPS, a GPS, one or more APSs, a tile inventory, a geometry slice, one or more attribute slices, and one or more metadata slices according to the type information of the TLV payload.
  • Data included in the TLV payload may be distinguished through type information of the TLV payload. For example, as illustrated in FIG. 10:
  • tlv_type equal to 0 may indicate that data included in the TLV payload is an SPS
  • tlv_type equal to 1 may indicate that the data included in the TLV payload is a GPS
  • tlv_type equal to 2 may indicate that data included in the TLV payload is a geometry slice
  • tlv_type equal to 3 may indicate that data included in the TLV payload is an APS
  • tlv_type equal to 4 may indicate that data included in the TLV payload is an attribute slice
  • tlv_type equal to 5 may indicate that data included in the TLV payload is a tile inventory (or tile parameter set).
  • tlv_type equal to 6 may indicate that data included in the TLV payload is a frame boundary marker
  • tlv_type equal to 7 may indicate that data included in the TLV payload is a metadata slice.
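  • as a rough illustration of how a receiver might walk a sequence of TLV encapsulation structures and classify payloads by tlv_type, consider the following Python sketch; the field widths (an 8-bit tlv_type followed by a 32-bit big-endian tlv_num_payload_bytes) are assumptions made for the sketch and should be checked against the G-PCC specification in use.

    import struct

    TLV_TYPE_NAMES = {
        0: "SPS", 1: "GPS", 2: "geometry slice", 3: "APS",
        4: "attribute slice", 5: "tile inventory",
        6: "frame boundary marker", 7: "metadata slice",
    }

    def parse_tlv_stream(data: bytes):
        """Yield (type name, payload) pairs from a G-PCC TLV byte stream."""
        offset = 0
        while offset < len(data):
            tlv_type = data[offset]                       # assumed 8-bit type
            (num_bytes,) = struct.unpack_from(">I", data, offset + 1)  # assumed 32-bit length
            payload = data[offset + 5: offset + 5 + num_bytes]
            yield TLV_TYPE_NAMES.get(tlv_type, "unknown"), payload
            offset += 5 + num_bytes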
  • the payload of the TLV encapsulation structure may conform to the format of a High Efficiency Video Coding (HEVC) Network Abstraction Layer (NAL) unit.
  • the G-PCC bitstream composed of TLV encapsulation structures may be transmitted to the reception device without change, or may be encapsulated and transmitted to the reception device.
  • the encapsulation processing unit 725 may encapsulate a G-PCC bitstream composed of TLV encapsulation structures in the form of a file/segment and transmit it.
  • the decapsulation processing unit 810 may acquire a G-PCC bitstream by decapsulating the encapsulated file/segment.
  • the G-PCC bitstream may be encapsulated in an ISOBMFF-based file format.
  • the G-PCC bitstream may be stored in a single track or multiple tracks in the ISOBMFF file.
  • the single track or multiple tracks in a file may be referred to as "tracks" or "G-PCC tracks”.
  • the ISOBMFF-based file may be referred to as a container, a container file, a media file, a G-PCC file, and the like.
  • the file may be composed of boxes and/or information that may be referred to as ftyp, moov, mdat, and the like.
  • the ftyp box may provide file type or file compatibility related information for the file.
  • the reception device may identify the file by referring to the ftyp box.
  • the mdat box is also called a media data box and may include actual media data. For example, the mdat box may include a geometry slice (or coded geometry bitstream) and zero or more attribute slices (or coded attribute bitstreams).
  • the moov box is also called a movie box, and may include metadata for media data of the file.
  • the moov box may include information necessary for decoding and playback of the media data, and may include information on tracks and samples of the file.
  • the moov box may act as a container for all metadata.
  • the moov box may be a box of the uppermost layer among metadata-related boxes.
  • the moov box may include a track (trak) box providing information related to a track of a file
  • the trak box may include a media (mdia) box (MediaBox) providing media information of the track, and a track reference container (tref) box for linking (referencing) the track and a sample of a file corresponding to the track.
  • the media box MediaBox may include a media information container (minf) box that provides information on the media data and a handler (hdlr) box that indicates a stream type.
  • the minf box may include a sample table (stbl) box that provides metadata related to a sample of the mdat box.
  • the stbl box may include a sample description (stsd) box that provides information on a used coding type and initialization information required for the coding type.
  • a sample description (stsd) box may include a sample entry for a track.
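  • the nesting described above (moov, trak, mdia, minf, stbl, stsd) can be explored with a minimal ISOBMFF box walker such as the Python sketch below; it assumes the standard 32-bit size plus 4-character type box header (with the 64-bit largesize extension) and treats only the listed box types as containers.

    import struct

    def walk_boxes(data: bytes, offset: int = 0, end: int = None, depth: int = 0):
        """Print the box hierarchy (ftyp, moov, trak, ..., stsd) of an ISOBMFF file."""
        # Containers whose payload is itself a sequence of boxes.
        containers = {b"moov", b"trak", b"mdia", b"minf", b"stbl", b"tref"}
        end = len(data) if end is None else end
        while offset + 8 <= end:
            size, box_type = struct.unpack_from(">I4s", data, offset)
            header = 8
            if size == 1:  # 64-bit largesize follows the type field
                (size,) = struct.unpack_from(">Q", data, offset + 8)
                header = 16
            if size < header:  # malformed or size == 0 ("to end of file"); stop for simplicity
                break
            print("  " * depth + box_type.decode("ascii", "replace"))
            if box_type in containers:
                walk_boxes(data, offset + header, offset + size, depth + 1)
            offset += size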
  • signaling information (or metadata) such as an SPS, a GPS, an APS, and a tile inventory may be included in a sample entry of a moov box or a sample of an mdat box in a file.
  • a G-PCC track may be defined as a volumetric visual track carrying a geometry slice (or coded geometry bitstream) or attribute slice (or coded attribute bitstream), or both a geometry slice and an attribute slice.
  • the volumetric visual track may be identified by a volumetric visual media handler type 'volv' in the handler box (HandlerBox) of a media box (MediaBox) and/or a volumetric visual media header ('vvhd') in a minf box of a media box (MediaBox).
  • the minf box may be referred to as a media information container or a media information box.
  • the minf box may be included in the media box MediaBox, the media box MediaBox may be included in the track box, and the track box may be included in the moov box of the file.
  • a single volumetric visual track or multiple volumetric visual tracks may be present in a file.
  • Volumetric visual media header box (VolumetricVisualMediaHeaderBox)
  • Volumetric Visual Tracks may use a volumetric visual sample (VolumetricVisualSample) for transmission of actual data.
  • a volumetric visual sample entry may be referred to as a sample entry or a G-PCC sample entry, and a volumetric visual sample may be referred to as a sample or a G-PCC sample.
  • a single volumetric visual track may be referred to as a single track or G-PCC single track, and multiple volumetric visual tracks may be referred to as multiple tracks or multiple G-PCC tracks.
  • Signaling information related to grouping of samples, grouping of tracks, single track encapsulation of a G-PCC bitstream, or multiple-track encapsulation of a G-PCC bitstream, or signaling information to support spatial access may be added to the sample entry in the form of a box or a FullBox.
  • the signaling information may include at least one of a GPCC entry information box (GPCCEntryInfoBox), a GPCC component type box (GPCCComponentTypeBox), a cubic region information box (CubicRegionInfoBox), a 3D bounding box information box (3DBoundingBoxInfoBox), or a tile inventory box (TileInventoryBox).
  • a GPCCEntryInfoBox having a sample entry type of 'gpsb' may include GPCCEntryInfoStruct().
  • the syntax of GPCCEntryInfoStruct() may be defined as follows.
  • GPCCEntryInfoStruct() may include main_entry _flag and dependent_on.
  • main_entry_flag may indicate whether or not it is an entry point for decoding the G-PCC bitstream.
  • dependent_on indicates whether its decoding is dependent on others. If dependent_on is present in a sample entry, dependent_on may indicate that decoding of samples in a track is dependent on other tracks. If the value of dependent_on is 1, GPCCEntryInfoStruct() may further include dependency_id. dependency_id may indicate an identifier of tracks for decoding related data.
  • dependency_id may indicate an identifier of a track carrying a G-PCC sub-bitstream on which decoding of samples in the track is dependent. If dependency_id is present in a sample group, dependency_id may indicate an identifier of samples carrying a G-PCC sub-bitstream on which decoding of related samples is dependent.
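  • a minimal sketch of reading GPCCEntryInfoStruct() is shown below, assuming 1-bit main_entry_flag and dependent_on fields packed into the first byte and a 16-bit dependency_id; these bit widths are illustrative assumptions, since the present description does not fix them here.

    def parse_gpcc_entry_info(data: bytes):
        # Assumed layout: bit 7 of byte 0 = main_entry_flag, bit 6 = dependent_on,
        # followed by a 16-bit dependency_id when dependent_on == 1.
        main_entry_flag = (data[0] >> 7) & 0x1   # entry point for decoding the bitstream?
        dependent_on = (data[0] >> 6) & 0x1      # decoding depends on another track?
        dependency_id = None
        if dependent_on == 1:
            # identifier of the track carrying the G-PCC sub-bitstream depended upon
            dependency_id = int.from_bytes(data[1:3], "big")
        return main_entry_flag, dependent_on, dependency_id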
  • the syntax structure of the G-PCC component type box (GPCCComponentTypeBox) may be defined as follows.
  • a GPCCComponentTypeBox having a sample entry type of 'gtyp' may include GPCCComponentTypeStruct().
  • the syntax of GPCCComponentTypeStruct() may be defined as follows.
  • numOfComponents may indicate the number of G-PCC components signaled in the GPCCComponentTypeStruct.
  • gpcc_type may be included in GPCCComponentTypeStruct by a loop that is repeated numOfComponents times. The loop may iterate with i increasing by 1 from 0 to (numOfComponents - 1).
  • gpcc_type may indicate the type of the G-PCC component. For example, if the value of gpcc_type is 2, it may indicate a geometry component, and if it is 4, it may indicate an attribute component.
  • AttrIdx may indicate the identifier of the attribute signaled in SPS().
  • a G-PCC component type box (GPCCComponentTypeBox) may be included in a sample entry for multiple tracks. If a G-PCC component type box (GPCCComponentTypeBox) is present in the sample entry of tracks carrying part or whole of the G-PCC bitstream, then GPCCComponentTypeStruct() may indicate one or more G-PCC component types carried by each track.
  • GPCCComponentTypeBox including GPCCComponentTypeStruct() or GPCCComponentTypeStruct() may be referred to as G-PCC component information.
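  • the following Python sketch parses GPCCComponentTypeStruct() as described above, assuming 8-bit numOfComponents, gpcc_type, and AttrIdx fields; the field widths are illustrative assumptions.

    def parse_component_type_struct(data: bytes):
        num_of_components = data[0]
        offset, components = 1, []
        for _ in range(num_of_components):
            gpcc_type = data[offset]; offset += 1
            if gpcc_type == 4:            # attribute component
                attr_idx = data[offset]   # identifier of the attribute signaled in the SPS
                offset += 1
                components.append(("attribute", attr_idx))
            elif gpcc_type == 2:          # geometry component
                components.append(("geometry", None))
            else:
                components.append(("reserved", None))
        return components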
  • the encapsulation processing unit mentioned in the present disclosure may generate a sample group by grouping one or more samples.
  • the encapsulation processing unit, the metadata processing unit, or the signaling processing unit mentioned in the present disclosure may signal signaling information associated with a sample group in a sample, a sample group, or a sample entry. That is, the sample group information associated with the sample group may be added to a sample, a sample group, or a sample entry.
  • the sample group information may be 3D bounding box sample group information, 3D region sample group information, 3D tile sample group information, 3D tile inventory sample group information, and the like.
  • the encapsulation processing unit mentioned in the present disclosure may generate a track group by grouping one or more tracks.
  • the encapsulation processing unit, the metadata processing unit, or the signaling processing unit mentioned in the present disclosure may signal signaling information associated with a track group in a sample, a track group, or a sample entry. That is, the track group information associated with the track group may be added to a sample, track group or sample entry.
  • the track group information may be 3D bounding box track group information, point cloud composition track group information, spatial region track group information, 3D tile track group information, 3D tile inventory track group information, and the like.
  • FIG. 11 is a diagram for explaining an ISOBMFF-based file including a single track.
  • (a) of FIG. 11 illustrates an example of the layout of an ISOBMFF-based file including a single track
  • (b) of FIG. 11 illustrates an example of a sample structure of a mdat box when a G-PCC bitstream is stored in a single track of a file.
  • FIG. 12 is a diagram for explaining an ISOBMFF-based file including multiple tracks
  • (a) of FIG. 12 illustrates an example of the layout of an ISOBMFF-based file including multiple tracks
  • (b) of FIG. 12 illustrates an example of a sample structure of a mdat box when a G-PCC bitstream is stored in multiple tracks of a file.
  • the stsd box (SampleDescriptionBox) included in the moov box of the file may include a sample entry for a single track storing the G-PCC bitstream.
  • the SPS, GPS, APS, tile inventory may be included in a sample entry in a moov box or a sample in an mdat box in a file.
  • geometry slices and zero or more attribute slices may be included in the sample of the mdat box in the file.
  • each sample may contain multiple G-PCC components. That is, each sample may be composed of one or more TLV encapsulation structures.
  • a sample entry of a single track may be defined as follows.
  • the sample entry type 'gpe1' or 'gpeg' is mandatory, and one or more sample entries may be present.
  • the G-PCC track may use a VolumetricVisualSampleEntry having a sample entry type of 'gpe1' or 'gpeg'.
  • the sample entry of the G-PCC track may include a G-PCC decoder configuration box GPCCConfigurationBox, and the G-PCC decoder configuration box may include a G-PCC decoder configuration record (GPCCDecoderConfigurationRecord()).
  • GPCCDecoderConfigurationRecord() may include at least one of configurationVersion, profile_idc, profile_compatibility_flags, level_idc, numOfSetupUnitArrays, SetupUnitType, completeness, numOfSetupUnits, or setupUnit.
  • the setupUnit array field included in GPCCDecoderConfigurationRecord() may include TLV encapsulation structures including one SPS.
  • when the sample entry type is 'gpe1', all parameter sets (e.g., SPS, GPS, APS, tile inventory) may be included in the array of setupUnits (i.e., in the sample entry).
  • when the sample entry type is 'gpeg', the above parameter sets may be included in the array of setupUnits (i.e., sample entry) or included in the stream (i.e., sample).
  • a G-PCC sample entry (GPCCSampleEntry) having a sample entry type of 'gpe1' may include GPCCConfigurationBox, 3DBoundingBoxInfoBox(), CubicRegionInfoBox(), and TileInventoryBox().
  • 3DBoundingBoxInfoBox() may indicate 3D bounding box information of point cloud data related to samples carried by the track.
  • CubicRegionInfoBox() may indicate information on one or more spatial regions of point cloud data carried by samples in the track.
  • TileInventoryBox() may indicate 3D tile inventory information of point cloud data carried by samples in the track.
  • the sample may include TLV encapsulation structures including a geometry slice.
  • a sample may include TLV encapsulation structures including one or more parameter sets.
  • a sample may include TLV encapsulation structures including one or more attribute slices.
  • each geometry slice or attribute slice may be mapped to an individual track.
  • a geometry slice may be mapped to track 1
  • an attribute slice may be mapped to track 2.
  • the track (track 1) carrying the geometry slice may be referred to as a geometry track or a G-PCC geometry track
  • the track (track 2) carrying the attribute slice may be referred to as an attribute track or a G-PCC attribute track.
  • the geometry track may be defined as a volumetric visual track carrying a geometry slice
  • the attribute track may be defined as a volumetric visual track carrying an attribute slice.
  • a track carrying part of a G-PCC bitstream including both a geometry slice and an attribute slice may be referred to as a multiplexed track.
  • each sample in the track may include at least one TLV encapsulation structure carrying data of a single G-PCC component.
  • each sample may not contain both geometry and attributes, and may not contain multiple attributes.
  • Multi-track encapsulation of a G-PCC bitstream may enable a G-PCC player to effectively access one of the G-PCC components.
  • when a G-PCC bitstream is carried by multiple tracks, in order for a G-PCC player to effectively access one of the G-PCC components, the following conditions need to be satisfied.
  • GPCCComponentTypeBox may include GPCCComponentTypeStruct(). If a GPCCComponentTypeBox is present in the sample entry of tracks carrying part or whole of the G-PCC bitstream, then GPCCComponentTypeStruct() may specify the type (e.g., geometry, attribute) of one or more G-PCC components carried by each track. For example, if the value of the gpcc_type field included in GPCCComponentTypeStruct() is 2, it may indicate a geometry component, and if it is 4, it may indicate an attribute component. In addition, when the value of the gpcc_type field indicates 4, that is, an attribute component, an AttrIdx field indicating an attribute identifier signaled to SPS( ) may be further included.
  • the syntax of the sample entry may be defined as follows.
  • the sample entry type 'gpc1', 'gpcg', 'gpcb' or 'gpeb' is mandatory, and one or more sample entries may be present.
  • Multiple tracks (e.g., geometry or attribute tracks) may use a VolumetricVisualSampleEntry having a sample entry type of 'gpc1', 'gpcg', 'gpcb' or 'gpeb'.
  • when the sample entry type is 'gpcb', all parameter sets may be present in the setupUnit array.
  • when the sample entry type is 'gpeb', the parameter sets may be present in the array or stream.
  • when the sample entry type is 'gpcb' or 'gpeb', the GPCCComponentTypeBox shall not be present.
  • when the sample entry type is 'gpc1', the SPS, GPS and tile inventory may be present in the SetupUnit array of the track carrying the G-PCC geometry bitstream, and all relevant APSs may be present in the SetupUnit array of the track carrying the G-PCC attribute bitstream.
  • when the sample entry type is 'gpcg', an SPS, GPS, APS or tile inventory may be present in the array or stream.
  • when the sample entry type is 'gpc1' or 'gpcg', the GPCCComponentTypeBox shall be present.
  • the compressorname, that is, codingname, of the base class VolumetricVisualSampleEntry may indicate the name of the compressor used, with the recommended value "\013GPCC coding".
  • the first byte (octal 13, i.e., decimal 11, represented by \013) is the number of remaining bytes, and may indicate the number of bytes of the remaining string.
  • config may include G-PCC decoder configuration information, and info may indicate the G-PCC component information carried in each track. info may indicate the component type carried in the track, and may also indicate the attribute name, index, and attribute type of the G-PCC component carried in the G-PCC attribute track.
  • the syntax for the sample format is as follows.
  • each sample corresponds to a single point cloud frame, and may be composed of one or more TLV encapsulation structures belonging to the same presentation time.
  • Each TLV encapsulation structure may include a single type of TLV payload.
  • one sample may be independent (e.g., a sync sample).
  • GPCCLength indicates the length of the sample
  • gpcc_unit may include an instance of a TLV encapsulation structure including a single G-PCC component (e.g., a geometry slice).
  • each sample may correspond to a single point cloud frame, and samples contributing to the same point cloud frame in different tracks may have to have the same presentation time.
  • Each sample may consist of one or more G-PCC units of the G-PCC component indicated in the GPCCComponentTypeBox of the sample entry and zero or more G-PCC units carrying either a parameter set or a tile inventory.
  • within each sample, the G-PCC units carrying a parameter set or a tile inventory may need to appear before the G-PCC units of the G-PCC component.
  • Each sample may contain one or more G-PCC units containing an attribute data unit, and zero or more G-PCC units carrying a parameter set.
  • the syntax and semantics for the sample format may be the same as the syntax and semantics for the case where the G-PCC bitstream is stored in a single track described above.
  • when each sample consists of multiple TLV encapsulation structures, it may be necessary to access each TLV encapsulation structure in the sample.
  • each of the multiple TLV encapsulation structures may be stored as a sub-sample.
  • a subsample may be referred to as a G-PCC subsample.
  • for example, when one sample includes a parameter set TLV encapsulation structure including a parameter set, a geometry TLV encapsulation structure including a geometry slice, and an attribute TLV encapsulation structure including an attribute slice, the parameter set TLV encapsulation structure, the geometry TLV encapsulation structure, and the attribute TLV encapsulation structure may each be stored as a subsample.
  • in order to identify each subsample, information on the type of the TLV encapsulation structure carried by the subsample may be required.
  • the G-PCC subsample may include only one TLV encapsulation structure.
  • One SubSampleInformationBox may be present in a sample table box (SampleTableBox, stbl) of a moov box, or may be present in a track fragment box (TrackFragmentBox, traf) of each of the movie fragment boxes (MovieFragmentBox, moof). If the SubSampleInformationBox is present, the 8-bit type value of the TLV encapsulation structure may be included in the 32-bit codec_specific_parameters field of the sub-sample entry in the SubSampleInformationBox.
  • the 6-bit value of the attribute index may be included in the 32-bit codec_specific_parameters field of the subsample entry in the SubSampleInformationBox.
  • the type of each subsample may be identified by parsing the codec_specific_parameters field of the subsample entry in the SubSampleInformationBox.
  • codec_specific_parameters of the SubSampleInformationBox may be defined as follows.
  • payloadType may indicate the tlv_type of the TLV encapsulation structure in the subsample.
  • attrIdx may indicate an identifier of attribute information of a TLV encapsulation structure including an attribute payload in the subsample.
  • attrIdx may be the same as ash_attr_sps_attr_idx of the TLV encapsulation structure including the attribute payload in the subsample.
  • tile_data may indicate whether or not a subsample includes one G-PCC tile.
  • when the value of tile_data is 1, it may indicate that the subsample includes TLV encapsulation structure(s) containing a geometry data unit or an attribute data unit corresponding to one G-PCC tile. When the value of tile_data is 0, it may indicate that the subsample includes TLV encapsulation structure(s) containing a parameter set, a tile inventory, or a frame boundary marker.
  • tile_id may indicate an index of a G-PCC tile with which a subsample is associated in a tile inventory.
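  • the following sketch decodes the 32-bit codec_specific_parameters field along the lines described above; the exact bit positions (payloadType in the top 8 bits, attrIdx in the following 6 bits when payloadType indicates an attribute slice, and, for the flags == 1 variant, tile_data in the top bit with tile_id in the low 24 bits) are assumptions made for illustration.

    def decode_codec_specific_parameters(value: int, flags: int):
        if flags == 0:
            payload_type = (value >> 24) & 0xFF          # tlv_type of the TLV structure
            # attrIdx is meaningful only for attribute-slice payloads (tlv_type 4)
            attr_idx = (value >> 18) & 0x3F if payload_type == 4 else None
            return {"payloadType": payload_type, "attrIdx": attr_idx}
        tile_data = (value >> 31) & 0x1                  # 1: geometry/attribute of one tile
        tile_id = value & 0xFFFFFF if tile_data else None
        return {"tile_data": tile_data, "tile_id": tile_id}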
  • when the G-PCC bitstream is stored in multiple tracks (in the case of multiple-track encapsulation of G-PCC data in ISOBMFF), if subsamples are present, only a SubSampleInformationBox whose flags value is 1 may need to be present in the SampleTableBox, or in the TrackFragmentBox of each of the MovieFragmentBoxes.
  • when a G-PCC bitstream is carried in multiple tracks, a track reference tool may be used to link the tracks.
  • One or more TrackReferenceTypeBoxes may be added to a TrackReferenceBox in the TrackBox of a G-PCC track.
  • the TrackReferenceTypeBox may contain an array of track_IDs specifying the tracks referenced by the G-PCC track.
  • the present disclosure may provide a device and method for supporting temporal scalability in the carriage of G-PCC data (hereinafter, may be referred to as a G-PCC bitstream, an encapsulated G-PCC bitstream, or a G-PCC file).
  • the present disclosure may propose a device and methods for providing a point cloud content service, which efficiently stores a G-PCC bitstream in a single track in a file, or divisionally stores it in a plurality of tracks, and provides a signaling therefor.
  • the present disclosure proposes a device and methods for processing a file storage technique to support efficient access to a stored G-PCC bitstream.
  • Temporal scalability may refer to a function that allows the possibility of extracting one or more subsets of independently coded frames.
  • temporal scalability may refer to a function of dividing G-PCC data into a plurality of different temporal levels and independently processing each G-PCC frame belonging to different temporal levels. If temporal scalability is supported, the G-PCC player (or the transmission device and/or the reception device of the present disclosure) may effectively access a desired component (target component) among G-PCC components.
  • temporal scalability support at the system level may be expressed as more flexible temporal sub-layering.
  • since the system (i.e., the point cloud content provision system) that processes G-PCC data can manipulate the data at a high level to match network capability or decoder capability, the performance of the point cloud content provision system can be improved.
  • the sample grouping method may be a method of grouping samples in a G-PCC file according to a temporal level
  • the track grouping method may be a method of grouping tracks in a G-PCC file according to a temporal level.
  • a sample group may be used to associate samples with temporal levels designated to them. That is, a sample group may indicate which sample belongs to which temporal level. Also, the sample group may be information on a result of grouping one or more samples into one or more temporal levels.
  • the sample group may be referred to as a 'tele' sample group or a temporal level sample group 'tele'.
  • Information on a sample group may include information on a result of sample grouping. Accordingly, the information on the sample group may be information used to associate the samples with the temporal levels designated to them. That is, the information on the sample group may indicate which sample belongs to which temporal level, and may be information on a result of grouping one or more samples into one or more temporal levels.
  • the information on the sample group may be present in tracks including geometry data units.
  • the information on the sample group may be present only in a geometry track to group each sample in the track into a designated temporal level.
  • Samples in attribute tracks may be inferred based on a relationship with a geometry track associated therewith. For example, samples in attribute tracks may belong to the same temporal level as samples in a geometry track associated therewith.
  • the information on the sample group may be present in each tile track referred to by a G-PCC tile base track.
  • the G-PCC tile track may be a volumetric visual track carrying all G-PCC components or a single G-PCC component corresponding to one or more G-PCC tiles.
  • the G-PCC tile base track may be a volumetric visual track carrying all parameter sets and tile inventories corresponding to the G-PCC tile track.
  • Information on a temporal level may be signaled to describe temporal scalability supported by a G-PCC file.
  • the information on the temporal level may be present in a sample entry of a track including a sample group (or information on a sample group).
  • the information on the temporal level may be present in GPCCDecoderConfigurationRecord () or in a G-PCC scalability information box (GPCCScalabilityInfoBox) signaling scalability information of a G-PCC track.
  • Temporal level sample grouping may mean grouping samples in a track according to a temporal level. Such a 'tele' sample group may be in a track containing geometry data.
  • the 'tele' sample group may be information on codec-independent sample grouping. Specifically, being codec-independent may mean a case in which samples of one temporal level have no coding dependency on samples of another temporal level. That is, when samples of one temporal level have no coding dependency on samples of another temporal level, the 'tele' sample group may be present in a track.
  • the 'tele' sample group may include information on a result of grouping samples in a track (and/or potential track fragment) according to a temporal level and/or information on a temporal level identifier value.
  • the 'tele' sample group may be present only in a track carrying geometry data.
  • Samples in tracks carrying attribute data may be inferred based on a relationship with a geometry track associated therewith. For example, samples in attribute tracks may belong to the same temporal level as samples in a geometry track associated therewith.
  • a 'tele' sample group is only needed for a track with a plurality of temporal levels.
  • in the related art, however, samples are grouped and processed, or sample group information is signaled, even for a track with a single temporal level. Therefore, in the related art, coding efficiency or bit efficiency may decrease due to unnecessary sample grouping or sample group information signaling.
  • a 'tele' sample group may be present in a track with a plurality of temporal levels.
  • a track with a plurality of temporal levels may be a track carrying geometry data.
  • information on the number of temporal levels may be defined by syntax elements in different boxes according to sample entry types.
  • information on the number of temporal levels may be defined by a syntax element in a GPCC decoder configuration record (GPCCDecoderConfigurationRecord) and/or a syntax element in a GPCC temporal scalability information box (GPCCScalabilityInfoBox).
  • the number of temporal levels in the track may be defined by the num_temporal_levels syntax element of the GPCC decoder configuration record.
  • the GPCC decoder configuration record may provide G-PCC decoder configuration information for geometry-based point cloud content.
  • the syntax structure of the GPCC decoder configuration record may be defined as shown in Table 1 below.
  • the GPCC decoder configuration record may include at least one of configurationVersion, profile_idc, profile_compatibility_flags, level_idc, numOfSetupUnitArrays, SetupUnitType, completeness, numOfSetupUnits or setupUnit.
  • the num_temporal_levels syntax element may indicate a maximum number of temporal levels in which G-PCC bitstream frames are grouped.
  • the number of temporal levels in a track may be defined by the num_temporal_levels syntax element of the GPCC temporal scalability information box.
  • the GPCC temporal scalability information box may include scalability information of a G-PCC track.
  • the syntax structure of the GPCC temporal scalability information box may be defined as shown in Table 2 below. [Table 2]
  • num_temporal_levels may indicate a maximum number of temporal levels in which G-PCC bitstream frames are grouped.
  • temporal_level_id may indicate temporal level identifier information of a G-PCC sample.
  • the number of temporal levels in the track may be included in a box other than the GPCC decoder configuration record when the track contains a sample entry type equal to 'gpe1', 'gpeg', 'gpc1' or 'gpcg'.
  • the other box may be the GPCC temporal scalability information box or any box including G-PCC related temporal scalability information.
  • the number of temporal levels in the track may be included in a box other than the GPCC temporal scalability information box when the track contains a sample entry type equal to 'gpcb' or 'gpeb'.
  • the other box may be any box including G-PCC related temporal scalability information.
  • the number of temporal levels in the track may be included in a predetermined box regardless of the sample entry type in the track.
  • the predetermined box may be a GPCC decoder configuration record, a GPCC temporal scalability information box or any box including G-PCC related temporal scalability information.
  • the box including G-PCC related temporal scalability information may be referred to by various names.
  • the G-PCC related temporal scalability information box may be referred to as a GPCCScalabilityInfoBox or a GPCCTileScalabilityInfoBox.
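  • put together, the selection rules above might be sketched as follows; the mapping of the 'gpcb'/'gpeb' case to GPCCTileScalabilityInfoBox is merely one of the naming options mentioned above, not a fixed choice.

    def box_for_temporal_level_count(sample_entry_type: str) -> str:
        # 'gpe1'/'gpeg'/'gpc1'/'gpcg': signaled outside the decoder
        # configuration record, e.g. in the temporal scalability box.
        if sample_entry_type in ("gpe1", "gpeg", "gpc1", "gpcg"):
            return "GPCCScalabilityInfoBox"
        # Tile base sample entry types 'gpcb'/'gpeb': signaled outside the
        # GPCC temporal scalability information box.
        if sample_entry_type in ("gpcb", "gpeb"):
            return "GPCCTileScalabilityInfoBox"  # one possible naming, per the text above
        return "GPCCDecoderConfigurationRecord"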
  • samples belonging to a first temporal level identifier may be mapped to a sample group description index equal to the first temporal level identifier plus 1.
  • the sample group description index may mean an index of a sample group description entry, included in a sample group description ('sgpd') box, describing the samples.
  • the sample group description ('sgpd') box may include a sample group description entry for each of the temporal levels signaled in the decoder configuration record.
  • a sample description (stsd) box may include a sample entry for a track.
  • the temporal level of a sample in an attribute track may be equal to the temporal level of a corresponding sample in a referenced geometry track, that is, a sample with the same composition time stamp.
  • the 'tele' sample group may be only present in a track carrying geometry data.
  • FIGS. 13 to 15 illustrate examples of sample grouping in a track according to embodiments of the present disclosure.
  • each of the tracks may include at least one of a GPCC temporal scalability information ('gsci') box, a sample group description ('sgpd') box, or a sample group ('sbgp') box.
  • the 'gsci' box may include information on the number of temporal levels (num_temporal_levels).
  • the 'sgpd' box may include information on common characteristics of samples included in the sample group.
  • the 'sbgp' box may include information on a sample group including samples.
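  • the following sketch shows how per-sample temporal levels could be compressed into the (sample_count, group_description_index) runs carried by an 'sbgp' box, using the mapping described above in which a sample at temporal level t uses sample group description index t + 1 (index 0 being reserved in ISOBMFF for samples belonging to no group).

    def build_sbgp_entries(sample_temporal_levels):
        # Each run is (sample_count, group_description_index); a sample at
        # temporal level t maps to group description index t + 1.
        entries = []
        for level in sample_temporal_levels:
            index = level + 1
            if entries and entries[-1][1] == index:
                entries[-1] = (entries[-1][0] + 1, index)
            else:
                entries.append((1, index))
        return entries

    # e.g. build_sbgp_entries([0, 0, 1, 1, 1, 0]) -> [(2, 1), (3, 2), (1, 1)]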
  • the number of temporal levels present in the 'gsci' box may be 2.
  • the two temporal levels may be distinguished by temporal_level_id.
  • the number of entries included in the 'sgpd' box may be two (entry_count: 2).
  • An entry present in the 'sgpd' box may be represented by SampleGroupDescriptionEntry('tele'), and SampleGroupDescriptionEntry('tele') may mean a 'tele' sample group description entry.
  • the 'sbgp' box of Track 1 may include a 'tele' sample group, and the number of entries may be 12.
  • the number of temporal levels present in the 'gsci' box of Track 2 may be 2. At this time, the two temporal levels may be distinguished by temporal_level_id.
  • the number of entries included in the 'sgpd' box of Track 2 may be four.
  • the entry present in the 'sgpd' box may be represented by SampleGroupDescriptionEntry('tele'), and SampleGroupDescriptionEntry('tele') may mean a 'tele' sample group description entry.
  • the 'sbgp' box of Track 2 may include a 'tele' sample group, and the number of entries may be 12.
  • the 'mdat' box of FIG. 13 may include media data. Specifically, the 'mdat' box may include image frames. The image frames included in the 'mdat' box may have unique identifiers (TemporalId, Tid). For example, the 'mdat' box may include four TemporalIDs (Tid 0, Tid 1, Tid 2, Tid 3).
  • Track 1 of FIG. 14 may include a 'gsci' box, and Track 2 may include a 'sgpd' box and a 'sbgp' box.
  • Track 2 may not include GPCCTileScalabilityInfoBox('gtsi' box) because it is a temporal level tile track containing samples of all temporal levels.
  • the GPCC file structure of FIG. 15 may include a tile base track (Track 1) and two temporal level tile tracks (Track 2 and Track 3).
  • in Track 1, the number of 'tele' sample group description entries in the 'sgpd' box may be equal to the maximum value of temporal_level_id in the 'gtsi' box plus 1.
  • FIGS. 16 to 17 are flowcharts illustrating 'tele' sample grouping according to embodiments of the present disclosure.
  • a transmission device 10 may generate temporal level number information of a track in a G-PCC file (S1610).
  • the temporal level number information may be information representing the number of temporal levels present in a track.
  • the temporal level number information may be represented by num_temporal_levels.
  • the transmission device 10 may configure at least one sample group based on the number of temporal levels (or temporal level number information) (S1620). For example, the transmission device 10 may configure at least one sample group when there is a plurality of temporal levels in the track. As another example, the transmission device 10 may configure at least one sample group when there is a plurality of temporal levels in the track and the corresponding track is a geometry track.
  • the sample group may be a 'tele' sample group.
  • the transmission device 10 may generate sample group information (S1630).
  • the sample group information may be information on the sample group configured in step S1620.
  • the sample group information may include temporal level identifier information of a G-PCC sample.
  • the temporal level identifier information may be represented by temporal_level_id.
  • the transmission device 10 may generate a G-PCC file (S1640).
  • the G-PCC file may include at least one of temporal level number information and/or sample group information.
  • samples belonging to a temporal level identifier may be mapped to a sample group description index.
  • the sample group description index may be a value obtained by adding 1 to the temporal level identifier value.
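  • a minimal end-to-end sketch of steps S1610 to S1640 is shown below, assuming a hypothetical Track record; only the grouping decision is modeled, not actual file writing.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Track:                       # hypothetical record, not an ISOBMFF structure
        is_geometry: bool
        sample_temporal_levels: List[int]
        boxes: Dict = field(default_factory=dict)

    def encapsulate_track(track: Track) -> Track:
        # S1610: generate temporal level number information for the track
        num_temporal_levels = len(set(track.sample_temporal_levels))
        track.boxes["num_temporal_levels"] = num_temporal_levels
        # S1620/S1630: configure a 'tele' sample group and its information only
        # when the (geometry) track has a plurality of temporal levels
        if num_temporal_levels >= 2 and track.is_geometry:
            track.boxes["tele_sample_groups"] = [
                lvl + 1 for lvl in track.sample_temporal_levels  # group description indices
            ]
        # S1640: the track, with or without sample group information, goes into the G-PCC file
        return track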
  • a reception device 20 may obtain a G-PCC file (S1710).
  • the G-PCC file may include point cloud data, temporal level number information of a track, and sample group information.
  • the reception device 20 may obtain the temporal level number information of a track in the G-PCC file from the G-PCC file (S1720).
  • the temporal level number information may be num_temporal_levels.
  • the reception device 20 may obtain sample group information of samples in a track from the G-PCC file based on the temporal level number information (S1730). For example, the reception device 20 may configure at least one sample group when there is a plurality of temporal levels in the track. As another example, the reception device 20 may configure at least one sample group when there is a plurality of temporal levels in the track and the corresponding track is a geometry track.
  • the sample group information may include temporal level identifier information of G-PCC samples.
  • the temporal level identifier information may be represented by temporal_level_id.
  • FIG. 18 illustrates an example of a structure of a G-PCC file for 'tele' sample grouping according to embodiments of the present disclosure.
  • a G-PCC file 1800 may be a multi-track encapsulation file in which two or more tracks are present in a moov box 1805.
  • the types of two or more tracks present in the moov box 1805 may include a geometry track and/or an attribute track.
  • the 'tele' sample group may be present only in a geometry track.
  • the 'tele' sample group may be present in the track only when there is a plurality of temporal levels in the geometry track.
  • in geometry track #1 1810, the temporal levels in the track are 0, 1, and 2, so there is a plurality of temporal levels.
  • accordingly, a 'tele' sample group may be present in geometry track #1 1810.
  • in geometry track #2 1815, the only temporal level in the track is 3, so the number of temporal levels is 1. Therefore, in geometry track #2 1815, the 'tele' sample group may not be present in the track.
  • FIGS. 19 and 20 are flowcharts for data transmission/reception based on sample grouping according to embodiments of the present disclosure.
  • the transmission device 10 may determine whether to group samples based on the number of temporal levels in a track (S1910).
  • the transmission device 10 may group samples in the track when the number of temporal levels in the track is two or more. On the other hand, when the number of temporal levels in the track is one, the transmission device 10 may not group samples in the track. If the samples are grouped based on the number of temporal levels in the track being 2 or more, the transmission device 10 may generate corresponding sample group information (S1920).
  • the sample group information may include temporal level index information.
  • the transmission device 10 may encapsulate the samples based on the sample group information generated in step S1920 (S1930). On the other hand, if the samples in the track are not grouped, the transmission device 10 may encapsulate the samples without generating sample group information (S1930).
  • the reception device 20 may determine whether the number of temporal levels is 2 or more (S2010).
  • the temporal level number information may be information on the number of temporal levels present in the track.
  • the reception device 20 may determine that the samples in the track are grouped when the number of temporal levels indicated by the temporal level number information is 2 or more.
  • the reception device 20 may determine that the samples in the track are not grouped when the number of temporal levels indicated by the temporal level number information is 1.
  • the reception device 20 may obtain sample group information (S2020). Sample group information may include temporal level index information in the track. The temporal level index information may be temporal_level_id. The reception device 20 may obtain sample group information based on the number of temporal levels determined in step S2010. Specifically, the reception device 20 may obtain sample group information when the number of temporal levels determined in step S2010 is 2 or more.
  • the reception device 20 may decapsulate the samples based on the sample group information obtained in step S2020 (S2030). On the other hand, the reception device 20 may decapsulate the samples without checking the sample group information when the number of temporal levels in the track is less than 2 (S2030).
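  • a matching sketch of the reception-side flow (S2010 to S2030) is shown below, reusing the same hypothetical box dictionary as in the transmission sketch above.

    def decapsulate_track(track_boxes: dict):
        # S2010: read temporal level number information and check whether samples are grouped
        num_temporal_levels = track_boxes.get("num_temporal_levels", 1)
        if num_temporal_levels >= 2:
            # S2020: obtain sample group information (per-sample group description indices)
            sample_groups = track_boxes.get("tele_sample_groups")
            # S2030: decapsulate samples using the group information, e.g. to keep
            # only samples at or below a target temporal level
            return sample_groups
        # S2030: single temporal level -- decapsulate without sample group information
        return None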
  • the scope of the disclosure includes software or machine-executable commands (e.g., an operating system, an application, firmware, a program, etc.) that enable operations according to the methods of various embodiments to be executed on an apparatus or a computer, and a non-transitory computer-readable medium having such software or commands stored thereon and executable on the apparatus or the computer.
  • the embodiments of the present disclosure may be used to provide point cloud content.
  • the embodiments of the present disclosure may be used to encode/decode point cloud data.


Abstract

A transmission device of point cloud data, a method performed by the transmission device, a reception device and a method performed by the reception device are provided. A method performed by a reception device of point cloud data may comprise obtaining a geometry-based point cloud compression (G-PCC) file including the point cloud data, obtaining temporal level number information indicating the number of temporal levels in a track within the G-PCC file from the G-PCC file, and obtaining sample group information of samples in the track from the G-PCC file based on the temporal level number information. The sample group information may be obtained based on the temporal level number information indicating that there is a plurality of temporal levels in the track.

Description

    Technical Field
  • The present disclosure relates to a method and device for processing point cloud content.
  • Background Art
  • Point cloud content is expressed as a point cloud which is a set of points belonging to a coordinate system representing a three-dimensional space. The point cloud content may represent three-dimensional media and is used to provide various services such as virtual reality (VR), augmented reality (AR), mixed reality (MR) and self-driving services. Since tens of thousands to hundreds of thousands of points are required to express point cloud content, a method of efficiently processing a vast amount of point data is required.
  • Disclosure
  • Technical Problem
  • The present disclosure provides a device and method for efficiently processing point cloud data. The present disclosure provides a point cloud data processing method and device for solving latency and encoding/decoding complexity.
  • In addition, the present disclosure provides device and methods for supporting 'tele' sample grouping according to the number of temporal levels in a geometry track.
  • In addition, the present disclosure proposes a device and methods for processing a file storage technique to support efficient access to a stored G-PCC bitstream.
  • The technical problems solved by the present disclosure are not limited to the above technical problems and other technical problems which are not described herein will become apparent to those skilled in the art from the following description.
  • Technical Solution
  • A method performed by a reception device of point cloud data according to an embodiment of the present disclosure may comprise obtaining a geometry-based point cloud compression (G-PCC) file including the point cloud data, obtaining temporal level number information indicating the number of temporal levels in a track within the G-PCC file from the G-PCC file, and obtaining sample group information of samples in the track from the G-PCC file based on the temporal level number information. The sample group information may be obtained based on the temporal level number information indicating that there is a plurality of temporal levels in the track.
  • A method performed by a transmission device of point cloud data according to another embodiment of the present disclosure may comprise generating temporal level number information indicating the number of temporal levels in a track within a geometry-based point cloud compression (G-PCC) file, generating sample group information which is information on grouping of samples in the track based on the number of temporal levels, and generating the G-PCC file including the temporal level number information. The sample group information may be generated based on there being a plurality of temporal levels.
  • A reception device of point cloud data according to another embodiment of the present disclosure may comprise a memory and at least one processor. The at least one processor may obtain a geometry-based point cloud compression (G-PCC) file including the point cloud data, obtain temporal level number information indicating the number of temporal levels in a track within the G-PCC file from the G-PCC file, and obtain sample group information of samples in the track from the G-PCC file based on the temporal level number information. The sample group information may be obtained based on the temporal level number information indicating that there is a plurality of temporal levels in the track.
  • A transmission device of point cloud data according to another embodiment of the present disclosure may comprise a memory and at least one processor. The at least one processor may generate temporal level number information indicating the number of temporal levels in a track within a geometry-based point cloud compression (G-PCC) file, generate sample group information which is information on grouping of samples in the track based on the number of temporal levels and generate the G-PCC file including the temporal level number information. The sample group information may be generated based on there being a plurality of temporal levels.
  • Advantageous Effects
  • The device and method according to embodiments of the present disclosure may process point cloud data with high efficiency.
  • The device and method according to embodiments of the present disclosure may provide a high-quality point cloud service.
  • The device and method according to embodiments of the present disclosure may provide point cloud content for providing universal services such as a VR service and a self-driving service.
  • The device and method according to embodiments of the present disclosure may provide temporal scalability for effectively accessing a desired component among G-PCC components.
  • The device and method according to the embodiments of the present disclosure can reduce bits by performing 'tele' sample grouping when a track having a plurality of temporal levels is present.
  • The device and method according to the embodiments of the present disclosure can reduce bits by performing 'tele' sample grouping when a geometry track having a plurality of temporal levels is present.
  • Description of Drawings
    • FIG. 1 is a block diagram illustrating an example of a point cloud content provision system according to embodiments of the present disclosure.
    • FIG. 2 is a block diagram illustrating an example of a point cloud content provision process according to embodiments of the present disclosure.
    • FIG. 3 is a diagram illustrating an example of a point cloud encoding apparatus according to embodiments of the present disclosure.
    • FIG. 4 is a block diagram illustrating an example of a point cloud decoding apparatus according to embodiments of the present disclosure.
    • FIG. 5 is a block diagram illustrating another example of a point cloud decoding apparatus according to embodiments of the present disclosure.
    • FIG. 6 is a diagram illustrating an example of a structure interworking with a point cloud data transmission/reception method/apparatus according to embodiments of the present disclosure.
    • FIG. 7 is a block diagram illustrating another example of a transmission device according to embodiments of the present disclosure.
    • FIG. 8 is a block diagram illustrating another example of a reception device according to embodiments of the present disclosure.
    • FIG. 9 illustrates an example of a TLV encapsulation structure according to embodiments of the present disclosure.
    • FIG. 10 illustrates an example of a TLV encapsulation syntax structure and a payload type according to embodiments of the present disclosure.
    • FIG. 11 illustrates an example of a file including a single track according to embodiments of the present disclosure.
    • FIG. 12 illustrates an example of a file including multiple tracks according to embodiments of the present disclosure.
    • FIGS. 13 to 15 illustrate examples of sample grouping in a track according to embodiments of the present disclosure.
    • FIGS. 16 to 17 are flowcharts illustrating 'tele' sample grouping according to embodiments of the present disclosure.
    • FIG. 18 illustrates an example of a structure of a G-PCC file for 'tele' sample grouping according to embodiments of the present disclosure.
    • FIG. 19 is a flowchart for data transmission based on sample grouping according to embodiments of the present disclosure.
    • FIG. 20 is a flowchart for data reception based on sample grouping according to embodiments of the present disclosure.
    Mode for Invention
  • Hereinafter, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that those of ordinary skill in the art to which the present disclosure pertains can easily implement them. The present disclosure may be embodied in several different forms and is not limited to the embodiments described herein.
  • In describing the present disclosure, a detailed description of known functions and configurations will be omitted when it may obscure the subject matter of the present disclosure. In the drawings, parts not related to the description of the present disclosure are omitted, and similar reference numerals are attached to similar parts.
  • In the present disclosure, when a component is "connected", "coupled" or "linked" to another component, it may include not only a direct connection relationship but also an indirect connection relationship in which another component exists therebetween. In addition, when it is said that a component "includes" or "has" another component, this indicates that other components are not excluded and may be further included unless specifically described otherwise.
  • In the present disclosure, terms such as first, second, etc. are used only for the purpose of distinguishing one component from other components and, unless otherwise specified, do not limit the order or importance of the components. Accordingly, within the scope of the present disclosure, a first component in one embodiment may be referred to as a second component in another embodiment, and, similarly, a second component in one embodiment may be referred to as a first component in another embodiment.
  • In the present disclosure, components that are distinguished from each other are for clearly explaining features thereof, and do not necessarily mean that the components are separated. That is, a plurality of components may be integrated to form one hardware or software unit, or one component may be distributed to form a plurality of hardware or software units. Accordingly, even if not specifically mentioned, such integrated or distributed embodiments are also included in the scope of the present disclosure.
  • In the present disclosure, components described in various embodiments do not necessarily mean essential components, and some thereof may be optional components. Accordingly, an embodiment composed of a subset of components described in one embodiment is also included in the scope of the present disclosure. In addition, embodiments including other components in addition to components described in various embodiments are also included in the scope of the present disclosure.
  • The present disclosure relates to encoding and decoding of point cloud-related data, and terms used in the present disclosure may have general meanings commonly used in the technical field to which the present disclosure belongs unless they are newly defined in the present disclosure.
  • In the present disclosure, the terms "/" and "," should be interpreted to indicate "and/or." For instance, the expressions "A/B" and "A, B" may mean "A and/or B." Further, "A/B/C" and "A, B, C" may mean "at least one of A, B, and/or C."
  • In the present disclosure, the term "or" should be interpreted to indicate "and/or." For instance, the expression "A or B" may comprise 1) only "A", 2) only "B", and/or 3) both "A and B". In other words, in the present disclosure, the term "or" should be interpreted to indicate "additionally or alternatively."
  • The present disclosure relates to compression of point cloud-related data. Various methods or embodiments of the present disclosure may be applied to a point cloud compression or point cloud coding (PCC) standard (e.g., G-PCC or V-PCC standard) of a moving picture experts group (MPEG) or a next-generation video/image coding standard.
  • In the present disclosure, a "point cloud" may mean a set of points located in a three-dimensional space. Also, in the present disclosure, "point cloud content" is expressed as a point cloud, and may mean a "point cloud video/image". Hereinafter, the 'point cloud video/image' is referred to as a 'point cloud video'. A point cloud video may include one or more frames, and one frame may be a still image or a picture. Accordingly, the point cloud video may include a point cloud image/frame/picture, and may be referred to as any one of a "point cloud image", a "point cloud frame", and a "point cloud picture".
  • In the present disclosure, "point cloud data" may mean data or information related to each point in the point cloud. Point cloud data may include geometry and/or attribute. In addition, the point cloud data may further include metadata. The point cloud data may be referred to as "point cloud content data" or "point cloud video data" or the like. In addition, the point cloud data may be referred to as "point cloud content", "point cloud video", "G-PCC data", and the like.
  • In the present disclosure, a point cloud object corresponding to point cloud data may be represented in a box shape based on a coordinate system, and the box shape based on the coordinate system may be referred to as a bounding box. That is, the bounding box may be a rectangular cuboid capable of accommodating all points of the point cloud, and may be a cuboid including a source point cloud frame.
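  • To make the bounding box concrete, the axis-aligned bounding box of a point cloud can be derived by taking per-axis minima and maxima of the point positions. The following Python sketch is illustrative only and not part of the disclosure; all names are hypothetical.

    import numpy as np

    def bounding_box(points: np.ndarray):
        """Return the axis-aligned bounding box (minimum corner, maximum
        corner) of an (N, 3) array of point positions."""
        return points.min(axis=0), points.max(axis=0)

    # A rectangular cuboid that accommodates all points of the cloud.
    pts = np.array([[0.0, 1.0, 2.0], [3.0, -1.0, 0.5]])
    lo, hi = bounding_box(pts)  # lo = [0., -1., 0.5], hi = [3., 1., 2.]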
  • In the present disclosure, geometry includes the position (or position information) of each point, and the position may be expressed by parameters (e.g., an x-axis value, a y-axis value, and a z-axis value) of a three-dimensional coordinate system (e.g., a coordinate system consisting of an x-axis, a y-axis, and a z-axis). The geometry may be referred to as "geometric information".
  • In the present disclosure, the attribute may include properties of each point, and the properties may include one or more of texture information, color (RGB or YCbCr), reflectance (r), transparency, etc. of each point. The attribute may be referred to as "attribute information". Metadata may include various data related to acquisition in an acquisition process to be described later.
  • Overview of point cloud content provision system
  • FIG. 1 illustrates an example of a system for providing point cloud content (hereinafter, referred to as a 'point cloud content provision system') according to embodiments of the present disclosure. FIG. 2 illustrates an example of a process in which the point cloud content provision system provides point cloud content.
  • As shown in FIG. 1, the point cloud content provision system may include a transmission device 10 and a reception device 20. The point cloud content provision system may perform an acquisition process S20, an encoding process S21, a transmission process S22, a decoding process S23, a rendering process S24 and/or a feedback process S25 shown in FIG. 2 by operation of the transmission device 10 and the reception device 20.
  • The transmission device 10 acquires point cloud data and outputs a bitstream through a series of processes (e.g., encoding process) for the acquired point cloud data (source point cloud data), in order to provide point cloud content. Here, the point cloud data may be output in the form of a bitstream through an encoding process. In some embodiments, the transmission device 10 may transmit the output bitstream in the form of a file or streaming (streaming segment) to the reception device 20 through a digital storage medium or a network. The digital storage medium may include a variety of storage media such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD. The reception device 20 may process (e.g., decode or reconstruct) the received data (e.g., encoded point cloud data) into source point cloud data and render it. The point cloud content may be provided to the user through these processes, and the present disclosure may provide various embodiments necessary to effectively perform a series of these processes.
  • As illustrated in FIG. 1, the transmission device 10 may include an acquisition unit 11, an encoding unit 12, an encapsulation processing unit 13 and a transmission unit 14, and the reception device 20 may include a reception unit 21, a decapsulation processing unit 22, a decoding unit 23, and a rendering unit 24.
  • The acquisition unit 11 may perform a process S20 of acquiring a point cloud video through a capturing, synthesizing or generating process. Accordingly, the acquisition unit 11 may be referred to as a 'point cloud video acquisition unit'.
  • Point cloud data (geometry and/or attribute, etc.) for a plurality of points may be generated by the acquisition process (S20). Also, through the acquisition process (S20), metadata related to the acquisition of the point cloud video may be generated. Also, mesh data (e.g., triangular data) indicating connection information between point clouds may be generated by the acquisition process (S20).
  • The metadata may include initial viewing orientation metadata. The initial viewing orientation metadata may indicate whether the point cloud data is data representing the front or the back. The metadata may be referred to as "auxiliary data" that is metadata for the point cloud.
  • The acquired point cloud video may include one or more files in the polygon file format (PLY), also known as the Stanford triangle format. Since the point cloud video has one or more frames, the acquired point cloud video may include one or more PLY files. A PLY file may include the point cloud data of each point.
  • In order to acquire a point cloud video (or point cloud data), the acquisition unit 11 may be composed of a combination of camera equipment capable of acquiring depth (depth information) and RGB cameras capable of extracting color information corresponding to the depth information. Here, the camera equipment capable of acquiring the depth information may be a combination of an infrared pattern projector and an infrared camera. In addition, the acquisition unit 11 may be composed of a LiDAR, which may measure the position coordinates of a reflector based on the time required for a laser pulse to be emitted, reflected, and returned.
  • The acquisition unit 11 may extract a shape of geometry composed of points in a three-dimensional space from the depth information, and may extract an attribute representing the color or reflectance of each point from the RGB information.
  • As a method of extracting (or capturing, acquiring, etc.) a point cloud video (or point cloud data), there may be an inward-facing method of capturing a central object and an outward-facing method of capturing an external environment.
  • The encoding unit 12 may perform the encoding process (S21) of encoding the data (e.g., geometry, attribute and/or metadata, and/or mesh data, etc.) generated by the acquisition unit 11 into one or more bitstreams. Accordingly, the encoding unit 12 may be referred to as a 'point cloud video encoder'. The encoding unit 12 may encode the data generated by the acquisition unit 11 in series or in parallel.
  • The encoding process S21 performed by the encoding unit 12 may be geometry-based point cloud compression (G-PCC). The encoding unit 12 may perform a series of procedures such as prediction, transform, quantization, and entropy coding for compression and coding efficiency.
  • The encoded point cloud data may be output in the form of a bitstream. Based on the G-PCC procedure, the encoding unit 12 may partition the point cloud data into geometry and attribute and encode them as described below. In this case, the output bitstream may include a geometry bitstream including the encoded geometry and an attribute bitstream including the encoded attribute. In addition, the output bitstream may further include one or more of a metadata bitstream including metadata, an auxiliary bitstream including auxiliary data, and a mesh data bitstream including mesh data. The encoding process (S21) will be described in more detail below. A bitstream including the encoded point cloud data may be referred to as a 'point cloud bitstream' or a 'point cloud video bitstream'.
  • The encapsulation processing unit 13 may perform a process of encapsulating one or more bitstreams output from the encoding unit 12 in the form of a file or a segment. Accordingly, the encapsulation processing unit 13 may be referred to as a 'file/segment encapsulation module'. Although the drawing shows an example in which the encapsulation processing unit 13 is composed of a separate component/module in relation to the transmission unit 14, the encapsulation processing unit 13 may be included in the transmission unit 14 in some embodiments.
  • The encapsulation processing unit 13 may encapsulate the data in a file format such as ISO Base Media File Format (ISOBMFF) or process the data in the form of DASH segments or the like. In some embodiments, the encapsulation processing unit 13 may include metadata in the file format. The metadata may be included, for example, in boxes of various levels in the ISOBMFF file format, or as data in a separate track within the file. In some embodiments, the encapsulation processing unit 13 may encapsulate the metadata itself into a file. The metadata processed by the encapsulation processing unit 13 may be transmitted from a metadata processing unit not shown in the drawing. The metadata processing unit may be included in the encoding unit 12 or may be configured as a separate component/module.
  • The transmission unit 14 may perform the transmission process (S22) of applying processing (processing for transmission) according to a file format to the encapsulated point cloud bitstream. The transmission unit 14 may transmit the bitstream or a file/segment including the bitstream to the reception unit 21 of the reception device 20 through a digital storage medium or a network. Accordingly, the transmission unit 14 may be referred to as a 'transmitter' or a 'communication module'.
  • The transmission unit 14 may process point cloud data according to an arbitrary transmission protocol. Here, 'processing the point cloud data according to the arbitrary transmission protocol' may be 'processing for transmission'. The processing for transmission may include processing for transmission through a broadcast network, processing for transmission through a broadband, and the like. In some embodiments, the transmission unit 14 may receive not only point cloud data but also metadata from the metadata processing unit, and may perform processing for transmission on the transmitted metadata. In some embodiments, the processing for transmission may be performed by the transmission processing unit, and the transmission processing unit may be included in the transmission unit 14 or configured as a component/module separate from the transmission unit 14.
  • The reception unit 21 may receive the bitstream transmitted by the transmission device 10 or a file/segment including the bitstream. Depending on the transmitted channel, the reception unit 21 may receive a bitstream or a file/segment including the bitstream through a broadcast network, or may receive a bitstream or a file/segment including the bitstream through a broadband. Alternatively, the reception unit 21 may receive a bitstream or a file/segment including the bitstream through a digital storage medium.
  • The reception unit 21 may perform processing according to a transmission protocol on the received bitstream or the file/segment including the bitstream. The reception unit 21 may perform a reverse process of transmission processing (processing for transmission) to correspond to processing for transmission performed by the transmission device 10. The reception unit 21 may transmit the encoded point cloud data among the received data to the decapsulation processing unit 22 and may transmit metadata to a metadata parsing unit. The metadata may be in the form of a signaling table. In some embodiments, the reverse process of the processing for transmission may be performed in the reception processing unit. Each of the reception processing unit, the decapsulation processing unit 22, and the metadata parsing unit may be included in the reception unit 21 or may be configured as a component/module separate from the reception unit 21.
  • The decapsulation processing unit 22 may decapsulate the point cloud data (i.e., a bitstream in a file format) in a file format received from the reception unit 21 or a reception processing unit. Accordingly, the decapsulation processing unit 22 may be referred to as a 'file/segment decapsulation module'.
  • The decapsulation processing unit 22 may acquire a point cloud bitstream or a metadata bitstream by decapsulating files according to ISOBMFF or the like. In some embodiments, metadata (a metadata bitstream) may be included in the point cloud bitstream. The acquired point cloud bitstream may be transmitted to the decoding unit 23, and the acquired metadata bitstream may be transmitted to the metadata processing unit. The metadata processing unit may be included in the decoding unit 23 or may be configured as a separate component/module. The metadata obtained by the decapsulation processing unit 22 may be in the form of a box or track in a file format. If necessary, the decapsulation processing unit 22 may receive metadata required for decapsulation from the metadata processing unit. The metadata may be transmitted to the decoding unit 23 and used in the decoding process (S23), or may be transmitted to the rendering unit 24 and used in the rendering process (S24).
  • The decoding unit 23 may receive the bitstream and perform operation corresponding to the operation of the encoding unit 12, thereby performing the decoding process (S23) of decoding the point cloud bitstream (encoded point cloud data). Accordingly, the decoding unit 23 may be referred to as a 'point cloud video decoder'.
  • The decoding unit 23 may partition the point cloud data into geometry and attribute and decode them. For example, the decoding unit 23 may reconstruct (decode) geometry from a geometry bitstream included in the point cloud bitstream, and restore (decode) attribute based on the reconstructed geometry and an attribute bitstream included in the point cloud bitstream. A three-dimensional point cloud video/image may be reconstructed based on position information according to the reconstructed geometry and attribute (such as color or texture) according to the decoded attribute. The decoding process (S23) will be described in more detail below.
  • The rendering unit 24 may perform the rendering process S24 of rendering the reconstructed point cloud video. Accordingly, the rendering unit 24 may be referred to as a 'renderer'.
  • The rendering process S24 may refer to a process of rendering and displaying point cloud content in a 3D space. The rendering process S24 may perform rendering according to a desired rendering method based on the position information and attribute information of the points decoded through the decoding process.
  • The feedback process S25 may include a process of transmitting various feedback information that may be acquired during the rendering process S24 or the display process to the transmission device 10 or to other components in the reception device 20. The feedback process S25 may be performed by one or more of the components included in the reception device 20 of FIG. 1 or may be performed by one or more of the components shown in FIGS. 7 and 8. In some embodiments, the feedback process S25 may be performed by a 'feedback unit' or a 'sensing/tracking unit'.
  • Overview of point cloud encoding apparatus
  • FIG. 3 illustrates an example of a point cloud encoding apparatus 300 according to embodiments of the present disclosure. The point cloud encoding apparatus 300 of FIG. 3 may correspond to the encoding unit 12 of FIG. 1 in terms of the configuration and function.
  • As shown in FIG. 3, the point cloud encoding apparatus 300 may include a coordinate system transform unit 305, a geometry quantization unit 310, an octree analysis unit 315, an approximation unit 320, a geometry encoding unit 325, a reconstruction unit 330, an attribute transform unit 340, a RAHT transform unit 345, an LOD generation unit 350, a lifting unit 355, an attribute quantization unit 360, an attribute encoding unit 365, and/or a color transform unit 335.
  • The point cloud data acquired by the acquisition unit 11 may undergo processes of adjusting the quality of the point cloud content (e.g., lossless, lossy, near-lossless) according to the network situation or application. In addition, each point of the acquired point cloud content may be transmitted without loss, but, in that case, real-time streaming may not be possible because the size of the point cloud content is large. Therefore, in order to provide the point cloud content smoothly, a process of reconstructing the point cloud content according to a maximum target bitrate is required.
  • Processes of adjusting the quality of the point cloud content may be processes of reconstructing and encoding the position information (position information included in the geometry information) or color information (color information included in the attribute information) of the points. A process of reconstructing and encoding position information of points may be referred to as geometry coding, and a process of reconstructing and encoding attribute information associated with each point may be referred to as attribute coding.
  • Geometry coding may include a geometry quantization process, a voxelization process, an octree analysis process, an approximation process, a geometry encoding process, and/or a coordinate system transform process. Also, geometry coding may further include a geometry reconstruction process. Attribute coding may include a color transform process, an attribute transform process, a prediction transform process, a lifting transform process, a RAHT transform process, an attribute quantization process, an attribute encoding process, and the like.
  • Geometry coding
  • The coordinate system transform process may correspond to a process of transforming a coordinate system for positions of points. Therefore, the coordinate system transform process may be referred to as 'transform coordinates'. The coordinate system transform process may be performed by the coordinate system transform unit 305. For example, the coordinate system transform unit 305 may transform the positions of the points from the global space coordinate system to position information in a three-dimensional space (e.g., a three-dimensional space expressed in coordinate system of the X-axis, Y-axis, and Z-axis). Position information in the 3D space according to embodiments may be referred to as 'geometric information'.
  • The geometry quantization process may correspond to a process of quantizing the position information of points, and may be performed by the geometry quantization unit 310. For example, the geometry quantization unit 310 may find position information having minimum (x, y, z) values among the position information of the points, and subtract position information having the minimum (x, y, z) positions from the position information of each point. In addition, the geometry quantization unit 310 may multiply the subtracted value by a preset quantization scale value, and then adjust (lower or raise) the result to a near integer value, thereby performing the quantization process.
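  • As an illustrative (non-normative) Python sketch of the quantization steps just described, with all names hypothetical:

    import numpy as np

    def quantize_positions(points: np.ndarray, scale: float) -> np.ndarray:
        """Quantize (N, 3) point positions: subtract the per-axis minimum
        position, multiply by a preset quantization scale, and round the
        result to the nearest integer."""
        origin = points.min(axis=0)  # position information with minimum (x, y, z) values
        return np.round((points - origin) * scale).astype(np.int64)

    q = quantize_positions(np.array([[1.2, 3.4, 5.6], [0.2, 0.4, 0.6]]), scale=10.0)
    # q = [[10, 30, 50], [0, 0, 0]]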
  • The voxelization process may correspond to a process of matching geometry information quantized through the quantization process to a specific voxel present in a 3D space. The voxelization process may also be performed by the geometry quantization unit 310. The geometry quantization unit 310 may perform octree-based voxelization based on position information of the points, in order to reconstruct each point to which the quantization process is applied.
  • Through the octree analysis process, the octree analysis unit 315 may generate an occupancy code representing, for each octree node, which of its eight child nodes contain points. The geometry encoding process may correspond to a process of performing entropy coding on this occupancy code, and may be performed by the geometry encoding unit 325. The generated occupancy code may be immediately encoded or may be encoded through an intra/inter coding process to increase compression efficiency. The reception device 20 may reconstruct the octree through the occupancy code.
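  • The occupancy code can be illustrated with a short Python sketch (assumptions: one 8-bit code per node with one bit per child octant; the bit order shown is arbitrary, not normative):

    import numpy as np

    def occupancy_code(points: np.ndarray, node_origin, node_size: float) -> int:
        """Compute an 8-bit occupancy code for one octree node: each bit
        indicates whether the corresponding child octant contains at least
        one point."""
        half = node_size / 2.0
        origin = np.asarray(node_origin, dtype=float)
        code = 0
        for child in range(8):
            offset = np.array([(child >> 2) & 1, (child >> 1) & 1, child & 1]) * half
            lo = origin + offset
            inside = np.all((points >= lo) & (points < lo + half), axis=1)
            if inside.any():
                code |= 1 << (7 - child)  # bit position per child is an assumption
        return code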
  • On the other hand, in the case of a specific area having no points or very few points, it may be inefficient to voxelize all areas. That is, since there are few points in a specific area, it may not be necessary to construct the entire octree. For this case, an early termination method may be required.
  • For a specific area of this kind (corresponding to a specific node that is not a leaf node), instead of partitioning the node into eight sub-nodes (child nodes), the point cloud encoding apparatus 300 may directly transmit the positions of the points in the area, or may reconfigure the positions of the points within the area on a voxel basis using a surface model.
  • A mode for directly transmitting the position of each point for a specific node may be a direct mode. The point cloud encoding apparatus 300 may check whether conditions for enabling the direct mode are satisfied.
  • The conditions for enabling the direct mode are: 1) the option to use the direct mode shall be enabled, 2) the specific node shall not correspond to a leaf node, 3) the number of points within the specific node shall be below a threshold, and 4) the total number of points to be directly transmitted shall not exceed a limit value.
  • When all of the above conditions are satisfied, the point cloud encoding apparatus 300 may entropy-code and transmit the position value of the point directly for the specific node through the geometry encoding unit 325.
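  • The four conditions can be summarized as a simple predicate. In this hedged Python sketch, the threshold and limit values are placeholders rather than values defined by the disclosure:

    def direct_mode_eligible(direct_mode_enabled: bool, is_leaf_node: bool,
                             points_in_node: int, points_directly_coded: int,
                             point_threshold: int = 2, total_limit: int = 100) -> bool:
        """Return True only if all four direct-mode conditions hold."""
        return (direct_mode_enabled                        # 1) option enabled
                and not is_leaf_node                       # 2) not a leaf node
                and points_in_node < point_threshold       # 3) few points in the node
                and points_directly_coded <= total_limit)  # 4) total limit not exceeded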
  • A mode in which a position of a point in a specific area is reconstructed based on a voxel using a surface model may be a trisoup mode. The trisoup mode may be performed by the approximation unit 320. The approximation unit 320 may determine a specific level of the octree and reconstruct the positions of points in the node area based on the voxel using the surface model from the determined specific level.
  • The point cloud encoding apparatus 300 may selectively apply the trisoup mode. Specifically, the point cloud encoding apparatus 300 may designate a level (specific level) to which the trisoup mode is applied, when the trisoup mode is used. For example, when the specified specific level is equal to the depth (d) of the octree, the trisoup mode may not be applied. That is, the designated specific level shall be less than the depth value of the octree.
  • A three-dimensional cubic area of nodes of the designated specific level is called a block, and one block may include one or more voxels. A block or voxel may correspond to a brick. Each block may have 12 edges, and the approximation unit 320 may check whether each edge is adjacent to an occupied voxel having a point. Each edge may be adjacent to several occupied voxels. A specific position of an edge adjacent to a voxel is called a vertex, and, when a plurality of occupied voxels are adjacent to one edge, the approximation unit 320 may determine the average of the corresponding positions as the vertex.
  • When a vertex is present, the point cloud encoding apparatus 300 may entropy-code, through the geometry encoding unit 325, the starting point (x, y, z) of the edge, the direction vector (Δx, Δy, Δz) of the edge, and the position value of the vertex (a relative position value within the edge).
  • The geometry reconstruction process may correspond to a process of generating a reconstructed geometry by reconstructing an octree and/or an approximated octree. The geometry reconstruction process may be performed by the reconstruction unit 330. The reconstruction unit 330 may perform a geometry reconstruction process through triangle reconstruction, up-sampling, voxelization, and the like.
  • When the trisoup mode is applied in the approximation unit 320, the reconstruction unit 330 may reconstruct a triangle based on the starting point of the edge, the direction vector of the edge and the position value of the vertex.
  • The reconstruction unit 330 may perform an upsampling process for voxelization by adding points in the middle along the edges of the triangle. The reconstruction unit 330 may generate additional points based on an upsampling factor and the width of the block. These points may be called refined vertices. The reconstruction unit 330 may voxelize the refined vertices, and the point cloud encoding apparatus 300 may perform attribute coding based on the voxelized position values.
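  • The upsampling step can be sketched in Python as follows (illustrative only; the interaction of the upsampling factor with the block width is simplified here):

    import numpy as np

    def upsample_triangle_edges(v0, v1, v2, factor: int) -> np.ndarray:
        """Add evenly spaced points (refined vertices) along each edge of a
        reconstructed triangle, according to an upsampling factor."""
        v0, v1, v2 = (np.asarray(v, dtype=float) for v in (v0, v1, v2))
        refined = []
        for a, b in ((v0, v1), (v1, v2), (v2, v0)):
            for i in range(1, factor):
                refined.append(a + (b - a) * i / factor)
        return np.array(refined)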
  • In some embodiments, the geometry encoding unit 325 may increase compression efficiency by applying context adaptive arithmetic coding. The geometry encoding unit 325 may directly entropy-code the occupancy code using an arithmetic code. In some embodiments, the geometry encoding unit 325 may adaptively perform encoding based on the occupancy of neighbor nodes (intra coding), or based on the occupancy code of a previous frame (inter coding). Here, a frame may mean a set of point cloud data generated at the same time. Intra coding and inter coding are optional processes and thus may be omitted.
  • Attribute coding
  • Attribute coding may correspond to a process of coding attribute information based on reconstructed geometry and geometry before coordinate system transform (source geometry). Since the attribute may be dependent on the geometry, the reconstructed geometry may be utilized for attribute coding.
  • As described above, the attribute may include color, reflectance, and the like. The same attribute coding method may be applied to information or parameters included in the attribute. Color has three elements, reflectance has one element, and each element can be processed independently.
  • Attribute coding may include a color transform process, an attribute transform process, a prediction transform process, a lifting transform process, a RAHT transform process, an attribute quantization process, an attribute encoding process, and the like. The prediction transform process, the lifting transform process, and the RAHT transform process may be selectively used, or a combination of one or more thereof may be used.
  • The color transform process may correspond to a process of transforming the format of the color in the attribute into another format. The color transform process may be performed by the color transform unit 335. That is, the color transform unit 335 may transform the color in the attribute. For example, the color transform unit 335 may perform a coding operation for transforming the color in the attribute from RGB to YCbCr. In some embodiments, the operation of the color transform unit 335, that is, the color transform process, may be optionally applied according to a color value included in the attribute.
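  • For instance, an RGB-to-YCbCr transform may be written as below. The BT.709 coefficients shown are one common choice; the actual matrix depends on the target color space and is an assumption here, not a value fixed by the disclosure:

    import numpy as np

    def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
        """Transform (..., 3) RGB values (range [0, 1]) to YCbCr using
        BT.709 luma coefficients."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y = 0.2126 * r + 0.7152 * g + 0.0722 * b
        cb = (b - y) / 1.8556  # 2 * (1 - Kb)
        cr = (r - y) / 1.5748  # 2 * (1 - Kr)
        return np.stack([y, cb, cr], axis=-1)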
  • As described above, when one or more points exist in one voxel, the positions of the points existing in the voxel are set to the center point of the voxel so that they can be integrated into a single piece of point information for that voxel. Accordingly, a process of transforming the values of the attributes related to those points may be required. Also, even when the trisoup mode is performed, the attribute transform process may be performed.
  • The attribute transform process may correspond to a process of transforming the attribute based on a position on which geometry coding is not performed and/or the reconstructed geometry. For example, the attribute transform process may correspond to a process of transforming the attribute value of a point based on the position of the point included in a voxel. The attribute transform process may be performed by the attribute transform unit 340.
  • The attribute transform unit 340 may calculate the central position value of the voxel and an average value of the attribute values of neighbor points within a specific radius. Alternatively, the attribute transform unit 340 may apply a weight according to a distance from the central position to the attribute values and calculate an average value of the attribute values to which the weight is applied. In this case, each voxel has a position and a calculated attribute value.
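  • A minimal Python sketch of this distance-weighted averaging is given below, under the assumption that neighbor positions (N, 3) and attributes (N, C) have already been gathered; the radius and the exact weighting are illustrative:

    import numpy as np

    def voxel_attribute(center, neighbor_pos, neighbor_attr, radius: float):
        """Average the attribute values of neighbor points within `radius`
        of the voxel's central position, weighted by inverse distance."""
        d = np.linalg.norm(neighbor_pos - np.asarray(center, dtype=float), axis=1)
        sel = d <= radius
        if not sel.any():
            return None  # no neighbors within the radius
        w = 1.0 / np.maximum(d[sel], 1e-9)  # weight according to distance
        return (neighbor_attr[sel] * w[:, None]).sum(axis=0) / w.sum()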
  • The prediction transform process may correspond to a process of predicting an attribute value of a current point (a point corresponding to a prediction target) based on the attribute values of one or more points (neighbor points) adjacent to the current point. The prediction transform process may be performed by a level-of-detail (LOD) generation unit 350.
  • Prediction transform is a method to which the LOD transform technique is applied, and the LOD generation unit 350 may calculate and set the LOD value of each point based on the LOD distance value of each point.
  • The LOD generation unit 350 may generate a predictor for each point for prediction transform. Accordingly, when there are N points, N predictors may be generated. The predictor may calculate and set a weight value (= 1/distance) based on the LOD value for each point, the indexing information for the neighbor points, and distance values from the neighbor points. Here, the neighbor points may be points existing within a distance set for each LOD from the current point.
  • In addition, the predictor may multiply the attribute values of neighbor points by the 'set weight value', and set a value obtained by averaging the attribute values multiplied by the weight value as the predicted attribute value of the current point. An attribute quantization process may be performed on a residual attribute value obtained by subtracting the predicted attribute value of the current point from the attribute value of the current point.
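  • The predicted attribute value and the residual to be quantized can be sketched as follows (illustrative Python; the neighbor selection per LOD is assumed to have been done already, and attributes are (N, C) arrays):

    import numpy as np

    def prediction_residual(cur_attr, cur_pos, nbr_pos, nbr_attr):
        """Predict the current point's attribute as the weighted average of
        its neighbors (weight = 1 / distance) and return the residual
        attribute value to be quantized and entropy-coded."""
        d = np.linalg.norm(nbr_pos - np.asarray(cur_pos, dtype=float), axis=1)
        w = 1.0 / np.maximum(d, 1e-9)
        predicted = (nbr_attr * w[:, None]).sum(axis=0) / w.sum()
        return np.asarray(cur_attr, dtype=float) - predicted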
  • The lifting transform process may correspond to a process of reconstructing points into a set of detail levels through the LOD generation process, like the prediction transform process. The lifting transform process may be performed by the lifting unit 355. The lifting transform process may also include a process of generating a predictor for each point, a process of setting the calculated LOD in the predictor, a process of registering neighbor points, and a process of setting a weight according to distances between the current point and the neighbor points.
  • The RAHT transform process may correspond to a method of predicting attribute information of nodes at a higher level using attribute information associated with a node at a lower level of the octree. That is, the RAHT transform process may correspond to an attribute information intra coding method through octree backward scan. The RAHT transform process may be performed by the RAHT transform unit 345.
  • The RAHT transform unit 345 scans the entire area in the voxel, and may perform the RAHT transform process up to the root node while summing (merging) the voxel into a larger block at each step. Since the RAHT transform unit 345 performs a RAHT transform process only on an occupied node, in the case of an empty node that is not occupied, the RAHT transform process may be performed on a node at a higher level immediately above it.
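  • One common formulation of a single RAHT merging step is sketched below; sign conventions and the exact normalization vary between descriptions, so this is an assumption rather than the normative definition:

    import numpy as np

    def raht_merge(a1, w1, a2, w2):
        """Merge two occupied sibling nodes with attribute coefficients
        a1, a2 and weights w1, w2 into a DC coefficient (carried up to the
        next octree level) and an AC coefficient (coded), plus the merged
        weight."""
        s = np.sqrt(w1 + w2)
        dc = (np.sqrt(w1) * a1 + np.sqrt(w2) * a2) / s
        ac = (np.sqrt(w2) * a1 - np.sqrt(w1) * a2) / s
        return dc, ac, w1 + w2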
  • The attribute quantization process may correspond to a process of quantizing the attribute output from the RAHT transform unit 345, the LOD generation unit 350, and/or the lifting unit 355. The attribute quantization process may be performed by the attribute quantization unit 360. The attribute encoding process may correspond to a process of encoding a quantized attribute and outputting an attribute bitstream. The attribute encoding process may be performed by the attribute encoding unit 365.
  • Overview of point cloud decoding apparatus
  • FIG. 4 illustrates an example of a point cloud decoding apparatus 400 according to an embodiment of the present disclosure. The point cloud decoding apparatus 400 of FIG. 4 may correspond to the decoding unit 23 of FIG. 1 in terms of configuration and function.
  • The point cloud decoding apparatus 400 may perform a decoding process based on data (bitstream) transmitted from the transmission device 10. The decoding process may include a process of reconstructing (decoding) a point cloud video by performing operation corresponding to the above-described encoding operation on the bitstream.
  • As illustrated in FIG. 4, the decoding process may include a geometry decoding process and an attribute decoding process. The geometry decoding process may be performed by a geometry decoding unit 410, and an attribute decoding process may be performed by an attribute decoding unit 420. That is, the point cloud decoding apparatus 400 may include the geometry decoding unit 410 and the attribute decoding unit 420.
  • The geometry decoding unit 410 may reconstruct geometry from a geometry bitstream, and the attribute decoding unit 420 may reconstruct attribute based on the reconstructed geometry and the attribute bitstream. Also, the point cloud decoding apparatus 400 may reconstruct a three-dimensional point cloud video (point cloud data) based on position information according to the reconstructed geometry and attribute information according to the reconstructed attribute.
  • FIG. 5 illustrates a specific example of a point cloud decoding apparatus 500 according to another embodiment of the present disclosure. As illustrated in FIG. 5, the point cloud decoding apparatus 500 includes a geometry decoding unit 505, an octree synthesis unit 510, an approximation synthesis unit 515, a geometry reconstruction unit 520, a coordinate system inverse transform unit 525, an attribute decoding unit 530, an attribute dequantization unit 535, a RAHT transform unit 550, an LOD generation unit 540, an inverse lifting unit 545, and/or a color inverse transform unit 555.
  • The geometry decoding unit 505, the octree synthesis unit 510, the approximation synthesis unit 515, the geometry reconstruction unit 520 and the coordinate system inverse transform unit 525 may perform geometry decoding. Geometry decoding may be performed as a reverse process of the geometry coding described with reference to FIGS. 1 to 3. Geometry decoding may include direct coding and trisoup geometry decoding. Direct coding and trisoup geometry decoding may be selectively applied.
  • The geometry decoding unit 505 may decode the received geometry bitstream based on arithmetic coding. Operation of the geometry decoding unit 505 may correspond to a reverse process of operation performed by the geometry encoding unit 325.
  • The octree synthesis unit 510 may generate an octree by obtaining an occupancy code from the decoded geometry bitstream (or information on a geometry obtained as a result of decoding). Operation of the octree synthesis unit 510 may correspond to a reverse process of operation performed by the octree analysis unit 315.
  • The approximation synthesis unit 515 may synthesize a surface based on the decoded geometry and/or the generated octree, when trisoup geometry encoding is applied.
  • The geometry reconstruction unit 520 may reconstruct geometry based on the surface and the decoded geometry. When direct coding is applied, the geometry reconstruction unit 520 may directly bring and add position information of points to which direct coding is applied. In addition, when trisoup geometry encoding is applied, the geometry reconstruction unit 520 may reconstruct the geometry by performing reconstruction operation, for example, triangle reconstruction, up-sampling, voxelization operation and the like. The reconstructed geometry may include a point cloud picture or frame that does not include attributes.
  • The coordinate system inverse transform unit 525 may acquire the positions of points by transforming the coordinate system based on the reconstructed geometry. For example, the coordinate system inverse transform unit 525 may inversely transform the positions of points from a three-dimensional space (e.g., a three-dimensional space expressed by the coordinate system of the X-axis, Y-axis, and Z-axis) to position information of the global space coordinate system.
  • The attribute decoding unit 530, the attribute dequantization unit 535, the LOD generation unit 540, and/or the inverse lifting unit 545 may perform attribute decoding. Attribute decoding may include RAHT transform decoding, predictive transform decoding, and lifting transform decoding. The above three types of decoding may be selectively used, or a combination of one or more types of decoding may be used.
  • The attribute decoding unit 530 may decode an attribute bitstream based on arithmetic coding. For example, when there is no neighbor point in the predictor of each point and thus the attribute value of the current point is directly entropy-encoded, the attribute decoding unit 530 may decode the attribute value (non-quantized attribute value) of the current point. As another example, when there are neighbor points in the predictor of the current point and thus the quantized residual attribute value is entropy-encoded, the attribute decoding unit 530 may decode the quantized residual attribute value.
  • The attribute dequantization unit 535 may dequantize the decoded attribute bitstream or the information on the attribute obtained as a result of decoding, and output dequantized attributes (or attribute values). For example, when the quantized residual attribute value is output from the attribute decoding unit 530, the attribute dequantization unit 535 may dequantize the quantized residual attribute value to output the residual attribute value. The dequantization process may be selectively applied based on whether the attribute was quantized in the point cloud encoding apparatus 300. That is, when there is no neighbor point in the predictor of each point and thus the attribute value of the current point is directly encoded, the attribute decoding unit 530 may output the non-quantized attribute value of the current point, and the dequantization process may be skipped.
  • The RAHT transform unit 550, the LOD generation unit 540, and/or the inverse lifting unit 545 may process the reconstructed geometry and the dequantized attributes. The RAHT transform unit 550, the LOD generation unit 540, and/or the inverse lifting unit 545 may selectively perform decoding operation corresponding to the encoding operation of the point cloud encoding apparatus 300.
  • The color inverse transform unit 555 may perform inverse transform coding for inversely transforming a color value (or texture) included in the decoded attributes. Operation of the color inverse transform unit 555 may be selectively performed based on whether the color transform unit 335 operates.
  • FIG. 6 illustrates an example of a structure capable of interworking with a method/device for transmitting and receiving point cloud data according to embodiments of the present disclosure.
  • The structure of FIG. 6 illustrates a configuration in which at least one of a server (AI Server), a robot, a self-driving vehicle, an XR device, a smartphone, a home appliance and/or an HMD is connected to a cloud network. The robot, the self-driving vehicle, the XR device, the smartphone, or the home appliance may be referred to as a device. In addition, the XR device may correspond to a point cloud compression (PCC) device according to embodiments or may interwork with the PCC device.
  • The cloud network may refer to a network that forms part of the cloud computing infrastructure or exists within the cloud computing infrastructure. Here, the cloud network may be configured using a 3G network, a 4G or Long Term Evolution (LTE) network, or a 5G network.
  • The server may be connected to at least one of the robot, the self-driving vehicle, the XR device, the smartphone, the home appliance, and/or the HMD through a cloud network, and may help at least a part of processing of the connected devices.
  • The HMD may represent one of the types in which an XR device and/or the PCC device according to embodiments may be implemented. The HMD type device according to the embodiments may include a communication unit, a control unit, a memory unit, an I/O unit, a sensor unit, and a power supply unit.
  • <PCC+XR>
  • The XR/PCC device may be implemented by a HMD, a HUD provided in a vehicle, a TV, a mobile phone, a smartphone, a computer, a wearable device, a home appliance, a digital signage, a vehicle, a fixed robot or a mobile robot, etc., by applying PCC and/or XR technology.
  • The XR/PCC device may obtain information on a surrounding space or a real object by analyzing 3D point cloud data or image data acquired through various sensors or from an external device to generate position (geometric) data and attribute data for 3D points, and may render and output an XR object. For example, the XR/PCC device may output an XR object including additional information on a recognized object in correspondence with the recognized object.
  • <PCC+XR+mobile phone>
  • The XR/PCC device may be implemented by a mobile phone or the like by applying PCC technology. A mobile phone can decode and display point cloud content based on PCC technology.
  • <PCC+self-driving+XR>
  • The self-driving vehicle may be implemented by a mobile robot, a vehicle, an unmanned aerial vehicle, etc. by applying PCC technology and XR technology. The self-driving vehicle to which the XR/PCC technology is applied may mean a self-driving vehicle equipped with a unit for providing an XR image or a self-driving vehicle which is subjected to control/interaction within the XR image. In particular, the self-driving vehicle which is subjected to control/interaction within the XR image is distinct from the XR device, and the two may interwork with each other.
  • The self-driving vehicle equipped with a unit for providing an XR/PCC image may acquire sensor information from sensors including a camera, and output an XR/PCC image generated based on the acquired sensor information. For example, the self-driving vehicle has a HUD and may provide a passenger with an XR/PCC object corresponding to a real object or an object in a screen by outputting an XR/PCC image.
  • In this case, when the XR/PCC object is output to the HUD, at least a portion of the XR/PCC object may be output so as to overlap an actual object to which a passenger's gaze is directed. On the other hand, when the XR/PCC object is output to a display provided inside the self-driving vehicle, at least a portion of the XR/PCC object may be output to overlap the object in the screen. For example, the self-driving vehicle may output XR/PCC objects corresponding to objects such as a lane, other vehicles, traffic lights, traffic signs, two-wheeled vehicles, pedestrians, and buildings.
  • The VR technology, AR technology, MR technology, and/or PCC technology according to the embodiments are applicable to various devices. That is, VR technology is display technology that provides objects or backgrounds in the real world only as CG images, whereas AR technology shows a virtual CG image on top of an actual object image. Furthermore, MR technology is similar to the AR technology described above in that a mixture and combination of virtual objects in the real world is shown. However, in AR technology, the distinction between real objects and virtual objects made of CG images is clear, and virtual objects are used in a form that complements the real objects, whereas, in MR technology, virtual objects are regarded as equivalent to real objects. A hologram service is a specific example of an application of the MR technology described above. VR, AR and MR technologies may be integrated and referred to as XR technology.
  • Space partition
  • Point cloud data (i.e., G-PCC data) may represent volumetric encoding of a point cloud consisting of a sequence of frames (point cloud frames). Each point cloud frame may include the number of points, the positions of the points, and the attributes of the points. The number of points, the positions of the points, and the attributes of the points may vary from frame to frame. Each point cloud frame may mean a set of three-dimensional points specified by zero or more attributes and Cartesian coordinates (x, y, z) of three-dimensional points in a particular time instance. Here, the Cartesian coordinates (x, y, z) of the three-dimensional points may be a position or a geometry.
  • In some embodiments, the present disclosure may further perform a space partition process of partitioning the point cloud data into one or more 3D blocks before encoding the point cloud data. The 3D block may mean whole or part of a 3D space occupied by the point cloud data. The 3D block may be one or more of a tile group, a tile, a slice, a coding unit (CU), a prediction unit (PU), or a transform unit (TU).
  • A tile corresponding to a 3D block may mean whole or part of the 3D space occupied by the point cloud data. Also, a slice corresponding to a 3D block may mean whole or part of a 3D space occupied by the point cloud data. A tile may be partitioned into one or more slices based on the number of points included in the tile. A tile may be a group of slices with bounding box information. The bounding box information of each tile may be specified in a tile inventory (or a tile parameter set (TPS)). A tile may overlap another tile in the bounding box. A slice may be a unit of data on which encoding is independently performed, or a unit of data on which decoding is independently performed. That is, a slice may be a set of points that may be independently encoded or decoded. In some embodiments, a slice may be a series of syntax elements representing part or whole of a coded point cloud frame. Each slice may include an index for identifying the tile to which the slice belongs.
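  • As a hedged illustration of partitioning a tile into slices by point count (the cap used here is a placeholder, not a value mandated by the disclosure):

    def tile_to_slices(point_indices, max_points_per_slice: int = 1_000_000):
        """Partition the point indices of one tile into slices so that each
        slice holds at most `max_points_per_slice` points."""
        return [point_indices[i:i + max_points_per_slice]
                for i in range(0, len(point_indices), max_points_per_slice)]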
  • The spatially partitioned 3D blocks may be processed independently or non-independently. For example, the spatially partitioned 3D blocks may be encoded or decoded independently or non-independently, and may be transmitted or received independently or non-independently. In addition, the spatially partitioned 3D blocks may be quantized or dequantized independently or non-independently, and may be transformed or inversely transformed independently or non-independently. In addition, the spatially partitioned 3D blocks may be rendered independently or non-independently. For example, encoding or decoding may be performed in units of slices or in units of tiles. In addition, quantization or dequantization may be performed differently for each tile or slice, and transform or inverse transform may be performed differently for each tile or slice.
  • In this way, when the point cloud data is spatially partitioned into one or more 3D blocks and the spatially partitioned 3D blocks are processed independently or non-independently, the 3D blocks may be processed in real time with low latency. In addition, random access and parallel encoding or parallel decoding in a three-dimensional space occupied by the point cloud data may be enabled, and errors accumulated in the encoding or decoding process may be prevented.
FIG. 7 is a block diagram illustrating an example of a transmission device 700 for performing a space partition process according to embodiments of the present disclosure. As illustrated in FIG. 7, the transmission device 700 may include a space partition unit 705 for performing a space partition process, a signaling processing unit 710, a geometry encoder 715, an attribute encoder 720, an encapsulation processing unit 725, and/or a transmission processing unit 730.
  • The space partition unit 705 may perform a space partition process of partitioning the point cloud data into one or more 3D blocks based on a bounding box and/or a sub-bounding box. Through the space partition process, point cloud data may be partitioned into one or more tiles and/or one or more slices. In some embodiments, the point cloud data may be partitioned into one or more tiles, and each partitioned tile may be further partitioned into one or more slices, through a space partition process.
  • The signaling processing unit 710 may generate and/or process (e.g., entropy-encode) signaling information and output it in the form of a bitstream. Hereinafter, a bitstream (in which signaling information is encoded) output from the signaling processing unit is referred to as a 'signaling bitstream'. The signaling information may include information for space partition or information on space partition. That is, the signaling information may include information related to the space partition process performed by the space partition unit 705.
  • When the point cloud data is partitioned into one or more 3D blocks, information for decoding some point cloud data corresponding to a specific tile or a specific slice among the point cloud data may be required. In addition, in order to support spatial access (or partial access) to point cloud data, information related to 3D spatial areas may be required. Here, the spatial access may mean extracting, from a file, only necessary partial point cloud data in the entire point cloud data. The signaling information may include information for decoding some point cloud data, information related to 3D spatial areas for supporting spatial access, and the like. For example, the signaling information may include 3D bounding box information, 3D spatial area information, tile information, and/or tile inventory information.
• The signaling information may be provided from the space partition unit 705, the geometry encoder 715, the attribute encoder 720, the encapsulation processing unit 725, and/or the transmission processing unit 730. In addition, the signaling processing unit 710 may provide the feedback information fed back from the reception device 800 of FIG. 8 to the space partition unit 705, the geometry encoder 715, the attribute encoder 720, the encapsulation processing unit 725 and/or the transmission processing unit 730.
• The signaling information may be stored and signaled in a sample in a track, a sample entry, a sample group, a track group, or a separate metadata track. In some embodiments, the signaling information may be signaled in units of sequence parameter sets (SPSs) for signaling of a sequence level, geometry parameter sets (GPSs) for signaling of geometry coding information, attribute parameter sets (APSs) for signaling of attribute coding information, tile parameter sets (TPSs) (or tile inventory) for signaling of a tile level, etc. In addition, the signaling information may be signaled in units of coding units such as slices or tiles.
  • Meanwhile, positions (position information) of the 3D blocks may be output to the geometry encoder 715, and attributes (attribute information) of the 3D blocks may be output to the attribute encoder 720.
  • The geometry encoder 715 may construct an octree based on the position information, encode the constructed octree, and output a geometry bitstream. Also, the geometry encoder 715 may reconstruct the octree and/or the approximated octree and output it to the attribute encoder 720. The reconstructed octree may be reconstructed geometry. The geometry encoder 715 may perform all or some of operations performed by the coordinate system transform unit 305, the geometry quantization unit 310, the octree analysis unit 315, the approximation unit 320, the geometry encoding unit 325 and/or the reconstruction unit 330 of FIG. 3.
• The attribute encoder 720 may output an attribute bitstream by encoding an attribute based on the reconstructed geometry. The attribute encoder 720 may perform all or some of operations performed by the attribute transform unit 340, the RAHT transform unit 345, the LOD generation unit 350, the lifting unit 355, the attribute quantization unit 360, the attribute encoding unit 365 and/or the color transform unit 335 of FIG. 3.
  • The encapsulation processing unit 725 may encapsulate one or more input bitstreams into a file or segment. For example, the encapsulation processing unit 725 may encapsulate each of the geometry bitstream, the attribute bitstream, and the signaling bitstream, or multiplex and encapsulate the geometry bitstream, the attribute bitstream, and the signaling bitstream. In some embodiments, the encapsulation processing unit 725 may encapsulate a bitstream (G-PCC bitstream) consisting of a sequence of a type-length-value (TLV) structure into a file. TLV (or TLV encapsulation) structures constituting the G-PCC bitstream may include a geometry bitstream, an attribute bitstream, a signaling bitstream, and the like. In some embodiments, the G-PCC bitstream may be generated by the encapsulation processing unit 725 or generated by the transmission processing unit 730. In some embodiments, the encapsulation processing unit 725 may perform all or some of operations performed by the encapsulation processing unit 13 of FIG. 1.
  • The transmission processing unit 730 may process an encapsulated bitstream or a file/segment according to an arbitrary transport protocol. The transmission processing unit 730 may perform all or some of operations performed by the transmission unit 14 and the transmission processing unit described with reference to FIG. 1.
• FIG. 8 is a block diagram illustrating an example of a reception device 800 according to embodiments of the present disclosure. The reception device 800 may perform operations corresponding to the operations of the transmission device 700 for performing space partition. As illustrated in FIG. 8, the reception device 800 may include a reception processing unit 805, a decapsulation processing unit 810, a signaling processing unit 815, a geometry decoder 820, an attribute decoder 825, and/or a post-processing unit 830.
  • The reception processing unit 805 may receive a file/segment in which a G-PCC bitstream is encapsulated, a G-PCC bitstream, or a bitstream, and may process it according to a transport protocol. The reception processing unit 805 may perform all or some of operations performed by the reception unit 21 and the reception processing unit described with reference to FIG. 1.
  • The decapsulation processing unit 810 may obtain a G-PCC bitstream by performing a reverse process of operations performed by the encapsulation processing unit 725. The decapsulation processing unit 810 may obtain a G-PCC bitstream by decapsulating the file/segment. For example, the decapsulation processing unit 810 may obtain and output a signaling bitstream to the signaling processing unit 815, obtain and output a geometry bitstream to the geometry decoder 820, and obtain and output an attribute bitstream to the attribute decoder 825. The decapsulation processing unit 810 may perform all or some of operations performed by the decapsulation processing unit 22 of FIG. 1.
• The signaling processing unit 815 may parse and decode signaling information by performing a reverse process of operations performed by the signaling processing unit 710. The signaling processing unit 815 may parse and decode signaling information from a signaling bitstream. The signaling processing unit 815 may provide the decoded signaling information to the geometry decoder 820, the attribute decoder 825, and/or the post-processing unit 830.
  • The geometry decoder 820 may reconstruct geometry from the geometry bitstream by performing a reverse process of operations performed by the geometry encoder 715. The geometry decoder 820 may reconstruct geometry based on signaling information (parameters related to the geometry). The reconstructed geometry may be provided to the attribute decoder 825.
• The attribute decoder 825 may reconstruct an attribute from the attribute bitstream by performing a reverse process of the operations performed by the attribute encoder 720. The attribute decoder 825 may reconstruct the attribute based on the signaling information (parameters related to the attribute) and the reconstructed geometry.
• The post-processing unit 830 may reconstruct point cloud data based on the reconstructed geometry and the reconstructed attribute. Reconstruction of point cloud data may be performed through a process of matching the reconstructed geometry with the reconstructed attribute. In some embodiments, when the reconstructed point cloud data is in units of tiles and/or slices, the post-processing unit 830 may reconstruct the bounding box of the point cloud data, by performing a reverse process of the space partition process of the transmission device 700 based on signaling information. In some embodiments, when the bounding box is partitioned into a plurality of tiles and/or a plurality of slices through the space partition process, the post-processing unit 830 may reconstruct part of the bounding box, by combining some slices and/or some tiles based on the signaling information. Here, the slices and/or tiles used to reconstruct part of the bounding box may be those related to a 3D spatial area in which spatial access is desired.
  • TLV structure
  • As described above, the G-PCC bitstream may mean a bitstream of point cloud data consisting of a sequence of TLV structures. The TLV structure may be referred to as "TLV encapsulation structure", "G-PCC TLV encapsulation structure", or "G-PCC TLV structure".
• An example of a TLV encapsulation structure is shown in FIG. 9, an example of a syntax structure of TLV encapsulation is shown in FIG. 10a, and an example of a payload type of the TLV encapsulation structure is shown in FIG. 10b. Each TLV encapsulation structure may be composed of a TLV type (TLV TYPE), a TLV length (TLV LENGTH), and/or a TLV payload (TLV PAYLOAD). The TLV type may be type information of the TLV payload, the TLV length may be length information of the TLV payload, and the TLV payload may be a payload (or payload bytes). Referring to the TLV encapsulation syntax structure (tlv_encapsulation()) illustrated in FIG. 10a, tlv_type may indicate type information of the TLV payload, and tlv_num_payload_bytes may indicate length information of the TLV payload. Also, tlv_payload_byte[i] may indicate the TLV payload. tlv_payload_byte[i] may be signaled by the value of tlv_num_payload_bytes, and i may increase by 1 from 0 to (tlv_num_payload_bytes - 1).
• TLV payloads may include an SPS, a GPS, one or more APSs, a tile inventory, a geometry slice, one or more attribute slices, and one or more metadata slices. In some embodiments, the TLV payload of each TLV encapsulation structure may include an SPS, a GPS, one or more APSs, a tile inventory, a geometry slice, one or more attribute slices, or one or more metadata slices according to the type information of the TLV payload. Data included in the TLV payload may be distinguished through the type information of the TLV payload. For example, as illustrated in FIG. 10b, tlv_type equal to 0 may indicate that data included in the TLV payload is an SPS, and tlv_type equal to 1 may indicate that the data included in the TLV payload is a GPS. tlv_type equal to 2 may indicate that data included in the TLV payload is a geometry slice, and tlv_type equal to 3 may indicate that data included in the TLV payload is an APS. tlv_type equal to 4 may indicate that data included in the TLV payload is an attribute slice, and tlv_type equal to 5 may indicate that data included in the TLV payload is a tile inventory (or tile parameter set). tlv_type equal to 6 may indicate that data included in the TLV payload is a frame boundary marker, and tlv_type equal to 7 may indicate that data included in the TLV payload is a metadata slice. The payload of the TLV encapsulation structure may conform to the format of a High Efficiency Video Coding (HEVC) Network Abstraction Layer (NAL) unit.
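• For illustration only, since each TLV encapsulation structure carries a 1-byte type followed by a 4-byte payload length (the sample format described later advances by (1 + 4) + tlv_num_payload_bytes), a G-PCC bitstream may be walked with the following Python sketch; a big-endian length field is assumed, and the names iter_tlv and TLV_TYPES are assumptions of this example.

        import struct

        TLV_TYPES = {0: "SPS", 1: "GPS", 2: "geometry slice", 3: "APS",
                     4: "attribute slice", 5: "tile inventory",
                     6: "frame boundary marker", 7: "metadata slice"}

        def iter_tlv(data):
            # Yield (tlv_type, payload) for each TLV encapsulation structure.
            i = 0
            while i < len(data):
                tlv_type = data[i]                            # 1-byte tlv_type
                (n,) = struct.unpack_from(">I", data, i + 1)  # 4-byte length
                yield tlv_type, data[i + 5:i + 5 + n]         # payload bytes
                i += (1 + 4) + n

        # Usage, where gpcc_bitstream holds the sequence of TLV structures:
        # for t, payload in iter_tlv(gpcc_bitstream):
        #     print(TLV_TYPES.get(t, "reserved"), len(payload))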
  • Encapsulation/decapsulation
  • The G-PCC bitstream composed of TLV encapsulation structures may be transmitted to the reception device without change, or may be encapsulated and transmitted to the reception device. For example, the encapsulation processing unit 725 may encapsulate a G-PCC bitstream composed of TLV encapsulation structures in the form of a file/segment and transmit it. The decapsulation processing unit 810 may acquire a G-PCC bitstream by decapsulating the encapsulated file/segment.
  • In some embodiments, the G-PCC bitstream may be encapsulated in an ISOBMFF-based file format. In this case, the G-PCC bitstream may be stored in a single track or multiple tracks in the ISOBMFF file. Here, the single track or multiple tracks in a file may be referred to as "tracks" or "G-PCC tracks". The ISOBMFF-based file may be referred to as a container, a container file, a media file, a G-PCC file, and the like. Specifically, the file may be composed of boxes and/or information that may be referred to as ftyp, moov, mdat, and the like.
  • The ftyp box (file type box) may provide file type or file compatibility related information for the file. The reception device may identify the file by referring to the ftyp box. The mdat box is also called a media data box and may include actual media data. In some embodiments, a geometry slice (or coded geometry bitstream) and zero or more attribute slices (or coded attribute bitstream) may be included in a sample of an mdat box in a file. Here, the sample may be referred to as a G-PCC sample. The moov box is also called a movie box, and may include metadata for media data of the file. For example, the moov box may include information necessary for decoding and playback of the media data, and may include information on tracks and samples of the file. The moov box may act as a container for all metadata. The moov box may be a box of the uppermost layer among metadata-related boxes.
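• For illustration only, the box layering described here (a 4-byte size and a 4-byte type preceding each box body) may be walked generically with the following Python sketch; the 64-bit largesize case is handled, while a size of 0 (box extending to the end of the file) is omitted for brevity.

        import struct

        def iter_boxes(data, offset=0, end=None):
            # Yield (box_type, body) for each ISOBMFF box in data[offset:end].
            end = len(data) if end is None else end
            while offset < end:
                (size,) = struct.unpack_from(">I", data, offset)   # 4-byte size
                box_type = data[offset + 4:offset + 8].decode("ascii")
                header = 8
                if size == 1:  # a 64-bit 'largesize' follows the type field
                    (size,) = struct.unpack_from(">Q", data, offset + 8)
                    header = 16
                yield box_type, data[offset + header:offset + size]
                offset += size

        # The top level of a G-PCC file typically yields 'ftyp', 'moov', 'mdat';
        # iterating a moov body in turn yields 'trak' boxes, and so on.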
  • In some embodiments, the moov box may include a track (trak) box providing information related to a track of a file, and the trak box may include a media (mdia) box (MediaBox) providing media information of the track, and a track reference container (tref) box for linking (referencing) the track and a sample of a file corresponding to the track. The media box MediaBox may include a media information container (minf) box that provides information on the media data and a handler (hdlr) box that indicates a stream type. The minf box may include a sample table (stbl) box that provides metadata related to a sample of the mdat box. The stbl box may include a sample description (stsd) box that provides information on a used coding type and initialization information required for the coding type. In some embodiments, a sample description (stsd) box may include a sample entry for a track. In some embodiments, signaling information (or metadata) such as SPS, GPS, APS, and tile inventory may be included in a sample entry of a moov box or a sample of an mdat box in a file.
  • A G-PCC track may be defined as a volumetric visual track carrying a geometry slice (or coded geometry bitstream) or attribute slice (or coded attribute bitstream), or both a geometry slice and an attribute slice. In some embodiments, the volumetric visual track may be identified by a volumetric visual media handler type 'volv' in a handler box HandlerBox of a media box (MediaBox) and/or a volumetric visual media header vvhd in a minf box of a media box MediaBox. The minf box may be referred to as a media information container or a media information box. The minf box may be included in the media box MediaBox, the media box MediaBox may be included in the track box, and the track box may be included in the moov box of the file. A single volumetric visual track or multiple volumetric visual tracks may be present in a file.
• Volumetric visual media header box (VolumetricVisualMediaHeaderBox)
  • Volumetric Visual Tracks may use a volumetric visual sample (VolumetricVisualSample) for transmission of actual data. A volumetric visual sample entry may be referred to as a sample entry or a G-PCC sample entry, and a volumetric visual sample may be referred to as a sample or a G-PCC sample. A single volumetric visual track may be referred to as a single track or G-PCC single track, and multiple volumetric visual tracks may be referred to as multiple tracks or multiple G-PCC tracks. Signaling information related to grouping of samples, grouping of tracks, single track encapsulation of a G-PCC bitstream, or multiple-track encapsulation of a G-PCC bitstream, or signaling information to support spatial access may be added to the sample entry in the form of a box or a FullBox. The signaling information may include at least one of a GPCC entry information box (GPCCEntryInfoBox), a GPCC component type box (GPCCComponentTypeBox), a cubic region information box (CubicRegionInfoBox), a 3D bounding box information box (3DBoundingBoxInfoBox), or a tile inventory box (TileInventoryBox).
  • GPCC entry information structure
  • The syntax structure of the G-PCC entry information box (GPCCEntryInfoBox) may be defined as follows.
                       class GPCCEntryInfoBox extends Box('gpsb') {
                       GPCCEntryInfoStruct ();
              }
• In the above syntax structure, a GPCCEntryInfoBox having a box type of 'gpsb' may include GPCCEntryInfoStruct(). The syntax of GPCCEntryInfoStruct() may be defined as follows.
  •                        aligned(8) class GPCCEntryInfoStruct {
                                   unsigned int(1) main_entry_flag;
                                  unsigned int(1) dependent_on;
                                if (dependent_on) { //non-entry
                                      unsigned int(16) dependency_id;
                                }
                           }
• GPCCEntryInfoStruct() may include main_entry_flag and dependent_on. main_entry_flag may indicate whether or not the track is an entry point for decoding the G-PCC bitstream. dependent_on may indicate whether its decoding is dependent on other tracks. If dependent_on is present in a sample entry, dependent_on may indicate that decoding of samples in the track is dependent on other tracks. If the value of dependent_on is 1, GPCCEntryInfoStruct() may further include dependency_id. dependency_id may indicate an identifier of tracks for decoding related data. If dependency_id is present in a sample entry, dependency_id may indicate an identifier of a track carrying a G-PCC sub-bitstream on which decoding of samples in the track is dependent. If dependency_id is present in a sample group, dependency_id may indicate an identifier of samples carrying a G-PCC sub-bitstream on which decoding of related samples is dependent.
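• For illustration only, GPCCEntryInfoStruct() may be parsed with the following Python sketch; the fields are assumed to be packed most-significant-bit first in declaration order with no padding, which is an assumption of this example.

        def parse_gpcc_entry_info(data):
            # 1-bit main_entry_flag, 1-bit dependent_on and, when
            # dependent_on == 1, a 16-bit dependency_id.
            bits = int.from_bytes(data, "big")
            total = len(data) * 8
            info = {"main_entry_flag": (bits >> (total - 1)) & 0x1,
                    "dependent_on": (bits >> (total - 2)) & 0x1}
            if info["dependent_on"]:
                info["dependency_id"] = (bits >> (total - 18)) & 0xFFFF
            return info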
  • G-PCC component information structure
  • The syntax structure of the G-PCC component type box (GPCCComponentTypeBox) may be defined as follows.
  •         aligned(8) class GPCCComponentTypeBox extends FullBox('gtyp',
    version = 0, 0) {
            GPCCComponentTypeStruct();
            }
• A GPCCComponentTypeBox having a box type of 'gtyp' may include GPCCComponentTypeStruct(). The syntax of GPCCComponentTypeStruct() may be defined as follows.
  •                     aligned(8) class GPCCComponentTypeStruct {
                          unsigned int(8) numOfComponents;
                        for (i=0; i< numOfComponents; i++) {
                           unsigned int(8) gpcc_type;
                           if(gpcc_type == 4)
                                unsigned int(8) AttrIdx;
                        }
                        // additional fields
                      }
• numOfComponents may indicate the number of G-PCC components signaled in the GPCCComponentTypeStruct. gpcc_type may be included in GPCCComponentTypeStruct by a loop that is repeated by the value of numOfComponents. The loop may be repeated while i increases by 1 from 0 to (numOfComponents - 1). gpcc_type may indicate the type of the G-PCC component. For example, if the value of gpcc_type is 2, it may indicate a geometry component, and if it is 4, it may indicate an attribute component. If the value of gpcc_type is 4, that is, when it indicates an attribute component, the loop may further include AttrIdx. AttrIdx may indicate the identifier of the attribute signaled in the SPS(). A G-PCC component type box (GPCCComponentTypeBox) may be included in a sample entry for multiple tracks. If a G-PCC component type box (GPCCComponentTypeBox) is present in the sample entry of tracks carrying part or whole of the G-PCC bitstream, then GPCCComponentTypeStruct() may indicate one or more G-PCC component types carried by each track. GPCCComponentTypeBox including GPCCComponentTypeStruct(), or GPCCComponentTypeStruct() itself, may be referred to as G-PCC component information.
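• For illustration only, GPCCComponentTypeStruct() may be parsed with the following Python sketch, in which all fields are 8-bit values as in the syntax above; the function name and the returned dictionary layout are assumptions of this example.

        def parse_component_type(data):
            # 8-bit numOfComponents, then per component an 8-bit gpcc_type
            # (2: geometry, 4: attribute) and, for attribute components
            # only, an 8-bit AttrIdx.
            num_of_components, i = data[0], 1
            components = []
            for _ in range(num_of_components):
                gpcc_type, i = data[i], i + 1
                attr_idx = None
                if gpcc_type == 4:
                    attr_idx, i = data[i], i + 1
                components.append({"gpcc_type": gpcc_type, "AttrIdx": attr_idx})
            return components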
  • Sample group
  • The encapsulation processing unit mentioned in the present disclosure may generate a sample group by grouping one or more samples. The encapsulation processing unit, the metadata processing unit, or the signaling processing unit mentioned in the present disclosure may signal signaling information associated with a sample group in a sample, a sample group, or a sample entry. That is, the sample group information associated with the sample group may be added to a sample, a sample group, or a sample entry. The sample group information may be 3D bounding box sample group information, 3D region sample group information, 3D tile sample group information, 3D tile inventory sample group information, and the like.
  • Track group
  • The encapsulation processing unit mentioned in the present disclosure may generate a track group by grouping one or more tracks. The encapsulation processing unit, the metadata processing unit, or the signaling processing unit mentioned in the present disclosure may signal signaling information associated with a track group in a sample, a track group, or a sample entry. That is, the track group information associated with the track group may be added to a sample, track group or sample entry. The track group information may be 3D bounding box track group information, point cloud composition track group information, spatial region track group information, 3D tile track group information, 3D tile inventory track group information, and the like.
  • Sample entry
• FIG. 11 is a diagram for explaining an ISOBMFF-based file including a single track. (a) of FIG. 11 illustrates an example of the layout of an ISOBMFF-based file including a single track, and (b) of FIG. 11 illustrates an example of a sample structure of an mdat box when a G-PCC bitstream is stored in a single track of a file. FIG. 12 is a diagram for explaining an ISOBMFF-based file including multiple tracks. (a) of FIG. 12 illustrates an example of the layout of an ISOBMFF-based file including multiple tracks, and (b) of FIG. 12 illustrates an example of a sample structure of an mdat box when a G-PCC bitstream is stored in multiple tracks of a file.
• The stsd box (SampleDescriptionBox) included in the moov box of the file may include a sample entry for a single track storing the G-PCC bitstream. The SPS, GPS, APS, and tile inventory may be included in a sample entry in a moov box or a sample in an mdat box in a file. Also, geometry slices and zero or more attribute slices may be included in the sample of the mdat box in the file. When a G-PCC bitstream is stored in a single track of a file, each sample may contain multiple G-PCC components. That is, each sample may be composed of one or more TLV encapsulation structures. A sample entry of a single track may be defined as follows.
    • Sample Entry Type: 'gpe1', 'gpeg'
    • Container: SampleDescriptionBox
    • Mandatory: A 'gpe1' or 'gpeg' sample entry is mandatory
    • Quantity: One or more sample entries may be present
• The sample entry type 'gpe1' or 'gpeg' is mandatory, and one or more sample entries may be present. The G-PCC track may use a VolumetricVisualSampleEntry having a sample entry type of 'gpe1' or 'gpeg'. The sample entry of the G-PCC track may include a G-PCC decoder configuration box (GPCCConfigurationBox), and the G-PCC decoder configuration box may include a G-PCC decoder configuration record (GPCCDecoderConfigurationRecord()). GPCCDecoderConfigurationRecord() may include at least one of configurationVersion, profile_idc, profile_compatibility_flags, level_idc, numOfSetupUnitArrays, SetupUnitType, completeness, numOfSetupUnits, or setupUnit. The setupUnit array field included in GPCCDecoderConfigurationRecord() may include TLV encapsulation structures including one SPS.
• If the sample entry type is 'gpe1', all parameter sets, e.g., SPS, GPS, APS, tile inventory, may be included in the array of setupUnits. If the sample entry type is 'gpeg', the above parameter sets may be included in the array of setupUnits (i.e., the sample entry) or included in the stream (i.e., a sample). An example of the syntax of a G-PCC sample entry (GPCCSampleEntry) having a sample entry type of 'gpe1' is as follows.
  •                    aligned(8) class GPCCSampleEntry()
                           extends VolumetricVisualSampleEntry ('gpe1') {
                        GPCCConfigurationBox config; //mandatory
                        3DBoundingBoxInfoBox();
                        CubicRegionInfoBox();
                         TileInventoryBox();
                       }
• A G-PCC sample entry (GPCCSampleEntry) having a sample entry type of 'gpe1' may include GPCCConfigurationBox, 3DBoundingBoxInfoBox(), CubicRegionInfoBox(), and TileInventoryBox(). 3DBoundingBoxInfoBox() may indicate 3D bounding box information of point cloud data related to samples carried by the track. CubicRegionInfoBox() may indicate information on one or more spatial regions of point cloud data carried by samples in the track. TileInventoryBox() may indicate 3D tile inventory information of point cloud data carried by samples in the track.
  • As illustrated in (b) of FIG. 11, the sample may include TLV encapsulation structures including a geometry slice. In addition, a sample may include TLV encapsulation structures including one or more parameter sets. In addition, a sample may include TLV encapsulation structures including one or more attribute slices.
  • As illustrated in (a) of FIG. 12, when a G-PCC bitstream is carried by multiple tracks of an ISOBMFF-based file, each geometry slice or attribute slice may be mapped to an individual track. For example, a geometry slice may be mapped to track 1, and an attribute slice may be mapped to track 2. The track (track 1) carrying the geometry slice may be referred to as a geometry track or a G-PCC geometry track, and the track (track 2) carrying the attribute slice may be referred to as an attribute track or a G-PCC attribute track. In addition, the geometry track may be defined as a volumetric visual track carrying a geometry slice, and the attribute track may be defined as a volumetric visual track carrying an attribute slice.
• A track carrying part of a G-PCC bitstream including both a geometry slice and an attribute slice may be referred to as a multiplexed track. In the case where the geometry slice and attribute slice are stored on separate tracks, each sample in a track may include at least one TLV encapsulation structure carrying data of a single G-PCC component. In this case, each sample does not contain both geometry and attribute data, and also does not contain multiple attributes. Multi-track encapsulation of a G-PCC bitstream may enable a G-PCC player to effectively access one of the G-PCC components. When a G-PCC bitstream is carried by multiple tracks, in order for a G-PCC player to effectively access one of the G-PCC components, the following conditions need to be satisfied.
  a) When a G-PCC bitstream consisting of TLV encapsulation structures is carried by multiple tracks, the track carrying the geometry bitstream (or geometry slice) becomes the entry point.
  b) In the sample entry, a new box is added to indicate the role of the stream included in the track. The new box may be the aforementioned G-PCC component type box (GPCCComponentTypeBox). That is, GPCCComponentTypeBox may be included in the sample entry for multiple tracks.
  c) A track reference is introduced from a track carrying only a G-PCC geometry bitstream to a track carrying a G-PCC attribute bitstream.
• GPCCComponentTypeBox may include GPCCComponentTypeStruct(). If a GPCCComponentTypeBox is present in the sample entry of tracks carrying part or whole of the G-PCC bitstream, then GPCCComponentTypeStruct() may specify the type (e.g., geometry, attribute) of one or more G-PCC components carried by each track. For example, if the value of the gpcc_type field included in GPCCComponentTypeStruct() is 2, it may indicate a geometry component, and if it is 4, it may indicate an attribute component. In addition, when the value of the gpcc_type field indicates 4, that is, an attribute component, an AttrIdx field indicating an attribute identifier signaled in the SPS() may be further included.
  • In the case where the G-PCC bitstream is carried by multiple tracks, the syntax of the sample entry may be defined as follows.
    • Sample Entry Type: 'gpe1', 'gpeg', 'gpc1' or 'gpcg'
    • Container: SampleDescriptionBox
  • Mandatory: A 'gpc1' or 'gpcg' sample entry is mandatory
    • Quantity: One or more sample entries may be present
• The sample entry type 'gpc1' or 'gpcg' is mandatory, and one or more sample entries may be present. Multiple tracks (e.g., geometry or attribute tracks) may use a VolumetricVisualSampleEntry having a sample entry type of 'gpe1', 'gpeg', 'gpc1' or 'gpcg'. In the 'gpe1' sample entry, all parameter sets may be present in the setupUnit array. In the 'gpeg' sample entry, the parameter sets may be present in the array or in the stream. In the 'gpe1' or 'gpeg' sample entry, the GPCCComponentTypeBox shall not be present. In the 'gpc1' sample entry, the SPS, GPS and tile inventory may be present in the SetupUnit array of the track carrying the G-PCC geometry bitstream. All relevant APSs may be present in the SetupUnit array of the track carrying the G-PCC attribute bitstream. In the 'gpcg' sample entry, an SPS, GPS, APS or tile inventory may be present in the array or in the stream. In the 'gpc1' or 'gpcg' sample entry, the GPCCComponentTypeBox shall be present.
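• For illustration only, the constraints above on the presence of the GPCCComponentTypeBox may be checked with the following Python sketch; the function name is an assumption of this example.

        def check_sample_entry(entry_type, has_component_type_box):
            # GPCCComponentTypeBox shall be absent in 'gpe1'/'gpeg' sample
            # entries and present in 'gpc1'/'gpcg' sample entries.
            if entry_type in ("gpe1", "gpeg"):
                return not has_component_type_box
            if entry_type in ("gpc1", "gpcg"):
                return has_component_type_box
            return False  # unknown sample entry type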
  • An example of the syntax of the G-PCC sample entry is as follows.
    [The syntax of the G-PCC sample entry is reproduced as an image in the original document; as described below, it extends VolumetricVisualSampleEntry and carries a G-PCC decoder configuration (config) field and a G-PCC component information (info) field.]
• The compressorname, that is, codingname, of the base class VolumetricVisualSampleEntry may indicate the name of the compressor used, with the value "\013GPCC coding" recommended. In "\013GPCC coding", the first byte (the octal number 13, or decimal 11, represented by \013) may indicate the number of bytes of the remaining string. config may include G-PCC decoder configuration information, and info may indicate G-PCC component information carried in each track. info may indicate the component tile carried in the track, and may also indicate the attribute name, index, and attribute type of the G-PCC component carried in the G-PCC attribute track.
  • Sample format
  • When the G-PCC bitstream is stored in a single track, the syntax for the sample format is as follows.
  •                    aligned(8) class GPCCSample
                       {
                       unsigned int GPCCLength = sample_size; //Size of Sample
                       for (i=0; i< GPCCLength; ) // to end of the sample
                       {
                       tlv_encapsulation gpcc_unit;
                       i += (1+4)+ gpcc_unit.tlv_num_payload_bytes;
                       }
                       }
  • In the above syntax, each sample (GPCCSample) corresponds to a single point cloud frame, and may be composed of one or more TLV encapsulation structures belonging to the same presentation time. Each TLV encapsulation structure may include a single type of TLV payload. In addition, one sample may be independent (e.g., a sync sample). GPCCLength indicates the length of the sample, and gpcc_unit may include an instance of a TLV encapsulation structure including a single G-PCC component (e.g., a geometry slice).
• When the G-PCC bitstream is stored in multiple tracks, each sample may correspond to a single point cloud frame, and samples contributing to the same point cloud frame in different tracks may have to have the same presentation time. Each sample may consist of one or more G-PCC units of the G-PCC component indicated in the GPCCComponentInfoBox of the sample entry and zero or more G-PCC units carrying a parameter set or a tile inventory. When a G-PCC unit including a parameter set or a tile inventory is present in a sample, that G-PCC unit may need to appear before the G-PCC unit of the G-PCC component. Each sample may contain one or more G-PCC units containing an attribute data unit, and zero or more G-PCC units carrying a parameter set. In the case where the G-PCC bitstream is stored in multiple tracks, the syntax and semantics for the sample format may be the same as the syntax and semantics for the case where the G-PCC bitstream is stored in a single track described above.
  • Subsample
  • In the receiving device, since the geometry slice is first decoded and the attribute slice needs to be decoded based on the decoded geometry, when each sample consists of multiple TLV encapsulation structures, it is necessary to access each TLV encapsulation structure in the sample. In addition, if one sample is composed of multiple TLV encapsulation structures, each of the multiple TLV encapsulation structures may be stored as a sub-sample. A subsample may be referred to as a G-PCC subsample. For example, if one sample includes a parameter set TLV encapsulation structure including a parameter set, a geometry TLV encapsulation structure including a geometry slice, and an attribute TLV encapsulation structure including an attribute slice, the parameter set TLV encapsulation structure, the geometry TLV encapsulation structure, and the attribute TLV encapsulation structure may be stored as subsamples, respectively. In this case, in order to enable access to each G-PCC component in the sample, the type of the TLV encapsulation structure carried by the subsample may be required.
• When the G-PCC bitstream is stored in a single track, the G-PCC subsample may include only one TLV encapsulation structure. One SubSampleInformationBox may be present in a sample table box (SampleTableBox, stbl) of a moov box, or may be present in a track fragment box (TrackFragmentBox, traf) of each of the movie fragment boxes (MovieFragmentBox, moof). If the SubSampleInformationBox is present, the 8-bit type value of the TLV encapsulation structure may be included in the 32-bit codec_specific_parameters field of the subsample entry in the SubSampleInformationBox. If the TLV encapsulation structure includes the attribute payload, the 6-bit value of the attribute index may be included in the 32-bit codec_specific_parameters field of the subsample entry in the SubSampleInformationBox. In some embodiments, the type of each subsample may be identified by parsing the codec_specific_parameters field of the subsample entry in the SubSampleInformationBox. The codec_specific_parameters field of the SubSampleInformationBox may be defined as follows.
    [The definition of the codec_specific_parameters field is reproduced as an image in the original document; as described below, it packs a payloadType field (together with an attrIdx field for attribute payloads) or, depending on the flags value, tile_data and tile_id fields into the 32-bit value.]
• In the above subsample syntax, payloadType may indicate the tlv_type of the TLV encapsulation structure in the subsample. For example, if the value of payloadType is 4, an attribute slice may be indicated. attrIdx may indicate an identifier of attribute information of a TLV encapsulation structure including an attribute payload in the subsample. attrIdx may be the same as ash_attr_sps_attr_idx of the TLV encapsulation structure including the attribute payload in the subsample. tile_data may indicate whether a subsample includes one tile or another tile. When the value of tile_data is 1, it may indicate that the subsample includes TLV encapsulation structure(s) including a geometry data unit or an attribute data unit corresponding to one G-PCC tile. When the value of tile_data is 0, it may indicate that the subsample includes TLV encapsulation structure(s) including each parameter set, tile inventory, or frame boundary marker. tile_id may indicate an index of a G-PCC tile with which a subsample is associated in a tile inventory.
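• For illustration only, the 32-bit codec_specific_parameters field may be unpacked with the following Python sketch. The exact bit positions are defined by the box syntax reproduced as an image above; this example assumes most-significant-bit-first packing in the order the text describes (an 8-bit payloadType followed by a 6-bit attrIdx when flags is 0; a 1-bit tile_data with the tile_id in the low 24 bits when flags is 1).

        def parse_codec_specific_parameters(value, flags):
            # value: the 32-bit codec_specific_parameters of a subsample entry.
            if flags == 0:
                payload_type = (value >> 24) & 0xFF        # 8-bit tlv_type
                out = {"payloadType": payload_type}
                if payload_type == 4:                      # attribute payload
                    out["attrIdx"] = (value >> 18) & 0x3F  # 6-bit attribute index
                return out
            tile_data = (value >> 31) & 0x1                # flags == 1
            out = {"tile_data": tile_data}
            if tile_data:
                out["tile_id"] = value & 0xFFFFFF          # assumed 24-bit index
            return out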
• When the G-PCC bitstream is stored in multiple tracks (in the case of multiple-track encapsulation of G-PCC data in ISOBMFF), if subsamples are present, only a SubSampleInformationBox whose flags value is 1 may need to be present in the SampleTableBox, or in the TrackFragmentBox of each MovieFragmentBox. In the case where the G-PCC bitstream is stored in multiple tracks, the syntax elements and semantics may be the same as those for the case of flags == 1 when the G-PCC bitstream is stored in a single track.
  • Reference between tracks
• When the G-PCC bitstream is carried in multiple tracks (that is, when the G-PCC geometry bitstream and the attribute bitstream are carried in different (separate) tracks), a track reference tool may be used to connect the tracks. A TrackReferenceTypeBox may be added to the TrackReferenceBox in the TrackBox of a G-PCC track. The TrackReferenceTypeBox may contain an array of track_IDs specifying the tracks referenced by the G-PCC track.
  • In some embodiments, the present disclosure may provide a device and method for supporting temporal scalability in the carriage of G-PCC data (hereinafter, may be referred to as a G-PCC bitstream, an encapsulated G-PCC bitstream, or a G-PCC file). In addition, the present disclosure may propose a device and methods for providing a point cloud content service, which efficiently stores a G-PCC bitstream in a single track in a file, or divisionally stores it in a plurality of tracks, and provides a signaling therefor. In addition, the present disclosure proposes a device and methods for processing a file storage technique to support efficient access to a stored G-PCC bitstream.
  • Temporal scalability
• Temporal scalability may refer to a function that allows the possibility of extracting one or more subsets of independently coded frames. Also, temporal scalability may refer to a function of dividing G-PCC data into a plurality of different temporal levels and independently processing each G-PCC frame belonging to a different temporal level. If temporal scalability is supported, the G-PCC player (or the transmission device and/or the reception device of the present disclosure) may effectively access a desired component (target component) among G-PCC components. In addition, if temporal scalability is supported, since G-PCC frames are processed independently of each other, temporal scalability support at the system level may be expressed as more flexible temporal sub-layering. In addition, if temporal scalability is supported, the system (the point cloud content provision system) that processes G-PCC data can manipulate data at a high level to match network capability or decoder capability, so that the performance of the point cloud content provision system can be improved.
  • Sample grouping
  • As a method of supporting temporal scalability, there may be a sample grouping method and a track grouping method. The sample grouping method may be a method of grouping samples in a G-PCC file according to a temporal level, and the track grouping method may be a method of grouping tracks in a G-PCC file according to a temporal level.
• A sample group may be used to associate samples with the temporal levels designated to them. That is, a sample group may indicate which sample belongs to which temporal level. Also, the sample group may be information on a result of grouping one or more samples into one or more temporal levels. The sample group may be referred to as a 'tele' sample group or a temporal level sample group 'tele'.
  • Information on sample group
  • Information on a sample group may include information on a result of sample grouping. Accordingly, the information on the sample group may be information used to associate the samples with the temporal levels designated to them. That is, the information on the sample group may indicate which sample belongs to which temporal level, and may be information on a result of grouping one or more samples into one or more temporal levels.
  • The information on the sample group may be present in tracks including geometry data units. When G-PCC data is carried in multiple tracks, the information on the sample group may be present only in a geometry track to group each sample in the track into a designated temporal level. Samples in attribute tracks may be inferred based on a relationship with a geometry track associated therewith. For example, samples in attribute tracks may belong to the same temporal level as samples in a geometry track associated therewith.
• When the information on the sample group is present in a G-PCC tile track referred to by a G-PCC tile base track, the information on the sample group may also be present in the remaining tile tracks referred to by the G-PCC tile base track. Here, the G-PCC tile track may be a volumetric visual track carrying all G-PCC components or a single G-PCC component corresponding to one or more G-PCC tiles. Also, the G-PCC tile base track may be a volumetric visual track carrying all parameter sets and tile inventories corresponding to the G-PCC tile tracks.
  • Information on temporal level
• Information on a temporal level may be signaled to describe temporal scalability supported by a G-PCC file. The information on the temporal level may be present in a sample entry of a track including a sample group (or information on a sample group). For example, the information on the temporal level may be present in GPCCDecoderConfigurationRecord() or in a G-PCC scalability information box (GPCCScalabilityInfoBox) signaling scalability information of a G-PCC track.
  • Temporal level sample grouping
• Temporal level sample grouping (hereinafter referred to as a 'tele' sample group) may mean grouping samples in a track according to a temporal level. Such a 'tele' sample group may be present in a track containing geometry data. In this case, the 'tele' sample group may be information on codec-independent sample grouping. Specifically, being codec-independent may mean a case in which samples of one temporal level have no coding dependency on samples of another temporal level. That is, when samples of one temporal level have no coding dependency on samples of another temporal level, the 'tele' sample group may be present in a track. In addition, the 'tele' sample group may include information on a result of grouping samples in a track (and/or potential track fragment) according to a temporal level and/or information on a temporal level identifier value.
  • When G-PCC data is carried using a plurality of temporal level tracks, the 'tele' sample group may be present only in a track carrying geometry data. Samples in tracks carrying attribute data may be inferred based on a relationship with a geometry track associated therewith. For example, samples in attribute tracks may belong to the same temporal level as samples in a geometry track associated therewith.
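• For illustration only, the inference described above may be sketched in Python as follows; each sample is assumed to be a dictionary with a composition time stamp ('cts') key and, for geometry samples, a 'temporal_level' key, which is an assumption of this example.

        def infer_attribute_temporal_levels(geometry_samples, attribute_samples):
            # An attribute sample takes the temporal level of the geometry
            # sample with the same composition time.
            level_by_cts = {s["cts"]: s["temporal_level"] for s in geometry_samples}
            for s in attribute_samples:
                s["temporal_level"] = level_by_cts[s["cts"]]
            return attribute_samples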
  • Problems of the Related Art
  • A 'tele' sample group is only needed for a track with a plurality of temporal levels. However, in the related art, even in the case of a track with only one temporal level, samples are grouped and processed or sample group information is signaled. Therefore, in the related art, coding efficiency or bit efficiency may decrease by performing unnecessary sample grouping or sample group information signaling.
  • Embodiment
  • According to an embodiment of the present disclosure, a 'tele' sample group may be present in a track with a plurality of temporal levels. Also, a track with a plurality of temporal levels may be a track carrying geometry data.
  • In some embodiments, information on the number of temporal levels may be defined by syntax elements in different boxes according to sample entry types. For example, information on the number of temporal levels may be defined by a syntax element in a GPCC decoder configuration record (GPCCDecoderConfigurationRecord) and/or a syntax element in a GPCC temporal scalability information box (GPCCScalabilityInfoBox).
• In the case of a track whose sample entry type is 'gpe1', 'gpeg', 'gpc1' or 'gpcg', the number of temporal levels in the track may be defined by the num_temporal_levels syntax element of the GPCC decoder configuration record. The GPCC decoder configuration record may provide G-PCC decoder configuration information for geometry-based point cloud content. The syntax structure of the GPCC decoder configuration record may be defined as shown in Table 1 below.
    [Table 1, reproduced as images in the original document, defines the syntax of the GPCC decoder configuration record (GPCCDecoderConfigurationRecord), including the num_temporal_levels syntax element described below.]
• In the above syntax structure, the GPCC decoder configuration record may include at least one of configurationVersion, profile_idc, profile_compatibility_flags, level_idc, numOfSetupUnitArrays, SetupUnitType, completeness, numOfSetupUnits or setupUnit. The num_temporal_levels syntax element may indicate a maximum number of temporal levels in which G-PCC bitstream frames are grouped.
• In the case of a track whose sample entry type is 'gpcb' or 'gpeb', the number of temporal levels in the track may be defined by the num_temporal_levels syntax element of the GPCC temporal scalability information box. The GPCC temporal scalability information box may include scalability information of a G-PCC track. The syntax structure of the GPCC temporal scalability information box may be defined as shown in Table 2 below. [Table 2]
• aligned(8) class GPCCScalabilityInfoBox
    extends FullBox('gsci', version = 0, 0) {
    unsigned int(16) num_temporal_levels;
    for(i=0; i < num_temporal_levels; i++) {
          unsigned int(16) temporal_level_id;
       }
    }
  • In the above syntax structure, num_temporal_levels may indicate a maximum number of temporal levels in which G-PCC bitstream frames are grouped. temporal_level_id may indicate temporal level identifier information of a G-PCC sample.
• According to another embodiment of the present disclosure, the number of temporal levels in the track may be included in a box other than the GPCC decoder configuration record when the track contains a sample entry type equal to 'gpe1', 'gpeg', 'gpc1' or 'gpcg'. For example, the other box may be the GPCC temporal scalability information box or any box including G-PCC related temporal scalability information.
• Alternatively, the number of temporal levels in the track may be included in a box other than the GPCC temporal scalability information box when the track contains a sample entry type equal to 'gpcb' or 'gpeb'. For example, the other box may be any box including G-PCC related temporal scalability information.
  • According to another embodiment of the present disclosure, the number of temporal levels in the track may be included in a predetermined box regardless of the sample entry type in the track. For example, the predetermined box may be a GPCC decoder configuration record, a GPCC temporal scalability information box or any box including G-PCC related temporal scalability information.
  • The box including G-PCC related temporal scalability information may be referred to by various names. For example, the G-PCC related temporal scalability information box may be referred to as a GPCCScalabilityInfoBox or a GPCCTileScalabilityInfoBox.
  • When a 'tele' sample group is present in a G-PCC track, samples belonging to a first temporal level identifier may be mapped to a sample group description index equal to the first temporal level identifier plus 1.
  • The sample group description index may mean an index of a sample entry describing samples included in a sample description (stsd) box. The sample description (stsd) box may include a sample description signal for all temporal levels signaled in the decoder configuration record. Also, a sample description (stsd) box may include a sample entry for a track.
  • The temporal level of a sample in an attribute track may be equal to the temporal level of a corresponding sample in a referenced geometry track, that is, a sample with the same composition time stamp. Thus, the 'tele' sample group may be only present in a track carrying geometry data.
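• For illustration only, the mapping between a temporal level identifier and its sample group description index may be expressed as follows.

        def sample_group_description_index(temporal_level_id):
            # 1-based index of the 'tele' entry in the 'sgpd' box for a
            # sample whose temporal level identifier is temporal_level_id.
            return temporal_level_id + 1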
  • FIGS. 13 to 15 illustrate examples of sample grouping in a track according to embodiments of the present disclosure.
  • Referring to FIG. 13, each of the tracks (track 1 and track 2) may include at least one of a GPCC temporal scalability information ('gsci') box, a sample group description ('sgpd') box, or a sample group ('sbgp') box.
  • The 'gsci' box may include information on the number of temporal levels (num_temporal_levels). The 'sgpd' box may include information on common characteristics of samples included in the sample group. The 'sbgp' box may include information on a sample group including samples.
• Referring to Track 1 of FIG. 13, the number of temporal levels present in the 'gsci' box may be 2. At this time, the two temporal levels may be distinguished by temporal_level_id. The number of entries included in the 'sgpd' box may be two (entry_count: 2). An entry present in the 'sgpd' box may be represented by SampleGroupDescriptionEntry('tele'), and SampleGroupDescriptionEntry('tele') may mean a 'tele' sample group description entry. The 'sbgp' box of Track 1 may include a 'tele' sample group, and the number of entries may be 12.
• Referring to Track 2 of FIG. 13, the number of temporal levels present in the 'gsci' box of Track 2 may be 2. At this time, the two temporal levels may be distinguished by temporal_level_id. The number of entries included in the 'sgpd' box of Track 2 may be four. An entry present in the 'sgpd' box may be represented by SampleGroupDescriptionEntry('tele'), and SampleGroupDescriptionEntry('tele') may mean a 'tele' sample group description entry. The 'sbgp' box of Track 2 may include a 'tele' sample group, and the number of entries may be 12.
  • The 'mdat' box of FIG. 13 may include media data. Specifically, the 'mdat' box may include image frames. The image frames included in the 'mdat' box may have unique identifiers (TemporalId, Tid). For example, the 'mdat' box may include four TemporalIDs (Tid 0, Tid 1, Tid 2, Tid 3).
• Referring to FIG. 14, which illustrates an example of sample grouping in a temporal level tile track, Track 1 of FIG. 14 may include a 'gsci' box, and Track 2 may include a 'sgpd' box and a 'sbgp' box. Track 2 may not include a GPCCTileScalabilityInfoBox (the 'gtsi' box) because it is a temporal level tile track containing samples of all temporal levels.
  • Referring to FIG. 15, the GPCC file structure of FIG. 15 may include a tile base track (Track 1) and two temporal level tile tracks (Track 2 and Track 3). In each temporal level tile track, the number of 'tele' sample group description entries in the 'sgpd' box may be equal to the maximum value of temporal_level_id in the 'gtsi' box plus 1.
• FIGS. 16 and 17 are flowcharts illustrating 'tele' sample grouping according to embodiments of the present disclosure.
  • Referring to FIG. 16, a transmission device 10 may generate temporal level number information of a track in a G-PCC file (S1610). The temporal level number information may be information representing the number of temporal levels present in a track. The temporal level number information may be represented by num_temporal_levels.
  • The transmission device 10 may configure at least one sample group based on the number of temporal levels (or temporal level number information) (S1620). For example, the transmission device 10 may configure at least one sample group when there is a plurality of temporal levels in the track. As another example, the transmission device 10 may configure at least one sample group when there is a plurality of temporal levels in the track and the corresponding track is a geometry track. The sample group may be a 'tele' sample group.
  • The transmission device 10 may generate sample group information (S1630). The sample group information may be information on the sample group configured in step S1620. The sample group information may include temporal level identifier information of a G-PCC sample. The temporal level identifier information may be represented by temporal_level_id.
  • The transmission device 10 may generate a G-PCC file (S1640). The G-PCC file may include at least one of temporal level number information and/or sample group information.
  • According to the present disclosure, when sample group information is present in a track, samples belonging to a temporal level identifier may be mapped to a sample group description index. In this case, the sample group description index may be a value obtained by adding 1 to the temporal level identifier value.
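• For illustration only, the transmission-side flow of FIG. 16 (including the grouping decision of FIG. 19 described later) may be sketched in Python as follows; writer is a hypothetical file-writing helper assumed for this example, not part of the disclosure.

        def encapsulate_track(samples, writer):
            levels = {s["temporal_level"] for s in samples}
            writer.write_num_temporal_levels(len(levels))   # S1610
            if len(levels) >= 2:                            # S1620: group only
                for s in samples:                           # when 2+ levels
                    writer.write_sample_group_entry(s["temporal_level"])  # S1630
            writer.write_samples(samples)                   # S1640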
  • Referring to FIG. 17, a reception device 20 may obtain a G-PCC file (S1710). The G-PCC file may include point cloud data, temporal level number information of a track, and sample group information.
  • The reception device 20 may obtain the temporal level number information of a track in the G-PCC file from the G-PCC file (S1720). The temporal level number information may be num_temporal_levels.
  • The reception device 20 may obtain sample group information of samples in a track from the G-PCC file based on the temporal level number information (S1730). For example, the receiving device 20 may configure at least one sample group when there is a plurality of temporal levels in the track. As another example, the reception device 20 may configure at least one sample group when there is a plurality of temporal levels in the track and the corresponding track is a geometry track. The sample group information may include temporal level identifier information of G-PCC samples. The temporal level identifier information may be represented by temporal_level_id.
  • FIG. 18 illustrates an example of a structure of a G-PCC file for 'tele' sample grouping according to embodiments of the present disclosure.
  • Referring to FIG. 18, a moov box 1805 in a G-PCC file 1800 may be a multi-track encapsulation file in which two or more tracks are present. Here, the types of two or more tracks present in the moov box 1805 may include a geometry track and/or an attribute track.
• According to an embodiment of the present disclosure, the 'tele' sample group may be present only in a geometry track. In addition, the 'tele' sample group may be present in the track only when there is a plurality of temporal levels in the geometry track. In the case of FIG. 18, geometry track #1 1810 has a plurality of temporal levels (0, 1, and 2) in the track. Accordingly, in geometry track #1 1810, a 'tele' sample group may be present in the track. However, geometry track #2 1815 has only one temporal level (3) in the track. Therefore, in geometry track #2 1815, the 'tele' sample group may not be present in the track.
  • FIGS. 19 and 20 are flowcharts for data transmission/reception based on sample grouping according to embodiments of the present disclosure.
  • Referring to FIG. 19, the transmission device 10 may determine whether to group samples based on the number of temporal levels in a track (S1910). The transmission device 10 may group samples in the track when the number of temporal levels in the track is two or more. On the other hand, when the number of temporal levels in the track is one, the transmission device 10 may not group samples in the track. If the samples are grouped based on the number of temporal levels in the track being 2 or more, the transmission device 10 may generate corresponding sample group information (S1920). The sample group information may include temporal level index information.
  • The transmission device 10 may encapsulate the samples based on the sample group information generated in step S1920 (S1930). On the other hand, if the samples in the track are not grouped, the transmission device 10 may encapsulate the samples without generating sample group information (S1930).
  • Referring to FIG. 20, the reception device 20 may determine whether the number of temporal levels is 2 or more (S2010). The temporal level number information may be information on the number of temporal levels present in the track. The reception device 20 may determine that the samples in the track are grouped when the information on the number of temporal levels in the track is 2 or more. On the other hand, the reception device 20 may determine that the samples in the track are not grouped when the information on the number of temporal levels in the track is one.
  • The reception device 20 may obtain sample group information (S2020). Sample group information may include temporal level index information in the track. The temporal level index information may be temporal_level_id. The reception device 20 may obtain sample group information based on the number of temporal levels determined in step S2010. Specifically, the reception device 20 may obtain sample group information when the number of temporal levels determined in step S2010 is 2 or more.
• The reception device 20 may decapsulate the samples based on the sample group information obtained in step S2020 (S2030). On the other hand, the reception device 20 may decapsulate the samples without checking the sample group information when the number of temporal levels in the track is less than 2 (S2030).
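• For illustration only, the reception-side flow of FIGS. 17 and 20 may be sketched in Python as follows; reader is a hypothetical file-reading helper assumed for this example, not part of the disclosure.

        def decapsulate_track(reader):
            num_temporal_levels = reader.read_num_temporal_levels()  # S1720/S2010
            group_info = None
            if num_temporal_levels >= 2:                             # S2020
                group_info = reader.read_sample_group_info()
            return reader.read_samples(group_info)                   # S1730/S2030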
  • According to the present disclosure described above, when the number of temporal levels in the track is one, unnecessary 'tele' sample groups may be avoided by not performing sample grouping. In addition, when the number of temporal levels in the track is one, signaling of unnecessary information may be avoided by not signaling sample group information. Accordingly, encapsulation/decapsulation efficiency and/or bit efficiency of G-PCC file data may be improved.
  • The scope of the disclosure includes software or machine-executable instructions (e.g., an operating system, an application, firmware, a program, etc.) that enable operations according to the methods of the various embodiments to be executed on an apparatus or a computer, and a non-transitory computer-readable medium having such software or instructions stored thereon and executable on the apparatus or the computer.
  • Industrial Applicability
  • The embodiments of the present disclosure may be used to provide point cloud content. In addition, the embodiments of the present disclosure may be used to encode/decode point cloud data.
  • Claims (12)

    1. A method performed by a reception device of point cloud data, the method comprising:
      obtaining a geometry-based point cloud compression (G-PCC) file including the point cloud data;
      obtaining temporal level number information indicating the number of temporal levels in a track within the G-PCC file from the G-PCC file; and
      obtaining sample group information of samples in the track from the G-PCC file based on the temporal level number information,
      wherein the sample group information is obtained based on the temporal level number information indicating that there is a plurality of temporal levels in the track.
    2. The method of claim 1, wherein the sample group information includes identifier information of the temporal levels.
    3. The method of claim 1, wherein the track includes geometry information of the point cloud data.
    4. The method of claim 1, wherein, based on the track having a first sample entry type, the number of temporal levels is included in a G-PCC decoder configuration record (GPCCDecoderConfigurationRecord).
    5. The method of claim 4, wherein the first sample entry type is at least one of a gpe1 sample entry type, a gpeg sample entry type, a gpc1 sample entry type or a gpcg sample entry type.
    6. The method of claim 1, wherein, based on the track having a second sample entry type, the number of temporal levels is included in a G-PCC temporal scalability information box (GPCCScalabilityInfoBox).
    7. The method of claim 6, wherein the second sample entry type is at least one of a gpcb sample entry type or a gpeb sample entry type.
    8. The method of claim 1,
      wherein, based on the sample group information being present in the track, a sample belonging to a temporal level identifier is mapped to a sample group description index, and
      wherein the sample group description index is equal to the temporal level identifier plus 1.
    9. The method of claim 4, wherein a sample description for temporal levels signaled in the G-PCC decoder configuration record is included in a sample description box (SampleDescriptionBox).
    10. A method performed by a transmission device of point cloud data, the method comprising:
      generating temporal level number information indicating the number of temporal levels in a track within a geometry-based point cloud compression (G-PCC) file;
      generating sample group information which is information on grouping of samples in the track based on the number of temporal levels; and
      generating the G-PCC file including the temporal level number information,
      wherein the sample group information is generated based on there being a plurality of temporal levels.
    11. A reception device of point cloud data, the reception device comprising:
      a memory; and
      at least one processor,
      wherein the at least one processor is configured to:
      obtain a geometry-based point cloud compression (G-PCC) file including the point cloud data;
      obtain temporal level number information indicating the number of temporal levels in a track within the G-PCC file from the G-PCC file; and
      obtain sample group information of samples in the track from the G-PCC file based on the temporal level number information,
      wherein the sample group information is obtained based on the temporal level number information indicating that there is a plurality of temporal levels in the track.
    12. A transmission device of point cloud data, the transmission device comprising:
      a memory; and
      at least one processor,
      wherein the at least one processor is configured to:
      generate temporal level number information indicating the number of temporal levels in a track within a geometry-based point cloud compression (G-PCC) file;
      generate sample group information which is information on grouping of samples in the track based on the number of temporal levels; and
      generate the G-PCC file including the temporal level number information,
      wherein the sample group information is generated based on there being a plurality of temporal levels.