
WO2018128248A1 - Image decoding method and device based on an invalid unit in an image coding system for 360-degree video

Info

Publication number
WO2018128248A1
Authority
WO
WIPO (PCT)
Prior art keywords: information, target, unit, processing unit, flag
Prior art date
Application number
PCT/KR2017/011035
Other languages: English (en), Korean (ko)
Inventor
이령
김승환
Original Assignee
엘지전자 주식회사
Application filed by 엘지전자 주식회사
Publication of WO2018128248A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/12 Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H04N 19/122 Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N 19/172 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/597 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/70 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • The present invention relates to 360-degree video, and more particularly, to an invalid unit based image decoding method and apparatus in a coding system for 360-degree video.
  • 360-degree video may refer to video or image content that is captured or played back in all directions (360 degrees) at the same time and is required to provide a virtual reality (VR) system.
  • 360 degree video can be represented on a three-dimensional spherical surface.
  • To provide 360-degree video, images or videos for each of a plurality of viewpoints are captured through one or more cameras, the captured images / videos are combined into a single panoramic or spherical image / video, the combined image / video is projected onto a picture, and the projected picture is coded and transmitted.
  • Because 360-degree video increases the amount of information or the number of bits relative to conventional image data, transmitting it through a medium such as a conventional wired / wireless broadband line or storing it on a conventional storage medium increases transmission and storage costs.
  • An object of the present invention is to provide a method and apparatus for increasing the efficiency of transmitting 360-degree video information for providing 360-degree video.
  • Another technical problem of the present invention is to provide a method and apparatus for dividing a projected picture for 360-degree video into invalid units.
  • Another technical problem of the present invention is to provide a method and apparatus for deriving invalid samples based on invalid units of a projected picture.
  • Another technical problem of the present invention is to provide a method and apparatus for transmitting information about an invalid map and the type of invalid units of a projected picture.
  • According to an embodiment of the present invention, an intra prediction method performed by an encoding apparatus is provided. The method includes obtaining 360-degree video data captured by at least one camera, processing the 360-degree video data to obtain a projected picture, deriving processing units of the projected picture, deriving an IU flag for a target processing unit among the processing units, deriving, if the IU flag for the target processing unit indicates that the target processing unit is invalid, a sample value of a sample of the target processing unit as a predetermined specific value, and generating, encoding, and outputting 360-degree video information about the projected picture, wherein the IU flag for the target processing unit indicates whether the target processing unit is an invalid area.
  • According to another embodiment of the present invention, an encoding apparatus for performing intra prediction is provided.
  • The encoding apparatus includes a projection processing unit that obtains 360-degree video data captured by at least one camera and processes the 360-degree video data to obtain a projected picture, a prediction unit that derives the processing units of the projected picture, derives an IU flag for a target processing unit among the processing units, and derives, if the IU flag for the target processing unit indicates that the target processing unit is invalid, a sample value of a sample of the target processing unit as a predetermined specific value, and an entropy encoding unit that generates, encodes, and outputs 360-degree video information about the projected picture, wherein the IU flag for the target processing unit indicates whether the target processing unit is an invalid area.
  • an image decoding method performed by a decoding apparatus.
  • The method includes receiving 360-degree video information, deriving processing units of a projected picture, deriving an IU flag for a target processing unit among the processing units based on the 360-degree video information, and deriving, if the IU flag for the target processing unit indicates that the target processing unit is invalid, a sample value of a sample of the target processing unit as a predetermined specific value, wherein the IU flag may indicate whether the target processing unit is an invalid area.
  • a decoding apparatus for processing 360 degree video data.
  • The decoding apparatus includes an entropy decoding unit that receives 360-degree video information, and a prediction unit that derives processing units of the projected picture, derives an IU flag for a target processing unit among the processing units based on the 360-degree video information, and derives, if the IU flag indicates that the target processing unit is invalid, a sample value of a sample of the target processing unit as a predetermined specific value, wherein the IU flag for the target processing unit may indicate whether the target processing unit is an invalid area.
  • According to the present invention, invalid samples can be derived more efficiently based on the invalid units of the projected picture for 360-degree video, thereby reducing the overhead for deriving the invalid samples.
  • Through this, the overall coding efficiency can be improved.
  • According to the present invention, the type of an invalid unit of a projected picture may be derived based on IU type information, and the invalid units of the projected picture may be derived based on the type of the invalid unit.
  • According to the present invention, the IU flags for the invalid units of the projected picture can be derived more efficiently based on the invalid map, through which the overhead for deriving invalid samples of the projected picture can be reduced and the overall coding efficiency can be improved.
  • FIG. 1 is a diagram illustrating an overall architecture for providing a 360 degree video according to the present invention.
  • FIG. 2 exemplarily illustrates the processing of 360-degree video in an encoding apparatus and a decoding apparatus.
  • FIG. 3 is a diagram schematically illustrating a configuration of a video encoding apparatus to which the present invention may be applied.
  • FIG. 4 is a diagram schematically illustrating a configuration of a video decoding apparatus to which the present invention may be applied.
  • FIGS. 5A to 5C exemplarily illustrate projected pictures derived based on the ERP, the CMP, and the OHP.
  • FIGS. 6A to 6C exemplarily show projected pictures divided into IUs.
  • FIG. 8 shows an example of an encoding / decoding process performed based on IU.
  • FIG. 9 schematically illustrates a video encoding method by an encoding device according to the present invention.
  • FIG. 10 schematically illustrates a video decoding method by a decoding apparatus according to the present invention.
  • Each configuration in the drawings described in the present invention is shown independently for convenience of description of different characteristic functions; it does not mean that each configuration is implemented as separate hardware or separate software.
  • two or more of each configuration may be combined to form one configuration, or one configuration may be divided into a plurality of configurations.
  • Embodiments in which each configuration is integrated and / or separated are also included in the scope of the present invention without departing from the spirit of the present invention.
  • A picture generally refers to a unit representing one image in a specific time period.
  • A slice is a unit constituting a part of a picture in coding.
  • One picture may be composed of a plurality of slices, and if necessary, the terms picture and slice may be used interchangeably.
  • a pixel or a pel may refer to a minimum unit constituting one picture (or image). Also, 'sample' may be used as a term corresponding to a pixel.
  • a sample may generally represent a pixel or a value of a pixel, and may only represent pixel / pixel values of the luma component, or only pixel / pixel values of the chroma component.
  • a unit represents the basic unit of image processing.
  • the unit may include at least one of a specific region of the picture and information related to the region.
  • the unit may be used interchangeably with terms such as block or area in some cases.
  • an M ⁇ N block may represent a set of samples or transform coefficients composed of M columns and N rows.
  • FIG. 1 is a diagram illustrating an overall architecture for providing a 360 degree video according to the present invention.
  • the present invention proposes a method of providing 360 content in order to provide a user with virtual reality (VR).
  • VR may refer to a technique or environment for replicating a real or virtual environment.
  • VR artificially provides the user with a sensory experience, which allows the user to experience the same as being in an electronically projected environment.
  • 360 content refers to the overall content for implementing and providing VR, and may include 360 degree video and / or 360 audio.
  • 360-degree video may refer to video or image content that is captured or played back in all directions (360 degrees) at the same time and is required to provide VR.
  • 360-degree video may also simply be referred to as 360 video.
  • the 360 degree video may refer to a video or an image represented on various types of 3D space according to the 3D model, for example, the 360 degree video may be displayed on a spherical surface.
  • 360 audio is also audio content for providing VR, and may mean spatial audio content, in which a sound source can be recognized as being located in a specific space in three dimensions.
  • 360 content may be generated, processed, and transmitted to users, and users may consume the VR experience using 360 content.
  • the present invention particularly proposes a method for effectively providing 360 degree video.
  • 360 degree video may first be captured via one or more cameras.
  • the captured 360-degree video is transmitted through a series of processes, and the receiving side can process and render the received data back into the original 360-degree video. This may provide a 360 degree video to the user.
  • the entire process for providing the 360 degree video may include a capture process, preparation process, transmission process, processing process, rendering process, and / or feedback process.
  • the capturing process may refer to capturing an image or video for each of a plurality of viewpoints through one or more cameras.
  • Image / video data such as 110 of FIG. 1 may be generated by the capture process.
  • Each plane of 110 in FIG. 1 may represent an image / video for each viewpoint.
  • the captured plurality of images / videos may be referred to as raw data.
  • metadata related to capture may be generated.
  • Special cameras can be used for this capture.
  • In some cases, capture through an actual camera may not be performed.
  • In this case, the capture process may be replaced by a process of simply generating related data.
  • the preparation process may be a process of processing the captured image / video and metadata generated during the capture process.
  • the captured image / video may undergo a stitching process, a projection process, a region-wise packing process, and / or an encoding process in this preparation process.
  • each image / video can be stitched.
  • the stitching process may be a process of connecting each captured image / video to create a panoramic image / video or a spherical image / video.
  • the stitched image / video may be subjected to a projection process.
  • the stitched image / video may be projected onto the 2D image.
  • This 2D image may be called a 2D image frame or a projected picture depending on the context. Projecting 360-degree video data onto a 2D image may also be expressed as mapping the 360-degree video data onto the 2D image.
  • the projected image / video data may be in the form of a 2D image as shown in FIG. 1 120.
  • A region may mean an area into which a 2D image onto which 360-degree video data is projected is divided.
  • the region may correspond to a face or a tile.
  • the regions may be divided evenly or arbitrarily divided into 2D images according to an embodiment. In some embodiments, regions may be divided according to a projection scheme.
  • this processing may include rotating each region or rearranging on 2D images in order to increase video coding efficiency. For example, by rotating the regions so that certain sides of the regions are located close to each other, efficiency in coding can be increased.
  • the process may include increasing or decreasing a resolution for a specific region in order to differentiate the resolution for each region of the 360 degree video. For example, regions that correspond to relatively more important regions on 360 degree video may have higher resolution than other regions.
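  • As an illustration of the region-wise packing steps described above, the following minimal Python sketch (using NumPy; the function name pack_region and its parameters are hypothetical, not taken from the present invention) rotates a region and places it at a chosen position in the packed picture, optionally upscaling it to give a more important region a higher resolution.

      import numpy as np

      def pack_region(packed_picture, region, x, y, quarter_turns=0, scale=1):
          # Rotate the region so that chosen sides end up adjacent in the packed picture.
          r = np.rot90(region, quarter_turns)
          if scale != 1:
              # Naive nearest-neighbour upscaling for regions that should keep a higher resolution.
              r = np.kron(r, np.ones((scale, scale), dtype=r.dtype))
          packed_picture[y:y + r.shape[0], x:x + r.shape[1]] = r
          return packed_picture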
  • Video data projected onto a 2D image may undergo an encoding process through a video codec.
  • the preparation process may further include an editing process.
  • In the editing process, editing of the image / video data before and after projection may be further performed.
  • metadata about stitching / projection / encoding / editing may be generated.
  • metadata about an initial time point, or a region of interest (ROI) of video data projected on the 2D image may be generated.
  • the transmission process may be a process of processing and transmitting image / video data and metadata that have been prepared. Processing may be performed according to any transport protocol for the transmission. Data that has been processed for transmission may be delivered through a broadcast network and / or broadband. These data may be delivered to the receiving side in an on demand manner. The receiving side can receive the corresponding data through various paths.
  • the processing may refer to a process of decoding the received data and re-projecting the projected image / video data onto the 3D model.
  • image / video data projected on 2D images may be re-projected onto 3D space.
  • This process may be called mapping or projection depending on the context.
  • the mapped 3D space may have a different shape according to the 3D model.
  • The 3D model may have, for example, the form of a sphere, a cube, a cylinder, or a pyramid.
  • the processing process may further include an editing process, an up scaling process, and the like.
  • In the editing process, editing of the image / video data before and after re-projection may be further performed.
  • In the upscaling process, the size of the image may be increased by upscaling the samples; if necessary, the size may be reduced through downscaling.
  • the rendering process may refer to a process of rendering and displaying re-projected image / video data in 3D space. Depending on the representation, it may be said to combine re-projection and rendering to render on a 3D model.
  • The image / video re-projected onto the 3D model (or rendered onto the 3D model) may have a form such as 130 of FIG. 1; 130 of FIG. 1 shows the case of re-projection onto a spherical 3D model.
  • the user may view some areas of the rendered image / video through the VR display. In this case, the area seen by the user may be in the form as shown in FIG.
  • the feedback process may mean a process of transmitting various feedback information that can be obtained in the display process to the transmitter. Through the feedback process, interactivity may be provided for 360-degree video consumption. According to an embodiment, in the feedback process, head orientation information, viewport information indicating an area currently viewed by the user, and the like may be transmitted to the transmitter. According to an embodiment, the user may interact with those implemented on the VR environment, in which case the information related to the interaction may be transmitted to the sender or service provider side in the feedback process. In some embodiments, the feedback process may not be performed.
  • the head orientation information may mean information about a head position, an angle, and a movement of the user. Based on this information, information about the area currently viewed by the user in the 360 degree video, that is, viewport information, may be calculated.
  • The viewport information may be information about the area currently viewed by the user in the 360-degree video. Through this, gaze analysis may be performed to determine how the user consumes the 360-degree video, which area of the 360-degree video the user gazes at, and for how long. Gaze analysis may be performed at the receiving side and delivered to the transmitting side through a feedback channel.
  • a device such as a VR display may extract a viewport area based on the position / direction of a user's head, vertical or horizontal field of view (FOV) information supported by the device, and the like.
  • the above-described feedback information may be consumed at the receiving side as well as transmitted to the transmitting side. That is, the decoding, re-projection, rendering process, etc. of the receiving side may be performed using the above-described feedback information. For example, using head orientation information and / or viewport information, only 360 degree video for the area currently being viewed by the user may be preferentially decoded and rendered.
  • The viewport or viewport area may mean the area that the user is viewing in the 360-degree video.
  • A viewpoint is the point that the user is viewing in the 360-degree video and may mean the center point of the viewport area. That is, the viewport is an area centered on the viewpoint, and the size and shape occupied by the area may be determined by the field of view (FOV), which will be described later.
  • Image / video data that undergoes a series of capture / projection / encoding / transmission / decoding / re-projection / rendering processes may be referred to as 360-degree video data.
  • the term 360 degree video data may also be used as a concept including metadata or signaling information associated with such image / video data.
  • The projection processor 210 may stitch 360-degree video data of an input viewpoint and project it onto a 3D projection structure according to various projection schemes, and the 360-degree video data projected onto the 3D projection structure may be represented as a 2D image. That is, the projection processor 210 may stitch the 360-degree video data and project it onto a 2D image.
  • the projection scheme may be called a projection type.
  • the 2D image projected with the 360 degree video data may be referred to as a projected frame or a projected picture.
  • the projected picture may be divided into a plurality of faces according to the projection type.
  • the face may correspond to a tile.
  • the plurality of faces in the projected picture may be the same size and shape (eg, triangle or square).
  • the size and shape of the face in the projected picture may vary depending on the projection type.
  • the projection processing unit 210 may perform processing such as rotating and rearranging regions of the projected picture, changing the resolution of each region, and the like.
  • the encoding device 220 may encode the information about the projected picture and output the encoded picture. The process of encoding the projected picture by the encoding device 220 will be described later in detail with reference to FIG. 3.
  • the projection processing unit 210 may be included in the encoding apparatus, or the projection process may be performed through an external device.
  • FIG. 2A may show a process of processing information about a projected picture related to 360 degree video data performed by the decoding apparatus.
  • Information about the projected picture may be received through a bitstream.
  • the decoding apparatus 250 may decode the projection picture based on the received information about the projection picture. A process of decoding the projected picture by the decoding device 250 will be described later in detail with reference to FIG. 4.
  • the re-projection processor 260 may re-project the 360-degree video data projected on the projected picture derived through the decoding process onto the 3D model.
  • the re-projection processor 260 may correspond to the projection processor.
  • 360 degree video data projected on the projected picture may be re-projected onto 3D space.
  • This process may be called mapping or projection depending on the context.
  • the mapped 3D space may have a different shape according to the 3D model.
  • The 3D model may have, for example, the form of a sphere, a cube, a cylinder, or a pyramid.
  • the re-projection processor 260 may be included in the decoding apparatus 250 or the re-projection process may be performed through an external device.
  • the re-projected 360 degree video data can be rendered in 3D space.
  • FIG. 3 is a diagram schematically illustrating a configuration of a video encoding apparatus to which the present invention may be applied.
  • The video encoding apparatus 300 may include a picture splitter 305, a predictor 310, a subtractor 315, a transformer 320, a quantizer 325, a reorderer 330, an entropy encoding unit 335, a residual processing unit 340, an adding unit 350, a filter unit 355, and a memory 360.
  • the residual processor 340 may include an inverse quantizer 341 and an inverse transform unit 342.
  • the picture dividing unit 305 may divide the input picture into at least one processing unit.
  • the processing unit may be called a coding unit (CU).
  • the coding unit may be recursively split from the largest coding unit (LCU) according to a quad-tree binary-tree (QTBT) structure.
  • one coding unit may be divided into a plurality of coding units of a deeper depth based on a quad tree structure and / or a binary tree structure.
  • the quad tree structure may be applied first and the binary tree structure may be applied later.
  • the binary tree structure may be applied first.
  • the coding procedure according to the present invention may be performed based on the final coding unit that is no longer split.
  • The maximum coding unit may be used directly as the final coding unit based on coding efficiency according to the image characteristics, or if necessary, the coding unit may be recursively split into coding units of lower depths and a coding unit of the optimal size may be used as the final coding unit.
  • the coding procedure may include a procedure of prediction, transform, and reconstruction, which will be described later.
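  • The following minimal Python sketch illustrates the QTBT splitting idea described above; the decide function stands in for the encoder's actual split decision and is a hypothetical placeholder, not part of the present invention.

      def split_qtbt(block, decide, min_size=8):
          # block is (x, y, width, height); decide(block) returns 'none', 'quad' or 'binary'.
          x, y, w, h = block
          mode = decide(block)
          if mode == 'none' or min(w, h) <= min_size:
              return [block]                      # final coding unit, no further split
          cus = []
          if mode == 'quad':                      # quad-tree split into four equal sub-blocks
              for dy in (0, h // 2):
                  for dx in (0, w // 2):
                      cus += split_qtbt((x + dx, y + dy, w // 2, h // 2), decide, min_size)
          else:                                   # binary split (here shown as a vertical split)
              cus += split_qtbt((x, y, w // 2, h), decide, min_size)
              cus += split_qtbt((x + w // 2, y, w // 2, h), decide, min_size)
          return cus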
  • The processing unit may include a coding unit (CU), a prediction unit (PU), or a transform unit (TU).
  • the coding unit may be split from the largest coding unit (LCU) into coding units of deeper depths along the quad tree structure.
  • The maximum coding unit may be used directly as the final coding unit based on coding efficiency according to the image characteristics, or if necessary, the coding unit may be recursively split into coding units of lower depths and a coding unit of the optimal size may be used as the final coding unit.
  • If a smallest coding unit (SCU) is set, the coding unit cannot be split into coding units smaller than the smallest coding unit.
  • Here, the final coding unit refers to a coding unit that serves as a basis for being partitioned or split into prediction units or transform units.
  • The prediction unit is a unit partitioned from the coding unit and may be a unit of sample prediction. In this case, the prediction unit may be divided into sub-blocks.
  • the transform unit may be divided along the quad tree structure from the coding unit, and may be a unit for deriving a transform coefficient and / or a unit for deriving a residual signal from the transform coefficient.
  • A coding unit may be called a coding block (CB), a prediction unit a prediction block (PB), and a transform unit a transform block (TB).
  • a prediction block or prediction unit may mean a specific area in the form of a block within a picture, and may include an array of prediction samples.
  • a transform block or a transform unit may mean a specific area in a block form within a picture, and may include an array of transform coefficients or residual samples.
  • the prediction unit 310 may perform a prediction on a block to be processed (hereinafter, referred to as a current block) and generate a predicted block including prediction samples of the current block.
  • the unit of prediction performed by the prediction unit 310 may be a coding block, a transform block, or a prediction block.
  • the prediction unit 310 may determine whether intra prediction or inter prediction is applied to the current block. As an example, the prediction unit 310 may determine whether intra prediction or inter prediction is applied on a CU basis.
  • In the case of intra prediction, the prediction unit 310 may derive a prediction sample for the current block based on reference samples outside the current block in the picture to which the current block belongs (hereinafter, referred to as the current picture). In this case, the prediction unit 310 may (i) derive the prediction sample based on the average or interpolation of neighboring reference samples of the current block, or (ii) derive the prediction sample based on a reference sample present in a specific (prediction) direction with respect to the prediction sample among the neighboring reference samples of the current block. Case (i) may be called a non-directional mode or a non-angular mode, and case (ii) may be called a directional mode or an angular mode.
  • The intra prediction modes may include, for example, 33 directional prediction modes and at least two non-directional modes.
  • The non-directional modes may include a DC prediction mode and a planar mode.
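  • As a simple illustration of a non-directional intra mode, the Python sketch below computes a DC prediction as the average of the neighboring reference samples; it is a simplified example, not the exact derivation used by any particular codec.

      def dc_intra_prediction(top_refs, left_refs, size):
          refs = list(top_refs) + list(left_refs)
          dc = (sum(refs) + len(refs) // 2) // len(refs)   # rounded average of the neighbors
          return [[dc] * size for _ in range(size)]        # every prediction sample gets the DC value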
  • the prediction unit 310 may determine the prediction mode applied to the current block by using the prediction mode applied to the neighboring block.
  • the prediction unit 310 may derive the prediction sample for the current block based on the sample specified by the motion vector on the reference picture.
  • the prediction unit 310 may derive the prediction sample for the current block by applying any one of a skip mode, a merge mode, and a motion vector prediction (MVP) mode.
  • In the skip mode and the merge mode, the prediction unit 310 may use the motion information of a neighboring block as the motion information of the current block.
  • In the skip mode, unlike the merge mode, the difference (residual) between the prediction sample and the original sample is not transmitted.
  • In the MVP (motion vector prediction) mode, the motion vector of a neighboring block is used as a motion vector predictor to derive the motion vector of the current block.
  • the neighboring block may include a spatial neighboring block existing in the current picture and a temporal neighboring block present in the reference picture.
  • a reference picture including the temporal neighboring block may be called a collocated picture (colPic).
  • the motion information may include a motion vector and a reference picture index.
  • Information such as prediction mode information and motion information may be encoded (entropy) and output in the form of a bitstream.
  • the highest picture on the reference picture list may be used as the reference picture.
  • Reference pictures included in a reference picture list may be sorted based on a difference in a picture order count (POC) between a current picture and a corresponding reference picture.
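  • A minimal sketch of the ordering just described, assuming the list is simply sorted by the absolute POC difference (actual reference picture list construction involves additional rules):

      def order_reference_list(current_poc, reference_pocs):
          # Pictures closer to the current picture in POC come first in the list.
          return sorted(reference_pocs, key=lambda poc: abs(current_poc - poc))

      # order_reference_list(8, [0, 4, 16]) -> [4, 0, 16]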
  • the subtraction unit 315 generates a residual sample which is a difference between the original sample and the prediction sample.
  • residual samples may not be generated as described above.
  • the transformer 320 generates a transform coefficient by transforming the residual sample in units of transform blocks.
  • The transformer 320 may perform the transformation according to the size of the transform block and the prediction mode applied to the coding block or the prediction block that spatially overlaps the transform block. For example, if intra prediction is applied to the coding block or the prediction block that overlaps the transform block and the transform block is a 4x4 residual array, the residual sample is transformed using a discrete sine transform (DST); in other cases, the residual sample may be transformed using a discrete cosine transform (DCT).
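  • The transform selection rule described above can be sketched as follows (illustrative only):

      def choose_transform(is_intra_block, transform_block_size):
          # 4x4 residual arrays of intra-coded blocks use DST; everything else uses DCT.
          if is_intra_block and transform_block_size == (4, 4):
              return "DST"
          return "DCT"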
  • the quantization unit 325 may quantize the transform coefficients to generate quantized transform coefficients.
  • the reordering unit 330 rearranges the quantized transform coefficients.
  • the reordering unit 330 may reorder the quantized transform coefficients in the form of a block into a one-dimensional vector through a coefficient scanning method. Although the reordering unit 330 has been described in a separate configuration, the reordering unit 330 may be part of the quantization unit 325.
  • the entropy encoding unit 335 may perform entropy encoding on the quantized transform coefficients.
  • Entropy encoding may include, for example, encoding methods such as exponential Golomb, context-adaptive variable length coding (CAVLC), context-adaptive binary arithmetic coding (CABAC), and the like.
  • the entropy encoding unit 335 may encode information necessary for video reconstruction other than the quantized transform coefficients (for example, a value of a syntax element) together or separately. Entropy encoded information may be transmitted or stored in units of network abstraction layer (NAL) units in the form of bitstreams.
  • the inverse quantization unit 341 inverse quantizes the quantized values (quantized transform coefficients) in the quantization unit 325, and the inverse transform unit 342 inversely transforms the inverse quantized values in the inverse quantization unit 341 to obtain a residual sample.
  • the adder 350 reconstructs the picture by combining the residual sample and the predictive sample.
  • the residual sample and the predictive sample may be added in units of blocks to generate a reconstructed block.
  • the adder 350 may be part of the predictor 310.
  • the adder 350 may be called a restoration unit or a restoration block generation unit.
  • the filter unit 355 may apply a deblocking filter and / or a sample adaptive offset to the reconstructed picture. Through deblocking filtering and / or sample adaptive offset, the artifacts of the block boundaries in the reconstructed picture or the distortion in the quantization process can be corrected.
  • the sample adaptive offset may be applied on a sample basis and may be applied after the process of deblocking filtering is completed.
  • the filter unit 355 may apply an adaptive loop filter (ALF) to the reconstructed picture. ALF may be applied to the reconstructed picture after the deblocking filter and / or sample adaptive offset is applied.
  • the memory 360 may store reconstructed pictures (decoded pictures) or information necessary for encoding / decoding.
  • the reconstructed picture may be a reconstructed picture after the filtering process is completed by the filter unit 355.
  • the stored reconstructed picture may be used as a reference picture for (inter) prediction of another picture.
  • the memory 360 may store (reference) pictures used for inter prediction.
  • pictures used for inter prediction may be designated by a reference picture set or a reference picture list.
  • FIG. 4 is a diagram schematically illustrating a configuration of a video decoding apparatus to which the present invention may be applied.
  • the video decoding apparatus 400 includes an entropy decoding unit 410, a residual processing unit 420, a prediction unit 430, an adder 440, a filter unit 450, and a memory 460. It may include.
  • the residual processor 420 may include a reordering unit 421, an inverse quantization unit 422, and an inverse transform unit 423.
  • the video decoding apparatus 400 may reconstruct the video in response to a process in which the video information is processed in the video encoding apparatus.
  • the video decoding apparatus 400 may perform video decoding using a processing unit applied in the video encoding apparatus.
  • the processing unit block of video decoding may be, for example, a coding unit, and in another example, a coding unit, a prediction unit, or a transform unit.
  • the coding unit may be split along the quad tree structure and / or binary tree structure from the largest coding unit.
  • the prediction unit and the transform unit may be further used in some cases, in which case the prediction block is a block derived or partitioned from the coding unit and may be a unit of sample prediction. At this point, the prediction unit may be divided into subblocks.
  • the transform unit may be divided along the quad tree structure from the coding unit, and may be a unit for deriving a transform coefficient or a unit for deriving a residual signal from the transform coefficient.
  • The entropy decoding unit 410 may parse the bitstream and output information necessary for video reconstruction or picture reconstruction. For example, the entropy decoding unit 410 may decode information in the bitstream based on a coding method such as exponential Golomb coding, CAVLC, or CABAC, and output values of syntax elements required for video reconstruction and quantized values of transform coefficients for residuals.
  • More specifically, the CABAC entropy decoding method may receive a bin corresponding to each syntax element in the bitstream, determine a context model using decoding target syntax element information, decoding information of neighboring and decoding target blocks, or information of symbols / bins decoded in a previous step, predict the probability of occurrence of a bin according to the determined context model, and perform arithmetic decoding of the bin to generate a symbol corresponding to the value of each syntax element.
  • After determining the context model, the CABAC entropy decoding method may update the context model by using the information of the decoded symbol / bin for the context model of the next symbol / bin.
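  • The context-model update described above can be pictured with the following toy Python sketch; it only mimics the idea of adapting a per-context probability estimate after each decoded bin and does not reproduce the actual CABAC state tables.

      class ContextModel:
          def __init__(self, p_one=0.5, rate=0.05):
              self.p_one = p_one   # estimated probability that the next bin is 1
              self.rate = rate     # adaptation speed (real CABAC uses fixed state transition tables)

          def update(self, decoded_bin):
              # Move the estimate toward the value of the bin that was just decoded,
              # so the model used for the next bin reflects previously decoded bins.
              self.p_one += self.rate * ((1.0 if decoded_bin else 0.0) - self.p_one)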
  • Among the information decoded by the entropy decoding unit 410, the information related to the prediction is provided to the prediction unit 430, and the residual values on which entropy decoding has been performed by the entropy decoding unit 410, that is, the quantized transform coefficients, may be input to the reordering unit 421.
  • the reordering unit 421 may rearrange the quantized transform coefficients into a two-dimensional block.
  • the reordering unit 421 may perform reordering in response to coefficient scanning performed by the encoding apparatus. Although the reordering unit 421 has been described in a separate configuration, the reordering unit 421 may be part of the inverse quantization unit 422.
  • the inverse quantization unit 422 may dequantize the quantized transform coefficients based on the (inverse) quantization parameter and output the transform coefficients.
  • information for deriving a quantization parameter may be signaled from the encoding apparatus.
  • the inverse transform unit 423 may inversely transform transform coefficients to derive residual samples.
  • the prediction unit 430 may perform prediction on the current block, and generate a predicted block including prediction samples for the current block.
  • the unit of prediction performed by the prediction unit 430 may be a coding block, a transform block, or a prediction block.
  • the prediction unit 430 may determine whether to apply intra prediction or inter prediction based on the information about the prediction.
  • a unit for determining which of intra prediction and inter prediction is to be applied and a unit for generating a prediction sample may be different.
  • the unit for generating a prediction sample in inter prediction and intra prediction may also be different.
  • For example, whether to apply inter prediction or intra prediction may be determined in units of CUs.
  • In inter prediction, a prediction mode may be determined and a prediction sample may be generated in units of PUs.
  • In intra prediction, a prediction mode may be determined in units of PUs and a prediction sample may be generated in units of TUs.
  • the prediction unit 430 may derive the prediction sample for the current block based on the neighbor reference samples in the current picture.
  • the prediction unit 430 may derive the prediction sample for the current block by applying the directional mode or the non-directional mode based on the neighbor reference samples of the current block.
  • the prediction mode to be applied to the current block may be determined using the intra prediction mode of the neighboring block.
  • the prediction unit 430 may derive the prediction sample for the current block based on the sample specified on the reference picture by the motion vector on the reference picture.
  • In the case of inter prediction, the prediction unit 430 may derive the prediction sample for the current block by applying any one of a skip mode, a merge mode, and an MVP mode.
  • Motion information required for inter prediction of the current block provided by the video encoding apparatus, for example, information about a motion vector and a reference picture index, may be obtained or derived based on the information about the prediction.
  • the motion information of the neighboring block may be used as the motion information of the current block.
  • the neighboring block may include a spatial neighboring block and a temporal neighboring block.
  • the prediction unit 430 may construct a merge candidate list using motion information of available neighboring blocks, and may use information indicated by the merge index on the merge candidate list as a motion vector of the current block.
  • the merge index may be signaled from the encoding device.
  • the motion information may include a motion vector and a reference picture. When the motion information of the temporal neighboring block is used in the skip mode and the merge mode, the highest picture on the reference picture list may be used as the reference picture.
  • In the skip mode, unlike the merge mode, the difference (residual) between the prediction sample and the original sample is not transmitted.
  • In the MVP mode, the motion vector of a neighboring block is used as a motion vector predictor to derive the motion vector of the current block.
  • the neighboring block may include a spatial neighboring block and a temporal neighboring block.
  • a merge candidate list may be generated by using a motion vector of a reconstructed spatial neighboring block and / or a motion vector corresponding to a Col block, which is a temporal neighboring block.
  • the motion vector of the candidate block selected from the merge candidate list is used as the motion vector of the current block.
  • the information about the prediction may include a merge index indicating a candidate block having an optimal motion vector selected from candidate blocks included in the merge candidate list.
  • the prediction unit 430 may derive the motion vector of the current block by using the merge index.
  • a motion vector predictor candidate list may be generated using a motion vector of a reconstructed spatial neighboring block and / or a motion vector corresponding to a Col block which is a temporal neighboring block.
  • the prediction information may include a prediction motion vector index indicating an optimal motion vector selected from the motion vector candidates included in the list.
  • the prediction unit 430 may select the predicted motion vector of the current block from the motion vector candidates included in the motion vector candidate list using the motion vector index.
  • The prediction unit of the encoding apparatus may obtain a motion vector difference (MVD) between the motion vector of the current block and the motion vector predictor, encode the MVD, and output it in the form of a bitstream. That is, the MVD may be obtained by subtracting the motion vector predictor from the motion vector of the current block.
  • the prediction unit 430 may obtain a motion vector difference included in the information about the prediction, and may derive the motion vector of the current block by adding the motion vector difference and the motion vector predictor.
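  • A small worked example of the MVP / MVD relationship described above (the numeric values are made up for illustration):

      mv_current = (13, -7)     # motion vector of the current block found at the encoder
      mv_predictor = (12, -4)   # motion vector predictor taken from a neighboring block

      # Encoder side: MVD = MV - MVP; only the MVD is signaled in the bitstream.
      mvd = (mv_current[0] - mv_predictor[0], mv_current[1] - mv_predictor[1])   # (1, -3)

      # Decoder side: MV = MVP + MVD, recovering the original motion vector.
      mv_decoded = (mv_predictor[0] + mvd[0], mv_predictor[1] + mvd[1])          # (13, -7)
      assert mv_decoded == mv_current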
  • the prediction unit may also obtain or derive a reference picture index or the like indicating a reference picture from the information about the prediction.
  • the adder 440 may reconstruct the current block or the current picture by adding the residual sample and the predictive sample.
  • the adder 440 may reconstruct the current picture by adding the residual sample and the predictive sample in units of blocks. Since the residual is not transmitted when the skip mode is applied, the prediction sample may be a reconstruction sample.
  • the adder 440 is described in a separate configuration, the adder 440 may be part of the predictor 430. On the other hand, the adder 440 may be called a restoration unit or a restoration block generation unit.
  • The filter unit 450 may apply deblocking filtering, a sample adaptive offset, and / or an ALF to the reconstructed picture.
  • the sample adaptive offset may be applied in units of samples and may be applied after deblocking filtering.
  • ALF may be applied after deblocking filtering and / or sample adaptive offset.
  • the memory 460 may store reconstructed pictures (decoded pictures) or information necessary for decoding.
  • the reconstructed picture may be a reconstructed picture after the filtering process is completed by the filter unit 450.
  • the memory 460 may store pictures used for inter prediction.
  • pictures used for inter prediction may be designated by a reference picture set or a reference picture list.
  • the reconstructed picture can be used as a reference picture for another picture.
  • the memory 460 may output the reconstructed picture in the output order.
  • the 2D picture used in the encoding process and the decoding process may have the same number of samples as the size of the picture.
  • A valid sample may represent a sample having valid information or a valid value of the picture, and the valid information or value may represent information of the original picture.
  • An invalid sample may be a sample opposite to the valid sample. That is, samples other than the valid samples may be referred to as invalid samples.
  • The invalid sample may have a specific value.
  • For example, the invalid sample may include information representing gray.
  • Meanwhile, a 360-degree video, which is a 3D image, may be projected onto a 2D picture.
  • That is, 360-degree video data may be projected onto a 2D picture.
  • The 2D picture onto which the 360-degree video data is projected may be referred to as a projected frame or a projected picture.
  • the 360 degree video data may be projected onto a picture through various projection methods.
  • 360 degree video data may be projected and / or packed into a picture via Equirectangular Projection (ERP), Cube Map Projection (CMP) or Octahedron Projection (OHP).
  • 360 degree video data may be projected onto a 2D picture through ERP.
  • Stitched 360-degree video data can be represented on a spherical surface, and the 360-degree video data can be projected onto a single picture such that continuity on the spherical surface is maintained.
  • the 360 degree video data may be mapped to one face in the projected picture.
  • The width and height of the projected picture may be 3228 and 1664, respectively, and in this case the projected picture may include 3228x1664 valid samples.
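  • A minimal sketch of the ERP mapping, assuming a simple linear mapping of longitude and latitude to picture coordinates (the function name and the exact rounding are illustrative, not taken from the present invention):

      def erp_project(longitude_deg, latitude_deg, pic_width, pic_height):
          # Longitude maps linearly to x and latitude to y, so the whole sphere
          # fits into a single rectangular face.
          x = (longitude_deg + 180.0) / 360.0 * pic_width
          y = (90.0 - latitude_deg) / 180.0 * pic_height
          return x, y

      # erp_project(0, 0, 3228, 1664) -> (1614.0, 832.0), the center of the projected picture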
  • 360 degree video data may be projected onto a 2D picture through a CMP.
  • the CMP may be referred to as a cubic projection scheme.
  • Stitched 360-degree video data can be represented on a spherical surface, and the 360-degree video data can be divided onto a cube-shaped 3D projection structure and projected onto a 2D image. That is, the 360-degree video data on the spherical surface can be mapped to each of the six faces of the cube, and each face of the cube can be projected onto the 2D image.
  • the six faces may include a front face, a back face, a top face, a bottom face, a right face, and a left face.
  • the width and height of the projected picture may be 2880 and 1920 respectively.
  • However, a picture projected through the CMP may include invalid samples, and thus the projected picture may not include 2880x1920 valid samples.
  • The invalid samples may represent samples in the gray area of FIG. 5B, that is, samples to which the 360-degree video data is not mapped.
  • The invalid samples may include information representing gray, or may include other information according to the projection method.
  • 360 degree video data may be projected onto a 2D picture through the OHP.
  • Stitched 360-degree video data can be represented on a spherical surface, and the 360-degree video data can be divided onto an octahedral 3D projection structure and projected onto a 2D image. That is, the 360-degree video data on the spherical surface can be projected onto the 2D image as shown in FIG. 5C. Meanwhile, the width and height of the projected picture may be 2880 and 1920, respectively.
  • However, a picture projected through the OHP may also include invalid samples, and thus the projected picture may not include 2880x1920 valid samples.
  • The invalid samples may represent samples in the gray area of FIG. 5C, that is, samples to which the 360-degree video data is not mapped. Meanwhile, even when the 360-degree video data is projected through a projection method other than the above-described CMP and OHP, invalid samples may be generated.
  • Conventionally, an invalid sample of the projected picture is not defined as a separate kind of sample but is processed as a valid sample having a specific value indicating gray.
  • In this case, additional bit overhead may occur in the encoding / decoding process for the invalid samples, which are not needed to code the information about the projected picture.
  • Accordingly, the present invention proposes a method of handling the invalid samples through a process separate from that of the valid samples.
  • the input picture may be divided into units of a coding unit (CU).
  • The CU may be recursively partitioned from a coding tree unit (CTU) according to a quad-tree binary-tree (QTBT) structure.
  • The CTU may be referred to as a largest coding unit (LCU).
  • In order to process the invalid samples, an invalid unit (IU) is proposed in the present invention.
  • The above-described invalid samples may be represented based on the IU and may be processed based on the IU. That is, the input picture may be divided into IUs, and the invalid samples may be processed in units of IUs.
  • The IU may correspond to the CTU or may correspond to the CU. Alternatively, the IU may correspond to a face of the projected picture, or the IU may correspond to a slice of the projected picture.
  • The IU may be derived as blocks of the same size as the corresponding type.
  • IU type information indicating the type of the IUs for the projected picture may be transmitted through a high-level syntax such as a slice header, a picture parameter set (PPS), or a sequence parameter set (SPS).
  • The IU type information may indicate the type corresponding to the IU, and the IU type information may be as shown in the following table.

      Table 1
      IU_type   Type of IU
      0         CTU
      1         CU
      2         Face
      3         Slice

  • IU_type may indicate a syntax element representing the IU type information. Referring to Table 1, when the value of IU_type is 0, the type of the IU may be the CTU; when the value of IU_type is 1, the type of the IU may be the CU; when the value of IU_type is 2, the type of the IU may be the face; and when the value of IU_type is 3, the type of the IU may be the slice.
  • An IU flag for the IU may be derived, and the IU flag may indicate whether the corresponding IU is invalid. That is, the IU flag may indicate whether the IU includes only invalid samples (or does not include a valid sample). For example, when the value of the IU flag is 1, that is, when the IU flag indicates that the IU is invalid, the IU may include only invalid samples, and the encoding / decoding process for the IU can be skipped. In addition, when the value of the IU flag is 0, that is, when the IU flag indicates that the IU includes valid samples, the above-described encoding / decoding process may be performed. That is, the prediction and reconstruction process for the IU may be performed. Meanwhile, the method for processing invalid samples based on the IU described below is described based on a picture projected through the CMP illustrated in FIG. 5B, but may also be applied to a case in which the picture is projected through another projection method.
  • the projected picture may include IUs corresponding to the CTU.
  • the blocks shown in FIGS. 6A to 6B may represent the IUs.
  • an IU flag for each CTU of the projected picture may be derived. That is, when the value of the IU type information is 0, an IU corresponding to each CTU of the projected picture may be derived, and the IU flag for the IU may be derived.
  • If the IU includes a valid sample, the value of the IU flag may be 0; if the IU includes only invalid samples, the value of the IU flag may be 1.
  • The size of the CTU may be represented by a predetermined size.
  • For example, the size of the CTU may be represented by a 128x128 size, or the size of the CTU may be represented by a 256x256 size.
  • the projected picture may include IUs corresponding to the face.
  • an IU flag for each face of the projected picture may be derived. That is, when the value of the IU type information is 2, an IU corresponding to each face of the projected picture may be derived, and the IU flag for the IU may be derived.
  • the size of the face may be represented by a predetermined size. For example, referring to FIG. 6C, the size of the face may be represented by a size of 960x960.
  • IU flags for IUs included in the projected picture may be derived, and the IU flags may have values for the corresponding IU, respectively.
  • In the present invention, an invalid map for coding the values of the IU flags is proposed in order to reduce the overhead of signaling the IU flags.
  • The invalid map may be generated using run-length coding and copy-above line coding.
  • The syntax element for the invalid map may be represented as IU_map.
  • Information about a target row in the invalid map for the projected picture may be derived by coding, through the run-length coding, the values indicated in the target row and the number of times each value occurs, and information about the next row, that is, the row below the target row, may be derived by being coded based on the values of the target row through the copy-above line coding. A detailed method may be as described later with reference to FIG. 7.
  • the information about the first row (the uppermost row) of the invalidated map may be derived as follows by applying the run-length coding.
  • the run-length coding is applied to the values of the IU flags for the first row
  • the information on the first row may be derived as 0 8 22.
  • The 0 of the information on the first row may indicate a start value,
  • the 8 of the information on the first row may indicate that the previous value, that is, the start value 0, occurs eight times, and
  • the 22 of the information on the first row may indicate that the value 1 occurs 22 times thereafter.
  • the copy-above line coding may be used to code information on the subsequent rows of the first row.
  • information indicating the number of rows including information on the same IU flags as the first row may be generated.
  • the information representing the number of rows including information on the same IU flags may be referred to as copy run information, and copy_run may be a syntax element representing the copy run information. Referring to FIG. 7, seven rows from the first to seventh rows may have the same IU flag values, and thus, the value of the copy run information for the first row may be derived as seven.
  • the invalidated map for the projected picture shown in FIG. 7 derived through the above-described method may be as shown in the following table.
  • the IU flags for the IUs included in the projected picture are not signaled, respectively, and the invalid map is signaled so that IU flags of the IUs can be derived based on the invalidated map.
  • the information on the IU flags may be signaled with a bit amount smaller than the bit amount for signaling the IU flags, respectively, thereby improving overall bit efficiency.
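As a concrete, non-normative illustration of the coding scheme just described, the following Python sketch builds such an invalidated map from a grid of IU flags. The function names are hypothetical, and the convention that copy_run counts the coded row itself (so that seven identical rows yield a copy_run of 7, as in the FIG. 7 example) is assumed from the description above.

```python
# Minimal sketch (not the patent's normative syntax): build an "invalidated map"
# from a 2D grid of IU flags (1 = the IU contains only invalidated samples).
# Assumption: each coded row is stored as [start_value, run1, run2, ...] via
# run-length coding, followed by a copy_run giving how many consecutive rows
# (including the coded one) share identical flags.

def run_length_encode_row(row):
    """Run-length code one row of IU flags: start value, then run lengths."""
    encoded = [row[0]]              # start value (e.g. 0)
    run = 1
    for prev, cur in zip(row, row[1:]):
        if cur == prev:
            run += 1
        else:
            encoded.append(run)
            run = 1
    encoded.append(run)
    return encoded

def encode_invalidated_map(iu_flags):
    """Encode a grid of IU flags as (row_code, copy_run) pairs."""
    coded = []
    i = 0
    while i < len(iu_flags):
        copy_run = 1
        while i + copy_run < len(iu_flags) and iu_flags[i + copy_run] == iu_flags[i]:
            copy_run += 1           # rows below that are identical to the coded row
        coded.append((run_length_encode_row(iu_flags[i]), copy_run))
        i += copy_run
    return coded

# Example loosely following the FIG. 7 description: a 30-IU-wide row whose first
# 8 flags are 0 and remaining 22 flags are 1, repeated for 7 identical rows.
row = [0] * 8 + [1] * 22
print(encode_invalidated_map([row] * 7))   # [([0, 8, 22], 7)]
```

Running the example reproduces the "0 8 22" row description and the copy-run value of 7 given above.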
  • the encoding / decoding process performed based on the IU may be started at the slice level.
  • IU type information is obtained through a bitstream (S800).
  • the syntax element for the IU type information may be represented as IU_type.
  • the IU type information may indicate a type corresponding to the IUs.
  • the IU type information may indicate a CTU, and in this case, the IU may be derived as a block corresponding to the CTU of the projected picture.
  • the size of the IU may be the same as the size of the corresponding CTU.
  • the IU type information may indicate a CU, in which case, the IU may be derived as a block corresponding to the CU of the projected picture.
  • the size of the IU may be the same as the size of the corresponding CU.
  • the IU type information may indicate a face, in which case the IU may be derived in a block corresponding to the face of the projected picture.
  • the size of the IU may be the same as the size of the corresponding face.
  • the IU type information may represent a slice, and in this case, the IU may be derived as a block corresponding to a slice of the projected picture.
  • the size of the IU may be the same as the size of the corresponding slice.
  • an IU flag of a target IU in the projected picture is obtained (S810).
  • the IU flag may indicate whether the target IU is invalidated. That is, the IU flag may indicate whether the target IU includes only invalidated samples (or does not include a valid sample).
  • the IU flag information may be derived based on the invalid map information.
  • the invalid map information may be obtained through a bitstream, and the IU_map may indicate a syntax element representing the invalid map information.
  • an IU flag of the target IU among the IUs may be derived based on the invalidated map information.
  • it is determined whether the value of the IU flag for the target IU is 1 (S820). If the value of the IU flag is 1, the encoding/decoding process for the target IU is omitted, and the process for the next IU after the target IU is performed (S830). In addition, when the value of the IU flag is 0, an encoding/decoding process for the target IU is performed (S840). Specifically, when the value of the IU flag is 1, that is, when the IU flag indicates that the target IU is invalidated, the target IU may include only invalidated samples, and the encoding/decoding process for the target IU can be skipped.
  • the sample value of the invalidated sample of the target IU may be derived as a specific value.
  • the sample value of the invalidated sample of the target IU may be derived as a value representing gray.
  • otherwise, that is, when the value of the IU flag is 0, the target IU may be subjected to an encoding/decoding process. That is, the prediction and reconstruction process for the target IU may be performed.
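A compact sketch of this S800–S840 control flow is given below. The representation of the units as plain dictionaries, the decode_fn placeholder for the normal prediction and reconstruction, and the mid-gray fill value of 1 << (bit_depth - 1) are assumptions made only for illustration of what "a value representing gray" could be.

```python
# Illustrative control flow for S800-S840 (a sketch, not an actual decoder).
# decode_fn stands in for the normal prediction + reconstruction of one unit.

def process_invalid_units(units, iu_flags, decode_fn, bit_depth=8):
    gray = 1 << (bit_depth - 1)            # assumed "value representing gray", e.g. 128
    for unit, flag in zip(units, iu_flags):
        if flag == 1:                      # S820/S830: invalidated IU -> skip coding
            unit["samples"] = [gray] * unit["num_samples"]
        else:                              # S840: normal encoding/decoding process
            unit["samples"] = decode_fn(unit)

# Toy usage: two 4-sample IUs, the second one invalidated.
units = [{"num_samples": 4}, {"num_samples": 4}]
process_invalid_units(units, iu_flags=[0, 1], decode_fn=lambda u: [0] * u["num_samples"])
print(units[1]["samples"])   # [128, 128, 128, 128]
```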
  • FIG. 9 schematically illustrates a video encoding method by an encoding device according to the present invention.
  • the method disclosed in FIG. 9 may be performed by the encoding apparatus disclosed in FIG. 3.
  • S900 to S910 of FIG. 9 may be performed by the projection processing unit of the encoding apparatus
  • S920 to S950 may be performed by the prediction unit of the encoding apparatus
  • S960 may be performed by the entropy encoding unit of the encoding apparatus.
  • the encoding device obtains 360 degree video data captured by the at least one camera (S900).
  • the encoding device may obtain 360 degree video data captured by the at least one camera.
  • the 360 degree video data may be video captured by at least one camera.
  • the encoding apparatus processes the 360 degree video data to obtain a projected picture (S910).
  • the encoding apparatus may project the 360 degree video data onto a 2D image (or picture) according to a projection type among various projection types, and obtain the projected picture.
  • the projection type may correspond to the projection method described above, and the projected picture may be referred to as a projected frame.
  • the various projection types may include Equirectangular Projection (ERP), Cube Map Projection (CMP), Icosahedral Projection (ISP), Octahedron Projection (OHP), Truncated Square Pyramid projection (TSP), Segmented Sphere Projection (SSP), and Equal Area Projection (EAP).
  • the 360 degree video data may be mapped to faces of the 3D projection structure of each projection type, and the faces may be projected onto the projected picture. That is, the projected picture may include faces of a 3D projection structure of each projection type.
  • the 360 degree video data may be projected onto the projected picture based on a cube map projection (CMP), in which case the 3D projection structure may be a cube.
  • the 360 degree video data may be mapped to six faces of the cube, and the faces may be projected onto the projected picture.
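To illustrate where invalidated samples come from in such a cube-map packing, the sketch below builds a sample-level validity mask for a hypothetical 4x3 CMP frame packing in which only 6 of the 12 face positions carry projected data. The layout and the 960x960 face size are assumptions for illustration, not the packing of FIG. 5B.

```python
import numpy as np

# Hypothetical 4x3 CMP packing: 6 occupied face positions (True) out of 12.
# Real packings differ; this only illustrates where invalidated samples arise.
FACE = 960                                  # assumed face size (cf. 960x960 above)
OCCUPIED = [(0, 1),                         # (row, col) positions that carry a cube face
            (1, 0), (1, 1), (1, 2), (1, 3),
            (2, 1)]

def cmp_validity_mask(face_size=FACE):
    """Return a boolean mask: True where 360-degree video data is mapped."""
    mask = np.zeros((3 * face_size, 4 * face_size), dtype=bool)
    for r, c in OCCUPIED:
        mask[r * face_size:(r + 1) * face_size,
             c * face_size:(c + 1) * face_size] = True
    return mask

mask = cmp_validity_mask()
print(mask.mean())   # fraction of valid samples, 0.5 for this assumed layout
```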
  • the 360-degree video data may be projected onto the projected picture based on an ISP (Icosahedral Projection), in which case the 3D projection structure may be an icosahedron.
  • the 360 degree video data may be projected onto the projected picture based on Octahedron Projection (OHP), in which case the 3D projection structure may be octahedral.
  • the encoding apparatus may perform processing such as rotating and rearranging each of the faces of the projected picture, changing the resolution of each face, and the like.
  • the encoding apparatus derives processing units of the projected picture (S920).
  • the projected picture may include an invalid sample.
  • the invalidated sample may represent a sample that does not include information about the projected picture, or may represent a sample to which the 360 degree video data is not mapped.
  • the encoding apparatus may split the projected picture into various types of processing units to process the invalidated samples, and may code the projected picture based on each of the various types of processing units, whereby the type having the optimal rate-distortion (RD) cost may be derived as the type of the processing units for the projected picture.
  • the processing units may be referred to as invalid units (IUs) or IU-related processing units.
  • the type of the IUs may be derived as a coding tree unit (CTU) of the projected picture.
  • the type of the IUs may be derived as a coding unit (CU) of the projected picture.
  • the type of the IUs may be derived as a face of the projected picture.
  • the type of IUs may be derived as a slice of the projected picture.
  • the encoding apparatus may derive a type for processing units of the projected picture, and may derive the processing units of the projected picture based on the type.
  • when the type of the IUs is derived as a coding tree unit (CTU) of the projected picture, the IUs may be derived as blocks corresponding to the CTU.
  • the size of the IUs may be the same as the size of the CTU.
  • the size of the CTU may be 128x128 size or 256x256 size.
  • when the type of the IUs is derived as a coding unit (CU) of the projected picture, the IUs may be derived as blocks corresponding to the CU.
  • the size of the IUs may be the same as the size of the CU.
  • when the type of the IUs is derived as a face of the projected picture, the IUs may be derived as blocks corresponding to the face.
  • the size of the IUs may be the same as the size of the face.
  • the size of the face may be 960x960 size.
  • when the type of the IUs is derived as a slice of the projected picture, the IUs may be derived as blocks corresponding to the slice.
  • the size of the IUs may be the same as the size of the slice.
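The derivation of the processing units from the signalled type could be sketched as below. The IU_type codes 0 (CTU) and 2 (face) follow the description above; treating the remaining codes as CU and slice is an assumption (Table 1 is not reproduced here), and the block sizes are illustrative only.

```python
# Sketch of deriving IU blocks from the signalled IU type.  The codes 0 (CTU)
# and 2 (face) follow the description above; other codes are assumed, since
# Table 1 is not reproduced here.  Sizes are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class Block:
    x: int
    y: int
    w: int
    h: int

def derive_invalid_units(iu_type, pic_w, pic_h, ctu=128, face=960):
    """Split the projected picture into IUs of the signalled type."""
    if iu_type == 0:       # CTU-sized IUs (e.g. 128x128 or 256x256)
        step = ctu
    elif iu_type == 2:     # face-sized IUs (e.g. 960x960 for CMP)
        step = face
    else:                  # CU- or slice-sized IUs depend on the coded partitioning
        raise NotImplementedError("CU/slice IU types need the actual partitioning")
    return [Block(x, y, min(step, pic_w - x), min(step, pic_h - y))
            for y in range(0, pic_h, step)
            for x in range(0, pic_w, step)]

# e.g. a 3840x2880 CMP picture split into 960x960 face-sized IUs -> 12 IUs
print(len(derive_invalid_units(2, 3840, 2880)))   # 12
```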
  • the encoding apparatus may generate IU type information about the processing units of the projected picture.
  • the IU type information may indicate a type corresponding to the processing units.
  • the syntax element for the IU type information may be represented as IU_type.
  • the IU type information may indicate the CTU, the CU, the face, or the slice.
  • the IU type information may be derived as shown in Table 1 above.
  • the encoding apparatus derives an IU flag for a target processing unit among the processing units (S930).
  • the target processing unit may be called a target IU.
  • the encoding apparatus may derive the IU flag for the target IU among the IUs.
  • the IU flag for the target IU may indicate whether the target IU is invalid.
  • the IU flag for the target IU may indicate whether the target IU is an invalidated region, that is, whether the target IU includes a valid sample.
  • the IU flag for the target IU may indicate whether the target IU includes only an invalid sample.
  • the valid sample may represent a sample including information about the projected picture.
  • the valid sample may represent a sample to which the 360 degree video data is mapped.
  • the invalidated sample may represent a sample that does not include information about the projected picture.
  • the invalidated sample may represent a sample to which the 360 degree video data is not mapped.
  • the invalidated region may indicate an area including only invalidated samples, or an area in which no valid sample is included.
  • the encoding apparatus may generate the IU flag based on whether the target IU includes a valid sample. For example, if the target IU includes the valid sample, the value of the IU flag for the target IU may be derived as 0, and if the target IU does not include the valid sample (ie When the target IU includes only invalidated samples), the value of the IU flag for the target IU may be derived as 1.
  • the encoding apparatus may generate information about IU flags of the IUs of the projected picture.
  • the information on the IU flags of the IUs may be referred to as an invalid map.
  • the invalidated map may be generated based on run-length coding and copy above line coding.
  • the invalidated map may include information about the IU flags of the IUs included in a target row among the IUs, and the information about the IU flags of the IUs included in the target row may be derived based on the run-length coding.
  • the information about the IU flags of the IUs included in the target row may include the IU flag value of the first IU of the target row, the number of occurrences of the IU flag value of the first IU, and/or information about the number of occurrences of a value different from the IU flag value of the first IU.
  • the invalid map may include information indicating the number of rows including the same information as the target row. The information indicating the number of rows including the same information as the target row may be derived based on the copy-above line coding.
  • the information on the IU flags of the IUs included in the target row may include the IU flag value a of the first IU of the target row, the number b of occurrences of the IU flag value of the first IU, and/or information about the number c of occurrences of a value different from the IU flag value of the first IU.
  • the values of the IU flags of the first IU to the b th IU of the target row may be derived as a.
  • the values of the IU flags of the (b+1)-th IU to the (b+c)-th IU of the target row may be derived as a value different from a (for example, 0 if a is 1, and 1 if a is 0).
  • the invalidated map may include information indicating the number d of rows including the same information as the target row, in which case the same IU flags as those of the target row may be derived for the d-1 rows following the target row. That is, the IU flag values of each of the d-1 rows following the target row may be derived as the same IU flag values as those of the IUs of the target row.
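On the encoder side, deriving the IU flag of S930 essentially amounts to checking whether any valid (mapped) sample falls inside the IU. A minimal sketch, assuming a boolean validity mask like the one built in the earlier CMP example and square IUs, could look as follows.

```python
import numpy as np

# Sketch of S930: an IU flag is 1 only when the IU contains no valid sample.
# valid_mask is a boolean array (True where 360-degree video data is mapped).

def iu_flag_grid(valid_mask, iu_size):
    """Return a grid of IU flags (1 = invalidated IU) for square IUs."""
    h, w = valid_mask.shape
    rows = []
    for y in range(0, h, iu_size):
        row = []
        for x in range(0, w, iu_size):
            block = valid_mask[y:y + iu_size, x:x + iu_size]
            row.append(0 if block.any() else 1)   # 0 if at least one valid sample
        rows.append(row)
    return rows

# Tiny example: a 4x4 mask where only the left half is valid, split into 2x2 IUs.
mask = np.zeros((4, 4), dtype=bool)
mask[:, :2] = True
print(iu_flag_grid(mask, 2))   # [[0, 1], [0, 1]]
```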
  • the encoding apparatus derives a sample value of a sample of the target processing unit as a predetermined specific value (S940).
  • that is, when the IU flag for the target processing unit indicates that the target processing unit is invalidated, a sample value of a sample of the target processing unit can be derived as a preset specific value.
  • the prediction for the target IU may be omitted, and the samples of the target IU may be derived with the predetermined specific value.
  • the predetermined specific value may be a value representing gray.
  • when the IU flag for the target IU indicates that the target IU is not invalidated, encoding/decoding for the target IU may be performed. That is, a prediction sample for the target IU may be derived, and a reconstruction sample may be derived based on the prediction sample.
  • the encoding apparatus generates, encodes, and outputs 360-degree video information about the projected picture (S950).
  • the encoding apparatus may generate the 360 degree video information for the projected picture, and may encode the 360 video information and output the encoded video information through a bitstream.
  • the 360 video information may include IU type information about the processing units of the projected picture.
  • the IU type information may indicate a type corresponding to the processing units.
  • the syntax element for the IU type information may be represented as IU_type.
  • the IU type information may indicate the CTU, the CU, the face, or the slice.
  • the IU type information may be transmitted through a high level syntax such as a slice header, a picture parameter set (PPS), a sequence parameter set (SPS), and the like.
  • the IU type information may be derived as shown in Table 1 above.
  • the 360 video information may include an invalidated map for the projected picture.
  • the invalidated map may indicate information on the IU flags of the processing units of the projected picture.
  • the syntax element for the invalidated map may be represented as IU_map.
  • the invalidated map may include information on the IU flags of processing units included in a target row among the processing units.
  • the information about the IU flags of the processing units included in the target row may include the IU flag value of the first processing unit of the target row, the number of occurrences of the IU flag value of the first processing unit, and/or information about the number of occurrences of a value different from the IU flag value of the first processing unit.
  • the invalid map may include information indicating the number of rows including the same information as the target row.
  • the 360 degree video information may include information indicating the projection type of the projected picture.
  • the projection type of the projected picture may be one of several projection types, and the projection types may include Equirectangular Projection (ERP), Cube Map Projection (CMP), Icosahedral Projection (ISP), Octahedron Projection (OHP), Truncated Square Pyramid projection (TSP), Segmented Sphere Projection (SSP), and Equal Area Projection (EAP).
  • the encoding apparatus may derive a prediction sample for the target processing unit, and residual samples can be generated based on the original sample and the derived prediction sample.
  • the encoding apparatus may generate information about the residual based on the residual sample.
  • the information about the residual may include transform coefficients related to the residual sample.
  • the encoding apparatus may derive the reconstructed sample based on the prediction sample and the residual sample. That is, the encoding apparatus may derive the reconstructed sample by adding the prediction sample and the residual sample.
  • the encoding apparatus may encode the information about the residual and output the bitstream.
  • the bitstream may be transmitted to a decoding apparatus via a network or a storage medium.
  • FIG. 10 schematically illustrates a video decoding method by a decoding apparatus according to the present invention.
  • the method disclosed in FIG. 10 may be performed by the decoding apparatus disclosed in FIG. 4.
  • S1000 of FIG. 10 may be performed by the entropy decoding unit of the decoding apparatus
  • S1010 to S1030 may be performed by the prediction unit of the decoding apparatus.
  • the decoding apparatus receives 360 degree video information (S1000).
  • the decoding apparatus may receive the 360 degree video information through a bitstream.
  • the 360 video information may include IU type information about processing units of the projected picture.
  • the processing units may be called invalid units (IUs).
  • the IU type information may indicate a type corresponding to the IUs.
  • the syntax element for the IU type information may be represented as IU_type.
  • the IU type information may indicate a coding tree unit (CTU), a coding unit (CU), a face, or a slice.
  • the IU type information may be received through a high level syntax such as a slice header, a picture parameter set (PPS), a sequence parameter set (SPS), and the like.
  • the IU type information may be derived as shown in Table 1 above.
  • the 360 video information may include an invalidated map for the projected picture.
  • the invalidated map may indicate information on the IU flags of the processing units of the projected picture.
  • the syntax element for the invalidated map may be represented as IU_map.
  • the invalidated map may include information on the IU flags of IUs included in a target row among the IUs.
  • the information about the IU flags of the IUs included in the target row may include the IU flag value of the first IU of the target row, the number of occurrences of the IU flag value of the first IU, and/or information about the number of occurrences of a value different from the IU flag value of the first IU.
  • the invalid map may include information indicating the number of rows including the same information as the target row.
  • the 360 degree video information may include information indicating the projection type of the projected picture.
  • the projection type of the projected picture may be one of several projection types, and the projection types may include Equirectangular Projection (ERP), Cube Map Projection (CMP), Icosahedral Projection (ISP), Octahedron Projection (OHP), Truncated Square Pyramid projection (TSP), Segmented Sphere Projection (SSP), and Equal Area Projection (EAP).
  • the decoding apparatus derives processing units of the projected picture (S1010).
  • the processing units may be called invalid units (IUs).
  • the decoding apparatus may derive the types of the IUs based on the IU type information on the IUs of the projected picture obtained from the bitstream. That is, the type of the IUs may be derived based on the IU type information for the IUs of the projected picture.
  • the IU type information may indicate a coding tree unit (CTU), a coding unit (CU), a face, or a slice. That is, the type of the IUs may be derived as a CTU, CU, face, or slice of the projected picture based on the IU type information.
  • the IUs may be derived as blocks corresponding to the type indicated by the IU type information. For example, when the type of the IUs is derived as a coding tree unit (CTU) of the projected picture, the IUs may be derived as blocks corresponding to the CTU.
  • the size of the IUs may be the same as the size of the CTU. For example, the size of the CTU may be 128x128 size or 256x256 size.
  • when the type of the IUs is derived as a coding unit (CU) of the projected picture, the IUs may be derived as blocks corresponding to the CU.
  • the size of the IUs may be the same as the size of the CU.
  • when the type of the IUs is derived as a face of the projected picture, the IUs may be derived as blocks corresponding to the face.
  • the size of the IUs may be the same as the size of the face.
  • the size of the face may be 960x960 size.
  • when the type of the IUs is derived as a slice of the projected picture, the IUs may be derived as blocks corresponding to the slice.
  • the size of the IUs may be the same as the size of the slice.
  • the decoding apparatus derives an IU flag of a target processing unit among the processing units based on the 360 video information (S1020).
  • the processing units may be called invalid units (IUs), and the target processing units may be called target IUs.
  • the IU flag for the target processing unit may indicate whether the target processing unit is an invalidated area.
  • the invalidated area may indicate an area including only invalidated samples.
  • the invalidated sample may represent a sample that does not include information about the projected picture, or may represent a sample to which the 360 degree video data is not mapped.
  • the decoding apparatus may derive the IU flags of the IUs based on the invalidated map for the projected picture obtained from the bitstream.
  • the IU flags of the IUs including the target IU of the projected picture may be derived based on the invalidated map for the projected picture.
  • the IU flags of the IUs may be derived by applying run-length coding and copy above line coding to the invalidated map.
  • the invalidated map may include information on the IU flags of IUs included in a target row among the IUs. Information about the IU flags of the IUs included in the target row may be derived based on run-length coding.
  • the information about the IU flags of the IUs included in the target row may include the IU flag value of the first IU of the target row, the number of occurrences of the IU flag value of the first IU, and/or information about the number of occurrences of a value different from the IU flag value of the first IU.
  • the invalid map may include information indicating the number of rows including the same information as the target row. The information indicating the number of rows including the same information as the target row may be derived based on the copy-above line coding.
  • rows following the target row that have the same information as the target row may be derived, and the IU flags of the IUs included in those following rows may be derived in the same manner as the IU flags of the IUs included in the target row.
  • the information on the IU flags of the IUs included in the target row may include the IU flag value a of the first IU of the target row, the number b of occurrences of the IU flag value of the first IU, and/or information about the number c of occurrences of a value different from the IU flag value of the first IU.
  • the values of the IU flags of the first IU to the b th IU of the target row may be derived as a.
  • the values of the IU flags of the (b+1)-th IU to the (b+c)-th IU of the target row may be derived as a value different from a (for example, 0 if a is 1, and 1 if a is 0).
  • the invalidated map may include information indicating the number d of rows including the same information as the target row, in which case the same IU flags as those of the target row may be derived for the d-1 rows following the target row. That is, the IU flag values of each of the d-1 rows following the target row may be derived as the same IU flag values as those of the IUs of the target row.
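The decoder-side derivation just described is the inverse of the earlier encoding sketch: each coded row is run-length decoded and then repeated according to the copy-run value. A minimal sketch, under the same assumed copy_run convention, is shown below.

```python
# Sketch of reconstructing the IU flag grid from the coded invalidated map.
# Input format matches the earlier encoding sketch: a list of
# ([start_value, run1, run2, ...], copy_run) pairs.

def run_length_decode_row(code):
    start_value, runs = code[0], code[1:]
    row, value = [], start_value
    for run in runs:
        row.extend([value] * run)
        value = 1 - value          # IU flags are binary, so run values alternate
    return row

def decode_invalidated_map(coded):
    flags = []
    for row_code, copy_run in coded:
        row = run_length_decode_row(row_code)
        flags.extend([list(row) for _ in range(copy_run)])  # copy-above line coding
    return flags

# Round trip of the FIG. 7-style example: 7 identical rows of 8 zeros then 22 ones.
coded = [([0, 8, 22], 7)]
rows = decode_invalidated_map(coded)
print(len(rows), rows[0][:10])   # 7 [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
```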
  • the decoding apparatus derives a sample value of a sample of the target processing unit as a predetermined specific value (S1030). If the IU flag for the target processing unit indicates that the target processing unit is invalidated (e.g., the value of the IU flag for the target processing unit is 1), decoding for the target processing unit may be omitted, and the sample value of the sample of the target processing unit may be derived as a predetermined specific value.
  • the prediction for the target IU may be omitted, and the samples of the target IU may be derived with the predetermined specific value.
  • the predetermined specific value may be a value representing gray.
  • when the IU flag for the target IU indicates that the target IU is not invalidated (for example, the value of the IU flag for the target IU is 0), decoding may be performed on the target IU. That is, a prediction sample for the target IU may be derived, and a reconstruction sample may be derived based on the prediction sample.
  • the decoding apparatus may use the prediction sample directly as a reconstruction sample according to a prediction mode, or a residual sample may be added to the prediction sample to generate a reconstructed sample.
  • the decoding apparatus may receive information about the residual for the target block, and the information about the residual may be included in the information about the face.
  • the information about the residual may include transform coefficients regarding the residual sample.
  • the decoding apparatus may derive the residual sample (or residual sample array) for the target block based on the residual information.
  • the decoding apparatus may generate a reconstructed sample based on the prediction sample and the residual sample, and may derive a reconstructed block or a reconstructed picture based on the reconstructed sample. Thereafter, as described above, the decoding apparatus may apply an in-loop filtering procedure, such as a deblocking filtering and / or SAO procedure, to the reconstructed picture in order to improve subjective / objective picture quality as necessary.
  • the decoding apparatus may map 360-degree video data of the decoded projected picture into 3D space. That is, the decoding apparatus may re-project the projected picture into the 3D space.
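The final re-projection step could be illustrated, for the CMP case, by mapping each valid sample of a cube face back to a unit direction on the sphere. The per-face normal and axis assignment below is one assumed convention (not specified by this disclosure); only the generic cube-to-sphere geometry is fixed.

```python
import numpy as np

# Sketch of re-projecting a CMP sample back to a 3D viewing direction.
# The per-face normal/axis assignment is an assumed convention, not the
# patent's; only the cube-to-sphere geometry itself is generic.

FACES = {                      # face: (outward normal, u axis, v axis)
    "PX": ([ 1, 0, 0], [0, 0, -1], [0, -1, 0]),
    "NX": ([-1, 0, 0], [0, 0,  1], [0, -1, 0]),
    "PY": ([ 0, 1, 0], [1, 0,  0], [0, 0,  1]),
    "NY": ([ 0,-1, 0], [1, 0,  0], [0, 0, -1]),
    "PZ": ([ 0, 0, 1], [1, 0,  0], [0, -1, 0]),
    "NZ": ([ 0, 0,-1], [-1,0,  0], [0, -1, 0]),
}

def cmp_sample_to_direction(face, col, row, face_size):
    """Map a sample position on a cube face to a unit direction in 3D space."""
    u = 2.0 * (col + 0.5) / face_size - 1.0     # [-1, 1] across the face
    v = 2.0 * (row + 0.5) / face_size - 1.0
    n, a, b = (np.array(x, dtype=float) for x in FACES[face])
    d = n + u * a + v * b                       # point on the cube surface
    return d / np.linalg.norm(d)                # project onto the unit sphere

print(cmp_sample_to_direction("PZ", 479, 479, 960))  # roughly the +Z face centre
```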
  • a type for an invalidated unit of a projected picture is derived based on IU type information, and an invalidated unit for the projected picture is derived based on a type for the invalidated unit.
  • the above-described method according to the present invention may be implemented in software, and the encoding device and/or the decoding device according to the present invention may be included in a device that performs image processing, for example, a TV, a computer, a smartphone, a set-top box, or a display device.
  • the above-described method may be implemented as a module (process, function, etc.) for performing the above-described function.
  • the module may be stored in memory and executed by a processor.
  • the memory may be internal or external to the processor and may be coupled to the processor by various well known means.
  • the processor may include application-specific integrated circuits (ASICs), other chipsets, logic circuits, and / or data processing devices.
  • the memory may include read-only memory (ROM), random access memory (RAM), flash memory, memory card, storage medium and / or other storage device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Discrete Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to an image decoding method, performed by a decoding device, comprising the steps of: receiving 360-degree video information; deriving processing units of a projected picture; deriving, on the basis of the 360-degree video information, an IU flag for a target processing unit among the processing units; and, if the IU flag for the target processing unit indicates that the target processing unit is invalid, deriving a sample value of a sample of the target processing unit as a predefined specific value, wherein the IU flag for the target processing unit indicates whether or not the target processing unit is an invalid region.
PCT/KR2017/011035 2017-01-03 2017-09-29 Procédé et dispositif de décodage d'image basé sur une unité invalide dans un système de codage d'image pour une vidéo à 360 degrés WO2018128248A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762441594P 2017-01-03 2017-01-03
US62/441,594 2017-01-03

Publications (1)

Publication Number Publication Date
WO2018128248A1 true WO2018128248A1 (fr) 2018-07-12

Family

ID=62791078

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/011035 WO2018128248A1 (fr) 2017-01-03 2017-09-29 Procédé et dispositif de décodage d'image basé sur une unité invalide dans un système de codage d'image pour une vidéo à 360 degrés

Country Status (1)

Country Link
WO (1) WO2018128248A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12189058B2 (en) 2017-12-22 2025-01-07 Seyond, Inc. High resolution LiDAR using high frequency pulse firing

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007166134A (ja) * 2005-12-13 2007-06-28 Hitachi Software Eng Co Ltd 非可逆圧縮画像における無効データ領域表示抑制方法
JP2011259246A (ja) * 2010-06-09 2011-12-22 Canon Inc 画像処理装置、画像処理方法、及びプログラム
KR20160001430A (ko) * 2014-06-27 2016-01-06 삼성전자주식회사 영상 패딩영역의 비디오 복호화 및 부호화 장치 및 방법
WO2016004850A1 (fr) * 2014-07-07 2016-01-14 Mediatek Singapore Pte. Ltd. Procédé de recherche de copie intrabloc et de plage de compensation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIANG MA: "Co-projection-plane based motion compensated prediction for cubic format VR content", JVET DOCUMENT JVET-D0061, 12 October 2016 (2016-10-12), pages 1 - 4 *

Similar Documents

Publication Publication Date Title
WO2018062921A1 (fr) Procédé et appareil de partitionnement et de prédiction intra de blocs dans un système de codage d'image
WO2018128247A1 (fr) Procédé et dispositif d'intra-prédiction dans un système de codage d'image pour vidéo à 360 degrés
WO2020197236A1 (fr) Codage d'image ou de vidéo s'appuyant sur une structure de manipulation de sous-images
WO2019198997A1 (fr) Procédé de codage d'image à base d'intraprédiction et appareil pour cela
WO2019112071A1 (fr) Procédé et appareil de décodage d'image basés sur une transformation efficace de composante de chrominance dans un système de codage d'image
EP4346209B1 (fr) Dispositif de codage/décodage d'image pour signaler des informations de prédiction de composante de chrominance selon le mode palette
WO2016056821A1 (fr) Procédé et dispositif de compression d'informations de mouvement pour un codage de vidéo tridimensionnelle (3d)
WO2020076066A1 (fr) Procédé de conception de syntaxe et appareil permettant la réalisation d'un codage à l'aide d'une syntaxe
WO2018212430A1 (fr) Procédé de filtrage de domaine de fréquence dans un système de codage d'image et dispositif associé
WO2016056822A1 (fr) Procédé et dispositif de codage vidéo 3d
WO2019212230A1 (fr) Procédé et appareil de décodage d'image à l'aide d'une transformée selon une taille de bloc dans un système de codage d'image
WO2019009600A1 (fr) Procédé et appareil de décodage d'image utilisant des paramètres de quantification basés sur un type de projection dans un système de codage d'image pour une vidéo à 360 degrés
WO2020141928A1 (fr) Procédé et appareil de décodage d'image sur la base d'une prédiction basée sur un mmvd dans un système de codage d'image
WO2019083119A1 (fr) Procédé et dispositif de décodage d'image utilisant des paramètres de rotation dans un système de codage d'image pour une vidéo à 360 degrés
WO2020141885A1 (fr) Procédé et dispositif de décodage d'image au moyen d'un filtrage de dégroupage
WO2020040439A1 (fr) Procédé et dispositif de prédiction intra dans un système de codage d'image
WO2022039499A1 (fr) Procédé de codage/décodage d'image, dispositif et support d'enregistrement lisible par ordinateur à des fins de signalisation de flux binaire vcm
WO2018084344A1 (fr) Procédé et dispositif de décodage d'image dans un système de codage d'image
WO2018174531A1 (fr) Procédé et dispositif de traitement de signal vidéo
JP2024144567A (ja) ピクチャ分割情報をシグナリングする方法及び装置
WO2020141884A1 (fr) Procédé et appareil de codage d'image en utilisant une mmvd sur la base d'un cpr
WO2018128248A1 (fr) Procédé et dispositif de décodage d'image basé sur une unité invalide dans un système de codage d'image pour une vidéo à 360 degrés
WO2018174542A1 (fr) Procédé et dispositif de traitement de signal vidéo
WO2021206524A1 (fr) Procédé de décodage d'image et dispositif associé
WO2018155939A1 (fr) Procédé et appareil de décodage d'image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17889677

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17889677

Country of ref document: EP

Kind code of ref document: A1

点击 这是indexloc提供的php浏览器服务,不要输入任何密码和下载