WO2018182310A1 - Video encoding method and apparatus, and video decoding method and apparatus - Google Patents


Info

Publication number: WO2018182310A1
Authority: WIPO (PCT)
Prior art keywords: motion information, reference pixel, motion, information, current block
Application number: PCT/KR2018/003658
Other languages: English (en), Korean (ko)
Inventor
탬즈아니쉬
표인지
Original Assignee
삼성전자 주식회사
Application filed by Samsung Electronics Co., Ltd. (삼성전자 주식회사)
Priority to KR1020197019480A (KR102243215B1)
Publication of WO2018182310A1

Classifications

    • H04N 19/139: Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • H04N 19/103: Selection of coding mode or of prediction mode
    • H04N 19/176: Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N 19/182: Adaptive coding characterised by the coding unit, the unit being a pixel
    • H04N 19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding

Definitions

  • The present specification relates to an image encoding method, an image decoding method, and corresponding apparatuses, and more particularly to an image encoding or decoding method and apparatus for more accurately predicting motion information of the pixels of a current block based on a plurality of pieces of motion information associated with the current block.
  • one picture may be divided into a plurality of blocks to encode an image, and each block may be predictively encoded using inter prediction or intra prediction.
  • Inter prediction is a method of compressing an image by removing temporal redundancy between pictures.
  • Inter prediction may predict each block of the current picture using at least one reference picture.
  • the inter prediction may search a reference block most similar to the current block in a predetermined search range of the reference picture by using a predetermined evaluation function.
  • inter prediction may predict the current block using the most similar reference block.
  • inter prediction may obtain, as motion information, a difference between positions of the most similar reference block and the current block.
  • However, existing motion information cannot represent changes of the image such as zoom, rotation, or torsion.
  • The present disclosure provides an encoding or decoding method and apparatus for more accurately predicting motion information of the pixels of a current block based on a plurality of pieces of motion information associated with the current block.
  • According to an embodiment, when the prediction mode of the current block is the affine mode, the image decoding method includes: acquiring, from a received bitstream, a first direction motion component and a second direction motion component included in motion information of a first reference pixel located at a first position of the current block; acquiring, from the bitstream, a first direction motion component included in motion information of a second reference pixel located at a second position of the current block; acquiring a second direction motion component included in the motion information of the second reference pixel; acquiring motion information of a third reference pixel located at a third position of the current block based on the motion information of the first reference pixel and the motion information of the second reference pixel; and acquiring motion information of the pixels included in the current block based on the width and height of the current block, the motion information of the first reference pixel, the motion information of the second reference pixel, and the motion information of the third reference pixel.
  • The image decoding method may include: acquiring first difference information based on the width of the current block, the motion information of the first reference pixel, and the motion information of the second reference pixel; acquiring second difference information based on the height of the current block, the motion information of the first reference pixel, and the motion information of the third reference pixel; and acquiring the motion information of a pixel included in the current block based on the position of the pixel, the first difference information, and the second difference information.
  • An image decoding method may include acquiring, from the bitstream, the second directional motion component included in the motion information of the second reference pixel.
  • The image decoding method may further include obtaining information about a motion type of the current block from the bitstream, and, when the motion type indicates zoom, obtaining the second direction motion component included in the motion information of the second reference pixel based on the second direction motion component included in the motion information of the first reference pixel.
  • In this case, the first direction motion component included in the motion information of the second reference pixel is an x direction motion component, and the second direction motion component included in the motion information of the second reference pixel is a y direction motion component.
  • The image decoding method may further include obtaining information about a motion type of the current block from the bitstream, and, when the motion type indicates rotation, obtaining the second direction motion component included in the motion information of the second reference pixel based on at least one of the motion information of the first reference pixel and the first direction motion component included in the motion information of the second reference pixel.
  • the first direction motion component included in the motion information of the second reference pixel is a y direction motion component
  • the second direction motion component included in the motion information of the second reference pixel is an x direction motion component.
  • The image decoding method may include: obtaining the x direction motion component of the motion information of the third reference pixel based on the width and height of the current block, the motion information of the first reference pixel, and the y direction motion component of the motion information of the second reference pixel; and obtaining the y direction motion component of the motion information of the third reference pixel based on the width and height of the current block, the motion information of the first reference pixel, and the x direction motion component of the motion information of the second reference pixel.
  • The image decoding method may include: obtaining information about the affine mode from the bitstream when the size of the current block is larger than a threshold size; obtaining information about the motion type from the bitstream when the affine mode is applied; and, when the information about the motion type indicates that three directional motion components are obtained from the bitstream, obtaining from the bitstream the first direction motion component and the second direction motion component included in the motion information of the first reference pixel and the first direction motion component included in the motion information of the second reference pixel.
  • When the information about the motion type indicates that four directional motion components are obtained from the bitstream, the method may include obtaining, from the bitstream, the x direction motion component and the y direction motion component included in the motion information of the first reference pixel and the x direction motion component and the y direction motion component included in the motion information of the second reference pixel.
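  • The parsing logic described in the two items above can be organized as in the following sketch. This is only an illustration: the bitstream reader bs, its read_flag/read_symbol/read_component methods, and the threshold value are assumptions, not the syntax defined by this disclosure.

```python
THRESHOLD_SIZE = 16  # assumed value; the disclosure only refers to "a threshold size"

def parse_affine_info(bs, block_w, block_h):
    """bs is a hypothetical bitstream reader; read_flag/read_symbol/read_component
    stand in for the entropy-decoding calls of an actual decoder."""
    info = {"affine": False}
    if block_w <= THRESHOLD_SIZE and block_h <= THRESHOLD_SIZE:
        return info                                 # affine information is not signalled
    info["affine"] = bs.read_flag("affine_mode")
    if not info["affine"]:
        return info
    info["motion_type"] = bs.read_symbol("motion_type")    # e.g. "zoom", "rotation", ...
    if info["motion_type"] in ("zoom", "rotation"):
        # three directional components: both components of MV0, one component of MV1
        info["mv0"] = (bs.read_component("mv0_x"), bs.read_component("mv0_y"))
        info["mv1_first"] = bs.read_component("mv1_first_dir")
    else:
        # four directional components: both components of MV0 and of MV1
        info["mv0"] = (bs.read_component("mv0_x"), bs.read_component("mv0_y"))
        info["mv1"] = (bs.read_component("mv1_x"), bs.read_component("mv1_y"))
    return info
```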
  • According to an embodiment, the image decoding method may include: acquiring the motion information of the first reference pixel based on motion information at the first position of previously reconstructed neighboring blocks of the current block; acquiring the motion information of the second reference pixel based on motion information at the second position of the neighboring blocks; acquiring the motion information of the third reference pixel based on motion information at the third position of the neighboring blocks; and acquiring the motion information of the pixels included in the current block based on the width and height of the current block, the motion information of the first reference pixel, the motion information of the second reference pixel, and the motion information of the third reference pixel.
  • The image decoding method may obtain information about the affine mode from the received bitstream when the size of the current block is larger than a threshold size or when at least one of the neighboring blocks is coded in the affine mode.
  • the method may further include determining a prediction mode of the current block as an affine mode based on the information on the affine mode.
  • According to an embodiment, the image decoding method includes identifying whether the neighboring blocks are in the affine mode, in order from the lower-left neighboring block to the upper-right neighboring block, and acquiring the motion information of the first reference pixel based on motion information at the first position of the first neighboring block identified as being in the affine mode.
  • The image decoding method includes identifying whether the neighboring blocks are in the affine mode, in a zigzag order from the upper-left neighboring block toward the upper-right neighboring block or the lower-left neighboring block, and acquiring the motion information of the second reference pixel based on motion information at the second position of the first neighboring block identified as being in the affine mode.
  • According to an embodiment, the image decoding method includes identifying whether the neighboring blocks are in the affine mode, in order from the upper-right neighboring block to the lower-left neighboring block, and acquiring the motion information of the third reference pixel based on motion information at the third position of the first neighboring block identified as being in the affine mode.
  • According to an embodiment, the image decoding method includes acquiring the motion information of the first reference pixel, which is the motion information of the upper-left pixel of the current block, based on motion information of a neighboring block adjacent to the upper-left pixel of the current block.
  • The image decoding method includes acquiring the motion information of the third reference pixel, which is the motion information of the lower-left pixel of the current block, based on motion information of a neighboring block adjacent to the lower-left pixel of the current block.
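  • A minimal sketch of this neighbor-based derivation is given below, assuming a hypothetical Neighbor record that exposes an affine flag and per-position motion vectors; the actual candidate construction used by this disclosure may differ.

```python
from dataclasses import dataclass

@dataclass
class Neighbor:
    is_affine: bool
    mv_at: dict                     # position label -> motion vector, e.g. {"first": (1, 0)}

def first_affine_neighbor(neighbors):
    """Return the first neighbouring block coded in affine mode, or None."""
    return next((n for n in neighbors if n.is_affine), None)

def derive_reference_mvs(scan_for_mv0, scan_for_mv1, scan_for_mv2):
    """Each argument is a list of Neighbor objects in the scan order described above
    (lower-left to upper-right, zigzag from upper-left, upper-right to lower-left)."""
    n0 = first_affine_neighbor(scan_for_mv0)
    n1 = first_affine_neighbor(scan_for_mv1)
    n2 = first_affine_neighbor(scan_for_mv2)
    if n0 is None or n1 is None or n2 is None:
        return None                 # no affine neighbour found; fall back to another mode
    return n0.mv_at["first"], n1.mv_at["second"], n2.mv_at["third"]
```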
  • The image decoding apparatus includes at least one processor configured to, when the prediction mode of the current block is the affine mode: acquire, from a received bitstream, a first direction motion component and a second direction motion component included in motion information of a first reference pixel located at a first position of the current block; acquire, from the bitstream, a first direction motion component included in motion information of a second reference pixel located at a second position of the current block; acquire a second direction motion component included in the motion information of the second reference pixel; acquire motion information of a third reference pixel located at a third position of the current block based on the motion information of the first reference pixel and the motion information of the second reference pixel; and acquire motion information of the pixels included in the current block based on the width and height of the current block, the motion information of the first reference pixel, the motion information of the second reference pixel, and the motion information of the third reference pixel.
  • An image decoding apparatus includes at least one processor, wherein the at least one processor is configured to determine previously reconstructed neighboring blocks of the current block when the prediction mode of the current block is an affine mode.
  • the neighboring blocks are temporally or spatially adjacent to the current block.
  • According to an embodiment, the image encoding method includes: acquiring a first direction motion component and a second direction motion component included in motion information of a first reference pixel for a first position of the current block, based on the current block included in an original image and a previously reconstructed image of the current block; acquiring a first direction motion component included in motion information of a second reference pixel for a second position of the current block, based on the current block and the previously reconstructed image; acquiring a second direction motion component included in the motion information of the second reference pixel; and acquiring motion information of a third reference pixel based on the motion information of the first reference pixel and the motion information of the second reference pixel.
  • The image encoding method includes obtaining the second direction motion component included in the motion information of the second reference pixel based on the current block and the previously reconstructed image, and generating a bitstream based on the second direction motion component included in the motion information of the second reference pixel.
  • The image encoding method may obtain the second direction motion component included in the motion information of the second reference pixel based on the second direction motion component included in the motion information of the first reference pixel.
  • The method includes determining that the motion type of the current block is zoom and generating a bitstream based on the motion type.
  • In this case, the first direction motion component included in the motion information of the second reference pixel is an x direction motion component, and the second direction motion component included in the motion information of the second reference pixel is a y direction motion component.
  • The image encoding method may obtain the second direction motion component included in the motion information of the second reference pixel based on the motion information of the first reference pixel and the first direction motion component included in the motion information of the second reference pixel.
  • The method includes determining that the motion type of the current block is rotation and generating a bitstream based on the motion type.
  • the first direction motion component included in the motion information of the second reference pixel is a y direction motion component
  • the second direction motion component included in the motion information of the second reference pixel is an x direction motion component.
  • The image encoding apparatus includes at least one processor configured to: acquire a first direction motion component and a second direction motion component included in motion information of a first reference pixel for a first position of the current block, based on the current block included in an original image and a previously reconstructed image of the current block; acquire a first direction motion component included in motion information of a second reference pixel for a second position of the current block, based on the current block and the previously reconstructed image; acquire a second direction motion component included in the motion information of the second reference pixel; acquire motion information of a third reference pixel for a third position of the current block based on the motion information of the first reference pixel and the motion information of the second reference pixel; and acquire motion information of the pixels included in the current block based on the width and height of the current block, the motion information of the first reference pixel, the motion information of the second reference pixel, and the motion information of the third reference pixel.
  • FIG. 1 is a schematic block diagram of an image decoding apparatus according to an embodiment.
  • FIG. 2 is a flowchart of an image decoding method, according to an exemplary embodiment.
  • FIG. 3 is a diagram illustrating a method of predicting motion information of pixels of a current block according to an embodiment.
  • FIG. 4 is a diagram illustrating a method of predicting motion information of pixels of a current block when the motion type of the current block is a zoom type according to an embodiment.
  • FIG. 5 is a diagram illustrating a method of predicting motion information of pixels of a current block when a motion type of a current block is a rotation type according to an embodiment.
  • FIG. 6 is a diagram for describing an affine mode for receiving a differential motion vector, according to an exemplary embodiment.
  • FIG. 7 is a flowchart for describing an affine mode in which a differential motion vector is not received, according to an exemplary embodiment.
  • FIG. 8 illustrates an affine mode in which a differential motion vector is not received, according to an exemplary embodiment.
  • FIG. 9 is a diagram for describing an affine mode in which a differential motion vector is not received, according to an exemplary embodiment.
  • FIG. 10 is a diagram for describing an affine mode in which a differential motion vector is not received, according to an exemplary embodiment.
  • FIG. 11 illustrates an affine mode in which a differential motion vector is not received, according to an exemplary embodiment.
  • FIG. 12 is a diagram for describing an affine mode in which a motion vector is not received, according to another embodiment.
  • FIG. 13 is a diagram for describing a method of obtaining motion information of a pixel included in a current block according to one embodiment of the present disclosure.
  • FIG. 14 is a flowchart for inter prediction according to an embodiment.
  • FIG. 15 is a schematic block diagram of an image encoding apparatus, according to an embodiment.
  • FIG. 16 is a flowchart of a video encoding method, according to an embodiment.
  • FIG. 17 is a diagram of a process of determining, by an image decoding apparatus, at least one coding unit by dividing a current coding unit according to an embodiment.
  • FIG. 18 illustrates a process of determining, by an image decoding apparatus, at least one coding unit by dividing a coding unit having a non-square shape according to an embodiment.
  • FIG. 19 illustrates a process of splitting a coding unit based on at least one of block shape information and split shape information, according to an embodiment.
  • FIG. 20 is a diagram illustrating a method of determining, by an image decoding apparatus, a predetermined coding unit among an odd number of coding units, according to an embodiment.
  • FIG. 21 is a diagram illustrating an order in which a plurality of coding units are processed when the image decoding apparatus determines a plurality of coding units by dividing a current coding unit.
  • FIG. 22 illustrates a process of determining that a current coding unit is divided into odd coding units when the image decoding apparatus cannot process the coding units in a predetermined order, according to an embodiment.
  • FIG. 23 is a diagram of a process of determining, by an image decoding apparatus, at least one coding unit by dividing a first coding unit.
  • FIG. 24 illustrates that the shapes into which a second coding unit may be split are restricted when a non-square second coding unit, determined by splitting a first coding unit, satisfies a predetermined condition, according to an embodiment.
  • FIG. 25 is a diagram illustrating a process of splitting a square coding unit by the image decoding apparatus when the split shape information indicates that the coding unit cannot be split into four square coding units, according to an embodiment.
  • FIG. 26 illustrates that a processing order between a plurality of coding units may vary according to a division process of coding units, according to an embodiment.
  • FIG. 27 illustrates a process of determining a depth of a coding unit as a shape and a size of a coding unit change when a coding unit is recursively divided to determine a plurality of coding units according to an embodiment.
  • FIG. 28 illustrates a depth and a part index (PID) for classifying coding units, which may be determined according to the shape and size of coding units, according to an embodiment.
  • FIG. 29 illustrates that a plurality of coding units are determined according to a plurality of predetermined data units included in a picture according to an embodiment.
  • FIG. 30 is a diagram of a processing block serving as a reference for determining a determination order of a reference coding unit included in a picture, according to an embodiment.
  • the term “part” means a software or hardware component, and “part” plays certain roles. However, “part” is not meant to be limited to software or hardware.
  • The “unit” may be configured to reside in an addressable storage medium and configured to execute on one or more processors.
  • A “part” may include components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays, and variables.
  • the functionality provided within the components and “parts” may be combined into a smaller number of components and “parts” or further separated into additional components and “parts”.
  • the “unit” may be implemented with a processor and a memory.
  • the term “processor” should be interpreted broadly to include general purpose processors, central processing units (CPUs), microprocessors, digital signal processors (DSPs), controllers, microcontrollers, state machines, and the like.
  • A “processor” may refer to an application-specific integrated circuit (ASIC), a programmable logic device (PLD), a field programmable gate array (FPGA), or the like.
  • The term “processor” may also refer to a combination of processing devices, for example, a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
  • memory should be interpreted broadly to include any electronic component capable of storing electronic information.
  • The term “memory” may refer to various types of processor-readable media, such as random access memory (RAM), read-only memory (ROM), non-volatile random access memory (NVRAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable PROM (EEPROM), flash memory, magnetic or optical data storage, registers, and the like.
  • the "image” may be a static image such as a still image of a video or may represent a dynamic image such as a video, that is, the video itself.
  • A “sample” means data allocated to a sampling position of an image, that is, data to be processed.
  • For example, pixel values in an image of the spatial domain and transform coefficients in the transform domain may be samples.
  • a unit including the at least one sample may be defined as a block.
  • An image encoding apparatus, an image decoding apparatus, an image encoding method, and an image decoding method will be described in detail with reference to FIGS. 1 to 30.
  • An encoding or decoding method using image prediction according to an embodiment will be described with reference to FIGS. 1 to 16, and a method of determining a data unit of an image according to an embodiment will be described with reference to FIGS. 17 to 30.
  • Hereinafter, a method and apparatus for efficiently predicting a current block based on a plurality of pieces of motion information of the current block according to an embodiment of the present disclosure will be described with reference to FIGS. 1 to 16.
  • FIG. 1 is a schematic block diagram of an image decoding apparatus 100 according to an embodiment.
  • the image decoding apparatus 100 may include a receiver 110 and a decoder 120.
  • the receiver 110 and the decoder 120 may include at least one processor.
  • the receiver 110 and the decoder 120 may include a memory that stores instructions to be executed by at least one processor.
  • the receiver 110 may receive a bitstream.
  • the bitstream includes information encoded by an image encoding apparatus 1500, which will be described later.
  • the bitstream may be transmitted from the image encoding apparatus 1500.
  • the image encoding apparatus 1500 and the image decoding apparatus 100 may be connected by wire or wirelessly, and the receiver 110 may receive a bitstream through wire or wirelessly.
  • the receiver 110 may receive a bitstream from a storage medium such as an optical media or a hard disk.
  • the decoder 120 may reconstruct an image by obtaining information from the received bitstream. The operation of the decoder 120 will be described in more detail with reference to FIG. 2.
  • FIG. 2 is a flowchart of an image decoding method, according to an exemplary embodiment.
  • the receiver 110 may receive a bitstream.
  • When the prediction mode of the current block is the affine mode, the decoder 120 acquires, from the received bitstream, a first direction motion component and a second direction motion component included in the motion information of a first reference pixel located at a first position of the current block (210). The decoder 120 acquires, from the bitstream, a first direction motion component included in the motion information of a second reference pixel located at a second position of the current block (220). The decoder 120 acquires a second direction motion component included in the motion information of the second reference pixel (230). The decoder 120 acquires the motion information of a third reference pixel located at a third position of the current block based on the motion information of the first reference pixel and the motion information of the second reference pixel (240).
  • The decoder 120 acquires the motion information of the pixels included in the current block based on the width and height of the current block, the motion information of the first reference pixel, the motion information of the second reference pixel, and the motion information of the third reference pixel (250).
  • the image may be divided into maximum coding units.
  • the size of the largest coding unit may be determined based on information obtained from the bitstream.
  • The largest coding units may have square shapes of the same size, but are not limited thereto.
  • the maximum coding unit may be hierarchically divided into coding units based on split type information obtained from the bitstream.
  • The coding unit may be smaller than or equal to the maximum coding unit. For example, when the split type information indicates no split, the coding unit has the same size as the maximum coding unit; when the split type information indicates a split, the maximum coding unit is split into coding units.
  • When the split type information of a coding unit indicates splitting, the coding unit may be split into coding units having a smaller size. However, the segmentation of an image is not limited thereto, and a maximum coding unit and a coding unit may not be distinguished. Splitting of coding units will be described in more detail with reference to FIGS. 17 to 30.
  • the coding unit may be divided into a prediction unit for prediction of an image.
  • the prediction unit may be equal to or smaller than the coding unit.
  • the coding unit may be divided into a transformation unit for transformation of an image.
  • the transformation unit may be equal to or smaller than the coding unit.
  • the shape and size of the transform unit and the prediction unit may not be related to each other.
  • the coding unit may be distinguished from the prediction unit and the transformation unit, but the coding unit, the prediction unit, and the transformation unit may be the same.
  • the division of the prediction unit and the transformation unit may be performed in the same manner as the division of the coding unit. Splitting of coding units will be described in more detail with reference to FIGS. 17 to 30.
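  • As a rough illustration of this hierarchical splitting, the following sketch recursively divides a unit into four equal quadrants based on a split decision. It is a simplified toy model; the actual split shapes supported by this disclosure (see FIGS. 17 to 30) are more varied.

```python
def split_coding_unit(x, y, size, split_decision, min_size=8):
    """split_decision(x, y, size) -> bool stands in for the parsed split type information."""
    if size <= min_size or not split_decision(x, y, size):
        return [(x, y, size)]                      # leaf coding unit
    half = size // 2
    units = []
    for dy in (0, half):
        for dx in (0, half):
            units += split_coding_unit(x + dx, y + dy, half, split_decision, min_size)
    return units

# Example: split a 64x64 maximum coding unit whenever a unit is larger than 16x16.
leaves = split_coding_unit(0, 0, 64, lambda x, y, s: s > 16)
```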
  • the current block of the present disclosure may indicate one of a maximum coding unit, a coding unit, a prediction unit, and a transformation unit.
  • the current block is a block in which decoding or encoding is currently performed.
  • the neighboring block may be a block restored before the current block.
  • The neighboring blocks may be spatially or temporally adjacent to the current block.
  • the neighboring block may be located at one of the lower left side, left side, upper left side, upper side, upper right side, right side, and lower side of the current block.
  • the decoder 120 may determine the prediction mode of the current block as the inter prediction mode or the intra prediction mode.
  • the inter prediction mode is a method of compressing an image by removing temporal redundancy between the images.
  • the decoder 120 may determine whether the current block is an inter prediction mode based on information obtained from the bitstream.
  • the decoder 120 may determine whether the current block is in an affine mode. A method of determining the affine mode by the decoder 120 will be described with reference to FIG. 14. When not in the affine mode, the decoder 120 may perform prediction based on an existing inter prediction mode.
  • the affine mode refers to a mode for predicting motion information of pixels of the current block based on motion information of some pixels of the current block or motion information of neighboring blocks of the current block. The decoding method of the affine mode will be described in more detail with reference to FIGS. 3 to 14.
  • FIG. 3 is a diagram illustrating a method of predicting motion information of pixels of a current block according to an embodiment.
  • the decoder 120 may predict the current block 310 in the affine mode.
  • the decoder 120 may acquire motion information of a reference pixel located at a plurality of positions inside and outside the current block 310.
  • The positions of the reference pixels may be predetermined between the image encoding apparatus 1500 and the image decoding apparatus 100.
  • Alternatively, information about the reference pixels may be included in a bitstream transmitted from the image encoding apparatus 1500 to the image decoding apparatus 100.
  • The decoder 120 may determine at least two of the upper-left, upper-right, lower-left, and lower-right pixels of the current block as the positions of the reference pixels.
  • Referring to FIG. 3, it is assumed that the decoder 120 determines the upper-left pixel 320 of the current block 310 as the first reference pixel, the upper-right pixel 330 as the second reference pixel, and the lower-left pixel 340 as the third reference pixel.
  • the motion information may be a motion vector.
  • the motion information may be a predicted motion vector.
  • the motion information may include an x direction motion component and a y direction motion component of the Cartesian coordinate system.
  • the motion information may include an angular motion component and a longitudinal motion component of the polar coordinate system.
  • the first direction motion component may be an x direction motion component or a y direction motion component.
  • the second direction motion component of the present disclosure may be a y direction motion component or an x direction motion component.
  • the decoder 120 may receive a bitstream from the encoding apparatus 1500 to obtain motion information.
  • the decoder 120 may receive some motion information from the encoding apparatus 1500 and predict the remaining motion information.
  • the decoder 120 may receive only the x-direction motion component or the y-direction motion component of the motion information and derive an unreceived direction motion component.
  • the decoder 120 may receive an x-direction motion component and a y-direction motion component included in one piece of motion information and derive the x-direction motion component and the y-direction motion component of other motion information.
  • The decoder 120 may obtain motion information from blocks reconstructed before the current block 310.
  • Referring to FIG. 3, the decoder 120 may determine the motion information MV0 of the first reference pixel at the upper-left pixel 320 of the current block 310, the motion information MV1 of the second reference pixel at the upper-right pixel 330, and the motion information MV2 of the third reference pixel at the lower-left pixel 340.
  • the motion information MV0, MV1, and MV2 may include an x direction motion component and a y direction motion component, respectively. Therefore, the decoder 120 may predict the motion information of the pixels of the current block 310 using a total of six directional motion components.
  • the motion vector may change linearly with the position of the pixel within the current block 310.
  • the decoder 120 may obtain motion information of pixels included in the current block 310 based on the motion information MV0, MV1, and MV2.
  • The decoder 120 may obtain first difference information based on the width w of the current block 310, the motion information MV0 of the first reference pixel, and the motion information MV1 of the second reference pixel.
  • The decoder 120 may obtain second difference information based on the height h of the current block 310, the motion information MV0 of the first reference pixel, and the motion information MV2 of the third reference pixel.
  • The decoder 120 may obtain the motion information MV of a pixel included in the current block based on the position (x, y) of the pixel, the first difference information, and the second difference information.
  • The first difference information may be represented by Equation 1 as follows.
  • dMVx = (MV1 - MV0) / w
  • dMVx represents first difference information
  • MV0 represents motion information of the first reference pixel
  • MV1 represents motion information of the second reference pixel
  • w represents the length of the width of the current block 310.
  • The second difference information may be represented by Equation 2 as follows.
  • dMVy = (MV2 - MV0) / h
  • dMVy represents second difference information
  • MV0 represents motion information of the first reference pixel
  • MV2 represents motion information of the third reference pixel
  • h represents the length of the height of the current block 310.
  • the decoder 120 may obtain the first difference information and the second difference information by using a predetermined weight.
  • For example, the first difference information dMVx or the second difference information dMVy may be obtained by applying a weight to at least one of the motion information MV0 of the first reference pixel, the motion information MV1 of the second reference pixel, or the motion information MV2 of the third reference pixel.
  • the decoder 120 may obtain the motion information MV of the pixel included in the current block 310 by Equation 3 based on the first difference information and the second difference information.
  • MV = MV0 + x * dMVx + y * dMVy
  • MV represents the motion information of the pixel
  • MV0 represents the motion information of the first reference pixel
  • x represents the x-axis coordinate value of the pixel included in the current block 310
  • y represents the y-axis coordinate value of the pixel included in the current block 310
  • dMVx represents the first difference information
  • dMVy represents the second difference information.
  • The decoder 120 may acquire the motion information MV of the pixels included in the current block 310 by using a predetermined weight. For example, the decoder 120 may obtain the motion information MV by applying a weight to at least one of the motion information MV0 of the first reference pixel, the first difference information dMVx, or the second difference information dMVy.
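  • The following Python sketch puts Equations 1 to 3 together to derive a motion vector for every pixel of the current block from the three corner motion vectors. It is only an illustration: floating-point arithmetic is used and the optional weights mentioned above are omitted.

```python
def per_pixel_motion_field(mv0, mv1, mv2, w, h):
    """mv0, mv1, mv2: (x, y) motion vectors of the upper-left, upper-right and
    lower-left reference pixels; w, h: width and height of the current block."""
    # Equation 1: dMVx = (MV1 - MV0) / w
    dmvx = ((mv1[0] - mv0[0]) / w, (mv1[1] - mv0[1]) / w)
    # Equation 2: dMVy = (MV2 - MV0) / h
    dmvy = ((mv2[0] - mv0[0]) / h, (mv2[1] - mv0[1]) / h)
    field = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Equation 3: MV = MV0 + x * dMVx + y * dMVy
            field[y][x] = (mv0[0] + x * dmvx[0] + y * dmvy[0],
                           mv0[1] + x * dmvx[1] + y * dmvy[1])
    return field
```

  • For example, with mv0 = (0, 0), mv1 = (4, 0), and mv2 = (0, 4) on an 8x8 block, the derived field grows linearly away from the upper-left corner, which corresponds to a zoom-type motion as in FIG. 4.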
  • FIG. 4 is a diagram illustrating a method of predicting motion information of pixels of a current block when the motion type of the current block is a zoom type according to an embodiment.
  • the decoder 120 may predict the current block 410 based on a previously reconstructed reference image of the current image including the current block 410. For example, the decoder 120 may predict the current block 410 based on the reference block 415 included in the reference image. In FIG. 4, the size of the reference block 415 is larger than the size of the current block 410, but is not limited thereto. The size of the reference block 415 may be less than or equal to the size of the current block 410.
  • the decoder 120 may acquire motion information of pixels included in the current block 410 in order to predict the current block 410 based on the reference block 415.
  • The decoder 120 may obtain the motion information MV0 of the first reference pixel at the first position 420 of the current block 410, the motion information MV1 of the second reference pixel at the second position 430, and the motion information MV2 of the third reference pixel at the third position 440.
  • A method by which the decoder 120 acquires the motion information MV0, MV1, and MV2 when the motion type is zoom will be described in detail with reference to FIG. 4.
  • the decoder 120 may obtain information about a motion type of the current block 410 from the bitstream.
  • The motion type may include a zoom type, a rotation type, a type indicating zoom and rotation simultaneously, and a torsion type.
  • The decoder 120 may determine the motion type based on the information about the motion type of the current block 410. FIG. 4 illustrates a case in which the information about the motion type of the current block 410 indicates zoom.
  • the decoder 120 may obtain the motion information MV0 of the first reference pixel based on the information obtained from the bitstream.
  • the motion information MV0 of the first reference pixel may be a motion vector.
  • the decoder 120 may obtain a differential motion vector associated with the motion information MV0 of the first reference pixel from the bitstream.
  • the decoder 120 may obtain the predicted motion vector based on the motion information of the neighboring block previously reconstructed of the current block 410.
  • the decoder 120 may determine candidate blocks based on neighboring blocks.
  • the neighboring blocks may be blocks temporally or spatially adjacent to the current block 410.
  • the decoder 120 may select one candidate block among candidate blocks based on the index obtained from the bitstream.
  • the decoder 120 may determine the motion information of the selected candidate block as a predicted motion vector.
  • the decoder 120 may obtain motion information MV0 of the first reference pixel based on the differential motion vector and the predicted motion vector.
  • the motion information MV0 of the first reference pixel may include an x direction motion component and a y direction motion component.
  • the decoder 120 may determine whether the first direction motion component included in the motion information MV1 of the second reference pixel is an x direction motion component or a y direction motion component according to the motion type. For example, when the motion type indicates zoom, the decoder 120 may determine that the first direction motion component included in the motion information MV1 of the second reference pixel is the x direction motion component. In addition, when the motion type indicates a zoom, the decoder 120 may determine that the second direction motion component included in the motion information of the second reference pixel is the y direction motion component.
  • the decoder 120 may obtain the first direction motion component of the motion information MV1 of the second reference pixel based on the information obtained from the bitstream. For example, the decoder 120 may obtain a differential motion vector associated with the motion information MV1 of the second reference pixel from the bitstream. The differential motion vector may be related to either the x direction motion component or the y direction motion component of the motion information MV1 of the second reference pixel. The decoder 120 may obtain the predicted motion vector based on the motion information of the neighboring block previously reconstructed of the current block 410. The decoder 120 may determine candidate blocks based on neighboring blocks. The neighboring blocks may be blocks temporally or spatially adjacent to the current block 410.
  • the decoder 120 may select one candidate block among candidate blocks based on the index obtained from the bitstream.
  • the decoder 120 may determine the motion information of the selected candidate block as a predicted motion vector.
  • The predicted motion vector may include an x direction motion component and a y direction motion component.
  • The decoder 120 may obtain the first direction motion component (i.e., the x direction motion component) of the motion information of the second reference pixel based on the x direction motion component included in the differential motion vector and the x direction motion component included in the predicted motion vector.
  • The decoder 120 may obtain the second direction motion component (i.e., the y direction motion component) included in the motion information MV1 of the second reference pixel based on the y direction motion component included in the motion information MV0 of the first reference pixel. More specifically, the second direction motion component may be determined by Equation 4 as follows.
  • MV1[y] = MV0[y]
  • MV1[y] represents the second direction motion component included in the motion information MV1 of the second reference pixel
  • MV0[y] represents the y direction motion component of the motion information of the first reference pixel.
  • the decoder 120 may obtain the second direction motion component by multiplying the y direction motion component of the motion information of the first reference pixel by a predetermined weight.
  • The decoder 120 may acquire the motion information of the third reference pixel based on at least one of the motion information of the first reference pixel and the motion information of the second reference pixel. According to an embodiment of the present disclosure, the decoder 120 may obtain the x direction motion component of the motion information of the third reference pixel based on the width and height of the current block, the motion information of the first reference pixel, and the y direction motion component of the motion information of the second reference pixel. Also, the decoder 120 may obtain the y direction motion component of the motion information of the third reference pixel based on the width and height of the current block, the motion information of the first reference pixel, and the x direction motion component of the motion information of the second reference pixel. According to another embodiment of the present disclosure, the decoder 120 may obtain the motion information of the third reference pixel according to Equation 5.
  • MV2[x] = -(MV1[y] - MV0[y]) * h / w + MV0[x]
  • MV2[y] = (MV1[x] - MV0[x]) * h / w + MV0[y]
  • MV2[x] and MV2[y] represent the x direction and y direction motion components of the motion information of the third reference pixel
  • MV1[x] and MV1[y] represent the x direction and y direction motion components of the motion information of the second reference pixel
  • MV0[x] and MV0[y] represent the x direction and y direction motion components of the motion information of the first reference pixel
  • h represents the height of the current block 410
  • w represents the width of the current block 410.
  • The decoder 120 may acquire motion information of an arbitrary position 450 of the current block 410 based on the motion information of the first reference pixel, the motion information of the second reference pixel, and the motion information of the third reference pixel.
  • the decoder 120 may obtain motion information of an arbitrary location 450 based on Equation 3. Since the method of obtaining the motion information of the pixel of the current block 410 has been described with reference to FIG. 3, overlapping description thereof will be omitted.
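  • A minimal sketch of the zoom case follows, assuming MV0 and only the x component of MV1 have been decoded; the remaining components are derived per Equations 4 and 5. Variable names are illustrative, not taken from this disclosure.

```python
def derive_zoom_mvs(mv0, mv1_x, w, h):
    """mv0: (x, y) motion vector of the first (upper-left) reference pixel.
    mv1_x: decoded x direction component of the second reference pixel.
    w, h: width and height of the current block."""
    # Equation 4: the y component of MV1 is copied from MV0
    mv1 = (mv1_x, mv0[1])
    # Equation 5: derive MV2 from MV0 and MV1
    mv2_x = -(mv1[1] - mv0[1]) * h / w + mv0[0]
    mv2_y = (mv1[0] - mv0[0]) * h / w + mv0[1]
    return mv1, (mv2_x, mv2_y)
```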
  • the decoder 120 may predict and reconstruct the current block based on the motion information of the pixels of the current block 410.
  • the decoder 120 may obtain a reference picture index from the bitstream.
  • the decoder 120 may determine the reference picture based on the reference picture index.
  • The decoder 120 may predict the pixel at the first position 420 of the current block by referring to the pixel at the position 425 of the reference block 415 included in the reference image, based on the motion information MV0 of the first reference pixel.
  • the decoder 120 may predict the pixel at the second position 430 of the current block by referring to the pixel at the position 435 of the reference block 415 based on the motion information MV1 of the second reference pixel.
  • the decoder 120 may predict the pixel at the third position 440 of the current block by referring to the pixel at the position 445 of the reference block 415 based on the motion information MV2 of the third reference pixel.
  • the decoder 120 may predict the pixel at the arbitrary position 450 based on the motion information at the arbitrary position 450.
  • the decoder 120 may reconstruct the current block based on the predicted current block and the residual obtained from the bitstream.
  • FIG. 5 is a diagram illustrating a method of predicting motion information of pixels of a current block when a motion type of a current block is a rotation type according to an embodiment.
  • The decoder 120 may predict the current block 510 based on a previously reconstructed reference image of the current image including the current block 510. For example, the decoder 120 may predict the current block 510 based on the reference block 515 included in the reference image. In FIG. 5, the size of the reference block 515 may be the same as that of the current block 510. The decoder 120 may obtain motion information of the pixels included in the current block 510 to predict the current block 510 based on the reference block 515.
  • The decoder 120 may obtain the motion information MV0 of the first reference pixel at the first position 520 of the current block 510, the motion information MV1 of the second reference pixel at the second position 530, and the motion information MV2 of the third reference pixel at the third position 540.
  • a method in which the decoder 120 acquires the motion information MV0, MV1, and MV2 when the motion type is rotation will be described in detail.
  • the decoder 120 may obtain information about a motion type of the current block 510 from the bitstream.
  • the decoder 120 may determine the motion type of the current block 510 based on the information about the motion type of the current block 510.
  • FIG. 5 illustrates a case in which the motion type of the current block 510 is rotation.
  • the decoder 120 may obtain the motion information MV0 of the first reference pixel based on the information obtained from the bitstream.
  • the motion information MV0 of the first reference pixel may be a motion vector.
  • the decoder 120 may obtain motion information MV0 of the first reference pixel based on the differential motion vector obtained from the bitstream. Since the method of obtaining the motion information MV0 of the first reference pixel based on the information obtained from the bitstream has been described with reference to FIG. 4, overlapping description thereof will be omitted.
  • the motion information MV0 of the first reference pixel may include an x direction motion component and a y direction motion component.
  • the decoder 120 may determine whether the first direction motion component included in the motion information MV1 of the second reference pixel is an x direction motion component or a y direction motion component according to the motion type. For example, when the motion type indicates rotation, the decoder 120 may determine that the first direction motion component included in the motion information MV1 of the second reference pixel is the y direction motion component. In addition, when the motion type indicates rotation, the decoder 120 may determine that the second direction motion component included in the motion information of the second reference pixel is the x direction motion component.
  • the decoder 120 may obtain a first directional motion component of the motion information MV1 of the second reference pixel based on the information obtained from the bitstream. For example, the decoder 120 may obtain a differential motion vector associated with the motion information MV1 of the second reference pixel from the bitstream. The differential motion vector may be related to either the x direction motion component or the y direction motion component of the motion information MV1 of the second reference pixel.
  • the decoder 120 may obtain a predicted motion vector based on the motion information of the neighboring block previously reconstructed of the current block 510.
  • the decoder 120 may determine candidate blocks based on neighboring blocks.
  • the decoder 120 may select one candidate block among candidate blocks based on the index obtained from the bitstream.
  • the decoder 120 may determine the motion information of the selected candidate block as a predicted motion vector.
  • The predicted motion vector may include an x direction motion component and a y direction motion component.
  • The decoder 120 may acquire the first direction motion component of the motion information of the second reference pixel based on the corresponding direction motion component included in the differential motion vector and the corresponding direction motion component included in the predicted motion vector.
  • The decoder 120 may obtain the second direction motion component of the motion information of the second reference pixel based on at least one of the motion information of the first reference pixel and the first direction motion component of the motion information of the second reference pixel.
  • the second direction motion component of the motion information of the second reference pixel may be obtained by the following method.
  • the decoder 120 may acquire the coordinates (0,0) of the first position 520. Also, the decoder 120 may acquire the coordinates x0 and y0 of the position 525 of the reference block 515 based on the motion information MV0 of the first reference pixel. The decoder 120 may acquire the coordinates (w, 0) of the second position 530. w may be the length of the width of the current block 510. The first direction motion component of the motion information MV1 of the second reference pixel may be a y direction motion component.
  • The decoder 120 may obtain the y-coordinate value y1 of the position 535 of the reference block 515 based on the obtained first direction motion component of the motion information MV1 of the second reference pixel.
  • the decoder 120 may obtain the x-coordinate value x1 of the position 535 of the reference block 515 based on the Pythagorean theorem. For example, the decoder 120 may obtain the x-coordinate value x1 of the position 535 by Equation 6.
  • x1 = sqrt(w² - (y1 - y0)²) + x0
  • the decoder 120 may obtain a second direction motion component of the motion information MV1 of the second reference pixel based on the x coordinate value x1 of the position 535.
  • the second direction motion component of the motion information MV1 of the second reference pixel may be an x direction motion component.
  • the second direction motion component of the motion information MV1 of the second reference pixel may be equal to Equation 7.
  • MV1 [x] may be a second direction motion component of the motion information MV1 of the second reference pixel.
  • the second direction motion component of the motion information MV1 of the second reference pixel may be the x direction motion component of the motion information MV1 of the second reference pixel.
  • x1 may be the x-coordinate value of position 535 of reference block 515.
  • w may be the width of the current block 510.
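  • The derivation above can be summarized in a short, non-normative Python sketch. The sketch assumes that Equation 7 takes the form MV1[x] = x1 - w, which is implied by the definitions of x1 and w given above but is not reproduced verbatim in this text; function and variable names are illustrative only.

```python
import math

def second_reference_mv_rotation(mv0, mv1_y, w):
    """Reconstruct MV1 when only its y direction component is signalled.

    mv0   : (x0, y0), motion vector of the first reference pixel at position (0, 0)
    mv1_y : signalled y direction component of MV1 (first direction component)
    w     : width of the current block (the second position is (w, 0))
    """
    x0, y0 = mv0
    y1 = 0 + mv1_y                                  # y coordinate of position 535
    x1 = math.sqrt(w * w - (y1 - y0) ** 2) + x0     # Equation 6 (Pythagorean theorem)
    mv1_x = x1 - w                                  # second direction component (assumed form of Equation 7)
    return (mv1_x, mv1_y)
```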
  • the decoder 120 may obtain motion information of the third reference pixel based on the motion information of the first reference pixel and the motion information of the second reference pixel according to Equation 5. Since the method of obtaining the motion information of the third reference pixel based on the motion information of the first reference pixel and the motion information of the second reference pixel has already been described with reference to FIG. 4, redundant description thereof will be omitted.
  • the decoder 120 may acquire motion information of an arbitrary position 550 of the current block 510 based on the motion information of the first reference pixel, the motion information of the second reference pixel, and the motion information of the third reference pixel.
  • the decoder 120 may obtain motion information of an arbitrary location 450 based on Equation 3. Since the method of obtaining the motion information of the pixel of the current block 410 has been described with reference to FIG. 3, overlapping description thereof will be omitted.
  • the decoder 120 may predict and reconstruct the current block based on the motion information of the pixels of the current block 510.
  • the decoder 120 may obtain the motion information of the pixel of the current block based on the x direction motion component of the motion information of the first reference pixel, the y direction motion component of the motion information of the first reference pixel, and the first direction motion component of the motion information of the second reference pixel.
  • the decoder 120 may predict and reconstruct the current block based on the motion information of the pixels of the current block. Since the image decoding apparatus 100 and the image encoding apparatus 1500 may obtain the motion information of a plurality of pixels of the current block using only three directional motion components, the image decoding apparatus 100 and the image encoding apparatus 1500 may increase the compression efficiency of the image and reconstruct a high quality image.
  • the decoder 120 may determine the first direction motion component included in the motion information MV1 of the second reference pixel as the x direction motion component regardless of the motion type. In addition, the decoder 120 may determine the second direction motion component included in the motion information MV1 of the second reference pixel as the y direction motion component regardless of the motion type. The decoder 120 may vary the formula for acquiring the second direction motion component included in the motion information MV1 of the second reference pixel according to the motion type.
  • the decoder 120 may obtain the second direction motion component (i.e., the y direction motion component) included in the motion information MV1 of the second reference pixel based on the y direction motion component included in the motion information MV0 of the first reference pixel, as shown in Equation 4.
  • the decoder 120 may acquire the coordinates (x0, y0) of the position 525 of the reference block 515 based on the motion information MV0 of the first reference pixel.
  • the decoder 120 may obtain the x-coordinate value x1 of the position 535 of the reference block 515 based on the obtained first direction motion component (that is, the x direction motion component) of the motion information MV1 of the second reference pixel.
  • the decoder 120 may obtain a y-coordinate value y1 of the position 535 of the reference block 515 based on the Pythagorean theorem. For example, the decoder 120 may obtain the y-coordinate value y1 of the position 535 by Equation 8.
  • the decoder 120 may obtain a second direction motion component (ie, a y direction motion component) of the motion information MV1 of the second reference pixel based on the y coordinate value y1 of the position 535.
  • the second direction motion component of the motion information MV1 of the second reference pixel may be the same as Equation 9.
  • MV1 [y] may be a second direction motion component of the motion information MV1 of the second reference pixel.
  • the second direction motion component of the motion information MV1 of the second reference pixel may be a y direction motion component of the motion information MV1 of the second reference pixel.
  • y1 may be the y-coordinate value of position 535 of reference block 515.
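  • A corresponding non-normative sketch for this case is shown below. The written bodies of Equations 8 and 9 are not reproduced in this text, so the sketch assumes the same Pythagorean form as Equation 6 and MV1[y] = y1, consistent with the surrounding definitions; names are illustrative only.

```python
import math

def second_reference_mv_rotation_x_first(mv0, mv1_x, w):
    """Reconstruct MV1 when only its x direction component is signalled.

    mv0   : (x0, y0), motion vector of the first reference pixel at position (0, 0)
    mv1_x : signalled x direction component of MV1 (first direction component)
    w     : width of the current block (the second position is (w, 0))
    """
    x0, y0 = mv0
    x1 = w + mv1_x                                  # x coordinate of position 535
    y1 = math.sqrt(w * w - (x1 - x0) ** 2) + y0     # assumed form of Equation 8 (Pythagorean theorem)
    mv1_y = y1                                      # assumed form of Equation 9 (second position has y = 0)
    return (mv1_x, mv1_y)
```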
  • the decoder 120 may obtain motion information of the third reference pixel based on the motion information of the first reference pixel and the motion information of the second reference pixel according to Equation 5. Since the method of obtaining the motion information of the third reference pixel based on the motion information of the first reference pixel and the motion information of the second reference pixel has already been described with reference to FIG. 4, redundant description thereof will be omitted.
  • the decoder 120 may obtain the motion information of the pixel of the current block based on the x direction motion component of the motion information of the first reference pixel, the y direction motion component of the motion information of the first reference pixel, and the x direction motion component of the motion information of the second reference pixel.
  • the decoder 120 may predict and reconstruct the current block based on the motion information of the pixels of the current block. Since the image decoding apparatus 100 and the image encoding apparatus 1500 may obtain the motion information of a plurality of pixels of the current block using only three directional motion components, the image decoding apparatus 100 and the image encoding apparatus 1500 may increase the compression efficiency of the image and reconstruct a high quality image.
  • the decoder 120 may obtain a second direction motion component included in the motion information of the second reference pixel from the bitstream. That is, the decoder 120 may obtain the x direction motion component and the y direction motion component included in the motion information of the first reference pixel and the motion information of the second reference pixel, respectively, based on the information obtained from the bitstream.
  • the decoder 120 may obtain a differential motion vector related to motion information of the second reference pixel from the bitstream.
  • the decoder 120 may obtain the predicted motion vector based on the motion information of the neighboring block previously reconstructed in the current block.
  • the decoder 120 may determine candidate blocks based on neighboring blocks.
  • the neighboring blocks may be blocks that are temporally or spatially adjacent to the current block.
  • the decoder 120 may select one candidate block among candidate blocks based on the index obtained from the bitstream.
  • the decoder 120 may determine the motion information of the candidate block as the predicted motion vector.
  • the decoder 120 may obtain motion information of the second reference pixel based on the differential motion vector and the predictive motion vector.
  • the motion information of the second reference pixel may include an x direction motion component and a y direction motion component.
  • the decoder 120 may represent zoom and rotation motion simultaneously by using the four directional motion components.
  • the decoder 120 may obtain motion information of the third reference pixel based on Equation 5.
  • the decoder 120 may obtain motion information of pixels of the current block based on Equation 3.
  • the decoder 120 may predict the current block based on the motion information of the plurality of reference pixels of the current block.
  • the decoder 120 may increase the accuracy of prediction by using four directional motion components.
  • the inter prediction mode may include a mode for receiving a differential motion vector and a mode for not receiving a differential motion vector.
  • the decoder 120 may obtain more accurate motion information by applying the received differential motion vector to the predicted motion information.
  • the affine mode for receiving the differential motion vector and the affine mode for not receiving the differential motion vector will be described with reference to FIGS. 6 to 13.
  • FIG. 6 is a diagram for describing an affine mode for receiving a differential motion vector, according to an exemplary embodiment.
  • the decoder 120 may obtain a predictive motion vector from neighboring blocks. For example, the decoder 120 may determine neighboring blocks as candidate blocks. The neighboring blocks may be blocks spatially adjacent to the current block. Although not shown in FIG. 6, the neighboring blocks may be blocks temporally adjacent to the current block. The decoder 120 may obtain an index from the bitstream. The decoder 120 may select one candidate block among candidate blocks based on the index. The decoder 120 may obtain the predicted motion vector of the current block based on the motion vector of the selected candidate block.
  • the decoder 120 may obtain a predicted motion vector of the first position, a predicted motion vector of the second position, and a predicted motion vector of the third position to predict the current block 600.
  • the first position, the second position, and the third position may each correspond to any one of the positions of the upper left pixel 610, the upper right pixel 620, the lower left pixel 630, and the lower right pixel 640 of the current block 600.
  • the decoder 120 may obtain a predicted motion vector corresponding to the position of the upper left pixel 610 based on the motion vectors of the neighboring blocks 611, 612, and 613.
  • the neighboring blocks 611, 612, and 613 may be blocks restored before the current block 600.
  • the decoder 120 may select one of the neighboring blocks 611, 612, and 613 based on an index obtained from the bitstream received from the encoding apparatus 1500.
  • the decoder 120 may obtain a predicted motion vector corresponding to the position of the upper left pixel 610 based on the motion vector of the selected block.
  • the decoder 120 may select one of the neighboring blocks 611, 612, and 613 according to a predetermined rule.
  • the decoder 120 may determine whether the motion vectors of the neighboring blocks 611, 612, and 613 are available in a predetermined order. For example, the decoder 120 may determine whether the motion vector is available in the order of the upper left peripheral block 611, the lower left peripheral block 613, and the upper right peripheral block 612.
  • the present invention is not limited thereto, and various orders may be used.
  • the decoder 120 may obtain a predicted motion vector corresponding to the position of the upper left pixel 610 based on the first available motion vector.
  • the decoder 120 may obtain a predicted motion vector corresponding to the position of the upper left pixel 610 based on the average of the motion vectors of the neighboring blocks 611, 612, and 613.
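  • The following non-normative Python sketch illustrates both options described above (taking the first available motion vector in a predetermined order, or averaging the available motion vectors). The function, parameter, and block labels are illustrative and not part of the specification.

```python
def predict_mv_for_corner(neighbor_mvs, order):
    """Derive a predicted motion vector for one corner pixel from neighboring blocks.

    neighbor_mvs : maps a block label to its motion vector, or None when unavailable
    order        : predetermined availability-check order of the block labels
    """
    # Option 1: take the first available motion vector in the predetermined order
    for label in order:
        mv = neighbor_mvs.get(label)
        if mv is not None:
            return mv
    # Option 2 (used here as a fallback): average the available motion vectors
    available = [mv for mv in neighbor_mvs.values() if mv is not None]
    if not available:
        return None
    avg_x = sum(mv[0] for mv in available) / len(available)
    avg_y = sum(mv[1] for mv in available) / len(available)
    return (avg_x, avg_y)

# Example for the upper left pixel 610, using the order described in the text
pmv_610 = predict_mv_for_corner(
    {"611": (1.0, 0.5), "612": None, "613": (0.5, 0.25)},
    order=["611", "613", "612"],
)
```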
  • the decoder 120 may obtain a prediction motion vector corresponding to the position of the upper right pixel 620 based on the motion vectors of the neighboring blocks 621, 622, and 623.
  • the neighboring blocks 621, 622, and 623 may be blocks reconstructed before the current block 600.
  • the decoder 120 may select one of the neighboring blocks 621, 622, and 623 based on an index obtained from the bitstream received from the encoding apparatus 1500.
  • the decoder 120 may obtain a predicted motion vector corresponding to the position of the upper right pixel 620 based on the motion vector of the selected block.
  • the decoder 120 may determine whether the motion vectors of the neighboring blocks 621, 622, and 623 are available in a predetermined order. For example, the decoder 120 may determine whether the motion vector is available in the order of the lower right peripheral block 623, the upper right peripheral block 622, and the upper left peripheral block 621. However, the present invention is not limited thereto, and various orders may be used. The decoder 120 may obtain a predicted motion vector corresponding to the position of the upper right pixel 620 based on the first available motion vector.
  • the decoder 120 may obtain a predicted motion vector corresponding to the position of the upper right pixel 620 based on the average of the motion vectors of the neighboring blocks 621, 622, and 623.
  • the decoder 120 may obtain a predicted motion vector corresponding to the position of the lower left pixel 630 based on the motion vectors of the neighboring blocks 631 and 632.
  • the neighboring blocks 631 and 632 may be blocks restored before the current block 600.
  • the decoder 120 may select one of the neighboring blocks 631 and 632 based on an index obtained from the bitstream received from the encoding apparatus 1500.
  • the decoder 120 may obtain a predicted motion vector corresponding to the position of the lower left pixel 630 based on the motion vector of the selected block.
  • the decoder 120 may determine whether the motion vectors of the neighboring blocks 631 and 632 are available in a predetermined order. For example, the decoder 120 may determine whether a motion vector is available in the order of the lower left peripheral block 632 and the upper left peripheral block 631. However, the present invention is not limited thereto, and various orders may be used. The decoder 120 may obtain a predicted motion vector corresponding to the position of the lower left pixel 630 based on the first available motion vector.
  • the decoder 120 may obtain a predicted motion vector corresponding to the position of the lower left pixel 630 based on the average of the motion vectors of the neighboring blocks 631 and 632.
  • the decoder 120 may obtain a predicted motion vector corresponding to the position of the lower right pixel 640 based on the motion vectors of the neighboring blocks 641 and 642.
  • the neighboring blocks 641 and 642 may be blocks that are restored before the current block 600.
  • the decoder 120 may select one of the neighboring blocks 641 and 642 based on an index obtained from the bitstream received from the encoding apparatus 1500.
  • the decoder 120 may obtain a predicted motion vector corresponding to the position of the lower right pixel 640 based on the motion vector of the selected block.
  • the decoder 120 may determine whether motion vectors of the neighboring blocks 641 and 642 are available in a predetermined order. For example, the decoder 120 may determine whether the motion vector is available in the order of the lower right peripheral block 642 and the upper right peripheral block 641. However, the present invention is not limited thereto, and various orders may be used. The decoder 120 may obtain a predicted motion vector corresponding to the position of the lower right pixel 640 based on the first available motion vector.
  • the decoder 120 may obtain a predicted motion vector corresponding to the position of the lower right pixel 640 based on the average of the motion vectors of the neighboring blocks 641 and 642.
  • the decoder 120 may determine the first to third positions based on a predetermined criterion among the positions of the upper left pixel 610, the upper right pixel 620, the lower left pixel 630, and the lower right pixel 640. In addition, the decoder 120 may determine the first to third positions based on the information obtained from the received bitstream. The first to third positions may be various combinations created based on the positions of the upper left pixel 610, the upper right pixel 620, the lower left pixel 630, and the lower right pixel 640. For convenience of description, it is assumed that the first position corresponds to the position of the upper left pixel 610, the second position corresponds to the position of the upper right pixel 620, and the third position corresponds to the position of the lower left pixel 630.
  • the decoder 120 may obtain the predicted motion vectors of the first and second positions.
  • the decoder 120 may obtain differential motion vectors from the bitstream.
  • the decoder 120 may obtain, based on the predicted motion vectors and the differential motion vectors, the motion information of the first reference pixel corresponding to the first position and the first direction motion component included in the motion information of the second reference pixel corresponding to the second position.
  • the decoder 120 may obtain a differential motion vector for the first position from the bitstream.
  • the differential motion vector for the first position may include an x direction motion component and a y direction motion component.
  • the decoder 120 may obtain motion information of the first reference pixel of the first position based on the differential motion vector of the first position and the predicted motion vector of the first position.
  • the decoder 120 may obtain any one of an x-direction motion component and a y-direction motion component of the differential motion vector with respect to the second position from the bitstream.
  • the decoder 120 may obtain an x-direction motion component of the differential motion vector from the bitstream.
  • the decoder 120 may acquire the first direction motion component of the motion information of the second reference pixel based on the x direction motion component of the differential motion vector and the x direction motion component of the predicted motion vector with respect to the second position.
  • the decoder 120 may obtain a y direction motion component of the differential motion vector from the bitstream. Also, the decoder 120 may obtain the first direction motion component (i.e., the y direction motion component) of the motion information of the second reference pixel based on the y direction motion component of the differential motion vector and the y direction motion component of the predicted motion vector for the second position.
  • the decoder 120 may obtain the x direction motion component of the differential motion vector from the bitstream. Also, the decoder 120 may obtain the first direction motion component (i.e., the x direction motion component) of the motion information of the second reference pixel based on the x direction motion component of the differential motion vector and the x direction motion component of the predicted motion vector for the second position.
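  • The combination of predicted and differential motion vectors described above may be sketched as follows. This is an illustrative, non-normative example in which the axis of the single signalled component of MV1 is passed as a parameter; names are not from the specification.

```python
def reconstruct_signalled_motion(pmv0, dmv0, pmv1, dmv1_component, axis="x"):
    """Combine predicted and differential motion vectors.

    pmv0, dmv0     : predicted and differential motion vectors for the first position
    pmv1           : predicted motion vector for the second position
    dmv1_component : the single signalled differential component for the second position
    axis           : which component of MV1 is signalled ("x" or "y")
    """
    mv0 = (pmv0[0] + dmv0[0], pmv0[1] + dmv0[1])        # motion information of the first reference pixel
    idx = 0 if axis == "x" else 1
    mv1_first_direction = pmv1[idx] + dmv1_component    # first direction component of MV1
    return mv0, mv1_first_direction
```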
  • the decoder 120 may obtain the second direction motion component included in the motion information of the second reference pixel based on the motion information of the first reference pixel and the first direction motion component included in the motion information of the second reference pixel.
  • the decoder 120 may acquire a second direction motion component included in the motion information of the second reference pixel based on the information obtained from the bitstream.
  • the decoder 120 may acquire motion information of the third reference pixel based on the motion information of the first reference pixel and the motion information of the second reference pixel.
  • the motion information of the third reference pixel may correspond to the motion vector of the third position.
  • the decoder 120 may acquire motion information of a pixel included in the current block 600 based on the motion information of the first reference pixel or the motion information of the third reference pixel.
  • the decoder 120 may predict the current block based on the motion information of the pixels of the current block.
  • the decoder 120 may obtain a reference picture index from the bitstream.
  • the decoder 120 may determine the reference picture based on the reference picture index.
  • the decoder 120 may predict the value of the pixel at the first position of the current block from the pixel value of the reference block included in the reference image based on the motion information of the first reference pixel.
  • the decoder 120 may predict the value of the pixel at the second position of the current block from the value of the pixel at the position of the reference block based on the motion information of the second reference pixel.
  • the decoder 120 may predict the pixel at the third position of the current block from the pixel value of the position of the reference block based on the motion information of the third reference pixel.
  • the decoder 120 may predict the pixel at an arbitrary position based on the motion information of the arbitrary position obtained based on the motion information of the first reference pixel or the motion information of the third reference pixel.
  • the decoder 120 may reconstruct the current block based on the predicted current block and the residual obtained from the bitstream.
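  • As an illustrative, non-normative sketch of the prediction and reconstruction steps above, per-pixel motion-compensated prediction followed by residual addition may look as follows. Integer-pel accuracy and the callable mv_at, which supplies the per-pixel motion information derived from the reference pixels, are assumptions of the sketch.

```python
def reconstruct_block(reference_picture, residual, block_x, block_y, w, h, mv_at):
    """Motion-compensated prediction plus residual for a w x h block at (block_x, block_y).

    reference_picture : 2D array of reference sample values (selected via the reference picture index)
    residual          : 2D array of residual values parsed from the bitstream
    mv_at(i, j)       : returns the motion vector for pixel (i, j) of the current block
    """
    reconstructed = [[0] * w for _ in range(h)]
    for j in range(h):
        for i in range(w):
            mvx, mvy = mv_at(i, j)
            ref_x = block_x + i + int(mvx)          # integer-pel accuracy assumed for simplicity
            ref_y = block_y + j + int(mvy)
            predicted = reference_picture[ref_y][ref_x]
            reconstructed[j][i] = predicted + residual[j][i]
    return reconstructed
```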
  • FIG. 7 is a flowchart for describing an affine mode in which a differential motion vector is not received, according to an exemplary embodiment.
  • the decoder 120 acquires the motion information of the first reference pixel based on the motion information of the first position of neighboring blocks reconstructed before the current block (710).
  • the decoder 120 acquires the motion information of the second reference pixel based on the motion information of the second position of the neighboring blocks (720).
  • the decoder 120 acquires the motion information of the third reference pixel based on the motion information of the third position of the neighboring blocks (730).
  • the decoder 120 acquires the motion information of a pixel included in the current block based on the width and height of the current block, the motion information of the first reference pixel, the motion information of the second reference pixel, and the motion information of the third reference pixel.
  • the neighboring blocks may be temporally or spatially adjacent to the current block.
  • the neighboring blocks may be located on the left side, the upper side and the right side of the current block.
  • the first position, the second position, and the third position may not be collinear with each other.
  • the first position, the second position, and the third position may form a triangle. The affine mode in which the differential motion vector is not received will be described in more detail with reference to FIGS. 8 to 13.
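  • Before turning to the figures, the following non-normative sketch summarizes operations 710 to 730 and the final per-pixel derivation. The two callables are illustrative placeholders for the neighbor-based derivation and the per-pixel interpolation described in FIGS. 8 to 13.

```python
def affine_mode_without_dmv(width, height, reference_mv_at, pixel_motion_from):
    """High-level flow of FIG. 7 (no differential motion vector is parsed).

    reference_mv_at(k)  : illustrative callable returning the motion information derived
                          from the k-th position of a previously reconstructed neighbor
    pixel_motion_from   : illustrative callable implementing the per-pixel interpolation
    """
    mv_ref1 = reference_mv_at(1)    # operation 710
    mv_ref2 = reference_mv_at(2)    # operation 720
    mv_ref3 = reference_mv_at(3)    # operation 730
    # final step: motion information of each pixel of the current block
    return pixel_motion_from(width, height, mv_ref1, mv_ref2, mv_ref3)
```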
  • FIG. 8 illustrates an affine mode in which a differential motion vector is not received, according to an exemplary embodiment.
  • the decoder 120 may identify whether neighboring blocks of the current block 810 are in an affine mode in a predetermined order.
  • the decoder 120 may identify whether the neighboring blocks are in the affine mode in order from the lower left neighboring block 821 to the upper right neighboring block 823.
  • the decoder 120 may identify whether the neighboring blocks from the neighboring block 821 to the neighboring block 822 are in the affine mode.
  • the decoder 120 may identify whether the neighboring blocks from the neighboring block 822 to the neighboring block 823 are in the affine mode.
  • the decoder 120 may obtain the motion information of the first reference pixel based on the motion information of a neighboring block identified as being in the affine mode among the neighboring blocks. For example, the decoder 120 may determine a representative value of the motion information of the pixels of the neighboring block identified as being in the affine mode. The decoder 120 may determine an average value, a median value, or the like of the motion information of the pixels of the neighboring block as the representative value. The decoder 120 may determine the motion information of one pixel among the pixels of the neighboring block as the representative value. The decoder 120 may obtain the representative value as the motion information of the first reference pixel. The decoder 120 may obtain representative values of a plurality of neighboring blocks. In addition, the motion information of the first reference pixel may be obtained by applying weights to the representative values of the plurality of neighboring blocks.
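  • A non-normative sketch of this representative-value derivation is shown below; the choice of statistic and the weighting across blocks are illustrative assumptions, and the names are not from the specification.

```python
def representative_motion(pixel_mvs, use_median=False):
    """Representative value of the per-pixel motion information of one affine-coded neighbor."""
    xs = [mv[0] for mv in pixel_mvs]
    ys = [mv[1] for mv in pixel_mvs]
    if use_median:
        xs, ys = sorted(xs), sorted(ys)
        return (xs[len(xs) // 2], ys[len(ys) // 2])   # a simple (upper) median per component
    return (sum(xs) / len(xs), sum(ys) / len(ys))     # average per component

def weighted_first_reference_mv(block_representatives, weights):
    """Combine representative values of several neighboring blocks with weights."""
    mv_x = sum(w * mv[0] for w, mv in zip(weights, block_representatives))
    mv_y = sum(w * mv[1] for w, mv in zip(weights, block_representatives))
    return (mv_x, mv_y)
```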
  • the decoder 120 may obtain the motion information of the first reference pixel based on the motion information of the first position 850 of the first neighboring block identified as being in the affine mode among the neighboring blocks 841, 842, and 843.
  • neighboring blocks 841, 842, and 843 adjacent to the current block 830 may be in an affine mode.
  • the decoder 120 may identify whether the neighboring blocks are in the affine mode in a predetermined order, and among these, the neighboring block 841 may be the first neighboring block identified as being in the affine mode.
  • the decoder 120 may acquire motion information of the first reference pixel based on the motion information of the first position 850 of the neighboring block 841.
  • the first position 850 may be any one of a lower left side, an upper left side, an upper right side, and a lower right side of the peripheral block 841.
  • the decoder 120 may determine a lower left side of the neighboring block 841 as the first position.
  • the motion information of the first reference pixel may be the same as the motion information of the first position 850.
  • the decoder 120 may determine the motion information of the first reference pixel as the motion information of the first position 850 in the neighboring block. In addition, the decoder 120 may determine the motion information of the first reference pixel as the motion information of a predetermined position in the current block 830 adjacent to the neighboring block 841.
  • FIG. 9 is a diagram for describing an affine mode in which a differential motion vector is not received, according to an exemplary embodiment.
  • the decoder 120 may identify whether neighboring blocks of the current block 910 are in an affine mode in a predetermined order.
  • the decoder 120 may identify whether the neighboring blocks 921, 922, 923, 924, 925, 926, and 927 are in the affine mode in a zigzag order from the upper left peripheral block 921 to the upper right peripheral block 926 or the lower left peripheral block 927.
  • the decoder 120 may identify the affine mode in the order of the neighboring block 921, the neighboring block 922, the neighboring block 923, the neighboring block 924, and the neighboring block 925. In addition, the decoder 120 may identify the affine mode in the order of the neighboring block 921, the neighboring block 923, the neighboring block 922, the neighboring block 925, and the neighboring block 924.
  • the decoder 120 may obtain the motion information of the second reference pixel based on the motion information of the second position of the first neighboring block identified in the affine mode among the neighboring blocks.
  • neighboring blocks 941, 942, and 943 adjacent to the current block 930 may be in an affine mode.
  • the decoder 120 may identify whether the neighboring block 941, the neighboring block 942, and the neighboring block 943 are in the affine mode, and the neighboring block 941 may be the first neighboring block identified as being in the affine mode.
  • the decoder 120 may acquire motion information of the second reference pixel based on the motion information of the second position 950 of the neighboring block 941.
  • the second position 950 may be any one of a lower left side, an upper left side, an upper right side, and a lower right side of the peripheral block 941. Referring to FIG. 9, the decoder 120 determines the upper left side of the neighboring block 941 as the second position.
  • the motion information of the second reference pixel may be the same as the motion information of the second position 950.
  • the decoder 120 may determine the motion information of the second reference pixel as the motion information of the second position 950. In addition, the decoder 120 may determine the motion information of the second reference pixel as the motion information of a predetermined position of the current block 930 adjacent to the neighboring block 941.
  • FIG. 10 is a diagram for describing an affine mode in which a differential motion vector is not received, according to an exemplary embodiment.
  • the decoder 120 may identify whether neighboring blocks of the current block 1010 are in an affine mode in a predetermined order.
  • the decoder 120 may identify whether the neighboring blocks are in the affine mode in order from the upper right neighboring block 1021 to the lower left neighboring block 1023.
  • the decoder 120 may identify whether the neighboring blocks from the neighboring block 1021 to the neighboring block 1022 are in the affine mode.
  • the decoder 120 may identify whether the neighboring blocks from the neighboring block 1022 to the neighboring block 1023 are in the affine mode.
  • the decoder 120 may obtain the motion information of the third reference pixel based on the motion information of the third position of the first neighboring block identified in the affine mode among the neighboring blocks.
  • neighboring blocks 1041, 1042, and 1043 adjacent to the current block 1030 may be in an affine mode.
  • the decoder 120 may identify whether the neighboring block 1041, the neighboring block 1042, and the neighboring block 1043 are in the affine mode, and the neighboring block 1041 may be the first neighboring block identified as being in the affine mode.
  • the decoder 120 may acquire motion information of the third reference pixel based on the motion information of the third position 1050 of the neighboring block 1041.
  • the third position 1050 may be any one of a lower left side, an upper left side, an upper right side, and a lower right side of the peripheral block 1041.
  • the decoder 120 determines the upper left side of the neighboring block 1041 as the third position.
  • the motion information of the third reference pixel may be the same as the motion information of the third location 1050.
  • the decoder 120 may determine the motion information of the third reference pixel as the motion information of the third position 1050. In addition, the decoder 120 may determine the motion information of the third reference pixel as the motion information of a predetermined position of the current block 1030 adjacent to the neighboring block 1041.
  • the decoder 120 may acquire motion information of the first reference pixel, motion information of the second reference pixel, and motion information of the third reference pixel based on at least one of the methods of FIGS. 8 to 10.
  • the decoder 120 may obtain motion information of the pixel included in the current block based on the motion information of the first reference pixel or the motion information of the third reference pixel. This will be described in detail with reference to FIG. 13.
  • the decoder 120 may acquire the motion information of the first reference pixel, the motion information of the second reference pixel, or the motion information of the third reference pixel based on the left, upper left, upper or right peripheral blocks.
  • This will be described in detail with reference to FIG. 11.
  • FIG. 11 illustrates an affine mode in which a differential motion vector is not received, according to an exemplary embodiment.
  • the decoder 120 may identify whether neighboring blocks of the current block 1100 are in an affine mode in a predetermined order.
  • the decoder 120 may identify whether the neighboring blocks 1111, 1112, 1113, 1114, 1115, 1116, and 1117 are in the affine mode in a zigzag order from the upper right peripheral block 1111 to the peripheral block 1116 or the lower left peripheral block 1117.
  • the decoder 120 may identify the affine mode in the order of the neighboring block 1111, the neighboring block 1112, the neighboring block 1113, the neighboring block 1114, and the neighboring block 1115.
  • the decoder 120 may identify the affine mode in the order of the neighboring block 1111, the neighboring block 1113, the neighboring block 1112, the neighboring block 1115, and the neighboring block 1114.
  • the decoder 120 may obtain motion information based on the first neighboring block identified in the affine mode among the neighboring blocks 1111, 1112, 1113, 1114, 1115, 1116, and 1117.
  • the obtained motion information may be one of motion information of the first reference pixel and motion information of the third reference pixel of FIG. 7.
  • the motion information of the pixel included in the current block 1100 may be obtained based on the obtained motion information.
  • the decoder 120 may identify whether neighboring blocks of the current block 1100 are in an affine mode in a predetermined order.
  • the decoder 120 may identify whether the neighboring blocks 1111, 1112, 1113, 1114, 1115, 1116, and 1117 are in the affine mode in a zigzag order from the upper right peripheral block 1111 to the lower right peripheral block 1116 or the lower left peripheral block 1117.
  • the decoder 120 may identify the affine mode in the order of the neighboring block 1111, the neighboring block 1112, the neighboring block 1113, the neighboring block 1114, and the neighboring block 1115.
  • the decoder 120 may identify the affine mode in the order of the neighboring block 1111, the neighboring block 1113, the neighboring block 1112, the neighboring block 1115, and the neighboring block 1114.
  • the decoder 120 may obtain motion information based on a predetermined method based on the neighboring block identified in the affine mode among the neighboring blocks 1111, 1112, 1113, 1114, 1115, 1116, and 1117.
  • the motion information may be obtained from an average value, a median value, or the like of the motion information of the neighboring blocks identified as being in the affine mode.
  • the decoder 120 may determine the motion information as the motion information of a predetermined position obtained based on the positions of the neighboring blocks identified as being in the affine mode, and may also obtain the motion information of the pixel included in the current block 1100 based on the obtained motion information.
  • the decoder 120 may identify whether neighboring blocks of the current block 1120 are in an affine mode in a predetermined order.
  • the decoder 120 may identify whether the neighboring blocks are in the affine mode in order from the lower right neighboring block 1131 to the upper left neighboring block 1133.
  • the decoder 120 may identify whether the neighboring blocks from the neighboring block 1131 to the neighboring block 1132 are in the affine mode.
  • the decoder 120 may identify whether the neighboring blocks from the neighboring block 1132 to the neighboring block 1133 are in the affine mode.
  • the decoder 120 may acquire the motion information based on the neighboring block identified in the affine mode among the neighboring blocks 1131, 1132, 1133, 1134, and the like.
  • the obtained motion information may be one of motion information of the first reference pixel and motion information of the third reference pixel of FIG. 7.
  • the motion information of the pixel included in the current block 1120 may be obtained based on the obtained motion information.
  • the decoder 120 may identify whether the neighboring blocks are in the affine mode in order from the lower right neighboring block 1131 to the lower left neighboring block 1134.
  • the decoder 120 may identify whether the neighboring blocks from the neighboring block 1131 to the neighboring block 1132 are in the affine mode.
  • the decoder 120 may identify whether the neighboring blocks from the neighboring block 1132 to the neighboring block 1133 are in the affine mode.
  • the decoder 120 may identify whether the neighboring blocks from the neighboring block 1133 to the neighboring block 1134 are in the affine mode.
  • the decoder 120 may acquire motion information based on the neighboring block identified in the affine mode among the neighboring blocks.
  • the obtained motion information may be one of motion information of the first reference pixel and motion information of the third reference pixel of FIG. 7.
  • the decoder 120 may obtain motion information of the pixel included in the current block 1120 based on the obtained motion information.
  • the decoder 120 may identify whether the neighboring blocks are in the affine mode in order from the lower left neighboring block 1134 to the lower right neighboring block 1131. The decoder 120 may identify whether the neighboring blocks from the neighboring block 1134 to the neighboring block 1133 are in the affine mode. In addition, the decoder 120 may identify whether the neighboring blocks from the neighboring block 1133 to the neighboring block 1132 are in the affine mode. In addition, the decoder 120 may identify whether the neighboring blocks from the neighboring block 1132 to the neighboring block 1131 are in the affine mode.
  • the decoder 120 may acquire motion information based on the neighboring block identified in the affine mode among the neighboring blocks. For example, the decoder 120 may select at least one neighboring block by identifying the affine mode in a predetermined order. The decoder 120 may acquire motion information of the first reference pixel, motion information of the second reference pixel, and motion information of the third reference pixel based on the selected motion information of the at least one neighboring block. For example, the decoder 120 may obtain motion information about pixels at different positions in the selected neighboring block as motion information of the first reference pixel or motion information of the third reference pixel.
  • the decoder 120 may obtain motion information about pixels at different positions in the two selected neighboring blocks as motion information of the first reference pixel or motion information of the third reference pixel. Also, the decoder 120 may obtain motion information about pixels at different positions in the selected three neighboring blocks as motion information of the first reference pixel or motion information of the third reference pixel. The decoder 120 may acquire the motion information of the pixel included in the current block 1120 based on the obtained motion information of the first reference pixel or the motion information of the third reference pixel. A method of obtaining the motion information of the pixel included in the current block based on the motion information of the first reference pixel or the motion information of the third reference pixel will be described in detail with reference to FIG. 13.
  • FIG. 12 is a diagram for describing an affine mode in which a differential motion vector is not received, according to another embodiment.
  • the decoder 120 may acquire motion information of the first reference pixel or motion information of the third reference pixel based on the motion information of the neighboring blocks.
  • the motion information of the first reference pixel and the motion information of the third reference pixel may be motion vectors.
  • the decoder 120 may select three positions among the upper left pixel 1210, the upper right pixel 1220, the lower left pixel 1230, and the lower right pixel 1240 of the current block 1200.
  • the three positions may be predetermined positions.
  • the present invention is not limited thereto, and the decoder 120 may select three positions based on information obtained from the bitstream.
  • a description will be given of a method in which the decoder 120 acquires motion information about the positions of the upper left pixel 1210, the upper right pixel 1220, the lower left pixel 1230, and the lower right pixel 1240.
  • the decoder 120 may acquire motion information corresponding to the position of the upper left pixel 1210 based on the motion information of the neighboring blocks 1211, 1212, and 1213.
  • the neighboring blocks 1211, 1212, and 1213 may be blocks restored before the current block 1200.
  • the decoder 120 may select one of the neighboring blocks 1211, 1212, and 1213 based on information obtained from the bitstream received from the encoding apparatus 1500.
  • the decoder 120 may acquire motion information corresponding to the position of the upper left pixel 1210 based on the motion information of the selected block.
  • the decoder 120 may determine whether motion information of the neighboring blocks 1211, 1212, and 1213 is available in a predetermined order. For example, the decoder 120 may determine whether motion information is available in the order of the upper left peripheral block 1211, the lower left peripheral block 1213, and the upper right peripheral block 1212. However, the present invention is not limited thereto, and various orders may be used.
  • the decoder 120 may acquire motion information corresponding to the position of the upper left pixel 1210 based on the first available motion information.
  • the decoder 120 may acquire motion information corresponding to the position of the upper left pixel 1210 based on the average of the motion information of the neighboring blocks 1211, 1212, and 1213.
  • the decoder 120 may acquire motion information corresponding to the position of the upper right pixel 1220 based on the motion information of the neighboring blocks 1221, 1222, and 1223.
  • the neighboring blocks 1221, 1222, and 1223 may be blocks restored before the current block 1200.
  • the decoder 120 may select one of the neighbor blocks 1221, 1222, and 1223 based on information obtained from the bitstream received from the encoding apparatus 1500.
  • the decoder 120 may acquire motion information corresponding to the position of the upper right pixel 1220 based on the motion information of the selected block.
  • the decoder 120 may determine whether motion information of the neighboring blocks 1221, 1222, and 1223 is available in a predetermined order. For example, the decoder 120 may determine whether motion information is available in the order of the lower right peripheral block 1223, the upper right peripheral block 1222, and the upper left peripheral block 1221. The decoder 120 may acquire motion information corresponding to the position of the upper right pixel 1220 based on the first motion information available.
  • the decoder 120 may obtain motion information corresponding to the position of the upper right pixel 1220 based on the average of the motion information of the neighboring blocks 1221, 1222, and 1223.
  • the decoder 120 may acquire motion information corresponding to the position of the lower left pixel 1230 based on the motion information of the neighboring blocks 1231 and 1232.
  • the neighboring blocks 1231 and 1232 may be blocks restored before the current block 1200.
  • the decoder 120 may select one of the neighboring blocks 1231 and 1232 based on information obtained from the bitstream received from the encoding apparatus 1500.
  • the decoder 120 may acquire motion information corresponding to the position of the lower left pixel 1230 based on the motion information of the selected block.
  • the decoder 120 may determine whether motion information of the neighboring blocks 1231 and 1232 is available in a predetermined order. For example, the decoder 120 may determine whether motion information is available in the order of the lower left peripheral block 1232 and the upper left peripheral block 1231. The decoder 120 may acquire motion information corresponding to the position of the lower left pixel 1230 based on the first available motion information.
  • the decoder 120 may obtain motion information corresponding to the position of the lower left pixel 1230 based on the average of the motion information of the neighboring blocks 1231 and 1232.
  • the decoder 120 may acquire motion information corresponding to the position of the lower right pixel 1240 based on the motion information of the neighboring blocks 1241 and 1242.
  • the neighboring blocks 1241 and 1242 may be blocks restored before the current block 1200.
  • the decoder 120 may select one of the neighboring blocks 1241 and 1242 based on information obtained from the bitstream received from the encoding apparatus 1500.
  • the decoder 120 may acquire motion information corresponding to the position of the lower right pixel 1240 based on the motion information of the selected block.
  • the decoder 120 may determine whether motion information of the neighboring blocks 1241 and 1242 is available in a predetermined order. For example, the decoder 120 may determine whether motion information is available in the order of the lower right peripheral block 1242 and the upper right peripheral block 1241. The decoder 120 may acquire motion information corresponding to the position of the lower right pixel 1240 based on the first available motion information.
  • the decoder 120 may obtain motion information corresponding to the position of the lower right pixel 1240 based on the average of the motion information of the neighboring blocks 1241 and 1242.
  • a description will be given of a method in which the decoder 120 obtains the motion information of the upper left pixel 1210, the motion information of the upper right pixel 1220, and the motion information of the lower left pixel 1230 of the current block 1200 based on the neighboring blocks.
  • the decoder 120 may obtain, based on the motion information of at least one of the neighboring blocks 1211, 1212, and 1213 adjacent to the upper left pixel 1210 of the current block 1200, the motion information of the first reference pixel, which is the motion information of the upper left pixel 1210 of the current block 1200.
  • the at least one motion information may be obtained based on the motion information of the first position included in the neighboring block.
  • the decoder 120 may obtain, based on the motion information of at least one of the neighboring blocks 1221, 1222, and 1223 adjacent to the upper right pixel 1220 of the current block 1200, the motion information of the second reference pixel, which is the motion information of the upper right pixel 1220 of the current block 1200.
  • the at least one motion information may be obtained based on the motion information of the second position included in the neighboring block.
  • the decoder 120 may obtain, based on the motion information of at least one of the neighboring blocks 1231 and 1232 adjacent to the lower left pixel 1230 of the current block 1200, the motion information of the third reference pixel, which is the motion information of the lower left pixel 1230 of the current block 1200.
  • the at least one motion information may be obtained based on the motion information of the third position included in the neighboring block.
  • the decoder 120 may acquire motion information of the pixel included in the current block 1200 based on the motion information of the first reference pixel, the motion information of the second reference pixel, and the motion information of the third reference pixel. This will be described with reference to FIG. 13.
  • FIG. 13 is a diagram for describing a method of obtaining motion information of a pixel included in a current block according to one embodiment of the present disclosure.
  • the decoder 120 may obtain motion information of the first reference pixel, motion information of the second reference pixel, and motion information of the third reference pixel according to FIGS. 7 to 12.
  • the motion information of the first reference pixel may be motion information of the position 1310.
  • the motion information of the second reference pixel may be motion information of the location 1320.
  • the motion information of the third reference pixel may be motion information of the location 1330.
  • the motion information may be a motion vector.
  • the decoder 120 may acquire the motion information of the pixel included in the current block 1300 based on the motion information of the first reference pixel, the motion information of the second reference pixel, and the motion information of the third reference pixel.
  • the decoder 120 may acquire a unit change amount of the motion information related to the y axis and a unit change amount of the motion information related to the x axis.
  • the unit change amount of the motion information associated with the y axis may be equal to Equation 10.
  • dy is the unit change amount of the motion information associated with the y axis.
  • m is the difference between the x coordinate of position 1310 and position 1330.
  • w is the difference between the x coordinate of position 1310 and position 1320.
  • n is the difference between the y coordinate of the position 1310 and the position 1320.
  • h is the difference between the y coordinate of position 1310 and position 1330.
  • P0 is motion information of the first reference pixel.
  • P1 is motion information of the second reference pixel.
  • P2 is motion information of the third reference pixel.
  • the unit change amount of the motion information associated with the x axis may be equal to Equation 11.
  • dx is a unit change amount of the motion information associated with the x axis.
  • m is the difference between the x coordinate of position 1310 and position 1330.
  • w is the difference between the x coordinate of position 1310 and position 1320.
  • n is the difference between the y coordinate of the position 1310 and the position 1320.
  • h is the difference between the y coordinate of position 1310 and position 1330.
  • P0 is motion information of the first reference pixel.
  • P1 is motion information of the second reference pixel.
  • P2 is motion information of the third reference pixel.
  • the decoder 120 may acquire the motion information of the pixel included in the current block 1300 based on the unit change amount of the motion information related to the y axis and the unit change amount of the motion information related to the x axis.
  • the motion information of the pixel included in the current block 1300 may be obtained as in the following equation.
  • Pa = P0 + i·dx + j·dy
  • P0 is motion information of the first reference pixel.
  • i is the difference between the x coordinate of position 1310 and position 1340 of the pixel.
  • j is the difference between the y coordinate of the position 1310 and the position 1340 of the pixel.
  • dx is a unit change amount of the motion information associated with the x axis.
  • dy is the unit change amount of the motion information associated with the y axis.
  • Pa is motion information of a pixel at an arbitrary position 1340 of the current block 1300.
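  • Since the written bodies of Equations 10 and 11 are not reproduced in this text, the following non-normative Python sketch derives dx and dy by solving the affine relations implied by the variable definitions above, and then evaluates Pa = P0 + i·dx + j·dy; names are illustrative only.

```python
def unit_changes(p0, p1, p2, w, n, m, h):
    """Unit change of motion information per x-axis step (dx) and per y-axis step (dy).

    p0, p1, p2 : motion information at positions 1310, 1320, and 1330
    w, n       : x and y offsets of position 1320 from position 1310
    m, h       : x and y offsets of position 1330 from position 1310
    Solves P1 = P0 + w*dx + n*dy and P2 = P0 + m*dx + h*dy (assumed affine model);
    the three positions are not collinear, so the determinant is non-zero.
    """
    det = w * h - m * n
    dx = tuple(((p1[k] - p0[k]) * h - (p2[k] - p0[k]) * n) / det for k in range(2))
    dy = tuple(((p2[k] - p0[k]) * w - (p1[k] - p0[k]) * m) / det for k in range(2))
    return dx, dy

def pixel_motion(p0, dx, dy, i, j):
    """Pa = P0 + i*dx + j*dy for the pixel at offset (i, j) from position 1310."""
    return (p0[0] + i * dx[0] + j * dy[0],
            p0[1] + i * dx[1] + j * dy[1])
```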
  • the decoder 120 may inter predict the current block 1300 based on the motion information of the pixels of the current block 1300. Also, the decoder 120 may restore the current block 1300 based on the predicted current block 1300.
  • FIG. 14 is a flowchart for inter prediction according to an embodiment.
  • the decoder 120 may determine to predict the current block in the inter prediction mode (1400). In operation 1410, the decoder 120 may obtain a flag indicating whether a differential motion vector is received from the bitstream. The decoder 120 may determine whether or not to receive the differential motion vector based on the flag indicating whether to receive the differential motion vector.
  • condition 1 may be whether the size of the current block is greater than the threshold size.
  • the decoder 120 may determine whether the height or the width of the current block is greater than a threshold length. For example, the decoder 120 may determine whether the width of the current block is greater than or equal to 16. The decoder 120 may determine whether the height of the current block is greater than or equal to 16. In addition, the decoder 120 may determine whether the size of the current block is greater than the threshold size.
  • the size of the current block can be expressed as the product of the width and the height of the current block. For example, the decoder 120 may determine whether the size of the current block is greater than 64.
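  • A minimal, non-normative sketch of such a size check is shown below. The thresholds 16 and 64 are the examples given in the text; how the individual checks are combined is an assumption of this sketch.

```python
def condition_1(width, height):
    """One possible reading of condition 1: the current block is large enough.
    Combining all example checks with logical AND is an assumption."""
    return width >= 16 and height >= 16 and width * height > 64
```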
  • the decoder 120 may obtain information about an affine mode from the bitstream. The decoder 120 may determine whether to predict the current block in the affine mode based on the information on the affine mode.
  • the decoder 120 may perform only one of the operations 1451 and 1452 to determine whether to predict the current block in the affine mode.
  • the decoder 120 may perform an existing inter prediction mode in step 1460.
  • the existing inter prediction mode may be a technique related to high efficiency video coding (HEVC) or H.264.
  • the decoder 120 may perform a prediction mode for receiving a differential motion vector among the existing inter prediction modes.
  • the existing inter prediction mode may be similar to the advanced motion vector prediction of HEVC.
  • the decoder 120 may obtain information about a motion type from the bitstream in step 1471.
  • the decoder 120 may acquire three directional motion components based on the information obtained from the bitstream.
  • the decoder 120 may acquire the first direction motion component included in the motion information of the first reference pixel and the motion information of the second reference pixel based on the information obtained from the bitstream.
  • the motion information of the first reference pixel may include an x direction motion component and a y direction motion component.
  • the decoder 120 may determine that the first direction motion component of the motion information of the second reference pixel is the x direction motion component of the motion information of the second reference pixel.
  • the decoder 120 may obtain the second direction motion component of the motion information of the second reference pixel and the motion information of the third reference pixel based on the motion information of the first reference pixel and the first direction motion component of the motion information of the second reference pixel.
  • since the motion vectors of the pixels included in the current block can be predicted with minimal information, the efficiency of the bitstream can be increased. In addition, zoom or rotation motion can be handled, allowing accurate prediction of the current block.
  • the decoder 120 may predict the motion information of the pixel included in the current block based on the motion information of the first reference pixel, the motion information of the second reference pixel, and the motion information of the third reference pixel.
  • the decoder 120 may predict the current block based on the motion information of the pixel.
  • the case in which the motion type is zoom has been described in detail with reference to FIGS. 3 to 6, and thus redundant description thereof will be omitted.
  • the decoder 120 may obtain three directional motion components based on the information obtained from the bitstream.
  • the decoder 120 may acquire the first direction motion component included in the motion information of the first reference pixel and the motion information of the second reference pixel based on the information obtained from the bitstream.
  • the motion information of the first reference pixel may include an x direction motion component and a y direction motion component.
  • the decoder 120 may determine that the first direction motion component of the motion information of the second reference pixel is the y direction motion component of the motion information of the second reference pixel.
  • the decoder 120 may obtain the second direction motion component of the motion information of the second reference pixel and the motion information of the third reference pixel based on the motion information of the first reference pixel and the first direction motion component of the motion information of the second reference pixel.
  • the decoder 120 may predict the motion information of the pixel included in the current block based on the motion information of the first reference pixel, the second motion information, and the motion information of the third reference pixel.
  • the decoder 120 may predict the current block based on the motion information of the pixel.
• the case in which the motion type is rotation has been described in detail with reference to FIGS. 3 to 6, and thus redundant description thereof will be omitted.
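• For the rotation case, an analogous sketch is shown below under an assumed pure-rotation model about the top-left corner; the trigonometric relations here are illustrative and may differ from Equations 6 to 9 of the embodiment.

```python
import math

def derive_rotation_control_points(mv0, v1y, width, height):
    """Derive the components omitted from the bitstream for a rotation-type block.

    mv0  : (x, y) motion of the first reference pixel (assumed top-left corner)
    v1y  : signalled y direction motion component of the second reference
           pixel (assumed top-right corner)
    Assumed pure-rotation model about the top-left corner:
      mv(x, y) = mv0 + ((cos t - 1) * x - sin t * y,
                         sin t * x + (cos t - 1) * y)
    so sin t is recoverable from the signalled y component alone.
    """
    v0x, v0y = mv0
    sin_t = (v1y - v0y) / width
    cos_t = math.sqrt(max(0.0, 1.0 - sin_t * sin_t))
    mv1 = (v0x + (cos_t - 1.0) * width, v1y)                    # derived x component
    mv2 = (v0x - sin_t * height, v0y + (cos_t - 1.0) * height)  # third reference pixel
    return mv1, mv2

# Example: 16x16 block, mv0 = (4, 2), signalled v1y = 6 (sin t = 0.25)
print(derive_rotation_control_points((4.0, 2.0), 6.0, 16, 16))
```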
  • the decoder 120 may obtain four directional motion components based on information obtained from the bitstream. That is, the decoder 120 may obtain motion information of the first reference pixel and motion information of the second reference pixel based on the information obtained from the bitstream.
  • the motion information of the first reference pixel may include an x direction motion component and a y direction motion component.
  • the motion information of the second reference pixel may include an x direction motion component and a y direction motion component.
  • the decoder 120 may obtain motion information of the third reference pixel based on the motion information of the first reference pixel and the motion information of the second reference pixel.
  • the decoder 120 may obtain a separate flag different from the motion type from the bitstream.
  • the decoder 120 may determine to acquire three direction motion components or four direction motion components based on the flag.
  • the decoder 120 may receive information about the motion type.
• the decoder 120 may obtain the x direction motion component and the y direction motion component included in the motion information of the first reference pixel, and the x direction motion component and the y direction motion component included in the motion information of the second reference pixel, based on the information obtained from the bitstream.
  • the decoder 120 may not obtain information about a motion type from the bitstream.
  • the decoder 120 may obtain four directional motion components based on information obtained from the bitstream without information on the motion type. That is, the decoder 120 may obtain motion information of the first reference pixel and motion information of the second reference pixel based on the information obtained from the bitstream.
  • the motion information of the first reference pixel may include an x direction motion component and a y direction motion component.
  • the decoder 120 may obtain motion information of the third reference pixel based on the motion information of the first reference pixel and the motion information of the second reference pixel. Since the decoder 120 does not receive the information on the motion type, the efficiency of the bitstream may be increased.
  • the decoder 120 may correspond to both zoom and rotation using four directional motion components.
• the decoder 120 may predict the motion information of the pixels included in the current block based on the motion information of the first reference pixel, the motion information of the second reference pixel, and the motion information of the third reference pixel. The decoder 120 may predict the current block based on the motion information of the pixels.
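• As a concrete illustration of how the per-pixel motion vectors can be derived from the three reference-pixel motion vectors, the following sketch uses the generic 6-parameter affine interpolation with the first, second, and third reference pixels assumed to be the top-left, top-right, and bottom-left corners of a W x H block; the function and variable names are illustrative and the embodiment's own derivation (FIGS. 3 to 6) may differ.

```python
def affine_pixel_motion(mv0, mv1, mv2, x, y, width, height):
    """Interpolate the motion vector of pixel (x, y) of the current block from
    the motion of the first (top-left), second (top-right) and third
    (bottom-left) reference pixels, using the generic 6-parameter affine model.
    """
    mvx = mv0[0] + (mv1[0] - mv0[0]) * x / width + (mv2[0] - mv0[0]) * y / height
    mvy = mv0[1] + (mv1[1] - mv0[1]) * x / width + (mv2[1] - mv0[1]) * y / height
    return mvx, mvy

# Example: motion at the centre of a 16x16 block
print(affine_pixel_motion((4, 2), (8, 2), (4, 6), 8, 8, 16, 16))  # (6.0, 4.0)
```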
• condition 2 may be whether the size of the current block is greater than a threshold size, or whether at least one of the neighboring blocks is in the affine mode. For example, the decoder 120 may determine whether the width of the current block is greater than or equal to 16, and whether the height of the current block is greater than or equal to 16. In addition, the decoder 120 may determine whether the size of the current block is greater than a threshold size, where the size of the current block can be expressed as the product of the width and the height of the current block. For example, the decoder 120 may determine whether the size of the current block is greater than 64.
  • the decoder 120 may determine whether at least one of the neighboring blocks of the current block is in the affine mode.
  • the neighboring blocks may be lower left, left, upper left, upper, right upper, right and lower right blocks of the current block.
  • the decoder 120 may determine whether neighboring blocks are in an affine mode in a predetermined order. Since the predetermined order has been described with reference to FIGS. 8 to 11, overlapping descriptions are omitted.
• only one of the condition that the size of the current block is larger than the threshold size and the condition that at least one of the neighboring blocks is in the affine mode may need to be satisfied, or both may need to be satisfied; a sketch of this check follows.
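• A minimal sketch of this check is shown below; the thresholds (16 and 64) are the example values from the text, the neighbour ordering is only indicative, and whether one or both conditions must hold is a configuration choice of the embodiment.

```python
def affine_mode_allowed(width, height, neighbor_is_affine,
                        min_side=16, min_area=64):
    """Check whether the affine mode may be applied to the current block.

    neighbor_is_affine : booleans for the neighbouring blocks (lower-left,
    left, upper-left, upper, upper-right, right, lower-right), visited in the
    predetermined order of the embodiment.
    """
    size_ok = width >= min_side and height >= min_side and width * height > min_area
    neighbor_ok = any(neighbor_is_affine)
    # Whether one condition or both must hold depends on the embodiment.
    return size_ok or neighbor_ok

print(affine_mode_allowed(16, 32, [False, True, False, False, False, False, False]))  # True
```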
  • the decoder 120 may obtain information about the affine mode from the received bitstream. The decoder 120 may determine whether to predict the current block in the affine mode based on the information on the affine mode.
  • FIG. 14 illustrates performing steps 1421 and 1422, but is not limited thereto.
  • the decoder 120 may determine whether to predict the current block in the affine mode by performing only one of the steps 1421 or 1422.
  • the decoder 120 may perform the existing inter prediction mode in operation 1430.
  • the existing inter prediction mode may be a technique related to high efficiency video coding (HEVC) or H.264.
  • the decoder 120 may perform a prediction mode in which the differential motion vector is not received among the existing inter prediction modes.
  • the existing inter prediction mode may be similar to the merge mode or the skip mode of HEVC.
  • the decoder 120 may acquire candidate related information in operation 1442.
  • the decoder 120 may select one of affine candidate 1 or affine candidate 2 based on the candidate related information.
  • the decoder 120 may acquire motion information of the first reference pixel and motion information of the third reference pixel according to the description associated with FIGS. 8 to 11.
  • the decoder 120 may acquire motion information of the first reference pixel or motion information of the third reference pixel according to the description associated with FIG. 12.
  • the decoder 120 may not acquire candidate related information.
  • the decoder 120 may use only one candidate.
  • the decoder 120 may obtain motion information of the first reference pixel to motion information of the third reference pixel according to the description associated with FIGS. 8 to 11.
  • the decoder 120 may obtain motion information of the first reference pixel to motion information of the third reference pixel according to the description associated with FIG. 12.
• FIG. 15 is a schematic block diagram of an image encoding apparatus, according to an embodiment.
  • the image encoding apparatus 1500 may include an encoder 1510 and a bitstream generator 1520.
  • the encoder 1510 may receive an input image and encode the input image.
  • the bitstream generator 1520 may output a bitstream based on the encoded input image.
  • the image encoding apparatus 1500 may transmit a bitstream to the image decoding apparatus 100. Detailed operations of the video encoding apparatus 1500 will be described in detail with reference to FIG. 16.
• FIG. 16 is a flowchart of a video encoding method, according to an embodiment.
  • FIG. 16 relates to an image encoding method and includes similar contents to those of the image decoding method and apparatus described with reference to FIGS. 1 to 14, and descriptions thereof will not be repeated.
• the encoder 1510 may perform an operation 1610 of obtaining the first direction motion component and the second direction motion component included in the motion information of the first reference pixel located at the first position of the current block, based on the current block included in the original image and a previously reconstructed image of the current block.
  • the encoder 1510 may acquire motion information of the first reference pixel with respect to the first position based on a correlation between the current block and the previously reconstructed image. In order to determine the degree of correlation, the encoder 1510 may use a sum of absolute difference (SAD).
• the encoder 1510 may acquire the first direction motion component included in the motion information of the second reference pixel located at the second position of the current block, based on the current block and the previously reconstructed image.
  • the encoder 1510 may perform an operation 1630 of acquiring the second direction motion component included in the motion information of the second reference pixel.
  • the encoder 1510 may acquire a second directional motion component included in the motion information of the second reference pixel based on the current block and the previously reconstructed image.
  • the encoder 1510 may acquire motion information of the second reference pixel for the second position based on a correlation between the current block and the previously reconstructed image. In order to determine the degree of correlation, the encoder 1510 may use a sum of absolute difference (SAD).
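• A minimal sketch of the SAD measure mentioned above, on plain Python lists; block shapes and data types are illustrative.

```python
def sad(current_block, reference_block):
    """Sum of absolute differences between two equally sized blocks, used as
    the correlation measure when searching for the reference pixel motion."""
    return sum(abs(int(c) - int(r))
               for c_row, r_row in zip(current_block, reference_block)
               for c, r in zip(c_row, r_row))

# A lower SAD indicates a better-matching reference position.
cur = [[10, 12], [14, 16]]
ref = [[11, 12], [13, 18]]
print(sad(cur, ref))  # 4
```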
  • the encoder 1510 may compare the x direction motion component and the y direction motion component included in the motion information of the first reference pixel with the x direction motion component and the y direction motion component included in the motion information of the second reference pixel.
• the first direction motion component of the motion information of the second reference pixel may be an x direction motion component, and the second direction motion component of the motion information of the second reference pixel may be a y direction motion component. If the y direction motion component included in the motion information of the first reference pixel is similar to the second direction motion component included in the motion information of the second reference pixel, the encoder 1510 may determine to obtain the second direction motion component based on the y direction motion component included in the motion information of the first reference pixel.
• the encoder 1510 may determine the motion type of the current block to be zoom.
• the bitstream generator 1520 may generate a bitstream based on the motion type.
  • the bitstream generator 1520 may not generate the second direction motion component as a bitstream. That is, the image encoding apparatus 1500 may not transmit the second direction motion component to the image decoding apparatus 100.
  • the image encoding apparatus 1500 and the image decoding apparatus 100 may increase the efficiency of the bitstream.
• the encoder 1510 may determine whether the second direction motion component included in the motion information of the second reference pixel can be obtained, based on the direction motion components included in the motion information of the first reference pixel and the first direction motion component included in the motion information of the second reference pixel.
• the first direction motion component of the motion information of the second reference pixel may be a y direction motion component, and the second direction motion component of the motion information of the second reference pixel may be an x direction motion component.
• the encoder 1510 may determine whether the direction motion component obtained according to Equations 6 and 7 from the motion information of the first reference pixel and the first direction motion component included in the motion information of the second reference pixel is similar to the second direction motion component. When the direction motion component obtained according to Equations 6 and 7 is similar to the second direction motion component, the encoder 1510 may determine to obtain the second direction motion component based on the motion information of the first reference pixel and the first direction motion component.
• the first direction motion component of the motion information of the second reference pixel may be an x direction motion component, and the second direction motion component of the motion information of the second reference pixel may be a y direction motion component.
• the encoder 1510 may determine whether the direction motion component obtained according to Equations 8 and 9 from the motion information of the first reference pixel and the first direction motion component included in the motion information of the second reference pixel is similar to the second direction motion component. If the direction motion component obtained according to Equations 8 and 9 is similar to the second direction motion component, the encoder 1510 may determine to obtain the second direction motion component based on the motion information of the first reference pixel and the first direction motion component.
• the encoder 1510 may determine the motion type of the current block to be rotation.
• the bitstream generator 1520 may generate a bitstream based on the motion type.
  • the bitstream generator 1520 may not generate the second direction motion component as a bitstream. That is, the image encoding apparatus 1500 may not transmit the second direction motion component to the image decoding apparatus 100.
  • the image encoding apparatus 1500 and the image decoding apparatus 100 may increase the efficiency of the bitstream.
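• The encoder-side choice described above can be sketched as follows, reusing the illustrative zoom and rotation models from the earlier sketches; the tolerance used for the 'similar' comparison is an assumption.

```python
import math

def choose_motion_type(mv0, mv1, width, tol=0.5):
    """Decide whether the second direction motion component of the second
    reference pixel can be reconstructed by the decoder and therefore omitted
    from the bitstream, using the illustrative zoom / rotation models above.
    """
    v0x, v0y = mv0
    v1x, v1y = mv1
    # Zoom model: the y component of the second reference pixel equals v0y.
    if abs(v1y - v0y) <= tol:
        return "zoom", (v0x, v0y, v1x)                  # omit v1y
    # Rotation model: v1x is recoverable from v1y via the rotation angle.
    sin_t = (v1y - v0y) / width
    cos_t = math.sqrt(max(0.0, 1.0 - sin_t * sin_t))
    if abs((v0x + (cos_t - 1.0) * width) - v1x) <= tol:
        return "rotation", (v0x, v0y, v1y)              # omit v1x
    return "zoom_and_rotation", (v0x, v0y, v1x, v1y)    # signal all four components

print(choose_motion_type((4.0, 2.0), (8.0, 2.1), 16))   # ('zoom', (4.0, 2.0, 8.0))
```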
• the encoder 1510 may generate the bitstream based on the x direction motion component included in the motion information of the first reference pixel, the y direction motion component included in the motion information of the first reference pixel, the first direction motion component included in the motion information of the second reference pixel, and the second direction motion component included in the motion information of the second reference pixel.
• the encoder 1510 may determine the motion type to be a type representing both zoom and rotation.
• the bitstream generator 1520 may generate a bitstream based on the motion type.
  • the present invention is not limited thereto, and the image encoding apparatus 1500 may not transmit the motion type to the image decoding apparatus 100.
  • the image decoding apparatus 100 may determine to receive four directional motion components when the motion type is not received.
• the encoder 1510 may perform an operation 1640 of obtaining the motion information of the third reference pixel located at the third position of the current block, based on the motion information of the first reference pixel and the motion information of the second reference pixel.
• the encoder 1510 may perform an operation 1650 of obtaining the motion information of the pixels included in the current block based on the motion information of the first reference pixel, the motion information of the second reference pixel, and the motion information of the third reference pixel.
• the bitstream generator 1520 may perform a step 1650 of generating a bitstream based on at least one of the motion information of the first reference pixel and the first direction motion component included in the motion information of the second reference pixel.
• the encoder 1510 may obtain a motion vector of a reconstructed neighboring block adjacent to the current block.
• the encoder 1510 may determine whether the motion information of the first reference pixel or the first direction motion component included in the motion information of the second reference pixel of the current block is similar to the motion vector of the neighboring block. If they are similar, the encoder 1510 may determine not to transmit a differential motion vector to the image decoding apparatus 100.
  • the bitstream generator 1520 may generate information about a neighboring block having motion information similar to motion information of the current block, as a bitstream.
• the encoder 1510 may obtain a motion vector of a reconstructed neighboring block adjacent to the current block.
• the encoder 1510 may determine whether the motion information of the first reference pixel or the first direction motion component included in the motion information of the second reference pixel of the current block is similar to the motion vector of the neighboring block. If they are not similar, the encoder 1510 may determine to transmit a differential motion vector to the image decoding apparatus 100.
• the bitstream generator 1520 may generate, as a bitstream, the information on the neighboring block having motion information similar to the motion information of the current block, together with the differential motion vector.
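• A sketch of this signalling decision is shown below; the distance measure, the tolerance, and the returned structure are illustrative assumptions rather than the embodiment's syntax.

```python
def signal_motion(control_mv, neighbor_mvs, tol=0.5):
    """If a neighbouring block's motion vector is close enough to the
    control-point motion of the current block, write only the neighbour
    index; otherwise also write a differential motion vector."""
    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    best = min(range(len(neighbor_mvs)),
               key=lambda i: dist(control_mv, neighbor_mvs[i]))
    if dist(control_mv, neighbor_mvs[best]) <= tol:
        return {"neighbor_index": best}                  # no differential MV
    mvd = (control_mv[0] - neighbor_mvs[best][0],
           control_mv[1] - neighbor_mvs[best][1])
    return {"neighbor_index": best, "mvd": mvd}

print(signal_motion((4.0, 2.0), [(4.0, 2.0), (0.0, 0.0)]))  # {'neighbor_index': 0}
```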
  • the image decoding apparatus 100 may reconstruct an image based on information obtained from a bitstream received from the image encoding apparatus 1500.
  • FIG. 17 illustrates a process of determining, by the image decoding apparatus 100, at least one coding unit by dividing a current coding unit according to an embodiment.
• the block shape may include 4Nx4N, 4Nx2N, 2Nx4N, 4NxN, or Nx4N, where N may be a positive integer.
• the block shape information is information indicating at least one of the shape, the direction, the ratio of width and height, or the size of a coding unit.
  • the shape of the coding unit may include square and non-square.
  • the image decoding apparatus 100 may determine block shape information of the coding unit as a square.
  • the image decoding apparatus 100 may determine the shape of the coding unit as a non-square.
• the image decoding apparatus 100 may determine the block shape information of the coding unit as a non-square. When the shape of the coding unit is non-square, the image decoding apparatus 100 may determine the ratio of the width and the height indicated by the block shape information of the coding unit to be 1:2, 2:1, 1:4, 4:1, 1:8, or 8:1. In addition, the image decoding apparatus 100 may determine whether the coding unit is in a horizontal direction or a vertical direction based on the length of the width and the length of the height of the coding unit. Also, the image decoding apparatus 100 may determine the size of the coding unit based on at least one of the length of the width, the length of the height, or the area of the coding unit.
• the image decoding apparatus 100 may determine the shape of a coding unit by using the block shape information, and may determine into which form the coding unit is split by using the split shape information. That is, which splitting method is indicated by the split shape information may be determined according to which block shape the block shape information used by the image decoding apparatus 100 represents.
• the image decoding apparatus 100 may use block shape information indicating that the current coding unit is square. For example, the image decoding apparatus 100 may determine, according to the split shape information, whether to not split the square coding unit, to split it vertically, to split it horizontally, or to split it into four coding units. Referring to FIG. 17, when the block shape information of the current coding unit 1700 indicates a square shape, the decoder 120 may determine a coding unit 1710a having the same size as the current coding unit 1700 and not split it, according to split shape information indicating that the coding unit is not split, or may determine split coding units 1710b, 1710c, 1710d, or the like based on split shape information indicating a predetermined splitting method.
• the image decoding apparatus 100 may determine two coding units 1710b by splitting the current coding unit 1700 in the vertical direction, based on split shape information indicating that the coding unit is split in the vertical direction.
  • the image decoding apparatus 100 may determine two coding units 1710c that divide the current coding unit 1700 in the horizontal direction based on the split type information indicating the split in the horizontal direction.
• the image decoding apparatus 100 may determine four coding units 1710d by splitting the current coding unit 1700 in the vertical direction and the horizontal direction, based on split shape information indicating that the coding unit is split in the vertical and horizontal directions.
• the forms into which a square coding unit may be split are not limited to the above-described forms and may include various forms that the split shape information can represent. Predetermined forms into which a square coding unit is split will be described in detail below through various embodiments; an illustrative sketch of the basic partitions follows.
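• The sketch below enumerates the four basic partitions of a square coding unit described above (no split, vertical split, horizontal split, quad split); coordinate conventions are illustrative.

```python
def split_square_coding_unit(x, y, size, split_mode):
    """Return the (x, y, width, height) of the coding units produced by the
    four basic partitions of a square coding unit."""
    if split_mode == "none":
        return [(x, y, size, size)]
    if split_mode == "vertical":
        return [(x, y, size // 2, size), (x + size // 2, y, size // 2, size)]
    if split_mode == "horizontal":
        return [(x, y, size, size // 2), (x, y + size // 2, size, size // 2)]
    if split_mode == "quad":
        half = size // 2
        return [(x, y, half, half), (x + half, y, half, half),
                (x, y + half, half, half), (x + half, y + half, half, half)]
    raise ValueError(split_mode)

print(split_square_coding_unit(0, 0, 64, "vertical"))
# [(0, 0, 32, 64), (32, 0, 32, 64)]
```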
  • FIG. 18 illustrates a process of determining, by the image decoding apparatus 100, at least one coding unit by dividing a coding unit having a non-square shape according to an embodiment.
  • the image decoding apparatus 100 may use block shape information indicating that a current coding unit is a non-square shape.
• the image decoding apparatus 100 may determine whether to not split the non-square current coding unit according to the split shape information, or to split it by a predetermined method. Referring to FIG. 18, when the block shape information of the current coding unit 1800 or 1850 indicates a non-square shape, the image decoding apparatus 100 may determine a coding unit 1810 or 1860 having the same size as the current coding unit 1800 or 1850 and not split it, according to split shape information indicating that the coding unit is not split, or may determine split coding units 1820a, 1820b, 1830a, 1830b, 1830c, 1870a, 1870b, 1880a, 1880b, and 1880c based on split shape information indicating a predetermined splitting method.
  • a predetermined division method in which a non-square coding unit is divided will be described in detail with reference to various embodiments below.
• the image decoding apparatus 100 may determine the form into which a coding unit is split by using the split shape information. In this case, the split shape information may represent the number of at least one coding unit generated by splitting the coding unit.
• the image decoding apparatus 100 may determine two coding units 1820a and 1820b, or 1870a and 1870b, included in the current coding unit by splitting the current coding unit 1800 or 1850 based on the split shape information.
• when the image decoding apparatus 100 splits the non-square current coding unit 1800 or 1850 based on the split shape information, the current coding unit may be split in consideration of the position of its long side. For example, the image decoding apparatus 100 may determine a plurality of coding units by splitting the current coding unit 1800 or 1850 in a direction that divides the long side of the current coding unit 1800 or 1850, in consideration of the shape of the current coding unit 1800 or 1850.
• the image decoding apparatus 100 may determine an odd number of coding units included in the current coding unit 1800 or 1850. For example, when the split shape information indicates that the current coding unit 1800 or 1850 is split into three coding units, the image decoding apparatus 100 may split the current coding unit 1800 or 1850 into the three coding units 1830a, 1830b, and 1830c, or 1880a, 1880b, and 1880c. According to an embodiment, the image decoding apparatus 100 may determine an odd number of coding units included in the current coding unit 1800 or 1850, and the determined coding units may not all have the same size.
• For example, a predetermined coding unit 1830b or 1880b among the determined odd number of coding units 1830a, 1830b, 1830c, 1880a, 1880b, and 1880c may have a size different from that of the other coding units 1830a, 1830c, 1880a, and 1880c. That is, the coding units that may be determined by splitting the current coding unit 1800 or 1850 may have a plurality of types of sizes, and in some cases, the odd number of coding units 1830a, 1830b, 1830c, 1880a, 1880b, and 1880c may each have a different size.
  • the image decoding apparatus 100 may determine an odd number of coding units included in the current coding unit 1800 or 1850.
  • the image decoding apparatus 100 may set a predetermined limit on at least one coding unit among odd-numbered coding units generated by dividing.
• Referring to FIG. 18, the image decoding apparatus 100 may make the decoding process for the coding unit 1830b or 1880b positioned at the center among the three coding units 1830a, 1830b, and 1830c, or 1880a, 1880b, and 1880c, generated by splitting the current coding unit 1800 or 1850 different from that of the other coding units 1830a and 1830c, or 1880a and 1880c. For example, the image decoding apparatus 100 may restrict the coding unit 1830b or 1880b positioned at the center from being further split, or may restrict it to being split only a predetermined number of times.
  • FIG. 19 illustrates a process of splitting a coding unit by the image decoding apparatus 100 based on at least one of block shape information and split shape information, according to an embodiment.
  • the image decoding apparatus 100 may determine to split or not split the first coding unit 1900 having a square shape into coding units based on at least one of block shape information and split shape information.
• For example, the image decoding apparatus 100 may determine a second coding unit 1910 by splitting the first coding unit 1900 in the horizontal direction.
• the first coding unit, the second coding unit, and the third coding unit used according to an embodiment are terms used to indicate a before-and-after relationship between coding units. For example, when the first coding unit is split, the second coding unit may be determined, and when the second coding unit is split, the third coding unit may be determined. Hereinafter, the relationship between the first coding unit, the second coding unit, and the third coding unit follows this feature.
  • the image decoding apparatus 100 may determine to divide or not split the determined second coding unit 1910 into coding units based on at least one of block shape information and split shape information.
  • the image decoding apparatus 100 may determine a second coding unit 1910 having a non-square shape determined by dividing the first coding unit 1900 based on at least one of block shape information and split shape information. It may be split into at least one third coding unit 1920a, 1920b, 1920c, 1920d, or the like, or may not split the second coding unit 1910.
• the image decoding apparatus 100 may obtain at least one of the block shape information and the split shape information, and may split the first coding unit 1900 in various manners based on the obtained block shape information and split shape information. According to an embodiment, when the first coding unit 1900 is split into the second coding unit 1910 based on at least one of the block shape information and the split shape information for the first coding unit 1900, the second coding unit 1910 may also be split into third coding units (e.g., 1920a, 1920b, 1920c, 1920d, etc.) based on at least one of the block shape information and the split shape information for the second coding unit 1910.
  • the coding unit may be recursively divided based on at least one of the partition shape information and the block shape information associated with each coding unit. Therefore, a square coding unit may be determined in a non-square coding unit, and a coding unit of a square shape may be recursively divided to determine a coding unit of a non-square shape.
• Referring to FIG. 19, a predetermined coding unit among the odd number of third coding units 1920b, 1920c, and 1920d determined by splitting the non-square second coding unit 1910 (for example, the coding unit located at the center, or a coding unit having a square shape) may be recursively split. According to an embodiment, the square third coding unit 1920b, which is one of the odd number of third coding units 1920b, 1920c, and 1920d, may be split in the horizontal direction into a plurality of fourth coding units.
  • the fourth coding unit 1930b or 1930d having a non-square shape which is one of the plurality of fourth coding units 1930a, 1930b, 1930c, and 1930d, may be further divided into a plurality of coding units.
  • the fourth coding unit 1930b or 1930d having a non-square shape may be divided into odd coding units.
  • a method that can be used for recursive division of coding units will be described later through various embodiments.
  • the image decoding apparatus 100 may divide each of the third coding units 1920a, 1920b, 1920c, 1920d, etc. into coding units based on at least one of block shape information and split shape information. Also, the image decoding apparatus 100 may determine not to split the second coding unit 1910 based on at least one of the block shape information and the split shape information. The image decoding apparatus 100 may divide the second coding unit 1910 having a non-square shape into an odd number of third coding units 1920b, 1920c, and 1920d. The image decoding apparatus 100 may place a predetermined limit on a predetermined third coding unit among the odd number of third coding units 1920b, 1920c, and 1920d.
• For example, the image decoding apparatus 100 may restrict the coding unit 1920c positioned at the center of the odd number of third coding units 1920b, 1920c, and 1920d from being further split, or may restrict it to being split only a set number of times.
• According to an embodiment, the image decoding apparatus 100 may restrict the coding unit 1920c positioned at the center of the odd number of third coding units 1920b, 1920c, and 1920d included in the non-square second coding unit 1910 so that it is no longer split, is split only into a predetermined form (for example, split into only four coding units, or split into a form corresponding to the form into which the second coding unit 1910 is split), or is split only a predetermined number of times (for example, split only n times, n > 0).
• However, the restrictions on the coding unit 1920c positioned at the center are merely examples and should not be construed as being limited to the above-described embodiments; they should be interpreted as including various restrictions under which the coding unit 1920c positioned at the center can be decoded differently from the other coding units 1920b and 1920d.
  • the image decoding apparatus 100 may obtain at least one of block shape information and split shape information used to divide a current coding unit at a predetermined position in the current coding unit.
  • FIG. 20 illustrates a method for the image decoding apparatus 100 to determine a predetermined coding unit among odd number of coding units, according to an exemplary embodiment.
• According to an embodiment, at least one of the block shape information and the split shape information of the current coding unit 2000 may be obtained from a sample at a predetermined position among the plurality of samples included in the current coding unit 2000 (for example, the sample 2040 located at the center).
• However, the predetermined position in the current coding unit 2000 from which at least one of the block shape information and the split shape information may be obtained should not be interpreted as being limited to the center position shown in FIG. 20; the predetermined position may be any of various positions that can be included in the current coding unit 2000 (e.g., top, bottom, left, right, top-left, bottom-left, top-right, bottom-right, etc.).
• the image decoding apparatus 100 may obtain at least one of the block shape information and the split shape information from the predetermined position and determine to split or not to split the current coding unit into coding units of various shapes and sizes.
  • the image decoding apparatus 100 may select one coding unit from among them. Methods for selecting one of a plurality of coding units may vary, which will be described below through various embodiments.
  • the image decoding apparatus 100 may divide a current coding unit into a plurality of coding units and determine a coding unit of a predetermined position.
  • the image decoding apparatus 100 may use information indicating the position of each of the odd coding units to determine a coding unit located in the middle of the odd coding units. Referring to FIG. 20, the image decoding apparatus 100 may divide the current coding unit 2000 to determine odd number of coding units 2020a, 2020b, and 2020c. The image decoding apparatus 100 may determine the center coding unit 2020b by using information about the positions of the odd number of coding units 2020a, 2020b, and 2020c. For example, the image decoding apparatus 100 determines the positions of the coding units 2020a, 2020b, and 2020c based on information indicating the positions of predetermined samples included in the coding units 2020a, 2020b, and 2020c. The coding unit 2020b positioned at may be determined.
• For example, the image decoding apparatus 100 may determine the coding unit 2020b positioned at the center by determining the positions of the coding units 2020a, 2020b, and 2020c based on the information indicating the positions of the upper-left samples 2030a, 2030b, and 2030c of the coding units 2020a, 2020b, and 2020c.
• According to an embodiment, the information indicating the positions of the upper-left samples 2030a, 2030b, and 2030c included in the coding units 2020a, 2020b, and 2020c may include information about the positions or coordinates of the coding units 2020a, 2020b, and 2020c in the picture.
• According to an embodiment, the information indicating the positions of the upper-left samples 2030a, 2030b, and 2030c included in the coding units 2020a, 2020b, and 2020c may include information indicating the widths or heights of the coding units 2020a, 2020b, and 2020c included in the current coding unit 2000, and the widths or heights may correspond to information indicating the differences between the coordinates of the coding units 2020a, 2020b, and 2020c in the picture.
• That is, the image decoding apparatus 100 may determine the coding unit 2020b positioned at the center by directly using the information about the positions or coordinates of the coding units 2020a, 2020b, and 2020c in the picture, or by using the information about the widths or heights of the coding units corresponding to the differences between the coordinates.
• According to an embodiment, the information indicating the position of the upper-left sample 2030a of the upper coding unit 2020a may indicate (xa, ya) coordinates, the information indicating the position of the upper-left sample 2030b of the middle coding unit 2020b may indicate (xb, yb) coordinates, and the information indicating the position of the upper-left sample 2030c of the lower coding unit 2020c may indicate (xc, yc) coordinates.
  • the image decoding apparatus 100 may determine the center coding unit 2020b using the coordinates of the samples 2030a, 2030b, and 2030c in the upper left included in the coding units 2020a, 2020b, and 2020c, respectively.
• In this case, the coordinates indicating the positions of the upper-left samples 2030a, 2030b, and 2030c may be coordinates indicating absolute positions in the picture; furthermore, with the position of the upper-left sample 2030a of the upper coding unit 2020a as a reference, the (dxb, dyb) coordinates indicating the relative position of the upper-left sample 2030b of the center coding unit 2020b and the (dxc, dyc) coordinates indicating the relative position of the upper-left sample 2030c of the lower coding unit 2020c may also be used.
• In addition, the method of determining a coding unit at a predetermined position by using the coordinates of a sample as information indicating the position of a sample included in a coding unit should not be interpreted as being limited to the above-described method, and should be interpreted as including various arithmetic methods that can use the coordinates of the sample.
• According to an embodiment, the image decoding apparatus 100 may split the current coding unit 2000 into the plurality of coding units 2020a, 2020b, and 2020c, and may select a coding unit among the coding units 2020a, 2020b, and 2020c according to a predetermined criterion. For example, the image decoding apparatus 100 may select the coding unit 2020b whose size differs from that of the other coding units among the coding units 2020a, 2020b, and 2020c.
• According to an embodiment, the image decoding apparatus 100 may determine the width or height of each of the coding units 2020a, 2020b, and 2020c by using the (xa, ya) coordinates indicating the position of the upper-left sample 2030a of the upper coding unit 2020a, the (xb, yb) coordinates indicating the position of the upper-left sample 2030b of the center coding unit 2020b, and the (xc, yc) coordinates indicating the position of the upper-left sample 2030c of the lower coding unit 2020c. The image decoding apparatus 100 may determine the size of each of the coding units 2020a, 2020b, and 2020c by using the (xa, ya), (xb, yb), and (xc, yc) coordinates indicating the positions of the coding units 2020a, 2020b, and 2020c.
• According to an embodiment, the image decoding apparatus 100 may determine the width of the upper coding unit 2020a as the width of the current coding unit 2000 and its height as yb-ya, may determine the width of the center coding unit 2020b as the width of the current coding unit 2000 and its height as yc-yb, and may determine the width or height of the lower coding unit by using the width or height of the current coding unit and the widths and heights of the upper coding unit 2020a and the center coding unit 2020b.
  • the image decoding apparatus 100 may determine a coding unit having a different size from other coding units based on the width and the height of the determined coding units 2020a, 2020b, and 2020c. Referring to FIG. 20, the image decoding apparatus 100 may determine a coding unit 2020b as a coding unit having a predetermined position while having a size different from that of the upper coding unit 2020a and the lower coding unit 2020c. However, in the above-described process of determining, by the image decoding apparatus 100, a coding unit having a different size from another coding unit, the coding unit at a predetermined position may be determined using the size of the coding unit determined based on the sample coordinates. In this regard, various processes of determining a coding unit at a predetermined position by comparing the sizes of coding units determined according to predetermined sample coordinates may be used.
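• The size comparison described above can be sketched as follows for three vertically stacked coding units; the coordinate variables mirror ya, yb, and yc in the text, and the selection rule is illustrative.

```python
def pick_center_coding_unit(ya, yb, yc, current_height):
    """Reconstruct the heights of the upper, centre and lower coding units
    from the y coordinates of their upper-left samples and pick the unit
    whose height differs from the others."""
    heights = [yb - ya, yc - yb, ya + current_height - yc]
    for i, h in enumerate(heights):
        if sum(1 for other in heights if other == h) == 1:
            return i, h            # index 1 corresponds to the centre coding unit
    return 1, heights[1]           # all equal: fall back to the middle unit

print(pick_center_coding_unit(0, 16, 48, 64))  # (1, 32) -> the centre unit
```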
  • the position of the sample to be considered for determining the position of the coding unit should not be interpreted as being limited to the upper left side described above, but may be interpreted that information on the position of any sample included in the coding unit may be used.
  • the image decoding apparatus 100 may select a coding unit of a predetermined position among odd-numbered coding units determined by dividing the current coding unit in consideration of the shape of the current coding unit. For example, if the current coding unit has a non-square shape having a width greater than the height, the image decoding apparatus 100 may determine the coding unit at a predetermined position in the horizontal direction. That is, the image decoding apparatus 100 may determine one of the coding units having different positions in the horizontal direction to limit the corresponding coding unit. If the current coding unit has a non-square shape having a height greater than the width, the image decoding apparatus 100 may determine a coding unit of a predetermined position in the vertical direction. That is, the image decoding apparatus 100 may determine one of the coding units having different positions in the vertical direction to limit the corresponding coding unit.
  • the image decoding apparatus 100 may use information indicating the positions of each of the even coding units to determine the coding unit of the predetermined position among the even coding units.
  • the image decoding apparatus 100 may determine an even number of coding units by dividing a current coding unit and determine a coding unit of a predetermined position by using information about the positions of the even coding units.
  • a detailed process thereof may be a process corresponding to a process of determining a coding unit of a predetermined position (for example, a center position) among the odd number of coding units described above with reference to FIG. 20, and thus will be omitted.
• According to an embodiment, in order to determine a coding unit at a predetermined position among a plurality of coding units, predetermined information about the coding unit at the predetermined position may be used in the splitting process.
• For example, in order to determine the coding unit located at the center among the plurality of coding units into which the current coding unit is split, the image decoding apparatus 100 may use at least one of the block shape information and the split shape information stored in a sample included in the center coding unit in the splitting process.
• Referring to FIG. 20, the image decoding apparatus 100 may split the current coding unit 2000 into the plurality of coding units 2020a, 2020b, and 2020c based on at least one of the block shape information and the split shape information, and may determine the coding unit 2020b positioned at the center of the plurality of coding units 2020a, 2020b, and 2020c.
• Furthermore, the image decoding apparatus 100 may determine the coding unit 2020b positioned at the center in consideration of the position from which at least one of the block shape information and the split shape information is obtained. That is, at least one of the block shape information and the split shape information of the current coding unit 2000 may be obtained from the sample 2040 located at the center of the current coding unit 2000, and when the current coding unit 2000 is split into the plurality of coding units 2020a, 2020b, and 2020c based on that information, the coding unit 2020b including the sample 2040 may be determined as the coding unit positioned at the center. However, the information used to determine the coding unit positioned at the center should not be interpreted as being limited to at least one of the block shape information and the split shape information, and various kinds of information may be used in the process of determining the coding unit positioned at the center.
  • predetermined information for identifying a coding unit of a predetermined position may be obtained from a predetermined sample included in the coding unit to be determined.
• Referring to FIG. 20, in order to determine the coding unit located at the center among the plurality of coding units 2020a, 2020b, and 2020c determined by splitting the current coding unit 2000, the image decoding apparatus 100 may use at least one of the block shape information and the split shape information obtained from a sample at a predetermined position in the current coding unit 2000 (for example, a sample located at the center of the current coding unit 2000).
• That is, the image decoding apparatus 100 may determine the sample at the predetermined position in consideration of the block shape of the current coding unit 2000, and may determine, among the plurality of coding units into which the current coding unit 2000 is split, the coding unit 2020b including a sample from which predetermined information (for example, at least one of the block shape information and the split shape information) can be obtained, and may place a predetermined restriction on it.
• Referring to FIG. 20, the image decoding apparatus 100 may determine the sample 2040 positioned at the center of the current coding unit 2000 as a sample from which the predetermined information can be obtained, and may place a predetermined restriction on the coding unit 2020b including the sample 2040 in the decoding process.
  • the position of the sample from which the predetermined information can be obtained should not be interpreted as being limited to the above-described position, but may be interpreted as samples of arbitrary positions included in the coding unit 2020b to be determined for the purpose of limitation.
  • a position of a sample from which predetermined information may be obtained may be determined according to the shape of the current coding unit 2000.
• For example, the image decoding apparatus 100 may determine whether the shape of the current coding unit is square or non-square by using the block shape information, and may determine the position of the sample from which the predetermined information can be obtained according to the shape.
• For example, the image decoding apparatus 100 may determine, as the sample from which the predetermined information can be obtained, a sample positioned on a boundary that divides at least one of the width and the height of the current coding unit in half, by using at least one of the information about the width and the height of the current coding unit. As another example, the image decoding apparatus 100 may determine one of the samples adjacent to the boundary that divides the long side of the current coding unit in half as the sample from which the predetermined information can be obtained.
• According to an embodiment, when the image decoding apparatus 100 splits the current coding unit into a plurality of coding units, it may use at least one of the block shape information and the split shape information to determine a coding unit at a predetermined position among the plurality of coding units.
• According to an embodiment, the image decoding apparatus 100 may obtain at least one of the block shape information and the split shape information from a sample at a predetermined position included in a coding unit, and may split the plurality of coding units generated by splitting the current coding unit by using at least one of the split shape information and the block shape information obtained from the sample at the predetermined position included in each of the plurality of coding units.
  • the coding unit may be recursively split using at least one of block shape information and split shape information obtained from a sample of a predetermined position included in each coding unit. Since the recursive division process of the coding unit has been described above with reference to FIG. 19, a detailed description thereof will be omitted.
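• A sketch of the recursive splitting described above; the callback that supplies the split shape information and the depth limit are illustrative assumptions.

```python
def split_recursively(unit, get_split_info, depth=0, max_depth=3):
    """Recursively split a coding unit: each unit reads its own split shape
    information (modelled by the callback) and its children are then split
    independently in the same way."""
    x, y, w, h = unit
    mode = get_split_info(unit, depth)
    if mode == "none" or depth >= max_depth:
        return [unit]
    if mode == "vertical":
        children = [(x, y, w // 2, h), (x + w // 2, y, w // 2, h)]
    elif mode == "horizontal":
        children = [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)]
    else:  # quad split
        children = [(x, y, w // 2, h // 2), (x + w // 2, y, w // 2, h // 2),
                    (x, y + h // 2, w // 2, h // 2),
                    (x + w // 2, y + h // 2, w // 2, h // 2)]
    leaves = []
    for child in children:
        leaves.extend(split_recursively(child, get_split_info, depth + 1, max_depth))
    return leaves

# Split the root vertically, then stop.
print(split_recursively((0, 0, 64, 64), lambda u, d: "vertical" if d == 0 else "none"))
# [(0, 0, 32, 64), (32, 0, 32, 64)]
```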
• According to an embodiment, the image decoding apparatus 100 may determine at least one coding unit by splitting the current coding unit, and may determine the order in which the at least one coding unit is decoded according to a predetermined block (for example, the current coding unit).
  • FIG. 21 is a diagram illustrating an order in which a plurality of coding units is processed when the image decoding apparatus 100 determines a plurality of coding units by dividing a current coding unit.
• According to an embodiment, the image decoding apparatus 100 may determine the second coding units 2110a and 2110b by splitting the first coding unit 2100 in the vertical direction, determine the second coding units 2130a and 2130b by splitting the first coding unit 2100 in the horizontal direction, or determine the second coding units 2150a, 2150b, 2150c, and 2150d by splitting the first coding unit 2100 in the vertical and horizontal directions, according to the block shape information and the split shape information.
  • the image decoding apparatus 100 may determine an order such that the second coding units 2110a and 2110b determined by dividing the first coding unit 2100 in the vertical direction are processed in the horizontal direction 2110c. .
  • the image decoding apparatus 100 may determine the processing order of the second coding units 2130a and 2130b determined by dividing the first coding unit 2100 in the horizontal direction, in the vertical direction 2130c.
• Referring to FIG. 21, the image decoding apparatus 100 may determine that the second coding units 2150a, 2150b, 2150c, and 2150d determined by splitting the first coding unit 2100 in the vertical and horizontal directions are processed according to a predetermined order in which the coding units positioned in one row are processed and then the coding units positioned in the next row are processed (for example, a raster scan order or a z-scan order 2150e).
  • the image decoding apparatus 100 may recursively split coding units.
  • the image decoding apparatus 100 may determine a plurality of coding units 2110a, 2110b, 2130a, 2130b, 2150a, 2150b, 2150c, and 2150d by dividing the first coding unit 2100.
  • Each of the determined coding units 2110a, 2110b, 2130a, 2130b, 2150a, 2150b, 2150c, and 2150d may be recursively divided.
  • the method of dividing the plurality of coding units 2110a, 2110b, 2130a, 2130b, 2150a, 2150b, 2150c, and 2150d may correspond to a method of dividing the first coding unit 2100. Accordingly, the plurality of coding units 2110a, 2110b, 2130a, 2130b, 2150a, 2150b, 2150c, and 2150d may be independently divided into a plurality of coding units. Referring to FIG. 21, the image decoding apparatus 100 may determine the second coding units 2110a and 2110b by dividing the first coding unit 2100 in the vertical direction, and further, respectively, the second coding units 2110a and 2110b. It can be decided to split independently or not.
• the image decoding apparatus 100 may split the left second coding unit 2110a in the horizontal direction into the third coding units 2120a and 2120b, and may not split the right second coding unit 2110b.
  • the processing order of coding units may be determined based on a split process of the coding units.
  • the processing order of the divided coding units may be determined based on the processing order of the coding units immediately before being split.
  • the image decoding apparatus 100 may independently determine the order in which the third coding units 2120a and 2120b determined by splitting the second coding unit 2110a on the left side from the second coding unit 2110b on the right side. Since the second coding unit 2110a on the left is divided in the horizontal direction to determine the third coding units 2120a and 2120b, the third coding units 2120a and 2120b may be processed in the vertical direction 2120c.
• That is, since the processing order of the second coding units corresponds to the horizontal direction 2110c, after the third coding units 2120a and 2120b included in the left second coding unit 2110a are processed in the vertical direction, the right second coding unit 2110b may be processed.
  • FIG. 22 illustrates a process of determining that a current coding unit is divided into an odd number of coding units when the image decoding apparatus 100 may not process the coding units in a predetermined order, according to an exemplary embodiment.
  • the image decoding apparatus 100 may determine that the current coding unit is split into odd coding units based on the obtained block shape information and the split shape information.
  • a first coding unit 2200 having a square shape may be divided into second coding units 2210a and 2210b having a non-square shape, and each of the second coding units 2210a and 2210b may be independently formed. It may be divided into three coding units 2220a, 2220b, 2220c, 2220d, and 2220e.
• According to an embodiment, the image decoding apparatus 100 may determine the plurality of third coding units 2220a and 2220b by splitting the left second coding unit 2210a in the horizontal direction, and may split the right second coding unit 2210b into the odd number of third coding units 2220c, 2220d, and 2220e.
• the image decoding apparatus 100 may determine whether there are coding units split into an odd number by determining whether the third coding units 2220a, 2220b, 2220c, 2220d, and 2220e can be processed in a predetermined order. Referring to FIG. 22, the image decoding apparatus 100 may determine the third coding units 2220a, 2220b, 2220c, 2220d, and 2220e by recursively splitting the first coding unit 2200.
• the image decoding apparatus 100 may determine whether the first coding unit 2200, the second coding units 2210a and 2210b, or the third coding units 2220a, 2220b, 2220c, 2220d, and 2220e are split into an odd number of coding units, based on at least one of the block shape information and the split shape information. The order in which the plurality of coding units included in the first coding unit 2200 are processed may be a predetermined order (for example, a z-scan order 2230), and the image decoding apparatus 100 may determine whether the third coding units 2220c, 2220d, and 2220e determined by splitting the right second coding unit 2210b into an odd number satisfy a condition under which they can be processed in the predetermined order.
• According to an embodiment, whether the third coding units 2220a, 2220b, 2220c, 2220d, and 2220e included in the first coding unit 2200 satisfy the condition for being processed in the predetermined order is related to whether at least one of the width and the height of the second coding units 2210a and 2210b is divided in half along the boundaries of the third coding units 2220a, 2220b, 2220c, 2220d, and 2220e. For example, the third coding units 2220a and 2220b, which are determined by dividing the height of the non-square left second coding unit 2210a in half, may satisfy the condition.
• However, since the boundaries of the third coding units 2220c, 2220d, and 2220e determined by splitting the right second coding unit 2210b into three coding units do not divide the width or height of the right second coding unit 2210b in half, the third coding units 2220c, 2220d, and 2220e may be determined not to satisfy the condition; a sketch of this half-split check follows. When the condition is not satisfied in this way, the image decoding apparatus 100 may determine a discontinuity of the scan order and, based on the determination result, determine that the right second coding unit 2210b is split into an odd number of coding units.
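• A sketch of the half-split condition referenced above; the child-rectangle representation and the exact check are illustrative.

```python
def boundaries_split_in_half(parent_w, parent_h, children):
    """Return True only if every boundary of the child coding units divides
    the parent's width or height exactly in half.
    'children' holds (x, y, w, h) tuples relative to the parent."""
    for x, y, w, h in children:
        for edge in (x, x + w):
            if edge not in (0, parent_w // 2, parent_w):
                return False
        for edge in (y, y + h):
            if edge not in (0, parent_h // 2, parent_h):
                return False
    return True

# A 1:2:1 ternary split of a 16x32 unit violates the condition; a binary
# split in half satisfies it.
print(boundaries_split_in_half(16, 32, [(0, 0, 16, 8), (0, 8, 16, 16), (0, 24, 16, 8)]))  # False
print(boundaries_split_in_half(16, 32, [(0, 0, 16, 16), (0, 16, 16, 16)]))                # True
```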
• According to an embodiment, when a coding unit is split into an odd number of coding units, the image decoding apparatus 100 may place a predetermined restriction on a coding unit at a predetermined position among the split coding units; since this has been described above through various embodiments, a detailed description thereof will be omitted.
  • FIG. 23 illustrates a process of determining, by the image decoding apparatus 100, at least one coding unit by dividing the first coding unit 2300, according to an exemplary embodiment.
  • the image decoding apparatus 100 may divide the first coding unit 2300 based on at least one of the block shape information and the split shape information acquired through the receiver 110.
  • the first coding unit 2300 having a square shape may be divided into coding units having four square shapes, or may be divided into a plurality of coding units having a non-square shape.
• For example, the image decoding apparatus 100 may split the first coding unit 2300 into a plurality of non-square coding units. Specifically, the image decoding apparatus 100 may split the square first coding unit 2300 in the vertical direction into the second coding units 2310a, 2310b, and 2310c as an odd number of coding units, or may split it in the horizontal direction into the second coding units 2320a, 2320b, and 2320c.
• According to an embodiment, the image decoding apparatus 100 may determine whether the second coding units 2310a, 2310b, 2310c, 2320a, 2320b, and 2320c included in the first coding unit 2300 satisfy a condition for being processed in a predetermined order, and the condition is related to whether at least one of the width and the height of the first coding unit 2300 is divided in half along the boundaries of the second coding units 2310a, 2310b, 2310c, 2320a, 2320b, and 2320c.
• Referring to FIG. 23, since the boundaries of the second coding units 2310a, 2310b, and 2310c determined by splitting the square first coding unit 2300 in the vertical direction do not divide the width of the first coding unit 2300 in half, the first coding unit 2300 may be determined not to satisfy the condition for being processed in the predetermined order. Likewise, since the boundaries of the second coding units 2320a, 2320b, and 2320c determined by splitting the square first coding unit 2300 in the horizontal direction do not divide the height of the first coding unit 2300 in half, the first coding unit 2300 may be determined not to satisfy the condition for being processed in the predetermined order. When the condition is not satisfied in this way, the image decoding apparatus 100 may determine a discontinuity of the scan order and, based on the determination result, determine that the first coding unit 2300 is split into an odd number of coding units.
• According to an embodiment, when a coding unit is split into an odd number of coding units, the image decoding apparatus 100 may place a predetermined restriction on a coding unit at a predetermined position among the split coding units; since this has been described above through various embodiments, a detailed description thereof will be omitted.
  • the image decoding apparatus 100 may determine various coding units by dividing the first coding unit.
  • the image decoding apparatus 100 may split a first coding unit 2300 having a square shape and a first coding unit 2330 or 2350 having a non-square shape into various coding units.
  • FIG. 24 illustrates that a form in which a non-square second coding unit, determined by splitting the first coding unit 2400, can be split may be restricted when the second coding unit satisfies a predetermined condition, according to an embodiment.
  • the image decoding apparatus 100 may determine non-square second coding units 2410a, 2410b, 2420a, and 2420b by splitting the square first coding unit 2400 based on at least one of block shape information and split shape information acquired through the receiver 110. The second coding units 2410a, 2410b, 2420a, and 2420b may be split independently. Accordingly, the image decoding apparatus 100 may determine whether or not to split each of the second coding units 2410a, 2410b, 2420a, and 2420b into a plurality of coding units, based on at least one of block shape information and split shape information related to each of them.
  • According to an embodiment, the image decoding apparatus 100 may determine third coding units 2412a and 2412b by splitting, in the horizontal direction, the non-square left second coding unit 2410a that was determined by splitting the first coding unit 2400 in the vertical direction.
  • In this case, the right second coding unit 2410b may be restricted from being split in the horizontal direction, the same direction in which the left second coding unit 2410a was split.
  • If the right second coding unit 2410b were split in the same direction and third coding units 2414a and 2414b were determined, the left second coding unit 2410a and the right second coding unit 2410b would each be split in the horizontal direction, and the third coding units 2412a, 2412b, 2414a, and 2414b would be determined. However, this is the same result as the image decoding apparatus 100 splitting the first coding unit 2400 into four square second coding units 2430a, 2430b, 2430c, and 2430d based on at least one of the block shape information and the split shape information, which may be inefficient in terms of image decoding.
  • According to an embodiment, the image decoding apparatus 100 may determine third coding units 2422a, 2422b, 2424a, and 2424b by splitting, in the vertical direction, a non-square second coding unit 2420a or 2420b determined by splitting the first coding unit 2400 in the horizontal direction.
  • However, when the image decoding apparatus 100 splits one of the second coding units (for example, the upper second coding unit 2420a) in the vertical direction, the other second coding unit (for example, the lower second coding unit 2420b) may be restricted from being split in the vertical direction, the same direction in which the upper second coding unit 2420a was split.
  • FIG. 25 illustrates a process by which the image decoding apparatus 100 splits a square coding unit when the split shape information cannot indicate splitting into four square coding units, according to an exemplary embodiment.
  • the image decoding apparatus 100 may determine second coding units 2510a, 2510b, 2520a, 2520b, and the like by splitting the first coding unit 2500 based on at least one of the block shape information and the split shape information.
  • the split type information may include information about various types in which a coding unit may be split, but the information on various types may not include information for splitting into four coding units having a square shape.
  • the image decoding apparatus 100 may not divide the first coding unit 2500 having a square shape into four second coding units 2530a, 2530b, 2530c, and 2530d having a square shape.
  • Instead, the image decoding apparatus 100 may determine the non-square second coding units 2510a, 2510b, 2520a, 2520b, and the like based on the split shape information.
  • the image decoding apparatus 100 may independently split the non-square second coding units 2510a, 2510b, 2520a, 2520b, and the like.
  • Each of the second coding units 2510a, 2510b, 2520a, 2520b, etc. may be recursively split in a predetermined order, and the splitting method may correspond to the method by which the first coding unit 2500 is split based on at least one of the block shape information and the split shape information.
  • For example, the image decoding apparatus 100 may determine square third coding units 2512a and 2512b by splitting the left second coding unit 2510a in the horizontal direction, and may determine square third coding units 2514a and 2514b by splitting the right second coding unit 2510b in the horizontal direction. Furthermore, the image decoding apparatus 100 may determine square third coding units 2516a, 2516b, 2516c, and 2516d by splitting both the left second coding unit 2510a and the right second coding unit 2510b in the horizontal direction. In this case, coding units may be determined in the same form as when the first coding unit 2500 is split into the four square second coding units 2530a, 2530b, 2530c, and 2530d.
  • As another example, the image decoding apparatus 100 may determine square third coding units 2522a and 2522b by splitting the upper second coding unit 2520a in the vertical direction, and may determine square third coding units 2524a and 2524b by splitting the lower second coding unit 2520b in the vertical direction. Furthermore, the image decoding apparatus 100 may determine square third coding units 2526a, 2526b, 2526c, and 2526d by splitting both the upper second coding unit 2520a and the lower second coding unit 2520b in the vertical direction. In this case, coding units may be determined in the same form as when the first coding unit 2500 is split into the four square second coding units 2530a, 2530b, 2530c, and 2530d.
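  • The following is a small illustrative sketch of the equivalence noted above; it is not part of the patent, and the (x, y, width, height) representation and helper names are assumptions chosen only to show that two successive binary splits can yield the same partition as a single four-way square split.

```python
# Illustrative sketch (assumed (x, y, w, h) block tuples): splitting the first coding unit
# vertically and then each half horizontally yields the same four squares as a four-way split.

def split_vertical(block):
    x, y, w, h = block
    return [(x, y, w // 2, h), (x + w // 2, y, w // 2, h)]

def split_horizontal(block):
    x, y, w, h = block
    return [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)]

def split_quad(block):
    x, y, w, h = block
    return [(x, y, w // 2, h // 2), (x + w // 2, y, w // 2, h // 2),
            (x, y + h // 2, w // 2, h // 2), (x + w // 2, y + h // 2, w // 2, h // 2)]

first = (0, 0, 16, 16)
via_binary_splits = [c for half in split_vertical(first) for c in split_horizontal(half)]
print(sorted(via_binary_splits) == sorted(split_quad(first)))  # True: the same four square units
```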
  • FIG. 26 illustrates that a processing order between a plurality of coding units may vary according to a division process of coding units, according to an embodiment.
  • the image decoding apparatus 100 may divide the first coding unit 2600 based on the block shape information and the split shape information.
  • the image decoding apparatus 100 may split the first coding unit 2600.
  • Referring to FIG. 26, the non-square second coding units 2610a, 2610b, 2620a, and 2620b, determined by splitting the first coding unit 2600 only in the horizontal direction or only in the vertical direction, may each be split independently based on block shape information and split shape information for each of them.
  • the image decoding apparatus 100 may determine third coding units 2616a, 2616b, 2616c, and 2616d by splitting, in the horizontal direction, each of the second coding units 2610a and 2610b generated by splitting the first coding unit 2600 in the vertical direction, and may determine third coding units 2626a, 2626b, 2626c, and 2626d by splitting, in the vertical direction, each of the second coding units 2620a and 2620b generated by splitting the first coding unit 2600 in the horizontal direction. Since the splitting process of the second coding units 2610a, 2610b, 2620a, and 2620b has been described above with reference to FIG. 25, a detailed description thereof will be omitted.
  • the image decoding apparatus 100 may process coding units in a predetermined order. Features of processing coding units according to a predetermined order have been described above with reference to FIG. 21, and thus a detailed description thereof will be omitted. Referring to FIG. 26, the image decoding apparatus 100 may determine four square third coding units 2616a, 2616b, 2616c, and 2616d, or 2626a, 2626b, 2626c, and 2626d, by splitting the square first coding unit 2600.
  • According to an embodiment, the image decoding apparatus 100 may determine a processing order of the third coding units 2616a, 2616b, 2616c, 2616d, 2626a, 2626b, 2626c, and 2626d based on the form in which the first coding unit 2600 was split.
  • For example, the image decoding apparatus 100 may determine the third coding units 2616a, 2616b, 2616c, and 2616d by splitting, in the horizontal direction, each of the second coding units 2610a and 2610b generated by splitting in the vertical direction,
  • and the image decoding apparatus 100 may process the third coding units 2616a, 2616b, 2616c, and 2616d according to an order 2615 in which the third coding units 2616a and 2616c included in the left second coding unit 2610a are processed first in the vertical direction, and then the third coding units 2616b and 2616d included in the right second coding unit 2610b are processed in the vertical direction.
  • Likewise, the image decoding apparatus 100 may determine the third coding units 2626a, 2626b, 2626c, and 2626d by splitting, in the vertical direction, each of the second coding units 2620a and 2620b generated by splitting in the horizontal direction,
  • and the image decoding apparatus 100 may process the third coding units 2626a, 2626b, 2626c, and 2626d according to an order 2627 in which the third coding units 2626a and 2626b included in the upper second coding unit 2620a are processed first in the horizontal direction, and then the third coding units 2626c and 2626d included in the lower second coding unit 2620b are processed in the horizontal direction.
  • Referring to FIG. 26, each of the second coding units 2610a, 2610b, 2620a, and 2620b may be split, and the square third coding units 2616a, 2616b, 2616c, 2616d, 2626a, 2626b, 2626c, and 2626d may be determined.
  • Although the second coding units 2610a and 2610b determined by splitting in the vertical direction and the second coding units 2620a and 2620b determined by splitting in the horizontal direction have different forms, the third coding units 2616a, 2616b, 2616c, 2616d, 2626a, 2626b, 2626c, and 2626d determined afterwards eventually result in the first coding unit 2600 being split into coding units of the same form.
  • Accordingly, even when coding units of the same shape are determined as a result of recursively splitting a coding unit through different processes based on at least one of block shape information and split shape information, the image decoding apparatus 100 may process the plurality of coding units determined in the same shape in different orders.
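  • A minimal sketch of this observation follows; it is not from the patent, and the (x, y, width, height) tuples and helper names are assumptions used only to show that the same four square third coding units can be visited in two different orders depending on the split path.

```python
# Illustrative sketch (assumed (x, y, w, h) block tuples): the same four squares are
# produced whether the first coding unit is split vertically first or horizontally first,
# but the processing order of the resulting third coding units differs.

def split_vertical(block):
    x, y, w, h = block
    return [(x, y, w // 2, h), (x + w // 2, y, w // 2, h)]

def split_horizontal(block):
    x, y, w, h = block
    return [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)]

first = (0, 0, 16, 16)

# Vertical split first, then each half split horizontally (an order like 2615):
order_a = [c for half in split_vertical(first) for c in split_horizontal(half)]
# Horizontal split first, then each half split vertically (an order like 2627):
order_b = [c for half in split_horizontal(first) for c in split_vertical(half)]

print(order_a)                              # left column top-to-bottom, then right column
print(order_b)                              # top row left-to-right, then bottom row
print(sorted(order_a) == sorted(order_b))   # True: the same set of third coding units
```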
  • FIG. 27 illustrates a process of determining a depth of a coding unit as a shape and a size of a coding unit change when a coding unit is recursively divided to determine a plurality of coding units according to an embodiment.
  • the image decoding apparatus 100 may determine the depth of a coding unit according to a predetermined criterion.
  • the predetermined criterion may be the length of the long side of the coding unit.
  • For example, when the length of the long side of the current coding unit is 1/2^n (n>0) times the length of the long side of the coding unit before splitting, the depth of the current coding unit may be determined to be increased by n relative to the depth of the coding unit before splitting.
  • In the following description, a coding unit having an increased depth is referred to as a coding unit of a lower depth.
  • Referring to FIG. 27, based on block shape information indicating a square shape (for example, the block shape information may indicate '0: SQUARE'), the image decoding apparatus 100 may split the square first coding unit 2700 to determine a second coding unit 2702, a third coding unit 2704, or the like of a lower depth. If the size of the square first coding unit 2700 is 2Nx2N, the second coding unit 2702, determined by reducing the width and height of the first coding unit 2700 to 1/2, may have a size of NxN.
  • Furthermore, the third coding unit 2704, determined by reducing the width and height of the second coding unit 2702 to 1/2, may have a size of N/2xN/2.
  • the width and height of the third coding unit 2704 correspond to 1/4 times the first coding unit 2700.
  • When the depth of the first coding unit 2700 is D,
  • the depth of the second coding unit 2702, whose width and height are 1/2 those of the first coding unit 2700, may be D+1,
  • and the depth of the third coding unit 2704, whose width and height are 1/4 those of the first coding unit 2700, may be D+2.
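  • As a small illustration of the depth criterion above (a sketch, not the patent's normative definition), the depth increase can be computed as the number of times the long side has been halved; the helper name and the 64x64 example size are assumptions.

```python
# Illustrative sketch: the depth increase n is the number of halvings of the long side
# relative to the coding unit before splitting, as described above.

def depth_increase(parent_size, child_size):
    """parent_size, child_size: (width, height). Returns n such that the child's
    long side is 1/2**n of the parent's long side."""
    parent_long = max(parent_size)
    child_long = max(child_size)
    n = 0
    while child_long < parent_long:
        parent_long //= 2
        n += 1
    return n

# 2Nx2N -> NxN -> N/2 x N/2 with N = 32: depths D, D+1, D+2
print(depth_increase((64, 64), (32, 32)))  # 1
print(depth_increase((64, 64), (16, 16)))  # 2
```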
  • According to an embodiment, based on block shape information indicating a non-square shape (for example, the block shape information may indicate '1: NS_VER', a non-square shape whose height is longer than its width, or '2: NS_HOR', a non-square shape whose width is longer than its height), the image decoding apparatus 100 may split the non-square first coding unit 2710 or 2720 to determine the second coding unit 2712 or 2722, the third coding unit 2714 or 2724, or the like of a lower depth.
  • the image decoding apparatus 100 may determine a second coding unit (for example, 2702, 2712, 2722, etc.) by splitting at least one of the width and the height of the Nx2N first coding unit 2710. That is, the image decoding apparatus 100 may split the first coding unit 2710 in the horizontal direction to determine the second coding unit 2702 of size NxN or the second coding unit 2722 of size NxN/2,
  • or may split it in the horizontal and vertical directions to determine the second coding unit 2712 of size N/2xN.
  • the image decoding apparatus 100 may determine a second coding unit (for example, 2702, 2712, 2722, etc.) by splitting at least one of the width and the height of the 2NxN first coding unit 2720. That is, the image decoding apparatus 100 may split the first coding unit 2720 in the vertical direction to determine the second coding unit 2702 of size NxN or the second coding unit 2712 of size N/2xN,
  • or may split it in the horizontal and vertical directions to determine the second coding unit 2722 of size NxN/2.
  • According to an embodiment, the image decoding apparatus 100 may determine a third coding unit (for example, 2704, 2714, 2724, etc.) by splitting at least one of the width and the height of the NxN second coding unit 2702. That is, the image decoding apparatus 100 may split the second coding unit 2702 in the vertical and horizontal directions to determine the third coding unit 2704 of size N/2xN/2, the third coding unit 2714 of size N/4xN/2, or the third coding unit 2724 of size N/2xN/4.
  • According to an embodiment, the image decoding apparatus 100 may determine a third coding unit (for example, 2704, 2714, 2724, etc.) by splitting at least one of the width and the height of the N/2xN second coding unit 2712. That is, the image decoding apparatus 100 may split the second coding unit 2712 in the horizontal direction to determine the third coding unit 2704 of size N/2xN/2 or the third coding unit 2724 of size N/2xN/4, or may split it in the vertical and horizontal directions to determine the third coding unit 2714 of size N/4xN/2.
  • Likewise, the image decoding apparatus 100 may determine a third coding unit (for example, 2704, 2714, 2724, etc.) by splitting at least one of the width and the height of the NxN/2 second coding unit 2722. That is, the image decoding apparatus 100 may split the second coding unit 2722 in the vertical direction to determine the third coding unit 2704 of size N/2xN/2 or the third coding unit 2714 of size N/4xN/2, or may split it in the vertical and horizontal directions to determine the third coding unit 2724 of size N/2xN/4.
  • the image decoding apparatus 100 may divide a coding unit having a square shape (for example, 2700, 2702, and 2704) in a horizontal direction or a vertical direction.
  • For example, the first coding unit 2710 of size Nx2N may be determined by splitting the first coding unit 2700 of size 2Nx2N in the vertical direction, or the first coding unit 2720 of size 2NxN may be determined by splitting it in the horizontal direction.
  • According to an embodiment, when depth is determined based on the length of the longest side of a coding unit, the depth of a coding unit determined by splitting the first coding unit 2700 of size 2Nx2N in the horizontal or vertical direction may be equal to the depth of the first coding unit 2700.
  • the width and height of the third coding unit 2714 or 2724 may correspond to 1/4 times the first coding unit 2710 or 2720.
  • When the depth of the first coding unit 2710 or 2720 is D,
  • the depth of the second coding unit 2712 or 2722, whose width and height are 1/2 those of the first coding unit 2710 or 2720, may be D+1,
  • and the depth of the third coding unit 2714 or 2724, whose width and height are 1/4 those of the first coding unit 2710 or 2720, may be D+2.
  • FIG. 28 illustrates a depth and a part index (PID) for classifying coding units, which may be determined according to the shape and size of coding units, according to an embodiment.
  • the image decoding apparatus 100 may determine a second coding unit having various forms by dividing the first coding unit 2800 having a square shape. Referring to FIG. 28, the image decoding apparatus 100 divides the first coding unit 2800 in at least one of a vertical direction and a horizontal direction according to the split type information to form second coding units 2802a, 2802b, 2804a, 2804b, 2806a, 2806b, 2806c, and 2806d. That is, the image decoding apparatus 100 may determine the second coding units 2802a, 2802b, 2804a, 2804b, 2806a, 2806b, 2806c, and 2806d based on the split shape information about the first coding unit 2800.
  • According to an embodiment, the depth of the second coding units 2802a, 2802b, 2804a, 2804b, 2806a, 2806b, 2806c, and 2806d, determined according to the split shape information about the square first coding unit 2800, may be determined based on the length of their long sides. For example, since the length of one side of the square first coding unit 2800 and the length of the long side of the non-square second coding units 2802a, 2802b, 2804a, and 2804b are the same, the first coding unit 2800 and the non-square second coding units 2802a, 2802b, 2804a, and 2804b may be regarded as having the same depth D.
  • In contrast, when the image decoding apparatus 100 splits the first coding unit 2800 into the four square second coding units 2806a, 2806b, 2806c, and 2806d based on the split shape information, the length of one side of the second coding units 2806a, 2806b, 2806c, and 2806d is 1/2 the length of one side of the first coding unit 2800, and thus the depth of the second coding units 2806a, 2806b, 2806c, and 2806d may be D+1, one depth lower than the depth D of the first coding unit 2800.
  • According to an embodiment, the image decoding apparatus 100 may determine a plurality of second coding units 2812a, 2812b, 2814a, 2814b, and 2814c by splitting the first coding unit 2810, whose height is longer than its width, in the horizontal direction according to the split shape information.
  • Likewise, the image decoding apparatus 100 may determine a plurality of second coding units 2822a, 2822b, 2824a, 2824b, and 2824c by splitting the first coding unit 2820, whose width is longer than its height, in the vertical direction according to the split shape information.
  • The depth of the second coding units 2812a, 2812b, 2814a, 2814b, 2814c, 2822a, 2822b, 2824a, 2824b, and 2824c, determined according to the split shape information about the non-square first coding unit 2810 or 2820, may also be determined based on the length of their long sides.
  • For example, since the length of one side of the square second coding units 2812a and 2812b is 1/2 the length of one side of the non-square first coding unit 2810, whose height is longer than its width, the depth of the square second coding units 2812a and 2812b is D+1, one depth lower than the depth D of the non-square first coding unit 2810.
  • Furthermore, the image decoding apparatus 100 may split the non-square first coding unit 2810 into an odd number of second coding units 2814a, 2814b, and 2814c based on the split shape information.
  • The odd number of second coding units 2814a, 2814b, and 2814c may include the non-square second coding units 2814a and 2814c and the square second coding unit 2814b.
  • In this case, since the length of the long side of the non-square second coding units 2814a and 2814c and the length of one side of the square second coding unit 2814b are 1/2 the length of one side of the first coding unit 2810,
  • the depth of the second coding units 2814a, 2814b, and 2814c may be D+1, one depth lower than the depth D of the first coding unit 2810.
  • According to an embodiment, the image decoding apparatus 100 may determine the depths of coding units related to the first coding unit 2820, whose width is longer than its height, in a manner corresponding to the above-described method of determining the depths of coding units related to the first coding unit 2810.
  • According to an embodiment, when the split coding units are not of equal size, the image decoding apparatus 100 may determine the size ratio between the coding units, and the indexes for distinguishing the coding units may be determined based on this ratio. Referring to FIG. 28, the coding unit 2814b positioned in the center of the odd number of split coding units 2814a, 2814b, and 2814c has the same width as the other coding units 2814a and 2814c, but its height may be twice the height of the coding units 2814a and 2814c. That is, in this case, the coding unit 2814b positioned in the center may be counted as two of the other coding units 2814a or 2814c.
  • According to an embodiment, the image decoding apparatus 100 may determine whether the coding units split into an odd number are not of the same size, based on whether there is a discontinuity in the indexes for distinguishing the split coding units.
  • According to an embodiment, the image decoding apparatus 100 may determine whether a coding unit is split into a specific split form, based on the values of the indexes for distinguishing the plurality of coding units determined by splitting the current coding unit. Referring to FIG. 28, the image decoding apparatus 100 may determine an even number of coding units 2812a and 2812b, or an odd number of coding units 2814a, 2814b, and 2814c, by splitting the first coding unit 2810, whose height is longer than its width. The image decoding apparatus 100 may use an index (PID) indicating each coding unit to distinguish the plurality of coding units. According to an embodiment, the PID may be obtained from a sample (for example, the upper-left sample) at a predetermined position of each coding unit.
  • According to an embodiment, the image decoding apparatus 100 may determine a coding unit at a predetermined position among the coding units determined by splitting, by using the indexes for distinguishing the coding units. According to an embodiment, when the split shape information about the first coding unit 2810, whose height is longer than its width, indicates splitting into three coding units, the image decoding apparatus 100 may split the first coding unit 2810 into the three coding units 2814a, 2814b, and 2814c. The image decoding apparatus 100 may allocate an index to each of the three coding units 2814a, 2814b, and 2814c. The image decoding apparatus 100 may compare the indexes of the coding units to determine the center coding unit among the coding units split into an odd number.
  • the image decoding apparatus 100 may determine the coding unit 2814b, whose index corresponds to the center value among the indexes, as the coding unit at the center position among the coding units determined by splitting the first coding unit 2810, based on the indexes of the coding units. According to an embodiment, when determining the indexes for distinguishing the split coding units, the image decoding apparatus 100 may determine the indexes based on the size ratio between the coding units if the coding units are not of equal size. Referring to FIG. 28, the coding unit 2814b generated by splitting the first coding unit 2810 may have the same width as, but a different height from, the other coding units 2814a and 2814c.
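  • The following is an illustrative sketch of index assignment in proportion to coding-unit size, as described above; it is not the patent's normative procedure, and the helper names, the 8/16/8 example heights, and the center-selection rule are assumptions made only for this example.

```python
# Illustrative sketch: PIDs advance in proportion to coding-unit size, so the taller
# middle unit consumes two index values; a skipped value reveals the unequal split,
# and the unit whose index matches the center value is taken as the center coding unit.

def assign_pids(heights, base_height):
    """Assign a PID to each split coding unit in scan order; a unit k times the
    base height advances the index by k."""
    pids, pid = [], 0
    for h in heights:
        pids.append(pid)
        pid += h // base_height
    return pids

heights = [8, 16, 8]                       # e.g. coding units 2814a, 2814b, 2814c
pids = assign_pids(heights, base_height=8)
print(pids)                                # [0, 1, 3]: the skipped value 2 reveals unequal sizes

center_value = pids[-1] // 2               # center value within the used index range
center = max(i for i, p in enumerate(pids) if p <= center_value)
print(center)                              # 1 -> the middle coding unit 2814b
```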
  • According to an embodiment, the image decoding apparatus 100 may determine that the current coding unit is split into a plurality of coding units including a coding unit whose size differs from that of the other coding units. In this case, when the split shape information indicates splitting into an odd number of coding units, the image decoding apparatus 100 may split the current coding unit such that the coding unit at a predetermined position among the odd number of coding units (for example, the center coding unit) has a size different from that of the other coding units.
  • In this case, the image decoding apparatus 100 may determine the coding unit of the different size by using the indexes (PIDs) of the coding units.
  • However, the above-described indexes, and the size and position of the coding unit at the predetermined position to be determined, are merely examples for describing an embodiment and should not be construed as being limited thereto; various indexes and various positions and sizes of coding units may be used.
  • the image decoding apparatus 100 may use a predetermined data unit at which recursive division of coding units begins.
  • FIG. 29 illustrates that a plurality of coding units are determined according to a plurality of predetermined data units included in a picture according to an embodiment.
  • the predetermined data unit may be defined as a data unit in which a coding unit starts to be recursively divided using at least one of block shape information and split shape information. That is, it may correspond to the coding unit of the highest depth used in the process of determining a plurality of coding units for dividing the current picture.
  • a predetermined data unit will be referred to as a reference data unit.
  • the reference data unit may represent a predetermined size and shape.
  • According to an embodiment, the reference data unit may include MxN samples. M and N may be equal to each other, and each may be an integer expressed as a power of two. That is, the reference data unit may have a square or non-square shape, and may later be split into an integer number of coding units.
  • the image decoding apparatus 100 may divide the current picture into a plurality of reference data units. According to an embodiment, the image decoding apparatus 100 may split each of the plurality of reference data units obtained by dividing the current picture, by using split shape information for each reference data unit. The splitting process of the reference data unit may correspond to a splitting process using a quad-tree structure.
  • According to an embodiment, the image decoding apparatus 100 may predetermine the minimum size that the reference data unit included in the current picture may have. Accordingly, the image decoding apparatus 100 may determine reference data units of various sizes that are greater than or equal to the minimum size, and may determine at least one coding unit based on each determined reference data unit by using block shape information and split shape information.
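  • The following is a minimal sketch of how a picture could be covered by reference data units whose dimensions are powers of two, as described above; it is not from the patent, and the picture and reference data unit sizes used here are arbitrary example values.

```python
# Illustrative sketch: covering a picture with MxN reference data units, where M and N
# are powers of two, as described above.

def is_power_of_two(v):
    return v > 0 and (v & (v - 1)) == 0

def reference_data_units(pic_w, pic_h, ref_w, ref_h):
    """Yield the top-left positions of the reference data units covering the picture."""
    assert is_power_of_two(ref_w) and is_power_of_two(ref_h), "M and N must be powers of two"
    for y in range(0, pic_h, ref_h):
        for x in range(0, pic_w, ref_w):
            yield (x, y)

units = list(reference_data_units(pic_w=128, pic_h=64, ref_w=64, ref_h=32))
print(len(units), units[:3])  # 4 reference data units, in raster order
```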
  • the image decoding apparatus 100 may use a reference coding unit 2900 in a square shape, or may use a reference coding unit 2902 in a non-square shape.
  • According to an embodiment, the shape and size of the reference coding unit may be determined based on various data units (for example, a sequence, a picture, a slice, a slice segment, a maximum coding unit, etc.) that may include at least one reference coding unit.
  • According to an embodiment, the receiver 110 of the image decoding apparatus 100 may obtain, from the bitstream, at least one of information about the shape of the reference coding unit and information about the size of the reference coding unit for each of the various data units.
  • The process of determining at least one coding unit included in the square reference coding unit 2900 has been described above through the process of splitting the current coding unit 1700 of FIG. 17, and the process of determining at least one coding unit included in the non-square reference coding unit 2902 has been described above through the process of splitting the current coding unit 1800 or 1850 of FIG. 18; thus, a detailed description thereof will be omitted.
  • According to an embodiment, in order to determine the size and shape of the reference coding unit according to some data units predetermined based on a predetermined condition, the image decoding apparatus 100 may use an index identifying the size and shape of the reference coding unit for each data unit that satisfies the predetermined condition (for example, a data unit having a size less than or equal to a slice) among the various data units (for example, a sequence, a picture, a slice, a slice segment, a maximum coding unit, etc.).
  • the image decoding apparatus 100 may determine the size and shape of the reference data unit for each data unit satisfying the predetermined condition by using the index.
  • If the information about the shape and the size of the reference coding unit were obtained from the bitstream for every relatively small data unit, the use efficiency of the bitstream might be poor; therefore, instead of directly obtaining the information about the shape and the size of the reference coding unit, only the index may be obtained and used.
  • at least one of the size and shape of the reference coding unit corresponding to the index indicating the size and shape of the reference coding unit may be predetermined.
  • That is, the image decoding apparatus 100 may select at least one of the predetermined sizes and shapes of the reference coding unit according to the index, thereby determining at least one of the size and the shape of the reference coding unit included in the data unit that is the basis for obtaining the index.
  • the image decoding apparatus 100 may use at least one reference coding unit included in one maximum coding unit. That is, at least one reference coding unit may be included in the maximum coding unit for dividing an image, and the coding unit may be determined through a recursive division process of each reference coding unit. According to an embodiment, at least one of the width and the height of the maximum coding unit may correspond to an integer multiple of at least one of the width and the height of the reference coding unit. According to an embodiment, the size of the reference coding unit may be a size obtained by dividing the maximum coding unit n times according to a quad tree structure.
  • That is, the image decoding apparatus 100 may determine the reference coding unit by splitting the maximum coding unit n times according to the quad-tree structure, and according to various embodiments, the reference coding unit may be split based on at least one of block shape information and split shape information.
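  • As a small illustration (a sketch under the assumption of a square maximum coding unit, not the patent's normative derivation), the size of the reference coding unit after n quad-tree splits of the maximum coding unit can be computed as follows; the 128-sample example size is an assumption.

```python
# Illustrative sketch: each quad-tree split halves the width and the height, so splitting
# the maximum coding unit n times divides its size by 2**n.

def reference_cu_size(max_cu_size, n):
    """Size of the reference coding unit after n quad-tree splits of a square maximum coding unit."""
    size = max_cu_size >> n
    assert size >= 1, "n is too large for the given maximum coding unit"
    return size

print(reference_cu_size(128, 0))  # 128: the reference coding unit equals the maximum coding unit
print(reference_cu_size(128, 2))  # 32: after two quad-tree splits
```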
  • FIG. 30 is a diagram of a processing block serving as a reference for determining a determination order of a reference coding unit included in a picture 3000, according to an exemplary embodiment.
  • the image decoding apparatus 100 may determine at least one processing block for dividing a picture.
  • the processing block is a data unit including at least one reference coding unit for dividing an image, and the at least one reference coding unit included in the processing block may be determined in a specific order. That is, the determination order of the at least one reference coding unit determined in each processing block may correspond to one of the various types of orders in which reference coding units may be determined, and the reference coding unit determination order determined in each processing block may differ from one processing block to another.
  • The determination order of the reference coding units determined for each processing block may be one of various orders, such as a raster scan, a Z-scan, an N-scan, an up-right diagonal scan, a horizontal scan, and a vertical scan, but the orders that may be determined should not be construed as being limited to these scan orders.
  • the image decoding apparatus 100 may determine the size of at least one processing block included in the image by obtaining information about the size of the processing block.
  • the image decoding apparatus 100 may determine the size of at least one processing block included in the image by obtaining information about the size of the processing block from the bitstream.
  • the size of such a processing block may be a predetermined size of a data unit indicated by the information about the size of the processing block.
  • the receiver 110 of the image decoding apparatus 100 may obtain information about the size of a processing block from a bitstream for each specific data unit.
  • According to an embodiment, the information about the size of the processing block may be obtained from the bitstream in data units such as an image, a sequence, a picture, a slice, and a slice segment. That is, the receiver 110 may obtain the information about the size of the processing block from the bitstream for each of these data units, and the image decoding apparatus 100 may determine the size of at least one processing block for dividing the picture by using the obtained information about the size of the processing block;
  • the size of the processing block may be an integer multiple of the size of the reference coding unit.
  • According to an embodiment, the image decoding apparatus 100 may determine the sizes of the processing blocks 3002 and 3012 included in the picture 3000. For example, the image decoding apparatus 100 may determine the size of a processing block based on the information about the size of the processing block obtained from the bitstream. Referring to FIG. 30, the image decoding apparatus 100 according to an embodiment may determine the horizontal size of the processing blocks 3002 and 3012 to be four times the horizontal size of the reference coding unit, and the vertical size to be four times the vertical size of the reference coding unit. The image decoding apparatus 100 may determine an order in which at least one reference coding unit is determined within the at least one processing block.
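  • A minimal sketch of the example above follows; it is not from the patent, and the reference coding unit size of 16 and the picture dimensions are assumed example values used only to show a processing block whose width and height are four times those of the reference coding unit.

```python
# Illustrative sketch: a processing block whose width and height are four times those of
# the reference coding unit, and the processing blocks covering a picture.

REF_CU = 16                      # reference coding unit size (example value)
PROC_BLOCK = 4 * REF_CU          # processing block size: an integer multiple of REF_CU

def processing_blocks(pic_w, pic_h, block=PROC_BLOCK):
    """Top-left positions of the processing blocks covering the picture, in raster order."""
    return [(x, y) for y in range(0, pic_h, block) for x in range(0, pic_w, block)]

print(processing_blocks(128, 64))  # e.g. [(0, 0), (64, 0)] for two side-by-side blocks
```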
  • According to an embodiment, the image decoding apparatus 100 may determine the processing blocks 3002 and 3012 included in the picture 3000 based on the size of the processing block,
  • and may determine a determination order of at least one reference coding unit included in the processing blocks 3002 and 3012.
  • the determination of the reference coding unit may include the determination of the size of the reference coding unit.
  • According to an embodiment, the image decoding apparatus 100 may obtain, from the bitstream, information about a determination order of at least one reference coding unit included in at least one processing block, and may determine the order in which the at least one reference coding unit is determined based on the obtained information.
  • The information about the determination order may be defined as an order or a direction in which reference coding units are determined within the processing block. That is, the order in which reference coding units are determined may be independently determined for each processing block.
  • the image decoding apparatus 100 may obtain information about a determination order of a reference coding unit from a bitstream for each specific data unit.
  • the receiver 110 may obtain information about a determination order of a reference coding unit from a bitstream for each data unit such as an image, a sequence, a picture, a slice, a slice segment, and a processing block. Since the information about the determination order of the reference coding unit indicates the determination order of the reference coding unit in the processing block, the information about the determination order may be obtained for each specific data unit including an integer number of processing blocks.
  • the image decoding apparatus 100 may determine at least one reference coding unit based on the order determined according to the embodiment.
  • That is, the receiver 110 may obtain, from the bitstream, information about the determination order of the reference coding unit as information related to the processing blocks 3002 and 3012, and the image decoding apparatus 100 may determine an order of determining at least one reference coding unit included in the processing blocks 3002 and 3012 and may determine at least one reference coding unit included in the picture 3000 according to that determination order. Referring to FIG. 30, the image decoding apparatus 100 may determine the determination orders 3004 and 3014 of the at least one reference coding unit associated with the processing blocks 3002 and 3012, respectively.
  • the reference coding unit determination order associated with each processing block 3002 or 3012 may be different for each processing block.
  • For example, when the reference coding unit determination order 3004 associated with the processing block 3002 is a raster scan order,
  • the reference coding units included in the processing block 3002 may be determined according to the raster scan order.
  • In contrast, when the reference coding unit determination order 3014 associated with the other processing block 3012 is the reverse of the raster scan order,
  • the reference coding units included in the processing block 3012 may be determined according to the reverse of the raster scan order.
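  • The following sketch (not from the patent; the 4x4 grid of reference coding units is an assumed example) illustrates visiting the reference coding units of a processing block in raster scan order and in the reverse of the raster scan order, as described for the processing blocks 3002 and 3012.

```python
# Illustrative sketch: positions of reference coding units inside a processing block,
# visited in raster scan order or in the reverse of the raster scan order.

def raster_order(cols, rows):
    return [(c, r) for r in range(rows) for c in range(cols)]

def reverse_raster_order(cols, rows):
    return list(reversed(raster_order(cols, rows)))

# A processing block holding a 4x4 grid of reference coding units
print(raster_order(4, 4)[:4])          # [(0, 0), (1, 0), (2, 0), (3, 0)]
print(reverse_raster_order(4, 4)[:4])  # [(3, 3), (2, 3), (1, 3), (0, 3)]
```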
  • the image decoding apparatus 100 may decode at least one determined reference coding unit according to an embodiment.
  • the image decoding apparatus 100 may decode an image based on the reference coding unit determined through the above-described embodiment.
  • the method of decoding the reference coding unit may include various methods of decoding an image.
  • the image decoding apparatus 100 may obtain and use block shape information indicating a shape of a current coding unit or split shape information indicating a method of dividing a current coding unit from a bitstream.
  • Block type information or split type information may be included in a bitstream associated with various data units.
  • That is, the image decoding apparatus 100 may use block shape information or split shape information included in a sequence parameter set, a picture parameter set, a video parameter set, a slice header, or a slice segment header.
  • Furthermore, the image decoding apparatus 100 may obtain, from the bitstream, a syntax element corresponding to the block shape information or the split shape information for each maximum coding unit, reference coding unit, or processing block, and may use it.
  • the above-described embodiments of the present disclosure may be written as a program executable on a computer, and may be implemented in a general-purpose digital computer operating the program using a computer-readable recording medium.
  • the computer-readable recording medium may include a storage medium such as a magnetic storage medium (eg, a ROM, a floppy disk, a hard disk, etc.) and an optical reading medium (eg, a CD-ROM, a DVD, etc.).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention aims to more accurately predict motion information of pixels of a current block on the basis of a plurality of pieces of motion information related to the current block. A video decoding method comprises the steps of: when a prediction mode of the current block is an affine mode, obtaining first- and second-direction motion components included in motion information of a first reference pixel of the current block from a received bitstream; obtaining a first-direction motion component included in motion information of a second reference pixel from the bitstream; obtaining a second-direction motion component included in the motion information of the second reference pixel; obtaining motion information of a third reference pixel of the current block on the basis of the motion information of the first and second reference pixels; and obtaining motion information of the pixels included in the current block on the basis of the height and width of the current block, the motion information of the first reference pixel, the motion information of the second reference pixel, and the motion information of the third reference pixel.
PCT/KR2018/003658 2017-03-28 2018-03-28 Procédé et appareil de codage vidéo, et procédé et appareil de décodage vidéo WO2018182310A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020197019480A KR102243215B1 (ko) 2017-03-28 2018-03-28 비디오 부호화 방법 및 장치, 비디오 복호화 방법 및 장치

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762477694P 2017-03-28 2017-03-28
US62/477,694 2017-03-28

Publications (1)

Publication Number Publication Date
WO2018182310A1 true WO2018182310A1 (fr) 2018-10-04

Family

ID=63676333

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/003658 WO2018182310A1 (fr) 2017-03-28 2018-03-28 Procédé et appareil de codage vidéo, et procédé et appareil de décodage vidéo

Country Status (2)

Country Link
KR (1) KR102243215B1 (fr)
WO (1) WO2018182310A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101366242B1 (ko) * 2007-03-29 2014-02-20 삼성전자주식회사 움직임 모델 파라메터의 부호화, 복호화 방법 및 움직임모델 파라메터를 이용한 영상의 부호화, 복호화 방법 및장치
KR101003105B1 (ko) * 2008-01-29 2010-12-21 한국전자통신연구원 어파인 변환 기반의 움직임 보상을 이용한 비디오 부호화 및 복호화 방법 및 장치
KR20150087207A (ko) * 2013-07-12 2015-07-29 삼성전자주식회사 변이 벡터 유도를 사용하는 비디오 부호화 방법 및 그 장치, 비디오 복호화 방법 및 그 장치
WO2017022973A1 (fr) * 2015-08-04 2017-02-09 엘지전자 주식회사 Procédé d'interprédiction, et dispositif, dans un système de codage vidéo
KR20170001704A (ko) * 2016-12-26 2017-01-04 삼성전자주식회사 영상 복호화 방법 및 장치

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11546602B2 (en) 2018-08-24 2023-01-03 Samsung Electronics Co., Ltd. Method and apparatus for image encoding, and method and apparatus for image decoding
US12015781B2 (en) 2018-08-24 2024-06-18 Samsung Electronics Co., Ltd. Image encoding and decoding of chroma block using luma block
US11558622B2 (en) 2018-10-09 2023-01-17 Samsung Electronics Co., Ltd. Video decoding method and apparatus, and video encoding method and apparatus involving sub-block merge index context and bypass model
US12149700B2 (en) 2018-10-09 2024-11-19 Samsung Electronics Co., Ltd. Video decoding method and apparatus, and video encoding method and apparatus involving sub-block merge index context and bypass model
US11425416B2 (en) 2018-12-07 2022-08-23 Samsung Electronics Co., Ltd. Video decoding method and device, and video encoding method and device
US11943469B2 (en) 2018-12-07 2024-03-26 Samsung Electronics Co., Ltd. Video decoding method and device, and video encoding method and device
KR20200112752A (ko) * 2019-03-21 2020-10-05 삼성전자주식회사 블록 형태별로 블록 크기가 설정되는 비디오 부호화 방법 및 장치, 비디오 복호화 방법 및 장치
KR102213901B1 (ko) 2019-03-21 2021-02-08 삼성전자주식회사 블록 형태별로 블록 크기가 설정되는 비디오 부호화 방법 및 장치, 비디오 복호화 방법 및 장치
US11431975B2 (en) 2019-03-21 2022-08-30 Samsung Electronics Co., Ltd. Method and device for encoding video having block size set for each block shape, and method and device for decoding video
US11979569B2 (en) 2019-03-21 2024-05-07 Samsung Electronics Co., Ltd. Method and device for encoding video having block size set for each block shape, and method and device for decoding video
US12316840B2 (en) 2024-04-03 2025-05-27 Samsung Electronics Co., Ltd. Method and device for encoding video having block size set for each block shape, and method and device for decoding video

Also Published As

Publication number Publication date
KR20190088557A (ko) 2019-07-26
KR102243215B1 (ko) 2021-04-22

Similar Documents

Publication Publication Date Title
WO2017082698A1 (fr) Procédé et appareil de décodage de vidéo, et procédé et appareil de codage de vidéo
WO2017090993A1 (fr) Procédé et dispositif de décodage vidéo et procédé et dispositif de codage vidéo
WO2017142335A1 (fr) Procédé de décodage de vidéo et dispositif pour cela, et procédé de codage de vidéo et dispositif pour cela
WO2018084523A1 (fr) Procédé d'encodage et dispositif associé, et procédé de décodage et dispositif associé
WO2019168244A1 (fr) Procédé de codage et dispositif associé, et procédé de décodage et dispositif associé
WO2021025451A1 (fr) Procédé et appareil de codage/décodage vidéo au moyen d'un candidat d'informations de mouvement, et procédé de transmission de flux binaire
WO2017135759A1 (fr) Procédé et appareil de décodage de vidéo par transformation multiple de chrominance, et procédé et appareil de codage de vidéo par transformation multiple de chrominance
WO2017105097A1 (fr) Procédé de décodage vidéo et appareil de décodage vidéo utilisant une liste de candidats de fusion
WO2019216716A2 (fr) Procédé de codage et dispositif associé, et procédé de décodage et dispositif associé
WO2017171107A1 (fr) Procédé de traitement d'image basé sur un mode d'inter-prédiction, et appareil associé
WO2017026681A1 (fr) Procédé et dispositif d'interprédiction dans un système de codage vidéo
WO2011068360A2 (fr) Procédé et appareil pour coder/décoder des images de haute résolution
WO2018182310A1 (fr) Procédé et appareil de codage vidéo, et procédé et appareil de décodage vidéo
WO2020027551A1 (fr) Procédé et appareil de codage d'image, et procédé et appareil de décodage d'image
WO2017105141A1 (fr) Procédé de codage/décodage d'image et dispositif associé
WO2018124627A1 (fr) Procédé de codage et dispositif associé, et procédé de décodage et dispositif associé
WO2019199127A1 (fr) Procédé de codage et dispositif associé, et procédé de décodage et dispositif associé
WO2019139309A1 (fr) Procédé de codage et appareil correspondant, et procédé de décodage et appareil correspondant
WO2020130745A1 (fr) Procédé de codage et dispositif associé, et procédé de décodage et dispositif associé
WO2018012893A1 (fr) Procédé de codage/décodage d'image, et appareil correspondant
WO2023043226A1 (fr) Procédé de codage/décodage de signal vidéo, et support d'enregistrement ayant un flux binaire stocké sur celui-ci
WO2020117010A1 (fr) Procédé et dispositif de décodage de vidéo et procédé et dispositif d'encodage de vidéo
WO2019017673A1 (fr) Procédé de codage et appareil associé, et procédé de décodage et appareil associé
WO2018194189A1 (fr) Procédé de codage/décodage d'image et dispositif associé
WO2022177383A1 (fr) Appareil de codage et de décodage d'image sur la base de l'ia et procédé associé

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18778027

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20197019480

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18778027

Country of ref document: EP

Kind code of ref document: A1

点击 这是indexloc提供的php浏览器服务,不要输入任何密码和下载