US20110274162A1 - Coding Unit Quantization Parameters in Video Coding - Google Patents
- Publication number: US20110274162A1 (application US 13/093,715)
- Authority: US (United States)
- Prior art keywords: coding unit, coded, quantization parameter, coding, largest
- Legal status: Abandoned (status is assumed; Google has not performed a legal analysis and makes no representation as to its accuracy)
Classifications
All classifications fall under H—Electricity > H04—Electric communication technique > H04N—Pictorial communication, e.g. television > H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals:
- H04N19/124—Quantisation
- H04N19/117—Filters, e.g. for pre-processing or post-processing
- H04N19/13—Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
- H04N19/15—Data rate or code amount at the encoder output by monitoring actual compressed data size at the memory before deciding storage at the transmission buffer
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
- H04N19/167—Position within a video image, e.g. region of interest [ROI]
- H04N19/172—Adaptive coding where the coding unit is an image region being a picture, frame or field
- H04N19/176—Adaptive coding where the coding unit is an image region being a block, e.g. a macroblock
- H04N19/184—Adaptive coding where the coding unit is bits, e.g. of the compressed video stream
- H04N19/196—Adaptation specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
- H04N19/43—Hardware specially adapted for motion estimation or compensation
- H04N19/463—Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
- H04N19/61—Transform coding in combination with predictive coding
Definitions
- Video applications span communication (e.g., video communication, security and surveillance, industrial automation) and entertainment (e.g., DV, HDTV, satellite TV, set-top boxes, Internet video streaming, video gaming devices, digital cameras, cellular telephones, video jukeboxes, high-end displays, and personal video recorders).
- Video applications are becoming increasingly mobile as a result of higher computation power in handsets, advances in battery technology, and high-speed wireless connectivity.
- Video compression, i.e., video coding, techniques apply prediction, transformation, quantization, and entropy coding to sequential blocks of pixels, i.e., macroblocks, in a video sequence to compress, i.e., encode, the video sequence.
- A macroblock is defined as a 16×16 rectangular block of pixels in a frame or slice of a video sequence, where a frame is a complete image captured during a known time interval.
- A quantization parameter (QP) may be used to modulate the step size of the quantization for each macroblock. Quantization of a transform coefficient involves dividing the coefficient by a quantization step size.
- The quantization step size, which may also be referred to as the quantization scale, is defined by the standard based on the QP value, which may be an integer within some range, e.g., 0 . . . 51. A step size for a QP value may be determined, for example, using a table lookup and/or by computational derivation.
- The quality and bit rate of the compressed bit stream are largely determined by the QP value selected for quantizing each macroblock. That is, the quantization step size (Qs) used to quantize a macroblock regulates how much spatial detail is retained in the compressed macroblock. The smaller the Qs, the more detail is retained and the better the quality, but at the cost of a higher bit rate. As the Qs increases, less detail is retained and the bit rate decreases, but at the cost of increased distortion and loss of quality.
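As a rough illustration of the QP-to-step-size relationship described above, the sketch below uses the H.264/AVC convention in which the step size doubles for every increase of 6 in QP. The base value 0.625 and the 0..51 range are H.264/AVC-specific assumptions, not taken from this patent:

```python
# Sketch (H.264/AVC-style convention, assumed for illustration): the
# quantization step size doubles for every increase of 6 in QP.

def quantization_step(qp: int) -> float:
    """Return the quantization step size for an H.264-style QP in 0..51."""
    if not 0 <= qp <= 51:
        raise ValueError("QP must be in 0..51")
    return 0.625 * (2.0 ** (qp / 6.0))

def quantize(coefficient: float, qp: int) -> int:
    """Quantize a transform coefficient by dividing by the step size."""
    return round(coefficient / quantization_step(qp))
```

For example, QP 12 gives a step size of 2.5, so a coefficient of 100 quantizes to 40; raising QP to 18 doubles the step size and halves the quantized magnitude, trading detail for bit rate as described above.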
- FIG. 1 shows a block diagram of a digital system in accordance with one or more embodiments.
- FIG. 2 shows an example of a recursive quadtree structure in accordance with one or more embodiments.
- FIGS. 3A and 3B show block diagrams of a video encoder in accordance with one or more embodiments.
- FIGS. 4-8 show examples in accordance with one or more embodiments.
- FIG. 9 shows a block diagram of a video decoder in accordance with one or more embodiments.
- FIG. 10 shows an example in accordance with one or more embodiments.
- FIGS. 11 and 12 show flow diagrams of methods in accordance with one or more embodiments.
- FIG. 13 shows a block diagram of an illustrative digital system in accordance with one or more embodiments.
- Coding operations of prediction, transformation, quantization, and entropy coding are performed based on fixed-size 16×16 blocks referred to as macroblocks. Further, a quantization parameter is generated for each macroblock with no provision for doing so for larger or smaller blocks. For larger frame sizes, e.g., frame sizes used for high definition video, using a larger block size for the block-based coding operations may provide better coding efficiency and/or reduce data transmission overhead. For example, a video sequence with a 1280×720 frame size and a frame rate of 60 frames per second is 36 times larger and 4 times faster than a video sequence with a 176×144 frame size and a frame rate of 15 frames per second.
- HEVC: High Efficiency Video Coding; JCT-VC: Joint Collaborative Team on Video Coding.
- An increased block size may adversely affect rate control. That is, many rate control techniques manage QP on a block-by-block basis according to the available space in a hypothetical transmission buffer. Increasing the block size reduces the granularity at which rate control can adjust the value of QP, thus possibly making rate control more difficult and/or adversely affecting quality. Further, reducing the granularity at which QP can change by increasing the block size impacts the visual quality performance of perceptual rate control techniques that adapt the QP based on the activity in a block.
- Embodiments described herein provide for block-based video coding with a large block size, e.g., larger than 16×16, in which multiple quantization parameters for a single block may be generated. More specifically, a picture (or slice) is divided into non-overlapping blocks of pixels referred to as largest coding units (LCUs).
- The term “picture” refers to a frame or a field of a frame. A frame is a complete image captured during a known time interval. A slice is a subset of sequential LCUs in a picture. An LCU is the base unit used for block-based coding.
- An LCU plays a similar role in coding as the prior-art macroblock, but it may be larger, e.g., 32×32, 64×64, 128×128, etc.
- The LCU is the largest unit in a picture for which a quantization parameter (QP) may be generated.
- Based on various criteria, e.g., rate control criteria, complexity considerations, rate distortion constraints, etc., an LCU may be partitioned into coding units (CUs) and a QP generated for each. A CU is a block of pixels within an LCU, and the CUs within an LCU may be of different sizes.
- Block-based coding is then applied to the LCU to code the CUs according to the CU partitioning, i.e., the CU structure. The QPs are used in the quantization of the corresponding CUs.
- The CU structure and the QPs are also coded for communication, i.e., signaling, to a decoder. QP values are communicated to a decoder in a compressed bit stream as delta QP values.
- Techniques for computing the delta QPs and for controlling the spatial granularity at which delta QPs are signaled are also provided. More than one technique for computing the delta QP values may be used in coding a single video sequence. The technique used may be signaled in a compressed bit stream at the appropriate level, e.g., sequence, picture, slice, and/or LCU.
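One plausible delta QP scheme of the kind described above is sketched below, predicting each CU's QP from the previously coded CU's QP. The prediction rule here is an assumption for illustration; the patent covers multiple techniques:

```python
# Sketch (illustrative prediction rule, not the patent's definitive method):
# encode per-CU QPs as deltas against the previously coded CU's QP.

def delta_qps(cu_qps, slice_qp):
    """Encoder side: turn a list of per-CU QPs into signaled deltas."""
    deltas = []
    pred = slice_qp  # the first CU predicts from the slice-level QP
    for qp in cu_qps:
        deltas.append(qp - pred)
        pred = qp
    return deltas

def reconstruct_qps(deltas, slice_qp):
    """Decoder side: invert delta_qps to recover the per-CU QPs."""
    qps, pred = [], slice_qp
    for d in deltas:
        pred += d
        qps.append(pred)
    return qps
```

Since neighboring CUs tend to have similar QPs, the deltas cluster near zero and entropy-code more compactly than the raw QP values.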
- FIG. 1 shows a block diagram of a digital system in accordance with one or more embodiments.
- the system includes a source digital system 100 that transmits encoded video sequences to a destination digital system 102 via a communication channel 116 .
- the source digital system 100 includes a video capture component 104 , a video encoder component 106 and a transmitter component 108 .
- the video capture component 104 is configured to provide a video sequence to be encoded by the video encoder component 106 .
- the video capture component 104 may be for example, a video camera, a video archive, or a video feed from a video content provider.
- the video capture component 104 may generate computer graphics as the video sequence, or a combination of live video, archived video, and/or computer-generated video.
- the video encoder component 106 receives a video sequence from the video capture component 104 and encodes it for transmission by the transmitter component 108 .
- the video encoder component 106 receives the video sequence from the video capture component 104 as a sequence of frames, divides the frames into LCUs, and encodes the video data in the LCUs.
- the video encoder component 106 may be configured to apply one or more techniques for generating and encoding multiple quantization parameters for an LCU during the encoding process as described herein. Embodiments of the video encoder component 106 are described in more detail below in reference to FIGS. 3A and 3B .
- the transmitter component 108 transmits the encoded video data to the destination digital system 102 via the communication channel 116 .
- the communication channel 116 may be any communication medium, or combination of communication media suitable for transmission of the encoded video sequence, such as, for example, wired or wireless communication media, a local area network, or a wide area network.
- the destination digital system 102 includes a receiver component 110 , a video decoder component 112 and a display component 114 .
- the receiver component 110 receives the encoded video data from the source digital system 100 via the communication channel 116 and provides the encoded video data to the video decoder component 112 for decoding.
- the video decoder component 112 reverses the encoding process performed by the video encoder component 106 to reconstruct the LCUs of the video sequence.
- the video decoder component may be configured to apply one or more techniques for decoding multiple quantization parameters for an LCU during the decoding process as described herein. Embodiments of the video decoder component 112 are described in more detail below in reference to FIG. 9 .
- the reconstructed video sequence is displayed on the display component 114 .
- the display component 114 may be any suitable display device such as, for example, a plasma display, a liquid crystal display (LCD), a light emitting diode (LED) display, etc.
- The source digital system 100 may also include a receiver component and a video decoder component, and/or the destination digital system 102 may include a transmitter component and a video encoder component, for transmission of video sequences in both directions for video streaming, video broadcasting, and video telephony.
- the video encoder component 106 and the video decoder component 112 may perform encoding and decoding in accordance with one or more video compression standards.
- the video encoder component 106 and the video decoder component 112 may be implemented in any suitable combination of software, firmware, and hardware, such as, for example, one or more digital signal processors (DSPs), microprocessors, discrete logic, application specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), etc.
- An LCU may be partitioned into coding units (CUs) during the coding process. A recursive quadtree structure is assumed for partitioning of LCUs into CUs.
- a CU may be square.
- an LCU is also square.
- a picture is divided into non-overlapped LCUs.
- the CU structure within an LCU can be a recursive quadtree structure adapted to the frame. That is, each time a CU (or LCU) is partitioned, it is divided into four equal-sized square blocks.
- a given CU can be characterized by the size of the LCU and the hierarchical depth of the LCU where the CU occurs. The maximum hierarchical depth is determined by the size of the smallest CU (SCU) permitted.
- FIG. 2 shows an example of a recursive quadtree structure in which the LCU is assumed to be 128×128 and the SCU is assumed to be 8×8.
- The maximum hierarchical depth of the quadtree structure is 5, and five possible CU sizes are allowed: 128×128, 64×64, 32×32, 16×16, and 8×8. If the LCU is assumed to be 64×64 and the SCU is assumed to be 8×8, the maximum hierarchical depth is 4 and four possible CU sizes are allowed: 64×64, 32×32, 16×16, and 8×8.
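The depth arithmetic in the examples above can be sketched as follows (a minimal illustration, not code from the patent):

```python
# Sketch: derive the maximum hierarchical depth and allowed CU sizes from
# the LCU and SCU sizes, matching the examples in the text
# (128x128/8x8 -> depth 5; 64x64/8x8 -> depth 4).

from math import log2

def quadtree_depth_and_sizes(lcu_size: int, scu_size: int):
    """Both sizes are assumed to be powers of two with lcu_size >= scu_size."""
    depth = int(log2(lcu_size) - log2(scu_size)) + 1
    sizes = [lcu_size >> level for level in range(depth)]  # halve per level
    return depth, sizes
```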
- FIGS. 3A and 3B show block diagrams of a video encoder, e.g., the video encoder 106 of FIG. 1 , configured to apply one or more techniques for generating and encoding multiple quantization parameters for an LCU as described herein.
- FIG. 3A shows a high level block diagram of the video encoder and
- FIG. 3B shows a block diagram of the LCU processing component 342 of the video encoder.
- a video encoder includes a coding control component 340 , an LCU processing component 342 , a rate control component 344 , and a memory 346 .
- An input digital video sequence is provided to the coding control component 340 .
- the memory 346 may be internal memory, external memory, or a combination thereof.
- the coding control component 340 sequences the various operations of the video encoder.
- the coding control component 340 performs any processing on the input video sequence that is to be done at the frame level, such as determining the coding type (I, P, or B) of a picture based on the high level coding structure, e.g., IPPP, IBBP, hierarchical-B, and dividing a frame into LCUs for further processing.
- LCU size and SCU size may be different in different embodiments of the video encoder. Further, the LCU size and SCU size may be signaled at the sequence, picture, and/or slice level.
- the coding control component 340 also interacts with the rate control component 344 to determine an initial coding unit structure and initial QPs for each LCU.
- the rate control component 344 receives an LCU from the coding control component 340 and applies various criteria to the LCU to determine one or more QPs to be used by the LCU processing component 342 in coding the LCU. More specifically, the rate control component 344 partitions the LCU into CUs of various sizes within the recursive quadtree structure based on the various criteria to determine the granularity at which QPs should be applied and then computes a QP for each CU that is not further subdivided, i.e., for each coding unit that is a leaf node in the quadtree. The CU structure of the LCU and the QPs are provided to the coding control component 340 .
- the QPs applied to an LCU during the coding of the LCU will be signaled in the compressed bit stream.
- the SCU size sets the size of the smallest CU in the recursive quadtree structure.
- a minimum QP CU size may be specified in addition to the LCU and SCU sizes.
- the smallest CU that the rate control component 344 can use in partitioning an LCU is limited by the minimum QP CU size rather than the SCU size.
- The minimum QP CU size may be set to sizes larger than the SCU to constrain the granularity at which QPs may be applied. For example, if the LCU is assumed to be 64×64 and the SCU is assumed to be 8×8, the four possible CU sizes allowed in the recursive quadtree structure are 64×64, 32×32, 16×16, and 8×8. Without the minimum QP CU size constraint, the rate control component 344 can generate QPs for CUs as small as 8×8. However, if a minimum QP CU size of 16×16 is specified, the rate control component 344 can generate QPs for CUs as small as 16×16 but no smaller. The minimum QP CU size may be set at the sequence, picture, slice, and/or LCU level and signaled in the compressed bit stream accordingly.
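The effect of the minimum QP CU size on the set of sizes at which QPs may be generated can be sketched as below (illustrative helper, not from the patent):

```python
# Sketch: the CU sizes at which rate control may generate QPs.  Without a
# minimum QP CU size, the floor is the SCU size; with one, partitioning for
# QP purposes stops at the larger of the two.

def qp_cu_sizes(lcu_size, scu_size, min_qp_cu_size=None):
    floor = max(scu_size, min_qp_cu_size or scu_size)
    size, sizes = lcu_size, []
    while size >= floor:
        sizes.append(size)
        size //= 2
    return sizes
```

With a 64×64 LCU and 8×8 SCU this yields [64, 32, 16, 8]; adding a 16×16 minimum QP CU size trims it to [64, 32, 16], matching the example in the text.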
- FIG. 4 shows an example CU partitioning of an LCU. The LCU is partitioned into four CUs: A, B, C, and D. CU A is further partitioned into four CUs, A1, A2, A3, and A4, and CU D is further partitioned into four CUs, D1, D2, D3, and D4. CUs A2 and D1 are also further partitioned into four CUs, respectively A21, A22, A23, and A24 and D11, D12, D13, and D14.
- The rate control component 344 computes a QP for each of the CUs that is not further subdivided, i.e., for A1, A21, A22, A23, A24, A3, A4, B, C, D11, D12, D13, D14, D2, D3, and D4.
- any suitable criteria may be used by rate control component 344 , such as, for example, perceptual rate control constraints, target bit rate constraints, rate-distortion optimization constraints, and complexity considerations, alone or in any combination.
- the rate control component 344 may determine the CU partitioning and corresponding QPs at least in part based on the spatial characteristics of the LCU. As is well known, if a region of a picture is smooth, quantization errors can be more visible to the human eye whereas if a region is busy (e.g., high textured), any quantization error will likely not be visible.
- the rate control component 344 may determine the activity in an LCU and then partition the LCU into CU sizes based on the locations/levels of the activity.
- An activity measure for a region of an image may be determined, for example, based on edge information, texture information, etc. The goal would be to assign lower QP values to flat regions (regions with little to no activity) to reduce quantization error and to assign higher QP values to busy regions (regions with high activity) as the quantization error will be hidden.
- In a picture containing, for example, both smooth regions (e.g., sky) and busy regions (e.g., trees), there will be transition regions in which LCUs contain both sky and trees.
- Such LCUs may be partitioned into CUs sized based on activity (within the limits of the quadtree coding structure). For example, an LCU may be divided into four CUs A, B, C, and D, and the activity level in areas of each CU may then be analyzed. If a CU, say CU A, has regions of widely varying activity levels, then CU A may be further divided into four CUs, A1, A2, A3, and A4, in an attempt to reduce the variance in activity level over the area where a QP will be applied. These four CUs may each also be further divided into four CUs based on activity. Once the CU partitioning is complete, QP values may then be computed for each CU.
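The recursive activity-based partitioning just described can be sketched as follows. The activity measure and the variance threshold here are illustrative assumptions; the patent leaves the criteria open:

```python
# Sketch (assumed activity measure and threshold, for illustration only):
# recursively split a block into four quadrants while the spread of
# activity within it is large, down to a minimum QP CU size.

import statistics

def partition(block_activity, size, min_size, threshold, x=0, y=0, out=None):
    """block_activity(x, y, size) returns per-region activity values."""
    if out is None:
        out = []
    acts = block_activity(x, y, size)
    if size > min_size and statistics.pstdev(acts) > threshold:
        half = size // 2
        for dy in (0, half):        # top row of quadrants first
            for dx in (0, half):    # left quadrant before right
                partition(block_activity, half, min_size, threshold,
                          x + dx, y + dy, out)
    else:
        out.append((x, y, size))    # leaf CU: one QP will be computed here
    return out
```

A uniformly flat LCU stays a single CU, while an LCU straddling a flat/busy boundary is split so QPs can follow the activity.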
- The coding control component 340 provides information regarding the initial LCU CU structure and the QPs determined by the rate control component 344 to the various components of the LCU processing component 342 as needed. For example, the coding control component may provide the LCU and SCU size to the entropy encoder component 334 for inclusion in the compressed video stream at the appropriate point. In another example, the coding control component 340 may generate a quantization parameter array for use by the quantize component 306 and store the quantization parameter array in the memory 346. The size of the quantization parameter array may be determined based on the maximum possible number of CUs in an LCU. For example, assume the size of the SCU is 8×8 and the size of the LCU is 64×64. The maximum possible number of CUs in an LCU is then 64, and the quantization parameter array is sized to hold a QP for each of these 64 possible coding units, i.e., it is an 8×8 array.
- The QPs computed by the rate control component 344 are mapped into this array based on the CU structure. As explained in more detail herein in reference to the quantize component 306, a QP for any size CU in the LCU may be located in this array using the coordinates of the upper left-hand corner of the CU in the LCU.
- FIG. 5 shows an example of mapping QPs into a quantization parameter array 502 based on the CU structure 500 .
- the CU structure assumes a 64×64 LCU.
- the presence of a CU identifier, e.g., A1, C, D11, etc., in an array cell represents the QP for that CU.
- the QP for CU A1 is in locations (0,0), (0,1), (1,0), and (1,1)
- the QP for CU D11 is in location (4,4)
- the QP for CU C is in locations (4, 0), (4, 1), (4, 2), (4, 3), (5, 0), (5, 1), (5, 2), (5, 3), (6, 0), (6, 1), (6, 2), (6, 3), (7, 0), (7, 1), (7, 2), and (7, 3).
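The mapping in FIG. 5 can be sketched as follows. The dict of `(x, y, size) -> QP` entries is an assumed representation of the CU structure, not an API from the text; the array is indexed by the CU's coordinates divided by the SCU dimensions, as described for the lookup below.

```python
# Sketch: fill an 8x8 quantization parameter array for a 64x64 LCU with an
# 8x8 SCU, as in the FIG. 5 example. Each CU's QP is written into every
# array cell the CU covers; qp_array[i][j] covers the SCU whose upper left
# corner is at pixel (i*scu, j*scu).
def fill_qp_array(cus, lcu_size=64, scu=8):
    n = lcu_size // scu
    qp_array = [[None] * n for _ in range(n)]
    for (x, y, size), qp in cus.items():
        for i in range(x // scu, (x + size) // scu):
            for j in range(y // scu, (y + size) // scu):
                qp_array[i][j] = qp
    return qp_array
```

Under this sketch, a 16×16 CU at (0,0) fills cells (0,0) through (1,1), an 8×8 CU at (32,32) fills only cell (4,4), and a 32×32 CU at (32,0) fills cells (4,0) through (7,3), matching the example above.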
- the LCU processing component 342 receives LCUs of the input video sequence from the coding control component 340 and encodes the LCUs to generate the compressed video stream. As previously mentioned, the LCU processing component 342 also receives information regarding the CU structure and QPs of an LCU as determined by the rate control component 344 . The CUs in the CU structure of an LCU may be processed by the LCU processing component 342 in a depth-first Z-scan order. For example, in the LCU of FIG.
- the CUs would be scanned in the following order: A1->A21->A22->A23->A24->A3->A4->B->C->D11->D12->D13->D14->D2->D3->D4.
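The depth-first Z-scan above can be sketched as a simple recursive traversal. Representing a split CU as a 4-tuple of children in Z order (top-left, top-right, bottom-left, bottom-right) is an assumed convention.

```python
# Sketch: depth-first Z-scan of a CU quadtree. A split CU is a 4-tuple of
# children in Z order (an assumed convention); a leaf CU is its label.
def z_scan(node):
    if isinstance(node, tuple):
        for child in node:
            yield from z_scan(child)
    else:
        yield node

# The LCU of FIG. 4 under this representation:
lcu = (("A1", ("A21", "A22", "A23", "A24"), "A3", "A4"),
       "B", "C",
       (("D11", "D12", "D13", "D14"), "D2", "D3", "D4"))
```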
- FIG. 3B shows the basic coding architecture of the LCU processing component 342 .
- the LCUs 300 from the coding control unit 340 are provided as one input of a motion estimation component 320 , as one input of an intra prediction component 324 , and to a positive input of a combiner 302 (e.g., adder or subtractor or the like). Further, although not specifically shown, the prediction mode of each picture as selected by the coding control component 340 is provided to a mode selector component, and the entropy encoder 334 .
- the storage component 318 provides reference data to the motion estimation component 320 and to the motion compensation component 322 .
- the reference data may include one or more previously encoded and decoded CUs, i.e., reconstructed CUs.
- the motion estimation component 320 provides motion estimation information to the motion compensation component 322 and the entropy encoder 334 . More specifically, the motion estimation component 320 performs tests on CUs in an LCU based on multiple temporal prediction modes using reference data from storage 318 to choose the best motion vector(s)/prediction mode based on a coding cost. To perform the tests, the motion estimation component 320 may begin with the CU structure provided by the coding control component 340 . The motion estimation component 320 may divide each CU indicated in the CU structure into prediction units according to the unit sizes of prediction modes and calculate the coding costs for each prediction mode for each CU.
- the motion estimation component 320 may also decide to alter the CU structure by further partitioning one or more of the CUs in the CU structure. That is, when choosing the best motion vectors/prediction modes, in addition to testing with the initial CU structure, the motion estimation component 320 may also choose to divide the larger CUs in the initial CU structure into smaller CUs (within the limits of the recursive quadtree structure), and calculate coding costs at lower levels in the coding hierarchy. As will be explained below in reference to the quantizer component 306 , any changes made to the CU structure do not affect how the QPs computed by the rate control component 344 are applied. If the motion estimation component 320 changes the initial CU structure, the modified CU structure is communicated to other components in the LCU processing component 342 that need the information.
- the motion estimation component 320 provides the selected motion vector (MV) or vectors and the selected prediction mode for each inter predicted CU to the motion compensation component 322 and the selected motion vector (MV) to the entropy encoder 334 .
- the motion compensation component 322 provides motion compensated inter prediction information to a selector switch 326 that includes motion compensated inter predicted CUs and the selected temporal prediction modes for the inter predicted CUs.
- the coding costs of the inter predicted CUs are also provided to the mode selector component (not shown).
- the intra prediction component 324 provides intra prediction information to the selector switch 326 that includes intra predicted CUs and the corresponding spatial prediction modes. That is, the intra prediction component 324 performs spatial prediction in which tests based on multiple spatial prediction modes are performed on CUs in an LCU using previously encoded neighboring CUs of the picture from the buffer 328 to choose the best spatial prediction mode for generating an intra predicted CU based on a coding cost. To perform the tests, the intra prediction component 324 may begin with the CU structure provided by the coding control component 340 . The intra prediction component 324 may divide each CU indicated in the CU structure into prediction units according to the unit sizes of the spatial prediction modes and calculate the coding costs for each prediction mode for each CU.
- the intra prediction component 324 may also decide to alter the CU structure by further partitioning one or more of the CUs in the CU structure. That is, when choosing the best prediction modes, in addition to testing with the initial CU structure, the intra prediction component 324 may also choose to divide the larger CUs in the initial CU structure into smaller CUs (within the limits of the recursive quadtree structure), and calculate coding costs at lower levels in the coding hierarchy. As will be explained below in reference to the quantizer component 306 , any changes made to the CU structure do not affect how the QP values computed by the rate control component 344 are applied.
- the modified CU structure is communicated to other components in the LCU processing component 342 that need the information.
- the spatial prediction mode of each intra predicted CU provided to the selector switch 326 is also provided to the transform component 304 .
- the coding costs of the intra predicted CUs are also provided to the mode selector component.
- the selector switch 326 selects between the motion-compensated inter predicted CUs from the motion compensation component 322 and the intra predicted CUs from the intra prediction component 324 based on the difference metrics of the CUs and the picture prediction mode provided by the mode selector component.
- the output of the selector switch 326 i.e., the predicted CU, is provided to a negative input of the combiner 302 and to a delay component 330 .
- the output of the delay component 330 is provided to another combiner (i.e., an adder) 338 .
- the combiner 302 subtracts the predicted CU from the current CU to provide a residual CU to the transform component 304 .
- the resulting residual CU is a set of pixel difference values that quantify differences between pixel values of the original CU and the predicted CU.
- the transform component 304 performs unit transforms on the residual CUs to convert the residual pixel values to transform coefficients and provides the transform coefficients to a quantize component 306 .
- the quantize component 306 determines a QP for the transform coefficients of a residual CU and quantizes the transform coefficients based on that QP. For example, the quantize component 306 may divide the values of the transform coefficients by a quantization scale (Qs) derived from the QP value.
- the quantize component 306 represents the coefficients by using a desired number of quantization steps, the number of steps used (or correspondingly the value of Qs) determining the number of bits used to represent the residuals. Other algorithms for quantization such as rate-distortion optimized quantization may also be used by the quantize component 306 .
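A minimal sketch of the scalar quantization just described: each transform coefficient is divided by a quantization scale Qs derived from the QP and rounded to an integer level, and dequantization multiplies back. How Qs is derived from QP is standard-specific and not shown here.

```python
# Sketch: scalar quantization by dividing coefficients by Qs, with rounding.
def quantize(coeffs, qs):
    return [int(round(c / qs)) for c in coeffs]

# Dequantization (as in the embedded decoder) multiplies the levels back.
def dequantize(levels, qs):
    return [lvl * qs for lvl in levels]
```

Note that the round trip is lossy: a coefficient of 3 quantized with Qs = 4 reconstructs to 4, and that quantization error is what the QP choice trades against bit rate.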
- the quantize component 306 determines a QP for the residual CU transform coefficients based on the initial CU structure provided by the coding control component 340 . That is, if the residual CU corresponds to a CU in the initial CU structure, then the quantize component 306 uses the QP computed for that CU by the rate control component 344 . For example, referring to the example of FIG. 4 , if the residual CU was generated from CU C with no further partitioning during the prediction processing, then the QP for CU C is used to quantize the residual CU.
- the quantize component 306 uses the QP of the original CU that was subdivided during the prediction processing to create the CU as the QP for the residual CU. For example, if CU C of FIG. 4 is further partitioned during the prediction processing as shown in FIG. 6 , and the residual CU corresponds to one of CUs C1, C2, C3, or C4, then the QP for CU C is used to quantize the residual CU.
- the quantize component 306 uses the QP of the original CU of the same size as the minimum QP CU that was partitioned by the rate control component 344 to create the CU. For example, in the LCU of FIG. 4 , if the LCU size is 64×64, the minimum QP CU size is 16×16, and the residual CU corresponds to one of the 8×8 CUs A21, A22, A23, or A24, then the QP for CU A2 is used to quantize the residual CU.
- the coding control component 340 may generate a quantization parameter array that is stored in the memory 346 .
- the quantize component 306 may use this array to determine a QP for the residual CU coefficients. That is, the coordinates of the upper left corner of the CU corresponding to the residual CU, whether that CU is in the original coding structure or was added during the prediction process, may be used to locate the appropriate QP in the quantization parameter array.
- the x coordinate may be divided by the width of the SCU and the y coordinate may be divided by the height of the SCU to compute the coordinates of the appropriate QP in the quantization parameter array.
- the SCU is 8×8.
- the coordinates of the upper left corner of CU A4 are (16, 16).
- the coordinates of the location in the quantization parameter array 502 holding the appropriate QP are (2, 2).
- C1, C2, C3, and C4 are assumed to be added to the CU structure during prediction processing.
- the coordinates of the upper left corner of CU C4 are (48, 16).
- the coordinates of the location in the quantization parameter array 502 holding the appropriate QP are (6, 2).
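The coordinate-based lookup described above reduces to integer division of the CU's upper-left coordinates by the SCU dimensions:

```python
# Sketch: locate the QP for any CU, whether in the original structure or
# added during prediction, from its upper-left corner coordinates.
def lookup_qp(qp_array, x, y, scu_width=8, scu_height=8):
    return qp_array[x // scu_width][y // scu_height]
```

With an 8×8 SCU, a CU at (16, 16) maps to array location (2, 2) and a CU at (48, 16) maps to (6, 2), matching the examples above.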
- the quantized transform coefficients are scanned out of their two-dimensional post-transform arrangement by a scan component 308 and arranged by significance, such as, for example, beginning with the more significant coefficients followed by the less significant.
- the ordered quantized transform coefficients for a CU provided via the scan component 308 along with header information for the CU are coded by the entropy encoder 334 , which provides a compressed bit stream to a video buffer 336 for transmission or storage.
- the entropy coding performed by the entropy encoder 334 may use any suitable entropy encoding technique, such as, for example, context adaptive variable length coding (CAVLC), context adaptive binary arithmetic coding (CABAC), run length coding, etc.
- the entropy encoder 334 encodes information regarding the CU structure used to generate the coded CUs in the compressed bit stream and information indicating the QPs used in the quantization of the coded CUs.
- the CU structure of an LCU is signaled to a decoder by encoding the sizes of the LCU and the SCU and a series of split flags in the compressed bit stream. If a CU in the recursive quadtree structure defined by the LCU and the SCU is split, i.e., partitioned, in the CU structure, a split flag with a value indicating a split, e.g., 1, is signaled in the compressed bit stream.
- a split flag with a value indicating no split, e.g., 0, is signaled in the compressed bit stream.
- Information specific to the unsplit CU will follow the split flag in the bit stream.
- the information specific to a CU may include CU header information (prediction mode, motion vector differences, coding block flag information, etc.), QP information, and coefficient information. Coefficient information may not be included if all of the CU coefficients are zero. Further, if the size of a CU is the same size as the SCU, no split flag is encoded in the bit stream for that CU.
- FIG. 7 shows an example of signaling the CU structure of the LCU of FIG. 4 assuming that the LCU size is 64×64 and the SCU size is 8×8. This example assumes that all the CUs have at least one non-zero coefficient.
- Split flag S0 is set to 1 to indicate that the LCU is split into four CUs: A, B, C, and D.
- Split flag S1 is set to 1 to indicate that CU A is split into four CUs: A1, A2, A3, and A4.
- Split flag S2 is set to 0 to indicate that CU A1 is not split. Information specific to CU A1 follows split flag S2.
- Split flag S3 is set to 1 to indicate that CU A2 is split into four CUs: A21, A22, A23, and A24.
- CUs A21, A22, A23, and A24 are 8×8, so no split flags are encoded for these CUs.
- Information specific to each of these CUs follows split flag S3.
- Split flag S4 is set to 0 to indicate that CU A3 is not split, and so on.
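The split-flag walk-through above can be sketched as a recursive quadtree traversal. The 4-tuple representation of a split CU is an assumed convention, and the leaf label stands in for the CU-specific information that follows each unsplit CU.

```python
# Sketch: emit split flags for a CU quadtree as described. A split CU emits
# flag 1 followed by its children's signaling; an unsplit CU emits flag 0
# followed by its CU-specific information, except that SCU-sized CUs carry
# no split flag at all.
def signal_cu(node, size, scu_size, out):
    if isinstance(node, tuple):
        out.append(1)                       # split flag = 1
        for child in node:
            signal_cu(child, size // 2, scu_size, out)
    else:
        if size > scu_size:
            out.append(0)                   # split flag = 0
        out.append(node)                    # stands in for CU-specific info
```

For the LCU of FIG. 4 with a 64×64 LCU and 8×8 SCU, the emitted stream begins 1 (LCU split), 1 (A split), 0 + info for A1, 1 (A2 split), then the info for A21 through A24 with no split flags.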
- For example, when the delta QP for a CU is computed relative to the QP of the immediately preceding CU in coding order, the delta QP for CU B is QPB − QPA4 and the delta QP for CU D2 is QPD2 − QPD14.
- Computing a delta QP in this way may be desirable when rate control is not based on perceptual criteria.
- spatially neighboring CUs of a CU may be defined as those CUs adjacent to the CU in the CU structure of the LCU.
- CU A22, CU A24, and CU A4 are left neighboring CUs of CU B.
- CU A23 and CU A24 are top neighboring CUs of CU A4.
- adjacent CUs in an LCU to the left or above, respectively, of the LCU in a picture may be considered as left neighboring and top neighboring CUs, respectively.
- more than one mode for computing a predicted QP value for purposes of computing delta QP may be provided.
- the entropy encoder 334 encodes a delta QP value for each CU in the compressed bit stream. For example, referring to FIG. 7 , a delta QP value would be included in the information specific to CU A1, in the information specific to CU A21, in the information specific to CU A22, etc.
- a delta QP value is encoded in the CU specific information for each CU with at least one non-zero coefficient that is larger than or equal to the minimum QP CU in size. For those CUs smaller than the minimum QP CU, a delta QP is encoded at the non-leaf CU level.
- FIG. 8 shows an example of signaling delta QPs for the CU structure of the LCU of FIG. 4 assuming that the LCU size is 64×64, the SCU size is 8×8, the minimum QP CU size is 32×32, and each CU has at least one non-zero coefficient.
- Each of the CUs A, B, C, and D is 32×32, so delta QPs, designated dQPx, are signaled for those CUs and not for any of the smaller ones.
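The rule illustrated in FIG. 8 can be sketched as a walk of the CU structure that stops descending at the minimum QP CU size, so that smaller CUs share their ancestor's delta QP. The `(label, children)` node representation is assumed, not from the text.

```python
# Sketch: return the CUs at which a delta QP is signaled. A node is
# (label, children), with an empty child list for a leaf; child CUs are
# half the size of their parent.
def delta_qp_cus(label, children, size, min_qp_cu_size):
    if size <= min_qp_cu_size or not children:
        return [label]
    out = []
    for child_label, child_children in children:
        out.extend(delta_qp_cus(child_label, child_children,
                                size // 2, min_qp_cu_size))
    return out
```

With the FIG. 4 structure, a 64×64 LCU, and a 32×32 minimum QP CU, only CUs A, B, C, and D carry delta QPs; with an 8×8 minimum, every leaf CU carries its own.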
- Inside the LCU processing component 342 is an embedded decoder.
- As any compliant decoder is expected to reconstruct an image from a compressed bit stream, the embedded decoder provides the same utility to the video encoder.
- Knowledge of the reconstructed input allows the video encoder to transmit the appropriate residual energy to compose subsequent frames.
- the ordered quantized transform coefficients for a CU provided via the scan component 308 are returned to their original post-transform arrangement by an inverse scan component 310 , the output of which is provided to a dequantize component 312 , which outputs estimated transformed information, i.e., an estimated or reconstructed version of the transform result from the transform component 304 .
- the QP for the CU is communicated to the dequantize component 312 by the quantize component 306 .
- the dequantize component 312 determines the QP from a quantization parameter array in the manner previously described.
- the estimated transformed information is provided to the inverse transform component 314 , which outputs estimated residual information which represents a reconstructed version of a residual CU.
- the reconstructed residual CU is provided to the combiner 338 .
- the combiner 338 adds the delayed selected CU to the reconstructed residual CU to generate an unfiltered reconstructed CU, which becomes part of reconstructed picture information.
- the reconstructed picture information is provided via a buffer 328 to the intra prediction component 324 and to a filter component 316 .
- the filter component 316 is an in-loop filter which filters the reconstructed frame information and provides filtered reconstructed CUs, i.e., reference data, to the storage component 318 .
- the above described techniques for computing delta QPs may be used in other components of the video encoder.
- the QPs originally generated by the rate control component 344 may be adjusted up or down by one or more other components in the video encoder prior to quantization.
- FIG. 9 shows a block diagram of a video decoder, e.g., the video decoder 112 , in accordance with one or more embodiments of the invention.
- the video decoder operates to reverse the encoding operations, i.e., entropy coding, quantization, transformation, and prediction, performed by the video encoder of FIGS. 3A and 3B to regenerate the frames of the original video sequence.
- encoding operations i.e., entropy coding, quantization, transformation, and prediction
- the entropy decoding component 900 receives an entropy encoded video bit stream and reverses the entropy encoding to recover the encoded CUs and the encoded CU structures of the LCUs.
- the decoded information is communicated to other components in the video decoder as appropriate.
- the entropy decoding performed by the entropy decoding component 900 may include detecting coded QP values in the bit stream and decoding them for communication to the inverse quantization component 902 .
- the entropy decoding component 900 may detect delta QP values in the bit stream and compute reconstructed QP values from the delta QP values for communication to the inverse quantization component 902 .
- the entropy decoding component 900 computes QP as the delta QP+QPprev, where QPprev is the reconstructed QP computed by the entropy decoding component 900 for the immediately preceding CU in the bit stream. For this computation, the entropy decoding component 900 may store and update a value for QPprev as each encoded CU is entropy decoded and a reconstructed QP is determined for that CU.
- the entropy decoding component 900 computes a reconstructed QP as the delta QP+f(rQPs of spatially neighboring CUs), where rQP is a reconstructed QP. Further, if the video encoder supports multiple modes for computing a delta QP, the video decoder will compute a reconstructed QP from the delta QP according to the mode signaled in the bit stream.
- the entropy decoding component 900 may store the reconstructed QPs of the appropriate spatially neighboring CUs.
- the reconstructed QPs of the neighboring CUs may be stored in a reconstructed quantization parameter array in a manner similar to that of the previously described quantization parameter array.
- Example reconstructed QP calculations are described below assuming f( ) is equal to the rQP of the left neighboring CU and in reference to the example LCU structures 1000 and 1002 in FIG. 10 .
- LCU0 1000 has been decoded and its reconstructed QPs are stored in the reconstructed quantization parameter array 1004 .
- As reconstructed QPs are computed for LCU1 1002 , they may be stored in a reconstructed quantization parameter array for that LCU.
- the following calculations demonstrate how reconstructed QP values for some of the CUs in LCU1 1002 may be computed from left neighboring CUs:
- rQP(A1) = dQP(A1) + rQP(B22 of LCU0 1000)
- rQP(A21) = dQP(A21) + rQP(A1)
- rQP(A22) = dQP(A22) + rQP(A21)
- rQP(A23) = dQP(A23) + rQP(A1)
- rQP(A24) = dQP(A24) + rQP(A23)
- rQP(A3) = dQP(A3) + rQP(B42 of LCU0 1000)
- rQP(A4) = dQP(A4) + rQP(A3)
- the left column of the reconstructed quantization parameter array 1004 (B22, B24, B42, B44, D22, D24, D42, D44) is all that is required for applying the predictor f( ) to LCU1 1002 . If the left neighboring CU is not available, as can be the case for the first LCU in a picture, a predefined QP value may be used, or the reconstructed QP value of the previous CU in coding order may be used.
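A decoder-side sketch of this reconstruction, assuming f( ) is the rQP of the left neighboring CU and per-SCU arrays indexed `[x // SCU][y // SCU]` (an assumed layout, consistent with the quantization parameter array described earlier). A CU on the LCU's left edge takes its predictor from the boundary SCU column of the previously decoded LCU to the left.

```python
# Sketch: reconstruct a QP from its delta QP and the rQP of the left
# neighboring CU, using per-SCU reconstructed-QP arrays.
def reconstruct_qp(dqp, x, y, rqp_array, left_lcu_rqp, scu=8):
    i, j = x // scu, y // scu
    if i == 0:
        # Left edge of the LCU: predictor comes from the neighboring LCU's
        # boundary SCU column (assumed stored as its last array row here).
        left_rqp = left_lcu_rqp[-1][j]
    else:
        left_rqp = rqp_array[i - 1][j]
    return dqp + left_rqp
```

For example, with the left LCU's boundary rQPs equal to 30, rQP(A1) = dQP(A1) + 30; once A1's rQP is stored in the array, A21 at (16, 0) predicts from it.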
- the inverse quantization component 902 de-quantizes the residual coefficients of the residual CUs based on the reconstructed QP values.
- the inverse transform component 904 transforms the frequency domain data from the inverse quantization component 902 back to residual CUs. That is, the inverse transform component 904 applies an inverse unit transform, i.e., the inverse of the unit transform used for encoding, to the de-quantized residual coefficients to produce the residual CUs.
- a residual CU supplies one input of the addition component 906 .
- the other input of the addition component 906 comes from the mode switch 908 .
- When inter-prediction is signaled, the mode switch 908 selects a prediction block from the motion compensation component 910 , and when intra-prediction is signaled, the mode switch selects a prediction block from the intra prediction component 914 .
- the motion compensation component 910 receives reference data from storage 912 and applies the motion compensation computed by the encoder and transmitted in the encoded video bit stream to the reference data to generate a predicted CU.
- the intra-prediction component 914 receives previously decoded predicted CUs from the current picture and applies the intra-prediction computed by the encoder as signaled by a spatial prediction mode transmitted in the encoded video bit stream to the previously decoded predicted CUs to generate a predicted CU.
- the addition component 906 generates a decoded CU by adding the selected predicted CU and the residual CU.
- the output of the addition component 906 supplies the input of the in-loop filter component 916 .
- the in-loop filter component 916 smoothes artifacts created by the block nature of the encoding process to improve the visual quality of the decoded frame.
- the output of the in-loop filter component 916 is the decoded frames of the video bit stream.
- Each decoded CU is stored in storage 912 to be used as reference data.
- unit transforms smaller than a CU may be used.
- the video encoder may further partition a CU into transform units.
- a CU may be partitioned into smaller transform units in accordance with a recursive quadtree structure adapted to the CU size.
- the transform unit structure of the CU may be signaled to the decoder in a similar fashion as the LCU CU structure using transform split flags.
- delta QP values may be computed and signaled at the transform unit level.
- a flag indicating whether or not multiple quantization parameters are provided for an LCU may be signaled at the appropriate level, e.g., sequence, picture, and/or slice.
- FIG. 11 is a flow diagram of a method for generating and encoding multiple quantization parameters for an LCU in a video encoder in accordance with one or more embodiments.
- an LCU is received 1100 .
- Various criteria are then applied to the LCU to determine a CU structure for the LCU and QPs are computed for the CUs in the CU structure 1102 .
- the LCU may be divided into CUs of various sizes within a recursive quadtree structure based on the various criteria to determine the granularity at which QP values should be applied, i.e., to determine the CU structure for the LCU.
- a quantization parameter is then computed for each CU in the CU structure.
- CUs in the CU structure are then coded using the corresponding QPs 1104 .
- a block-based coding process, i.e., prediction, transformation, and quantization, is performed on each CU in the CU structure.
- the prediction, transformation, and quantization may be performed on each CU as previously described herein.
- the QPs used in coding the CUs are also coded 1106 .
- delta QPs may be computed.
- the delta QP values may be computed as previously described.
- the coded QPs, the coded CUs, and the CU structure are then entropy coded to generate a portion of the compressed bit stream 1108 .
- the coded QPs, coded CUs, and the CU structure may be signaled in the compressed bit stream as previously described herein.
- FIG. 12 is a flow diagram of a method for decoding multiple quantization parameters for an LCU in a video decoder in accordance with one or more embodiments.
- a coded LCU that may include a coded CU structure and coded QPs is received 1200 .
- the coded CU structure and the coded QPs may be generated by a video encoder as previously described.
- Reconstructed QPs for coded CUs in the coded LCU are then computed based on the coded QPs 1202 .
- the reconstructed QPs may be computed as previously described.
- the coded LCU is then decoded based on the coded CU structure and the reconstructed QPs 1204 .
- coded coding units in the coded LCU may be decoded using a block-based decoding process as previously described herein that reverses a block-based coding process used by the video encoder.
- the techniques described in this disclosure may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the software may be executed in one or more processors, such as a microprocessor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), or digital signal processor (DSP).
- the software that executes the techniques may be initially stored in a computer-readable medium such as compact disc (CD), a diskette, a tape, a file, memory, or any other computer readable storage device, and loaded and executed in the processor.
- the software may also be sold in a computer program product, which includes the computer-readable medium and packaging materials for the computer-readable medium.
- the software instructions may be distributed via removable computer readable media (e.g., floppy disk, optical disk, flash memory, USB key), via a transmission path from computer readable media on another digital system, etc.
- FIG. 13 is a block diagram of a digital system (e.g., a mobile cellular telephone) 1300 that may be configured to use techniques described herein.
- a digital system e.g., a mobile cellular telephone
- the signal processing unit (SPU) 1302 includes a digital signal processing system (DSP) that includes embedded memory and security features.
- the analog baseband unit 1304 receives a voice data stream from handset microphone 1313 a and sends a voice data stream to the handset mono speaker 1313 b .
- the analog baseband unit 1304 also receives a voice data stream from the microphone 1314 a and sends a voice data stream to the mono headset 1314 b .
- the analog baseband unit 1304 and the SPU 1302 may be separate ICs.
- the analog baseband unit 1304 does not embed a programmable processor core, but performs processing based on the configuration of audio paths, filters, gains, etc., being set up by software running on the SPU 1302 .
- the display 1320 may also display pictures and video sequences received from a local camera 1328 , or from other sources such as the USB 1326 or the memory 1312 .
- the SPU 1302 may also send a video sequence to the display 1320 that is received from various sources such as the cellular network via the RF transceiver 1306 or the camera 1328 .
- the SPU 1302 may also send a video sequence to an external video display unit via the encoder unit 1322 over a composite output terminal 1324 .
- the encoder unit 1322 may provide encoding according to PAL/SECAM/NTSC video standards.
- the SPU 1302 includes functionality to perform the computational operations required for video encoding and decoding.
- the SPU 1302 is configured to perform computational operations for applying one or more techniques for generating and encoding multiple quantization parameters for an LCU during the encoding process as described herein.
- Software instructions implementing the techniques may be stored in the memory 1312 and executed by the SPU 1302 , for example, as part of encoding video sequences captured by the local camera 1328 .
- the SPU 1302 is configured to perform computational operations for applying one or more techniques for decoding multiple quantization parameters for an LCU as described herein as part of decoding a received coded video sequence or decoding a coded video sequence stored in the memory 1312 .
- Software instructions implementing the techniques may be stored in the memory 1312 and executed by the SPU 1302 .
Abstract
A method is provided that includes receiving a coded largest coding unit in a video decoder, wherein the coded largest coding unit includes a coded coding unit structure and a plurality of coded quantization parameters, and decoding the coded largest coding unit based on the coded coding unit structure and the plurality of coded quantization parameters.
Description
- This application claims benefit of U.S. Provisional Patent Application Ser. No. 61/331,216, filed May 4, 2010, of U.S. Provisional Patent Application Ser. No. 61/431,889, filed Jan. 12, 2011, and of U.S. Provisional Patent Application Ser. No. 61/469,518, filed Mar. 30, 2011, all of which are incorporated herein by reference in their entirety.
- The demand for digital video products continues to increase. Some examples of applications for digital video include video communication, security and surveillance, industrial automation, and entertainment (e.g., DV, HDTV, satellite TV, set-top boxes, Internet video streaming, video gaming devices, digital cameras, cellular telephones, video jukeboxes, high-end displays and personal video recorders). Further, video applications are becoming increasingly mobile as a result of higher computation power in handsets, advances in battery technology, and high-speed wireless connectivity.
- Video compression, i.e., video coding, is an essential enabler for digital video products as it enables the storage and transmission of digital video. In general, current video coding standards define video compression techniques that apply prediction, transformation, quantization, and entropy coding to sequential blocks of pixels, i.e., macroblocks, in a video sequence to compress, i.e., encode, the video sequence. A macroblock is defined as a 16×16 rectangular block of pixels in a frame or slice of a video sequence where a frame is defined to be a complete image captured during a known time interval.
- A quantization parameter (QP) may be used to modulate the step size of the quantization for each macroblock. For example, in H.264/AVC, quantization of a transform coefficient involves dividing the coefficient by a quantization step size. The quantization step size, which may also be referred to as the quantization scale, is defined by the standard based on the QP value, which may be an integer within the range 0 to 51. A step size for a QP value may be determined, for example, using a table lookup and/or by computational derivation.
- The quality and bit rate of the compressed bit stream are largely determined by the QP value selected for quantizing each macroblock. That is, the quantization step size (Qs) used to quantize a macroblock regulates how much spatial detail is retained in a compressed macroblock. The smaller the Qs, the more detail is retained and the better the quality, but at the cost of a higher bit rate. As the Qs increases, less detail is retained and the bit rate decreases, but at the cost of increased distortion and loss of quality.
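As an illustration of the QP-to-step-size relationship in H.264/AVC, the step size roughly doubles for every increase of 6 in QP. The base values below are the commonly cited step sizes for QP 0 through 5; treat this as an illustrative sketch rather than a normative table from the standard.

```python
# Illustrative sketch: H.264/AVC-style quantization step size derivation.
# Qs doubles for each increase of 6 in QP; the base table covers QP 0..5.
QSTEP_BASE = [0.625, 0.6875, 0.8125, 0.875, 1.0, 1.125]

def qstep(qp):
    assert 0 <= qp <= 51          # valid QP range in H.264/AVC
    return QSTEP_BASE[qp % 6] * (1 << (qp // 6))
```

This doubling structure is what makes a small QP change a coarse but predictable rate-quality knob: six QP steps halve or double the quantizer granularity.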
- Particular embodiments will now be described, by way of example only, and with reference to the accompanying drawings:
-
FIG. 1 shows a block diagram of a digital system in accordance with one or more embodiments; -
FIG. 2 shows an example of a recursive quadtree structure in accordance with one or more embodiments; -
FIGS. 3A and 3B show block diagrams of a video encoder in accordance with one or more embodiments; -
FIGS. 4-8 show examples in accordance with one or more embodiments; -
FIG. 9 shows a block diagram of a video decoder in accordance with one or more embodiments; -
FIG. 10 shows an example in accordance with one or more embodiments; -
FIGS. 11 and 12 show flow diagrams of methods in accordance with one or more embodiments; and -
FIG. 13 shows a block diagram of an illustrative digital system in accordance with one or more embodiments. - Specific embodiments of the invention will now be described in detail with reference to the accompanying figures. Like elements in the various figures are denoted by like reference numerals for consistency.
- As was previously discussed, in current video coding standards such as H.264/AVC, the coding operations of prediction, transformation, quantization, and entropy coding are performed on fixed size 16×16 blocks referred to as macroblocks. Further, a quantization parameter is generated for each macroblock with no provision for doing so for larger or smaller blocks. For larger frame sizes, e.g., frame sizes used for high definition video, using a larger block size for the block-based coding operations may provide better coding efficiency and/or reduce data transmission overhead. For example, a video sequence with a 1280×720 frame size and a frame rate of 60 frames per second is 36 times larger and 4 times faster than a video sequence with a 176×144 frame size and a frame rate of 15 frames per second. A block size larger than 16×16 would allow a video encoder to take advantage of the increased spatial and/or temporal redundancy in the former video sequence. Such larger block sizes are currently proposed in the emerging next generation video standard referred to as High Efficiency Video Coding (HEVC). HEVC is the proposed successor to H.264/MPEG-4 AVC (Advanced Video Coding), currently under development by the Joint Collaborative Team on Video Coding (JCT-VC) established by the ISO/IEC Moving Picture Experts Group (MPEG) and the ITU-T Video Coding Experts Group (VCEG).
- However, an increased block size may adversely affect rate control. That is, many rate control techniques manage QP on a block-by-block basis according to the available space in a hypothetical transmission buffer. Increasing the block size reduces the granularity at which rate control can adjust the value of QP, thus possibly making rate control more difficult and/or adversely affecting quality. Further, reducing the granularity at which QP can change by increasing the block size impacts the visual quality performance of perceptual rate control techniques that adapt the QP based on the activity in a block.
- Embodiments described herein provide for block-based video coding with a large block size, e.g., larger than 16×16, in which multiple quantization parameters for a single block may be generated. More specifically, a picture (or slice) is divided into non-over-lapping blocks of pixels referred to as largest coding units (LCU). As used herein, the term “picture” refers to a frame or a field of a frame. A frame is a complete image captured during a known time interval. A slice is a subset of sequential LCUs in a picture. An LCU is the base unit used for block-based coding. That is, an LCU plays a similar role in coding as the prior art macroblock, but it may be larger, e.g., 32×32, 64×64, 128×128, etc. For purposes of quantization, the LCU is the largest unit in a picture for which a quantization parameter (QP) may be generated.
- As part of the coding process, various criteria, e.g., rate control criteria, complexity considerations, rate distortion constraints, etc., may be applied to partition an LCU into coding units (CU). A CU is a block of pixels within an LCU, and the CUs within an LCU may be of different sizes. After the CU partitioning, i.e., the CU structure, is identified, a QP is generated for each CU. Block-based coding is then applied to the LCU to code the CUs. As part of the coding, the QPs are used in the quantization of the corresponding CUs. The CU structure and the QPs are also coded for communication, i.e., signaling, to a decoder.
- In some embodiments, QP values are communicated to a decoder in a compressed bit stream as delta QP values. Techniques for computing the delta QPs and for controlling the spatial granularity at which delta QPs are signaled are also provided. In some embodiments, more than one technique for computing the delta QP values may be used in coding a single video sequence. In such embodiments, the technique used may be signaled in a compressed bit stream at the appropriate level, e.g., sequence, picture, slice, and/or LCU.
-
FIG. 1 shows a block diagram of a digital system in accordance with one or more embodiments. The system includes a source digital system 100 that transmits encoded video sequences to a destination digital system 102 via a communication channel 116. The source digital system 100 includes a video capture component 104, a video encoder component 106, and a transmitter component 108. The video capture component 104 is configured to provide a video sequence to be encoded by the video encoder component 106. The video capture component 104 may be, for example, a video camera, a video archive, or a video feed from a video content provider. In some embodiments, the video capture component 104 may generate computer graphics as the video sequence, or a combination of live video, archived video, and/or computer-generated video. - The
video encoder component 106 receives a video sequence from the video capture component 104 and encodes it for transmission by the transmitter component 108. The video encoder component 106 receives the video sequence from the video capture component 104 as a sequence of frames, divides the frames into LCUs, and encodes the video data in the LCUs. The video encoder component 106 may be configured to apply one or more techniques for generating and encoding multiple quantization parameters for an LCU during the encoding process as described herein. Embodiments of the video encoder component 106 are described in more detail below in reference to FIGS. 3A and 3B. - The
transmitter component 108 transmits the encoded video data to the destination digital system 102 via the communication channel 116. The communication channel 116 may be any communication medium, or combination of communication media, suitable for transmission of the encoded video sequence, such as, for example, wired or wireless communication media, a local area network, or a wide area network. - The destination
digital system 102 includes a receiver component 110, a video decoder component 112, and a display component 114. The receiver component 110 receives the encoded video data from the source digital system 100 via the communication channel 116 and provides the encoded video data to the video decoder component 112 for decoding. The video decoder component 112 reverses the encoding process performed by the video encoder component 106 to reconstruct the LCUs of the video sequence. The video decoder component 112 may be configured to apply one or more techniques for decoding multiple quantization parameters for an LCU during the decoding process as described herein. Embodiments of the video decoder component 112 are described in more detail below in reference to FIG. 9. - The reconstructed video sequence is displayed on the
display component 114. The display component 114 may be any suitable display device such as, for example, a plasma display, a liquid crystal display (LCD), a light emitting diode (LED) display, etc. - In some embodiments, the source
digital system 100 may also include a receiver component and a video decoder component and/or the destination digital system 102 may include a transmitter component and a video encoder component for transmission of video sequences in both directions for video streaming, video broadcasting, and video telephony. Further, the video encoder component 106 and the video decoder component 112 may perform encoding and decoding in accordance with one or more video compression standards. The video encoder component 106 and the video decoder component 112 may be implemented in any suitable combination of software, firmware, and hardware, such as, for example, one or more digital signal processors (DSPs), microprocessors, discrete logic, application specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), etc. - As was previously mentioned, an LCU may be partitioned into coding units (CU) during the coding process. For simplicity of explanation in describing embodiments, a recursive quadtree structure is assumed for partitioning of LCUs into CUs. One of ordinary skill in the art will understand embodiments in which other partitioning structures are used. In the recursive quadtree structure, a CU may be square. Accordingly, an LCU is also square. A picture is divided into non-overlapped LCUs. Given that a CU is square, the CU structure within an LCU can be a recursive quadtree structure adapted to the frame. That is, each time a CU (or LCU) is partitioned, it is divided into four equal-sized square blocks. Further, a given CU can be characterized by the size of the LCU and the hierarchical depth of the LCU where the CU occurs. The maximum hierarchical depth is determined by the size of the smallest CU (SCU) permitted.
-
FIG. 2 shows an example of a recursive quadtree structure in which the LCU is assumed to be 128×128 and the SCU is assumed to be 8×8. With these assumptions, the maximum hierarchical depth of the quadtree structure is 5. Further, five possible CU sizes are allowed: 128×128, 64×64, 32×32, 16×16, and 8×8. If the LCU is assumed to be 64×64 and the SCU is assumed to be 8×8, the maximum hierarchical depth is 4 and four possible CU sizes are allowed: 64×64, 32×32, 16×16, and 8×8. -
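The relationship between LCU size, SCU size, maximum hierarchical depth, and the allowed CU sizes described above can be computed directly. A minimal sketch (the function names are my own, not from the document):

```python
def allowed_cu_sizes(lcu_size: int, scu_size: int) -> list:
    """List the permitted square CU sizes, halving from the LCU down to the SCU."""
    sizes = []
    size = lcu_size
    while size >= scu_size:
        sizes.append(size)
        size //= 2
    return sizes

def max_hierarchical_depth(lcu_size: int, scu_size: int) -> int:
    """Maximum quadtree depth: one level per allowed CU size."""
    return len(allowed_cu_sizes(lcu_size, scu_size))
```

For a 128×128 LCU and an 8×8 SCU this yields the five CU sizes and maximum depth of 5 from the example above; a 64×64 LCU with the same SCU gives four sizes and depth 4.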
FIGS. 3A and 3B show block diagrams of a video encoder, e.g., the video encoder 106 of FIG. 1, configured to apply one or more techniques for generating and encoding multiple quantization parameters for an LCU as described herein. FIG. 3A shows a high level block diagram of the video encoder and FIG. 3B shows a block diagram of the LCU processing component 342 of the video encoder. - As shown in
FIG. 3A, a video encoder includes a coding control component 340, an LCU processing component 342, a rate control component 344, and a memory 346. An input digital video sequence is provided to the coding control component 340. The memory 346 may be internal memory, external memory, or a combination thereof. The coding control component 340 sequences the various operations of the video encoder. For example, the coding control component 340 performs any processing on the input video sequence that is to be done at the frame level, such as determining the coding type (I, P, or B) of a picture based on the high level coding structure, e.g., IPPP, IBBP, hierarchical-B, and dividing a frame into LCUs for further processing. The LCU size and SCU size may be different in different embodiments of the video encoder. Further, the LCU size and SCU size may be signaled at the sequence, picture, and/or slice level. The coding control component 340 also interacts with the rate control component 344 to determine an initial coding unit structure and initial QPs for each LCU. - The
rate control component 344 receives an LCU from the coding control component 340 and applies various criteria to the LCU to determine one or more QPs to be used by the LCU processing component 342 in coding the LCU. More specifically, the rate control component 344 partitions the LCU into CUs of various sizes within the recursive quadtree structure based on the various criteria to determine the granularity at which QPs should be applied, and then computes a QP for each CU that is not further subdivided, i.e., for each coding unit that is a leaf node in the quadtree. The CU structure of the LCU and the QPs are provided to the coding control component 340. - The QPs applied to an LCU during the coding of the LCU will be signaled in the compressed bit stream. To minimize the amount of information signaled in the compressed bit stream, it may be desirable to constrain the granularity at which QPs may be applied in an LCU. Recall that the SCU size sets the size of the smallest CU in the recursive quadtree structure. In some embodiments, a minimum QP CU size may be specified in addition to the LCU and SCU sizes. In such embodiments, the smallest CU that the
rate control component 344 can use in partitioning an LCU is limited by the minimum QP CU size rather than the SCU size. Thus, the minimum QP CU size may be set to sizes larger than the SCU to constrain the granularity at which QPs may be applied. For example, if the LCU is assumed to be 64×64 and the SCU is assumed to be 8×8, the four possible CU sizes allowed in the recursive quadtree structure are 64×64, 32×32, 16×16, and 8×8. Without the minimum QP CU size constraint, the rate control component 344 can generate QPs for CUs as small as 8×8. However, if a minimum QP CU size of 16×16 is specified, the rate control component 344 can generate QPs for CUs as small as 16×16 but no smaller. The minimum QP CU size may be set at the sequence, picture, slice, and/or LCU level and signaled in the compressed bit stream accordingly. -
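One way to realize the minimum QP CU size constraint is to snap any CU position to the enclosing block of the minimum size before generating or looking up a QP, so QPs are never applied at a finer granularity. This is a sketch under my own naming assumptions, not the patent's exact procedure:

```python
def qp_block_origin(x: int, y: int, min_qp_cu_size: int):
    """Snap a CU's top-left corner to the enclosing minimum-QP-CU-sized block.

    CUs smaller than min_qp_cu_size share the QP generated for that
    enclosing block, so QPs are never signaled below that granularity.
    """
    return ((x // min_qp_cu_size) * min_qp_cu_size,
            (y // min_qp_cu_size) * min_qp_cu_size)
```

With a 16×16 minimum QP CU size, an 8×8 CU at (24, 8) maps to the 16×16 block at (16, 0), whose QP it inherits.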
FIG. 4 shows an example CU partitioning of an LCU. In this example, the LCU is partitioned into four CUs, A, B, C, and D. CU A is further partitioned into four CUs, A1, A2, A3, and A4, and CU D is further partitioned into four CUs, D1, D2, D3, and D4. CUs A2 and D1 are also further partitioned into four CUs, respectively A21, A22, A23, and A24 and D11, D12, D13, and D14. The rate control component 344 computes a QP for each of the CUs that is not further sub-divided, i.e., for A1, A21, A22, A23, A24, A3, A4, B, C, D11, D12, D13, D14, D2, D3, and D4. - Referring again to
FIG. 3A, any suitable criteria may be used by the rate control component 344, such as, for example, perceptual rate control constraints, target bit rate constraints, rate-distortion optimization constraints, and complexity considerations, alone or in any combination. For example, the rate control component 344 may determine the CU partitioning and corresponding QPs at least in part based on the spatial characteristics of the LCU. As is well known, if a region of a picture is smooth, quantization errors can be more visible to the human eye, whereas if a region is busy (e.g., highly textured), any quantization error will likely not be visible. The rate control component 344 may determine the activity in an LCU and then partition the LCU into CU sizes based on the locations/levels of the activity. An activity measure for a region of an image may be determined, for example, based on edge information, texture information, etc. The goal would be to assign lower QP values to flat regions (regions with little to no activity) to reduce quantization error and to assign higher QP values to busy regions (regions with high activity), as the quantization error will be hidden. - For example, assume an image in which the top half is sky and the bottom half is trees. In the top half of the image, most of the region is totally flat, so a low QP value should be used. It may be possible to use one QP value for an entire LCU in that part of the image, as an LCU may contain only sky. In the bottom half of the image, most of the region is busy, so a higher QP value can be used. Further, it may be possible to use one QP value for an entire LCU in that region, as an LCU may contain only trees.
- However, there will be transition regions in which LCUs will have both sky and trees. In such LCUs, there may be regions that are sky and regions that are trees. Such an LCU may be partitioned into CUs sized based on activity (within the limits of the quadtree coding structure). For example, an LCU may be divided into four CUs A, B, C, and D, and the activity level in areas of each CU may then be analyzed. If a CU, say CU A, has regions of widely varying activity levels, then CU A may be further divided into four CUs, A1, A2, A3, and A4, in an attempt to reduce the variance in activity level over the area where a QP will be applied. These four CUs may each also be further divided into four CUs based on activity. Once the CU partitioning is complete, QP values may then be computed for each CU.
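The activity-driven partitioning just described can be sketched as a recursive split that keeps dividing while the four quadrants' activity levels diverge. The variance-based activity measure and the threshold below are my own illustrative choices; the document only says edge or texture information may be used:

```python
import statistics

def activity(block):
    """Illustrative activity measure: variance of the pixel values."""
    pixels = [p for row in block for p in row]
    return statistics.pvariance(pixels)

def partition_by_activity(block, min_size, threshold):
    """Split a square block into quadrants while activity varies widely.

    Returns 'leaf' for an unsplit CU, or a list of four sub-results in
    Z order (top-left, top-right, bottom-left, bottom-right).
    """
    n = len(block)
    if n <= min_size:
        return "leaf"
    half = n // 2
    quads = [[row[c:c + half] for row in block[r:r + half]]
             for r in (0, half) for c in (0, half)]
    acts = [activity(q) for q in quads]
    if max(acts) - min(acts) <= threshold:
        return "leaf"  # activity is uniform enough to share one QP
    return [partition_by_activity(q, min_size, threshold) for q in quads]
```

A flat region (all sky) collapses to a single leaf, while a block mixing flat and busy quadrants is subdivided so smaller CUs, and hence separate QPs, cover the differing areas.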
- The
coding control component 340 provides information regarding the initial LCU CU structure and the QPs determined by the rate control component 344 to the various components of the LCU processing component 342 as needed. For example, the coding control component 340 may provide the LCU and SCU size to the entropy encoder component 334 for inclusion in the compressed video stream at the appropriate point. In another example, the coding control component 340 may generate a quantization parameter array for use by the quantize component 306 and store the quantization parameter array in the memory 346. The size of the quantization parameter array may be determined based on the maximum possible number of CUs in an LCU. For example, assume the size of the SCU is 8×8 and the size of the LCU is 64×64. Thus, the maximum possible number of CUs in an LCU is 64. The quantization parameter array is sized to hold a QP for each of these 64 possible coding units, i.e., it is an 8×8 array. The QPs computed by the rate control component 344 are mapped into this array based on the CU structure. As is explained in more detail herein in reference to the quantize component 306, the QP for any size CU in the LCU may be located in this array using the coordinates of the upper left hand corner of the CU in the LCU. -
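The quantization parameter array can be sketched as an SCU-granularity grid that is filled per CU and indexed by a CU's top-left corner coordinates. The helper names and the module-level SCU constant are assumptions for illustration:

```python
SCU = 8  # assumed SCU width and height in pixels

def fill_qp(qp_array, x, y, cu_size, qp):
    """Record a CU's QP in every SCU-sized cell the CU covers."""
    for row in range(y // SCU, (y + cu_size) // SCU):
        for col in range(x // SCU, (x + cu_size) // SCU):
            qp_array[row][col] = qp

def lookup_qp(qp_array, x, y):
    """Fetch the QP for a CU from the coordinates of its upper left corner.

    Works even for CUs created later during prediction processing, since
    every cell such a CU covers already holds the QP of the original CU.
    """
    return qp_array[y // SCU][x // SCU]
```

For a 64×64 LCU the array is 8×8. If a 32×32 CU at (32, 0) is filled with its QP, a 16×16 CU at (48, 16) created later during prediction still resolves to that same QP.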
FIG. 5 shows an example of mapping QPs into a quantization parameter array 502 based on the CU structure 500. The CU structure assumes a 64×64 LCU. In the quantization parameter array 502, the presence of a CU identifier, e.g., A1, C, D11, etc., in an array cell represents the QP parameter for that CU. For example, the QP for CU A1 is in locations (0,0), (0,1), (1,0), and (1,1), the QP for CU D11 is in location (4,4), and the QP for CU C is in locations (4, 0), (4, 1), (4, 2), (4, 3), (5, 0), (5, 1), (5, 2), (5, 3), (6, 0), (6, 1), (6, 2), (6, 3), (7, 0), (7, 1), (7, 2), and (7, 3). - Referring again to
FIG. 3A, the LCU processing component 342 receives LCUs of the input video sequence from the coding control component 340 and encodes the LCUs to generate the compressed video stream. As previously mentioned, the LCU processing component 342 also receives information regarding the CU structure and QPs of an LCU as determined by the rate control component 344. The CUs in the CU structure of an LCU may be processed by the LCU processing component 342 in a depth-first Z-scan order. For example, in the LCU of FIG. 4, the CUs would be scanned in the following order: A1->A21->A22->A23->A24->A3->A4->B->C->D11->D12->D13->D14->D2->D3->D4. -
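The depth-first Z-scan order can be produced by a simple recursive traversal of the CU quadtree. In the sketch below, a leaf is a CU label and an internal node is a list of four children in Z order (a representation chosen here for illustration):

```python
def z_scan(node):
    """Yield the leaf CUs of a CU quadtree in depth-first Z-scan order."""
    if isinstance(node, list):  # split CU: visit the four quadrants in Z order
        for child in node:
            yield from z_scan(child)
    else:                       # unsplit CU: emit it
        yield node

# The CU structure of FIG. 4:
LCU = [
    ["A1", ["A21", "A22", "A23", "A24"], "A3", "A4"],  # CU A
    "B",
    "C",
    [["D11", "D12", "D13", "D14"], "D2", "D3", "D4"],  # CU D
]
```

`list(z_scan(LCU))` reproduces the scan order given above, from A1 through D4.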
FIG. 3B shows the basic coding architecture of the LCU processing component 342. The LCUs 300 from the coding control component 340 are provided as one input of a motion estimation component 320, as one input of an intra prediction component 324, and to a positive input of a combiner 302 (e.g., adder or subtractor or the like). Further, although not specifically shown, the prediction mode of each picture as selected by the coding control component 340 is provided to a mode selector component and to the entropy encoder 334. - The
storage component 318 provides reference data to the motion estimation component 320 and to the motion compensation component 322. The reference data may include one or more previously encoded and decoded CUs, i.e., reconstructed CUs. - The
motion estimation component 320 provides motion estimation information to the motion compensation component 322 and the entropy encoder 334. More specifically, the motion estimation component 320 performs tests on CUs in an LCU based on multiple temporal prediction modes using reference data from storage 318 to choose the best motion vector(s)/prediction mode based on a coding cost. To perform the tests, the motion estimation component 320 may begin with the CU structure provided by the coding control component 340. The motion estimation component 320 may divide each CU indicated in the CU structure into prediction units according to the unit sizes of the prediction modes and calculate the coding costs for each prediction mode for each CU. - For coding efficiency, the
motion estimation component 320 may also decide to alter the CU structure by further partitioning one or more of the CUs in the CU structure. That is, when choosing the best motion vectors/prediction modes, in addition to testing with the initial CU structure, the motion estimation component 320 may also choose to divide the larger CUs in the initial CU structure into smaller CUs (within the limits of the recursive quadtree structure), and calculate coding costs at lower levels in the coding hierarchy. As will be explained below in reference to the quantize component 306, any changes made to the CU structure do not affect how the QPs computed by the rate control component 344 are applied. If the motion estimation component 320 changes the initial CU structure, the modified CU structure is communicated to other components in the LCU processing component 342 that need the information. - The
motion estimation component 320 provides the selected motion vector (MV) or vectors and the selected prediction mode for each inter predicted CU to the motion compensation component 322, and provides the selected motion vector (MV) to the entropy encoder 334. The motion compensation component 322 provides motion compensated inter prediction information to a selector switch 326 that includes motion compensated inter predicted CUs and the selected temporal prediction modes for the inter predicted CUs. The coding costs of the inter predicted CUs are also provided to the mode selector component (not shown). - The
intra prediction component 324 provides intra prediction information to the selector switch 326 that includes intra predicted CUs and the corresponding spatial prediction modes. That is, the intra prediction component 324 performs spatial prediction in which tests based on multiple spatial prediction modes are performed on CUs in an LCU using previously encoded neighboring CUs of the picture from the buffer 328 to choose the best spatial prediction mode for generating an intra predicted CU based on a coding cost. To perform the tests, the intra prediction component 324 may begin with the CU structure provided by the coding control component 340. The intra prediction component 324 may divide each CU indicated in the CU structure into prediction units according to the unit sizes of the spatial prediction modes and calculate the coding costs for each prediction mode for each CU. - For coding efficiency, the
intra prediction component 324 may also decide to alter the CU structure by further partitioning one or more of the CUs in the CU structure. That is, when choosing the best prediction modes, in addition to testing with the initial CU structure, the intra prediction component 324 may also choose to divide the larger CUs in the initial CU structure into smaller CUs (within the limits of the recursive quadtree structure), and calculate coding costs at lower levels in the coding hierarchy. As will be explained below in reference to the quantize component 306, any changes made to the CU structure do not affect how the QP values computed by the rate control component 344 are applied. If the intra prediction component 324 changes the initial CU structure, the modified CU structure is communicated to other components in the LCU processing component 342 that need the information. Although not specifically shown, the spatial prediction mode of each intra predicted CU provided to the selector switch 326 is also provided to the transform component 304. Further, the coding costs of the intra predicted CUs are also provided to the mode selector component. - The
selector switch 326 selects between the motion-compensated inter predicted CUs from the motion compensation component 322 and the intra predicted CUs from the intra prediction component 324 based on the difference metrics of the CUs and the picture prediction mode provided by the mode selector component. The output of the selector switch 326, i.e., the predicted CU, is provided to a negative input of the combiner 302 and to a delay component 330. The output of the delay component 330 is provided to another combiner (i.e., an adder) 338. The combiner 302 subtracts the predicted CU from the current CU to provide a residual CU to the transform component 304. The resulting residual CU is a set of pixel difference values that quantify differences between the pixel values of the original CU and the predicted CU. - The
transform component 304 performs unit transforms on the residual CUs to convert the residual pixel values to transform coefficients and provides the transform coefficients to a quantize component 306. The quantize component 306 determines a QP for the transform coefficients of a residual CU and quantizes the transform coefficients based on that QP. For example, the quantize component 306 may divide the values of the transform coefficients by a quantization scale (Qs) derived from the QP value. In some embodiments, the quantize component 306 represents the coefficients using a desired number of quantization steps, the number of steps used (or, correspondingly, the value of Qs) determining the number of bits used to represent the residuals. Other algorithms for quantization, such as rate-distortion optimized quantization, may also be used by the quantize component 306. - The
quantize component 306 determines a QP for the residual CU transform coefficients based on the initial CU structure provided by the coding control component 340. That is, if the residual CU corresponds to a CU in the initial CU structure, then the quantize component 306 uses the QP computed for that CU by the rate control component 344. For example, referring to the example of FIG. 4, if the residual CU was generated from CU C with no further partitioning during the prediction processing, then the QP for CU C is used to quantize the residual CU. - If the residual CU corresponds to a CU created during the prediction processing, then the
quantize component 306 uses the QP of the original CU that was subdivided during the prediction processing to create the CU as the QP for the residual CU. For example, if CU C of FIG. 4 is further partitioned during the prediction processing as shown in FIG. 6, and the residual CU corresponds to one of CUs C1, C2, C3, or C4, then the QP for CU C is used to quantize the residual CU. In embodiments where a minimum QP CU size is specified, if the residual CU corresponds to a CU created in the initial CU structure and is smaller than the minimum QP CU size, then the quantize component 306 uses the QP of the original CU of the same size as the minimum QP CU that was partitioned by the rate control component 344 to create the CU. For example, in the LCU of FIG. 4, if the LCU size is 64×64 and the minimum QP CU size is 32×32 and the residual CU corresponds to one of the 8×8 CUs A21, A22, A23, or A24, then the QP for CU A2 is used to quantize the residual CU. - As was previously mentioned, the
coding control component 340 may generate a quantization parameter array that is stored in the memory 346. The quantize component 306 may use this array to determine a QP for the residual CU coefficients. That is, the coordinates of the upper left corner of the CU corresponding to the residual CU, whether that CU is in the original coding structure or was added during the prediction process, may be used to locate the appropriate QP in the quantization parameter array. In general, the x coordinate may be divided by the width of the SCU and the y coordinate may be divided by the height of the SCU to compute the coordinates of the appropriate QP in the quantization parameter array. - For example, consider the
CU structure 500 and the quantization parameter array 502 of FIG. 5. In this example, the SCU is 8×8. The coordinates of the upper left corner of CU A4 are (16, 16). Thus, the coordinates of the location in the quantization parameter array 502 holding the appropriate QP are (2, 2). Referring now to the CU structure of FIG. 6, recall that for this example C1, C2, C3, and C4 are assumed to be added to the CU structure during prediction processing. The coordinates of the upper left corner of CU C4 are (48, 16). Thus, the coordinates of the location in the quantization parameter array 502 holding the appropriate QP are (6, 2). - Because the DCT transform redistributes the energy of the residual signal into the frequency domain, the quantized transform coefficients are taken out of their scan ordering by a
scan component 308 and arranged by significance, such as, for example, beginning with the more significant coefficients followed by the less significant. The ordered quantized transform coefficients for a CU provided via the scan component 308, along with header information for the CU, are coded by the entropy encoder 334, which provides a compressed bit stream to a video buffer 336 for transmission or storage. The entropy coding performed by the entropy encoder 334 may use any suitable entropy encoding technique, such as, for example, context adaptive variable length coding (CAVLC), context adaptive binary arithmetic coding (CABAC), run length coding, etc. - The
entropy encoder 334 encodes information regarding the CU structure used to generate the coded CUs in the compressed bit stream and information indicating the QPs used in the quantization of the coded CUs. In some embodiments, the CU structure of an LCU is signaled to a decoder by encoding the sizes of the LCU and the SCU and a series of split flags in the compressed bit stream. If a CU in the recursive quadtree structure defined by the LCU and the SCU is split, i.e., partitioned, in the CU structure, a split flag with a value indicating a split, e.g., 1, is signaled in the compressed bit stream. If a CU is not split and the size of the CU is larger than that of the SCU, a split flag with a value indicating no split, e.g., 0, is signaled in the compressed bit stream. Information specific to the unsplit CU will follow the split flag in the bit stream. The information specific to a CU may include CU header information (prediction mode, motion vector differences, coding block flag information, etc.), QP information, and coefficient information. Coefficient information may not be included if all of the CU coefficients are zero. Further, if the size of a CU is the same as that of the SCU, no split flag is encoded in the bit stream for that CU. -
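The split-flag signaling can be sketched as a pre-order traversal that emits a 1 for each split CU, a 0 for each unsplit CU larger than the SCU, and nothing for SCU-sized CUs. The quadtree representation and names are illustrative, and the per-CU payload (header, QP, coefficients) is elided:

```python
def encode_split_flags(node, cu_size, scu_size, flags):
    """Append the split flags for a CU quadtree to `flags`.

    A node is either a leaf label (unsplit CU) or a list of four children.
    """
    if isinstance(node, list):
        flags.append(1)  # CU is split; descend into the four sub-CUs
        for child in node:
            encode_split_flags(child, cu_size // 2, scu_size, flags)
    elif cu_size > scu_size:
        flags.append(0)  # unsplit CU; CU-specific information would follow
    # SCU-sized CUs carry no split flag at all

# The LCU of FIG. 4, with a 64x64 LCU and an 8x8 SCU:
lcu = [
    ["A1", ["A21", "A22", "A23", "A24"], "A3", "A4"],
    "B",
    "C",
    [["D11", "D12", "D13", "D14"], "D2", "D3", "D4"],
]
flags = []
encode_split_flags(lcu, 64, 8, flags)
```

The first flags come out as S0=1, S1=1, S2=0, S3=1, S4=0, and the 8×8 CUs A21 through A24 contribute no flags, matching the walk-through in FIG. 7 below.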
FIG. 7 shows an example of signaling the CU structure of the LCU of FIG. 4, assuming that the LCU size is 64×64 and the SCU size is 8×8. This example assumes that all the CUs have at least one non-zero coefficient. In this example, split flag S0 is set to 1 to indicate that the LCU is split into four CUs: A, B, C, and D. Split flag S1 is set to 1 to indicate that CU A is split into four CUs: A1, A2, A3, and A4. Split flag S2 is set to 0 to indicate that CU A1 is not split. Information specific to CU A1 follows split flag S2. Split flag S3 is set to 1 to indicate that CU A2 is split into four CUs: A21, A22, A23, and A24. CUs A21, A22, A23, and A24 are 8×8, so no split flags are encoded for these CUs. Information specific to each of these CUs follows split flag S3. Split flag S4 is set to 0 to indicate that CU A3 is not split, and so on. - The
entropy encoder 334 includes coded QP information for each coded CU in the compressed bit stream. In some embodiments, the entropy encoder 334 includes this QP information in the form of a delta QP value, i.e., the difference between a QP value and a predicted QP value. In some embodiments, the entropy encoder 334 computes a delta QP for a CU as dQP=QPcurr−QPprev where QPcurr is the QP value for the CU and QPprev is the QP value for the CU immediately preceding the CU in the scanning order, e.g., in depth-first Z scan order. In this case, QPprev is the predicted QP. For example, referring to FIG. 4 , the delta QP for CU B is QPB−QPA4 and the delta QP for CU D2 is QPD2−QPD14. Computing a delta QP in this way may be desirable when rate control is not based on perceptual criteria. - In some embodiments, the
entropy encoder 334 computes a value for delta QP as a function of the QP values of one or more spatially neighboring CUs. That is, delta QP=QPcurr−f(QPs of spatially neighboring CUs). In this case, f( ) provides the predicted QP value. Computing delta QP in this way may be desirable when rate control is based on perceptual criteria. Examples of the function f( ) include f( )=QP of a left neighboring CU and f( )=the average of the QP value for a left neighboring CU and the QP value of a top neighboring CU. More sophisticated functions of QPs of spatially neighboring CUs may also be used, including using the QP values of more than one or two neighboring CUs. - Within an LCU, spatially neighboring CUs of a CU may be defined as those CUs adjacent to the CU in the CU structure of the LCU. For example, in
FIG. 4 , CU A22, CU A24, and CU A4 are left neighboring CUs of CU B. Also, CU A23 and CU A24 are top neighboring CUs of CU A4. For CUs on the left and top edges of an LCU, adjacent CUs in an LCU to the left or above, respectively, of the LCU in a picture may be considered as left neighboring and top neighboring CUs, respectively. - In some embodiments, more than one mode for computing a predicted QP value for purposes of computing delta QP may be provided. For example, the
entropy encoder 334 may provide two different modes for computing delta QP: dQP=QPcurr−QPprev and dQP=QPcurr−f(QPs of spatially neighboring CUs). That is, the entropy encoder 334 may compute a delta QP as per the following pseudo code: -
If (qp_predictor_mode == 1)
    deltaQP = (QP of current CU) − (QP of previous CU in coding order);
else if (qp_predictor_mode == 2)
    deltaQP = (QP of current CU) − f(QPs of spatially neighboring CUs);
where qp_predictor_mode is selected elsewhere in the video encoder. More than two modes for computing a delta QP value may be provided in a similar fashion. Further, the mode used to compute delta QPs, i.e., qp_predictor_mode, may be signaled in the compressed bit stream at the appropriate level, e.g., sequence, picture, slice, and/or LCU level. - In some embodiments, the
entropy encoder 334 encodes a delta QP value for each CU in the compressed bit stream. For example, referring to FIG. 7 , a delta QP value would be included in the information specific to CU A1, in the information specific to CU A21, in the information specific to CU A22, etc. In some embodiments, if a minimum QP CU size is specified, a delta QP value is encoded in the CU specific information for each CU with at least one non-zero coefficient that is larger than or equal to the minimum QP CU in size. For those CUs smaller than the minimum QP CU, a delta QP is encoded at the non-leaf CU level. The size of the minimum QP CU is also encoded in the bit stream at the appropriate point. FIG. 8 shows an example of signaling delta QPs for the CU structure of the LCU of FIG. 4 assuming that the LCU size is 64×64, the SCU size is 8×8, the minimum QP CU size is 32×32, and each CU has at least one non-zero coefficient. Each of the CUs A, B, C, and D is 32×32, so delta QPs, designated dQPx, are signaled for those CUs and not for any of the smaller ones. - Referring again to
FIG. 3B , inside the encoder is an embedded decoder. As any compliant decoder is expected to reconstruct an image from a compressed bitstream, the embedded decoder provides the same utility to the video encoder. Knowledge of the reconstructed input allows the video encoder to transmit the appropriate residual energy to compose subsequent frames. To determine the reconstructed input, i.e., reference data, the ordered quantized transform coefficients for a CU provided via the scan component 308 are returned to their original post-transform arrangement by an inverse scan component 310, the output of which is provided to a dequantize component 312, which outputs estimated transformed information, i.e., an estimated or reconstructed version of the transform result from the transform component 304. In some embodiments, the QP for the CU is communicated to the dequantize component 312 by the quantize component 306. In some embodiments, the dequantize component 312 determines the QP from a quantization parameter array in the manner previously described. The estimated transformed information is provided to the inverse transform component 314, which outputs estimated residual information which represents a reconstructed version of a residual CU. The reconstructed residual CU is provided to the combiner 338. - The
combiner 338 adds the delayed selected CU to the reconstructed residual CU to generate an unfiltered reconstructed CU, which becomes part of reconstructed picture information. The reconstructed picture information is provided via a buffer 328 to the intra prediction component 324 and to a filter component 316. The filter component 316 is an in-loop filter which filters the reconstructed frame information and provides filtered reconstructed CUs, i.e., reference data, to the storage component 318. - In some embodiments, the above-described techniques for computing delta QPs may be used in other components of the video encoder. For example, if the quantize component uses rate distortion optimized quantization which minimizes total rate and distortion for a CU (Total rate=Rate of (dQP)+Rate for (CU)), one or both of these techniques may be used by these components to compute the needed delta QP values. In some embodiments, the QPs originally generated by the
rate control component 344 may be adjusted up or down by one or more other components in the video encoder prior to quantization. -
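The two qp_predictor_mode choices from the pseudo code above can be sketched as follows. This is a minimal illustration, assuming (as one of the several possible choices of f( ) named earlier) that f( ) is the integer average of whatever neighboring QPs are available; the function name and signature are hypothetical:

```python
def compute_delta_qp(qp_curr, qp_prev, neighbor_qps, qp_predictor_mode):
    """Mirror of the qp_predictor_mode pseudo code above.

    Mode 1 predicts from the QP of the previous CU in coding order;
    mode 2 predicts from f() of spatially neighboring CU QPs, taken here
    as the integer average of the supplied neighbors (an illustrative
    choice of f(), not the only one described)."""
    if qp_predictor_mode == 1:
        predicted = qp_prev
    elif qp_predictor_mode == 2:
        predicted = sum(neighbor_qps) // len(neighbor_qps)
    else:
        raise ValueError("unknown qp_predictor_mode")
    return qp_curr - predicted
```

For example, with a current QP of 27, a left-neighbor QP of 26, and a top-neighbor QP of 30, mode 2 predicts 28 and codes a delta of −1.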
FIG. 9 shows a block diagram of a video decoder, e.g., the video decoder 112, in accordance with one or more embodiments of the invention. The video decoder operates to reverse the encoding operations, i.e., entropy coding, quantization, transformation, and prediction, performed by the video encoder of FIGS. 3A and 3B to regenerate the frames of the original video sequence. In view of the above description of a video encoder, one of ordinary skill in the art will understand the functionality of components of the video decoder without detailed explanation. - In the video decoder of
FIG. 9 , the entropy decoding component 900 receives an entropy encoded video bit stream and reverses the entropy encoding to recover the encoded CUs and the encoded CU structures of the LCUs. The decoded information is communicated to other components in the video decoder as appropriate. The entropy decoding performed by the entropy decoding component 900 may include detecting coded QP values in the bit stream and decoding them for communication to the inverse quantization component 902. In some embodiments, the entropy decoding component 900 may detect delta QP values in the bit stream and compute reconstructed QP values from the delta QP values for communication to the inverse quantization component 902. For example, if the video encoder computed a delta QP as QPcurr−QPprev where QPprev is the QP of the previous CU in the coding order, the entropy decoding component 900 computes QP as the delta QP+QPprev, where QPprev is the reconstructed QP computed by the entropy decoding component 900 for the immediately preceding CU in the bit stream. For this computation, the entropy decoding component 900 may store and update a value for QPprev as each encoded CU is entropy decoded and a reconstructed QP is determined for that CU. - If the video encoder computed a delta QP as QPcurr−f(QPs of spatially neighboring CUs), the
entropy decoding component 900 computes a reconstructed QP as the delta QP+f(rQPs of spatially neighboring CUs), where rQP is a reconstructed QP. Further, if the video encoder supports multiple modes for computing a delta QP, the video decoder will compute a reconstructed QP from the delta QP according to the mode signaled in the bit stream. - To perform the computation QPcurr=delta QP+f(rQPs of spatially neighboring CUs), the
entropy decoding component 900 may store the reconstructed QPs of the appropriate spatially neighboring CUs. For example, the reconstructed QPs of the neighboring CUs may be stored in a reconstructed quantization parameter array in a manner similar to that of the previously described quantization parameter array. - Example reconstructed QP calculations are described below assuming f( ) is equal to the rQP of the left neighboring CU and in reference to the
example LCU structures 1000 and 1002 in FIG. 10 . In this example, LCU 0 1000 has been decoded and its reconstructed QPs are stored in the reconstructed quantization parameter array 1004. As reconstructed QPs are computed for LCU 1 1002, they may be stored in a reconstructed quantization parameter array for that LCU. The following calculations demonstrate how reconstructed QP values for some of the CUs in LCU 1 1002 may be reconstructed from left neighboring CUs: -
rQP(A1)=dQP(A1)+rQP(B22 of LCU0 1000) -
rQP(A21)=dQP(A21)+rQP(A1) -
rQP(A22)=dQP(A22)+rQP(A21) -
rQP(A23)=dQP(A23)+rQP(A1) -
rQP(A24)=dQP(A24)+rQP(A23) -
rQP(A3)=dQP(A3)+rQP(B42 of LCU0 1000) -
rQP(A4)=dQP(A4)+rQP(A3) - In this example, the left column of the reconstructed quantization parameter array 1004 (B22, B24, B42, B44, D22, D24, D42, D44) is all that is required for applying predictor f( ) to
LCU 1 1002. If the left neighboring CU is not available, as can be the case for the first LCU in a picture, a predefined QP value may be used, or the reconstructed QP value of the previous CU in coding order may be used. - Referring again to
FIG. 9 , the inverse quantization component 902 de-quantizes the residual coefficients of the residual CUs based on the reconstructed QP values. The inverse transform component 904 transforms the frequency domain data from the inverse quantization component 902 back to residual CUs. That is, the inverse transform component 904 applies an inverse unit transform, i.e., the inverse of the unit transform used for encoding, to the de-quantized residual coefficients to produce the residual CUs. - A residual CU supplies one input of the
addition component 906. The other input of the addition component 906 comes from the mode switch 908. When inter-prediction mode is signaled in the encoded video stream, the mode switch 908 selects a prediction block from the motion compensation component 910 and when intra-prediction is signaled, the mode switch selects a prediction block from the intra prediction component 914. The motion compensation component 910 receives reference data from storage 912 and applies the motion compensation computed by the encoder and transmitted in the encoded video bit stream to the reference data to generate a predicted CU. The intra-prediction component 914 receives previously decoded predicted CUs from the current picture and applies the intra-prediction computed by the encoder as signaled by a spatial prediction mode transmitted in the encoded video bit stream to the previously decoded predicted CUs to generate a predicted CU. - The
addition component 906 generates a decoded CU by adding the selected predicted CU and the residual CU. The output of the addition component 906 supplies the input of the in-loop filter component 916. The in-loop filter component 916 smoothes artifacts created by the block nature of the encoding process to improve the visual quality of the decoded frame. The output of the in-loop filter component 916 is the decoded frames of the video bit stream. Each decoded CU is stored in storage 912 to be used as reference data. - In some embodiments, unit transforms smaller than a CU may be used. In such embodiments, the video encoder may further partition a CU into transform units. For example, a CU may be partitioned into smaller transform units in accordance with a recursive quadtree structure adapted to the CU size. The transform unit structure of the CU may be signaled to the decoder in a similar fashion as the LCU CU structure using transform split flags. Further, in some such embodiments, delta QP values may be computed and signaled at the transform unit level. In some embodiments, a flag indicating whether or not multiple quantization parameters are provided for an LCU may be signaled at the appropriate level, e.g., sequence, picture, and/or slice.
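The decoder-side QP reconstruction described above, with f( ) equal to the rQP of the left neighboring CU and the stated fallbacks when no left neighbor exists, can be sketched as follows. The function name and the predefined QP of 26 are illustrative assumptions (the text does not fix a predefined value):

```python
PREDEFINED_QP = 26  # illustrative predefined value; not specified above

def reconstruct_qp(dqp, left_rqp=None, prev_rqp=None):
    """rQP = dQP + predictor, with f() = rQP of the left neighboring CU.

    When no left neighbor is available (e.g., the first LCU in a
    picture), fall back to the reconstructed QP of the previous CU in
    coding order, and then to a predefined QP value."""
    if left_rqp is not None:
        predictor = left_rqp
    elif prev_rqp is not None:
        predictor = prev_rqp
    else:
        predictor = PREDEFINED_QP
    return dqp + predictor
```

For example, the first calculation listed above, rQP(A1)=dQP(A1)+rQP(B22 of LCU 0 1000), corresponds to `reconstruct_qp(dqp_a1, left_rqp=rqp_b22)`.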
-
FIG. 11 is a flow diagram of a method for generating and encoding multiple quantization parameters for an LCU in a video encoder in accordance with one or more embodiments. Initially, an LCU is received 1100. Various criteria are then applied to the LCU to determine a CU structure for the LCU and QPs are computed for the CUs in the CU structure 1102. For example, as previously discussed, the LCU may be divided into CUs of various sizes within a recursive quadtree structure based on the various criteria to determine the granularity at which QP values should be applied, i.e., to determine the CU structure for the LCU. A quantization parameter is then computed for each CU in the CU structure. - CUs in the CU structure are then coded using the corresponding QPs 1104. For example, a block-based coding process, i.e., prediction, transformation, and quantization, is performed on each CU in the CU structure. The prediction, transformation, and quantization may be performed on each CU as previously described herein.
- The QPs used in coding the CUs are also coded 1106. For example, to signal the QPs used in coding the CUs, delta QPs may be computed. The delta QP values may be computed as previously described. The coded QPs, the coded CUs, and the CU structure are then entropy coded to generate a portion of the
compressed bit stream 1108. The coded QPs, coded CUs, and the CU structure may be signaled in the compressed bit stream as previously described herein. -
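One detail of step 1106 worth making concrete is the minimum QP CU size rule illustrated by FIG. 8: a CU carries its own delta QP only when it has at least one non-zero coefficient and is at least as large as the minimum QP CU. A minimal sketch of that signaling decision (the function name is hypothetical):

```python
def carries_delta_qp(cu_size, min_qp_cu_size, has_nonzero_coeff=True):
    """True when a CU signals its own delta QP under the minimum QP CU
    size rule: the CU has at least one non-zero coefficient and is at
    least as large as the minimum QP CU. Smaller CUs share a delta QP
    signaled at the enclosing non-leaf CU level instead."""
    return has_nonzero_coeff and cu_size >= min_qp_cu_size

# FIG. 8 example: LCU 64x64, SCU 8x8, minimum QP CU 32x32 -> only the
# 32x32 CUs A, B, C, and D carry delta QPs; the 16x16 and 8x8 CUs do not.
```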
FIG. 12 is a flow diagram of a method for decoding multiple quantization parameters for an LCU in a video decoder in accordance with one or more embodiments. Initially, a coded LCU that may include a coded CU structure and coded QPs is received 1200. The coded CU structure and the coded QPs may be generated by a video encoder as previously described. Reconstructed QPs for coded CUs in the coded LCU are then computed based on the coded QPs 1202. The reconstructed QPs may be computed as previously described. The coded LCU is then decoded based on the coded CU structure and the reconstructed QPs 1204. For example, coded coding units in the coded LCU may be decoded using a block-based decoding process as previously described herein that reverses a block-based coding process used by the video encoder. - The techniques described in this disclosure may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the software may be executed in one or more processors, such as a microprocessor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), or digital signal processor (DSP). The software that executes the techniques may be initially stored in a computer-readable medium such as a compact disc (CD), a diskette, a tape, a file, memory, or any other computer readable storage device, and loaded and executed in the processor. In some cases, the software may also be sold in a computer program product, which includes the computer-readable medium and packaging materials for the computer-readable medium. In some cases, the software instructions may be distributed via removable computer readable media (e.g., floppy disk, optical disk, flash memory, USB key), via a transmission path from computer readable media on another digital system, etc.
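The encoding method of FIG. 11 and the decoding method of FIG. 12 invert each other; a round-trip sketch using the previous-CU-in-coding-order predictor for the delta QPs illustrates this. The initial predictor `SLICE_QP` is a hypothetical starting value, since the text does not specify the predictor for the very first CU:

```python
SLICE_QP = 30  # hypothetical initial predictor for the first CU

def encode_delta_qps(qps):
    """Per-CU delta QPs: dQP = QPcurr - QPprev in coding order."""
    deltas, prev = [], SLICE_QP
    for qp in qps:
        deltas.append(qp - prev)
        prev = qp
    return deltas

def decode_delta_qps(deltas):
    """Inverse: rQP = dQP + rQPprev, updating rQPprev per decoded CU."""
    rqps, prev = [], SLICE_QP
    for d in deltas:
        prev = d + prev
        rqps.append(prev)
    return rqps

qps = [30, 32, 32, 29, 31]
assert decode_delta_qps(encode_delta_qps(qps)) == qps
```

The decoder state mirrors the encoder's: each side stores and updates one previous QP value as CUs are processed, exactly as described for the entropy decoding component 900 above.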
- Embodiments of the methods and encoders as described herein may be implemented for virtually any type of digital system (e.g., a desktop computer, a laptop computer, a handheld device such as a mobile (i.e., cellular) phone, a personal digital assistant, a digital camera, etc.).
FIG. 13 is a block diagram of a digital system (e.g., a mobile cellular telephone) 1300 that may be configured to use techniques described herein. - As shown in
FIG. 13 , the signal processing unit (SPU) 1302 includes a digital signal processing system (DSP) that includes embedded memory and security features. The analog baseband unit 1304 receives a voice data stream from handset microphone 1313 a and sends a voice data stream to the handset mono speaker 1313 b. The analog baseband unit 1304 also receives a voice data stream from the microphone 1314 a and sends a voice data stream to the mono headset 1314 b. The analog baseband unit 1304 and the SPU 1302 may be separate ICs. In many embodiments, the analog baseband unit 1304 does not embed a programmable processor core, but performs processing based on configuration of audio paths, filters, gains, etc. being set up by software running on the SPU 1302. - The
display 1320 may also display pictures and video sequences received from a local camera 1328, or from other sources such as the USB 1326 or the memory 1312. The SPU 1302 may also send a video sequence to the display 1320 that is received from various sources such as the cellular network via the RF transceiver 1306 or the camera 1326. The SPU 1302 may also send a video sequence to an external video display unit via the encoder unit 1322 over a composite output terminal 1324. The encoder unit 1322 may provide encoding according to PAL/SECAM/NTSC video standards. - The
SPU 1302 includes functionality to perform the computational operations required for video encoding and decoding. In one or more embodiments, the SPU 1302 is configured to perform computational operations for applying one or more techniques for generating and encoding multiple quantization parameters for an LCU during the encoding process as described herein. Software instructions implementing the techniques may be stored in the memory 1312 and executed by the SPU 1302, for example, as part of encoding video sequences captured by the local camera 1328. In some embodiments, the SPU 1302 is configured to perform computational operations for applying one or more techniques for decoding multiple quantization parameters for an LCU as described herein as part of decoding a received coded video sequence or decoding a coded video sequence stored in the memory 1312. Software instructions implementing the techniques may be stored in the memory 1312 and executed by the SPU 1302. - The steps in the flow diagrams herein are described in a specific sequence merely for illustration. Alternative embodiments using a different sequence of steps may also be implemented without departing from the scope and spirit of the present disclosure, as will be apparent to one skilled in the relevant arts by reading the disclosure provided herein.
- While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this disclosure, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as disclosed herein. Accordingly, the scope of the invention should be limited only by the attached claims. It is therefore contemplated that the appended claims will cover any such modifications of the embodiments as fall within the true scope and spirit of the invention.
Claims (20)
1. A method comprising:
receiving a coded largest coding unit in a video decoder, wherein the coded largest coding unit comprises a coded coding unit structure and a plurality of coded quantization parameters; and
decoding the coded largest coding unit based on the coded coding unit structure and the plurality of coded quantization parameters.
2. The method of claim 1 , wherein the coded largest coding unit comprises a plurality of coding units as indicated by the coded coding unit structure, wherein at least a first coding unit and a second coding unit are different sizes.
3. The method of claim 1 , wherein decoding the coded largest coding unit further comprises:
receiving, at least one of a sequence level, a picture level, a slice level, and a largest coding unit level, a minimum quantization parameter coding unit size; and
decoding the coded largest coding unit based on the minimum quantization parameter coding unit size.
4. The method of claim 1 , wherein decoding the coded largest coding unit further comprises reconstructing a quantization parameter for a coding unit in the coded largest coding unit by adding a coded quantization parameter corresponding to the coding unit and a reconstructed quantization parameter for a coding unit preceding the coding unit in a coding order.
5. The method of claim 1 , wherein decoding the coded largest coding unit further comprises reconstructing a quantization parameter for a coding unit in the coded largest coding unit by adding a coded quantization parameter corresponding to the coding unit and a reconstructed quantization parameter for a left spatially neighboring coding unit of the coding unit.
6. The method of claim 1 , wherein decoding the coded largest coding unit further comprises reconstructing a quantization parameter for a coding unit in the coded largest coding unit by adding a coded quantization parameter corresponding to the coding unit and a function of one or more reconstructed quantization parameters of spatially neighboring coding units of the coding unit.
7. The method of claim 1 , further comprising:
receiving a coded indicator of a first mode used to compute a first coded quantization parameter;
reconstructing the first coded quantization parameter according to the first mode;
receiving a coded indicator of a second mode used to compute a second coded quantization parameter; and
reconstructing the second coded quantization parameter according to the second mode.
8. A method comprising:
receiving a largest coding unit in a video encoder;
determining a coding unit structure comprising a plurality of coding units for the largest coding unit;
computing a plurality of quantization parameters for the largest coding unit, wherein there is a one-to-one correspondence between each quantization parameter and a coding unit in the coding unit structure;
coding coding units in the plurality of coding units based on the corresponding quantization parameters to generate a portion of a compressed bit stream; and
coding the coding unit structure and the plurality of quantization parameters in the compressed bit stream.
9. The method of claim 8 , wherein at least a first coding unit and a second coding unit are different sizes.
10. The method of claim 8 , wherein determining a coding unit structure further comprises:
determining the coding unit structure based on a minimum quantization parameter coding unit size, wherein the minimum quantization parameter coding unit size is larger than a smallest coding unit size allowed in the coding unit structure.
11. The method of claim 10 , further comprising signaling the minimum quantization parameter coding unit size in the compressed bit stream at least one of a sequence level, a picture level, a slice level, and a largest coding unit level.
12. The method of claim 8 , wherein a quantization parameter is coded as a difference between a quantization parameter for a coding unit in the largest coding unit and a quantization parameter for a coding unit preceding the coding unit in a coding order.
13. The method of claim 8 , wherein a quantization parameter is coded as a difference between a quantization parameter for a coding unit in the largest coding unit and a quantization parameter for a left spatially neighboring coding unit of the coding unit.
14. The method of claim 8 , wherein a quantization parameter is coded as a difference between a quantization parameter for a coding unit in the largest coding unit and a function of one or more quantization parameters of spatially neighboring coding units of the coding unit.
15. The method of claim 8 , further comprising:
selecting a mode for coding quantization parameters from a plurality of modes for coding quantization parameters; and
signaling the selected mode in the compressed bit stream at least one of a sequence level, a picture level, a slice level, and a largest coding unit level.
16. A digital system comprising a video decoder configured to:
receive a coded largest coding unit, wherein the coded largest coding unit comprises a coded coding unit structure and a plurality of coded quantization parameters; and
decode the coded largest coding unit based on the coded coding unit structure and the plurality of coded quantization parameters.
17. The digital system of claim 16 , wherein the coded largest coding unit comprises a plurality of coding units as indicated by the coded coding unit structure, wherein at least a first coding unit and a second coding unit are different sizes.
18. The digital system of claim 16 , wherein the video decoder is further configured to:
receive, at least one of a sequence level, a picture level, a slice level, and a largest coding unit level, a minimum quantization parameter coding unit size; and
decode the largest coding unit based on the minimum quantization parameter coding unit size.
19. The digital system of claim 16 , wherein the video decoder is further configured to decode the coded largest coding unit by reconstructing a quantization parameter for a coding unit in the coded largest coding unit by adding a coded quantization parameter corresponding to the coding unit and one selected from a group consisting of a reconstructed quantization parameter for a coding unit preceding the coding unit in a coding order, a reconstructed quantization parameter for a left spatially neighboring coding unit of the coding unit, and a function of one or more reconstructed quantization parameters of spatially neighboring coding units of the coding unit.
20. The digital system of claim 16 , wherein the video decoder is further configured to:
receive a coded indicator of a first mode used to code a first coded quantization parameter;
reconstruct the first coded quantization parameter according to the first mode;
receive a coded indicator of a second mode used to code a second coded quantization parameter; and
reconstruct the second coded quantization parameter according to the second mode.
Priority Applications (15)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US13/093,715 US20110274162A1 (en) | 2010-05-04 | 2011-04-25 | Coding Unit Quantization Parameters in Video Coding |
| JP2013509218A JP2013529021A (en) | 2010-05-04 | 2011-05-04 | Coding unit quantization parameter in video coding |
| PCT/US2011/035179 WO2011140211A2 (en) | 2010-05-04 | 2011-05-04 | Coding unit quantization parameters in video coding |
| US13/869,253 US10897625B2 (en) | 2009-11-20 | 2013-04-24 | Block artifact suppression in video coding |
| US14/531,632 US9635364B2 (en) | 2010-05-04 | 2014-11-03 | Coding unit quantization parameters in video coding |
| JP2015162979A JP6060229B2 (en) | 2010-05-04 | 2015-08-20 | Coding unit quantization parameter in video coding |
| JP2016090166A JP6372866B2 (en) | 2010-05-04 | 2016-04-28 | Coding unit quantization parameter in video coding |
| US15/145,637 US9635365B2 (en) | 2010-05-04 | 2016-05-03 | Coding unit quantization parameters in video coding |
| US15/289,745 US10368069B2 (en) | 2010-05-04 | 2016-10-10 | Coding unit quantization parameters in video coding |
| US16/524,614 US10972734B2 (en) | 2010-05-04 | 2019-07-29 | Coding unit quantization parameters in video coding |
| US17/107,996 US11438607B2 (en) | 2009-11-20 | 2020-12-01 | Block artifact suppression in video coding |
| US17/191,748 US11743464B2 (en) | 2010-05-04 | 2021-03-04 | Coding unit quantization parameters in video coding |
| US17/890,553 US12328437B2 (en) | 2009-11-20 | 2022-08-18 | Block artifact suppression in video coding |
| US18/237,068 US12231638B2 (en) | 2010-05-04 | 2023-08-23 | Coding unit quantization parameters in video coding |
| US19/022,947 US20250159167A1 (en) | 2010-05-04 | 2025-01-15 | Coding unit quantization parameters in video coding |
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US33121610P | 2010-05-04 | 2010-05-04 | |
| US201161431889P | 2011-01-12 | 2011-01-12 | |
| US201161469518P | 2011-03-30 | 2011-03-30 | |
| US13/093,715 US20110274162A1 (en) | 2010-05-04 | 2011-04-25 | Coding Unit Quantization Parameters in Video Coding |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US12/951,035 Continuation-In-Part US8817884B2 (en) | 2009-11-20 | 2010-11-20 | Techniques for perceptual encoding of video frames |
Related Child Applications (3)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US13/869,253 Continuation-In-Part US10897625B2 (en) | 2009-11-20 | 2013-04-24 | Block artifact suppression in video coding |
| US14/531,632 Continuation US9635364B2 (en) | 2010-05-04 | 2014-11-03 | Coding unit quantization parameters in video coding |
| US15/289,745 Continuation US10368069B2 (en) | 2010-05-04 | 2016-10-10 | Coding unit quantization parameters in video coding |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20110274162A1 true US20110274162A1 (en) | 2011-11-10 |
Family
ID=44901909
Family Applications (8)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US13/093,715 Abandoned US20110274162A1 (en) | 2009-11-20 | 2011-04-25 | Coding Unit Quantization Parameters in Video Coding |
| US14/531,632 Active US9635364B2 (en) | 2010-05-04 | 2014-11-03 | Coding unit quantization parameters in video coding |
| US15/145,637 Active US9635365B2 (en) | 2010-05-04 | 2016-05-03 | Coding unit quantization parameters in video coding |
| US15/289,745 Active 2032-03-31 US10368069B2 (en) | 2010-05-04 | 2016-10-10 | Coding unit quantization parameters in video coding |
| US16/524,614 Active US10972734B2 (en) | 2010-05-04 | 2019-07-29 | Coding unit quantization parameters in video coding |
| US17/191,748 Active US11743464B2 (en) | 2010-05-04 | 2021-03-04 | Coding unit quantization parameters in video coding |
| US18/237,068 Active US12231638B2 (en) | 2010-05-04 | 2023-08-23 | Coding unit quantization parameters in video coding |
| US19/022,947 Pending US20250159167A1 (en) | 2010-05-04 | 2025-01-15 | Coding unit quantization parameters in video coding |
Family Applications After (7)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US14/531,632 Active US9635364B2 (en) | 2010-05-04 | 2014-11-03 | Coding unit quantization parameters in video coding |
| US15/145,637 Active US9635365B2 (en) | 2010-05-04 | 2016-05-03 | Coding unit quantization parameters in video coding |
| US15/289,745 Active 2032-03-31 US10368069B2 (en) | 2010-05-04 | 2016-10-10 | Coding unit quantization parameters in video coding |
| US16/524,614 Active US10972734B2 (en) | 2010-05-04 | 2019-07-29 | Coding unit quantization parameters in video coding |
| US17/191,748 Active US11743464B2 (en) | 2010-05-04 | 2021-03-04 | Coding unit quantization parameters in video coding |
| US18/237,068 Active US12231638B2 (en) | 2010-05-04 | 2023-08-23 | Coding unit quantization parameters in video coding |
| US19/022,947 Pending US20250159167A1 (en) | 2010-05-04 | 2025-01-15 | Coding unit quantization parameters in video coding |
Country Status (3)
| Country | Link |
|---|---|
| US (8) | US20110274162A1 (en) |
| JP (3) | JP2013529021A (en) |
| WO (1) | WO2011140211A2 (en) |
Cited By (69)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2012011859A1 (en) * | 2010-07-21 | 2012-01-26 | Telefonaktiebolaget L M Ericsson (Publ) | Picture coding and decoding |
| WO2012062161A1 (en) * | 2010-11-08 | 2012-05-18 | Mediatek Inc. | Method and apparatus of delta quantization parameter processing for high efficiency video coding |
| US20120189052A1 (en) * | 2011-01-24 | 2012-07-26 | Qualcomm Incorporated | Signaling quantization parameter changes for coded units in high efficiency video coding (hevc) |
| US20120201298A1 (en) * | 2011-02-04 | 2012-08-09 | General Instrument Corporation | Implicit Transform Unit Representation |
| US20130022119A1 (en) * | 2011-07-20 | 2013-01-24 | Qualcomm Incorporated | Buffering prediction data in video coding |
| US20130114695A1 (en) * | 2011-11-07 | 2013-05-09 | Qualcomm Incorporated | Signaling quantization matrices for video coding |
| US20130188731A1 (en) * | 2010-10-04 | 2013-07-25 | Korea Advanced Institute Of Science And Technology | Method for encoding/decoding block information using quad tree, and device for using same |
| WO2013141665A1 (en) * | 2012-03-22 | 2013-09-26 | LG Electronics Inc. | Video encoding method, video decoding method and apparatus using same |
| US20130287103A1 (en) * | 2012-04-26 | 2013-10-31 | Qualcomm Incorporated | Quantization parameter (qp) coding in video coding |
| US20130287099A1 (en) * | 2009-11-20 | 2013-10-31 | Texas Instruments Incorporated | Block Artifact Suppression in Video Coding |
| US20140079135A1 (en) * | 2012-09-14 | 2014-03-20 | Qualcomm Incorporated | Performing quantization to facilitate deblocking filtering |
| US20140105282A1 (en) * | 2011-06-28 | 2014-04-17 | Keiichi Chono | Method for coding video quantization parameter and method for decoding video quantization parameter |
| US20140133768A1 (en) * | 2012-11-13 | 2014-05-15 | Hon Hai Precision Industry Co., Ltd. | Electronic device and method for splitting image |
| US20140133575A1 (en) * | 2012-11-13 | 2014-05-15 | Hon Hai Precision Industry Co., Ltd. | Electronic device and method for splitting image |
| US20140133769A1 (en) * | 2012-11-13 | 2014-05-15 | Hon Hai Precision Industry Co., Ltd. | Electronic device and image block merging method |
| JP2014099850A (en) * | 2012-11-13 | 2014-05-29 | Hon Hai Precision Industry Co Ltd | Image division system and image division method |
| US20140241422A1 (en) * | 2011-06-28 | 2014-08-28 | Samsung Electronics Co., Ltd. | Method and apparatus for image encoding and decoding using adaptive quantization parameter differential |
| US20140294086A1 (en) * | 2011-11-07 | 2014-10-02 | Infobridge Pte. Ltd. | Method of decoding video data |
| US20140294078A1 (en) * | 2013-03-29 | 2014-10-02 | Qualcomm Incorporated | Bandwidth reduction for video coding prediction |
| US20140301449A1 (en) * | 2011-11-04 | 2014-10-09 | Infobridge Pte. Ltd. | Method of deriving quantization parameter |
| US20140301465A1 (en) * | 2013-04-05 | 2014-10-09 | Texas Instruments Incorporated | Video Coding Using Intra Block Copy |
| US8964834B2 (en) * | 2011-06-21 | 2015-02-24 | Intellectual Discovery Co., Ltd. | Method and apparatus for adaptively encoding and decoding a quantization parameter based on a quadtree structure |
| US20150163509A1 (en) * | 2013-12-06 | 2015-06-11 | Mediatek Inc. | Method and Apparatus for Fine-grained Motion Boundary Processing |
| US20150215621A1 (en) * | 2014-01-30 | 2015-07-30 | Qualcomm Incorporated | Rate control using complexity in video coding |
| CN104919798A (en) * | 2012-04-16 | 2015-09-16 | Huawei Technologies Co., Ltd. | Quantization matrix coding method and device |
| AU2012365727B2 (en) * | 2012-01-13 | 2015-11-05 | Hfi Innovation Inc. | Method and apparatus for unification of coefficient scan of 8x8 transform units in HEVC |
| US9219915B1 (en) | 2013-01-17 | 2015-12-22 | Google Inc. | Selection of transform size in video coding |
| US9235774B2 (en) | 2010-06-10 | 2016-01-12 | Thomson Licensing | Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters |
| CN105812795A (en) * | 2014-12-31 | 2016-07-27 | Zhejiang Dahua Technology Co., Ltd. | Coding mode determining method and device of maximum coding unit |
| US9414054B2 (en) | 2012-07-02 | 2016-08-09 | Microsoft Technology Licensing, Llc | Control and use of chroma quantization parameter values |
| US20160234502A1 (en) * | 2011-11-07 | 2016-08-11 | Infobridge Pte. Ltd. | Method of constructing merge list |
| US9420282B2 (en) | 2011-01-25 | 2016-08-16 | Microsoft Technology Licensing, Llc | Video coding redundancy reduction |
| US20160261875A1 (en) * | 2015-03-06 | 2016-09-08 | Ali Corporation | Video stream processing method and video processing apparatus thereof |
| US20160360201A1 (en) * | 2011-12-15 | 2016-12-08 | Tagivan Ii Llc | Image coding method, image decoding method, image coding apparatus, and image decoding apparatus |
| US9538192B2 (en) | 2012-01-30 | 2017-01-03 | Samsung Electronics Co., Ltd. | Method and apparatus for hierarchical data unit-based video encoding and decoding comprising quantization parameter prediction |
| US9544597B1 (en) | 2013-02-11 | 2017-01-10 | Google Inc. | Hybrid transform in video encoding and decoding |
| US9565451B1 (en) | 2014-10-31 | 2017-02-07 | Google Inc. | Prediction dependent transform coding |
| US9591302B2 (en) | 2012-07-02 | 2017-03-07 | Microsoft Technology Licensing, Llc | Use of chroma quantization parameter offsets in deblocking |
| US20170085901A1 (en) * | 2010-08-17 | 2017-03-23 | M&K Holdings Inc. | Apparatus for Encoding Moving Picture |
| TWI578764B (en) * | 2011-12-21 | 2017-04-11 | Jvc Kenwood Corp | Dynamic image decoding device, dynamic image decoding method, and dynamic image decoding program |
| US9674530B1 (en) | 2013-04-30 | 2017-06-06 | Google Inc. | Hybrid transforms in video coding |
| US9749645B2 (en) | 2012-06-22 | 2017-08-29 | Microsoft Technology Licensing, Llc | Coded-block-flag coding and derivation |
| US9769499B2 (en) | 2015-08-11 | 2017-09-19 | Google Inc. | Super-transform video coding |
| US20170302923A9 (en) * | 2010-08-17 | 2017-10-19 | M&K Holdings Inc. | Apparatus for encoding an image |
| US20170302948A9 (en) * | 2010-08-17 | 2017-10-19 | M&K Holding Inc. | Method for restoring an intra prediction mode |
| US9807423B1 (en) | 2015-11-24 | 2017-10-31 | Google Inc. | Hybrid transform scheme for video coding |
| US9854275B2 (en) * | 2011-06-25 | 2017-12-26 | Qualcomm Incorporated | Quantization in video coding |
| US9967559B1 (en) | 2013-02-11 | 2018-05-08 | Google Llc | Motion vector dependent spatial transformation in video coding |
| JP2018117356A (en) * | 2010-08-17 | 2018-07-26 | Electronics and Telecommunications Research Institute | Video encoding method and apparatus, and decoding method and apparatus |
| US10277905B2 (en) | 2015-09-14 | 2019-04-30 | Google Llc | Transform selection for non-baseband signal coding |
| WO2019111010A1 (en) * | 2017-12-06 | 2019-06-13 | V-Nova International Ltd | Methods and apparatuses for encoding and decoding a bytestream |
| WO2019111004A1 (en) * | 2017-12-06 | 2019-06-13 | V-Nova International Ltd | Methods and apparatuses for encoding and decoding a bytestream |
| US20190238874A1 (en) * | 2010-05-13 | 2019-08-01 | Sharp Kabushiki Kaisha | Image decoding device, image encoding device, and image decoding method |
| US10375390B2 (en) * | 2011-11-04 | 2019-08-06 | Infobridge Pte. Ltd. | Method and apparatus of deriving intra prediction mode using most probable mode group |
| WO2020071829A1 (en) * | 2018-10-04 | 2020-04-09 | LG Electronics Inc. | History-based image coding method, and apparatus thereof |
| US10750187B2 (en) | 2012-04-23 | 2020-08-18 | Sun Patent Trust | Image encoding apparatus for encoding flags indicating removal time |
| WO2020187587A1 (en) * | 2019-03-15 | 2020-09-24 | Dolby International Ab | Method and apparatus for updating a neural network |
| CN112291562A (en) * | 2020-10-29 | 2021-01-29 | Zhengzhou University of Light Industry | Fast CU partition and intra mode decision method for H.266/VVC |
| US11076157B1 (en) * | 2016-05-03 | 2021-07-27 | NGCodec Inc. | Apparatus and method for rate control in accordance with block and stream analyses |
| US11122297B2 (en) | 2019-05-03 | 2021-09-14 | Google Llc | Using border-aligned block functions for image compression |
| US20210329285A1 (en) * | 2020-04-21 | 2021-10-21 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and non-transitory computer-readable storage medium |
| US11245902B2 (en) * | 2011-06-30 | 2022-02-08 | Sony Corporation | Binarization of DQP using separate absolute value and sign (SAVS) in CABAC |
| US20220046252A1 (en) * | 2012-11-19 | 2022-02-10 | Texas Instruments Incorporated | Adaptive coding unit (cu) partitioning based on image statistics |
| US11284072B2 (en) | 2010-08-17 | 2022-03-22 | M&K Holdings Inc. | Apparatus for decoding an image |
| US20220124331A1 (en) * | 2011-06-15 | 2022-04-21 | Sony Group Corporation | Binarization of dqp using separate absolute value and sign (savs) in cabac |
| WO2022232784A1 (en) * | 2021-04-26 | 2022-11-03 | Tencent America LLC | Template matching based intra prediction |
| US20220417534A1 (en) * | 2012-08-15 | 2022-12-29 | Texas Instruments Incorporated | Fast Intra-Prediction Mode Selection in Video Coding |
| CN116760988A (en) * | 2023-08-18 | 2023-09-15 | Hanbo Semiconductor (Shanghai) Co., Ltd. | Video coding method and device based on human visual system |
| US12382044B2 (en) | 2020-01-10 | 2025-08-05 | Samsung Electronics Co., Ltd. | Video decoding method and apparatus for obtaining quantization parameter, and video encoding method and apparatus for transmitting quantization parameter |
Families Citing this family (25)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP5707412B2 (en) | 2010-09-29 | 2015-04-30 | Panasonic Intellectual Property Corporation of America | Image decoding method, image encoding method, image decoding device, image encoding device, program, and integrated circuit |
| KR101959091B1 (en) * | 2010-09-30 | 2019-03-15 | Sun Patent Trust | Image decoding method, image encoding method, image decoding device, image encoding device, programme, and integrated circuit |
| PL2663075T3 (en) * | 2011-01-06 | 2020-10-19 | Samsung Electronics Co., Ltd. | Encoding method and device of video using data unit of hierarchical structure, and decoding method and device thereof |
| ES2883132T3 (en) | 2011-01-13 | 2021-12-07 | Canon Kabushiki Kaisha | Image encoding apparatus, image encoding method and program, and image decoding apparatus, image decoding method and program |
| JP2013034037A (en) * | 2011-03-09 | 2013-02-14 | Canon Inc | Image encoder, image encoding method and program, image decoder, and image decoding method and program |
| US10298939B2 (en) | 2011-06-22 | 2019-05-21 | Qualcomm Incorporated | Quantization in video coding |
| CN107277548B (en) * | 2011-08-29 | 2019-12-06 | M&K Holdings Inc. | Method for encoding image in merge mode |
| US9961343B2 (en) * | 2011-10-24 | 2018-05-01 | Infobridge Pte. Ltd. | Method and apparatus for generating reconstructed block |
| JP6064581B2 (en) * | 2011-12-21 | 2017-01-25 | JVC Kenwood Corporation | Moving picture decoding apparatus, moving picture decoding method, moving picture decoding program, receiving apparatus, receiving method, and receiving program |
| JP6064580B2 (en) * | 2011-12-21 | 2017-01-25 | JVC Kenwood Corporation | Moving picture encoding apparatus, moving picture encoding method, moving picture encoding program, transmission apparatus, transmission method, and transmission program |
| EP2842311B1 (en) * | 2012-04-25 | 2016-10-26 | Huawei Technologies Co., Ltd. | Systems and methods for segment integrity and authenticity for adaptive streaming |
| CN103379331B (en) * | 2012-04-28 | 2018-10-23 | Nanjing ZTE New Software Co., Ltd. | Video bitstream decoding method and device |
| US11076153B2 (en) * | 2015-07-31 | 2021-07-27 | Stc.Unm | System and methods for joint and adaptive control of rate, quality, and computational complexity for video coding and video delivery |
| US9858965B2 (en) * | 2015-10-23 | 2018-01-02 | Microsoft Technology Licensing, Llc | Video loop generation |
| WO2018143289A1 (en) * | 2017-02-02 | 2018-08-09 | Sharp Kabushiki Kaisha | Image encoding device and image decoding device |
| US11032545B2 (en) | 2017-06-29 | 2021-06-08 | Qualcomm Incorporated | Reducing seam artifacts in 360-degree video |
| US11070818B2 (en) * | 2017-07-05 | 2021-07-20 | Telefonaktiebolaget Lm Ericsson (Publ) | Decoding a block of video samples |
| CN107864379B (en) * | 2017-09-28 | 2021-07-02 | Zhuhai Yizhi Electronic Technology Co., Ltd. | Compression method applied to video coding and decoding |
| KR102432486B1 (en) | 2017-11-22 | 2022-08-12 | Samsung Electronics Co., Ltd. | Apparatus for video decoding, computing system comprising the same and method for video decoding |
| JP6953576B2 (en) * | 2018-10-03 | 2021-10-27 | Canon Kabushiki Kaisha | Coding device, coding method, program and storage medium |
| JP6686095B2 (en) * | 2018-10-03 | 2020-04-22 | Canon Kabushiki Kaisha | Decoding device, decoding method, program, and storage medium |
| CN109688409B (en) * | 2018-12-28 | 2021-03-02 | Beijing QIYI Century Science & Technology Co., Ltd. | Video coding method and device |
| KR20220036948A (en) * | 2019-07-05 | 2022-03-23 | V-Nova International Ltd | Quantization of Residuals in Video Coding |
| CN119563177A (en) * | 2022-07-06 | 2025-03-04 | ByteDance Ltd. | Geometric transformations in neural network-based codec tools for video coding |
| CN118317092B (en) * | 2024-06-11 | 2024-08-30 | Zhejiang Dahua Technology Co., Ltd. | Image encoding method, image encoding device, storage medium and electronic device |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20100086030A1 (en) * | 2008-10-03 | 2010-04-08 | Qualcomm Incorporated | Video coding with large macroblocks |
Family Cites Families (15)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6233017B1 (en) * | 1996-09-16 | 2001-05-15 | Microsoft Corporation | Multimedia compression system with adaptive block sizes |
| JP4224778B2 (en) * | 2003-05-14 | 2009-02-18 | Sony Corporation | Stream converting apparatus and method, encoding apparatus and method, recording medium, and program |
| JP4146444B2 (en) * | 2005-03-16 | 2008-09-10 | Toshiba Corporation | Video encoding method and apparatus |
| JP4656003B2 (en) * | 2006-06-01 | 2011-03-23 | Oki Electric Industry Co., Ltd. | Image coding apparatus and image coding method |
| US8571120B2 (en) * | 2006-09-22 | 2013-10-29 | Texas Instruments Incorporated | Transmission of acknowledge/not acknowledge (ACK/NACK) bits and their embedding in the reference signal |
| EP3107295A1 (en) * | 2007-03-20 | 2016-12-21 | Fujitsu Limited | Video encoding method and apparatus, and video decoding apparatus |
| US8542730B2 (en) | 2008-02-22 | 2013-09-24 | Qualcomm Incorporated | Fast macroblock delta QP decision |
| KR101517768B1 (en) * | 2008-07-02 | 2015-05-06 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding video and method and apparatus for decoding video |
| US20100086031A1 (en) | 2008-10-03 | 2010-04-08 | Qualcomm Incorporated | Video coding with large macroblocks |
| US8634456B2 (en) | 2008-10-03 | 2014-01-21 | Qualcomm Incorporated | Video coding with large macroblocks |
| US8503527B2 (en) | 2008-10-03 | 2013-08-06 | Qualcomm Incorporated | Video coding with large macroblocks |
| US20110194613A1 (en) * | 2010-02-11 | 2011-08-11 | Qualcomm Incorporated | Video coding with large macroblocks |
| US20110255597A1 (en) * | 2010-04-18 | 2011-10-20 | Tomonobu Mihara | Method and System for Reducing Flicker Artifacts |
| KR101959091B1 (en) * | 2010-09-30 | 2019-03-15 | Sun Patent Trust | Image decoding method, image encoding method, image decoding device, image encoding device, programme, and integrated circuit |
| US20120114034A1 (en) * | 2010-11-08 | 2012-05-10 | Mediatek Inc. | Method and Apparatus of Delta Quantization Parameter Processing for High Efficiency Video Coding |
- 2011
  - 2011-04-25 US US13/093,715 patent/US20110274162A1/en not_active Abandoned
  - 2011-05-04 WO PCT/US2011/035179 patent/WO2011140211A2/en active Application Filing
  - 2011-05-04 JP JP2013509218A patent/JP2013529021A/en active Pending
- 2014
  - 2014-11-03 US US14/531,632 patent/US9635364B2/en active Active
- 2015
  - 2015-08-20 JP JP2015162979A patent/JP6060229B2/en active Active
- 2016
  - 2016-04-28 JP JP2016090166A patent/JP6372866B2/en active Active
  - 2016-05-03 US US15/145,637 patent/US9635365B2/en active Active
  - 2016-10-10 US US15/289,745 patent/US10368069B2/en active Active
- 2019
  - 2019-07-29 US US16/524,614 patent/US10972734B2/en active Active
- 2021
  - 2021-03-04 US US17/191,748 patent/US11743464B2/en active Active
- 2023
  - 2023-08-23 US US18/237,068 patent/US12231638B2/en active Active
- 2025
  - 2025-01-15 US US19/022,947 patent/US20250159167A1/en active Pending
Cited By (206)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11438607B2 (en) | 2009-11-20 | 2022-09-06 | Texas Instruments Incorporated | Block artifact suppression in video coding |
| US10897625B2 (en) * | 2009-11-20 | 2021-01-19 | Texas Instruments Incorporated | Block artifact suppression in video coding |
| US20130287099A1 (en) * | 2009-11-20 | 2013-10-31 | Texas Instruments Incorporated | Block Artifact Suppression in Video Coding |
| US12328437B2 (en) | 2009-11-20 | 2025-06-10 | Texas Instruments Incorporated | Block artifact suppression in video coding |
| US11336912B2 (en) * | 2010-05-13 | 2022-05-17 | Sharp Kabushiki Kaisha | Image decoding device, image encoding device, and image decoding method |
| US10904547B2 (en) * | 2010-05-13 | 2021-01-26 | Sharp Kabushiki Kaisha | Image decoding device, image encoding device, and image decoding method |
| US20190238874A1 (en) * | 2010-05-13 | 2019-08-01 | Sharp Kabushiki Kaisha | Image decoding device, image encoding device, and image decoding method |
| US10334247B2 (en) * | 2010-06-10 | 2019-06-25 | Interdigital Vc Holdings, Inc. | Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters |
| US10742981B2 (en) * | 2010-06-10 | 2020-08-11 | Interdigital Vc Holdings, Inc. | Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters |
| US11381818B2 (en) * | 2010-06-10 | 2022-07-05 | Interdigital Vc Holdings, Inc. | Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters |
| US20230328247A1 (en) * | 2010-06-10 | 2023-10-12 | Interdigital Vc Holdings, Inc. | Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters |
| US9749631B2 (en) * | 2010-06-10 | 2017-08-29 | Thomson Licensing | Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters |
| US20160105673A1 (en) * | 2010-06-10 | 2016-04-14 | Thomson Licensing | Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters |
| US9235774B2 (en) | 2010-06-10 | 2016-01-12 | Thomson Licensing | Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters |
| US10547840B2 (en) | 2010-06-10 | 2020-01-28 | Interdigital Vc Holdings, Inc. | Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters |
| US20220337838A1 (en) * | 2010-06-10 | 2022-10-20 | Interdigital Vc Holdings, Inc. | Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters |
| US11722669B2 (en) * | 2010-06-10 | 2023-08-08 | Interdigital Vc Holdings, Inc. | Methods and apparatus for determining quantization parameter predictors from a plurality of neighboring quantization parameters |
| US8923394B2 (en) * | 2010-07-21 | 2014-12-30 | Telefonaktiebolaget L M Ericsson (Publ) | Management of slices |
| WO2012011859A1 (en) * | 2010-07-21 | 2012-01-26 | Telefonaktiebolaget L M Ericsson (Publ) | Picture coding and decoding |
| US8861615B2 (en) | 2010-07-21 | 2014-10-14 | Telefonaktiebolaget L M Ericsson (Publ) | Picture coding and decoding |
| US20120287993A1 (en) * | 2010-07-21 | 2012-11-15 | Clinton Priddle | Management of slices |
| US9924187B2 (en) * | 2010-08-17 | 2018-03-20 | M&K Holdings Inc. | Method for restoring an intra prediction mode |
| US10827174B2 (en) | 2010-08-17 | 2020-11-03 | Electronics And Telecommunications Research Institute | Method and apparatus for encoding video, and decoding method and apparatus |
| US10085019B2 (en) * | 2010-08-17 | 2018-09-25 | M&K Holdings Inc. | Method for restoring an intra prediction mode |
| US10939106B2 (en) | 2010-08-17 | 2021-03-02 | Electronics And Telecommunications Research Institute | Method and apparatus for encoding video, and decoding method and apparatus |
| US11284072B2 (en) | 2010-08-17 | 2022-03-22 | M&K Holdings Inc. | Apparatus for decoding an image |
| US20170302923A9 (en) * | 2010-08-17 | 2017-10-19 | M&K Holdings Inc. | Apparatus for encoding an image |
| JP2018117356A (en) * | 2010-08-17 | 2018-07-26 | Electronics and Telecommunications Research Institute | Video encoding method and apparatus, and decoding method and apparatus |
| US20170302926A9 (en) * | 2010-08-17 | 2017-10-19 | M&K Holdings Inc. | Method for encoding an intra prediction mode |
| US10123010B2 (en) * | 2010-08-17 | 2018-11-06 | M&K Holding Inc. | Apparatus for encoding an image |
| US20170085901A1 (en) * | 2010-08-17 | 2017-03-23 | M&K Holdings Inc. | Apparatus for Encoding Moving Picture |
| US20170302927A9 (en) * | 2010-08-17 | 2017-10-19 | M&K Holdings Inc. | Method for restoring an intra prediction mode |
| US20170302948A9 (en) * | 2010-08-17 | 2017-10-19 | M&K Holding Inc. | Method for restoring an intra prediction mode |
| US9918086B2 (en) * | 2010-08-17 | 2018-03-13 | M&K Holdings Inc. | Method for encoding an intra prediction mode |
| US12088807B2 (en) | 2010-08-17 | 2024-09-10 | Ideahub Inc. | United states method and apparatus for encoding video, and decoding method and apparatus |
| US9877039B2 (en) * | 2010-08-17 | 2018-01-23 | M&K Holdings Inc. | Apparatus for encoding moving picture |
| US11706430B2 (en) * | 2010-10-04 | 2023-07-18 | Electronics And Telecommunications Research Institute | Method for encoding/decoding block information using quad tree, and device for using same |
| US10567782B2 (en) * | 2010-10-04 | 2020-02-18 | Electronics And Telecommunications Research Institute | Method for encoding/decoding block information using quad tree, and device for using same |
| US10674169B2 (en) * | 2010-10-04 | 2020-06-02 | Electronics And Telecommunications Research Institute | Method for encoding/decoding block information using quad tree, and device for using same |
| US20170111646A1 (en) * | 2010-10-04 | 2017-04-20 | Electronics And Telecommunications Research Institute | Method for encoding/decoding block information using quad tree, and device for using same |
| US11223839B2 (en) * | 2010-10-04 | 2022-01-11 | Electronics And Telecommunications Research Institute | Method for encoding/decoding block information using quad tree, and device for using same |
| US20190037229A1 (en) * | 2010-10-04 | 2019-01-31 | Electronics And Telecommunications Research Institute | Method for encoding/decoding block information using quad tree, and device for using same |
| US9860546B2 (en) * | 2010-10-04 | 2018-01-02 | Electronics And Telecommunications Research Institute | Method for encoding/decoding block information using quad tree, and device for using same |
| US10110912B2 (en) * | 2010-10-04 | 2018-10-23 | Electronics And Telecommunications Research Institute | Method for encoding/decoding block information using quad tree, and device for using same |
| US20190020887A1 (en) * | 2010-10-04 | 2019-01-17 | Electronics And Telecommunications Research Institute | Method for encoding/decoding block information using quad tree, and device for using same |
| US12225223B2 (en) * | 2010-10-04 | 2025-02-11 | Electronics And Telecommunications Research Institute | Method for encoding/decoding block information using quad tree, and device for using same |
| US10560709B2 (en) * | 2010-10-04 | 2020-02-11 | Electronics And Telecommunications Research Institute | Method for encoding/decoding block information using quad tree, and device for using same |
| US20220094958A1 (en) * | 2010-10-04 | 2022-03-24 | Electronics And Telecommunications Research Institute | Method for encoding/decoding block information using quad tree, and device for using same |
| US20190037228A1 (en) * | 2010-10-04 | 2019-01-31 | Electronics And Telecommunications Research Institute | Method for encoding/decoding block information using quad tree, and device for using same |
| US20230308673A1 (en) * | 2010-10-04 | 2023-09-28 | Electronics And Telecommunications Research Institute | Method for encoding/decoding block information using quad tree, and device for using same |
| US20130188731A1 (en) * | 2010-10-04 | 2013-07-25 | Korea Advanced Institute Of Science And Technology | Method for encoding/decoding block information using quad tree, and device for using same |
| US9544595B2 (en) * | 2010-10-04 | 2017-01-10 | Electronics And Telecommunications Research Institute | Method for encoding/decoding block information using quad tree, and device for using same |
| WO2012062161A1 (en) * | 2010-11-08 | 2012-05-18 | Mediatek Inc. | Method and apparatus of delta quantization parameter processing for high efficiency video coding |
| US20120189052A1 (en) * | 2011-01-24 | 2012-07-26 | Qualcomm Incorporated | Signaling quantization parameter changes for coded units in high efficiency video coding (hevc) |
| US9420282B2 (en) | 2011-01-25 | 2016-08-16 | Microsoft Technology Licensing, Llc | Video coding redundancy reduction |
| US20120201298A1 (en) * | 2011-02-04 | 2012-08-09 | General Instrument Corporation | Implicit Transform Unit Representation |
| US9380319B2 (en) * | 2011-02-04 | 2016-06-28 | Google Technology Holdings LLC | Implicit transform unit representation |
| US20220124331A1 (en) * | 2011-06-15 | 2022-04-21 | Sony Group Corporation | Binarization of dqp using separate absolute value and sign (savs) in cabac |
| US20230262219A1 (en) * | 2011-06-15 | 2023-08-17 | Sony Group Corporation | Binarization of dqp using separate absolute value and sign (savs) in cabac |
| US11665348B2 (en) * | 2011-06-15 | 2023-05-30 | Sony Group Corporation | Binarization of dQP using separate absolute value and sign (SAVS) in CABAC |
| US12149693B2 (en) * | 2011-06-15 | 2024-11-19 | Sony Group Corporation | Binarization of dQP using Separate Absolute Value and Sign (SAVS) in CABAC |
| US9066098B2 (en) * | 2011-06-21 | 2015-06-23 | Intellectual Discovery Co., Ltd. | Method and apparatus for adaptively encoding and decoding a quantization parameter based on a quadtree structure |
| US8964834B2 (en) * | 2011-06-21 | 2015-02-24 | Intellectual Discovery Co., Ltd. | Method and apparatus for adaptively encoding and decoding a quantization parameter based on a quadtree structure |
| USRE47465E1 (en) * | 2011-06-21 | 2019-06-25 | Intellectual Discovery Co., Ltd. | Method and apparatus for adaptively encoding and decoding a quantization parameter based on a quadtree structure |
| USRE46678E1 (en) * | 2011-06-21 | 2018-01-16 | Intellectual Discovery Co., Ltd. | Method and apparatus for adaptively encoding and decoding a quantization parameter based on a quadtree structure |
| USRE49330E1 (en) * | 2011-06-21 | 2022-12-06 | Dolby Laboratories Licensing Corporation | Method and apparatus for adaptively encoding and decoding a quantization parameter based on a quadtree structure |
| US20150117522A1 (en) * | 2011-06-21 | 2015-04-30 | Intellectual Discovery Co., Ltd. | Method and apparatus for adaptively encoding and decoding a quantization parameter based on a quadtree structure |
| US9854275B2 (en) * | 2011-06-25 | 2017-12-26 | Qualcomm Incorporated | Quantization in video coding |
| US20140241422A1 (en) * | 2011-06-28 | 2014-08-28 | Samsung Electronics Co., Ltd. | Method and apparatus for image encoding and decoding using adaptive quantization parameter differential |
| AU2016200045B2 (en) * | 2011-06-28 | 2016-09-29 | Nec Corporation | Method for coding video quantization parameter and method for decoding video quantization parameter |
| US20140105282A1 (en) * | 2011-06-28 | 2014-04-17 | Keiichi Chono | Method for coding video quantization parameter and method for decoding video quantization parameter |
| AU2016250440B2 (en) * | 2011-06-28 | 2018-01-25 | Nec Corporation | Method for coding video quantization parameter and method for decoding video quantization parameter |
| US11245902B2 (en) * | 2011-06-30 | 2022-02-08 | Sony Corporation | Binarization of DQP using separate absolute value and sign (SAVS) in CABAC |
| US9699456B2 (en) * | 2011-07-20 | 2017-07-04 | Qualcomm Incorporated | Buffering prediction data in video coding |
| US20130022119A1 (en) * | 2011-07-20 | 2013-01-24 | Qualcomm Incorporated | Buffering prediction data in video coding |
| US20140301449A1 (en) * | 2011-11-04 | 2014-10-09 | Infobridge Pte. Ltd. | Method of deriving quantization parameter |
| US20150117523A1 (en) * | 2011-11-04 | 2015-04-30 | Infobridge Pte. Ltd. | Method of deriving quantization parameter |
| US12244808B2 (en) * | 2011-11-04 | 2025-03-04 | Gensquare Llc | Method of deriving quantization parameter with differential and predicted quantization parameters |
| US10924734B2 (en) | 2011-11-04 | 2021-02-16 | Infobridge Pte. Ltd. | Method and apparatus of deriving quantization parameter |
| US20220279182A1 (en) * | 2011-11-04 | 2022-09-01 | Gensquare Llc | Method of deriving quantization parameter with differental and predicted quantization parameters |
| US9264723B2 (en) * | 2011-11-04 | 2016-02-16 | Infobridge Pte. Ltd. | Method of deriving quantization parameter with differential and predicted quantization parameters |
| US10375390B2 (en) * | 2011-11-04 | 2019-08-06 | Infobridge Pte. Ltd. | Method and apparatus of deriving intra prediction mode using most probable mode group |
| US10742983B2 (en) | 2011-11-04 | 2020-08-11 | Infobridge Pte. Ltd. | Method for generating intra prediction block with most probable mode |
| US20150381982A1 (en) * | 2011-11-04 | 2015-12-31 | Infobridge Pte. Ltd. | Method of deriving quantization parameter with differental and predicted quantization parameters |
| US20240048706A1 (en) * | 2011-11-04 | 2024-02-08 | Gensquare Llc | Method of deriving quantization parameter with differental and predicted quantization parameters |
| US20150381983A1 (en) * | 2011-11-04 | 2015-12-31 | Infobridge Pte. Ltd. | Method of deriving quantization parameter with differental and predicted quantization parameters |
| US11290719B2 (en) | 2011-11-04 | 2022-03-29 | Infobridge Pte. Ltd. | Method for generating intra prediction block with most probable mode |
| US20150381984A1 (en) * | 2011-11-04 | 2015-12-31 | Infobridge Pte. Ltd. | Method of deriving quantization parameter with differental and predicted quantization parameters |
| US11825092B2 (en) * | 2011-11-04 | 2023-11-21 | Gensquare Llc | Method of deriving quantization parameter with differental and predicted quantization parameters |
| US9204151B2 (en) * | 2011-11-04 | 2015-12-01 | Infobridge Pte. Ltd. | Method of deriving quantization parameter with differential and predicted quantization parameters |
| US9912950B2 (en) * | 2011-11-04 | 2018-03-06 | Infobridge Pte. Ltd. | Method of deriving quantization parameter with differential and predicted quantization parameters |
| US9699460B2 (en) * | 2011-11-04 | 2017-07-04 | Infobridge Pte. Ltd. | Method of deriving quantization parameter with differential and predicted quantization parameters |
| US9712825B2 (en) * | 2011-11-04 | 2017-07-18 | Infobridge Pte. Ltd. | Method of deriving quantization parameter with differental and predicted quantization parameters |
| US10313671B2 (en) | 2011-11-04 | 2019-06-04 | Infobridge Pte. Ltd. | Method for generating intra prediction block with most probable mode |
| US9712824B2 (en) * | 2011-11-04 | 2017-07-18 | Infobridge Pte. Ltd. | Method of deriving quantization parameter with differential and predicted quantization parameters |
| US10158857B2 (en) * | 2011-11-07 | 2018-12-18 | Infobridge Pte. Ltd. | Method of constructing merge list |
| US20130114695A1 (en) * | 2011-11-07 | 2013-05-09 | Qualcomm Incorporated | Signaling quantization matrices for video coding |
| US10277915B2 (en) * | 2011-11-07 | 2019-04-30 | Qualcomm Incorporated | Signaling quantization matrices for video coding |
| US11089322B2 (en) | 2011-11-07 | 2021-08-10 | Infobridge Pte. Ltd. | Apparatus for decoding video data |
| US10362312B2 (en) * | 2011-11-07 | 2019-07-23 | Infobridge Pte. Ltd. | Method of constructing merge list |
| US11089307B2 (en) * | 2011-11-07 | 2021-08-10 | Infobridge Pte. Ltd. | Method of constructing merge list |
| US20140294086A1 (en) * | 2011-11-07 | 2014-10-02 | Infobridge Pte. Ltd. | Method of decoding video data |
| US9912953B2 (en) * | 2011-11-07 | 2018-03-06 | Infobridge Pte. Ltd. | Method of constructing merge list |
| US20160234497A1 (en) * | 2011-11-07 | 2016-08-11 | Infobridge Pte. Ltd. | Method of constructing merge list |
| US9532049B2 (en) * | 2011-11-07 | 2016-12-27 | Infobridge Pte. Ltd. | Method of decoding video data |
| US20160234502A1 (en) * | 2011-11-07 | 2016-08-11 | Infobridge Pte. Ltd. | Method of constructing merge list |
| US11997307B2 (en) | 2011-11-07 | 2024-05-28 | Gensquare Llc | Apparatus for decoding video data |
| US10182239B2 (en) | 2011-11-07 | 2019-01-15 | Infobridge Pte. Ltd. | Apparatus for decoding video data |
| US20160360201A1 (en) * | 2011-12-15 | 2016-12-08 | Tagivan Ii Llc | Image coding method, image decoding method, image coding apparatus, and image decoding apparatus |
| CN107087182A (en) * | 2011-12-15 | 2017-08-22 | Tagivan II LLC | Picture decoding method and picture decoding apparatus |
| US10003800B2 (en) * | 2011-12-15 | 2018-06-19 | Tagivan Ii Llc | Image coding method, image decoding method, image coding apparatus, and image decoding apparatus |
| US20180270483A1 (en) * | 2011-12-15 | 2018-09-20 | Tagivan Ii Llc | Image coding method, image decoding method, image coding apparatus, and image decoding apparatus |
| CN107071443A (en) * | 2011-12-15 | 2017-08-18 | Tagivan II LLC | Picture decoding method and picture decoding apparatus |
| US10609370B2 (en) * | 2011-12-15 | 2020-03-31 | Tagivan Ii Llc | Image coding method, image decoding method, image coding apparatus, and image decoding apparatus |
| TWI603611B (en) * | 2011-12-21 | 2017-10-21 | Jvc Kenwood Corp | Motion picture encoding apparatus, motion picture encoding method, and recording medium for moving picture encoding program |
| TWI578764B (en) * | 2011-12-21 | 2017-04-11 | Jvc Kenwood Corp | Dynamic image decoding device, dynamic image decoding method, and dynamic image decoding program |
| US10104399B2 (en) | 2012-01-13 | 2018-10-16 | Hfi Innovation Inc. | Method and apparatus for unification of coefficient scan of 8X8 transform units in HEVC |
| AU2012365727B2 (en) * | 2012-01-13 | 2015-11-05 | Hfi Innovation Inc. | Method and apparatus for unification of coefficient scan of 8x8 transform units in HEVC |
| US9544604B2 (en) | 2012-01-30 | 2017-01-10 | Samsung Electronics Co., Ltd. | Method and apparatus for hierarchical data unit-based video encoding and decoding comprising quantization parameter prediction |
| US9693061B2 (en) | 2012-01-30 | 2017-06-27 | Samsung Electronics Co., Ltd. | Method and apparatus for hierarchical data unit-based video encoding and decoding comprising quantization parameter prediction |
| US10045025B2 (en) | 2012-01-30 | 2018-08-07 | Samsung Electronics Co., Ltd. | Method and apparatus for hierarchical data unit-based video encoding and decoding comprising quantization parameter prediction |
| US9538192B2 (en) | 2012-01-30 | 2017-01-03 | Samsung Electronics Co., Ltd. | Method and apparatus for hierarchical data unit-based video encoding and decoding comprising quantization parameter prediction |
| US9544603B2 (en) | 2012-01-30 | 2017-01-10 | Samsung Electronics Co., Ltd. | Method and apparatus for hierarchical data unit-based video encoding and decoding comprising quantization parameter prediction |
| US9549185B2 (en) | 2012-01-30 | 2017-01-17 | Samsung Electronics Co., Ltd. | Method and apparatus for hierarchical data unit-based video encoding and decoding comprising quantization parameter prediction |
| US10218993B2 (en) | 2012-03-22 | 2019-02-26 | Lg Electronics Inc. | Video encoding method, video decoding method and apparatus using same |
| US10708610B2 (en) | 2012-03-22 | 2020-07-07 | Lg Electronics Inc. | Method for encoding and decoding in parallel processing and apparatus using same |
| US11202090B2 (en) | 2012-03-22 | 2021-12-14 | Lg Electronics Inc. | Method for encoding and decoding tiles and wavefront parallel processing and apparatus using same |
| US11838526B2 (en) | 2012-03-22 | 2023-12-05 | Lg Electronics Inc. | Method for encoding and decoding substreams and wavefront parallel processing, and apparatus using same |
| US9955178B2 (en) | 2012-03-22 | 2018-04-24 | Lg Electronics Inc. | Method for encoding and decoding tiles and wavefront parallel processing and apparatus using same |
| WO2013141665A1 (en) * | 2012-03-22 | 2013-09-26 | LG Electronics Inc. | Video encoding method, video decoding method and apparatus using same |
| CN104919798A (en) * | 2012-04-16 | 2015-09-16 | 华为技术有限公司 | Quantization matrix coding method and device |
| US10750187B2 (en) | 2012-04-23 | 2020-08-18 | Sun Patent Trust | Image encoding apparatus for encoding flags indicating removal time |
| RU2645291C2 (en) * | 2012-04-26 | 2018-02-19 | Квэлкомм Инкорпорейтед | Quantization parameter (qp) encoding when encoding video |
| US9521410B2 (en) * | 2012-04-26 | 2016-12-13 | Qualcomm Incorporated | Quantization parameter (QP) coding in video coding |
| US20130287103A1 (en) * | 2012-04-26 | 2013-10-31 | Qualcomm Incorporated | Quantization parameter (qp) coding in video coding |
| US9749645B2 (en) | 2012-06-22 | 2017-08-29 | Microsoft Technology Licensing, Llc | Coded-block-flag coding and derivation |
| US10264271B2 (en) | 2012-06-22 | 2019-04-16 | Microsoft Technology Licensing, Llc | Coded-block-flag coding and derivation |
| US9414054B2 (en) | 2012-07-02 | 2016-08-09 | Microsoft Technology Licensing, Llc | Control and use of chroma quantization parameter values |
| US10250882B2 (en) | 2012-07-02 | 2019-04-02 | Microsoft Technology Licensing, Llc | Control and use of chroma quantization parameter values |
| US10097832B2 (en) | 2012-07-02 | 2018-10-09 | Microsoft Technology Licensing, Llc | Use of chroma quantization parameter offsets in deblocking |
| US9591302B2 (en) | 2012-07-02 | 2017-03-07 | Microsoft Technology Licensing, Llc | Use of chroma quantization parameter offsets in deblocking |
| US9781421B2 (en) | 2012-07-02 | 2017-10-03 | Microsoft Technology Licensing, Llc | Use of chroma quantization parameter offsets in deblocking |
| US12170782B2 (en) * | 2012-08-15 | 2024-12-17 | Texas Instruments Incorporated | Fast intra-prediction mode selection in video coding |
| US20220417534A1 (en) * | 2012-08-15 | 2022-12-29 | Texas Instruments Incorporated | Fast Intra-Prediction Mode Selection in Video Coding |
| US20140079135A1 (en) * | 2012-09-14 | 2014-03-20 | Qualcomm Incoporated | Performing quantization to facilitate deblocking filtering |
| US20140133769A1 (en) * | 2012-11-13 | 2014-05-15 | Hon Hai Precision Industry Co., Ltd. | Electronic device and image block merging method |
| JP2014099852A (en) * | 2012-11-13 | 2014-05-29 | Hon Hai Precision Industry Co Ltd | Image division system and image division method |
| JP2014099850A (en) * | 2012-11-13 | 2014-05-29 | Hon Hai Precision Industry Co Ltd | Image division system and image division method |
| US9020283B2 (en) * | 2012-11-13 | 2015-04-28 | Zhongshan Innocloud Intellectual Property Services Co., Ltd. | Electronic device and method for splitting image |
| US20140133768A1 (en) * | 2012-11-13 | 2014-05-15 | Hon Hai Precision Industry Co., Ltd. | Electronic device and method for splitting image |
| US20140133575A1 (en) * | 2012-11-13 | 2014-05-15 | Hon Hai Precision Industry Co., Ltd. | Electronic device and method for splitting image |
| US12256087B2 (en) * | 2012-11-19 | 2025-03-18 | Texas Instruments Incorporated | Adaptive coding unit (CU) partitioning based on image statistics |
| US20220046252A1 (en) * | 2012-11-19 | 2022-02-10 | Texas Instruments Incorporated | Adaptive coding unit (cu) partitioning based on image statistics |
| US11695939B2 (en) * | 2012-11-19 | 2023-07-04 | Texas Instruments Incorporated | Adaptive coding unit (CU) partitioning based on image statistics |
| US20230353759A1 (en) * | 2012-11-19 | 2023-11-02 | Texas Instruments Incorporated | Adaptive coding unit (cu) partitioning based on image statistics |
| US9219915B1 (en) | 2013-01-17 | 2015-12-22 | Google Inc. | Selection of transform size in video coding |
| US9544597B1 (en) | 2013-02-11 | 2017-01-10 | Google Inc. | Hybrid transform in video encoding and decoding |
| US10462472B2 (en) | 2013-02-11 | 2019-10-29 | Google Llc | Motion vector dependent spatial transformation in video coding |
| US10142628B1 (en) | 2013-02-11 | 2018-11-27 | Google Llc | Hybrid transform in video codecs |
| US9967559B1 (en) | 2013-02-11 | 2018-05-08 | Google Llc | Motion vector dependent spatial transformation in video coding |
| US9491460B2 (en) * | 2013-03-29 | 2016-11-08 | Qualcomm Incorporated | Bandwidth reduction for video coding prediction |
| US20140294078A1 (en) * | 2013-03-29 | 2014-10-02 | Qualcomm Incorporated | Bandwidth reduction for video coding prediction |
| US20140301465A1 (en) * | 2013-04-05 | 2014-10-09 | Texas Instruments Incorporated | Video Coding Using Intra Block Copy |
| US20240422342A1 (en) * | 2013-04-05 | 2024-12-19 | Texas Instruments Incorporated | Video Coding Using Intra Block Copy |
| US20230085594A1 (en) * | 2013-04-05 | 2023-03-16 | Texas Instruments Incorporated | Video Coding Using Intra Block Copy |
| US12081787B2 (en) * | 2013-04-05 | 2024-09-03 | Texas Instruments Incorporated | Video coding using intra block copy |
| US10904551B2 (en) * | 2013-04-05 | 2021-01-26 | Texas Instruments Incorporated | Video coding using intra block copy |
| US11533503B2 (en) | 2013-04-05 | 2022-12-20 | Texas Instruments Incorporated | Video coding using intra block copy |
| US9674530B1 (en) | 2013-04-30 | 2017-06-06 | Google Inc. | Hybrid transforms in video coding |
| US9813730B2 (en) * | 2013-12-06 | 2017-11-07 | Mediatek Inc. | Method and apparatus for fine-grained motion boundary processing |
| US20150163509A1 (en) * | 2013-12-06 | 2015-06-11 | Mediatek Inc. | Method and Apparatus for Fine-grained Motion Boundary Processing |
| US20150215621A1 (en) * | 2014-01-30 | 2015-07-30 | Qualcomm Incorporated | Rate control using complexity in video coding |
| US9565451B1 (en) | 2014-10-31 | 2017-02-07 | Google Inc. | Prediction dependent transform coding |
| CN105812795A (en) * | 2014-12-31 | 2016-07-27 | 浙江大华技术股份有限公司 | Coding mode determining method and device of maximum coding unit |
| US20160261875A1 (en) * | 2015-03-06 | 2016-09-08 | Ali Corporation | Video stream processing method and video processing apparatus thereof |
| US9769499B2 (en) | 2015-08-11 | 2017-09-19 | Google Inc. | Super-transform video coding |
| US10277905B2 (en) | 2015-09-14 | 2019-04-30 | Google Llc | Transform selection for non-baseband signal coding |
| US9807423B1 (en) | 2015-11-24 | 2017-10-31 | Google Inc. | Hybrid transform scheme for video coding |
| US11076157B1 (en) * | 2016-05-03 | 2021-07-27 | NGCodec Inc. | Apparatus and method for rate control in accordance with block and stream analyses |
| EP3721631A1 (en) * | 2017-12-06 | 2020-10-14 | V-Nova International Limited | Method and apparatus for decoding a received set of encoded data |
| US12192499B2 (en) * | 2017-12-06 | 2025-01-07 | V-Nova International Limited | Methods and apparatuses for encoding and decoding a bytestream |
| US11575922B2 (en) | 2017-12-06 | 2023-02-07 | V-Nova International Limited | Methods and apparatuses for hierarchically encoding and decoding a bytestream |
| US12407844B2 (en) | 2017-12-06 | 2025-09-02 | V-Nova International Limited | Methods and apparatuses for encoding and decoding a bytestream |
| WO2019111004A1 (en) * | 2017-12-06 | 2019-06-13 | V-Nova International Ltd | Methods and apparatuses for encoding and decoding a bytestream |
| US11743479B2 (en) | 2017-12-06 | 2023-08-29 | V-Nova International Limited | Methods and apparatuses for encoding and decoding a bytestream |
| WO2019111012A1 (en) * | 2017-12-06 | 2019-06-13 | V-Nova International Ltd | Method and apparatus for decoding a received set of encoded data |
| CN111699696A (en) * | 2017-12-06 | 2020-09-22 | V-Nova International Limited | Method and apparatus for encoding and decoding a bytestream |
| US11632560B2 (en) * | 2017-12-06 | 2023-04-18 | V-Nova International Limited | Methods and apparatuses for encoding and decoding a bytestream |
| CN111699695A (en) * | 2017-12-06 | 2020-09-22 | V-Nova International Limited | Method and apparatus for decoding a received encoded data set |
| US11089316B2 (en) * | 2017-12-06 | 2021-08-10 | V-Nova International Limited | Method and apparatus for decoding a received set of encoded data |
| EP3721634A1 (en) * | 2017-12-06 | 2020-10-14 | V-Nova International Limited | Methods and apparatuses for encoding and decoding a bytestream |
| WO2019111010A1 (en) * | 2017-12-06 | 2019-06-13 | V-Nova International Ltd | Methods and apparatuses for encoding and decoding a bytestream |
| US20200374537A1 (en) * | 2017-12-06 | 2020-11-26 | V-Nova International Limited | Methods and apparatuses for encoding and decoding a bytestream |
| EP3721632A1 (en) * | 2017-12-06 | 2020-10-14 | V-Nova International Limited | Methods and apparatuses for encoding and decoding a bytestream |
| US11025945B2 (en) | 2018-10-04 | 2021-06-01 | Lg Electronics Inc. | History-based image coding method and apparatus |
| US11445209B2 (en) | 2018-10-04 | 2022-09-13 | Lg Electronics Inc. | History-based image coding method and apparatus |
| WO2020071829A1 (en) * | 2018-10-04 | 2020-04-09 | LG Electronics Inc. | History-based image coding method, and apparatus thereof |
| US11729414B2 (en) | 2018-10-04 | 2023-08-15 | Lg Electronics Inc. | History-based image coding method and apparatus |
| WO2020187587A1 (en) * | 2019-03-15 | 2020-09-24 | Dolby International Ab | Method and apparatus for updating a neural network |
| US12400113B2 (en) | 2019-03-15 | 2025-08-26 | Dolby International Ab | Method and apparatus for updating a neural network |
| US11122297B2 (en) | 2019-05-03 | 2021-09-14 | Google Llc | Using border-aligned block functions for image compression |
| US12382044B2 (en) | 2020-01-10 | 2025-08-05 | Samsung Electronics Co., Ltd. | Video decoding method and apparatus for obtaining quantization parameter, and video encoding method and apparatus for transmitting quantization parameter |
| US20210329285A1 (en) * | 2020-04-21 | 2021-10-21 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and non-transitory computer-readable storage medium |
| CN112291562A (en) * | 2020-10-29 | 2021-01-29 | 郑州轻工业大学 | Fast CU partition and intra mode decision method for H.266/VVC |
| WO2022232784A1 (en) * | 2021-04-26 | 2022-11-03 | Tencent America LLC | Template matching based intra prediction |
| CN116760988A (en) * | 2023-08-18 | 2023-09-15 | 瀚博半导体(上海)有限公司 | Video coding method and device based on human visual system |
Also Published As
| Publication number | Publication date |
|---|---|
| US9635364B2 (en) | 2017-04-25 |
| JP2016167862A (en) | 2016-09-15 |
| WO2011140211A2 (en) | 2011-11-10 |
| US20250159167A1 (en) | 2025-05-15 |
| US10368069B2 (en) | 2019-07-30 |
| JP2013529021A (en) | 2013-07-11 |
| US20190349586A1 (en) | 2019-11-14 |
| JP2016007051A (en) | 2016-01-14 |
| US9635365B2 (en) | 2017-04-25 |
| US20170026647A1 (en) | 2017-01-26 |
| US12231638B2 (en) | 2025-02-18 |
| US11743464B2 (en) | 2023-08-29 |
| US10972734B2 (en) | 2021-04-06 |
| US20150049805A1 (en) | 2015-02-19 |
| JP6372866B2 (en) | 2018-08-15 |
| WO2011140211A3 (en) | 2012-03-01 |
| JP6060229B2 (en) | 2017-01-11 |
| US20210195194A1 (en) | 2021-06-24 |
| US20230396768A1 (en) | 2023-12-07 |
| US20160249062A1 (en) | 2016-08-25 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US12231638B2 (en) | Coding unit quantization parameters in video coding | |
| US11825078B2 (en) | Luma-based chroma intra-prediction for video coding | |
| US12335467B2 (en) | Temporal motion data candidate derivation in video coding | |
| US12231668B2 (en) | Simplified binary arithmetic coding engine | |
| US10244262B2 (en) | Pixel-based intra prediction for coding in HEVC | |
| US20190246129A1 (en) | Inter-prediction candidate index coding independent of inter-prediction candidate list construction in video coding | |
| US20130044811A1 (en) | Content-Based Adaptive Control of Intra-Prediction Modes in Video Encoding | |
| US20250227262A1 (en) | Chroma from Luma Prediction Using Neighbor Luma Samples | |
| US20210160481A1 (en) | Flexible signaling of qp offset for adaptive color transform in video coding | |
| CA3210537A1 (en) | Chroma from luma prediction using neighbor luma samples |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | AS | Assignment | Owner name: TEXAS INSTRUMENTS INCORPORATED, TEXAS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZHOU, MINHUA;DEMIRCIN, MEHMET UMUT;BUDAGAVI, MADHUKAR;REEL/FRAME:026185/0339 Effective date: 20110421 |
| | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |