
WO2009091279A1 - Separate huffman coding of runlength and size data of dct coefficients - Google Patents


Info

Publication number
WO2009091279A1
Authority
WO
WIPO (PCT)
Prior art keywords
zero
coefficient
dct
coefficients
valued
Prior art date
Application number
PCT/RU2008/000025
Other languages
French (fr)
Inventor
Anton Valerievich Sergeev
Andrey Mikhailovich Turlikov
Ann Sergeevna Ukhanova
Original Assignee
Intel Corporation
Priority date
Filing date
Publication date
Application filed by Intel Corporation filed Critical Intel Corporation
Priority to CN2008801253255A priority Critical patent/CN101919252A/en
Priority to DE112008003626T priority patent/DE112008003626T5/en
Priority to PCT/RU2008/000025 priority patent/WO2009091279A1/en
Priority to EP08793987A priority patent/EP2235950A1/en
Publication of WO2009091279A1 publication Critical patent/WO2009091279A1/en


Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00 Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30 Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/46 Conversion to or from run-length codes, i.e. by representing the number of consecutive digits, or groups of digits, of the same kind by a code word and a digit indicative of that kind
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00 Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30 Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/40 Conversion to or from variable length codes, e.g. Shannon-Fano code, Huffman code, Morse code
    • H03M7/42 Conversion to or from variable length codes, e.g. Shannon-Fano code, Huffman code, Morse code using table look-up for the coding or decoding process, e.g. using read-only memory
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/423 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements
    • H04N19/426 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements using memory downsizing methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/439 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using cascaded computational arrangements for performing a single operation, e.g. filtering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/91 Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/93 Run-length coding

Definitions

  • the present disclosure describes a method for low complexity coding for use in data compression.
  • Data compression may be used to represent a set of information as space efficiently as possible.
  • a data compression code may provide a map between a set of source messages and a set of codewords. Some video compression systems may perform this coding during a limited time period. For example, for High Definition Television (HDTV) 1080i (1080x1920, 30 fps) that is encoded by tiles (i.e., rectangular sub-images) of size 8x1920, every tile may be encoded within 497 microseconds. Once a tile has been encoded, a successive tile may be present at the input of the compression system and may be ready for processing.
  • HDTV: High Definition Television
  • entropy coding refers to a lossless data compression scheme that may be independent of the media's specific characteristics.
  • entropy coding is one of the most computationally complex elements of a video compression system.
  • entropy coding may involve the assignment of codes to symbols so as to match code lengths with the probabilities of the symbols.
  • the design of a data compression system typically involves a tradeoff between the effective compression of input data and minimum complexity costs in the design.
  • Entropy coding may be realized in modern video compression systems using a variety of different techniques, such as paired Run-Length Encoding (RLE) and Variable-Length Coding (VLC), or arithmetic coding.
  • Arithmetic coding may provide a better compression rate, but may be very complex and may not be a good solution for low-complexity systems.
  • VLC-based techniques may provide a smaller compression rate and tend to be far less complex; thus these techniques may be more suitable for real-time applications.
  • FIG. 1 is a diagram showing an example of discrete cosine transform (DCT) based encoder processing
  • FIG. 2 is a diagram showing an example of DCT-based decoder processing
  • FIG. 3 is a diagram of an exemplary embodiment of low complexity entropy coding in accordance with the present disclosure
  • FIG. 4 is a diagram showing a rate-distortion curve (i.e., compression rate vs. PSNR) associated with standard JPEG entropy coding as compared with the entropy coding described in accordance with the present disclosure
  • FIG. 5 is a diagram showing an estimated probability distribution of RUNLENGTH values in a standard JPEG algorithm
  • FIG. 6 is a diagram showing an estimated probability distribution of SIZE values in a standard JPEG algorithm
  • FIG. 7 is a flowchart of operations showing standard entropy coding using the JPEG image compression algorithm
  • FIG. 8 is a flowchart of operations showing entropy coding using short tables for RUNLENGTH in accordance with one exemplary embodiment of the present disclosure
  • FIG. 9 is a flowchart of operations in accordance with another exemplary embodiment of the present disclosure.
  • FIG. 10 is a flowchart of operations according to one embodiment of the present disclosure.
  • this disclosure describes a method for low complexity coding for use in data compression.
  • the embodiments described herein may be used in accordance with data compression standards such as the Joint Photographic Experts Group (JPEG) standard, ITU-T T.81, ISO/IEC IS 10918-1, published September 1992. More specifically, the embodiments described herein provide a number of entropy coding algorithms, which may be used to reduce the storage size required for Huffman tables used during a codec program.
  • JPEG: Joint Photographic Experts Group
  • Data transmission of still pictures in various fields, such as color facsimile, desktop publishing, and medical imaging may be performed in accordance with the JPEG standard.
  • the JPEG standard may involve the use of discrete cosine transform (DCT) coding during the data compression process.
  • DCT: discrete cosine transform
  • Discrete Cosine Transform (DCT) based compression may involve the compression of a stream of 8x8 blocks of grayscale sample images.
  • Each 8x8 block of source image samples is effectively a 64-point discrete signal which may be a function of the two spatial dimensions x and y.
  • the DCT coefficient values may thus be regarded as the relative amount of the 2D spatial frequencies contained in the 64-point input signal.
  • the coefficient with zero frequency in both dimensions is called the "DC coefficient" and the remaining 63 coefficients are called the "AC coefficients." Additional information pertaining to the DCT is described in greater detail in "The JPEG Still Picture Compression Standard", Gregory K. Wallace, IEEE Transactions on Consumer Electronics, 1991.
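The forward DCT that produces these coefficients can be sketched directly from its textbook definition. The helper below is illustrative only (it is not code from the patent) and computes the 2-D DCT-II of one 8x8 block; a flat block demonstrates that all energy lands in the DC coefficient.

```python
import math

def fdct_8x8(block):
    """2-D DCT-II of an 8x8 block, as in the JPEG FDCT (illustrative sketch)."""
    N = 8

    def alpha(u):
        # Normalization factor: sqrt(1/N) for the DC basis, sqrt(2/N) otherwise
        return math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)

    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

# A constant (flat) block has all of its energy in the DC coefficient;
# every AC coefficient comes out (numerically) zero.
flat = [[100.0] * 8 for _ in range(8)]
coeffs = fdct_8x8(flat)
```

A production encoder would use a fast separable or factored DCT rather than this quadruple loop; the direct form is shown only because it mirrors the definition.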
  • DCT-based encoder 100 may be configured to receive some source image data (e.g., shown as 8x8 blocks in Figure 1) and to output that data in a compressed form.
  • Encoder 100 may include Forward DCT (FDCT) 102, quantizer 104, and entropy encoder 106.
  • Figure 2 depicts a DCT-based decoder 200, which receives the compressed image data and outputs the original source image data in a reconstructed form.
  • Decoder 200 may be viewed as a mirror image of encoder 100, as such decoder 200 may include entropy decoder 202, dequantizer 204, and Inverse DCT (IDCT) 206.
  • IDCT: Inverse DCT
  • encoder 100 may be configured to receive source image data 101 for subsequent compression.
  • Source image data 101 may be grouped into 8x8 blocks, shifted from unsigned integers and input to Forward DCT 102.
  • Forward DCT 102 may provide an output (e.g., 64 DCT coefficients) to a quantizer 104.
  • quantizer 104 may be configured to compress a range of values (e.g., the amplitudes of the frequency components) to a single quantum value.
  • quantizer 104 may be configured to compress the data by representing DCT coefficients with no greater precision than is necessary to achieve the desired image quality.
  • quantizer 104 may provide a sequence of quantized DCT coefficients (DC and AC) as an input to entropy encoder 106.
  • Entropy encoder 106 may be configured to perform the final DCT-based processing operations. Entropy encoder 106 may achieve additional compression losslessly by encoding the quantized DCT coefficients more compactly based on their statistical characteristics. Entropy encoder 106 may be configured to perform a variety of different operations, as discussed in further detail below.
  • entropy encoding may include two distinct operations.
  • the first operation may convert a zig-zag sequence of quantized coefficients into an intermediate sequence of symbols.
  • the purpose and operation of the zig-zag sequence is described in further detail in the "JPEG Still Picture Compression Standard" cited above.
  • the second operation may convert the symbols to a data stream in which the symbols no longer have externally identifiable boundaries.
  • encoder 106 may convert the zig-zag sequence of quantized coefficients into an intermediate sequence of symbols. Encoder 106 may also convert the symbols to a data stream in which the symbols no longer have externally identifiable boundaries. The form and definition of the intermediate symbols may be dependent on both the DCT-based mode of operation and the entropy coding method (Huffman, arithmetic, etc.). As discussed above, the input to entropy encoder 106 may be a sequence of quantized DCT coefficients (DC and AC).
  • each nonzero AC coefficient may be represented in combination with its "runlength.”
  • the term “runlength” or “RUNLENGTH” as used herein may correspond to the consecutive number of zero-valued AC coefficients that precede the nonzero AC coefficient in the zigzag sequence.
  • Each runlength/nonzero-coefficient combination may be represented by a pair of symbols, for example, Symbol-1 (RUNLENGTH, SIZE) and Symbol-2 (AMPLITUDE).
  • Symbol-1 may represent the two pieces of information, RUNLENGTH and SIZE.
  • RUNLENGTH may correspond to the number of consecutive zero- valued AC coefficients in the zigzag sequence preceding the nonzero AC coefficient being represented.
  • SIZE may correspond to the number of bits used to encode AMPLITUDE (i.e., Symbol-2).
  • AMPLITUDE as used herein may correspond to the amplitude of the nonzero AC coefficient.
  • Symbol-2 may represent a single piece of information designated AMPLITUDE.
  • the RUNLENGTH value may represent zero-runs of length 0 to 15. Actual zero-runs in the zig-zag sequence may be greater than 15, so the Symbol-1 value (15, 0) may be interpreted as an extension symbol representing a run of 16 zero-valued coefficients.
  • the special Symbol-1 value (0,0) indicates an end of block (EOB) situation, and may be viewed as an "escape" symbol that terminates the 8x8 sample block.
  • the zig-zag sequence of 63 quantized AC coefficients may be represented as a sequence of Symbol-1 and Symbol-2 symbol pairs.
  • each "pair" may have repetitions of Symbol-1 in the case of a long RUNLENGTH or only one Symbol-1 in the case of an EOB.
  • the possible range of quantized AC coefficients may determine the range of values for which both the AMPLITUDE and the SIZE information may correspond.
  • AC coefficients one possible structure of the Symbol-1 and Symbol-2 intermediate representations is illustrated below in Tables 1 and 2, respectively.
  • the intermediate representation may be structured similarly.
  • Symbol-1 may represent only SIZE information
  • Symbol-2 may represent AMPLITUDE information as before: Symbol-1 (SIZE), Symbol-2 (AMPLITUDE).
  • because the DC coefficient may be differentially encoded, it may cover twice as many integer values, [-2^11, 2^11 - 1], as the AC coefficients, so one additional level may be added to the bottom of Table 2 for DC coefficients.
  • Symbol-1 for DC coefficients may thus represent a value from 1 to 11.
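The differential DC coding mentioned above can be sketched with a hypothetical helper: each block records the difference between its DC value and the previous block's, and Symbol-1 for DC is simply the SIZE of that difference (the real encoder additionally emits the difference's amplitude bits).

```python
def dc_symbols(dc_values):
    """For a sequence of per-block DC values, produce (SIZE, difference)
    pairs, where SIZE is the bit count of the difference from the previous
    block's DC value (illustrative sketch; first block differences from 0)."""
    prev = 0
    out = []
    for dc in dc_values:
        diff = dc - prev
        prev = dc
        out.append((abs(diff).bit_length(), diff))  # (Symbol-1 SIZE, delta)
    return out
```

For instance, three blocks with DC values 10, 12, 12 produce differences 10, 2, 0 with SIZEs 4, 2, 0.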
  • Huffman coding is a method that may take symbols (e.g., bytes, DCT coefficients, etc.) and encode them with variable length codes that may be assigned according to statistical probabilities.
  • the Huffman coding scheme used in JPEG compression may reduce file size further by replacing the fixed-size (e.g. 12-bit) codes with variable-length codes (i.e., 1-16 bits).
  • the replacement codes may add up to fewer bits than the original. If, on average, 10% fewer bits are needed to store the quantized DCT coefficients, then this may directly translate into a 10% file size reduction. The difficult part may be deciding which codes to replace with which variable-length bit-strings.
  • the Huffman Tables define these mappings, and are stored inside every JPEG image file. A separate Huffman table may be required for each of the following four components: luminance DC component, luminance AC components, chrominance DC component and chrominance AC components.
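As an illustration of how such a mapping can be derived from symbol statistics, the sketch below builds a textbook Huffman code with `heapq`. It shows only the underlying idea: actual JPEG tables are canonical, length-limited to 16 bits, and stored in the file's DHT segments, and the frequencies here are arbitrary example values.

```python
import heapq
from itertools import count

def huffman_code(freqs):
    """Build a prefix-free variable-length code from symbol frequencies
    (classic bottom-up Huffman construction; illustrative sketch)."""
    tiebreak = count()  # keeps heap comparisons away from the dict payloads
    heap = [(f, next(tiebreak), {sym: ""}) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two least-frequent subtrees...
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))  # ...merge
    return heap[0][2]

# Frequent symbols receive short codewords, rare symbols long ones:
codes = huffman_code({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5})
```

The resulting code is prefix-free, so the decoder can recover symbol boundaries from the bit stream alone, which is exactly the property the JPEG entropy decoder relies on.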
  • This may result in a total of 6840 bits or 855 bytes for standard entropy coding as shown in Table 6.
  • Standard run-length encoding (RLE) data compression may include several operations. For example, the following operations may be implemented while making an RLE sequence: 1. Action-1: Each of the 63 AC coefficients may be compared with zero; thus, for each block there may be 63 Action-1 operations.
  • Action-2 If the current element of the sequence is 0, the counter of RUNLENGTH may be increased.
  • Action-3 If the number of RUNLENGTH is greater than 15 and it is not an EOB, then the counter of RUNLENGTH may be modified.
  • Action-4 If the current element is not zero, an address to the appropriate table may be created, the codeword may be obtained and written.
  • Action-5 Once the codeword has been obtained for the RUNLENGTH/SIZE combination, the AMPLITUDE value may be presented in binary form and written.
  • Action-6 Address calculation in Huffman tables (using values of RUNLENGTH and SIZE).
  • Table 3 shows the average complexity values per block for the operations listed above utilizing the YUV model, wherein Y corresponds to brightness (i.e., luminance) and UV corresponds to color (i.e., chrominance). For example, if average Action-3 Y complexity equals 0.1 there may be only one Action-3 Y per 10 blocks on average.
  • FIG. 3 depicts a system 300 for low complexity coding in accordance with an exemplary embodiment of the present disclosure.
  • System 300 may be configured to receive raw video data and to convert that data from a Red, Green, Blue (RGB) color space to a luminance and chrominance (YUV) color space using converter 302. Conversion from RGB to YUV may be performed using a variety of techniques, including, but not limited to, the use of fast table-driven algorithms.
  • RGB: Red, Green, Blue
  • YUV: luminance and chrominance
  • Each converted Y, U, and V frame may be grouped into 8x8 blocks and input into FDCT 304.
  • Each 8x8 block of source image samples is effectively a 64-point discrete signal which may be a function of x and y.
  • FDCT 304 may receive this signal as an input and decompose it into 64 orthogonal basis signals. Each may contain one of the 64 unique two-dimensional (2D) "spatial frequencies" which comprise the input signal's "spectrum."
  • the output of FDCT 304 may be the set of 64 basis-signal amplitudes or "DCT coefficients" whose values may be uniquely determined by the particular 64-point input signal.
  • FDCT 304 may provide this output to quantizer 306. It should be noted that any portion of system 300 may be included in, for example, an integrated circuit or similar device.
  • each of the 64 DCT coefficients may be uniformly quantized in conjunction with a quantization table (e.g., 64 element), which may be specified by the application (or user) as an input to the entropy coder 308.
  • Each element may be any integer value from 1 to 255, which specifies the step size of the quantizer for its corresponding DCT coefficient.
  • Quantization may be defined as division of each DCT coefficient by its corresponding quantizer step size, followed by rounding to the nearest integer. This output value may then be normalized by the quantizer step size. Following quantization, each element may be input to entropy coder 308.
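The quantization step just defined amounts to one division and one rounding per coefficient. A minimal sketch, assuming half-up rounding (the exact rounding mode is an implementation choice), together with the decoder-side dequantization:

```python
import math

def quantize(coeffs, qtable):
    """Divide each DCT coefficient by its quantizer step size and round to
    the nearest integer (half-up). qtable entries are integers 1..255."""
    return [[math.floor(c / q + 0.5) for c, q in zip(crow, qrow)]
            for crow, qrow in zip(coeffs, qtable)]

def dequantize(qcoeffs, qtable):
    """Decoder side: multiply each quantized value back by its step size."""
    return [[c * q for c, q in zip(crow, qrow)]
            for crow, qrow in zip(qcoeffs, qtable)]
```

Note that quantization is the lossy step: dequantizing 33 that was quantized with step 16 returns 32, not 33, which is precisely the precision reduction the quantization table controls.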
  • Entropy coder 308 may include RLE coding module 310 and Huffman coding module 312.
  • the standard run-length encoding described above with reference to Table 3 requires 4 Huffman tables.
  • separate tables may be used for the RUNLENGTH and SIZE values for AC coefficients.
  • the same tables may be used without changes. This may reduce the storage size required to maintain the Huffman tables.
  • this approach may result in a significant conservation of memory resources as compared to existing entropy encoding techniques, which require 855 bytes.
  • Table 4 depicts a summary showing the reduced number of operations required when separate tables are used for RUNLENGTH and SIZE for AC coefficients.
  • a measure of the complexity of this approach to RLE encoding may involve determining the number of operations required. The number of operations may be compared with those required for standard RLE as shown in Table 3 above. The primary difference appears in Action-4: in this embodiment two different tables must be addressed, so the number of these operations may double. Moreover, Action-6 is not required, since no address into a joint Huffman table needs to be calculated. A more detailed comparison between standard RLE and this embodiment may be formed by comparing Figures 7 and 8, which are described in further detail below.
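A sketch of this separate-table variant follows. Instead of one joint lookup keyed by the (RUNLENGTH, SIZE) pair, the encoder performs two independent Huffman lookups, so no combined table address is ever computed. The table contents below are hypothetical toy placeholders, not the patent's actual codes.

```python
def encode_separate_tables(symbols, run_codes, size_codes):
    """Emit bits for (RUNLENGTH, SIZE, AMPLITUDE) symbols using independent
    Huffman tables for RUNLENGTH and SIZE (illustrative sketch; amplitude
    bits use JPEG-style ones'-complement coding of negatives)."""
    bits = []
    for run, size, amp in symbols:
        bits.append(run_codes[run])      # first lookup: RUNLENGTH table
        bits.append(size_codes[size])    # second lookup: SIZE table
        if size > 0:
            val = amp if amp > 0 else amp - 1
            bits.append(format(val & ((1 << size) - 1), "0{}b".format(size)))
    return "".join(bits)

# Hypothetical toy tables (prefix-free within each alphabet):
run_codes = {0: "0", 2: "10"}
size_codes = {0: "00", 2: "01", 3: "10"}
stream = encode_separate_tables([(0, 3, 5), (2, 2, -3), (0, 0, None)],
                                run_codes, size_codes)
```

Each Symbol-1 now costs two table reads instead of one, which is the doubled Action-4 count noted above, but the two small tables together are far smaller than one joint RUNLENGTH/SIZE table.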
  • Figure 4 depicts a diagram 400 showing the Rate-distortion curve (i.e., compression rate vs. quality of reconstructed image using PSNR metric) associated with standard JPEG entropy coding as compared with the entropy coding described in accordance with embodiments of the present disclosure.
  • the approach described above in Figure 3 may simplify the encoding process, although the compression ratio may be reduced slightly (see Figure 4), because RUNLENGTH and SIZE may be independently encoded, without using the correlation between them.
  • run-length encoding may be performed without using Huffman tables for RUNLENGTH. That is, for AC coefficients, only Huffman tables for SIZE may be used. For DC coefficients, the same tables described above may be used. Codewords for RUNLENGTH may not be required; instead, the zeros themselves may be written into the output stream. For example, if a sequence contains six zeroes followed by a non-zero AC coefficient, the zeroes may be written into the stream followed by SIZE and AMPLITUDE.
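One way this table-free RUNLENGTH variant could look in code is sketched below. The SIZE table shown is a hypothetical placeholder chosen so that every SIZE codeword begins with '1'; that choice (an assumption for decodability, not stated in the patent) lets a leading '0' bit unambiguously mean "zero coefficient."

```python
def amplitude_bits(coef, size):
    """JPEG-style amplitude bits: positive values as-is, negative values as
    (coef - 1) taken in the low `size` bits (illustrative helper)."""
    val = coef if coef > 0 else coef - 1
    return format(val & ((1 << size) - 1), "0{}b".format(size))

def encode_without_run_tables(ac, size_codes):
    """Zeros are written literally to the stream as '0' bits; each non-zero
    coefficient is written as a SIZE codeword plus its amplitude bits."""
    bits = []
    for coef in ac:
        if coef == 0:
            bits.append("0")                     # literal zero, no codeword
        else:
            size = abs(coef).bit_length()
            bits.append(size_codes[size])        # Huffman code for SIZE only
            bits.append(amplitude_bits(coef, size))
    return "".join(bits)

# Hypothetical SIZE table whose codewords all begin with '1':
size_codes = {1: "10", 2: "110", 3: "1110"}
stream = encode_without_run_tables([0] * 6 + [5], size_codes)
```

For the six-zeros-then-5 example from the text, the stream is six literal '0' bits, then the SIZE-3 codeword, then the three amplitude bits of 5.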
  • Figure 6 depicts the probability distribution for SIZE values.
  • the distribution function for SIZE values may change for different movies and thus may be more complicated.
  • the probability at SIZE value 2 ranges from approximately 0.45 for claire80.avi to approximately 0.55 for soldiers_hi.avi.
  • Table 5 depicts the complexity in terms of the number of operations required when performing run-length encoding without using Huffman tables for RUNLENGTH.
  • the elimination of the RUNLENGTH tables may also eliminate the need for the RUNLENGTH counter.
  • Action-2 and Action-3 (i.e., the RUNLENGTH counter modifications) may thus be eliminated.
  • the Action-6 requirement may also be eliminated, as there is no need to calculate addresses in the Huffman tables.
  • the process of encoding may be simplified.
  • the compression ratio may also decrease more as shown in Figure 4, because the RUNLENGTH values are not compressed.
  • Figures 7-9 depict a series of flowcharts comparing standard JPEG entropy coding with the embodiments of the present disclosure.
  • Figure 7 depicts a flowchart 700 including operations 701-715 used to perform standard entropy coding in accordance with the JPEG image compression algorithm.
  • Flowchart 700 includes the operation of address calculation in the Huffman tables for RUNLENGTH and SIZE (i.e., Action-6).
  • the standard entropy coding technique shown in Figure 7 may require 855 bytes of memory consumption.
  • Figure 8 shows flowchart 800 including operations 801-814 configured to perform entropy coding using separate tables for RUNLENGTH and SIZE in accordance with the embodiments described herein.
  • the operation of address calculation in Huffman tables is removed (thus no Action-6 is required).
  • the embodiment shown in Figure 8 may require far less memory (i.e., 72 bytes) than standard entropy coding.
  • Figure 9 shows a flowchart 900 including operations 901-911 configured to perform entropy coding in accordance with the embodiments described herein.
  • Flowchart 900 depicts an embodiment of performing entropy coding without using tables for RUNLENGTH. In this embodiment, the operations related to the calculation of RUNLENGTH may be removed.
  • Flowchart 900 also depicts the writing of zero coefficients into the stream, as shown in operation 905.
  • the embodiment shown in Figure 9 may require less memory (i.e., 24 bytes) than standard entropy encoding as well.
  • Table 6 shows the variations in memory consumption between (1) existing entropy encoding techniques, (2) entropy coding using short RL tables, and (3) entropy coding without using tables for RUNLENGTH. As shown below, the embodiments described herein provide a significant amount of memory conservation as compared to standard entropy coding.
  • a flowchart 1000 of operations for data compression is provided in Figure 10.
  • the method may include converting data from a first color space to a second color space to generate converted data (1002).
  • the method may further include receiving said converted data at a discrete cosine transform (DCT), said DCT being configured to generate at least one DCT coefficient (1004).
  • DCT: discrete cosine transform
  • the method may additionally include receiving said at least one DCT coefficient from said discrete cosine transform at a quantizer, said quantizer being configured to quantize said at least one DCT coefficient and to provide at least one quantized coefficient to an entropy encoder, said entropy encoder including a first Huffman table corresponding to a number of bits used to encode an amplitude of a non-zero AC coefficient (1006).
  • IC: integrated circuit
  • SoC: System-on-a-Chip
  • ASIC: application specific integrated circuit
  • FPGA: field programmable gate array
  • circuitry may comprise, for example, singly or in any combination, hardwired circuitry, programmable circuitry, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. It should be understood at the outset that any of the operations and/or operative components described in any embodiment herein may be implemented in software, firmware, hardwired circuitry and/or any combination thereof.
  • Embodiments of the methods described above may be implemented in a computer program that may be stored on a storage medium having instructions to program a system (e.g., a machine) to perform the methods.
  • the storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic and static RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), flash memories, magnetic or optical cards, or any type of media suitable for storing electronic instructions.
  • Other embodiments may be implemented as software modules executed by a programmable control device.
  • the present disclosure may provide numerous advantages over the prior art.
  • the embodiments described herein provide a number of low complexity entropy coding schemes that may decrease the computational complexity of a VLC-based entropy coder by approximately 50% and memory consumption by approximately 70% in comparison with standard JPEG.
  • the embodiments described herein allow for the selection of an efficient tradeoff between computational complexity and compression ratio.
  • the embodiments described herein may not require the coding of zero sequences between non-zero coefficients (RUNLENGTH).
  • Huffman tables for RUNLENGTH values are also not needed, thus reducing memory consumption (see Table 6). It was shown that the probability of long zero sequences may be very small for various test movies (see Figure 5) and that the effect on the compression ratio is quite small (see Figure 4).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The present disclosure provides a method for data compression. In one embodiment the method may include converting data from a first color space to a second color space to generate converted data. The method may further include receiving said converted data at a discrete cosine transform (DCT), said DCT being configured to generate at least one DCT coefficient. The method may additionally include receiving said at least one DCT coefficient from said discrete cosine transform at a quantizer, said quantizer being configured to quantize said at least one DCT coefficient and to provide at least one quantized coefficient to an entropy encoder, said entropy encoder including a first Huffman table corresponding to a number of bits used to encode an amplitude of a non-zero AC coefficient.

Description

LOW COMPLEXITY CODING IN DATA COMPRESSION SYSTEMS
FIELD
The present disclosure describes a method for low complexity coding for use in data compression. BACKGROUND
Data compression may be used to represent a set of information as space efficiently as possible. A data compression code may provide a map between a set of source messages and a set of codewords. Some video compression systems may perform this coding during a limited time period. For example, for High Definition Television (HDTV) 1080i (1080x1920, 30 fps) that is encoded by tiles (i.e., rectangular sub-images) of size 8x1920, every tile may be encoded within 497 microseconds. Once a tile has been encoded, a successive tile may be present at the input of the compression system and may be ready for processing.
In general, entropy coding refers to a lossless data compression scheme that may be independent of the media's specific characteristics. Typically, entropy coding is one of the most computationally complex elements of a video compression system. In some cases, entropy coding may involve the assignment of codes to symbols so as to match code lengths with the probabilities of the symbols. The design of a data compression system typically involves a tradeoff between the effective compression of input data and minimum complexity costs in the design.
Entropy coding may be realized in modern video compression systems using a variety of different techniques, such as paired Run-Length Encoding (RLE) and Variable-Length Coding (VLC), or arithmetic coding. Arithmetic coding may provide a better compression rate, but may be very complex and may not be a good solution for low-complexity systems. VLC-based techniques may provide a smaller compression rate and tend to be far less complex; thus these techniques may be more suitable for real-time applications.
BRIEF DESCRIPTION OF DRAWINGS
Features and advantages of the claimed subject matter will be apparent from the following detailed description of embodiments consistent therewith, which description should be considered with reference to the accompanying drawings, wherein:
FIG. 1 is a diagram showing an example of discrete cosine transform (DCT) based encoder processing;
FIG. 2 is a diagram showing an example of DCT-based decoder processing;

FIG. 3 is a diagram of an exemplary embodiment of low complexity entropy coding in accordance with the present disclosure;

FIG. 4 is a diagram showing a rate-distortion curve (i.e., compression rate vs. PSNR) associated with standard JPEG entropy coding as compared with the entropy coding described in accordance with the present disclosure;
FIG. 5 is a diagram showing an estimated probability distribution of RUNLENGTH values in a standard JPEG algorithm;

FIG. 6 is a diagram showing an estimated probability distribution of SIZE values in a standard JPEG algorithm;
FIG. 7 is a flowchart of operations showing standard entropy coding using the JPEG image compression algorithm;
FIG. 8 is a flowchart of operations showing entropy coding using short tables for RUNLENGTH in accordance with one exemplary embodiment of the present disclosure;
FIG. 9 is a flowchart of operations in accordance with another exemplary embodiment of the present disclosure; and
FIG. 10 is a flowchart of operations according to one embodiment of the present disclosure.
Although the following Detailed Description will proceed with reference being made to illustrative embodiments, many alternatives, modifications, and variations thereof will be apparent to those skilled in the art.
DETAILED DESCRIPTION

Generally, this disclosure describes a method for low complexity coding for use in data compression. The embodiments described herein may be used in accordance with data compression standards such as the Joint Photographic Experts Group (JPEG) standard, ITU-T T.81, ISO/IEC IS 10918-1, published September 1992. More specifically, the embodiments described herein provide a number of entropy coding algorithms, which may be used to reduce the storage size required for the Huffman tables used by a codec.
Data transmission of still pictures in various fields, such as color facsimile, desktop publishing, and medical imaging may be performed in accordance with the JPEG standard. The JPEG standard may involve the use of discrete cosine transform (DCT) coding during the data compression process.
Discrete Cosine Transform (DCT) based compression may involve the compression of a stream of 8x8 blocks of grayscale sample images. Each 8x8 block of source image samples is effectively a 64-point discrete signal which may be a function of the two spatial dimensions x and y. The DCT coefficient values may thus be regarded as the relative amount of the 2D spatial frequencies contained in the 64-point input signal. The coefficient with zero frequency in both dimensions is called the "DC coefficient" and the remaining 63 coefficients are called the "AC coefficients." Additional information pertaining to the DCT is described in greater detail in "The JPEG Still Picture Compression Standard", Gregory K. Wallace, IEEE Transactions on Consumer Electronics, 1991.
Referring now to Figure 1, a DCT-based encoder 100 is shown. DCT-based encoder 100 may be configured to receive source image data (e.g., shown as 8x8 blocks in Figure 1) and to output that data in a compressed form. Encoder 100 may include Forward DCT (FDCT) 102, quantizer 104, and entropy encoder 106. In contrast, Figure 2 depicts a DCT-based decoder 200, which receives the compressed image data and outputs the original source image data in a reconstructed form. Decoder 200 may be viewed as a mirror image of encoder 100; as such, decoder 200 may include entropy decoder 202, dequantizer 204, and Inverse DCT (IDCT) 206.
Referring again to Figure 1, encoder 100 may be configured to receive source image data 101 for subsequent compression. Source image data 101 may be grouped into 8x8 blocks, shifted from unsigned integers and input to Forward DCT 102. As discussed above, the specific operation of forward DCT 102 is described in greater detail in "The JPEG Still Picture Compression Standard", Gregory K. Wallace, IEEE Transactions on Consumer Electronics, 1991. FDCT 102 may provide an output (e.g., 64 DCT coefficients) to a quantizer 104. In some embodiments, quantizer 104 may be configured to compress a range of values (e.g., the amplitudes of the frequency components) to a single quantum value. In this way, quantizer 104 may be configured to compress the data by representing DCT coefficients with no greater precision than is necessary to achieve the desired image quality. Thus, quantizer 104 may provide a sequence of quantized DCT coefficients (DC and AC) as an input to entropy encoder 106.
Entropy encoder 106 may be configured to perform the final DCT-based processing operations. Entropy encoder 106 may achieve additional compression losslessly by encoding the quantized DCT coefficients more compactly based on their statistical characteristics. Entropy encoder 106 may be configured to perform a variety of different operations, as discussed in further detail below.
Generally, entropy encoding may include two distinct operations. The first operation may convert a zig-zag sequence of quantized coefficients into an intermediate sequence of symbols. The purpose and operation of the zig-zag sequence is described in further detail in the "JPEG Still Picture Compression Standard" cited above. The second operation may convert the symbols to a data stream in which the symbols no longer have externally identifiable boundaries.
Thus, in some embodiments, encoder 106 may convert the zig-zag sequence of quantized coefficients into an intermediate sequence of symbols. Encoder 106 may also convert the symbols to a data stream in which the symbols no longer have externally identifiable boundaries. The form and definition of the intermediate symbols may depend on both the DCT-based mode of operation and the entropy coding method (Huffman, arithmetic, etc.). As discussed above, the input to entropy encoder 106 may be a sequence of quantized DCT coefficients (DC and AC). In the intermediate symbol sequence, each nonzero AC coefficient may be represented in combination with its "runlength." The term "runlength" or "RUNLENGTH" as used herein may correspond to the consecutive number of zero-valued AC coefficients that precede the nonzero AC coefficient in the zig-zag sequence.
Each runlength/nonzero-coefficient combination may be represented by a pair of symbols, for example, Symbol-1 (RUNLENGTH, SIZE) and Symbol-2 (AMPLITUDE). In this example, Symbol-1 may represent two pieces of information, RUNLENGTH and SIZE. As mentioned above, RUNLENGTH may correspond to the number of consecutive zero-valued AC coefficients in the zigzag sequence preceding the nonzero AC coefficient being represented. The term "SIZE", as used herein, may correspond to the number of bits used to encode AMPLITUDE (i.e., Symbol-2). The term "AMPLITUDE" as used herein may correspond to the amplitude of the nonzero AC coefficient. Symbol-2 may represent a single piece of information designated AMPLITUDE.
Generally, the RUNLENGTH value may represent zero-runs of length 0 to 15. Actual zero-runs in the zig-zag sequence may be greater than 15, so the Symbol-1 value (15, 0) may be interpreted as the extension symbol with RUNLENGTH=16 (i.e., representing a run of 16 zero-valued coefficients). The special Symbol-1 value (0, 0) indicates an end-of-block (EOB) condition, and may be viewed as an "escape" symbol that terminates the 8x8 sample block.
Thus, for each 8x8 block of samples, the zig-zag sequence of 63 quantized AC coefficients may be represented as a sequence of Symbol-1 and Symbol-2 symbol pairs. However, each "pair" may have repetitions of Symbol-1 in the case of a long RUNLENGTH, or only one Symbol-1 in the case of an EOB.
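As an illustration only (not part of the claimed subject matter), the intermediate-symbol construction described above may be sketched in Python. The function name and the list-of-tuples output format are hypothetical; the run-extension and EOB conventions follow the description above.

```python
def intermediate_symbols(ac):
    """Convert zig-zag-ordered quantized AC coefficients into a list of
    ((RUNLENGTH, SIZE), AMPLITUDE) pairs, per the scheme described above."""
    symbols = []
    run = 0
    for coeff in ac:
        if coeff == 0:
            run += 1
            continue
        while run > 15:                       # zero-run longer than 15:
            symbols.append(((15, 0), None))   # emit the extension symbol
            run -= 16
        size = abs(coeff).bit_length()        # bits needed for AMPLITUDE
        symbols.append(((run, size), coeff))
        run = 0
    if run > 0:                               # trailing zeros: end-of-block
        symbols.append(((0, 0), None))
    return symbols
```

For example, a block beginning 5, 0, 0, -3 with all remaining coefficients zero yields (0,3)/5, then (2,2)/-3, then the EOB symbol (0,0).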
The possible range of quantized AC coefficients may determine the range of values to which both the AMPLITUDE and the SIZE information may correspond. For AC coefficients, one possible structure of the Symbol-1 and Symbol-2 intermediate representations is illustrated below in Tables 1 and 2, respectively. For DC coefficients the intermediate representation may be structured similarly. In this example, Symbol-1 may represent only SIZE information; Symbol-2 may represent AMPLITUDE information as before: Symbol-1 (SIZE), Symbol-2 (AMPLITUDE). Because the DC coefficient may be differentially encoded, it may be covered by twice as many integer values, [-2^11, 2^11-1], as the AC coefficients, so one additional level may be added to the bottom of Table 2 for DC coefficients. Symbol-1 for DC coefficients may thus represent a value from 1 to 11.
[Table 1 is shown as an image in the original document.]

Table 1. Baseline Huffman Coding Symbol-1 Structure

SIZE    AMPLITUDE
1       -1, 1
2       -3..-2, 2..3
3       -7..-4, 4..7
4       -15..-8, 8..15
5       -31..-16, 16..31
6       -63..-32, 32..63
7       -127..-64, 64..127
8       -255..-128, 128..255
9       -511..-256, 256..511
10      -1023..-512, 512..1023

Table 2. Baseline Entropy Coding Symbol-2 Structure
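The SIZE categories of Table 2 may be sketched as follows. Note one assumption in the second function: negative amplitudes are written JPEG-style as amplitude minus one in the low SIZE bits, so the leading bit distinguishes sign; the text above only states that AMPLITUDE is "presented in binary form."

```python
def size_category(amplitude):
    """SIZE = number of bits needed to represent |amplitude| (Table 2)."""
    return abs(amplitude).bit_length()

def amplitude_bits(amplitude, size):
    """JPEG-style variable-length integer (assumed convention): positive
    values are written as-is in SIZE bits; negative values are written as
    amplitude - 1, keeping only the low SIZE bits."""
    if amplitude < 0:
        amplitude -= 1
    return format(amplitude & ((1 << size) - 1), f'0{size}b')
```

For instance, amplitude 5 falls in SIZE category 3 (range 4..7 in Table 2) and is written as the three bits 101.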
Huffman coding is a method that may take symbols (e.g., bytes, DCT coefficients, etc.) and encode them with variable-length codes that may be assigned according to statistical probabilities. For example, the Huffman coding scheme used in JPEG compression may reduce file size further by replacing the fixed-size (e.g., 12-bit) codes with variable-length codes (i.e., 1-16 bits). On average, the replacement codes may add up to fewer bits than the original. If, on average, 10% fewer bits are needed to store the quantized DCT coefficients, then this may directly translate into a 10% file size reduction. The difficult part may be deciding which codes to replace with which variable-length bit-strings. The Huffman tables define these mappings, and are stored inside every JPEG image file. A separate Huffman table may be required for each of the following four components: luminance DC component, luminance AC components, chrominance DC component, and chrominance AC components.
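For illustration, code lengths of the kind stored in such tables may be derived with the classic min-heap Huffman construction. This is a generic sketch, not the JPEG table-building procedure itself (JPEG additionally limits code lengths to 16 bits and reserves certain codes).

```python
import heapq
from itertools import count

def huffman_code_lengths(freqs):
    """Return a symbol -> code-length map built by repeatedly merging the
    two least-frequent subtrees; each merge deepens its members by one."""
    tiebreak = count()  # keeps heap comparisons away from the dict payloads
    heap = [(f, next(tiebreak), {sym: 0}) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**a, **b}.items()}
        heapq.heappush(heap, (fa + fb, next(tiebreak), merged))
    return heap[0][2]
```

The resulting lengths satisfy the Kraft inequality with equality, so prefix-free codewords of those lengths always exist.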
For example, the tables may include a table for luminance including AC coefficients (162 records, 162x20 = 3240 bits), a table for chrominance including AC coefficients (162 records, 162x20 = 3240 bits), a table for luminance including DC coefficients (12 records, 12x15 = 180 bits), and a table for chrominance including DC coefficients (12 records, 12x15 = 180 bits). This may result in a total of 6840 bits, or 855 bytes, for standard entropy coding, as shown in Table 6.

Standard run-length encoding (RLE) data compression may include several operations. For example, the following operations may be implemented while producing an RLE sequence:
1. Action-1: Each of the 63 AC coefficients may be compared with zero. Thus, there may be 63 Action-1 operations per domain.
2. Action-2: If the current element of the sequence is 0, the counter of RUNLENGTH may be increased.
3. Action-3: If the number of RUNLENGTH is greater than 15 and it is not an EOB, then the counter of RUNLENGTH may be modified.
4. Action-4: If the current element is not zero, an address to the appropriate table may be created, the codeword may be obtained and written.
5. Action-5: Once the codeword has been obtained for the RUNLENGTH/SIZE combination, the AMPLITUDE value may be presented in binary form and written.
6. Action-6: Address calculation in Huffman tables (using values of RUNLENGTH and SIZE).
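The per-block accounting of these six actions may be sketched as follows. This is an illustrative model only: the exact trigger condition for Action-3 (a run just exceeding 15 zeros that is not part of a trailing EOB run) is an assumption about the accounting, not a statement from the disclosure.

```python
def count_actions(ac):
    """Tally Action-1..Action-6 for one block of 63 AC coefficients,
    mirroring the per-block complexity accounting of Table 3."""
    actions = {i: 0 for i in range(1, 7)}
    run = 0
    for i, coeff in enumerate(ac):
        actions[1] += 1                  # Action-1: compare with zero
        if coeff == 0:
            actions[2] += 1              # Action-2: bump RUNLENGTH counter
            run += 1
            # Action-3: run exceeds 15 and a nonzero coefficient follows
            if run == 16 and any(c != 0 for c in ac[i + 1:]):
                actions[3] += 1
                run = 0
        else:
            actions[4] += 1              # Action-4: table lookup and write
            actions[5] += 1              # Action-5: write AMPLITUDE bits
            actions[6] += 1              # Action-6: Huffman address calc
            run = 0
    return actions
```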
Table 3 shows the average complexity values per block for the operations listed above utilizing the YUV model, wherein Y corresponds to brightness (i.e., luminance) and UV corresponds to color (i.e., chrominance). For example, if average Action-3 Y complexity equals 0.1 there may be only one Action-3 Y per 10 blocks on average.
[Table 3 is shown as an image in the original document.]
Table 3. Complexity of standard RLE

Run-Length Encoding with short tables
Figure 3 depicts a system 300 for low complexity coding in accordance with an exemplary embodiment of the present disclosure. System 300 may be configured to receive raw video data and to convert that data from a Red, Green, Blue (RGB) color space to a luminance and chrominance (YUV) color space using converter 302. Conversion from RGB to YUV may be performed using a variety of techniques, including, but not limited to, the use of fast table-driven algorithms. Each converted Y, U, and V frame may be grouped into 8x8 blocks and input into FDCT 304. Each 8x8 block of source image samples is effectively a 64-point discrete signal which may be a function of x and y. FDCT 304 may receive this signal as an input and decompose it into 64 orthogonal basis signals. Each may contain one of the 64 unique two-dimensional (2D) "spatial frequencies" which comprise the input signal's "spectrum." The output of FDCT 304 may be the set of 64 basis-signal amplitudes, or "DCT coefficients," whose values may be uniquely determined by the particular 64-point input signal. FDCT 304 may provide this output to quantizer 306. It should be noted that any portion of system 300 may be included in, for example, an integrated circuit or similar device.
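The forward transform just described may be illustrated with the textbook DCT-II reference form. This is a sketch for clarity, not the implementation of FDCT 304: it is the O(N^4) direct formula, whereas practical encoders use fast factorizations.

```python
import math

def fdct_8x8(block):
    """Direct 8x8 forward DCT-II: projects the 64 samples onto the 64
    cosine basis signals described above."""
    def c(k):
        return 1 / math.sqrt(2) if k == 0 else 1.0
    out = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            s = sum(block[y][x]
                    * math.cos((2 * x + 1) * u * math.pi / 16)
                    * math.cos((2 * y + 1) * v * math.pi / 16)
                    for x in range(8) for y in range(8))
            out[v][u] = 0.25 * c(u) * c(v) * s
    return out
```

A constant block produces a single nonzero DC coefficient and 63 (numerically near-zero) AC coefficients, matching the DC/AC terminology above.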
After output from FDCT 304, each of the 64 DCT coefficients may be uniformly quantized in conjunction with a quantization table (e.g., 64 element), which may be specified by the application (or user) as an input to the entropy coder 308. Each element may be any integer value from 1 to 255, which specifies the step size of the quantizer for its corresponding DCT coefficient. Quantization may be defined as division of each DCT coefficient by its corresponding quantizer step size, followed by rounding to the nearest integer. This output value may then be normalized by the quantizer step size. Following quantization, each element may be input to entropy coder 308.
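The quantization rule just described, division of each coefficient by its step size followed by rounding to the nearest integer, may be sketched as:

```python
def quantize(coeffs, qtable):
    """Uniform quantization of a 2D array of DCT coefficients by a
    same-shaped table of step sizes; dequantization multiplies back.
    Note: Python's round() uses banker's rounding at exact halves."""
    return [[round(c / q) for c, q in zip(crow, qrow)]
            for crow, qrow in zip(coeffs, qtable)]
```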
Entropy coder 308 may include RLE coding module 310 and Huffman coding module 312. The standard run-length encoding described above with reference to Table 3 requires 4 Huffman tables. In contrast, in this embodiment, separate tables may be used for the RUNLENGTH and SIZE values of the AC coefficients. In this embodiment, for the DC coefficients, the same tables may be used without changes. This may reduce the storage size required to maintain the Huffman tables. For example, in accordance with this embodiment, a table of codewords for all possible RUNLENGTH values may consist of 16 records (16*10 = 160 bits for the UV-components, 14*16 = 224 bits for the Y-component), and the table for SIZE may include 12 records (12*16 = 192 bits). This may result in a total of 576 bits, or 72 bytes. Thus, this approach may result in a significant conservation of memory resources as compared to existing entropy encoding techniques, which require 855 bytes.
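The storage figures above can be verified with simple arithmetic, using the record counts and bit widths exactly as stated in the preceding paragraph:

```python
# Codeword-table storage for the separate-table scheme, in bits:
runlength_uv_bits = 16 * 10   # RUNLENGTH table for the UV components
runlength_y_bits = 14 * 16    # RUNLENGTH table for the Y component
size_bits = 12 * 16           # SIZE table
total_bits = runlength_uv_bits + runlength_y_bits + size_bits
total_bytes = total_bits // 8
print(total_bits, total_bytes)  # 576 72  -- versus 855 bytes for standard coding
```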
Table 4 depicts a summary showing the reduced number of operations required when separate tables are used for RUNLENGTH and SIZE for AC coefficients.
[Table 4 is shown as an image in the original document.]
Table 4. Complexity of RLE with short tables
A measure of the complexity of this approach to RLE encoding may involve determining the number of operations required. The number of operations may be compared with those required for standard RLE, as shown in Table 3 above. The primary difference appears in Action-4: in this embodiment two different tables must be addressed, so the number of these operations may double. Moreover, Action-6 is not required, since there is no need to calculate addresses in the Huffman tables. A more detailed comparison between standard RLE and this embodiment may be formed by comparing Figures 7 and 8, which are described in further detail below.
Figure 4 depicts a diagram 400 showing the rate-distortion curve (i.e., compression rate vs. quality of the reconstructed image using the PSNR metric) associated with standard JPEG entropy coding as compared with the entropy coding described in accordance with embodiments of the present disclosure. The approach described above in Figure 3 may simplify the encoding process, although the compression ratio may be reduced slightly (see Figure 4), because RUNLENGTH and SIZE are encoded independently, without using the correlation between them. Figure 4 shows that the compression rate and corresponding quality level (calculated using the rate-distortion function) decrease only very slightly. For example, for Rate = 1 Mbps the decrease in quality level for the embodiments described herein is less than 0.1 dB in comparison with standard JPEG using a PSNR metric.

Run-Length Encoding without tables for RUNLENGTH
In some embodiments, run-length encoding may be performed without using Huffman tables for RUNLENGTH. That is, for AC coefficients, only Huffman tables for SIZE may be used. For DC coefficients, the same tables described above may be used. Codewords for RUNLENGTH may not be required; instead, the zeros themselves may be written into the output stream. For example, if six zeroes precede a non-zero AC coefficient in the sequence, the zeroes may be written into the stream, followed by SIZE and AMPLITUDE.
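This table-free variant may be sketched as follows. The sketch is illustrative only: the SIZE codeword map is hypothetical, the negative-amplitude convention is an assumption (JPEG-style amplitude minus one), and the disclosure leaves open how a literal zero bit is distinguished from a SIZE codeword — here the example codewords all begin with '1' so the stream stays decodable.

```python
def encode_no_runlength_tables(ac, size_codes):
    """Encode AC coefficients with no RUNLENGTH table: each zero is
    emitted literally as a '0' bit; each nonzero coefficient is emitted
    as a Huffman codeword for SIZE followed by the AMPLITUDE bits."""
    bits = []
    for coeff in ac:
        if coeff == 0:
            bits.append('0')               # zero written straight out
        else:
            size = abs(coeff).bit_length()
            bits.append(size_codes[size])  # Huffman codeword for SIZE
            amp = coeff if coeff > 0 else coeff - 1
            bits.append(format(amp & ((1 << size) - 1), f'0{size}b'))
    return ''.join(bits)
```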
The determination to utilize only Huffman tables for SIZE may become apparent after reviewing Figures 5 and 6, which depict the probability distribution of both RUNLENGTH and SIZE values respectively. As shown in Figure 5, the probability of a large zero sequence may be very small for different testing images (e.g. claire80.avi vs. soldiers_hi.avi). Here, the distribution function does not appear to change significantly for the different testing images. For example, the probability of 5 successive zeroes occurring during a sequence of AC coefficients may be less than 0.008 for the different testing images.
In contrast, Figure 6 depicts the probability distribution for SIZE values. Here, it may be noted that the distribution function for SIZE values may change for different movies and thus may be more complicated. For example, the probability at SIZE value 2 ranges from approximately 0.45 for claire80.avi to approximately 0.55 for soldiers_hi.avi.
Table 5 depicts the complexity in terms of the number of operations required when performing run-length encoding without using Huffman tables for RUNLENGTH. The elimination of the RUNLENGTH tables may also eliminate the need for the RUNLENGTH counter. Thus, Action-2 and Action-3 (i.e., the RUNLENGTH counter modifications) are not necessary. The Action-6 requirement may be eliminated as well, since there is no need to calculate addresses in the Huffman tables.
[Table 5 is shown as an image in the original document.]
Table 5. Complexity of RLE without tables for RUNLENGTH
Therefore, by eliminating the Huffman tables for RUNLENGTH, the encoding process may be simplified. As a result, the compression ratio may also decrease somewhat further, as shown in Figure 4, because the RUNLENGTH values are not compressed.
Figures 7-9 depict a series of flowcharts comparing standard JPEG entropy coding with the embodiments of the present disclosure. For example, Figure 7 depicts a flowchart 700 including operations 701-715 used to perform standard entropy coding in accordance with the JPEG image compression algorithm. Flowchart 700 includes the operation of address calculation in the Huffman tables for RUNLENGTH and SIZE (i.e., Action-6). The standard entropy coding technique shown in Figure 7 may require 855 bytes of memory consumption.
In contrast, Figure 8 shows flowchart 800 including operations 801-814 configured to perform entropy coding using separate tables for RUNLENGTH and SIZE in accordance with the embodiments described herein. In this embodiment, as discussed above, the operation of address calculation in the Huffman tables is removed (thus no Action-6 is required). The embodiment shown in Figure 8 may require far less memory (i.e., 72 bytes) than standard entropy coding.

Figure 9 shows a flowchart 900 including operations 901-911 configured to perform entropy coding in accordance with the embodiments described herein. Flowchart 900 depicts an embodiment of performing entropy coding without using tables for RUNLENGTH. In this embodiment, the operations related to the calculation of RUNLENGTH may be removed. For example, operations 705 (i.e., Action-2, corresponding to increasing the Run counter by 1), 707, 709 and 710 of Figure 7 are removed. Flowchart 900 also depicts the writing of zero coefficients into the stream, as shown in operation 905. The embodiment shown in Figure 9 may require less memory (i.e., 24 bytes) than standard entropy encoding as well.
A summary showing standard entropy coding as well as the embodiments described herein is provided below in Table 6. Table 6 shows the variations in memory consumption between (1) existing entropy encoding techniques, (2) entropy coding using short RL tables, and (3) entropy coding without using tables for RUNLENGTH. As shown below, the embodiments described herein provide a significant amount of memory conservation as compared to standard entropy coding.
[Table 6 is shown as an image in the original document.]
Table 6. Memory Consumption of the three techniques
In accordance with one exemplary embodiment of the present disclosure, a flowchart 1000 of operations for data compression is provided in Figure 10. The method may include converting data from a first color space to a second color space to generate converted data (1002). The method may further include receiving said converted data at a discrete cosine transform (DCT), said DCT being configured to generate at least one DCT coefficient (1004). The method may additionally include receiving said at least one DCT coefficient from said discrete cosine transform at a quantizer, said quantizer being configured to quantize said at least one DCT coefficient and to provide at least one quantized coefficient to an entropy encoder, said entropy encoder including a first Huffman table corresponding to a number of bits used to encode an amplitude of a non-zero AC coefficient (1006). Of course, additional operations are also within the scope of the present disclosure.

The embodiments described herein may be implemented, for example, in an integrated circuit (IC), which may include, for example, a System-on-a-Chip (SoC), an application specific integrated circuit (ASIC) and/or a field programmable gate array (FPGA). "Integrated circuit", as used in any embodiment herein, means a semiconductor device and/or microelectronic device, such as, for example, but not limited to, a semiconductor integrated circuit chip.
As used in any embodiment described herein, "circuitry" may comprise, for example, singly or in any combination, hardwired circuitry, programmable circuitry, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. It should be understood at the outset that any of the operations and/or operative components described in any embodiment herein may be implemented in software, firmware, hardwired circuitry and/or any combination thereof.
Embodiments of the methods described above may be implemented in a computer program that may be stored on a storage medium having instructions to program a system (e.g., a machine) to perform the methods. The storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic and static RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), flash memories, magnetic or optical cards, or any type of media suitable for storing electronic instructions. Other embodiments may be implemented as software modules executed by a programmable control device.
The present disclosure may provide numerous advantages over the prior art. For example, the embodiments described herein describe a number of low complexity entropy coding schemes that may decrease the computational complexity of a VLC-based entropy coder by approximately 50% and memory consumption by approximately 70% in comparison with standard JPEG. The embodiments described herein allow for the selection of an efficient tradeoff between computational complexity and compression ratio.
For example, the embodiments described herein may not require the coding of zero sequences between non-zero coefficients (RUNLENGTH). In some embodiments, Huffman tables for RUNLENGTH values are also not needed, thus reducing memory consumption (see Table 6). It was shown that the probability of such sequences may be very small for different testing movies (see Figure 5) and that the effect on the compression ratio is quite small (see Figure 4).

Various features, aspects, and embodiments have been described herein. The features, aspects, and embodiments are susceptible to combination with one another as well as to variation and modification, as will be understood by those having skill in the art. The present disclosure should, therefore, be considered to encompass such combinations, variations, and modifications.


CLAIMS

What is claimed is:
1. An apparatus, comprising: an integrated circuit (IC) configured to compress and/or decompress data, said integrated circuit including an entropy encoder configured to receive at least one discrete cosine transform (DCT) coefficient, said entropy encoder including a first Huffman table corresponding to a number of bits used to encode an amplitude of a nonzero AC coefficient.
2. The apparatus according to claim 1, wherein said entropy encoder further includes a second Huffman table corresponding to a consecutive number of zero-valued
AC coefficients that precedes said non-zero AC coefficient in a given sequence.
3. The apparatus according to claim 1, wherein said entropy encoder is configured to write at least one zero into an output stream, said at least one zero corresponding to a consecutive number of zero-valued AC coefficients that precedes said non-zero AC coefficient in a given sequence.
4. The apparatus according to claim 1, further comprising a quantizer configured to receive said at least one coefficient from a forward discrete cosine transform, said quantizer further configured to quantize said at least one coefficient and to provide at least one quantized coefficient to said entropy encoder.
5. The apparatus according to claim 4, wherein said forward discrete cosine transform is configured to receive converted data from a converter, said discrete cosine transform being further configured to generate said at least one DCT coefficient and to provide said at least one DCT coefficient to said quantizer.
6. A method for data compression comprising: converting data from a first color space to a second color space to generate converted data; receiving said converted data at a discrete cosine transform (DCT), said DCT being configured to generate at least one DCT coefficient; and receiving said at least one DCT coefficient from said discrete cosine transform at a quantizer, said quantizer being configured to quantize said at least one DCT coefficient and to provide at least one quantized coefficient to an entropy encoder, said entropy encoder including a first Huffman table corresponding to a number of bits used to encode an amplitude of a non-zero AC coefficient.
7. The method according to claim 6, wherein said entropy encoder further includes a second Huffman table corresponding to a consecutive number of zero-valued AC coefficients that precedes said non-zero AC coefficient in a given sequence.
8. The method according to claim 6, wherein said entropy encoder is configured to write at least one zero into an output stream, said at least one zero corresponding to a consecutive number of zero-valued AC coefficients that precedes said non-zero AC coefficient in a given sequence.
9. The method according to claim 7, further comprising: comparing said at least one DCT coefficient with zero; increasing a counter corresponding to said consecutive number of zero-valued AC coefficients that precedes said non-zero AC coefficient in a given sequence; modifying said counter if said consecutive number of zero-valued AC coefficients that precedes said non-zero AC coefficient in a given sequence is greater than a given threshold and is not an end-of-block (EOB); detecting a non-zero coefficient and obtaining and writing a codeword corresponding to at least one of said consecutive number of zero-valued AC coefficients that precedes said non-zero AC coefficient in a given sequence and said number of bits used to encode an amplitude of a non-zero AC coefficient; and writing said amplitude of said nonzero AC coefficient in binary form.
10. The method according to claim 8, further comprising: comparing said at least one DCT coefficient with zero; detecting a non-zero coefficient and obtaining and writing a codeword corresponding to at least one of said consecutive number of zero-valued AC coefficients that precedes said non-zero AC coefficient in a given sequence and said number of bits used to encode an amplitude of a non-zero AC coefficient; and writing said amplitude of said nonzero AC coefficient in binary form.
11. An article comprising a storage medium having stored thereon instructions that when executed by a machine result in the following: converting data from a first color space to a second color space to generate converted data; receiving said converted data at a discrete cosine transform (DCT), said DCT being configured to generate at least one DCT coefficient; and receiving said at least one DCT coefficient from said discrete cosine transform at a quantizer, said quantizer being configured to quantize said at least one DCT coefficient and to provide at least one quantized coefficient to an entropy encoder, said entropy encoder including a first Huffman table corresponding to a number of bits used to encode an amplitude of a non-zero AC coefficient.
12. The article according to claim 11, wherein said entropy encoder further includes a second Huffman table corresponding to a consecutive number of zero-valued AC coefficients that precedes said non-zero AC coefficient in a given sequence.
13. The article according to claim 11, wherein said entropy encoder is configured to write at least one zero into an output stream, said at least one zero corresponding to a consecutive number of zero-valued AC coefficients that precedes said non-zero AC coefficient in a given sequence.
14. The article according to claim 12, further comprising: comparing said at least one DCT coefficient with zero; increasing a counter corresponding to said consecutive number of zero- valued AC coefficients that precedes said non-zero AC coefficient in a given sequence; modifying said counter if said consecutive number of zero-valued AC coefficients that precedes said non-zero AC coefficient in a given sequence is greater than a given threshold and is not an end-of-block (EOB); detecting a non-zero coefficient and obtaining and writing a codeword corresponding to at least one of said consecutive number of zero-valued AC coefficients that precedes said non-zero AC coefficient in a given sequence and said number of bits used to encode an amplitude of a non-zero AC coefficient; and writing said amplitude of said nonzero AC coefficient in binary form.
15. The article according to claim 13, further comprising: comparing said at least one DCT coefficient with zero; detecting a non-zero coefficient and obtaining and writing a codeword corresponding to at least one of said consecutive number of zero-valued AC coefficients that precedes said non-zero AC coefficient in a given sequence and said number of bits used to encode an amplitude of a non-zero AC coefficient; and writing said amplitude of said nonzero AC coefficient in binary form.
PCT/RU2008/000025 2008-01-18 2008-01-18 Separate huffman coding of runlength and size data of dct coefficients WO2009091279A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN2008801253255A CN101919252A (en) 2008-01-18 2008-01-18 Separate huffman coding of runlength and size data of DCT coefficients
DE112008003626T DE112008003626T5 (en) 2008-01-18 2008-01-18 Low complexity coding in data compression systems
PCT/RU2008/000025 WO2009091279A1 (en) 2008-01-18 2008-01-18 Separate huffman coding of runlength and size data of dct coefficients
EP08793987A EP2235950A1 (en) 2008-01-18 2008-01-18 Separate huffman coding of runlength and size data of dct coefficients

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/RU2008/000025 WO2009091279A1 (en) 2008-01-18 2008-01-18 Separate huffman coding of runlength and size data of dct coefficients

Publications (1)

Publication Number Publication Date
WO2009091279A1 true WO2009091279A1 (en) 2009-07-23

Family

ID=39865630

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/RU2008/000025 WO2009091279A1 (en) 2008-01-18 2008-01-18 Separate huffman coding of runlength and size data of dct coefficients

Country Status (4)

Country Link
EP (1) EP2235950A1 (en)
CN (1) CN101919252A (en)
DE (1) DE112008003626T5 (en)
WO (1) WO2009091279A1 (en)


Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8891616B1 (en) 2011-07-27 2014-11-18 Google Inc. Method and apparatus for entropy encoding based on encoding cost
US11039138B1 (en) 2012-03-08 2021-06-15 Google Llc Adaptive coding of prediction modes using probability distributions
US9179151B2 (en) 2013-10-18 2015-11-03 Google Inc. Spatial proximity context entropy coding
CN112449185B (en) * 2019-08-28 2022-01-25 腾讯科技(深圳)有限公司 Video decoding method, video encoding device, video encoding medium, and electronic apparatus
CN115664426B (en) * 2022-12-27 2023-03-21 深圳安德空间技术有限公司 Real-time lossless compression method and system for ground penetrating radar data
CN118055256B (en) * 2024-04-15 2024-11-01 浙江芯劢微电子股份有限公司 DCT (discrete cosine transform) transformed image compression coding method and DCT transformed image compression coding system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"DIGITAL COMPRESSION AND CODING OF CONTINUOUS-TONE STILL IMAGES PART I: REQUIREMENTS AND GUIDELINES", APPENDIX A. ISO DIS 10918-1. REQUIREMENTS AND GUIDELINES.INTERNATIONAL STANDARD DIS 10918-1. CCITT RECOMMENDATION T.81, XX, XX, 1 January 1993 (1993-01-01), pages 337,339 - 369,371, XP001023580 *
ABHINAV GUPTA ET AL: "Modified Runlength Coding for Improved JPEG Performance", INFORMATION AND COMMUNICATION TECHNOLOGY, 2007. ICICT '07. INTERNATIONAL CONFERENCE ON, IEEE, PI, 1 March 2007 (2007-03-01), pages 235 - 237, XP031183552, ISBN: 978-984-32-3394-3 *
AGOSTINI L V ET AL: "Pipelined entropy coders for JPEG compression", INTEGRATED CIRCUITS AND SYSTEMS DESIGN, 2002. PROCEEDINGS. 15TH SYMPOSIUM ON, 9-14 SEPT. 2002, PISCATAWAY, NJ, USA, IEEE, 9 September 2002 (2002-09-09), pages 203 - 208, XP010621790, ISBN: 978-0-7695-1807-7 *
WALLACE G K: "THE JPEG STILL PICTURE COMPRESSION STANDARD", IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, IEEE SERVICE CENTER, NEW YORK, NY, US, vol. 38, no. 1, 1 February 1992 (1992-02-01), pages XVIII - XXXIV, XP000297354, ISSN: 0098-3063 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9774856B1 (en) 2012-07-02 2017-09-26 Google Inc. Adaptive stochastic entropy coding
US9509998B1 (en) 2013-04-04 2016-11-29 Google Inc. Conditional predictive multi-symbol run-length coding
US9392288B2 (en) 2013-10-17 2016-07-12 Google Inc. Video coding using scatter-based scan tables
CN112398484A (en) * 2019-08-15 2021-02-23 华为技术有限公司 Coding method and related equipment
CN112398484B (en) * 2019-08-15 2024-04-23 华为技术有限公司 Coding method and related equipment

Also Published As

Publication number Publication date
EP2235950A1 (en) 2010-10-06
DE112008003626T5 (en) 2010-12-16
CN101919252A (en) 2010-12-15

Similar Documents

Publication Publication Date Title
US10893273B2 (en) Data encoding and decoding
EP2235950A1 (en) Separate huffman coding of runlength and size data of dct coefficients
TWI685245B (en) Data encoding and decoding
EP2984761B1 (en) Data encoding and decoding
US7099387B2 (en) Context-adaptive VLC video transform coefficients encoding/decoding methods and apparatuses
JP5550005B2 (en) Image compression method using random access function
US9544599B2 (en) Context adaptive data encoding
BRPI0210786B1 (en) method for encoding digital image data using adaptive video data compression
TWI650012B (en) Apparatus, method, and storage medium for data encoding and decoding
KR20040018400A (en) Dct compression using golomb-rice coding
JP2004531995A5 (en)
US8041131B2 (en) Variable length coding of coefficient clusters for image and video compression
EP1768415A1 (en) Adaptive scan order of DCT coefficients and its signaling
CN113613004A (en) Image encoding method, image encoding device, electronic device, and storage medium
US20080193028A1 (en) Method of high quality digital image compression
JP4181887B2 (en) Variable length coding apparatus and method
EP1797520B1 (en) Methods and systems for rate control in image compression
CN111491163B (en) Image block encoding based on pixel domain preprocessing operation on image blocks
US9451257B2 (en) Method and apparatus for image encoding and/or decoding and related computer program products
CN116647673A (en) Video encoding and decoding method and device
CN111541895B (en) Embedded codec (EBC) circuitry for position dependent entropy coding of residual level data
US20060278725A1 (en) Image encoding and decoding method and apparatus, and computer-readable recording medium storing program for executing the method
US7203373B2 (en) Image and video coding with redundant run-length-level-last codewords removed
KR20100013142A (en) Copression methode for frame memmory
Naaz et al. Implementation of hybrid algorithm for image compression and decompression

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 200880125325.5

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08793987

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2008793987

Country of ref document: EP

RET De translation (de og part 6b)

Ref document number: 112008003626

Country of ref document: DE

Date of ref document: 20101216

Kind code of ref document: P
