WO2007116339A1 - Method for iterative soft decoding of constrained code words - Google Patents
Method for iterative soft decoding of constrained code words
- Publication number
- WO2007116339A1 (PCT/IB2007/051150)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- codeword symbol
- codeword
- parity
- code
- run
- Prior art date
Classifications
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/03—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
- H03M13/05—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
- H03M13/11—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits using multiple parity bits
- H03M13/1102—Codes on graphs and decoding on graphs, e.g. low-density parity check [LDPC] codes
- H03M13/1191—Codes on graphs other than LDPC codes
- H03M13/1194—Repeat-accumulate [RA] codes
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/03—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
- H03M13/05—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
- H03M13/11—Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits using multiple parity bits
- H03M13/1102—Codes on graphs and decoding on graphs, e.g. low-density parity check [LDPC] codes
- H03M13/1105—Decoding
- H03M13/1111—Soft-decision decoding, e.g. by means of message passing or belief propagation algorithms
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M13/00—Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
- H03M13/63—Joint error correction and other techniques
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M5/00—Conversion of the form of the representation of individual digits
- H03M5/02—Conversion to or from representation by pulses
- H03M5/04—Conversion to or from representation by pulses the pulses having two levels
- H03M5/14—Code representation, e.g. transition, for a given bit cell depending on the information in one or more adjacent bit cells, e.g. delay modulation code, double density code
- H03M5/145—Conversion to or from block codes or representations thereof
Definitions
- the invention relates to a soft decoding method for decoding an input signal representing an error correcting code comprising a systematic part and a parity part.
- the invention also relates to a method for decoding encoded data, a soft decoder, a system for decoding encoded data, an integrated circuit, and a computer program product.
- the run-length limiting (RLL) constraint implies that the number of consecutive zeros or consecutive ones is constrained by a minimum and/or a maximum.
- the constraints are required for reliable data storage and retrieval because of physical limitations of the storage media and the recording and retrieval devices.
- a minimal run-length constraint is needed, for example, if the area on the disc representing one bit is smaller than the minimal area that can be addressed by the recorder. A maximal run-length can be needed, for example, in cases where accurate sensor positioning during read-out is difficult for long run-lengths.
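- as an illustration only (not part of the patent text), the following Python sketch checks whether a bit sequence satisfies minimal and maximal run-length limits; the function and parameter names are assumptions made for this example.

```python
# Illustrative sketch, not from the patent: checking run-length limits on a bit
# sequence.  The names min_run and max_run are assumptions for this example.
def run_lengths(bits):
    """Yield the lengths of maximal runs of identical bits."""
    if not bits:
        return
    count = 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1
        else:
            yield count
            count = 1
    yield count


def satisfies_rll(bits, min_run=2, max_run=8):
    """True if every run of identical bits has a length in [min_run, max_run]."""
    return all(min_run <= r <= max_run for r in run_lengths(bits))


if __name__ == "__main__":
    print(satisfies_rll([0, 0, 1, 1, 1, 0, 0]))  # True: runs of 2, 3, 2
    print(satisfies_rll([0, 1, 0, 0, 1, 1]))     # False: the isolated '1' has run length 1
```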
- before storing the data on the optical or magnetic storage device, it is encoded to comply with the run-length limiting constraints by applying an RLL encoder. Before applying the RLL encoder, the data is also encoded with an error-correcting code (ECC) to be able to correct possible errors that may occur in bits read back from the storage medium.
- Bruner et al. discloses a system in which a Reed-Solomon encoder is used as an ECC encoder followed by an RLL encoder that encodes input symbols of for example 8 bits into 256 selected output symbols of 9 bits to realize a maximal run-length of 8.
- the corresponding decoding system consists of an RLL decoder followed by a Reed-Solomon ECC decoder.
- the RLL decoder can provide some level of error detection in that receipt of a nine bit value by the RLL decoder that does not map into the 256 selected combinations of nine bit values indicates the presence of at least one erroneous bit.
- Such an occurrence can be flagged as an erasure by the RLL decoder and provided to the error-correcting code (ECC) decoder to help identify the erroneous symbol or symbols.
- Other erasure sources can be employed as recognized by those skilled in the art, such as upstream automatic gain control (AGC) circuitry used to normalize the data (e.g., the presence of a large increase in signal gain for example can serve as a thermal asperity erasure).
- Bruner et al. also describes an alternative circuit that can be used by a disk drive to encode input data.
- the input user data are provided to an RLL encoder block.
- the resulting RLL encoded user data are supplied as the systematic symbols to an ECC encoder block which calculates appropriate code words (i.e., parity symbols).
- the parity symbols are output to a second RLL encoder.
- the RLL encoded parity symbols are appended to the end of the RLL encoded user data (the systematic symbols) and output for writing to the discs.
- a read-back circuit used to recover the encoded data is also described.
- a retrieved bit stream is separated into the RLL encoded user data and the RLL encoded parity symbols, the latter of which are decoded with an RLL decoder block.
- the decoded parity symbols are then used for ECC detection and correction of the RLL encoded user data by an ECC decoder. Corrected RLL encoded user data are then output to a second RLL decoder.
- the encoding scheme described in Bruner et al., in which the data part is RLL encoded before the ECC encoding and the parity part is RLL encoded after the ECC encoding, together with the corresponding decoding scheme, is known as the Bliss scheme.
- examples of error-correcting codes that may replace or complement the Reed-Solomon encoder used by Bruner et al. are the low-density parity check (LDPC) code and the repeat accumulate (RA) code.
- the LDPC code is for example described in "Low cost LDPC decoder for DVB-S2" by J. Dielissen, A. Hekstra, and V. Berg, in: IEEE Design Automation and Test in Europe, Munich, March 2006, referred to hereinafter as "Dielissen".
- the RA code is for example described in "Factor graphs and the sum-product algorithm" by F.R. Kschischang, B.J. Frey, and H.-A. Loeliger, in: IEEE Trans. on Information Theory, Vol. 47, No. 2, 2001.
- another example of an RLL constraint is a maximum transition run (MTR) constraint.
- This objective is realized by providing a soft decoding method for decoding an input signal representing an error correcting code comprising a systematic part and a parity part, the code being defined by a predetermined number of parity check equations (51) in a binary Galois field, a respective set of binary codeword symbol positions (52, 53) appearing in each parity check equation (51); the systematic part being restricted by a predetermined number of pattern constraints (71), a respective set of at least two systematic binary codeword symbol positions (72, 73) appearing in each pattern constraint (71); the method comprising computing a codeword symbol likelihood measure corresponding to at least one of the systematic binary codeword symbol positions (61), based on the input signal, the parity check equations (63), and the pattern constraints (64); and providing an output signal based on the codeword symbol likelihood measure.
- parity checks and pattern constraints help to identify and correct possible errors in the input signal.
- the parity checks usually involve sets of systematic symbols in combination with respective sets of parity symbols.
- the pattern constraints restrict the systematic part based on predefined structural information. The pattern constraints restrict the number of symbol patterns that may occur in the systematic part, thereby allowing the method to correct erroneous symbols if the pattern constraints are not satisfied.
- An example of structural information is a run-length limiting constraint. Other types of structural information are possible.
- One such example of structural information that can be used in the method relates to decoding encoded text data consisting of 8-bit sequences, where each 8-bit sequence represents an ASCII character, and only a predefined set of ASCII characters (for example a-z, A-Z, 0-9, and punctuation characters) are allowed to occur in the text.
- for such data, pattern constraints can be defined with eight binary codeword symbol positions appearing in each pattern constraint, each pattern constraint allowing only the predefined set of ASCII characters to occur.
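- purely as an illustrative sketch (not part of the patent text), such an ASCII-style pattern constraint can be expressed as a membership test on 8-bit groups of the systematic part; the allowed character set below is an assumption for the example.

```python
# Illustrative sketch, not from the patent: an "allowed character" pattern
# constraint over 8-bit groups of a systematic part.
import string

# Assumed allowed set for the example: letters, digits, punctuation and space.
ALLOWED_CODES = {ord(c) for c in string.ascii_letters + string.digits
                 + string.punctuation + " "}


def text_pattern_constraints_hold(bits):
    """Interpret the systematic bits as 8-bit codes and check each group."""
    assert len(bits) % 8 == 0, "systematic part must be a whole number of bytes"
    for i in range(0, len(bits), 8):
        value = int("".join(str(b) for b in bits[i:i + 8]), 2)
        if value not in ALLOWED_CODES:
            return False
    return True
```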
- the pattern constraints define predefined symbol patterns that are allowed according to the run-length limiting constraint and/or predefined symbol patterns that are not allowed according to the run-length limiting constraint.
- the constraints can be formed by tabulating allowed and/or not-allowed symbol or bit patterns.
- the constraints can also be formed by means of a formula involving codeword symbol positions.
- the constraints can refer to absolute codeword symbol positions or they can refer to codeword symbol positions relative to each other.
- the symbol patterns that cannot occur according to the run-length constraint are not allowed in the decoded codeword. This helps to improve the error correcting capability in at least two ways. First, errors that are not detected by the parity checks can be detected by the pattern constraints, and thus be corrected.
- second, an error detected by the parity checks can sometimes be corrected in more than one way, which means that the error-corrected codeword cannot be uniquely determined. Taking into account the additional information provided by the pattern constraints, it becomes possible to disambiguate these candidate corrected code words.
- An interesting case is the case where the parity checks are designed to be able to correct only one error or a limited number of errors, but the input signal contains more errors. Sometimes the additional pattern constraint can be used to correct the errors in this case.
- the method is an iterative method, and the method comprises: initializing the codeword symbol likelihood measure corresponding to at least one of the binary codeword symbol positions (61), based on the input signal; and iteratively updating the codeword symbol likelihood measure based on the parity check equations (63) and the pattern constraints (64).
- the restrictions on the systematic part relate to an encoding step performed on a signal before performing an error-correcting code encoding step on the signal, an output signal of the error-correcting code encoding step being a source of the input signal of the decoding method.
- the encoding step performed before performing an error-correcting code relates to a physical constraint of a storage medium.
- the encoding step to which the restrictions relate transforms the signal into a run-length limited signal.
- a number of codeword symbol positions appearing in each pattern constraint is at least three.
- Patterns of at least three symbol positions are of special importance because three bits contain sufficient information to significantly improve the error correcting capability.
- a number of codeword symbol positions appearing in each pattern constraint is exactly three. Such constraints can be computed more efficiently than patterns of substantially more than three symbol positions, and have the highest error-correcting capability.
- the step of iteratively updating the codeword symbol likelihood measure is performed also in dependence on the codeword symbol likelihood measure corresponding to a codeword symbol at the codeword symbol position (73) appearing in a pattern constraint (71) in which the at least one of the codeword symbol positions (72) also appears.
- if the codeword symbols at the codeword symbol positions appearing in a pattern constraint are known with a high certainty (or likelihood), the codeword symbol corresponding to the at least one of the codeword symbol positions can also be determined with a high certainty (or likelihood). If at least one of the codeword symbols at the codeword symbol positions appearing in a pattern constraint is known only with a low certainty (or likelihood), the codeword symbol corresponding to the at least one of the codeword symbol positions can be determined only with a low certainty (or likelihood). This principle is incorporated in the step of updating the codeword symbol likelihood measure.
- Another embodiment further comprises initializing a codeword parity symbol likelihood measure corresponding to at least one parity codeword symbol position belonging to the parity part, based on the input signal; and iteratively updating the codeword parity symbol likelihood measure based on the parity check equations.
- in this embodiment, the parity part is decoded using an RLL decoder before the error-correcting decoding method is applied. The parity part input to the error-correcting decoder therefore does not need to comply with the RLL constraint, and in this case the pattern constraints are only applied to the systematic part of the code.
- Another embodiment further comprises distinguishing between a predetermined set of allowed codeword symbol patterns and a predetermined set of prohibited codeword symbol patterns.
- the codeword symbol likelihood measure can be updated such that allowed codeword symbol patterns are enhanced (or made more likely) and prohibited codeword symbol patterns are suppressed (or made less likely).
- the step of computing is also based on a predetermined probability distribution relating to a frequency of occurrence of a number of codeword symbol patterns.
- the step of updating is also based on a predetermined conditional probability distribution relating to a frequency of occurrence of a codeword symbol given at least one other codeword symbol.
- the (conditional) probability distribution allows the likelihood measure to be adjusted accordingly. On average this results in fewer errors in the decoded codeword.
- Another embodiment further comprises stopping the iteration when a codeword corresponding to updated codeword symbol likelihood measures corresponding to all codeword symbol positions satisfies all characterizing parity check equations.
- the codeword corresponding to updated codeword symbol likelihood measures corresponding to all codeword symbol positions is also referred to as the decoded codeword.
- Another embodiment further comprises stopping the iteration when a codeword corresponding to updated codeword symbol likelihood measures corresponding to all codeword symbol positions satisfies all characterizing parity check equations and all characterizing pattern constraints.
- Violation of a pattern constraint also indicates the presence of an error in the decoded codeword. Therefore, iterations continue until both the parity check equations and the pattern constraints are satisfied.
- the code is a low-density parity check code or a repeat accumulate code.
- the method is highly effective when the linear code is a low-density parity check (LDPC) code or a repeat accumulate (RA) code.
- LDPC low-density parity check
- RA repeat accumulate
- Such codes can be decoded efficiently using an iterative decoder and the method according to the invention can be integrated easily into an iterative decoder.
- an LDPC code uses bits as the codeword symbols. With bits as the codeword symbols, it is relatively easy to define bit pattern constraints. Other types of codes can be used equivalently.
- the decoder is an iterative decoder and preferably the codeword symbols are bits.
- the error correcting performance of repeat accumulate (RA) codes is very similar to that of LDPC codes, as both types of codes have very similar graph-based decoding algorithms.
- RA codes come with a straightforward encoding method, whereas special constructions are necessary for LDPC codes to make low complexity encoding possible.
- the distinction between RA codes and LDPC codes is rather vague, as they both are based on pseudo-random interleavers and low-density parity check matrices (i.e. a limited number of non-zero entries per row or column of the parity check matrix, independent of the codeword length).
- the pattern constraints correspond to a run-length limiting constraint.
- run-length limiting constraints can be expressed as bit pattern constraints; the method can therefore be applied efficiently to run-length limiting constraints.
- the method is highly applicable to a combination of a RLL constraint and an LDPC code.
- the pattern constraints correspond to a run-length limiting constraint imposing a minimal run-length.
- a minimal run-length constraint imposes a minimum on the run-length in a data signal or a codeword.
- a minimal run-length constraint has a relatively large influence on the information rate of the signal and provides a relatively large opportunity to correct errors based on the corresponding pattern constraints.
- the pattern constraints involve a relatively small number of codeword symbol positions, which makes the method relatively computationally efficient.
- the pattern constraints correspond to a run-length limiting constraint imposing a maximum transition-run.
- a maximum transition-run constraint limits the number of consecutive segments having a minimal run-length.
- Such a constraint can also be efficiently used in the method as set forth.
- the pattern constraints correspond to a run-length limiting constraint imposing a maximal run-length.
- a maximal run-length of k implies that at most k+1 successive bits in the signal or codeword have the same value. Such a constraint can also be used in the method as set forth.
- the input signal relates to data stored on an optical disc memory or a magnetic memory.
- the physical properties of an optical and/or magnetic disc memory and/or the apparatus reading from and/or writing to the optical disc memory are such that only run-length limited data can be reliably stored on the disc. Also, more information can be stored on the memory if the data are run-length limited.
- the input signal relates to data received from a digital data communications system.
- the method can also be applied with similar advantages for a decoder in a digital communications system, such as a LAN, WAN, WLAN, GSM, GPS, GPRS, DSL, telephone, or mobile phone communications system.
- An embodiment comprises: receiving an input signal (206) representing a likelihood of each bit in a code word comprising a systematic part and a parity part; applying a soft-input-soft-output run-length limiting decoder (200) to the parity part of the code word; applying (202) the method of claim 1 with an input signal formed by the systematic part and the run-length limiting decoded parity part; and applying a run-length limiting decoder (204) to the data part of the output signal obtained by the step of applying a method according to the invention.
- the method can be applied for example in a Bliss scheme.
- the method can be applied when the systematic part of the code is in RLL format.
- An embodiment comprises a soft decoder for decoding an input signal representing an error correcting code comprising a systematic part and a parity part, the code being defined by a predetermined number of parity check equations (51) in a binary Galois field, a respective set of binary codeword symbol positions (52, 53) appearing in each parity check equation (51), and the systematic part being restricted by a predetermined number of pattern constraints (71), a respective set of at least two systematic binary codeword symbol positions (72, 73) appearing in each pattern constraint (71);
- the decoder comprising means for computing a codeword symbol likelihood measure corresponding to at least one of the systematic binary codeword symbol positions (61), based on the input signal, the parity check equations (63), and the pattern constraints (64); and means for providing an output signal based on the codeword symbol likelihood measure.
- An embodiment comprises means for receiving an input signal (206) representing a likelihood of each bit in a code word comprising a systematic part and a parity part; a first soft-input-soft-output run-length limiting decoder (200) for decoding the parity part of the code word to obtain a run-length limiting decoded parity part; a soft decoder (202) according to the invention arranged for receiving as an input signal the systematic part and the run-length limiting decoded parity part; and a second run-length limiting decoder (204) for decoding the systematic part of the output signal of the soft decoder.
- An embodiment comprises an integrated circuit comprising a system for decoding encoded data.
- An embodiment comprises a computer program product comprising instructions for causing a processor to perform the method according to the invention.
- Fig. 1 illustrates a signal encoding system
- Fig. 2 illustrates a signal decoding system
- Fig. 3 illustrates an example of a Tanner graph
- Fig. 4 illustrates a modified Tanner graph including nodes representing pattern constraints
- Figs. 5-8 highlight specific edges of the modified Tanner graph.
- Fig. 1 illustrates an encoding system, suitable for preparing an input signal for storage on a medium, such as an optical disc or a magnetic storage.
- the same embodiment can be used for preparing data for transmission over a transmission channel.
- the embodiment shown transforms the signal to satisfy a run-length limiting constraint and to add redundancy for later error correction purposes.
- An input signal 106 for example originating from a camera or microphone, from a transmission channel, or from a storage medium, is run-length limiting (RLL) encoded 100.
- the RLL encoding transforms the input signal into a signal satisfying a set of run-length limiting constraints.
- the run-length is defined as the number of consecutive bits of the signal having the same value.
- the set of run-length limiting constraints typically includes: a minimal run-length d+1, a maximal run-length k+1, and/or a maximal transition run r+1.
- the RLL encoder transforms the original signal into an RLL-encoded signal.
- the bit statistics of the RLL-encoded signal depend on the particular RLL encoder used. For example, the conditional bit probability given the bit values of neighboring bits can be determined by analyzing the characteristics of the incoming signal 106 and/or the characteristics of the RLL encoder 100. Alternatively, the (conditional) bit probabilities may be established by statistically analyzing the RLL encoded signal 108 for a representative population of signal bits 106. The (conditional) bit probabilities can be employed during the decoding stage to improve the error-correcting capabilities, as will be explained below.
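- as a sketch of how such statistics might be gathered (an assumption, not the patent's procedure), the conditional bit probabilities can be estimated by counting contexts in a representative RLL-encoded bit stream:

```python
# Illustrative sketch, not from the patent: estimating conditional bit
# probabilities P(b_n = 1 | previous bits) from a representative RLL-encoded
# stream.  The two-bit context length is an assumption for this example.
from collections import Counter


def conditional_bit_probabilities(bits, context_len=2):
    ones = Counter()
    totals = Counter()
    for i in range(context_len, len(bits)):
        context = tuple(bits[i - context_len:i])
        totals[context] += 1
        if bits[i] == 1:
            ones[context] += 1
    # P(bit == 1 | context); contexts that never occur are simply absent.
    return {ctx: ones[ctx] / totals[ctx] for ctx in totals}
```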
- the RLL encoded signal 108 is input to an error-correcting code (ECC) encoder 102.
- the ECC encoder 102 adds redundancy to the signal in the form of parity bits to obtain a parity signal 112.
- the ECC may be any systematic code, which is a code associated with code words having a systematic part and a parity part.
- the systematic part of the codeword corresponds to the input signal 108 of the ECC encoder 102, and the parity part 114 of the codeword is generated by the ECC encoder.
- the complete codeword comprises the concatenation of the systematic part 108 and the parity part 112. This is in contrast with a non-systematic code, wherein the input signal of the ECC encoder is fully replaced by the output of the ECC encoder.
- the ECC is a bit-wise code (i.e., a code with binary codeword symbols).
- a low-density parity check (LDPC) code is an example of such a bitwise code with a relatively high information ratio.
- the parity signal 112 generated by the ECC encoder 102 in dependence on the RLL encoded signal 108, does not necessarily comply with the run-length limiting constraints. Therefore, it is RLL encoded 104 to obtain an RLL encoded parity signal 114.
- the RLL encoder 104 may be a different type of encoder compared to the RLL encoder 100, or it may be the same type of encoder, or it may be the same encoder.
- the type of RLL encoders and the associated added signal redundancies influence the error correcting capabilities of the RLL code. Low error correcting capabilities of the RLL encoding may be compensated by the error-correcting capabilities of the ECC encoding.
- the RLL encoded parity signal 114 is concatenated to the RLL encoded signal 108 by concatenation 116, to obtain the output signal 118.
- the output signal 118 is passed to an optical or magnetic storage medium or to a transmission channel.
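- the encoding chain of Fig. 1 can be summarised as the following function composition; this is only an illustrative sketch in which rll_encode, ecc_parity and rll_encode_parity are hypothetical callables standing in for blocks 100, 102 and 104.

```python
# Illustrative sketch, not from the patent: the encoding pipeline of Fig. 1.
def encode(user_bits, rll_encode, ecc_parity, rll_encode_parity):
    systematic = rll_encode(user_bits)       # block 100: RLL-encoded signal 108
    parity = ecc_parity(systematic)          # block 102: parity signal 112
    rll_parity = rll_encode_parity(parity)   # block 104: RLL-encoded parity 114
    return systematic + rll_parity           # concatenation 116 -> output signal 118
```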
- Fig. 2 shows a decoding system complementing the encoding system of Fig. 1.
- An input signal 206 is obtained from a medium or channel.
- the input signal provides a likelihood of the value of each bit stored on the medium or transmitted over the channel.
- the input signal 206 is split by a signal splitter 218 into a systematic part 220 and a parity part 224.
- the parity part is RLL decoded by an RLL decoder 200 corresponding to RLL encoder 104.
- the RLL decoder 200 is a soft-input-soft-output RLL decoder. This means that both the input signal 224 and the output signal 208 of the RLL decoder 200 comprise bit likelihood values indicating the likelihood of each bit value being equal to zero or equal to one.
- the RLL decoded parity signal 208 is appended to the RLL encoded systematic part 220 by concatenation 210. This results in a codeword signal 212 comprising bit likelihood values, forming an input signal of ECC decoder 202.
- the RLL decoded parity signal 208 and the systematic part 220 are provided as separate inputs to the ECC decoder 202.
- the ECC decoder 202 removes errors that may be present in the codeword signal 212 according to the error-correcting code (ECC) used.
- ECC decoder 202 is a soft decoder, which means that its input signal 212 and/or output signal 214 comprise bit likelihood values.
- the codeword signal 212 comprises at least one codeword.
- a codeword comprises a sequence of symbols, each symbol in the codeword being associated with a codeword symbol position to indicate its position in the sequence of symbols.
- the error correction is performed such, that the resulting ECC decoded signal 214 fulfills the run-length limiting constraints that were originally imposed by the encoding system in RLL encoder 100.
- An example ECC decoder is an iterative LDPC decoder as explained hereinafter.
- the ECC decoded signal 214 is passed to RLL decoder 204 corresponding to RLL encoder 100.
- the resulting output signal 216 can for example be rendered or transmitted or stored on another storage medium.
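- the complementary decoding chain of Fig. 2 can likewise be sketched as a function composition; the decoder blocks are hypothetical callables standing in for 200, 202 and 204, and n_sys denotes the (assumed known) length of the systematic part.

```python
# Illustrative sketch, not from the patent: the decoding pipeline of Fig. 2,
# operating on per-bit likelihood values (soft information).
def decode(llrs, n_sys, soft_rll_decode_parity, ecc_soft_decode, rll_decode):
    systematic, parity = llrs[:n_sys], llrs[n_sys:]   # signal splitter 218
    parity_soft = soft_rll_decode_parity(parity)      # soft-in soft-out RLL decoder 200
    codeword_soft = systematic + parity_soft          # concatenation 210 -> signal 212
    corrected = ecc_soft_decode(codeword_soft)        # ECC decoder 202 -> signal 214
    return rll_decode(corrected[:n_sys])              # RLL decoder 204 -> output 216
```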
- in the following, some embodiments of the ECC decoder 202 are described in greater detail.
- the LDPC code is one of many examples of a code that has code words comprising codeword symbols in a binary Galois field.
- the LDPC code can be a systematic code.
- a parity check matrix defines a number of parity check equations in the binary Galois field that must hold if the codeword is error- free.
- let N be the codeword length and M be the number of parity check equations.
- the parity check matrix H consists of M rows and N columns with elements "0" or "1".
- a '1' (or a nonzero value) in a row of the parity check matrix H thus means that the corresponding codeword symbol position in the matrix-vector multiplication appears in the parity check equation defined by that row of the parity check matrix.
- the set of binary codeword symbol positions appearing in each parity check equation are thus defined by the elements of each row of the parity check matrix.
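- restated in matrix form (standard notation, not a quotation from the patent), a codeword c is error-free exactly when all parity check equations hold:

```latex
% Standard restatement, not quoted from the patent.
H \mathbf{c}^{\mathsf{T}} = \mathbf{0} \pmod{2},
\qquad H \in \{0,1\}^{M \times N},
\qquad \text{i.e.}\quad \sum_{n=1}^{N} H_{mn} c_n \equiv 0 \pmod{2}
\quad \text{for } m = 1,\dots,M .
```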
- the set N(m) of codeword symbol positions that appear in parity check equation m is defined by N(m) = { n : H_mn = 1 }. The number of elements in N(m) is referred to as K_m.
- the set M(n) of indices of parity check equations that check the symbol position n is defined by M(n) = { m : H_mn = 1 }.
- the number of elements in M(n) is referred to as J_n.
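- as an illustration only (not part of the patent), these index sets can be derived from a parity check matrix given as a list of rows, for example:

```python
# Illustrative sketch, not from the patent: deriving N(m), M(n), K_m and J_n
# from a binary parity check matrix H represented as a list of 0/1 rows.
def index_sets(H):
    M, N = len(H), len(H[0])
    N_of_m = {m: [n for n in range(N) if H[m][n] == 1] for m in range(M)}
    M_of_n = {n: [m for m in range(M) if H[m][n] == 1] for n in range(N)}
    K_m = {m: len(N_of_m[m]) for m in range(M)}  # symbols checked by equation m
    J_n = {n: len(M_of_n[n]) for n in range(N)}  # equations that check symbol n
    return N_of_m, M_of_n, K_m, J_n
```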
- the parity check matrix can be associated with a bipartite graph (V,E) called the Tanner graph, an example of which is shown in Fig. 3.
- the set of vertices (V) is the union of the set of symbol-nodes and the set of M parity check-nodes.
- Classical iterations of the LDPC algorithm consist of information sent from symbol-nodes (N) via the edges (E) to the check-nodes (M), and back.
- $L_n$: the x-bit, signed input message into symbol-node n, representing a likelihood of the bit value of the codeword symbol position corresponding to the symbol-node.
- $\lambda^i_{nm}$: the message sent from symbol-node n to check-node m in the i-th iteration: $\lambda^i_{nm} = L_n + \sum_{m' \in M(n) \setminus m} \Lambda^{i-1}_{m'n}$   (2)
- $\Lambda^i_{mn}$: the message sent from check-node m to symbol-node n in the i-th iteration: $\Lambda^i_{mn} = \prod_{n' \in N(m) \setminus n} \operatorname{sign}(\lambda^i_{n'm}) \cdot \min_{n' \in N(m) \setminus n} |\lambda^i_{n'm}|$   (3)
- a straightforward implementation of the UMP-BP algorithm stores the received input message (all $L_n$'s) in the decoder input memory, shown in Fig. 3.
- the set of input messages $\Lambda_{mn}$ to the symbol-nodes constitutes the middle state vector (through symbol-nodes).
- the set of input messages $\lambda_{nm}$ to the check-nodes constitutes the right state vector (through check-nodes).
- all messages $\lambda_{nm}$ are sent from all symbol-nodes to the check-nodes.
- all messages $\Lambda_{mn}$ are sent from all check-nodes to the symbol-nodes.
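- as a rough sketch (not the patent's implementation), one flooding iteration of the min-sum message passing of equations (2) and (3) could be written as follows; all names are assumptions, and check nodes are assumed to have degree of at least two.

```python
# Illustrative sketch, not from the patent: one flooding iteration of min-sum
# message passing.  L maps symbol index -> input LLR; N_of_m and M_of_n are the
# index sets defined above; Lambda_prev maps (m, n) -> previous check-to-symbol
# message (use an empty dict for the first iteration).
import math


def min_sum_iteration(L, N_of_m, M_of_n, Lambda_prev):
    # Symbol-node to check-node messages, cf. equation (2).
    lam = {}
    for n, checks in M_of_n.items():
        for m in checks:
            lam[(n, m)] = L[n] + sum(Lambda_prev.get((mp, n), 0.0)
                                     for mp in checks if mp != m)
    # Check-node to symbol-node messages, cf. equation (3): product of signs
    # and minimum magnitude over "all but one" incoming message.
    Lambda_new = {}
    for m, symbols in N_of_m.items():
        for n in symbols:
            others = [lam[(np, m)] for np in symbols if np != n]
            sign = math.prod(1.0 if x >= 0 else -1.0 for x in others)
            Lambda_new[(m, n)] = sign * min(abs(x) for x in others)
    return lam, Lambda_new
```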
- One of the key elements in LDPC decoding is the iterative nature. In practical decoders, the worst-case number of iterations ranges from 30 to 50, depending on the required performance.
- one way of performing the iteration is by using a technique called "staggered decoding": intermediate results are used within an iteration, which in fact is similar to the step from Jacobi iteration to Gauss-Seidel iteration in numerical mathematics.
- the technique is also known as "turbo decoding LDPC", "shuffled decoding", and "layered decoding".
- hereinafter, this technique is referred to as Gauss-Seidel iterations.
- without this technique, the messages calculated during the previous iteration must be stored until their last access, and consequently new messages must be stored in other locations.
- with Gauss-Seidel iterations, the messages can be overwritten immediately, and this technique therefore leads to half the message memory requirements.
- the second, and most important, benefit of Gauss-Seidel iterations is the factor-of-two reduction in the number of iterations required to meet a certain BER performance.
- the complexity of calculating the check-node update (3) is reduced when its arguments arrive consecutively. For the sign part of (3), the property that the sign of "all but one" can be obtained from a running XOR of all sign bits combined with the individual sign can be used. This amounts to storing the individual signs and maintaining a running XOR value.
- from these, the $K_m$ messages $\lambda_{nm}$ are reconstructed.
- then the new $\Lambda_{mn}$ is known, and the $\lambda_n$'s in the memory can be updated.
- in this way the next check-node uses a newer version of the symbol-node values within an iteration, and Gauss-Seidel iterations are conducted. With these Gauss-Seidel iterations both the memory reduction and the halving of the number of iterations are established.
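- the "all but one" sign computation can be sketched as follows (an illustration, not the patent's circuit): a running XOR of the sign bits of all incoming messages is kept, and excluding one message amounts to XOR-ing its own sign bit back out.

```python
# Illustrative sketch, not from the patent: the running-XOR trick for the sign
# part of the check-node update.
def all_but_one_signs(values):
    """Return, for each value, the sign of the product of all other values."""
    total_sign_bit = 0
    for v in values:
        total_sign_bit ^= 1 if v < 0 else 0          # running XOR of sign bits
    # Excluding v flips the total exactly when v itself is negative.
    return [-1.0 if (total_sign_bit ^ (1 if v < 0 else 0)) else 1.0
            for v in values]
```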
- the parity check matrix corresponding to the shown Tanner graph is:
- parity check matrices used in practice have many more rows and columns than the example shown here. However, this matrix is sufficiently complex to show the principal concepts of the invention.
- parity check matrices corresponding to any linear error-correcting code (ECC) can be used, or a non-linear ECC could be used.
- the corresponding LDPC decoder operates as set forth.
- Fig. 4 shows a Tanner graph corresponding to the same parity check matrix H.
- pattern constraint nodes can be found, labeled $k_1$ to $k_4$ in the figure.
- the pattern constraint can be tabulated as shown in the following table, wherein + denotes a bit value of 0 and - denotes a bit value of 1.
- if bit value 1 is 0 and bit value 2 is 1, then bit value 3 has to be 1, since combination 011 is allowed (fourth row of the table above) while combination 010 is not allowed (third row of the table above). If the two bit values are known only with a limited likelihood, then the third bit can also be determined with at most that limited likelihood.
- the following table provides the value of a third bit that can be thus determined.
- the table is split up into three parts of three columns each.
- the first two columns of each part are the input values for two particular bits, and the third column of each part is the value of the third bit as determined from the table above.
- the value of 0 in the determined bit column indicates that the third bit value cannot be determined from the two given bit values.
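- the table logic can be sketched as follows (an illustration under the assumption, consistent with the example above, that the forbidden three-bit patterns are the isolated-bit patterns 010 and 101):

```python
# Illustrative sketch, not from the patent: determining a missing bit of a
# three-bit pattern constraint.  The forbidden patterns 010 and 101 are an
# assumption consistent with the example given in the text.
ALLOWED_TRIPLES = {(0, 0, 0), (0, 0, 1), (0, 1, 1),
                   (1, 0, 0), (1, 1, 0), (1, 1, 1)}


def determined_bit(known, position):
    """Given the two known bits of a triple (in codeword order) and the index
    (0, 1 or 2) of the unknown bit, return its forced value, or None if both
    values still yield an allowed pattern (the '0' entries of the table)."""
    candidates = set()
    for value in (0, 1):
        triple = list(known)
        triple.insert(position, value)
        if tuple(triple) in ALLOWED_TRIPLES:
            candidates.add(value)
    return candidates.pop() if len(candidates) == 1 else None


if __name__ == "__main__":
    print(determined_bit((0, 1), 2))  # 1: 011 allowed, 010 forbidden
    print(determined_bit((0, 1), 1))  # None: 001 and 011 are both allowed
```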
- the algorithm summarized in Equations (2)-(5) can be modified to incorporate the pattern constraint nodes as follows.
- the set N(k) of codeword symbol positions appearing in pattern constraint k is defined by
- N(k) = { n : codeword symbol position n appears in pattern constraint k }.
- the set K(n) of indices of pattern constraints in which codeword symbol position n appears is defined by
- K(n) = { k : codeword symbol position n appears in pattern constraint k }.
- $\lambda^i_{nm}$: the message (representing a codeword symbol likelihood measure) sent from symbol-node n to check-node m in the i-th iteration: $\lambda^i_{nm} = L_n + \sum_{m' \in M(n) \setminus m} \Lambda^{i-1}_{m'n} + \sum_{k \in K(n)} \Theta^{i-1}_{kn}$
- Fig. 6 is an illustration of the nodes involved in the computation of $\lambda^i_{32}$ for the example parity check matrix H defined in (6).
- the summation is over the check-nodes $m_1$ and $m_4$ (63) and over the pattern constraint nodes $k_1$, $k_2$, and $k_3$ (64).
- the message is sent to check-node $m_2$ (62).
- $\delta^i_{nk}$: the message (representing another codeword symbol likelihood measure) sent from symbol-node n to pattern constraint-node k in the i-th iteration: $\delta^i_{nk} = L_n + \sum_{m \in M(n)} \Lambda^{i-1}_{mn} + \sum_{k' \in K(n) \setminus k} \Theta^{i-1}_{k'n}$
- Fig. 8 is an example illustration of the nodes involved in the computation of $\delta^i_{32}$ for the example parity check matrix H defined in (6).
- the summation is over the check-nodes $m_1$, $m_2$, and $m_4$ (84) and over the pattern constraint nodes $k_1$ and $k_3$ (83).
- the message is sent to pattern constraint node $k_2$ (82).
- $\Lambda^i_{mn}$: the message (representing another codeword symbol likelihood measure) sent from check-node m to symbol-node n in the i-th iteration remains the same as in (3):
- $\Lambda^i_{mn} = \prod_{n' \in N(m) \setminus n} \operatorname{sign}(\lambda^i_{n'm}) \cdot \min_{n' \in N(m) \setminus n} |\lambda^i_{n'm}|$   (9)
- Fig. 5 is an example illustration of the nodes involved in the computation of $\Lambda^i_{23}$ for the example parity check matrix H defined in (6).
- & kn The message (representing another codeword symbol likelihood measure) sent from the pattern constraint-node k to symbol-node n in the z th iteration.
- ⁇ H sign (sign ( ⁇ ; +2;i ) - sign ( ⁇ ; +u )) * min ⁇
- Fig. 7 is an example illustration of the nodes involved in the computation of $\Theta^i_{23}$ for the example parity check matrix H defined in (6).
- the symbol nodes $n_2$ and $n_4$ (73) are used to compute the message.
- the message is sent to symbol node $n_3$ (72).
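- a pattern-constraint-to-symbol message in the spirit of equation (10) can be sketched as below (an illustration, not the patent's exact formula); the sign convention that a positive value means bit 0 is an assumption, and determined_bit is the helper sketched earlier.

```python
# Illustrative sketch, not from the patent's exact formula: a message from a
# three-bit pattern constraint node to one of its symbol nodes.  lam_others
# holds the messages of the two other symbols of the triple, in codeword order;
# position is the index (0, 1 or 2) of the target symbol within the triple.
def pattern_constraint_message(lam_others, position, determined_bit):
    hard = tuple(0 if x >= 0 else 1 for x in lam_others)  # assumed: >= 0 means bit 0
    forced = determined_bit(hard, position)
    if forced is None:
        return 0.0                                  # the constraint says nothing
    sign = 1.0 if forced == 0 else -1.0             # map the forced bit back to a sign
    return sign * min(abs(x) for x in lam_others)   # reliability of the weaker neighbour
```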
- $\lambda_n$: the decoder output message representing a codeword symbol likelihood measure used for providing an output signal. Unlike the $\lambda_{nm}$'s, the decoder output message $\lambda_n$ uses all information available in a symbol-node n, and is only necessary in the last iteration I: $\lambda_n = L_n + \sum_{m \in M(n)} \Lambda^I_{mn} + \sum_{k \in K(n)} \Theta^I_{kn}$
- conditional probabilities are used to improve the accuracy of the messages $\Theta_{kn}$.
- the RLL encoder may have a certain redundancy associated with it.
- the conditional probabilities may be established by feeding a representative population of data bits to the encoding system of Fig. 1, and analyzing the output signal 118, without introducing any errors in the output signal. Another way to obtain the conditional probabilities is by analyzing the characteristics of the input signal and of the encoder.
- conditional log-likelihood ratios (LLRs) can be computed by combining (16) and (15). These conditional LLRs are used as the messages from the pattern constraint nodes to the symbol nodes ($\Theta_{kn}$) instead of (10).
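- converting such a conditional probability into a log-likelihood ratio could look like this sketch (an illustration; the sign convention that a positive LLR means bit 0 is an assumption):

```python
# Illustrative sketch, not from the patent: turning P(b = 1 | context) into an
# LLR-style value that could replace the pattern-constraint message.
import math


def conditional_llr(p_one, eps=1e-12):
    p_one = min(max(p_one, eps), 1.0 - eps)   # clamp away from 0 and 1
    return math.log((1.0 - p_one) / p_one)    # positive when bit 0 is more likely
```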
- the method equally applies to decoding systems using a different decoding algorithm.
- no real "messages” are sent, but rather values are iteratively computed according to the formulas set forth.
- the values of $\lambda^i_{nm}$ and $\delta^i_{nk}$, which represent codeword symbol likelihood measures, may be maintained and updated in each iteration, using the values $\Lambda^i_{mn}$ and $\Theta^i_{kn}$ as intermediate results. Since $\Lambda^i_{mn}$ and $\Theta^i_{kn}$ are formulated in dependence on the corresponding parity check equations and the pattern constraints, respectively, the codeword symbol likelihood measures are updated based on the pattern constraints in which the codeword symbol position appears.
- An embodiment of the invention comprises an integrated circuit including a system clock for timing the processing, a memory for storing intermediate results, an output signal conditioner, and an input signal conditioner.
- the integrated circuit further comprises RLL decoding means (200, 204) and error-correcting decoding means (202) as set forth.
- the invention also extends to computer programs, particularly computer programs on or in a carrier, adapted for putting the invention into practice.
- the program may be in the form of source code, object code, a code intermediate source and object code such as partially compiled form, or in any other form suitable for use in the implementation of the method according to the invention.
- the carrier may be any entity or device capable of carrying the program.
- the carrier may include a storage medium, such as a ROM, for example a CD ROM or a semiconductor ROM, or a magnetic recording medium, for example a floppy disc or hard disk.
- the carrier may be a transmissible carrier such as an electrical or optical signal, which may be conveyed via electrical or optical cable or by radio or other means.
- the carrier may be constituted by such cable or other device or means.
- the carrier may be an integrated circuit in which the program is embedded, the integrated circuit being adapted for performing, or for use in the performance of, the relevant method.
Abstract
The invention relates to an iterative soft decoding method for decoding a code comprising a plurality of binary codeword symbols, each binary codeword symbol having a predetermined binary codeword symbol position (52), the code being characterized by a predetermined number of parity check equations (51) in a binary Galois field, a respective set of binary codeword symbol positions (52, 53) appearing in each parity check equation (51), and by a predetermined number of pattern constraints (71), a respective set of binary codeword symbol positions (72, 73) appearing in each pattern constraint (71). The method comprises iteratively updating the codeword symbol likelihood measure based on at least one of the parity check equations (51) in which the at least one codeword symbol position (52) appears, and also based on at least one of the pattern constraints (71) in which the at least one codeword symbol position (72) appears.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP06112428.5 | 2006-04-10 | | |
EP06112428 | | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2007116339A1 (fr) | 2007-10-18 |
Family
ID=38421471
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2007/051150 WO2007116339A1 (fr) | 2006-04-10 | 2007-03-30 | Procede de decodage logiciel iteratif de mots de code contraints |
Country Status (2)
Country | Link |
---|---|
TW (1) | TW200814544A (fr) |
WO (1) | WO2007116339A1 (fr) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2455496A (en) * | 2007-10-31 | 2009-06-17 | Hewlett Packard Development Co | Error Detection involving identification of patterns of errored sum nodes in a Tanner graph |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8768990B2 (en) | 2011-11-11 | 2014-07-01 | Lsi Corporation | Reconfigurable cyclic shifter arrangement |
RU2012146685A (ru) | 2012-11-01 | 2014-05-10 | ЭлЭсАй Корпорейшн | База данных наборов-ловушек для декодера на основе разреженного контроля четности |
Non-Patent Citations (5)
Title |
---|
C.B. Schlegel and L.C. Pérez: "Trellis and Turbo Coding", John Wiley and Sons, 2004, XP002449206 |
Dielissen J. et al.: "Low cost LDPC decoder for DVB-S2", Design, Automation and Test in Europe (DATE '06), Munich, Germany, 6-10 March 2006, IEEE, pages 1-6, XP010929526, ISBN 3-9810801-1-4 |
Fan J.L. et al.: "Constrained coding techniques for soft iterative decoders", 1999 IEEE Global Telecommunications Conference (GLOBECOM '99), Rio de Janeiro, Brazil, 5-9 December 1999, vol. 1B, pages 723-727, XP000956712, ISBN 0-7803-5797-3 |
Kschischang F.R. et al.: "Factor graphs and the sum-product algorithm", IEEE Transactions on Information Theory, vol. 47, no. 2, February 2001, pages 498-519, XP001009659, ISSN 0018-9448 |
Zongwang Li and V.K. Vijaya Kumar: "Low-Density Parity-Check Codes With Run Length Limited (RLL) Constraints", IEEE Transactions on Magnetics, vol. 42, no. 2, February 2006, pages 344-349, XP002448578 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2455496A (en) * | 2007-10-31 | 2009-06-17 | Hewlett Packard Development Co | Error Detection involving identification of patterns of errored sum nodes in a Tanner graph |
GB2455496B (en) * | 2007-10-31 | 2012-05-30 | Hewlett Packard Development Co | Error detection method and apparatus |
US8205134B2 (en) | 2007-10-31 | 2012-06-19 | Hewlett-Packard Development Company, L.P. | Error detection method and apparatus |
Also Published As
Publication number | Publication date |
---|---|
TW200814544A (en) | 2008-03-16 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 07735340 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 07735340 Country of ref document: EP Kind code of ref document: A1 |