WO2018121887A1 - Apparatus and method for shaping the probability distribution of a data sequence
- Publication number: WO2018121887A1
- Application number: PCT/EP2017/050023
- Authority: WIPO (PCT)
Classifications
- H04L27/3411: Amplitude- and phase-modulated carrier systems; modifications of the signal space to increase the efficiency of transmission, e.g. reducing the peak to average power ratio or the mean power of the constellation; arrangements for increasing the shape gain of a signal set
- H04L1/0003: Systems modifying transmission characteristics according to link quality, e.g. power backoff, by adapting the transmission rate by switching between different modulation schemes
- H04L27/36: Modulator circuits; Transmitter circuits
Definitions
- A more detailed example of a probabilistic shaper is shown in Figure 5. This implementation may be termed the "separate shaping encoder" implementation.
- the separate shaping encoder splits the input sequence into multiple shorter sequences.
- an input data sequence of k bits 501 is split into two separate sequences 502, 503, each of length k/2. (The splitter itself is not shown in Figure 5.)
- the probabilistic shaper includes two building block shaping encoders 504, 505, each of which is configured to receive one of the separate data sequences.
- the shaping encoders are configured to perform distribution matching on each sequence in parallel to generate respective output sequences 508, 509.
- the shaping encoders are both configured to map their respective input sequences to output sequences having a probability distribution P_A.
- the shaping encoders could also map to output sequences having different probability distributions.
- the multiplexer 510 is then configured to concatenate the shaping encoder output sequences to form a single output sequence 511.
- the combiner is thus able to be implemented by a multiplexer 510, since the single output sequence is formed by concatenating the output sequences from the shaping encoders. A minimal sketch of this split-and-concatenate arrangement is given below.
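The following sketch illustrates the idea under stated assumptions. The 2-bit-to-3-bit lookup-table matcher is an illustrative stand-in for a real distribution matcher (such as a CCDM), chosen only because it is one-to-one and biases its output towards '0'; the table, the input length and the seed are assumptions, not values from the patent.

```python
import random

# Toy one-to-one block distribution matcher: 2 uniform input bits are
# mapped to 3 output bits that are biased towards '0' (P(1) = 0.25).
TOY_DM = {(0, 0): (0, 0, 0), (0, 1): (0, 0, 1),
          (1, 0): (0, 1, 0), (1, 1): (1, 0, 0)}

def toy_shaping_encoder(bits):
    """Shape a uniform bit sequence block by block."""
    out = []
    for i in range(0, len(bits), 2):
        out.extend(TOY_DM[tuple(bits[i:i + 2])])
    return out

def split_shaper(bits):
    """Split the input into two halves, shape each half independently
    (in parallel in hardware) and concatenate the two outputs."""
    half = len(bits) // 2
    return toy_shaping_encoder(bits[:half]) + toy_shaping_encoder(bits[half:])

random.seed(0)
u = [random.randint(0, 1) for _ in range(16)]   # uniform input bits
y = split_shaper(u)
print("input  P(1):", sum(u) / len(u))
print("output P(1):", sum(y) / len(y))          # biased towards 0
```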
- the shaping encoders may map to an intermediate alphabet that is different from the target alphabet.
- the combiner may be configured to jointly map the output sequences of the shaping encoders to the target alphabet to form the single output sequence. This is described in more detail below.
- the shaping encoders may map to a different alphabet from the combiner.
- the shaping encoders may map to a smaller alphabet than the combiner. This results in multiple shaping encoders, all operating on shorter sequences in parallel with each other.
- An example of a probabilistic shaper in which the shaping encoders and the combiner use different alphabets is shown in Figure 6.
- the building block shaping encoders 604, 605 each perform distribution matching independently, on smaller alphabets than the target alphabet X.
- Each shaping encoder uses its own respective alphabet, so shaping encoder 604 uses alphabet A to generate its output sequence 608 whereas shaping encoder 605 uses alphabet B to generate its output sequence 609.
- the symbols in the shaping encoder output sequences 608, 609 are then jointly mapped to the symbols of the target alphabet by mapper 610.
- the general principle underlying the probabilistic shaper shown in Figure 6 is as follows.
- the probabilistic shaper may have k parallel building block shaping encoders. Together these shaping encoders can be considered as implementing a "split shaping encoder".
- the operation of the split shaping encoder will now be described with reference to an example in which the shaping system is configured as a CCDM (Constant Composition Distribution Matcher).
- the output alphabet X of the split shaping encoder has a size of w^k, where w is the number of "levels" in the shaping encoder alphabets (which are the CCDM alphabets in this example).
- each symbol in the target alphabet may be capable of being represented by one or more sub-symbols, and those sub-symbols may in turn form the alphabets of the shaping encoders.
- each symbol in the target alphabet has a label L_i. Each label may have k bits. These can be viewed as k "bit levels". The number of bit levels may be the same as the number of shaping encoders.
- Each shaping encoder can, in effect, be considered as being responsible for mapping the input data sequence to one of the "bit levels".
- the combiner correspondingly treats the output data sequences from the shaping encoders as each providing a respective bit-level for the symbol labels. (It is also possible for each shaping encoder to handle more than one bit level. This is described in more detail below).
- the probability of a label is then the product of the probabilities of the corresponding bit levels: P(L = b_1b_2...b_k) = P_B1(b_1) · P_B2(b_2) · ... · P_Bk(b_k). A small numerical sketch follows.
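As a minimal numerical illustration of this product rule (the three bit-level marginals below are assumed values, not taken from the patent):

```python
from itertools import product

P1 = [0.2, 0.3, 0.45]          # assumed marginals P(B_j = 1) for B1..B3

def label_prob(label, p1=P1):
    """P(L) = prod_j P(B_j = b_j) for independent bit levels."""
    prob = 1.0
    for b, p in zip(label, p1):
        prob *= p if b else 1.0 - p
    return prob

for lab in product((0, 1), repeat=3):
    print("".join(map(str, lab)), round(label_prob(lab), 4))
print("sum:", sum(label_prob(l) for l in product((0, 1), repeat=3)))  # ~1.0
```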
- the input parameters to the shaping system are the target output distribution P_A and the length n_c of the output data sequence.
- the output distribution P_A is emulated by outputting sequences of symbols of a certain type (a fixed composition), i.e. sequences in which n_a is the number of occurrences of the symbol a in the output sequence and n_a/n_c approximates P_A(a).
- the labels can then be determined by selecting q^(k_c) sequences, each of which has this empirical distribution, and defining a one-to-one mapping between those labels and the symbols of the target alphabet. (A sketch of how such a composition can be chosen from P_A and n_c is given below.)
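A minimal sketch of choosing such a composition; the target distribution and block length are assumed illustrative values, not values from the patent:

```python
P_A = {1: 0.40, 3: 0.30, 5: 0.20, 7: 0.10}   # assumed target distribution
n_c = 20                                      # assumed output block length

def composition(p, n):
    """Round n * p(a) to integer counts n_a that sum exactly to n."""
    n_a = {a: round(n * q) for a, q in p.items()}
    n_a[max(p, key=p.get)] += n - sum(n_a.values())  # absorb rounding drift
    return n_a

n_a = composition(P_A, n_c)
print(n_a)                                    # {1: 8, 3: 6, 5: 4, 7: 2}
print({a: c / n_c for a, c in n_a.items()})   # empirical distribution
```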
- the distribution matching is then performed for each bit level of the binary labels, where the nth "bit-level" refers to the nth bit of a binary label.
- the binary output sequences are mapped back to symbols. In this way, the input bits can be processed independently and in parallel, such that a higher throughput can be obtained.
- the two shaping encoders 701, 702 in Figure 7 are bit-level (or binary) shaping encoders.
- the apparatus in Figure 7 represents the transmitter-side implementation; corresponding bit-level (or binary) shaping decoders would form the building blocks of the receiver-side implementation. These building blocks could be replaced by any other binary source encoder or decoder (e.g. one that works on a block-to-block basis) which outputs biased bit sequences.
- the target alphabet in the example of Figure 7 is a four-symbol alphabet, e.g. the amplitude set {1, 3, 5, 7}.
- the apparatus in Figure 7 is configured to perform distribution matching on two bit-levels, B_1 and B_2, of binary labels assigned to the symbols.
- the symbols in the target alphabet can be labelled, for example, as 1↦00, 3↦01, 5↦10 and 7↦11.
- each symbol in the target alphabet is thus represented by two sub-symbols, 0 and 1, which form a unique label.
- the shaping encoders each use the binary alphabet {0, 1}, i.e. alphabets that are formed from the sub-symbols.
- the next stage is to find bit-level distributions P_B1 and P_B2 such that a "good" approximation of the target distribution P_A is obtained when the bit levels are combined.
- the binary shaping encoders 701, 702 are each configured to map their respective input bits to an output bit sequence that shows the appropriate proportions of '1's and '0's according to the marginal probabilities of B_1 and B_2 respectively.
- the mapper 703 treats the output from binary shaping encoder 701 as providing bit B_1 and the output from binary shaping encoder 702 as providing bit B_2. It therefore takes one bit from each sequence in turn to jointly map bits B_1 and B_2 to the symbols of the target alphabet, obtaining a single output sequence of symbols that have the target probability distribution P_A.
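A minimal sketch of this combining step, assuming the four-symbol alphabet {1, 3, 5, 7}, the Natural Binary labelling given above, and two short example bit streams standing in for the outputs of the binary shaping encoders:

```python
# Assumed Natural Binary labelling: (B1, B2) -> amplitude
LABEL_TO_SYMBOL = {(0, 0): 1, (0, 1): 3, (1, 0): 5, (1, 1): 7}

b1 = [0, 0, 0, 1, 0, 0, 1, 0]   # biased output of shaping encoder 701
b2 = [0, 1, 0, 0, 1, 0, 0, 1]   # biased output of shaping encoder 702

# Mapper 703: take one bit from each sequence and map the pair jointly.
symbols = [LABEL_TO_SYMBOL[pair] for pair in zip(b1, b2)]
print(symbols)                  # [1, 3, 1, 5, 3, 1, 7, 3]
```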
- the shaping apparatus in Figure 7 has an input length equal to the sum of the input lengths of the two binary shaping encoders.
- the apparatus in Figure 7 is configured to perform binary distribution matching inside a non-binary distribution matcher, for reasons of reduced complexity and improved parallelization. These techniques are applicable to a PSCM scheme and can be incorporated before the channel code in a PSCM transmit chain (with the transmit chain being mirrored in the receive chain).
- Figure 8 shows an example of part of a PSCM transmit chain that incorporates a split shaping arrangement 801 such as that described above. After the symbols are generated, the rest of the chain can function in the same way as the PAS system shown in Figure 1.
- the mapping b_A (from symbols to bits, just before the FEC encoder) is suitably a Gray binary code (GBC) mapping, which is considered optimal for BICM.
- the divider may optionally include a demapper 304.
- the demapper is configured to receive the input data sequence, which is formed from the target alphabet. It is also configured to demap that input data sequence to a demapped data sequence that is formed from the appropriate sub-symbols.
- the divider is configured to arrange the output data sequences into a plurality of separate data sequences so that data corresponding to the same sub-symbol is directed to the same shaping decoder and data corresponding to different sub-symbols is directed to different shaping decoders. Essentially, this just reverses the process performed in the transmitter.
- a performance close to the Additive White Gaussian Noise (AWGN) channel capacity can be achieved if the signal amplitudes approximate a Maxwell-Boltzmann distribution, i.e. a target alphabet where the probability of each amplitude a is P_A(a) ∝ e^(−ν·a²), where ν is a parameter depending on the Signal-to-Noise Ratio (SNR).
- Tables 1 to 3 show results obtained by labelling the target symbols using Natural Binary Coding, i.e., 000, 001, 010, 011, 100, ..., 111.
- the results show correlation coefficients between the bits of the Maxwell-Boltzmann distribution and 16-ASK modulation amplitudes, i.e. 8 amplitudes (represented by B1B2B3) on the positive part of the x-axis and 8 symmetric amplitudes on the negative part of the x-axis.
- the results show that the bits of different bit-levels are nearly independent of each other, and consequently confirm that bit-level shaping should perform well.
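The following sketch reproduces the kind of check behind Tables 1 to 3 under stated assumptions: the 8 positive 16-ASK amplitudes follow a Maxwell-Boltzmann distribution with an assumed parameter ν = 0.01, and the bits are the Natural Binary labels; the tables' actual values are not reproduced here.

```python
import math

nu = 0.01                                   # assumed SNR-dependent parameter
amps = [1, 3, 5, 7, 9, 11, 13, 15]          # positive 16-ASK amplitudes
w = [math.exp(-nu * a * a) for a in amps]
P = [x / sum(w) for x in w]                 # Maxwell-Boltzmann P_A
labels = [tuple(int(b) for b in format(i, "03b")) for i in range(8)]

def corr(i, j):
    """Correlation coefficient between bit levels i and j under P."""
    Ei = sum(p * l[i] for p, l in zip(P, labels))
    Ej = sum(p * l[j] for p, l in zip(P, labels))
    Eij = sum(p * l[i] * l[j] for p, l in zip(P, labels))
    return (Eij - Ei * Ej) / math.sqrt(Ei * (1 - Ei) * Ej * (1 - Ej))

for i, j in [(0, 1), (0, 2), (1, 2)]:
    print(f"corr(B{i + 1}, B{j + 1}) = {corr(i, j):+.4f}")
```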
- the building block shaping encoders can also work on higher-order alphabets. This includes alphabets that are not necessarily binary and/or alphabets in which each symbol comprises more than one bit (so that each shaping encoder maps to more than one output bit). Using a higher-order shaping encoder can increase the accuracy of the approximation of the desired target distribution P_A. This is especially important when the correlation or dependency of some bit-levels is not weak.
- individual shaping encoders can be configured to consider two or more bit-levels jointly to form the output sub-symbols.
- the mapping of the sub-symbols to the target alphabet need not have any special structure.
- Each target symbol could be randomly assigned its unique label of sub- symbols.
- a preferred option is to assign the labels to achieve a probabilistically ordered mapping.
- target symbols that are associated with a relatively high transmit power are generally mapped to sub-symbol combinations that have a lower probability than the sub-symbol combinations to which relatively low transmit power symbols are mapped. It is also possible for symbols that neighbour each other in magnitude to be mapped to sub-symbol combinations that have the same probability.
- the probability distribution of transmit symbols in a PSCM system is preferably configured to ensure that the general trend of the output symbols is for their probability to decrease as their magnitude increases. This can be achieved as follows: First a series of assumptions is made.
- the probabilistic shaper comprises k parallel building block shaping encoders.
- the target alphabet X has a size of 2^k and each symbol in the alphabet has a label L_i of k bits.
- the binary target distribution that applies to each of the k building block shaping encoders can be designated p_j = P_Bj(1), with P_Bj(0) = 1 − p_j, for j = 1, ..., k.
- the probability of a label is then the product of the probabilities of the corresponding bit levels. For example, P(L = 10...0) = p_1 · (1 − p_2) · ... · (1 − p_k).
- the labels can then be sorted according to their probability, P(L_π(1)) ≥ P(L_π(2)) ≥ ... ≥ P(L_π(2^k)), where π(i) is the label index after the sorting operation.
- the result is target symbols that have the desired probability distribution.
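A minimal sketch of this construction, with k = 4 assumed bit-level marginals (illustrative values only) and 16 amplitudes of increasing transmit power:

```python
from itertools import product

p1 = [0.1, 0.2, 0.3, 0.4]                 # assumed P(B_j = 1), j = 1..4

def label_prob(label):
    prob = 1.0
    for b, p in zip(label, p1):
        prob *= p if b else 1.0 - p
    return prob

# Sort the 2^4 labels by decreasing probability and pair them with
# amplitudes of increasing transmit power: the most probable label is
# assigned to the lowest-power symbol (probabilistically ordered mapping).
labels = sorted(product((0, 1), repeat=4), key=label_prob, reverse=True)
for a, lab in zip(range(1, 32, 2), labels):
    print(f"{a:2d} <-> {''.join(map(str, lab))}  p = {label_prob(lab):.4f}")
```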
- the shaping encoders may be configured to implement a mapping that uses a Natural Code.
- a probabilistically ordered Natural Code can achieve maximum redundancy in terms of unequal distribution from symbol level to bit levels. It can also be shown that under some constraints on the distribution applied by each of the k building block shaping encoders, a Natural Code mapping is a mapping which results in the target symbols having a decreasing probability distribution. This is demonstrated below using a practical, worked example. In this example it is assumed that the probabilistic shaper has 4 parallel building block binary shaping encoders.
- the target alphabet has a size of 16 and each symbol in the alphabet has a 4-bit label.
- the distributions for each of the 4 binary shaping encoders can be designated p_1, p_2, p_3 and p_4, where p_j = P_Bj(1).
- these distributions are subject to constraints under which the Natural Code mapping results in the target symbols having a decreasing probability distribution.
- This general theorem can be extended to cover any Natural Code combiner, i.e. it is possible to derive a necessary condition for a Natural Code to be a probabilistically ordered code.
- Non-probabilistically ordered mapping may instead result in a probability distribution which is non-decreasing with increasing amplitude. This forfeits the achievable shaping gain for the AWGN channel.
- An example of a non-probabilistically ordered mapping is Gray binary code (GBC) mapping.
- GBC Gray binary code
- Figure 9 illustrates the resulting symbol probability distribution using Natural Binary Code mapping and Gray Binary Code mapping.
- the simulation results for Gray Binary Code are shown in plot 901.
- the results show that this mapping is not probabilistically ordered. It consequently results in poor performance.
- the simulation results for Natural Binary Code mapping are shown in plot 902. This mapping meets the constraints set out above and thus achieves a probabilistically ordered mapping.
- the shaping encoders may be configured to implement a mapping that uses any Natural Code.
- a Natural Code is an extension of Natural Binary Code, and extends to any N-ary alphabet.
- the Natural Code defines an order on the sequences, e.g. 00, 01, 10, 11 for a binary code. Specifically, a Natural Code orders the sequences such that the rightmost symbol is the least important one, the second rightmost symbol is the second least important, and so on, with the left-most symbol being the most important.
- the 10-ary natural code is decimal numbering.
- it is also possible to define a Natural Code on sequences which have mixed alphabets.
- for example, an alphabet may have a first symbol that is binary and a second that is 3-ary.
- the Natural Code order would then be: 00, 01, 02, 10, 11, 12.
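A short sketch: itertools.product enumerates tuples in exactly this Natural Code order (rightmost symbol varying fastest), shown here for the assumed mixed alphabet with a binary first symbol and a 3-ary second symbol:

```python
from itertools import product

# Natural Code order for a mixed alphabet: binary first symbol (most
# significant), 3-ary second symbol (least significant).
order = ["".join(map(str, seq)) for seq in product(range(2), range(3))]
print(order)   # ['00', '01', '02', '10', '11', '12']
```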
- the input data sequence is divided into a number of separate data sequences that is equal to the number of shaping encoders.
- Each separate data sequence is then processed by a shaping encoder before being mapped to the target alphabet.
- the probabilistic shaper is still configured to divide the input data sequence into a number of separate data sequences that is equal to the number of the plurality of sub-symbols that represent each symbol in the target alphabet.
- One or more of those separate data sequences is not input into a shaping encoder, however, but is instead input directly into the combiner. The reason for this is explained below.
- the bit-level probabilities for the shaping encoders are preferably chosen by optimization.
- a PSCM system, such as that shown in Figure 8, has the following free parameters available to it: (i) the probability distribution on each bit-level and (ii) the constellation scaling Δ. To determine these, an optimization method can be employed to obtain a "good" distribution P_X that yields rates close to the AWGN capacity.
- the distribution P_X can be restricted to a Maxwell-Boltzmann distribution, i.e. P_X(x) ∝ e^(−ν·x²); then ν and P_X can be obtained by maximizing the achievable rate subject to the average power constraint.
- P_X induces a distribution on the amplitudes, P_A, and therefore a joint distribution on the bit labels (via a bit-to-symbol mapping).
- the aim is to find a product of bit-level distributions P_B1 × P_B2 × ... × P_Bk that is close to this joint distribution. A divergence D, such as the Kullback-Leibler divergence, can be used as a measure of "closeness".
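A minimal sketch of this closeness measure under stated assumptions: a Maxwell-Boltzmann target on the amplitudes {1, 3, 5, 7} with an assumed ν, Natural Binary labels, the product of the per-bit marginals as the approximating product distribution, and D(P_L ∥ Q) as the divergence:

```python
import math

nu, amps = 0.02, [1, 3, 5, 7]               # assumed parameter and alphabet
w = [math.exp(-nu * a * a) for a in amps]
P_L = [x / sum(w) for x in w]               # label distribution (via NBC)
labels = [(0, 0), (0, 1), (1, 0), (1, 1)]   # NBC labels of 1, 3, 5, 7

# Per-bit marginals P(B_j = 1) under P_L; their product is one natural
# candidate for the approximating product distribution Q.
m = [sum(p for p, l in zip(P_L, labels) if l[j]) for j in range(2)]

def q(label):
    prob = 1.0
    for b, p in zip(label, m):
        prob *= p if b else 1.0 - p
    return prob

D = sum(p * math.log2(p / q(l)) for p, l in zip(P_L, labels))
print("marginals:", [round(x, 3) for x in m])
print("D(P_L || Q) =", round(D, 5), "bits")
```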
- the probabilistic deshaper on the receiver-side is configured to reverse the processing that has been performed at the transmitter-side.
- the shaping decoders are therefore preferably configured to invert the probabilistically ordered Natural Code mapping when demapping their respective input sequences, in order to obtain the original data sequence.
- the probabilistic deshaper can be configured to mirror the transmitter-side by skipping shaping decoding for any bit levels that were not shaped at the transmitter.
- the desired distribution of the output symbols is preferably one that is anticipated to offer improved transmission performance compared with the distribution of the original data sequence.
- the probabilistic shaper described herein is able to achieve improved transmission performance.
- the improvements could arise in any area, including reduced error rates, reduced transmit power, reduced Peak-to-Averaged Power ratio (PAPR) etc.
Abstract
A probabilistic shaper comprises an input configured to receive an input data sequence, which has a first probability distribution. The probabilistic shaper also comprises a splitter configured to divide the input data sequence into a plurality of separate data sequences. It comprises a plurality of shaping encoders. Each shaping encoder is configured to receive a respective one of the separate data sequences and map it to an output data sequence having a different probability distribution from the first probability distribution. Each shaping encoder is configured to perform its mapping independently of the other shaping encoders. It also comprises a combiner configured to receive the output data sequences from the plurality of shaping encoders and combine them to form a single data sequence having a target probability distribution. The probabilistic shaper is able to increase throughput by dividing up the incoming data sequence into shorter sequences that can be processed in parallel.
Description
Apparatus and Method for Shaping the Probability Distribution of a Data Sequence
This invention relates to apparatus and methods for shaping the probability distribution of a data sequence.
Probabilistically shaped coded modulation (PSCM) is a transmission scheme that can support bandwidth efficient transmission with near Shannon capacity performance. The scheme uses a shaping encoder (ShEnc) and a channel encoder (ChEnc) at the transmitter side and a channel decoder (ChDec) followed by a shaping decoder (ShDec) at the receiver side. This brings a number of advantages. First, the shaping encoder transforms uniformly distributed bits of the input message to a non-uniform distribution so that the channel input symbols approach the distribution that is able to achieve capacity for the channel. Second, by changing the parameters of the shaping encoder, the transmitter can adjust the rate of the transmission, without changing the parameters of the forward error correction (FEC) code. Both of these aspects are different from conventional coded modulation schemes (such as Bit-Interleaved Coded Modulation (BICM)), where there is no distribution matching to optimize the distribution of the channel input symbols and where rate matching is achieved by adjusting the parameters of the FEC code. An example of PSCM is the probabilistic amplitude shaping (PAS) scheme proposed in "Bandwidth Efficient and Rate-Matched Low-Density Parity-Check Coded Modulation" by G. Böcherer et al. (IEEE Trans. Commun., vol. 63, no. 12, pp. 4651-4665, 2015). A block diagram of a PAS system is shown in Figure 1. The shaping encoder 101 aims to produce a sequence of symbols (random variables) with a desired probability distribution given a sequence of symbols as an input. The input symbols often have a uniform probability distribution. The output symbols often have a non-uniform probability distribution. A shaping encoder is sometimes referred to as a distribution matcher (DM), and a shaping decoder is called a distribution dematcher or inverse DM. Distribution matching is usually performed on a block-to-block (or block-to-variable-length) basis. In other words, the shaping encoder usually maps a uniformly distributed input sequence of fixed length to a fixed-length (or variable-length, depending on the input sequence) sequence of symbols distributed according to a desired probability distribution. The mapping should be one-to-one.
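To make the block-to-block, one-to-one property concrete, here is a minimal enumeration-based distribution matcher; all parameters (n_c = 8, a composition of exactly two '1's, k = 4 input bits) are assumed for illustration, and enumeration of course does not scale (real matchers use, e.g., arithmetic coding):

```python
from itertools import combinations

n_c, ones, k = 8, 2, 4        # C(8, 2) = 28 >= 2^4 = 16 codewords needed

def make_codebook():
    """First 2^k length-n_c binary sequences with exactly `ones` ones."""
    book = []
    for pos in combinations(range(n_c), ones):
        book.append(tuple(1 if i in pos else 0 for i in range(n_c)))
        if len(book) == 2 ** k:
            break
    return book

CODE = make_codebook()
INDEX = {word: i for i, word in enumerate(CODE)}   # inverse (dematcher)

def sh_enc(bits):             # k uniform input bits -> biased n_c-bit block
    return CODE[int("".join(map(str, bits)), 2)]

def sh_dec(word):             # biased block -> original k input bits
    return [int(b) for b in format(INDEX[tuple(word)], f"0{k}b")]

u = [1, 0, 1, 1]
c = sh_enc(u)
assert sh_dec(c) == u         # one-to-one: the mapping is invertible
print(c, "-> P(1) =", sum(c) / n_c)   # every codeword has P(1) = 0.25
```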
The PAS system shown in Figure 1 works as follows:
• First it is assumed that the system in Figure 1 is configured to handle a block of n_c symbols from a 2^m-ASK (amplitude shift keying) alphabet.
• A sequence of uniformly distributed input bits is fed to the shaping encoder 101. The shaping encoder outputs a sequence 108 of n_c amplitudes formed from the alphabet A = {1, 3, ..., (2^m − 1)} and with probability distribution P_A 107.
• Each amplitude is mapped independently 102, by a fixed mapping b_A, to a corresponding bit label of length m − 1.
• The concatenated bit labels 109 form a binary sequence that consists of (m − 1)·n_c bits. This sequence is encoded by a systematic FEC encoder 103 of rate R = (m − 1)/m. In other words, one parity bit is produced for each amplitude.
• The amplitudes are recovered from the systematic bits by the reverse mapping b_A^(−1). The binary sequence 111 of n_c parity bits is mapped to sign symbols S ∈ {−1, +1}.
• The amplitude and sign sequences are multiplied element-wise 105 and scaled by Δ 114 to obtain the channel input symbols 115.
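A toy sketch of this final combining step; the amplitude values, parity bits, Δ and the parity-to-sign polarity are all assumptions for illustration, not values from the patent:

```python
amplitudes = [1, 3, 1, 7, 5, 1, 3, 1]   # assumed shaping encoder output
parity     = [0, 1, 1, 0, 0, 1, 0, 0]   # assumed FEC parity bits (one each)
delta      = 0.5                        # assumed constellation scaling

signs = [1 - 2 * b for b in parity]     # assumed polarity: 0 -> +1, 1 -> -1
x = [delta * s * a for s, a in zip(signs, amplitudes)]
print(x)    # [0.5, -1.5, -0.5, 3.5, 2.5, -0.5, 1.5, 0.5]
```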
Current PSCM implementations use arithmetic coding that enables a throughput of hundreds of Mbps. However, future mobile communications are anticipated to require a throughput of up to tens of Gbps, and fibre links are anticipated to require a throughput of up to Tbps (see e.g. "Experimental Demonstration of Capacity Increase and Rate-Adaptation by Probabilistically Shaped 64-QAM" by Buchali et al., postdeadline paper PDP.3.4 at ECOC 2015).
It is an object of the invention to provide concepts for probabilistic shaping that can provide a higher throughput.
The foregoing and other objects are achieved by the features of the independent claims. Further implementation forms are apparent from the dependent claims, the description and the figures.
According to a first aspect, a probabilistic shaper is provided that comprises an input configured to receive an input data sequence, which has a first probability distribution. The probabilistic shaper also comprises a splitter configured to divide the input data sequence into a plurality of separate data sequences. It comprises a plurality of shaping encoders. Each shaping encoder is configured to receive a respective one of the separate data sequences and map it to an output data sequence having a different probability distribution from the first probability distribution. Each shaping encoder is configured to perform its mapping independently of the other shaping encoders. It also comprises a combiner configured to receive the output data sequences from the plurality of shaping encoders and combine them to form a single data sequence having a target probability distribution. The probabilistic shaper is able to increase throughput by dividing up the incoming data sequence into sequences that can be processed in parallel.
The plurality of shaping encoders and the combiner may each be configured to map their respective received data sequences to output data sequences that are formed from a respective alphabet, and the shaping encoders may each be configured to map to a smaller alphabet than the combiner. This reduces the complexity of the individual shaping encoding operations and further increases throughput.
The combiner may be configured to receive the output data sequences and map them to a single data sequence that is formed from a target alphabet, wherein the target alphabet comprises a plurality of symbols and each symbol can be represented by one or more sub-symbols. Each shaping encoder may be configured to map its respective separate data sequence to an output data sequence that is formed from an alphabet that comprises at least one of the sub-symbols. The shaping encoders are
thus able to output individual data sequences, which can be independently formed and then jointly mapped to the target alphabet.
The splitter may be configured to divide the input data sequence into a number of separate data sequences that is equal to the number of the plurality of sub-symbols that represent each symbol in the target alphabet. The probabilistic shaper may be configured such that one or more of those separate data sequences is not input into a shaping encoder but is instead input directly into the combiner. This can further improve throughput and reduces complexity.
Each sub-symbol may comprise one or more bit levels. The combiner may be configured to treat the output data sequences from the plurality of shaping encoders as each providing a respective one or more bit levels of the sub-symbols. This assists the combiner to jointly map the outputs of the shaping encoder to the desired target alphabet.
The combiner may be configured to combine the output data sequences to form the single data sequence by combining one or more sub-symbols that are comprised in the output data sequences of the plurality of shaping encoders to form a combination of sub-symbols and mapping the combination of sub-symbols to the symbol in the target alphabet that is represented by that combination. This enables the combiner to jointly map the outputs of the shaping encoder to the desired target alphabet.
Each symbol in the target alphabet may be associated with a respective transmit power. The combiner may be configured to treat each symbol in the target alphabet as being represented by a specific combination of sub-symbols, whereby a symbol in the target alphabet that is associated with a higher transmit power than another symbol in the target alphabet is treated as being represented by a specific combination of sub- symbols that has a probability that is lower than or equal to a probability of a specific combination of sub-symbols that represents the other symbol. This improves performance by enabling the output distribution of the output symbols to approximate a Gaussian distribution.
The combiner may be configured to use Natural Code mapping to map a combination of sub-symbols to a symbol in the target alphabet. This helps to provide a probabilistically ordered mapping. The combiner may be configured to form the single data sequence to have a target probability distribution that is anticipated to offer improved transmission performance compared with the first probability distribution. The improvements could arise in any area, including reduced error rates, reduced transmit power, reduced Peak-to-Averaged Power ratio (PAPR) etc. The probabilistic shaper is thus able to offer improved transmission performance.
According to a second aspect, a transmitter is provided that comprises a probabilistic shaper as described in any of the preceding paragraphs. According to a third aspect, a probabilistic deshaper is provided that comprises an input configured to receive an input data sequence, which has a target probability distribution. The probabilistic deshaper also comprises a divider configured to arrange the input data sequence into a plurality of separate data sequences. It comprises a plurality of shaping decoders, each configured to receive a respective one of the separate data sequences and demap it to an output data sequence having a different probability distribution from the target probability distribution. Each shaping decoder performs said demapping independently of the other shaping decoders. It also comprises a combiner configured to receive the output data sequences from the plurality of shaping decoders and combine them to form a single data sequence having a first probability distribution. The probabilistic deshaper is thus able to reverse the processing of the probabilistic shaper and increase throughput by dividing up the incoming data sequence into sequences that can be processed in parallel.
The divider may comprise a demapper that is configured to receive the input data sequence, which is formed from a target alphabet, wherein the target alphabet comprises a plurality of symbols and each symbol can be represented by a plurality of sub-symbols. It may also be configured to demap the input data sequence to a demapped data sequence that is formed from an alphabet that comprises the plurality of sub-symbols. It may also be configured to output the demapped data sequence for
arranging into the plurality of separate data sequences. The divider is thus able to divide the input data stream into appropriate sequences of sub-symbols that can be demapped to reclaim the original data that was input into the probabilistic shaper. The divider may be configured to arrange the demapped data sequence into a plurality of separate data sequences by arranging data corresponding to the same sub-symbol in the same sequence and data corresponding to different sub-symbols in different sequences. The divider is thus able to divide the input data stream into sequences of sub-symbols that can be independently demapped.
According to a fourth aspect, a receiver is provided that comprises a probabilistic deshaper described in any of the preceding paragraphs.
According to a fifth aspect, a method is provided that comprises receiving an input data sequence, which has a first probability distribution. It comprises dividing the input data sequence into a plurality of separate data sequences. It comprises independently mapping each separate data sequence to an output data sequence that has a different probability distribution from the first probability distribution. It also comprises combining the output data sequences to form a single data sequence having a target probability distribution.
The present invention will now be described by way of example with reference to the accompanying drawings. In the drawings: Figure 1 shows an example of a prior art PAS system;
Figure 2 shows an example of a probabilistic shaper according to an embodiment of the invention; Figure 3 shows an example of a probabilistic deshaper according to an embodiment of the invention;
Figure 4 is a flow chart that illustrates an example of a shaping technique according to an embodiment of the invention;
Figure 5 shows a more detailed example of a probabilistic shaper;
Figure 6 shows an example of a probabilistic shaper configured to perform split shaping encoding;
Figure 7 shows an example of a probabilistic shaper configured to perform binary split shaping encoding;
Figure 8 shows an example of a probabilistic shaper incorporated in a PSCM transmit chain; and
Figure 9 shows simulation results for split shaping encoders that use Gray Binary Code mapping and Natural Binary Code mapping.
Figure 2 shows an example of an apparatus for implementing probabilistic shaping. The apparatus, which is termed a "probabilistic shaper" herein, is shown generally at 200. It comprises an input 201, a splitter 202 and a plurality of shaping encoders 203. The splitter is configured to receive an input data sequence from the input and divide it into a plurality of separate data sequences. The splitter might simply be a switch but it could be implemented by any suitable means for separating an input data stream, including e.g. a demultiplexer. Each of the shaping encoders is configured to receive one of the separate data sequences and shape it to generate an output data sequence that has a different probability distribution from that of the original input data sequence. The shaping encoders could be configured to produce output sequences that all have the same probability distribution or they could differ from each other. Each shaping encoder performs its mapping independently of the other shaping encoders, so each shaping operation is performed without reference to the shaping operations being performed by other shaping encoders.
A data sequence is typically formed of a plurality of bits or symbols. Those bits or symbols are usually taken from an alphabet that defines the available bits or symbols. For example, a binary alphabet is the set {0, 1}. The "probability distribution" of a data sequence defines the relative proportions of the different alphabet members that the sequence contains. So, for example, the probability distribution of a binary sequence refers to its relative proportions of '1's and '0's. Changing a probability distribution of a sequence is referred to as "shaping" herein.
In some implementations, the input data sequence that is received by the probabilistic shaper is a uniform sequence, i.e. a sequence in which the relative proportions of the alphabet members are the same. Preferably the shaping encoders are configured to transform these uniform distributions into "biased" distributions, where the term "biased" refers to the probability of the different alphabet members in a sequence being different.
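As a two-line illustration of these definitions (the sequence is an arbitrary example):

```python
from collections import Counter

seq = [0, 1, 0, 0, 1, 0, 0, 0]
dist = {s: c / len(seq) for s, c in Counter(seq).items()}
print(dist)   # {0: 0.75, 1: 0.25} -- a biased, non-uniform distribution
```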
The probabilistic shaper also includes a combiner 204. The combiner is configured to receive the output data sequences from the plurality of shaping encoders and combine them to form a single data sequence having a target probability distribution. The combiner may optionally incorporate a mapper 205, which is configured to map the output data sequences that the combiner receives from the shaping encoders to a target alphabet. The probabilistic shaper will usually form part of a transmitter. For example, it could form part of a transmitter that is capable of formatting data for transmission according to a PSCM transmit scheme. This formatting is reversed at the receiver. The processing that is performed by the probabilistic shaper shown in Figure 2 may be reversed by its mirror: the probabilistic deshaper shown in Figure 3.
Figure 3 shows an example of an apparatus for implementing probabilistic deshaping. The apparatus, which is termed a "probabilistic deshaper" herein, is shown generally at 300. It comprises an input 301, a divider 302 and a plurality of shaping decoders 303. The divider is configured to arrange the input data sequence into a plurality of separate data sequences. As with splitter 202, the divider might simply be a switch. The job of the divider is likely to be slightly more complicated than that of splitter 202, however (as will become apparent from some of the examples below) so it is likely to be implemented by a more complex component, such as a demultiplexer. In Figure 3 the divider comprises an optional demapper 304, which may be required if split-shaping encoding has been implemented at the transmitter-side (which is described in more detail below). Each of the shaping decoders is configured to receive one of the separate data sequences and shape it to generate an output data sequence. The output data sequences have a different probability distribution from that of the original input data sequence. The shaping decoders could be configured to produce output sequences that all have the same probability distribution or they could differ from each other. Each shaping decoder performs its demapping independently of the other shaping decoders. The probabilistic deshaper also has a combiner 305 configured to receive the output data sequences from the plurality of shaping decoders and combine them to form a single data sequence. For the deshaper, this single data sequence should be equivalent to the original data sequence that was processed in the transmitter, i.e. the single data sequence should have the first probability distribution.
In general terms the probabilistic shaper and deshaper work in a very similar way, in that both receive an input data sequence and divide it into a number of smaller, separate sequences, which are then independently mapped to an output sequence. This general method is illustrated in Figure 4, which commences in step S401 with receiving an input data sequence. In the probabilistic shaper this input data sequence will have a first probability distribution. In the probabilistic deshaper, this signal will be a received signal that has already gone through a shaping process and hence has the target distribution. Both the probabilistic shaper and the probabilistic deshaper then split their respective input sequences into a plurality of separate data sequences (step S402). Similarly, the probabilistic shaper and the probabilistic deshaper both include shaping encoders/decoders for translating their respective input sequences into output sequences that have a different probability distribution from the data sequence that was originally received by the apparatus (step S403). Finally, both apparatus combine the outputs from the shaping encoders/decoders to form a single data sequence (step S404). In the deshaper, this combination is likely to be a simple concatenation of the data sequences output by the different shaping decoders. In the shaper, this operation may be more complicated and involve jointly mapping sub-symbols in the output sequences of the shaping encoders to target symbols (as explained in more detail below).
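The split/shape/combine flow of Figure 4 can be sketched in Python as follows. This is a minimal illustration of the control flow only: the encoder and combiner callables are placeholders for whichever shaping and combining schemes are in use, and all names are assumptions rather than terms from the description.

```python
def probabilistic_shaper(input_sequence, shaping_encoders, combiner):
    """Steps S402-S404: split, shape each part independently, combine."""
    n = len(shaping_encoders)
    chunk = len(input_sequence) // n
    # Step S402: divide the input into n separate sequences (contiguous chunks here).
    parts = [input_sequence[i * chunk:(i + 1) * chunk] for i in range(n)]
    # Step S403: each shaping encoder maps its part without reference to the others.
    shaped = [encode(part) for encode, part in zip(shaping_encoders, parts)]
    # Step S404: combine the shaped outputs into a single output sequence.
    return combiner(shaped)

# On the deshaper side, the combination is typically plain concatenation:
def concatenate(sequences):
    return [symbol for seq in sequences for symbol in seq]
```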
The apparatuses shown in Figures 2 and 3 introduce a new architecture for block-to-block or block-to-variable-length shaping encoding and decoding. This new architecture provides reduced complexity and parallelisation, which improves throughput.
The structures shown in Figures 2 and 3 (and all the block apparatus diagrams included herein) are intended to correspond to a number of functional blocks. This is for illustrative purposes only. Figures 2 and 3 are not intended to define a strict division between different parts of hardware on a chip or between different programs, procedures or functions in software. In some embodiments, some or all of the signal processing techniques described herein may be performed wholly or partly by a processor acting under software control. In other embodiments, some or all of those techniques may be performed wholly or partly in hardware. This particularly applies to techniques incorporating repetitive arithmetic operations, such as mapping and shaping. In some implementations, the functional blocks are expected to be implemented as dedicated hardware in a transmitter/receiver chain. A transmitter chain and a receiver chain may be implemented in the same device and may share components. Specifically, one or more of the components in the probabilistic shaper and deshaper may have a dual purpose depending on whether the device is operating in a "transmit" mode or a "receive" mode. For example, the shaping encoders and decoders may be the same components, just configured to perform different mappings depending on whether they are operating in transmit mode or receive mode.

The majority of the description below concentrates on the transmitter-side, since it is the transmitter-side that mandates the receiver-side processing. It should be understood, even where this is not explicitly stated, that all of the techniques described below as being performed on the transmitter-side will be mirrored on the receiver-side in order to reverse the transmitter-side processing and obtain the original data sequence. The specific components found in a transmitter/receiver chain are dependent on the exact waveform and telecommunications protocol that the transmitter/receiver is configured to implement. One or more implementations of the invention are described below with reference to an application in which the transmitter/receiver is configured to operate in accordance with a PSCM transmission scheme. This is for the purposes of example only; it should be understood that the scope of the invention is not limited to any particular transmission scheme, waveform, or telecommunications protocol.
A more detailed example of a probabilistic shaper is shown in Figure 5. This implementation may be termed the "separate shaping encoder" implementation. The separate shaping encoder splits the input sequence into multiple shorter sequences. In the example of Figure 5, an input data sequence of k bits 501 is split into two separate sequences 502, 503, each of length k/2. The splitter is not shown in Figure
5, but it could just be a switch that connects the input data sequence first to one shaping encoder and then to the other. The separate data sequences are the same length in Figure 5, but they could be of different lengths, as will become apparent from some of the examples below. The probabilistic shaper includes two building block shaping encoders 504, 505, each of which is configured to receive one of the separate data sequences. The shaping encoders are configured to perform distribution matching on each sequence in parallel to generate respective output sequences 508, 509. In Figure 5, the shaping encoders are both configured to map their respective input sequences to output sequences having a probability distribution PA. The shaping encoders could also map to output sequences having different probability distributions. The multiplexor 510 is then configured to concatenate the shaping encoder output sequences to form a single output sequence 511.
In the example of Figure 5, the building block shaping encoders use the target alphabet, i.e. they both map to the alphabet of the final output sequence Λ = {A, B, C, D}. The combiner can thus be implemented by a multiplexor 510, since the single output sequence is formed by concatenating the output sequences from the shaping encoders. In other examples the shaping encoders may map to an intermediate alphabet that is different from the target alphabet. In those examples the combiner may be configured to jointly map the output sequences of the shaping encoders to the target alphabet to form the single output sequence. This is described in more detail below.
One way to increase the throughput is to split the input sequence of symbols into multiple sets of shorter symbol sequences. Using this technique, the shaping encoders
may map to a different alphabet from the combiner. In particular, the shaping encoders may map to a smaller alphabet than the combiner. This results in multiple shaping encoders, all operating on shorter sequences in parallel with each other. An example of a probabilistic shaper in which the shaping encoders and the combiner use different alphabets is shown in Figure 6. In this example the building block shaping encoders 604, 605 each perform a distribution matching independently on smaller alphabets than the target alphabet X. Each shaping encoder uses its own respective alphabet, so shaping encoder 604 uses alphabet A to generate its output sequence 608 whereas shaping encoder 605 uses alphabet B to generate its output sequence 609. The joint distribution of the symbols in the shaping encoder output sequences 608, 609 is mapped to the symbols of the target alphabet by mapper 610.
The general principle underlying the probabilistic shaper shown in Figure 6 is as follows. The probabilistic shaper may have k parallel building block shaping encoders. Together these shaping encoders can be considered as implementing a "split shaping encoder".
The operation of the split shaping encoder will now be described with reference to an example in which the shaping system is configured as a CCDM (Constant Composition Distribution Matcher; for details of a CCDM and its operation, see P. Schulte and G. Bocherer, "Constant Composition Distribution Matching," IEEE Trans. Inf. Theory, vol. 62, no. 1, 2016) and in which there are k binary shaping encoders. The output alphabet Λ of the split shaping encoder has a size of $w^k$, where w is the number of "levels" in the shaping encoder alphabets (which are the CCDM alphabets in this example). For example, if the shaping encoders are mapping to a binary alphabet {0, 1}, w = 2 and the output alphabet has a size of $2^k$. The shaping encoders preferably use alphabets that are related to the target alphabet. For example, each symbol in the target alphabet may be capable of being represented by one or more sub-symbols, and those sub-symbols may in turn form the alphabets of the shaping encoders. One way of thinking of this is that each symbol in the target alphabet has a label $L_i$. Each label may have k bits. These can be viewed as k "bit levels". The number
of bit levels may be the same as the number of shaping encoders. Each shaping encoder can, in effect, be considered as being responsible for mapping the input data sequence to one of the "bit levels". The combiner correspondingly treats the output data sequences from the shaping encoders as each providing a respective bit-level for the symbol labels. (It is also possible for each shaping encoder to handle more than one bit level. This is described in more detail below).
The preferred shaping distribution for each of the k building block shaping encoders can be designated as $P_{B_i}$, for $i = 1, \ldots, k$. The bit levels of a label are taken to be independent of each other, so the joint probability of a label is the product of the probabilities of the corresponding bit levels. For example,

$P(L) = P_{B_1}(b_1) \cdot P_{B_2}(b_2) \cdots P_{B_k}(b_k)$

corresponds to the desired probability of the symbol in the target alphabet that the label $L = b_1 b_2 \cdots b_k$ represents. Some accuracy of the distribution matching may be lost for desired target distributions that cannot be accurately represented as a product of two or more independent distributions. However, this technique offers the advantages of parallelisation and higher throughput.

For CCDM, the input parameters to the shaping system are the target output distribution $P_A$ and the length $n_c$ of the output data sequence. The output distribution $P_A$ is emulated by outputting sequences with a fixed composition of symbols, i.e., the output sequence $(a_1, a_2, \ldots, a_{n_c})$ contains a fixed number $n_i$ of each individual symbol $a_i$ from the alphabet. An empirical distribution for the output sequence is thereby obtained which is close to the desired target distribution $P_A$. The empirical distribution of the symbols in the sequence is:

$\bar{P}_A(a_i) = n_i / n_c$.
One option is to look for an empirical distribution $\bar{P}_A$ which minimizes the Kullback-Leibler (KL) divergence, which is a function of two probability distributions defined on the same alphabet:

$D(\bar{P}_A \| P_A) = \sum_{a} \bar{P}_A(a) \log \frac{\bar{P}_A(a)}{P_A(a)}$.
The number of data sequences that are available with the required empirical distribution $\bar{P}_A(a_i)$ can be calculated as follows:

$N = \binom{n_c}{n_1, n_2, \ldots} = \frac{n_c!}{n_1!\, n_2! \cdots},$

where $\binom{n_c}{n_1, n_2, \ldots}$ is the multinomial coefficient.
For CCDM, the length of the input data sequence can be determined as $k_c = \lfloor \log_q N \rfloor$, where q is the size of the input alphabet and $\lfloor x \rfloor$ denotes the floor function, i.e., the largest integer not greater than x. This is the maximum number of input symbols that can be used to label the available sequences using an alphabet of size q. For example, a binary alphabet has two levels, so $k_c = \lfloor \log_2 N \rfloor$ represents the maximum number of bits which can be used to label the sequences bijectively. The labels can then be determined by selecting $q^{k_c}$ sequences, each of which has the empirical distribution, and defining a one-to-one mapping between those labels and the selected sequences. The selection of which label maps to which sequence can be random or can be chosen to achieve some purpose, e.g. to achieve a probabilistically ordered mapping. A practical worked example is described below.

This explanation is based on the operation of a CCDM, which can be used to implement the shaping encoder building blocks of the probabilistic shaper. This is just one example of a shaping encoder and it should be understood that the techniques described herein could be implemented using any type of shaping encoder. The worked example below refers to the probabilistic shaper shown in Figure 7. This apparatus is configured to implement bit-level or binary distribution matching. Instead of performing a distribution matching on a symbol level, the target symbols are given binary labels. The distribution matching is then performed for each bit level of the binary labels, where the nth "bit-level" refers to the nth bit of a binary label. After binary distribution matching, the binary output sequences are mapped back to symbols. In this way, the input bits can be processed independently and in parallel, such that a higher throughput can be obtained.
The two shaping encoders 701, 702 in Figure 7 are bit-level (or binary) shaping encoders. The apparatus in Figure 7 represents the transmitter-side implementation; corresponding bit-level (or binary) shaping decoders would form the building blocks of the receiver-side implementation. These building blocks could be replaced by any other binary source encoder or decoder (e.g. one that works on a block-to-block basis) which outputs biased bit sequences.
In this example the target alphabet is Λ = {A, B, C, D}, which thus has four levels. The corresponding target probabilities are:

$P_A(A) = 1/2$, $P_A(B) = 1/4$, $P_A(C) = 1/6$, $P_A(D) = 1/12$.

The length of the output data sequence is $n_c = 12$. In this example the desired distribution of the output sequence can be matched exactly, i.e., no approximation is needed. We have:

$N = \frac{12!}{6!\, 3!\, 2!\, 1!} = 55440.$
Therefore, there are 55440 sequences with the desired target distribution and length nc = 12. In other words, there are 55440 sequences with 6 occurrences of A, 3 occurrences of B, 2 occurrences of C and 1 occurrence of D.
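These counts are easy to verify numerically. The short sketch below recomputes the multinomial coefficient and the resulting CCDM input length for a binary input alphabet; it is a verification aid only, not part of the described apparatus.

```python
from math import factorial, floor, log2

# Multinomial coefficient 12! / (6! 3! 2! 1!): the number of length-12
# sequences with 6 A's, 3 B's, 2 C's and 1 D.
counts = [6, 3, 2, 1]
N = factorial(sum(counts))
for n in counts:
    N //= factorial(n)
print(N)               # 55440

# Maximum number of input bits that can label these sequences bijectively.
print(floor(log2(N)))  # k_c = 15
```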
Instead of performing a symbol-level distribution matching, the apparatus in Figure 7 is configured to perform distribution matching on two bit-levels, B1 and B2, of binary labels assigned to the symbols. The symbols in the target alphabet can be labelled as follows:

A = 00, B = 01, C = 10, D = 11.

Thus each symbol in the target alphabet is represented by two sub-symbols, 0 and 1, which form a unique label. The shaping encoders each use alphabets that are formed from the sub-symbols, i.e. the binary alphabet {0, 1}.
A single output sequence of symbols with the joint probability distribution $P_A$ can be obtained by mapping the joint distribution of the bit levels to the symbols in the alphabet Λ. As mentioned above, the probability of a label is then a product of the probabilities of the corresponding bit levels. The marginal distributions for the random variables B1 and B2 are thus:

$P_{B_1}(0) = 3/4$, $P_{B_1}(1) = 1/4$; $P_{B_2}(0) = 2/3$, $P_{B_2}(1) = 1/3$.

It can be seen that the joint distribution:

$P_{B_1}(0) P_{B_2}(0) = 1/2$, $P_{B_1}(0) P_{B_2}(1) = 1/4$, $P_{B_1}(1) P_{B_2}(0) = 1/6$, $P_{B_1}(1) P_{B_2}(1) = 1/12$

matches exactly with the target distribution $P_A$. It can also be seen that the numbers of sequences available to the two binary shaping encoders are $N_1 = \binom{12}{3} = 220$ and $N_2 = \binom{12}{4} = 495$, which gives an input length of $k_c = \lfloor \log_2 220 \rfloor + \lfloor \log_2 495 \rfloor = 7 + 8 = 15$ bits.
The binary shaping encoders 701, 702 are each configured to map their respective input bits to an output bit sequence that shows the appropriate distribution of '1's and '0's according to the marginal probabilities of B1 and B2. The mapper 703 treats the output from binary shaping encoder 701 as providing bit B1 and the output from binary shaping encoder 702 as providing bit B2. It therefore takes alternate bits from each sequence to jointly map bits B1 and B2 to the symbols of the target alphabet, to obtain a single output sequence of symbols that have the target probability distribution PA.
This is the same input length that would be obtained from a conventional prior art PAS system, such as that shown in Figure 1. The advantages of the arrangement in Figure 7 are that the input bits can be processed in parallel and independently (and hence faster). Moreover, binary distribution matching is less complex, in general, than distribution matching on a symbol level.
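The arithmetic of this worked example can be checked with a few lines of Python. The bit labels A = 00, B = 01, C = 10, D = 11 and the marginals above are taken from the example; everything else is a verification sketch.

```python
from math import comb, floor, log2

p1, p2 = 3/4, 2/3   # P_B1(0) and P_B2(0) from the worked example

# Joint distribution of the two independent bit levels, mapped to labels A..D.
joint = {'A': p1 * p2, 'B': p1 * (1 - p2),
         'C': (1 - p1) * p2, 'D': (1 - p1) * (1 - p2)}
print(joint)   # {'A': 0.5, 'B': 0.25, 'C': 0.1666..., 'D': 0.0833...} = P_A

# Input lengths of the two binary CCDMs for n_c = 12:
# bit level B1 has 3 ones per block, bit level B2 has 4 ones per block.
k1 = floor(log2(comb(12, 3)))   # 7 bits  (220 sequences)
k2 = floor(log2(comb(12, 4)))   # 8 bits  (495 sequences)
print(k1 + k2)                  # 15, matching the symbol-level matcher
```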
The apparatus in Figure 7 is configured to perform binary distribution matching inside a non-binary distribution matcher, for reasons of reduced complexity and improved parallelisation. These techniques are applicable to a PSCM scheme and can be incorporated before the channel code in a PSCM transmit chain (with the transmit chain being mirrored in the receive chain). Figure 8 shows an example of part of a PSCM transmit chain that incorporates a split shaping arrangement 801 such as that described above. After the symbols are generated, the rest of the chain can function in the same way as the PAS system shown in Figure 1. The mapping $b_A$ (from symbols to bits, just before the FEC encoder) is suitably a Gray binary code (GBC) mapping, which is considered optimal for BICM. This mapping, however, is different from the mapping used inside the probabilistic shaper, which targets shaping gain instead of coded modulation gain. This is described in more detail below.
If split shaping encoding is implemented by the probabilistic shaper, this technique is preferably mirrored by the probabilistic deshaper that is to receive the signal. This places some additional requirements on divider 302. As mentioned above, the divider may optionally include a demapper 304. The demapper is configured to receive the input data sequence, which is formed from the target alphabet. It is also configured to demap that input data sequence to a demapped data sequence that is formed from the appropriate sub-symbols. The divider is configured to arrange the demapped data sequence into a plurality of separate data sequences so that data corresponding to the same sub-symbol is directed to the same shaping decoder and data corresponding to different sub-symbols is directed to different shaping decoders. Essentially, this just reverses the process performed in the transmitter.
Simulation results have shown that a probabilistic shaper with a split shaping encoder arrangement of the type shown in Figures 6 to 8 can achieve a shaping performance that is very close to that of a conventional symbol-level distribution matcher. One reason for this is that the different bit levels considered for shaping are nearly independent of each other. The following example makes this clearer.
A performance close to the Additive White Gaussian Noise (AWGN) channel capacity can be achieved if the signal amplitudes approximate a Maxwell-Boltzmann distribution, i.e., a target alphabet where the probability of each amplitude $a \in \Lambda$ is:

$P_A(a) = K_\nu\, e^{-\nu a^2},$

where $\nu$ is a parameter depending on the Signal-to-Noise Ratio (SNR) and $K_\nu$ is a normalisation constant.
For this target probability distribution, we find that the correlations between the individual bits are small. Tables 1 to 3 show results obtained by labelling the target symbols using Natural Binary Coding, i.e., 000, 001, 010, 011, 100, ..., 111. The results show correlation coefficients between the bits of the Maxwell-Boltzmann distribution and 16-ASK modulation amplitudes, i.e. 8 amplitudes (represented by B1B2B3) on the positive part of the x-axis and 8 symmetric amplitudes on the negative part of the x-axis. The results show that the bits of different bit-levels are nearly independent of each other, and consequently confirm that bit-level shaping should perform well.
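Correlation coefficients of the kind reported in Tables 1 to 3 can be reproduced along the following lines. The choice of the odd amplitudes 1, 3, ..., 15 for 16-ASK is an assumption made for illustration; the ν value is taken from Table 2.

```python
import numpy as np

def maxwell_boltzmann(amplitudes, v):
    """P_A(a) proportional to exp(-v * a^2), normalised over the amplitude set."""
    p = np.exp(-v * np.asarray(amplitudes, dtype=float) ** 2)
    return p / p.sum()

amps = np.arange(1, 16, 2)          # 8 positive 16-ASK amplitudes: 1, 3, ..., 15
P = maxwell_boltzmann(amps, v=0.0494)

# Natural Binary Coding labels 000..111 for the 8 amplitudes, MSB first.
bits = np.array([[(i >> s) & 1 for s in (2, 1, 0)] for i in range(8)])

def correlation(x, y):
    E = lambda z: float(np.dot(P, z))
    cov = E(x * y) - E(x) * E(y)
    # Variance of a {0, 1} variable is E[x] - E[x]^2.
    return cov / np.sqrt((E(x) - E(x) ** 2) * (E(y) - E(y) ** 2))

# Small magnitude, i.e. bit-levels B1 and B2 are nearly independent.
print(correlation(bits[:, 0], bits[:, 1]))
```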
Table 1: Correlation coefficients between different bit-levels for 16-ASK (8 amplitudes in the Split-ShEnc target alphabet) and a Maxwell-Boltzmann distribution with v=0.0098.
Table 2: Correlation coefficients between different bit-levels for 16-ASK (8 amplitudes in the Split-ShEnc target alphabet) and a Maxwell-Boltzmann distribution with v=0.0494.
Table 3: Correlation coefficients between different bit-levels for 16-ASK (8 amplitudes in the Split-ShEnc target alphabet) and a Maxwell-Boltzmann distribution with v=0.1.
It is not possible to achieve every target distribution using split shaping encoding because it is restricted to distributions which are a product of the distributions that are achievable via the available bit levels. However, the simulation results show that a distribution of interest can be approximated such that the loss usually becomes negligible.
The building block shaping encoders can also work on higher-order alphabets. This includes alphabets that are not necessarily binary and/or alphabets in which each symbol comprises more than one bit (so that each shaping encoder maps to more than one output bit). Using a higher-order shaping encoder can increase the accuracy of the approximation of the desired target distribution PA. This is especially important when the correlation or dependency of some bit-levels is not weak. In this case, individual shaping encoders can be configured to consider two or more bit-levels jointly to form the output sub-symbols.

The mapping of the sub-symbols to the target alphabet need not have any special structure. Each target symbol could be randomly assigned its unique label of sub-symbols. However, a preferred option is to assign the labels to achieve a probabilistically ordered mapping. In this arrangement, target symbols that are associated with a relatively high transmit power are generally mapped to sub-symbol combinations that have a lower probability than the sub-symbol combinations to which relatively low transmit power symbols are mapped. It is also possible for symbols that neighbour each other in magnitude to be mapped to sub-symbol combinations that have the same probability.

For AWGN channels, having a Gaussian probability distribution on the transmit symbols is optimal. Therefore, using a probability distribution on the transmit symbols that is close to the Gaussian distribution should provide the maximum shaping gains. The Gaussian distribution decreases with increasing magnitude, i.e. the higher the magnitude of a symbol, the lower the probability of that symbol. It is preferable for the probability distribution of transmit symbols in a PSCM system to have the same property. Therefore, the mapper inside the combiner is preferably configured to ensure that the general trend of the output symbols is for their probability to decrease as their magnitude increases. This can be achieved as follows. First a series of assumptions is made: the probabilistic shaper comprises k parallel building block shaping encoders; the target alphabet Λ has a size of $2^k$; and each symbol in the alphabet has a label $L_i$ of k bits.
The binary target distribution that applies to each of the k building block shaping encoders can be designated as:

$P_{B_i}(0) = p_i$, $P_{B_i}(1) = 1 - p_i$, for $i = 1, \ldots, k$.
Assuming that the different bit levels are independent of each other, the probability of a label is then a product of the probabilities of the corresponding bit levels. For example,

$P(L) = P_{B_1}(b_1) P_{B_2}(b_2) \cdots P_{B_k}(b_k)$.
The labels $L_i$ can then be sorted according to their probability. For example:

$P(L_{\pi(1)}) \ge P(L_{\pi(2)}) \ge \cdots \ge P(L_{\pi(2^k)}),$

where $\pi(j)$ is the label index after the sorting operation. The sorted labels are then assigned to the symbols of the target alphabet in the same order, i.e. the j-th most probable label is assigned to the j-th symbol (assuming that the magnitudes of the symbols are increasing).
The result is target symbols that have the desired probability distribution.
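A probabilistically ordered mapping of this kind can be sketched as follows. The function and variable names are illustrative; the printed result uses the two-bit worked example from above.

```python
from itertools import product
from math import prod

def probabilistically_ordered_mapping(p0, symbols_by_magnitude):
    """Assign the most probable label to the lowest-magnitude symbol, etc.

    p0[i] is P_Bi(0) for bit-level i; independent bit-levels are assumed,
    so a label's probability is the product of its bit-level probabilities.
    """
    def label_probability(label):
        return prod(p if b == 0 else 1 - p for p, b in zip(p0, label))

    labels = sorted(product((0, 1), repeat=len(p0)),
                    key=label_probability, reverse=True)
    return dict(zip(symbols_by_magnitude, labels))

print(probabilistically_ordered_mapping([3/4, 2/3], ['A', 'B', 'C', 'D']))
# {'A': (0, 0), 'B': (0, 1), 'C': (1, 0), 'D': (1, 1)} — Natural Binary order
```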
In one example the shaping encoders may be configured to use a Natural Code as an alphabet. A probabilistically ordered Natural Code can achieve maximum redundancy in terms of unequal distribution from symbol level to bit levels. It can also be shown that under some constraints on the distribution applied by each of the k building block shaping encoders, a Natural Code mapping is a mapping which results in the target symbols having a decreasing probability distribution. This is demonstrated below using a practical, worked example.
In this example it is assumed that the probabilistic shaper has 4 parallel building block binary shaping encoders. The target alphabet has a size of 16 and each symbol in the alphabet has a 4-bit label $L = b_1 b_2 b_3 b_4$. Suppose the bit-level distributions satisfy the following constraints:

$P_{B_4}(0) \ge P_{B_4}(1)$.

This implies $P(b_1 b_2 b_3 0) \ge P(b_1 b_2 b_3 1)$ for any $b_1, b_2, b_3$.

$P_{B_3}(0) P_{B_4}(1) \ge P_{B_3}(1) P_{B_4}(0)$.

This is equivalent to $P(01) \ge P(10)$ on the last two bit levels, which implies that $P(b_1 b_2 0 1) \ge P(b_1 b_2 1 0)$ for any $b_1, b_2$.

$P_{B_2}(0) P_{B_3}(1) P_{B_4}(1) \ge P_{B_2}(1) P_{B_3}(0) P_{B_4}(0)$.

This implies that $P(b_1 0 1 1) \ge P(b_1 1 0 0)$ for any $b_1$.

$P_{B_1}(0) P_{B_2}(1) P_{B_3}(1) P_{B_4}(1) \ge P_{B_1}(1) P_{B_2}(0) P_{B_3}(0) P_{B_4}(0)$.

This implies that $P(0 1 1 1) \ge P(1 0 0 0)$.

Incrementing a Natural Binary Code label always flips exactly one suffix pattern of this kind (a suffix of the form 01...1 becomes 10...0). Therefore, the constraints imply that the label probability

$P(L) = P_{B_1}(b_1) P_{B_2}(b_2) P_{B_3}(b_3) P_{B_4}(b_4)$

is decreasing when ordered according to Natural Binary Code (i.e., 0000, 0001, 0010, 0011, ..., 1111). As a practical example, consider showing that $P(0101) \ge P(0110)$. Writing both labels as products gives $P(0101) = P_{B_1}(0) P_{B_2}(1) P_{B_3}(0) P_{B_4}(1)$ and $P(0110) = P_{B_1}(0) P_{B_2}(1) P_{B_3}(1) P_{B_4}(0)$, so the inequality follows directly from the second constraint above.
The construction above can be used to prove the inequality for any bit-labels. The reasoning above can be extended to a general theorem as follows:
Assume k independent bits $B_1, \ldots, B_k$ with corresponding probabilities $P_{B_i}(0) = p_i$ and $P_{B_i}(1) = 1 - p_i$. The bit labels, ordered according to a Natural Binary Code, then have non-increasing probabilities

$P(L_1) \ge P(L_2) \ge \cdots \ge P(L_{2^k})$

whenever, for every bit level i,

$P_{B_i}(0) \prod_{j > i} P_{B_j}(1) \ge P_{B_i}(1) \prod_{j > i} P_{B_j}(0),$

where $B_1$ denotes the most significant bit level of the label.
This general theorem can be extended to cover any Natural Code combiner, i.e. it is possible to derive a necessary condition for a Natural Code to be a probabilistically ordered code.
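The theorem is easy to check numerically for small k. The sketch below enumerates all labels in Natural Binary order and tests whether the product probabilities are non-increasing; the function name is illustrative.

```python
from itertools import product
from math import prod

def nbc_is_probabilistically_ordered(p0):
    """True if labels ordered by Natural Binary Code have non-increasing
    probability, for independent bit-levels with P_Bi(0) = p0[i]."""
    def probability(label):
        return prod(p if b == 0 else 1 - p for p, b in zip(p0, label))

    probs = [probability(lbl) for lbl in product((0, 1), repeat=len(p0))]
    return all(a >= b for a, b in zip(probs, probs[1:]))

print(nbc_is_probabilistically_ordered([3/4, 2/3]))   # True
print(nbc_is_probabilistically_ordered([0.6, 0.9]))   # False: P(01) < P(10)
```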
Using a non-probabilistically ordered mapping instead may result in a probability distribution which is non-decreasing with increasing amplitude. This forfeits the achievable shaping gain for the AWGN channel. An example of a non-probabilistically ordered mapping is Gray binary code (GBC) mapping. Figure 9 illustrates the resulting symbol probability distributions using Natural Binary Code mapping and Gray Binary Code mapping. The simulation results for Gray Binary Code are shown in plot 901. The results show that this mapping is not probabilistically ordered. It consequently results in poor performance. The simulation results for Natural Binary Code mapping are shown in plot 902. This mapping meets the constraints set out above and thus achieves a probabilistically ordered mapping.
The shaping encoders may be configured to implement a mapping that uses any Natural Code. A Natural Code is an extension of Natural Binary Code, and extends to any N-ary alphabet. The Natural Code defines an ordering of the sequences, as shown below for a binary code. Specifically, a Natural Code orders the sequences such that the rightmost symbol is the least significant one, the second rightmost symbol is the second least significant, and so on, with the leftmost symbol being the most significant.
For a binary alphabet, a Natural Code would be:
#1 = 00
#2 = 01
#3 = 10
#4 = 11

For a 3-ary alphabet, a Natural Code would be:
#1 = 00
#2 = 01
#3 = 02
#4 = 10
#5 = 11
#6 = 12
#7 = 20
#8 = 21
#9 = 22
Similarly, the 10-ary natural code is decimal numbering.
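A Natural Code for any alphabet sizes — including the mixed-alphabet case described next — can be enumerated with a few lines of Python; the helper name is illustrative only.

```python
from itertools import product

def natural_code(*alphabet_sizes):
    """Enumerate labels in Natural Code order: the rightmost position is
    the least significant, so it varies fastest."""
    digit_ranges = [range(size) for size in alphabet_sizes]
    return [''.join(str(d) for d in digits) for digits in product(*digit_ranges)]

print(natural_code(2, 2))   # ['00', '01', '10', '11']
print(natural_code(3, 3))   # ['00', '01', '02', '10', '11', '12', '20', '21', '22']
print(natural_code(2, 3))   # mixed binary/3-ary: ['00', '01', '02', '10', '11', '12']
```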
It is also possible to have a Natural Code on sequences which have mixed alphabets. For example, an alphabet may have a first symbol that is binary and a second that is 3-ary. The Natural Code would then be:

#1 = 00
#2 = 01
#3 = 02
#4 = 10
#5 = 11
#6 = 12

In all of the examples described above the input data sequence is divided into a number of separate data sequences that is equal to the number of shaping encoders. Each separate data sequence is then processed by a shaping encoder before being mapped to the target alphabet. However, in some implementations not all of the separate data sequences need to be processed by a shaping encoder. In these
implementations, the probabilistic shaper is still configured to divide the input data sequence into a number of separate data sequences that is equal to the number of the plurality of sub-symbols that represent each symbol in the target alphabet. One or more of those separate data sequences is not input into a shaping encoder, however, but is instead input directly into the combiner. The reason for this is explained below.
The bit-level distributions can be determined so as to achieve a Gaussian-like distribution of the target symbols. Consider transmission of a block of $n_c$ symbols with modulation order $2^m$ and an average transmit power constraint P. A PSCM system, such as that shown in Figure 8, has the following free parameters available to it: (i) the probability distribution on each bit-level and (ii) the constellation scaling $\Delta$. To determine these parameters, a "good" probability distribution $P_X$ for the transmit symbols is identified first. This enables the bit-level probabilities that generate a product distribution close to $P_X$ to be determined.
An optimisation method can be employed to obtain a "good" distribution $P_X$ that yields rates close to the AWGN capacity. For example, the distribution $P_X$ can be restricted to a Maxwell-Boltzmann distribution, i.e., $P_X(x) = K_\nu\, e^{-\nu |\Delta x|^2}$; then $\Delta$ and $P_X$ can be obtained by maximising the achievable rate subject to the average power constraint.
Note that $P_X$ is a distribution on the amplitudes $P_A$ and therefore a joint distribution $P_B$ on the bit labels (via a bit-to-symbol mapping). The aim is to find a product distribution $Q_B = Q_{B_1} Q_{B_2} \cdots Q_{B_k}$ of independent bits that is close to $P_B$. The Kullback-Leibler divergence D can be used as a measure of "closeness":

$D(P_B \| Q_B) = \sum_{b_1 \cdots b_k} P_B(b_1 \cdots b_k) \log \frac{P_B(b_1 \cdots b_k)}{Q_{B_1}(b_1) \cdots Q_{B_k}(b_k)}.$

The expectations $E[\log Q_{B_i}(B_i)]$ that appear when this divergence is expanded can be maximised (and the divergence thereby minimised) by choosing:

$Q_{B_i}(b) = \sum_{b_j,\, j \ne i} P_B(b_1 \cdots b_{i-1}\, b\, b_{i+1} \cdots b_k),$

i.e., $Q_{B_i}$ is the marginal distribution of $B_i$. As a result, the system parameters can be determined together with the constellation scaling parameter $\Delta$, such that the average power constraint is met with equality. Examples of the bit-level probabilities obtained via simulation are given in Table 4.
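The product-of-marginals choice can be illustrated with a short sketch that extracts the bit-level probabilities from a target joint distribution; the worked two-bit example from above is used as input, and the function name is an assumption.

```python
import numpy as np

def bit_level_marginals(P_labels):
    """Given a joint distribution over k-bit labels indexed in Natural
    Binary order, return Q_Bi(0) for each bit-level (MSB first). The
    product of these marginals is the closest independent-bit
    approximation in KL divergence."""
    P = np.asarray(P_labels, dtype=float)
    k = int(np.log2(len(P)))
    q0 = []
    for i in range(k):
        shift = k - 1 - i
        bit_i = np.array([(n >> shift) & 1 for n in range(len(P))])
        q0.append(float(P[bit_i == 0].sum()))
    return q0

# Two-bit worked example: P_A = [1/2, 1/4, 1/6, 1/12] over labels 00..11.
print(bit_level_marginals([1/2, 1/4, 1/6, 1/12]))   # [0.75, 0.666...]
```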
Table 4: Example bit-level probabilities obtained for PSCM with split shaping encoding, 256QAM modulation and a WiMAX LDPC code of rate 5/6 and length 576 bits.
The results in the table suggest that for the least significant bit (LSB) B3, shaping encoding can be skipped and the uniformly distributed input bits used to form the target symbol instead. This is because for high rates such as 5 or 6 b/CU (bits per channel use) the probability $P_{B_3}(0)$ is close to 0.5, i.e. '1's and '0's are nearly equally distributed in bit level B3. Skipping the shaping encoding for bit level B3 reduces the system's computational complexity and further increases the throughput.
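In an implementation, the decision of which bit-levels to shape could be made with a simple threshold test like the one below; the 0.02 tolerance is an illustrative assumption, not a value taken from the description.

```python
def levels_to_shape(p0, tolerance=0.02):
    """Indices of bit-levels worth shaping: levels whose P_Bi(0) is within
    `tolerance` of 0.5 are nearly uniform and can be passed through
    unshaped."""
    return [i for i, p in enumerate(p0) if abs(p - 0.5) > tolerance]

# With P_B3(0) close to 0.5, only the first two levels are shaped:
print(levels_to_shape([0.82, 0.64, 0.503]))   # [0, 1]
```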
The probabilistic deshaper on the receiver-side is configured to reverse the processing that has been performed at the transmitter-side. The shaping decoders are therefore preferably configured to invert both the probabilistically ordered mapping and the Natural Code mapping when demapping their respective input sequences, so as to obtain the original data sequence. Similarly, the probabilistic deshaper can be configured to mirror the transmitter-side by skipping shaping decoding for any bit levels that were not shaped at the transmitter.
The desired distribution of the output symbols is preferably one that is anticipated to offer improved transmission performance compared with the distribution of the original data sequence. By achieving a target distribution that approaches that desired
distribution, the probabilistic shaper described herein is able to achieve improved transmission performance. The improvements could arise in any area, including reduced error rates, reduced transmit power, reduced Peak-to-Average Power Ratio (PAPR), etc.
It should be understood that where this explanation and the accompanying claims refer to the device doing something by performing certain steps or procedures or by implementing particular techniques, that does not preclude the device from performing other steps or procedures or implementing other techniques as part of the same process. In other words, where the device is described as doing something "by" certain specified means, the word "by" is meant in the sense of the device performing a process "comprising" the specified means rather than "consisting of" them.
The applicant hereby discloses in isolation each individual feature described herein and any combination of two or more such features, to the extent that such features or combinations are capable of being carried out based on the present specification as a whole in the light of the common general knowledge of a person skilled in the art, irrespective of whether such features or combinations of features solve any problems disclosed herein, and without limitation to the scope of the claims. The applicant indicates that aspects of the present invention may consist of any such individual feature or combination of features. In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the invention.
Claims
1. A probabilistic shaper comprising:
an input configured to receive an input data sequence, which has a first probability distribution;
a splitter configured to divide the input data sequence into a plurality of separate data sequences;
a plurality of shaping encoders, each configured to receive a respective one of the separate data sequences and map it to an output data sequence having a different probability distribution from the first probability distribution, wherein each shaping encoder performs its mapping independently of the other shaping encoders; and
a combiner configured to receive the output data sequences from the plurality of shaping encoders and combine them to form a single data sequence having a target probability distribution.
2. A probabilistic shaper as claimed in claim 1, wherein the plurality of shaping encoders and the combiner are each configured to map their respective received data sequences to output data sequences that are formed from a respective alphabet, and the shaping encoders are each configured to map to a smaller alphabet than the combiner.
3. A probabilistic shaper as claimed in claim 1 or 2, wherein:
the combiner is configured to receive the output data sequences and map them to a single data sequence that is formed from a target alphabet, wherein the target alphabet comprises a plurality of symbols and each symbol can be represented by one or more sub-symbols; and
each shaping encoder is configured to map its respective separate data sequence to an output data sequence that is formed from an alphabet that comprises at least one of the sub-symbols.
4. A probabilistic shaper as claimed in claim 3, wherein:
the splitter is configured to divide the input data sequence into a number of separate data sequences that is equal to the number of the plurality of sub-symbols that represent each symbol in the target alphabet; and
the probabilistic shaper is configured such that one or more of those separate data sequences is not input into a shaping encoder but is instead input directly into the combiner.
5. A probabilistic shaper as claimed in claim 3 or 4, wherein each sub-symbol comprises one or more bit levels and the combiner is configured to treat the output data sequences from the plurality of shaping encoders as each providing a respective one or more of the bit levels of the sub-symbols.
6. A probabilistic shaper as claimed in any of claims 3 to 5, wherein the combiner is configured to combine the output data sequences to form the single data sequence by:
combining one or more sub-symbols that are comprised in the output data sequences of the plurality of shaping encoders to form a combination of sub-symbols; and
mapping the combination of sub-symbols to the symbol in the target alphabet that is represented by that combination.
7. A probabilistic shaper as claimed in any of claims 3 to 6, wherein each symbol in the target alphabet is associated with a respective transmit power, the combiner being configured to treat each symbol in the target alphabet as being represented by a specific combination of sub-symbols, whereby a symbol in the target alphabet that is associated with a higher transmit power than another symbol in the target alphabet is treated as being represented by a specific combination of sub-symbols that has a probability that is lower than or equal to the probability of a specific combination of sub-symbols that represents the other symbol.
8. A probabilistic shaper as claimed in any of claims 3 to 7, wherein the combiner is configured to use Natural Code mapping to map a combination of sub-symbols to a symbol in the target alphabet.
9. A probabilistic shaper as claimed in any preceding claim, wherein the combiner is configured to form the single data sequence to have a target probability distribution
that is anticipated to offer improved transmission performance compared with the first probability distribution.
10. A transmitter comprising a probabilistic shaper as claimed in any preceding claim.
11. A probabilistic deshaper comprising:
an input configured to receive an input data sequence, which has a target probability distribution;
a divider configured to arrange the input data sequence into a plurality of separate data sequences;
a plurality of shaping decoders, each configured to receive a respective one of the separate data sequences and demap it to an output data sequence having a different probability distribution from the target probability distribution, wherein each shaping decoder performs said demapping independently of the other shaping decoders; and
a combiner configured to receive the output data sequences from the plurality of shaping decoders and combine them to form a single data sequence having a first probability distribution.
12. A probabilistic deshaper as claimed in claim 11, wherein the divider comprises a demapper configured to:
receive the input data sequence, which is formed from a target alphabet, wherein the target alphabet comprises a plurality of symbols and each symbol can be represented by a plurality of sub-symbols;
demap the input data sequence to a demapped data sequence that is formed from an alphabet that comprises the plurality of sub-symbols; and
output the demapped data sequence for arranging into the plurality of separate data sequences.
13. A probabilistic deshaper as claimed in claim 12, wherein the divider is configured to arrange the demapped data sequence into a plurality of separate data sequences by arranging data corresponding to the same sub-symbol in the same sequence and data corresponding to different sub-symbols in different sequences.
14. A receiver comprising a probabilistic deshaper as claimed in claim 12 or 13.
15. A method comprising:
receiving an input data sequence, which has a first probability distribution;
dividing the input data sequence into a plurality of separate data sequences;
independently mapping each separate data sequence to an output data sequence that has a different probability distribution from the first probability distribution; and
combining the output data sequences to form a single data sequence having a target probability distribution.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201780081987.6A CN110140330B (en) | 2017-01-02 | 2017-01-02 | Apparatus and method for shaping probability distribution of data sequence |
PCT/EP2017/050023 WO2018121887A1 (en) | 2017-01-02 | 2017-01-02 | Apparatus and method for shaping the probability distribution of a data sequence |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/EP2017/050023 WO2018121887A1 (en) | 2017-01-02 | 2017-01-02 | Apparatus and method for shaping the probability distribution of a data sequence |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018121887A1 true WO2018121887A1 (en) | 2018-07-05 |
Family
ID=57860809
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2017/050023 WO2018121887A1 (en) | 2017-01-02 | 2017-01-02 | Apparatus and method for shaping the probability distribution of a data sequence |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110140330B (en) |
WO (1) | WO2018121887A1 (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019034780A1 (en) * | 2017-08-17 | 2019-02-21 | Sony Corporation | Mapping device and method, probabilistic signal point shaping device and method |
EP3605906A1 (en) * | 2018-08-02 | 2020-02-05 | Nokia Solutions and Networks Oy | Transmission of probabilistically shaped amplitudes using partially anti-symmetric amplitude labels |
CN110971559A (en) * | 2019-12-18 | 2020-04-07 | 南京信息工程大学 | A modulation and demodulation method for reducing the peak-to-average power ratio of OFDM-PON signals based on dynamic control factors |
WO2020108771A1 (en) * | 2018-11-30 | 2020-06-04 | Huawei Technologies Co., Ltd. | Device and method for probabilitic constellation shaping |
WO2020170497A1 (en) * | 2019-02-20 | 2020-08-27 | Mitsubishi Electric Corporation | Communication system and method for communicating symbols of bits |
CN113169809A (en) * | 2018-10-25 | 2021-07-23 | 华为技术有限公司 | Distribution matcher, channel encoder and method for encoding data bits or symbols |
CN113454962A (en) * | 2019-02-26 | 2021-09-28 | 三菱电机株式会社 | Distribution shaping method, distribution de-shaping method, distribution shaping encoder, distribution shaping decoder, and transmission system |
US11277225B2 (en) * | 2017-08-17 | 2022-03-15 | Sony Corporation | Probabilistic signal point shaping device and method |
WO2022183472A1 (en) * | 2021-03-05 | 2022-09-09 | Qualcomm Incorporated | Constellation shaping-related coding selection |
WO2024016205A1 (en) * | 2022-07-20 | 2024-01-25 | Qualcomm Incorporated | Gray mapping for variable-to-fixed distribution matching |
WO2024045107A1 (en) * | 2022-09-01 | 2024-03-07 | Qualcomm Incorporated | Arithmetic coding including symbol sequence determination |
WO2024148548A1 (en) * | 2023-01-12 | 2024-07-18 | Qualcomm Incorporated | Approximation in probabilistic constellation shaping |
WO2025039176A1 (en) * | 2023-08-22 | 2025-02-27 | Qualcomm Incorporated | Shaping bit levels determination for bit level probabilistic shaping |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021038676A1 (en) * | 2019-08-26 | 2021-03-04 | 日本電信電話株式会社 | Wireless communication device and light transmission system |
CN111162801A (en) * | 2019-12-27 | 2020-05-15 | 上海微波技术研究所(中国电子科技集团公司第五十研究所) | Device and method for probability shaping distribution matcher based on sequence arrangement |
CN113746594A (en) * | 2020-05-29 | 2021-12-03 | 深圳市中兴微电子技术有限公司 | Probability shaping coding device, system and method |
CN114285519B (en) * | 2020-09-27 | 2024-04-26 | 中兴通讯股份有限公司 | Data transmitting and receiving method, terminal, system, device and readable storage medium |
WO2024216531A1 (en) * | 2023-04-19 | 2024-10-24 | Qualcomm Incorporated | Bit-level probabilistic shaping for wireless communications |
CN116418411B (en) * | 2023-06-06 | 2023-09-22 | 众瑞速联(武汉)科技有限公司 | Optical signal coding method and system for wavelength division multiplexing system |
CN119583004A (en) * | 2023-08-28 | 2025-03-07 | 华为技术有限公司 | Sequence processing method, device and system |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1324558A1 (en) * | 2001-12-28 | 2003-07-02 | Sony International (Europe) GmbH | Multiresolution broadcast transmitter and method using Gaussian trellis shaping to reduce average signal power and corresponding multi-stage decoder |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7756350B2 (en) * | 2006-11-13 | 2010-07-13 | Global Ip Solutions, Inc. | Lossless encoding and decoding of digital data |
EP3594943B1 (en) * | 2011-04-20 | 2024-02-21 | Panasonic Holdings Corporation | Device and method for execution of huffman coding |
KR102149770B1 (en) * | 2013-08-26 | 2020-08-31 | 삼성전자주식회사 | Memory controller and the method of operation thereof |
2017
- 2017-01-02 WO PCT/EP2017/050023 patent/WO2018121887A1/en active Application Filing
- 2017-01-02 CN CN201780081987.6A patent/CN110140330B/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1324558A1 (en) * | 2001-12-28 | 2003-07-02 | Sony International (Europe) GmbH | Multiresolution broadcast transmitter and method using Gaussian trellis shaping to reduce average signal power and corresponding multi-stage decoder |
Non-Patent Citations (5)
Title |
---|
BOCHERER GEORG ET AL: "Bandwidth Efficient and Rate-Matched Low-Density Parity-Check Coded Modulation", IEEE TRANSACTIONS ON COMMUNICATIONS, IEEE SERVICE CENTER, PISCATAWAY, NJ. USA, vol. 63, no. 12, 1 December 2015 (2015-12-01), pages 4651 - 4665, XP011593618, ISSN: 0090-6778, [retrieved on 20151215], DOI: 10.1109/TCOMM.2015.2494016 * |
BUCHALI ET AL.: "Experimental Demonstration of Capacity Increase and Rate-Adaptation by Probabilistically Shaped 64-QAM", POSTDEADLINE PAPER PDP.3.4, 2015 |
G. BOCHERER ET AL.: "Bandwidth Efficient and Rate-Matched Low-Density Parity-Check Coded Modulation", IEEE TRANS. COMMUN., vol. 63, no. 12, 2015, pages 4651 - 4665, XP011593618, DOI: doi:10.1109/TCOMM.2015.2494016 |
P. SCHULTE; G. BOCHERER: "Constant Composition Distribution Matching", IEEE TRANS. INF. THEORY, vol. 62, no. 1, 2016, XP011594649, DOI: doi:10.1109/TIT.2015.2499181 |
SCHULTE PATRICK ET AL: "Constant Composition Distribution Matching", IEEE TRANSACTIONS ON INFORMATION THEORY, IEEE PRESS, USA, vol. 62, no. 1, 1 January 2016 (2016-01-01), pages 430 - 434, XP011594649, ISSN: 0018-9448, [retrieved on 20151218], DOI: 10.1109/TIT.2015.2499181 * |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019034780A1 (en) * | 2017-08-17 | 2019-02-21 | Sony Corporation | Mapping device and method, probabilistic signal point shaping device and method |
US11277225B2 (en) * | 2017-08-17 | 2022-03-15 | Sony Corporation | Probabilistic signal point shaping device and method |
US11159273B2 (en) | 2017-08-17 | 2021-10-26 | Sony Corporation | Mapping device and method, probabilistic signal point shaping device and method |
JP7007336B2 (en) | 2018-08-02 | 2022-01-24 | ノキア ソリューションズ アンド ネットワークス オサケユキチュア | Transmission of stochastic shaping amplitudes using partially asymmetric amplified labels |
EP3605906A1 (en) * | 2018-08-02 | 2020-02-05 | Nokia Solutions and Networks Oy | Transmission of probabilistically shaped amplitudes using partially anti-symmetric amplitude labels |
CN110798267A (en) * | 2018-08-02 | 2020-02-14 | 诺基亚通信公司 | Probability shaped amplitude transmission using partially antisymmetric amplitude labels |
JP2020048188A (en) * | 2018-08-02 | 2020-03-26 | ノキア ソリューションズ アンド ネットワークス オサケユキチュア | Transmission of probabilistically shaped amplitudes using partially anti-symmetric amplitude labels |
US10944504B2 (en) | 2018-08-02 | 2021-03-09 | Nokia Solutions And Networks Oy | Transmission of probabilistically shaped amplitudes using partially anti-symmetric amplitude labels |
CN110798267B (en) * | 2018-08-02 | 2023-02-17 | 诺基亚通信公司 | Probability shaped amplitude transmission using partially antisymmetric amplitude labels |
CN113169809B (en) * | 2018-10-25 | 2023-09-29 | 华为技术有限公司 | Distributed matcher, channel encoder and method for encoding data bits or symbols |
CN113169809A (en) * | 2018-10-25 | 2021-07-23 | 华为技术有限公司 | Distribution matcher, channel encoder and method for encoding data bits or symbols |
WO2020108771A1 (en) * | 2018-11-30 | 2020-06-04 | Huawei Technologies Co., Ltd. | Device and method for probabilitic constellation shaping |
WO2020170497A1 (en) * | 2019-02-20 | 2020-08-27 | Mitsubishi Electric Corporation | Communication system and method for communicating symbols of bits |
JP2022507015A (en) * | 2019-02-20 | 2022-01-18 | 三菱電機株式会社 | Communication system and how to communicate bit symbols |
CN113424467A (en) * | 2019-02-20 | 2021-09-21 | 三菱电机株式会社 | Communication system and method for communicating bit symbols |
CN113424467B (en) * | 2019-02-20 | 2024-04-19 | 三菱电机株式会社 | Communication system and method for communicating bit symbols |
CN113454962A (en) * | 2019-02-26 | 2021-09-28 | 三菱电机株式会社 | Distribution shaping method, distribution de-shaping method, distribution shaping encoder, distribution shaping decoder, and transmission system |
CN110971559B (en) * | 2019-12-18 | 2022-02-01 | 南京信息工程大学 | Modulation and demodulation method for reducing peak-to-average power ratio of OFDM-PON signal |
CN110971559A (en) * | 2019-12-18 | 2020-04-07 | 南京信息工程大学 | A modulation and demodulation method for reducing the peak-to-average power ratio of OFDM-PON signals based on dynamic control factors |
WO2022183472A1 (en) * | 2021-03-05 | 2022-09-09 | Qualcomm Incorporated | Constellation shaping-related coding selection |
US20240048430A1 (en) * | 2021-03-05 | 2024-02-08 | Qualcomm Incorporated | Constellation shaping-related coding selection |
WO2024016205A1 (en) * | 2022-07-20 | 2024-01-25 | Qualcomm Incorporated | Gray mapping for variable-to-fixed distribution matching |
WO2024045107A1 (en) * | 2022-09-01 | 2024-03-07 | Qualcomm Incorporated | Arithmetic coding including symbol sequence determination |
WO2024148548A1 (en) * | 2023-01-12 | 2024-07-18 | Qualcomm Incorporated | Approximation in probabilistic constellation shaping |
WO2025039176A1 (en) * | 2023-08-22 | 2025-02-27 | Qualcomm Incorporated | Shaping bit levels determination for bit level probabilistic shaping |
Also Published As
Publication number | Publication date |
---|---|
CN110140330B (en) | 2021-08-13 |
CN110140330A (en) | 2019-08-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018121887A1 (en) | Apparatus and method for shaping the probability distribution of a data sequence | |
EP3718228B1 (en) | Communication system and method using a set of distribution matchers | |
Schulte et al. | Divergence-optimal fixed-to-fixed length distribution matching with shell mapping | |
Pikus et al. | Bit-level probabilistically shaped coded modulation | |
US10727951B2 (en) | Low-complexity constellation shaping | |
CN113424467B (en) | Communication system and method for communicating bit symbols | |
US9246510B2 (en) | Apparatus and method for multilevel coding in communication systems | |
CN111670543B (en) | Multi-component encoding for signal shaping | |
US12081274B1 (en) | Probabilistic shaping techniques for high performance coherent optical transceivers | |
EP3328012B1 (en) | Methods of converting or reconverting a data signal and method and system for data transmission and/or data reception | |
EP3949184B1 (en) | Short block length distribution matching algorithm | |
EP3306821B1 (en) | Methods of converting or reconverting a data signal and method and system for data transmission and/or data reception | |
US9136870B2 (en) | Method and apparatus with error correction for dimmable visible light communication | |
WO2019197037A1 (en) | Multi-level encoder and decoder with shaping and methods for multi-level encoding and decoding with shaping | |
CN110892658B (en) | Device and method for coding a message with a target probability distribution of coded symbols | |
Wang et al. | Design of polar codes for run-length limited codes in visible light communications | |
KR100515472B1 (en) | Channel coding and decoding method and multiple-antenna communication systems performing the same | |
KR100526510B1 (en) | Signal transmitting/receiving apparatus and method over multiple antennas in a mobile communication system | |
EP4099652B1 (en) | Method and device for transmitting binary data | |
CN114616773A (en) | Distribution matcher and distribution matching method | |
KR20080026584A (en) | Convolutional coding method and system for block transmission of space-time code according to golden code technology | |
Zhang et al. | Optimal design of linear space code for MIMO optical wireless communications | |
Mittal et al. | Channel State Information feedback overhead reduction using Arithmetic coding in massive MIMO systems | |
KR102098299B1 (en) | Apparatus and method for generating soft output for multiple antenna system | |
Salman et al. | On the Capacity Region of the Three-Receiver Broadcast Channel With Receiver Message Cognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17700895 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17700895 Country of ref document: EP Kind code of ref document: A1 |