Non-binary Split LDPC Codes defined over Finite Groups

Bilal Shams∗†, David Declercq∗, Vincent Heinrich†

∗ ETIS/ENSEA/UCP/CNRS UMR 8051, 95014 Cergy-Pontoise, France
[email protected], [email protected]

† ST-Microelectronics, 38926 Crolles, France
[email protected]

Abstract—In this paper, we propose a practically implementable decoding algorithm for split LDPC codes with parity constraints defined over finite groups. The proposed algorithm generalizes the orders of the variable and check nodes, so that messages of different orders may coexist at the various nodes. This gives a further degree of freedom for better code construction. Using the binary image of the parity check matrix, we define function nodes that map lower-order messages to higher order and vice versa. In order to obtain a reduced-complexity decoder that is practically implementable, we apply the truncated-messages concept at the check nodes and evaluate its performance. We show improved performance in the error-floor region compared to other non-split low-complexity decoding algorithms.

I. INTRODUCTION

Binary Low-Density Parity-Check (LDPC) codes, first discovered by Gallager [1], show capacity-approaching performance when the code size is very large. For moderate-length codes, the performance can be improved by using non-binary LDPC (NB-LDPC) codes, which, however, come with an increased decoding complexity [2]. Consequently, a lot of research has been carried out on decreasing the decoding complexity of NB-LDPC codes, as well as on the construction of better codes for improved performance. In this context, a new family of LDPC codes named split codes was introduced in [3], which provides a solution for decreasing the memory required for decoding LDPC codes. The basic concept of split codes is that the variable and check nodes are not processed with messages of the same order; rather, the variable nodes are processed in a lower order, reducing memory requirements at the variable node. A symbol in GF(2^p) can be written in (p × p) binary matrix form [4]. This matrix is split into sub-matrices, splitting symbols into sub-symbols and lowering the field order at the variable nodes. However, the check nodes are processed in GF(2^p), in order to retain the advantages of a GF(2^p) code. Therefore, at the check nodes, messages of the lower field order have to be combined to form messages of order GF(2^p). Conversely, at the variable nodes, the messages in GF(2^p) have to be marginalized to the lower

order. This leads to a decrease in the memory required at the variable nodes, since smaller vectors are used to store the probabilities. In this paper, we generalize the parity constraints of split codes to finite groups, i.e. the parity check equations are defined over finite groups instead of finite fields [5]. This gives us the opportunity to replace the finite field multiplication with a general linear operator [6]. This generalization of the parity constraints gives a further degree of freedom in code construction, as the various nodes of the code may belong to finite sets with different orders [7]. In Section II, we discuss the generalization of the parity constraints to groups and their associated function nodes. In Section III, we discuss the function nodes for split codes defined over groups, together with the generalization of the splitting order and the decoding algorithm for split codes. In order to simplify the decoding procedure, we adapt the message truncation concept, also termed EMS [8], to split codes. In Section IV, we report simulation results comparing the performance of our proposed algorithm with the regular EMS algorithm for various codes.

II. PARITY CHECK CONSTRAINTS OVER FINITE GROUPS

With codes defined in a finite field, the parity check equations consist of multiplications with the field elements. With codes defined on finite groups, however, the parity constraints can be generalized by a function [9]. This allows us to consider a wider class of parity check codes than codes defined in a finite field [6]. The parity equation in the general case is:

    ∑_i f_ij(v_i) = O  in G    (1)

where f_ij(.) is a general function over the group G, which can be either linear or non-linear. It has the following properties [5]:

• When the parity check code is defined in a finite field GF(q), the function f_ij(.) is linear and represents a circular permutation, which corresponds to multiplication by a non-zero field element h_ij ∈ GF(q).
• When the code is defined in a group G(q), any closed function from G(q) to G(q) can be used.
• A special case of interest for codes defined in a group G(q) is when the function f_ij(.) is linear and has a binary matrix representation of size (p × p). This generalizes the finite-field case, because the function can be either a cyclic permutation or a random projection, depending on whether the matrix is full rank or rank-deficient.

Another advantage of the generalization of the parity equations to groups is that it does not affect the decoding complexity [6]. However, as we make use of the binary images of group elements, we have to consider groups whose cardinality q is a power of 2. In this paper, we use codes defined over finite groups instead of finite fields [9], [10].
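As an illustration of the last case above, the following sketch (with hypothetical helper names, not taken from the paper) applies a (p × p) binary matrix H_ij to the binary map of a group element over GF(2), and contrasts a full-rank cluster, which permutes the elements of G(2^p), with a rank-deficient one, whose image misses some elements:

```python
# Illustrative sketch (hypothetical helper names; not from the paper): a
# linear function f_ij over G(2^p) represented by its (p x p) binary image
# H_ij, applied to the binary map of a group element. Arithmetic is in GF(2).

def to_bits(symbol, p):
    """Binary image of a group element, least significant bit first."""
    return [(symbol >> k) & 1 for k in range(p)]

def from_bits(bits):
    return sum(b << k for k, b in enumerate(bits))

def apply_cluster(H, symbol, p):
    """Map `symbol` through the linear function whose binary image is H."""
    b = to_bits(symbol, p)
    out = [sum(H[r][c] & b[c] for c in range(p)) % 2 for r in range(p)]
    return from_bits(out)

# Full-rank cluster over G(4) (p = 2): the mapping is a permutation.
H_full = [[0, 1],
          [1, 0]]
image_full = {apply_cluster(H_full, a, 2) for a in range(4)}

# Rank-deficient cluster: both output bits repeat the first input bit,
# so the mapping is not onto -- some elements of G(4) are never reached.
H_def = [[1, 0],
         [1, 0]]
image_def = {apply_cluster(H_def, a, 2) for a in range(4)}
```

The full-rank cluster reaches all four elements of G(4), whereas the rank-deficient one reaches only two of them, matching the permutation/projection distinction above.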

A. Binary Images and Bit Clustering

Let {b_kj[i]}_{i=1...p} be the binary map of the symbol c_kj, forming the binary vector b_kj, and let H_ij be the (p × p) binary image of a linear function f_ij(.) from G(q) to G(q). The binary image of a single parity check equation in matrix form is written as [4]:

    ∑_{j=0}^{dc−1} H_ij · b_kj = O_p    (2)

with O_p being the all-zero vector of length p. The binary image of such a parity check equation can also be seen as a matrix of size (p × dc·p), which forms a local component code. This code can then be decoded in two ways: (i) treat it locally and decode it using a local binary decoding algorithm, or (ii) treat it as a single parity check in a finite non-binary group and decode it using the BP equations over its non-binary image. We use the second method.

The non-binary image is computed using a bit clustering process [5]. A cluster is defined as the (p × p) sub-matrix H_ij in the binary image H_b of the whole parity check matrix. Each time a cluster contains a non-zero value, an edge is created in the Tanner graph, connecting the variable node to its corresponding check node. A linear function f_ij(.) is associated to each edge, forming a closed function from G(q) to G(q) with H_ij as its matrix representation. The function corresponds to a linear mapping within the considered group, such that:

    α_n = f(α_m)    (3)

with α_n, α_m ∈ G(q). It is obtained using:

    {b_αn[k]}_{k=1..p} = H_ij · {b_αm[k]}_{k=1..p}    (4)

where {b_αn[k]}_{k=1..p} and {b_αm[k]}_{k=1..p} are the binary representations of α_n and α_m respectively. The matrix H_ij can either be full rank or rank-deficient. If it is full rank, the function corresponds to a permutation of the message values and all the elements of the message are mapped by the function, whereas when it is rank-deficient, the function is not a full mapping, i.e. not all the elements of a message are projected by f_ij(.).

Fig. 1. A binary cluster and its associated function: (a) full-rank case; (b) rank-deficient case.

Fig. 1 shows two different non-zero binary clusters with their respective mappings and illustrates the two cases of projection. The binary cluster in Fig. 1(a) shows a full-rank matrix with a full-rank projection, where all the elements of the message are mapped. The binary cluster in Fig. 1(b) represents a rank-deficient projection, where some elements are projected more than once, leaving others un-projected; the latter are then considered to be null-valued.

III. SPLIT CODES

A. Function Nodes for Split Codes

We extend the concept defined in the previous section to split codes by considering rectangular binary clusters. Instead of a square cluster of size (p × p), we consider a cluster of size (p2 × p1), with p1 < p2. The function then projects a message of order G(2^p1) onto a message of order G(2^p2). An example of the projection of an order-4 message to order 16 can be seen in Fig. 2, which has a rectangular binary cluster. As can be observed, 2^4 − 2^2 elements are left un-projected in the order-2^4 message; they are then considered to be null-valued. This procedure thus allows us to map an order-G(q1)

message U_vp of a variable node to an order-G(q2) message U_pc at the check node input:

    U_pc[β_j] = U_vp[α_i]   if β_j = f_ij(α_i)
    U_pc[β_j] = 0           elsewhere                 (5)

where β_j ∈ G(q2) and α_i ∈ G(q1). Likewise, in the reverse direction, the messages of order 2^p2 from the check nodes can be mapped back to messages of order 2^p1. This results in a memory reduction at the variable node by a factor of 2^(p2−p1) for each message.

Fig. 2. A rectangular binary cluster and its projection function.

B. Splitting Order of Variable and Check Nodes

This concept generalizes the splitting order of symbols, i.e. the various symbol nodes can be split to different orders 2^p1, 2^p2, ..., 2^pn, where p1 ≠ p2 ≠ ... ≠ pn. Likewise, the order in which the check nodes are processed can also be generalized. For this purpose, the cluster size has to be made variable across the check nodes. Note, however, that for a single check node all the input messages must be of the same order.

Fig. 3. A parity check matrix split into different orders of variable and check nodes.

Fig. 3 shows a parity check matrix with different orders of processing for the variable nodes (2^p1, 2^p2, 2^p3) and the check nodes (2^p2, 2^p4, 2^p5). We can observe that for the first check node, all the variable node messages of orders (2^p1, 2^p2, 2^p3) are mapped to order 2^p2 by the function nodes. Similarly, the other function nodes map messages of different orders between the check nodes and the variable nodes. The generalization of the orders of the variable and check nodes gives us freedom in code construction and helps in the development of better codes. There is, however, a disadvantage: for a fixed order p2 of the check nodes, while splitting the symbol nodes to order p1,

the degree of the check nodes is increased by a factor of p2/p1, hence increasing the check node processing complexity. However, this increase in complexity can be balanced by exploiting the (2^p2 − 2^p1) zeros in a message vector at the check node input. The memory requirements at the check nodes are also reduced by storing the messages in 2^p1-sized vectors, neglecting the null values in the input messages. A change in the order of the check nodes affects the number of check nodes: the two are inversely proportional, so an increased check node order decreases the number of check nodes, whereas a decreased order increases it.

C. Decoding Algorithm

The Tanner graph for split codes generalized to groups is shown in Fig. 4. For simplicity, all the variable nodes are processed in order 2^p1 and all the check nodes in order 2^p2. The function nodes f_ij(.) map 2^p1-order messages to order 2^p2 as they are passed from the variable nodes towards the check nodes, and vice versa in the reverse direction. We consider the message values as log-density ratios (LDRs) of the symbol probabilities. The decoder is initialized with information from the channel. The decoding algorithm consists of four main iterative steps.



• Variable Nodes Update: An output message on an edge of a variable node is calculated as the sum of the channel likelihoods and all the other input messages, excluding the message on the edge for which the output is being calculated.
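The variable node rule above can be sketched as follows (a minimal illustration with hypothetical names and toy LDR values, not the authors' implementation):

```python
# Sketch of the variable node update (hypothetical names, toy LDR values;
# not the authors' implementation). Each message is a vector of
# log-density ratios (LDRs), one entry per group element. The output on a
# given edge sums the channel LDRs with every incoming message except the
# one arriving on that same edge (the extrinsic principle).

def variable_node_update(channel_ldr, incoming, edge):
    """Output message toward `edge`, excluding that edge's own input."""
    q = len(channel_ldr)
    out = list(channel_ldr)
    for e, msg in enumerate(incoming):
        if e == edge:
            continue  # extrinsic rule: skip the requesting edge
        out = [out[a] + msg[a] for a in range(q)]
    return out

channel = [0.0, -1.2, -0.7, -2.0]           # channel LDRs for a node in G(4)
inputs = [[0.0, -0.5, -0.1, -1.0],          # message from edge 0
          [0.0, -0.3, -0.9, -0.2]]          # message from edge 1
v_out = variable_node_update(channel, inputs, edge=0)
# v_out combines the channel LDRs with the edge-1 message only
```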



• Function Nodes Update: The function nodes are updated according to Eq. (5), where messages in G(q1) are mapped to order G(q2). For each function node there may exist a separate function, depending on its binary cluster.
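A minimal sketch of this projection, following Eq. (5) (hypothetical names; the mapping f and the LDR values are toy assumptions):

```python
# Minimal sketch of the Eq. (5) projection (hypothetical names; the mapping
# f and the LDR values are toy assumptions). A rectangular (p2 x p1) cluster
# sends an order-2^p1 message into an order-2^p2 message; entries with no
# pre-image stay null-valued, represented here by -inf since the messages
# are log-density ratios.

NEG_INF = float("-inf")

def project_message(U_vp, f, q2):
    """Map U_vp (length q1) to order q2; f[alpha] is beta = f_ij(alpha)."""
    U_pc = [NEG_INF] * q2          # un-projected elements are null-valued
    for alpha, beta in enumerate(f):
        U_pc[beta] = U_vp[alpha]
    return U_pc

q1, q2 = 1 << 2, 1 << 4            # p1 = 2, p2 = 4, as in the Fig. 2 example
f = [0, 5, 10, 15]                 # toy images f_ij(alpha) for alpha in G(4)
U_vp = [0.0, -0.4, -1.1, -2.3]     # order-4 input message (LDRs)
U_pc = project_message(U_vp, f, q2)
# 2^4 - 2^2 = 12 entries of U_pc remain null, as in the text
```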

We refer to our algorithm as split-EMS. In Fig. 5, we show the decoding performance of our split-EMS algorithm versus the non-split regular EMS algorithm. We consider an LDPC code of length 576 bits and rate R = 1/2, defined in G(64). The cluster size for the split code is p1 = 3 bits, which allows the variable nodes to be processed with order-8 messages. The messages are passed between the nodes using the flooding schedule, with a maximum of 100 iterations per codeword. We verified the performance for n_m = 12 and n_m = 18 for the two algorithms. We observe a loss of around 0.3 dB and 0.4 dB for n_m = 12 and n_m = 18 respectively in the waterfall region for split-EMS. However, the performance gap between the two algorithms becomes smaller at higher Eb/N0, and it seems that split codes may outperform the regular EMS at very high Eb/N0.

Fig. 4. Tanner graph for generalized split codes.



• Check Nodes Update: In order to have a simplified procedure at the check nodes, we use the EMS algorithm: we consider only a certain number n_m of the highest message values and perform the check update using those values only [8]. We employ the forward-backward strategy in the implementation of the update process. The outputs are the n_m highest LDRs of symbols satisfying the parity equation of the check node.
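The truncation step of this update can be sketched as follows (hypothetical names; picking the largest discarded LDR as the compensation value is one simple choice, not necessarily the one used in [8]):

```python
# Sketch of the EMS-style truncation (hypothetical names; picking the
# largest discarded LDR as the compensation value is one simple choice,
# not necessarily the one used in [8]). Only the n_m most reliable entries
# of a message are kept for the check node update.

def truncate_message(msg, n_m):
    """Return (values, symbols, gamma): the n_m largest LDRs, the group
    elements they belong to, and a compensation value for the rest."""
    order = sorted(range(len(msg)), key=lambda a: msg[a], reverse=True)
    kept = order[:n_m]
    # one simple choice of compensation value: the largest discarded LDR
    gamma = msg[order[n_m]] if n_m < len(msg) else float("-inf")
    return [msg[a] for a in kept], kept, gamma

msg = [0.0, -3.0, -0.2, -5.1, -1.4, -0.9, -4.2, -2.7]   # order-8 message
vals, syms, gamma = truncate_message(msg, n_m=3)
# syms -> [0, 2, 5]: the three most likely symbols survive; every other
# symbol is later represented by the single compensation value gamma
```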



• Reverse Function Nodes Update: The function f_ij(.) given by Eq. (5) is used inversely on the messages from the check nodes to the variable nodes:

    V_pv[α_i] = V_cp[β_j],   if β_j = f_ij(α_i)    (6)
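A sketch of this reverse mapping under message truncation (hypothetical names; γ denotes the compensation value that represents the truncated entries of a message):

```python
# Sketch of the reverse mapping of Eq. (6) under message truncation
# (hypothetical names; gamma is the compensation value that stands in for
# truncated entries). An element of G(q1) whose image is absent from the
# truncated order-q2 check output falls back to gamma.

def reverse_project(V_cp_vals, V_cp_syms, f, q1, gamma):
    """Map a truncated order-q2 message back to a full order-q1 message;
    f[alpha] = f_ij(alpha) is the same per-edge mapping used forward."""
    lookup = dict(zip(V_cp_syms, V_cp_vals))
    return [lookup.get(f[alpha], gamma) for alpha in range(q1)]

f = [0, 5, 10, 15]                  # toy per-edge mapping, G(4) -> G(16)
vals, syms = [0.0, -0.8], [5, 12]   # truncated check node output (LDRs)
V_pv = reverse_project(vals, syms, f, q1=4, gamma=-4.0)
# only beta = 5 = f(1) survives truncation, so alpha = 1 keeps its LDR
# and every other alpha receives the compensation value gamma
```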

However, since we have fewer values at the check node output due to message truncation, the element to be mapped back to the order-G(q1) message may not be present in the order-G(q2) message. In that case, the compensation value γ, used to represent the values of the truncated elements of the message, is employed. These steps are iteratively repeated until a valid codeword is obtained or a fixed number of iterations has been reached. At the end of each iteration, the a-posteriori information is collected in order to check whether a valid codeword has been obtained.

IV. SIMULATION RESULTS

Split codes provide a solution for memory reduction, but the computational complexity problem remains. They are, however, compatible with other low-complexity decoding algorithms for NB-LDPC codes. Therefore, we make use of the concept used in the EMS algorithm, where messages are truncated by considering the n_m highest values in each message to process the check nodes [8]. This results in a decrease in performance but a huge reduction in the computational complexity, rendering the decoder practically implementable.

Fig. 5. EMS vs. split-EMS: R = 1/2, N = 96 (576 bits), G(64).

In Fig. 6, we compare the performance of the two algorithms for a longer code of 2304 bits. The check nodes are processed in order 64; however, the variable nodes of the split code are processed in order 16, i.e. with p1 = 4. We again verified the performance for n_m = 12 and n_m = 18. There is a loss of 0.2 dB for both n_m = 12 and n_m = 18 in the waterfall zone for split-EMS as compared to the EMS algorithm. However, for higher Eb/N0, we clearly see improved performance of split-EMS over the EMS algorithm. This improved performance can be explained by the fact that split NB-LDPC codes have a binary parity check matrix which is locally less dense, resulting in a larger girth (length of the shortest cycle in the code).

Fig. 6. EMS vs. split-EMS: R = 1/2, N = 2304 bits, G(64).

In Fig. 7, we compare the performance of two split codes with different splitting orders against the EMS algorithm. The two split codes are characterized by cluster sizes p1 = 3 and p1 = 4 respectively. The NB-LDPC code is defined in G(64) and has a length of Nb = 3000 bits with R = 1/2. We see the same kind of performance difference as before, i.e. a loss in the waterfall region and improved performance in the error-floor region. However, an important point to note is that the same split code with splitting order p1 = 4 performs better than with p1 = 3. This shows that choosing a proper splitting order plays a vital role in the decoding performance of split codes.

Fig. 7. EMS vs. split-EMS: R = 1/2, N = 3000 bits, G(64).

In Fig. 8, we compare the performance of a code of length 4800 bits with R = 0.88. We also changed the processing order of the check nodes for split-EMS: the check nodes of the split code are processed with messages of order 128, whereas the check nodes of the regular EMS are processed with messages of order 256. The clusters of the split code are of size (7 × 4), compared to the (8 × 8) binary clusters of the regular EMS. We compare the performances for n_m = 20 and n_m = 32. There is a loss in performance of 0.05 dB for n_m = 20. However, for n_m = 32, the split-EMS algorithm outperforms the EMS algorithm. This is because the check nodes of split-EMS are processed in a smaller field order, so with n_m = 32 for both algorithms, less information is lost than in the EMS algorithm. Secondly, the number of check nodes increases because of the decreased check node order, which increases the error-correcting capability of the code. Thus, not only does choosing a proper order for the variable nodes improve the performance; a proper order for the check nodes also plays a pivotal role in the decoding performance of split codes.

Fig. 8. R = 16/18, N = 4800 bits; G(256) for EMS, G(128) for split-EMS.

V. CONCLUSIONS

Split codes were introduced with the aim of reducing the memory required for decoding, by processing the variable and check nodes in different orders. We proposed a practically implementable decoding algorithm for split codes. We generalized the parity constraints of the check nodes to groups, enabling general linear function nodes in the Tanner graph. These linear functions map messages from one order to another. In order to have reduced-size vectors at the check nodes as well, the message truncation concept was used. We showed that choosing proper splitting orders for the variable and check nodes affects the decoding performance. Low memory requirements, reduced check node processing complexity, and better performance in the error-floor region make the proposed decoder a good candidate for hardware implementation.

REFERENCES

[1] R. G. Gallager, "Low-Density Parity-Check Codes," M.I.T. Press, 1963.
[2] M. Davey and D. J. C. MacKay, "Low density parity check codes over GF(q)," IEEE Comm. Letters, vol. 2, pp. 165–167, June 1998.
[3] A. Voicila, D. Declercq, F. Verdier, M. Fossorier, and P. Urard, "Split non-binary LDPC codes," in Proc. ISIT'08, July 2008.
[4] C. Poulliat, M. Fossorier, and D. Declercq, "Design of non-binary LDPC codes using their binary image: algebraic properties," in Proc. ISIT'06, July 2006.
[5] D. Declercq, "Non-binary group decoder diversity," IEEE Transactions on Communications, October 2008.
[6] L. Sassatelli and D. Declercq, "Analysis of non-binary hybrid LDPC codes," in Proc. ISIT'07, June 2007.
[7] L. Sassatelli and D. Declercq, "Non-binary hybrid LDPC codes: structure, decoding and optimisation," in Proc. ITW'06, Oct. 2006.
[8] A. Voicila, D. Declercq, F. Verdier, M. Fossorier, and P. Urard, "Low-complexity decoding for non-binary LDPC codes in high order fields," IEEE Comm. Letters, vol. 2, Feb. 2008.
[9] A. Goupil, M. Colas, G. Gelle, and D. Declercq, "On belief propagation decoding of LDPC codes over groups," in Int. Symposium on Turbo Codes, Munich, Germany, April 2006.
[10] D. Sridhara and T. E. Fuja, "Low density parity check codes over groups and rings," in Proc. ITW 2002, Oct. 2002.
