
List Decoding of Biorthogonal Codes and the Hadamard Transform with Linear Complexity

Ilya Dumer, Grigory Kabatiansky, and Cédric Tavernier

Abstract— Let a biorthogonal Reed-Muller code RM(1, m) of length n = 2^m be used on a memoryless channel with an input alphabet ±1 and a real-valued output R. Given any nonzero received vector y in the Euclidean space R^n and some parameter ε ∈ (0, 1), our goal is to perform list decoding of the code RM(1, m) and retrieve all codewords c located within the angle arccos ε from y. For an arbitrarily small ε, we design an algorithm that outputs this list of codewords {c} with the linear complexity order of n ln² ε bit operations. Without loss of generality, let vector y also be scaled to the Euclidean length √n of the transmitted vectors. Then an equivalent task is to retrieve all coefficients of the Hadamard transform of vector y whose absolute values exceed nε. Thus, this decoding algorithm retrieves all nε-significant coefficients of the Hadamard transform with the linear complexity n ln² ε instead of the complexity n ln² n of the full Hadamard transform.

Key words: soft-decision list decoding, biorthogonal codes, Hadamard transform.

I. INTRODUCTION

Biorthogonal (first-order) Reed-Muller codes RM(1, m) have been extensively used in communications and addressed in many papers since the 1960s. These codes have optimal parameters and achieve the maximum possible distance d = 2^{m−1} for the given length n = 2^m and dimension m + 1. One renowned decoding algorithm designed by Green [1] performs maximum likelihood decoding of codes RM(1, m) and finds the distances from the received vector y to all codewords c of RM(1, m) with complexity of O(n ln² n) bit operations. Another algorithm designed by Litsyn and Shekhovtsov [2] performs bounded distance decoding and corrects up to n/4 − 1 errors with linear complexity O(n). In the area of probabilistic decoding, a major breakthrough was achieved by Goldreich and Levin [3]. Their algorithm takes any received vector and outputs the list of codewords of RM(1, m) within a decoding radius (n/2)(1 − ε), performing this task with a high probability 1 − 2^{−k} and a low poly-logarithmic complexity poly(mk/ε) for any k > 0 and ε ∈ (0, 1). Recently, list decoding of codes RM(1, m) has been extended

Ilya Dumer is with the Department of Electrical Engineering, University of California, Riverside, CA 92521, USA (e-mail: [email protected]). Research supported in part by NSF grants CCF0622242 and CCF0635339.
Grigory Kabatiansky is with the Institute for Information Transmission Problems, Moscow 101447, Russia, and INRIA, Rocquencourt, France (e-mail: [email protected]). Research supported in part by RFFI grants 06-01-00226 and 06-07-89170 of the Russian Foundation for Fundamental Research.
Cédric Tavernier is with Communications & Systems (CS), Le Plessis Robinson, France (e-mail: [email protected]). Research supported in part by DISCREET, IST project no. 027679 of the European Commission's Information Society Technology 6th Framework Programme.
The material of this paper was presented in part at the 2007 IEEE Symp. Info. Theory, Nice, France, June 25–29, 2007.

to deterministic algorithms. In particular, the algorithm of [4] performs error-free list decoding within the radius (n/2)(1 − ε) with linear complexity O(nε⁻³) for any received vector. This paper advances the results of [4] in two different directions. First, we extend list decoding of codes RM(1, m) to an arbitrary memoryless semi-continuous channel. Second, the former complexity O(nε⁻³) of [4] will be reduced to O(n ln² ε).

In doing so, we use the following setup. Let a binary vector α = (α1, ..., αn) be mapped onto the Euclidean vector a = (a1, ..., an) with symbols aj = (−1)^{αj}. Given two binary vectors α and β, consider the Hamming distance d(a, b), the Euclidean distance D(a, b), and the inner product ab = Σ_j aj bj of their maps a, b. Then

    ab = n − D²(a, b)/2 = n − 2d(a, b).    (1)

Now any binary code C is mapped into the cube {±1}^n, which in turn belongs to the Euclidean sphere S(√n) of radius √n in the Euclidean space R^n. Thus, any binary code C of Hamming distance d(C) becomes a spherical code, where two different codewords have the inner product at most n − 2d(C) and the angle at least arccos(1 − 2d(C)/n).

Below we consider a memoryless channel with an input alphabet ±1 and some larger output alphabet (usually, R). We use a code C ⊆ {±1}^n on this channel and replace an output zj in any position j = 1, ..., n with its log-likelihood ratio

    yj = ln ( Pr{+1 | zj} / Pr{−1 | zj} ).

We then call y = (y1, ..., yn) a received vector. Note that any codeword c1 has a higher posterior probability Pr(c1|y) > Pr(c2|y) than another codeword c2 if and only if it has a larger inner product c1y > c2y. Note also that all codewords become equiprobable for y = 0; therefore, we will assume that y ≠ 0. Without loss of generality, we can multiply y by the scalar √n / ||y||, where ||y||² = Σ_j yj² is the squared Euclidean length of vector y. Then all vectors y and c belong to the same sphere S(√n), with ||y|| = ||c|| = √n.

We now proceed with the biorthogonal codes. Let

    c(x) = c0 + Σ_{1≤i≤m} ci xi

be any affine Boolean function defined on all 2^m points x = (x1, ..., xm) ∈ F2^m. As above, we map all outputs c(x) onto the vector c with symbols

    (−1)^{c(x)} = (−1)^{c0} Π_{1≤i≤m} (−1)^{ci xi}.
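The mapping and identity (1) above are easy to sanity-check numerically. The following small Python sketch (helper names are ours, introduced only for illustration) verifies that the inner product, the squared Euclidean distance, and the Hamming distance of two mapped vectors are related exactly as in (1):

```python
# Numeric check of identity (1): ab = n - D^2(a,b)/2 = n - 2 d(a,b),
# for the map alpha_j -> a_j = (-1)^{alpha_j}.
def to_pm1(alpha):
    """Map a binary vector onto its Euclidean image in {+1, -1}^n."""
    return [(-1) ** a for a in alpha]

def identity_1_holds(alpha, beta):
    n = len(alpha)
    a, b = to_pm1(alpha), to_pm1(beta)
    inner = sum(x * y for x, y in zip(a, b))             # ab
    hamming = sum(x != y for x, y in zip(alpha, beta))   # d(a, b)
    sq_euclid = sum((x - y) ** 2 for x, y in zip(a, b))  # D^2(a, b)
    return inner == n - sq_euclid // 2 == n - 2 * hamming

assert identity_1_holds([0, 1, 1, 0], [1, 1, 0, 0])
```

The check holds for every pair of binary vectors of equal length, since each position contributes either (+1, +1)-type agreement or a distance-2 disagreement.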


Then the 2^{m+1} codevectors c form the biorthogonal code RM(1, m), C ⊂ {±1}^n. Given any received vector y ∈ S(√n) and any parameter ε ∈ (0, 1), our main goal is to retrieve all codewords c∗ ∈ C such that c∗y ≥ nε. In equivalent terms, given any y ∈ R^n and ε ∈ (0, 1), we seek the codewords c∗ within the angle arccos ε from y. To define the list {c∗}, we will construct the corresponding list of affine functions

    L∗(y, ε) = { c∗(x) = c0 + Σ_{1≤i≤m} ci xi : c∗y ≥ nε }.

Here each function c∗(x) is recorded as the vector (c0, ..., cm).

Now let RM(1, m) be decomposed into the orthogonal Hadamard code H, whose codevectors are generated by the n linear functions c(x) with c0 = 0, and its coset H̄ of n complementary vectors. Recall also that the n codewords of code H, considered as rows, form an n × n Hadamard matrix H, which satisfies the equality H·H^T = nE, where E is the identity matrix. Then the vector yH^T = (yc : c ∈ H) represents the Hadamard transform of vector y, whereas the vector (yc : c ∈ H̄) gives the n opposite values. Here the n positions in both vectors yH^T and −yH^T are marked as binary m-tuples (c1, ..., cm). Now we see that the list L∗(y, ε) gives all positions (c1, ..., cm) in which the coefficients yc of the Hadamard transform yH^T have absolute values |yc| ≥ nε.

Our main result is as follows.

Theorem 1: Let the biorthogonal code RM(1, m) of length n = 2^m be used on a general memoryless channel. For any ε ∈ (0, 1) and any received vector y ∈ S(√n), the list of affine functions {c∗(x) : c∗y ≥ nε} can be retrieved error-free with complexity

    O( n ⌈ln² min{ε⁻², n}⌉ )

that has linear order of n⌈ln² ε⌉ for any fixed ε as n → ∞.

Finally, we reformulate Theorem 1 as follows.

Corollary 2: For any constant ε ∈ (0, 1), code RM(1, m) requires linear complexity order of n⌈ln² ε⌉ to output the list of codewords located within
• the angle arccos ε from any received vector y ∈ R^n (soft-decision decoding);
• the Hamming distance (n/2)(1 − ε) from any received vector y ∈ {±1}^n (hard-decision decoding).

Note that linear decoding complexity is achieved in Corollary 2 even if the decoding radius is within an arbitrarily small ε-margin of the code distance. In particular, the new algorithm removes the performance-complexity gap between the maximum likelihood decoding of the Green machine [1] and the bounded-distance decoding of the Litsyn-Shekhovtsov algorithm [2].
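The equivalence above is easy to check against a direct computation: run the full fast Hadamard transform (the n ln² n baseline that Theorem 1 improves upon) and keep the coefficients that pass the threshold nε. A minimal Python sketch follows; the function names are ours, and we assume the standard Sylvester (natural) ordering, so that position k encodes the coefficients (c1, ..., cm) bitwise with c1 as the least significant bit:

```python
import numpy as np

def fht(y):
    """Full fast Hadamard transform via the butterfly recursion:
    T[k] = sum_j (-1)^<k, j> y[j], computed with n log2(n) additions."""
    t = np.asarray(y, dtype=float).copy()
    n, h = len(t), 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = t[j], t[j + h]
                t[j], t[j + h] = a + b, a - b
        h *= 2
    return t

def significant_coefficients(y, eps):
    """Positions k = (c1, ..., cm) whose Hadamard coefficients satisfy |yc| >= n*eps."""
    n = len(y)
    t = fht(y)
    return {k: float(t[k]) for k in range(n) if abs(t[k]) >= n * eps}

print(significant_coefficients([1, -1] * 4, 0.9))   # {1: 8.0}: y is the codeword (-1)^{x1}
```

The decoding algorithm of Sections III and IV produces exactly this dictionary, but without ever computing the full transform.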
Finally, note that in a high-noise case, with ε of a vanishing order n^{−1/2}, the output list can include as many as O(n) codewords. Each of these is defined by log₂(2n) information bits (c0, ..., cm). Thus, in this high-noise case, the newly presented algorithm has complexity n ln² n that closely approaches the bit size O(n ln n) of its output.

In Section II, we consider the Johnson bound for real-valued outputs y and upper-bound the maximum list size max_y |L∗(y, ε)| for any code C. This also yields a tight bound for any biorthogonal code RM(1, m). Then in Sections III and IV, we proceed with a new soft-decision list decoding algorithm and prove Theorem 1.

II. JOHNSON BOUND FOR CODES IN R^n

Given a code C ⊆ {±1}^n of length n and any received vector y ∈ {±1}^n, decoding within the Hamming radius δn produces the list {c ∈ C : cy ≥ (1 − 2δ)n}. Then the classic Johnson bound reads as follows.

The Johnson bound. Let C(n, δn) be a code of minimum Hamming distance d ≥ δn. Then for any y ∈ {±1}^n and any positive ε such that 1 − 2δ < ε² ≤ 1, the list L(C, y, ε) = {c ∈ C : cy ≥ nε} has size

    |L(C, y, ε)| ≤ min{ 2δ / (2δ + ε² − 1), |C| }.    (2)

The following lemma shows that the Johnson bound (2) can be applied to any soft-decision output y ∈ S(√n) without any alterations. A similar lemma is given in [5] with a different proof.

Lemma 3: The Johnson bound (2) holds for any code C(n, δn), any output y ∈ S(√n), and any positive ε such that 1 − 2δ < ε² ≤ 1.

Proof. Consider any list L ⊆ C(n, d) of size L and let b = Σ(c : c ∈ L). Then we have the inequality

    ||b||² = ( Σ_{c∈L} c )( Σ_{c∈L} c ) ≤ Ln + L(L − 1)(n − 2d).

We can also use the Cauchy-Schwarz inequality

    (yb)² ≤ ||y||² ||b||².

By construction of our list L = {c : cy ≥ nε},

    (yb)² ≥ L²n²ε².

Since ||y||² = n, we combine all three inequalities as follows:

    L²n²ε² ≤ n( Ln + L(L − 1)(n − 2d) ),

which gives the required bound (2). □

For the code RM(1, m), Lemma 3 gives the following corollary.

Corollary 4: Let y ∈ S(√n) be a received vector. Then for any ε ∈ (0, 1), the list of affine functions L∗(y, ε) has size

    |L∗(y, ε)| ≤ min{ε⁻², n}.

Remark. The term |C| of (2) is replaced with n in the last expression. Here we use the fact that the binary code RM(1, m) is linear and contains the all-one codeword, in which case at most half the code belongs to L∗(y, ε) for any ε > 0.
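Corollary 4 can be checked by brute force on a small instance. The sketch below is our own test harness (the enumeration order and helper names are assumptions of this illustration): it enumerates all 2^{m+1} codewords of RM(1, m), scales a random output y onto the sphere S(√n), and verifies the list-size bound:

```python
import itertools, math, random

def rm1_codewords(m):
    """All 2^(m+1) codewords of RM(1, m) as +/-1 vectors; point j has x_{i+1} = (j >> i) & 1."""
    n = 1 << m
    words = []
    for c0, *cs in itertools.product((0, 1), repeat=m + 1):
        words.append([(-1) ** ((c0 + sum(ci * ((j >> i) & 1) for i, ci in enumerate(cs))) % 2)
                      for j in range(n)])
    return words

# Empirical check of Corollary 4: |L*(y, eps)| <= min{eps^-2, n} for y scaled to S(sqrt(n)).
m, eps = 4, 0.5
n = 1 << m
random.seed(1)
y = [random.gauss(0, 1) for _ in range(n)]
scale = math.sqrt(n) / math.sqrt(sum(v * v for v in y))
y = [scale * v for v in y]
L = [c for c in rm1_codewords(m) if sum(a * b for a, b in zip(c, y)) >= n * eps]
assert len(L) <= min(eps ** -2, n)
```

Here the bound min{ε⁻², n} = min{4, 16} = 4, and the assertion holds for any choice of y by Lemma 3.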


III. LIST DECODING FOR CODES RM(1, m)

In this section, we design the Sums-Facet algorithm SF(m, ε) that performs soft-decision list decoding of a code RM(1, m) within a threshold nε. Given any i = 1, ..., m − 1, we represent any m-variate linear Boolean function in the form

    C(i)(x1, ..., xm) = c1 x1 + ... + ci xi + Σ_{j=i+1}^{m} cj xj + c0.

Then we define its i-prefix as the i-variate linear Boolean function c(i)(x1, ..., xi) = c1 x1 + ... + ci xi that begins with the same i coefficients c1, ..., ci.

Given a channel output y, the algorithm performs the following m steps. In each step i = 1, ..., m, the SF algorithm receives some list of prefixes

    L(i−1)(y, ε) = { c(i−1)(x1, ..., xi−1) }    (3)

that includes all prefixes c∗(i−1) of the required functions c∗, and derives the subsequent list

    L(i)(y, ε) = { c(i)(x1, ..., xi) = c(i−1) + ci xi }.

Here L∗(i)(y, ε) = { c∗(i) : c∗y ≥ nε, c∗ ∈ L∗(y, ε) } denotes the list of i-prefixes of the required functions. For the given y and ε, we will sometimes shorten our notation and call the above lists L(i) and L∗(i), respectively. In the sequel, we show that L∗(i) ⊆ L(i) for all i.

First, consider the m-dimensional Boolean cube and any i-dimensional facet

    Sα = { (x1, ..., xi, αi+1, ..., αm) },    α = (αi+1, ..., αm).

Here the prefixes (x1, ..., xi) run through F2^i for i ≤ m − 1, and the suffixes α are fixed. Also, Sα = F2^m for i = m. Let aα and bα denote the restrictions of some vectors a, b ∈ R^n to a given facet Sα, and let aα bα be the inner product of these restrictions. Let C(i)(x1, ..., xm) and c(i)(x1, ..., xi) denote the two vectors in {±1}^n that correspond to the above functions C(i) and c(i). Note that all facets Sα give the same vector c(i)(x1, ..., xi), as the latter does not depend on α. Now we can use the equality

    C(i)α(x1, ..., xm) = hα c(i)α(x1, ..., xi) = hα c(i)(x1, ..., xi),

where

    hα = (−1)^{c0 + Σ_{i+1≤j≤m} cj αj}

is the same multiplicative constant for each x ∈ Sα. Since hα = ±1, the vectors C(i)α and c(i) satisfy the equalities

    C(i)α = ±c(i),    yα C(i)α = ±yα c(i)    (4)

on each Sα. Now we use the inner products v(i)α ≡ yα c(i) and define the Sums-Facet function

    ∆(y, c(i)) = Σα |yα c(i)| = Σα |v(i)α|.    (5)

The main idea of the SF algorithm is to calculate ∆(y, c(i)) for each candidate c(i) and employ this function as an upper bound for the unknown product yC(i). Namely, (4) and (5) show that

    yC(i) = Σα yα C(i)α ≤ Σα |v(i)α| = ∆(y, c(i)).    (6)

Then in each step i, we use the function ∆(y, c(i)) and construct the list of prefixes that pass the threshold test:

    L(i)(y, ε) = { c(i) : ∆(y, c(i)) ≥ nε }.    (7)

Finally, for each α, let ᾱ = { (0, α), (1, α) }. For each facet Sα, consider the two (i − 1)-dimensional subfacets Sᾱ defined as

    S0,α = { (x1, ..., xi−1, 0, αi+1, ..., αm) },
    S1,α = { (x1, ..., xi−1, 1, αi+1, ..., αm) }.

Then the inner products v(i)α used in (5) can be recalculated recursively for each prefix c(i) = c(i−1) + ci xi as follows:

    v(i)α = v(i−1)0,α + (−1)^{ci} v(i−1)1,α.    (8)

In summary, SF(m, ε) performs three subroutines in each step i = 1, ..., m: A of (8), B of (5), and C of (7). This is done as follows.

Sums-Facet algorithm SF(m, ε):    (9)
    Input: numbers m, ε, and vector y ∈ S(√n).
    Set i = 0 and L(0) = { c(0) }, where c(0) = 1.
    Step i = 1, ..., m. Input: the list L(i−1) of prefixes c(i−1) and the numbers v(i−1)ᾱ for all ᾱ ∈ F2^{m−i+1}.
        A. For each c(i−1) ∈ L(i−1) and ci = 0, 1, take the prefix c(i) = c(i−1) + ci xi and calculate the 2^{m−i} numbers v(i)α = v(i−1)0,α + (−1)^{ci} v(i−1)1,α.
        B. For each c(i), find ∆(y, c(i)) = Σα |v(i)α|.
        C. Pass c(i) into L(i) if ∆(y, c(i)) ≥ nε.
    Step m + 1: pass c(m) ∈ L(m) into L(m+1) if yc(m) > 0. Otherwise, pass c(m) + 1.

Now consider the Sums-Facet function (5) in more detail. Given any vector c(i), define its Facet Span {c(i)} as the set of 2^{2^{m−i}} vectors c̃(i) obtained by flipping the vector c(i) on different facets Sα. In contrast to the linear extensions C(i), most vectors c̃(i) are obtained by nonlinear transformations. Then our function ∆(y, c(i)) can be considered as the maximum inner product yc̃(i) over the entire span {c(i)}. This setting is illustrated as follows.

Example. In Fig. 1, we consider the code RM(1, 5) with the decoding threshold yc ≥ 12. As an example, we analyze three candidates c(3) = x1, x3, and x1 + x3 in step i = 3; these vectors, along with y, are shown in Fig. 1.


Fig. 1. Decoding of code RM(1, 5). Symbols +1 are marked ◦ and symbols −1 are marked •. In step i = 3, all vectors form 4 facets Sα of length 8.

    Received vector y:
        Facet 1: ◦ ◦ ◦ ◦ ◦ • ◦ •
        Facet 2: • • • ◦ ◦ ◦ ◦ ◦
        Facet 3: ◦ • ◦ ◦ • • • ◦
        Facet 4: • • ◦ ◦ ◦ • • •

    Candidate prefixes (identical on every facet) and the inner products yα c(3) on each facet:

        Prefix    Vector on a facet    Facet 1  Facet 2  Facet 3  Facet 4  Sums-Facet function
        x1        ◦ • ◦ • ◦ • ◦ •        +4       −2        0       +2            8
        x3        ◦ ◦ ◦ ◦ • • • •        +4       −6       +4       +2           16
        x1 + x3   ◦ • ◦ • • ◦ • ◦        −4       −2       +4       −2           12

        Best extension for x1:       no flip   flip   no flip/flip   no flip
        Best extension for x3:       no flip   flip   no flip        no flip
        Best extension for x1 + x3:  flip      flip   no flip        flip

The last three rows of Fig. 1 indicate which facets Sα must be flipped within each span {x1}, {x3}, and {x1 + x3} to obtain the optimal extensions c̃(3) that give the function ∆(y, c(3)). We see that only x3 and x1 + x3 pass the test, since

    max yc̃(3) = 16, if c̃(3) ∈ {x3},
    max yc̃(3) = 12, if c̃(3) ∈ {x1 + x3},
    max yc̃(3) = 8,  if c̃(3) ∈ {x1}.
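The algorithm box (9) is compact enough to state in executable form. The sketch below is our own rendition; the bit ordering (x1 running fastest within a point's index, facets indexed by the suffix bits) is an assumption chosen to match Fig. 1, not a prescription of the paper. Run on the received vector of Fig. 1 with threshold nε = 12, it reproduces the example:

```python
import numpy as np

def sums_facet_decode(y, m, eps):
    """Sketch of the Sums-Facet algorithm SF(m, eps) of (9).
    Assumed convention: position j of y is the point x with x_i = (j >> (i - 1)) & 1,
    so x1 runs fastest and the facets of step i are indexed by the suffix bits.
    Returns [( (c1, ..., cm), c0, |y.c| )] for every surviving codeword."""
    n = 1 << m
    y = np.asarray(y, dtype=float)
    candidates = [((), y)]                 # pairs: prefix (c1..ci), numbers v(i)_alpha
    for i in range(1, m + 1):              # steps i = 1, ..., m
        survivors = []
        for coeffs, v in candidates:
            v0, v1 = v[0::2], v[1::2]      # subfacets x_i = 0 and x_i = 1
            for ci in (0, 1):
                vi = v0 + v1 if ci == 0 else v0 - v1    # recursion (8)
                if np.abs(vi).sum() >= n * eps:         # tests (5) and (7)
                    survivors.append((coeffs + (ci,), vi))
        candidates = survivors
    out = []                               # step m + 1: choose c0 by the sign of y.c
    for coeffs, v in candidates:
        ip = float(v[0])                   # single remaining entry: y.c for c0 = 0
        out.append((coeffs, 0 if ip > 0 else 1, abs(ip)))
    return out

# Received vector of Fig. 1 (facets 1-4 concatenated; the circle symbol is +1, the bullet -1):
y = [ 1,  1,  1,  1,  1, -1,  1, -1,   -1, -1, -1,  1,  1,  1,  1,  1,
      1, -1,  1,  1, -1, -1, -1,  1,   -1, -1,  1,  1,  1, -1, -1, -1]
print(sums_facet_decode(y, 5, 12 / 32))   # [((0, 0, 1, 1, 0), 0, 12.0)], i.e., c = x3 + x4
```

Each step halves the length of every v-vector while the threshold test prunes the candidate list, so the total work stays proportional to the sum of the pruned list sizes times the facet counts.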

Similarly, we can consider the following steps i = 4, 5 and use recursion (8) for the two remaining candidates x3 and x1 + x3. Then it is easy to verify from (8) that the candidate x3 + x4 (obtained from x3 by taking c4 = 1 in step 4) gives the inner product y(x3 + x4) = 12 and passes the test, whereas all other extensions of x3 and x1 + x3 fail.

Finally, we compare the Sums-Facet algorithm (9) with the classic Green machine. Similarly to the SF algorithm, the Green machine calculates all inner products v(i)α using all facets Sα. This calculation is similar to Step A of our algorithm. However, the Green machine skips both Steps B and C of our algorithm. Instead, each step i outputs all 2^i possible prefixes c(i) and their inner products yc(i). More generally, the Green machine performs the complete Fast Hadamard Transform (FHT) of vector y using recursion (8). By contrast, the SF algorithm represents an expurgated version of the FHT, which eventually outputs only those coefficients of the HT-vector whose absolute values exceed the given threshold nε, if such coefficients exist.

IV. LIST SIZE AND COMPLEXITY OF THE SUMS-FACET ALGORITHM

Lemma 5: For any received vector y, any ε ∈ (0, 1), and any step i = 1, ..., m, the list L(i) includes the prefixes (3) of all required functions c∗:

    L∗(i) ⊆ L(i).

Proof. By definition of c∗, yc∗ ≥ nε. Thus, ∆(y, c∗(i)) ≥ nε, according to (6). Then c∗(i) ∈ L(i). □

Below we show that all incorrect candidates are filtered out after m + 1 steps.

Lemma 6: The list L(m+1) obtained after all m + 1 steps equals the required list L∗.

Proof. Indeed, any prefix c(m+1) left in the final step m + 1 is a full function C(m+1) defined on the single facet F2^m. Also, yc(m+1) > 0. Therefore, yC(m+1) = yc(m+1) = ∆(y, c(m)) ≥ nε, and the proof is completed. □

Remark. From (8), we also deduce that ∆(y, c(i)) is a monotonic function of two consecutive prefixes c(i−1) and c(i), and that strict inequality holds for at least one extension ci = 0, 1. This implies that, in general, consecutive steps become more restrictive:

    ∆(y, c(1)) ≥ ... ≥ ∆(y, c(i)) ≥ ... ≥ ∆(y, c(m)) = yc(m+1).

The next lemma shows that the ∆-function is fairly restrictive, in the sense that each possible list L(i) does not accept too many incorrect candidates.

Lemma 7: For any received vector y, any ε ∈ (0, 1), and any step i = 1, ..., m, the list L(i) has size

    L(i) := |L(i)(y, ε)| ≤ min{ε⁻², 2^i}.

Proof. The bound 2^i is obvious, as there exist 2^i prefixes c(i). Note that on each facet Sα, the corresponding vectors c(i) form an orthogonal code of length 2^i. For each prefix c(i) and each facet Sα, let ĉ(i)α = ±c(i) be the vector such that

    ĉ(i)α yα ≥ 0.    (10)

Let ĉ(i) ∈ {±1}^n be the corresponding vector of length n that equals ĉ(i)α on each facet Sα. For each α, different vectors c(i) are orthogonal, and so are the vectors ĉ(i)α. Then their extensions ĉ(i) to full length are also orthogonal. Next, observe from (6) and (10) that for any prefix c(i)

    ∆(y, c(i)) = ∆(y, ĉ(i)) = yĉ(i).

Finally, recall that any prefix c(i) ∈ L(i) satisfies the Sums-Facet criterion ∆(y, c(i)) ≥ nε. Therefore, ĉ(i)y ≥ nε. Now we apply the generalized Johnson bound of Lemma 3 to the orthogonal code {ĉ(i)} and its sublist {ĉ(i) : ĉ(i)y ≥ nε}. Similarly to Corollary 4, this gives the estimate ε⁻² for the latter list. □

In combinatorial terms, the above proof shows that any two vectors taken from different Facet Spans are pairwise orthogonal. Fig. 1 also illustrates the same fact. Here the three original prefixes x1, x3, and x1 + x3 are orthogonal on each facet, and so are all flipped versions of these prefixes.

Now we introduce an important parameter

    p = ⌈log₂ min{ε⁻², n}⌉.
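The pruning guaranteed by Lemma 7 is what makes the total work linear in n. The bookkeeping below is our own crude sketch, anticipating the accounting in the proof of Theorem 1: roughly 5(s + i)2^{m−i+1} bit operations per candidate in step i, with the list size capped via Lemma 7 and the parameter p; the constant 5 and the symbol size s follow the paper, while the function name and the exact tabulation are assumptions of this illustration:

```python
import math

def sf_bit_ops_bound(m, eps, s=8):
    """Crude upper bound on the bit operations of SF(m, eps):
    sum over steps i of 5 * L(i-1) * (s + i) * 2^(m - i + 1),
    where L(i-1) <= min{eps^-2, 2^(i-1)} is bounded through p of Lemma 7."""
    n = 1 << m
    p = math.ceil(math.log2(min(eps ** -2, n)))
    total = 0
    for i in range(1, m + 2):                # steps 1, ..., m + 1
        list_bound = 2 ** min(i - 1, p)      # |L(i-1)| <= min{2^(i-1), 2^p}
        total += 5 * list_bound * (s + i) * 2 ** (m - i + 1)
    return total
```

For fixed ε, the cost per channel symbol stabilizes as m grows, i.e., the total is linear in n: with ε = 0.1 and s = 8, the per-symbol bound changes by only a few percent between m = 10 and m = 20, whereas the full FHT spends on the order of m(s + m) bit operations per symbol.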


Corollary 8: Each step i = 1, ..., m + 1 leaves

    L(i) ≤ 2^i, if i ≤ p;    L(i) ≤ 2^p, if i ≥ p + 1    (11)

prefixes c(i) in the list L(i).

Proof of Theorem 1. Given two positive integers k and i, below we use a procedure Σ(2^k, i) that adds 2^k i-digit binary numbers. We estimate the complexity of Σ(2^k, i) in bit operations, with one operation counted for each addition, inversion, and comparison of bits. To perform Σ(2^k, i), we first couple the i-digit numbers in pairs and find 2^{k−1} (i + 1)-digit sums. Then we proceed in the same manner using pairwise additions. Thus, Σ(2^k, i) requires k steps and has complexity

    |Σ(2^k, i)| = Σ_{l=1}^{k} 2^{k−l}(i + l − 1) < (i + 1)2^k.

Now we use Σ(2^k, i) to estimate the complexity of each step i = 1, ..., m of our SF algorithm (9), which outputs the list L(i) using the three subroutines A, B, and C. For each candidate c(i) ∈ L(i−1)(y) × {0, 1}, the first subroutine A calculates 2^{m−i} real numbers v(i)α. Any such calculation uses one addition and, possibly, one inversion of two real numbers in (8). To count these real-valued operations in binary bits, we assume that each symbol of the received vector y is formed by s bits, where s is a fixed parameter for any n. Thus, step i = 1 uses s-digit inputs and gives (s + 1)-digit outputs. Consequently, step i inputs (s + i − 1)-digit numbers v(i−1)ᾱ and outputs 2^{m−i} (s + i)-digit numbers v(i)α. This calculation requires

    Ψ1(i) ≤ (s + i − 1)2^{m−i+1}

binary operations for each prefix c(i). The second subroutine B calculates the function ∆(y, c(i)) of (5) using the numbers v(i)α. Here we need at most (s + i)2^{m−i} bitwise inversions and

    |Σ(2^{m−i}, s + i)| < (s + i + 1)2^{m−i}

bitwise additions. The procedure gives an (s + m)-digit number ∆(y, c(i)) and has complexity

    Ψ2(i) ≤ (s + i)2^{m−i+1} + 2^{m−i}.

The third subroutine C retrieves the list (7) and requires

    Ψ3(i) = s + m

operations per each candidate c(i). Note that for any integers m, s ≥ 1 and any step i ≤ m,

    s + m ≤ (s + i)2^{m−i}.

Given at most 2L(i−1) candidates processed in any step i, the entire complexity Ψ(i) is

    Ψ(i) ≤ 2L(i−1)[ Ψ1(i) + Ψ2(i) + Ψ3(i) ] ≤ 5L(i−1)(s + i)2^{m−i+1}.

Finally, the last step m + 1 requires only one comparison and at most s + m inversions per each prefix. Now we use bound (11) for L(i) and complete the proof by calculating the entire complexity Ψ(m, ε) of SF(m, ε) as follows:

    Ψ(m, ε) = Σ_{i=1}^{m+1} Ψ(i) ≤ 5n ( Σ_{i=1}^{p} (s + i) + Σ_{i=p+1}^{m+1} (s + i)2^{p−i} )
            ≤ 5np(2s + p + 1)/2 + 5n(s + p + 2)
            ≤ 5n(s + p)(p + 2) = O( n ⌈ln² min{ε⁻², n}⌉ ). □

Remark. Alternatively, we can also assume that any two real numbers are added or compared in one operation. However, the above analysis is more conservative. In particular, it accounts for the fact that all steps employ real numbers of growing bit length s + m as m → ∞, even if the original channel symbols yj have a fixed length s. Note also that we obtain a similar bound on Ψ(m, ε) if we replace s bit operations with one byte operation but still use an increasing number of s-sized bytes as m → ∞.

Above, our outputs y were taken from the entire space R^n or, equivalently, from the sphere S(√n) if normalized. Obviously, the same results also hold for any quantized channel, with the outputs taken from a discrete space {±sk}^n of size (2M)^n, where the sk are some positive constants and k = 1, ..., M.

Finally, note that Corollary 2 is obtained if the inner product nε is replaced with the Euclidean or the Hamming distance. Note also that the last case, decoding in the Hamming metric, has been recently considered in [6] using a slightly different hard-decision algorithm. It is also proven in [6] that for codes RM(1, m), the classic Johnson bound is tight up to a universal constant, given any Hamming radius t = d(1 − ε) with ε ∈ (0, 1). Namely, there exist outputs y that are surrounded by min{ε⁻²/4, n/2} or more codewords c ∈ RM(1, m) in a sphere of radius t.

V. CONCLUDING REMARKS

In this paper, a new decoding algorithm is designed for biorthogonal codes RM(1, m) that decodes any output y into the list of codewords {c∗ : c∗y ≥ nε}. For any ε ∈ (0, 1), the algorithm requires linear complexity order of n⌈ln² ε⌉ instead of the order n ln² n of the Hadamard transform. In the decoding process, the algorithm employs an efficient "Sums-Facet function" and removes distant codewords c by applying a threshold test to their intermediate prefixes instead of the codewords. In this way, this algorithm extends the classic Green algorithm. In equivalent terms, the algorithm performs a linear order of n⌈ln² ε⌉ bit operations to retrieve all nε-significant Hadamard coefficients, i.e., those whose absolute values exceed the threshold nε.

REFERENCES

[1] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes, North-Holland, Amsterdam, 1977.
[2] S. Litsyn and O. Shekhovtsov, "Fast decoding algorithm for first order Reed-Muller codes," Problems of Info. Transmission, vol. 19, pp. 87–91, 1983.
[3] O. Goldreich and L. A. Levin, "A hard-core predicate for all one-way functions," 21st ACM Symp. Theory of Computing, Seattle, WA, USA, May 14–17, 1989, pp. 25–32.
[4] G. Kabatiansky and C. Tavernier, "List decoding of Reed-Muller codes of the first order," 9th Int. Workshop Algebraic and Combinatorial Coding Theory, Kranevo, Bulgaria, June 19–25, 2004, pp. 230–235.
[5] V. I. Levenshtein, "Universal bounds for codes and designs," Chapter 6 in Handbook of Coding Theory, V. S. Pless and W. C. Huffman (Eds.), Elsevier, 1998, pp. 499–648.
[6] I. Dumer, G. Kabatiansky, and C. Tavernier, "List decoding of the first-order binary Reed-Muller codes," Problems of Information Transmission, vol. 43, pp. 225–232, 2007.

Ilya Dumer (M'94–SM'04–F'07) received the M.Sc. degree from the Moscow Institute of Physics and Technology, Russia, in 1976 and the Ph.D. degree from the Institute for Information Transmission Problems of the Russian Academy of Sciences in 1981. From 1983 to 1995, he was with the Institute for Information Transmission Problems. Since 1995, he has been a Professor of Electrical Engineering at the University of California, Riverside. During 1992-1993, he was a Royal Society Guest Research Fellow at Manchester University, Manchester, U.K., and during 1993-1994, an Alexander von Humboldt Fellow at the Institute for Experimental Mathematics in Essen, Germany. His research interests are in coding theory, discrete geometry, and their applications. Dr. Dumer is currently an Associate Editor for the IEEE TRANSACTIONS ON INFORMATION THEORY.


Grigory Kabatiansky received the M.Sc. degree from Moscow State University, Russia, in 1971 and the Ph.D. degree from the Moscow Institute of Aviation in 1979. Since 1990, he has been with the Institute for Information Transmission Problems of the Russian Academy of Sciences. He has also been an adjunct researcher with the TEAM SECRET at INRIA, France, since 2002 and with the French-Russian Laboratoire J.-V. Poncelet since 2005. His research interests are in coding theory and its applications.


Cédric Tavernier received the M.Sc. degree from the Pierre and Marie Curie University in Paris, France, in 1998 and the Ph.D. degree from the Ecole Polytechnique of France in 2004. He worked as an engineer-cryptologist at ERCOM, Paris, France, from 2004 to 2005, and at THALES Communications, Colombes, France, from 2005 to 2008. Currently, C. Tavernier is an engineer-cryptologist at Communications & Systems, Paris, France. His research interests are in coding theory and cryptography.
