List Decoding for Long Reed-Muller Codes of Order 2 and Application to Cryptanalysis of Block Ciphers
Ilya Dumer, Rafaël Fourquet and Cédric Tavernier

Abstract—In this paper we design an algorithm that determines the list of quadratic approximations of an m-variate Boolean function within a given bias. We show how to adapt this algorithm in order to find multiple quadratic approximations of several rounds of the DES with larger biases than those obtained with linear approximations. More generally, assuming that a low-degree approximation is available, we propose a new and very efficient attack based on a soft-decision decoding technique for low-order Reed-Muller codes. We give our simulation results on the DES block cipher.

Keywords: Linear cryptanalysis, Reed-Muller codes, coding theory, DES, non-linear approximations.

I. INTRODUCTION

Since its design by Matsui in 1993 [15] and its success in the cryptanalysis of the DES [16], linear cryptanalysis has become a powerful tool in the analysis of block ciphers. Designers of block ciphers now have at least to prove that their cipher is immune to linear cryptanalysis. One of the crucial steps of linear cryptanalysis, in terms of time and memory complexity, is the quantity of plaintext-ciphertext pairs (hereafter called the data complexity) required for the attack to succeed with good probability. This data complexity can be derived from linear relations involving key, plaintext and ciphertext bits. Suppose that the attacker has obtained such a relation which is satisfied with bias 1/2 + ε or 1/2 − ε; then the data complexity is proportional to 1/ε^2. Namely, Matsui proved that a data complexity of N = 1/ε^2 ≈ 2^{43} was sufficient to recover the key of the 16-round DES with probability 85% using 2^{43} evaluations of the DES [15]. More recently, Junod showed that with an available data complexity of 2^{43} the complexity of the attack had been overestimated by Matsui, and that 2^{41} evaluations of the DES were enough to succeed in 85% of the cases [17].

Non-linear cryptanalysis is a natural extension of Matsui's linear cryptanalytic techniques in which linear approximations are replaced by non-linear expressions. To reduce the complexity, it is natural to consider higher-degree approximations. Knudsen and Robshaw first considered non-linear approximations in linear cryptanalysis in [24].

The goal of this paper is to present a general-purpose algorithm which outputs quadratic relations between key, plaintext and ciphertext bits with the best possible biases. These algorithms reconstruct the quadratic relations coefficient by coefficient. More precisely, we investigate the problem of finding almost all the quadratic approximations of

an m-variable Boolean function which are satisfied with a given bias ε. In previous works [26], [27], [28], the authors construct approximations of a block cipher by constructing approximations of its S-boxes. This method has a major drawback: it is difficult to exploit such approximations over more than one round of a block cipher. This problem has been addressed only in the case of degree-one approximations in [23]. It can be related to the well-studied problems of learning polynomials with queries in the field of computational learning theory, and of list decoding in the field of coding theory. It has already been considered for improving fast correlation attacks on stream ciphers [20]. For higher orders, this problem has not been addressed for small bias ε and a large number of variables m, because it is well known that the size of the list can be exponential in m.

Non-linear approximations often exhibit larger absolute biases than linear ones; however, their use presents a serious disadvantage: determining the involved key bits is difficult, since it requires testing exhaustively all possible candidates. We show in this paper that reconstructing the involved key bits is equivalent to a soft-decision decoding problem for Reed-Muller codes over a Gaussian channel. We show that, finally, using a non-linear approximation presents an advantage over a linear approximation.

The paper is organized as follows: in Sect. III, we show how low-order approximations of a block cipher can be used to reconstruct the key bits efficiently in a known plaintext-ciphertext attack, and we give the corresponding complexity in Sect. IV. In Sect. V, we describe a list decoding algorithm for long Reed-Muller codes of order 2, which allows us to find quadratic approximations of a block cipher. Finally, we propose an application of these algorithms in Sect. VI. TO BE COMPLETED!

II. NOTATIONS

We denote by F_2 the field with two elements, and by B_m, for a positive integer m, the set of all Boolean functions f : F_2^m → F_2. For any functions f, g ∈ B_m, we denote by wt(f) the Hamming weight of f, that is wt(f) = |{x ∈ F_2^m | f(x) = 1}|, and by d(f, g) the Hamming distance between f and g (d(f, g) = wt(f + g)). For any Boolean function f ∈ B_m, we denote by F(f) the following character sum, related to the Walsh transform of f:

F(f) = \sum_{x ∈ F_2^m} (−1)^{f(x)} = n − 2 wt(f).   (1)
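As a quick illustration of relation (1), the following Python sketch (ours, not part of the paper) computes F(f) from the truth table of f; the function name and the toy example are our own.

# Computing the character sum F(f) of equation (1) from the truth table of a
# Boolean function f on m variables (a minimal sketch, not the authors' code).
def character_sum(truth_table):
    # F(f) = sum over x of (-1)^f(x) = n - 2*wt(f)
    n = len(truth_table)          # n = 2^m
    wt = sum(truth_table)         # Hamming weight of f
    return n - 2 * wt

# Example: f(x1, x2) = x1*x2 has weight 1 on n = 4 points, so F(f) = 4 - 2 = 2.
assert character_sum([0, 0, 0, 1]) == 2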



• We call Reed-Muller code of order r, and denote RM(r, m), the linear code of length n = 2^m, minimal distance d_min = 2^{m−r} and dimension \binom{m}{0} + ... + \binom{m}{r}, defined as the set of evaluations of the polynomials c ∈ GF(2)[X_1, ..., X_m] of degree at most r. In the coding theory field, such an element is represented by its truth table.
• A block cipher can be seen as a vectorial function Encrypt : GF(2)^u × GF(2)^v → GF(2)^u. We denote respectively by P, C, K the plaintext, ciphertext and key vectors of a block cipher.
• We denote by ⟨·,·⟩_k the usual scalar product of binary k-vectors: ⟨x, y⟩_k = \sum_{i=1}^{k} x_i y_i, where x = (x_1, ..., x_k) and y = (y_1, ..., y_k).
• If Y = Encrypt(X, K), then Y = (Y_1, ..., Y_u), X = (X_1, ..., X_u) and K = (K_1, ..., K_v).
• If i = (i_1, i_2, ..., i_h), we denote X^i = X_{i_1} × ... × X_{i_h}.
• We denote by Pr the probability.
• We call an element of RM(r, m) an m-variate polynomial of degree at most r.
• For clarity, we make a correspondence between a position i = i_0 + 2·i_1 + ... + 2^k·i_k of a codeword and the coordinate (i_0, i_1, ..., i_k).

III. KEY RECONSTRUCTION

Our aim in this section is to show how we can use low-degree approximations of a block cipher in order to reconstruct those key bits which are involved in these relations.

A. About maximum-likelihood decoding of RM codes

Let RM(r, m) be the order-r Reed-Muller code of length n = 2^m, dimension \binom{m}{0} + ... + \binom{m}{r} and minimal distance d = 2^{m−r}. Let C be a noisy codeword. Maximum-likelihood (ML) decoding consists in finding the closest elements D ∈ RM(r, m), i.e. those that minimize the Hamming distance between C and D. Results concerning ML decoding of RM(r, m) codes are known: it is shown in [12] that ML decoding of RM codes of fixed order r yields a substantially lower residual bias

ε_r^{min} = m^{r/2} n^{−1/2} (c(2^r − 1)/r!)^{1/2}, m → ∞,   (2)

where c > ln(4) and ε_r^{min} corresponds to the bias of the error weight n/2 (1 − ε_r^{min}). We note that for the second-order Reed-Muller code, the experimental results of [13] showed that the algorithm of [14], [13] works well for ε_2^{min} if m < 13. However, the best known algorithm works as follows:

Theorem 1: [10] Long RM(r, m) codes of fixed order r can be decoded with quasi-linear complexity O(n log(n)) and decoding threshold

n/2 (1 − ε̃_r),  ε̃_r = (cm/d)^{1/2^r},  c > ln(4),  m → ∞.   (3)
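As a rough numerical illustration of the threshold (3) (our own sketch; the constant c = 1.4 is just an arbitrary choice above ln 4):

# Decoding threshold n/2 * (1 - (c*m/d)^(1/2^r)) of Theorem 1, evaluated numerically.
def rm_threshold(r, m, c=1.4):        # any c > ln(4) ~ 1.386 works asymptotically
    n, d = 2**m, 2**(m - r)
    eps = (c * m / d) ** (1.0 / 2**r)
    return n / 2 * (1 - eps)

print(rm_threshold(2, 20))            # order-2 Reed-Muller code of length 2^20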

B. Higher order approximation and reconstruction of the key

Finding good higher-order approximations is a difficult task; however, it is also very difficult to reconstruct the key efficiently from a higher-order approximation. We show in this part that reconstructing the key is equivalent to a classical problem, namely soft-decision decoding of Reed-Muller codes over a Gaussian channel.

1) Reed-Muller code interpretation: For a given block cipher, we denote by X = (X_1, ..., X_u) the plaintext, by K = (K_1, ..., K_v) the key, and by Y = Encrypt(X, K) = (Y_1, ..., Y_w) the corresponding ciphertext. For any Boolean function g in w variables, the function g(Encrypt(X, K)) can be considered as a Boolean function in u + v variables. In this section, we assume that, for a given function g, we have an approximation P(X, K) of g(Encrypt(X, K)), of degree h in X and of degree k in K, such that

Pr_{X,K}[g(Encrypt(X, K)) = P(X, K)] = 1/2 + ε,   (4)

where

P(X, K) = \sum_{i∈I, j∈J} a_{(i,j)} X^i K^j = \sum_{i∈I} X^i \sum_{j∈J} a_{(i,j)} K^j.

Here, i and j are multi-indices, and if for example i = (i_1, ..., i_u), X^i stands for the monomial X_1^{i_1} ... X_u^{i_u}. We recall that |I| = \binom{u}{0} + \binom{u}{1} + ... + \binom{u}{h} and |J| = \binom{v}{0} + \binom{v}{1} + ... + \binom{v}{k}. Practically, as for linear cryptanalysis, we assume that equality (4) still holds for almost every fixed value of K. Then, when K is fixed, we will consider P_K(X) = P(X, K) as an element of RM(h, u). The coefficients a_{(i,j)} are known, but the coefficients K^j represent the unknowns that we want to determine. More precisely, using a sample of plaintext-ciphertext pairs, we will reconstruct the polynomial P_K(X), and by identification of the coefficients we will be able to determine the terms of the form \sum_{j∈J} a_{(i,j)} K^j. This is a decoding problem for Reed-Muller codes of order h and length 2^u.

2) Reconstruction of the key from given plaintext-ciphertext pairs: Before starting, for complexity reasons, we have to assume that the number u' of variables involved in P_K(X) is not too large, so that, denoting by X' = (X_{t_1}, ..., X_{t_{u'}}) those variables, we have P_K(X) = P_K(X') ∈ RM(h, u'), with u' < u. Indeed, the complexity of decoding algorithms for Reed-Muller codes is exponential in u' (see [10]). Now let us give a refinement of (4): it is possible, by sampling, to determine biases ε_x, for each x ∈ F_2^{u'}, such that

Pr_{X,K}[g(Encrypt(X, K)) = P(X, K) | X' = x] = 1/2 + ε_x.   (5)

Now, in order to reconstruct P_K(X'), we assume that we have a sample S of pairs (X, Encrypt(X, K)) of size N associated with the key K, and we denote S_x = {(X, Y) ∈ S | X' = x}, of size N_x (\sum_{x∈F_2^{u'}} N_x = N). Then, using (5), we will construct a vector y ∈ R^{2^{u'}}, which will be soft-decoded in RM(h, u') to reconstruct P_K(X'). The value of this vector y at position x ∈ F_2^{u'} is constructed as follows.

According to (5), we have that Pr_X[g(Encrypt(X, K)) = P_K(X') | X' = x] = 1/2 + ε_x. The main idea is then to consider, using S, that the value P_K(x) is transmitted N_x times at position x over a channel with error probability 1/2 − ε_x, the received values being the vector R_x = (g(Y))_{(X,Y)∈S_x}. We discuss in Sect. IV the complexity of decoding RM codes with repeated symbols.

For x ∈ F_2^{u'}, let s_x be the Hamming weight of R_x (i.e. the number of components equal to 1 in R_x; the number of 0's is then N_x − s_x). Now let P_1 = (1/2 − ε_x)^{N_x − s_x} (1/2 + ε_x)^{s_x} and P_0 = (1/2 − ε_x)^{s_x} (1/2 + ε_x)^{N_x − s_x}. Then the probability that P_K(x) = 1 equals p_1 = P_1/(P_0 + P_1), and the probability that P_K(x) = 0 equals p_0 = P_0/(P_0 + P_1). So we could choose as the value of y at position x the most probable value, sign(ln(p_0/p_1)), but it is more efficient to take the soft quantity ln(p_0/p_1), i.e. we choose

y(x) = ln(p_0/p_1) = (N_x − 2 s_x) ln((1/2 + ε_x)/(1/2 − ε_x)).   (6)

Manipulating log-probability quantities, rather than working with the probabilities themselves, is generally preferred because of computational issues such as the finite-precision representation of numbers, and because log-probability quantities represent information as it is defined in Information Theory. Soft information yields reliability measures for the received bits and is generated from channel observations in the physical layer. The positions x for which we have no information (N_x = 0) are naturally neutralized by setting y(x) = 0, these positions being treated as erasures. Finally, we can reconstruct P_K(X') by determining the element C ∈ RM(h, u') that maximizes the quantity

\sum_{x∈F_2^{u'}} y(x) (−1)^{C(x)}.

Thus we have translated the problem of reconstructing the key into a soft-decision decoding problem for Reed-Muller codes. Fortunately, I. Dumer and K. Shabunov (see [1]) have constructed such a soft-decision decoding algorithm for Reed-Muller codes of any order. The complexity of this algorithm is O(u' 2^{u'}).
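To make the construction concrete, here is a small Python sketch (our illustration, not the authors' implementation) that builds the soft vector y of (6) from a sample and then performs soft ML decoding by exhaustive correlation; the helper names (sample, g, project) and the calling convention are assumptions of ours.

from math import log

def soft_vector(sample, g, project, u_prime, eps):
    # sample: list of (X, Y) pairs; g: Boolean function of Y; project(X) -> index of X'
    # eps[x]: estimated bias at position x. Returns the 2^u' real values y(x) of (6).
    n = 1 << u_prime
    N = [0] * n                       # N_x: number of pairs with X' = x
    s = [0] * n                       # s_x: among them, how many have g(Y) = 1
    for X, Y in sample:
        x = project(X)
        N[x] += 1
        s[x] += g(Y)
    return [(N[x] - 2 * s[x]) * log((0.5 + eps[x]) / (0.5 - eps[x])) if N[x] else 0.0
            for x in range(n)]

def best_codeword(y, codewords):
    # Soft ML decoding: the codeword c (truth table) maximizing sum_x y(x) * (-1)^c(x).
    return max(codewords, key=lambda c: sum(yx * (1 - 2 * cx) for yx, cx in zip(y, c)))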

3) Example: Let P(X, K) = X_1 K_2 + X_2 K_4 + X_3 K_1 K_5 + K_2, which lies in RM(1, 3) for fixed K, and assume that P(X, K) is equal to f(X, K) := g(Encrypt(X, K)) on more than 2^{64+128}(1/2 + ε) inputs (X, K), where X = (X_1, ..., X_{64}) and K = (K_1, ..., K_{128}). Here, X' = (X_1, X_2, X_3), so that for fixed K we have P_K(X') = P(X, K). Let S be a sample of N pairs (X, Encrypt(X, K)). Let s_x^0 = #{(X, Y) ∈ S | X' = x, g(Y) = 0} and s_x^1 = #{(X, Y) ∈ S | X' = x, g(Y) = 1}. Thus, by construction, we have a soft information for each position x given by the formula

y(x) = (s_x^0 − s_x^1) ln((1/2 + ε_x)/(1/2 − ε_x)).

Then we determine the coefficients of P_K by constructing the affine function L that maximizes the quantity

\sum_{x∈F_2^3} y(x) (−1)^{L(x)}.

As L is affine, this task can be done by a fast Fourier transform, and we obtain L(X_1, X_2, X_3) = l_0 + l_1 X_1 + l_2 X_2 + l_3 X_3 = P_K(X'). Finally, by identification, we get l_1 = K_2, l_2 = K_4, l_3 = K_1 K_5 and l_0 = K_2.
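The fast-transform step mentioned in the example can be sketched as follows (our own illustration under a fixed bit-ordering convention, not the paper's code): a Walsh-Hadamard transform of the soft vector y gives, for every linear function ⟨a, x⟩, the correlation \sum_x y(x)(−1)^{⟨a,x⟩}, and the best affine L is the entry of largest magnitude.

def walsh_hadamard(y):
    # Fast Walsh-Hadamard transform of a list of 2^k real values.
    w = list(y)
    h = 1
    while h < len(w):
        for i in range(0, len(w), 2 * h):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return w

def best_affine(y):
    # Returns (a, c) such that L(x) = <a, x> + c maximizes the correlation with y.
    w = walsh_hadamard(y)
    a = max(range(len(w)), key=lambda i: abs(w[i]))
    return a, 0 if w[a] >= 0 else 1   # negative correlation means constant term 1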

C. Reconstruction of the key from chosen plaintexts

If P(X, K) ∈ RM(h, u) with u large, then for complexity reasons it is not possible to apply the previous method; however, assuming that the attacker can choose the plaintexts, we can use decoding algorithms suited to large lengths. This problem is better known as Learning Polynomials from Queries (see [5]). Reed-Muller RM(r, m) codes of length 2^m are considered on a binary symmetric (BS) channel with high crossover error probability 1/2 − ε. For an arbitrarily small ε > 0, the recursive decoding algorithms of [11] retrieve all information bits of RM codes of fixed order r with a vanishing error probability and sublinear complexity of order O(m^{r+1}). These algorithms utilize a vanishing fraction of the received symbols, for both hard- and soft-decision decoding.

IV. DECODING OF RM CODES WITH REPEATED SYMBOLS

A. Introduction

In this section, we consider decoding thresholds for RM codes whose codewords are transmitted repeatedly, l > 1 times, and show how this relates to the decoding problem of Section III. We assume that each symbol is transmitted over a binary symmetric channel BSC_p with error probability p. We will use the notation

p = (1 − ε)/2,   q = (1 + ε)/2.

We first consider the decoding thresholds of the recursive algorithms developed in [29] and [30]. These thresholds are defined in [30] for long RM codes as follows.

Theorem 1: Let the parameters r and m satisfy the asymptotic restriction

(m − r)/ln(m) → ∞, as m → ∞.   (7)

Then RM codes RM(r, m) can be decoded on a BSC_p with complexity of order (3n log_2 n)/2 and a vanishing block error probability if

p ≤ (1 − (4m/d)^{1/2^r})/2.   (8)

Thus, RM codes give a vanishing error probability if the parameter ε = 1 − 2p satisfies the restriction ε ≥ ε_1, where

ε_1 = (4m/d)^{1/2^r}.   (9)

Our goal is to find a similar threshold ε_l when the same codeword is transmitted l times, where l is some constant. We will consider two different settings. In setting A, each symbol c_i of a code RM(r, m) is transmitted l times and is received as some vector u_s ≡ u_s(i) of length l and Hamming weight s ≡ s(i) that depends on the position i = 1, ..., n. We then use majority decoding of u_s into one symbol (0 or 1).

No further information is used in the subsequent decoding of RM codes. This corresponds to hard-decision decoding of a repetition code (l, 1, l) for each position i. In the second setting, B, we use soft-decision decoding. Given some vector u_s in position i, we have the conditional probabilities

P{c_i = 1 | s} = p_s/(p_s + q_s),   P{c_i = 0 | s} = q_s/(p_s + q_s),

where

p_s = \binom{l}{s} q^s p^{l−s},   q_s = \binom{l}{s} p^s q^{l−s}.   (10)
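For concreteness, the per-position posteriors of (10) and the resulting soft value can be computed as in the following sketch (ours, with arbitrary toy parameters):

from math import comb

def repetition_soft_value(l, s, eps):
    # l transmissions over BSC_p with p = (1 - eps)/2; s = observed Hamming weight.
    p, q = (1 - eps) / 2, (1 + eps) / 2
    p_s = comb(l, s) * q**s * p**(l - s)   # likelihood of weight s when c_i = 1
    q_s = comb(l, s) * p**s * q**(l - s)   # likelihood of weight s when c_i = 0
    prob0, prob1 = q_s / (p_s + q_s), p_s / (p_s + q_s)
    return prob0, prob1, prob0 - prob1     # last value: soft quantity used in the threshold analysis below

print(repetition_soft_value(5, 1, 0.1))    # l = 5 repetitions, one received 1, bias 0.1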

More generally, we form a vector S = (s_1, ..., s_n) of n Hamming weights s_i and the corresponding vector P = (P_1, ..., P_n) of probabilities P{c_i = 0 | s_i}. Then we use recursive soft-decision decoding of the vector P into the code RM(r, m). The corresponding decoding threshold is defined in [30] as follows. For every weight s = 0, ..., l, let

y_s = P{c_i = 0 | s} − P{c_i = 1 | s} = (q_s − p_s)/(p_s + q_s).   (11)

Next, we assume that c_i = 0, in which case the weight s and the quantity y_s become random variables, the weight taking value s with probability q_s. Then the random variable y = y_s has the first two moments

Ey = \sum_{s=0}^{l} y_s q_s,   Ey^2 = \sum_{s=0}^{l} y_s^2 q_s.   (12)

Then the decoding threshold is defined in terms of the parameter

θ = Ey · (Ey^2)^{−1/2}.   (13)

Theorem 2: [30] Consider long RM codes RM(r, m) that satisfy restriction (7). Then these codes can be decoded with complexity of order (3n log_2 n)/2 and give:
• a vanishing block error probability if

θ ≥ (4m/d)^{1/2^r};   (14)

• a nonvanishing block error probability if

θ ≤ (1/d)^{1/2^r}.   (15)

In this way, the parameter θ serves as a measure of channel quality, similarly to the above parameter ε of (9).
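The parameter θ of (13) is easy to evaluate numerically for the repeated-symbol channel; the following check (our own, not from the paper) also illustrates the behaviour θ ≈ ε√l derived in the next subsection.

from math import comb, sqrt

def theta(l, eps):
    p, q = (1 - eps) / 2, (1 + eps) / 2
    Ey = Ey2 = 0.0
    for s in range(l + 1):
        p_s = comb(l, s) * q**s * p**(l - s)   # weight-s likelihood when c_i = 1
        q_s = comb(l, s) * p**s * q**(l - s)   # weight-s likelihood when c_i = 0
        y_s = (q_s - p_s) / (p_s + q_s)        # soft value of (11)
        Ey += y_s * q_s                        # first moment of (12)
        Ey2 += y_s**2 * q_s                    # second moment of (12)
    return Ey / sqrt(Ey2)                      # equation (13)

for l in (2, 5, 10):
    print(l, theta(l, 0.01), 0.01 * sqrt(l))   # the two values nearly coincide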

B. Calculations for repeated RM codes

Let l be some constant. To define the threshold ε_l < ε_1, we can assume that ε → 0 as m → ∞, since ε_1 → 0 as m → ∞, according to (9). Then

q_s = 2^{−l} \binom{l}{s} (1 + ε)^{l−s} (1 − ε)^{s} ∼ 2^{−l} \binom{l}{s} [1 + ε(l − 2s)],
p_s = 2^{−l} \binom{l}{s} (1 + ε)^{s} (1 − ε)^{l−s} ∼ 2^{−l} \binom{l}{s} [1 − ε(l − 2s)],

and

y_s = (q_s − p_s)/(q_s + p_s) ∼ ε(l − 2s).   (16)

Setting A. We calculate the error probability P_l of majority decoding of an (l, 1, l) code. Here we assume that l = 2t + 1 is odd; we omit the similar case l = 2t. Note that

\sum_{s=t+1}^{l} \binom{l}{s} = 2^{l−1}.

Then

P_l = \sum_{s=t+1}^{l} q_s ∼ 2^{−l} [ \sum_{s=t+1}^{l} \binom{l}{s} + ε \sum_{s=t+1}^{l} (l − 2s) \binom{l}{s} ] = 1/2 + εl/2 − ε 2^{−l} \sum_{s=t+1}^{l} 2s \binom{l}{s}.

For any s and t ≥ 1, we now use the equalities

s \binom{l}{s} = l \binom{l−1}{s−1},   \binom{2t}{t} = 2^{2t} / \sqrt{ct},

where c = c(t) satisfies the inequality π ≤ c ≤ 4 for all t (see [31]) and tends to π for large t. Then

\sum_{s=t+1}^{l} 2s \binom{l}{s} = 2l \sum_{s=t}^{2t} \binom{2t}{s} = l \binom{2t}{t} + l \sum_{s=0}^{2t} \binom{2t}{s} ≥ l 2^{l−1} + 2^{l−1} \sqrt{2l/c},

and

P_l ≤ (1/2) (1 − \sqrt{2l/c} ε).

Finally, note that P_l is the input error probability for the RM code RM(r, m). Then condition (8) shows that any l-repeated RM code is decoded on a BSC_p with a vanishing error probability if ε ≥ ε'_l, where we take π ≤ c ≤ 4 and derive

ε'_l = \sqrt{2/l} (4m/d)^{1/2^r} for any l ≥ 2,   ε'_l = \sqrt{π/(2l)} (4m/d)^{1/2^r} for large l.   (17)

Setting B. Then

Ey = \sum_{s=0}^{l} y_s q_s ∼ 2^{−l} ε \sum_{s=0}^{l} \binom{l}{s} (l − 2s) [1 + ε(l − 2s)].

Note that l − 2s is an odd function of s about l/2 and \binom{l}{s} is an even function. Therefore

\sum_{s=0}^{l} \binom{l}{s} (l − 2s) = 0,   (18)

and

Ey ∼ 2^{−l} ε^2 \sum_{s=0}^{l} \binom{l}{s} (l − 2s)^2 = 2^{−l} ε^2 \sum_{s=0}^{l} \binom{l}{s} [l(l − 2s) − 2s(l − 2s)] = −2^{−l} ε^2 \sum_{s=0}^{l} \binom{l}{s} 2s(l − 2s).   (19)

Also, from (18),

\sum_{s=0}^{l} \binom{l}{s} 2sl = l \sum_{s=0}^{l} \binom{l}{s} 2s = l^2 2^{l}.

Finally, note that

s^2 \binom{l}{s} = [s(s − 1) + s] \binom{l}{s} = l(l − 1) \binom{l−2}{s−2} + l \binom{l−1}{s−1},

so that

\sum_{s=0}^{l} \binom{l}{s} s^2 = l 2^{l−1} + l(l − 1) 2^{l−2}.

Summarizing, we obtain

Ey ∼ −ε^2 l^2 + 2 ε^2 l + ε^2 l(l − 1) = ε^2 l.   (20)

Next, we proceed with Ey^2 as follows:

Ey^2 ∼ 2^{−l} ε^2 \sum_{s=0}^{l} \binom{l}{s} (l − 2s)^2 [1 + ε(l − 2s)].

Again, note that (l − 2s)^3 is an odd function of s about l/2. Then

\sum_{s=0}^{l} \binom{l}{s} (l − 2s)^3 = 0,

and

Ey^2 ∼ 2^{−l} ε^2 \sum_{s=0}^{l} \binom{l}{s} (l − 2s)^2.

Now (19) and (20) show that

Ey^2 ∼ Ey ∼ ε^2 l,   θ = Ey · (Ey^2)^{−1/2} = ε \sqrt{l}.

Then, according to Theorem 2, any l-repeated RM code is decoded on a BSC_p with a vanishing error probability if ε ≥ ε''_l, where

ε''_l = (1/\sqrt{l}) (4m/d)^{1/2^r}.

Comparing this with Theorem 1, we see that the estimate ε''_l improves the estimate (9) of non-repeated codes by a factor of \sqrt{l}. For all l, this soft-decision estimate is also \sqrt{2} to \sqrt{π/2} times smaller than its hard-decision counterpart ε'_l of (17).

V. LIST DECODING OF THE SECOND ORDER REED-MULLER CODE

Here, our goal is to approximate a (reduced) round of a block cipher with quadratic functions. These approximations are meant to be used in the cryptanalysis described in Sect. III. We chose to build our list decoding algorithm for the second-order Reed-Muller code on top of a very strong one, which is however deterministic and of complexity exponential in the number of variables m, and hence not suitable for the values of m that we consider here.

A. A very efficient deterministic candidate

We consider the strong list decoding algorithm for the second-order Reed-Muller code RM(2, m) of [13]. This algorithm allowed the lower bounds on the covering radius of the second-order Reed-Muller code to be improved for 9 ≤ m ≤ 12. The notion of list decoding was introduced in 1957 by P. Elias in [2]. By definition, a list decoding algorithm of decoding radius T should produce, for any received vector y, the list L_T(y) = {c ∈ C : d(y, c) ≤ T} of all vectors c belonging to a code C which are at distance at most T from y. In our case, we consider a decoding radius of the form T = n(1/2 − ε), where ε is a real number such that 0 < ε ≤ 1/2 and n = 2^m, and we denote by

L_{y,ε} = {q ∈ RM(2, m) : d(y, q) ≤ n(1/2 − ε)}

the desired list for a received vector y. In terms of character sums, according to relation (1), the list is also defined by

L_{y,ε} = {q ∈ RM(2, m) : F(y + q) ≥ 2nε}.
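Before describing the efficient recursive algorithm, the definition of L_{y,ε} can be restated as the following brute-force reference procedure (our illustration, feasible only for very small m, since RM(2, m) has 2^{1+m+m(m−1)/2} codewords):

from itertools import combinations, product

def rm2_list_decode(y, m, eps):
    # Enumerate RM(2, m) and keep the codewords q with F(y + q) >= 2*n*eps.
    n = 1 << m
    monomials = [tuple()] + [(i,) for i in range(m)] + list(combinations(range(m), 2))
    points = [[(x >> i) & 1 for i in range(m)] for x in range(n)]
    result = []
    for coeffs in product((0, 1), repeat=len(monomials)):
        q = [sum(c * all(p[i] for i in mon) for c, mon in zip(coeffs, monomials)) & 1
             for p in points]                                             # truth table of candidate q
        if sum((-1) ** (yi ^ qi) for yi, qi in zip(y, q)) >= 2 * n * eps:  # F(y + q)
            result.append(coeffs)
    return result

# Toy example: m = 3, a noisy version of q(x) = x0*x1, decoded within bias eps = 0.25.
print(len(rm2_list_decode([0, 0, 0, 1, 0, 0, 1, 1], 3, 0.25)))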

This algorithm uses a specific representation of the elements of the RM(2, m) code: a quadratic Boolean function q ∈ RM(2, m) can be written in the following way:

q(x_1, ..., x_m) = Q_1(x_1, ..., x_m) + \sum_{i=2}^{m} x_i Q_i(x_1, ..., x_{i−1}),   (21)

where Q_1 ∈ RM(1, m) is an affine Boolean function and Q_i ∈ RM(1, i − 1) is a linear Boolean function for 2 ≤ i ≤ m. For any i, we will denote by RM(1, i)^# the set of linear functions in i variables, that is, the elements of RM(1, i) with a null constant term. Similarly, we denote by RM(2, i)^# the elements of RM(2, i) with a null affine component, that is, according to representation (21), the elements q such that Q_1 = 0. The method of this algorithm consists in reconstructing recursively the affine (or linear) coefficients Q_i, 1 ≤ i ≤ m, according to representation (21), of the solutions q ∈ L_{y,ε}. At each step µ, 2 ≤ µ ≤ m, an intermediate list L^µ_{y,ε} containing some potential "prefixes" in µ variables of the solutions is constructed. To summarize the performance of this algorithm, we have the following:

Conjecture 1: The proposed sums-algorithm of [13] evaluates, for any received vector y, the list of all vectors q ∈ RM(2, m) such that d(y, q) ≤ n(1/2 − ε) with complexity O(n log(n) \sum_{µ=2}^{m} |L^µ_{y,ε}|).

Remark 1: Experimentally, the running time of a decoding has been verified to be proportional to n = 2^m times the sum of the sizes of the lists L^µ_{y,ε}, and these lists seem to be of acceptable size, at least when decoding far from the minimal distance.

B. A probabilized version

The almost linear complexity in n of Conjecture 1 is not suited to the large values of m that we consider here (e.g. m = 128). So we attempt here to design a "probabilized" version of the algorithm of [13], which is intended to output almost all the solutions of the list decoding problem. The principle is rather similar to the probabilistic algorithm for first-order Reed-Muller codes presented in [23], where the complexity depends on some sampling parameters.

1) Intermediate lists: Let us first introduce a useful notation: for f ∈ B_m, µ ∈ [1, ..., m] and s ∈ F_2^{m−µ}, let f_s ∈ B_µ be the restriction of f to the "facet"

S_s := {(x, s) ∈ F_2^m | x ∈ F_2^µ},

that is, f_s(x) = f(x, s). For the sake of compactness, if s = (s_1, s_2, ..., s_{m−µ}) ∈ F_2^{m−µ} and δ ∈ F_2, we will denote by "δs", that is "0s" or "1s", the vector (δ, s_1, s_2, ..., s_{m−µ}) in F_2^{m−µ+1}. Let us note that we have

F(f) = \sum_{s∈F_2^{m−µ}} \sum_{x∈F_2^µ} (−1)^{f_s(x)} = \sum_{s∈F_2^{m−µ}} F(f_s),   (22)

with, for each s ∈ F_2^{m−µ},

F(f_s) = \sum_{x∈F_2^{µ−1}} [(−1)^{f_s(x,0)} + (−1)^{f_s(x,1)}] = F(f_{0s}) + F(f_{1s}).   (23)
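A small numerical check of the facet decomposition (22) (our own illustration of the indexing convention, not part of the paper):

def restriction(f, mu, s):
    # Truth table of f_s, where f is a truth table of length 2^m and the last m - mu
    # coordinates are fixed to s (low bits of the position encode x, high bits encode s).
    return [f[x + (s << mu)] for x in range(1 << mu)]

def char_sum(tt):
    return sum((-1) ** b for b in tt)

m, mu = 4, 2
f = [(3 * x + 1) % 2 for x in range(1 << m)]     # an arbitrary Boolean function
assert char_sum(f) == sum(char_sum(restriction(f, mu, s)) for s in range(1 << (m - mu)))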

The proposed algorithm will exploit the relation (22), with f = y + q, by computing an upper bound on the quantity F(y_s + q_s). The following definition gives a convenient description of the restriction q_s of q to a given facet S_s.

Definition 1: Let q(x_1, ..., x_m) = Q_1(x_1, ..., x_m) + \sum_{i=2}^{m} x_i Q_i(x_1, ..., x_{i−1}) ∈ RM(2, m) be a quadratic Boolean function. For 2 ≤ µ ≤ m, we define the µ-th prefix q^µ ∈ RM(2, µ)^# of q as the quadratic part of q depending only on the first µ variables:

q^µ(x_1, ..., x_µ) = x_2 Q_2(x_1) + ... + x_µ Q_µ(x_1, ..., x_{µ−1}).

At the µ-th step of the algorithm, we will determine a list L^µ_{y,ε} of candidates which "could", but may not, coincide with the µ-th prefix of an element q ∈ L_{y,ε}. By definition, for µ ≤ m and for s ∈ F_2^{m−µ}, there exists an affine function l_{q,s} ∈ RM(1, µ) such that q_s = q^µ + l_{q,s}, where l_{q,s} is the restriction to S_s of the function Q_1 + \sum_{i | s_i = 1} Q_{µ+i}. As a consequence we can rewrite (22) as

F(y + q) = \sum_{s∈F_2^{m−µ}} F(y_s + q^µ + l_{q,s}).   (24)

Now, for each s ∈ F_2^{m−µ}, we have

F(y_s + q^µ + l_{q,s}) ≤ \max_{l∈RM(1,µ)} F(y_s + q^µ + l).

Finally, we deduce that if q ∈ L_{y,ε}, then

Γ^µ_y(q) := \sum_{s∈F_2^{m−µ}} \max_{l∈RM(1,µ)} F(y_s + q^µ + l) ≥ F(y + q) ≥ 2nε.   (25)

The key point is that Γ^µ_y(q) depends only on the prefix q^µ of q. This means that a function q^µ ∈ RM(2, µ)^# can be the µ-th prefix of a solution q ∈ L_{y,ε} only if it satisfies the "Γ-criterion" implied by (25), namely only if Γ^µ_y(q^µ) ≥ 2nε. This motivates the introduction of the list

L^µ_{y,ε} = {q^µ ∈ RM(2, µ)^# : Γ^µ_y(q^µ) ≥ 2nε},

consisting of every potential prefix of a function q ∈ L_{y,ε}. The definition of these intermediate lists, together with the criterion of acceptance of a candidate, was the foundation of the algorithm of [13]. It works by recursively determining the intermediate lists L^µ_{y,ε}, for 2 ≤ µ ≤ m, in the following way: for each q^{µ−1} ∈ L^{µ−1}_{y,ε} and for each Q_µ ∈ RM(1, µ − 1)^#, the corresponding "successor" q^{µ−1} + x_µ Q_µ is tested against the Γ-criterion¹ to decide whether this candidate belongs to L^µ_{y,ε} or not. Now we need to go a little further, for the large values of m we consider here:
1) for a given s ∈ F_2^{m−µ}, we cannot compute the "max" of (25) for each successor q^{µ−1} + x_µ Q_µ with the method proposed in [13];
2) the number of elements involved in the outer sum of (25) is far too large (2^{m−µ}).
Concerning the second point, we apply exactly the same method as in [23]: we do not compute the exhaustive sum, but only an estimate of it, by choosing randomly a subset S ⊂ F_2^{m−µ} and computing the sum over S. Concerning the first point, we also need to estimate the maximum, which we denote by max' (see next section). The definition of the Γ-criterion then becomes

Γ^µ_y(q) := \sum_{s∈S} {\max}'_{l∈RM(1,µ)} F(y_s + q^µ + l) ≥ 2^{µ+1} |S| ε,   (26)

and the intermediate lists are formed by those q^µ which satisfy it. It was noted and justified in [23]² that it is enough to take S of size O(1) in order to obtain a precise approximation. In practice we choose |S| ≈ 20.

1 It is shown in [13] that a function q^µ = q^{µ−1} + x_µ Q_µ ∈ RM(2, µ)^# can be in L^µ_{y,ε} only if its prefix q^{µ−1} is in L^{µ−1}_{y,ε}.
2 TODO: refer to submitted DCC version which contains some proofs?
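The sampled Γ-criterion (26) can be sketched as follows (our illustration; here the inner maximum is computed exactly by brute force over the 2^{µ+1} affine functions, whereas the max' estimation of the next subsection avoids this cost; all names are ours):

def affine_truth_table(a, c, mu):
    # Truth table of l(x) = <a, x> + c on 2^mu points (bit i of x is x_i).
    return [(bin(a & x).count("1") + c) & 1 for x in range(1 << mu)]

def gamma_estimate(y_facets, q_mu, mu):
    # y_facets: {s: truth table of y_s for s in the sampled subset S}; q_mu: candidate prefix.
    total = 0
    for ys in y_facets.values():
        best = max(sum((-1) ** (yv ^ qv ^ lv) for yv, qv, lv in zip(ys, q_mu, lt))
                   for a in range(1 << mu) for c in (0, 1)
                   for lt in [affine_truth_table(a, c, mu)])
        total += best
    return total

def passes_criterion(y_facets, q_mu, mu, eps):
    return gamma_estimate(y_facets, q_mu, mu) >= 2 ** (mu + 1) * len(y_facets) * eps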

2) Computing the criterion: In this section we propose a way of computing an approximation of the maximum involved in (26). The first step is to decompose the character sum according to whether x_µ = 0 or x_µ = 1, using (23), in the same way as was done in [13]. Then we use the techniques of [23] to construct recursively both the valid successors and the linear functions l which attain the maximum (in [13], these functions are not constructed but evaluated exhaustively, using another kind of recursion). More precisely, for 1 ≤ η < µ, we will maintain a list A^{µ,η}_{y,ε,ξ} of linear functions in RM(1, η)^# which may be the linear prefix of a function Q_µ leading to a valid successor q^{µ−1} + x_µ Q_µ of q^{µ−1}. To achieve this, we would also need to maintain, for each s ∈ S and h ∈ A^{µ,η}_{y,ε,ξ}, a list C_s(h) of functions allowing to attain the maximum (corresponding to the function l over which the maximum is taken). The problem is that this last list C_s(h) is unmaintainable for time and memory complexity reasons, so we need a tradeoff, which leads to computing an upper bound on the maximum. Let ξ be a real number such that 0 < ξ < 1 (in practice we choose ξ = 1/2). Then we will only be interested in computing the maximum of (26), for each s ∈ S, if (the probabilistic estimate of) this maximum is greater than ξ × 2^{µ+1} ε. If it is not, we replace it with the threshold value ξ × 2^{µ+1} ε. The refined version of the Γ-criterion therefore becomes

Γ^µ_y(q) := \sum_{s∈S} \max(ξ × 2^{µ+1} ε, {\max}'_{l∈RM(1,µ)} F(y_s + q^µ + l)) ≥ 2^{µ+1} |S| ε.   (27)

Let us now get into the details. We assume that the list L^{µ−1}_{y,ε} is already computed. Then, for each q^{µ−1} ∈ L^{µ−1}_{y,ε}, we have to evaluate (or upper-bound), for each s ∈ S, the quantity

\max_{l∈RM(1,µ)} F(y_s + q^µ + l) = \max_{l∈RM(1,µ)^#, δ∈F_2} F(y_s + q^µ + l + δ) = \max_{l∈RM(1,µ)^#} |F(y_s + q^µ + l)|,   (28)

which is rewritten (see [13]) as

\max_{l∈RM(1,µ)} F(y_s + q^µ + l) = \max_{l∈RM(1,µ−1)^#, l_µ∈F_2} |F(y_{0s} + q^{µ−1} + l) + F(y_{1s} + q^{µ−1} + Q_µ + l + l_µ)|
= \max_{l∈RM(1,µ−1)^#} ( |F(y_{0s} + q^{µ−1} + l)| + |F(y_{1s} + q^{µ−1} + Q_µ + l)| ).

Let us denote σ_0 = |F(y_{0s} + q^{µ−1} + l)| and σ_1 = |F(y_{1s} + q^{µ−1} + Q_µ + l)|. We then estimate this maximum by:
1) constructing recursively those Q_µ which can lead to a satisfied Γ-criterion (27);
2) for each of them, and for each s, constructing recursively the functions l which are such that σ_0 + σ_1 > ξ × 2^{µ+1} ε.

Suppose that we have a pair (Q_µ, l) of functions satisfying these two points. Let η and k be such that 1 ≤ η ≤ η + k ≤ µ − 1. Let u_0 = l, u_1 = Q_µ + l, and y'_i = y_{is} + q^{µ−1}. Then we have, for i = 0, 1:

σ_i = |F(y_{is} + q^{µ−1} + u_i)| = |\sum_{v∈F_2^{µ−1−k−η}} \sum_{f∈F_2^k} \sum_{x∈F_2^η} (−1)^{y'_i(x,f,v)+u_i(x,f,v)}|
= |\sum_{v∈F_2^{µ−1−k−η}} (−1)^{u_i(0,0,v)} \sum_{x∈F_2^η} (−1)^{u_i(x,0,0)} \sum_{f∈F_2^k} (−1)^{y'_i(x,f,v)+u_i(0,f,0)}|
≤ \sum_{v∈F_2^{µ−1−k−η}} |\sum_{x∈F_2^η} (−1)^{u_i^η(x)} \sum_{f∈F_2^k} (−1)^{y'_i(x,f,v)+u_i(0,f,0)}|,   (29)

where u_i^η is the prefix of u_i in η variables. In particular, using the assumption that σ_0 + σ_1 > ξ × 2^{µ+1} ε, the case k = 0 gives:

ξ × 2^{µ+1} ε < \sum_{v∈F_2^{µ−1−η}} |\sum_{x∈F_2^η} (−1)^{y'_0(x,v)+l^η(x)}| + \sum_{v∈F_2^{µ−1−η}} |\sum_{x∈F_2^η} (−1)^{y'_1(x,v)+Q_µ^η(x)+l^η(x)}|.   (30)

The important point, again, is that this inequality (30) depends only on the prefixes Q_µ^η and l^η in η variables, so we have defined a new criterion to decide whether a pair (Q_µ^η, l^η) of linear functions in RM(1, η)^# can lead to a satisfied Γ-criterion. Now, again, we are only able to compute an estimate of these quantities. We choose randomly a subset V ⊂ F_2^{µ−1−η} of size O(1) (≈ 20, say) and a subset X ⊂ F_2^η of size O(1/ε^2) (see [23] again for justifications). Then we define, for any s ∈ S and any h ∈ RM(1, η)^#, the randomized criterion and the corresponding lists of functions l^η by

Λ_s(h, l^η) = \sum_{v∈V} ( \sum_{x∈X} (−1)^{y'_0(x,v)+l^η(x)} + \sum_{x∈X} (−1)^{y'_1(x,v)+h(x)+l^η(x)} ),   (31)

C_s(h) = {l^η ∈ RM(1, η)^# | Λ_s(h, l^η) > 2ξε|X||V|}.   (32)

We can now write a new version of the Γ-criterion for an "incomplete" successor q^{µ−1} + x_µ Q_µ^η of q^{µ−1} (writing h = Q_µ^η):

Γ^{µ,η}_y(q^{µ−1}, h) := \sum_{s∈S} \max(2ξε|X||V|, \max_{l∈C_s(h)} Λ_s(h, l)) ≥ 2ε|S||X||V|,   (33)

and the corresponding list:

A^{µ,η}_{y,ε,ξ}(q^{µ−1}) = {h ∈ RM(1, η)^# | Γ^{µ,η}_y(q^{µ−1}, h) ≥ 2ε|S||X||V|}.   (34)

3) Constructing the linear lists: We now describe how we can obtain a list L^µ_{y,ε} from a list L^{µ−1}_{y,ε} by constructing recursively the lists A^{µ,η}_{y,ε,ξ}(q^{µ−1}), for 1 ≤ η ≤ µ − 1. We have now made explicit the definition of what we called the "estimation of the maximum", denoted max' in (27), and the recursive definition of L^µ_{y,ε} is:

L^µ_{y,ε} = {q^{µ−1} + x_µ h | q^{µ−1} ∈ L^{µ−1}_{y,ε} and h ∈ A^{µ,µ−1}_{y,ε,ξ}(q^{µ−1})}.   (35)

Let η, k be such that 1 ≤ η < η + k ≤ µ − 1. For the same reasons as in the quadratic case, if h ∈ A^{µ,η+k}, then its prefix h^η is in A^{µ,η}, and similarly if l^{η+k} ∈ C_s^{η+k}(h), then its prefix l^η is in C_s^η(h^η). This allows us to construct C_s^{η+k}(h) from C_s^η(h^η) for each possible successor h ∈ RM(1, η + k)^# of h^η ∈ A^{µ,η}, and then A^{µ,η+k} from A^{µ,η}, using (29).

Incrementing the lists C_s. Let us rewrite a randomized version of (29), using the same notations. Recall that this inequality depends only on the prefixes in η + k variables of u_0 and u_1. For this reason, we can consider that u_1 = u_1^{η+k} is any successor of u_1^η = h^η + l^η (there are 2^k such successors), for some h^η ∈ A^{µ,η} and some l^η ∈ C_s(h^η), and similarly that u_0 is any successor of l^η (the corresponding successor of h^η is then u_0 + u_1):

σ_i = σ_i(u_i) ≤ \sum_{v∈V} |\sum_{x∈X} (−1)^{u_i^η(x)} \sum_{f∈F_2^k} (−1)^{y'_i(x,f,v)+u_i(0,f)}|.   (36)

For u ∈ RM(1, η)^#, let us denote by Succ^{η+k}(u) ⊂ RM(1, η + k)^# the set of successors of u, which is of size 2^k, that is, Succ^{η+k}(u) = {v ∈ RM(1, η + k)^# | v^η = u}. Let h^η ∈ A^{µ,η}. For each h^{η+k} ∈ Succ^{η+k}(h^η), according to (31) (replacing X by X × F_2^k), we have that

Λ_s(h^{η+k}, l^{η+k}) = σ_0(l^{η+k}) + σ_1(h^{η+k} + l^{η+k}),   (37)

and

C_s(h^{η+k}) = \bigcup_{l^η∈C_s(h^η)} {l ∈ Succ^{η+k}(l^η) | Λ_s(h^{η+k}, l) > 2^{k+1} ξε|X||V|}.   (38)

The complexity of this step consists in:
1) computing the σ_i; for this we have to
• for each v ∈ V and x ∈ X, compute the inner sum by a Walsh-Hadamard transform of the function y'_i(x, ·, v), with complexity O(k2^k);
• for each x ∈ X, compute u_i^η(x);
2) for each h^η ∈ A^{µ,η}, compute C_s(h^{η+k}), with complexity 2^{2k}.

Incrementing the list A. TODO: finish and conclude.

VI. APPLICATION TO THE CRYPTANALYSIS OF REDUCED ROUND DES

In this section, we will apply the algorithms of Sections V and III in order to mount a cryptanalysis of the DES block cipher. The first step consists in finding good quadratic approximations of this cipher. In this case, the plaintext, ciphertext and key vectors are elements of F_2^{64}. TO BE COMPLETED!

VII. CONCLUSION

REFERENCES

[1] I. Dumer and K. Shabunov, "Soft decision decoding of Reed-Muller codes: recursive lists," IEEE Trans. Inf. Theory, vol. 52, no. 3, pp. 1260-1266, 2006.
[2] P. Elias, "List decoding for noisy channels," 1957-IRE WESCON Convention Record, pt. 2, pp. 94-104, 1957.
[3] V. Guruswami and M. Sudan, "Improved decoding of Reed-Solomon and algebraic-geometry codes," IEEE Trans. Inf. Theory, vol. 45, pp. 1757-1767, 1999.
[4] O. Goldreich and L. A. Levin, "A hard-core predicate for all one-way functions," Proceedings of the 21st ACM Symp. on Theory of Computing, pp. 25-32, 1989.
[5] O. Goldreich, R. Rubinfeld and M. Sudan, "Learning polynomials with queries: the highly noisy case," SIAM J. on Discrete Math., pp. 535-570, 2000.
[6] S. Litsyn and O. Shekhovtsov, "Fast decoding algorithm for first order Reed-Muller codes," Problems of Information Transmission, vol. 19, pp. 87-91, 1983.
[7] G. A. Kabatianskii, "On decoding of Reed-Muller codes in semicontinuous channels," Proc. 2nd Int. Workshop "Algebraic and Combinatorial Coding Theory," Leningrad, USSR, pp. 87-91, 1990.
[8] R. Pellikaan and X.-W. Wu, "List decoding of q-ary Reed-Muller codes," IEEE Trans. Inf. Theory, vol. 50, pp. 679-682, 2004.
[9] I. Dumer, "Recursive decoding of Reed-Muller codes," Proceedings of the 37th Allerton Conf. on Commun., Contr. and Comp., pp. 61-69, 1999.
[10] I. Dumer, "Recursive decoding and its performance for low-rate Reed-Muller codes," IEEE Trans. Inf. Theory, vol. 50, pp. 811-823, 2004.
[11] I. Dumer, "On recursive decoding with sublinear complexity for Reed-Muller codes," Proceedings of the 2003 IEEE Information Theory Workshop, pp. 14-17, April 2003.
[12] V. Sidel'nikov and A. Pershakov, "Decoding of Reed-Muller codes with a large number of errors," Probl. Inform. Transm., vol. 28, no. 3, pp. 80-94, 1992.
[13] R. Fourquet and C. Tavernier, "An improved list decoding algorithm for the second order Reed-Muller codes and its applications," Designs, Codes and Cryptography (DCC), May 2007.
[14] G. Kabatiansky and C. Tavernier, "List decoding of second order Reed-Muller codes," ISCTA'05, Martin's College, Ambleside, July 2005.
[15] M. Matsui, "Linear cryptanalysis method for the DES cipher," Advances in Cryptology - EUROCRYPT'93, volume 765 of Lecture Notes in Computer Science, pp. 386-397, Springer, 1993.
[16] M. Matsui, "The first experimental cryptanalysis of the Data Encryption Standard," in Y. Desmedt, editor, Advances in Cryptology - CRYPTO'94, Lecture Notes in Computer Science, pp. 1-11, Springer, 1994.
[17] P. Junod, "On the complexity of Matsui's attack," in S. Vaudenay and A. M. Youssef, editors, Selected Areas in Cryptography (SAC'01), volume 2259, pp. 199-211, Springer, 2001.
[18] O. Goldreich and L. A. Levin, "A hard core predicate for all one-way functions," Proceedings of the 21st ACM Symposium on Theory of Computing, pp. 25-32, May 1989.
[19] O. Goldreich, R. Rubinfeld, and M. Sudan, "Learning polynomials with queries: the highly noisy case," Proceedings of the 36th Annual Symposium on Foundations of Computer Science, pp. 294-303, 1995. Extended version: http://people.csail.mit.edu/madhu/papers.html.
[20] T. Johansson and F. Jonsson, "Fast correlation attacks through reconstruction of linear polynomials," in M. Bellare, editor, Advances in Cryptology - CRYPTO 2000, volume 1880 of Lecture Notes in Computer Science, pp. 300-315, Springer, 2000.
[21] G. Kabatiansky and C. Tavernier, "List decoding of Reed-Muller codes," Ninth International Workshop on Algebraic and Combinatorial Coding Theory, ACCT'2004, pp. 230-235, June 2004. http://ced.tavernier.free.fr/Balgaria.pdf.
[22] G. Kabatiansky and C. Tavernier, "List decoding of first order Reed-Muller codes II," Tenth International Workshop on Algebraic and Combinatorial Coding Theory, ACCT'2006, pp. 131-134, September 2006. http://ced.tavernier.free.fr/Kabat.pdf.
[23] P. Loidreau, R. Fourquet and C. Tavernier, "Finding good linear approximations of block ciphers and its application to cryptanalysis of reduced round DES," Workshop on Coding Theory and Cryptography WCC 2009, Ullensvang, Norway.
[24] L. R. Knudsen and M. J. B. Robshaw, "Non-linear approximations in linear cryptanalysis," Advances in Cryptology - EUROCRYPT'96, volume 1070 of Lecture Notes in Computer Science, pp. 224-236, 1996. http://www.cosic.esat.kuleuven.be/publications/article-153.ps
[25] L. R. Knudsen and M. J. B. Robshaw, "Non-linear approximations in linear cryptanalysis," Lecture Notes in Computer Science, Advances in Cryptology - EUROCRYPT'96, volume 1070, pp. 224-236, 1996.
[26] J. M. E. Tapiador, J. A. Clark and J. C. Hernandez-Castro, "Non-linear cryptanalysis revisited: heuristic search for approximations to S-boxes," LNCS Cryptography and Coding, volume 4887, pp. 99-117, 2007.
[27] N. N. Tokareva, "On quadratic approximations in block ciphers," Problems of Information Transmission, vol. 44, no. 3, pp. 266-286, 2008.
[28] T. Shimoyama and T. Kaneko, "Quadratic relation of S-box and its application to the linear attack of full round DES," in H. Krawczyk, editor, Advances in Cryptology - CRYPTO'98, volume 1462 of Lecture Notes in Computer Science, pp. 200-211, Springer-Verlag, 1998.
[29] I. Dumer, "Recursive decoding and its performance for low-rate Reed-Muller codes," IEEE Trans. Inf. Theory, vol. 50, pp. 811-823, 2004.
[30] I. Dumer, "Soft decision decoding of Reed-Muller codes: a simplified algorithm," IEEE Trans. Inf. Theory, vol. 52, pp. 954-963, 2006.
[31] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes, North-Holland, Amsterdam, 1981.
