
IEEE COMMUNICATIONS LETTERS, VOL. 14, NO. 2, FEBRUARY 2010

Performance versus Overhead for Fountain Codes over π”½π‘ž

Gianluigi Liva, Member, IEEE, Enrico Paolini, Member, IEEE, and Marco Chiani, Senior Member, IEEE

Abstract—Fountain codes for packet erasure recovery are investigated over Galois fields of order π‘ž β‰₯ 2. It is shown, through the development of tight upper and lower bounds on the decoding failure probability under maximum likelihood decoding, that the adoption of higher-order Galois fields is beneficial, in terms of performance, for linear random fountain codes. Moreover, it is illustrated how Raptor codes can provide performance very close to that of random fountain codes, with affordable encoding and decoding complexity. Non-binary Raptor codes turn out to be an appealing option for applications with severe constraints in terms of performance versus overhead, especially for small source block sizes.

Index Terms—Fountain codes, Raptor codes, maximum likelihood decoding.

I. INTRODUCTION

Fountain codes have been introduced in [1] as a possible solution for information delivery in broadcast and multicast networks. A fountain encoder is capable of producing an arbitrarily large number of encoded symbols (or output symbols) from a source block formed by π‘˜ source symbols (or input symbols). In broadcast and multicast networks, each user collects symbols generated by the fountain encoder. Once a sufficiently large number of symbols has been received, the user is able to recover the π‘˜ input symbols. For an ideal fountain code this number coincides with π‘˜: the decoder is able to recover the source block from any set of π‘˜ output symbols. For real fountain codes, the source block is recovered with a probability that is non-decreasing in the number of symbols received in surplus with respect to (w.r.t.) π‘˜. This integer number is referred to as the overhead, here denoted by 𝛿.

Fountain codes are usually adopted in communication networks to recover lost packets. Here, an object (e.g., a file) is divided into π‘˜ source packets, all of the same length 𝐿 bits, from which the encoder produces an arbitrarily large number of encoded packets, each of length 𝐿 bits. If a binary fountain code is used, each encoded packet may be obtained as a bitwise exclusive-or of a subset of the source packets. Similarly, for a fountain code over a Galois field π”½π‘ž of characteristic two with π‘ž > 2, each source packet is regarded as a collection of 𝐿/logβ‚‚ π‘ž symbols in π”½π‘ž: each encoded packet is obtained as a symbol-wise sum (in π”½π‘ž) of a subset of the source packets. Hence, for a given object the encoding latency can be kept

Manuscript received October 22, 2009. The associate editor coordinating the review of this letter and approving it for publication was V. Stankovic.
G. Liva is with the Institute of Communication and Navigation of the Deutsches Zentrum für Luft- und Raumfahrt (DLR), 82234 Wessling, Germany (e-mail: [email protected]).
E. Paolini and M. Chiani are with DEIS/WiLAB, University of Bologna, 47521 Cesena (FC), Italy (e-mail: {e.paolini, marco.chiani}@unibo.it).
Supported in part by the EC under Seventh Framework Program grant agreement ICT OPTIMIX n. INFSO-ICT-214625 and in part by the EC-IST SatNEx-II Project (IST-27393).
Digital Object Identifier 10.1109/LCOMM.2010.02.092080

constant, regardless of the Galois field order used for performing the linear combinations.

In this letter, two classes of fountain codes are considered, namely, linear random fountain (LRF) codes and Raptor codes [2]. For both, maximum-likelihood (ML) decoding is adopted. The decoding error probability of LRF codes over Galois fields of order π‘ž β‰₯ 2, as a function of the overhead, is investigated in Section II. It is shown through tight upper and lower bounds that, by adopting a code construction on non-binary fields, the probability of decoding success can be largely increased for the same overhead. In Section III, it is illustrated through simulation how Raptor codes constructed on Galois fields of order π‘ž β‰₯ 2 are capable of closely approaching the performance of LRF codes even for small overheads. Final remarks follow in Section IV.

II. LINEAR RANDOM FOUNTAIN CODES OVER π”½π‘ž

Let 𝒄 = [𝑐𝑖]𝑖=0,…,π‘˜βˆ’1 ∈ π”½π‘žα΅ be a vector of π‘˜ input symbols.ΒΉ A LRF code over π”½π‘ž is a random linear map π”½π‘žα΅ β†’ π”½π‘žβ„•, where π”½π‘žβ„• denotes the set of all sequences over π”½π‘ž. The encoder generates the output symbol 𝑒𝑗, 𝑗 ∈ β„•, as follows:
βˆ™ for each input symbol 𝑐𝑖, a coefficient 𝑔𝑗𝑖 ∈ π”½π‘ž is picked independently with uniform probability;
βˆ™ the output symbol 𝑒𝑗 is computed as 𝑒𝑗 = βˆ‘π‘–=0,…,π‘˜βˆ’1 𝑔𝑗𝑖 𝑐𝑖, where all operations are performed in π”½π‘ž.

Assume the fountain encoder generates a stream of 𝑛 output symbols. Denoting these symbols by 𝒆(0,…,π‘›βˆ’1), we have 𝒆(0,…,π‘›βˆ’1) = G(0,…,π‘›βˆ’1) 𝒄, where

    G(0,…,n−1) = ⎡ g_{0,0}   ⋯ g_{0,k−1}   ⎤
                 ⎢    ⋮      ⋱    ⋮       ⎥
                 ⎣ g_{n−1,0} ⋯ g_{n−1,k−1} ⎦ .
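As an aside, the two encoding steps above can be sketched in a few lines. The snippet below is a minimal illustrative sketch (function names are ours): it fixes π‘ž = 4, represents GF(4) elements as the integers 0–3, realizes addition as bitwise XOR (valid in any field of characteristic two), and multiplies via a small lookup table built from π‘₯Β² = π‘₯ + 1.

```python
import random

# GF(4) arithmetic: elements {0, 1, 2, 3}; addition is bitwise XOR,
# multiplication via a lookup table derived from x^2 = x + 1.
MUL = [[0, 0, 0, 0],
       [0, 1, 2, 3],
       [0, 2, 3, 1],
       [0, 3, 1, 2]]

def lrf_symbol(g, c):
    """Output symbol e_j = sum_i g_ji * c_i, computed in GF(4)."""
    e = 0
    for gi, ci in zip(g, c):
        e ^= MUL[gi][ci]        # accumulate in GF(4): product, then XOR-add
    return e

def lrf_encode(c, n, seed=0):
    """Generate n output symbols with i.i.d. uniform coefficient rows."""
    rng = random.Random(seed)
    G = [[rng.randrange(4) for _ in c] for _ in range(n)]
    return G, [lrf_symbol(row, c) for row in G]
```

Since the map is linear, the output symbol of the sum of two source blocks equals the sum of the corresponding output symbols, which gives a quick sanity check of the tables.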

Note that, in general, G(0,…,π‘›βˆ’1) is a dense matrix. The index 𝑗 ∈ β„• assigned to the output symbol 𝑒𝑗 is also known as the encoded symbol identifier (ESI). For an ESI 𝑗, we let Ξ˜π‘— = {𝑔𝑗𝑖 : 𝑖 = 0, …, π‘˜ βˆ’ 1}. Assume π‘˜ + 𝛿 β‰₯ π‘˜ output symbols 𝒆(𝑗1,…,π‘—π‘˜+𝛿) are collected at the receiver (the other transmitted symbols being erased by the channel) and let 𝐽 = {𝑗1, …, π‘—π‘˜+𝛿} be the set of ESIs of these symbols. We have

    G(𝑗1,…,π‘—π‘˜+𝛿) 𝒄 = 𝒆(𝑗1,…,π‘—π‘˜+𝛿)    (1)

where G(𝑗1,…,π‘—π‘˜+𝛿) is the ((π‘˜ + 𝛿) Γ— π‘˜) matrix composed of the π‘˜ + 𝛿 rows of G(0,…,π‘›βˆ’1) whose indices belong to 𝐽. ML decoding consists of solving (1) through Gaussian elimination to recover all π‘˜ input symbols 𝒄. Note that, to this purpose, for each collected output symbol 𝑒𝑗 the decoder needs the

ΒΉ Throughout the letter, vectors are intended as column vectors.

1089-7798/10/$25.00 © 2010 IEEE

Fig. 2. Block diagram of the systematic Raptor encoder specified in [6].

Fig. 1. Lower and upper bounds on the decoding error probability of LRF codes over π”½π‘ž , for π‘ž = 2, 4, 8, 64, 256. The bounds are independent of π‘˜.

corresponding Ξ˜π‘—.Β² Decoding is successful if and only if rank(G(𝑗1,…,π‘—π‘˜+𝛿)) = π‘˜. The decoding error probability is then given by (see, e.g., [3])

    𝑃𝑒(π‘˜, 𝛿, π‘ž) = 1 βˆ’ ∏_{𝑖=1}^{π‘˜} (1 βˆ’ π‘ž^{π‘–βˆ’1}/π‘ž^{π‘˜+𝛿})    (2)
               = 1 βˆ’ (π‘ž^{βˆ’π‘˜βˆ’π›Ώ}; π‘ž)_π‘˜    (3)

where the formulation (3) uses the π‘ž-Pochhammer symbol.

Proposition 1: The decoding failure probability of a LRF code over π”½π‘ž, under ML decoding, fulfills

    π‘ž^{βˆ’π›Ώβˆ’1} ≀ 𝑃𝑒(π‘˜, 𝛿, π‘ž) < (1/(π‘žβˆ’1)) π‘ž^{βˆ’π›Ώ}    (4)

with equality for the lower bound if and only if π‘˜ = 1.Β³

Proof: The lower bound is obtained by observing that 1 βˆ’ 𝑃𝑒(π‘˜, 𝛿, π‘ž) = ∏_{𝑖=1}^{π‘˜} (1 βˆ’ π‘ž^{π‘–βˆ’1βˆ’π‘˜βˆ’π›Ώ}) ≀ 1 βˆ’ π‘ž^{βˆ’1βˆ’π›Ώ}, where the inequality follows since each factor is less than one and the largest factor (the one for 𝑖 = π‘˜) equals 1 βˆ’ π‘ž^{βˆ’1βˆ’π›Ώ}. Equality holds if and only if π‘˜ = 1. The upper bound is proved by induction on π‘˜. The bound holds for π‘˜ = 1: in fact, 1 βˆ’ 𝑃𝑒(1, 𝛿, π‘ž) = 1 βˆ’ π‘ž^{βˆ’1βˆ’π›Ώ} = 1 βˆ’ (1/π‘ž) π‘ž^{βˆ’π›Ώ} > 1 βˆ’ (1/(π‘žβˆ’1)) π‘ž^{βˆ’π›Ώ}. Assuming the bound is true for π‘˜, it is true also for π‘˜ + 1. In fact,

    1 βˆ’ 𝑃𝑒(π‘˜+1, 𝛿, π‘ž) = ∏_{𝑖=1}^{π‘˜+1} (1 βˆ’ π‘ž^{π‘–βˆ’1βˆ’(π‘˜+1)βˆ’π›Ώ})
                      = [1 βˆ’ 𝑃𝑒(π‘˜, 𝛿+1, π‘ž)] (1 βˆ’ π‘ž^{βˆ’1βˆ’π›Ώ})
                      > (1 βˆ’ (1/(π‘žβˆ’1)) π‘ž^{βˆ’1βˆ’π›Ώ}) (1 βˆ’ π‘ž^{βˆ’1βˆ’π›Ώ})
                      > 1 βˆ’ (1/(π‘žβˆ’1)) π‘ž^{βˆ’π›Ώ},

where the first inequality is due to the bound for π‘˜, and the second inequality can be easily verified. ∎

Remarkably, the upper bound and the lower bound in (4) are independent of the number π‘˜ of input symbols, which allows one to develop considerations valid for all π‘˜. The bounds are depicted in Fig. 1 as functions of 𝛿 for π‘ž = 2, 4, 8, 64 and 256. The two bounds converge for large π‘ž and the gap between them is very small for all π‘ž. It can be verified that the upper bound is extremely tight even for π‘ž = 2 and π‘˜ in the order of a few tens. Fig. 1 reveals an inherent advantage, in terms of

Β² In real systems, Ξ˜π‘— is usually not transmitted, as it is obtained by the decoder through the same pseudo-random generator used for encoding, starting from the ESI. Therefore, it is sufficient to transmit the ESI together with the corresponding output symbol.
Β³ The upper bound for the binary case, 𝑃𝑒(π‘˜, 𝛿, 2) < 2^{βˆ’π›Ώ}, appeared in [4].

performance for the same overhead, of constructing the code on higher-order Galois fields for a given π‘˜. For example, with only one symbol of overhead, we have 𝑃𝑒 ≃ 2.5 Γ— 10⁻⁴ for all π‘˜ over 𝔽64, while we have 𝑃𝑒 β‰₯ 2.5 Γ— 10⁻¹ for all π‘˜ over 𝔽2. The independence of the two bounds from π‘˜ and the small gap between them emphasize a weak dependence of the performance on π‘˜, for a given overhead and Galois field order. Note that using a large block size π‘˜ increases the fountain code efficiency, defined as πœ‚ = π‘˜/(π‘˜ + 𝛿). However, LRF codes are not practical for large source blocks due to the prohibitive π’ͺ(π‘˜Β³) complexity of ML decoding, in terms of both the number of additions and the number of multiplications in π”½π‘ž. Given a value of the error probability, the efficiency gain of a non-binary code w.r.t. a binary one becomes remarkable for small blocks (i.e., small π‘˜). Hence, the use of non-binary codes is appealing for small objects.
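Expression (2) and the bounds in (4) are straightforward to evaluate numerically. The following minimal sketch (function name ours) computes 𝑃𝑒(π‘˜, 𝛿, π‘ž) and checks it against Proposition 1 over a range of parameters:

```python
from math import prod

def failure_prob(k, delta, q):
    """Eq. (2): ML decoding failure probability of a LRF code over F_q."""
    return 1.0 - prod(1.0 - q ** (i - 1 - k - delta) for i in range(1, k + 1))

# Proposition 1: q^(-delta-1) <= P_e(k, delta, q) < q^(-delta) / (q - 1), for all k.
for q in (2, 4, 64):
    for k in (1, 10, 100):
        for delta in range(6):
            p = failure_prob(k, delta, q)
            assert q ** (-delta - 1) <= p < q ** (-delta) / (q - 1)
```

For π‘˜ = 1 the lower bound is met with equality, since 𝑃𝑒(1, 𝛿, π‘ž) = π‘ž^{βˆ’1βˆ’π›Ώ}.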

III. A CLASS OF RAPTOR CODES OVER π”½π‘ž

A Raptor code is obtained by concatenating an outer high-rate code (pre-code) with an inner Luby-transform (LT) code [5]. We derive Raptor codes on π”½π‘ž from their binary counterparts. In the process, we focus on the class of binary Raptor codes specified in [6], whose encoder is depicted in Fig. 2. A non-systematic LT encoder generates the output symbols from 𝑙 = π‘˜ + 𝑠 + β„Ž symbols 𝒇, known as the intermediate symbols. These latter symbols are generated by pre-coding the π‘˜ symbols π’…π‘˜. We have 𝒇α΅€ = [π’…π‘˜α΅€ | 𝒅𝑠ᡀ | π’…β„Žα΅€], where the 𝑠 symbols 𝒅𝑠 are known as the LDPC symbols and the β„Ž symbols π’…β„Ž as the half symbols. The (𝑠 Γ— π‘˜) and (β„Ž Γ— (π‘˜ + 𝑠)) encoding matrices GLDPC and GH, the encoding matrix GLT of the inner LT code, and the parameters 𝑠 and β„Ž depend on π‘˜ and are specified in [6]. A systematic Raptor encoder is obtained through a rate-1 linear pre-coder that generates the π‘˜ symbols π’…π‘˜ from the π‘˜ input symbols 𝒄. This pre-coder can be represented as the multiplication of 𝒄 by a properly chosen full-rank (π‘˜ Γ— π‘˜) matrix, denoted by G_T⁻¹ in Fig. 2. Adopting the same notation as in Section II, we now have 𝒆(0,…,π‘›βˆ’1) = GLT(0,…,π‘›βˆ’1) 𝒇. Note that, as opposed to G(0,…,π‘›βˆ’1) for a LRF code, GLT(0,…,π‘›βˆ’1) is a sparse matrix.

We derive Raptor codes over π”½π‘ž by extending to non-binary fields the encoder structure depicted in Fig. 2, i.e., by replacing all component encoders with non-binary counterparts. Specifically, we replace each non-zero entry in GLDPC, GH and GLT(0,…,π‘›βˆ’1) with an element picked randomly in π”½π‘ž βˆ– {0}. Next, encoding and decoding are described. The set of constraints on the Raptor output symbols can be represented in a compact way, including the constraints imposed both by


Fig. 3. Decoding failure rate vs. overhead for π‘ž-ary Raptor codes (π‘ž = 2, 4) with π‘˜ = 64 and π‘˜ = 512, compared to the upper bound (valid for all π‘˜) on the error probability of LRF codes over 𝔽2 and 𝔽4 .

the pre-coder and by the LT encoder, as

    A(0,…,n−1) 𝒇 = ⎡ 0           ⎤
                   ⎣ 𝒆(0,…,n−1) ⎦

where 0 is the length-(𝑠 + β„Ž) all-zero column vector and A(0,…,π‘›βˆ’1) is a ((𝑠 + β„Ž + 𝑛) Γ— 𝑙) matrix over π”½π‘ž, called the constraint matrix, given by

    A(0,…,n−1) = ⎡ GLDPC         I_s   Z   ⎤
                 ⎒ GH                  I_h βŽ₯
                 ⎣ GLT(0,…,n−1)            ⎦ .

Here, I𝑠 and Iβ„Ž are the (𝑠 Γ— 𝑠) and (β„Ž Γ— β„Ž) identity matrices, respectively, and Z is the (𝑠 Γ— β„Ž) all-zero matrix. In general, A(0,…,π‘›βˆ’1) is a sparse matrix. We use next the notation A(𝑗1,𝑗2,…,π‘—π‘Ÿ) to indicate the ((𝑠 + β„Ž + π‘Ÿ) Γ— 𝑙) submatrix of A(0,…,π‘›βˆ’1) obtained by selecting only the rows of GLT(0,…,π‘›βˆ’1) corresponding to the ESIs (𝑗1, 𝑗2, …, π‘—π‘Ÿ). Encoding exploits the (𝑙 Γ— 𝑙) sub-matrix A(0,…,π‘˜βˆ’1) formed by the first 𝑙 rows of A(0,…,π‘›βˆ’1). Since encoding is systematic, we have 𝒄 = 𝒆(0,…,π‘˜βˆ’1), from which

    A(0,…,k−1) 𝒇 = ⎡ 0 ⎤
                   ⎣ 𝒄 ⎦ .    (5)
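The block structure of the constraint matrix can be assembled mechanically from the three component matrices. The sketch below (function name and toy dimensions ours, not from [6]; entries are field elements stored as integers, so it applies to any π”½π‘ž) stacks the row blocks [GLDPC | I𝑠 | Z], [GH | Iβ„Ž] and GLT:

```python
def constraint_matrix(G_ldpc, G_h, G_lt):
    """Stack the pre-code and LT constraints into the constraint matrix A.

    G_ldpc: s x k, G_h: h x (k + s), G_lt: rows of length l = k + s + h.
    """
    s, k = len(G_ldpc), len(G_ldpc[0])
    h = len(G_h)
    l = k + s + h
    rows = []
    for i in range(s):                      # [ G_LDPC | I_s | Z ]
        rows.append(list(G_ldpc[i])
                    + [1 if j == i else 0 for j in range(s)]
                    + [0] * h)
    for i in range(h):                      # [ G_H | I_h ]
        rows.append(list(G_h[i])
                    + [1 if j == i else 0 for j in range(h)])
    rows += [list(r) for r in G_lt]         # [ G_LT ]
    assert all(len(r) == l for r in rows)   # every row spans all l intermediate symbols
    return rows
```

Selecting only the LT rows with the received ESIs then yields A(𝑗1,…,π‘—π‘Ÿ).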

Encoding consists of first solving (5) through Gaussian elimination to calculate the intermediate symbols 𝒇 ∈ π”½π‘žΛ‘, and then performing LT encoding of 𝒇 to obtain 𝒆(0,…,π‘›βˆ’1). Assume now π‘˜ + 𝛿 β‰₯ π‘˜ output symbols with set of ESIs {𝑗1, …, π‘—π‘˜+𝛿} are collected at the decoder. ML decoding is performed by first solving the system

    A(𝑗1,𝑗2,…,π‘—π‘˜+𝛿) 𝒇 = ⎑ 0                 ⎀
                        ⎣ 𝒆(𝑗1,𝑗2,…,π‘—π‘˜+𝛿) ⎦    (6)

through Gaussian elimination to obtain the intermediate symbols 𝒇. Once 𝒇 has been recovered, the input symbols are obtained as 𝒄 = GLT(0,…,π‘˜βˆ’1) 𝒇.
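Both (1) and (6) ultimately reduce to solving a linear system over π”½π‘ž by Gaussian elimination. The following minimal sketch (ours, again for π‘ž = 4 with elements 0–3, XOR addition and table-based multiplication) solves such a system and mimics the fountain setting by collecting random-combination symbols until decoding succeeds:

```python
import random

# GF(4) arithmetic: addition is XOR; multiplication and inverses via tables.
MUL = [[0, 0, 0, 0],
       [0, 1, 2, 3],
       [0, 2, 3, 1],
       [0, 3, 1, 2]]
INV = [0, 1, 3, 2]                  # INV[a] = a^-1 for a != 0

def gf4_solve(rows, rhs):
    """Gaussian elimination over GF(4); returns x with rows @ x = rhs,
    or None when the collected rows have rank < k (decoding failure)."""
    k = len(rows[0])
    aug = [r[:] + [b] for r, b in zip(rows, rhs)]      # augmented matrix
    for col in range(k):
        piv = next((i for i in range(col, len(aug)) if aug[i][col]), None)
        if piv is None:
            return None                                # no pivot: rank deficient
        aug[col], aug[piv] = aug[piv], aug[col]
        inv = INV[aug[col][col]]
        aug[col] = [MUL[inv][v] for v in aug[col]]     # scale pivot row to 1
        for i in range(len(aug)):
            if i != col and aug[i][col]:
                f = aug[i][col]
                aug[i] = [v ^ MUL[f][w] for v, w in zip(aug[i], aug[col])]
    return [aug[i][k] for i in range(k)]

# Fountain-style usage: keep collecting output symbols until the system is solvable.
rng = random.Random(1)
k = 16
c = [rng.randrange(4) for _ in range(k)]               # source block
rows, rhs, x = [], [], None
while x is None:
    g = [rng.randrange(4) for _ in range(k)]           # uniform coefficient row
    e = 0
    for gi, ci in zip(g, c):
        e ^= MUL[gi][ci]                               # received output symbol
    rows.append(g)
    rhs.append(e)
    if len(rows) >= k:
        x = gf4_solve(rows, rhs)
assert x == c                                          # source block recovered
```

By Proposition 1, the loop above typically stops after only a handful of symbols beyond π‘˜ even for π‘ž = 4; a production decoder would of course update the elimination incrementally rather than re-solving from scratch.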

Raptor codes present advantages in terms of encoding and decoding complexity w.r.t. their LRF counterparts. More specifically, efficient methods for the solution of (5) and (6) exist, which exploit the sparseness of the system of equations [7], [8]. Originally proposed for solving sparse systems of equations in 𝔽2, these algorithms extend straightforwardly to π”½π‘ž. Although with such approaches the number of required additions and multiplications in π”½π‘ž remains cubic (in 𝑙), the cubic cost function is multiplied by a very small constant, making the overall complexity affordable.

In Fig. 3, the decoding failure rate under ML decoding of binary Raptor codes from [6], with π‘˜ = 64 and π‘˜ = 512, and of their extension to 𝔽4 is depicted as a function of the overhead. The (tight and valid for all π‘˜) upper bounds on the performance of LRF codes over 𝔽2 and 𝔽4 are also shown. The Raptor codes closely approach the upper bounds, and the same was observed for codes on higher-order fields. This example shows that Raptor codes over π”½π‘ž obtained with the simple proposed technique achieve a performance very close to that of random codes, sharing the same performance advantages of adopting higher-order Galois fields.

IV. CONCLUSIONS

In this letter, the performance of LRF codes over π”½π‘ž has been analyzed through tight upper and lower bounds, and the advantage of adopting higher-order Galois fields in the code construction has been illustrated. A class of Raptor codes over π”½π‘ž has then been presented, showing through numerical simulation that their performance is very close to that of LRF codes, while offering a manageable encoding and ML decoding complexity. Non-binary Raptor codes represent a very appealing option in the presence of severe performance-versus-overhead requirements, especially for small source block sizes. The bounds derived in Proposition 1 can be confidently used to estimate their performance down to moderate error rates.
REFERENCES

[1] J. Byers, M. Luby, M. Mitzenmacher, and A. Rege, "A digital fountain approach to reliable distribution of bulk data," SIGCOMM Comput. Commun. Rev., vol. 28, no. 4, pp. 56–67, Oct. 1998.
[2] M. Shokrollahi, "Raptor codes," IEEE Trans. Inf. Theory, vol. 52, no. 6, pp. 2551–2567, June 2006.
[3] R. Lidl and H. Niederreiter, Finite Fields. Cambridge, UK: Cambridge Univ. Press, 1997.
[4] E. R. Berlekamp, "The technology of error-correcting codes," Proc. IEEE, vol. 68, no. 5, pp. 564–593, May 1980.
[5] M. Luby, "LT codes," in Proc. 43rd Annual IEEE Symp. on Foundations of Computer Science, Nov. 2002, pp. 271–282.
[6] 3GPP TS 26.346 V9.0.0, "Technical specification group services and system aspects; multimedia broadcast/multicast service (MBMS); protocols and codecs (Release 8)," Oct. 2009.
[7] D. Burshtein and G. Miller, "An efficient maximum likelihood decoding of LDPC codes over the binary erasure channel," IEEE Trans. Inf. Theory, vol. 50, no. 11, pp. 2837–2844, Nov. 2004.
[8] E. Paolini, G. Liva, B. Matuz, and M. Chiani, "Pivoting algorithms for maximum-likelihood decoding of LDPC codes over erasure channels," in Proc. 2009 IEEE Global Telecommunications Conference, Honolulu, HI, USA, Nov. 2009.
