Design of a Concatenated Coding Scheme for a Bit-Shift Channel

Eirik Rosnes† and Alexandre Graell i Amat‡
† Department of Informatics, University of Bergen, N-5020 Bergen, Norway. Email:
‡ Department of Electronics, Institut TELECOM-TELECOM Bretagne, 29238 Brest, France. Email:

Abstract—In this work, we propose a concatenated coding scheme with iterative decoding for a bit-shift channel. In more detail, we consider the serial concatenation of an outer error-correcting code with an inner modulation code, possibly preceded by an accumulator to improve iterative decoding performance. The bit-shift channel was originally proposed for magnetic and optical recording channels, but has recently become popular for inductively coupled channels. In particular, we search for optimal encoder mappings from an iterative decoding perspective for the inner modulation code, which has been designed to be single bit-shift error-correcting and also to have large average power. The latter is important in inductively coupled channels, since the receiver (or tag) gets its entire power from the received signal, and the information should be modulated in a way that maximizes the power transferred to the tag.

I. INTRODUCTION

The bit-shift channel [1, 2] with constrained input sequences was originally proposed to model timing errors for magnetic and optical recording. This channel was studied from an information-theoretic point of view in [1, 2]. However, devising practical low-complexity coding schemes for this channel model has not received much attention. For some code constructions, the interested reader is referred to [3, 4] and references therein.

In [5], the bit-shift channel model was slightly modified to deal with unconstrained input sequences and used to model timing errors in inductively coupled channels. Inductive coupling is a technique wherein one device (the reader) induces an electrical current in another device (the tag), thereby providing not only power for the tag, but also a communication channel. The tag itself usually has no other energy sources, and can be used either as a radio frequency identification (RFID) tag or attached to a sensor or other device. An overview of coding challenges for inductively coupled channels can be found in [6].

In a recent work [5], the issue of designing modulation codes with error-correcting capabilities was discussed. This work is a continuation of [5]; here we consider a more sophisticated coding scheme for the bit-shift channel. In particular, we consider the serial concatenation of an outer error-correcting code with nonlinear block codes, found by the search algorithm from [5], through an interleaver. To improve iterative decoding performance, the effect of an accumulate code before the inner block code, as described in [7], is discussed. At the receiver side, the inner block code and the accumulator are jointly decoded by a single soft-input soft-output decoder working on the joint accumulator/block code trellis. We address the construction of the joint accumulator/block code trellises and discuss their state complexity. Finally, through an extrinsic information transfer (EXIT) charts analysis [8], we design optimal encoder mappings (in terms of iterative decoding thresholds) for the inner (precoded) modulation code.

II. SYSTEM MODEL

Fig. 1. Encoder structure (K information bits → outer code CO → interleaver Π → accumulator 1/(1+D) → modulation code CM; the accumulator and CM together form the inner code CI, producing N code bits).

The encoder structure of the concatenated coding scheme considered in this work is depicted in Fig. 1. We consider the serial concatenation of an outer binary error-correcting code, CO, and an inner nonlinear binary block code, CM, possibly precoded by a rate-1, memory-one accumulator with generator polynomial g(D) = 1/(1 + D), through an interleaver. The concatenation of the accumulator and the modulation code is the inner encoder of the serial concatenation, denoted by CI. Note that the inner encoder satisfies CI ≡ CM if no accumulator is used. The use of a recursive precoder prior to the block code improves the iterative decoding performance. Indeed, it is well-known that for serially concatenated codes, the use of an inner recursive encoder is a necessary condition for the absence of a high error floor [7, 9]. The information sequence, of length K bits, is encoded by the outer encoder of rate RO and permuted through an interleaver Π of size NΠ = K/RO. The resulting codeword (possibly precoded by an accumulator) is divided into J blocks of k bits which are mapped to an (n, k) nonlinear block code CM of rate RM = k/n. If J is not a divisor of NΠ (or NΠ + 1 if an accumulator is used as a precoder, since the accumulator trellis is always terminated), tail bits are appended. The overall code rate is R = K/N = RO·RI, where N = Jn is the code block length and RI accounts also for the tail bits and the trellis termination bits of the accumulator. Since CM is usually of low rate, a high-rate outer code is desirable. Here, we consider Hamming codes (HCs) for CO, which achieve good performance at very high rates with low decoding complexity. The codes CM are found by the search algorithm from [5].
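As an illustration of the encoder chain described above (outer code, interleaver Π, accumulator 1/(1+D), modulation mapping CM), the following Python sketch assembles a toy transmitter. The single-parity-check outer code and the fixed pseudorandom interleaver are stand-ins for the Hamming codes and S-random interleavers used in the paper; the (5, 2) mapping is the code from Example 2 below, with an arbitrary choice of encoder mapping.

```python
import random

# Toy outer code: single-parity-check blocks (a stand-in for the
# paper's Hamming codes, used only to make the sketch self-contained).
def outer_encode(bits):
    out = []
    for i in range(0, len(bits), 4):          # rate-4/5 SPC blocks
        block = bits[i:i + 4]
        out.extend(block + [sum(block) % 2])
    return out

def accumulate(bits):
    """Rate-1 accumulator with g(D) = 1/(1+D): s_i = x_i XOR s_{i-1}, s_{-1} = 0."""
    s, out = 0, []
    for b in bits:
        s ^= b
        out.append(s)
    return out

# (5, 2) modulation code of Example 2; the 2-bit-to-codeword mapping
# is one arbitrary choice among the 4! possible encoders.
CM = {(0, 0): (1, 0, 1, 0, 1), (1, 0): (0, 1, 0, 0, 1),
      (0, 1): (1, 1, 0, 1, 0), (1, 1): (0, 1, 1, 1, 1)}

def encode(info, seed=0):
    coded = outer_encode(info)
    perm = list(range(len(coded)))
    random.Random(seed).shuffle(perm)          # interleaver Pi (fixed pseudorandom)
    interleaved = [coded[p] for p in perm]
    acc = accumulate(interleaved)              # recursive precoder
    if len(acc) % 2:                           # tail bit so the length splits into k = 2 blocks
        acc.append(0)
    out = []
    for i in range(0, len(acc), 2):            # J blocks of k = 2 bits each
        out.extend(CM[(acc[i], acc[i + 1])])
    return out

codeword = encode([1, 0, 1, 1, 0, 0, 1, 0])    # K = 8 information bits
```

With K = 8 information bits, the chain produces 10 outer-coded bits and J = 5 inner blocks, i.e. N = 25 code bits.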
TABLE I
Channel transition probability Pr(y_i = 0 | y_{i−1}, x_{i−2}, x_{i−1}, x_i, x_{i+1}) of the channel model from Definition 1. The first, third, fifth, and seventh columns contain values for the 5-tuple (y_{i−1}.x_{i−2}x_{i−1}x_ix_{i+1}), while the second, fourth, sixth, and eighth columns contain the corresponding values for the channel transition probability. Furthermore, since y_i is binary, Pr(y_i = 1 | y_{i−1}, x_{i−2}, x_{i−1}, x_i, x_{i+1}) = 1 − Pr(y_i = 0 | y_{i−1}, x_{i−2}, x_{i−1}, x_i, x_{i+1}).
[Table entries are functions of ǫ (values such as 1, 0, ǫ, 2ǫ, 1−ǫ, 1−2ǫ, and rational expressions in ǫ), with the tuples (1.0001) and (0.1110) marked "Not possible"; the exact entry-to-tuple assignment is garbled in this copy.]

A. The Bit-Shift Channel

The traditional bit-shift channel with constrained input sequences is described in [1, 2]. In [5], the bit-shift channel model was slightly modified to deal with unconstrained input sequences, and a channel model approximation was proposed. Here, we will only describe this approximation, and we refer the interested reader to [1, 2, 5] for a description of the original bit-shift channel.

Definition 1 ([5]): Define a channel model by the channel transition probability

  Pr(y|x) = ∏_{i=0}^{L−1} Pr(y_i | y_{i−1}, x_{i−2}, x_{i−1}, x_i, x_{i+1})    (1)

with binary input x = (x_0, ..., x_{L−1}) and binary output y = (y_0, ..., y_{L−1}), where Pr(y_i | y_{i−1}, x_{i−2}, x_{i−1}, x_i, x_{i+1}) is tabulated in Table I, and x_{−2} = x_{−1} = x_L = y_{−1} = 0.

The parameter ǫ in Table I, where 0 ≤ ǫ ≤ 1/2, is closely related to the probability of a right or left bit-shift [5]. In this work, we will use the channel model from Definition 1, since it simplifies the computation of soft information from the channel, compared to the traditional bit-shift channel.

B. Inner Codes

Let C denote an (n, M) block code over GF(2), where n is the codeword length and M is the number of codewords, and let C[J] denote the set of all sequences of length J ≥ 1 over C. It follows that the codewords from C[J] are binary strings of length Jn. We say that the block code C is single bit-shift error-correcting if all binary channel error vectors of weight one are correctable when the binary input sequence is taken from C[J] for any finite J ≥ 1. In [5], a necessary and sufficient condition for a block code to be single bit-shift error-correcting was given. For the codes considered in this paper, it follows that M = 2^k, and we will use the notation (n, k) for a block code with M = 2^k codewords.

The power of a binary vector a ∈ GF(2)^n, denoted by P(a), is defined as the rational number |χ(a)|/n, where χ(·) denotes the support set of its argument, i.e., the set of nonzero coordinates. The minimum power of C is defined as Pmin(C) = min_{a∈C} P(a), and the average power of C is defined as Pavg(C) = (1/|C|) ∑_{a∈C} P(a).

Let a ∈ GF(2)^n denote an arbitrary binary vector which is parsed into a sequence of phrases, where each phrase is a consecutive sequence of equal bits. Denote by ã = (ã_0, ..., ã_{n−1}) the corresponding integer sequence of phrase lengths (or run-lengths). The minimum (maximum) run-length is the minimum (maximum) component in ã. Note that, in the following, when we speak about run-lengths for a block code C, concatenations of codewords from C are also considered, i.e., we consider all sequences in C[J] for any finite J ≥ 1.

In this work, we will use codes found using the search algorithm in [5] as inner codes, possibly preceded by an accumulator to improve convergence properties in an iterative decoding scheme [7]. We constructed the following code.

Example 1: The code

  C = {(1101101), (1100110), (1100001), (1011110), (1010101), (1001011), (0111100), (0111011)}

has rate 3/7, minimum power 3/7, average power 17/28, and a maximum run-length of 4. This code is an optimal (in the sense that with the mentioned constraints it gives the highest possible rate) single bit-shift error-correcting code [5].

C. Capacity and Zero-Error Capacity

The channel model from Definition 1 is a Markov channel with a finite state space. Therefore, a lower bound on the capacity C (in bits per channel use) can be computed using the simulation-based method from [10]. On the other hand, the bit-shift channel may have a positive zero-error capacity CZE, i.e., there may be nonzero rates at which data can be transmitted with zero error probability, depending on the actual constraints on the transmitted sequences. For instance, the simple code C = {0001, 1110} of rate 1/4 will give zero error probability on the bit-shift channel, since the second bit can never be flipped by the channel. However, the average power of this code is only 1/2.
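Because Definition 1 factorizes Pr(y|x) into per-symbol terms, the channel likelihood is a simple product, which is what makes soft-information computation straightforward. A minimal sketch of Eq. (1) follows, with the transition table as a pluggable function; the stand-in table below is a trivially noiseless channel, not the paper's Table I, whose entries depend on ǫ.

```python
def channel_likelihood(y, x, trans):
    """Pr(y|x) per Eq. (1): product over i of Pr(y_i | y_{i-1}, x_{i-2}, x_{i-1}, x_i, x_{i+1}),
    with boundary values x_{-2} = x_{-1} = x_L = y_{-1} = 0."""
    L = len(x)
    xe = [0, 0] + list(x) + [0]        # x_{-2}, x_{-1}, x_0, ..., x_{L-1}, x_L
    p = 1.0
    y_prev = 0
    for i in range(L):
        window = (y_prev, xe[i], xe[i + 1], xe[i + 2], xe[i + 3])
        p0 = trans(window)             # Pr(y_i = 0 | window), supplied by the caller
        p *= p0 if y[i] == 0 else 1.0 - p0
        y_prev = y[i]
    return p

# Stand-in table: a noiseless channel (Pr(y_i = 0) = 1 iff x_i = 0).
# This is NOT the paper's Table I; it only demonstrates the product structure.
noiseless = lambda w: 1.0 if w[3] == 0 else 0.0
```

Plugging in the actual Table I entries (as functions of ǫ) in place of `noiseless` yields the likelihoods needed by the Viterbi or BCJR decoder.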
With an average power constraint and a maximum run-length constraint, the exact value of CZE is not known, but we can compute lower bounds on it from code constructions. For instance, with an average power constraint of 17/28 and a maximum run-length constraint of 4 (the same constraints as for the code in Example 1), we found the code

  C = {(10001000111), (10001111011), (01111000111), (01110111011)}

which will give a zero error probability, since the third and seventh bits can never be flipped by the channel. The rate of the code (which gives a lower bound on CZE) is 2/11 = 0.1818. This code is the best (in terms of rate) that we have been able to construct. We remark that it is a special case of a more general code construction that gives a lower bound on CZE as a function of an average power constraint when the maximum run-length is at most 4.

III. DESTINED TRELLISES

The concept of a destined trellis, where each state determines the last bits leading into the state and the first bits coming out of the state, was introduced in [11]. In more detail, for each state s in the trellis, the value of the next Wf code symbols, corresponding to the labels of all outgoing edges from s (possibly extended through states at consecutive trellis depths), must be the same. Similarly, the values of the previous Wp code symbols, corresponding to all incoming edges into s (possibly extended through states at previous trellis depths), must also be the same. A state that satisfies this condition is a (Wp, Wf)-destined state. Furthermore, a trellis is (Wp, Wf)-destined if every state in the trellis is (Wp, Wf)-destined. A destined trellis can be constructed from a conventional trellis by state splitting [12], as outlined in [11].

When using a (2, 1)-destined trellis with the bit-shift channel approximation in (1), the operation of the Viterbi or the BCJR algorithm is as for a simple memoryless channel. In the following, by a bit-oriented trellis for a block code we mean a trellis with a single bit on each edge, and by a block-oriented trellis we mean a trellis with exactly n bits on each edge. Also, since the transmitted sequence will contain several concatenated codewords from the block code, the last depth of the trellis is wrapped around to the beginning, i.e., we will consider tailbiting trellises.

Example 2: The code

  C = {10101, 01001, 11010, 01111}

is a very simple (5, 2) single bit-shift error-correcting block code with minimum power 2/5, average power 3/5, and a maximum run-length of 6 that can be found by computer search. The code is optimal (in the sense that with the mentioned constraints it gives the highest possible rate) [5]. In Fig. 2, a bit-oriented code trellis is depicted for this code. By using the state splitting algorithm, we can construct the (2, 1)-destined trellis in Fig. 3.

Fig. 2. Bit-oriented trellis for the block code in Example 2. The solid edges are labelled with a 1 and the dashed edges are labelled with a 0. The red edges have input label 1, while the corresponding black edges have input label 0.

Fig. 3. (2, 1)-destined bit-oriented trellis for the block code in Example 2. The solid edges are labelled with a 1 and the dashed edges are labelled with a 0. The red and the corresponding black edges have input label 1 and 0, respectively.

Let ψ = ψ(C) = (ψ00, ψ01, ψ10, ψ11) denote a 4-tuple, where ψab is the number of codewords from the code C with the property that the last two bits are a and b. Furthermore, let φ = φ(C) = (φ0, φ1) denote a 2-tuple, where φa is the number of codewords from the code C with the property that the first bit is a. Also, let ψ_S denote the restriction of ψ to the coordinates in S, where S is a subset of {00, 01, 10, 11}. For instance, ψ_{00,10} = (ψ00, ψ10).

Theorem 1: A joint (2, 1)-destined block-oriented trellis for an accumulator serially concatenated with an (n, k) block code C with M = 2^k, k ≥ 1, codewords and n ≥ 3, contains at least |χ(ψ(C))|·|χ(φ(C))| states at each depth. Also, there exists an encoding for the block code that gives exactly |χ(ψ(C))|·|χ(φ(C))| states at each depth if and only if

  ∃S ⊆ {00, 01, 10, 11} : ⟨ψ_S, 1_{|S|}⟩ = 2^{k−1}    (2)

where 1_x denotes an all-one vector of length x, x ≥ 1, and ⟨·, ·⟩ denotes the inner product operator.

Example 3: For the code from Example 2, ψ = (0, 2, 1, 1) and φ = (2, 2). Thus, a joint (2, 1)-destined block-oriented trellis for an accumulator serially concatenated with this code contains at least |χ(ψ(C))|·|χ(φ(C))| = 3 · 2 = 6 states at each depth. Furthermore, choosing S = {10, 11}, the condition in (2) is satisfied, and we conclude that there exists an encoding that gives exactly 6 states in the joint trellis at each depth. The encoder (10) → (01001), (01) → (11010), (00) → (10101), and (11) → (01111) (the same encoder as in Fig. 2) has this property. The joint (2, 1)-destined block-oriented trellis is shown in Fig. 4. In more detail, there are only two non-isomorphic joint trellises among the 4! = 24 possible encoders. Out of the 24 possible encoders, 8 give 6 states in the joint trellis. The rest (16) give 8 states. To get all 24 encoders, it is sufficient to change only the labelling of the two joint trellises.
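The quantities ψ(C) and φ(C) and the state-count bound of Theorem 1 can be checked mechanically; the sketch below simply tries every subset S for condition (2). The codes are those of Examples 1 and 2.

```python
from itertools import combinations

def psi(C):
    """psi_ab = number of codewords whose last two bits are ab."""
    t = {'00': 0, '01': 0, '10': 0, '11': 0}
    for c in C:
        t[c[-2:]] += 1
    return t

def phi(C):
    """phi_a = number of codewords whose first bit is a."""
    t = {'0': 0, '1': 0}
    for c in C:
        t[c[0]] += 1
    return t

def min_states(C):
    """Lower bound of Theorem 1: |support(psi(C))| * |support(phi(C))|."""
    supp = lambda t: sum(1 for v in t.values() if v > 0)
    return supp(psi(C)) * supp(phi(C))

def condition_2(C, k):
    """Condition (2): some subset S of {00,01,10,11} with <psi_S, 1> = 2^(k-1)."""
    p = psi(C)
    keys = ['00', '01', '10', '11']
    return any(sum(p[s] for s in S) == 2 ** (k - 1)
               for r in range(1, 5) for S in combinations(keys, r))

C2 = ['10101', '01001', '11010', '01111']              # Example 2, (5, 2)
C1 = ['1101101', '1100110', '1100001', '1011110',
      '1010101', '1001011', '0111100', '0111011']      # Example 1, (7, 3)
```

For the (5, 2) code this reproduces ψ = (0, 2, 1, 1), φ = (2, 2), and the 6-state bound of Example 3; for the (7, 3) code it reproduces the 8-state bound of Example 4.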
Fig. 4. Joint (2, 1)-destined block-oriented trellis for an accumulator serially concatenated with the code from Example 2. As indicated to the right of the trellis, the green edges are labelled with (11010), the blue edges are labelled with (01001), the red edges are labelled with (10101), and the black edges are labelled with (01111). The corresponding input labels are 11/01, 11/01, 00/10, and 00/10, respectively, where the first 2-tuple (11 for the green edge) corresponds to transitions in the top or bottom part of the trellis (the top and bottom parts are separated by a dashed line), and the second 2-tuple (01 for the green edge) corresponds to transitions from the top to the bottom part of the trellis, or the other way around.

Example 4: For the block code from Example 1, ψ = (1, 3, 2, 2) and φ = (2, 6). Thus, a joint (2, 1)-destined block-oriented trellis for an accumulator serially concatenated with this code contains at least |χ(ψ(C))|·|χ(φ(C))| = 4 · 2 = 8 states at each depth. Further, choosing S = {10, 11}, the condition in (2) is satisfied, and we conclude that there exists an encoding that gives exactly 8 states in the joint trellis at each depth. In fact, depending on the choice of the encoder, the joint trellis will have either 8, 10, 12, or 14 states. Out of the 8! = 40320 possible encoders, 1152 give 8 states in the joint trellis.

IV. CONCATENATED CODE DESIGN

In this section, we consider the design of a concatenated coding scheme for the bit-shift channel approximation from Definition 1. Two HCs with parameters (15, 11) and (31, 26), respectively, are considered for the outer code. For the inner code, the (7, 3) and (5, 2) modulation codes of Examples 1 and 2, respectively, are used. We found optimal encoder mappings for the inner precoded modulation codes through an EXIT charts analysis [8]. Here, by an optimal mapping we mean the mapping that gives the best iterative decoding threshold. An exhaustive search over all possible encoders was performed.

In Fig. 5, we plot the EXIT curves for the serially concatenated codes consisting of a (15, 11) HC and a (31, 26) HC, respectively, concatenated with the (7, 3) modulation code (denoted by BC1 in the figure) of Example 1 with 8 trellis states and with an optimal encoder mapping. We observed that encoder mappings giving an increased number of trellis states in the joint trellis did not improve the thresholds. In Fig. 5, Ia(CO) and Ie(CO) denote the a priori and extrinsic mutual information (MI), respectively, for the outer encoder CO. Likewise, we denote by Ia(CI) and Ie(CI) the a priori and extrinsic MI, respectively, for the inner encoder CI. The nonprecoded scheme (dashed curves) shows a crossing between the EXIT curves of the outer and inner codes, resulting in a high error floor. Indeed, as proved in [7], serially concatenated codes with a nonrecursive inner code cannot achieve Ie(CI) = 1 with perfect a priori MI. On the other hand, the EXIT curve for the combined accumulator/block code (solid curves) achieves Ie(CI) = 1 for perfect a priori MI. Thus, a significantly lower error floor is expected. A tunnel between the inner and outer encoder EXIT curves is observed at ǫ = 0.133 and ǫ = 0.094 for R = 33/105 and R = 78/217, respectively, indicating iterative decoding convergence around these values. On the other hand, if no iterations are considered, the nonprecoded schemes will perform the best, since they exhibit higher values of Ie(CI) when no a priori information is available. We ran a search over all 12 (7, 3) codes with a maximum run-length of at most 4 and an average power of at least 17/28. For comparison purposes, we report the EXIT curve for another (7, 3) code (denoted by BC2 in the figure), with an optimal encoder mapping, at ǫ = 0.133. In terms of convergence, the (7, 3) code BC1 performs the best among all codes.

Fig. 5. EXIT chart for the serial concatenation of an outer Hamming code ((15, 11) or (31, 26)) and an inner (precoded) modulation code for a bit-shift channel. Curves shown: (7, 3) BC1 at ǫ = 0.133 and ǫ = 0.094, and (7, 3) BC2 at ǫ = 0.133; axes Ie(CI), Ia(CO) versus Ia(CI), Ie(CO).

The convergence thresholds of the concatenated codes are given in Table II for several code rates. For comparison, we also report in Table II a lower bound on the channel capacity, denoted by ǫ∗, computed using the method from [10], and a lower bound on the zero-error capacity, denoted by CZE (see Section II-C). Note that the ǫ∗-values are lower bounds on ǫ for a given rate and average power, obtained by considering an independent and identically distributed input process with the average power tabulated in the fourth column of the table, while the CZE-values are lower bounds on the rate. Thus, they cannot be directly compared. The values for CZE were obtained by code constructions (details are omitted due to lack of space) that satisfy the average power and maximum run-length constraints in the fourth and fifth columns, respectively. We observe that the concatenated code with an inner (7, 3) code performs closer to capacity than the code with an inner (5, 2) code.

TABLE II
Capacity lower bounds, convergence thresholds, and code properties of concatenated codes for a bit-shift channel.

  Code            Threshold   ǫ∗      Av. power   Max. run-length   CZE
  (15,11)–(5,2)   0.115       0.245   3/5         6                 0.2353
  (15,11)–(7,3)   0.133       0.226   17/28       4                 0.1818
  (31,26)–(5,2)   0.082       0.210   3/5         6                 0.2353
  (31,26)–(7,3)   0.094       0.193   17/28       4                 0.1818
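The average power and maximum run-length columns in Table II are properties of the inner codes from Examples 1 and 2, and are easy to verify with a short script. Run-lengths are measured over concatenations of codewords (Section II-B); checking ordered pairs suffices here because no codeword consists of a single run, so no run can span three codewords.

```python
from fractions import Fraction
from itertools import product

def avg_power(C):
    """Pavg(C) = average fraction of nonzero coordinates per codeword."""
    n = len(C[0])
    return Fraction(sum(c.count('1') for c in C), len(C) * n)

def max_run(s):
    """Longest run of equal bits in the string s."""
    best = cur = 1
    for a, b in zip(s, s[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

def max_run_length(C):
    """Maximum run-length over concatenated codewords; ordered pairs
    (including a codeword with itself) suffice when no codeword is a single run."""
    return max(max_run(a + b) for a, b in product(C, repeat=2))

C2 = ['10101', '01001', '11010', '01111']              # Example 2, (5, 2)
C1 = ['1101101', '1100110', '1100001', '1011110',
      '1010101', '1001011', '0111100', '0111011']      # Example 1, (7, 3)
```

Both codes check out against the table: average powers 3/5 and 17/28, maximum run-lengths 6 and 4.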
Also, all concatenated codes have rates larger than CZE. We remark that to achieve the rate CZE with an average power constraint of 3/5, codes with a large k are needed.

V. SIMULATION RESULTS

In Fig. 6, we give bit error rate (BER) results (empty markers) and frame error rate (FER) results (solid markers) for the concatenated code with the inner accumulated (5, 2) code from Example 2 with an optimal encoder mapping. The block length is K = 200 bits, and an S-random interleaver and a maximum of 10 iterations are used. In spite of the short block length, very low error probabilities are achieved. For comparison purposes, we also report in the figure the curve for a stand-alone (7, 2) modulation code of similar rate to that of the (15, 11)–(5, 2) code. The (7, 2) code has a maximum run-length of 5 (relaxing this to 6, or constraining it to 4, will not result in codes with better performance) and an average power of 17/28. A significant gain is achieved by the concatenated code. Finally, we also plot the curves for the (5, 2) and the accumulated (5, 2) stand-alone codes. Clearly, when used as a stand-alone code, the nonaccumulated (5, 2) code performs better.

Fig. 6. BER and FER curves for the concatenated coding scheme with an inner accumulated (5, 2) code on a bit-shift channel. Curves shown: (15, 11) + acc. (5, 2) BC, (31, 26) + acc. (5, 2) BC, acc. (5, 2) BC, (5, 2) BC, and (7, 2) BC.

In Fig. 7, we give BER (empty markers) and FER (solid markers) results for the concatenated code with the inner accumulated (7, 3) code from Example 1 with an optimal encoder mapping. The block length is K = 200 bits, and an S-random interleaver and a maximum of 10 iterations are used. Despite the slightly higher code rate, the code achieves slightly earlier convergence than the concatenated scheme with the (5, 2) inner code, as predicted by the EXIT charts analysis. Moreover, very low error rates are achieved. For comparison, the FER curves with the accumulated (7, 3) code BC2 as inner code, with an optimal encoder mapping, are also plotted.

Fig. 7. BER and FER curves for the concatenated coding scheme with an inner accumulated (7, 3) code on a bit-shift channel. Curves shown: (15, 11) + acc. (7, 3) BC1, (15, 11) + acc. (7, 3) BC2, (31, 26) + acc. (7, 3) BC1, (31, 26) + acc. (7, 3) BC2, and (7, 2) BC.

VI. CONCLUSION AND FUTURE WORK

In this paper, we proposed a serially concatenated coding scheme with iterative decoding for a bit-shift channel. Optimal encoder mappings from an iterative decoding perspective were found for the inner modulation code, which is single bit-shift error-correcting and also has large average power. The power constraint is important in inductively coupled channels, since the tag gets its entire power from the received signal. Interesting topics for future work include interleaver design, error floor analysis through bounding techniques, and the design of the inner code to achieve a low error floor.

REFERENCES

[1] S. Shamai and E. Zehavi, "Bounds on the capacity of the bit-shift magnetic recording channel," IEEE Trans. Inf. Theory, vol. 37, no. 3, pp. 863–872, May 1991.
[2] S. Baggen, V. Balakirsky, D. Denteneer, S. Egner, H. Hollmann, L. Tolhuizen, and E. Verbitskiy, "Entropy of a bit-shift channel," IMS Lecture Notes–Monograph Series, Dynamics & Stochastics, vol. 48, pp. 274–285, 2006.
[3] T. Kløve, "Codes correcting a single insertion/deletion of a zero or a single peak-shift," IEEE Trans. Inf. Theory, vol. 41, no. 1, pp. 279–283, Jan. 1995.
[4] A. V. Kuznetsov and A. J. H. Vinck, "A coding scheme for single peak-shift correction in (d, k)-constrained channels," IEEE Trans. Inf. Theory, vol. 39, no. 4, pp. 1444–1450, Jul. 1993.
[5] E. Rosnes, Á. I. Barbero, and Ø. Ytrehus, "Coding for a bit-shift channel with applications to inductively coupled channels," in Proc. IEEE Global Telecommun. Conf. (GLOBECOM), Honolulu, HI, Nov./Dec. 2009.
[6] Ø. Ytrehus, "Communication on inductively coupled channels: Overview and challenges," in Proc. 2nd Int. Castle Meeting on Coding Theory and Appl. (2ICMCTA), Lecture Notes in Computer Science, vol. 5228, Medina del Campo, Spain, Sep. 2008, pp. 186–195.
[7] J. Kliewer, A. Huebner, and D. J. Costello, Jr., "On the achievable extrinsic information of inner decoders in serial concatenation," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Seattle, WA, Jul. 2006, pp. 2680–2684.
[8] S. ten Brink, "Convergence behaviour of iteratively decoded parallel concatenated codes," IEEE Trans. Commun., vol. 49, no. 10, pp. 1727–1737, Oct. 2001.
[9] S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, "Serial concatenation of interleaved codes: Performance analysis, design, and iterative decoding," IEEE Trans. Inf. Theory, vol. 44, no. 3, pp. 909–926, May 1998.
[10] D. M. Arnold, H.-A. Loeliger, P. O. Vontobel, A. Kavčić, and W. Zeng, "Simulation-based computation of information rates for channels with memory," IEEE Trans. Inf. Theory, vol. 52, no. 8, pp. 3498–3508, Aug. 2006.
[11] Á. I. Barbero and Ø. Ytrehus, "On the bit oriented trellis structure of run length limited codes on discrete local data dependent channels," Discrete Math., vol. 241, no. 1–3, pp. 51–63, Oct. 2001.
[12] R. L. Adler, D. Coppersmith, and M. Hassner, "Algorithms for sliding block codes: An application of symbolic dynamics to information theory," IEEE Trans. Inf. Theory, vol. IT-29, no. 1, pp. 5–22, Jan. 1983.