Channel coding for quantum key distribution


  1. Channel Coding for Quantum Key Distribution
     Gottfried Lechner (gottfried.lechner@unisa.edu.au)
     Institute for Telecommunications Research, University of South Australia
     July 7, 2011, HiPANQ Workshop, Vienna
  2. Outline
     Basics: Channel Coding; Slepian-Wolf Coding; Binning and the Dual Channel; Linear Block Codes, Syndrome and Rates
     QKD Reconciliation: System Setup; Example; Optimisation
     Conclusions
  3. Channel Coding
     Shannon, "Communication in the Presence of Noise" (1949).
     [Facsimile of the paper, including Fig. 1, "General communications system": encoder, channel, decoder; the channel may perturb the signal by noise or distortion, and reliable transmission requires the code rate Rc to stay below the channel capacity C.]
  4. Channel Coding
     Shannon, "Communication in the Presence of Noise" (1949). [Same facsimile as the previous slide, now overlaid with the theorem:]
     Channel Coding Theorem [Shannon 1948]: For any ε > 0 and Rc < C, for large enough N, there exists a code of length N and rate Rc and a decoding algorithm such that the maximal probability of block error is less than ε.
  5. Typical Approach
     choose a family of channels with a single parameter (e.g., AWGN, BSC, BEC, ...)
     fix a code rate
     optimise the code such that it achieves vanishing error probability close to capacity
     [Plot: bit error rate vs. Eb/N0 for an irregular code decoded with SPA and with MSA (variable scaling, fixed scaling 0.60 / 0.70, universal).]
  6. Slepian-Wolf Coding
     Slepian and Wolf, "Noiseless coding of correlated information sources" (1973).
     [Facsimile: Fig. 1, the correlated source coding configuration; Fig. 2, the admissible rate region in the Rx-Ry plane.]
     transmit two correlated sources over two noiseless channels
     joint encoding and decoding: H(X, Y)
     separate encoding and decoding: H(X) + H(Y) ≥ H(X, Y)
  7. Slepian-Wolf Coding
     Slepian-Wolf Theorem (1973): The admissible rate region is given by the rate pairs satisfying
       Rx ≥ H(X|Y)
       Ry ≥ H(Y|X)
       Rx + Ry ≥ H(X, Y)
     There is no penalty if X and Y are encoded separately!
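For two binary sources correlated via a BSC (the simple case the talk uses later), the corner-point rate H(X|Y) reduces to the binary entropy h2(p). A small sketch of that corner point under this assumption (illustrative, not from the slides):

```python
import math

def h2(p: float) -> float:
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def slepian_wolf_corner(p: float):
    """Corner point (Rx, Ry) = (H(X|Y), H(Y)) of the Slepian-Wolf
    region for uniform X and Y = X xor E, E ~ Bernoulli(p)."""
    hy = 1.0            # H(Y): Y is uniform by symmetry
    hx_given_y = h2(p)  # H(X|Y) = h2(p) for BSC correlation
    return hx_given_y, hy

rx, ry = slepian_wolf_corner(0.1)
# At the corner, the sum rate meets the joint entropy:
# Rx + Ry = H(X|Y) + H(Y) = H(X, Y)
```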
  8. Slepian-Wolf Coding
     [Facsimile: Fig. 2 from Slepian and Wolf (1973), the admissible rate region in the Rx-Ry plane, with H(X|Y), H(X) and H(X,Y) marked on the Rx axis.]
  9. Slepian-Wolf Coding
     assume that Y is transmitted at H(Y)
     we operate at a corner point of the Slepian-Wolf region
     for this corner point we can use the syndrome of a channel code as a binning scheme
  10. Binning with Syndrome
     [Diagram: the space of sequences partitioned into Bin 1, Bin 2, Bin 3.]
     encoding of X can be done by random binning
     the syndrome of a linear code is used for binning
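The binning step is just a syndrome computation s = x H^T over GF(2): all sequences with the same syndrome land in the same bin. A minimal sketch; the 3 x 6 parity-check matrix here is an arbitrary toy example, not one from the talk:

```python
def syndrome(x, H):
    """Bin index of sequence x: s = x H^T over GF(2).
    H is given as a list of rows; x as a list of bits."""
    return [sum(hi * xi for hi, xi in zip(row, x)) % 2 for row in H]

# Toy parity-check matrix: M = 3 checks, N = 6 bits (illustrative only)
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

x = [1, 0, 1, 1, 0, 0]
s = syndrome(x, H)  # all sequences sharing s lie in the same bin
```

The all-zero syndrome bin is exactly the code itself; Alice sends s, and Bob decodes within that bin using his side information Y.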
  11. Dual Channel
     Correlated sources:
     assume sources X and Y with P(X, Y) = P(X)P(Y|X)
     generate X according to P(X)
     transmit X over the channel P(Y|X) to obtain Y
  12. Dual Channel
     Correlated sources: assume sources X and Y with P(X, Y) = P(X)P(Y|X); generate X according to P(X); transmit X over the channel P(Y|X) to obtain Y.
     What is the channel that is seen by the channel decoder?
     in general it is the dual channel, which is equal to neither P(Y|X) nor P(X|Y)
     the channel seen by the decoder is always a symmetric channel with uniform input; therefore, linear codes can be used
     for the simple case of two binary sources correlated via a BSC, all these channels are the same
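The last bullet can be checked directly with Bayes' rule: for uniform X sent over a BSC(p), the reverse channel P(X|Y) is again a BSC with the same crossover probability. A one-function sketch (illustrative, not from the slides):

```python
def posterior_flip_prob(p: float) -> float:
    """P(X != y | Y = y) for uniform X sent over a BSC(p).
    By Bayes' rule this equals p again, so the reverse channel
    P(X|Y) is the same BSC, matching the slide's claim for
    binary sources correlated via a BSC."""
    joint_flip = p * 0.5          # P(X = 1 - y, Y = y)
    joint_same = (1.0 - p) * 0.5  # P(X = y, Y = y)
    return joint_flip / (joint_flip + joint_same)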
  13. Linear Block Codes, Syndrome and Rates
     C = { x ∈ {0, 1}^N : x H^T = 0 }
     code rate: Rc = (N - M)/N = 1 - M/N
  14. Linear Block Codes, Syndrome and Rates
     C = { x ∈ {0, 1}^N : x H^T = 0 },  Rc = (N - M)/N = 1 - M/N
     C_s = { x ∈ {0, 1}^N : x H^T = s },  Rs = M/N = 1 - Rc
     efficiency parameter: f = M / (N · H(X|Y)) = Rs / H(X|Y)
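The rates and the efficiency parameter follow directly from (N, M). The sketch below specialises H(X|Y) to h2(p) for BSC-correlated sources; that specialisation is an assumption matching the simple case from the dual-channel slide, not something the formulas require:

```python
import math

def h2(p: float) -> float:
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def syndrome_rates(N: int, M: int):
    """Code rate Rc = 1 - M/N and syndrome rate Rs = M/N = 1 - Rc
    for an M x N parity-check matrix."""
    return 1 - M / N, M / N

def efficiency(N: int, M: int, p: float) -> float:
    """Reconciliation efficiency f = Rs / H(X|Y), with H(X|Y) = h2(p)
    for BSC(p) correlation. f = 1 means the Slepian-Wolf limit."""
    _, rs = syndrome_rates(N, M)
    return rs / h2(p)
```

For example, a rate-1/2 code at p = 0.11 (where h2(p) is almost exactly 0.5) operates essentially at the limit f ≈ 1.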
  15. LDPC Codes
     An M x N parity-check matrix (here M = 8, N = 16) with variable-node degree dv and check-node degree dc:
             [ 0 0 0 0 0 0 0 0 1 0 1 0 1 0 1 0 ]
             [ 0 0 0 0 0 0 1 1 0 1 0 1 0 0 0 0 ]
             [ 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 0 ]
             [ 1 1 0 1 0 0 0 0 0 0 0 0 0 1 0 0 ]
       H  =  [ 1 0 0 0 0 1 0 0 0 0 0 1 0 0 1 0 ]
             [ 0 0 0 0 1 0 0 0 0 1 0 0 1 1 0 0 ]
             [ 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 1 ]
             [ 0 0 1 0 0 1 1 0 0 0 0 0 0 0 0 1 ]
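The matrix on the slide is regular: every column has weight dv = 2 and every row weight dc = 4, which a short check (a sketch, using the matrix transcribed from the slide) confirms:

```python
# The 8 x 16 parity-check matrix from the slide.
H = [
    [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    [0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0],
    [0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0],
    [1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
    [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0],
    [0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0],
    [0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1],
    [0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1],
]

def degrees(H):
    """Return (dv, dc) for a regular LDPC matrix, i.e. one where
    all columns share one weight and all rows share another."""
    row_w = {sum(row) for row in H}            # check-node degrees
    col_w = {sum(col) for col in zip(*H)}      # variable-node degrees
    assert len(row_w) == 1 and len(col_w) == 1, "H is irregular"
    return col_w.pop(), row_w.pop()
```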
  16. Outline
     Basics: Channel Coding; Slepian-Wolf Coding; Binning and the Dual Channel; Linear Block Codes, Syndrome and Rates
     QKD Reconciliation: System Setup; Example; Optimisation
     Conclusions
  17. Quantum Key Distribution
     [Diagram: Alice (X) and Bob (Y) connected by a quantum channel and a public channel.]
     Alice and Bob generate a common key
     they communicate via a quantum channel and a public channel
     Eve attempts to gain knowledge of the key
  18. System Setup
     [Diagram: encoder at Alice with input X, public channel, decoder at Bob with side information Y.]
     the quantum channel creates a correlated source: Alice observes X and Bob observes Y
     Alice has to communicate at least H(X|Y) over the public channel
     this corresponds to the corner point of the Slepian-Wolf region
  19. Aims
     Aims of QKD:
     Alice and Bob want to create a common key
     the goal is to maximise the key generation rate
     this does not necessarily require error-free communication (as long as the errors are detectable)
     the key generation rate can be limited by
       the quantum channel (quantum source)
       the data rate over the public channel
       the processing capabilities of Bob
  20. Example
     [Plot: word error rate (WER) vs. syndrome rate rs for Algorithm 1 and Algorithm 2.]
  21. Example
     [Build slide: the same WER plots.]
  22. Example
     [Build slide: WER and key rate vs. rs for both algorithms.]
  23. Example
     [Build slide: WER, key rate and decoding time vs. rs for both algorithms.]
  24. Optimisation Problem
     maximum achievable key rate: rk,max = fk(rs, pX,Y)
     word error probability: pe = fe(rs, pX,Y, A)
     decoding complexity: td = ft(rs, pX,Y, A)
  25. Optimisation Problem
     maximum achievable key rate: rk,max = fk(rs, pX,Y)
     word error probability: pe = fe(rs, pX,Y, A)
     decoding complexity: td = ft(rs, pX,Y, A)
     Optimisation Problem: rk = max { rk,max · (1 - pe) } subject to td < td,max,
     where the maximisation is taken over 0 < rs < 0.5 and all decoding algorithms A.
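Since fk, fe and ft are only available empirically (they depend on the code family and decoder), the optimisation can be approached by brute force over a grid of rs and a set of algorithms. The sketch below uses toy stand-in functions for fk, fe and ft; those stand-ins are pure illustration, not the functions from the talk:

```python
def optimise_key_rate(rk_max, pe, td, td_max, algorithms, n_grid=100):
    """Brute-force the optimisation problem from the slide:
    maximise rk = rk_max(rs) * (1 - pe(rs, A)) over 0 < rs < 0.5
    and algorithms A, subject to td(rs, A) < td_max.
    rk_max, pe, td are callables standing in for f_k, f_e, f_t."""
    best = 0.0
    for i in range(1, n_grid):
        rs = 0.5 * i / n_grid
        for A in algorithms:
            if td(rs, A) < td_max:  # complexity constraint
                best = max(best, rk_max(rs) * (1 - pe(rs, A)))
    return best

# Toy stand-ins (purely illustrative):
rk_max = lambda rs: 0.5 - rs           # key rate shrinks with syndrome rate
pe = lambda rs, A: max(0.0, 0.3 - rs)  # more syndrome bits -> fewer errors
td = lambda rs, A: 1.0                 # constant decoding time
best = optimise_key_rate(rk_max, pe, td, td_max=2.0, algorithms=["SPA"])
```

Even this toy version shows the slide's point: the maximiser of rk need not be the rs that minimises the error probability.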
  26. Optimisation
     The decoding algorithm can be
       fixed
       chosen from a fixed set of algorithms
       adaptively changed during the decoding process (e.g., gear-shift decoding)
     The coding rate can be
       fixed
       chosen from a fixed set of rates (rate-compatible codes)
       adaptively changed during the decoding process (rateless codes)
  27. Message-Passing Decoders
     [Diagram: variable nodes (degree dv) and check nodes (degree dc) exchanging messages Lvc and Lcv; Lch is the channel message.]
     Sum-Product Algorithm (SPA):
       Lvc,i = Lch + Σ_{j≠i} Lcv,j
       Lcv,i = 2 tanh⁻¹( Π_{j≠i} tanh(Lvc,j / 2) )
     Min-Sum Algorithm (MSA):
       Lvc,i = Lch + Σ_{j≠i} Lcv,j
       Lcv,i = α · min_{j≠i} |Lvc,j| · Π_{j≠i} sign(Lvc,j)
     Binary Message-Passing (BMP):
       mvc,i = majority(mch, {mcv,j}_{j≠i})
       mcv,i = xor_{j≠i} mvc,j
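The two check-node rules above can be rendered directly; a sketch follows (the default scaling α = 0.8 is an arbitrary example value, not one prescribed by the slides):

```python
import math

def spa_check_update(lvc):
    """SPA check-node rule: L_cv,i = 2 atanh( prod_{j != i} tanh(L_vc,j / 2) )."""
    out = []
    for i in range(len(lvc)):
        prod = 1.0
        for j, l in enumerate(lvc):
            if j != i:
                prod *= math.tanh(l / 2.0)
        out.append(2.0 * math.atanh(prod))
    return out

def msa_check_update(lvc, alpha=0.8):
    """MSA check-node rule with scaling alpha:
    L_cv,i = alpha * min_{j != i} |L_vc,j| * prod_{j != i} sign(L_vc,j)."""
    out = []
    for i in range(len(lvc)):
        rest = [l for j, l in enumerate(lvc) if j != i]
        sign = 1.0
        for l in rest:
            sign *= 1.0 if l >= 0 else -1.0
        out.append(alpha * min(abs(l) for l in rest) * sign)
    return out
```

With α = 1, min-sum upper-bounds the sum-product message magnitudes, which is why the scaled variants (fixed 0.60 / 0.70 on the earlier slide) recover most of the loss.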
  28. Gear-Shift Decoding
     [Facsimile from Ardakani and Kschischang, "Gear-shift decoding," IEEE Trans. Commun., 2006: Fig. 2, a simple gear-shifting trellis of size six with three algorithms. Every gear-shifting sequence corresponds to a path through the trellis; some vertices have fewer than three outgoing edges, which happens when an algorithm has a closed EXIT chart at that message-error rate, or when two algorithms result in a parallel edge (in which case only the lower-complexity algorithm is retained).]
  29. Fixed Rate vs Rateless
     error rate on the quantum channel known, large block length:
       transmit the syndrome over the public channel and discard the key if decoding is not successful (one bit of feedback)
     error rate on the quantum channel varies:
       not enough data on the public channel leads to a high error rate
       too much data on the public channel reduces the key rate
  30. Literature: Information Theory
     David Slepian and Jack K. Wolf. Noiseless coding of correlated information sources. IEEE Transactions on Information Theory, 19(4):471–480, 1973.
     Aaron D. Wyner. Recent results in the Shannon theory. IEEE Transactions on Information Theory, 20(1):2–10, 1974.
     Jun Chen, Da-ke He, and Ashish Jagmohan. On the duality between Slepian-Wolf coding and channel coding under mismatched decoding. IEEE Transactions on Information Theory, 55(9):4006–4018, 2009.
  31. Literature: Coding
     Robert G. Gallager. Low-density parity-check codes. IRE Transactions on Information Theory, 8(1):21–28, 1962.
     Michael Luby. LT codes. In IEEE Symposium on Foundations of Computer Science, pages 271–280, 2002.
     Amin Shokrollahi. Raptor codes. IEEE Transactions on Information Theory, 52(6):2551–2567, 2006.
     T. Richardson and R. Urbanke. Modern Coding Theory. Cambridge University Press, 2008.
  32. Literature: QKD Basics
     Gilles Brassard and Louis Salvail. Secret-key reconciliation by public discussion. In Advances in Cryptology, EUROCRYPT '93, pages 410–423, 1994.
     Tomohiro Sugimoto and Kouichi Yamazaki. A study on secret key reconciliation protocol "Cascade". IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E83-A:1987–1991, 2000.
     W. T. Buttler, S. K. Lamoreaux, J. R. Torgerson, G. H. Nickel, C. H. Donahue, and C. G. Peterson. Fast, efficient error reconciliation for quantum cryptography. arXiv, quant-ph, 2002.
     Hao Yan, Xiang Peng, Xiaxiang Lin, Wei Jiang, Tian Liu, and Hong Guo. Efficiency of Winnow protocol in secret key reconciliation. In Computer Science and Information Engineering, 2009 WRI World Congress on, volume 3, pages 238–242, 2009.
  33. Literature: Coding for QKD (non-exhaustive)
     David Elkouss, Anthony Leverrier, Romain Alleaume, and Joseph J. Boutros. Efficient reconciliation protocol for discrete-variable quantum key distribution. In International Symposium on Information Theory, pages 1879–1883, 2009.
     David Elkouss, Jesus Martinez-Mateo, Daniel Lancho, and Vicente Martin. Rate compatible protocol for information reconciliation: An application to QKD. In Information Theory Workshop (ITW), 2010 IEEE, pages 1–5, 2010.
     David Elkouss, Jesus Martinez-Mateo, and Vicente Martin. Efficient reconciliation with rate adaptive codes in quantum key distribution. arXiv, quant-ph, 2010.
     David Elkouss, Jesus Martinez-Mateo, and Vicente Martin. Secure rate-adaptive reconciliation. In Information Theory and its Applications (ISITA), 2010 International Symposium on, pages 179–184, 2010.
     Kenta Kasai, Ryutaroh Matsumoto, and Kohichi Sakaniwa. Information reconciliation for QKD with rate-compatible non-binary LDPC codes. In Information Theory and its Applications (ISITA), 2010 International Symposium on, pages 922–927, 2010.
     Jesus Martinez-Mateo, David Elkouss, and Vicente Martin. Interactive reconciliation with low-density parity-check codes. In Turbo Codes and Iterative Information Processing (ISTC), 2010 6th International Symposium on, pages 270–274, 2010.
  34. Literature: Implementation (non-exhaustive)
     Chip Elliott, Alexander Colvin, David Pearson, Oleksiy Pikalo, John Schlafer, and Henry Yeh. Current status of the DARPA quantum network. arXiv, 2005.
     Jerome Lodewyck, Matthieu Bloch, Raul Garcia-Patron, Simon Fossier, Evgueni Karpov, Eleni Diamanti, Thierry Debuisschert, Nicolas J. Cerf, Rosa Tualle-Brouri, Steven W. McLaughlin, and Philippe Grangier. Quantum key distribution over 25 km with an all-fiber continuous-variable system. arXiv, quant-ph, 2007.
     Simon Fossier, Eleni Diamanti, Thierry Debuisschert, André Villing, Rosa Tualle-Brouri, and Philippe Grangier. Field test of a continuous-variable quantum key distribution prototype. arXiv, quant-ph, 2008.
     Simon Fossier, J. Lodewyck, Eleni Diamanti, Matthieu Bloch, Thierry Debuisschert, Rosa Tualle-Brouri, and Philippe Grangier. Quantum key distribution over 25 km, using a fiber setup based on continuous variables. In Lasers and Electro-Optics, 2008 and 2008 Conference on Quantum Electronics and Laser Science (CLEO/QELS 2008), pages 1–2, 2008.
     A. Dixon, Z. Yuan, J. Dynes, A. Sharpe, and Andrew Shields. Megabit per second quantum key distribution using practical InGaAs APDs. In Lasers and Electro-Optics, 2009 and 2009 Conference on Quantum Electronics and Laser Science (CLEO/QELS 2009), pages 1–2, 2009.
  35. Conclusions
     Reconciliation for QKD is a Slepian-Wolf coding problem (in a corner point)
     linear codes are sufficient for the optimal solution
     maximising the key rate is not necessarily equivalent to minimising the error rate
     complexity constraints may lead to a non-trivial optimisation problem to find the best codes and decoding algorithms
     rate-adaptive or rateless schemes might be necessary in cases where the error rate on the quantum channel is unknown
