UIUCDCS-R-75-770

AN ANALYSIS OF BURST ENCODING METHODS AND TRANSMISSION PROPERTIES

by

GARY LEWIS TAYLOR

December 1975

Department of Computer Science
University of Illinois
Urbana, Illinois 61801

This work was supported in part by Contract No. N00014-75-C-0982 and was submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science at the University of Illinois.

ACKNOWLEDGMENT

The author wishes to thank the members of the Information Engineering Laboratory for their assistance and encouragement. He also wishes to thank Kathy Gee for typing this thesis, Stan Zundo for the drawings, and Dennis Reed for its publication. The author is indebted to his advisor, W. J. Kubitz, for counsel and technical assistance throughout the project.
TABLE OF CONTENTS

1 INTRODUCTION
  1.1 Overview
  1.2 Definitions
2 VERNIER ENCODING
  2.1 Ramp Encoder
  2.2 Vernier Encoder
3 PARALLEL ENCODING AND FILTERING
  3.1 Parallel Encoder
  3.2 Filtering
  3.3 PA Trees
  3.4 PA Filter
4 DELTA BLOCK ENCODING
  4.1 Delta Block Encoder
  4.2 Optimality of the DBE
  4.3 Filters for the DBE
  4.4 Filtering to Increase Precision
  4.5 Summary of DBE Properties
5 ENCODING FOR ERROR CORRECTION
  5.1 Constraints on the Encoder
  5.2 Hybrid Encoder
6 SUMMARY OF ENCODING RESULTS
7 SINGLE-BLOCK ERROR CORRECTION
  7.1 Characteristics of BSR Codes
  7.2 Performance Measures
  7.3 Majority Decoding
  7.4 Modifications to the Majority Decoder
8 COMPARISON WITH OTHER CODES
  8.1 Repetition Codes
  8.2 Reed-Muller Codes
  8.3 Theoretical Bound
  8.4 Performance Comparison
9 OTHER ERROR CORRECTION METHODS
  9.1 2-Way Majority Decoder
  9.2 Recoding
10 CONCLUSIONS
APPENDICES
  A Negative Coefficients
  B Cut-off Frequency of a BSR
  C Bandpass Encoder
REFERENCES

1 INTRODUCTION

1.1 Overview

Burst Processing was proposed by Professor W. J. Poppelbaum as a low-cost alternative to both stochastic and weighted-binary processing. The proposed burst encoding was a serial binary data stream divided into "blocks", each of which contained a "burst" of ones followed by zeros. Figure 1 shows a burst encoding with 10-bit blocks. Decoding one such block gives a precision of 1 in 10, while averaging ten adjacent blocks gives a precision of 1 in 100 with proper encoding. This system promised increased precision or speed over stochastics while retaining noise immunity relative to weighted-binary PCM.
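This block-and-average arithmetic is easy to check in simulation. The sketch below (Python; the helper names are mine, not the thesis's) encodes .39 over ten 10-bit blocks and exhibits both precision levels.

```python
def encode_value(tenths):
    """Encode a value in tenths as one 10-bit block: a burst of ones, then zeros."""
    return [1] * tenths + [0] * (10 - tenths)

def block_value(block):
    """Decode one block: its value is the count of its one-bits."""
    return sum(block) / len(block)

# Represent .39 over ten blocks: nine blocks of value .4 and one of value .3.
# One block gives precision 1 in 10; the ten-block average, 1 in 100.
blocks = [encode_value(4)] * 9 + [encode_value(3)]
print([block_value(b) for b in blocks[:2]])              # [0.4, 0.4]
print(round(sum(block_value(b) for b in blocks) / 10, 2))  # 0.39
```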
Work on Burst Processing by the Information Engineering Laboratory, directed by Professor Poppelbaum, has demonstrated the feasibility of Burst Processing for a variety of applications (2). Arithmetic elements for burst-encoded data have proven to be simple and inexpensive, as are the encoders and decoders.

The proposed decoder for Burst Processing was the "block sum register", a shift register with each stage controlling a unit current source. The summation of the individual currents yields a quantized analog output proportional to the number of ones in the register. Making the shift register one block long provides an output proportional to the size of the burst when a single burst is contained in the register, and an interpolation between successive burst encodings as one burst is shifted out and another shifted into the register.

SERIAL DATA FLOW: 1111000000 1111000000 ... 1111000000 1110000000
Block 1 value .4;  Block 2 value .4;  ...  Block 9 value .4;  Block 10 value .3
Average of blocks 1-10 is .39

Figure 1. BURST ENCODING FOR .39 OVER 10 BLOCKS

Figure 2. BLOCK SUM REGISTERS OF LENGTH 5
[Not reproducible here: a shift register whose stages drive unit current sources for an analog output, and an up/down counter version for parallel binary output.]

1.2 Definitions

This paper will discuss a broader class of encodings which contains the proposed burst encoding. It will deal with encodings that can be decoded by a block sum register. Block sum register decoding insures the noise tolerance of Burst Processing whether or not the one-bits occur in bursts. Other encodings promise simple arithmetic elements and precision equal to that of burst encoding as originally proposed.

For the purposes of this paper, a block of data and a block sum register are defined as follows: In a serial binary data stream, any n consecutive bits constitute a block of length n whose value is the number of one-bits it contains. Each bit belongs to n consecutive overlapping blocks of length n.
A block sum register (BSR) acts as a sliding window of width n on the binary sequence which is its input. The BSR output is proportional to the value of the block of data appearing in the window at any instant, with precision 1/n (n+1 possible values). Figure 2 shows BSR's for block length n=5 with analog and parallel binary outputs.

The first part of this paper will discuss encodings that can be decoded by a BSR, their properties, some applications, and the encoders themselves. Later sections will then consider the error tolerance of systems based on BSR decoding and methods for achieving error correction prior to BSR decoding.

2 VERNIER ENCODING

2.1 Ramp Encoder

The encoder first proposed for analog (DC)-to-burst conversion was the vernier encoder. It is a refinement of a more basic encoder, called here the ramp encoder. Shown in Figure 3, the ramp encoder produces a burst encoding very simply. It compares the input signal to a sawtooth waveform (actually a stairstep) which rises from zero to its maximum during the period of one block. The output is one whenever the input is greater than the ramp, producing a burst of ones at the beginning of each block. The encoded signal resembles pulse-width modulation with the width quantized, except that input samples are not taken periodically; in fact, the sampling time is determined by the time at which the input matched the ramp.

The following calculation shows what conditions on the input are needed to insure that the ramp-encoded samples can be interpreted as periodic samples without large error. Since the input must be nearly constant for this to be the case, assume that the slope is constant during one ramp. Assume also that the expected value of the input is .5 and the input is confined to the interval [0,1]. Figure 4 illustrates the ramp and the input and gives their equations.
Since the expected sampling time is the middle of the ramp, the encoding error is taken to be the difference between the time T when the input crosses the ramp and the input value at time t = .5 (which is also equal to the average value of the input during the ramp).

Figure 3. RAMP ENCODER
[Not reproducible here: a comparator driving a decoding BSR, with the analog input, the BSR output waveform, and the encodings of three blocks A, B, and C.]

Figure 4. RAMP SAMPLING TIME ERROR
RAMP: y = t;  INPUT: y = a + bt;  crossing time T = a/(1-b);
error = a/(1-b) - (a + b/2)

The encoding error is given by

    error = (b/(1-b)) (a + b/2 - 1/2).    (1)

Under the constraint that the input remain in [0,1], (1) is maximized for a given |b| by taking b ≥ 0 and a + b = 1. This gives a maximum error of |b/2|. For this error to be smaller than the maximum quantization error on T, |b/2| < 1/(2n) is required, where n is the length of the encoded block. Note that this condition is equivalent to the requirement that the input vary less than the size of one quantization level during the period of one encoding ramp.

Suppose a sinusoid of frequency F and extrema 0 and 1 is to be encoded. The slope encountered here is not the largest that could be found in an input band-limited to F, but is taken to be the largest slope likely to occur. The maximum slope of this input is πF. Recalling that the ramp frequency is 1 as in Figure 4, meeting the slope condition (1) requires that πF/2 < 1/(2n). This means that πF < 1/n, or that the ramp frequency must be at least nπ times the frequency of the sinusoid. This dictates a bit rate of at least πn^2 times the input frequency if one is to treat the samples as being periodic. An equivalent result will hold for the vernier encoder.

2.2 Vernier Encoder

Shown in Figure 5, the vernier encoder produces a block of length n which contains bursts of ones beginning every m bits.
Each sub-block of m bits is encoded by the m-bit ramp encoder except that the vernier register adds a small increment (1/n) to each successive m-bit ramp.

Figure 5. VERNIER ENCODER
[Block diagram not reproducible here.]

With a decoding BSR of length n this encoding results in an output precision of 1/n for constant inputs, just as if a ramp encoder had been used. It is easily verified that |b/2| < 1/(2n) is required for this encoder to keep the sampling-time error below the expected quantization error, just as for the ramp encoder above. The frequency limitation of either encoder can thus be written as

    f_b ≥ π n^2 F,    (2)

where f_b is the bit rate (clock rate) of the encoding, n is the length of the decoding BSR used to obtain full precision, and F is the highest frequency present at the input. It is important to note that (2) is based on the assumption that ramp-encoded samples are to be treated as periodic samples of the input. Processing such samples without this assumption would probably lead to a wider frequency range, but is not considered here.

To encode with precision 1/p using BSR's of length m in the encoder, a maximum of log_m(p) BSR's is required if higher-order vernier registers are added as needed to achieve this precision. The encoder with m=1 uses the fewest BSR current sources, the same number as a conventional binary A/D converter. This economy of BSR stages is the advantage of the vernier encoder over other encoders for BSR's. Its disadvantage, the frequency limitation implied by (2), can be removed by adding a sample-and-hold preceding the encoder. In that case, sampling at the Nyquist rate (f_b = 2nF) is sufficient to represent the input signal.
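How ramp encoding and BSR decoding fit together can be sketched in a few lines (Python; my own simulation with ideal components, not code from the thesis):

```python
def ramp_encode(samples, n):
    """One block per sample: output 1 while the sample exceeds a stairstep
    ramp rising 0, 1/n, 2/n, ... over the block (a burst at the block start)."""
    bits = []
    for x in samples:
        bits += [1 if x > k / n else 0 for k in range(n)]
    return bits

def bsr_decode(bits, n):
    """Block sum register: sliding window of width n; output = ones count / n."""
    return [sum(bits[i - n:i]) / n for i in range(n, len(bits) + 1)]

n = 10
bits = ramp_encode([0.35, 0.7], n)
out = bsr_decode(bits, n)
# Once a full block is in the register, the output matches that sample to 1/n;
# between blocks the BSR interpolates as one burst shifts out and the next in.
print(out[0], out[n])   # 0.4 0.7
```

Here 0.35 quantizes up to .4 because the comparator emits a one for every ramp step strictly below the input.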
3 PARALLEL ENCODING AND FILTERING

3.1 Parallel Encoder

The parallel analog-to-burst converter shown in Figure 6 performs conventional periodic sampling and quantization of the input with one sample every n clock periods. Adjacent but non-overlapping blocks contain consecutive sample values. BSR decoding provides smooth interpolation between sample values but the output is guaranteed to match the input (within 1/n precision) only once every n clock periods. Producing exactly the same encoding as a ramp encoder with a sample-and-hold, the parallel encoder can operate faster but requires n comparators.

3.2 Filtering

Parallel encoding was introduced to investigate digital filtering of a burst sequence. Application of the basic theories of digital filtering (understanding of which is assumed here and can be obtained from (3) or (4)) to parallel burst encoding is straightforward and leads to economical realizations. A finite impulse response (FIR) filter whose input is parallel-burst-encoded and whose output may be decoded by a BSR will be described. Recursive filters are not discussed because the precision of the data representation required for such filters favors the use of weighted-binary encoding over burst.

The outputs of a FIR filter of order m are given by

    y(k) = Σ_{i=0}^{m} c_i x(k-i),    (3)

where x(k), k=1,2,3,... are consecutive input samples, y(k) are output samples, and c_i, i=0,1,...,m are consecutive values of the filter's impulse response.

Figure 6. PARALLEL ENCODER FOR n = 6
[Not reproducible here: n reference comparators feeding a parallel-in serial-out shift register.]

For parallel burst encoding an input sequence w and output sequence z can be defined by

    x(k) = Σ_{h=1}^{n} w(kn + h);   y(k) = Σ_{j=1}^{n} z(kn + j);    (4)

so that w(kn + h), h=1,2,...,n are the n bits comprising an input sample and z(kn + j), j=1,2,...,n comprise one output, but are in general real numbers since the c_i are real.
Note that if (2) is satisfied, ramp encoding is essentially the same as parallel encoding and the following results will apply to ramp encoding. (3) can now be rewritten as

    Σ_{j=1}^{n} z(kn + j) = Σ_{i=0}^{m} c_i Σ_{h=1}^{n} w((k-i)n + h)
                          = Σ_{h=1}^{n} Σ_{i=0}^{m} c_i w((k-i)n + h),    (5)

so that one possibility for defining z is seen to be

    z'(kn + j) = Σ_{i=0}^{m} c_i w((k-i)n + j).    (6)

The general FIR filter and an implementation based on (6) for burst encoding are shown in Figure 7. Note that the outputs are real and that a block of them must be summed to produce an output sample once every n clock periods.

Figure 7. GENERAL FIR FILTER (a) AND BURST VERSION (b)
[Part (a) shows the analog filter built from unit time delays; part (b) the burst version built from n-bit shift registers.]

If the c_i are restricted to small non-negative integers, it is possible to construct an economical filter which produces a BSR-decodable bit sequence as its output. The important implication of requiring the c_i to be non-negative is that the zero-frequency component cannot be removed with such a filter. Appendix A discusses the removal of this restriction and the resulting filters. Keeping the c_i small reduces the cost of filter hardware at the expense of limiting the performance to that obtainable with small integer coefficients. After a digression to explain the burst serial adder, a logic implementation of (6) (with restricted c_i) for burst encoding will be presented.

3.3 PA Trees

Implementations of burst filters require weighted summation of burst inputs. Although this could be done by a pseudo-BSR with unequal weights on the bits, a logic implementation using only one device is possible. This device has been called a "perverted adder" (PA) because it is a full adder with its outputs re-arranged to perform scaled burst addition. Figure 8 shows the construction of a PA from a full adder and how a serial adder for BSR-decodable bit streams is built from a PA and a flip-flop.
One block of output from the serial adder contains c ones, where

    c = (1/2) (a + b + (carry-in bit) - (carry-out bit)),    (7)

a and b are the numbers of ones in the corresponding input blocks, and the carry bits are those stored in the serial adder. Neglecting carries, the PA serial adder output is the scaled sum of its inputs, c = (1/2)(a + b), where it is understood that all signals are BSR decoded.

Figure 8. PERVERTED ADDER (a) AND SERIAL BURST ADDER (b)
[Part (a) shows a full adder with its sum and carry outputs re-arranged; part (b) a PA with a carry flip-flop forming a serial adder whose output decodes to (1/2)(A + B).]

Figure 9 shows trees of PA's used to perform weighted summation. Both trees have the same output function if delays and carries are neglected. Using only simple serial adders, tree (a) has two drawbacks. One is propagation delay through the adders, limiting the clock rate of the filter. The other is storage of more than the necessary number of carries. It is always possible to sum the inputs and stored carries, output a one if this sum exceeds some threshold (12 in this case), and store carries whose weights sum to less than the weight of one output bit. Trees like tree (a), however, may store carries in each level of the tree amounting to half of the weight of an output bit. In large trees these stored carries make the filter response slightly more low-pass than it was designed for. This is because the output cannot vary rapidly if carries are drifting through the tree more slowly than the input data. By storing only one carry per level, so that the sum of the weights of all carries is less than one output bit, this effect is minimized. Tree (b) shows that the cost of the extra logic (here just more adders used to perform majority and exclusive-or operations) required to condense the carries may be offset by reducing the number of adders required in the tree. A third input of each adder (except one per level) becomes available for use in the tree when stored carries are eliminated.
Tree (b) also shows all flip-flops needed to reduce propagation delay to that of two full adders.

The idea of rounding the result and forgetting the carries occurs at this point. This is not done because, when BSR decoding is used, a block of output will be in error by only the carry into the first bit minus the carry out of the last bit if the carries are saved. This error amounts to less than the value of one output bit; this accuracy is possible because carries saved from bits in the middle of the block are added to other bits in the same block, producing no error. If no carry were saved, the maximum error that could occur in a block sum would be the value of half a BLOCK of ones. For example, adding a sequence of ones to a sequence of zeros would give all ones or all zeros, depending on the rounding algorithm.

Figure 9. PA TREES: (a) SIMPLE TREE; (b) MODIFIED TREE
[Both trees perform the same weighted summation of burst inputs. Tree (a) uses the serial adders of Figure 8(b); tree (b) condenses the stored carries to one per level using perverted adders and full adders.]

3.4 PA Filter

Using the PA tree, a burst filter can be constructed and its performance analyzed with standard digital filter theory as above. Figure 10 shows the general PA filter for parallel encoding with block length n. A perturbation in filter response due to stored carries has already been discussed. Appendix A develops a modified PA tree for implementation of negative coefficients, which resembles and can be used in place of that shown in Figure 10.

In the design of such a filter, one can easily determine real values for the coefficients c_i given the order of the filter and the desired response, but then they must be "rounded" to obtain coefficients realizable in a tree. Although the same problem is encountered using weighted-binary representation, use of burst encoding implies a desire to minimize hardware cost, accentuating the problem.
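To make the role of the coefficients concrete, here is equation (6) worked at the block-sum level in Python. The coefficient set (1, 2, 1) is a hypothetical example, not a filter designed in the thesis, and the division by the coefficient sum stands in for the PA tree's output scaling.

```python
def fir_block_sums(block_sums, coeffs):
    """Eq. (6) at the block level: each output block sum is the weighted sum
    of the current and previous input block sums.  Dividing by sum(coeffs)
    (the scaling a PA tree performs) keeps the result BSR-decodable."""
    scale = sum(coeffs)
    out = []
    for k in range(len(block_sums)):
        acc = sum(c * block_sums[k - i] for i, c in enumerate(coeffs) if k - i >= 0)
        out.append(acc / scale)
    return out

# A step in the input block sums (blocks of length 8) is smoothed by the
# small non-negative integer coefficients a PA tree realizes cheaply.
print(fir_block_sums([0, 0, 8, 8, 8, 0, 0], (1, 2, 1)))
```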
Actually, interacting with the problem of finding the coefficients are the questions of filter order and the size of the PA tree. Integer programming might be employed to find the best solution, or heuristic methods used to find a sub-optimal solution for this complicated trade-off between cost and performance. While this is the difficult step in the design of PA filters, it should be recalled that once coefficients are chosen, the characteristics of the filter are found easily.

Figure 10. GENERAL PA FILTER
[Block diagram not reproducible here: n-bit shift-register taps feeding a PA tree.]

If the output of the filter of Figure 10 is decoded by a BSR, equations (3) through (6) guarantee only that once every n clock periods (when a full input sample has just entered the filter and the corresponding output block is located in the BSR) will the BSR output agree with the desired filtered version of the input. Between such samples, because the filter output is not in the burst format, smooth interpolation is not guaranteed. This situation may be remedied by using the same filter with a different encoding, the delta block encoding.

4 DELTA BLOCK ENCODING

4.1 Delta Block Encoder

The delta block encoder (DBE) is an improvement over both the vernier and parallel encoders in that the output of the decoding BSR follows a rapidly-varying input more closely. The DBE shown in Figure 11 uses the same components as the ramp encoder, but connected differently. In operation the encoder BSR contains a copy of the bits that will appear in the decoding BSR during the next clock period, assuming a zero output from the encoder. If the block sum is lower than the current input by more than half a bit, the output is a one; otherwise it is a zero. For slowly-varying inputs, this keeps the output equal to the input within the precision of the BSR during ALL clock periods.
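The DBE decision rule is short enough to simulate directly (Python; my own sketch of the rule just described, with an ideal comparator and a zeroed start-up state):

```python
from collections import deque

def dbe_encode(samples, n):
    """Delta block encoder sketch: keep the n-1 bits that will share the
    decoding BSR with the next output bit; emit a one when the predicted
    block sum trails the input by more than half a bit (1/2n)."""
    window = deque([0] * (n - 1), maxlen=n - 1)
    bits = []
    for x in samples:                      # one sample per clock period
        bit = 1 if x - sum(window) / n > 0.5 / n else 0
        bits.append(bit)
        window.append(bit)
    return bits

def bsr_decode(bits, n):
    return [sum(bits[max(0, i - n):i]) / n for i in range(1, len(bits) + 1)]

n = 8
out = bsr_decode(dbe_encode([0.5] * 24, n), n)
# After start-up, every full window holds the input's value to within 1/n.
print(out[-1])   # 0.5
```

For this constant input the steady-state bit stream alternates runs of four ones and four zeros, so every window of 8 bits sums to 4.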
The parallel and vernier encoders guarantee this accuracy only once every n clock periods, relying on interpolation by the BSR between these.

4.2 Optimality of the DBE

The problem of signal representation for BSR decoding is to encode a band-limited signal as precisely as possible with a given bit rate. Because a long BSR cannot reproduce high frequencies, the bandwidth requirement limits the precision attainable. In Appendix B the cut-off frequency of a BSR is defined to be f_b/2n, where f_b is the bit rate and n is the block length. This means that

    F ≤ f_b/2n    (8)

is required if an input signal with frequency components up to F is to be reproduced by the BSR.

Figure 11. DELTA BLOCK ENCODER
[For an n-bit decoding BSR the encoder uses an (n-1)-bit BSR; the current sources supply I = 1 per one-bit, with a half-unit offset I_e = 1/2.]

Figure 12. FILTERING QUANTIZATION NOISE
[The figure compares the input signal, the raw error, and the low-pass-filtered error.]

To obtain maximum precision, n should be as large as possible, so equality should hold in (8). Note that the BSR length has been chosen for a given application to maximize the precision of representation subject to the constraint that the highest signal frequencies can be reproduced in the sense of Appendix B. The optimal encoder for a BSR of fixed length will now be discussed.

Let the measure of error in reproduction be a non-decreasing function g of the magnitude of the difference between input and output, where the input samples x_i and corresponding output samples y_i are indexed by i. Given the input-output pairs for i = a-1, a-2, a-3, ..., an encoder should minimize

    E{ Σ_{i=a}^{a+b} g(|x_i - y_i|) }    (9)

in the limit as b becomes infinitely large. Or, if the maximum error is to be minimized, the encoder should minimize

    E{ max_{i=a,...,a+b} g(|x_i - y_i|) }    (10)

with b again large. For example, if the output noise power, integrated over a long time, is to be minimized, then the optimum encoder minimizes (9) with b large and g(x) = x^2.
The DBE is not optimum in the senses described above, but does minimize (9) and (10) with b = 0 for any function g. Proof of this follows directly from the encoding algorithm, which produces an output bit which minimizes |x_a - y_a|.

The difference between the DBE and an optimal encoder is that the latter may increase |x_a - y_a| in order to decrease the expectation of future errors such that (9) or (10) is minimized with b > 0. This means that the optimal encoder bases its output decision on estimates of present and future values of its inputs rather than using only the difference between the encoder BSR and the current input. The particular techniques used would depend on the function g and prior knowledge of properties of the input.

Because the design of an optimal encoder depends on the specific application for which it is required, and because complicated hardware is needed to estimate future inputs and use these estimates to make the output decision, such encoders have not been studied. The DBE, as simple as the ramp encoder and nearly optimal in performance, is considered to be the best practical alternative. Appendix C provides an example of the design of a specific BSR-DBE system. The amount of performance sacrificed by using the DBE in each such particular application remains an interesting question.

4.3 Filters for the DBE

The output of a DBE consists of blocks with ones distributed throughout. Although this looks unlike burst encoding, the value is still a block sum. This is sufficient to insure that the same filters (like Figure 10) that work for parallel encoding can be used with DBE encoding. Recall that if the parallel encoder sampling rate were the Nyquist frequency of the input, then correct filter outputs occur at that frequency also.
If a DBE is used with a BSR whose cut-off frequency is the highest signal frequency, then adjacent non-overlapping blocks occur at the input's Nyquist frequency and the filter output has the same property as for parallel encoding. With the DBE, however, all the blocks between and overlapping those just considered are precise samples of the input during the corresponding clock periods. This accuracy property is preserved by a filter described by (6) for the intermediate blocks. This means that the filter of Figure 10 can be used with the DBE to produce an output which, during each clock period, differs from the desired output only because of finite BSR precision and carries stored in the filter. This is an important improvement over parallel and ramp encoding, characteristic of the DBE whether or not filtering is employed.

Appendix C shows an encoder for a band of frequencies centered away from 0 Hz. Such signals can be encoded with precision equal to that attainable if the center were 0 Hz, providing that the decoding is done with a modification of the BSR. The filter of Figure 10 is adapted to this encoding by increasing the space between taps to n periods of the center frequency instead of n bits. This transformation produces a bandpass filter from a lowpass filter.

4.4 Filtering to Increase Precision

It has been shown that the DBE causes a BSR output to follow the input signal each clock period. For an input that is band-limited and highly oversampled (typical of DBE operation), this is more information than is theoretically required at the BSR output to reconstruct the input.

Consider adding a low-pass filter to the BSR output. Its cut-off frequency is determined so that the input is not attenuated, but the stairstep BSR waveform is filtered to produce a smooth curve. This filter output is a better approximation to the input than a simple DBE-BSR pair provides and can be re-sampled to produce a BSR encoding with more precision than the original DBE output.
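One convenient low-pass filter for this purpose is a uniform-weight moving average. A small numerical sketch (Python; my own illustration, assuming an 8-bit BSR run at its cut-off) shows both effects: the precision gained by averaging block sums, and the frequency response that limits it.

```python
import math

def rect_response(f_over_fb, taps=8):
    """Magnitude response of a uniform-weight (rectangular impulse response)
    FIR filter of `taps` samples: |sin(taps*pi*f/fb) / (taps*pi*f/fb)|."""
    x = taps * math.pi * f_over_fb
    return abs(math.sin(x) / x) if x else 1.0

# Averaging 8 successive block sums of an 8-bit BSR (each an integer 0..8)
# puts the re-sampled value on a 1/64 grid instead of the single-block 1/8.
print(sum([4, 4, 4, 5, 4, 4, 4, 4]) / 64)   # 0.515625

# The cost: at f = fb/16, the highest input frequency for an 8-bit BSR at
# cut-off, the length-8 average passes only a fraction 2/pi of the signal.
print(rect_response(1 / 16))
```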
If the input is nearly constant, of course, more precision but no more accuracy can be obtained. This is because the quantization noise is then highly correlated with the input and predominantly of low frequencies that cannot be separated from the signal by the filter. If the input varies rapidly, the quantization noise becomes independent of the input and higher in frequency. In this case, accuracy as well as precision can be improved by the filter. Figure 12 illustrates the effect of filtering on quantization noise.

A similar situation exists with the standard delta modulator, which differs from the DBE in that its output is integrated over its entire past history, as if it were a DBE with a BSR of infinite length. The delta modulator output, when integrated, varies above and below the input, producing quantization noise at frequencies near half the sampling frequency. The sampling rate is typically 100 times the Nyquist frequency of the input, so that these rapid fluctuations, called granular noise, are easily distinguished from the input signal. Goodman and Greenstein (6) discuss filtration of the delta modulator output, removing granular noise, so that it may be re-encoded into ordinary binary PCM of high precision.
Because it is most economical to implement and nearly optimal for the delta modulator, a filter with a rectangular impulse response will be used in the following example. Assume that the encoding is done by a DBE for a BSR of length 8 operating at l6 times the highest input frequency as required by BSR cut-off. The length of the low-pass filter impulse response is chosen to be 8 to obtain maximum increase in precision due to averaging. Since the frequency response of this filter is sin(8"rrf /f, )/(8irf/f, ) , the highest input frequency (f, /l6) is attenuated by a factor of 2/tt. Figure 13 shows such a filter, in which 8 block sums are averaged to produce an analog output. Notice that a signal bit may contribute to a single output several times if it is contained in several BSR's. The frequency response of this filter, the weighting pattern on encoded blocks, and the weighting pattern on successive bits of the encoding are shown in Figure lU. The bit weighting pattern is derived by inspection of the overlap of the BSR's in Figure 13. 27 o o Si H « W Eh H [in on H <1> bO •H 28 Cb) -• • • Weight ^ « •••••• 8 • • 9 Successive Blocks -• • • Successive Bits Weight -• • • Figure lU. FREQUENCY RESPONSE (a); BLOCK WEIGHTS ( b ) ; BIT WEIGHTS (c) 29 Closer to the desired logic implementation is the circuit of Figure 15, which has exactly the same characteristics as shown in the previous figure. This can be verified "by noting that the bit weighting pattern is the same. The summation of 8 bits can be done by a PA tree and the summation of 8 digits by a 6U-bit BSR operating at 8 times the original clock rate, assuming the digits are encoded as bursts. Actually this 8-fold increase in precision is not warranted for two reasons. First, output accuracy is not increased enough to require that precision. 
Table 1 shows the relative noise power above the Nyquist frequency (that which can be removed by filtering) produced by DBE encodings for 8-, 16-, and 32-bit BSR's, as well as for the DBE and low-pass filter of Figure 15. The system of Figure 15 could compete with a 32-bit encoding except that the 8-bit DBE it uses produces more quantization noise below the Nyquist frequency when the input varies slowly. The second reason is that presumably the speed of encoding was limited by delays in the comparator and BSR in the encoder. Using a similar BSR at the decoder, it probably could not operate 8 times as fast. Attributing half the encoding time to BSR settling, it is reasonable to expect only a doubling of the BSR clock rate to be possible. Table 1 shows the high-frequency noise resulting from quantization of the filter output for BSR's of three lengths. A BSR of length 16, operating at twice the encoder clock rate, will be used in this example for the above reasons.

Figure 15. [Block diagram of the logic implementation; caption garbled in the source.]

TABLE 1. Relative High-Frequency Noise Power
(average of results for various encoded sinusoids)

    BSR Length   DBE Only   Filtered DBE   Comments
    8            13.20
    16           2.69       0.99           BSR length chosen
    32           0.52
    64           3.08       0.57           full filter precision (as in Figure 15)

Each summation of 8 bits must therefore be truncated to 2 bits (the range 0 to 8 scaled to 0 to 2). The truncated fraction should be saved and added to the next sum, just as carries in the PA filter described earlier were saved. This insures that the difference between a 16-bit output block and the desired 64-bit output block is less than the value of one bit of the smaller block. A PA tree can be used as the filter if it operates at the output clock frequency. On the first of two output clock periods four bits are summed with previous carries to produce an output bit and new carries.
During the next clock period the remaining four bits are summed with the carry from the first four to give the second output bit and the carries. Figure 16 shows a PA tree implementation of the low-pass filter operating in this manner. Bits 1 to 4 are being summed in Figure 16, bit 8 has just been loaded, and bit 5 is being loaded into the shift register. During the next output clock period, bits 5 to 8 are summed and the input is shifted so that bit 9 is loaded. Then after that, the next two sums are bits 2 to 5 and 6 to 9, the next block of 8.

Figure 16. PA LOW-PASS FILTER FOR PRECISION DOUBLING
[A toggle flip-flop and delay stages (numbers identify consecutive data bits) feed a PA tree; the filter output drives a 16-bit BSR with analog output.]

The important characteristics of this system are:

1. Output precision is doubled and accuracy nearly doubled for suitable inputs (note that high-frequency noise can be deliberately added to a DC signal to take advantage of the averaging process).

2. A higher clock frequency at the encoder is not required.

3. Utilization of BSR speed is maximized, because extra time is allowed for comparator delay at the encoder while maximum BSR-limited precision is maintained at the output.

4.5 Summary of DBE Properties

The DBE has better high-frequency performance and signal representation capability than the ramp, vernier, or parallel encoder. This is because each bit carries new information into the BSR, whereas with ramp encoding the BSR interpolates between samples. With a cheap low-pass digital filter, the DBE is clearly the best choice for encoding high-frequency signals when the clock rate severely limits the precision of representation. This improvement carries with it no penalties in hardware, speed, error tolerance, or filter complexity, as far as the results presented here indicate.
The only disadvantage, as will be seen later, is that while the burst format allows error correction, DBE encoding has essentially no redundancy that can be used for error correction before decoding. Error correction after decoding, using the property that the encoded signal varies slowly, is still possible.

The DBE differs from the delta modulator in that its integration time is finite. One advantage of this is to limit the duration of the output error due to an error in one encoded bit. One disadvantage is limited dynamic range compared to the delta modulator. Both of these effects are properties of BSR decoding, not DBE encoding.

5 ENCODING FOR ERROR CORRECTION

5.1 Constraints on the Encoder

While it has been pointed out that DBE encoding is not suited for error correction before decoding, the following simple algorithm for error correction in burst encoding exists: encode the input into blocks which never contain a single one or a single zero. At the error corrector, look for a bit which differs from both adjacent bits. Assuming this bit to be in error, complement it. When the probability of error is low, the bits thus corrected are with high probability true errors, although some errors are not corrected. Further discussion of this topic is deferred, since the only concern here is encoding. This type of error correction requires that an input be encoded in such a way that both ones and zeros occur in groups of two or more. The optimal encoding strategy would probably not have the property that the beginnings of bursts of ones occur periodically, as they do in parallel and ramp encoding. Such encoders are dismissed here as too complex, leaving those encoders which produce non-overlapping blocks of uniform length containing a single burst of ones followed by zeros. This burst encoding must have the additional property that no encoded block contains exactly one one-bit or one zero-bit.
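The single-bit correction rule above can be sketched in a few lines of Python (a hypothetical illustration; the function and variable names are not from the thesis):

```python
def correct_isolated_bits(bits):
    """Complement any bit that differs from both of its neighbors,
    per the rule: such a bit is assumed to be a channel error."""
    out = list(bits)
    for i in range(1, len(bits) - 1):
        if bits[i] != bits[i - 1] and bits[i] != bits[i + 1]:
            out[i] = 1 - bits[i]  # assumed error: complement it
    return out

# a burst block 1110000000 with one isolated error introduced
received = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]
assert correct_isolated_bits(received) == [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
```

Note that two adjacent errors, or an error that merely shifts a ones/zeros boundary, go unrecognized, which is why the encoding must avoid single ones and zeros.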
5.2 Hybrid Encoder

The problem is now to find the best burst encoder for a BSR of fixed length in terms of signal representation and error correction capability. Consider the encoder of Figure 17. The output resembles that of a vernier encoder in that full precision is found in a block that contains several bursts of ones.

Figure 17. ENCODER FOR BLOCKS CONTAINING n/m SUB-BLOCKS (a parallel burst encoder for the signal (a-b), with block length m, feeds the decoding BSR of length n through a BSR of length n-m with analog voltage output)

Figure 18. MODIFICATION OF FIGURE 17 (M denotes a majority gate; B denotes a 1-bit BSR, a delay flip-flop controlling an analog source)

In this section only, let block refer to a fixed-length segment of data beginning with a burst of ones, containing exactly one such burst, and filled out with zeros. Each such block is encoded by a ramp or parallel encoder; but instead of a vernier register, a DBE-type feedback arrangement is used. This encoder is essentially a DBE for which each output is a digit (a block of bits) instead of a single bit. For a given block size (m) and decoding BSR length (n), under the constraint that each block consist of a burst of ones followed by zeros, the encoder of Figure 17 is sub-optimal in the same sense that the DBE is sub-optimal for bit outputs. It represents an improvement over the vernier encoder for representation of rapidly varying signals, just as the DBE is better than the ramp encoder. Because significant improvements in performance over that of the encoder of Figure 17 will be costly subject to the above constraints, that encoder is adopted as the most practical for signal representation in a burst format for BSR decoding. The argument for this is the same as that for the DBE when only BSR decoding is required. This encoder is a compromise between the DBE and ramp encoders. As the block length (m) approaches one bit, this encoder approaches the DBE in performance.
If the block length is the same as the BSR length (n), this encoder is just the ramp encoder. Since single ones or zeros cannot appear in the encoding, the block length cannot be allowed to approach one bit. A modification of Figure 17, shown in Figure 18, prevents single ones or zeros from occurring at the expense of some encoding error. No scheme significantly different from this in performance has been found for removing single ones and zeros. Note that if the input to the encoder of Figure 18 has the value of 1 bit per block, it cannot be encoded, so the encoder produces blocks alternating between 0 and 2 with average 1. In this way the additional quantization error introduced to allow error correction is minimized. The only remaining variable is the block length (m). Assuming the BSR length to be fixed, m can be any integer factor of this length. Short blocks can increase quantization error because single ones and zeros are more likely to be needed for correct representation. Long blocks allow better error correction because the probability of an undetectable error, one that occurs on a boundary between ones and zeros, is reduced. Short blocks, on the other hand, provide high frequency performance closer to that of the DBE than to that of the ramp encoder. This trade-off cannot be explored until the details of error correction and error performance have been presented.

6 SUMMARY OF ENCODING RESULTS

Several encoders have been described and compared in terms of their performance. The disadvantage of the ramp and vernier encoders is that, if their output blocks are to be interpreted as periodic samples of the input, then the input must vary only slowly. The DBE produces truly periodic samples at the bit rate, allowing much higher frequencies to be encoded. Alternately, for the same band of frequencies to be encoded, the DBE allows a longer BSR and higher precision.
Digital filtering using PA trees for weighted summation can be done on parallel and DBE encodings, and on ramp and vernier encodings if a stringent frequency criterion is met. Low-pass filtering can be used with the DBE to obtain a BSR encoding with more precision than the original. Error correction forces the use of the burst format, but DBE techniques are still used to modify the vernier encoder. The result is a class of encoders which allows a trade-off, based on block length, between the better error correction capability of the ramp encoder and the high frequency performance of the DBE. Of course, if the input varies slowly enough, the vernier encoder may be used with a slight modification to avoid single ones and zeros. Detailed comparisons of these BSR encodings with weighted-binary PCM and delta modulation have not been made because it is obvious that, for signal representation using the fewest bits, BSR encodings offer no advantage. It is reasonable to expect this result, because BSR encodings have built-in redundancy which increases error tolerance. The noise properties of burst encoding, expected to be one of its greatest advantages, must be considered in a careful comparison with other coding schemes. The remainder of this paper discusses the performance of BSR encodings when errors occur and compares them with other codes.

7 SINGLE-BLOCK ERROR CORRECTION

7.1 Characteristics of BSR Codes

The problem considered here is the use of the inherent redundancy of BSR encodings to provide maximum error tolerance. In other words, the class of codes that can be decoded by a BSR is to be studied as channel codes — codes used for transmission of information over a noisy channel. Two characteristics of BSR-decodable codes provide the starting point. The BSR produces a numerical output equal to a block sum. For this reason, codes for a BSR are best suited to transmission of numerical values, such as transducer outputs.
Although logical values (bit strings representing alphanumeric characters or other messages) could also be transmitted using a BSR code, there is no reason to think that this would be a useful thing to do. In fact, other codes that could be more easily implemented for transmission of logical values would in general perform better. This study of BSR codes will therefore be restricted to their use for transmission of numerical values, where the error in transmission is related in some way to the difference between the number encoded and the number decoded. The error is only indirectly related to the discrepancy between the encoded binary data and the binary data received for decoding by a BSR. Since the cost of a BSR and the speed of the channel required for transmission are both proportional to the block length associated with each encoding of a numerical value, the block length, and hence the precision of representation, must be small for economy. When high precision is required, averaging of many encoded numbers could provide the needed precision. This being the basic philosophy of Burst Processing, averaging after decoding must be considered. The second characteristic of BSR codes, then, is that they should be judged not only by the size of the error made in a single transmission, but also by how much error exists in the average of many decoded blocks.

7.2 Performance Measures

Based on these characteristics, a pair of performance measures will be selected and discussed. The proper selection is not obvious. The choice made here will be argued for but not rigorously supported. The performance goal selected is the minimization of the quantity E{(y-x)²}, where E{·} represents expectation, x is an input sample to the encoder, assumed to be uniformly distributed on the continuous interval [0,1] and independent of all previous inputs, and y is the quantized output from the decoder, which also takes on a value from [0,1].
Assuming that x represents a voltage to be reproduced as y, E{(y-x)²} is just the noise power in the output y. One reason for the selection of this measure is that in many applications output noise is the quantity it is desirable to minimize. Another is that burst codes compare more favorably to other codes on this measure than on E{|y-x|}, for example. The reason is that large errors (whose squares are very large) are highly unlikely with BSR decoding. A third consideration is that E{(y-x)²} reflects both the quantization noise and the noise introduced by channel errors, allowing one to see the interaction of these two sources of error. The assumption that each x is independent is consistent with single-block error correction, where the decoder output depends only on a single received block of data. The second goal is to minimize the noise present in the average of consecutive output blocks. If ΔA denotes the difference between the average of m output blocks and the average of the corresponding inputs, then

E{ΔA²} = (1/m) E{(y-x)²} + ((m-1)/m) (E{y-x})²     (11)

subject to the following assumptions:

1. Channel error probability is constant from one block to the next and errors in successive blocks are independent (stationary and memoryless channel).

2. All of the m encoded blocks are the same, or at least they all result in the same E{(y-x)²} and E{y-x}.

3. Quantization error is zero. Inputs x are assumed to be already quantized and the quantization error is calculated separately.

(11) is derived using

E{z²} = E{(z - E{z})²} + (E{z})².     (12)

The sum of m blocks has expected error mE{y-x} and, letting z = y-x in (12), variance m[E{(y-x)²} - (E{y-x})²]. Dividing this sum of blocks by m to obtain the average reduces the mean error by a factor of m and the variance by a factor of m². Now using (12) again, with z = ΔA and using the known mean and variance of the average of m blocks, (11) is obtained.
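Equation (11) can be checked numerically by exact enumeration over a small toy error distribution (a sketch; the per-block error values and probabilities below are invented for illustration):

```python
from itertools import product

def avg_error_power(m, errs):
    """Exact E{(dA)^2} for the average of m independent per-block
    errors, where errs is a list of (value, probability) pairs."""
    total = 0.0
    for combo in product(errs, repeat=m):
        prob = 1.0
        s = 0.0
        for value, p in combo:
            prob *= p
            s += value
        total += prob * (s / m) ** 2
    return total

errs = [(0.1, 0.3), (0.0, 0.6), (-0.1, 0.1)]   # toy channel errors
mu = sum(v * p for v, p in errs)               # E{y-x}
s2 = sum(v * v * p for v, p in errs)           # E{(y-x)^2}
m = 6
rhs = s2 / m + (m - 1) / m * mu ** 2           # equation (11)
assert abs(avg_error_power(m, errs) - rhs) < 1e-12
```

The agreement is exact (up to rounding) because (11) is an identity for independent, identically distributed per-block errors.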
The first term in (11) is proportional to the first performance measure. Since the second term is proportional to (E{y-x})², minimization of this quantity is taken as the second performance goal. As m becomes large, this second term dominates the output noise; so (E{y-x})², which can be thought of as the systematic error or the tendency of errors to occur in a preferred direction, becomes a good measure of noise in the averaged output. Error correction schemes for burst encoding will now be discussed relative to these performance goals. No encoding methods for a BSR other than burst encoding have been found to provide good error correction performance. Burst encoding appears to be the most useful member of the class of BSR-decodable codes for these performance measures.

7.3 Majority Decoding

Many techniques for reducing the two performance measures were considered, but none was simpler than the majority decoding scheme for burst encodings shown in Figure 19. The encoder for this decoder was discussed in Section 5, so the performance will now be evaluated. This decoder does not require synchronization with the encoder, an unusual degree of freedom for error-correcting block codes. The price one pays for this is an increase in quantization noise due to the impossibility of decoding certain transmitted values. As discussed in Section 5, these values correspond to a single one-bit in the block or a single zero-bit, both of which are treated as errors by the decoder.

Figure 19. 3-BIT MAJORITY DECODER (fed from the channel, producing the decoded output)

Figure 20. SYNCHRONIZED MAJORITY DECODER, A MODIFICATION OF FIGURE 19 (the timing signal is low when the last bit of a block is in the middle flip-flop)

The quantization noise introduced by an ordinary burst encoder encoding numbers in the interval [0,1] using n+1 levels is 1/(12(n+1)²). This is merely the expected value of the square of the quantization error assuming the inputs are distributed uniformly in [0,1] and an n-bit block is encoded.
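The 1/(12(n+1)²) figure corresponds to a midpoint quantizer with n+1 equal bins on [0,1] (this quantizer model is an interpretation, not stated explicitly in the thesis); a quick numerical check:

```python
def quantize(x, n):
    """Map x in [0,1] to the midpoint of one of n+1 equal bins."""
    k = min(int(x * (n + 1)), n)      # bin index 0..n
    return (k + 0.5) / (n + 1)

def quantization_noise(n, steps=160_000):
    """Midpoint-rule estimate of E{(Q(x)-x)^2} for x uniform on [0,1]."""
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) / steps
        total += (quantize(x, n) - x) ** 2
    return total / steps

n = 15
assert abs(quantization_noise(n) - 1 / (12 * (n + 1) ** 2)) < 1e-9
```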
With the majority decoder, the value 1/n must be sent alternately as 0 and 2/n and the outputs averaged. With this approach it takes twice as many blocks to specify a number near 1/n to a given precision as it does for other values. Transmission of twice as many bits normally results in a factor of 4 reduction in the quantization noise, because twice as many levels can be used. Since numbers near 1/n (and near 1 - 1/n) need twice as many transmissions to achieve a given precision, the quantization noise for these values is calculated as 4 times that of the other values. Assuming the input to be uniformly distributed in [0,1] gives the average quantization noise for majority decoding

((n-1)/(n+1)) · 1/(12(n+1)²) + (2/(n+1)) · 4/(12(n+1)²) = (n+7)/(12(n+1)³).     (13)

Other interpretations of the quantization noise for this case are possible; however, the value (13) is used for all calculations in this paper. A significant increase in quantization noise occurs only if numbers near 1/n or 1 - 1/n are encoded. Next, the output noise power due to low channel error probabilities will be computed for burst encoding with and without majority error correction. Since it is uncorrelated with the quantization noise, the two just add to produce the total output noise E{(y-x)²}. The output noise may be written as a Taylor series in the channel error probability p. The constant term is the quantization error already described. For direct BSR decoding, the first order term is np/(n+1)². This can be seen from the following argument. If p is small, the probability of two errors in a block is negligible and the probability of one error in a block becomes just np. The square of the error produced by changing one bit in a block is 1/(n+1)², hence the first order term np/(n+1)². The first order term for the majority decoder is also easily calculated.
The effect of single errors on each transmitted value is calculated separately and the results averaged, assuming a uniform input distribution in [0,1]. In some cases a single channel error will be corrected, and in others a single error might cause a correct bit to be changed. The average first order term for n > 7 is given by

(1/(n+1)) [3·0 + 3·8p/(n+1)² + 2·4p/(n+1)² + (n-7)·4p/(n+1)²] = 4p/(n+1)².     (14)

This shows that the noise for low p is proportional to n⁻² rather than n⁻¹ as for direct BSR decoding. For n > 7, the majority decoding scheme has a lower first order term but a larger constant term than direct BSR decoding. Graphs of output noise with and without majority decoding and for codes not decoded by a BSR will be presented later. They verify the calculations of first order terms and show that at approximately p = .01, majority error correction makes up for the increased quantization noise, and the output noise is then lower for majority decoding than for direct BSR decoding. This is one of the goals of error correction. The majority decoder's most important feature is that it reduces the systematic error, (E{y-x})², significantly compared with direct BSR decoding. The reason is that for each possible encoded value, single errors in the blocks cause no systematic error. Errors which increase the value of the block are as likely as those which decrease it. Because of this property there is no first order term (and, of course, no constant term in either case) in the systematic error for majority decoding. This means that as many received blocks are averaged, assuming the quantization error is uncorrelated and cancels out, the noise decreases faster and toward a lower limit for majority decoding. Graphs to be discussed later show that if 10 blocks are encoded by the encoder of Section 5 so that quantization error in the average of the blocks is as low as possible, then the average of 10 blocks at the receiver has much lower noise if the majority decoder is used.
It is important to note that this improvement relies on cancellation of errors and correction of single errors. The assumptions that the channel is stationary and memoryless are therefore crucial.

7.4 Modifications to the Majority Decoder

If one is willing to sacrifice the simplicity of the majority decoder and synchronize the receiver with the encoder, an improvement in performance can be achieved. Because the synchronized decoder knows where a block starts, errors that occur at the boundary between the zeros at the end of one block and the ones at the beginning of the next can be corrected easily. There are several possibilities for implementing such an improvement; but again, the simplest uses a majority function. This error-correcting decoder, except for the synchronization circuitry, is shown in Figure 20. A timing signal is used to modify the strategy of the simple majority decoder. When the last bit of a block is in the middle position, a zero is substituted for the first bit of the next block. Similarly, the next output is the majority of the first two bits of the new block and a one. This decoder reduces by roughly one-half the number of locations in a block where a single error will go uncorrected. A comparison of graphical results will later show that synchronization lowers the output noise slightly. The systematic error, however, actually increases for the synchronous scheme due to imperfect cancellation of positive and negative errors when the values 0, 1, 1/n, and 1 - 1/n are sent. If these were not sent, the systematic error would remain the same with synchronization. With synchronization the values 1/n and 1 - 1/n can again be transmitted and properly decoded, so the quantization noise is the same as for direct BSR decoding. Why use synchronization if the improvement in performance is very small? One answer is that it may already be available.
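The boundary strategy just described can be sketched per block (a hypothetical Python rendering; the hardware of Figure 20 is a sliding 3-bit window gated by the timing signal):

```python
def majority(a, b, c):
    return 1 if a + b + c >= 2 else 0

def sync_majority_decode(block):
    """Synchronized 3-bit majority decoding of one n-bit block.
    At the leading boundary a one is assumed to precede the block;
    at the trailing boundary a zero is assumed to follow it."""
    n = len(block)
    out = [majority(1, block[0], block[1])]              # first bit vs. assumed 1
    for i in range(1, n - 1):
        out.append(majority(block[i - 1], block[i], block[i + 1]))
    out.append(majority(block[n - 2], block[n - 1], 0))  # last bit vs. assumed 0
    return sum(out)  # the BSR output is the block sum

# errors on either block boundary of 111000 are now corrected
assert sync_majority_decode([0, 1, 1, 0, 0, 0]) == 3   # leading-boundary error
assert sync_majority_decode([1, 1, 1, 0, 0, 1]) == 3   # trailing-boundary error
```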
Consider the adaptation of burst encoding to a channel that is not stationary and memoryless, a channel on which bursts of noise several bits wide occur. Unless designed to correct bursts of errors, most codes perform poorly on bursty channels. Burst coding performance suffers because, although most single errors can be recognized, a burst of errors in a block of zeros cannot be distinguished from a burst of ones in the encoding. As is done for other codes, the bursty channel may be adapted to burst coding by interleaving the burst code. This consists of storing r consecutive n-bit blocks in a buffer and then transmitting the first bits of each of the r blocks. Then the second bits of each block are transmitted, in order, and so on until all the last bits are transmitted. If a burst of errors no longer than r bits occurs, at most one bit in each original block of length n is affected. The receiver incorporates a buffer to sort the incoming bits back into the correct blocks before a majority or other error correction algorithm is applied. This approach requires synchronization of the decoder with the encoder in order to reassemble the blocks properly. This synchronization can then be used to enhance the error correction capability. The overall effect of interleaving in the manner described, when all bursts of errors are of length r or less, is to make the channel appear stationary and memoryless to the basic coding procedure. A second application of synchronized error correction is to reduce quantization error when a majority of more than three bits is used to correct more errors. For example, if the majority function of 5 adjacent bits is used in the BSR decoder, many occurrences of two errors, even in the same direction, can be corrected. Without synchronization, however, the values 2/n and 1 - 2/n, in addition to 1/n and 1 - 1/n, could not be decoded properly. If the block length n is small, then the additional quantization error introduced may be objectionable.
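The interleaving procedure described earlier can be sketched as follows (hypothetical helper names):

```python
def interleave(blocks):
    """Transmit the first bits of all r blocks, then the second bits, ..."""
    r, n = len(blocks), len(blocks[0])
    return [blocks[i][j] for j in range(n) for i in range(r)]

def deinterleave(stream, r, n):
    """Receiver buffer: sort incoming bits back into the original blocks."""
    blocks = [[0] * n for _ in range(r)]
    for idx, bit in enumerate(stream):
        blocks[idx % r][idx // r] = bit
    return blocks

blocks = [[1, 1, 0, 0], [1, 0, 0, 0], [1, 1, 1, 0]]   # r = 3, n = 4
stream = interleave(blocks)
assert deinterleave(stream, 3, 4) == blocks
# any r consecutive stream positions touch r distinct blocks,
# so a burst of at most r channel errors hits each block at most once
```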
Using synchronization, all values can be transmitted and the quantization error remains the same as for direct BSR decoding. Like 3-bit majority decoding, 5-bit majority decoding reduces systematic error, but it does produce non-zero systematic error even for one error per block (on certain transmitted values). For encoded values in the middle of the range (near 1/2), single or double errors introduce no systematic error because in this case error probabilities in both directions cancel. Of course, with any majority decoding scheme, as with direct BSR decoding, values in the middle of the range also produce lower output noise and systematic error than those near the extremes. With synchronized majority decoders, however, the tendency to have lower systematic error in the middle of the range is more pronounced than with the others. Figures 21 and 22 can now serve as a summary of the properties of synchronized majority decoders. All curves in both figures are for synchronized decoders, the curves labeled 3-MAJ and 5-MAJ representing 3-bit and 5-bit majority functions respectively. In all cases the block length n is 15. In both figures the CENTER ONLY curve represents the output noise for inputs distributed uniformly on [1/4, 3/4] and 5-bit majority decoding. The abscissa is the channel error probability and the ordinate is the logarithm of the output noise power, scaled so that 0 dB corresponds to an error of 1, the maximum that can occur for inputs and outputs restricted to [0,1]. The ordinate is given by the expression

10 log₁₀ E{(y-x)²}.     (15)

For Figure 21 a single block is considered. In (15) y represents a single block sum and x a single input sample. For Figure 22, the output noise is plotted for an average of 10 blocks, except in the CENTER ONLY curve, which is for an average of 50 blocks to emphasize the low (E{y-x})².
In other words, in (15) y and x represent averages of 10 output blocks and input samples respectively. Expression (11) is used to calculate (15) in this case.

Figure 21. SYNCHRONIZED MAJORITY DECODERS WITH n = 15, ONE BLOCK OF OUTPUT

Figure 22. SYNCHRONIZED MAJORITY DECODERS WITH n = 15, AVERAGE OF 10 BLOCKS

These figures will be referred to later for comparison of synchronized majority decoding with other coding methods.

8 COMPARISON WITH OTHER CODES

Two classes of codes were chosen for comparison with burst codes using the performance measures and assumptions about the channel previously discussed. Both are block codes that are easily decoded. Unlike the burst code, these codes require synchronization of the decoder.

8.1 Repetition Codes

The first class of codes is the repetition codes. The following scheme is used to transmit a numerical value while minimizing output noise. Each bit of the binary representation of the number to be encoded is repeated zero times or an odd number of times, and these bits are transmitted over the channel. For each information bit that was transmitted at least once, the decoder produces an output which is the majority function of the received bits corresponding to the repetitions of that information bit. For example, suppose 11 = 1011₂ is to be transmitted. One coding scheme would repeat the high-order bit 5 times, the second bit 3 times, the third bit once, and the fourth bit not at all. The transmitted block would then be 111110001. Encoding 3 information bits into a code word of 9 bits, this is called a (3,9) code. No two errors in the first five positions would affect the result, nor any single error in positions 6, 7, or 8. The fact that the last bit was not transmitted at all merely increases the quantization error.
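The (3,9) example above can be reproduced with a short sketch (hypothetical function names):

```python
def rep_encode(bits, reps):
    """Repeat bit i reps[i] times (zero or an odd number of times)."""
    return [b for bit, r in zip(bits, reps) for b in [bit] * r]

def rep_decode(word, reps):
    """Majority-decode each repeated group; untransmitted bits decode to 0."""
    out, pos = [], 0
    for r in reps:
        if r == 0:
            out.append(0)
        else:
            out.append(1 if sum(word[pos:pos + r]) > r // 2 else 0)
            pos += r
    return out

word = rep_encode([1, 0, 1, 1], [5, 3, 1, 0])    # 11 = 1011 in binary
assert word == [1, 1, 1, 1, 1, 0, 0, 0, 1]       # the block 111110001
# two errors in the first five positions do not affect the result
assert rep_decode([0, 0, 1, 1, 1, 0, 0, 0, 1], [5, 3, 1, 0]) == [1, 0, 1, 0]
```

The dropped low-order bit shows up only as quantization error, as the text notes.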
At the same time it reduces the effect of channel noise by allowing the higher-order bits to be repeated more often without increasing the block length. For any given channel error probability and encoded block length, the best repetition code for minimization of output noise can be found. That code is then compared to burst and other codes at that probability. Two important characteristics of these repetition codes are that the redundancy can be comparable to that of burst encoding (a few bits of a binary number are encoded into a large block) and the decoding algorithm is simple.

8.2 Reed-Muller Codes

The second class of codes used for comparison is the Reed-Muller codes of order one. The description of these codes will not be given here but may be found in any text on coding theory (7). The important features of the first-order RM codes of length 2^m - 1, m ≥ 3, are as follows:

1. m+1 information bits are encoded into a codeword of length 2^m - 1. This redundancy is comparable to that of burst coding.

2. All combinations of 2^(m-2) - 1 errors can be corrected. In other words, one-fourth or more of the received bits must be in error to cause a decoding error.

3. Decoding can be done by a simple majority (threshold) decoder.

It is unreasonable to compare burst codes to multiple-error-correcting BCH codes and other codes that require expensive decoders. Burst coding was proposed for use when an inexpensive implementation was required. The first-order RM codes were chosen because their redundancy is similar to that of burst codes and their decoders are relatively simple. It will now be shown that for low error probabilities, the first-order RM codes result in lower noise than the burst code of the same block length. It is essential to specify the same block length for both codes because block length is restricted by the speed of the channel and the rate at which samples of the input must be sent.
With either coding scheme, the maximum block length would be used to minimize output noise. The number of quantization levels for the RM code of length 2^m - 1 is 2^(m+1). The number of quantization levels for a burst encoding of the same length is 2^m. Thus for a channel error probability of zero, the RM codes produce one-fourth the output noise of burst codes (recall that noise power is proportional to the square of the quantization step size). Let n = 2^m, the length of a code word plus 1. Since n/4 errors are required to cause a decoding error in the RM codes, the probability of such an occurrence, if p (the probability of a channel error) is small, is approximately

C(2^m - 1, n/4) p^(n/4) < (np)^(n/4) / (n/4)!

The probability of a decoding error for burst encoding is just np, the probability of one channel error in a block if p is small. Both the quantization noise and the noise contributed by channel errors are significantly smaller for RM codes than for burst codes in the range of channel error probabilities where np < 1.

8.3 Theoretical Bound

One might also ask for a lower bound on the output noise possible with any code. This was computed with the following assumptions, consistent with the analysis of single block transmission properties of burst encodings:

1. Input samples must be encoded at a rate of one per n bits transmitted on the channel. Samples need not be encoded into blocks of length n.

2. Successive samples are independent and distributed uniformly in the interval [0,1].

3. The channel is stationary and memoryless and its error probability is known.
Using this fact, each input sample can be quantized to nC bits and then encoded by the code. Entirely due to quantization error because there are no decoding errors, the output noise for this approach is E{(x-y) 2 } = ^r-r . (17) 12-2^ The function (lT) with C given by (l6) appears in Figures 23 and 2k as the curve labeled BOUND. The significance of these curves is that the performance they indicate can be approached arbitrarily closely by some codes with the given assumptions. This shows how much improvement is possible over the specific codes whose performance is compared 59 on the same graphs. It should he noted that improvements in the direction of the bound curve will become increasingly expensive as the bound is approached. Q.k Performance Comparison * Figures 23-26 provide a graphical comparison of the performance of repetition codes, burst encoding with and without majority error correction, a RM code, and the bound described earlier. Figures 21 and 22 given earlier allow synchronized majority decoding to be compared also. Figures 23 and 2k compare direct BSR decoding of burst encoding, various repetition codes, and the bound for block lengths 17 and 7. Computer calcualtions using the assumptions discussed earlier provided exact values for data points on the curves. For the graph of burst performance, several independent calculations were used to verify the correctness and a hardware simulation also corroborated the results. The graph of burst performance is identical to a graph of the output noise of a DBE-BSR pair, assuming input and output samples (x and y) correspond to adjacent but non-overlapping blocks. This graph illustrates the performance of the BSR rather than that of a particular encoder and encoding method. Error tolerance, determined by the BSR, is the same for both burst encoding and DBE encoding with direct BSR decoding. 
The conclusion drawn from Figures 23 and 24 is that the repetition codes out-perform burst codes on the given performance measure for all values of channel error probability.

Figure 23. COMPARISON OF CODES WITH n = 7, ONE BLOCK OF OUTPUT

Figure 24. COMPARISON OF CODES WITH n = 17, ONE BLOCK OF OUTPUT

Figure 25. COMPARISON OF CODES WITH n = 15, ONE BLOCK OF OUTPUT

Figure 26. COMPARISON OF CODES WITH n = 15, AVERAGE OF 10 BLOCKS

This indicates the necessity of using error correction if burst codes are to be useful as channel codes. It is important to consider the reason for the good performance of BSR decoding at very high error probabilities. As p approaches 1/2 the distribution of received block sums approaches the binomial distribution centered at 1/2. In other words, regardless of what was sent, the decoded value is with high probability near 1/2. For the repetition codes and most others, as p approaches 1/2 the distribution of received values approaches a uniform distribution on [0,1]. This is the same as replacing the decoder with a wild guesser. Burst coding performs better because, when the penalty is proportional to the square of the error, guessing 1/2 is a much better strategy than a random choice from the interval [0,1]. Guessing 1/2 produces exactly half the output noise, in fact. Figures 25 and 26 compare burst encoding with (labeled CORR.
BURST) and without (labeled BURST) 3-bit majority decoding, and the (5,15) RM code of the class described earlier. The triple-error-correcting RM code outperforms the others at low error probabilities, as predicted earlier. This result holds both for a single block of output, Figure 25, and for the average of 10 blocks, Figure 26. Although the burst results are computed exactly, the RM code performance was calculated under the assumption that if 5 or more errors were made, the decoded value was uniformly distributed on [0,1] and independent of what was sent (a wild guess). This assumption is believed to be slightly pessimistic.

In the range of error probabilities .05 to .1, majority decoding improves burst coding in both figures. With reference to the RM code, this is the range in which burst coding looks most promising. For p above .05, on the other hand, majority decoding produces relatively low noise and low systematic error without requiring a complicated decoder or even synchronization of the decoder with the encoder. This result, however, depends critically on the assumptions made about the channel: channels with error probabilities greater than .05 which do not exhibit bursts of errors are rarely encountered.

9 OTHER ERROR CORRECTION METHODS

If more than one block is considered at a time, error correction in burst coding can be improved. Methods for accomplishing this are grouped in two categories. First are those which operate only on block sums, disregarding properties of the encoding of those blocks such as the actual bit pattern and the Hamming distance between the various codewords. An approach of this type was suggested earlier (2). Under the assumption that a slowly-varying signal was encoded, consecutive block sums were compared. If too great a difference was observed, one of the blocks was recognized as being in error and replaced by some previously received "good" block.
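The block-sum screening idea just described can be sketched in a few lines. This is my own sketch, not the thesis's circuit; the threshold value and the policy of reusing the last accepted sum are assumptions.

```python
def screen(block_sums, threshold=3):
    """Replace any block sum that jumps too far from the last accepted one."""
    good = block_sums[0]
    out = []
    for s in block_sums:
        if abs(s - good) > threshold:
            out.append(good)      # suspect block: substitute the previous good sum
        else:
            good = s              # accept this block as the new reference
            out.append(s)
    return out

print(screen([5, 5, 6, 1, 6, 7]))   # the isolated sum of 1 is rejected
```

A real implementation would also need a rule for handling an erroneous first block, since it seeds the reference value.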
Methods like this one can be used with burst coding or with any other block coding scheme and will not be discussed here. The second category contains algorithms that use the distance structure of the code or the bit pattern. Two very different approaches of this type will be considered.

9.1 2-Way Majority Decoder

An extension to the majority decoding explained earlier provides the simple but improved decoder shown in Figure 27. Based on the assumption that a slowly-varying signal has been encoded in burst form, the decoder uses a majority function of 3 bits taken from corresponding locations of 3 consecutive blocks as the output of its first stage. Assuming the 3 blocks were the same before noise was added, this function corrects all errors so long as no two errors occur in the same position in 2 of the 3 blocks.

Figure 27. 2-WAY MAJORITY DECODER

Even if this stage is followed by the 3-bit majority decoder of Section 8, as in Figure 27, the errors corrected by the first stage may otherwise have gone uncorrected. One reason for this is that an error may have occurred on a boundary between ones and zeros. Another is that it may have been one of several adjacent errors. The error correction provided by this decoder reduces both output noise and systematic error relative to the simple 3-bit majority decoder. This was confirmed by a hardware simulation, although no quantitative results are available. At low error probabilities, virtually all errors are corrected by the 2-way majority decoder, even if they occur in bursts shorter than the block length. As with the decoder of Section 8, the error correction capability of this decoder may be increased by increasing either of the majority functions to 5 bits or more. This, however, restricts the signals that can be reproduced at the output.
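The first stage of the decoder can be sketched directly from the description above. This is my own illustration of the bitwise majority over corresponding positions of three consecutive blocks; the second stage (the within-block decoder of Section 8) is not reproduced here.

```python
def majority3(a, b, c):
    """Majority function of three bits."""
    return (a & b) | (a & c) | (b & c)

def first_stage(blocks):
    """Bitwise majority over corresponding positions of three consecutive blocks."""
    b0, b1, b2 = blocks
    return [majority3(x, y, z) for x, y, z in zip(b0, b1, b2)]

clean = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]    # a burst of four ones, block length 10
noisy = [clean[:], clean[:], clean[:]]    # three identical blocks (slowly-varying signal)
noisy[0][4] = 1                           # one error in block 0
noisy[2][0] = 0                           # one error in block 2, different position
print(first_stage(noisy) == clean)        # errors in distinct positions are corrected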
Tolerant of bursts of errors and still not requiring synchronization, the 2-way majority decoder makes burst encoding a powerful yet inexpensive error-correcting code.

9.2 Recoding

Another approach to improved error correction for burst coding is to recode the burst data without introducing any additional redundancy. That is, using the same number of bits, encode the burst data in a form more suitable for transmission over a noisy channel. This basic idea, before its application to Burst Processing, is due to Hellman (9). The goal is to design a decoder that normally converts the received data back into the burst format but, when an error occurs, produces an output that looks nothing like a burst encoding. The error is then easily recognized and corrected. An encoder is then designed to provide the necessary data for transmission to the decoder.

The most obvious decoder is a linear shift register circuit, such as that shown in Figure 28. The received data passes through a shift register, and the output is an exclusive-or (modulo 2) sum of selected bits of the shift register. This decoder is characterized by its impulse response, which is the output sequence due to an input of a single one-bit followed by zeros. Assume that such a decoder produces an output in burst form. If a single channel error occurs, the output will be the sum of the correct burst output and the impulse response of the decoder, triggered by the error. The impulse response is chosen so that when added to a burst encoding, the result is unlike any burst encoding and unlike any burst encoding plus a shifted version of the impulse response. The first property allows recognition of an error and the second its location. Once the location is known, the error is corrected by adding the impulse response due to it to the decoded data, canceling the original impulse response.
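The superposition property can be demonstrated with a small model of such a decoder. This is my own sketch, not the Figure 28 circuit: the tap positions below are chosen so that the impulse response is the 1010010001 sequence discussed in the next paragraph, and the matching encoder is not reproduced.

```python
from functools import reduce
from operator import xor
import random

TAPS = (0, 2, 5, 9)   # output = XOR of the input delayed by these amounts

def decode(bits):
    """Feedforward shift-register decoder: out[t] = XOR of bits[t-k] for k in TAPS."""
    return [reduce(xor, ((bits[t - k] if t >= k else 0) for k in TAPS))
            for t in range(len(bits))]

impulse = decode([1] + [0] * 11)
print(impulse)          # impulse response 1010010001, padded with zeros

random.seed(0)
sent = [random.randint(0, 1) for _ in range(30)]
recv = sent[:]
recv[7] ^= 1            # a single channel error at position 7
diff = [a ^ b for a, b in zip(decode(sent), decode(recv))]
print(diff[7:17])       # a shifted copy of the impulse response, as claimed
```

Because the decoder is linear over GF(2), the decoded error pattern is exactly the impulse response shifted to the error position, independent of the data.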
One approach to this is to use the property of burst encoding that "edges", defined here as boundaries between ones and zeros in the data stream, occur with a known frequency. The impulse response is chosen to produce many more edges, which are detected by a circuit that counts the number of edges in some window. When an error is recognized, its location is determined by shifting the decoded data through a register while adding to it a non-shifting copy of the impulse response. When this copy of the impulse response cancels the one due to the error, a circuit will detect a normal number of edges and the data is then correct for output.

Figure 28. ENCODER AND DECODER FOR RECODING ERROR CORRECTION

By introducing a sufficiently large number of edges in the impulse response, multiple errors in close proximity can be corrected. By making the impulse response long enough and the circuitry expensive enough, it should be possible to correct any number of errors occurring randomly or in bursts.

Figure 28 shows an encoder and decoder with impulse response 1010010001. Used with a burst encoding with block length 10, it corrects single errors. In a 12-bit segment of burst data (a burst encoding viewed through a window of width 12) at most 3 edges will be found. If the above impulse response sequence is added, however, at least 4 edges will be found. Furthermore, two copies of this impulse response added to a burst sequence will produce at least 4 edges unless the two coincide. These properties enable the box labeled "Error Detection Circuitry" to locate an error provided that another error does not occur within 12 bits of it. The data is then corrected as it leaves the shift register. Because so many variations are possible, this decoder was not studied in detail.
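The edge-counting criterion can be checked on a concrete stream. This sketch is mine, following the example in the text: a clean burst encoding with block length 10 stays at or below 3 edges per 12-bit window, while a stream carrying the impulse response 1010010001 exceeds that.

```python
def edges(bits):
    """Number of 0/1 boundaries in a bit sequence."""
    return sum(a != b for a, b in zip(bits, bits[1:]))

def max_edges_in_window(bits, w=12):
    """Largest edge count over all windows of width w."""
    return max(edges(bits[i:i + w]) for i in range(len(bits) - w + 1))

impulse = [1, 0, 1, 0, 0, 1, 0, 0, 0, 1]
stream = [1,1,1,0,0,0,0,0,0,0] + [1,1,1,1,1,0,0,0,0,0]   # two burst blocks, n = 10

# A single channel error adds a (shifted) copy of the impulse response.
corrupted = stream[:]
for k, b in enumerate(impulse):
    corrupted[3 + k] ^= b

print("clean    :", max_edges_in_window(stream))     # at most 3 edges per window
print("corrupted:", max_edges_in_window(corrupted))  # at least 4: error detected
```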
The example was given merely to illustrate that all widely-spaced single errors can be corrected, even without the assumption that the input varies slowly. It appears that widely-spaced bursts of errors could also be corrected by a similar technique.

10 CONCLUSIONS

Many of the results presented in this paper are constructive, suggesting improvements and techniques for Burst Processing. Although the comparative results are not extensive enough to be conclusive, they provide insight into the capabilities of BSR encodings and suggest directions for future work.

The first major result is that the DBE is preferable to other encoders as long as error correction is not attempted. Filtering and reconstruction of the encoded signal are enhanced at no increase in hardware cost. As shown in the performance graphs, the error tolerance is the same as for any other encoding using direct BSR decoding.

Another result is that the range of usefulness of burst encoding as a channel code is narrow. When possibly imprecise output samples are needed at a high rate, with more precision available from averaging, Burst Processing provides attractive error-correcting codes. In the range where most practical channels fall, error probabilities below 10^-2, burst encoding is inferior to conventional error-correcting channel codes. The distance structure (Hamming distance between codewords) is simply not suited to error correction at low error probabilities. The recoding technique of the previous section illustrates this by improving error correction through re-encoding with no additional redundancy.

Among the schemes presented in this paper, the most powerful error-correcting scheme for a BSR code is the 2-way majority decoder for burst encodings. Being very inexpensive to implement, its application to specific channel coding problems should be investigated.
APPENDIX A

Negative Coefficients

Building burst filters with negative coefficients requires subtraction and a representation of negative numbers. One of the ways this can be achieved, using PA's, will be presented here with its application to filtering.

If numbers in the interval [-1,1] are to be represented by a BSR encoding, the following mapping may be used:

a = (1/2)(A + 1),    (A1)

where A is the original number and a is its representation. This maps the given interval onto the interval [0,1] for the BSR. The only change this representation requires in the encoder and decoder is that each BSR output be level-shifted so that it can go negative.

The important aspects of this representation are, of course, the logic implementations of addition and subtraction. For addition a PA can still be used. Its output equation is c = (1/2)(a + b), neglecting carries. Substituting values in the original interval,

(1/2)(C + 1) = (1/4)(A + 1) + (1/4)(B + 1),    (A2)

which can be rewritten as C = (1/2)(A + B), as desired.

Negation can be performed by taking the logical bit-wise complement of a block. The function thus performed is b = 1 - a. Substituting numbers in the original interval,

(1/2)(B + 1) = 1 - (1/2)(A + 1),    (A3)

and this can be written as B = -A, the negation. Subtraction is thus implemented by a PA with an inverter at one input.

By incorporating inverters at the inputs of some PA's in the tree, the PA filter can be generalized to accommodate negative coefficients. This allows construction of high-pass filters and band-pass filters which remove the DC component, as well as low-pass filters with a more rectangular response.

An additional use of subtraction is to reduce the size of PA trees. For example, instead of realizing a weight of 15 by using four entries to a tree, with subtraction only two entries to the tree are needed: a positive one at the 16 level and a negative one at the 1 level.
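The arithmetic of equations (A1)-(A3) can be checked numerically on block averages. This is my own sketch, working with the averages a, b, c rather than actual bit blocks, and neglecting carries as the appendix does.

```python
def to_rep(A):    return (A + 1) / 2    # (A1): maps [-1,1] onto [0,1]
def from_rep(a):  return 2 * a - 1      # inverse mapping back to [-1,1]

def pa(a, b):       return (a + b) / 2  # parallel adder on block averages, carries neglected
def complement(a):  return 1 - a        # bitwise complement of a block: b = 1 - a

A, B = 0.6, -0.2
neg  = from_rep(complement(to_rep(A)))                  # (A3): realizes B = -A
diff = from_rep(pa(to_rep(A), complement(to_rep(B))))   # PA plus inverter: (A - B)/2
print(f"-A = {neg:.1f},  (A - B)/2 = {diff:.1f}")
```

With A = 0.6 and B = -0.2 this yields -0.6 and 0.4, matching the level-shifted interpretation of complementation and the PA.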
Though this does not reduce the depth of the tree, it decreases the number of adders required.

APPENDIX B

Cut-off Frequency of a BSR

The cut-off frequency of a BSR decoder is defined here to be f/2n, where f is the bit rate of the BSR and n is its length. The fundamental difference between this definition and the definition of cut-off frequency for a first-order low-pass RC filter is that for the BSR, attenuation of signals near the cut-off frequency is not the important factor in defining the cut-off.

Figure B1 shows the rise time of a BSR as a function of the ratio of its length to that of a BSR with a 1 Hz cut-off. The rise time is 13 percent greater for a BSR than for a first-order RC filter with the same cut-off frequency. Figure B2 shows the amplitude of the fundamental frequency component of an input square wave found at the output of a BSR. The square wave alternates between all ones and all zeros. Also pictured is the same quantity for the RC filter which, because the square wave is a superposition of sinusoids, is just a graph of its frequency response. The half-power point in the BSR response occurs 12 percent below the cut-off frequency.

Note that the maximum amplitude of a specified frequency component is obtained at the output of a BSR by inputting a square wave of that frequency. This means that at the cut-off frequency, output amplitude is limited to .636 times the largest amplitude possible at lower frequencies. This is the crucial frequency limitation imposed by BSR decoding. An RC filter imposes no such limitation.

Figure B1. BSR rise time (n* is the length of a BSR with the same bit rate and a 1 Hz cut-off frequency)

Input: unit square wave at frequency f. RC response = [1 + (f/f_c)^2]^(-1/2). BSR response = sin(pi f/2f_c)/(pi f/2f_c). Figure B2.
RESPONSE OF BSR AND RC FILTER WITH CUT-OFF f_c

The frequency response curve for the DBE-BSR pair is flat even above the BSR cut-off because of the property that the output follows the input as closely as possible, and for small high-frequency signals this is as close as BSR precision permits. Since no attenuation occurs, it is desirable to use a long BSR to obtain high precision. For this reason the cut-off frequency is defined as high as possible, higher than that of an RC filter with the same half-power frequency for square wave inputs. The upper frequency limitation is the constraint that high signal frequencies must be reproduced with sufficient amplitude.

The parallel encoder-BSR pair provides the information necessary to reconstruct the input if the input is band-limited to half the block sampling frequency (f/2n), and this is reasonably defined as the cut-off frequency for this encoder-decoder pair. When using a simple BSR decoder, attenuation of frequencies near the cut-off frequency does occur with parallel encoding. This is because the BSR does not provide the ideal low-pass filtering required to reconstruct the original signal.

The conclusion is that although different encodings result in different frequency characteristics, BSR decoding introduces an upper limit on the frequency response of the system by limiting the amplitude at which high frequencies can be reproduced. The cut-off frequency in that sense is defined to be f/2n, where the maximum amplitude is reduced by a factor of 2/pi.

APPENDIX C

Bandpass Encoder

As an example, consider the design of an economical encoder and BSR decoder to operate with a 1 MHz bit rate and reproduce frequencies in the range 0-50 kHz, while minimizing the noise power at the output. A BSR with cut-off frequency 50 kHz is assumed to give adequate high-frequency performance. The block length chosen is 10, so as to minimize quantization noise subject to the frequency constraint.
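The design numbers above can be checked against the Appendix B definitions. This is my own sketch; the response form sin(pi f/2f_c)/(pi f/2f_c) is read off Figure B2 and should be treated as an assumption.

```python
import math

f_bit = 1_000_000        # 1 MHz bit rate, as in the example above
n = 10                   # block length
f_c = f_bit / (2 * n)    # cut-off frequency f/2n
print(f"cut-off = {f_c / 1000:.0f} kHz")

def bsr_response(f):
    """Fundamental-component response to a square wave at f (form per Figure B2)."""
    x = math.pi * f / (2 * f_c)
    return math.sin(x) / x

print(f"amplitude at cut-off = {bsr_response(f_c):.3f}")   # 2/pi, about .636

# Bisect for the half-power (1/sqrt 2) frequency.
lo, hi = 0.5 * f_c, f_c
for _ in range(50):
    mid = (lo + hi) / 2
    if bsr_response(mid) > 2 ** -0.5:
        lo = mid
    else:
        hi = mid
print(f"half-power point about {100 * (1 - lo / f_c):.1f}% below cut-off")
```

The bisection lands a little over 11 percent below f_c, consistent with the roughly 12 percent figure quoted in Appendix B.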
Minimization of output noise subject to the economy constraint suggests the DBE, for reasons discussed in the section on optimality of the DBE. Choice of the DBE completes this design.

As a modification of the above example, consider encoding a signal band-limited to 250 kHz ± 50 kHz using the same 1 MHz clock. A BSR with a cut-off higher than 300 kHz could be only one bit long. It is possible, however, to use the narrow bandwidth to achieve the same precision as above by using a decoder which resembles a BSR and is of the same order of complexity.

Figure C1 shows a simplified version of this system. Each encoder samples the input at a rate of 250 kHz, so that the input to each encoder appears to be band-limited to ± 50 kHz. Note that a 250 kHz input to the system provides a constant sequence of samples to each encoder. Each encoder-decoder pair functions exactly as in the first example. The outputs from the decoding BSRs are then taken in cyclic succession to reproduce the original waveform. While the effective sampling rate of this system is 1 MHz, only twice the Nyquist frequency, the output precision is that of a 10-bit BSR as in the first example.

Figure C1. ENCODER USING FOUR 10-BIT DBE's

Figure C2 gives the practical system: a total of ten 4-bit shift registers, with the output an analog voltage proportional to the number of one-bits at the ends of the shift registers, and decoding done by the same modified BSR arrangement. The modified BSR sums 10 bits spaced at intervals of one period of the center frequency. The output of this circuit is identical to that of Figure C1, but the bit rate between encoder and decoder is seen clearly to be 1 MHz. As explained in Appendix B, signals at 250 ± 50 kHz are not attenuated but are limited in amplitude.

Figure C2. PRACTICAL BAND ENCODER

REFERENCES

1. Poppelbaum, W. J.
, Appendix I to "A Practicability Program in Stochastic Processing," proposal for the Office of Naval Research, Department of Computer Science, University of Illinois, March 1974.

2. Faiman, M., ed., "Hardware Research (Navy)," annual progress report, Department of Computer Science, University of Illinois, April 1975.

3. Oppenheim, A. V., and R. W. Schafer, Digital Signal Processing, Prentice-Hall, Englewood Cliffs, New Jersey, 1975.

4. Gold, B., and C. M. Rader, Digital Processing of Signals, McGraw-Hill, New York, 1969.

5. Poppelbaum, W. J., "Application of Stochastic and Burst Processing to Communication and Computing Systems," proposal for the Office of Naval Research, Department of Computer Science, University of Illinois, July 1975.

6. Goodman, D. J., and L. J. Greenstein, "Quantizing Noise of DM/PCM Encoders," The Bell System Technical Journal, February 1973, pp. 183-204.

7. Berlekamp, E. R., Algebraic Coding Theory, McGraw-Hill, New York, 1968.

8. Gallager, R. G., Information Theory and Reliable Communication, John Wiley and Sons, New York, 1968.

9. Hellman, M. E., "On Using Natural Redundancy for Error Detection," IEEE Transactions on Communications, October 1974, pp. 1690-1693.
KEY WORDS

Burst Processing
Error Correcting Codes

ABSTRACT

This paper describes the performance of a class of unweighted binary number representations. One of these, called Burst Processing, has been proposed as a compromise between stochastic and weighted-binary number representation for digital systems. Encoders, digital filters, and error correcting circuitry for these inherently redundant representations are discussed. Used as channel codes, these representations out-perform conventional error-correcting codes only in unusual circumstances. Their usefulness lies in the simplicity of the hardware required to accomplish digital signal processing tasks.