UIUCDCS-R-75-723

MOLECULAR STOCHASTICS: A STUDY OF DIRECT PRODUCTION OF STOCHASTIC SEQUENCES FROM TRANSDUCERS

BY

JAMES ROLAND CUTLER

April 1975

Department of Computer Science
University of Illinois
Urbana, Illinois 61801

This work was supported in part by Contract No. N000-14-67-A-0305-0024 and was submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science at the University of Illinois.

MOLECULAR STOCHASTICS: A STUDY OF DIRECT PRODUCTION OF STOCHASTIC SEQUENCES FROM TRANSDUCERS

James Roland Cutler
Department of Computer Science
University of Illinois at Urbana-Champaign, 1975

ABSTRACT

Molecular Stochastics is an extension of stochastic computing. The main concern of Molecular Stochastics is the generation of the stochastic sequences that are used as the encoded information in stochastic computing. For these stochastic sequences to have meaning, they should depend upon some physical parameter of interest, such as temperature. The problem, then, is to construct a transducer whose output is a stochastic sequence that varies with a physical parameter.

This paper presents the necessary theory along with a discussion of a number of noise sources. These sections suggest a number of transducers, which were constructed and tested. The data were then compared with the results predicted by the theory.

ACKNOWLEDGMENT

The author wishes to express his gratitude to his advisor, Professor W. J. Poppelbaum, for his ideas, assistance and friendship on this project.

He is indebted to Frank Serio, Sam McDowell, and Bill Marlatt, who did exceptional work on the fabrication and assembly of the stochastic meter. He also wishes to acknowledge Stanley Zundo for the drawings, Dennis Reed for the printing, and Evelyn Huxhold for her excellent work in typing the manuscript. Also, he thanks the members of the Information Engineering Laboratory for their enlightening discussions.

He especially thanks his wife, Joyce, for her help in proofreading and for her encouragement.

TABLE OF CONTENTS

1. INTRODUCTION 1
2. GENERAL THEORY 3
   2.1 Calculation of the Mean of the Stochastic Sequence 3
   2.2 Ergodicity of X(t) 5
   2.3 Determination and Errors of a Finite Time Average 8
3. NOISE SOURCES 12
   3.1 Thermal Noise 12
   3.2 Shot Noise 16
   3.3 Turbulent Flow 22
4. IMPLEMENTATION OF THE TRANSDUCERS 26
   4.1 Temperature Transducer 26
   4.2 Luminance Transducer 28
   4.3 Flow Transducer 30
5. DATA AND RESULTS 34
   5.1 Method of Measurement and Calculation of Errors 34
   5.2 Implementation of the Stochastic Meter 38
   5.3 Data and Results 42
6. CONCLUSIONS 51
APPENDICES
   A. Specifications of the Stochastic Meter 53
   B. Circuit Diagrams of the Transducers 62
   C. Experimental Procedures for Measurements of the Transducers 65
REFERENCES 74
VITA 76

LIST OF FIGURES

2.1 Block Diagram of a Transducer 4
2.2 Typical Input X(t) and Corresponding Output Y(t) 4
2.3 μ_Y vs. σ_X 6
2.4 Example of Y(t) and the Corresponding Y_k 9
3.1 Block Diagram of the Temperature Transducer 13
3.2 μ_Y vs. Temperature 17
3.3 Pulse Shape of h_0(t) 21
3.4 Density of g_0(x), g_1(x), g_2(x) and g_3(x) 21
3.5 Density of I_0(t) for pT = 0.5 23
3.6 Density of I_0(t) for pT = 1.0 23
3.7 Density of I_0(t) for pT = 2.0 23
4.1 Circuit Diagram of the Temperature Transducer Using Shot Noise 27
4.2 Graph of the Breakdown Region 27
4.3 Configuration of the Noise Source for the Luminance Transducer and the Equivalent Noise Circuit 29
4.4 Graph of E_ni^2 vs. R 31
4.5 Arrangement for the Flow Transducer 32
5.1 Block Diagram of the Stochastic Meter 40
5.2 Graph of the Thermal Noise Temperature Transducer 43
5.3 Graph of the Shot Noise Temperature Transducer 44
5.4 Graph of the Luminance Transducer, Part A 46
5.5 Graph of the Luminance Transducer, Part B 47
5.6 Graph of the Flow Transducer 49
A-1 Circuit Diagram of the Power Supply of the Stochastic Meter 54
A-2 Circuit Diagram of the Input Stages of the Stochastic Meter 55
A-3 Circuit Diagram of the Counter Card of the Stochastic Meter 56
A-4 Circuit Diagram of the Timer Card of the Stochastic Meter 57
A-5 Circuit Diagram of the Display Card of the Stochastic Meter 58
A-6 Frequency Response of the Stochastic Meter 59
A-7 Photograph of the Front Panel of the Stochastic Meter 61
B-1 Circuit Diagrams of the Thermal Noise Temperature Transducer and the Shot Noise Temperature Transducer 63
B-2 Circuit Diagram of the Luminance Transducer 64
C-1 Transmittance Curve for the Kodak Wratten Filter No. 47B 71

LIST OF TABLES

A-1 Accuracy of the Stochastic Meter 60
C-1 Data from the Thermal Noise Temperature Transducer 66
C-2 Data from the Shot Noise Temperature Transducer 67
C-3 Data from the Shot Noise Temperature Transducer 68
C-4 Data from the Luminance Transducer 70
C-5 Data from the Flow Transducer 72

LIST OF SYMBOLS

f_X(x) - first-order density function of the stationary random process X(t)
P(e) - probability of the event e occurring
F_X(x) - first-order distribution function of X(t)
f_X(x_1, x_2; τ) - second-order density function of X(t)
F_X(x_1, x_2; τ) - second-order distribution function of X(t)
Φ_X(ω) = \int_{-\infty}^{\infty} f_X(x) e^{j\omega x} dx - characteristic function of X(t)
Ψ_X(ω) = ln Φ_X(ω)
μ_X = E{X(t)} - expected value (mean) of X(t) [by definition, \mu_X = \int_{-\infty}^{\infty} x f_X(x) dx]
σ_X² = E{(X(t) - μ_X)²} - variance of X(t) [by definition, \sigma_X^2 = \int_{-\infty}^{\infty} (x - \mu_X)^2 f_X(x) dx, and σ_X is called the standard deviation of X(t)]
R_X(τ) = E{X(t + τ) X(t)} - autocorrelation function of X(t)
S_X(ω) = \int_{-\infty}^{\infty} R_X(\tau) e^{-j\omega\tau} d\tau - power spectrum of X(t)

1. INTRODUCTION

Molecular Stochastics as defined by Poppelbaum [1] is the study of the statistical parameters of the random fluctuations on a molecular level, and the use of these parameters (averages, variances) to assess physical parameters like temperature, pressure, velocity, etc. The methods of stochastic processing [2, 3, 4, 5] are used throughout, and we shall begin by summarizing these methods.

To encode information in stochastic processing, a number is represented as the probability of appearance of a 1 in a given time slot in a sequence of time slots. The multiplication of two numbers is very simple: the use of an AND gate is sufficient. Thus, the beauty of stochastic processing is that the arithmetic unit is very simple and an approximation to the result is available immediately; the accuracy of the result is, of course, determined by the length of the sample period.
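As a concrete illustration of this encoding and of multiplication by a single AND gate, the short sketch below generates two synchronous random pulse sequences and ANDs them slot by slot. It is illustrative only and not part of the original hardware; the sequence length and the two encoded values are arbitrary choices.

```python
import random

def srps(p, n, rng):
    """Synchronous random pulse sequence of length n encoding the value p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

rng = random.Random(1)
n = 100_000            # length of the sample period (more slots -> better accuracy)
a, b = 0.6, 0.3        # the two numbers to be multiplied, encoded as probabilities

seq_a = srps(a, n, rng)
seq_b = srps(b, n, rng)
product = [x & y for x, y in zip(seq_a, seq_b)]   # one AND gate per time slot

print(sum(product) / n)   # ~0.18 = a * b, within sampling error
```

Lengthening the sample period shrinks the spread of the estimate, which is exactly the accuracy trade-off mentioned above.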
Certain generalizations of this so-called synchronous random pulse sequence (SRPS) system are sometimes useful; for instance, one can remove the condition that the pulses occur at fixed times.

In previous research it has been shown that the results of very complicated formulas may be found quickly and easily by stochastic computing. An example of this, which has actually been implemented, is the on-line Fourier transform of a 32 x 32 input matrix. The calculations are done by 1024 separate stochastic arithmetic units: because the arithmetic units of a stochastic computer are so simple (and small), the cost (in terms of both space and money) of such a large number of arithmetic units is not prohibitive.

Up to now, the emphasis in stochastics has been placed on the computation using the stochastic sequences, and not upon the production of the sequences: the stochastic sequences were conversions of digital or analog data. In Molecular Stochastics we aim at the direct production of stochastic sequences from transducers. The basic premise in Molecular Stochastics is that the physical variables in our environment (such as temperature) have a random component, and that this random component somehow expresses the physical variables under consideration. The random component may occur naturally (i.e., it was present before the measurement) or may occur artificially (i.e., it is present in the measuring instrument). Usually this component is filtered out or averaged so that the result is either an analog measurement or a digital measurement. This study will use the random component to produce the measurement.

2. GENERAL THEORY

Calculations that follow in Section 2.1 show that if the random component depends upon the physical variable, the stochastic sequence engendered by the random component will be useful to measure this variable. In particular, the mean, variance, and autocorrelation function of the stochastic sequence will depend on the variable and can thus be used as a measure of the physical phenomenon.

Section 2.2 contains a discussion of the necessary conditions for the property of ergodicity. Ergodicity occurs when the mean of the stochastic sequence is equal to the time average of the stochastic sequence. Thus, if the property of ergodicity is present, the mean of the sequence can be measured by finding the time average.

A finite time average is used to approximate the mean of the stochastic sequence. Section 2.3 discusses the errors produced and methods which can be used to minimize them.

2.1 Calculation of the Mean of the Stochastic Sequence

First, a general model of a transducer is needed. Figure 2.1 shows the block diagram of a transducer which can produce a stochastic sequence from a random process: the random process X(t) and a DC level A are the inputs to a comparator whose output is the stochastic sequence Y(t). Figure 2.2 shows a typical input and its corresponding output.

[Figure 2.1: Block Diagram of a Transducer (inputs: random process X(t) and DC level A; output: stochastic sequence Y(t))]
[Figure 2.2: Typical Input X(t) and Corresponding Output Y(t)]

Let us find the condition necessary for the stochastic sequence Y(t) to be dependent upon the measurement. Calculating the mean and variance of Y(t) (in Figure 2.1) is straightforward given the density of X(t):

Y(t) = 1 if X(t) > A, and Y(t) = 0 otherwise.

Then

\mu_Y = P[Y(t) = 1] = P[X(t) > A],

or

\mu_Y = 1 - F_X(A).

And

\sigma_Y^2 = E[Y^2(t)] - \mu_Y^2.

But E[Y^2(t)] = P[Y(t) = 1] = \mu_Y, so

\sigma_Y^2 = \mu_Y (1 - \mu_Y).

It is obvious that 0 ≤ μ_Y ≤ 1 and that σ_Y² ≤ 1/4.
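The threshold model above is easy to check numerically. The following sketch is illustrative only; the Gaussian input and the particular level A are assumptions that anticipate the example in the next paragraph. It draws independent samples standing in for X(t), forms Y by comparison with A, and compares the empirical mean and variance with 1 - F_X(A) and μ_Y(1 - μ_Y).

```python
import math
import random

def gaussian_cdf(x, mu=0.0, sigma=1.0):
    """F_X(x) for a Gaussian X(t)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

rng = random.Random(0)
A, mu_x, sigma_x = 0.5, 0.0, 1.0      # comparator level and X(t) parameters (arbitrary)
n = 200_000

# Comparator: Y = 1 when X > A, 0 otherwise
y = [1 if rng.gauss(mu_x, sigma_x) > A else 0 for _ in range(n)]

mu_y_est = sum(y) / n
mu_y_theory = 1.0 - gaussian_cdf(A, mu_x, sigma_x)

print(mu_y_est, mu_y_theory)                                  # both ~0.309
print(mu_y_est * (1 - mu_y_est), mu_y_theory * (1 - mu_y_theory))
```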
Also, in order for the mean of Y(t) to depend upon the variable, the density of X(t) must depend upon that variable. In most cases, if either μ_X or σ_X² has a dependence upon the variable, μ_Y depends upon the variable to be measured. For example, suppose that X(t) is Gaussian distributed with mean μ_X and variance σ_X², i.e.,

f_{X(t)}(x) = \frac{1}{\sqrt{2\pi}\,\sigma_X} \exp\!\left[-\frac{(x - \mu_X)^2}{2\sigma_X^2}\right].

Then

\mu_Y = \frac{1}{2} - \mathrm{erf}\!\left(\frac{A - \mu_X}{\sigma_X}\right), \qquad \text{where } \mathrm{erf}(x) = \frac{1}{\sqrt{2\pi}} \int_0^x e^{-y^2/2}\,dy.

Note that μ_Y is dependent upon the random process X(t) unless μ_X is a constant equal to A: in that case μ_Y = 1/2 for all σ_X. Assuming that μ_X = 0, Figure 2.3 shows a graph of μ_Y with respect to σ_X.

[Figure 2.3: μ_Y vs. σ_X]

Thus it has been shown that, in general, the mean of the stochastic sequence Y(t) varies as the density of the random process X(t) varies. There are a few cases, however, when the mean does not change even if the density of X(t) varies.

2.2 Ergodicity of X(t)

An ergodic process is one in which the mean and the time average are the same. Before mathematically defining ergodicity and giving an explanation of a necessary condition for ergodicity, we have to introduce the autocorrelation function of Y(t). It is given by

R_Y(\tau) = E\{Y(t+\tau)\,Y(t)\} = P\{Y(t+\tau) = 1 \text{ and } Y(t) = 1\} = P\{X(t+\tau) > A \text{ and } X(t) > A\} = 1 - 2F_X(A) + F_X(A, A; \tau).

In most cases it is difficult to calculate a closed form of the autocorrelation function. However, since it is a probability, there are limits that may be put on it; i.e., 0 ≤ R_Y(τ) ≤ 1. In specific cases it is possible to have better limits.

In order to measure the mean of Y(t), the property of ergodicity must be present: the stochastic sequence Y(t) is ergodic if the mean is equal to the time average or, mathematically,

\mu_Y = \lim_{T\to\infty} \frac{1}{T} \int_0^T Y(t)\,dt.

A theorem allows us to judge the ergodicity of a random process. It states that \mu_Y = \lim_{T\to\infty} \frac{1}{T}\int_0^T Y(t)\,dt if and only if

\lim_{T\to\infty} \frac{1}{T} \int_0^T \left(1 - \frac{\tau}{T}\right)\left(R_Y(\tau) - \mu_Y^2\right) d\tau = 0.     (a)

(Note that equation (a) is equivalent to \lim_{T\to\infty} \frac{1}{T}\int_0^T R_Y(\tau)\,d\tau = \mu_Y^2.) Using this theorem, the stochastic sequence Y(t) is ergodic, i.e., \mu_Y = 1 - F_X(A) = \lim_{T\to\infty}\frac{1}{T}\int_0^T Y(t)\,dt, if and only if

\lim_{T\to\infty} \frac{1}{T} \int_0^T \left[1 - 2F_X(A) + F_X(A, A; \tau)\right] d\tau = \mu_Y^2 = \left[1 - F_X(A)\right]^2,

which is equivalent to \lim_{T\to\infty}\frac{1}{T}\int_0^T F_X(A, A; \tau)\,d\tau = F_X^2(A). A necessary condition for ergodicity (but not a sufficient condition) is that \lim_{\tau\to\infty} R_Y(\tau) = \mu_Y^2, or \lim_{\tau\to\infty} F_X(A, A; \tau) = F_X^2(A), which implies that X(t+τ) and X(t) are uncorrelated as τ → ∞.

2.3 Determination and Errors of a Finite Time Average

There is one more important consideration that must be presented, i.e., the determination of the time average. As explained in later sections, the method that is used is the sampling of the stochastic sequence Y(t). Suppose that Y_k is defined as follows:

Y_k = 1 if Y(kλ) = 1, and Y_k = 0 if Y(kλ) = 0, where λ is the sampling period.

Y_k is now the sampled version of Y(t). Figure 2.4 shows a typical stochastic sequence Y(t) and the corresponding Y_k. The problem now is to determine λ and the a_k's so that

\frac{1}{N\lambda} \int_0^{N\lambda} Y(t)\,dt \approx \sum_{k=1}^{N} a_k Y_k.

There are several criteria that may be used to determine the unknowns. One possibility is to minimize the probability

P\{|A(Y) - \hat{A}(Y)| > C\},

where A(Y) = \frac{1}{N\lambda}\int_0^{N\lambda} Y(t)\,dt and \hat{A}(Y) = \sum_{k=1}^{N} a_k Y_k. This is in many cases very difficult to accomplish, since the density of the error |A(Y) - \hat{A}(Y)| would have to be determined. A more reasonable solution is the minimizing of the mean-square error, e:
e = E\{|A(Y) - \hat{A}(Y)|^2\}.

A theorem states that the mean-square error is minimized if

E\{[A(Y) - \hat{A}(Y)]\,Y_k\} = 0 \quad \text{for } k = 1, 2, \dots, N     (b)

(i.e., the samples Y_k are orthogonal to the error), and the minimum error e_m is

e_m = E\{[A(Y) - \hat{A}(Y)]\,A(Y)\}.

Making the appropriate substitutions, equation (b) may be written

\frac{1}{N\lambda} \int_0^{N\lambda} R(t - n\lambda)\,dt = \sum_{k=1}^{N} a_k R[(k - n)\lambda] \quad \text{for } n = 1, 2, \dots, N.

[Figure 2.4: Example of Y(t) and the Corresponding Y_k]

For a given R(τ) and a desired sampling rate λ, solutions may be found for the a_k's, since there are N unknowns and N linear equations. The minimum mean-square error is

e_m = \frac{2}{N\lambda} \int_0^{N\lambda} \left(1 - \frac{t}{N\lambda}\right) R(t)\,dt - \frac{1}{N\lambda} \sum_{k=1}^{N} a_k \int_0^{N\lambda} R(t - k\lambda)\,dt.

In considering the value of the sampling rate, it is obvious that the smaller λ becomes, the smaller the error e_m becomes. However, there is one other observation that may be made about the sampling rate λ. If λ is selected such that R(t + λ) ≈ R(t) for any t, the integral and the summation of R(t) are approximately the same (it is assumed that R(t) is continuous). Suppose that

\left| \int_0^{N\lambda} R(t)\,dt - \lambda \sum_{k=0}^{N-1} R(k\lambda) \right| < e

is to be satisfied for some e > 0 by choosing an appropriate λ:

\lambda \sum_{k=0}^{N-1} R(k\lambda) - e < \int_0^{N\lambda} R(t)\,dt < \lambda \sum_{k=0}^{N-1} R(k\lambda) + e.

But

\int_0^{N\lambda} R(t)\,dt = \int_0^{\lambda} R(t)\,dt + \int_{\lambda}^{2\lambda} R(t)\,dt + \dots + \int_{(N-1)\lambda}^{N\lambda} R(t)\,dt = \lambda R(t_0) + \lambda R(t_1) + \dots + \lambda R(t_{N-1}),

where kλ ≤ t_k ≤ (k+1)λ, since R(t) is continuous. So the requirement becomes

\sum_{k=0}^{N-1} \left[R(k\lambda) - \frac{e}{N\lambda}\right] < \sum_{k=0}^{N-1} R(t_k) < \sum_{k=0}^{N-1} \left[R(k\lambda) + \frac{e}{N\lambda}\right].

Thus, if |R(t_k) - R(kλ)| < e/(Nλ), then

\left| \int_0^{N\lambda} R(t)\,dt - \lambda \sum_{k=0}^{N-1} R(k\lambda) \right| < e.

A stronger condition is |R(t + λ) - R(t)| < e/(Nλ) for any t.

Note that if R(λ) ≈ R(0), the probability of the stochastic sequence changing in the interval (t, t + λ) is very small. By the Tchebycheff inequality,

P\{|Y(t+\lambda) - Y(t)| \ge \eta\} \le \frac{2\,[R(0) - R(\lambda)]}{\eta^2} < \frac{2e}{N\lambda\,\eta^2}.

In summary, a concept of sampling Y(t) was introduced and this sample, Y_k, was used to approximate the time average of Y(t). An error calculation was then made using the mean-square value of the difference A(Y) - \hat{A}(Y) as the error. This discussion will be referred to in Section 5.1.

3. NOISE SOURCES

In this section we shall give a discussion of the types of noise sources which may be used to construct transducers. There are two methods by which a noise source may be obtained: 1) electronic noise, such as thermal or shot noise, that depends upon the physical parameter of interest, or 2) the measurement of a naturally random physical parameter such as turbulent flow. These noise sources provide the random process input to the comparator (as in Figure 2.1).

3.1 Thermal Noise

This noise is common and well known. It is caused by the random movement of thermally excited electrons in a conductor. It has been shown theoretically and experimentally that the voltage power spectrum, S_v, of the thermal noise of a conductor is

S_v = 2kTR,

where k = Boltzmann's constant, T = absolute temperature, and R = resistance of the conductor. Using this equation, the rms noise voltage E_n (E_n^2 = 2 S_v \Delta f, where Δf = noise bandwidth) is 4 nV for a 1000 Ω resistor and a bandwidth of 1 Hz, and is 40 µV for a 100 kΩ resistor and a bandwidth of 1 MHz. Thus, for a reasonable size resistance and bandwidth, the rms noise is quite low (< 1 mV) and amplification will be necessary.
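The two figures quoted above can be reproduced directly from E_n² = 2 S_v Δf = 4kTRΔf. The sketch below assumes a temperature of about 290 °K (room temperature), which the text does not state explicitly.

```python
import math

k_B = 1.380e-23       # Boltzmann's constant, J/K
T = 290.0             # absolute temperature in K (assumed room temperature)

def thermal_rms_voltage(R, bandwidth):
    """E_n = sqrt(2 * S_v * df) with S_v = 2kTR, i.e. E_n = sqrt(4*k*T*R*df)."""
    return math.sqrt(4.0 * k_B * T * R * bandwidth)

print(thermal_rms_voltage(1_000, 1.0))      # ~4e-9 V  (4 nV)
print(thermal_rms_voltage(100_000, 1e6))    # ~4e-5 V  (40 uV)
```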
Figure 3.1 shows the proper modification of Figure 2.1 to obtain a stochastic sequence: the resistor's thermal noise is amplified and then compared with the DC level A.

[Figure 3.1: Block Diagram of the Temperature Transducer]

Thermal noise is generally assumed to be Gaussian distributed with zero mean and the power spectrum given above. Suppose that the transfer function H(jω) of the amplifier is

H(j\omega) = \frac{G\sqrt{2\alpha}}{j\omega + \alpha}.

This implies that the amplifier is a low-pass filter with gain G and a corner frequency of α/2π. The random process X(t) at the amplifier output is still Gaussian. Its mean μ_X is

\mu_X = \int_{-\infty}^{\infty} E\{Z(t - \alpha)\}\,h(\alpha)\,d\alpha = H(0)\,\mu_Z = 0,

where Z(t) is the thermal noise at the amplifier input and h(t) is the impulse response of the amplifier. The autocorrelation R_X(τ) and power spectrum S_X(ω) are

S_X(\omega) = |H(j\omega)|^2\,S_v = \frac{2\alpha G^2}{\omega^2 + \alpha^2}\,(2kTR)

and

R_X(\tau) = 2kTR\,G^2\,e^{-\alpha|\tau|}.

Note that R_X(0) = \sigma_X^2 = 2kTR\,G^2. As shown earlier,

\mu_Y = P\{X(t) > A\} = \frac{1}{2} - \mathrm{erf}\!\left(\frac{A}{\sigma_X}\right), \qquad \sigma_X = G\sqrt{2kTR},

and

R_Y(\tau) = P\{X(t) > A \text{ and } X(t+\tau) > A\} = \int_A^{\infty}\!\!\int_A^{\infty} \frac{1}{2\pi\sigma_X^2\sqrt{1-r^2}} \exp\!\left[-\frac{x_1^2 - 2r x_1 x_2 + x_2^2}{2\sigma_X^2(1-r^2)}\right] dx_1\,dx_2, \qquad r = e^{-\alpha|\tau|},

which can be written

R_Y(\tau) = \left[\frac{1}{2} - \mathrm{erf}\!\left(\frac{A}{\sigma_X}\right)\right]^2 + \frac{1}{2\pi}\int_0^{r} \frac{1}{\sqrt{1-z^2}} \exp\!\left[-\frac{A^2}{\sigma_X^2(1+z)}\right] dz.

Therefore

C_Y(\tau) = R_Y(\tau) - \mu_Y^2 = \frac{1}{2\pi}\int_0^{r} \frac{1}{\sqrt{1-z^2}} \exp\!\left[-\frac{A^2}{\sigma_X^2(1+z)}\right] dz,

which implies that

\frac{r}{2\pi}\,e^{-A^2/\sigma_X^2} \;\le\; C_Y(\tau) \;\le\; \frac{r}{2\pi\sqrt{1-r^2}}\,e^{-A^2/[\sigma_X^2(1+r)]}.

As τ → ∞, r → 0 and both sides of the inequality approach zero. Thus Y(t) is ergodic.

It is obvious that 0 ≤ r ≤ 1 and that [1/2 - erf(A/σ_X)]² ≤ R_Y(τ) ≤ 1/2 - erf(A/σ_X). Also, R_Y(τ) is a continuous function and monotonically decreasing as |τ| increases. It may be shown that the slope of R_Y(τ) is a maximum at τ = 0. So, in order to satisfy the condition |R(t + λ) - R(t)| < e/(Nλ) from Section 2.3, it is sufficient to satisfy |R(λ) - R(0)| < e/(Nλ). From these equations, it is possible to determine the minimum sampling rate given the characteristics of the amplifier, the sampling period, and the maximum error for the time average. These results are used in Section 5.2.

This discussion may be repeated for a bandpass or a high-pass amplifier. The only change in the results will be in the correlation coefficient r(τ). This change will only affect the calculation of the sampling period, since the maximum slope of R_Y(τ) may no longer be at the origin.

The sensitivity of the mean of the stochastic sequence with respect to the temperature and the level A is an important parameter. Sensitivity is the slope of the function μ_Y with respect to temperature. Suppose that a temperature range of T_1 ≤ T ≤ T_2 is to be considered and

\Delta = \mu_Y(T_2, A) - \mu_Y(T_1, A).

Then, writing σ_X = K√T with K = G√(2kR),

\frac{d\Delta}{dA} = \frac{1}{\sqrt{2\pi}\,K}\left[\frac{1}{\sqrt{T_1}}\,e^{-A^2/(2K^2T_1)} - \frac{1}{\sqrt{T_2}}\,e^{-A^2/(2K^2T_2)}\right].

Setting the derivative equal to zero yields

A = K\sqrt{\frac{T_1 T_2\,(\ln T_2 - \ln T_1)}{T_2 - T_1}},

where T_1, T_2 are in °K. Figure 3.2 shows the variation of μ_Y with respect to temperature. The temperature range is -40 °C to 100 °C, and according to the calculation above the greatest deviation in μ_Y will occur when A ≈ 17K. One observation from this graph is that the change in μ_Y is small (< 0.05) even at the optimum setting of A.

[Figure 3.2: μ_Y vs. Temperature for the thermal noise temperature transducer]

3.2 Shot Noise

Shot noise is caused by the random emission of electrons or the random passage of electrons across potential barriers. It is common in diodes and other similar devices. It is experimentally known that shot noise has a power spectrum S_I(ω) of

S_I(\omega) = qI,

where q = electron charge and I = current flowing in the diode. Since q = 1.60 × 10⁻¹⁹ coulombs, the power spectrum and, as a consequence, the rms noise current are small values (on the order of 1 pA for I = 1 µA and Δf = 1 Hz). However, in certain devices there is an internal mechanism which multiplies this noise. A discussion of this multiplication will come in Section 4.1.
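The order of magnitude quoted here follows from the same convention used for thermal noise (I_n² = 2 S_I Δf); a quick check, using only the values mentioned in the text:

```python
import math

q = 1.60e-19          # electron charge, C

def shot_rms_current(I, bandwidth):
    """I_n = sqrt(2 * S_I * df) with S_I = q * I."""
    return math.sqrt(2.0 * q * I * bandwidth)

print(shot_rms_current(1e-6, 1.0))    # ~5.7e-13 A, i.e. on the order of 1 pA
```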
Shot noise is generally modelled as a summation of pulses IT o ~1 o or CVJ cO o c O ii < ii < _ o OD _ O 10 CO II *; col n < It < 8 < 0) I m 6 n o o {A UJ UJ cr o u> Q UJ cr 3 O < C\J a; UJ 0. UJ o C\J m 6 d m 6 CM 6 o I (LI Sh -P (U ft e Eh >H 3. CM on CD 5b •rt 18 l(t) = l h(t - t. ) i where h(x) may have any shape (in this discussion, it is assumed that h(x) is triangular), and t. are random variables that are exponentially distri- 00 buted with parameter, p. Note that J" h(t)dt = q and that p is the average _oo number of electrons transferred per unit time. The power spectrum of l(t) is identical to the power spectrum referred to in the first paragraph, assuming that the triangular pulse is very small. For l(t) = E h(t - t.), i E { I(t)} = pq oo R(t) = p 2 q 2 + p / h(t) h(x + t)dt _00 and S(w) = 2tt p 2 H(o) 5(u) + p |h( jw) | 2 The density of shot noise is difficult to calculate in many cases (i.e., h(x) does not have a finite duration). It can be shown that if h(x) is a triangular pulse with finite duration, T, the density of l(t) is k f(x) = e- pT [g (x) + ... + g k (x) ^J"+ -..] where g n (x) = <5(x) and g. (x) = density of l(t) given that there are k t.'s that occur in the interval (t - T, k). Also, since the t.'s are independent of each other, ;(x) = g 1 (x) g (x) = g(x) * g(x) [* means convolution]. g 1 (x) = g(x) * ... * g(x) [k convolutions of g(x)], There are several interesting points to be made about the above density: 1. As k increases, g (x) approaches a Gaussian density. (Central- Limit Theorem) . 2. If pT << 1, f(x) may be approximated by using the first several terms of the sum, i.e., 19 2 f(x) - e" pT {g (x) + pT g(x) + ^f- [g(x) * g(x)]} 3. If T >> 1, f(x) may be approximated by a Gaussian density. h. If T ~ 1, f(x) is best approximated by adding terms to the sum in comment 2. It is interesting to note that the characteristic function may be determined if h(x) does not have a finite period. In this case. *(«) = P 7 [ e Ja3h(t) - 1] dt It is obvious that this equation is very difficult to solve. For example —at if h(t) = e for t > 0, f(«) -*■ [J« + -2T2T + ••• + -t^T + ■•• ] As stated before, the autocorrelation function is 00 R(t) = p 2 q 2 + p / h(t) h(t + x)dt —00 which implies that C(x) = R(t) - E 2 {l(t)} 00 = p / h(t) h(t + x)dt — oo The only condition necessary for ergodicity of the mean is that i T lim 7j7 / C(x)dx = T-+oo o or T 00 lim %- f f h(t) h(t + x )dt dx = "J-+O0 -°° 00 It is obvious that since I h(t)dt < °° , then 20 / h(t) h(t + x) < co for all t _oo and shot noise is ergodic . In the case of shot noise, the parameter that will vary "with respect to some variable to be measured will be the average number of electrons transferred per unit time, p. By looking at previous equations, both the mean and variance of l(t) will vary with respect to p. Thus, the density of l(t) will shift drastically and the range of the measurement will be very small. (The sensitivity of the measurement will be very good). The range can be extended by filtering out the mean of l(t). (This can be done by a capacitor). 
Suppose that I_0(t) is defined as

I_0(t) = I(t) - pq,

so that

E\{I_0(t)\} = 0 \quad \text{and} \quad R_{I_0}(\tau) = p\int_{-\infty}^{\infty} h(t)\,h(t+\tau)\,dt.

But this definition of I_0(t) is equivalent to

I_0(t) = \sum_i h_0(t - t_i), \qquad \text{where } \int_{-\infty}^{\infty} h_0(t)\,dt = 0.

An interesting case occurs if

h_0(t) = a\left(1 - \frac{2t}{T}\right) \;\text{for } 0 < t < T, \quad \text{and } 0 \text{ otherwise}.

Figure 3.3 shows the graph of h_0(t) and Figure 3.4 shows the resulting densities g_0(x), g_1(x), g_2(x) and g_3(x) from the sum

f(x) = e^{-pT}\left[g_0(x) + \dots + g_k(x)\,\frac{(pT)^k}{k!} + \dots\right].

[Figure 3.3: Pulse Shape of h_0(t)]
[Figure 3.4: Densities g_0(x), g_1(x), g_2(x) and g_3(x)]

As mentioned before, if k is large, g_k(x) may be approximated by a Gaussian density. (The mean of g_k(x) is zero and its variance is ka²/3.)

g_k(x) \approx \sqrt{\frac{3}{2\pi k a^2}}\;e^{-3x^2/(2ka^2)}.

Figures 3.5, 3.6 and 3.7 show the density of I_0(t) for several values of pT. Note that the tails of the density expand as pT becomes larger. Assuming that T is a constant and p varies, the density of I_0(t) varies, and thus the mean of a stochastic sequence derived from I_0(t) varies.

[Figure 3.5: Density of I_0(t) for pT = 0.5]
[Figure 3.6: Density of I_0(t) for pT = 1.0]
[Figure 3.7: Density of I_0(t) for pT = 2.0]

3.3 Turbulent Flow

There are two types of fluid flow: laminar and turbulent. Turbulent flow occurs when the particles in a fluid have an irregular, fluctuating motion and erratic paths, whereas in laminar flow particles follow steady paths. Both types of flow obey the laws of fluid dynamics; however, for turbulent flow, parameters such as pressure and velocity are considered to be random processes.

There are two important equations that the parameters of the flow must obey: the equation of conservation of mass and the Navier-Stokes equations of motion. Assuming a constant-density fluid, the equation of conservation of mass is

\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial w}{\partial z} = 0,

where u = velocity in the x-direction, v = velocity in the y-direction, and w = velocity in the z-direction. The corresponding Navier-Stokes equations of motion are

\rho\,\frac{\partial u}{\partial t} = -\frac{\partial p}{\partial x} + \frac{\partial}{\partial x}\!\left(\mu\frac{\partial u}{\partial x} - \rho u^2\right) + \frac{\partial}{\partial y}\!\left(\mu\frac{\partial u}{\partial y} - \rho uv\right) + \frac{\partial}{\partial z}\!\left(\mu\frac{\partial u}{\partial z} - \rho uw\right),

\rho\,\frac{\partial v}{\partial t} = -\frac{\partial p}{\partial y} + \frac{\partial}{\partial x}\!\left(\mu\frac{\partial v}{\partial x} - \rho uv\right) + \frac{\partial}{\partial y}\!\left(\mu\frac{\partial v}{\partial y} - \rho v^2\right) + \frac{\partial}{\partial z}\!\left(\mu\frac{\partial v}{\partial z} - \rho vw\right),

\rho\,\frac{\partial w}{\partial t} = -\frac{\partial p}{\partial z} + \frac{\partial}{\partial x}\!\left(\mu\frac{\partial w}{\partial x} - \rho uw\right) + \frac{\partial}{\partial y}\!\left(\mu\frac{\partial w}{\partial y} - \rho vw\right) + \frac{\partial}{\partial z}\!\left(\mu\frac{\partial w}{\partial z} - \rho w^2\right),

where ρ = density of the fluid, μ = viscosity of the fluid, and p = pressure.

In most references on fluid dynamics, turbulent flow is treated by considering the time averages of the parameters (velocity and pressure), so that the above equations become, for the x-component,

\rho\,\frac{\partial \bar{u}}{\partial t} = -\frac{\partial \bar{p}}{\partial x} + \frac{\partial}{\partial x}\!\left(\mu\frac{\partial \bar{u}}{\partial x} - \rho\bar{u}^2 - \rho\overline{u'^2}\right) + \frac{\partial}{\partial y}\!\left(\mu\frac{\partial \bar{u}}{\partial y} - \rho\bar{u}\bar{v} - \rho\overline{u'v'}\right) + \frac{\partial}{\partial z}\!\left(\mu\frac{\partial \bar{u}}{\partial z} - \rho\bar{u}\bar{w} - \rho\overline{u'w'}\right),

with corresponding equations for the y- and z-components, where ū, v̄ and w̄ indicate the averages and u', v' and w' indicate the random portions, so that u = ū + u', and so on.

It should be pointed out that the two sets of Navier-Stokes equations are identical, except for the additional components ρ\overline{u'^2}, ρ\overline{v'^2}, ρ\overline{w'^2}, ρ\overline{u'v'}, ρ\overline{u'w'}, ρ\overline{v'w'} on the right side of the expressions. These factors are referred to as Reynolds stresses. Obviously, they are present only for turbulence. In most cases, the solutions for the above set of equations are very difficult to find, particularly when the Reynolds stresses are included. These stresses are not well known either experimentally or theoretically.
Graphs may be found for specific cases; however, they are dependent upon many variables such as position, viscosity, shape of channel or pipe, and roughness of the walls of the channel or pipe.

In general, the Navier-Stokes equations may be written as

\frac{\partial u}{\partial t} + f(u, v, w) = a(x, y, z, t), \qquad \frac{\partial v}{\partial t} + g(u, v, w) = b(x, y, z, t), \qquad \frac{\partial w}{\partial t} + h(u, v, w) = c(x, y, z, t),

where f, g, h, a, b, c are functions of the corresponding variables. A simple case of these equations is Langevin's equation, which describes the motion of a free particle:

\frac{dv}{dt} + \beta\,v(t) = n(t),

where n(t) is the driving function of the differential equation. In solving for the mean and autocorrelation function of v(t), n(t) is considered to be white noise with zero mean. Hence,

S_n(\omega) = \alpha, \qquad S_v(\omega) = \frac{\alpha}{\omega^2 + \beta^2}, \qquad R_v(\tau) = \frac{\alpha}{2\beta}\,e^{-\beta|\tau|}.

The point here is that, given information about the driving function n(t), solutions for the mean and autocorrelation function of v(t) may be found. This statement is also true for the Navier-Stokes equations, where a(x, y, z, t), b(x, y, z, t) and c(x, y, z, t) are the driving functions. The problem is that the equations are non-linear and thus a closed-form solution is difficult to find.

4. IMPLEMENTATION OF THE TRANSDUCERS

4.1 Temperature Transducer

The obvious way to measure temperature is to construct a transducer which uses thermal noise. A transducer was built using a resistor as the (thermal) noise source; Figure 3.1 shows the block diagram. The basic problem with this transducer was that the amplitude of the noise source, the resistor, was too small and thus the amplifier was needed. However, the amplifier added undesirable noise, and the resulting noise (the sum of the noise source and the amplifier noise) had an insufficient dependence upon the temperature.

Another transducer has more dependence on the temperature. This one uses a reverse-biased junction as the noise source. Shot noise is present, but it is multiplied in the breakdown region by a factor of M, where

M = \frac{1}{1 - \left(\dfrac{V}{V_B}\right)^n}, \qquad n = \text{constant } (2 < n < 10),

V = voltage across the junction, and V_B = breakdown voltage. Thus, the noise has been amplified with no additional circuitry. Figure 4.1 shows the circuit configuration for this transducer. This transducer corresponds to a noise source, dependent upon the temperature, which has sufficient voltage to be input directly to a comparator. This voltage is compared with a DC voltage (the level A of Figure 2.1), and the output of the comparator is the stochastic sequence.

[Figure 4.1: Circuit Diagram of the Temperature Transducer Using Shot Noise (the base-to-collector junction is reverse biased near its breakdown voltage and the noise is compared with a DC level to produce Y(t))]
[Figure 4.2: Graph of the Breakdown Region]

The calculations that follow show how this noise varies with respect to temperature. The operating point, I_C and V_{BC}, of the junction is determined by the solution of two equations, which are represented in Figure 4.2. The linear (load-line) equation,

I_C = \frac{V_{CC} - V_{BC}}{R},

does not vary with respect to temperature. However, the remaining equation, I_C = M I_{CO} = I_{CO}/[1 - (V_{BC}/V_B)^n], does, and two variables, I_{CO} and V_B, both depend upon temperature. So, differentiating,

\frac{dI_C}{dT} = \frac{\dfrac{dI_{CO}}{dT}}{1 - \delta^n} - \frac{n\,I_{CO}\,\delta^n}{V_B\,(1 - \delta^n)^2}\,\frac{dV_B}{dT}, \qquad \text{where } \delta = \frac{V_{BC}}{V_B},

= \frac{I_{CO}}{1 - \delta^n}\left[\frac{1}{I_{CO}}\frac{dI_{CO}}{dT} - \frac{n\,\delta^n}{(1-\delta^n)\,V_B}\frac{dV_B}{dT}\right] \approx \frac{I_{CO}}{\epsilon}\left[\frac{1}{I_{CO}}\frac{dI_{CO}}{dT} - \frac{n}{\epsilon\,V_B}\frac{dV_B}{dT}\right]

for δ^n = 1 - ε and ε << 1. It is known that \frac{1}{I_{CO}}\frac{dI_{CO}}{dT} and \frac{1}{V_B}\frac{dV_B}{dT} have the same order. Thus,

\frac{dI_C}{dT} \approx -\frac{n\,I_{CO}}{\epsilon^2\,V_B}\,\frac{dV_B}{dT}.
This implies that dI_C/dT is negative and that the change in the breakdown voltage is much more significant than the change in the leakage current. The result is that the temperature and the noise power are inversely proportional (since the noise power is directly proportional to the leakage current). Thus, it is expected that the mean of the stochastic sequence is inversely proportional to the temperature (if the DC level A is less than the mean of the noise).

4.2 Luminance Transducer

In this transducer a photo-transistor is used as a variable resistor at the input of an amplifier. The output noise of the amplifier is dependent upon the input impedance. Thus, the output noise of the amplifier varies with respect to the luminance. A design of this transducer is shown in Figure 4.3.

From the arrangement in Figure 4.3, calculations of the input noise E_ni may be made. In the equivalent noise circuit, R is the resistance of the collector-to-emitter junction of the photo-transistor, which is dependent upon the luminance; I_s is the shot noise resulting from the photo-diode; and E_n and I_n are the voltage and current noise, respectively, resulting from the amplifier. In most cases, I_n and E_n will be more significant than I_s.

[Figure 4.3: Configuration of the Noise Source for the Luminance Transducer and the Equivalent Noise Circuit]

E_{ni}^2 = |Z_p|^2\,(I_s^2 + I_n^2) + E_n^2, \qquad Z_p = R \parallel R_C \parallel (-jX_C).

The purpose of the capacitor is to block the DC voltage (AC coupling); over the bandwidth of the amplifier, X_C >> R and X_C >> R_C, and thus Z_p ≈ R ∥ R_C. Figure 4.4 is a graph of E_ni as R varies.

[Figure 4.4: Graph of E_ni vs. R (E_ni rises from E_n at R = 0 toward \sqrt{E_n^2 + R_C^2(I_s^2 + I_n^2)} for large R)]

The one problem that is neglected in the above discussion is the range of the input resistance of the amplifier. After the input resistance becomes large enough, the feedback is sufficient to bring the amplifier into oscillation. Thus, a reasonable range for the input resistance is from 0 to R_C. Since the resistance R is the resistance of the photo-transistor and this resistance depends upon the luminance, the output noise of the amplifier is dependent upon the luminance.

4.3 Flow Transducer

This transducer is the easiest to implement. The reason is that the noise source is the natural phenomenon; that is, a transducer that measures the velocity of a laminar flow will output a random process in a turbulent flow. Thus, a method similar to that used to measure laminar flow will be used here. Figure 4.5 shows the necessary ingredients for the flow transducer. The liquid enters at the left and strikes the obstacle. At this point the flow becomes turbulent, and the pressure transducer measures a random process which is dependent upon the average flow (volume rate of flow divided by the area of the tube). The item of interest is the fluctuating pressure measurement and not the average pressure. Thus, the pressure measurement (derived by the pressure transducer) is inserted into a comparator via a capacitor (AC coupling). The resulting stochastic sequence then varies with respect to the average flow.

[Figure 4.5: Arrangement for the Flow Transducer (liquid enters a tube, passes an obstacle that makes the flow turbulent, and a downstream pressure transducer picks up the fluctuating pressure)]

5. DATA AND RESULTS

5.1 Method of Measurement and Calculation of Errors

Suppose that the time Y(t) spends in state 0 before switching to state 1, and the time it spends in state 1 before switching to state 0, are exponentially distributed:

f_{S_0}(\tau) = a\,e^{-a\tau} \quad \text{and} \quad f_{S_1}(\tau) = b\,e^{-b\tau} \quad \text{for } \tau > 0.
and P(N X = n) = A 2 (l - A 2 ) for n = 0, 1, Using the above equations, it is easy to determine the error of the approximation, Y(t) ~ Y . The pulse-widths of Y(t) and Y will now be COm- it It pared. (The pulse width is the time that Y(t) or Y is equal to l). The expected pulse width, T , of Y(t) is E{T } = / b t e~ bt dt . 1 b The expected pulse width, AN , of Y is oo AEfl^} = A Z n A 2 (l - A 2 ) n n=0 1 - A, = A -bA = A 1 - e -bA If A is chosen so that e~ l - 1 - bA (=> < bA << l), then AE{N,} - 1 - bA . 1 Note that the expected pulse widths of Y(t) and Y are approximately the 36 same if X is chosen properly; i.e., bX << 1. The mean square error, 2 E{(T - XN ) }, is more interesting: E{(T - AN X ) 2 } = E{T X 2 } + A 2 Efl^?} - 2A EfT^} A 2 _ (1 - A„)(2 - A ) (1 - A )(X + — ) = 4j + A^ S_ ^- _ 2A ^—2 ~ b A 2 A 2 2 A(l - A 2 )(A + |) b 2 A 2 Again, assuming that A is chosen such that e - 1 - bA, ■ . x(i - tx)(x + f) E{(T n - XNj 2 } - " b ■1 l y ^2 "bX b = A + A 2 . X b b Thus, the smaller the sampling period, the closer Y fits Y(t). (intui- tively, this result is expected). By using Tchebycheff ' s inequality, P{|T - XN J > n) < -^p " bn It is possible to use this equation as a measure of the error in the approxi- mation Y(t) ~ Y. . A more important consideration is the error in using the time average of the sampled data Y . This error "was discussed in section 3.3. It was 1 T N concluded that — / Y(t)dt is "optimally approximated" by E a Y where NX N k=l T = NX if the equations -^— f R(t - nX)dt = E a R[(k - n)x] for n = 1, WA k=l K 2, ..., N are satisfied. (The term "optimally approximated" means minimum mean square error. . . ) As described in the algorithm above, each sample Y is given equal K. weight (=^ a = — for k = 1, 2, ..., N). The reason for this assignment of 37 a is mainly to simplify the design of the stochastic meter described in K. section 5.2. However, as explained in section 3.3, "by carefully choosing the sampling rate, X, the mean square error of the time averages may be made as close to the minimum mean square error as necessary. Mathematically, the mean square error, e, is , NX N e = E{( i h Kt)at-i z Yk ) } k=l , HX . -, N-l N NX = =*■ / (1 - =7) R(t)dt + ± R(0) +r E (1 - f) R(kX) - -5- Z / R(t - kX)dt NA NA N N k=l N N 2 X k=l Consider that X is chosen so that for some e > 0, the equation NX N-l \f R(t)dt - X Z R(kX)| < e k=0 or, equivalently, NX N-l |^/R(t)dt - F ^_ n R(kA) l iNT k— U is satisfied. Referring to section 3.3, this condition implies that |R(t + X) - R(t)| < ^ It is obvious that the equations used for solving the a 's are nearly satis- fied. Intuitively, it is expected that the minimum mean square error, e , (as defined in section 2.3) would be approached as e -*■ 0. Calculations con- firm this notion. T N H N N NT = e + -i- Z Z R((k-n)X) - =£ Z / R(t - kX)dt N k=l n=l k=l 38 N , N n T 1 = e + jjj- E [| Z R((k-n)X) - | / R(t - kxjdt] k=l n=l ' m N k=l " n=l 1 e m + NA Remember that e was chosen arbitrarily and that A resulted from e. Also, note that the error, e, can be made very small by increasing N to be a large number. There is one remaining calculation. It is known that lim A^(Y) = y . since Y is ergodic. But if the sampling is finite, the mean-square error becomes ? ] 2 N_1 2 ' E{(iUY) - u r> = | R(0) +-f E (N - k) R(kX) - y a n k=l N-l = | [R(0) - y/] + | I (1 - |)-[R(kX) - u/] k=l = |c(o) + | I (1-|) C(kX) k=l 2 where C(kX) = R(kX) - y y Obviously, as N becomes larger, the mean square error becomes smaller. 
Neglecting the second term on the right side of the equation above and maximizing, Tchebycheff f s inequality becomes 2Nn This expression gives a measure of the accuracy of the resulting time average, Ajj(Y) . 5 .2 Implementation of the Stochastic Meter In the previous section, an algorithm was given to obtain the time average of Y(t). From this algorithm and several design specifications, a 39 meter, called a stochastic meter, that can determine the time average of a random process can "be constructed. Figure 5«1 shows a block diagram of the stochastic meter. Since this meter must accept a variety of inputs from the various transducers, a buffer which is a high impedance, wide bandwidth, low gain amplifier is the first stage. A variable gain amplifier and a variable DC level control for the comparator is available for experimental purposes. The comparator converts the input into a stochastic sequence. Note that the comparator is actually part of the transducer in the previous discussions. However, in order to have versatility, these circuits are included in the stochastic meter. The algorithm for obtaining the time average is implemented by the remaining circuitry which is digital (TTL). The 10 MHz master clock is used to sample the resulting stochastic sequence from the comparator and, also, to determine the length of the averaging period (AN). In order to avoid dividing the contents of the two counters (one with n and the other with II), the averaging period counter has a multiple of ten. Then the number, n, may be displayed directly and the decimal point is placed according to the averaging period. A justification of the 10 MHz master clock is needed. As shown in previous calculations the smaller the sampling period, the "better" the approximation of the sampled time average. In section 5-1, it was indicated that if the sampling period, A , was chosen so that |R(t + A) - R(t)| < ~ for all t then the mean square error, e, is e e < e + — - m NA kO <> 11 1 UJ < cr ol UJ UJ > •- < z r> UJ o 2 ° 1- Q O cr UJ (9 H < u cr UJ > < >- < a. CO Q i ■ Nl f J i , * > a. o _j u. i a. _j U- a o cr UJ Q- x t- 2 Z O UJ < -J cr UJ > < u CD -P CD S O •H -P 1/3 ^ O o -p CO 0) -p

In Section 3.1, calculations were made assuming that the noise source was thermal noise (flat power spectrum and Gaussian distributed). The calculation of R_Y(τ) is very complicated and was not presented in closed form (rather, R_Y(τ) was expressed as an integral). However, an approximation may be made:

R(\tau) = \int_A^{\infty}\!\!\int_A^{\infty} f(x_1, x_2; \tau)\,dx_1\,dx_2,

where f(x_1, x_2; τ) is the Gaussian density with correlation coefficient r(τ) = e^{-ατ}. But

R(0) = \int_A^{\infty}\!\!\int_A^{\infty} f(x_1, x_2; 0)\,dx_1\,dx_2, \quad \text{so} \quad R(0) - R(\lambda) = \int_A^{\infty}\!\!\int_{-\infty}^{A} f(x_1, x_2; \lambda)\,dx_2\,dx_1.

This integral may be approximated when r(λ) ≈ 1 (which is the case when λ << 1):

|R(0) - R(\lambda)| \approx \frac{\cos^{-1} r(\lambda)}{2\pi}\,e^{-A^2/(2\sigma_X^2)}.

This implies that the error parameter must satisfy e > Nλ |R(0) - R(λ)|.

The constants now need to be given values. The constant A is the DC level of the comparator; it takes values from 0.1σ_X < A < 4σ_X, but in most cases A ≈ σ_X. The constant Nλ is the averaging period, and it varies over 10⁻³ ≤ Nλ ≤ 10 seconds, with the most common value being Nλ = 1. The constant α/2π is the corner frequency of the amplifiers in front of the comparator, and α = 5 × 10⁵. The constant e is an error parameter and may be chosen so that the last equation in Section 2.3 is satisfied. For a sampling period of λ = 10⁻⁷ s, cos⁻¹r(λ)/2π ≈ 0.05; for λ = 10⁻⁸ s, it is about 0.016.

These data imply that if λ = 10⁻⁷, e must be greater than 0.05. This value for e is satisfactory, so the 10 MHz clock was chosen. Obviously, there is a trade-off involved. The practical considerations are that the comparator output has a non-zero rise-time and fall-time, and the input noise frequency spectrum is not necessarily flat, so that the corner frequency may be lower. It is not necessary to have a sampling rate much greater than the rise-time of the comparator output. Appendix A shows the characteristics of the stochastic meter.

5.3 Data and Results

This section shows the data that were obtained from the transducers described in Section 4 and shown in detail in Appendix B. The data and experimental procedures that were used to obtain the graphs in this section are presented in Appendix C.

The graphs in Figures 5.2 and 5.3 show the data from the two temperature transducers: one using thermal noise and the other using a reverse-biased junction. The thermal noise temperature transducer does not respond according to the theory presented in Section 3.1. In particular, the graph of Figure 5.2 does not resemble Figure 3.2; i.e., the time average increases as temperature increases in Figure 3.2, while the time average decreases as temperature increases in Figure 5.2. The cause of this discrepancy is that the response of the amplifier as the temperature changes was not considered in the theory: the gain of the amplifier decreases as the temperature increases.

[Figure 5.2: Graph of the Thermal Noise Temperature Transducer (time average vs. temperature)]
[Figure 5.3: Graph of the Shot Noise Temperature Transducer (time average vs. temperature)]
[Figure 5.4: Graph of the Luminance Transducer, Part A (time average vs. luminance)]
[Figure 5.5: Graph of the Luminance Transducer, Part B (time average vs. luminance)]

In measuring the luminance transducer, a consistent spectral energy distribution must be maintained, especially when the transducer is placed very close to the light bulb. This procedure is discussed in detail in Appendix C.
The goal of these procedures is to maintain a consistent spectral energy distribution while varying the luminance (i.e., to maintain the shape of the spectrum). Because the spectral distribution has two distinct shapes in these two cases, the graphs in Figures 5.4 and 5.5 are different. In both cases, however, the resulting graphs are expected and agree with the discussion in Section 4.2.

Figure 5.6 shows the graph of the flow transducer. The basic construction is shown in Figure 4.5. The tube was a 1/2 inch inside diameter plexiglass tube. The obstacle was constructed in the following manner: 1) the tube was divided into two parts by a perpendicular cut, 2) a 3/8 inch hole was placed in a 1/4 inch thick plexiglass sheet, and 3) the plexiglass sheet was epoxied to both parts of the tube so that the 3/8 inch hole was centered. The pressure transducer was placed downstream from the obstacle (about 1 inch). The remaining details of this transducer and the procedure for obtaining the data are explained in Appendix C.

[Figure 5.6: Graph of the Flow Transducer (time average vs. average velocity, cm/sec)]

As mentioned in Appendix C, problems arose in obtaining additional data points for the graph in Figure 5.6. However, it is reasonable to assume that this graph is a good representation. Note that there is a similarity between Figures 5.6 and 5.4 (if the horizontal axis is reversed). This similarity is expected, since both of the noise sources are Gaussian distributed.

One more comment about the flow transducer should be made. As noted in Appendix C, the time period of the stochastic meter was set at 10 seconds. (All other readings were made at a one-second time period.) A longer time period was necessary because low frequencies (< 100 Hz) dominated the power spectrum of the noise source. The resulting stochastic sequence would have long lengths of time (on the order of milliseconds or more) in which the sequence was in state 0 or 1. Thus, the longer averaging period gives a better time average.

6. CONCLUSIONS

From the theory and the actual implementation of these transducers it can be concluded that the basic concept of Molecular Stochastics is feasible. Both theoretically and practically, these transducers may be constructed, and there is a relationship between the time average of the stochastic sequence and the physical variable that is being measured. In every case that was realized (temperature, luminance and flow transducers) this relationship was not a linear function, but it was one-to-one, continuous, and may be considered linear over smaller ranges. In every case, the relationship between the time average and the physical parameter being measured was predicted.

A secondary concern of this project was the simplicity and cost of these transducers. One of the advantages of stochastic computing is the simplicity and economy of the arithmetic unit, and so the goal was to maintain these properties in the transducers. The transducers have, at most, an amplifier and a comparator along with a sensor (which may be a resistor, transistor, photo-transistor, etc.). The minimum amount of circuitry is a comparator. Of the transducers described in this paper, the shot noise temperature transducer would require the least amount of circuitry: a reverse-biased base-to-collector junction (the sensor), a one-stage amplifier (one transistor), and one comparator (available in integrated circuits).
A digital temperature transducer normally would use a thermistor as a sensor and an A/D converter to convert the analog signal into a digital result. An A/D converter is much more complex and expensive than a comparator and an amplifier combined. The point is that the transducers with the stochastic output yield a simplicity in transducer design (the A/D converter is eliminated and replaced with a comparator).

Although this paper covers the implementation of three different transducers, it is felt that other kinds of transducers may be built. One that has already been built is the Geiger counter. Other possibilities are transducers that measure voltage and current. As mentioned in Appendix B, the amplitude of the shot noise resulting from the reverse-biased junction is dependent upon the voltage across the junction. The range of voltage dependence is small, however (about 5 volts). Obviously, if a successful voltage transducer were constructed, the current transducer would follow immediately.

APPENDIX A

Specifications of the Stochastic Meter

The circuit diagrams of the stochastic meter are given in Figures A-1 through A-5. In Figure A-6, the frequency response of the buffer and variable amplifier is given. This figure was used in Section 5.2 to determine the constant α. The frequency response curve was obtained by using a sine wave input at different frequencies; the input and output amplitudes were measured and the gain (in dB) was then determined.

In Table A-1, the accuracy of the stochastic meter is shown. In one case, the DC level of the comparator is fixed (so that the time average is about 0.4) and the frequency of the input signal (the sine wave) is varied. In the other case, the frequency of the input signal is fixed (about 10 kHz) and the DC level of the comparator is varied. The mean and standard deviation for each case were calculated from ten samples S_i by the formulas:

\text{Mean} = \frac{1}{10}\sum_{i=1}^{10} S_i, \qquad \text{Standard Deviation} = \sqrt{\frac{10}{9}\left[\frac{1}{10}\sum_{i=1}^{10} S_i^2 - (\text{Mean})^2\right]}.

Note that the meter has an accuracy of 3 digits, and the accuracy is best when the slope of the input signal as it passes the threshold level is a maximum. Figure A-7 shows the front view of the stochastic meter.
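For reference, the reduction of ten readings to the tabulated mean and standard deviation can be written as the short sketch below; the readings shown are hypothetical placeholders, not the values of Table A-1.

```python
import math

def mean_and_std(samples):
    """Sample mean and standard deviation as defined above (n readings, n-1 divisor)."""
    n = len(samples)
    mean = sum(samples) / n
    var = (n / (n - 1)) * (sum(s * s for s in samples) / n - mean ** 2)
    return mean, math.sqrt(var)

# Hypothetical readings of the displayed time average (illustrative only)
readings = [0.401, 0.399, 0.402, 0.400, 0.398, 0.401, 0.400, 0.399, 0.402, 0.400]
print(mean_and_std(readings))
```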
A B C D CU CARRY 74193 COUNTER 1 CD BORROW CLEAR LOAD _ 1 1 i. - — 1 (T A A A A A s * ! A L=- * i 41 A 1 o "A *! A I £_. - _ _ -I g S? i i ' 7- x ■ i < °X K Ul -1 UI >- UI Ml * Hi. ° j 9 1 9 Z UI a -i o ■ ■ ■ ii_jr A •* ) a UJ I V ir < uj -I U o _ _j 58 (?) *-e ^^-e O _l ui a. S -© =M3 Tf CD Q NAA^j -<.o <.£>" -<^ -- o z LU ID a hi a: ■8" O o CM — 0- (8P) NIV9 o i o t 5-i o 0) Pi > ^ cu a P H ti •H U ho FH pi a o •H o t>J < Pi i> H 1 d N PP !SI < td EH O H o pi aj & H Q II T) -H Sh t> ctf d p PI ti a5 P ° £ ■h a p Cti O •H O |> OJ CD Q II d -H & ► d -p P CO I o o OJ >> — u a !>> CD o d PI 0< ) Sh > d H O U $ O o d H o co cu Pi ,3 P P> CQ Pi o u -=T LT\ o H H o -=r o H H o oo ! O • • i 1 t- H O OJ H UA CO O t— O on O CO VO o o O OJ H o O o _=f o vo -3- O oo r-\ H vo o : o o ! -=t O ! OJ O o OJ UA vo o o o -3- o ' H CO OJ H ■ H c— O o\ O i 00 O j 1 0) 1 PI 1 M ° i a3 •H ! u p ! cu 03 !> •H ! \ cu ; 0) Q a ■H d EH Sh ctf d P CQ CO ctJ d O •H P. OJ ft •H P OJ P> d I I o H II CD PI •H Em -d CD to Pi O O 1 P CU CO CO 01 > u 0) p cu Fi • 0J f) cu •H to P cfl CO o crt ,c! (I) CJ 1> o o p> ^ CO aj (I) 0) ,c! ■3 P +^ n ^H O o H Xi O +^ Sh o P> rU PI O P! o •H n d •H PI crt o hn o cu (1) co ^ EH H cu P O s 61 0) -p 0) S o •H -P CO Oj O o p co 0) ■H LT\ O LTN o a P !m H H OJ cj Eh 0) t3 s rS ■p 67 o EH -p cd h QJ & a 0) Q X) q p CO 68 in CU 03 EH ctf •H Sh CD -P H •H HH bO ■H CD CLI bO crj -P H O > crj •H U 03 > O\co O OO H CO CO LTN CO t— UAVO OJ ON t— -=t OJ LTN OJ 00 O UA -4" O o OJ ON ON OJ LTN j (\i onoj- H on on t— VO VO r— t- vo t— r— t— o- I s — O O O O o O o o o o o o o IA HCO J- J- -3- 0- OO LTN H ONVO on vd _=r O t— vo o on oj -=r on o UA OJ o H CO 00 rH O t— VO OJ oj cn OJ CO CO OO CO CO co co I s — co co CO o o o o o O O O O O O o o VD J IA ir\ LTN on C— 0O OJ I s — vo vo OJ OJ o LTN LTN O LT\ ON o on _3" LTN O ON OJ vo LTN LTN cn t~ OJ ON on ua oj CM ON CAOA ON ON ON On ON ON On On O o o o o O o o o o o o o O VD VO OJ H H CM4J4- OJ UA o r— on in OJ O ltn oj On OJ on H OJ • O O CO ON OJ lf\^t H co On co on UA r-\ H O o H O O H O O o o H H H H H H H H H H H H O OHO CO OJ o o on on oj CO VO O ONVO _=r OJ On oj co vo -3- on UA O • vo rH in H On O ON O CO H ON OJ o OJ 00 OJ OO OJ on oj on oj on OJ o H H H H H H H H H H H H O OJ UACO VO OO ON-3" CO UA UA vo vo UA J" VO o UA OJ IT— O ON H I s — UA O • -3" fOJ vo H On ON t— LTN ON rH On r— _=f -3- _* mj- _rj- cn cn ,-j- on -=r O rlHH H H H H H H H H O H On CO LTN CO ON CO LTN CO CO UA UA O OO O OJ LTN UA H I s — H On ltn on oj • ^j- ovo CO O On O t— VO co cn UA VO UAVO LTN LTN VO LTN VO LTN LTN UA O r-\ r-\ r-\ H H H rH H H H H O ON OJ LTN ON o co on o oj -3- oj ir- o mvo m j- UA CO On OJ O CO on on • onvo vo OO LTN H -3" LTN LTN _=f LfN H OJ CO CO CO CO CO CO CO CO CO CO CO o H H H H H H H H H H H O O H -3- ON ON H _=r t— ^t co IAJ- o CO j- vo On-3- On On I s — OJ OJ H OJ • I s - CO ON t- CO _=r H t— co vo I s — OJ H CAONCA ON ON On ON ON On ON ON O r-\ r-\ H H H H H H H H H O CO VO UA UA H on on o o on -=f I s — LTN 00 OJ H H I s - O IACOJ ON VO H • H _=f OJ O UA J- O ON On co o on o H H H H H O H O O O H o OJ OJ OJ OJ OJ OJ OJ OJ OJ OJ OJ o VO VO VO ON I s - OJ CO H ON H ON-3" H OJ H O ON co on oj h vo O On o OJ HJ- OJ 0J LTN On OJ H On OJ H UA LTN UA UA LTN UA_^|- LTN UA_rJ- LTN O OJ OJ OJ OJ OJ OJ OJ OJ OJ OJ OJ o o •H -p crj •H H !> 01 £ •H ua O 03 Q Ih ! 
[Table C-3: Data from the Shot Noise Temperature Transducer]
[Table C-4: Data from the Luminance Transducer]
[Figure C-1: Transmittance Curve for the Kodak Wratten Filter No. 47B (density/transmittance vs. wavelength, 200 to 700 nm)]
[Table C-5: Data from the Flow Transducer]

For the flow transducer, the average velocity was computed from the time T required to collect one liter of water:

v = \frac{789}{T},

where v is in cm/sec and T is in sec. The constant, 789, was determined by using tubes with an inside diameter of 1/2 inch. The time T was measured by a timer that was controlled by two level sensors: one sensor to start the timer and one to stop it. The sensors were positioned in a flask so that the volume between the two sensors was one liter. The timer was a counter with a 10 MHz clock input.

The water source was a water faucet supplied by the city water works. Because the pressure of the city water works is not constant but varies slightly, the velocity of the water through the tubes of the test set-up and the readings from the stochastic meter were affected. Thus, several data readings were ignored if the deviation from the mean appeared to be great. Also, the water faucet was such that it was difficult to set the average velocity between 100 cm/sec and 200 cm/sec. The stochastic meter had a gain of 50.
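The constant 789 follows from the tube geometry and the one-liter collection volume; the sketch below (the 5-second fill time is only an illustrative value) recomputes it.

```python
import math

liter_cm3 = 1000.0
diameter_cm = 0.5 * 2.54            # 1/2 inch inside diameter
area_cm2 = math.pi * (diameter_cm / 2.0) ** 2

constant = liter_cm3 / area_cm2     # v = constant / T  (cm/sec when T is in sec)
print(constant)                     # ~789

def average_velocity(T_seconds):
    return constant / T_seconds

print(average_velocity(5.0))        # e.g. ~158 cm/sec for a 5 s fill time
```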
, "RASCEL - A Programmable Analog Computer Based on a Regular Array of Stochastic Computing Element Logic," University of Illinois, June 19^9 • 3. Marvel, Orin, "Trans formatrix, An Image Processor; Input and Stochastic Processor Sections," University of Illinois, April 1970. h. Ryan, Lawrance, D. , "System and Circuit Design of the Transformatrix Coefficient Processor and Output Data Channel," University of Illi- nois, June 1971. 5. Wo, Yiu Kwan, "Ape Machine, A Novel Stochastic Computer Based on a Set of Automonous Processing Elements," University of Illinois, February 1973. 6. Poppelbaum, W. J. , Computer Hardware , Macmillan, New York, 1972. 7. Papoulis, A., Probability, Random Variables and Stochastic Processes , McGraw-Hill, New York, 1965. ' " 8. van der Ziel, A. , Noise: Sources, Characterization, Measurement , Prentice-Hall, Englewood Cliffs, N. J., 1970. 9. Feller, William, Introduction to Probability Theory and its Applica- tions , Vol. I, John Wiley and Sons, New York, 1950. 10. Sabersky, R. H. , A. J. Acosta and E. G. Hamptmann, Fluid Flow , Macmillan, New York, 1971. 11. Motchenbacher , C. D. and F. C. Fitchen, Low-Noise Electronic Design , John Wiley and Sons, New York, 1973. 12. Cramer, H. and M. R. Leadbetter, Stationary and Related Stochastic Processes, John Wiley and Sons, New York, 19&1 • 13. Bendat, I. S. , Principles and Applications of Random Noise Theory , John Wiley and Sons, New York, 1958- ik. Goldstein, S. (ed.), Modern Developments in Fluid Dynamics , Vol. I and Vol. 2, Dover, New York, 1965. 15. van der Ziel, "Noise in Solid-State Devices and Lasers," Proceedings of the IEEE , August 1970, pp. 1178-1203- 16. Hinze, J. 0., Turbulence , McGraw-Hill, New York, 1959- 75 17. Bhat, U. N. , Elements of Applied Stochastic Processes , John Wiley and Sons, New York, 1972. 18. Parzen, Emanual, Stochastic Processes , Holden-Day, San Francisco, 19&2. 19. Luxenberg, H. R. and R. L. Kuehn (eds.), Display Systems Engineering , McGraw-Hill, New York, 1968. 20. Williams, Charles S. and Orville A. Becklund, Optics , Wiley-Inter science, New York, 1972. 21. Millman, Jacob and Herbert Taub, Pulse, Digital and Switching Waveforms , McGraw-Hill, New York, 1965. 22. Wang, Shyh, Solid-State Electronics , McGraw-Hill, New York, 1966. 23. Sze, S. M. , Physics of Semiconductor Devices , Wiley-Interscience, New York, 1969. 76 VITA James R. Cutler was born in Mitchell, South Dakota on July 30, 19^6. He attended South Dakota State University at Brookings, South Dakota and received a B.S. in Electrical Engineering in June, 196U. He then worked for the Naval Ordinance Laboratory in White Oak, Silver Springs, Maryland from July, 1968 to August, 1971- During this time, he was a participant in a graduate study program at NOL. He attended Michigan State University at East Lansing, Michigan and received his M.S. in Electrical Engineering in June, 19^9 • In September, 1971 he enrolled at the University of Illinois, Urbana, Illinois. From that time, he was a research assistant working under the guidance of Professor W. J. Poppelbaum. He is a member of the Computer Group and Solid State Circuits Group in the IEEE as well as Sigma Xi. SECURITY CLASSIFICATION OF THIS PAGE (Whan Data Bntarad) REPORT DOCUMENTATION PAGE READ INSTRUCTIONS BEFORE COMPLETING FORM 1. REPORT NUMBER UIUCDCS-R-75-723 2. GOVT ACCESSION NO 3. RECIPIENT'S CATALOG NUMBER 4. TITLE (and Subtitle) MOLECULAR STOCHASTICS : A STUDY OF DIRECT PRODUCTION OF STOCHASTIC SEQUENCES FROM TRANSDUCERS S. TYPE OF REPORT ft PERIOD COVERED Ph.D. 