UNIVERSITY OF CALIFORNIA, SAN DIEGO
SCRIPPS INSTITUTION OF OCEANOGRAPHY
MARINE PHYSICAL LABORATORY
San Diego, CA 92152-6400

OPTICAL SYSTEMS GROUP
TECHNICAL NOTE NO. 227
February 1991

A Sensitivity Study of Daytime Visibility Determination With The Horizon Scanning Imager

J. E. Shields
R. W. Johnson
M. E. Karr

The material contained in this note is to be considered proprietary in nature and is not authorized for distribution without the prior consent of the Marine Physical Laboratory and the Air Force Geophysics Laboratory.

Contract Monitor: Dr. H. A. Brown, Atmospheric Sciences Division

Prepared for The Geophysics Laboratory, Air Force Systems Command, United States Air Force, Hanscom AFB, Massachusetts 01731, under contract No. F19628-88-C-0154.

Abstract

The determination of visibility during the daytime by the Horizon Scanning Imager is influenced by a number of parameters, both measured and input. This technical note contains a sensitivity study, in which the impact of uncertainty in these parameters is determined. Results are presented as plots of the resulting percent change in the visibility determination as the measured and input parameters are changed. Following the discussions of the impact of the uncertainties, there is additional discussion of means of reducing these uncertainties in order to potentially improve the visibility determinations.

Contents

Abstract
List of Illustrations
1.0 Introduction
2.0 Sensitivity to Inherent Contrast Values
    2.1 Computation and Interpretation of the Co Sensitivity Plots
    2.2 Results of the Co Sensitivity Computations
3.0 Sensitivity to Measured Target Uncertainties
    3.1 Computation of Sensitivity to Measured Target Uncertainties
    3.2 Evaluation of Sensitivity to Measured Target Uncertainties
4.0 Sensitivity to Measured Horizon Brightness Uncertainties
    4.1 Computation of Sensitivity to Measured Horizon Uncertainties
    4.2 Evaluation of Sensitivity to Measured Horizon Uncertainties
5.0 Sensitivity to Non-linearity of Camera Response
    5.1 Computation of Sensitivity to Measured Non-linearity
    5.2 Evaluation of Sensitivity to Measured Non-linearity
    5.3 Characterization and Stability of System Response; Implications
6.0 Sensitivity to Target Range and Contrast Threshold
    6.1 Sensitivity to Target Range
    6.2 Sensitivity to Contrast Threshold
7.0 Summary
8.0 Recommendations
    8.1 Improvements Relating to Measurement Accuracy
    8.2 Improvements Relating to Non-ideal Conditions
9.0 Conclusion
10.0 Acknowledgements
11.0 References

List of Illustrations

Test 1a  Sensitivity of Derived Visibility to an error in input Co when the actual Co is .8
Test 1b  Sensitivity of Derived Visibility to an error in input Co when the actual Co is .5
Test 1c  Sensitivity of Derived Visibility to a variation in actual Co when a fixed input Co of .8 is used
Test 1d  Sensitivity of Derived Visibility to a variation in actual Co when a fixed input Co of .5 is used
Test 2a  Sensitivity of Derived Visibility to Measured Target Radiance Uncertainty, Co = .8, Lq = 100
Test 2b  Sensitivity of Derived Visibility to Measured Target Radiance Uncertainty, Co = .8, Lq = 200
Test 2c  Sensitivity of Derived Visibility to Measured Target Radiance Uncertainty, Co = .5, Lq = 100
Test 2d  Sensitivity of Derived Visibility to Measured Target Radiance Uncertainty, Co = .5, Lq = 200
Test 3a  Sensitivity of Derived Visibility to Measured Horizon Radiance Uncertainty, Co = .8, Lq = 100
Test 3b  Sensitivity of Derived Visibility to Measured Horizon Radiance Uncertainty, Co = .8, Lq = 200
Test 3c  Sensitivity of Derived Visibility to Measured Horizon Radiance Uncertainty, Co = .5, Lq = 100
Test 3d  Sensitivity of Derived Visibility to Measured Horizon Radiance Uncertainty, Co = .5, Lq = 200
Test 4a  Sensitivity of Derived Visibility to Non-Linearities in Camera Response, Co = .8, Lq = 100
Test 4b  Sensitivity of Derived Visibility to Non-Linearities in Camera Response, Co = .8, Lq = 200
Test 4c  Sensitivity of Derived Visibility to Non-Linearities in Camera Response, Co = .8, Lq = 220
Test 4d  Sensitivity of Derived Visibility to Non-Linearities in Camera Response, Co = .5, Lq = 100
Test 4e  Sensitivity of Derived Visibility to Non-Linearities in Camera Response, Co = .5, Lq = 200
Test 4f  Sensitivity of Derived Visibility to Non-Linearities in Camera Response, Co = .5, Lq = 220
Test 5a  Sensitivity of Derived Visibility to Precision and Stability of Non-Linearity, Co = .8, Lq = 100
Test 5b  Sensitivity of Derived Visibility to Precision and Stability of Non-Linearity, Co = .8, Lq = 200
Test 5c  Sensitivity of Derived Visibility to Precision and Stability of Non-Linearity, Co = .8, Lq = 220
Summary a  Summary for Co = .8, Lq = 200
Summary b  Summary for Co = .5, Lq = 200

1.0 INTRODUCTION

The determination of visibility during the daytime by the Horizon Scanning Imager (HSI) (Johnson et al., 1990) is influenced by a number of parameters, both measured and input. The system works essentially through measurement of the relative radiance of dark targets and the horizon sky, as explained in Johnson et al., 1989. These measurements are made for several targets in each of several azimuthal directions around the horizon. An apparent contrast of the target with respect to the sky can be computed directly from the measured radiances. Then the visibility can be determined from the apparent contrast, if the inherent contrast of the target with respect to the sky, and the range to the target, are known.

An early study of the impact of various theoretical uncertainties is included in Johnson et al., 1989.
In this early study, the impact of uncertain inherent contrast, and of overcast or partly cloudy horizon sky conditions, was considered. The current note extends the study of the impact of inherent contrast uncertainties, and also evaluates the impact of measurement uncertainties. In particular, it is important to note which of the problem areas can be readily controlled, and which represent basic limitations to the accuracy of the device.

This note will discuss specifically the sensitivity of visibility determinations to errors in input Co, measured target relative radiance, measured horizon relative radiance, camera linearity, and input target range. Each section first discusses how the sensitivity plots for the given parameter were derived. (Numeric examples are given in some cases.) This is followed by a discussion of the results implied by the plots. Each section then includes a discussion of the potential sources of these uncertainties, and potential means of minimizing them.

2.0 SENSITIVITY TO INHERENT CONTRAST VALUES

The inherent contrast is defined by

    Co = (Lo - bLo) / bLo                                            (2.1)

where Lo is the inherent target radiance, and bLo is the inherent background radiance. Note that the word "inherent" signifies that the radiance is measured from a distance of 0. Also note that if the target is black, Lo is equal to 0, and the inherent contrast stays fixed at -1, even if the background radiance changes.

The HSI system does not measure Co; this value is an input set by the user. The visibility is then computed from the equation

    V = r ln(Co / ε) / ln(Co / Cr)                                   (2.2)

where r is the range to the target, and ε is the human contrast threshold associated with the definition of visibility. Cr is the measured contrast, or the contrast at range r, defined by

    Cr = (Lr - bLr) / bLr                                            (2.3)

where Lr and bLr are the target and background radiances, respectively, at range r. For a full discussion of these terms, see Duntley, 1957.

Equation 2.2 is the primary expression utilized by the HSI to derive visibility from the measured contrast. This equation is derived in Johnson et al., 1989. It is valid if the contrast with respect to the horizon sky is used, and if the horizon sky radiance approximates the equilibrium radiance (Lq), defined in Johnson et al., 1989, and earlier references. A threshold ε of .05 is normally used in the HSI; this is the value of the human contrast threshold associated with the definition of visibility (ref. Douglas & Booker, 1977).

It should also be noted that although Co values are negative for dark targets, the human visual response depends on the absolute magnitude of Co; therefore positive Co values have been used in most of our reports, for ease of presentation. In this note, the negative sign must be kept in some of the internal computations used to generate the plots; however, the positive sign has been used in presenting the results, for consistency with earlier work.

2.1 Computation and Interpretation of the Co Sensitivity Plots

The sensitivity of the derived visibility to uncertainties in the inherent contrast is illustrated by a series of four plots, labeled "Test 1a" through "Test 1d". The first plot, labeled Test 1a, shows the sensitivity of derived visibility to changes in input Co when the actual Co value is 0.8. That is, if we are measuring a target which has an inherent contrast of -0.8 with respect to the sky, but we input some other value, what is the error in derived visibility due to this input error? The visibility we would derive is given by Eqn.
2.2 using the input Co; the visibility we should have derived, given the correct Co value, is given by Eqn. 2.2 with a Co of 0.8 input. Thus, to determine the error in the derived visibility, one first computes the visibility using the input Co value and Eqn. 2.2, then divides this by the visibility computed using a Co value of 0.8. For example, if a target at range 10 miles has an apparent contrast Cr value of 0.2, and the correct Co is 0.8, the derived visibility found using Eq. 2.2 should be 20 miles. An input Co value of 0.6 would yield a derived visibility of 22.6 miles, for an error of 13%. The error is computed for a range of input Cr and Co values. Note that the error does not depend on the input range (r) value, and the input ε value is not normally varied; thus these variables do not impact the assessment of the error due to Co uncertainty.

In the Test 1a plot, the uncertainty is plotted as a function of the measured apparent contrast Cr. Note that when Cr is close to .05, the error approaches 0. Physically, this occurs when the target is at a range about equal to the visibility; that is, the apparent contrast approaches the visual threshold of .05 when the target range is close to the visibility. Thus the plot may be interpreted as showing that the error due to uncertainty in Co is quite small if the target range is similar to the visibility, and becomes larger as the target range becomes significantly less than the visibility.

[Figure Test 1a: Sensitivity of Derived Visibility to an error in input Co when the actual Co is .8 (percent change in derived visibility vs. measured apparent contrast)]
[Figure Test 1b: Sensitivity of Derived Visibility to an error in input Co when the actual Co is .5]
[Figure Test 1c: Sensitivity of Derived Visibility to a variation in actual Co when a fixed input Co of .8 is used]

The HSI algorithm has two automatic cutoff values. When V/r is less than 1.15 or greater than 4.0, the visibility value is given a "greater than" or "less than" value. In Test 1a, the curves are plotted in black (solid lines) where the V/r values lie within this range
and the derived visibility values are used; the portions of the curve which are grey show where the V/r values are outside the bounds. For example, if a Co value of .6 is input, the V/r cutoff occurs at a measured Cr value of .32, and the maximum error is about 30%. The grey curves show what the error would be if the possible Cr range were changed due to a change in the V/r cutoffs.

[Figure Test 1d: Sensitivity of Derived Visibility to a variation in actual Co when a fixed input Co of .5 is used]

Test 1b shows the sensitivity of visibility to Co input errors if the actual Co is .5. The values of .8 and .5 for actual Co were chosen for Tests 1a and 1b because they are close to the normal bounds we might expect to use. A Co of .8 is perhaps typical of the blackest targets we can expect to find among targets of opportunity; a Co value of .5 was chosen to represent a significantly non-ideal (non-black) target. Test 1b shows characteristics similar to Test 1a, in the sense that the error in determined visibility due to the uncertainty in the Co value is vanishingly small for targets at a range equal to the visibility. The error increases as the target range becomes smaller and the measured apparent contrasts become larger. The errors become quite large at the maximum Cr values, indicating a certain risk associated with using non-ideal targets.

The third and fourth plots, Tests 1c and 1d, show the impact of Co uncertainties in a slightly different way. These plots show the resulting error if a fixed input value is used, but the actual inherent contrast of the target with respect to the horizon sky changes. (Co can change because of the impact of changing lighting conditions on the target radiance; it is also impacted by changes in the horizon radiance. The equilibrium radiance, and therefore the horizon radiance, is highly dependent on the path-of-sight's scattering angle with respect to the sun.) For an example of the impact of changes in Co, consider the case above where the visibility is 20 miles, and the range 10 miles. If Co is really .6, it can be derived from Eq. 2.2 that Cr must have been .1732. Then if we measure Cr = .1732, and input a Co value of .8, we will derive a visibility value of 18.1 miles, for a -9.4% error.

It should be noted that in Test 1c, the V/r cutoffs all occur at the same value of Cr. Looking at Eq. 2.2, one can see that if Co and ε are fixed, V/r will be directly related to Cr. The HSI algorithm must compute V/r using its input value of Co. In Tests 1c and 1d, that input Co value is not changing, so the values of Cr associated with the V/r cutoffs do not change. In Tests 1a and 1b, the input value of Co is changing, and therefore the Cr value at which a given V/r threshold occurs changes too.

2.2 Results of the Co Sensitivity Computations

As noted in the previous section, the errors in derived visibility (due to Co uncertainty) go to 0 when the target range equals the visibility. At shorter ranges, the error can become quite large.
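As a concrete check on these numbers, the following minimal sketch (Python; the function and variable names are illustrative and are not part of the HSI software) reproduces the two numeric examples of Section 2.1 using Eq. 2.2, with the contrasts handled as positive magnitudes.

import math

EPSILON = 0.05  # human contrast threshold associated with the definition of visibility

def visibility(co, cr, r, eps=EPSILON):
    """Derived visibility from Eq. 2.2: V = r * ln(Co/eps) / ln(Co/Cr)."""
    return r * math.log(co / eps) / math.log(co / cr)

# Target at 10 miles with measured Cr = .2; the correct Co is .8.
v_correct = visibility(0.8, 0.2, 10.0)           # about 20 miles
v_wrong_co = visibility(0.6, 0.2, 10.0)          # about 22.6 miles when Co = .6 is input
print(100.0 * (v_wrong_co / v_correct - 1.0))    # roughly +13 percent

# Second example: an actual Co of .6 gives Cr = .1732 when V = 20 and r = 10;
# inputting Co = .8 for that measurement yields about 18.1 miles, roughly -9 percent.
print(visibility(0.8, 0.1732, 10.0))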
Thus, all four of the plots show quite clearly that it is important to know the Co value reasonably well. For example, from Test 1a, if the actual inherent contrast is .8, and if one wishes to have an error less than 20%, the error in Co must be about .1 or less (when Co is .8). It is also quite obvious that the uncertainty is much less for ideal black targets than for non-ideal targets. In Test 1b, with an actual inherent contrast of .5, the error can exceed 20% even if the contrast is known to within an uncertainty of .1 or less. This implies that if a non-ideal target is used, there will be a loss of accuracy unless the Co is very tightly specified, or the range of acceptable Cr values is more tightly limited.

Even if we accurately specify the Co value at some point in time, the system is somewhat vulnerable to natural changes in Co. As shown in Test 1c, the resulting visibility errors are probably reasonable when a target near Co = .8 is used. The errors resulting from a change in Co for targets near Co = .5, shown in Test 1d, are quite large, implying that we probably should not use targets with Co values as low as .5.

Even though these Co sensitivity plots demonstrate a basic limit of system accuracy, there are some available trade-offs. We can help optimize system accuracy by utilizing a combination of several options such as:

a) Limiting our targets to Co values closer to 1; i.e. we should probably exclude any targets with Co < .5.

b) Using V/r thresholds which depend on Co. That is, we could allow V/r values as high as 4 when Co is close to .8, but use a V/r limit which is smaller when Co is less.

c) Determining more accurate Co values, and better characterizing the variations in Co due to changes in horizon brightness, thus minimizing errors in input Co as discussed below.

Study of the normally occurring Co variations in the available targets of opportunity, in conjunction with the use of the plots (1a-1d) which show the impact of these variations, should allow us to optimize these trade-offs.

Regarding option "c" above, there has been considerable interest in extracting the Co values, as discussed in Section 5 of Johnson et al., 1990. If the visibility for a given scene can be determined, either based on a median value for the targets, or based on an independent source, one should be able to back out the value of Co required for a given target to yield that visibility value. The first goal of a Co study would be determination of the differing Co values for a variety of targets. A second goal would be characterizing the variation in Co over time for each target. This variation of Co over time, for a given target, is caused by changes in Lo as well as changes in bLo. The changes in bLo are most strongly driven by changes in the scattering angle with respect to the sun; this should be well behaved. Changes in bLo due to aerosol load should be more difficult to handle. Likewise, changes in Lo due to the changing lighting distribution on the target, as weighted by the directional reflectance of the target, are complex. Whereas it should be possible to improve our current handling of input Co values, we do not expect to fully predict the Co behavior.

In order to enable a Co study, it is first important to understand the other sources of error in the visibility; this was part of the reason for making the current study. It may also be important to derive curves, similar to those shown in the following section, which show the sensitivity of the extracted Co value to various types of error.
In this way, the study can be designed to maximize the Co information returned, and minimize the impacts of measurement error on the Co determination.

3.0 SENSITIVITY TO MEASURED TARGET UNCERTAINTIES

In any measurement system, there are limits to measurement accuracy. In the HSI, these can be caused by a number of factors such as measurement noise or non-uniformity in the basic chip sensitivity. This section discusses the impact of uncertainties in the measured target relative radiance.

3.1 Computation of Sensitivity to Measured Target Uncertainties

The signal from the target is actually a relative radiance, in the sense that relative, but not absolute, radiances are determined over the scene. This is a result of the use of the auto-iris. The auto-iris causes a fixed percentage change over the whole scene. From the definition of apparent contrast, Eq. 2.3, it may be shown that these relative radiances yield an absolute measurement of apparent contrast.

The radiances are digitized on the image board by an A/D converter, to yield signals between 0 and 255. In this section we determine the error in computed visibility if there are known errors in this digitized signal. Signal noise is normally about 2-3 counts, whether the signal is near 0 or near 255. Many other types of signal errors we observe appear to be primarily additive, i.e. of a fixed value rather than a fixed percentage of the signal. Therefore, it was decided to determine the visibility error for signal errors of a given magnitude rather than a given percent. Errors of ±2 and ±4 counts, on the 0-255 scale, were selected.

An error in the measured target relative radiance will cause an error in the apparent contrast of the target with respect to the horizon sky. The resulting error in derived visibility is itself a function of the apparent contrast. In order to present the data in a manner which would be useful for further engineering studies, it was decided to plot the resulting error as a function of the measured apparent contrast, rather than the actual apparent contrast (because the HSI can only return the measured apparent contrast).

If there is an error given by the variable Err, the measured apparent contrast becomes

    Cr = -[(Lr + Err) - bLr] / bLr                                   (3.1)

(where the minus sign converts to the positive Cr values used in the plots). Suppose one obtains a measured contrast of Cr, which includes an error as given in Eq. 3.1. One can show that the actual apparent contrast Cr' is related to this measured apparent contrast by

    Cr' = (bLr Cr + Err) / bLr                                       (3.2)

Then we can compute the visibility the HSI would return by using Eq. 2.2 with an input value of Cr, and we can compute the correct visibility (associated with 0 error) by using Eq. 2.2 with an input value of Cr', as given by Eq. 3.2. Note that by using this approach, in conjunction with Eq. 3.2, one derives the visibility error as a function of the measured (erroneous) Cr value. This is somewhat more complex than computing the visibility error as a function of the actual apparent contrast (i.e. the value that would have been measured in the absence of error). It was decided to use the measured apparent contrast in the plots, since this is the parameter which is available on the HSI.

As an example, if Co is .8, and the measured horizon brightness is 100, as in Test 2a, and a measured Cr value of -.2 is obtained for a target at range 10 miles, this means that the measured target radiance was 80.
If this target radiance measurement included an error of +4, then the actual target radiance should have been 76. That is, the measurement was 4 counts too high. Then the actual Cr value should have been -.24. The actual visibility, obtained by using .24 in Eq. 2.2, is 23.0 miles, whereas the visibility returned by the HSI would be 20 miles, which is 13% low.

Tests 2a through 2d are plotted as a function of the measured Cr. Since Eq. 3.2 depends on the horizon brightness, plots have been created for horizon brightnesses of 200, which is the normal setting, and 100, which is normally the lower limit. The results are also dependent on Co, and have been computed with Co values of .8 and .5.

3.2 Evaluation of Sensitivity to Measured Target Uncertainties

The deviations in the Test 2 series are somewhat less, in general, than those shown in the Test 1 series. That is, the sensitivity to measured target uncertainty is in general less than the sensitivity to Co uncertainty. The error in visibility is most sensitive to measured target uncertainties at low Cr values, i.e. when the target range is close to the visibility. That is, whereas the 4.0 V/r cutoff is critical in minimizing errors due to Co uncertainties, it is the other end, the 1.15 V/r cutoff, which is critical in minimizing errors due to Lr uncertainties.

A comparison of Tests 2a and 2b shows immediately that there is considerable advantage, in terms of this type of error, in keeping the horizon radiance high. If the auto-iris is set so that the horizon radiance stays near 200, and targets with Co near .8 are chosen (Test 2b), the error resulting from a measurement error of ±2 is less than 5% over most of the span. The situation is not quite so good when targets with Co values near .5 are chosen, as shown in Tests 2c and 2d. Test 2c illustrates that the combination of low Co and low horizon radiance is the worst case. In this plot, the error resulting from a measurement error of ±2 is between 10 and 20%.

An obvious first step in utilizing these results is to ensure that the auto-iris is set so that the measured horizon radiances are reasonably high, i.e. near 200. It would be useful to determine the source and magnitude of typical measurement errors. Whereas the electronic noise can create an error of 2-3 counts in a specific pixel, the measured target radiances are normally the average of several pixels, and should not be subject to large random errors. We need to determine the typical error due to noise in the spatially averaged signal. If it is too large, additional temporal averaging could be used to minimize it. A slow rise in the dark current signal could also contribute to a measurement error. This may be more important than the noise, since spatial averaging does not decrease this error. Again, the data need to be evaluated to determine typical error magnitudes. If necessary, a light trap could be installed at one azimuthal look angle, to enable measurement and correction of the dark signal.

Another source of measurement error is the spatial non-uniformity in the sensor chip. This can be evaluated by extracting the signal for the various target positions from a calibration measurement taken with uniform lighting on the chip.
(This calibration measurement has already been acquired at a variety of light levels, as part of the linearity calibration.) If this non-uniformity is found to be large enough to cause a significant error (as indicated by Tests 2a-2d), the non-uniformity could easily be compensated for in the visibility algorithm.

[Figure Test 2a: Sensitivity of Derived Visibility to Measured Target Radiance Uncertainty, Co = .8, Lq = 100]
[Figure Test 2b: Sensitivity of Derived Visibility to Measured Target Radiance Uncertainty, Co = .8, Lq = 200]
[Figure Test 2c: Sensitivity of Derived Visibility to Measured Target Radiance Uncertainty, Co = .5, Lq = 100]
[Figure Test 2d: Sensitivity of Derived Visibility to Measured Target Radiance Uncertainty, Co = .5, Lq = 200]

As with the Co sensitivity study, the plots which have been generated show the sensitivity of visibility to a given error, in this case a measurement error. It remains to determine the magnitude of the measurement errors which exist, and then decide which, if any, of the above correction procedures seem warranted.
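As a worked illustration of the Section 3.1 procedure, the short sketch below (Python; illustrative only, not the HSI code) propagates a count error in the target signal through Eq. 3.2 and Eq. 2.2, reproducing the example given earlier in this section.

import math

def visibility(co, cr, r, eps=0.05):
    """Eq. 2.2 with the contrasts taken as positive magnitudes."""
    return r * math.log(co / eps) / math.log(co / cr)

def target_error_pct(co, cr_meas, blr, err, r=10.0):
    """Percent visibility error when the target signal reads high by `err` counts."""
    cr_true = cr_meas + err / blr    # Eq. 3.2 in the positive-contrast convention
    return 100.0 * (visibility(co, cr_meas, r) / visibility(co, cr_true, r) - 1.0)

# Section 3.1 example: Co = .8, horizon signal 100, measured Cr = .2, error of +4 counts.
print(target_error_pct(0.8, 0.2, 100.0, 4.0))    # about -13 percent: the HSI reads low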
4.0 SENSITIVITY TO MEASURED HORIZON BRIGHTNESS UNCERTAINTIES

Just as there may be errors in the determination of the target relative radiance, there may be errors in the determination of the horizon relative radiance. These errors fall into two quite distinct categories. There may be measurement errors, in which the returned signal is incorrect due to noise, dark current, or chip non-uniformity. And there may be errors in the visibility determination because the measured background sky is not in fact the same as the equilibrium radiance, due either to misplacement of the horizon region of interest, or to contamination of the region of interest by clouds or other obstructions to visibility. In either case, there is an error in the determination of what is assumed to be the equilibrium radiance. If this error has a given magnitude, in terms of signal change, the net effect on the derived visibility will be the same. Thus, although these plots are primarily intended to show the effect of measurement error, they can also be used to evaluate the impact of deviations from the ideal clear horizon. Test set 3a-3d evaluates the sensitivity of derived visibility to these measurement uncertainties.

4.1 Computation of Sensitivity to Measured Horizon Uncertainties

As with the earlier plots, we wish to plot the errors as a function of the measured (rather than actual) apparent contrast. If there is a measurement error in the horizon radiance, then the measured apparent contrast (allowing for the change in sign) becomes

    Cr = -[Lr - (bLr + Err)] / (bLr + Err)                           (4.1)

(similar to Eq. 3.1). Then it can also be shown that for a measured Cr value, as given above, the correct value of the apparent contrast, Cr', is given by

    Cr' = [bLr Cr + Err (Cr - 1)] / bLr                              (4.2)

Since this is similar in concept to the derivation of the error resulting from target measurement uncertainty, I will not show a numeric example. As with the previous test set, sample computations have been made for horizon radiances of 100 and 200, and Co values of .8 and .5.

4.2 Evaluation of Sensitivity to Measured Horizon Uncertainties

Plots 3a-3d are quite similar to plots 2a-2d. On each of the four plots, the largest error occurs when Cr values are near .05. That is, as with Plots 2a-2d, the worst effect of the measurement error occurs when the target range is close to the visibility. The V/r cutoff at 1.15 protects the system from the largest errors (shown in the grey region of each line).
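Although no numeric example is worked in the text for this case, the same style of sketch used for Section 3 (Python; illustrative only, with the horizon term taken as the nominal signal level) shows how a count error in the horizon signal propagates through Eq. 4.2 and Eq. 2.2.

import math

def visibility(co, cr, r, eps=0.05):
    return r * math.log(co / eps) / math.log(co / cr)

def horizon_error_pct(co, cr_meas, blr, err, r=10.0):
    """Percent visibility error when the horizon signal reads high by `err` counts."""
    cr_true = cr_meas + err * (cr_meas - 1.0) / blr    # Eq. 4.2, positive-contrast convention
    return 100.0 * (visibility(co, cr_meas, r) / visibility(co, cr_true, r) - 1.0)

print(horizon_error_pct(0.8, 0.2, 200.0, 4.0))   # about +6 percent with the horizon near 200
print(horizon_error_pct(0.5, 0.1, 100.0, 4.0))   # roughly +28 percent: low Co, low horizon, Cr near threshold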
[Figure Test 3a: Sensitivity of Derived Visibility to Measured Horizon Radiance Uncertainty, Co = .8, Lq = 100]
[Figure Test 3b: Sensitivity of Derived Visibility to Measured Horizon Radiance Uncertainty, Co = .8, Lq = 200]
[Figure Test 3c: Sensitivity of Derived Visibility to Measured Horizon Radiance Uncertainty, Co = .5, Lq = 100]
[Figure Test 3d: Sensitivity of Derived Visibility to Measured Horizon Radiance Uncertainty, Co = .5, Lq = 200]

A comparison of plots 3a and 3b (as well as 3c vs 3d) shows the significant advantage in keeping the horizon radiance values near 200 (through proper setting of the auto-iris). Similarly, a comparison of plots 3a vs 3c, and 3b vs 3d, shows the advantage of using dark targets, with inherent contrasts close to .8 (or close to 1, ideally).

As with the previous tests, follow-up action should include evaluation of the magnitude of typical errors. The horizon radiance can be impacted by the same measurement errors mentioned in Section 3.2, namely noise, dark offset, and chip non-uniformity. In addition, we need to evaluate the magnitude of the signal error due to misplacement of the horizon region of interest (ROI) or corruption of the scene by clouds. It might be reasonable to acquire two horizon ROI's, and evaluate the standard deviation in each ROI as well as the comparative brightness. In this way the HSI could automatically detect cases in which the horizon is impacted by clouds. Evaluation of the data to determine the merit of these various schemes for improving the equilibrium radiance determination seems appropriate.

5.0 SENSITIVITY TO NON-LINEARITY OF CAMERA RESPONSE

The CID sensor chips in the camera, in conjunction with the camera electronics, yield a reasonably linear camera response. That is, the output signal is in general proportional to the input brightness (or relative radiance in object space). However, the relation is not perfect, particularly near the high end of the camera sensitivity range. Therefore an evaluation of the sensitivity to this type of error seems appropriate.

5.1 Computation of Sensitivity to Measured Non-linearity

Computations of the error resulting from camera non-linearity are relatively straightforward. Linearity calibrations have been acquired for all cameras; they are reduced as discussed in Memo AV89-056t. The output of a linearity calculation is a look-up table.
For each measured signal value, this table lists a corrected signal which is linearly related to the original radiance generating the measured signal. If the horizon has a measured brightness given by bLr, and the measured contrast is Cr, it can be shown that the measured target radiance is given by

    Lr = bLr (1 - Cr)                                                (5.1)

(allowing for the change in sign in Cr). For a given Cr value, the visibility returned by the HSI is given by Eq. 2.2. The visibility which has been corrected for the non-linearity may be obtained by running the horizon radiance through the linearity look-up table, running the target radiance of Eq. 5.1 through the look-up table, and recomputing the visibility.

As an example, let Co equal .8, and the horizon radiance equal 100, as in Test 4a. When Cr = .2, the Lr value computed from Eq. 5.1 is 80. Application of the linearity table for the first system (linearity test LIN020) corrects the value of 80 to 82.2, and the value of 100 to 100.7. The corrected contrast, derived from Eq. 2.3, becomes .184. The corrected visibility, from Eq. 2.2, is then 18.8 miles when the target range is 10 miles. The visibility returned by the HSI can be directly computed from Eq. 2.2, using Cr = .2; this value is 20 miles, which is thus about 6% high.

As in the previous sections, these computations have been run for Co = .5 and .8, and horizon radiance = 100 and 200. In addition, computations have been made for horizon radiance = 220, for the following reasons. The camera sensitivity tends for most systems to be reasonably linear over most of the span, but tends to become increasingly non-linear as the upper limit of 255 is approached. Since the error can become somewhat critically dependent on the horizon radiance above 200, it was decided to run additional calculations for a horizon signal of 220. The resulting set of 6 plots is labeled Test 4a through Test 4f. In each plot, four curves are given. The first curve, from linearity LIN020, is for the camera which is currently at Otis. The third, LIN028, is for the camera currently at MPL. The other two are randomly chosen linearities (the ones that happened to be on my computer), and were acquired for cameras used in WSI units. They are included to give the reader a better feel for the range of values that might occur.

5.2 Evaluation of Sensitivity to Measured Non-linearity

The sensitivity curves for the linearity are not so well behaved as the earlier plots. Consider first Tests 4a through 4c, i.e. those with Co = .8. In Test 4b, one can see that for Lq values of 200, all four cameras yield fairly similar results; the error is about -5% to -10%, and essentially independent of the Cr value. When the horizon radiance is higher, at a level of 220 (Test 4c), one of the cameras, represented by LIN024, has a quite large error of about 25%. This occurs because this one camera is still reasonably linear at a signal of 200, but is somewhat non-linear at a signal of 220. At the Lq value of 100, shown in Test 4a, three of the systems are quite accurate, but one is about 10% high. The plots for Co = .5, Tests 4d through 4f, are similar in character, although the magnitudes of the errors become somewhat larger.

As noted above, several of the plots, such as Test 4b, show little variance in the percent error as Cr is varied. This implies that in a given scene, both near and far targets would have roughly the same percent error.
Thus the non-linearity of the camera causes, in this example, an overall bias in the visibility determination, rather than a lack of consistency between the individual target returns.

It is helpful to understand why the effect of the non-linearity can be so large. Consider the case of the horizon radiance set at 200, and an inherent contrast of .8. The V/r limits serve to limit the Cr range to .07 to .4. In terms of target radiance, this means that the signal can be used only if it is between 120 and 186. That is, only a somewhat narrow range of signals is being used in the visibility determination. The determination is essentially based on difference measurements; precision becomes quite critical. The numeric example in Section 5.1 is a good example of this: the linearity correction was only about 2.2 counts at a signal of 80 (a 3% correction), yet the impact was a 6% change in the determined visibility.

The non-linearity is an instrumentation characteristic which can be compensated for in the software. To the extent that we are able to characterize the system response accurately, and to the extent that that response does not change, we should be able to make reasonable corrections. The next section discusses further the extent to which this system response can be characterized.

5.3 Characterization and Stability of System Response; Implications

As noted above, the linearity correction can be quite critical. Applying a linearity correction is easy and quick, in terms of software development and processing time.

[Figure Test 4a: Sensitivity of Derived Visibility to Non-Linearities in Camera Response, Co = .8, Lq = 100 (curves 1-4 correspond to linearities LIN020, LIN024, LIN028, LIN032)]
[Figure Test 4b: Sensitivity of Derived Visibility to Non-Linearities in Camera Response, Co = .8, Lq = 200]
[Figure Test 4c: Sensitivity of Derived Visibility to Non-Linearities in Camera Response, Co = .8, Lq = 220]
[Figure Test 4d: Sensitivity of Derived Visibility to Non-Linearities in Camera Response, Co = .5, Lq = 100]
[Figure Test 4e: Sensitivity of Derived Visibility to Non-Linearities in Camera Response, Co = .5, Lq = 200]
[Figure Test 4f: Sensitivity of Derived Visibility to Non-Linearities in Camera Response, Co = .5, Lq = 220]

However, it is important that the application of the linearity correction actually represent an improvement. In order for this to be true, we must be able to measure the system response accurately, and the response must be reasonably stable. In order to evaluate the extent to which these two conditions of accuracy and stability hold, three more plots, labeled Test 5a through Test 5c, were generated. It turns out that one of the systems characterized in the Test 4 series was fielded for over a year, and then returned for post-calibration. This system, characterized by linearity LIN024, was chosen for study.

First, in order to characterize the measurement accuracy, a closer look at LIN024 was taken. During the linearity calibration, a duplicate set of measurements is acquired. First, the lamp is moved through a set of positions to characterize the change from the bright end to the dark end. Then a repeat set of data is acquired, moving this time from the dark end to the bright end. Suppose that at some point in time the repeat set of linearity data accurately characterizes the system response, but we use the initial linearity data set to make our correction. The resulting error gives a measure of our measurement uncertainty caused by the signal-to-noise ratio, in terms of its impact on the visibility determination. This is what is shown in Curve 1 of Plots 5a through 5c. The errors are quite small, nearly always less than 1%. Thus measurement error in the linearity determination is probably not enough to cause a problem. (Another test was run in which a camera was calibrated using two different calibration setup configurations, to characterize our measurement error caused by stray light or other setup errors. The impact of this difference on the visibility determination is less than 3%.)
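For reference, the sketch below (Python; the look-up table holds only the two entries quoted in Section 5.1 for LIN020 and is otherwise hypothetical) shows the form such a linearity correction takes before the visibility is recomputed. The stability comparisons of Test 5 amount to the same kind of computation, with the correction drawn from one calibration (LIN024) while the signals actually follow another (LIN042), or with no correction applied at all.

import math

def visibility(co, cr, r, eps=0.05):
    return r * math.log(co / eps) / math.log(co / cr)

# Hypothetical stand-in for the full look-up table produced by a linearity reduction;
# only the entries needed for the Section 5.1 example are filled in.
lut = {80: 82.2, 100: 100.7}

co, r = 0.8, 10.0
blr_meas, cr_meas = 100.0, 0.2
lr_meas = blr_meas * (1.0 - cr_meas)          # Eq. 5.1: target signal of 80
blr_corr = lut[int(blr_meas)]                 # 100 -> 100.7
lr_corr = lut[int(lr_meas)]                   # 80 -> 82.2
cr_corr = (blr_corr - lr_corr) / blr_corr     # about .184, per Eq. 2.3
print(visibility(co, cr_meas, r))             # 20 miles, as returned without the correction
print(visibility(co, cr_corr, r))             # about 18.8 miles after the correction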
It is also important to test the impact of the stability of the system response. The camera calibrated in LIN024 was taken into the field (in a WSI unit) and run 24 hours a day for over a year. On return, it was recalibrated (LIN042), and found to be somewhat truncated. The full dark value had come up from a signal of 1 to a signal of 15, and the full bright value had decreased from 226 to 210. Thus both ends had contracted by about 15 counts. This is not unexpected after a year in the field. Two curves have been computed from this linearity data. Curve 2 in Plots 5a through 5c, labeled 42 vs 24, shows the impact on the visibility if the camera actually has the response indicated by LIN042, but the user applies the linearity correction based on LIN024. Thus, if one had measured the camera response as shown in LIN024, and applied this consistently, but the camera drifted over time to the response indicated by LIN042, the resulting error in the computed visibility would be as shown in Curve 2. Curve 3 shows the error if the response is as indicated by LIN042, and no linearity correction is made. Thus these two curves show the impact of drift in the system response, with and without the use of the pre-deployment calibration.

The error shown in Curves 2 and 3 of Plots 5a through 5c is disconcertingly large. It is probably not atypical. Most of the cameras experience an increase in the dark level of 15 to 30 counts over a period of a year or more, and many experience high-end truncation. The HSI unit may or may not have had a similar change. It has not been calibrated in 2 years, but it may not change as much per year due to its much shorter duty cycle of only a few hours a day. The fact remains that this is a very significant source of error that deserves some attention.

[Figure Test 5a: Sensitivity of Derived Visibility to Precision and Stability of Non-Linearity, Co = .8, Lq = 100 (curve 1: precision, LIN024b vs LIN024; curve 2: stability, LIN042 vs LIN024; curve 3: stability, LIN042 vs linear response)]
[Figure Test 5b: Sensitivity of Derived Visibility to Precision and Stability of Non-Linearity, Co = .8, Lq = 200]
[Figure Test 5c: Sensitivity of Derived Visibility to Precision and Stability of Non-Linearity, Co = .8, Lq = 220]

Over the short run, an immediate answer to the problem of drift in the system response is to recalibrate the sensors more frequently. In particular, we need to check the response of the two cameras currently in use in HSI units. Over the long run, the use of some sort of in-field calibration device may be warranted. Placing a light trap in the field of view at the
home position should allow us to monitor the increase in the full dark level. The occasional use of an artificial source which would saturate the image (radiometrically) would allow monitoring of the high-end truncation. If feasible, the use of a field calibration device which could provide well-defined relative flux levels within an image might allow in-situ determination of the effective response curve, for in-field updating of the linearity curve. In short, the type of error indicated by Test 5 is one that should be reasonably avoidable, through proper calibration of the camera, but it is an error that is apparently important to avoid.

6.0 SENSITIVITY TO TARGET RANGE AND CONTRAST THRESHOLD

In the previous sections, plots have been shown which illustrate the system sensitivity to errors in the input Co value, errors in the measured target and horizon radiances, and errors due to system response non-linearity. There are two remaining input values: the target range, and the ε threshold.

6.1 Sensitivity to Target Range

In considering errors in target range, it is probably reasonable to consider the fractional error. For a target which is near, it is reasonably easy to pinpoint its location within a small radius. For a farther target, it is increasingly difficult to pinpoint its location; the expected uncertainty might be a given fraction of the target distance. The sensitivity of visibility to errors in target range of a given percent is quite easy to compute. From Eq. 2.2, it may be shown that an error of x% in range yields an error of the same percent in visibility. That is, if one inputs a range value which is 5% high, the resulting visibility will be 5% high. This is a simple enough relationship that no plots were made. It is safe to conclude that if the HSI is to be accurate to within a given percent, the target ranges must be accurate to that percent.

6.2 Sensitivity to Contrast Threshold

The situation with the visual threshold ε is somewhat different. It is my opinion that this number should not normally be varied. Visibility is defined in terms of a threshold contrast of .05. The human observer has a threshold contrast of approximately .05 if the target size and light adaptation level are maintained, along with other parameters such as glimpse time. The human threshold contrast is significantly changed if these conditions are changed. For example, if the human looks at a very small target (or a very large one), the threshold contrast will change, and the human will make an erroneous determination of visibility. If we choose to change the input ε value in the HSI, we can simulate the human error, and estimate the expected error that the human might make. In general, however, the HSI should try to return the correct visibility, not what the human might call with an inadequate target. We do this by maintaining the input ε value associated with the definition of visibility. Therefore, for this sensitivity study, it was not deemed appropriate to determine the sensitivity to changes in ε.

7.0 SUMMARY

Two summary plots, containing curves extracted from the earlier sections, have been created: one for Co = .8, Lq = 200, and one for Co = .5, Lq = 200.
In these plots, the four curves show: the impact of a Co change of .1; the impact of a measured target radiance change of 4 (on the 0 to 255 scale); the impact of a measured horizon radiance change of 4; and the impact of system non-linearity for the system at Otis.

[Figure Summary a: Summary for Co = .8, Lq = 200 (curves: 0 = no error; 1 = Co change of +.1; 2 = target radiance change of +4 counts; 3 = horizon radiance change of +4 counts; 4 = non-linearity change)]
[Figure Summary b: Summary for Co = .5, Lq = 200 (same curve labels as Summary a)]

All of these uncertainties can cause a certain amount of error. The sensitivity to Co uncertainty is very small when the target range is close to the visibility (near Cr = .05), and fairly large when the target is closer. Unfortunately, however, the measurement uncertainties cause the most error when the target range is close to the visibility. That is, when the Co impacts are least, the measurement error impacts are largest. There are some techniques for improving our Co estimates, but in the final analysis the Co changes may be the most difficult to handle. The measurement uncertainties may in many cases be mitigated by a combination of improved measurement techniques and improved data reduction techniques. As the magnitude and/or impact of the measurement uncertainties is reduced, it should be possible to choose targets closer to the visual threshold, which should help mitigate the impact of the Co uncertainties.

Another way to look at the system is to consider that the ability to determine visibility from targets ranging near the visibility depends on the ability to accurately determine the difference between signals which are quite close. This requires precision, stability, and accuracy in the measurements acquired by the system. At the other extreme, the ability to determine visibility from targets which are at close range (and which therefore have apparent contrast somewhat close to the inherent contrast) depends on our ability to accurately characterize the Co values and their fluctuations.

In the short run, there are obvious ways in which improvements can be made in terms of measurements. These include keeping the horizon radiance near 200, and measuring and applying the non-linearity correction. Other potential improvements, such as correcting for chip non-uniformity to improve measurement accuracy, may or may not be warranted.
In the short run, there are obvious ways in which improvements can be made in terms of measurements. These include keeping the horizon radiance near 200, and measuring and applying the non-linearity correction. Other potential improvements, such as correcting for chip non-uniformity to improve measurement accuracy, may or may not be warranted. There are a variety of tests to help us resolve questions such as this one. A significant improvement in the current accuracy should be readily realized as these changes are enacted.

In the long run, as solid state sensors improve in stability and noise handling, we should expect significant improvements in the measurement capabilities. These in turn should allow us to make better use of the measurement regimes in which the sensitivity to Co changes becomes small.

8.0 RECOMMENDATIONS

As an outcome of this study, there are a number of changes and/or tests which should be considered. These fall roughly into two categories. The first is changes having to do with measurement accuracy, including both improvements to the measurement accuracy and changes to mitigate the impact of measurement inaccuracy. The second is changes having to do with the handling of non-ideal measurement conditions, such as non-ideal targets or non-ideal horizon skies. These two categories are discussed below.

8.1 Improvements Relating to Measurement Accuracy

The first and most obvious change I would recommend is to ensure that the horizon radiance is near 200 counts. While this does not in itself increase the measurement accuracy, it significantly mitigates the effects of measurement inaccuracy. With the MPL unit, the horizon signal is currently near 100; we need to verify that the camera response has not become truncated due to internal problems, and then adjust the auto-iris to yield a horizon brightness near 200. With the Otis unit, we similarly need to determine the current signals which occur for the horizon, and optimize them as necessary.

We need to acquire a new linearity calibration for the MPL unit, to determine how much change, if any, has occurred since the original calibration. It will be somewhat more difficult to acquire a linearity calibration for the Otis unit, but this is certainly something to do when possible. I would propose that we incorporate application of the linearity correction into a test version of the software. We should also devise a technique for testing the sensor response in the field. Installation of a light trap, to enable testing of the dark end of the responsivity curve, could help us begin testing the efficacy of this sort of in-situ procedure.

Next, there are several tests that involve investigation of the magnitude of existing measurement errors. As discussed earlier, both the horizon and target radiances are impacted by noise, changes in full dark, and chip non-uniformity. Any change in full dark may be treated as part of the sensor responsivity change documented by the linearity calibration, and need not be treated separately for now. The magnitude of system noise averaged over the horizon and target ROI's may be evaluated by grabbing several images of the same scene in close temporal succession, and comparing the resulting signals averaged over the ROI's; a sketch of this check is given below. The errors due to chip non-uniformity may be evaluated using the linearity calibration data. If one wants to know the magnitude of non-uniformity for a given target location, one extracts the signal for that target ROI and for the horizon ROI, but using a calibration image of a uniform source. We can then decide whether either system noise or chip non-uniformity causes errors large enough to require compensation.
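As a concrete illustration of the repeated-frame noise check just described, the sketch below averages each ROI over a short burst of frames and reports the frame-to-frame scatter of that average. The ROI coordinates, frame dimensions, and the synthetic stand-in frames are hypothetical; in practice the frames would come from the HSI frame grabber.

    import numpy as np

    def roi_mean(frame, roi):
        # Average signal (0-255 counts) over a rectangular region of interest,
        # given as (row_start, row_stop, col_start, col_stop).
        r0, r1, c0, c1 = roi
        return frame[r0:r1, c0:c1].mean()

    def roi_noise(frames, roi):
        # Mean and frame-to-frame standard deviation of the ROI-averaged signal
        # for a burst of images of the same scene grabbed in close succession.
        means = [roi_mean(f, roi) for f in frames]
        return float(np.mean(means)), float(np.std(means))

    # Hypothetical example: ten frames of an unchanging scene (synthetic stand-ins),
    # with illustrative horizon and target ROI's.
    frames = [np.random.normal(200.0, 2.0, (480, 512)) for _ in range(10)]
    rois = {"horizon": (100, 120, 200, 260), "target": (150, 170, 200, 260)}

    for name, roi in rois.items():
        mean, std = roi_noise(frames, roi)
        print(f"{name} ROI: mean = {mean:.1f} counts, frame-to-frame std = {std:.2f} counts")

The same two-ROI extraction, applied instead to a calibration image of a uniform source, gives the chip non-uniformity comparison described above.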
Since the impact of measurement error is worst as V/r approaches the lower limit of 1.15, and the impact of non-ideal Co is worst as V/r approaches 4.0, there is benefit in avoiding these limits. Since the optimum range is a function of the visibility, it is important to have enough targets so that there are always several within the optimum range, for all possible visibility values. Within the limits imposed by the availability of real-world targets, an effort should be made to select many targets over a range of target distances. (For those familiar with transmissometers, it may be helpful to note that whereas a transmissometer is reasonably accurate over a range of visibility values determined by its base length, the HSI is reasonably accurate over a range of visibility values determined by the target ranges. By using very near targets under low visibility conditions, and far targets under high visibility conditions, we achieve an effect similar to adjusting the transmissometer base length.) Note also that it is very important that the range to these targets be determined accurately, since the error in visibility due to an error in range is directly proportional to the range error. This may involve driving out to the sites and visually identifying the targets being used.

8.2 Improvements Relating to Non-ideal Conditions

There are several things which can be done to improve the system with regard to non-ideal conditions. First, consider the impact of horizon sky problems. If the measured horizon radiance is not equal to the equilibrium radiance, there is a corresponding error in the visibility. Under clear sky conditions, the near-horizon radiance is expected to decrease away from the equilibrium radiance value as the elevation angle is increased. This occurs due to the decreasing turbidity of the path of sight.

It would be instructive to extract the change in horizon radiance over the range of elevation angles in the HSI field of view. This can be done with existing HSI imagery. It would allow us to determine the range of elevation angles which provides a sufficiently accurate determination of the equilibrium radiance (in the absence of clouds).

Similarly, we should investigate the incidence of clouds on the horizon which are bright enough to cause error. We can probably improve our handling of this possibility by using two horizon ROI's, and checking the signal standard deviation in each, as well as the difference in the average signals. If one ROI has a high STD and/or an elevated signal, use of the other ROI might avoid the cloud. If both ROI's have high STD's, there is probably little that can be done currently other than alerting the user, or potentially not using the visibility for that scene. A test program to investigate these possibilities should be implemented.

Over the longer run, the case of cloud clutter at the horizon might be addressed by the next-generation larger field-of-view WSI. If the WSI is used to determine where the clouds are, it should be feasible to utilize horizon ROI's which are in the clear areas indicated by the WSI. Similarly, introduction of a red/blue filter changer to the HSI could allow determination of the clear horizon regions for use in visibility determination.

Finally comes the really difficult problem, changes in Co. Once we have improved the accuracy of the system in the ways discussed above, we will probably want to tackle determination of Co. This can be done by using the median visibility determined from the available targets, or an independently determined visibility, and then determining what Co value for each target will yield that visibility; a sketch of the inversion is given below.
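A minimal sketch of this inversion, under the same assumed form of Eq. 2.2 as in the earlier sketches: solving V = r ln(ε)/ln(Cr/Co) for Co gives Co = Cr · ε^(-r/V). The adopted visibility, target ranges, and apparent contrasts below are illustrative only.

    import math

    EPS = 0.05  # contrast threshold defining visibility

    def back_out_c0(cr, r_km, v_km):
        # Invert the assumed Eq. 2.2, V = r*ln(eps)/ln(Cr/C0), to recover the
        # inherent contrast C0 that would yield the adopted visibility.
        return cr * EPS ** (-r_km / v_km)

    v_adopted = 12.0  # median or independently determined visibility, km

    # Illustrative targets: (range in km, measured apparent contrast)
    targets = [(2.0, 0.46), (4.0, 0.29), (6.0, 0.18)]

    for r, cr in targets:
        print(f"r = {r:.0f} km: implied Co = {back_out_c0(cr, r, v_adopted):.2f}")

Because ε^(-r/V) grows as r approaches V, fixed errors in the measured contrast or in the adopted visibility translate into larger Co errors for distant targets, which is one reason close targets are preferred for backing out Co.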
It would be very helpful to run curves, similar to the ones in this technical note, which determine the sensitivity of the Co determination to known errors, so that we can adjust our technique appropriately. (For example, we know intuitively that the targets must be at close range, relative to the visibility, in order for the visibility to be sensitive to Co, which means the targets must be close in order to back out the Co value. But does measurement error then cause undue problems?)

If we are successful with Co extraction, a study of the time variation in Co would be helpful. The equilibrium radiance is expected to change as a function of the scattering angle with respect to the sun. This gives us a theoretical change in Co due to the change in the horizon brightness. If this change is dominant over changes due to target brightness, the diurnal changes in Co might be reasonably predictable.

9.0 CONCLUSION

This note has presented plots illustrating the sensitivity of the visibility derived from the HSI measurements to a variety of sources of uncertainty. In general, the errors depend on the magnitude of the apparent contrast, which in turn depends on the relative values of visibility and target range. The sensitivity to measurement error is greatest when the target range is close to the visibility; the sensitivity to inherent contrast, on the other hand, is greatest when the target range is much less than the visibility. This study has identified some obvious ways to improve the system, as well as some areas requiring investigation of the data. Over the long term, retrofitting the HSI with the new generation of cooled solid-state sensors provides potential for further improvement. With less noise, higher radiance resolution, and better stability, these cameras should allow more precise measurements and a corresponding improvement in the visibility determinations from the HSI.

10.0 ACKNOWLEDGEMENTS

The authors would like to recognize the outstanding work of Carole Robb, publications support at the Marine Physical Laboratory, in preparation of the plots and text.
11.0 REFERENCES

Douglas, C. A., and R. L. Booker (1977), Visual Range: Concepts, Instrumental Determination, and Aviation Applications, U. S. Department of Transportation, Federal Aviation Administration, Systems Research and Development Service, Report No. FAA-RD-77-8, Washington, D. C. 20590.

Duntley, S. Q., A. R. Boileau, and R. W. Preisendorfer (1957), Image Transmission by the Troposphere I, University of California, San Diego, Scripps Institution of Oceanography, JOSA 47, 499-506.

Johnson, R. W., W. S. Hering, and J. E. Shields (1989), Automated Visibility and Cloud Cover Measurements with a Solid-State Imaging System, University of California, San Diego, Scripps Institution of Oceanography, Marine Physical Laboratory, SIO Ref. 89-7, GL-TR-89-0061.

Johnson, R. W., M. E. Karr, and J. R. Varah (1990), Automated Visibility Measurements with a Horizon Scanning Imager, University of California, San Diego, Scripps Institution of Oceanography, Marine Physical Laboratory, to be published.

Shields, J. E. (1989), Software Documentation: Linearity Processing, University of California, San Diego, Scripps Institution of Oceanography, Marine Physical Laboratory, Technical Note AV89-056t.