title: Toxicity Evaluation and Human Health Risk Assessment of Surface and Ground Water Contaminated by Recycled Hazardous Waste Materials
authors: Rodriguez-Proteau, Rosita; Grant, Roberta L.
date: 2005-07-07
journal: Water Pollution
DOI: 10.1007/b11434

Prior to the 1970s, principles involving the fate and transport of hazardous chemicals from either hazardous waste spills or landfills into ground water and/or surface water were not fully understood. In addition, national guidance on proper waste disposal techniques was not well developed. As a result, there were many instances where hazardous waste was not disposed of properly, such as the Love Canal environmental pollution incident. This incident led to the passage of the Resource Conservation and Recovery Act (RCRA) of 1976. This act gave the United States Environmental Protection Agency regulatory control of all stages of the hazardous waste management cycle. Presently, numerous federal agencies provide guidance on methods and approaches used to evaluate potential health effects and assess risks from contaminated source media, i.e., soil, air, and water. These agencies also establish standards of exposure or health benchmark values in the different media, which are not expected to produce environmental or human health impacts. The risk assessment methodology used by various regulatory agencies consists of the following steps: i) hazard identification; ii) dose-response (quantitative) assessment; iii) exposure assessment; iv) risk characterization. The overall objectives of risk assessment are to balance risks and benefits; to set target levels; to set priorities for program activities at regulatory agencies, industrial or commercial facilities, or environmental and consumer organizations; and to estimate residual risks and the extent of risk reduction. The chapter will provide information on the concepts used in estimating risk and hazard due to exposure to ground and surface waters contaminated from the recycling of hazardous waste and/or hazardous waste materials for each of the steps in the risk assessment process. Moreover, this chapter will provide examples of contaminated water exposure pathway calculations as well as information on current guidelines, databases, and resources such as current drinking water standards, health advisories, and ambient water quality criteria. Finally, specific examples of contaminants released from recycled hazardous waste materials and case studies evaluating the human health effects due to contamination of ground and surface waters from recycled hazardous waste materials will be provided and discussed.

After World War II, industries began to produce a whole new generation of industrial and consumer goods made of synthetic organic chemicals such as plastics, solvents, detergents, and pesticides. Industries profited enormously from the production and marketing of these products, and consumers became accustomed to the convenience of synthetic products as well as cheap, convenient, throwaway packaging materials. As the industrial production of these products increased, so did the production, accumulation, and disposal of hazardous waste. Prior to 1976, facilities that handled and/or disposed of hazardous waste were not provided with detailed regulations and/or guidance on proper waste handling/disposal techniques and, as a result, there were many instances where hazardous waste was improperly disposed.
When chemicals are improperly disposed of in the environment, abandoned hazardous waste sites are created that potentially affect human health and cost our society billions of dollars due to the high cost of not only evaluating human health and environmental impacts but also performing site clean-ups. An example of one of the most well-known incidents of improper disposal of hazardous waste was the Love Canal environmental pollution incident [1]. This incident led to the passage of the Resource Conservation and Recovery Act (RCRA) of 1976. This act gave the United States Environmental Protection Agency (USEPA) regulatory control of all stages of the hazardous waste management cycle from "cradle to grave." Beginning in the 1970s, Congress passed several other acts designed to protect human health and the environment (Table 1). Based on the legislative directives in these acts, the USEPA has issued numerous rules, regulations, and guidance documents that ensure that the use, disposal, processing, and handling of hazardous waste do not result in impacts to human health or the environment. State governments are authorized to implement these rules/regulations promulgated by the USEPA, to permit facilities that handle hazardous waste in their states, and to create additional state rules and state regulations that apply to the operations of facilities in their specific state. The emphasis in recent years is to prevent pollution by recycling hazardous waste followed by proper disposal practices. RCRA defines recyclable materials as "hazardous waste that are reclaimed to recover a usable product." Recycling is a broad term that applies to those who use, reuse, or reclaim waste as an ingredient to make a product or as an effective substitute for a commercial product. A material is reclaimed if it is processed to recover a useful by-product or forms the starting material for a new product.

The systematic scientific approach of evaluating potential adverse health effects resulting from human exposure to hazardous agents or situations occurs through the following steps: i) hazard identification; ii) dose-response (quantitative) assessment; iii) exposure assessment; iv) risk characterization [4]. The overall objectives of risk assessment are to balance risks and benefits, to set target levels, to set priorities for program activities at regulatory agencies, industrial or commercial facilities, or environmental and consumer organizations, and to estimate residual risks and the extent of risk reduction [5]. Diversity of risk assessment methodology helps ensure that all possible risk models and outcomes have been considered and minimizes the potential for error [4]. This section will provide information on the concepts used in estimating risk and hazard due to exposure to ground water and surface water contaminated from the recycling of hazardous waste and/or hazardous waste materials for each of the aforementioned steps.

The first step in the risk assessment process is an evaluation of all human and animal data to determine what health effects occur after exposure to a chemical. Well-conducted human studies are preferred, but occupational or accidental exposures to chemicals also provide useful information. However, in most cases, the results from animal studies are used as models to predict effects in humans since animal studies allow for controlled dose-response investigations and detailed, thorough toxicological analysis.
Some toxicants produce health effects immediately following exposure, such as air pollutants that can produce eye irritation in individuals after a few minutes of exposure. Other effects, such as organ damage due to metals and solvents, may not become manifest for months or years after first exposure. The time from the first exposure to the observation of a health effect is called the latent period. The length of this period is dependent on various factors such as the type of pathology induced by the compound/chemical of potential concern (COPC), dose, and dose rate, as well as host characteristics such as age at first exposure, gender, race, species, and strain. Other host factors that influence susceptibility to environmental exposures include genetic traits; preexisting diseases; behavioral traits such as smoking; coexisting exposures; and medication and vitamin supplementation [5]. Genetic studies include investigations of the effects of chemicals on genes and chromosomes (genetic toxicology); ecogenetics, a relatively new field, describes a host's genetic variation in predisposition and resistance to COPC exposure [5]. Ecogenetics involves studies of specific exposures ranging from pharmaceuticals (known as pharmacogenetics) and pesticides to inhaled pollutants, foods, food additives, and allergic and sensitizing agents [6]. Moreover, induction of a health effect at the molecular level may occur after a single exposure, after repeated exposures, or after long-term continuous exposure. The length of the induction period may be a function of the same variables as the latent period. Effective exposure time refers to the exposure time that occurred up to the point of induction [4]. Ineffective exposure is readily observed in dose-response curves as a saturation of response in the high-dose range. An experimental study must follow the subjects beyond the length of the minimum latent period to observe all effects and cases associated with exposure. Under ideal circumstances, a study will follow subjects for their lifetime. Lifetime follow-up is common for animal studies but uncommon for epidemiology studies [4].

Qualitative assessment of hazard information should include a consideration of the consistency and concordance of the findings. Such assessments should include a determination of the consistency of the toxicological findings across species and target organs, an evaluation of consistency across duplicate experimental conditions, and the adequacy of the experiments to detect the adverse endpoints of interest [5]. For consideration of whether a COPC is a carcinogen, qualitative assessment of animal or human evidence is done by many agencies, including the USEPA and the International Agency for Research on Cancer (IARC). Similar evidence classifications are used for both animal and human evidence categories by both agencies. These evidence classifications are used for overall weight-of-evidence (WOE) carcinogenicity classification schemes, for which the alphanumeric classification levels recommended by USEPA [7] are shown in Table 2.
USEPA's WOE carcinogenicity classification schemes were first recommended in the Guidelines for Carcinogen Risk Assessment (USEPA, 1986, hereafter "1986 cancer guidelines") [7]. However, the Guidelines for Carcinogen Risk Assessment, Review Draft (USEPA, 1999, hereafter "1999 draft cancer guidelines") [8] recommend a WOE narrative describing a summary of the key evidence for carcinogenicity. The 1999 draft cancer guidelines will serve as interim guidance until USEPA issues final cancer guidelines [43].

Table 2  USEPA's carcinogenicity classification scheme (alphanumeric codes) [7]; for example, the code for evidence of noncarcinogenicity for humans requires no evidence of carcinogenicity in adequate studies in at least two species or in both epidemiological and animal studies.

Table 3  Weight-of-evidence classification scheme for qualitative assessment of chemical mixtures, from Mumtaz and Durkin [9]
Mechanistic understanding: I, II, and III
  I. Direct and unambiguous mechanistic data
  II. Mechanistic data on related compounds
  III. Inadequate or ambiguous mechanistic data
Toxicologic significance: A, B, and C
  A. Direct evidence of toxicologic significance of interaction
  B. Probable evidence of toxicologic significance based on related compounds
  C. Unclear evidence of toxicologic significance
Exposure modifiers: 1 and 2
  1. Anticipated exposure duration and sequence
  2. Different exposure duration or sequence
    2.a. In vivo data
    2.b. In vitro data
      2.b.i. Anticipated route of exposure
      2.b.ii. Different route of exposure
Mixture is additive (=), greater than additive (>), or less than additive (<).

For evaluating chemical mixtures of noncarcinogens, Mumtaz and Durkin [9] suggest that the interaction data (i.e., independent joint action, similar joint action, and synergistic action) and the qualitative and quantitative interaction matrix be taken into consideration when determining the hazard index. A qualitative WOE scheme for evaluating chemical mixtures is shown in Table 3. The WOE takes into consideration the COPC, the data, the reference doses/concentrations, and the hazard index based on additivity [10]. Figure 1 illustrates each chemical mixture's WOE determination by a symbol indicating the direction of the interaction followed by the alphanumeric expression in Table 3. The first two components are the major factors for ranking the quality of the mechanistic data to support the risk assessment.

Because toxicity studies must be evaluated to determine the quantitative dose-response relationship between the magnitude of exposure and the extent and severity of the adverse effect, a brief description of various toxicity tests will be provided. Different methodologies are used to characterize dose-response relationships, depending on whether or not the chemical has been identified as a carcinogen or noncarcinogen. Carcinogens are assumed to pose some risk at any exposure level [4]. Four classes of toxicant-induced health effects include: i) cancer: genotoxic and nongenotoxic mechanisms; ii) hereditary effects: genotoxic mechanisms; iii) developmental effects: genotoxic or nongenotoxic mechanisms; iv) organ/tissue effects: nongenotoxic mechanisms [4]. The evaluation of chemicals for acute toxicity is necessary for the protection of public health and the environment. Acute toxicity testing is generally performed by the probable route of exposure in order to provide information on health hazards likely to arise from short-term exposure by that route (Table 4) [11].
As shown in Table 4, there are four categories ranging from I to IV based on increasing doses. Generally, acute studies evaluate oral, dermal, inhalation, and eye and skin irritation as well as dermal sensitization. The acute inhalation studies are performed from one to seven days, while the intermediate studies are performed from seven days to several months [12]. An evaluation of acute toxicity data includes the relationship of exposure to the COPC to the incidence and severity of all abnormalities, gross lesions, body weight changes, effects on mortality, and any other toxic effects. An acute exposure is considered to be a one-time or short-term exposure with a duration of less than or equal to 24 h. Acute toxicity testing is conducted up to 7 days of exposure and subacute testing for 7-30 days. Testing periods for the evaluation of developmental effects are less than 15 days since developmental toxicity can occur after short periods of exposure. Subchronic testing is typically conducted for 90 days to 1 year since subchronic exposures are considered to be multiple or continuous exposures occurring for approximately 10% of an experimental species' lifetime. Chronic exposures are assumed to be multiple exposures occurring over an extended period of time, or a significant fraction of the animal's or the individual's lifetime. To minimize the number of animals used and to take full account of their welfare, USEPA recommends the use of data from structurally related substances or mixtures [11]. Review of existing toxicity information on chemical substances that are structurally related to the COPC may provide enough information to make preliminary hazard evaluations that may reduce the need for testing. For example, if a chemical can be predicted to have corrosive potential based on structure-activity relationships (SARs), dermal or eye irritation testing does not need to be performed in order to classify it as a corrosive agent.

All the human carcinogens that have been identified have produced positive results in at least one animal model. In the absence of adequate human data, it is plausible to regard agents and/or mixtures for which sufficient evidence of carcinogenicity in animals exists as posing a possible carcinogenic risk to humans [5]. Therefore, chemicals that cause tumors in animals are presumed to cause tumors in humans. In general, the most appropriate rodent bioassays are those that test the exposure pathways most relevant to human exposure pathways, i.e., inhalation, oral, dermal, etc. Because it is feasible to combine bioassays, it is desirable to tie these bioassays to mechanistic studies, biomarker studies, and genetic studies to understand the mechanism(s) of toxicity and/or carcinogenicity [13]. A typical experimental design includes two different species, both genders, and at least 50 subjects per experimental group, using near-lifetime exposures. For dose-response purposes, a minimum of three dose levels should be used. The highest dose, typically the maximum tolerated dose (MTD), is based on the findings from a 90-day study to ensure that the test dose is adequate for the assessment of chronic toxicity and carcinogenic potential. The lowest dose level should produce no evidence of toxicity. In the oral studies, the animals are dosed with the COPC on a 7-day per week basis for a period of at least 18 months for mice and hamsters and 24 months for rats [14].
For dermal studies, animals are treated with the COPC for at least 6 h per day on a 7-day per week basis for the duration of the study. A minimum of 24 h should be allowed for the skin to recover before the next dosing. The COPC is applied uniformly over a shaved area that is approximately 10% of the total body surface area [14]. The animals are evaluated for an increase in the number of tumors, the size of tumors, and the number of rare tumors seen and/or expressed. Even without toxicity, a high dose may trigger events different from those triggered by low-dose exposures. Also, these bioassays can be evaluated for uncontrolled effects by comparing weight vs time and mortality vs time curves [4]. If there is a divergence between the control group and the experimental group in the weight vs time curve, this indicates that there is a disruption of normal homeostasis due to high-level dosing. If there is a divergence in the mortality vs time curves, this indicates that there is an uncontrollable effect [4]. The National Toxicology Program (NTP) criterion for classifying a chemical as a carcinogen is that it must be tumorigenic in at least one site in one sex of F344 rats or B6C3F1 mice.

Validation and application of short-term tests (STT) are important in risk assessment because these assays can be designed to provide information about mechanisms of effects. Short-term toxicity experiments include in vitro or short-term in vivo tests ranging from bacterial mutation assays to more elaborate in vivo short-term tests such as skin-painting studies in mice and altered rat liver foci assays. These studies determine if COPCs are mutagenic, indicating they have the potential to be carcinogens as well. In general, STT are fast and inexpensive compared with the lifetime rodent cancer bioassays [5]. Positive results of STT have been used to predict potential carcinogenicity. Common STT include the following: the Ames Salmonella/microsome mutagenesis assay (SAL); assays for chromosome aberration (ABS); sister chromatid exchange induction (SCE) in Chinese hamster ovary cells; and the mouse lymphoma L5178Y cell mutagenesis assay (MOLY). There are several limitations to STT: STT cannot replace long-term rodent studies for the identification of carcinogens; the available tests do not detect all classes of COPCs that are active in the carcinogenic process, such as hormones; and negative results from STT cannot rule out carcinogenicity [4].

The most convincing evidence for human risk is a well-conducted epidemiological study in which an association between exposure to a COPC and a disease has been observed. These studies compare COPC-exposed individuals vs non-COPC-exposed individuals [5]. The major types of epidemiology studies are cross-sectional studies, cohort studies, and case-control studies. Cross-sectional studies survey groups of humans to identify risk factors and disease. These studies are not very useful for establishing a cause-and-effect relationship. Cohort studies evaluate individuals on the basis of their exposure to the COPC under investigation. These individuals are monitored for development of disease. Prospective studies monitor individuals who initially are disease-free to determine if they develop the disease over time. In case-control studies, subjects are selected on the basis of disease status and are matched accordingly. The exposure histories of the two groups are compared to determine key consistent features. Thus, all case-control studies are retrospective studies [5].
Epidemiological findings are evaluated by the strength of association, consistency of observations, specificity, appropriateness of temporal relationship, dose responsiveness, biological plausibility and coherence, verification, and biological analogy [5]. A disadvantage of epidemiological studies is that an accurate measure of the concentration or dose that the COPC-exposed individuals receive is not available, so estimates must be employed to quantify the relationship between exposure and adverse effects. Moreover, the control group is a major determinant of whether or not a statistically significant adverse effect can be detected. The various types of control groups are: regional general population; general population of a state; local general population; and workers in the same or a similar industry who are exposed to lower or zero levels of the toxicant under study [4].

Dose-response assessment is the fundamental basis of the quantitative relationship between exposure to an agent and the incidence of an adverse response. The procedures used to define the dose-response relationship for carcinogens and noncarcinogens differ. For carcinogens, a non-threshold (zero-threshold) dose-response relationship is used when there are known or assumed risks of an adverse response at any dose above zero. Non-threshold toxicants include hereditary disease toxicants, genotoxic carcinogens, and genotoxic developmental toxicants. For noncarcinogens, a threshold (nonzero-threshold) relationship is used to evaluate toxicants that are known or assumed to produce no adverse effects below a certain dose or dose rate. Threshold toxicants include nongenotoxic carcinogens, nongenotoxic developmental toxicants, and organ/tissue toxicants [4]. The two different approaches will be discussed separately in this section.

The toxicity factors used to evaluate oral exposure and inhalation exposure are expressed in different units to account for the unique differences between these two routes of exposure. Cancer slope factors (CSFs), in units of (mg/kg/day)^-1, and reference doses (RfDs), in units of mg/kg/day, are used to quantify the relationship between dose and effect for oral exposure, whereas unit risk factors (URFs), in units of (µg/m^3)^-1, and reference concentrations (RfCs), in units of mg/m^3, are used to describe the relationship between ambient air concentration and effect for inhalation exposure. The URF and RfC methodology accounts for the species-specific relationships of exposure concentration to deposited/delivered doses to the respiratory tract by employing animal-to-human dosimetric adjustments that are different from those employed for oral exposure. The interaction with the respiratory tract and the ultimate disposition are considered, as well as the physicochemical characteristics of the inhaled agent and whether the exposure is to particles or gases. Most important is the type of toxicity observed, since direct effects on the respiratory tract (i.e., portal-of-entry effects) must be considered as opposed to toxicity remote to the portal of entry [15]. Based on the differences between oral and inhalation exposure, route-to-route extrapolation of oral toxicity values to inhalation toxicity values may not be appropriate. Please refer to Appendix B of the Soil Screening Guidance [16] for a discussion of issues relating to route-to-route extrapolation. Carcinogenic assessment assumes that exposure to any amount of a carcinogenic substance increases carcinogenic risk.
Thus, zero risk does not exist (a non-threshold response) because there is no carcinogen exposure concentration low enough that it will not increase the risk of cancer. A genotoxic carcinogen alters the information coded in DNA; thus, it is reasonable to assume that these agents do not have a threshold, so that a risk of cancer exists no matter how low the dose. There are three stages of genotoxic carcinogenesis: initiation, promotion, and progression. Initiation refers to the induction of an irreversible change in DNA caused by a mutagen. The initiator may be a direct-activating carcinogen or a carcinogenic metabolite. Promotion refers to the possibly reversible replication of initiated cells to form a "benign" lesion. Promoters are not genotoxic or carcinogenic, but they enhance the tumorigenic response initiated by a primary or secondary carcinogen when administered at a later time. Complete carcinogens have both initiation and promotion properties [4]. Nongenotoxic carcinogenesis does not involve direct interaction of a carcinogen with DNA. Mechanisms of nongenotoxic carcinogenesis include accelerated replication that may increase the frequency of spontaneous mutations or increase the susceptibility to DNA damage. Cancer may be secondary to organ toxicity and may occur only at high dose rates. Moreover, many nongenotoxic cancer mechanisms are species-specific, and the results from certain rodent species may not apply to humans [4].

Several approaches and models are used to provide estimates of the upper limit on lifetime cancer risks per unit of dose or unit of ambient air concentration, i.e., the CSF or the URF, respectively. The upper-bound excess cancer risk estimates may be calculated using models such as the one-hit, Weibull, logit, log-probit, or multistage models [5, 17]. The linearized multistage model is considered to be one of the more conservative models and is typically used because the mechanism of cancer is not well understood and one model may not be more predictive than another [7, 17]. Because the risk assessor generally needs to extrapolate beyond the region of the dose-response curve for which experimentally observed data are available, models derived from mechanistic assumptions involve the use of a mathematical equation to describe dose-response relationships that are consistent with biological mechanisms of response [5]. "Hit models" for cancer modeling assume that i) an infinite number of targets exist, ii) after a minimum number of targets have been modified, the host will elicit a toxic response, iii) a critical target is altered if a sufficient number of hits occurs, and iv) the probability of a hit in the low-dose range is proportional to the dose of the COPC [18]. The one-hit linear model is the simplest mechanistic model, in which only one hit or critical cellular interaction is required for cell function to be altered. Multi-hit models describe hypothesized single-target multi-hit events as well as multi-target events in carcinogenesis. Biologically based dose-response (BBDR) modeling reflects specific biological processes [5].
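As a minimal illustration of how the simplest of these models, the one-hit model, is used for low-dose extrapolation, the Python sketch below calibrates the model's single parameter to a hypothetical high-dose bioassay observation and then extrapolates linearly to an environmental dose. The dose and response values are invented for illustration, and the sketch is not a substitute for USEPA's linearized multistage procedure.

```python
import math

def one_hit_probability(dose, q1):
    """One-hit model: P(d) = 1 - exp(-q1 * d); at low doses P(d) ~ q1 * d."""
    return 1.0 - math.exp(-q1 * dose)

# Illustrative bioassay observation (not from the chapter): a dose of
# 50 mg/kg/day produced a 20% excess tumor incidence.
observed_dose = 50.0       # mg/kg/day
observed_response = 0.20   # excess tumor fraction

# Calibrate the single parameter: q1 = -ln(1 - P) / d
q1 = -math.log(1.0 - observed_response) / observed_dose

# Extrapolate to a low environmental dose of 0.001 mg/kg/day
env_dose = 1e-3
print(f"q1: {q1:.3e} per mg/kg/day")
print(f"Model risk at {env_dose} mg/kg/day: {one_hit_probability(env_dose, q1):.2e}")
print(f"Linear approximation q1 * d:       {q1 * env_dose:.2e}")
```

At low doses the exponential model and its linear approximation agree closely, which is why the fitted parameter behaves like a slope factor in the extrapolated range.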
Because a large number of subjects would be required to detect small responses at very low doses, several theoretical mathematical extrapolation models have been proposed for relating dose and response in the subexperimental dose range: tolerance distribution models, mechanistic models, and enhanced models. These mathematical models generally extrapolate low-dose carcinogenic risks to humans based on effects observed at the high doses used in experimental animal studies. The linear interpolation model interpolates between the response observed at the lowest experimental dose and the origin. Linear interpolation is recommended due to its conservatism, simplicity, and reliability because it is unlikely to underestimate the true low-dose risk [4].

There is no universally agreed upon method for estimating an equivalent human dose from an animal study. However, several methods are currently being used to obtain an estimate of the equivalent human dose. The first method calculates an equivalent human dose from an animal study by scaling the animal dose rate for animal body weight. To derive an equivalent human dose from animal data, the 1999 draft cancer guidelines recommend adjusting the daily applied oral doses experienced over a lifetime in proportion to BW^3/4 [8]. For noncarcinogens, an uncertainty factor is employed to estimate the equivalent human dose from an animal study if pharmacokinetic data are not available.
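To make the body-weight scaling step concrete, the sketch below converts an animal dose to a human equivalent dose in proportion to BW^3/4, as recommended in the 1999 draft cancer guidelines; the rat dose and the body weights are illustrative assumptions, not values from the chapter.

```python
def human_equivalent_dose(animal_dose, animal_bw_kg, human_bw_kg=70.0):
    """Scale a lifetime daily oral dose (mg/kg/day) from animals to humans so
    that the total dose rate scales with body weight to the 3/4 power.
    In dose-per-kg terms: HED = animal_dose * (animal_BW / human_BW) ** (1/4)."""
    return animal_dose * (animal_bw_kg / human_bw_kg) ** 0.25

# Illustrative values: a 10 mg/kg/day dose in a 0.25 kg rat, 70 kg human
print(f"{human_equivalent_dose(10.0, 0.25):.2f} mg/kg/day")  # ~2.44
```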
Noncarcinogenic dose-response assessment utilizes a threshold approach that selects the highest dosage level tested in humans or animals at which no adverse effects were demonstrated and applies uncertainty factors or margins of safety to this dosage level to determine the level of exposure at which no health effects will be observed, even for sensitive members of the population. Also, benchmark dose modeling may be conducted if the experimental data are adequate. Animal bioassay data are generally used for dose-response assessment; however, the risk assessor is normally interested in low environmental exposures of humans, which are generally below the experimentally observable range of responses seen in the animal assays. Thus, low-dose extrapolation and animal-to-human risk extrapolation methods are required and constitute major aspects of dose-response assessment. Human and animal dose rates are frequently reported in terms of the following abbreviations, which are defined below:
LOEL: lowest observed effect level, in mg/kg·day, which produces a statistically or biologically significant effect
LOAEL: lowest observed adverse effect level, in mg/kg·day, which produces a statistically or biologically significant adverse effect
NOEL: no observed effect level, in mg/kg·day, which does not produce a statistically or biologically significant effect
NOAEL: no observed adverse effect level, in mg/kg·day, which does not produce a statistically or biologically significant adverse effect.

A key factor in determining which NOAEL or LOAEL to use in calculating a reference dose (RfD) is exposure duration. As mentioned previously, acute animal studies are typically conducted for up to 7 days, subacute studies for 7 to 30 days, and subchronic studies for 90 days to 1 year. Chronic studies are conducted for a significant portion of the lifetime of the animal. Animals may experience health effects during short-term exposure that differ from effects observed after long-term exposure, so short-term animal studies of less than 90 days should not be used to develop chronic RfDs except for the development of interim RfDs or developmental RfDs. Exceptionally high quality >90-day oral exposure studies may be used as a basis for developing an RfD, whereas the inhalation route is preferred for deriving an RfC [15]. Please note that the same approaches used to develop the RfD are used to develop the RfC, the only differences being the route of exposure, the animal-to-human dosimetric adjustments, and the units (i.e., mg/m^3 for the RfC vs mg/kg/day for the RfD).

The highest dose level that does not produce a significantly elevated increase in an adverse response is the NOAEL. The NOAEL from the critical study, i.e., the study in which the health effect occurs at the lowest dose, should be used for criteria development. However, if a NOAEL is not available, then the LOAEL can be used if a LOAEL-to-NOAEL uncertainty factor (UF) is applied. Significance generally refers to both biological and statistical criteria and is dependent on the number of dose levels tested, the number of animals tested at each dose, and the background incidence of the adverse response in the control groups [5]. NOAELs can be used as a basis for risk assessment calculations such as RfDs and acceptable daily intake (ADI) values. ADI and RfD values should be viewed as conservative estimates of levels below which adverse effects would not be expected; exposures at doses greater than the ADI or RfD are associated with an increased probability (but not certainty) of adverse effects [19]. WHO uses ADI values for pesticides and food additives to define "the daily intake of chemical, which during an entire lifetime appears to be without appreciable risk on the basis of all known facts at that time" [5]. In order to remove the value judgments implied by the words "acceptable" and "safety", the ADI and safety factor (SF) terms have been replaced with the terms RfD and UF/modifying factor (MF), respectively. USEPA publishes RfDs and RfCs in either IRIS or the USEPA's Health Effects Assessment Summary Tables (HEAST). RfD and ADI values (Eqs. 1 and 2, respectively) are typically calculated from NOAEL values divided by the UF and/or MF:

RfD = NOAEL/(UF × MF)    (1)

ADI = NOAEL/SF    (2)

The uncertainty factor (UF) may range from 1 to 10,000 depending on the nature and quality of the data and is determined by multiplying different UFs together to account for five areas of scientific uncertainty [20]. The UF is primarily used to account for a potential difference between the animal's and the human's sensitivity to a particular compound. The UF_H and UF_A account for possible intra- and interspecies differences, respectively. As mentioned previously, a UF_S is used to extrapolate from a subchronic-duration study to a situation more relevant to chronic exposure, and a UF_L is used to extrapolate from a LOAEL to a NOAEL. A UF_D is used to account for inadequate numbers of animals, incomplete databases, or other experimental limitations. A modifying factor (MF) can be used to account for additional scientific uncertainties. In general, the magnitude of each individual UF is assigned a value of one, three, or ten, depending on the quality of the studies used in developing the RfD or RfC. A UF is reduced whenever there is experimental evidence of concordance between animal and human pharmacokinetics and when the mechanism of toxicity has been established.
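A minimal sketch of Eq. (1), assuming a hypothetical chronic NOAEL and a typical set of uncertainty factor values (none of which come from this chapter or from IRIS):

```python
def reference_dose(noael, uncertainty_factors, modifying_factor=1.0):
    """Eq. (1): RfD = NOAEL / (UF x MF), where UF is the product of the
    individual uncertainty factors (intraspecies, interspecies,
    subchronic-to-chronic, LOAEL-to-NOAEL, database)."""
    total_uf = 1.0
    for uf in uncertainty_factors.values():
        total_uf *= uf
    return noael / (total_uf * modifying_factor)

# Hypothetical chronic oral NOAEL of 5 mg/kg/day from an animal study
ufs = {"UF_H": 10, "UF_A": 10, "UF_S": 1, "UF_L": 1, "UF_D": 3}
print(f"RfD = {reference_dose(5.0, ufs):.4f} mg/kg/day")  # 5 / 300 ~= 0.017
```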
Recently, benchmark dose modeling has been recommended by USEPA instead of the NOAEL approach. Criticism of the NOAEL approach exists because of its limitations, which include the following: i) the NOAEL must be one of the experimental doses tested; ii) once that dose is identified, the remaining doses are irrelevant; iii) larger NOAELs may occur in experiments with few animals, thereby resulting in larger RfDs; iv) the NOAEL approach does not identify the actual response at the NOAEL, which will vary based on experimental design. These limitations of the NOAEL approach led to the benchmark dose (BMD) method [21]. The dose-response is modeled and the lower confidence bound for a dose (BMDL) at a specified response level, the benchmark response (BMR), is calculated [5]. The BMDLx (with x representing the x percent BMR) is used as an alternative to the NOAEL value for the RfD calculation. Thus, the calculation of the RfD is shown in Eq. (3):

RfD = BMDLx/(UF × MF)    (3)

Advantages of the BMD approach include: i) the ability to account for the full dose-response curve; ii) the inclusion of a measure of variability; iii) the use of responses within the experimental range; iv) the use of a consistent benchmark response level for RfD calculations across studies [5].

There are numerous informational databases and resources that provide risk assessors essential information. USEPA publishes RfDs, RfCs, CSFs, and URFs in the Integrated Risk Information System (IRIS) or in the Health Effects Assessment Summary Tables (HEAST). The information in IRIS, followed by HEAST, should be used preferentially before all other sources. A recent review of other available resources was published in a special volume of Toxicology, vol. 157, 2001. Articles by Poore et al. [22] and Brinkhuis [23] provide a thorough review of U.S. government databases such as USEPA's IRIS at http://www.epa.gov/iriswebp/iris/, the National Center for Environmental Assessment (NCEA), ATSDR's chemical-specific toxicology profiles and acute, subchronic, and chronic minimal risk levels (MRLs), and HazDat at http://www.atsdr.cdc.gov/hazdat.html, among many other databases. The reviewers provide advice on effective search strategies as well as strategies for finding the appropriate toxicology information resources.

Exposure occurs when a human contacts a chemical or physical agent. Exposure assessment examines a wide range of exposure parameters pertaining to the environmental scenarios of people who may be exposed to the agent under study. The information considered for the exposure assessment includes monitoring studies of chemical concentrations in environmental media and/or food; modeling of environmental fate and transport of contaminants; and information on the activity patterns of different population subgroups. The principal pathways by which exposure occurs, the pattern of exposure, the determination of COPC intake by each pathway, as well as the number of persons exposed and whether there are sensitive subpopulations that need to be evaluated, are also included in the evaluation. In this step, the assessor characterizes the exposure setting with respect to the general physical characteristics of the site, the site COPCs, and the characteristics of the populations on or near the site. Hazard identification/evaluation consists of sampling and analysis of soil, ground water, surface water, air, and other environmental media at contaminated sites.
A common method used in screening substances at a site is comparison with background levels in soil or ground/surface water [19], determining whether a chemical is detected or not, whether the detection limit for that chemical is less than reference concentrations, and the frequency of detection [24]. Once a list of COPCs has been identified at the site, available chemical characteristics such as structure, solubility, stability, pH sensitivity, electrophilicity, and chemical reactivity, along with toxicity data, are collected and evaluated to ascertain the nature of health effects associated with exposure to these chemicals. In many cases, toxicity information on chemicals is limited. Knowing the COPC's characteristics can represent important information for hazard identification [5]. Also, SARs are useful in assessing the relative toxicity of chemically related compounds.

During this phase of exposure assessment, the major pathways by which the previously identified populations may be exposed are identified. Therefore, the locations of contaminated media, sources of release, fate and transport of COPCs, pathways and exposure points, routes of exposure (i.e., ingestion of drinking water, dermal contact when showering), and the location and activities of the potentially exposed population are explored. For example, the common on-site pathways evaluated when conducting a RCRA remediation baseline risk assessment, where unauthorized chemical releases have occurred, include direct contact with soil either by ingestion of soil and/or inhalation of volatile chemicals or contaminated dust [19]. The migration of chemicals off-site can occur via wind-blown dust and vapor emissions from soil, leaching of chemicals to ground water with subsequent movement off-site, and surface water run-off. These off-site chemicals can eventually accumulate in other transport media such that the COPC ends up in vegetation crops, meat, milk, and fish that will eventually be consumed by humans. Therefore, pathways, sources of release, locations of contaminated media, fate and transport of COPCs, and the location and activities of the potentially exposed population are explored. Exposure points and routes of exposure (ingestion, inhalation) are identified for each exposure pathway. It is necessary to identify populations likely to receive especially high exposure and populations likely to be unusually sensitive to the chemical's effects. An example of possible points of exposure and exposure routes due to exposure to ground water or surface water (i.e., the source medium) used for drinking water is shown in Table 5 (e.g., volatilization from water to air, with inhalation in an enclosed space). Please note that all of these exposure pathways are typically not evaluated when doing a risk assessment on contaminated drinking water, since the techniques and exposure parameters for evaluating these routes of exposure are not well developed. Additional pathways to consider for surface water may include recreational exposures (i.e., swimming, boating), ingestion of contaminated fish, shellfish, etc., and dermal exposure to contaminated sediment. Finally, an attempt should be made to develop a number of exposure scenarios. Exposure scenarios are a combination of "exposure pathways" to which a single "receptor" may be subjected [25].
For example, a residential adult or child receptor may be exposed via all the exposure routes in Table 5 (i.e., drinking water, showering/bathing, washing/cooking food, and volatilization from ground water or drinking water into an enclosed space). An industrial receptor may only be exposed through the drinking water pathway and volatilization from ground water into an enclosed space and not be exposed through showering/bathing or washing/cooking, because these activities are not allowed at an industrial site. Exposure scenarios are generally conservative and not intended to be entirely representative of actual scenarios at all sites. The scenarios allow for standardized and reproducible evaluation of risks across most sites and land use areas [25]. Conservatism allows for protection of potential receptors not directly evaluated, such as special subpopulations and regionally specific land uses.

The magnitude, frequency, and duration of exposure for each pathway are next evaluated. For each potential exposure pathway, the chemical doses received by each exposure route need to be calculated. Because chemical concentrations can vary, many different studies might be required to get a complete picture of the chemical's distribution patterns within the environment. Off-site sampling and analysis are preferred methods to determine the exposure concentrations in the environmental media at the point of exposure. Because sampling data form the foundation of a risk assessment, it is important that site investigation activities are designed and implemented in accordance with the overall goals of the risk assessment to be performed [19]. For example, it is essential that appropriate analytical methods with proper quality assurance/quality control documentation be employed and that the analytical methods are sensitive enough to detect the COPC at concentrations that are below health-protective reference concentrations. After the sampling data are collected and evaluated, statistical techniques may be used to calculate the representative concentration of COPCs that will be contacted over the exposure area. Different statistical techniques may be required for the determination of representative concentrations in ground water vs surface water [24]. Fate and transport models can be used to estimate current concentrations in media and/or at locations for which sampling was not conducted. In addition, an increase in future chemical concentrations in media that are currently contaminated or that may become contaminated can be predicted by fate and transport modeling. Detailed discussions of these models are contained elsewhere in this book.

Each scenario described in the exposure assessment should be accompanied by an estimated exposure dose for each pathway. Once the exposure pathway is determined, the estimated risks and hazards from each exposure pathway can be characterized. Exposure estimates for the oral pathway are expressed in terms of the mass of substance in contact with the body per unit body weight per unit time (i.e., intakes), whereas exposure estimates for inhalation pathways are expressed as mass of substance per unit volume of air (i.e., inhalation concentrations).
The general equation for calculating intakes (mg/kg/day) is as follows [24]:

Intake = (C × CR × EF × ED)/(BW × AT)    (4)

where
Intake: the amount of chemical at the exchange boundary (mg/kg body weight·day)
C: COPC concentration; the average concentration contacted over the exposure period
CR: contact rate; the amount of contaminated medium contacted per unit time or event
EF: exposure frequency (days/year)
ED: exposure duration (years)
BW: body weight; the average body weight over the exposure period (kg)
AT: averaging time; the period over which exposure is averaged (days).

Each exposure pathway has slightly different variations of the above basic equation. Please refer to Appendix A for examples of equations used to calculate intakes for the major exposure pathways from ground and surface waters as well as examples of exposure parameters employed to calculate intakes: Appendices A-1 and A-2, ingestion of drinking water; Appendices A-3 and A-4, ingestion of contaminated fish tissue; Appendices A-5 and A-6, dermal contact with contaminated water; and Appendix A-7, inhalation of volatiles from contaminated ground water or surface water. Please refer to Kasting and Robinson [26] and Exposure to Contaminants in Drinking Water [27] for additional information on the various issues involved in the assessment of dermal exposure to water.

The exposure parameters (e.g., CR, EF, ED, BW, and AT) for each pathway are derived after an extensive literature review and statistical analysis [28]. For example, information on water ingestion rates, body weights, and fish ingestion rates for adults, children, and pregnant women used to develop the National Ambient Water Quality Criteria was obtained from the following documents: the Exposure Factors Handbook [28]; the National Health and Nutrition Examination Survey (NHANES III) [29]; and the United States Department of Agriculture (USDA) 1994-1996 Continuing Survey of Food Intakes [30]. Exposure parameters may represent central tendency or average values or maximum or near-maximum values [24]. Science policy decisions that consider the best available data and risk management judgments regarding the population to be protected are both used to choose appropriate exposure parameters. USEPA emphasizes that exposure assessments should strive to achieve an overall dose estimate that represents a "reasonable maximum exposure" (RME). The intent of the RME is to estimate a conservative exposure scenario that is within the range of possible exposures yet well above the average case (above the 90th percentile of the actual distribution). However, estimates that are beyond the true distribution should be avoided. If near-maximum or maximum values were chosen for every exposure parameter, the combination of all maximum values would result in an unrealistic assessment of exposure. Using probabilistic risk assessment, Cullen demonstrated that if only two exposure parameters were chosen at maximum or near-maximum values, and the other parameters were chosen at medium values, then the risk and hazard estimates represented an RME (>99th percentile level) [31]. Risk assessors should identify the most sensitive parameters and use maximum or near-maximum values for one or a few of those variables. Central tendency or average values should be used for all other parameters [24]. When central tendency and/or maximum values are chosen for the exposure parameters used to calculate intake for an exposure pathway, single point estimates of risk and hazard are calculated (i.e., a deterministic technique).
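As a concrete, deterministic illustration of Eq. (4) for the drinking water ingestion pathway, the sketch below computes chronic daily intakes using commonly cited residential defaults (2 L/day ingestion, 70 kg adult, 350 days/year, 30-year exposure duration, lifetime averaging for carcinogens); the water concentration and the defaults are illustrative assumptions rather than values taken from the chapter's appendices.

```python
def intake_mg_per_kg_day(c_mg_per_L, cr_L_per_day, ef_days_per_year,
                         ed_years, bw_kg, at_days):
    """Eq. (4): Intake = (C x CR x EF x ED) / (BW x AT)."""
    return (c_mg_per_L * cr_L_per_day * ef_days_per_year * ed_years) / (bw_kg * at_days)

c = 0.005  # hypothetical ground water concentration, mg/L

# Lifetime-averaged intake for a carcinogen (AT = 70 years)
carcinogen_intake = intake_mg_per_kg_day(c, 2.0, 350, 30, 70.0, 70 * 365)
# Intake averaged over the exposure duration for a noncarcinogen (AT = ED)
noncarcinogen_intake = intake_mg_per_kg_day(c, 2.0, 350, 30, 70.0, 30 * 365)

print(f"Carcinogen intake:    {carcinogen_intake:.2e} mg/kg/day")
print(f"Noncarcinogen intake: {noncarcinogen_intake:.2e} mg/kg/day")
```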
However, probabilistic techniques like Monte Carlo analysis can be employed to provide different percentile estimates of risk and hazard (i.e., 50th percentile or 95th percentile estimates) as well as to characterize variability and uncertainty in the risk assessment. Monte Carlo simulation is a statistical technique by which a quantity is calculated repeatedly, using randomly selected values from the entire frequency distribution of an exposure parameter, or of multiple exposure parameters, for each calculation. USEPA recommends using computerized Monte Carlo simulations to provide probability distributions for dose and risk estimates by incorporating ranges for individual assumptions rather than a single dose or risk estimate [19]. Using better estimates for the distribution of contaminant levels is a major focus of recent risk assessment research. To obtain such estimates, several techniques, such as generating subjective uncertainty distributions and Monte Carlo composite analyses of parameter uncertainty, have been applied [5]. These approaches can provide a reality check that is useful in generating more realistic exposure estimates [5]. Also, high-end exposure estimates (HEEEs) and theoretical upper-bound estimates (TUBEs) are now recommended for specified populations, as well as calculation of exposure for highly exposed individuals [5]. A HEEE represents an estimate of the exposure in the upper ninetieth percentile, while TUBEs represent exposure levels that exceed the exposures experienced by all individuals in the exposure distribution and assume limits for all exposure variables [5]. Please refer to the Policy for Use of Probabilistic Analysis in Risk Assessment at the USEPA and Guiding Principles for Monte Carlo Analysis at http://www.epa.gov/ncea/mcpolicy.htm [32].
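To show how a Monte Carlo analysis replaces a single point estimate with a distribution of intakes, the sketch below repeatedly samples a few exposure parameters from assumed distributions and reports the 50th and 95th percentile drinking water intakes; the distributions and their parameters are illustrative assumptions, not USEPA-recommended inputs.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 100_000

# Assumed, illustrative parameter distributions for a residential receptor
c_water = 0.005                                                        # mg/L, fixed concentration
ingestion = rng.lognormal(mean=np.log(1.4), sigma=0.5, size=n)         # L/day
body_weight = rng.normal(loc=71.0, scale=13.0, size=n).clip(40, 120)   # kg
exposure_years = rng.uniform(9, 30, size=n)                            # years of residence
ef, at_days = 350, 70 * 365                                            # days/year; lifetime averaging

intake = (c_water * ingestion * ef * exposure_years) / (body_weight * at_days)

print(f"50th percentile intake: {np.percentile(intake, 50):.2e} mg/kg/day")
print(f"95th percentile intake: {np.percentile(intake, 95):.2e} mg/kg/day")
```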
Risk characterization, the last step in the risk assessment process, links the toxicity evaluation (hazard identification and dose-response assessment) to the exposure assessment. Estimates of the upper-bound excess lifetime cancer risk and the noncarcinogenic hazard for each pathway, each COPC, and each receptor identified during the exposure assessment are calculated. Another important component of risk characterization is the clear, transparent communication of risk and hazard estimates, as well as an uncertainty analysis of those estimates, to the risk manager. Cancer risk is usually expressed as an estimated rate of excess cancers in a population exposed to a COPC for a lifetime or portion of a lifetime [33]. Oral intakes are multiplied by the CSF (Eq. 5), dermal intakes are multiplied by the CSF adjusted for GI absorption (Eq. 6), and lifetime average inhalation concentrations are multiplied by the URF (Eq. 7) to obtain risk estimates. For evaluating the risk from oral exposure, the intakes from all ingestion pathways can be summed (i.e., ingestion of drinking water, ingestion of fish, etc.), and the total intake is then multiplied by the CSF:

Risk_oral = Intake_oral × CSF    (5)

where
Intake_oral: the combined amount of COPC from all oral pathways at the exchange boundary (mg/kg/day) (Appendices A-1 to A-4)
CSF: cancer slope factor ((mg/kg/day)^-1).

For evaluating dermal exposure, the dermally absorbed dose (DAD) is calculated (Appendices A-5 and A-6) and multiplied by an adjusted CSF, the CSF_dermal. The CSF is typically derived from oral dose-response relationships that are based on administered dose, whereas the dermal intake estimates are based on absorbed dose. Therefore, if the CSF is based on an administered dose, it should be adjusted for gastrointestinal absorption if gastrointestinal absorption is significantly less than 100% (e.g., <50%) [34]. Thus, if an estimate of the gastrointestinal absorption fraction (ABS_GI) is available for the compound and ABS_GI is less than 50% [34, 35], then the oral dose-response factor, based on an administered dose, can be converted to an absorbed-dose basis by dividing the CSF by the ABS_GI to form a CSF_dermal:

Risk_dermal = DAD × CSF_dermal    (6)

where
DAD: dermally absorbed dose (mg/kg/day) (Appendices A-5 and A-6)
CSF_dermal: dermal cancer slope factor ((mg/kg/day)^-1); CSF_dermal = CSF/ABS_GI.

When ABS_GI values are not available from Bast and Borges [35] for a compound, USEPA Region 4 [36] recommends the following defaults for ABS_GI: 80% for volatile organics; 50% for semi-volatile organics and nonvolatile organics; and 20% for inorganics. For evaluation of inhalation exposure, the lifetime average inhalation concentration is multiplied by the URF:

Risk_inhalation = C_inh × URF    (7)

where
C_inh: concentration of COPC at the exchange boundary (mg/m^3) (Appendix A-7)
URF: unit risk factor ((µg/m^3)^-1).

To obtain a conservative total risk estimate, the risks for an individual COPC from each pathway are summed, and then the risks from all COPCs are summed (Eq. 8):

Risk_total = Σ Risk_i    (8)

where
Risk_total: the sum of the risk estimates for all ith COPCs from all pathways.

However, USEPA is still developing approaches to deal with the uncertainties associated with combining risk estimates of chemical mixtures across different routes of exposure (i.e., inhalation, oral, and dermal), since differences in the properties of the cells that line the surfaces of the air pathways and the lungs, the gastrointestinal tract, and the skin may result in different intake patterns of chemical mixture components depending on the route of exposure. Another consideration in dealing with chemical mixtures is that the chemicals in a mixture may partition to contact media differently [37]. A risk estimate of 1×10^-6, 1×10^-5, or 1×10^-4 is interpreted to mean that an individual has not more than, and likely less than, a 1 in 1,000,000, 1 in 100,000, or 1 in 10,000 chance, respectively, of developing cancer from the exposure being evaluated. The range of carcinogenic risks considered acceptable by the USEPA is 10^-6 to 10^-4.

For chronic exposures to noncarcinogens, the intake of a COPC is compared to the appropriate RfD (i.e., oral RfD or RfD_dermal) or RfC to form the hazard quotient (HQ) [33]. Oral intakes are compared to the RfD (Eq. 9), dermally absorbed doses (DADs) are compared to the RfD_dermal (i.e., the RfD adjusted for GI absorption; refer to the previous section for a discussion of procedures for adjusting toxicity factors for GI absorption) (Eq. 10), and inhalation exposure concentrations are compared to the RfC (Eq. 11) to obtain hazard quotients for each route of exposure:

HQ_oral = Intake_oral/RfD    (9)

HQ_dermal = DAD/RfD_dermal    (10)

HQ_inhalation = C_inh/RfC    (11)

where
Intake_oral: the combined amount of COPC from all oral pathways at the exchange boundary (mg/kg/day) (Appendices A-1 to A-4)
RfD: oral reference dose (mg/kg/day).

The total hazard index (HI) for an individual COPC from all routes of exposure is the sum of the HQs from all applicable pathways (oral, dermal, or inhalation) (Eq. 12):

HI_i = Σ HQ_i    (12)

where
HI_i: the sum of the hazard quotients from all relevant pathways for the ith COPC.

In order to be conservative, a total HI can be calculated by summing the HIs from each individual COPC.
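A minimal numerical sketch of Eqs. (5), (6), (8), (9), (10), and (12) follows; the intakes, slope factor, absorption fraction, reference doses, chemical labels, and target-organ assignments are invented for illustration and are not values from this chapter or from IRIS. The last few lines anticipate the refinement, discussed below, of regrouping hazard quotients by target organ when the total HI exceeds 1.0.

```python
from collections import defaultdict

# Hypothetical toxicity values for one COPC
csf = 0.05        # oral cancer slope factor, (mg/kg/day)^-1
abs_gi = 0.2      # gastrointestinal absorption fraction (<50%, so the CSF is adjusted)
rfd = 3e-4        # oral reference dose, mg/kg/day
csf_dermal = csf / abs_gi   # Eq. (6) adjustment
rfd_dermal = rfd * abs_gi   # RfD converted to an absorbed-dose basis

# Hypothetical lifetime-averaged oral intake and dermally absorbed dose (mg/kg/day)
intake_oral, dad = 5.9e-5, 2.0e-6

risk_total = intake_oral * csf + dad * csf_dermal    # Eqs. (5), (6), and (8)
hq_oral = intake_oral / rfd                          # Eq. (9)
hq_dermal = dad / rfd_dermal                         # Eq. (10)
hi_copc = hq_oral + hq_dermal                        # Eq. (12)
print(f"Cancer risk for this COPC: {risk_total:.1e}   HI: {hi_copc:.2f}")

# Refinement when a total HI exceeds 1.0: regroup HQs by (hypothetical) target organ
hq_by_copc = {"chemical_A": (0.6, "liver"),
              "chemical_B": (0.5, "kidney"),
              "chemical_C": (0.3, "liver")}
total_hi = sum(hq for hq, _ in hq_by_copc.values())   # 1.4, exceeds 1.0
organ_hi = defaultdict(float)
for hq, organ in hq_by_copc.values():
    organ_hi[organ] += hq                             # liver 0.9, kidney 0.5
print(f"Total HI: {total_hi:.1f}; by target organ: {dict(organ_hi)}")
```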
"If the overall HI value is less than one, public health risk is considered to be very low"; however, "if the HI value is equal to or greater than one, then the exposure assessment and hazard characterization should be investigated more thoroughly:' If the HI exceeds one, then the hazard estimates may be refined by grouping the COPCs that affect the same target organ or have the same mechanism of action and adding only the His from similar-acting COPCs [37, 38] . Ideally, chemicals would be grouped according to effect-specific toxicity criteria, information on chemicals exhibiting multiple effects would be available, and their exact mechanism of action would be known. Instead, RIDs and RfCs are available for just one of the several possible endpoints of toxicity for a chemical and data are often limited to gross toxicological effects in an organ or an entire organ system. The list of these specific endpoints of toxicity is limited so it is best to consult a toxicologist during this step of the hazard evaluation [16] . Each COPC and exposure pathway needs to be calculated to determine the actual risk. The HI provides a rough estimate of possible toxicity and requires careful interpretation [38] . The HI does not account for the number of individuals who might be affected by exposure or the severity of the effects. USEPA recommends that a HI of 1.0 be used for noncancer health effects. In the "real world", exposures generally involve complex mixtures of COPCs. There are three basic actions for mixtures: i) independent joint action, which describes COPCs that act independently and have different modes of action and are not expected to affect the toxicity of one another; ii) similar joint action or "dose addition;' which describes a mixture where the COPCs produce similar but independent effects; iii) synergistic action, which the effective mixture cannot be assessed from the individual ingredients but depends on knowledge of their combined toxicity [39] (Table 3 and Fig. 1 ). The total HI can exceed the target hazard level as a result of the presence of either one or more COPCs with an HQ exceeding the target hazard level or the summation of several COPC-specific HQs that are each less than the target hazard level. It is important to mention that the numbers generated by risk assessors should not be viewed as either accurate measures or even predictors of rates of adverse health effects in human populations [33] . The calculated estimates are routinely based on assumptions recognized as being conservative. Thus, these numbers should be used as tools open to interpretation on a site-by-site basis. It is important for the risk manager to be informed of the uncertainties during the risk assessment process. Significant limitations and uncertainties can exist throughout the entire risk assessment process; thus, it is important that a discussion of uncertainty accompanies risk assessment analysis so that the limitations of the quantitative results are taken into consideration. Both qualitative and quantitative methods have been developed to analyze the uncertainty associated with risk assessment. A quantitative analysis may be conducted using either a sensitivity analysis or a probability analysis. 
Listed below are various reasons why uncertainty exists in a risk assessment analysis [4, 19]:
- Deficient control groups
- Differences in smoking habits between an epidemiology study group and a risk group
- Differences in, or lack of consideration of, pharmacokinetics and/or mechanisms of toxicity between species
- Failure to diagnose, or misdiagnosis of, the cause of mortality
- Inappropriate experimental study design
- Lack of knowledge regarding combined biological effects of exposure to multiple toxicants
- Limitations in data regarding the nature and magnitude of levels in the environment
- Low-dose extrapolation from high-dose experimental conditions
- Reliance on mathematical models
- Toxicant interaction with another agent
- Use of animal studies in the determination of risk for humans

It is important to make clear the distinction between risk assessment and risk management. Risk assessors generate risk estimates, but a risk manager considers these risk estimates along with other scientific information and integrates them into societal decisions [40]. For example, risk managers consider data analysis, technical concerns, economic concerns, and social/political concerns in addition to comparing the risk estimates to an acceptable level set by federal or state health agencies [40]. Generally, trade-offs or compromises are made between the lowest possible risk and society's demand for jobs and economic growth. Examples of questions that may be asked by risk managers are: "Is a particularly deadly type of cancer in a narrow population worse or better than widespread effects of a non-lethal nature? Can this decision be successfully defended in court?" In general, risk management decisions may be based more on political and economic factors than on risk factors [40]. Risk assessment and risk management are an integral part of the contemporary regulatory scene. Risk management refers to the selection and implementation of the most appropriate regulatory action based on: i) goals; ii) social and political factors; iii) available control technology; iv) costs and benefits; v) results of risk assessment; vi) acceptable risk; vii) acceptable number of cases [4]. Another aspect to consider, in either the risk assessment or the risk management phase, is cumulative risk. Cumulative risk evaluation considers all "involuntary" risks to which a receptor may be exposed from a variety of environmental sources, such as: i) automobile exhaust emissions; ii) leaking underground storage tanks; iii) untreated sewage; iv) agricultural land runoff; v) industrial process air emissions; vi) conventional combustion-related air emissions [41]. However, at the present time, definitive guidance from USEPA regarding the evaluation of cumulative risk is not available. A refined site-specific risk assessment takes into account the specific characteristics of the site, all relevant pathways to which a receptor is exposed, and other site-specific information. This represents a "forward" calculation method in which risk and hazard estimates are calculated. However, each state or USEPA regional office may utilize slightly different exposure factors, exposure scenarios, target risk and hazard levels, or different procedures to account for childhood exposure, cumulative risk, etc. It is a very time- and resource-intensive process, which involves numerous scientific policy decisions. However, risk and hazard estimates from a refined site-specific risk assessment typically provide more realistic estimates than a generic screening-level risk assessment.
In contrast, media-specific comparison values can be calculated using a "backward" calculation method based on standardized equations, USEPA toxicity values, standard exposure pathways or scenarios, default exposure factors, and conservative risk and hazard levels. The USEPA Office of Water has derived Drinking Water Standards and Health Advisories to evaluate levels of contaminants in public drinking water supplies. To evaluate levels of contaminants in surface water, USEPA publishes guidance documents [42] as well as National Recommended Water Quality Criteria. State and tribal agencies then develop water quality standards for each water body in the state based on USEPA guidance and the use designation for the individual water body. USEPA must review the proposed state water quality standards before they become legally enforceable standards. If Drinking Water Standards and/or state and tribal water quality standards are available for the COPCs present at the site, these standards generally must be used to evaluate human health impacts to ground water and/or surface water, respectively. Water quality standards apply to surface waters of the United States, including rivers, streams, lakes, oceans, estuaries, and wetlands. Water quality standards consist, at a minimum, of three elements: 1) the "designated beneficial use" or "uses" of a water body or segment of a water body; 2) the water quality "criteria" necessary to protect the uses of that particular water body; 3) an antidegradation policy. Typical designated beneficial uses of water bodies include public water supply, propagation of fish and wildlife, recreation, agricultural water use, industrial water use, and navigation. If information concerning COPCs is not present in the drinking water and/or state and tribal water standards databases, or additional exposure pathways need to be included during the site assessment, then media-specific comparison values are available from the Soil Screening Guidance [16], several USEPA regional offices, and individual state governments (Table 6). These benchmark values may be used as a tool to perform initial site screenings or as initial cleanup goals, if applicable. The different media-specific comparison values are generic but can be recalculated using more site-specific information and the guidance provided at the applicable web addresses (Table 6). However, they usually do not consider all potential human health exposure pathways or ecological concerns. Many of the databases listed in Table 6 also provide COPC concentrations in soil, calculated with fate and transport models, that are protective of ground water and surface water. If information concerning COPCs is not present in these databases, or additional exposure pathways need to be included during the site assessment, then a detailed toxicity evaluation and risk assessment may need to be conducted based on state or other regulatory agency guidelines. USEPA was granted authority to set drinking water standards by the Safe Drinking Water Act (SDWA) of 1974. The SDWA has since been amended in 1986 and 1996. The responsibility for implementing drinking water standards is delegated to states and tribes. USEPA is responsible for identifying contaminants to regulate, establishing priorities for contaminants that are of the greatest concern, and then deriving National Primary Drinking Water Regulations.
The SDWA is applicable to public water systems that provide water for human consumption through at least 15 service connections or that regularly serve at least 25 individuals. The standards apply to the water delivered to any user of a public water system. The standards are not applicable to private wells, although state and local governments do set rules to protect users of these wells. Owners are urged to test their wells annually for nitrate and coliform bacteria, to test their wells for other compounds if a problem is suspected, and to take precautions to ensure the protection and maintenance of their drinking water supplies. Even though these drinking water standards do not apply to private wells, many states adopt them as ground water standards or use them to evaluate whether concentrations of contaminants in ground water are above a level of concern. The Office of Water establishes National Primary Drinking Water Standards and Secondary Drinking Water Regulations, as well as Health Advisories. The derivation of these standards, regulations, and Health Advisories is discussed below. The Drinking Water Standards and Health Advisories tables may be reached from the Office of Science and Technology (OST) home page at http://www.epa.gov/OST. The tables are accessed under the OST Programs heading on the OST home page. National Primary Drinking Water Standards are regulations the USEPA sets to control the level of contaminants in the nation's drinking water. Maximum contaminant level goals (MCLGs) are the maximum levels of a contaminant in drinking water at which no known or anticipated chronic adverse effect on the health of persons would occur, and which allow an adequate margin of safety. MCLGs are non-enforceable public health goals. Maximum contaminant levels (MCLs) are enforceable standards that are set as close to MCLGs as possible but take into consideration the availability of treatment technologies and techniques as well as whether reliable analytical methods capable of detecting low concentrations of contaminants are available. The derivation of MCLGs and MCLs is discussed in the following sections. For noncarcinogens (not including microbial contaminants), the MCLG is based on the RfD. The definition and derivation of the RfD have been discussed previously. The RfD is first adjusted for an adult, with body weight assumed to be 70 kg and consumption of 2 L of water per day, to produce the Drinking Water Equivalent Level (DWEL):

DWEL (mg/L) = (RfD · 70 kg)/(2 L/day) (13)

The DWEL represents the concentration of a substance in drinking water that is not expected to cause any adverse noncarcinogenic health effects in humans over a lifetime of exposure and assumes the only exposure to the chemical comes from drinking water. However, exposure to the chemical can also occur through other pathways and routes of exposure. Therefore, the MCLG is calculated by reducing the DWEL in proportion to the amount of exposure from drinking water relative to other sources (e.g., food, air). In the absence of actual exposure data, this relative source contribution (RSC) is generally assumed to be 20%. The final value is in mg/L and is generally rounded to one significant figure:

MCLG (mg/L) = DWEL · RSC (14)

If the chemical is considered to be a Class A or B carcinogen, then it is assumed that there is no dose below which the chemical is considered safe. Therefore, the MCLG is set at zero.
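A minimal sketch of Eqs. 13 and 14 follows, assuming the default 70-kg adult, 2 L/day water intake, and 20% RSC described above; the RfD used is a hypothetical value, so the outputs are illustrative only.

```python
# Sketch of Eqs. 13-14: DWEL from an oral RfD, then MCLG via the relative
# source contribution. The RfD below is hypothetical.

def dwel(rfd_mg_kg_day, body_weight_kg=70.0, water_intake_L_day=2.0):
    """DWEL (mg/L) = RfD x BW / DI (Eq. 13)."""
    return rfd_mg_kg_day * body_weight_kg / water_intake_L_day

def mclg(rfd_mg_kg_day, rsc=0.20):
    """MCLG (mg/L) = DWEL x RSC (Eq. 14), default RSC = 20%."""
    return dwel(rfd_mg_kg_day) * rsc

rfd = 0.005                                  # hypothetical RfD, mg/kg/day
print(f"DWEL = {dwel(rfd):.3f} mg/L; MCLG = {mclg(rfd):.3f} mg/L")
# DWEL = 0.175 mg/L; MCLG = 0.035 mg/L (rounded to one significant figure: 0.04 mg/L)
```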
If a chemical is a Class C carcinogen and scientific data indicate that there is a threshold below which carcinogenesis does not occur, then the MCLG is set at a level above zero that is considered safe. Prior to 1996, the MCLG for Class C carcinogens was based on an RfD approach that applied an additional uncertainty factor of 10 to account for the possible carcinogenic potential of the chemical. If there were no reported noncancer effects, then the MCLG was based on a nominal lifetime excess cancer risk of 10⁻⁶ to 10⁻⁵, if data were adequate. The Office of Water is now moving toward the guidance contained in the 1999 draft cancer guidelines [8, 43], which allows standards for nonlinear carcinogens to be derived based on low-dose extrapolation and a mode-of-action approach. For microbial contaminants that may present a public health risk, the MCLG is set at zero because ingesting one protozoan, virus, or bacterium may cause adverse health effects. USEPA is conducting studies to determine whether there is a safe level above zero for some microbial contaminants. So far, however, this has not been established. As mentioned previously, Maximum Contaminant Levels (MCLs) are enforceable standards that are set as close to MCLGs as possible but take into consideration the availability of treatment technologies and techniques as well as whether reliable analytical methods capable of detecting low concentrations of contaminants are available. If there is not a reliable analytical method, then a treatment technique (TT) is set rather than an MCL. A TT is an enforceable procedure or level of technological performance which public water systems must follow to ensure control of a contaminant. In addition, MCLs take into account an economic analysis to determine whether the benefits of enforcing the standard justify the costs. For Group A and Group B carcinogens, MCLs are usually promulgated at the 10⁻⁶ to 10⁻⁴ risk level. Secondary drinking water regulations are non-enforceable Federal guidelines that take into account whether a chemical produces cosmetic effects, such as tooth or skin discoloration, or aesthetic effects, such as affecting the taste, odor, or color of drinking water. Because there are at least 15 different contaminants (e.g., aluminum, chloride, copper, and fluoride) in drinking water that are not considered to be health threatening, secondary maximum contaminant level (SMCL) guidelines have been established for public water systems that voluntarily test the water. These secondary standards give the public water systems guidance on removing the contaminants. In most cases, the state health agencies and public water systems monitor and treat their drinking water for secondary contaminants. In order to provide information and guidance concerning drinking water contaminants for which national regulations currently do not exist, the USEPA Health and Ecological Criteria Division, Office of Water, in cooperation with the Office of Research and Development, prepares Health Advisories (HAs). These detailed HAs are used to "estimate concentrations of the contaminant in drinking water that are not anticipated to cause any adverse noncarcinogenic health effects over specific exposure durations" [17]. They include a margin of safety to protect sensitive members of the population (e.g., children, the elderly, and pregnant women).
HAs are not legally enforceable in the United States, are used only as guidance by Federal, state, and local officials, and are subject to change as new information becomes available. Included in the HAs is information on analytical and treatment technologies. HAs are provided for acute or short-term effects as well as chronic effects. The One-day HA, the Ten-day HA, and the Longer-term HA are based on the assumption that all exposure to the contaminant comes from drinking water, whereas the Lifetime HA takes into account other sources such as food, air, etc. The following types of HAs have been developed [17]. One-day HA - the concentration of a chemical in drinking water that is not expected to cause any adverse noncarcinogenic effects for up to one day of exposure. A One-day HA is generally based on data from acute human or animal studies involving up to 7 days of exposure. The protected individual is assumed to be a 10-kg child with an assumed volume of drinking water ingested (DI) of 1 L/day. Ten-day HA - the concentration of a chemical in drinking water that is not expected to cause any adverse noncarcinogenic effects for up to ten days of exposure. A Ten-day HA is generally based on subacute animal studies involving 7-30 days of exposure. As for the One-day HA, the protected individual for the Ten-day HA is assumed to be a 10-kg child with an assumed DI of 1 L/day. Longer-term HA - the concentration of a chemical in drinking water that is not expected to cause any adverse noncarcinogenic effects for up to approximately seven years (10% of an individual's lifetime) of exposure, with a margin of safety. A Longer-term HA is generally based on subchronic animal studies involving 90 days to 1 year of exposure. The protected individuals are assumed to be a 10-kg child with an assumed DI of 1 L/day and a 70-kg adult with an assumed DI of 2 L/day. Lifetime HA - the concentration of a chemical in drinking water that is not expected to cause any adverse noncarcinogenic effects for a lifetime of exposure. A Lifetime HA is generally based on chronic or subchronic animal studies. The protected individual is assumed to be a 70-kg adult with an assumed DI of 2 L/day. A DWEL is calculated and multiplied by an RSC of 20% to account for exposure from sources other than drinking water (food, air, etc.). Therefore, the Lifetime HA is derived similarly to the MCLG. The following general formula is used to derive the One-day, Ten-day, and Longer-term HAs and the DWEL:

HA or DWEL (mg/L) = (NOAEL or LOAEL · BW)/(UF · DI) (15)

Health Advisories for the Assessment of Carcinogenic Risk. If a contaminant is recognized as a human or probable human carcinogen (Group A or B), a carcinogenic slope factor (CSF) is derived based on the techniques discussed above. The slope factor is then used to determine the concentrations of the chemical in drinking water that are associated with theoretical upper-bound excess lifetime cancer risks of 10⁻⁴, 10⁻⁵, or 10⁻⁶. The following formula is used to calculate the concentration predicted to contribute an incremental risk level (RL) of 10⁻⁴, 10⁻⁵, or 10⁻⁶:

CDW (mg/L) = (RL · 70 kg)/(CSF · 2 L/day) (16)

where
CDW = Concentration in drinking water at the desired RL (mg/L)
RL = Desired risk level (10⁻⁴, 10⁻⁵, or 10⁻⁶)
70 = Assumed body weight of an adult human (kg)
CSF = Carcinogenic potency factor for humans (mg/kg/day)⁻¹
2 = Assumed water consumption of an adult human (L/day).
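The following sketch applies the general HA formula (Eq. 15) with the default body weights and drinking-water intakes listed above; the NOAEL and uncertainty factor are hypothetical, so the outputs are illustrative only.

```python
# Sketch of Eq. 15 (One-day, Ten-day, Longer-term HA and DWEL) using the
# default receptor assumptions above. NOAEL and UF are hypothetical.

def health_advisory(noael_mg_kg_day, uf, bw_kg, di_L_day):
    """HA or DWEL (mg/L) = (NOAEL or LOAEL x BW) / (UF x DI) (Eq. 15)."""
    return noael_mg_kg_day * bw_kg / (uf * di_L_day)

noael, uf = 10.0, 100.0                                            # hypothetical values
one_day_ha = health_advisory(noael, uf, bw_kg=10.0, di_L_day=1.0)  # 10-kg child, 1 L/day
lifetime_dwel = health_advisory(noael, uf, bw_kg=70.0, di_L_day=2.0)
lifetime_ha = lifetime_dwel * 0.20                                 # Lifetime HA applies a 20% RSC
print(f"One-day HA = {one_day_ha:.2f} mg/L; Lifetime HA = {lifetime_ha:.3f} mg/L")
```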
If a DWEL was calculated for a Class A, B, or C carcinogen based on an RfD study (i.e., a noncarcinogenic endpoint), then the carcinogenic risk associated with lifetime exposure to the DWEL can be calculated to assist the risk manager in comparing and assessing the overall risks. The theoretical upper-bound cancer risk associated with lifetime exposure to the DWEL is calculated as follows:

Risk = DWEL · (2 L/day · CSF)/70 kg (17)

USEPA is required by the Clean Water Act of 1972 to develop, publish, and revise ambient water quality criteria (AWQC). The AWQC "involves the calculation of the maximum water concentration for a pollutant that ensures drinking water and/or fish ingestion exposures will not result in human intake of that pollutant (i.e., the water quality criteria level) in amounts that exceed a specified level based upon the toxicological endpoint of concern" [20]. In October 2000, USEPA issued new guidelines [20] that replaced the 1980 AWQC National Guidelines [44]. The 2000 AWQC Guidelines incorporated significant scientific advances in the following key areas: cancer risk assessment (1986 cancer guidelines [7] vs the 1999 draft cancer guidelines [8, 43]); risk assessments for Class C carcinogens using nonlinear low-dose extrapolation; noncancer risk assessments (benchmark dose approach and categorical regression); exposure assessments (consideration of non-water sources of exposure); and bioaccumulation in fish (bioaccumulation factors, BAFs, are recommended for all compounds to calculate concentrations in fish tissue). In addition, the procedures for deriving AWQC under the CWA were made more consistent with the procedures for deriving MCLGs under the SDWA. This section will discuss guidelines from the Methodology for Deriving Ambient Water Quality Criteria for the Protection of Human Health, hereafter referred to as the AWQC Methodology Guidance [20], accessible at http://www.epa.gov/ost/humanhealth/method/index.html. State and tribal environmental agencies are responsible for developing ambient water quality standards (AWQS) for each water body in the state based on guidance provided by USEPA [20] and the uses for which the water bodies have been designated (i.e., drinking water supply, recreation, fish protection, etc.). These designated uses are a part of the water quality standards, provide a regulatory goal for the water body, and define the level of protection assigned to it. The Watershed Assessment, Tracking & Environmental Results database (WATERS), accessible at http://www.epa.gov/waters/, provides information on the water body designations for each individual state and tribe. The exposure pathways typically evaluated for AWQC are direct ingestion of drinking water obtained from a water body and consumption of fish/shellfish obtained from that water body. When an AWQC is set, anticipated exposures from other sources (e.g., food, air) are taken into account for noncarcinogenic effects and for carcinogenic effects evaluated by the margin of exposure (MOE) approach (i.e., Class C carcinogens, using the 1986 weight-of-evidence cancer guideline terminology). The amount of exposure attributed to each source compared to total exposure is called the relative source contribution (RSC) for that source.
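A short sketch of Eqs. 16 and 17 follows: it computes the drinking-water concentration that corresponds to a target cancer risk level and, conversely, the theoretical upper-bound risk at a given DWEL. The CSF and DWEL values are hypothetical placeholders.

```python
# Sketch of Eqs. 16-17: drinking-water concentration at a target risk level,
# and the theoretical risk associated with lifetime exposure at the DWEL.

def c_dw_at_risk(rl, csf, bw_kg=70.0, di_L_day=2.0):
    """C_DW (mg/L) = RL x BW / (CSF x DI) (Eq. 16)."""
    return rl * bw_kg / (csf * di_L_day)

def risk_at_dwel(dwel_mg_L, csf, bw_kg=70.0, di_L_day=2.0):
    """Risk = DWEL x DI x CSF / BW (Eq. 17)."""
    return dwel_mg_L * di_L_day * csf / bw_kg

csf = 0.1                                             # hypothetical CSF, (mg/kg/day)^-1
for rl in (1e-6, 1e-5, 1e-4):
    print(f"RL = {rl:.0e}: C_DW = {c_dw_at_risk(rl, csf):.2e} mg/L")
print(f"Risk at a DWEL of 0.175 mg/L = {risk_at_dwel(0.175, csf):.1e}")
```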
The RSC is typically set at 20%, but if a site-specific assessment is conducted for a particular water body and it can be demonstrated that other sources of exposure are not likely to occur, then the RSC can be set as high as 80%. An Exposure Decision Tree Approach is described in the Methodology Guidance to assist in calculating a site-specific RSC for a water body [20]. The allowable dose (typically, the RfD) is then allocated via the RSC approach to ensure that the criterion is protective enough, given the other anticipated sources of exposure:

AWQC = RfD · RSC · BW/[DI + Σ (FIi · BAFi)] (18)

where
AWQC = Ambient Water Quality Criterion (mg/L)
RfD = Reference dose for noncancer effects (mg/kg-day)
RSC = Relative source contribution factor to account for non-water sources of exposure; may be either a percentage (multiplied) or an amount subtracted, depending on whether multiple criteria are relevant to the chemical
BW = Human body weight (default = 70 kg for adults)
DI = Drinking water intake (default = 2 L/day for adults)
FIi = Fish intake at trophic level (TL) i (i = 2, 3, and 4) (defaults for total intake = 0.0175 kg/day for the general adult population and sport anglers, and 0.1424 kg/day for subsistence fishers). Trophic level breakouts for the general adult population and sport anglers are: TL2 = 0.0038 kg/day; TL3 = 0.0080 kg/day; TL4 = 0.0057 kg/day
BAFi = Bioaccumulation factor at trophic level i (i = 2, 3, and 4), lipid normalized (L/kg).

The following equation is used for deriving AWQC for chemicals evaluated with a nonlinear low-dose extrapolation (margin of exposure), based on guidance in the 1999 draft cancer guidelines:

AWQC = (POD/UF) · RSC · BW/[DI + Σ (FIi · BAFi)] (19)

where
POD = Point of departure for carcinogens based on a nonlinear low-dose extrapolation (mg/kg/day), usually a LOAEL, NOAEL, or LED10
UF = Uncertainty factor for carcinogens based on a nonlinear low-dose extrapolation (unitless).

For carcinogens, only two water sources (i.e., drinking water and fish ingestion) are considered when AWQC are derived. AWQC for carcinogens are determined with respect to the incremental lifetime risk posed by a substance's presence in water, and are not set with regard to an individual's total risk from all sources of exposure [20]. The 1986 cancer guidelines are the basis for the IRIS risk numbers that were used to derive the current AWQC, except for a few compounds developed using the revised cancer guidelines [14, 45]. Each new assessment applying the principles of the 1999 draft cancer guidelines [8, 43] will be subject to peer review before being used as the basis of revised, updated AWQC. The cancer-based AWQC is calculated using the risk-specific dose (RSD) and the other input parameters listed below. The RSD and AWQC for carcinogens are calculated for the specific targeted lifetime cancer risk (i.e., 10⁻⁶, 10⁻⁵, or 10⁻⁴), using the following two equations:

RSD = Target Cancer Risk/CSF (20)

AWQC = RSD · BW/[DI + Σ (FIi · BAFi)] (21)

where
RSD = Risk-specific dose (mg/kg/day)
Target Cancer Risk = 10⁻⁶, 10⁻⁵, or 10⁻⁴ (lifetime incremental risk)
CSF = Cancer slope factor (mg/kg-day)⁻¹.

Exposure parameters based on a site-specific or regional basis can be substituted to reflect regional or local conditions and/or specific populations of concern. These include the relative source contribution, the fish consumption rate, and the BAF (including factors used to derive BAFs, such as the concentration of particulate organic carbon applicable to the AWQC (kg/L), the concentration of dissolved organic carbon applicable to the AWQC (kg/L), the percent lipid of fish consumed by the target population, and the species representative of given trophic levels).
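To show how the AWQC equations fit together, the sketch below implements Eqs. 18, 20, and 21 using the default adult body weight, drinking-water intake, and trophic-level fish intakes given above; the RfD, CSF, RSC, and BAF values are hypothetical placeholders, not criteria for any real pollutant.

```python
# Sketch of Eqs. 18, 20, and 21: AWQC for a noncarcinogen (RfD plus RSC) and
# for a linear carcinogen (risk-specific dose). Toxicity values and BAFs are
# hypothetical; exposure defaults are those listed in the text.

BW, DI = 70.0, 2.0                                  # kg, L/day
FISH_INTAKE = {2: 0.0038, 3: 0.0080, 4: 0.0057}     # kg/day by trophic level (general population)

def fish_term(bafs):
    """Sum of FI_i x BAF_i over trophic levels 2-4 (L/day equivalent)."""
    return sum(FISH_INTAKE[tl] * bafs[tl] for tl in FISH_INTAKE)

def awqc_noncarcinogen(rfd, rsc, bafs):
    """AWQC = RfD x RSC x BW / (DI + sum FI_i x BAF_i) (Eq. 18)."""
    return rfd * rsc * BW / (DI + fish_term(bafs))

def awqc_carcinogen(target_risk, csf, bafs):
    """RSD = target risk / CSF (Eq. 20); AWQC = RSD x BW / (DI + sum FI_i x BAF_i) (Eq. 21)."""
    rsd = target_risk / csf
    return rsd * BW / (DI + fish_term(bafs))

bafs = {2: 50.0, 3: 100.0, 4: 300.0}                # hypothetical lipid-normalized BAFs, L/kg
print(f"Noncancer AWQC = {awqc_noncarcinogen(rfd=0.01, rsc=0.20, bafs=bafs):.4f} mg/L")
print(f"Cancer AWQC (10^-6) = {awqc_carcinogen(1e-6, csf=0.5, bafs=bafs):.2e} mg/L")
```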
States and tribes are encouraged to make adjustments using the information and instructions provided in the AWQC Methodology Guidance [20]. The National Water Quality Standards Database (WQSDB), at the web address http://www.epa.gov/wqsdatabase/, provides access to several WQS reports that give information about designated uses, water body names, state numeric water quality standards, and EPA-recommended numeric water quality criteria. The WQSDB allows users to compare WQS information across the nation using standard reports. Some states and tribes use an incidental ingestion (II) value instead of a DI value when the water body is used for recreational purposes and not as a source of drinking water. However, an II value is not used to develop National AWQC. The default value for II is 0.01 L/day and is assumed to result from swimming and other activities. The fish intake value is assumed to remain the same. Besides protection of human health, AWQC are developed based on other criteria such as organoleptic effects, aquatic life protection, sediment quality protection, nutrient criteria, microbial pathogens, biocriteria, excessive sedimentation, flow alterations, and wildlife criteria. For example, the National Recommended Water Quality Criteria table (http://www.epa.gov/ost/standards/wqcriteria.html) lists freshwater and saltwater Criteria Maximum Concentration (CMC) values, which are the acute limits for a priority pollutant for the protection of aquatic life in freshwater or saltwater. The freshwater or saltwater Criterion Continuous Concentration (CCC) value is the chronic limit for a priority pollutant for the protection of aquatic life in freshwater or saltwater, respectively. The table also includes criteria for organoleptic effects for 23 pollutants, developed to prevent undesirable taste and/or odor imparted by them to ambient water. In some cases, a water quality criterion based on organoleptic effects or aquatic life protection will be more stringent than a criterion based on toxicologic endpoints. Information and links to guidance documents relating to these subjects may be reached from the Office of Water, Water Quality Criteria and Standards Program page at http://www.epa.gov/waterscience/standards/. As more knowledge is gained about the waste generated and disposed of in landfills by our society, there is great concern about the toxic effects that this waste has on our environment as well as on animal and human health. Over the past decade, there have been numerous attempts to recycle various waste products generated by our society. This section reviews some of the recent literature on recycled hazardous waste materials. Recycled concrete pavement used as aggregate for the construction of highways can produce effluent with a high pH that can enter the underwater drains [46]. When Portland cement concrete is recycled, the concrete consists of limestone and minerals, where 60-65% is lime (CaO), plus silica (SiO₂), alumina (Al₂O₃), and iron oxide (Fe₂O₃). Ca(OH)₂ is formed and is only sparingly soluble in water; the saturated solution has a pH of 12.45 at 25 °C. The pH of the water effluent in underdrains is approximately 11-12. At this pH, CaCO₃ precipitates out and forms deposits on the screen [46]. The deposition on the screens produces clogging and scale formation in the underdrain, thereby causing vegetative kill around the outlet. In addition to recycled concrete, rubber is recycled for asphalt pavements.
Recycling of rubber provides a means of disposal of scrap tires and reduces the quantity of construction materials needed for the asphalt. Asphalt pavement contains hot-mix asphalt with or without crumb rubber modifier. The use of rubber tires reduces the weight of the asphalt and provides a good drainage medium as well as extending the life of the asphalt [47]. While there is an apparent benefit to recycling rubber, the Minnesota Pollution Control Agency has found that leaching can occur from the use of waste tires in subgrade roadbeds into the run-off water. In acidic conditions, leaching of barium, cadmium, lead, chromium, selenium, and zinc occurred from the asphalt, while in basic conditions there was leaching of polynuclear aromatic hydrocarbons. Thus, the recommended allowable levels (RALs) for drinking water standards may be exceeded in areas where there is recycled rubber in the asphalt. Paper or wood itself does not contain any hazardous chemicals unless the paper undergoes recycling. The recycling process requires de-inking of waste paper prior to recovery of the fiber, generating a sludge that contains particles of ink and fibers too short to be converted to a finished paper product [48]. De-inking chemicals such as sulfur, chlorine, cadmium, and fluorine are present in the sludge generated. A sludge is any solid, semi-solid, or liquid waste generated from a municipal, commercial, or industrial wastewater treatment plant, water supply treatment plant, or air pollution control facility, exclusive of treated effluent from a wastewater treatment plant [48]. Thus, hazardous waste can be generated from the paper recycling process. A commodity used by numerous industries is plastic. Because of the enormous amount of plastic disposed of by consumers on a daily basis, it has become a commonly recycled item at many facilities. Some metal sites will recycle the plastic insulation generated by their facility. The recycled plastic from such a facility generally includes metals such as lead, copper, manganese, and zinc, as well as dioxins, polychlorinated naphthalenes, and polychlorinated biphenyls. A leachate, contaminated run-off water, is formed from the plastic "fluff" during the recycling process. The leachate runs off into the water drains, carrying hazardous chemical residue into soil and ground water. The plastic fluff is generally recycled on site into tiles, cushions, traffic cones, fenders, and highway barriers. The non-recyclable material and contaminated soil are generally taken to an off-site landfill. Another common way to recycle plastic is to use the "sink-float" process, in which paper, fiber, and metal can be separated from the plastics and then recycled. The "sink-float" process uses water: the heavy items sink and the light items float [49]. It has been demonstrated that recycled plastics can be used as a construction material as an alternative to lumber. This product is made from used bottles collected at curbside for recycling. The recycled plastics undergo sorting to remove unpigmented polyethylene milk/water jugs and polyethylene terephthalate carbonated beverage bottles. The leftover plastic material is referred to as curbside tailings (CT). CT consists of approximately 80% polyolefin (polyethylene and polypropylene), with the remaining percentage made up of polyethylene terephthalate, polystyrene, polyvinyl chloride, and other plastics [50]. The CT product has reasonable strength compared to wood. Weis et al.
(1992) evaluated three CT recycled plastic formulations in fiddler crabs, snails, and algae [50]. It was found that limb regeneration in the fiddler crabs was accelerated with all three formulations, but there was no effect on fertilized eggs or larval development. There was, however, a significant reduction in the sperm fertilization success rate [50]. Furthermore, none of the three CT plastic formulations had an effect on the survival rate of the snails or the algal species. The presence of metals in sludge and wastewater is a current problem. For instance, agricultural land fertilized with sludge generally receives cadmium (Cd²⁺) from aerial deposition and phosphatic fertilizers. Cd²⁺ is considered a hazardous chemical and has been shown to produce toxicity in the lung and kidney and to be carcinogenic in rats [51]. The highest concentrations of Cd²⁺ are found in tobacco, lettuce, spinach, and other leafy products/vegetables. Using crop uptake data from field trials, it is possible to relate potential human dietary intake of Cd²⁺, on which hazard depends, to soil concentrations of cadmium [52]. Transfer via farm animals to meat and dairy products for human consumption is thought to be minimal, even allowing for some direct ingestion of sludge-treated soil by the animals. Background soil contains 0.1 to 1.0 mg Cd²⁺/kg; 90% of the cadmium is found in the raw sewage that is converted to sludge, and during the formation of sludge, 70% of that 90% of Cd²⁺ is removed, primarily by sedimentation. In order for Cd²⁺ uptake by roots to occur, Cd²⁺ must be in its soluble form adjacent to the root membrane for some finite period [52]. Generally, a decrease in soil pH will enhance the solubility of Cd²⁺, which will increase crop uptake of Cd²⁺. In 1984, WHO and USEPA agreed that the maximum acceptable daily uptake of Cd²⁺ was 70 µg/day, whereas 200 µg Cd²⁺/day over a 50-year period would be necessary to produce toxicity to the kidney. Farm animals fed fodder crops grown on sludge-treated soil will absorb ~5% of the Cd²⁺ ingested [52]. In addition to cadmium-bearing wastes, lead (Pb)-EDTA wastewater also undergoes recycling. EDTA is a chelating agent used in the soil washing process for the decontamination of Pb-contaminated soil. Kim et al. [53] outline a method to recycle Pb-EDTA wastewater by substituting Fe³⁺ ions for the Pb in the Pb-EDTA complex at low pH, followed by precipitation of the Pb ions with phosphate or sulfate ions. The Fe³⁺-EDTA complex will then precipitate at high pH with NaOH. The recovered EDTA solution can be recycled several times without losing its extractive power [53]. Recycling computers can be extremely hazardous if they are not properly handled. Many parts of a computer are toxic. To begin with, the cathode ray tube (CRT) glass may be classified as a hazardous waste due to its high Pb concentration. The liquid crystal display (LCD), which contains benzene-based material for the liquid crystal, is also considered hazardous. In addition, the mercury switch, mercury relay, lithium battery, Ni-H battery, Ni-Cd battery, and polychlorinated biphenyl (PCB) capacitor are all hazardous materials. Because of this, Taiwan has recently established guidelines for the proper disposal of computers and/or computer parts [54].
The nine guidelines are: i) landfilling or incineration of scrap computers shall be avoided; ii) the phosphorescent coatings that have been applied to the glass panel of the CRT must be removed; iii) ALL batteries (Li, Ni-Cd, Ni-H) must be removed by non-destructive means; iv) all PCB capacitors with a diameter greater than 1 cm and a height larger than 2 cm must be removed; v) all mercury-containing parts must be removed; vi) CRTs must be ventilated before being stored inside a building; vii) the high-Pb-content funnel glass of the CRT must be properly treated; viii) the LCD of a notebook computer must be removed by non-destructive means; and lastly, ix) plastic that contains the flame retardant bromine shall be treated properly. This model may prove useful in other countries where computer waste is becoming a major environmental concern. Organic solvents have many industrial applications, such as the formulation of products, the thinning of products prior to use, and the cleaning of materials by removal of contaminants. During these applications, solvent emissions and waste solvent generation occur. Most organic solvents are known to have adverse effects on both human health and the environment. Solvents may affect the body through inhalation and skin contact and lead to either acute or chronic poisoning [55]. The effects of acute poisoning include narcosis; irritation of the throat, eyes, or skin; dermatitis; and even death, while the effects of chronic poisoning include damage to the blood, lung, kidney, gastrointestinal system, and/or nervous system. In addition, many solvents are flammable. Waste management of organic solvents includes source reduction, recycling, treatment, and disposal [55]. Case studies indicate that dry cleaning facilities use perchloroethylene (PERC), and workers around the cleaning machines are subject to high health risks; thus, vapor recovery systems are used to reduce PERC emissions, especially from older machines [55]. Riess et al. [56] evaluated the recyclability of flame-retarded polymers containing brominated flame retardants from 108 televisions (TVs) and 78 personal computers (PCs) obtained from a recycling company. The flame-retarded polymers identified in the TVs were: 54% high-impact polystyrene, 24% acrylonitrile butadiene styrene, 15% polystyrene, and 7% polyphenyleneoxide polystyrene. The flame-retarded polymers found in the PCs were: 43% acrylonitrile butadiene styrene, 35% polyphenyleneoxide polystyrene, 18% high-impact polystyrene, 3% polystyrene, and 1% polyvinyl chloride. Recycling may be practical if 75% new material is added to the mixture [56]. The Denver Potable Reuse Pilot Project began in 1968 to recycle wastewater effluent to achieve potable water quality while being economically competitive with conventional technology. Moreover, this project sponsored the first large-scale risk assessment studies using experimental animals [57]. After ten years, this pilot project was converted to a demonstration treatment plant to address many of the technical and non-technical issues. The objectives of the Reuse Demonstration Project were "(i) to establish end product safety, (ii) to demonstrate the reliability of the process, (iii) to generate public awareness, (iv) to generate regulatory agency acceptance, and (v) to provide data for a large-scale implementation" [57].
However, ensuring end-product water safety proved difficult to demonstrate because the health standards established for drinking water were not intended to apply to treated wastewaters. Thus, additional criteria were used to prove that the effluent was suitable for human consumption. The criteria used in this project were:
- The product was compared with the National Primary and Secondary Drinking Water Regulations values
- The product was compared with federal or state regulated parameters
- Effluent levels were compared with the levels suggested to be hazardous
- Concentrations in the product water were compared to Denver's current drinking water criteria or to other "acceptable" water supplies in the U.S. and/or worldwide
- Whole-animal studies (i.e., chronic toxicity, oncogenicity, and reproductive tests) were conducted using Denver's current drinking water as a comparison standard

The Denver Project used two dosage groups per water sample: reclaimed water from the Demonstration Plant with reverse osmosis treatment (RO) and Denver drinking water from the Foothills Water Treatment Plant (DW). RO and DW were administered to Fischer 344 rats and B6C3F1 mice at dosages at least 500 times the amount found in the original water samples. Ultrafiltration-treated water samples (UFT) were administered only to rats at the high dose (500×), and distilled water was used as the control in both the rat and mouse studies. In addition to the chronic toxicity studies, reproductive toxicity studies were performed to identify potential adverse effects on reproductive performance, intrauterine development, and growth and development of the offspring. The teratology phase was designed to identify potential embryotoxicity and teratogenicity. Administration of RO, UFT, and DW water at 500 times the amount found in the original water samples for 104 weeks in rats did not result in any toxicologic or carcinogenic effects [57]. The survival rate was slightly higher among the female rats (64%-84%) compared to the male rats (52%-70%). A variety of neoplasms were seen in all treatment groups (Table 7). The "C" cell tumors in the thyroids were not considered treatment related because these neoplasms were within the anticipated ranges for the age and strain of the rat. Similar results were seen in the mouse chronic studies, where there was no toxicity or carcinogenicity seen after 104 weeks of high-dose treatment and the survival rate was identical to that of the rats. The organs most affected by the treatment were the hematopoietic system, liver, lung, and pituitary gland [57]. The remarkable finding of the reproductive studies was "the absence of treatment-related effects on reproductive performance, growth, mating capacity, survival of offspring, or fetal development in any of the treatment groups" [57]. The Denver Project met the objectives outlined at the start of the project, and all three of the toxicity studies demonstrated that concentrations 500 times the original amount seen in the sample water did not cause any notable toxicity. Thus, secondary wastewater can be recycled into safe drinking water for human consumption. Chemical mixtures have always been an issue of concern when addressing and assessing toxicity to the environment and to humans. An interagency agreement between ATSDR and the NTP resulted in participation in a Public Health Service (PHS) activity related to the Superfund Act (CERCLA, the Comprehensive Environmental Response, Compensation, and Liability Act) [58].
Yang was the lead scientist at the National Institute of Environmental Health Sciences (NIEHS)/NTP for the development of the "Superfund Toxicology Program". Particular focus centered on chemical mixtures of environmental concern, especially groundwater contaminants derived from hazardous waste disposal and agricultural activities. Yang states that obtaining a "representative" sample is practically impossible [58]. A core sample from one location at a site will certainly differ from a core sample taken at a different location of the same site. Also, core samples taken from the exact same location at different times of day and/or on different days will differ, because the weather, activity at the site, and composition of the waste can change, and compounds can degrade or new compounds can form. Thus, Yang proposed a strategy for studying chemical mixtures [58]:
1. Study chemical mixtures between binary and complex mixtures, to avoid duplication of earlier studies that evaluated the two extremes
2. Study chemically defined mixtures, to make determination and mechanistic studies manageable
3. Study chemical mixtures related to groundwater contamination, because groundwater contamination is among the most critical environmental issues
4. Study chemical mixtures at environmentally realistic concentrations, to assess the potential health effects of long-term, low-level exposure to environmental pollution
5. Study chemical mixtures with potential for lifetime exposure.

A chemical mixture of groundwater contaminants from hazardous waste sites and agricultural activities was created. This formulation contained 25 chemicals that simulated groundwater contamination, as shown in Table 8. The concentrations selected represent the average survey values from 180 hazardous waste disposal sites representing all 10 USEPA regions. Even though such a mixture may never exist in reality, new insights may be gained to elucidate potential health effects extrapolated from laboratory animals to humans. For most of the endpoints examined in this study, the results were negative. The negative results of this study were significant because the various mixtures were tested at 10- to 100-fold, or even several orders of magnitude, higher than potential human exposure levels [58]. Insights gained from Yang's project were: i) the effects will be subtle and marginal; ii) toxicologic interactions are possible at environmentally realistic levels of exposure; iii) toxic responses may come from unconventional toxicologic endpoints (immunosuppression, myelotoxicity); iv) subclinical residual effects may become more interactive with subsequent insults from chemical, physical, and/or biological agents; and v) negative results do not indicate safety for humans because the studies were done on rodents. Subsequent work showed that this mixture at low doses increased the acute toxicity of high doses of known hepatic and renal toxicants [59]. Recently, NIEHS has begun to focus on simpler mixtures of chemicals that share common mechanisms of action rather than complex mixtures. Over the past several decades, much effort has been made to establish national guidance on proper waste handling and disposal techniques, such that there are now many local, state, and federal agencies that provide guidelines to protect surface and ground waters for humans.
These guidelines also provide methods and approaches used to evaluate potential health effects and assess risks from contaminated source media (i.e., soil, air, and water), as well as establish standards of exposure or health benchmark values in the different media which are not expected to produce environmental or human health impacts. The use of the risk assessment methodology by various regulatory agencies, following the steps of i) hazard identification, ii) dose-response assessment, iii) exposure assessment, and iv) risk characterization, balances the risks and benefits and sets the "acceptable" target levels of exposure to ground water and surface water.

Footnotes to the appendix exposure tables: a) For noncarcinogenic effects, AT = ED; for carcinogenic effects, AT = 70 years. b) Exhibit 3-2 of the interim dermal guidance document [34]. Different regulatory or state agencies may recommend different exposure parameters based on scientific policy or risk management decisions.

References
- Waste management guide. The Bureau of National Affairs
- Industrial waste recycling. In: Jessup DH (ed) Waste management guide. Laws, issues, and solutions. The Bureau of National Affairs
- Revised RCRA inspection manual. OSWER Directive 9938
- Quantitative risk assessment for environmental and occupational health
- Casarett and Doull's Toxicology: the basic science of poisons
- The emerging field of ecogenetics
- Guidelines for carcinogen risk assessment. 51 FR 33992
- Guidelines for carcinogen risk assessment. Review draft. Office of Research and Development
- A weight-of-evidence scheme for assessing interactions in chemical mixtures
- Approaches and challenges in risk assessments of chemical mixtures. In: Yang RSH (ed) Toxicology of chemical mixtures
- Health Effect Test Guidelines: acute toxicity testing. USEPA, Office of Prevention, Pesticides, and Toxic Substances
- Chlorethoxyfos: review of a repeated exposure inhalation study and evaluation of that study by the hazard identification assessment review committee. USEPA, Office of Prevention, Pesticides, and Toxic Substances
- Biologic markers in risk assessment for environmental carcinogens
- Health Effect Test Guidelines: combined chronic toxicity/carcinogenicity. USEPA, Office of Prevention, Pesticides, and Toxic Substances
- Methods for derivation of inhalation reference concentrations and application of inhalation dosimetry. USEPA, Office of Research and Development
- Soil screening guidance: technical background document. USEPA, Office of Waste and Emergency Response
- Health advisories of drinking water contaminants. USEPA, Office of Water and Health Advisories
- Assessment and management of chemical risks, vol 1
- Risk assessment in the remediation of hazardous waste sites
- Methodology for deriving ambient water quality criteria for the protection of human health
- Issues in qualitative and quantitative risk analysis for developmental toxicology
- Toxicology information resources at the Environmental Protection Agency
- Risk assessment guidance for Superfund, vol 1. Human health evaluation manual (Part A)
- Assessment protocol for hazardous waste combustion facilities
- Can we assign an upper limit to skin permeability? International Life Science Institute (ILSI) (1999) Exposure to contaminants in drinking water, estimating uptake through the skin and by inhalation
- Memorandum on body weight estimates based on NHANES III data, including data tables and graphs.
  Analysis conducted and prepared by WESTAT, under EPA contract number 68-C-99-242
- USDA (1998) 1994-1996 Continuing survey of food intakes by individuals and 1994-1996 diet and health knowledge survey
- Measures of compounding conservatism in probabilistic risk assessment
- Guiding principles for Monte Carlo analysis. EPA/630/R-97/001, Risk Assessment Forum
- Chemical risk assessment numbers: what should they mean to engineers?
- Risk assessment guidance for Superfund, vol 1. Human health evaluation manual (Part E, supplemental guidance for dermal risk assessment)
- Derivation of toxicity values for dermal exposure
- Supplemental guidance to RAGS. Region IV bulletins. Human Health Risk Assessment, Waste Management Division
- Supplementary guidance for conducting health risk assessment of chemical mixtures
- Guidelines for the health risk assessment of chemical mixtures
- The toxicity of poisons applied jointly
- A practical guide to understanding, managing, and reviewing environmental risk assessment reports
- Addendum: Region 6 Risk Management - Draft human health risk assessment protocol for hazardous waste combustion facilities. EPA year 2000 guidance document. Contract number 68-W1-0055
- Guidelines and methodology used in the preparation of health effect assessment chapters of the consent decree water criteria documents
- Implementing the food quality protection act. USEPA, Office of Prevention, Pesticides, and Toxic Substances
- Remediation of hazardous effluent emitted from beneath newly constructed road systems and clogging of underdrain systems
- Assessment of water pollutants from asphalt pavement containing recycled rubber in Rhode Island. The Rhode Island Department of Transportation
- Waste-to-energy plant for paper industry sludges disposal: technical-economic study
- Superfund at work: hazardous waste cleanup efforts nationwide. USEPA, Solid Waste and Emergency Response
- Toxicity of construction materials in the marine environment: a comparison of chromated-copper-arsenate-treated wood and recycled plastic
- The control of the heavy metals health hazard in the reclamation of wastewater sludge as agricultural fertilizer
- Cadmium - a complex environmental problem. Part II
- Recycling of lead-contaminated EDTA wastewater
- Management of scrap computer recycling in Taiwan
- Management, disposal and recycling of waste industrial organic solvents in Hong Kong
- Analysis of flame retarded polymers and recycling materials. Chemosphere
- Health effect studies on recycled drinking water from secondary wastewater. In: Yang RSH (ed) Toxicology of chemical mixtures
- Toxicology of chemical mixtures derived from hazardous waste sites or application of pesticides and fertilizers. In: Yang RSH (ed) Toxicology of chemical mixtures
- Toxicology studies of a chemical mixture of 25 groundwater contaminants: hepatic and renal assessment, response to carbon tetrachloride challenge, and influence of treatment-induced water restriction
- Texas Natural Resource Conservation Commission (1999) Texas Risk Reduction Program Rule
- Review draft addendum to the methodology for assessing health risks associated with indirect exposure to combustor emissions
- Estimating exposure to dioxin-like compounds
- Review draft development of human health-based and ecologically-based exit criteria for the hazardous waste identification project.
  Office of Solid Waste, I and II

Ingestion of water:

Intake = (CW · IR · EF · ED)/(BW · AT)

a) A number of studies have shown that an age-adjusted approach should be used to calculate intakes of carcinogens for children, to take into account the differences in ingestion rates, body weights, and exposure durations between children from 1 to 6 years old and others from 7 to 31 years old [16]. b) The exposure parameters were taken from the Texas Risk Reduction Program Rule [60] and are provided as examples only. Different regulatory or state agencies may recommend different exposure parameters based on scientific policy or risk management decisions [20]. c) Use only when an RfD is based on health effects in children [20]. d) The Office of Water is in the process of preparing an Exposure Assessment Technical Support Document in which an age-adjusted approach will be used to calculate fish intakes of carcinogens for children, to take into account the differences in ingestion rates, body weights, and exposure durations between children from 1 to 6 years old and others from 7 to 31 years old [20].

Chemical concentration in fish:
For COPCs whose log Kow < 4.0: CF = CW · BCF
For COPCs whose log Kow > 4.0: CF = CW · BAF
For dioxins, furans, and polychlorinated biphenyls: CF = Csed · BSAF

where
CF = Chemical concentration in fish (mg/kg), fresh weight (FW)
CW = Chemical concentration in water (mg/L)
BCF = Bioconcentration factor (L/kg FW)ᵇ
BAF = Bioaccumulation factor (L/kg FW)ᵇ
Csed = Chemical concentration in sediment (mg/kg)
BSAF = Biota-sediment accumulation factor (unitless)ᶜ

a) Please refer to reference [25] for a detailed discussion of the procedures used to calculate chemical concentrations in fish. Different regulatory or state agencies may recommend different procedures based on scientific policy or risk management decisions [20, 44]. b) Please refer to Appendix A-3 of reference [25] for BCF, BAF, and BSAF values and the procedures for calculating these values; also refer to [20, 44]. c) BSAFs are used to account for the transfer of COPCs from the bottom sediment to the lipid in fish [25, 61-63].

Concentration of COPC in air (organic compounds, non-steady state; not applicable to inorganics): Cinh is calculated from CW using a volatilization factor (VF), where
Cinh = The concentration of COPC at the exchange boundary (mg/m³)
CW = Chemical concentration in water (mg/L)
VF = Volatilization factor [(mg/m³)/(mg/L-H₂O)]ᵃ
EF = Exposure frequency (days/year)ᵇ
ED = Exposure duration (years)ᵇ
AT = Averaging time in years (period over which exposure is averaged)ᵇ

a) Specific fate and transport models are used to derive volatilization factors to quantify the transfer of volatile COPCs from ground water into an enclosed space, from ground and surface waters into ambient air, etc. These fate and transport models are discussed elsewhere in this book. b) The exposure parameters for EF, ED, and AT from Appendix A-1 can be used for the residential adult, residential child, and commercial/industrial worker for some pathways, but site-specific exposure parameters may need to be developed for other pathways.
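As a worked illustration of the appendix relationships, the sketch below computes a fish-tissue concentration from a water concentration and a water-ingestion intake averaged over body weight and averaging time (expressed here in days so the intake comes out in mg/kg/day); every input value is hypothetical.

```python
# Sketch of the appendix relationships: fish-tissue concentration from a water
# (or sediment) concentration, and a water-ingestion intake. Inputs are hypothetical.

def fish_concentration(cw_mg_L=None, bcf=None, baf=None, csed_mg_kg=None, bsaf=None, log_kow=None):
    """CF = CW x BCF (log Kow < 4), CF = CW x BAF (log Kow > 4), or CF = Csed x BSAF."""
    if csed_mg_kg is not None and bsaf is not None:        # dioxins, furans, PCBs
        return csed_mg_kg * bsaf
    if log_kow is not None and log_kow < 4.0:
        return cw_mg_L * bcf
    return cw_mg_L * baf

def water_ingestion_intake(cw_mg_L, ir_L_day, ef_days_yr, ed_yr, bw_kg, at_days):
    """Intake (mg/kg/day) = CW x IR x EF x ED / (BW x AT), with AT given in days."""
    return cw_mg_L * ir_L_day * ef_days_yr * ed_yr / (bw_kg * at_days)

cf = fish_concentration(cw_mg_L=0.002, baf=800.0, log_kow=5.2)       # hypothetical organic COPC
intake = water_ingestion_intake(0.002, ir_L_day=2.0, ef_days_yr=350,
                                ed_yr=30, bw_kg=70.0, at_days=30 * 365)
print(f"Fish tissue concentration = {cf:.2f} mg/kg; intake = {intake:.2e} mg/kg/day")
```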