Epidemiologic Evaluation of Measurement Data in the Presence of Detection Limits
PMID: 15579415
Quantitative measurements of environmental factors greatly improve the quality of epidemiologic studies but can pose challenges because of the presence of upper or lower detection limits or interfering compounds, which do not allow for precise measured values. We consider the regression of an environmental measurement (dependent variable) on several covariates (independent variables). Various strategies are commonly employed to impute values for interval-measured data, including assignment of one-half the detection limit to nondetected values or of "fill-in" values randomly selected from an appropriate distribution. On the basis of a limited simulation study, we found that the former approach can be biased unless the percentage of measurements below detection limits is small (5-10%). The fill-in approach generally produces unbiased parameter estimates but may produce biased variance estimates and thereby distort inference when 30% or more of the data are below detection limits. Truncated data methods (e.g., Tobit regression) and multiple imputation offer two unbiased approaches for analyzing measurement data with detection limits. If interest resides solely on regression parameters, then Tobit regression can be used. If individualized values for measurements below detection limits are needed for additional analysis, such as relative risk regression or graphical display, then multiple imputation produces unbiased estimates and nominal confidence intervals unless the proportion of missing data is extreme. We illustrate various approaches using measurements of pesticide residues in carpet dust in control subjects from a case-control study of non-Hodgkin lymphoma.
Jay H Lubin; Joanne S Colt; David Camann; Scott Davis; James R Cerhan; Richard K Severson; Leslie Bernstein; Patricia Hartge
Journal Article; Research Support, U.S. Gov't, P.H.S.
Environmental Health Perspectives (Environ. Health Perspect.) 112(17):1691-1696, December 2004. ISSN 0091-6765.
Division of Cancer Epidemiology and Genetics, National Cancer Institute, Bethesda, Maryland, USA.
This is an Open Access article: verbatim copying and redistribution of this article are permitted in all media for any purpose, provided this notice is preserved along with the article's original DOI.
Received 21 April 2004; accepted 13 September 2004. Published online 13 September 2004; in print December 2004.
1Division of Cancer Epidemiology and Genetics, National Cancer Institute, Bethesda, Maryland, USA
2Southwest Research Institute, San Antonio, Texas, USA
3Fred Hutchinson Cancer Research Center and the University of Washington, Seattle, Washington, USA
4Mayo Clinic, College of Medicine, Rochester, Minnesota, USA
5Karmanos Cancer Institute and Department of Family Medicine, Wayne State University, Detroit, Michigan, USA
6Department of Preventive Medicine, Norris Comprehensive Cancer Center, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
Address correspondence to J. Lubin, National Cancer Institute, Biostatistics Branch, 6120 Executive Boulevard, Room 8042, Rockville, MD 20852 USA. Telephone: (301) 496-3357. Fax: (301) 402-0081.
Support for this study included contracts with the National Cancer Institute: N01-PC-67010, N01-PC-67008, N02-PC-71105, N01-PC-67009, and N01-PC-65064.
The authors declare they have no competing financial interests.
Epidemiologic studies often collect quantitative measurement data to improve precision and reduce bias in exposure assessment and in the estimation of the effect of exposure on risk of disease, as measured by odds ratios (Hatch and Thomas 1993; Sim 2002). Some measurements serve as biomarkers for "dose" (for example, residual radiation in tooth enamel as a marker of exposure to ionizing radiation; Desrosiers and Schauer 2001), whereas other measures are more indirect (for example, urinary cotinine level as an indicator of exposure to environmental tobacco smoke; Woodward and Al Delaimy 1999). Problems in the analysis of measurement data commonly arise because measurement procedures often have detection limits (DLs). A DL may represent a floor value, a ceiling value, or an interval where precise quantitative levels cannot be determined. For example, exposure assessment for nuclear workers relied on radiation film badges that record radiation levels only above a fixed minimum, because of limits in film photosensitivity (Gilbert et al. 1996; Kerr 1994). Investigators encountered ceiling levels of particle-bound polycyclic aromatic hydrocarbons in rural Chinese dwellings when values exceeded 60,000 ng/m3, the upper limit of the measurement protocol (Ligman et al. 2004).
Although values below or above a DL are "missing," the data are not missing at random in the usual sense, because their absence reflects levels of exposure. This type of missing data is called "nonignorable missing," and the simple exclusion of such "interval-measured" data can bias results (Little and Rubin 1987; Schafer 1997).
Analytic procedures for environmental measurement data with DLs are often presented in the context of environmental monitoring, where the primary goal is estimation of distributional parameters when numbers of measurements are limited (Gleit 1985; Haas and Scheff 1990; Helsel 1990; Persson and Rootzen 1977; Singh and Nocerino 2002; Travis and Land 1990). In epidemiologic studies, measurement data are used to characterize exposures of study subjects and are typically employed in two ways: a) to develop regression models to examine the relationship between a measured value (dependent variable) and covariates (independent variables); and b) as covariates in a risk analysis to estimate the relationship between a binary disease outcome and exposure measures and other factors. In this article, we focus on the first application, namely, the regression of an exposure measurement on covariate factors. The use of measurements with DLs in risk regression will be considered in another article.
Investigators apply various strategies for measurement data with DLs, including replacement of measurements below a DL with a single value, for example, DL, DL/2, or DL/√2 (Helsel 1990; Hornung and Reed 1990). Less frequently, measurements below a DL are assigned a value of zero. Unless such measurements indicate a true zero exposure, this latter strategy clearly distorts results and is not considered further in this article. If the distribution of the measurement data is known (for example, measurements are log-normally distributed), then an alternative strategy replaces values below the DL with the expected value of the missing measurements, conditional on being less than the DL (Garland et al. 1993; Gleit 1985). For measurement Z and detection limit DL, we denote this value E[Z | Z < DL]. Calculation of the conditional expected value requires the investigator to either know or estimate parameters of the measurement distribution.
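For a log-normal measurement, E[Z | Z < DL] has the standard closed form exp(μ + σ²/2)Φ(a − σ)/Φ(a), where a = (log DL − μ)/σ and Φ is the standard normal cumulative distribution function. A minimal Python sketch follows (the function name is ours, and the study's own computations used SAS and MATLAB, so this is purely illustrative):

```python
import numpy as np
from scipy.stats import norm

def cond_mean_below_dl(mu, sigma, dl):
    """E[Z | Z < DL] for Z ~ LN(mu, sigma^2): the mean of a log-normal
    variable truncated above at the detection limit."""
    a = (np.log(dl) - mu) / sigma
    return np.exp(mu + sigma**2 / 2) * norm.cdf(a - sigma) / norm.cdf(a)

# Sanity check against a brute-force Monte Carlo estimate.
rng = np.random.default_rng(0)
z = np.exp(rng.normal(0.0, 1.0, 1_000_000))
mc = z[z < 1.0].mean()                      # empirical E[Z | Z < 1]
analytic = cond_mean_below_dl(0.0, 1.0, 1.0)
```

Note that the replacement value depends on the (estimated) distribution, unlike the distribution-free DL/2 rule.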
Substitution schemes like those described above are simple, because one value replaces all measurements below the DL, and, except for E[Z | Z < DL], distributional assumptions are not required. However, because a single value represents all measurements below the DL, parameter estimates and their variances are likely biased unless the proportion of measurements below the DL is small, which potentially distorts inference. This limitation led to a single-impute "fill-in" method (Helsel 1990; Moschandreas et al. 2001a, 2001b). An investigator first characterizes the form of the distribution and estimates its parameters and then assigns randomly sampled values below the DL from the estimated distribution. Fill-in values, along with measured values above the DL, are then used in analyses. With appropriate estimation techniques, this approach accommodates multiple DLs.
As described by Helsel (1990) and applied by Moschandreas et al. (2001b), the fill-in method did not include complex modeling of regression factors. In addition, although the fill-in approach assigned random values from an appropriate distribution, it did not account for the variability of the imputation process, because the inserted values are not real data. In this article, we illustrate methods for epidemiologic data that account for measurements with DLs, using data from a case-control study of non-Hodgkin lymphoma (NHL) (Colt et al. 2004). The example evaluates the relationship between concentrations of pesticide analytes in carpet dust and use of pesticide products in and around the home. We restrict analysis to control subjects, with adjustment for study design factors.
The principal exposure of the general population to pesticides occurs in the home (Nigg et al. 1990) as the result of indoor use, track-in or drift from outdoors, intrusion of vapors from foundation treatments, or take-home contamination from occupational use (Bradman et al. 1997; Lewis et al. 1999, 2001). Pesticide residues are retained in carpets, migrating into the underlying foam pad, and may persist for months or years.
We consider data from controls from a multicenter, population-based case-control study of NHL conducted in four areas of the United States: the Detroit, Michigan, metropolitan area; the state of Iowa; Los Angeles County, California; and the Seattle, Washington, metropolitan area (Colt et al. 2004). Controls include 1,057 residents 20-74 years of age, frequency matched to cases on age, sex, race, and study area, with an oversampling of African-American subjects in Los Angeles and Detroit.
Interviewers collected information from respondents on lifetime residential history and the frequency and form of pesticides used to treat various types of pests (e.g., flying insects, crawling insects, lawn weeds). Interviewers obtained vacuum cleaner bags from 95% of subjects who had used their vacuum cleaners within the past year and had owned at least half of their carpets or rugs for 5 years or more. Bags were shipped in insulated boxes by overnight mail to Southwest Research Institute and placed in freezers. Samples were collected and analyzed for 513 control subjects.
The protocol for the collection and measurement of dust samples has been described previously (Colt et al. 2004). Briefly, before extraction and analysis, dust samples were sieved through a 100-mesh sieve to obtain the fine (< 150 μm) dust. Neutral extractions were carried out for 25 pesticides (18 insecticides, six herbicides, and ortho-phenylphenol), seven polycyclic aromatic hydrocarbons, and five polychlorinated biphenyl congeners. Acid extractions were carried out for four herbicides and pentachlorophenol. Extracts were analyzed using gas chromatography/mass spectrometry (GC/MS) in selected ion monitoring mode. Analyte amounts were quantified using the internal standard method. In the full study, GC/MS analysts were blinded to disease status.
After analyzing about half of the samples, investigators began monitoring additional ions for some neutral analytes to clarify identification at low levels, resulting in raised DLs for 14 pesticides. DLs were also raised when < 2 g of dust was available. An additional problem with some dust samples was the presence of interfering compounds (i.e., compounds that coeluted with the target analyte), which created uncertainty and prevented assignment of specific concentrations.
In three scenarios, analysts could provide concentrations only within an interval, which we accommodated by defining a lower bound (LB) and an upper bound (UB) of possible values. If the analyte was not detected and no interferences were present (type I), the LB was set to zero and the UB was set to the specified DL. If there was an interfering compound but insufficient evidence for the presence of the target analyte (type II), the GC/MS analyst reported the result as a nondetect with a DL equal to the entire peak of the coeluting compounds. We set the LB to zero and the UB to 20% of the reported peak or to the DL, whichever was larger. If the target analyte and the interference were both present (type III), the analyst reported an "elevated detect" with a concentration equal to the entire peak of the coeluting compounds. We set the LB to the maximum of 20% of the reported peak or the DL, and the UB to the maximum of 90% of the reported peak or the DL, resulting in an interval bounded away from zero.
For ease of presentation, we allow the replacement of measurements below the DL with DL/2 (which applies to missing data types I and II) to refer more generally to the replacement with (LB + UB)/2 (which applies to missing data types I, II, and III).
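The three bounding rules can be collected into a small helper; the sketch below is ours (function and argument names are assumptions), with `peak` standing for the total coeluting peak reported by the GC/MS analyst for types II and III:

```python
def interval_bounds(missing_type, dl, peak=None):
    """Return (LB, UB) for the three interval-measurement scenarios
    described in the text; `peak` is the total coeluting peak."""
    if missing_type == 1:        # nondetect, no interference
        return 0.0, dl
    if missing_type == 2:        # interference present, analyte not evident
        return 0.0, max(0.2 * peak, dl)
    if missing_type == 3:        # analyte and interference both present
        return max(0.2 * peak, dl), max(0.9 * peak, dl)
    raise ValueError("missing_type must be 1, 2, or 3")
```

For type III the LB is positive, so the interval is bounded away from zero, whereas types I and II always admit zero as a possible value.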
Preliminary analysis indicates that the measurement data are consistent with a log-normal distribution. If Z denotes the measured value of an analyte and is log-normally distributed, denoted Z ~ LN(μ, σ²), then by definition log(Z) is a normal random variable with mean μ and variance σ², denoted log(Z) ~ N(μ, σ²) (Singh et al. 1997). Suppose X = (X0, …, XK)t is a column vector of covariates, with X0 = 1, and β = (β0, …, βK)t is a column vector of regression parameters, where t denotes vector transpose. If data are complete, then a linear regression equation has the form log(Z) = βtX + ε, where ε ~ N(0, σ²). For each X, the model implies that Z is log-normally distributed with mean βtX; that is, Z ~ LN(βtX, σ²).
We evaluate the association between analyte concentration and pesticide use by fitting a linear regression model of the logarithm of the analyte level on subject characteristics. Regression (independent) covariates include indicator variables for season of sample collection, presence of oriental rugs, study center, sex, age (< 45, 45-64, ≥ 65 years), race (African American, Caucasian, other), type of home (single family; townhouse/duplex/apartment; other), year of home construction (< 1940, 1940-1959, 1960-1979, ≥ 1980), and educational level (< 12, 12-15, ≥ 16 years). As in Colt et al. (2004), covariates vary slightly with analyte. Models also include five variables describing the use of insect treatment products: ever/never used products to treat for crawling insects, flying insects, fleas/ticks, termites, and lawn/garden insects. We use data from current homes only.
Regression analysis is hampered by the presence of measurements known only within bounds. We assume that the probability distributions of measurements below the DL (more precisely, within the LB and UB interval) depend only on observed data; that is, the interval-measured concentrations arise from the same distributions that generate the measured values. Let F(·) be the cumulative distribution function and f(·) the probability density function for a log-normal random variable. Suppose Xi = (Xi0, …, XiK)t is the covariate vector for the ith of i = 1, …, n subjects. LBi and UBi are recorded for i = 1, …, n0 individuals, whereas a specific Zi measurement is recorded for i = n0 + 1, …, n0 + n1 individuals. LB and UB are subscripted to allow different DLs. Using a Tobit regression approach (Gilbert 1987; Persson and Rootzen 1977; Tobin 1958), the log-likelihood function has the form

ℓ(β, σ²) = Σi=1,…,n0 log[F(UBi; βtXi, σ²) − F(LBi; βtXi, σ²)] + Σi=n0+1,…,n0+n1 log f(Zi; βtXi, σ²). [1]
The first summand derives from the n0 interval-measured values and involves the difference of the cumulative distribution function F evaluated at UB and at LB; that is, the probability that the measurement lies between the LB and UB. The second summand derives from the n1 detected values. Maximum likelihood estimates (MLEs) for β and their covariance matrix are obtained by maximizing Equation 1 and computing the inverse information matrix using standard methods.
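As a concrete illustration, the likelihood in Equation 1 can be maximized numerically. The sketch below fits an intercept plus one binary covariate to synthetic log-normal data with a single lower DL; all names, the synthetic data, and the use of SciPy's general-purpose optimizer are our own (the paper's analyses used SAS PROC LIFEREG), so treat this as a sketch, not the authors' implementation:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def neg_loglik(params, X_int, lb, ub, X_det, z_det):
    """Negative of Equation 1: interval records contribute
    log[F(UB) - F(LB)]; detected records contribute the log-normal
    density f(z) = phi((log z - mu)/sigma) / (z * sigma)."""
    beta, sigma = params[:-1], np.exp(params[-1])
    mu_int, mu_det = X_int @ beta, X_det @ beta
    F_ub = norm.cdf((np.log(ub) - mu_int) / sigma)
    # LB = 0 corresponds to F(LB) = 0 for a log-normal variable
    F_lb = np.where(lb > 0,
                    norm.cdf((np.log(np.where(lb > 0, lb, 1.0)) - mu_int) / sigma),
                    0.0)
    ll = np.log(F_ub - F_lb).sum()
    ll += (norm.logpdf(np.log(z_det), mu_det, sigma) - np.log(z_det)).sum()
    return -ll

rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([np.ones(n), rng.integers(0, 2, n)])  # intercept + binary covariate
beta_true, sigma_true = np.array([0.5, 1.0]), 0.8
z = np.exp(X @ beta_true + rng.normal(0.0, sigma_true, n))
dl = np.quantile(z, 0.3)                 # ~30% of values fall below the DL
below = z < dl
res = minimize(neg_loglik, x0=np.zeros(3),
               args=(X[below], np.zeros(below.sum()), np.full(below.sum(), dl),
                     X[~below], z[~below]),
               method="Nelder-Mead", options={"maxiter": 2000})
beta_hat, sigma_hat = res.x[:2], np.exp(res.x[2])
```

Parameterizing σ on the log scale keeps the variance positive without constrained optimization; multiple DLs enter simply as different LB/UB values per record.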
If the goal is to evaluate pesticide use and analyte levels in carpet dust, represented by the β parameters, then the Tobit regression of Equation 1 is sufficient and no imputation is required. For further analysis or for graphical display, it is useful to generate values for measurements below DLs. We consider several different approaches, including inserting DL/2, inserting E[Z | Z < DL], or using single or multiple imputation (Little and Rubin 1987).
A multiple imputation procedure is carried out as follows. Using all data (measured concentrations, missing data types I-III, and covariates), we construct the log-likelihood function in Equation 1, solve for the MLEs of β and σ² (denoted β̂ and σ̂²), and impute a value by randomly sampling from a log-normal distribution with the estimated parameters. However, in selecting fill-in values we cannot ignore that β̂ and σ̂² are themselves estimates with uncertainty. We therefore do not use β̂ and σ̂² for the imputation, but rather β̃ and σ̃², which are estimated from a bootstrap sample of the data (Efron 1979). Bootstrap data are generated as described below by sampling with replacement, and represent a sample from the same universe as the original data. We repeat the process to create multiple data sets, which are then independently analyzed and combined in a way that accounts for the imputation. Differences in regression results across the multiple data sets reflect variability due to the imputation process.
This procedure, however, omits a source of variability. We have tacitly assumed that the LB and UB are fixed and known in advance. When there are no interfering compounds (missing type I), the assumption is justified because the DL is determined before the GC/MS dust analysis. When there are interfering compounds (missing types II and III), the assumption cannot be fully justified because the bounds depend on the amount of interference and therefore are random. In the NHL data, we assume this uncertainty is small relative to other uncertainties. The imputation proceeds as follows:
Step 1: Create a bootstrap sample and obtain estimates β̃ and σ̃² based on Equation 2. Bootstrap data are generated by sampling with replacement n times from the n subjects. Sampling "with replacement" selects one record at random and then "puts it back" before selecting the next record. After n repetitions, some subjects are selected multiple times, whereas other subjects are not selected at all. If wi is the number of times the ith subject is sampled, then the log-likelihood function for the bootstrap data is

ℓ*(β, σ²) = Σi=1,…,n0 wi log[F(UBi; βtXi, σ²) − F(LBi; βtXi, σ²)] + Σi=n0+1,…,n0+n1 wi log f(Zi; βtXi, σ²). [2]
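The with-replacement draw of Step 1 can equivalently be recorded as frequency weights wi, which is how the resampled data enter Equation 2; a brief sketch (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 8
idx = rng.integers(0, n, size=n)       # draw n subject indices with replacement
w = np.bincount(idx, minlength=n)      # w[i] = number of times subject i was drawn
# Weights always sum to n; subjects never drawn receive w[i] = 0 and
# contribute nothing to the bootstrap log-likelihood.
```

Fitting the weighted likelihood to the original records is numerically identical to fitting the unweighted likelihood to the duplicated records.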
Step 2: Impute analyte values based on sampling from LN(β̃tXi, σ̃²). For the ith subject, assign the value

Zi = F−1(Ui; β̃tXi, σ̃²), where Ui ~ Unif[F(LBi; β̃tXi, σ̃²), F(UBi; β̃tXi, σ̃²)]. [3]
This quantity consists of various elements. F(LBi; β̃tXi, σ̃²) and F(UBi; β̃tXi, σ̃²) are the cumulative probabilities at LBi and UBi, respectively, based on the parameters β̃ and σ̃². Both values lie between zero and one. Select a value randomly from a uniform distribution on the interval [a, b], denoted Unif[a, b], here the interval [F(LBi; β̃tXi, σ̃²), F(UBi; β̃tXi, σ̃²)]. The inverse cumulative distribution function, F−1(·), then gives the required imputed value in original units between LBi and UBi. Repeat using the same β̃ and σ̃² for each missing value. Detected values are not altered.
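Because the log-normal inverse CDF is F−1(u) = exp(μ + σΦ−1(u)), this interval-restricted draw has a direct implementation; a sketch with our own names, assuming a single subject's linear predictor μ = β̃tXi has already been computed:

```python
import numpy as np
from scipy.stats import norm

def impute_interval(lb, ub, mu, sigma, rng):
    """Draw one fill-in value from LN(mu, sigma^2) restricted to [lb, ub]:
    sample U ~ Unif[F(lb), F(ub)], then return the inverse CDF at U."""
    F_lb = norm.cdf((np.log(lb) - mu) / sigma) if lb > 0 else 0.0
    F_ub = norm.cdf((np.log(ub) - mu) / sigma)
    u = rng.uniform(F_lb, F_ub)
    return float(np.exp(mu + sigma * norm.ppf(u)))

rng = np.random.default_rng(3)
vals = [impute_interval(0.0, 2.0, 0.0, 1.0, rng) for _ in range(1000)]
```

Every draw lands inside the subject's [LB, UB] interval by construction, so imputed values are always consistent with what the assay reported.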
Step 3: Repeat steps 1 and 2 to create M plausible (or "fill-in") data sets. Remarkably, M need not be large, and a recommended value is between 3 and 5, with larger values if greater proportions of data are missing (Little and Rubin 1987; Rubin 1987). We select M = 10 to fully account for the variance from the imputation.
Step 4: Fit a regression model to each of the M data sets and obtain M sets of parameter estimates and covariance matrices. Combine the M sets of estimates to account for the imputation (Little and Rubin 1987; Schafer 1997). The imputation procedure results in confidence intervals (CIs) that are wider than the single-imputation, fill-in approach.
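The combination in Step 4 follows Rubin's rules (Rubin 1987): the pooled estimate is the mean of the M completed-data estimates, and the total variance adds the between-imputation spread to the average within-imputation variance. A sketch for a single scalar parameter (function name ours; the paper used SAS PROC MIANALYZE for this step):

```python
import numpy as np

def combine_mi(estimates, variances):
    """Pool M completed-data estimates by Rubin's rules:
    total variance T = mean(W) + (1 + 1/M) * B."""
    q = np.asarray(estimates, dtype=float)
    w = np.asarray(variances, dtype=float)
    M = len(q)
    q_bar = q.mean()                 # pooled point estimate
    W = w.mean()                     # average within-imputation variance
    B = q.var(ddof=1)                # between-imputation variance
    T = W + (1.0 + 1.0 / M) * B      # total variance
    return q_bar, T
```

Because T ≥ W whenever the M estimates disagree, the resulting CIs are wider than those from any single filled-in data set, which is precisely why this step restores valid inference.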
We conducted a simulation study, using a simple regression model with zero intercept and no covariates, to evaluate the imputation approaches, the effects of the proportion of data below the DL, and sample size. We generated data sets of size n by sampling from a log-normal distribution with parameters (μ, σ²), and defined the DL such that in expectation p percent of the samples fall below the DL; that is, DL = F−1(p; μ, σ²). The simulation involves 5,000 independent data sets for each set of parameters. We compared five approaches: a) direct estimation (Tobit regression) of MLEs (μ̂ and σ̂²) using Equation 1; b) multiple imputation with allowance for uncertainty in model parameters; c) single imputation based on a random fill-in value for each datum below the DL, using MLEs (μ̂ and σ̂²) from Equation 1; d) insertion of DL/2 for all data below the DL; and e) insertion of E[Z | Z < DL] for data below the DL, with the expected value based on the MLEs (μ̂ and σ̂²) from Equation 1. For approaches b) through e), estimators are the mean and variance of the logarithm of the observed and imputed data, with adjustment for multiple imputation in b). We compare results with estimates based on complete data.
For the NHL example, we use SAS (SAS System for Windows, version 8.2; SAS Institute Inc., Cary, NC) to generate bootstrap samples, fit linear regressions (PROC REG), solve log-likelihood Equations 1 and 2 (PROC LIFEREG), and combine results from multiple data sets (PROC MIANALYZE). The simulation was conducted using MATLAB (version 7.0; MathWorks Inc., Natick, MA).
We limited results to four insecticides, which exhibited various types and proportions of missing data: propoxur and carbaryl, both carbamate insecticides; chlorpyrifos, an organophosphate; and α-chlordane, an organochlorine.
After omitting subjects with missing questionnaire data, there are 478 control subjects with carpet dust measurements and all regression variables. The percentages of measurements below DLs or known only within bounds vary from 25.7% for propoxur to 67.0% for carbaryl (Table 1). The arithmetic mean (AM), geometric mean (GM), and geometric standard deviation (GSD), with fill-in imputations for interval-measured values, indicate that concentrations for the individual analytes varied substantially. For carbaryl and α-chlordane, the GM falls within the range of missing data. Figure 1A and B show quantile plots for measurements of propoxur and carbaryl and reveal good concordance with a log-normal distribution; Figure 1A and B also show values used for imputation based on DL/2, denoted by stars, and the conditional expected value, denoted by triangles. For carbaryl, DL/2 values are nearly twice the conditional expected values. By construction, the fill-in values conform to the estimated distribution.
In several instances, estimates for the various types of pests treated differ substantially, particularly for analytes with a high percentage of missing data. The multiplicative standard errors for the replacement approaches (i.e., inserting DL/2, E[Z | LB < Z < UB], or a fill-in value) are smaller than standard errors from the multiple imputation approach and direct estimation. The smaller standard errors reflect an inadequate accounting of the missing data and produce CIs that are too narrow and type I error rates that are inflated. Table 2 shows several variables that do not achieve traditional significance levels when imputation is taken into account. In some instances, there are marked differences in estimates. Estimated increases in carpet dust levels of α-chlordane among subjects treating for termites are 2.6- and 3.1-fold based on the DL/2 insertion and fill-in methods, respectively, and 3.7-fold based on the multiple imputation and direct estimation approaches.
Comparing the two fill-in approaches, standard errors are smaller when the covariate information is included than when covariate information is omitted. Fill-in values are obtained from regression models by sampling from LN(β̂tXi, σ̂²). Figure 1C and D show quantile plots of residuals, that is, of exp[log(Z) − β̂tX] for each subject. Although GMs of the residuals are close to the expected value of 1.0 for the error distributions, the plots suggest a slight underprediction at extreme values for propoxur and carbaryl.
For the simulation study, we set μ = 0 and σ² = 1 without loss of generality and present results for n = 50, 100, 200, and 400 and with DLs such that the expected proportions of values below the DL are p = 10, 30, 50, and 70%. With 5,000 repetitions, the standard error for coverage of 95% CIs is 0.003. Table 3 shows that estimates of μ based on Tobit regression, multiple imputation, and single-impute fill-in approaches are generally unbiased. Insertion of DL/2 or E[Z | Z < DL] results in substantial bias unless the proportion of missing data is small (about 10% or less). Table 3 also shows coverage of the 95% CI for the estimate of μ. In comparison with complete data, Tobit regression and the multiple imputation approaches are the only methods that achieve nominal coverage over a broad range of simulation parameters, although the multiple imputation begins to degrade when more than about 50% of the measurements are below DLs. The single imputation approach results in anomalous CIs when about 30% or more of the data are below DLs.
Results of our analysis of use of pesticide products in and around the home and pesticide residues in carpet dust, and of the simulation study, suggest that the method of imputation of missing environmental measurement data can substantially affect estimation of effects and statistical inference. The practice of inserting a single value, such as DL/2, the conditional expected value E[Z | Z < DL], or by analogy DL/√2, is ill-advised unless there are relatively few measurements below DLs. The use of a single imputation to fill in missing data is unbiased or minimally biased quite generally but suffers from inaccurate estimates of variance and, consequently, CIs that are too narrow, particularly when missing data exceed about 30%. The best protection against biased inference in the presence of nonignorable missing data is the use of multiple imputation, although with a high proportion of values below the DL, a large number of measurements are needed. It is worth reiterating, however, that multiple imputation is necessary only if explicit values are needed for measurements below DLs. If the purpose is to estimate regression parameters, then procedures for truncated data, such as Tobit regression, yield nominal inference (Little and Rubin 1987).
In environmental monitoring, estimation of distributional parameters is often problematic because of limited numbers of measurements and an inability to evaluate distributional forms precisely. With 5-15 measurements, MLEs can be biased (Gleit 1985), suggesting the need for more robust approaches (Helsel 1990). With epidemiologic data, which usually include hundreds or thousands of measurements, MLEs are unbiased and fully efficient (Gilliom and Helsel 1986), and more detailed regression analyses are feasible.
When analyzing environmental data on pesticides, Moschandreas et al. used a fill-in imputation approach that applied the "best-fitting" probability distribution for values above a DL (Helsel 1990; Moschandreas et al. 2001a, 2001b), although Helsel and Hirsch (1991) had cautioned that the approach should be used primarily for estimating summary statistics. The approach we outline permits multiple DLs, incorporates regression parameters, and applies multiple imputation to account correctly for interval-measured data and to allow unbiased inference. However, our simulation study suggests that the fill-in approach may be quite adequate when measurements below the DL account for less than about 30% of the data.
The Tobit regression and multiple imputation approaches assume that the limits of detection are fixed and known in advance. In our example, we are justified in assuming DLs are fixed for type I missing measurements, but not for type II and III missing data, where DLs depend on the amount of interfering compounds and are random variables. If the DL is not known, an estimate of its value is the minimum order statistic of the observed measurements, that is, the smallest measured value. Simulations suggest that for a random DL, estimates remain unbiased but variances are underestimated (Zuehlke 2003). Thus, CIs in Table 2 may be too narrow. However, relative to other sources of uncertainty that arise in the collection and handling of carpet dust samples, and the accuracy of questionnaire information, additional uncertainty induced by random DLs for type II and III missing values is likely small.
Environmental data are frequently well approximated by a log-normal distribution, and our data on concentrations of pesticide analytes in carpet dust are consistent with this assumption. Equations 1 and 2 remain valid for more general distributions, although estimation of parameters may be more problematic and necessitate potentially computer-intensive search algorithms. Validity of parameter estimates and their variances depends, of course, on the correct choice of error distribution. Our simulation study was based on a correct distributional form; however, misspecification of the probability model can lead to markedly biased results (Paarsch 1984). In the absence of knowledge about the error distribution, semiparametric and nonparametric methods have been proposed (Austin 2002a; Chay and Powell 2001; DiNardo and Tobias 2001). Bayesian approaches have also been suggested in the Tobit regression context (Austin 2002b). A reviewer suggested considering the set of measurements of a subject as a vector of multivariate outcomes, so that the covariance structure among the analytes could provide information for the imputation process. In our example, this requires the assumption that the logarithms of the measurements are multivariate normally distributed. The suggestion, however, adds complexity as the number of analytes increases, and additional work is needed to evaluate its practical feasibility.
The motivation for this work arose from the analysis of pesticide analytes in carpet dust and use of pesticide products in and around the home. However, data with DLs arise in a variety of settings, including upper DLs from health-care-related questionnaire data (Austin 2002a) and psychological profile scores, such as the Fagerstrom test for nicotine dependence (Fagerstrom and Schneider 1989; Heatherton et al. 1991), and lower DLs in radiation film badge measurements (Gilbert et al. 1996; Kerr 1994).
In summary, with epidemiologic data, our analyses indicate that unless there are very few measurements below DLs (< 5-10%), inserting DL/2, E[Z | Z < DL], or any other single value to impute missing measurement data is not advisable. Further, inserting a randomly selected fill-in value is also inadvisable unless the proportion of missing data is less than about 30%. Multiple imputation of missing data is the best approach for ensuring unbiased estimates of effects and nominal CIs.
Austin PC. 2002a. A comparison of methods for analyzing health-related quality-of-life measures. Value Health 5:329–337.
Austin PC. 2002b. Bayesian extensions of the Tobit model for analyzing measures of health status. Med Decis Making 22:152–162.
Bradman MA, Harnly ME, Draper W, Seidel S, Teran S, Wakeham D, et al. 1997. Pesticide exposures to children from California's Central Valley: results of a pilot study. J Expo Anal Environ Epidemiol 7:217–234. [pmid: 9185013]
Chay KY, Powell JL. 2001. Semiparametric censored regression models. J Econ Perspect 15:29–42.
Colt JS, Lubin J, Camann D, Davis S, Cerhan J, Severson RK, et al. 2004. Comparison of pesticide levels in carpet dust and self-reported pest treatment practices in four US sites. J Expo Anal Environ Epidemiol 14:74–83. [pmid: 14726946]
Desrosiers M, Schauer DA. 2001. Electron paramagnetic resonance (EPR) biodosimetry. Nucl Instr Meth B 184:219–228.
DiNardo J, Tobias J. 2001. Nonparametric density and regression estimation. J Econ Perspect 15:11–28.
Efron B. 1979. Bootstrap methods: another look at the jackknife. Ann Stat 7:1–26.
Fagerstrom KO, Schneider NG. 1989. Measuring nicotine dependence: a review of the Fagerstrom tolerance questionnaire. J Behav Med 12:159–182. [pmid: 2668531]
Garland M, Morris JS, Rosner BA, Stampfer MJ, Spate VL, Baskett CJ, et al. 1993. Toenail trace-element levels as biomarkers: reproducibility over a 6-year period. Cancer Epidemiol Biomarkers Prev 2:493–497. [pmid: 8220096]
Gilbert ES, Fix JJ, Baumgartner WV. 1996. An approach to evaluating bias and uncertainty in estimates of external dose obtained from personal dosimeters. Health Phys 70:336–345. [pmid: 8609025]
Gilbert RO. 1987. Statistical Methods for Environmental Pollution Monitoring. New York: Van Nostrand Reinhold.
Gilliom RJ, Helsel DR. 1986. Estimation of distributional parameters for censored trace level water quality data. I. Estimation techniques. Water Resour Res 22:135–146.
Gleit A. 1985. Estimation for small normal data sets with detection limits. Environ Sci Technol 19:1201–1206.
Haas CN, Scheff PA. 1990. Estimation of averages in truncated samples. Environ Sci Technol 24:912–919.
Hatch M, Thomas D. 1993. Measurement issues in environmental epidemiology. Environ Health Perspect 101:49–57. [pmid: 8206042]
Heatherton TF, Kozlowski LT, Frecker RC, Fagerstrom KO. 1991. The Fagerstrom test for nicotine dependence: a revision of the Fagerstrom tolerance questionnaire. Br J Addict 86:1119–1127. [pmid: 1932883]
Helsel DR. 1990. Less than obvious: statistical treatment of data below the detection limit. Environ Sci Technol 24:1766–1774.
Helsel DR, Hirsch RM. 1991. Statistical methods in water resources. In: Techniques of Water-Resources Investigations, Book 4. Reston, VA: U.S. Geological Survey. Available: http://water.usgs.gov/pubs/twri/twri4a3/ [accessed 13 August 2004].
Hornung RW, Reed LD. 1990. Estimation of average concentration in the presence of nondetectable values. Appl Occup Environ Hyg 5:46–51.
Kerr GD. 1994. Missing dose from mortality studies of radiation effects among workers at Oak Ridge National Laboratory. Health Phys 66:206–208. [pmid: 8282563]
Lewis RG, Fortune CR, Blanchard FT, Camann DE. 2001. Movement and deposition of two organophosphorus pesticides within a residence after interior and exterior applications. J Air Waste Manage Assoc 51:339–351.
Lewis RG, Fortune CR, Willis RD, Camann DE, Antley JT. 1999. Distribution of pesticides and polycyclic aromatic hydrocarbons in house dust as a function of particle size. Environ Health Perspect 107:721–726. [pmid: 10464072]
Ligman B, Shaughnessy R, Kleinerman R, Lubin J, Fisher E, Wang ZY. 2004. Indoor air pollution characterization of underground dwellings in China, 1997. Blacksburg, VA: Virginia Polytechnic Institute and State University Press, 51–56.
Little RJA, Rubin DB. 1987. Statistical Analysis with Missing Data. New York: John Wiley & Sons.
Moschandreas DJ, Karuchit S, Kim Y, Ari H, Lebowitz MD, O'Rourke MK, et al. 2001a. On predicting multi-route and multimedia residential exposure to chlorpyrifos and diazinon. J Expo Anal Environ Epidemiol 11:56–65.
Moschandreas DJ, Kim Y, Karuchit S, Ari H, Lebowitz MD, O'Rourke MK, et al. 2001b. In-residence, multiple route exposures to chlorpyrifos and diazinon estimated by indirect method models. Atmos Environ 35:2201–2213.
Nigg N, Beier RC, Carter O, Chaisson C, Franklin C, Lavy T. 1990. Exposure to pesticides. In: The Effects of Pesticides on Human Health, Vol 18 (Baker SR, Wilkinson CF, eds). Princeton, NJ: Princeton Scientific, 35–130.
Paarsch HJ. 1984. A Monte-Carlo comparison of estimators for censored regression models. J Econ 24:197–213.
Persson T, Rootzen H. 1977. Simple and highly efficient estimators for a type I censored normal sample. Biometrika 64:123–128.
Rubin DB. 1987. Multiple Imputation for Nonresponse in Surveys. New York: John Wiley & Sons.
Schafer JL. 1997. Analysis of Incomplete Multivariate Data. New York: Chapman & Hall.
Sim M. 2002. Case studies in the use of toxicological measures in epidemiological studies. Toxicology 181:405–409. [pmid: 12505343]
Singh A, Nocerino J. 2002. Robust estimation of mean and variance using environmental data sets with below detection limit observations. Chemometr Intell Lab 60:69–86.
Singh AK, Singh A, Engelhardt M. 1997. The Lognormal Distribution in Environmental Applications. Washington, DC: U.S. Environmental Protection Agency, Office of Solid Waste and Emergency Response.
Tobin J. 1958. Estimation of relationships for limited dependent variables. Econometrica 26:24–36.
Travis CC, Land ML. 1990. Estimating the mean of data sets with nondetectable values. Environ Sci Technol 24:961–962.
Woodward A, Al Delaimy W. 1999. Measures of exposure to environmental tobacco smoke: validity, precision, and relevance. Ann NY Acad Sci 895:156–172. [pmid: 10676415]
Zuehlke TW. 2003. Estimation of a Tobit model with unknown censoring threshold. Appl Econ 35:1163–1169.
Percentage of measurements below DLs or known only within bounds, and AMs, GMs, and GSDs based on fill-in values from a single imputation (data on 478 control subjects).
|Measurements known only within bounds|
Types of missing measurements are as follows: no analyte detected and no interfering compound (I), no analyte detected but with an interfering compound present (II), and analyte and interfering compounds both present (III). The range for the DLs reflects the minimum of LBs and the maximum of UBs for the nondetected measurements.
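The single fill-in value E[Z|LB < Z < UB] used for interval-censored measurements has a closed form when the log measurement Z is normal. The sketch below (illustrative, not the authors' code) implements the standard truncated-normal mean, E[Z|LB < Z < UB] = μ + σ(φ(a) − φ(b))/(Φ(b) − Φ(a)) with a = (LB − μ)/σ and b = (UB − μ)/σ.

```python
from statistics import NormalDist

def truncated_mean(mu, sigma, lb, ub):
    """Mean of Z ~ N(mu, sigma^2) conditional on lb < Z < ub."""
    nd = NormalDist()                      # standard normal phi and Phi
    a, b = (lb - mu) / sigma, (ub - mu) / sigma
    return mu + sigma * (nd.pdf(a) - nd.pdf(b)) / (nd.cdf(b) - nd.cdf(a))

# A nondetect with only an upper bound corresponds to LB = -infinity,
# which recovers the one-sided fill-in value E[Z | Z < DL]:
e_below = truncated_mean(0.0, 1.0, float("-inf"), -0.5)
```

The same function covers both missing-data situations in the table: a two-sided bound (LB, UB) when an interfering compound leaves the measurement known only within an interval, and a one-sided bound when the analyte is simply below the DL.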
Proportional increase in analyte concentration in carpet dust (ng/g) for selected uses.
|Insecticide, imputation approacha||Adjustment||exp(β̂)||SE||exp(β̂)||SE||exp(β̂)||SE||exp(β̂)||SE||exp(β̂)||SE|
|E[Z|LB < Z < UB]||No||1.432b||1.170||0.986||1.147||1.231||1.156||1.135||1.223||0.751b||1.154|
|E[Z|LB < Z < UB]||No||0.849||1.226||0.629b||1.194||1.703b||1.208||1.199||1.300||1.746b||1.205|
|E[Z|LB < Z < UB]||No||1.620b||1.218||0.771||1.188||1.300||1.190||1.613c||1.288||0.746||1.196|
|E[Z|LB < Z < UB]||No||0.954||1.153||0.925||1.132||0.894||1.140||3.031b||1.199||1.110||1.138|
Entries are exponentials of parameter estimates (β̂) and their SEs from linear regression models of the logarithm of pesticide analyte on age, sex, race, education, study site, season, and pesticide use variables. Regression models also included year house was built (propoxur, carbaryl, ?-chlordane), type of home (carbaryl), and presence of oriental rugs (?-chlordane).
aSee "Materials and Methods" for a description of the methods; adjusted imputation includes the regression variables.
b95% CI excludes 1.
c90% CI excludes 1.
Results of simulation study of imputation approachesa for log-normally distributed data with μ = 0 and σ² = 1 with a DL (entries are means of 5,000 repetitions).
|Sample size (no.)||Percent < DL||Complete data||Tobit analysis||Multi-impute using (μ̂, σ̂²)||Single impute using (μ̂, σ̂²)||Insert DL/2||Insert E[Z|Z < DL]|
|Estimate of μ||10.0||0.002||0.000||-0.003||-0.003||-0.020||0.007|
|Coverage of 95% CI||10.0||0.947||0.944||0.943||0.943||0.943||0.942|
|Estimate of μ||10.0||0.003||0.002||0.000||0.000||-0.019||0.009|
|Coverage of 95% CI||10.0||0.944||0.945||0.940||0.940||0.943||0.942|
|Estimate of μ||10.0||-0.001||-0.002||-0.002||-0.002||-0.023||0.006|
|Coverage of 95% CI||10.0||0.952||0.950||0.951||0.950||0.941||0.946|
|Estimate of μ||10.0||0.001||0.001||0.001||0.001||-0.021||0.008|
|Coverage of 95% CI||10.0||0.954||0.954||0.952||0.951||0.931||0.949|
aParameter estimation using the observed data with DLs (Tobit analysis), multiple imputation using (μ̂, σ̂²) with allowance for uncertainty in the model parameters, a single imputation using (μ̂, σ̂²), insertion of DL/2, and insertion of the expected value conditional on being below the DL, E[Z|Z < DL].
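The simulation design can be sketched as follows, assuming log-scale data Z ~ N(0, 1) with the DL fixed so that about 10% of values are censored. Only the two single-value substitution rules and the complete-data benchmark are reproduced here; Tobit ML and multiple imputation, also compared in the table, are omitted for brevity. Note that "insert DL/2" acts on the concentration scale, so on the log scale it inserts log(DL) − log(2).

```python
import math
import random
from statistics import NormalDist, mean

rng = random.Random(1)
nd = NormalDist()
DL = nd.inv_cdf(0.10)                  # log-scale DL giving 10% censoring
EZ = -nd.pdf(DL) / nd.cdf(DL)          # E[Z | Z < DL] at the true parameters

def one_rep(n=100):
    """Complete-data mean vs. two single-value substitution rules."""
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    sub = lambda fill: mean(x if x >= DL else fill for x in z)
    return mean(z), sub(DL - math.log(2)), sub(EZ)

reps = [one_rep() for _ in range(2000)]
complete = mean(r[0] for r in reps)     # ~0: unbiased benchmark
half_dl = mean(r[1] for r in reps)      # below 0: "DL/2" biases the mean down
cond = mean(r[2] for r in reps)         # ~0: conditional-mean fill-in unbiased
```

Even at only 10% censoring, the DL/2 rule shows a systematic downward bias of about −0.02, matching the pattern in the table, while substituting E[Z|Z < DL] leaves the mean unbiased (though, as the paper notes, it still understates the variance).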
Keywords: dust, environmental exposure, imputation, missing data, non-Hodgkin lymphoma, pesticides.