Abstract
We describe an exploratory, data-oriented approach for identifying candidates for differential gene expression in cDNA microarray experiments in terms of α-outliers and outlier regions, using simultaneous tolerance intervals relative to the line of equivalence (Cy5 = Cy3). We demonstrate the improved performance of our approach over existing single-slide methods using public datasets and simulation studies.
Background
Multiple studies validate the utility of cDNA microarrays for comparing relative mRNA transcript levels between different biological samples. Both the biological systems under study and the technology itself contribute to the variability within and between microarrays [1-11]. A fundamental question is determining which of the potentially tens of thousands of genes assayed have transcript levels that differ significantly in the two samples. Experimental designs utilizing many levels of replication improve the ability to identify differentially-expressed genes [2-12]. However, the vast majority of studies utilize no or limited replication due to practical considerations of cost and feasibility. Thus, statistical techniques are needed for cDNA microarray studies with constraints on replication. A common strategy is to equate differentially-expressed genes with those genes having a ratio of hybridization intensity values greater, or less, than some user-defined threshold [13,14], such as a twofold change.
We describe a new approach for identifying differentially-expressed gene candidates in cDNA microarray experiments without replication or with limited replication. We illustrate its utility by applying it to published data and demonstrate its advantages over current approaches. Microarray datasets consist of pairs of processed fluorescent intensity values, background-corrected and normalized, for each of the N genes on the microarray. We discuss a model for such data in which the log_{2}(Cy5) and log_{2}(Cy3) values are linearly related and are samples drawn from a bivariate normal population 'contaminated' with outliers (see detailed definitions of the outlier-generating model in Methods) and possibly distorted by heteroscedasticity. In a contaminated bivariate normal distribution, the main body of data is a sample from a bivariate normal distribution and constitutes the regular observations. The dataset also contains non-regular observations, 'outliers' or 'contaminants', which represent systematic deviations that, as we describe below, are candidates for differential expression. We check these underlying assumptions by applying exploratory data analysis tools (scatter plots with tolerance ellipses, Quantile-Quantile normal plots (QQNPs) with simulation envelopes and boxplots for residuals) to simulated and empirical datasets.
We formulate our method in terms of an α-outlier-generating model and outlier regions [15,16]. In a scatter plot of suitably normalized log_{2}(Cy5) versus log_{2}(Cy3) intensity values, the majority of data points lie in the vicinity of the line of equivalence (Cy5 = Cy3). The line of equivalence can be estimated using a robust linear regression estimator, and the data can be normalized by the regression fit so that slope = 1 and intercept = 0. We subtract the fit from the data to compute residuals [log_{2}(Cy5) − log_{2}(Cy3)], which represent the vertical distances from the line of equivalence to the data points and are equivalent to the log-transformed ratios log_{2}(Cy5/Cy3). After these steps, we apply robust scatter plot smoothers to quantify and take into account any distortion of the data by heteroscedasticity. Data points far from the line of equivalence, 'outliers', are considered to be of greatest interest since they correspond to genes having noticeably different hybridization intensity values. An outlier could represent one of five circumstances: a gene with higher individual variability than the majority of genes; an outlier by chance; a sporadic technical or biological outlier; a systematic technical outlier (due to, for example, heteroscedasticity); or a systematic biological outlier due to differential expression. We assume that the further away from the line of equivalence an outlier is located, the more likely it is genuinely 'up-' or 'down-regulated'. We compare our approach with some existing single-slide methods [13,14,17-20] and demonstrate that it works well in practice.
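The normalization and residual computation described above can be sketched in code. The authors work in S-Plus/R; the following Python sketch uses a Theil-Sen-style pairwise-median slope as an illustrative robust estimator (an assumption for illustration, not the estimator used in the paper) and returns residuals relative to the fitted line:

```python
import numpy as np

def normalize_to_equivalence(log_cy3, log_cy5, n_pairs=2000, seed=0):
    """Fit log2(Cy5) ~ a + b*log2(Cy3) with a Theil-Sen-style robust
    slope (median of pairwise slopes over random pairs), then subtract
    the fit so that the line of equivalence has slope 1, intercept 0.
    The residuals are the vertical distances to the fitted line,
    i.e. normalized log2(Cy5/Cy3) values."""
    log_cy3, log_cy5 = np.asarray(log_cy3), np.asarray(log_cy5)
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(log_cy3), n_pairs)
    j = rng.integers(0, len(log_cy3), n_pairs)
    keep = log_cy3[i] != log_cy3[j]          # avoid zero denominators
    slopes = (log_cy5[i] - log_cy5[j])[keep] / (log_cy3[i] - log_cy3[j])[keep]
    b = float(np.median(slopes))             # robust slope estimate
    a = float(np.median(log_cy5 - b * log_cy3))  # robust intercept
    residuals = log_cy5 - (a + b * log_cy3)
    return residuals, a, b
```

Because the slope and intercept are medians rather than least-squares estimates, a handful of strong outliers does not pull the fitted line of equivalence toward them.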
Results
We examined 10 published cDNA microarray experiments that compared 6,295 transcript levels in wildtype Saccharomyces cerevisiae and single gene deletion mutants pertinent to copper and iron metabolism [18]. The deleted genes were mac1/YMR021C (experiment number 96), cin5/YOR028C (26), cup5/YEL027W (36), fre6/YLL051C (64), sod1/YJR104C (162), spf1/YEL031W (163), vma8/YEL051W (189), yap1/YML007W (195), yer033c (214) and ymr031c (250).
Exploratory data analysis supports model for cDNA microarray data
We used exploratory data analysis tools to assess the assumptions underlying our method. We assume that biological and technical noise results in the majority of the measured expression levels changing randomly, independently, non-directionally and by a small amount. Thus, the log_{2}(Cy3) and log_{2}(Cy5) variables in cDNA microarray data should be linearly related and come from a contaminated bivariate normal distribution, possibly distorted by heteroscedasticity. Using each tool, we compared the observed mac1 data and datasets simulated as samples from a bivariate normal distribution with parameters corresponding to robust estimates of the location and variance-covariance matrix. Figure 1 shows concentration ellipses and the QQNP of log_{2}(Cy5/Cy3) values for the empirical and simulated datasets.
Figure 1. Visual tests of the underlying assumptions. Five concentration ellipses for (a) the standardized mac1 dataset and (b) a Monte Carlo simulated dataset with the same location and variance-covariance parameters (we used robust versions of the location and scale estimators) as the mac1 data. The tolerance ellipses cover 90% (red), 95% (blue), 99% (green), 99.9% (orange) and 99.99% (light orange) portions of the standard normal distribution to assist in visually testing the assumption of contaminated bivariate normality. QQNP of residuals for (c) the mac1 dataset and (d) the corresponding Monte Carlo simulated dataset, for comparison with (a) and (b). Outlying points are given in different colors in accordance with the STIs in Figure 19b.
The log_{2}(Cy5) versus log_{2}(Cy3) scatter plots and concentration ellipses (Figure 1a,b) provide a visual assessment of bivariate normality. The distribution of the empirical mac1 data is similar to that of the simulated ('ideal') bivariate normal population except for the presence of strong outliers. The mac1 data contain significantly more extreme observations than would be expected for a sample from a bivariate normal population.
The QQNP for residuals (log_{2}(Cy5/Cy3)) compares the quantiles of the empirical data with the quantiles of the standard normal distribution (Figure 1c,d). The simulated mac1 data points lie along a straight line (the line for the standard normal distribution) except for somewhat heavy tails due to finite sample size (Figure 1d). The empirical mac1 data points (Figure 1c) also conform to a normal distribution except for longer tails (an increased incidence of outliers and possible heteroscedasticity). Examination of the empirical data using exploratory data analysis tools supports our premise that the log-transformed channel intensities (log_{2}(Cy3) and log_{2}(Cy5)) are linearly related and come from a contaminated bivariate normal distribution possibly distorted by heteroscedasticity.
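A QQNP simulation envelope of the kind used here can be sketched as follows (a minimal Python illustration, assuming MAD standardization and a pointwise min/max envelope over the simulated samples; the paper's exact construction may differ):

```python
import numpy as np

def qqnp_envelope(residuals, n_sim=1000, seed=0):
    """QQ normal plot with a pointwise simulation envelope.
    Residuals are robustly standardized (median / 1.4826*MAD); each
    sorted order statistic is then compared with the min/max of the
    same order statistic across n_sim standard-normal samples.
    Points outside the envelope are candidate non-regular points."""
    rng = np.random.default_rng(seed)
    r = np.asarray(residuals, dtype=float)
    n = len(r)
    scale = 1.4826 * np.median(np.abs(r - np.median(r)))
    z = np.sort((r - np.median(r)) / scale)
    sims = np.sort(rng.standard_normal((n_sim, n)), axis=1)
    lo, hi = sims.min(axis=0), sims.max(axis=0)
    outside = (z < lo) | (z > hi)
    return z, lo, hi, outside
```

For a sample that really is normal, essentially all sorted points fall inside the envelope; contaminants appear in the tails, outside it.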
The other nine datasets show a similar pattern (Figures 2,3,4,5,6,7,8,9,10). Figure 2 shows nine scatter plots with tolerance ellipses for the empirical log-transformed normalized channel intensities. There are strong bivariate outliers, and differential gene expression candidates will be represented by Y-outliers. A data point that is an X-outlier or a YX-outlier probably represents a technical gross error. Figure 3 represents scatter plots for simulated data produced using robust estimates of location and scale parameters for the corresponding empirical datasets. A 99.99% tolerance ellipse covers the simulated data points with no outliers. Figure 4 displays results after outlier removal from the empirical data using a simple cutoff and ignoring any heteroscedasticity. The majority of data points look like regular observations sampled from a bivariate normal population.
Figure 2. Overlay of concentration ellipses for the bivariate standard normal on real data. Scatter plots of nine datasets from Hughes et al. [18] overlaid with concentration ellipses for the standard normal distribution (see Figure 1 for the portions captured). Channel intensity values were log (base 2)-transformed, normalized and standardized.
Figure 3. Overlay of concentration ellipses for the bivariate standard normal on simulated data. Scatter plots of nine simulated datasets (generated as random samples from a bivariate normal population) with overlaid concentration ellipses for the standard bivariate normal distribution (see Figure 1 for the portions captured). Channel intensity values were log (base 2)-transformed, normalized and standardized.
Figure 4. Overlay of concentration ellipses for the bivariate standard normal on real data with prominent outliers removed. Scatter plots of nine datasets from Hughes et al. [18] after outlier removal with concentration ellipses for the standard bivariate normal distribution (see Figure 1 for the portions captured). A two-sided 99.9% cutoff and a robust measure of scale (median absolute deviation) for residuals were used to identify outliers. Channel intensity values were log (base 2)-transformed, normalized and standardized.
Figure 5. QQNP for real data: residuals of nine datasets from Hughes et al. [18]. Channel intensity values were log (base 2)-transformed and normalized. Compare with Figure 18 (colors for outliers match the tolerance band colors).
Figure 6. QQNP for residuals of nine simulated datasets (generated as random samples from a normal population). Channel intensity values were log (base 2)-transformed and normalized. Compare with Figure 18 (colors for outliers match the tolerance band colors).
Figure 7. QQNP for residuals of nine datasets from Hughes et al. [18] after prominent outlier removal. A two-sided 99.9% region and a robust measure of scale (median absolute deviation) for residuals were used to remove outliers. Channel intensity values were log (base 2)-transformed and normalized. Compare with Figure 18 (colors for outliers match the tolerance band colors).
Figure 8. QQNP with simulation envelopes (based on 1,000 random samples from a normal population) for residuals of nine datasets from Hughes et al. [18]. Channel intensity values were log (base 2)-transformed and normalized. The envelopes are depicted as dashed red lines.
Figure 9. QQNP with simulation envelopes for Monte Carlo simulated data. QQNP with simulation envelopes (based on 1,000 random samples from a normal population) for residuals of nine simulated datasets (generated as random samples from a normal population). Channel intensity values were log (base 2)-transformed and normalized. The envelopes are depicted as dashed red lines.
Figure 10. QQNP with simulation envelopes (based on 1,000 random samples from a normal population) for residuals of nine datasets from Hughes et al. [18] after prominent outlier removal. A two-sided 99.9% cutoff and a robust measure of scale (median absolute deviation) for residuals were used to remove outliers. Channel intensity values were log (base 2)-transformed and normalized. The envelopes are depicted as dashed red lines.
The ordinary QQNPs in Figures 5,6,7 give a different view of the data shown in Figures 2,3,4, comparing empirical, or simulated, quantiles with quantiles of the standard normal distribution. Outlying observations all lie in the heavy tails. Figure 6 demonstrates the absence of strong outliers in the simulated data, although heavy tails still persist because of finite sampling. Figure 7 shows that, after outlier removal, the main body of data ('regular observations') may be reasonably approximated by a normal distribution. Simulation envelopes for the QQNP support this conclusion (see below).
Figures 8,9,10 illustrate the use of simulation envelopes. All simulated data points should, of course, lie inside the envelopes, as Figure 9 shows. Figure 10 confirms our expectation that after outlier removal the main body of regular data lies within the simulation envelopes. A hump-shaped deviation from the envelope limits in almost all the examples suggests a systematic technical error (the nonlinear local bias is very reproducible and may be seen in the majority of other scatter plots based on datasets from Hughes et al. [18]). Such a local nonlinearity could be removed by applying a lowess-based normalization with appropriate smoothing parameters. The source of the systematic deviation is not known.
Detection of residual heteroscedasticity
In microarray data, the variance of the residuals (log_{2}(Cy5/Cy3)) is not constant (homoscedasticity) but rather varies (heteroscedasticity) with intensity level (log_{2}(Cy5·Cy3)/2, log_{2}(Cy3) or log_{2}(Cy5)). The presence of residual heteroscedasticity argues strongly against arbitrary threshold methods to identify candidates for differential expression [10,11,19,21-24]. Approaches for assessing the heterogeneity of residual variance [10,11,25,26] include graphical, parametric and nonparametric methods. Here, we use nonparametric regression smoothing in 'absolute residuals versus log_{2}(Cy3)' scatter plots to quantify residual variance. We compared the S-Plus/R supsmu (super smoother) and lowess (robust locally weighted regression) procedures with other methods (scatter plots with tolerance ellipses, QQNPs with simulation envelopes, boxplots for residuals) and found that our regression smoother approach for absolute residuals performs well.
We used these smoothing methods to assess heteroscedasticity in two ways: grouping the data into subsets of equal size and then applying regression smoothers to the median absolute residual in each group against the median of log_{2}(Cy3) for that group [25]; and the same method but using a boxplot for each group. One benefit of the latter approach is that it can be used not only for residual diagnostics but also, directly, to take heteroscedasticity into account and estimate the number of candidates for differential gene expression. We also used Spearman rank correlation coefficients of the absolute residuals versus log_{2}(Cy3) to check the smoothing-based methods for consistency: positive values indicate increasing residual variance, negative ones indicate decreasing variance [25].
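The grouped-spread diagnostic and the Spearman consistency check can be sketched as follows (Python, with binned medians standing in for the supsmu/lowess smoothers; the rank correlation here ignores ties, which is adequate for continuous intensities):

```python
import numpy as np

def binned_spread(intensity, residuals, n_groups=20):
    """Median absolute residual per equal-size intensity group,
    paired with the group median intensity (a simple stand-in for
    supsmu/lowess smoothing of the absolute residuals)."""
    intensity, residuals = np.asarray(intensity), np.asarray(residuals)
    order = np.argsort(intensity)
    groups = np.array_split(order, n_groups)
    x = np.array([np.median(intensity[g]) for g in groups])
    s = np.array([np.median(np.abs(residuals[g])) for g in groups])
    return x, s

def spearman(a, b):
    """Spearman rank correlation (no tie correction): positive values
    indicate residual spread increasing with intensity, negative
    values a decreasing spread."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))
```

Applied to 'absolute residuals versus log_{2}(Cy3)', a decreasing binned curve together with a negative Spearman coefficient indicates residual variance falling with intensity, and vice versa.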
We show an example of the results in Table 1 and Figures 11,12,13,14,15,16 for the cup5 dataset. Figure 11 demonstrates the dependence of the smoothed absolute residuals on the smoothing parameters of the supsmu and lowess procedures. As expected, the supsmu procedure is more sensitive to prominent outliers in low-intensity regions because it uses an automatically adjusted variable span. Prominent outliers in the low-intensity area are both Y- and X-outliers and should be discarded. For the majority of cup5 data, supsmu and lowess generate similar results. Figure 12 shows supsmu and lowess smoothing for 20 median-based sequential intervals of equal size, using different values of the smoothing parameters, at 'higher' resolution. Figure 13 shows the same with the cup5 data in the background. Figures 14 and 15 show boxplots for residuals using 10 and 20 sequential intervals of equal size, respectively. They too confirm the presence of heteroscedasticity. Boxplots for residuals with 20 subgroups of equal size, using ±3IQR-based upper and lower extremes, give an estimate of k = 75. This estimate is close to the k = 61 identified by adjusted supsmu-based 99.998% simultaneous tolerance intervals (STIs) (Table 2). The difference in the estimates (14, or about 19%) can be explained by the fact that the ±3IQR rule generates approximately a 99.995% two-sided tolerance interval for a normally distributed population, while for a sample of finite size the corresponding upper and lower tolerance limits are wider (compare Equations 8 and 9) in order to cover 99.998% per residual group. Figure 16 is a smoothed version of Figure 14 using the supsmu and lowess procedures for the 3IQR-based extreme limits.
Table 1. Detecting heteroscedasticity
Table 2. Candidate differentially-expressed genes identified with different approaches
Figure 11. The use of smoothed absolute residuals to diagnose and quantify residual heteroscedasticity. 'Absolute residuals versus log_{2}(Cy3)' scatter plot smoothed using supsmu and lowess with different values of the smoothing parameters bass and f, respectively. The figure illustrates the dependence of the smoothing effect on the magnitude of the smoothing parameters. bass controls the low-frequency emphasis when using cross-validation; the larger the value of bass (up to ten), the smoother the fit from automatic span selection [51,52,63]. f is the fraction of the data used for smoothing at each log_{2}(Cy3) point; the larger the f value, the smoother the fit [51,52,63].
Figure 12. The use of smoothed absolute residuals for sequential intensity intervals. 'Absolute residuals versus log_{2}(Cy3)' scatter plot based on supsmu and lowess for 20 sequential intervals of equal size and using different values of smoothing parameters (see legend to Figure 11 for details). Data are shown with higher resolution than in Figure 13.
Figure 13. The use of smoothed absolute residuals for sequential intensity intervals. 'Absolute residuals versus log_{2}(Cy3)' scatter plot with supsmu and lowess smoothing based on 20 sequential intervals of equal size, with the values of the smoothing parameters shown (see legend to Figure 11 for details). The scale differs from that of Figure 12.
Figure 14. Boxplots for residuals using ten sequential intervals of equal size. A box corresponds to the IQR (interquartile range), the midpoint is the sample median, and the whiskers are 3IQR limits. Outliers (non-regular observations) are points outside the whiskers. The abscissa is based on the medians of ten intervals of approximately equal size (the total sample size is 6,068; each of the first nine sets contained 606 points, the tenth 614).
Figure 15. Boxplots for residuals using 20 sequential intervals of equal size. Details are the same as for Figure 14 but using 20 sequential intervals.
Figure 16. Diagnostics for heteroscedasticity based on the upper and lower extremes computed as 3IQR for ten sequential intervals of equal size and using the supsmu and lowess smoothers (compare with Figure 14).
Statistical significance of outliers: ordinary and smoothed STIs
Our method equates contaminants of bivariate normal distributions (outliers) with candidates for differential expression. The outlier identification method has been developed previously for other applications [16,27,28]. We employ an approach based on the perspective of α-outliers and outlier regions [15,16]. In this approach, a point above the line of equivalence (that is, Cy3 = Cy5) is viewed as a candidate for an up-regulated gene, one below the line as a down-regulated gene and one in the vicinity of the line as an unchanged gene. Intuitively, points further away from the line (stronger outliers) are most likely to represent differentially-expressed genes. In other words, the probability that the observed difference in transcript level between the two samples might have arisen by chance decreases. To quantify these qualitative ideas, we applied statistical criteria to decide when points might result from no actual difference in expression (for example, due to random fluctuations) versus those corresponding to genuine differential expression. We used a general α-outlier model for residuals (log-transformed normalized ratios) to identify candidates for differential expression.
To estimate the statistical significance of outliers, we used simultaneous tolerance intervals (STIs) based on Scheffé's simultaneous confidence principles [29,30]. This approach guarantees a desired confidence level across the whole range of the predictor variable X = log_{2}(Cy3), log_{2}(Cy5) or log_{2}(Cy3·Cy5)/2 and for any coverage proportion P = 100q% (the portion of the normal distribution covered by a certain STI; see Methods). We modified the approach using robust regression smoothers (supsmu or lowess) to approximate the unknown relationship, s^{2} = F(X), between residual variance and intensity. Five STIs for the mac1 empirical and simulated datasets are shown (Figure 17a,b). For ordinary STIs, random fluctuations are seen to contribute to data points located away from the line of equivalence. However, the empirical data contain more and stronger outliers than the simulated data (Figure 17b). The five ordinary STIs were constructed under the assumption that the residual variance is constant (homoscedasticity) across the entire range of values of the predictor variable. We note that for a large sample size (N = 6,068, see Methods) ordinary STIs appear as straight lines (for small and moderate datasets they appear as hyperbolas; see Equations 2, 8 and 9 in Methods). Figure 17c shows residuals (log_{2}(Cy5/Cy3)) as a function of X = log_{2}(Cy3) for the empirical mac1 dataset. This plot and the residual plots for the other nine experiments (Figure 18) reveal that the residual variance is not constant (heteroscedasticity). Residual variance is commonly high for small values of X_{i}; it decreases to a minimum and may increase for large values of X_{i}; that is, the empirical dependence appears hyperbolic. We account for the heteroscedasticity by the use of smoothed STIs (Figure 17d). Accordingly, smoothed STIs appear as curves that are wider at low and high X_{i} values.
Therefore, for a given portion of the normal distribution covered by a certain STI, the STI limits at either extreme of X_{i} lie further from the line of equivalence, so points there must deviate more before being flagged as outliers.
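The exact STI formulas (Equations 2, 8 and 9) are given in Methods and are not reproduced here. As a simplified illustration of why finite-sample tolerance limits are wider than plain normal quantiles, the sketch below computes a standard two-sided normal tolerance factor via Howe's approximation, with the chi-square quantile approximated by the Wilson-Hilferty formula; it is a stand-in for, not a reproduction of, the simultaneous intervals used in the paper:

```python
from statistics import NormalDist

def tolerance_factor(n, coverage=0.99998, conf=0.9999):
    """Two-sided normal tolerance factor k (Howe's approximation):
    xbar +/- k*s covers at least `coverage` of the population with
    probability `conf`. The lower chi-square quantile comes from the
    Wilson-Hilferty approximation. For large n, k tends toward the
    plain normal quantile z_{(1+coverage)/2}."""
    nd = NormalDist()
    z_cov = nd.inv_cdf((1.0 + coverage) / 2.0)
    df = n - 1
    z_gamma = nd.inv_cdf(1.0 - conf)  # lower-tail quantile
    chi2_lower = df * (1.0 - 2.0 / (9.0 * df)
                       + z_gamma * (2.0 / (9.0 * df)) ** 0.5) ** 3
    return z_cov * (df * (1.0 + 1.0 / n) / chi2_lower) ** 0.5
```

For N = 6,068 the factor sits only slightly above the corresponding normal quantile, consistent with the observation that ordinary STIs look like straight lines for large samples, while for small samples the factor (and hence the interval) is noticeably wider.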
Figure 17. Five ordinary STIs for the (a) real and (b) simulated mac1 datasets. The line of equivalence (black) has slope 1 and intercept 0, which corresponds to the case Cy5 = Cy3. (c,d) Scatter plots of residuals for the real mac1 dataset versus the predictor variable. In (c,d) and in other figures, 'residuals' means 'log_{2}(Cy5/Cy3)'. (c) shows the same data as (a) except that the linear trend has been subtracted, resulting in slope = 0. The ordinary STIs assume residual homoscedasticity. (d) STIs corrected with the S-Plus scatter plot smoother supsmu to reveal the dependence of the residual variance on the value of the predictor variable. The supsmu-based STIs assume residual heteroscedasticity. Pink and cyan points lie in the interval between the upper and lower 95% and 99% STIs, respectively. Red and blue points lie above the upper and below the lower 99% STI, respectively. Black points lie inside the 95% STIs. In all panels, the 95% (innermost), 99%, 99.8%, 99.98% and 99.998% (outermost) STIs are shown (covered with probability at least 0.9999). The vertical dotted line marks the location of the minima of the empirical hyperbolas. Red/pink, blue/cyan and black points thus represent up-regulated, down-regulated and unchanged genes, respectively.
Figure 18. Residuals versus X ≡ log_{2}(Cy3) scatter plots for nine different cDNA microarray experiments. Each plot (yer033c, cup5, spf1, ymr031c, vma8, yap1, sod1, fre6 and cin5) shows the 95% (innermost), 99%, 99.8%, 99.98% and 99.998% (outermost) adjusted supsmu-based STIs (covered with probability at least 0.9999). Red and blue dots mark up-regulated and down-regulated genes, respectively, with p-value ≤ 0.01 (≤ 0.05). The corresponding knockout genes are identified as the most prominent down-regulated genes (cin5, sod1, spf1, vma8 and yer033c) or among the most prominent down-regulated genes (cup5, fre6, yap1 and ymr031c), with p-values always much less than 0.00002.
For mac1, the width of the smoothed STIs is somewhat greater at low intensities than at high intensities. In the mid-range of X_{i} values, smoothed STIs are narrower than ordinary STIs, which do not consider heteroscedasticity. Therefore, candidates for differentially-expressed genes are more likely to be identified in the middle range of X_{i} values and less likely to be identified at the extremes.
We evaluated the lowess and supsmu smoothing procedures by applying them to a simulated dataset drawn from an 'ideal' bivariate normal population with the same parameters as the empirical mac1 dataset. The robust scale estimates using the Huber τ-estimator for scale and the supsmu- and lowess-based scale estimators are shown in Figure 19a. The smoothed scale estimators generate approximately straight lines parallel to the Huber τ-scale estimate.
Figure 19. Scatter plots of residuals for mac1 versus log_{2}(Cy3) as the independent variable, together with various STIs. (a) Residuals for artificial data drawn from a bivariate normal distribution with the same parameters as the real mac1 data shown below. This plot is based on the same simulated dataset as in Figure 17b except that the linear trend has been subtracted, resulting in slope = 0. A robust scale estimator (a Huber τ-estimate of scale) for residuals (green, outer) and the supsmu- (purple, middle) and lowess-based (orange, inner) scale estimators are shown. The scale factors calculated to adjust the supsmu- and lowess-based scale estimates were 1.25 and 1.35, respectively. (b) Adjusted supsmu-based STIs for the real data at the 95% (innermost), 99%, 99.8%, 99.98% and 99.998% (outermost) levels. These adjusted STIs take into account differences between the ordinary STIs and supsmu-based STIs for an artificial dataset having the same parameters (location, scale and coefficient of correlation) as its cognate real data. The vertical dotted line marks the location of the minima of the empirical hyperbolas. Red or pink and blue or cyan dots correspond to up-regulated and down-regulated genes, respectively, with p-value ≤ 0.01 (≤ 0.05). The most prominent down-regulated gene, with case index number 4,301, is Mac1 (see also Table 3), which is not expressed in the mac1 strain.
Table 3. Candidate differentiallyexpressed genes
Adjusted STI
An adjustment for Gaussian efficiency is necessary when applying robust estimators [31] such as those we use for outlier identification in the presence of heteroscedasticity. We therefore adjust the smoothed STIs to improve their accuracy. We calculate an adjustment constant (scale factor) to compensate for the difference between the Huber τ-estimator for scale and the supsmu- or lowess-based scale estimators (see Methods for details). The adjustment constant is used as a scale factor for the smoothed STIs for empirical data. Adjusted smoothed STIs are shown in Figure 18 and Figure 19b. The dramatically different STIs among the ten datasets reflect their individual patterns of residual variance and demonstrate the necessity of tailored analysis of each dataset.
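The adjustment constant can be illustrated on homoscedastic simulated residuals. In this sketch, 1.4826·MAD stands in for the Huber τ-estimator and binned means of |residuals| stand in for the supsmu curve (both are assumptions for illustration); for normal data a mean-like smooth of |residuals| tracks E|r| = σ√(2/π) ≈ 0.798σ, so the factor lands near the 1.25 reported for supsmu in Figure 19a:

```python
import numpy as np

def adjustment_factor(n=6068, sigma=0.3, n_groups=20, seed=0):
    """Ratio of a global robust scale estimate (1.4826*MAD, a stand-in
    for the Huber tau-estimator) to the average level of a smoothed
    |residual| curve (binned means here, standing in for supsmu).
    For normal residuals the smooth tracks E|r| = sigma*sqrt(2/pi),
    so the factor is about 1.253."""
    rng = np.random.default_rng(seed)
    r = rng.normal(0.0, sigma, n)
    global_scale = 1.4826 * np.median(np.abs(r - np.median(r)))
    groups = np.array_split(np.arange(n), n_groups)
    smooth_level = np.mean([np.mean(np.abs(r[g])) for g in groups])
    return float(global_scale / smooth_level)
```

Multiplying the smoothed scale curve by this factor restores consistency with the global robust scale estimate before the smoothed STIs are applied to the empirical data.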
Candidates for differential expression
We identify candidates for differential expression by using STIs containing the P = 100(1 − α)% portion of the normal distribution, covered with probability at least 1 − γ (see Methods). For mac1, no simulated data point lies outside the 99.998% ordinary STIs (γ-level = 0.0001), suggesting that empirical data points outside the corresponding adjusted smoothed supsmu-based STIs are good candidates for differentially-expressed genes. For the mac1 analysis (Table 3), 41 candidate genes for up-regulation and 20 candidates for down-regulation are identified using a 99.998% (γ-level = 0.0001) adjusted supsmu-based STI. For the ten datasets examined, up to approximately 2% of the genes were candidates for differential expression (see Table 2, Table 3 and Table 4 for mac1 data and a comparative summary table for all ten datasets in Additional data). Overall, adjusted smoothed STIs provide a better balance between sensitivity and specificity across the whole range of predictor variable values (log_{2}(Cy3)) and are thus more reliable than ordinary STIs. The approach takes into consideration multiplicity of comparisons, variation in the experimental response around the line of equivalence (or around zero for residuals) and intensity-dependent variation in residual variance.
Table 4. Comparison of genes identified as differentially expressed in mac1Δ
Differential expression in a single cDNA microarray: adjusted smoothed STIs and existing methods
We compared the adjusted smoothed STI technique for identifying differentially-expressed genes with other methods for single cDNA microarray data [13,14,17-19]. Within the framework of outlier detection analysis, the primary difference among these methods is the means used to define statistical intervals (see Discussion for the details of each model). In the arbitrary ratio approach, Y_{i} − X_{i} = log_{2}(Cy5/Cy3) = r_{i} defines a gene i as differentially expressed if |r_{i}| > t, where t is a user-defined threshold [13,14,17]. The most frequently used value of t corresponds to residuals of −1 (twofold down) and +1 (twofold up). Figure 20 compares differential expression in three different experiments using a ratio threshold and adjusted supsmu-based STIs. For t = ±1, any gene outside this cutoff would be deemed up- or down-regulated. However, employing the criterion of genes beyond the 99.998% (γ-level = 0.0001) adjusted supsmu-based STIs would yield additional candidates for differential expression. Although t = ±1 seems more conservative for these datasets, it may be overly liberal for others.
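For contrast, the arbitrary-ratio rule reduces to a one-line test (a minimal Python sketch):

```python
import numpy as np

def fold_change_candidates(residuals, t=1.0):
    """Arbitrary-ratio rule: flag gene i when |r_i| > t, i.e. more
    than a 2**t-fold change, regardless of intensity level and of the
    local residual variance (contrast with the STI approach)."""
    r = np.asarray(residuals)
    return np.where(r > t)[0], np.where(r < -t)[0]
```

Because the threshold t is constant across intensities, this rule cannot adapt to the hyperbolic pattern of residual variance that the STIs accommodate.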
Figure 20. Comparison with a twofold cutoff. Candidates for differentially-expressed genes defined using adjusted supsmu-based STIs or a threshold for ratios from the (a) mac1, (b) spf1 and (c) cin5 experiments. The dashed horizontal lines with intercepts of −1 and +1 correspond to twofold changes in the log-transformed (base 2) ratio. Red and blue dots denote genes up-regulated and down-regulated, respectively, according to this criterion. Moving away from the zero line, the 95%, 99%, 99.8%, 99.98% and 99.998% adjusted supsmu-based STIs are shown (covered with probability at least 0.9999). Further details about the mac1 genes identified by case index numbers can be found in Table 3 (see also Table 2 for a comparison of the adjusted supsmu-based STIs with other procedures).
Hughes et al. [18] developed an error model that makes use of additional information about the variability of each gene, based on 63 'same versus same' control experiments. Figure 21a and Table 4 compare differential expression in the mac1 data as defined using the 'gene-specific' error model [18] and adjusted supsmu-based STIs. As we discuss below, some genes that were identified as differentially expressed using our adjusted supsmu-based STIs were not identified by the error model [18]. Figure 21a,b compares our approach with four other models [17,19,20] (see also Discussion) in an outlier detection framework.
Figure 21. Comparison with other methods. (a) Comparison of mac1 candidates for differentially-expressed genes defined using adjusted supsmu-based STIs and by Hughes et al. [18]. Red and blue dots denote genes designated as up-regulated and down-regulated, respectively, by the Hughes et al. 'gene-specific' error model at p-value ≤ 0.05. The 95%, 99%, 99.8%, 99.98% and 99.998% adjusted supsmu-based STIs are shown (covered with probability at least 0.9999). The vertical dotted line marks the location of the minima of the empirical hyperbolas. (b) Comparison of the 95% adjusted supsmu-based STIs (red) with statistical intervals based on three other single-slide methods: a hierarchical Gamma-Gamma-Bernoulli model [19] with posterior odds of change in expression of 1:1 (innermost), 1:10 (middle) and 1:100 (outermost); a mixture of orthogonal residuals with a posterior probability of differential expression of 95% (blue), based on the approach of Sapir and Churchill [20]; and a 95% band using an asymmetric density function for raw ratios (Cy5/Cy3) assuming the same coefficient of variation for both fluors [17].
In general, our adjusted smoothed STI method generates narrower bands in the mid-range of gene expression levels and broader bands in the low- and high-intensity regions. For mac1, the bands for Newton et al.'s method [19] and for our method appear qualitatively similar. The STI-based measure of statistical significance takes into consideration the unique features and properties of empirical microarray datasets.
Simulation studies
We carried out simulation studies using sample parameter estimates from the mac1 dataset to assess the performance of each of the single-slide methods. We created artificial datasets with 100 candidates (outliers) for differential expression. We simulated k = 100 non-regular observations and N − k = 6,068 − 100 = 5,968 regular observations (the main body of non-differentially-expressed genes). A random component was added to each outlier value using a standard normal distribution with variance dependent on intensity. This set of 100 represents the 'true' outliers due to 'differential expression'. We simulated the heteroscedasticity present in many datasets by including intensity-dependent variability at the low and high intensity levels for both non-regular and regular data points. We then compared the ability of each method to identify candidates for differential expression in multiple repeat runs of the simulation. (R code and data used for the simulations can be obtained from the authors upon request.) Figure 22 shows a plot of the simulated data with true outliers shown in red. We compared the performance of several different single-slide methods at ten 'cutoff' levels of relatively equivalent stringency, as shown in Table 5 (except for Chen et al. [17], which uses only two levels of significance). We compared PPV (positive predictive value), NPV (negative predictive value), sensitivity, specificity and likelihood ratios at each cutoff for each method (see definitions of the test accuracy measures in [32] and in Additional data). We plotted these results in Figure 23 as a receiver operating characteristic (ROC) curve, a PPV curve, and a likelihood ratio curve. These results clearly demonstrate that our method outperforms existing single-slide methods, with improved positive predictive values, likelihood ratios and higher ROC curves (greater area under the curve).
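The authors' R simulation code is available only on request, but the design described above can be sketched in Python. All numerical parameters below (means, spread, fold-change shift, heteroscedasticity factor) are illustrative stand-ins, not the mac1 estimates used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_slide(n_regular=5968, n_outliers=100,
                   mean=(8.0, 8.0), sd=1.5, rho=0.98):
    """Contaminated bivariate normal 'single slide': regular genes lie
    near the line log2(Cy5) = log2(Cy3); outliers are shifted off it.
    Parameters are illustrative, not the mac1 sample estimates."""
    cov = [[sd**2, rho * sd**2], [rho * sd**2, sd**2]]
    regular = rng.multivariate_normal(mean, cov, size=n_regular)

    # Intensity-dependent heteroscedasticity: inflate the residual
    # spread away from the middle of the intensity range.
    intensity = regular.mean(axis=1)
    extra = 0.2 * np.abs(intensity - np.median(intensity))
    regular[:, 1] += rng.normal(0.0, 1.0, n_regular) * extra * 0.1

    # 'True' outliers: shift log2(Cy5) by +/-2 (fourfold change) plus
    # a small random component, mimicking the added noise in the text.
    base = rng.multivariate_normal(mean, cov, size=n_outliers)
    shift = rng.choice([-2.0, 2.0], size=n_outliers)
    base[:, 1] += shift + rng.normal(0.0, 1.0, n_outliers) * 0.1
    return regular, base

reg, out = simulate_slide()
```

Each method can then be run on the pooled `reg` and `out` points, with `out` serving as the known truth set for the accuracy measures.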
These performance differences are most apparent at the most stringent significance levels, which are likely to be most relevant in the context of multiple comparisons.
Table 5. Comparison of performance with simulated datasets with 100 true positives
Figure 22. Simulated dataset for method performance testing. Non-regular observations (k = 100) and regular observations (N − k = 6,068 − 100 = 5,968), the main body of non-differentially-expressed genes, were used for the simulations. Sample parameters are taken from the mac1 dataset. A random component was added to each outlier value using a standard normal distribution with variance dependent on intensity. Heteroscedasticity for regular observations was also simulated by including intensity-dependent variability at the low and high intensity levels. Three non-parametric smoothing methods were used to check absolute residuals for heteroscedasticity: supsmu, lowess and loess (the latter based on locally-quadratic fitting).
Figure 23. Comparison of performance of three single-slide methods on simulated data. (a) ROC curves for simulated data for each method. Each point represents one of the ten cutoffs used in Table 5 for each method. The area under the curve is one for an ideal method. ROC curves do not take prevalence into account, and upper curves have better accuracy than lower ones (see text for details). (b) PPV curves, which represent the probabilities that a gene identified as differentially expressed represents a true positive. (c) Likelihood ratio (or Bayes' factor) curves are calculated as Sensitivity/(1 − Specificity) and can also be defined as the ratio of posterior odds to prior odds. PPV values take into account the prevalence of differential expression in the simulated population.
Comparison of biological significance of mac1 results
The effects of the mac1Δ deletion on metabolism and gene expression in yeast are well documented. The absence of Mac1p, a copper-responsive transcription factor, results in downregulation of copper uptake transporters and subsequent copper deficiency [33-36]. Copper is required for Fet3p, which in turn is necessary for iron uptake in yeast. As a consequence, copper deficiency results in secondary iron deficiency [37,38]. Iron deficiency leads to activation of the iron-responsive transcription factors Aft1p and Aft2p, which induce transcription of a host of genes encoding proteins involved in iron uptake [39]. The identification of upregulation of these target genes provides a reasonable biological standard for comparing the performance of the different methods. In addition, downregulation of Mac1p targets might be expected in a mac1Δ mutant. In Table 4, we present a comparison of the methods, at relatively equivalent high levels of stringency, for likely Aft1/2p and Mac1p targets present on the arrays and identified by at least one method as differentially expressed. We also include the twofold cutoff for comparison. A total of 13 genes were not identified as upregulated by Hughes et al. Two genes previously identified as a Mac1p target (YFR055W) [40] or as downregulated in mac1Δ (CTT1) [33], and MAC1 itself, were not identified by the Hughes et al. method. While identifying many of the Aft1/2p targets excluded by Hughes et al., the other two methods did not identify MRS4 or SMF3, which are regulated in response to iron deficiency, nor did they identify YFR055W and CTT1 as downregulated. We suggest that these results provide some biological validation of our approach and indicate improved performance of our method over the other methods at the stringent significance levels necessary given the multiplicity of comparisons.
Discussion
Multiple replications in the design of reagents (multiple spotting of each gene on a microarray) and experimental approach (multiple replicates of each hybridization) provide the soundest approach to confirming differential expression of genes (see, for example, [2-12]). However, experimental realities such as limited samples (for example, tumor specimens), large numbers of samples (for example, time-course experiments) and experimental cost have resulted in the vast majority of published cDNA microarray studies using limited or no replication. Several methods currently exist for the analysis of data from experiments with limited or no replication. Unfortunately, real microarray data generally violate the assumptions underlying these methods.
Limitations of underlying assumptions of current single-slide methods
Chen et al. [17] assumed that raw, non-normalized and non-log-transformed Cy5 and Cy3 intensities (raw intensities) are drawn from independent normal populations with a common coefficient of variation, and derived an asymmetric density function for raw ratios. This results in asymmetric bands, with the identification of more upregulated than downregulated genes irrespective of the dataset (compare Figure 4 in [19] and Figure 7 in [8]).
Newton et al. [19] assumed that because raw Cy5 and Cy3 intensities are always positive, they can be considered as observations from a Gamma distribution with the same coefficient of variation (if Cy3 and Cy5 are independent, their joint distribution is a bivariate Beta distribution). Hierarchical Gamma-Gamma and Gamma-Gamma-Bernoulli models were formulated in which the posterior odds of change in expression were functions of additive (Cy5 + Cy3) and multiplicative (Cy5 × Cy3) intensity terms. Contours of the posterior odds in (X ≡ log(Cy3), Y ≡ log(Cy5)) scatter plots were used to identify differentially-expressed genes. In practical situations, it may be difficult to determine whether data are lognormal or Gamma [41], but we argue that the former is more realistic for microarray data because the combination of biological and experimental noise results in the majority of the measured expression levels changing randomly, independently, non-directionally, and by small amounts. The central limit theorem would therefore predict bivariate normality for the majority of log-transformed spot intensity values.
Sapir and Churchill [20] compute the posterior probability of differential expression using a mixture of orthogonal residuals derived from ordinary least squares regression of (X ≡ log_{2}(Cy3), Y ≡ log_{2}(Cy5)). The approach assumes that differentially-expressed genes are drawn from populations with unknown distributions approximated by uniform distributions. This mixture model approach assumes that all outliers (k non-regular observations) follow the same distribution, D_{0} = D_{1} = ... = D_{k}. Under a mixture model, contaminants are less separated from the regular observations than under a Ferguson-type model [16]. The method used to obtain orthogonal residuals in this approach is not resistant to outliers [42] and is redundant, because the use of log-transformed ratios log_{2}(Cy5/Cy3) assumes normalization by linear regression to enforce a slope equal to 1 and an intercept equal to 0 (compare [8]). This approach does not take residual heteroscedasticity into consideration. Similarly, Yue et al. [1] described a method based on parametric two-sided tolerance intervals for ratios. This approach considers neither residual heteroscedasticity nor the multiplicity of comparisons.
Hughes et al. [18] performed 63 control 'same versus same' hybridizations in addition to 300 'treatment versus control' cDNA microarray experiments, many in duplicate. They filtered candidates for differential expression on the basis of information about individual variability in expression levels, and genes with unusually high variation were discarded (Figure 21a). One possible drawback is the assumption that the expression variance is the same in both samples. Hughes et al. [18] quantified residual heteroscedasticity using weighted location and scale estimators for 'same versus same' replicates. Their method used non-robust versions of the estimators; consequently, the location estimates may be biased and the scale estimates significantly inflated in the presence of outliers [43]. In addition, since this method uses extensive 'same versus same' hybridizations, it cannot be considered a single-slide method.
Advantages of the α-outlier model and outlier identification method
In this work, we have described a post hoc (data-oriented) method which makes fewer assumptions about the nature of the data, tests those assumptions to ensure their validity, and produces computationally reasonable results. We showed that a reasonable model is one in which the processed fluorescent intensity values are samples drawn from a bivariate normal population contaminated with outliers and possibly distorted by heteroscedasticity. After normalization by a robust linear regression fit to make the slope equal to 1 and the intercept equal to 0, most data points in the log_{2}(Cy5) versus log_{2}(Cy3) scatter plot generally lie close to the line of equivalence (log_{2}(Cy5) = log_{2}(Cy3)), while a limited number of data points, outliers, lie outside this vicinity. The outliers are good candidates for differentially-expressed genes. The further an outlier is located from the line of equivalence, the more likely it is to represent a systematic outlier rather than a chance observation. The α-outlier-generating model approach for identifying differentially-expressed gene candidates makes no assumptions about the outlier distribution or the dependency structure of the candidates. The mixture model of Sapir and Churchill [20] assumes that all candidates have the same distribution D_{0} (D_{0} = D_{1} = ... = D_{k}, see Methods). Other approaches (for example, [17,19]) assume independence of channel intensities. The individual distributions are usually not known and could be different for each gene, since only replication allows estimation of the distributions. Multiple levels of dependence, such as co-regulated genes, are expected rather than unlikely in gene expression analysis. Since our model and analysis approach do not require such assumptions, which are likely to be violated by the data, we would argue that our approach is more realistic and more generally applicable.
Accommodating heteroscedasticity in outlier identification
We compensate for a source of reproducible systematic technical error, heteroscedasticity, by using robust non-parametric regression smoothers to quantify the differences in the variability of gene expression values as a function of spot intensity. STIs corrected for heteroscedasticity and adjusted for Gaussian efficiency, relative to the line of equivalence (Cy5 = Cy3), serve as a probabilistic tool for identifying outliers. Our approach uses robust scatter plot smoothing techniques to simultaneously diagnose and quantify the variance structure of the data, allowing natural accommodation of heteroscedasticity in the identification of outliers. This post hoc approach makes sense especially in view of the large sample sizes common in microarray experiments.
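The paper uses supsmu/lowess smoothers in R/S-plus for this step; as a rough illustration of the same idea, the intensity-dependent scale σ(I) can be estimated with a running median of absolute residuals (the window size and the 1.4826 Gaussian-consistency factor are our illustrative choices, not the paper's adjustment procedure):

```python
import numpy as np

def running_scale(intensity, residuals, window=101):
    """Crude stand-in for supsmu/lowess smoothing of absolute
    residuals: a running median of |residual| along the intensity
    axis, rescaled by 1.4826 so that it estimates a Gaussian sigma."""
    order = np.argsort(intensity)
    abs_res = np.abs(np.asarray(residuals, float))[order]
    half = window // 2
    scale = np.empty_like(abs_res)
    for j in range(len(abs_res)):
        lo, hi = max(0, j - half), min(len(abs_res), j + half + 1)
        scale[j] = 1.4826 * np.median(abs_res[lo:hi])
    out = np.empty_like(scale)
    out[order] = scale          # back to the original spot order
    return out                  # estimated sigma(I) at each spot
```

Dividing each residual by its local σ(I) estimate before applying the outlier cutoffs is one simple way to accommodate heteroscedasticity in this spirit.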
The α-outlier-generating model can be extended to multiple-slide studies
We can extend our adjusted smoothed STI approach to datasets with multiple levels of replication. This provides a consistent method for experiments with and without replication. It is not clear how existing single-slide methods could be adapted for multiple-slide comparisons; usually two methods, each with its own data model and assumptions (one for single-slide analysis and a second for multiple-slide analysis), are used.
Transformations versus interpretation of microarray datasets
Our method is based on limited data transformations (for example, background subtraction, log-transformation and global channel normalization) designed to preserve the data distribution and account for heteroscedasticity. A variety of nonlinear transformation methods can be used to remove heteroscedasticity, for example, variance-stabilizing monotonic continuous nonlinear transformations [21-23,44]. Equalizing residual variance in this manner does not guarantee that bivariate normality will be preserved for the majority of genes, which are not differentially expressed. A model distribution assumption is especially important for statistical inference in the case of limited or no replication. Nonlinear transformation methods require preliminary research and computational experimentation with different types of transformations for each specific microarray dataset in order to choose among them. Although transformation methods could represent a valuable approach to microarray data analysis, any complex nonlinear data transformation calls the validity of the transformed data into question. Therefore, the application of these methods requires trial and error followed by validation of each transformation for a particular experimental dataset [44]. We suggest that our approach, which relies on interpretation of the existing data distribution, including any heteroscedasticity, rather than on methods that change distributions, provides a reasonable alternative to variance-stabilizing methods.
Multiple comparisons in microarray data analysis
Typically, microarray data involve thousands of genes, so there is clearly a problem of multiplicity of comparisons. Other model-based single-slide approaches do not consider this issue explicitly (see the single-slide procedures described in [1,13,14,17,18]). First, we identify candidate outliers without correction to obtain unadjusted p-values (Table 3). A p-value is the probability of rejecting the null hypothesis when the null hypothesis is true, and represents a measure of statistical significance in terms of the false positive rate. One way to obtain adjusted p-values is to apply a Bonferroni correction based on N (the sample size of the entire dataset), which may be too conservative, so we examine two alternative corrections. In the first, we apply a multiplicity-of-comparisons correction based on an estimate of k (the number of non-regular observations) rather than on the sample size of the entire dataset. This approach emphasizes stable outliers at the expense of other possible outliers (that is, among the N − k points) which are inliers in the current single-slide experiment. Clearly, this Bonferroni correction by k provides a much less conservative result than the correction by N, and, we would argue, a more reasonable correction for identifying true outliers. Other robust exploratory tools (see Methods) can be used to estimate k. In a more sophisticated approach to these issues, the q-value is calculated from the ordered list of unadjusted p-values [45,46] (Figure 24). The q-value is the minimum false discovery rate [47] for a particular feature from a list of all features [45,46]. The false discovery rate is the proportion of true null hypotheses among all null hypotheses that were found to be significant; for example, a false discovery rate of 1% means that, among all candidates for differential expression found significant, 1% on average are true nulls [46].
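The contrast between correcting by N and by an estimate of k can be made concrete with the per-comparison level α' = 1 − (1 − α)^{1/m} used in Methods (the numbers below are only an example using the mac1 sample size):

```python
def per_test_alpha(alpha, m):
    """Per-comparison significance level alpha' such that the
    familywise level over m comparisons is alpha:
    alpha' = 1 - (1 - alpha)**(1/m), as in Methods."""
    return 1.0 - (1.0 - alpha) ** (1.0 / m)

# Correction by the full sample size N (conservative) versus by an
# estimate of the number of non-regular observations k.
N, k_hat, alpha = 6068, 100, 0.05
alpha_by_N = per_test_alpha(alpha, N)   # very stringent per-gene level
alpha_by_k = per_test_alpha(alpha, k_hat)  # much less conservative
```

With these numbers, correcting by k yields a per-gene threshold roughly sixty times looser than correcting by N, which is why the k-based correction retains more candidate outliers.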
Figure 24. Interpreting q-values and calibrating q-value cutoffs. Four plots to facilitate q-value interpretation and calibrate the q-value cutoff [45,46] using the function qplot(). (a) The estimated proportion of true null hypotheses (π_{0}) versus the tuning parameter λ (the 'bootstrap' method is used by the software for automatically choosing λ, and the π_{0} estimate is 0.978). (b) The expected proportion of false positives (q-value) for different p-value cutoffs. (c) The number of significant candidates for differential expression for each q-value. (d) The expected proportion of false positives as a function of the number of candidates for differential expression called significant. The dotted black line in (a) is the π_{0} approximation using the bootstrap method; the dotted colored lines in (b) (green for expected false positives (FP) on average < 1 and red for < 0.1) are used to match q- and p-value levels (0.011 and 0.0016 for expected FP < 1, and 0.0015 and 0.000015 for expected FP < 0.1, respectively) for the expected FP cutoffs; the dotted colored lines in (c) are used to match q-value cutoffs (0.011 and 0.0015) and the number of significant tests on average (83 and 59); the dotted colored lines in (d) are used to match expected FP cutoffs (< 1 and < 0.1) and the number of significant tests on average (83 and 59, respectively).
Exploratory and confirmatory differential gene expression analysis
We suggest distinguishing explicitly between exploratory data analysis, to identify candidates for differential gene expression, and confirmatory analysis, to identify differentially-expressed genes based on strict statistical inference. Exploratory differential gene expression analysis is appropriate for datasets with limited replication, to identify the most likely candidates for differential expression. Clearly, additional independent experimental approaches or additional replicates are needed to confirm the exploratory analysis and to distinguish chance outliers from systematic outliers. Alternatively, confirmatory differential gene expression analysis with multiple layers of experimental and technical replication provides sound conclusions based solely on the microarray datasets. Nevertheless, exploratory microarray data analysis followed by independent confirmatory validation studies (for example, quantitative RT-PCR) represents a practical and cost-effective solution for expression studies.
Methods
Transcript profiling data
The transcript profiling datasets examined in this study are from a published study that employed cDNA microarrays to compare gene expression in wild-type S. cerevisiae and single-gene deletion mutants [18]. Hughes et al. monitored 6,295 S. cerevisiae genes [18] in their study. For our analysis we used 6,068 of the 6,295 genes monitored. For each experiment, we used a Monte Carlo procedure to generate simulated datasets of N = 6,068 data points drawn from an 'ideal' bivariate normal population having the same parameters of location (means) and variance-covariance matrix as the empirical, observed data (the R/S-plus function rmvnorm()).
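The rmvnorm() step translates directly to Python. This sketch (function name and interface are ours) draws a synthetic dataset of the same size from a bivariate normal with the sample means and variance-covariance matrix of the observed data:

```python
import numpy as np

def simulate_ideal(data, rng=None):
    """Counterpart of the rmvnorm() step: sample an 'ideal' bivariate
    normal dataset matching the location and variance-covariance of
    the observed (N x 2) log-intensity data."""
    rng = rng or np.random.default_rng(0)
    data = np.asarray(data, float)
    mean = data.mean(axis=0)                 # sample means
    cov = np.cov(data, rowvar=False)         # sample covariance matrix
    return rng.multivariate_normal(mean, cov, size=len(data))
```

Comparing the empirical scatter plot with such a simulated one is what makes the deviations from bivariate normality (outliers, heteroscedasticity) visible.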
Data preprocessing
For each experiment, the observed data consisted of N = 6,068 pairs of Cy3 and Cy5 fluorescent intensity values which had been background corrected, tested for linearity, normalized by total signal intensity and log-transformed [18]: log_{2}(Cy3)_{i}, log_{2}(Cy5)_{i}, i = 1, ..., 6,068.
Exploratory data analysis
The empirical bivariate intensity data {log_{2}(Cy5), log_{2}(Cy3)} and simulated datasets were examined using the following exploratory data analysis tools: concentration ellipses (ellipse() from package ellipse in R) and QQNP for residuals (qqnorm()).
Graphical tests for normality
Our approach assumes that the majority (more than 50%) of the genes under consideration are not differentially expressed. We posit that changes in the expression levels of these genes are random, independent, non-directional and relatively small. These assumptions suggest that, for the majority of data points, scatter plots of log_{2}(Cy5) versus log_{2}(Cy3) should exhibit bivariate normality, or univariate normality if log_{2}(Cy5/Cy3) is used as the measure of differential expression. Since we assume linearity (additivity), the data generally need to be normalized to remove any nonlinearity using the global or print-tip group lowess method [48].
In addition to mac1, we examined data from nine other experiments: cin5/YOR028C, cup5/YEL027W, fre6/YLL051C, sod1/YJR104C, spf1/YEL031W, vma8/YEL051W, yap1/YML007W, yer033c and ymr031c. We tested these data for normality using three tools: scatter plots with concentration ellipses (tolerance ellipses), ordinary QQNPs for residuals, and QQNPs with simulation envelopes. The first and third tools can also be used to identify candidates for differential expression if the data are homoscedastic. We note that homoscedasticity may be imposed by applying variance-stabilizing transformations (see Discussion).
For each experiment, we examined empirical and simulated data: firstly, log-transformed normalized channel intensities, log_{2}(Cy5) versus log_{2}(Cy3); secondly, channel intensities simulated as random samples from a bivariate normal population with the same location and scale parameters as the first set; and thirdly, the same as the first set but after removing outliers, defined as data points outside the 99.9% cutoff obtained using robust location and scale estimators. We used these graphical tools rather than formal, analytical tests for univariate and bivariate normality [49,50] because the latter are sensitive to even small departures from normality when the sample size is large, that is, many thousands of data points.
Scatter plots with concentration ellipses (tolerance ellipses)
We overlay scatter plots of the experimental data with concentration ellipses for a bivariate normal distribution; the ellipses indicate curves of constant probability density for the standard bivariate normal population. The plots can be used as a visual test for bivariate normality and to investigate systematic and random deviations from it. As a diagnostic display for outliers [31], the plots can distinguish (by projecting the data onto the line Y = X [43]) regular observations, with internal X and well-fitting Y, and three types of outliers: Y-outliers only (vertical outliers with internal X and non-fitting Y), X-outliers only (outlying X and well-fitting Y), and combined YX-outliers (outlying X and non-fitting Y).
In this work, we used sample medians and mads (medians of absolute deviations from the sample medians) to construct tolerance ellipses (Figures 2,3,4), but more sophisticated robust location and scale estimators can be utilized. For location, these include the Huber M-estimator and Tukey's bisquare with 96% Gaussian efficiency [51,52] (location.m() in S-plus and hubers() from package MASS in R). For scale, these include the Huber τ-estimate (scale.tau() in S-plus and hubers() in R) and the bisquare A-estimate of scale (scale.a() in S-plus), which are 80% Gaussian efficient [51,52], or the MVE (cov.mve()/plot.mve()) and MCD (cov.mcd()/plot.mcd()) estimators.
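A simplified Python version of the median/mad tolerance ellipse can be sketched as follows. Note one shortcut we take for brevity: the correlation is estimated non-robustly from the standardized data, whereas the paper's more sophisticated options (MVE, MCD) are fully robust.

```python
import numpy as np
from scipy.stats import chi2

def mad(x):
    """Median absolute deviation, scaled by 1.4826 for Gaussian
    consistency."""
    x = np.asarray(x, float)
    return 1.4826 * np.median(np.abs(x - np.median(x)))

def tolerance_ellipse_flags(xy, level=0.999):
    """Flag points outside a concentration (tolerance) ellipse built
    from coordinate-wise medians and mads -- a simplified version of
    the robust ellipses in Figures 2-4."""
    xy = np.asarray(xy, float)
    center = np.median(xy, axis=0)
    scale = np.array([mad(xy[:, 0]), mad(xy[:, 1])])
    z = (xy - center) / scale                     # robust standardization
    r = np.corrcoef(z, rowvar=False)[0, 1]        # non-robust shortcut
    # Squared Mahalanobis distance for a standardized bivariate normal
    # with correlation r, compared against a chi-square(2) quantile.
    d2 = (z[:, 0]**2 - 2*r*z[:, 0]*z[:, 1] + z[:, 1]**2) / (1 - r**2)
    return d2 > chi2.ppf(level, df=2)
```

Points flagged at the 99.9% level correspond to the outlier-removal cutoff used when constructing the third comparison dataset above.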
QQNP for residuals
A QQNP is a plot of the data sorted in ascending order against the corresponding quantiles of the standard normal distribution, that is, a normal distribution with mean zero and variance one [53,54]. An approximately linear plot signifies that the data are reasonably Gaussian. A U-shape suggests that the empirical distribution is skewed. A plot that bends down on the left and up on the right denotes a distribution with heavier (longer) tails than the standard normal. Although useful for analyzing the residual distribution, the QQNP for residuals is not an effective tool for identifying outliers, for three reasons. Firstly, there is no formal test to judge departures from the normal distribution [53]; secondly, residual heteroscedasticity, if any, is not considered; and thirdly, the residuals are linear combinations of random variables and so tend to be more normal ('supernormality') than the underlying error distribution. A possible solution for the latter is the use of simulation envelopes [51,55,56].
QQNP with simulation envelopes for residuals
To enhance the interpretation of QQNPs, an approach based on simulation envelopes can be applied [51,55,56]. The simulation envelopes are obtained from randomly generated normal samples, which are standardized, sorted and then used to identify the maximum and minimum values that form the upper and lower envelopes [51]. The simulation is repeated 1,000 times. For our implementation of the algorithm we used S-plus code developed by Venables and Ripley [51].
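The envelope construction just described (standardize, sort, take pointwise extremes) can be sketched in Python; this is a translation of the idea, not the Venables and Ripley S-plus code itself:

```python
import numpy as np

def qq_envelope(n, n_sim=1000, rng=None):
    """Simulation envelope for a normal QQ plot: simulate n_sim
    standard-normal samples of size n, standardize and sort each,
    and take the pointwise minimum and maximum as the lower and
    upper envelopes."""
    rng = rng or np.random.default_rng(0)
    sims = rng.standard_normal((n_sim, n))
    # Standardize each simulated sample (mean 0, variance 1).
    sims = (sims - sims.mean(axis=1, keepdims=True)) \
           / sims.std(axis=1, keepdims=True)
    sims.sort(axis=1)                     # order statistics per sample
    return sims.min(axis=0), sims.max(axis=0)
```

Sorted, standardized residuals are then plotted against the normal quantiles together with the two envelopes; points escaping the envelope indicate departures from normality that the bare QQNP, because of supernormality, can mask.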
Data structure and outlier model
The term 'outlier' lacks a precise definition. Usually, outliers are interpreted as gross errors or as extreme, spurious, discordant or contaminating observations. In many contexts, outliers are undesirable data points. In our application, they are a matter of considerable biological interest because they are candidates for differentially-expressed genes. Outlier-generating models [15,16] assume that a sample of size N contains N − k regular data points, which are i.i.d. (independently and identically distributed) observations from a distribution F. The remaining k non-regular observations come from other distributions D_{1}, ..., D_{k}. If these distributions are defined as D_{i} = F(x − μ_{i}) or D_{i} = F(x/σ_{i}), where μ_{i} > 0, σ_{i} > 1, i = 1, ..., k, then the resulting models are known as location-slippage (Ferguson-type) or scale-slippage models, respectively. An alternative approach [15] connects outliers with their surprisingly extreme nature. Such a definition makes sense given the logical relationship between extreme observations, outliers and contaminants [28]: extreme observations may or may not be outliers, outliers are always extreme observations, outliers may or may not be contaminants, and contaminants may or may not be outliers.
The following types of outliers are possible in microarray experiments: outliers by chance (due to the finite sample sizes used), sporadic technical or biological outliers, and systematic outliers. Systematic outliers can be divided into reproducible technical outliers (for example, outliers due to heteroscedasticity) and biological outliers, which comprise both differentially-expressed genes and genes with unusually high individual variability in expression. In general, only sufficient replication can distinguish differentially-expressed genes from other types of outliers. However, some types of outliers can be quantified in a single-slide experiment, as we have shown in this article, in 'same versus same' hybridizations, or using a loop design [4,5,57].
Data structure
The distribution F can be any unimodal symmetric distribution with positive density. We require that F be reasonably approximated by a normal distribution N(0, σ^{2}(I)) with zero mean and an unknown intensity-dependent variance. Generally, the extreme contaminants may differ from the regular observations in their distributions, but their locations in the sample may overlap the bulk of regular data points. Outliers may depend on each other as well as on the regular observations.
A single-slide cDNA microarray experiment without spot replication generates 2N channel intensity values that can be reduced to N log-transformed ratios. Therefore, we can consider each single-slide outcome as a sample of size N. As a result, in single-slide experiments the number of genes is the sample size. In M replicated cDNA microarray experiments we have a sample of size M (log-transformed ratios) for every gene. Hence, we can consider a single-slide experiment without spot replication as a special case of the multiple-slide design with M technical replicates, where M = 1. In this case, each gene, rather than each array, can be considered the sampling unit.
Outlier model
We define the following outlier model [15,16], in which an observation is equivalent to log_{2}(Cy5/Cy3).
Consider a random sample of size N consisting of n regular observations and k non-regular observations, or α'-outliers with respect to H_{0}, where N = n + k, α' = 1 − (1 − α)^{1/N}, and k < N/2. In other words, α is a significance level without adjustment for multiplicity of comparisons, while α' is based on a correction for multiple comparisons. The n = N − k regular observations are i.i.d. data points drawn from the same underlying distribution of a parametric family. For example, the H_{0} distribution could be normal, N(μ, σ^{2}), for which the parameters μ and σ^{2} are unknown, and k is unknown as well (with the only restriction being k < N/2). The k non-regular observations have unknown individual distributions D_{1}, ..., D_{k}, which may depend on each other as well as on the regular observations.
As an outlier identification rule, we use a one-step procedure to detect an unknown number of outliers. The approach can be applied to univariate, bivariate and multivariate samples [16]. Given the random sample of N observations, the task is to determine whether a point is an α'-outlier with respect to the underlying normal distribution N(μ, σ^{2}). The estimated outlier region is specified by lower and upper bounds which are functions of N and α'. All points that are less (greater) than the lower (upper) bound fall in the outlier region and are identified as α'-outliers. The bounds are chosen to satisfy the 'normalizing condition' [15,16]:
P(no outliers amongst N i.i.d. data points from a normal distribution) = 1 − α     (1)
Therefore, a regular observation can be identified as an α-outlier, but only with probability α. For the univariate homoscedastic case, consider the value t_{i} = (log_{2}(Cy5/Cy3)_{i} − m)/s, where m and s are location and scale estimators, respectively. An outlier can then be defined as a point i for which |t_{i}| ≥ c_{N}(α'), where c_{N}(α') is a cutoff that is a function of N and α'. The choice of m and s is highly important for performance, and robust versions of these estimators are preferable [15,16].
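The one-step rule for the univariate homoscedastic case can be sketched directly, using the median and mad as the robust m and s and a standard normal quantile as c_N(α'); the specific choice of estimators here is one of several options the text allows, not the paper's final adjusted procedure:

```python
import numpy as np
from scipy.stats import norm

def alpha_outliers(logratios, alpha=0.05):
    """One-step identification rule (homoscedastic sketch): robust
    location m (median) and scale s (mad), per-comparison level
    alpha' = 1 - (1 - alpha)**(1/N), and a two-sided cutoff
    c_N(alpha') from the standard normal quantile."""
    x = np.asarray(logratios, float)
    N = len(x)
    m = np.median(x)
    s = 1.4826 * np.median(np.abs(x - m))     # Gaussian-consistent mad
    alpha_prime = 1.0 - (1.0 - alpha) ** (1.0 / N)
    c = norm.ppf(1.0 - alpha_prime / 2.0)     # two-sided cutoff
    return np.abs(x - m) / s >= c             # True = alpha'-outlier
```

Because the cutoff grows only slowly with N, a single pass over thousands of log-ratios remains computationally trivial, which is what makes the one-step procedure practical for whole-array data.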
A comparison of specific outlier identification rules based on performance criteria revealed that one-step procedures with robust estimates of location and scale (outlier-resistant rules) are superior at identifying outliers [16].
The outlier model we used is only completely specified when we know the distribution(s) of the regular observations and the distributions of the contaminants [16]. One can conclude from this that, in the absence of information about the distributions of the contaminants, data analysis can only be exploratory; for example, without replication it is impossible to differentiate between systematic outliers due to differential expression and other types of outliers.
Ordinary STIs
The relationship between (residual) outlier test statistics and tolerance regions [27,29,30] has shown a need for using STIs to detect Y-outliers in regression. For probability at least (1 − γ), the central tolerance intervals (STIs are centered about 0) that are simultaneous in X and q can be determined using:

± [s(2F(2, N − 2, 1 − γ/2))^{1/2}(1/N + (X − X̄)^{2}/Σ_{i}(X_{i} − X̄)^{2})^{1/2} + sλ(q)((N − 2)/χ^{2}(N − 2, γ/2))^{1/2}]   (2)

where λ(q) is a two-sided quantile for the standard normal distribution, P = 100q% is the portion of the population to be covered by the STI, 1 − γ is the confidence level, χ^{2}(N − 2, γ/2) is the lower γ/2 quantile of a χ^{2}-distribution with N − 2 degrees of freedom, and F(2, N − 2, 1 − γ/2) is the upper (1 − γ/2) quantile of the F-distribution with 2 and N − 2 degrees of freedom. Equation 2 uses the Bonferroni inequality to evaluate how far out on the tail of its distribution each observation lies. It is based on formula 6.5 discussed in [29,30]. The corresponding formulae for the sample mean and residual variance are:

X̄ = Σ_{i}X_{i}/N   (3)

s^{2} = Σ_{i}(Y_{i} − Ŷ_{i})^{2}/(N − 2)   (4)
In our case, X_{i} = log_{2}(Cy3_{i}) and the residuals are Y_{i} = log_{2}(Cy5/Cy3)_{i}, where i = 1, ..., N. Because log_{2}(Cy5_{i}) and log_{2}(Cy3_{i}) are highly correlated, X_{i} = log_{2}(Cy5_{i}) or X_{i} = (log_{2}(Cy3_{i}) + log_{2}(Cy5_{i}))/2 could be used as well. We used 'residuals versus average spot intensity' scatter plots to compute p-values; for figures we used 'residuals versus log_{2}(Cy3_{i})' scatter plots.
We used robust estimators for the location (robust options: sample median, Huber M-estimator, Tukey's bisquare) and for the scale (supsmu or lowess fits for s^{2} = f(X), adjusted with simple Monte Carlo simulations to guarantee approximate Gaussian efficiency). We computed five STIs covering five different portions of the normal distribution (95%, 99%, 99.8%, 99.98% and 99.998%, respectively), each with probability at least 1 − γ = 99.99%. The corresponding interval estimates for p-values, using, for example, Equation 8 to approximate the sample distribution of the residuals under the null hypothesis, are: p > 0.05, 0.01 < p ≤ 0.05, 0.002 < p ≤ 0.01, 0.0002 < p ≤ 0.002, 0.00002 < p ≤ 0.0002 and p ≤ 0.00002. The 'p-value' here is the probability that a tolerance interval constructed from a single sample will not include a true inlier (regular observation); alternatively, it is the probability of having observed our data, or more extreme data, when the null hypothesis is true. The null hypothesis here is: 'a data point is an inlier from an i.i.d. random sample of size N', and we assume the corresponding null distribution for inlying residuals to be normal with mean 0 and unknown variance, which need not be constant.
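The bracketing of p-values by the five STI portions can be sketched as follows. The plain normal quantiles used here stand in for the full STI half-widths, so this is an approximation in the spirit of Equations 8 and 9, not the authors' exact procedure; the function name is ours.

```python
from statistics import NormalDist

# Two-sided p-value cutoffs implied by the portions 95%, 99%, 99.8%, 99.98%, 99.998%
CUTOFFS = [0.05, 0.01, 0.002, 0.0002, 0.00002]

def p_bracket(t):
    """Interval estimate (lower, upper) for the p-value of a standardized
    residual t, found by checking which of the nested STIs it falls outside."""
    upper = None
    for p in CUTOFFS:
        z = NormalDist().inv_cdf(1 - p / 2)  # normal stand-in for the STI half-width
        if abs(t) >= z:
            upper = p                        # outside the STI covering 100(1 - p)% of inliers
        else:
            break
    if upper is None:
        return (0.05, 1.0)                   # inside the 95% STI: p > 0.05
    i = CUTOFFS.index(upper)
    lower = CUTOFFS[i + 1] if i + 1 < len(CUTOFFS) else 0.0
    return (lower, upper)
```

A standardized residual of 2.5, for example, lands in the '0.01 < p ≤ 0.05' band.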
Simultaneous tolerance intervals for data with replication
If replicated data are available, we have several (say, n_{i}) observations on Y at each point X_{i}: Y_{ij} = log_{2}(Cy5/Cy3)_{ij}, i = 1, ..., K, j = 1, ..., n_{i}. In terms of ANOVA this can be described as a two-way layout (if all n_{i} are equal) [58,59].
This information can be used when building STIs, by analogy with simultaneous prediction intervals [58,59] (see also Equations 12 and 13 below):
where N = Σ_{i}n_{i} is the total number of observations, n_{i} is the number of replicates at point X_{i}, X_{i} = log_{2}(Cy3_{i}), and the residuals are Y_{ij} = log_{2}(Cy5_{ij}/Cy3_{ij}), i = 1, ..., K and j = 1, ..., n_{i}. If n_{i} ≡ 1 then K ≡ N and Equation 5 coincides with Equation 2.
Given replicated data, the fitted linear model can be checked by breaking the residual sum of squares into two components, the lack-of-fit sum of squares and the pure-error sum of squares, provided that the pure error is approximately the same throughout the data [58].
Other approximations for tolerance intervals
A two-sided β-expectation tolerance interval is given by [60]:

±s(I)K(N, β)   (8)

where K(N, β) = (1 + 1/N)^{1/2}t(N − 1, (1 − β)/2) and t(N − 1, (1 − β)/2) is a Student t quantile with N − 1 degrees of freedom. As is clear from tables of tolerance factors [60], the β-expectation tolerance intervals defined by Equation 8 coincide with the β-content tolerance intervals for residuals of linear regression described by Equation 2 when N is large (> 1000).
For very large sample sizes, the null distribution can be approximated by a normal distribution N(0, s^{2}(I)):

±s(I)λ(q)   (9)

where s(I) is an intensity-dependent scale estimator and λ(q) is a two-sided quantile (100q% is the portion of the population to be covered) of N(0,1) defined by Φ(λ) = (1 − β/2)^{1/N}, where Φ is the cumulative distribution function of the standard normal distribution. The goal is to quantify s(I) in the presence of strong outliers. Our solution is the use of robust scatter plot smoothers for absolute residuals, which has been described previously [25] (see details of our implementation below).
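The simultaneous quantile of Equation 9 can be computed directly from its defining condition. A small sketch (the function name is ours):

```python
from statistics import NormalDist

def lam(beta, N):
    # Solve Phi(lambda) = (1 - beta/2)**(1/N) for the two-sided
    # normal-approximation bound of Equation 9.
    return NormalDist().inv_cdf((1 - beta / 2) ** (1.0 / N))

# For the array size used here (N = 6,068) the band +/- s(I)*lambda is
# several robust standard deviations wide:
band_multiplier = lam(0.05, 6068)
```

The multiplier grows slowly with N, which is what makes the simultaneous band usable at genome scale.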
Relationships between STIs and simultaneous prediction intervals
In cDNA microarrays, the total number of predictions to be made is unknown or subject to chance, or the number of prediction intervals to be estimated simultaneously is large. Under such conditions, STIs are preferred over simultaneous prediction intervals (SPIs) [30]. When N is large, as in our case, the corresponding STIs and SPIs are indistinguishable on the scatter plots. If the majority of residuals are normally distributed, then using the Bonferroni procedure [30] we have the following expression for the residuals:

±st(N − 2, α/(2k))(1 + 1/N + (X − X̄)^{2}/Σ_{i}(X_{i} − X̄)^{2})^{1/2}   (10)

where k is the number of 'future observations' or the number of null hypotheses tested (we test k null hypotheses about k outliers). When N is large, the term under the square root is approximately 1 and the following approximation holds:

±st(N − 2, α/(2k))   (11)

This is a horizontal band based on a Bonferroni-corrected t-value, and one can expect that (1 − α/2)100% of the residuals will lie in the interval in repeat runs under the same experimental conditions. Thus, observations with residuals situated far from the horizontal band can be identified as outliers. Equations 10 and 11 assume an outside estimate for k. We note that Equation 11 coincides with Equation 8 (within a degree of freedom) if one takes the Bonferroni correction into account. In previous work, we used SPIs rather than STIs to identify candidates for differential expression [61].
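For large N, the Equation 11 band can be computed with a normal quantile in place of the Student t quantile. A hedged sketch (the function name and example numbers are ours, not the authors' code):

```python
from statistics import NormalDist

def spi_halfwidth(s, k, alpha=0.05):
    """Half-width of the horizontal Bonferroni band of Equation 11 in the
    large-N limit, where t(N - 2, alpha/(2k)) is close to the normal quantile."""
    z = NormalDist().inv_cdf(1 - alpha / (2 * k))
    return z * s

# e.g. testing k = 100 putative outliers with a residual scale of 0.25:
half = spi_halfwidth(0.25, 100)
```

The band widens only logarithmically in k, so testing many candidate outliers remains practical.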
If we have replication (for example, K points X_{i} with n_{i} replicates at each point, i = 1, ..., K), then this information may also be incorporated while building SPIs [58,59]:

where q is the number of future observations to be averaged at point X_{i}, n_{i} is the number of replicates at point X_{i}, s^{2} is the residual variance, and N = Σ_{i}n_{i} is the total number of observations.
If both n_{i} ≡ 1 and q ≡ 1 then Equation 12 coincides with Equation 10 (K ≡ N). Equation 12 can be simplified if there is the same number of replicates at each point, n_{i} ≡ M, and q ≡ M (that is, we predict mean values of M future points for each X_{i}):
If N is large, and using an approximation similar to Equation 11:
Therefore, for example, if we have spot duplicates on a slide and assume M ≡ 2 for each spot, then the corresponding SPIs would be narrower than those for the case without duplication by a factor 1/√2 ≈ 0.71 when predicting averages of future spot duplicates.
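The √(1/M) narrowing quoted above is easy to verify numerically:

```python
import math

def narrowing_factor(M):
    # Predicting the mean of M future replicates shrinks the large-N
    # band of Equation 11 by sqrt(1/M) (the Equation 13/14 simplification).
    return math.sqrt(1.0 / M)

duplicate_factor = narrowing_factor(2)  # the ~0.71 quoted for spot duplicates
```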
Smoothed STIs
STIs computed using Equations 2 and 5 (or SPIs using Equations 10 and 12) assume residual homoscedasticity: s^{2} is independent of X_{i} and thus constant for all values X_{1}, ..., X_{N}. We refer to these STIs as ordinary STIs. For the microarray data used here, however, plots of residual variance versus the predictor variable usually provided evidence for heteroscedasticity: s^{2} depends on X_{i} and thus is not constant. To account for this relationship between X and s^{2}, we smoothed the data (X_{i}, Y_{i}), i = 1, ..., N. The scatter plot smoothers supsmu() and lowess() perform a locally linear (symmetric or containing the k nearest neighbors) OLS fit at each point X_{i} [51,52,62,63]. The span parameter is the fraction of data points used for smoothing; larger values result in smoother fits. We used supsmu() with the span parameter bass = 2-4 and lowess() with f = 0.2-0.4. These values provide a compromise between sensitivity to local variation and smoothness (compare [8]).
The function lowess() computes a robust locally-weighted linear fit [63]. An extended version of the function, loess(), includes an option for locally-quadratic fitting [52]. A window dependent on f is positioned around X_{i}. Data points inside the window are weighted, and a robust weighted regression is used to compute Ŷ_{i}, the predicted value of Y_{i} at X_{i}. No assumptions are made about the X_{i} values being evenly spaced, and the span parameter f is constant across the entire range of the predictor variable X. A fixed span parameter, however, is problematic if the curvature of the underlying function varies; an increase in curvature would necessitate a decrease in the span, for example. The function supsmu() avoids this difficulty by automatically choosing a variable span with cross-validation [62] of residuals in the neighborhood of X_{i}. The function supsmu() is faster than lowess(), and while it is less robust for small sample sizes (N < 40), cDNA datasets are large (here N = 6,068), so it is sufficiently robust. The choice between supsmu() and lowess() is a choice between more or less sensitivity to underlying curvature; loess() with locally-quadratic fitting also performs well at recovering curvature in empirical data.
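supsmu() and lowess() are S-Plus/R functions; as a language-neutral illustration of intensity-dependent scale estimation, the following sketch uses a running-window MAD of residuals ordered by intensity. This is a much cruder smoother than either function and is ours, not the paper's implementation.

```python
import random
from statistics import median

def running_mad_scale(x, y, window=101):
    """Crude s(I): rescaled MAD of the residuals y inside a sliding window
    of neighbours ordered by intensity x; returned in the input order."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    half = window // 2
    s = [0.0] * len(x)
    for rank, i in enumerate(order):
        neigh = [y[j] for j in order[max(0, rank - half):rank + half + 1]]
        m = median(neigh)
        s[i] = median(abs(v - m) for v in neigh) / 0.6745  # Gaussian-consistent
    return s

# demo: noise whose spread jumps at x = 5 is tracked by the running scale
random.seed(0)
xs = [i / 100 for i in range(1000)]
ys = [random.gauss(0, 0.1 if xi < 5 else 0.5) for xi in xs]
s_of_I = running_mad_scale(xs, ys)
```

Because the window statistic is a median of absolute deviations, a handful of strong outliers in a neighbourhood barely moves the local scale estimate.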
Adjusting the smoothed STIs
The scale estimate for the smoothed STIs, however, may differ slightly from the scale estimate for the ordinary STIs (Figure 19a), suggesting that smoothed STIs for real data require adjustment to improve their accuracy. Such adjusted STIs are determined by first generating simulated datasets assuming bivariate normality with the same parameters as the real data. Adjustments for the STI scale estimator are calculated using a scale factor, defined as the ratio, in the simulated data, of the Huber τ-estimate of scale to the average scale estimate (based on the supsmu (or lowess) scale estimator). The scale factor is then applied as an adjustment to the smoothed STIs to derive adjusted smoothed STIs for the real mac1 (or any other) data (Figure 19a,b). For example, in Figure 19a the scale factors to adjust the supsmu- and lowess-based scale estimates are 1.25 and 1.35, respectively. These estimates are very stable in repeat simulations: their standard error based on ten repeat simulations for the mac1 dataset was 0.0008.
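The adjustment step can be sketched as follows, with the rescaled MAD standing in for the Huber τ-estimate and a stipulated smoother output standing in for the supsmu/lowess scale curve; both substitutions are ours.

```python
import random
from statistics import median, mean

def scale_factor(sim_resid, smoothed_scale):
    """Ratio of a robust global scale estimate on simulated normal residuals
    (rescaled MAD, a stand-in for the Huber tau-estimate) to the average of
    the smoother-based scale estimates on the same simulated data."""
    m = median(sim_resid)
    robust_scale = median(abs(v - m) for v in sim_resid) / 0.6745
    return robust_scale / mean(smoothed_scale)

# simulated residuals with a known scale, and a smoother that (here, by
# construction) under-estimates that scale by about 20%:
random.seed(1)
sim = [random.gauss(0.0, 0.3) for _ in range(6068)]
smoothed = [0.25] * 6068
factor = scale_factor(sim, smoothed)  # multiply the smoothed STIs by this
```

Because the factor is estimated on simulated normal data with no contaminants, it corrects the smoother's bias without being influenced by the real data's outliers.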
Computing p and qvalues
For every gene, we can compute the value log_{2}(Cy5/Cy3)/s(I) for the residuals, where s(I) is a robust scale estimator that depends on intensity level. One can then calculate a p-value as a measure of statistical significance for every gene, using any of the appropriate formulas mentioned above when N is large; an easy and accurate way to do this is to use, for example, Equation 11. Then, given the list of p-values, one can calculate the corresponding q-values using the R software developed by John Storey [46].
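Storey's qvalue R package is referenced above; as a self-contained stand-in, Benjamini-Hochberg step-up q-values (equivalent to Storey's estimate with π0 fixed at 1, and therefore conservative) can be computed as:

```python
def q_values(pvals):
    """BH-style q-values: for each gene, the minimum of m * p_(j) / j over
    all sorted p-values p_(j) at or beyond its own. A conservative stand-in
    for the qvalue package (pi0 is taken as 1)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    q = [0.0] * m
    running_min = 1.0
    for rank in range(m - 1, -1, -1):  # walk from the largest p downwards
        i = order[rank]
        running_min = min(running_min, pvals[i] * m / (rank + 1))
        q[i] = running_min
    return q
```

The q-value of a gene is then the estimated false discovery rate incurred by calling that gene, and all genes with smaller p-values, significant.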
Calibrating qvalues
Calibration plots help to choose a cutoff for q-values (Figure 24). For example, if we require fewer than one false positive on average, then the number of significant tests is 83 at a q-value cutoff of 0.011, which corresponds to a p-value cutoff of 0.0016.
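Reading such a calibration programmatically: the expected number of false positives at a q-value cutoff is roughly the cutoff times the number of significant tests, so one can pick the largest cutoff keeping that product below one. This is our sketch of the plot-reading step, not code from the paper.

```python
def cutoff_for_expected_fp(qvals, max_fp=1.0):
    """Largest q-value cutoff (among the observed q-values) whose expected
    number of false positives, q * #significant, stays below max_fp."""
    best = None
    for q in sorted(qvals):
        n_sig = sum(1 for v in qvals if v <= q)
        if q * n_sig < max_fp:
            best = q
    return best

# With the values reported in the text, a cutoff of 0.011 over 83
# significant tests gives 0.011 * 83 = 0.913 expected false positives.
```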
Simulating differential expression
We took the sample parameters and the 100 most prominent residual outliers from the mac1 dataset. We then treated the non-regular log-transformed ratios (k = 100) as location estimates, adding a random component to simulate intensity-dependent variation. Heteroscedasticity for the main body of the data (N − k = 6,068 − 100 = 5,968 regular observations) was also simulated using the same intensity dependence. Specifically, we define heteroscedasticity to take place for log_{2}(Cy3) intensity values ≤ −3 or ≥ +2.
For that intensity range, the scale estimator was taken to be a linear function of log_{2}(Cy3). For example, for regular observations:
simulated log_{2}(Cy5/Cy3) = initial log_{2}(Cy5/Cy3) + e(I)   (15)
where e(I) is intensity-dependent random variation generated using the rnorm() function, e(I) = rnorm(n, mean = 0, sd = aI + b), where I = log_{2}(Cy3).
We define a = −0.5, b = −1.5 for the lower intensity area (log_{2}(Cy3) < −3) and a = 0.5, b = −1 for the upper intensity area (log_{2}(Cy3) > +2), so that the simulated scale grows away from the homoscedastic band.
Heteroscedasticity for outliers was simulated in a similar way. R code and datasets used for simulations are available from the authors upon request.
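The noise model of Equation 15 can be sketched as follows. The piecewise-linear σ(I) below is illustrative (chosen so the spread is zero inside the homoscedastic band and grows linearly outside it) rather than the study's exact coefficients, and the function names are ours.

```python
import random

def sigma_of_I(I, lo=-3.0, hi=2.0):
    """Illustrative intensity-dependent sd: zero inside [lo, hi]
    (the homoscedastic band) and linear in log2(Cy3) outside it."""
    if I <= lo:
        return 0.5 * (lo - I)
    if I >= hi:
        return 0.5 * (I - hi)
    return 0.0

def simulate_log_ratio(initial, I):
    # Equation 15: simulated log2(Cy5/Cy3) = initial log2(Cy5/Cy3) + e(I)
    return initial + random.gauss(0.0, sigma_of_I(I))
```

Applying simulate_log_ratio to the k = 100 outlier locations and the 5,968 regular observations with their respective intensities reproduces the intensity-dependent spread described above.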
Software implementation
All data processing, analysis and visualization were performed using S-Plus 2000^{®} [64]. Routines were written that use standard S-Plus procedures and functions wherever possible and that generate HTML output. The R version of the code (DIGEX.R) is available from [65]. R [66] is a language and environment for statistical computing and graphics similar to S-Plus. The R equivalents of S-Plus functions are given in the text.
Additional data files
A comparative summary table (in HTML format; Additional data file 1), coupled with an auxiliary legend file (Additional data file 2), shows candidates for differential gene expression in all ten experiments used in the paper. An Excel table (Additional data file 3) gives the test accuracy definitions that were used in simulations to evaluate method performance.
Additional data file 1. A comparative summary table
Format: HTM Size: 900KB
Additional data file 2. An auxiliary legend file
Format: HTM Size: 1KB
Additional data file 3. An Excel table gives the test accuracy definitions that were used for simulations to evaluate method performance
Format: XLS Size: 16KB
Acknowledgements
We would like to acknowledge Ruben Zamar, Victor Yohai and Ricardo Maronna for critical reading of the manuscript. Our thanks go to Rus Yukhananov as well for helpful comments. We thank Cathy White for technical assistance. Work by Alex Loguinov and Chris Vulpe was supported by the Life Sciences Informatics Program of the University of California and the International Copper Association. Work by I.S.M. was supported by the Director, Office of Energy Research, Office of Health and Environmental Research of the US Department of Energy, under Contract number DE-AC03-76SF00098.
References

Yue H, Eastman PS, Wang BB, Minor J, Doctolero MH, Nuttall RL, Stack R, Becker JW, Montgomery JR, Vainer M, Johnston R: An evaluation of the performance of cDNA microarrays for detecting changes in global mRNA expression. Nucleic Acids Res 2001, 29:E41.

Lee ML, Kuo FC, Whitmore GA, Sklar J: Importance of replication in microarray gene expression studies: statistical methods and evidence from repetitive cDNA hybridizations. Proc Natl Acad Sci USA 2000, 97:9834-9839.

Kerr MK, Churchill GA: Experimental design for gene expression microarrays. Biostatistics 2001, 2:183-201.

Kerr MK, Churchill GA: Statistical design and the analysis of gene expression microarray data. Genet Res 2001, 77:123-128.

Kerr MK, Martin M, Churchill GA: Analysis of variance for gene expression microarray data. J Comput Biol 2000, 7:819-837.

Callow MJ, Dudoit S, Gong EL, Speed TP, Rubin EM: Microarray expression profiling identifies genes with altered expression in HDL-deficient mice. Genome Res 2000, 10:2022-2029.

Tsodikov A, Szabo A, Jones D: Adjustments and measures of differential expression for microarray data. Bioinformatics 2002, 18:251-260.

Dudoit S, Yang YH, Callow MJ, Speed TP: Statistical methods for identifying differentially expressed genes in replicated cDNA microarray experiments.

Pan W, Lin J, Le CT: How many replicates of arrays are required to detect gene expression changes in microarray experiments? A mixture model approach. Genome Biol 2002, 3:research0022.1-0022.10.

Kohane IS, Kho AT, Butte AJ: Microarrays for Integrative Genomics. Cambridge: The MIT Press; 2002.

Yang IV, Chen E, Hassenman J, Liang W, Frank BC, Wang S, Sharov V, Saeed AI, White J, Li J, Lee NH, et al.: Within the fold: assessing differential expression measures and reproducibility in microarray assays. Genome Biol 2002, 3:research0062.1-0062.12.

Baggerly KA, Coombes KR, Hess KR, Stivers DN, Abruzzo LV, Zhang W: Identifying differentially expressed genes in cDNA microarray experiments.

DeRisi JL, Iyer VR, Brown PO: Exploring the metabolic and genetic control of gene expression on a genomic scale. Science 1997, 278:680-686.

Gross C, Kelleher M, Iyer VR, Brown PO, Winge DR: Identification of the copper regulon in Saccharomyces cerevisiae by DNA microarrays. J Biol Chem 2000, 275:32310-32316.

Davies L, Gather U: The identification of multiple outliers.

Gather U, Becker C: Outlier identification and robust methods. In Handbook of Statistics. Edited by Maddala GS, Rao CR. Amsterdam: Elsevier Sciences; 1997:123-143.

Chen Y, Dougherty ER, Bittner ML: Ratio-based decisions and the quantitative analysis of cDNA microarray images.

Hughes TR, Marton MJ, Jones AR, Roberts CJ, Stoughton R, Armour CD, Bennett HA, Coffey E, Dai H, He YD, et al.: Functional discovery via a compendium of expression profiles. Cell 2000, 102:109-126.

Newton MA, Kendziorski CM, Richmond CS, Blattner FR, Tsui KW: On differential variability of expression ratios: improving statistical inference about gene expression changes from microarray data. J Comput Biol 2001, 8:37-52.

Sapir M, Churchill G: Estimating the posterior probability of differential gene expression from microarray data. Poster. Jackson Laboratory, Bar Harbor, ME; 2000.

Rocke DM, Durbin B: A model for measurement error for gene expression arrays. J Comput Biol 2001, 8:557-569.

Durbin B, Hardin J, Hawkins D, Rocke DM: A variance-stabilizing transformation for gene-expression microarray data. Bioinformatics 2002, 18 Suppl 1:S105-S110.

Huber W, von Heydebreck A, Sültmann H, Poustka A, Vingron M: Variance stabilization applied to microarray data calibration and to the quantification of differential expression. Bioinformatics 2002, 18 Suppl 1:S96-S104.

Draghici S: Statistical intelligence: effective analysis of high-density microarray data. Drug Discov Today 2002, 7(11 Suppl):S55-S63.

Carroll RJ, Ruppert D: Transformation and Weighting in Regression. New York: Chapman and Hall; 1988.

Cook RD, Weisberg S: Residuals and Influence in Regression. New York: Chapman and Hall; 1982.

Hawkins DM: Identification of Outliers. New York: Chapman and Hall; 1980.

Barnett V, Lewis T: Outliers in Statistical Data. New York: Wiley; 1994.

Lieberman GJ, Miller RG: Simultaneous tolerance intervals in regression.

Miller RG Jr: Simultaneous Statistical Inference. New York: Springer; 1981.

Rousseeuw PJ, Leroy AM: Robust Regression and Outlier Detection. Wiley Series in Probability and Mathematical Statistics. New York: Wiley; 1987.

Altman DG: Practical Statistics for Medical Research. Boca Raton: Chapman & Hall/CRC; 1999.

Jungmann J, Reins HA, Lee J, Romeo A, Hassett R, Kosman D, Jentsch S: MAC1, a nuclear regulatory protein related to Cu-dependent transcription factors, is involved in Cu/Fe utilization and stress resistance in yeast. EMBO J 1993, 12:5051-5056.

Labbe S, Zhu Z, Thiele DJ: Copper-specific transcriptional repression of yeast genes encoding critical components in the copper transport pathway. J Biol Chem 1997, 272:15951-15958.

Yamaguchi-Iwai Y, Serpe M, Haile D, Yang W, Kosman DJ, Klausner RD, Dancis A: Homeostatic regulation of copper uptake in yeast via direct binding of MAC1 protein to upstream regulatory sequences of FRE1 and CTR1. J Biol Chem 1997, 272:17711-17718.

Zhu Z, Labbé S, Peña MM, Thiele DJ: Copper differentially regulates the activity and degradation of yeast Mac1 transcription factor. J Biol Chem 1998, 273:1277-1280.

Dancis A: Genetic analysis of iron uptake in the yeast Saccharomyces cerevisiae. J Pediatr 1998, 132:S24-S29.

De Freitas J, Wintz H, Kim JH, Poynton H, Fox T, Vulpe C: Yeast, a model organism for iron and copper metabolism studies. Biometals 2003, 16:185-197.

Rutherford JC, Jaron S, Winge DR: Aft1p and Aft2p mediate iron-responsive gene expression in yeast through related promoter elements. J Biol Chem 2003, 278:27636-27643.

Gross C, Kelleher M, Iyer VR, Brown PO, Winge DR: Identification of the copper regulon in Saccharomyces cerevisiae by DNA microarrays. J Biol Chem 2000, 275:32310-32316.

Wiens BL: When log-normal and gamma models give different results: a case study.

Maronna RA, Yohai VJ: Robust estimation of multivariate location and scale. In Encyclopedia of Statistical Sciences. Edited by Kotz S, Read C, Banks D. New York: Wiley; 1998:589-596.

Cui X, Kerr MK, Churchill G: Data transformations for cDNA microarray data. Technical Report. Jackson Laboratory, Bar Harbor, ME; 2002.

Storey JD, Tibshirani R: Statistical significance for genomewide studies. Proc Natl Acad Sci USA 2003, 100:9440-9445.

Benjamini Y, Hochberg Y: Controlling the false discovery rate: a practical and powerful approach to multiple testing.

Yang YH, Dudoit S, Luu P, Lin DM, Peng V, Ngai J, Speed TP: Normalization for cDNA microarray data: a robust composite method addressing single and multiple slide systematic variation. Nucleic Acids Res 2002, 30:e15.

Mardia KV: Tests of univariate and multivariate normality. In Handbook of Statistics. Edited by Krishnaiah PR. Amsterdam: North-Holland; 1980:279-320.

D'Agostino RB: Tests for the normal distribution. In Goodness-of-Fit Techniques. Edited by D'Agostino RB, Stephens MA. New York: Marcel Dekker; 1986:367-419.

Venables WN, Ripley BD: Modern Applied Statistics with S-Plus. 3rd edition. New York: Springer; 1999.

MathSoft: S-Plus 2000: Guide to Statistics. Seattle, Washington: MathSoft; 1999.

Chambers JM, Cleveland WS, Kleiner B, Tukey PA: Graphical Methods for Data Analysis. Belmont, California: Wadsworth; 1983.

Goodall C: Examining residuals. In Understanding Robust and Exploratory Data Analysis. Edited by Hoaglin DC, Mosteller F, Tukey JW. New York: Wiley; 1983:211-246.

Atkinson AC: Plots, Transformations, and Regression. Oxford: Oxford University Press; 1985.

Oleksiak MF, Churchill GA, Crawford DL: Variation in gene expression within and among natural populations. Nat Genet 2002, 32:261-266.

Draper NR, Smith H: Applied Regression Analysis. New York: Wiley; 1998.

Brownlee KA: Statistical Theory and Methodology in Science and Engineering. New York: Wiley; 1965.

Guttman I: Statistical Tolerance Regions: Classical and Bayesian. London: Griffin; 1970.

Loguinov AV, Anderson LM, Crosby GJ, Yukhananov RY: Gene expression following acute morphine administration. Physiol Genomics 2001, 6:169-181.

Friedman JH: A Variable Span Smoother. Laboratory for Computational Statistics, Technical Report 5. Stanford: Department of Statistics, Stanford University; 1984.

Cleveland WS: Robust locally weighted regression and smoothing scatterplots.

Insightful [http://www.insightful.com]

DIGEX.R [http://nature.berkeley.edu/~loguinov]

The R Project for Statistical Computing [http://www.rproject.org/]

Yamaguchi-Iwai Y, Stearman R, Dancis A, Klausner RD: Iron-regulated DNA binding by the AFT1 protein controls the iron regulon in yeast. EMBO J 1996, 15:3377-3384.

De Freitas JM, Kim JH, Poynton HC, Su T, Wintz H, Fox TC, Holman PS, Loguinov AV, Keles S, Van Der Laan M, Vulpe C: Exploratory and confirmatory gene expression profiling of mac1. J Biol Chem 2004, 279:4450-4458.

Rutherford JC, Jaron S, Winge DR: Aft1p and Aft2p mediate iron-responsive gene expression in yeast through related promoter elements. J Biol Chem 2003, 278:27636-27643.

Yun CW, Ferea T, Rashford J, Ardon O, Brown PO, Botstein D, Kaplan J, Philpott CC: Desferrioxamine-mediated iron uptake in Saccharomyces cerevisiae: evidence for two pathways of iron uptake. J Biol Chem 2000, 275:10709-10715.

Spizzo T, Byersdorfer C, Duesterhoeft S, Eide D: The yeast FET5 gene encodes a FET3-related multicopper oxidase implicated in iron transport. Mol Gen Genet 1997, 256:547-556.

Gross C, Kelleher M, Iyer VR, Brown PO, Winge DR: Identification of the copper regulon in Saccharomyces cerevisiae by DNA microarrays. J Biol Chem 2000, 275:32310-32316.

Foury F, Roganti T: Deletion of the mitochondrial carrier genes MRS3 and MRS4 suppresses mitochondrial iron accumulation in a yeast frataxin-deficient strain. J Biol Chem 2002, 277:24475-24483.

Martins LJ, Jensen LT, Simon JR, Keller GL, Winge DR, Simons JR: Metalloregulation of FRE1 and FRE2 homologs in Saccharomyces cerevisiae. J Biol Chem 1998, 273:23716-23721.

Portnoy ME, Liu XF, Culotta VC: Saccharomyces cerevisiae expresses three functionally distinct homologues of the Nramp family of metal transporters. Mol Cell Biol 2000, 20:7893-7902.

Foury F, Talibi D: Mitochondrial control of iron homeostasis. A genome-wide analysis of gene expression in a yeast frataxin-deficient strain. J Biol Chem 2001, 276:7762-7768.

Protchenko O, Philpott CC: Regulation of intracellular heme levels by HMX1, a homologue of heme oxygenase, in Saccharomyces cerevisiae. J Biol Chem 2003, 278:36582-36587.

Lin SJ, Pufahl RA, Dancis A, O'Halloran TV, Culotta VC: A role for the Saccharomyces cerevisiae ATX1 gene in copper trafficking and iron transport. J Biol Chem 1997, 272:9215-9220.

Protchenko O, Ferea T, Rashford J, Tiedeman J, Brown PO, Botstein D, Philpott CC: Three cell wall mannoproteins facilitate the uptake of iron in Saccharomyces cerevisiae. J Biol Chem 2001, 276:49244-49250.