Identification of biomarkers from mass spectrometry data using a "common" peak approach.  
MedLine Citation:

PMID: 16869977 Owner: NLM Status: MEDLINE 
Abstract/OtherAbstract:

BACKGROUND: Proteomic data obtained from mass spectrometry have attracted great interest for the detection of early-stage cancer. However, as mass spectrometry data are high-dimensional, identification of biomarkers is a key problem. RESULTS: This paper proposes the use of "common" peaks in data as biomarkers. Analysis is conducted as follows: data preprocessing, identification of biomarkers, and application of AdaBoost to construct a classification function. Informative "common" peaks are selected by AdaBoost. AsymBoost is also examined to balance false negatives and false positives. The effectiveness of the approach is demonstrated using an ovarian cancer dataset. CONCLUSION: Continuous covariates and discrete covariates can be used in the present approach. The difference between the result for the continuous covariates and that for the discrete covariates was investigated in detail. In the example considered here, both covariates provide a good prediction, but it seems that they provide different kinds of information. We can obtain more information on the structure of the data by integrating both results. 
Authors:

Tadayoshi Fushiki; Hironori Fujisawa; Shinto Eguchi 
Publication Detail:

Type: Journal Article; Research Support, Non-U.S. Gov't Date: 2006-07-26 
Journal Detail:

Title: BMC bioinformatics Volume: 7 ISSN: 1471-2105 ISO Abbreviation: BMC Bioinformatics Publication Date: 2006 
Date Detail:

Created Date: 2006-08-17 Completed Date: 2006-09-28 Revised Date: 2009-11-18 
Medline Journal Info:

Nlm Unique ID: 100965194 Medline TA: BMC Bioinformatics Country: England 
Other Details:

Languages: eng Pagination: 358 Citation Subset: IM 
Affiliation:

Department of Mathematical Analysis and Statistical Inference, Institute of Statistical Mathematics, Tokyo, Japan. fushiki@ism.ac.jp 
MeSH Terms  
Descriptor/Qualifier:

Algorithms; Computational Biology / methods*; Female; Gene Expression Regulation, Neoplastic; Humans; Mass Spectrometry / methods*; Models, Statistical; Neoplasm Proteins / chemistry*; Ovarian Neoplasms / genetics*; Protein Array Analysis / methods*; Proteome; Proteomics / methods*; Software; Tumor Markers, Biological / metabolism* 
Chemical  
Reg. No./Substance:

0/Neoplasm Proteins; 0/Proteome; 0/Tumor Markers, Biological 
Full Text  
Journal Information Journal ID (nlm-ta): BMC Bioinformatics ISSN: 1471-2105 Publisher: BioMed Central, London 
Article Information Copyright © 2006 Fushiki et al; licensee BioMed Central Ltd. Open Access: This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Received: 12 March 2006 Accepted: 26 July 2006 Collection publication date: 2006 Electronic publication date: 26 July 2006 Volume: 7 First Page: 358 Last Page: 358 ID: 1550264 Publisher Id: 1471-2105-7-358 PubMed Id: 16869977 DOI: 10.1186/1471-2105-7-358 
Identification of biomarkers from mass spectrometry data using a "common" peak approach  
Tadayoshi Fushiki1  Email: fushiki@ism.ac.jp 
Hironori Fujisawa1  Email: fujisawa@ism.ac.jp 
Shinto Eguchi1  Email: eguchi@ism.ac.jp 
1Department of Mathematical Analysis and Statistical Inference, Institute of Statistical Mathematics, Tokyo, Japan 
Mass spectrometry is being used to generate protein profiles from human serum, and proteomic data obtained from mass spectrometry have attracted great interest for the detection of early-stage cancer (see, for example, [1-3]). Recent advances in proteomics stem from the development of protein mass spectrometry. Matrix-assisted laser desorption/ionization (MALDI) and surface-enhanced laser desorption/ionization (SELDI) mass spectrometry provide high-resolution measurements. Mass spectrometry data are ideally continuous. As with microarray data, some method is required to deal with high-dimensional but small-sample-size data. An effective methodology for identifying biomarkers in high-dimensional data is thus an important problem.
Ovarian Dataset 8-7-02, available from the National Cancer Institute, was analyzed in this paper. This dataset is raw data consisting of 91 controls and 162 ovarian cancer patients. The mass spectrometry data for one individual are illustrated in Fig. 1. The horizontal axis indicates the m/z value and the vertical axis the ion intensity. A characteristic of the data is the large number of observable peaks. Each peak represents a singly charged positive ion originating from a protein in the sample. Peaks present in the mass spectrometry data may be usable as biomarkers for judging whether an individual is affected or not.
Some methodologies have been proposed for the identification of biomarkers from ideally continuous mass spectrometry data. One approach is to use peaks in the data to identify biomarkers; Yasui et al. [4] and Tibshirani et al. [5] adopted this approach. Another approach is based on binning of the data; Yu et al. [6] and Geurts et al. [7] analyzed mass spectrometry data after binning. The methodology presented in this paper adopts the former approach, since peaks in mass spectrometry data are considered to represent biological information. Our idea is that "common" peaks within the sample might contain useful information. That is, a peak seen for only one subject may be noise, whereas a peak exhibited by many subjects might be useful. In this study, an ovarian cancer dataset was analyzed as follows: (1) preprocessing, (2) peak detection, (3) identification of biomarkers, (4) classification. AdaBoost was used to construct a classification function. AsymBoost was also examined for balancing the false negatives and the false positives. The effectiveness of the approach is evaluated using validation data.
The proposed approach is closely related to that of Yasui et al. [4]. In [4], biomarkers were specified through classification by AdaBoost. The present approach differs in that "common" peaks are extracted before classification to specify biomarkers. By specifying biomarkers before classification, the dimension of the covariates in classification becomes smaller, which we consider preferable. Whereas Yasui et al. [4] used discrete covariates, both continuous and discrete covariates can be used in the present approach, depending on the situation. Furthermore, we recommend that the results obtained using discrete covariates be compared with those obtained using continuous covariates. In the example considered here, more information on the structure of the data can be obtained through the use of both covariates, i.e., different kinds of informative features can be obtained by using both.
The range of the m/z value in the dataset is approximately [0, 20000]. However, the frequency of peaks is too high in the interval [0, 1500], and in some cases it is difficult to derive information from peaks in that interval. Therefore, the dataset is analyzed only in the interval [1500, 20000], as in [4].
As is often the case in statistical learning [8], the dataset is divided into two sets: a training dataset consisting of 73 controls and 130 ovarian cancer patients, and a test dataset consisting of 18 controls and 32 ovarian cancer patients. The sizes of the training and test datasets are denoted by N and n, respectively (N = 203 and n = 50). The method proposed in this paper is trained on the training dataset, and the performance of the trained scheme is checked on the test dataset.
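As an illustration, a hold-out split of this kind can be sketched as follows. This is a minimal sketch: the seed and the use of an unstratified random split are assumptions, since the paper does not state how the 50 test spectra were chosen.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility (assumption)

# 91 controls (label -1) and 162 ovarian cancer patients (label +1), as in the paper
labels = np.array([-1] * 91 + [+1] * 162)
idx = rng.permutation(len(labels))

# hold out n = 50 spectra for testing; the paper's exact 18/32 stratification
# is not reproduced here
test_idx, train_idx = idx[:50], idx[50:]
N, n = len(train_idx), len(test_idx)
```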
Proteomic data obtained from mass spectrometry are often inaccurate in several respects. For example, shift of the mass/charge axis is a significant problem in many cases [9,10]. Preprocessing of the data is therefore very important. Preprocessing methods have recently been proposed by Wong et al. [9] and Jeffries [10].
In this paper, preprocessing of the dataset is performed using SpecAlign [9] as follows: (i) subtract baseline, (ii) generate spectrum average, (iii) align spectra (peak matching method). It should be noted that it is difficult to align spectra perfectly even when an alignment algorithm is used. This problem is reconsidered in the section on identification of biomarkers.
The peak detection rule of Yasui et al. [4] is adopted here. An m/z point is regarded as a peak if it takes the maximum value in its k-nearest neighborhood. If k is small, a point is easily recognized as a peak. An appropriate k can be selected by examining several values of k, as done in Yasui et al. [4]. We empirically set k = 10. In this study, a relatively small k is used, since only the "common" peaks are considered as biomarkers.
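The peak detection rule can be sketched as below. This is a minimal sketch of the Yasui et al. rule, assuming the k nearest neighbours are taken on each side of the point and that ties at a flat plateau count only the first point; the paper does not spell out these window conventions.

```python
import numpy as np

def detect_peaks(intensity, k=10):
    """Return indices i where intensity[i] is the maximum over a window
    of k neighbouring points on each side (window conventions assumed)."""
    intensity = np.asarray(intensity, dtype=float)
    peaks = []
    n = len(intensity)
    for i in range(n):
        lo, hi = max(0, i - k), min(n, i + k + 1)
        window = intensity[lo:hi]
        # a point is a peak if it attains the window maximum (first of a tie)
        if intensity[i] == window.max() and int(np.argmax(window)) == i - lo:
            peaks.append(i)
    return peaks
```

A smaller k admits more points as peaks, which suits this approach since spurious peaks are later filtered out by the "common" peak criterion.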
Suppose that some individuals have a peak at a certain m/z value, m*. It is then expected that there exists a protein related to the ion corresponding to m*. Therefore, m* may be a biomarker that can be used to judge whether an individual is affected or not. However, a peak exhibited by only one subject may be mere noise. The peaks "commonly" exhibited by many subjects are thus candidate biomarkers. There remains the problem that the m/z values are in general not perfectly aligned, so this idea cannot be applied directly. A method for identifying such "common" peaks that overcomes the imperfect alignment is derived in the following.
First, an "average of peaks" is constructed by averaging Gaussian kernels with centers at peaks, as illustrated in Fig. 2. The "average of peaks" is then expressed as
$$A(x) = \frac{1}{N}\sum_{i=1}^{N}\sum_{j}\exp\!\left[-\frac{(x - p_{i,j})^2}{\{\sigma(p_{i,j})\}^2}\right], \qquad (1)$$
where p_{i,j} is the m/z value of the j-th peak of the i-th observation and σ(p_{i,j}) is the "width" of the peak. In this study, σ(p_{i,j}) = 0.001 × p_{i,j}. In general, a very small σ is not desirable because the same peaks could not be "added" properly, but a very large σ is also not desirable because one peak then affects other peaks. The value of σ can be decided based on the accuracy of the mass/charge axis. In this study, 2σ(p_{i,j})/p_{i,j} = 0.002, which corresponds to an error of about ±0.2% in the mass/charge value of each peak point. In this dataset, a small change of σ did not affect the result. Secondly, a biomarker is identified as a peak of the "average of peaks" exceeding h_{th}, where h_{th} is a parameter controlling how "common" a peak must be to be regarded as a biomarker (Fig. 2(e)). With h_{th} = 0.1, 146 biomarkers were obtained by this procedure (Fig. 3(c)). The features of the approach are as follows:
• In general, peaks cannot be aligned perfectly even if an alignment algorithm is applied, as stated in the Preprocessing section. In the "average of peaks," peaks that are not perfectly aligned can still be added because they have width σ(p_{i,j}) (Figs. 2(d) and 2(e)).
• Another possible approach is to use the average of intensities (Fig. 2(b)). However, we consider the "average of peaks" more effective for mass spectrometry data (Fig. 2(e)). "Common" peaks with small intensities can be found easily in Fig. 3(b), whereas it is difficult to find such "small common" peaks in Fig. 3(a). Furthermore, the difference between controls and ovarian cancer patients can be seen more clearly in the "average of peaks" than in the average of intensities (Fig. 4).
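Under the definitions above, the "average of peaks" of Eq. (1) and the h_th thresholding can be sketched as follows. This is a minimal sketch: the evaluation grid and the local-maximum rule used to read biomarkers off A(x) are assumptions not detailed in the text.

```python
import numpy as np

def average_of_peaks(peak_lists, grid):
    """Eq. (1): average of Gaussian kernels centred at the detected
    peak positions p_{i,j}, with width sigma(p) = 0.001 * p."""
    N = len(peak_lists)
    A = np.zeros_like(grid, dtype=float)
    for peaks in peak_lists:          # one peak list per subject
        for p in peaks:
            sigma = 0.001 * p
            A += np.exp(-((grid - p) ** 2) / sigma ** 2)
    return A / N

def find_biomarkers(A, grid, h_th=0.1, k=10):
    """Biomarkers: local maxima of A(x) that exceed h_th (the local-maximum
    window of k grid points per side is an assumption)."""
    marks = []
    n = len(A)
    for i in range(n):
        lo, hi = max(0, i - k), min(n, i + k + 1)
        if A[i] >= h_th and A[i] == A[lo:hi].max():
            marks.append(grid[i])
    return marks
```

Because each kernel has width proportional to its m/z value, two peaks of the same protein that are misaligned by up to a few tenths of a percent still reinforce each other in A(x).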
There are many ways to reduce the number of biomarkers determined by the above procedure. One way is to select biomarkers that are effective in classification. In this study, however, we simply used the biomarkers obtained by the above procedure.
The covariates are extracted from the data as discrete and/or continuous variables. Let m_1, m_2, … be the m/z values of the biomarkers. The discrete covariate is obtained by searching for a peak within a window around the biomarker, i.e.,

$$x_j = \begin{cases} 1, & \text{if there exists a peak within the window } [(1-\rho)m_j,\,(1+\rho)m_j],\\ 0, & \text{otherwise.} \end{cases}$$

The continuous covariate is the maximum value of the intensity within the same window:

$$x_j = \max_{m \in [(1-\rho)m_j,\,(1+\rho)m_j]} \text{intensity}(m).$$

In this study, ρ = 0.002.
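The two covariate extractions can be sketched as below (a minimal sketch with rho = 0.002; returning 0.0 for an empty continuous window is an assumption, since the paper does not cover that case).

```python
import numpy as np

def discrete_covariates(peak_lists, biomarkers, rho=0.002):
    """x_j = 1 if the subject has a detected peak inside the window
    [(1 - rho) m_j, (1 + rho) m_j], and 0 otherwise."""
    X = np.zeros((len(peak_lists), len(biomarkers)), dtype=int)
    for i, peaks in enumerate(peak_lists):
        peaks = np.asarray(peaks, dtype=float)
        for j, m in enumerate(biomarkers):
            in_win = (peaks >= (1 - rho) * m) & (peaks <= (1 + rho) * m)
            X[i, j] = int(in_win.any())
    return X

def continuous_covariates(mz, intensity, biomarkers, rho=0.002):
    """x_j = maximum intensity of one spectrum inside the same window;
    0.0 for an empty window (assumption)."""
    mz = np.asarray(mz, dtype=float)
    intensity = np.asarray(intensity, dtype=float)
    x = np.empty(len(biomarkers))
    for j, m in enumerate(biomarkers):
        in_win = (mz >= (1 - rho) * m) & (mz <= (1 + rho) * m)
        x[j] = intensity[in_win].max() if in_win.any() else 0.0
    return x
```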
The present objective is to find the important features of a peak pattern associated with a disease on the basis of peak identification in proteomic data. We introduce AdaBoost for the extraction of informative patterns in the feature space, based on examples consisting of N pairs of a feature vector and a label. In this context, the feature vector is obtained from the peak intensities over the detected m/z values for a subject, and the label expresses the disease status of the subject. For pattern classification, one of two cases is employed: the feature vector is composed of either discrete or continuous values, as discussed in the preceding section.
Ensemble learning has been studied extensively in machine learning, and AdaBoost [11] is one of the most efficient ensemble learning methods. As explained below, AdaBoost provides a classification function as a linear combination of weak learners. The AdaBoost algorithm can be regarded as a sequential minimization algorithm for the exponential loss function.
Let x be a feature vector and y a binary label taking values +1 and −1. A classification function f(x) is then used to predict the label y: if f(x) is positive (or negative), the label is predicted as +1 (or −1). Suppose that a class of classification functions $\mathcal{F} = \{f\}$ is provided. In AdaBoost, a classification function f in $\mathcal{F}$ is called a weak learner. A new classification function F is constructed by taking a linear combination of classification functions in $\mathcal{F}$, i.e.,
$$F(x;\beta) = \sum_{t=1}^{T}\beta_t f_t(x),$$
where β = (β_1, …, β_T) is a weight vector and f_t ∈ $\mathcal{F}$ for t = 1, …, T. The sign of F(x; β) provides a prediction of the label y. This is a majority-vote rule over the T classification functions f_1(x), …, f_T(x) with weights β_1, …, β_T. Consider the problem of optimally combining the weights β_1, …, β_T and the classification functions f_1(x), …, f_T(x) based on N given examples (x_1, y_1), …, (x_N, y_N). AdaBoost addresses this problem by minimizing the exponential loss defined by
$$\sum_{i=1}^{N}\exp\bigl[-y_i\{\beta_1 f_1(x_i) + \cdots + \beta_T f_T(x_i)\}\bigr]. \qquad (2)$$
AdaBoost does not provide the jointly optimal solution, but offers a sequential learning algorithm with two stages of optimization at the t-th step: the best weak learner f_t(x) is selected in the first stage, and the best scalar weight β_t is determined in the second stage.
In this study, decision stumps were used as weak learners. A decision stump is a naive classification function in which, for a subject with a feature vector x of peak intensities, the label is predicted by observing whether a certain peak intensity is larger than a predetermined value or not. Accordingly, the set of weak learners is relatively large, but all of the weak learners are literally weak, since they respond only to a peak pattern. The set of the decision stumps is denoted by
$$\{\, d_j(x) = \operatorname{sign}(x_j - b) \mid j = 1, \ldots, J,\ -\infty < b < \infty \,\}.$$
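The two-stage sequential update with decision stumps as weak learners can be sketched as follows. This is a minimal sketch, not the authors' implementation: the sign-flip orientation s_t, the restriction of thresholds b to observed covariate values, and the small-epsilon clipping of the weighted error are assumptions added to make the loop concrete.

```python
import numpy as np

def adaboost_stumps(X, y, T=50):
    """AdaBoost with decision stumps d(x) = sign(x_j - b).
    Returns the trace [(beta_t, j_t, b_t, s_t)] of selected stumps."""
    N, J = X.shape
    w = np.full(N, 1.0 / N)              # example weights
    trace = []
    for _ in range(T):
        best = None
        # stage 1: pick the stump with minimum weighted error
        for j in range(J):
            for b in np.unique(X[:, j]):
                for s in (+1, -1):       # allow either orientation (assumption)
                    pred = s * np.sign(X[:, j] - b + 1e-12)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, b, s, pred)
        err, j, b, s, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # clip to avoid log(0)
        # stage 2: the optimal scalar weight for the chosen stump
        beta = 0.5 * np.log((1 - err) / err)
        trace.append((beta, j, b, s))
        w *= np.exp(-beta * y * pred)    # exponential-loss reweighting
        w /= w.sum()
    return trace

def predict(trace, X):
    """Sign of the weighted vote F(x) over the selected stumps."""
    F = np.zeros(len(X))
    for beta, j, b, s in trace:
        F += beta * s * np.sign(X[:, j] - b + 1e-12)
    return np.sign(F)
```

The reweighting step concentrates weight on misclassified examples, so each round the stump search focuses on the cases the current ensemble gets wrong.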
AdaBoost efficiently integrates the set of weak learners by sequential minimization of the exponential loss. As a result, the learning process of AdaBoost can be traced, and the final classification function can be re-expressed as the sum of peak-pattern functions F_j(x), where
$$F_j(x) = \sum_{\{t \,\mid\, f_t = d_j\}} \beta_t \operatorname{sign}(x_j - b_t),$$
in which the sum of the coefficients β_t is referred to as the score S_j [12]. The score S_j thus expresses the degree of importance of the j-th peak, in terms of its contribution to the final classification function built up during the learning process.
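Extracting the scores S_j from a boosting run is then a simple aggregation over the trace, as sketched below; the `trace` format, a list of `(beta_t, j_t, ...)` tuples such as an AdaBoost run might record, is an assumption for illustration.

```python
from collections import defaultdict

def scores(trace):
    """S_j: summed stump weights beta_t per covariate j, measuring how much
    peak j contributes to the final classification function [12]."""
    S = defaultdict(float)
    for beta, j, *_ in trace:
        S[j] += beta
    # rank covariates by score, highest first (as in Tables 2 and 3)
    return sorted(S.items(), key=lambda kv: -kv[1])

# a toy trace: covariate 2 selected twice, covariate 0 once,
# so covariate 2 ranks first
ranked = scores([(0.8, 2), (0.5, 0), (0.6, 2)])
```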
Figure 5(a) shows the test error of the classification by AdaBoost with the discrete covariates. The test error is calculated on the test dataset of n = 50 observations held out from the training dataset. Figure 5(b) shows the false negatives and false positives. The results obtained by AdaBoost with the continuous covariates are shown in Fig. 6. These figures show the typical behavior of AdaBoost. In Figs. 5(b) and 6(b), the false negatives are much larger than the false positives. Table 1 shows the test error of AdaBoost, where the iteration number T is chosen by 10-fold cross-validation.
In Table 1, the test error with discrete covariates and that with continuous covariates are the same.
The score S_j is calculated to examine the difference between the obtained prediction functions. Each S_j gives the influence of the j-th covariate on the obtained classification function. Tables 2 and 3 show the ten highest values of S_j in the discrete and continuous cases, respectively. From Tables 2 and 3, the prediction functions obtained using the discrete and the continuous variables are different. The result obtained using continuous covariates appears to be based on differences in intensity given that a peak exists, whereas the result obtained using discrete covariates is likely to be based on the coarser distinction of whether a peak exists at the m/z value at all. In this dataset, there may be many peaks that affect classification, but no individual peak carries sufficient information to perfectly distinguish cancer patients from controls. In such a situation, the high-scoring continuous covariates and the high-scoring discrete covariates differ, but a classification function with high predictive performance can be obtained by combining information from many peaks using either type of covariate.
The above results were obtained with h_{th} = 0.1. More (or fewer) covariates can be used by setting a smaller (or larger) h_{th}. Tables 4 and 5 show the results for various values of h_{th} in the discrete and continuous cases, respectively. When h_{th} is large, the test error grows in the discrete case, whereas in the continuous case it does not vary as much.
In Figs. 5 and 6, the false negatives are much larger than the false positives, which is undesirable. AsymBoost [13] may be useful for suppressing false negatives. In AdaBoost the loss function is given by (2); in AsymBoost it is given by
$$\sum_{i=1}^{N} w_i \exp\bigl[-y_i\{\beta_1 f_1(x_i) + \cdots + \beta_T f_T(x_i)\}\bigr], \qquad (3)$$
where each initial weight w_i is set as follows:

$$w_i = \begin{cases} \alpha, & \text{if the } i\text{th patient has cancer},\\ 1, & \text{otherwise.} \end{cases}$$
It is expected that the false negatives will be suppressed by using a large α. In Figs. 7 and 8, the false negatives are indeed suppressed by using a large α, at the expense of larger false positives, in particular when the number of iterations is small.
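Since AsymBoost differs from AdaBoost only in the initial example weights of loss (3), the modification can be sketched as below. The value alpha = 3 and the convention that cancer patients carry label +1 are illustrative assumptions, not values stated in the paper.

```python
import numpy as np

def asymboost_initial_weights(y, alpha=3.0):
    """Start boosting from weight alpha for cancer patients (y = +1)
    and 1 for controls, then normalise; alpha = 3 is illustrative."""
    w = np.where(y == 1, alpha, 1.0).astype(float)
    return w / w.sum()

y = np.array([1, 1, -1, -1])
w0 = asymboost_initial_weights(y, alpha=3.0)
```

Feeding `w0` into the boosting loop in place of the uniform initialisation penalises missed cancer cases more heavily, which is what pushes the false-negative rate down.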
We proposed a methodology for identifying biomarkers from high-dimensional mass spectrometry data. "Common" peaks in the data are regarded as biomarkers. The number of biomarkers can be changed by varying h_{th}, a threshold that controls how "common" a peak must be to be regarded as a biomarker. By identifying biomarkers, the number of covariates is reduced, so that classification is facilitated. Discrete or continuous covariates can be selected depending on the situation.
The effectiveness of our approach was demonstrated through application to an ovarian cancer dataset. It was shown that a prediction function with high performance can be obtained by a simple application of AdaBoost.
A simple method was used to analyze the data in this study. In general, however, a more sophisticated method may be required to extract a covariate at a biomarker. For example, when discrete covariates are extracted, peaks obtained by a stricter rule can be used, or variables effective for classification can be extracted [5]. When continuous variables are extracted, one can use the intensity at the m/z value nearest to a biomarker, or the average of intensities within a window containing the biomarker.
In this paper, the difference between the result for the continuous covariates and that for the discrete covariates was investigated in detail. In the example, the result obtained using continuous covariates appeared to be based on differences in intensity given that a peak exists, whereas the result obtained using discrete covariates was likely to be based on the coarser distinction of whether a peak exists at the m/z value. In general, whether discrete or continuous covariates are better depends on the data. If the intensity values in the data are reliable, it may be better to use continuous covariates; if not, it may be better to use discrete covariates. We consider that in practical studies both types of covariates should be examined and the results compared and inspected in detail. We conclude that more information on the structure of the data can be obtained by integrating both results.
TF wrote the programs for this study and the main parts of the manuscript. HF wrote half of the Background section. SE proposed the use of AsymBoost for suppressing false negatives, and wrote the explanation of the AdaBoost algorithm. All authors read and approved the final manuscript.
We would like to thank Masaaki Matsuura for his helpful comments on this study. This work was supported by Transdisciplinary Research Integration Center, Research Organization of Information and Systems.
References
1. Petricoin EF III, Ardekani AM, Hitt BA, Levine PJ, Fusaro VA, Steinberg SM, Mills GB, Simone C, Fishman DA, Kohn EC, Liotta LA. Use of proteomic patterns in serum to identify ovarian cancer. Lancet 2002;359:572-577. [pmid: 11867112] [doi: 10.1016/S0140-6736(02)07746-2]
2. Hanash S. Disease proteomics. Nature 2003;422:226-232. [pmid: 12634796] [doi: 10.1038/nature01514]
3. Wu B, Abbott T, Fishman D, McMurray W, Mor G, Stone K, Ward D, Williams K, Zhao H. Comparison of statistical methods for classification of ovarian cancer using mass spectrometry data. Bioinformatics 2003;19:1636-1643. [pmid: 12967959] [doi: 10.1093/bioinformatics/btg210]
4. Yasui Y, Pepe M, Thompson ML, Adam BL, Wright GL Jr, Qu Y, Potter JD, Winget M, Thornquist M, Feng Z. A data-analytic strategy for protein biomarker discovery: profiling of high-dimensional proteomic data for cancer detection. Biostatistics 2003;4:449-463. [pmid: 12925511] [doi: 10.1093/biostatistics/4.3.449]
5. Tibshirani R, Hastie T, Narasimhan B, Soltys S, Shi G, Koong A, Le QT. Sample classification from protein mass spectrometry, by 'peak probability contrasts'. Bioinformatics 2004;20:3034-3044. [pmid: 15226172] [doi: 10.1093/bioinformatics/bth357]
6. Yu JS, Ongarello S, Fiedler R, Chen XW, Toffolo G, Cobelli C, Trajanoski Z. Ovarian cancer identification based on dimensionality reduction for high-throughput mass spectrometry data. Bioinformatics 2005;21:2200-2209. [pmid: 15784749] [doi: 10.1093/bioinformatics/bti370]
7. Geurts P, Fillet M, de Seny D, Meuwis MA, Malaise M, Merville MP, Wehenkel L. Proteomic mass spectra classification using decision tree based ensemble methods. Bioinformatics 2005;21:3138-3145. [pmid: 15890743] [doi: 10.1093/bioinformatics/bti494]
8. Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning. New York: Springer-Verlag; 2001.
9. Wong JWH, Cagney G, Cartwright HM. SpecAlign - processing and alignment of mass spectra datasets. Bioinformatics 2005;21:2088-2090. [pmid: 15691857] [doi: 10.1093/bioinformatics/bti300]
10. Jeffries N. Algorithms for alignment of mass spectrometry proteomic data. Bioinformatics 2005;21:3066-3073. [pmid: 15879456] [doi: 10.1093/bioinformatics/bti482]
11. Freund Y, Schapire R. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences 1997;55:119-139. [doi: 10.1006/jcss.1997.1504]
12. Takenouchi T, Ushijima M, Eguchi S. GroupAdaBoost for selecting important genes. IEEE 5th Symposium on Bioinformatics and Bioengineering 2005:218-226.
13. Viola P, Jones M. Fast and robust classification using asymmetric AdaBoost and a detector cascade. Neural Information Processing Systems 14.
Tables
Table 1. Test errors, false negatives and false positives.
test error  false negative  false positive  
discrete  0.06  0.11  0.03 
continuous  0.06  0.11  0.03 
Table 2. Ten highest values of S_j in the discrete case.
peak (m/z)  S_j 
3675  1.42 
13651  1.06 
7247  1.02 
4907  1.00 
7539  0.82 
8569  0.76 
4792  0.65 
16668  0.62 
1849  0.61 
4728  0.54 
Table 3. Ten highest values of S_j in the continuous case.
peak (m/z)  S_j 
17095  3.98 
14798  2.46 
7247  1.97 
2095  1.68 
4029  1.62 
5271  1.56 
8039  1.38 
4118  1.19 
4773  1.07 
15044  1.05 
Table 4. Test error when h_{th} varies in the discrete case.
h_{th}  test error  number of variables 
0.0  0.04  232 
0.1  0.06  146 
0.2  0.08  128 
0.4  0.16  102 
0.8  0.1  69 
Table 5. Test error when h_{th} varies in the continuous case.
h_{th}  test error  number of variables 
0.0  0.06  232 
0.1  0.06  146 
0.2  0.06  128 
0.4  0.08  102 
0.8  0.12  69 