Document Detail

A batch rival penalized expectation-maximization algorithm for Gaussian mixture clustering with automatic model selection.
MedLine Citation:
PMID:  22400050     Owner:  NLM     Status:  In-Data-Review    
Abstract/OtherAbstract:
Within the learning framework of maximum weighted likelihood (MWL) proposed by Cheung, 2004 and 2005, this paper develops a batch Rival Penalized Expectation-Maximization (RPEM) algorithm for density mixture clustering, provided that all observations are available before the learning process. Compared to the adaptive RPEM algorithm in Cheung, 2004 and 2005, this batch RPEM, analogous to the Expectation-Maximization (EM) algorithm (Dempster et al., 1977), need not assign a learning rate, but still preserves the capability of automatic model selection. Further, the convergence speed of this batch RPEM is generally faster than that of the EM and the adaptive RPEM. The experiments show the superior performance of the proposed algorithm on synthetic data and color image segmentation.
Authors:
Jiechang Wen; Dan Zhang; Yiu-Ming Cheung; Hailin Liu; Xinge You
Publication Detail:
Type:  Journal Article     Date:  2012-01-30
Journal Detail:
Title:  Computational and mathematical methods in medicine     Volume:  2012     ISSN:  1748-6718     ISO Abbreviation:  Comput Math Methods Med     Publication Date:  2012  
Date Detail:
Created Date:  2012-03-08     Completed Date:  -     Revised Date:  -    
Medline Journal Info:
Nlm Unique ID:  101277751     Medline TA:  Comput Math Methods Med     Country:  United States    
Other Details:
Languages:  eng     Pagination:  425730     Citation Subset:  IM    
Affiliation:
Faculty of Applied Mathematics, Guangdong University of Technology, Guangzhou 510520, China.
From MEDLINE®/PubMed®, a database of the U.S. National Library of Medicine

Full Text
Journal Information
Journal ID (nlm-ta): Comput Math Methods Med
Journal ID (publisher-id): CMMM
ISSN: 1748-670X
ISSN: 1748-6718
Publisher: Hindawi Publishing Corporation
Article Information
Copyright © 2012 Jiechang Wen et al.
Open Access
Received Day: 31 Month: 8 Year: 2011
Accepted Day: 9 Month: 10 Year: 2011
Print publication date: Year: 2012
Electronic publication date: Day: 30 Month: 1 Year: 2012
Volume: 2012     E-location ID: 425730
ID: 3287038
PubMed Id: 22400050
DOI: 10.1155/2012/425730

A Batch Rival Penalized Expectation-Maximization Algorithm for Gaussian Mixture Clustering with Automatic Model Selection
Jiechang Wen1
Dan Zhang2
Yiu-ming Cheung2*
Hailin Liu1
Xinge You3
1Faculty of Applied Mathematics, Guangdong University of Technology, Guangzhou 510520, China
2Department of Computer Science, Hong Kong Baptist University, Kowloon, Hong Kong
3Department of Electronics and Information Engineering, Huazhong University of Science & Technology, Wuhan, China
Correspondence: *Yiu-ming Cheung: ymc@comp.hkbu.edu.hk
Academic Editor: Sheng-yong Chen

1. Introduction

As a typical statistical technique, clustering analysis has been widely applied to a variety of scientific areas such as data mining [1], vector quantization [2, 3], image processing [4–7], and so forth. In particular, clustering analysis provides a useful tool for solving several computer vision problems formulated in the feature space paradigm [8], for example, multithresholding of gray-level images, analysis of the Hough space, and range image segmentation. In general, one kind of clustering analysis can be formulated as density mixture modeling, in which each mixture component represents the density distribution of a data cluster. Subsequently, the task of clustering analysis is to identify the dense regions of the input (also called observation interchangeably) densities in a mixture. Such a clustering is therefore called density mixture clustering.

In general, the Expectation-Maximization (EM) algorithm [9, 10] provides a general solution for the parameter estimation of a density mixture model. Nevertheless, it needs to preassign an appropriate number of density components, that is, the number of clusters. Roughly, the mixture may overfit the data if too many components are utilized, whereas a mixture with too few components may not be flexible enough to approximate the true underlying model. Subsequently, the EM almost always leads to a poor estimate if the number of components is misspecified. Unfortunately, from the practical viewpoint, it is hard or even impossible to know the exact cluster number in advance. In the literature, one promising way is to develop a clustering algorithm that is able to perform correct clustering without preassigning the exact number of clusters. Such algorithms include the RPCL algorithm [11] and its improved version, namely, RPCCL [12]. More recently, Cheung [13, 14] has proposed a general learning framework, namely, Maximum Weighted Likelihood (MWL), through which an adaptive Rival Penalized EM (RPEM) algorithm has been proposed for density mixture clustering. The RPEM learns the density parameters by making the mixture components compete with each other at each time step. Not only are the associated parameters of the winning density component updated to adapt to an input, but all rivals' parameters are also penalized with strength proportional to the corresponding posterior density probabilities. This intrinsic rival-penalization mechanism therefore enables the RPEM to automatically select an appropriate number of densities by gradually fading out the redundant densities from a density mixture. Furthermore, a simplified version of the RPEM includes RPCL and RPCCL as special cases with some new extensions.

In the papers [13, 14], the RPEM algorithm learns the parameters via a stochastic gradient ascent method; that is, the parameters are updated immediately and adaptively once the current observation is available. In general, this adaptiveness makes the RPEM more applicable to environments that change over time. Nevertheless, the convergence speed of the RPEM relies on the value of the learning rate. Often, by a rule of thumb, we arbitrarily set the learning rate to a small positive constant. If the learning rate is set too small, the algorithm converges very slowly. On the contrary, if it is too large, the algorithm may even oscillate. In general, it is a nontrivial task to assign an appropriate value to the learning rate, although extra effort can be spent to make the learning rate change dynamically over time, for example, see [15].

In this paper, we further study the MWL learning framework and develop a batch RPEM algorithm accordingly, provided that all observations are available before the learning process. Compared to the adaptive RPEM, this batch one, analogous to the EM, need not assign a learning rate, but still preserves the capability of automatic model selection. Further, the convergence speed of this batch RPEM is generally faster than that of the EM and the adaptive RPEM. The experiments have shown the superior performance of the proposed algorithm on synthetic data and color image segmentation.

The remainder of this paper is organized as follows. Section 2 reviews the MWL learning framework. In Section 3, we present the batch RPEM algorithm in detail, in which the weights involve a coefficient ɛ. We will therefore further explore the assignment of ɛ in Section 4. Section 5 shows the detailed experiment results. Finally, we draw a conclusion in Section 6.


2. Overview of Maximum Weighted Likelihood (MWL) Learning Framework

Suppose that an input x ∈ ℝ^d comes from the following density mixture model:

(1) $P(x \mid \Theta) = \sum_{j=1}^{k} \alpha_j\, p(x \mid \theta_j), \qquad \sum_{j=1}^{k} \alpha_j = 1, \quad \alpha_j > 0, \quad \forall\, 1 \le j \le k,$
where Θ is the parameter set {αj, θj}, j = 1, …, k. Furthermore, k is the number of components, αj is the mixture proportion of the jth component, and p(x | θj) is a multivariate probability density function (pdf) of x parameterized by θj. As long as we know the value of Θ, an input x can be classified into a certain cluster via its posterior probability:

(2) $h(j \mid x, \Theta) = \dfrac{\alpha_j\, p(x \mid \theta_j)}{P(x \mid \Theta)}$
using the winner-take-all rule, that is, assigning an input x to Cluster c if c = arg max_j h(j | x, Θ), or taking its soft version, which assigns x to Cluster j with probability h(j | x, Θ). Therefore, how to estimate the parameter set Θ, particularly without knowing the correct value of k in advance, is a key issue in density mixture clustering.
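For illustration, the following minimal Python sketch (assuming NumPy and SciPy are available; the variable names alphas, mus, and covs for the mixture parameters are our own) computes the posterior probabilities h(j | x, Θ) of (2) for a single observation and performs the winner-take-all assignment:

import numpy as np
from scipy.stats import multivariate_normal

def posteriors(x, alphas, mus, covs):
    # h(j | x, Theta) of (2): alpha_j * p(x | theta_j) normalized by P(x | Theta)
    joint = np.array([a * multivariate_normal.pdf(x, mean=m, cov=C)
                      for a, m, C in zip(alphas, mus, covs)])
    return joint / joint.sum()

def winner_take_all(x, alphas, mus, covs):
    # hard assignment: cluster c = argmax_j h(j | x, Theta)
    return int(np.argmax(posteriors(x, alphas, mus, covs)))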

In the MWL learning framework [13, 14], the parameter set Θ is learned via maximizing the following Weighted Likelihood (WL) cost function:

(3) $l(\Theta) = \omega(\Theta; x) + \nu(\Theta; x)$
with

(4) $\omega(\Theta; x) = \int \sum_{j=1}^{k} g(j \mid x, \Theta) \ln\bigl[\alpha_j\, p(x \mid \theta_j)\bigr]\, dF(x), \qquad \nu(\Theta; x) = -\int \sum_{j=1}^{k} g(j \mid x, \Theta) \ln h(j \mid x, \Theta)\, dF(x),$
where g(j | x, Θ)'s are the designable weights satisfying the two conditions:

  • (1)  $\sum_{j=1}^{k} g(j \mid x, \Theta) = 1$,
  • (2)  for all $j$, $g(j \mid x, \Theta) = 0$ if $h(j \mid x, \Theta) = 0$.

Suppose that a set of N i.i.d. observations, denoted as X = {x1, x2,…, xN}, comes from the density mixture model in (1). The empirical WL function of (3), written as Q(Θ; X), can be given as

(5) $Q(\Theta; X) = \omega(\Theta; X) + \nu(\Theta; X)$
with
(6) $\omega(\Theta; X) = \frac{1}{N}\sum_{t=1}^{N}\sum_{j=1}^{k} g(j \mid x_t, \Theta) \ln\bigl[\alpha_j\, p(x_t \mid \theta_j)\bigr], \qquad \nu(\Theta; X) = -\frac{1}{N}\sum_{t=1}^{N}\sum_{j=1}^{k} g(j \mid x_t, \Theta) \ln h(j \mid x_t, \Theta).$
Moreover, the weights g(j | xt, Θ)'s have been generally designed as [13, 14]

(7) $g(j \mid x_t, \Theta) = (1 + \varepsilon_t)\, I(j \mid x_t, \Theta) - \varepsilon_t\, h(j \mid x_t, \Theta),$
where ɛt is a coefficient varying with the time step t in general. Please note that g(j | xt, Θ)'s in (7) can be negative as well as positive. For simplicity, we hereinafter set ɛt as a constant, denoted as ɛ. Moreover, I(j | xt,  Θ) is an indicator function with

(8) $I(j \mid x_t, \Theta) = \begin{cases} 1, & \text{if } j = c = \arg\max_{1 \le j \le k} h(j \mid x_t, \Theta), \\ 0, & \text{otherwise.} \end{cases}$
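As a concrete, non-authoritative rendering of (7) and (8), the following Python sketch computes the indicator I and the weights g for one observation from its posterior vector; the constant coefficient eps stands for ɛ:

import numpy as np

def rpem_weights(h, eps):
    # h: length-k array of posteriors h(j | x_t, Theta); eps: the coefficient epsilon
    I = np.zeros_like(h)
    I[np.argmax(h)] = 1.0                # indicator of (8): winner c = argmax_j h(j | x_t, Theta)
    return (1.0 + eps) * I - eps * h     # weights of (7)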

Subsequently, under a specific weight design, the papers [13, 14] have presented the adaptive RPEM to learn Θ via maximizing the WL function of (5) using a stochastic gradient ascent method, which is able to gradually fade out the redundant densities from a density mixture. Consequently, it can automatically select an appropriate number of density components in density mixture clustering. Interested readers may refer to the paper [14] for more details. We summarize the main steps of the adaptive RPEM in Algorithm 1. Although the experiments have shown the superior performance of the adaptive RPEM on automatic model selection, its convergence speed relies on the value of the learning rate. Under these circumstances, we will present a batch version without the learning rate in the next section.


3. Batch RPEM Algorithm

To estimate the parameter set within the MWL framework, we have to maximize the empirical WL function Q(Θ; X) in (5). In general, we update the parameters via maximizing the first term of (5), that is, ω(Θ; X), by fixing the second term ν(Θ; X). Subsequently, we need to solve the following nonlinear optimization problem:

(9) $\tilde{\Theta} = \arg\max_{\Theta}\,\{\omega(\Theta; X)\}$
subject to the constraints shown in (1). Analogous to the EM, we adopt the Lagrange method, introducing a Lagrange multiplier λ to form the Lagrangian. Subsequently, we have

(10) $\mathcal{L}(\Theta, \lambda) = \omega(\Theta; X) + \lambda\Bigl(\sum_{j=1}^{k} \alpha_j - 1\Bigr).$
In this paper, we concentrate on the Gaussian mixture model only, that is, each component p(x | θj) in (1) is a Gaussian density. We then have

(11) $p(x_t \mid \theta_j) = (2\pi)^{-d/2}\, |C_j|^{-1/2} \exp\Bigl[-\tfrac{1}{2}(x_t - \mu_j)^{T} C_j^{-1} (x_t - \mu_j)\Bigr],$
where θj = (μj, Cj), μj and Cj are the mean (also called seed points interchangeably) and the covariance of the jth density, respectively.

By optimizing (10), we finally obtain the batch RPEM algorithm as shown in Algorithm 2. If a covariance matrix Cj(n+1) is singular, it indicates that the corresponding jth density component has degenerated and can simply be discarded without being learned any more in the subsequent iterations. In this case, we have to normalize the remaining αr(n+1)'s (r ≠ j) so that their sum is always kept to 1.
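Since Algorithm 2 itself is not reproduced in this record, the following Python sketch only illustrates the overall shape of one batch RPEM iteration under the MWL framework: an E-step that computes the posteriors and the weights of (7), an M-step with weighted updates of αj, μj, and Cj in the usual weighted-likelihood form obtained from the Lagrangian (10), and the discarding/renormalization step described above. The specific update formulas and the singularity test are illustrative assumptions, not a verbatim transcription of Algorithm 2.

import numpy as np
from scipy.stats import multivariate_normal

def batch_rpem_step(X, alphas, mus, covs, eps=-0.8, min_det=1e-12):
    # One illustrative batch iteration (a sketch, not the paper's Algorithm 2).
    N, k = X.shape[0], len(alphas)
    # E-step: posteriors h(j | x_t, Theta) and weights g(j | x_t, Theta) of (7)
    px = np.array([alphas[j] * multivariate_normal.pdf(X, mean=mus[j], cov=covs[j])
                   for j in range(k)]).T                      # N x k
    h = px / px.sum(axis=1, keepdims=True)
    I = np.zeros_like(h)
    I[np.arange(N), np.argmax(h, axis=1)] = 1.0
    g = (1.0 + eps) * I - eps * h
    # M-step (assumed weighted-likelihood form): alpha_j, mu_j, C_j
    gsum = g.sum(axis=0)
    alphas = gsum / N                     # since sum_j g(j | x_t, Theta) = 1 for each t
    mus = (g.T @ X) / gsum[:, None]
    covs = []
    for j in range(k):
        d = X - mus[j]
        covs.append((g[:, j][:, None] * d).T @ d / gsum[j])
    # Discard degenerated (singular-covariance) components and renormalize the alphas
    keep = [j for j in range(k) if np.linalg.det(covs[j]) > min_det]
    alphas = np.array([alphas[j] for j in keep])
    mus = np.array([mus[j] for j in keep])
    covs = [covs[j] for j in keep]
    return alphas / alphas.sum(), mus, covs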

In the above batch RPEM, the capability of automatic model selection is controlled by the weight functions g(j | xt, Θ)'s, which in turn rely on the parameter ɛ as shown in (7). Subsequently, a new question arises: how to assign an appropriate value to ɛ? The next section will answer this question.


4. How to Assign Parameter  ɛ?

To deal with how to assign an appropriate value of ɛ, we rewrite (7) as the following form:

(12) $g(j \mid x_t, \Theta) = \begin{cases} (1+\varepsilon)\, I(c \mid x_t, \Theta) - \varepsilon\, h(c \mid x_t, \Theta), & \text{if } j = c, \\ -\varepsilon\, h(j \mid x_t, \Theta), & \text{otherwise} \end{cases} = \begin{cases} h(c \mid x_t, \Theta) + (1+\varepsilon)\bigl(1 - h(c \mid x_t, \Theta)\bigr), & \text{if } j = c, \\ h(j \mid x_t, \Theta) - (1+\varepsilon)\, h(j \mid x_t, \Theta), & \text{otherwise.} \end{cases}$
Intuitively, the term (1 + ɛ)(1 − h(c | xt, Θ)) can be regarded as the award received by the winning density component (i.e., the cth density with I(c | xt, Θ) = 1), while the term −(1 + ɛ)h(j | xt, Θ) is the penalty imposed on the rival components (i.e., those densities with I(j | xt, Θ) = 0). In general, it is expected that the award is positive and the penalty is negative. That is, ɛ should be greater than −1. Otherwise, when ɛ < −1, we meet an awkward situation: the award becomes negative and the penalty becomes positive. This implies that we would penalize the winner and award the rivals, which evidently violates our expectations. Furthermore, when ɛ = −1, both the award and the penalty become zero. In this special case, the batch RPEM actually degenerates into the EM without the property of automatic model selection. As a result, ɛ is required to be greater than −1. In addition, ɛ in the batch RPEM should be negative. Otherwise, the weights of the rival components, g(j | xt, Θ) = −ɛh(j | xt, Θ), become negative, eventually resulting in some negative αj's. Hence, an appropriate choice of ɛ in the batch RPEM is a negative value greater than −1; that is, ɛ should fall into the range (−1, 0).
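A quick numerical check, using the weight formula (7) with an illustrative posterior vector, makes the above argument concrete: for ɛ in (−1, 0) the winner's weight exceeds its posterior (positive award) while the rivals' weights shrink towards zero but stay nonnegative; ɛ > 0 drives the rival weights negative, and ɛ < −1 reverses award and penalty.

import numpy as np

h = np.array([0.6, 0.3, 0.1])            # illustrative posteriors; the winner is component 0
I = np.array([1.0, 0.0, 0.0])            # indicator of (8)
for eps in (-1.5, -0.8, -0.1, 0.5):
    g = (1.0 + eps) * I - eps * h        # weights of (7)
    print(eps, g, "award:", g[0] - h[0], "rival penalties:", g[1:] - h[1:])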

Furthermore, our empirical studies have found that a smaller ɛ leads the batch RPEM algorithm to a more robust performance, especially when the value of k is large and the data are overlapped considerably. In other words, the algorithm has a poor capability of automatic model selection if ɛ is close to zero. To illustrate this scenario, we have utilized two synthetic data sets generated from two bivariate three-Gaussian mixtures, as shown in Figures 1(a) and 1(b), where each data set consists of 1,000 observations with the true mixture proportions α1* = 0.4, α2* = 0.3, and α3* = 0.3. Also, the true μj*'s and Cj*'s of data set 1 in Figure 1(a) are

(13) $\mu_1^* = \begin{pmatrix}1.0\\1.0\end{pmatrix}, \quad \mu_2^* = \begin{pmatrix}1.0\\5.0\end{pmatrix}, \quad \mu_3^* = \begin{pmatrix}5.0\\5.0\end{pmatrix},$
$C_1^* = \begin{pmatrix}0.3 & 0.2\\0.2 & 0.4\end{pmatrix}, \quad C_2^* = \begin{pmatrix}0.2 & -0.1\\-0.1 & 0.3\end{pmatrix}, \quad C_3^* = \begin{pmatrix}0.30 & -0.20\\-0.20 & 0.25\end{pmatrix},$
while the true parameters of data set 2 in Figure 1(b) are
(14) $\mu_1^* = \begin{pmatrix}1.0\\1.0\end{pmatrix}, \quad \mu_2^* = \begin{pmatrix}1.0\\2.5\end{pmatrix}, \quad \mu_3^* = \begin{pmatrix}2.5\\2.5\end{pmatrix},$
$C_1^* = \begin{pmatrix}0.3 & 0.1\\0.1 & 0.4\end{pmatrix}, \quad C_2^* = \begin{pmatrix}0.3 & 0.0\\0.0 & 0.3\end{pmatrix}, \quad C_3^* = \begin{pmatrix}0.30 & -0.05\\-0.05 & 0.25\end{pmatrix}.$
It can be seen that the clusters in data set 1 are well separated, whereas the clusters in data set 2 are overlapped considerably.
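For reproducibility, a data set of this kind can be drawn as in the following Python sketch, which uses the data set 1 parameters in (13); the random seed and the sampling routine are our own illustrative choices:

import numpy as np

rng = np.random.default_rng(0)
N = 1000
alphas = np.array([0.4, 0.3, 0.3])                      # true mixing proportions
mus = np.array([[1.0, 1.0], [1.0, 5.0], [5.0, 5.0]])    # true means of data set 1, eq. (13)
covs = np.array([[[0.30, 0.20], [0.20, 0.40]],
                 [[0.20, -0.10], [-0.10, 0.30]],
                 [[0.30, -0.20], [-0.20, 0.25]]])        # true covariances of data set 1
labels = rng.choice(3, size=N, p=alphas)                 # draw a component per observation
X = np.array([rng.multivariate_normal(mus[j], covs[j]) for j in labels])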

For each data set, we conducted three experiments by setting k = 3, k = 8, and k = 20, respectively. Also, all αj's and Cj's were initialized at 1/k and the identity matrix, respectively. During the learning process, we discarded those densities whose covariance matrices Cj's were singular. Table 1 shows the performance of the batch RPEM over the parameter ɛ. We found that, for k = 3 and k = 8, all ɛ's we tried from −0.9 to −0.1 lead to good performance of the algorithm on data set 1. For example, with k = 8 and ɛ = −0.8, we randomly initialized the eight seed points in the input space as shown in Figure 2(a). After all the parameters converged, 2 out of 8 density components had been discarded and the mixture proportions of the remaining components converged to α1 = 0.2960, α2 = 0.0036, α3 = 0.2900, α4 = 0.0058, α5 = 0.0136, and α6 = 0.3910. It can be seen that the three principal mixing proportions, α1, α3, and α6, estimated the true ones well, while the other proportions tended to zero. The corresponding three μj's and Cj's were

(15) $\mu_1 = \begin{pmatrix}5.06\\4.96\end{pmatrix}, \quad \mu_3 = \begin{pmatrix}0.98\\4.98\end{pmatrix}, \quad \mu_6 = \begin{pmatrix}1.00\\0.96\end{pmatrix},$
$C_1 = \begin{pmatrix}0.29 & -0.17\\-0.17 & 0.22\end{pmatrix}, \quad C_3 = \begin{pmatrix}0.18 & -0.08\\-0.08 & 0.25\end{pmatrix}, \quad C_6 = \begin{pmatrix}0.29 & 0.19\\0.19 & 0.39\end{pmatrix}.$
As shown in Figure 2(b), the three μj's have successfully stabilized at the corresponding cluster centers, while the other three redundant seed points have been pushed away and stably located at the boundary of the clusters. That is, the redundant densities have been faded out through the learning; thus, the batch RPEM can select the model automatically, as the adaptive version does.

Nevertheless, when k is set at a large value, for example, k = 20, it is found that the proposed algorithm could not fade out the redundant density components from a mixture if ɛ is close to 0. Instead, we should set ɛ at a value close to −1. For example, with k* = 3, k = 20, and ɛ = −0.9, we ran the proposed algorithm. It was found that 13 of 20 seed points were maintained after discarding those whose covariance matrices Cj's were singular. The mixture proportions of the remaining components converged to α1 = 0.0421, α2 = 0.0169, α3 = 0.0051, α4 = 0.2349, α5 = 0.0036, α6 = 0.0149, α7 = 0.3444, α8 = 0.0049, α9 = 0.0210, α10 = 0.0029, α11 = 0.0057, α12 = 0.2944, and α13 = 0.0091. The three principal mixing proportions, α4, α7, and α12, have also estimated the true ones well, while the other proportions tended to zero. Furthermore, the corresponding μj's were μ1 = [1.0872, 4.9986]T, μ2 = [0.9897, 0.9640]T, and μ3 = [5.0754, 4.9552]T. As shown in Figure 3(a), the learned μj's are correctly allocated at the centers of the three clusters, and the other redundant seed points were driven away to the boundaries of the clusters. Hence, the batch algorithm performed good model selection with ɛ = −0.9. In contrast, if we assign ɛ a value close to zero, the algorithm leads to poor model selection. Take ɛ = −0.1 for instance. The mixture proportions of the remaining 19 out of 20 components converged to α1 = 0.0461, α2 = 0.0121, α3 = 0.0439, α4 = 0.1404, α5 = 0.0070, α6 = 0.0258, α7 = 0.0178, α8 = 0.0348, α9 = 0.0659, α10 = 0.0513, α11 = 0.0493, α12 = 0.0352, α13 = 0.0362, α14 = 0.0528, α15 = 0.0587, α16 = 0.0171, α17 = 0.1916, α18 = 0.0882, and α19 = 0.0260. It can be seen that none of the αj's tends to zero. As shown in Figure 3(b), all the converged positions are biased from the cluster centers. In other words, the algorithm performs poorly as ɛ gets close to zero. Hence, if k is large, it is better to choose a relatively smaller value of ɛ between −1 and 0.

In addition, we also investigated the assignment of ɛ on data set 2, where the data are considerably overlapped. Take k* = 3, k = 20, and ɛ = −0.9 for instance. The converged positions of the seed points are shown in Figure 4(a), where the learned positions converged to the cluster centers while the redundant seed points were driven to the boundaries of the clusters. That is, the proposed batch algorithm works quite well with ɛ = −0.9. Also, we let k* = 3, k = 20, and ɛ = −0.2 and ran the algorithm again for comparison. As shown in Figure 4(b), the converged positions of the seed points are biased from the cluster centers. This implies that values of ɛ close to zero cannot work well in this case. More examples can be found in Table 1. It can be seen that the feasible region of ɛ shrinks as the overlap level of the data increases. For example, the appropriate values of ɛ are in the range of [−0.9, −0.6] only when using data set 2 with k* = 3 and k = 3 or 8, respectively. In contrast, ɛ is feasible over the full range of [−0.9, −0.1] that we have tried so far when data set 1 is used. Hence, by a rule of thumb, we should choose a value of ɛ close to −1. Nevertheless, we have also noted that it is not a good choice if ɛ is too close to −1. In fact, the proposed algorithm will gradually degenerate to the EM as ɛ tends to −1; that is, the capability of the proposed algorithm on model selection will be reduced as ɛ tends to −1. To sum up, empirical studies have found that [−0.9, −0.8] is an appropriate feasible region of ɛ. In the next section, we therefore arbitrarily set ɛ at −0.8.


5. Experimental Results

To evaluate the performance of the batch RPEM algorithm, we have conducted the following three experiments.

5.1. Experiment  1: Batch RPEM on Synthetic Data with K = K*

This experiment was to evaluate the convergence speed of the batch RPEM. We utilized 1,000 data points from a mixture of three bivariate Gaussian densities with the true parameters as follows:

(16) $\alpha_1^* = 0.3, \quad \alpha_2^* = 0.4, \quad \alpha_3^* = 0.3,$
$\mu_1^* = [1.0, 1.0]^T, \quad \mu_2^* = [1.0, 2.5]^T, \quad \mu_3^* = [2.5, 2.5]^T,$
$C_1^* = \begin{pmatrix}0.20 & 0.05\\0.05 & 0.30\end{pmatrix}, \quad C_2^* = \begin{pmatrix}0.2 & 0.0\\0.0 & 0.2\end{pmatrix}, \quad C_3^* = \begin{pmatrix}0.2 & -0.1\\-0.1 & 0.2\end{pmatrix}.$
We let k = 3, which is equal to the true mixture number k* = 3. The three seed points were randomly allocated in the observation space as shown in Figure 5(a), where the data are considerably overlapped. Moreover, all αj's and Cj's were initialized at 1/k and the identity matrix, respectively. Figure 5(d) shows the positions of the three converged seed points, which are all stably located at the corresponding cluster centers. For comparison, we also implemented the EM under the same experimental environment. Figure 5(b) shows that the EM successfully located the three seed points as well as the batch RPEM did.

Nevertheless, as shown in Figures 6(c) and 7, the batch RPEM converges in 20 epochs, while the EM needs 60 epochs as shown in Figure 6(a). That is, the convergence speed of the batch RPEM is significantly faster than that of the EM. This indicates that the intrinsic rival-penalization scheme of the batch RPEM, analogous to the RPCL [11], the RPCCL [12], and the adaptive RPEM [14], is able to drive away the rival seed points so that they move more quickly towards the other cluster centers. As a result, the batch RPEM converges much faster than the EM. Moreover, we also compared it with the adaptive RPEM, in which we set the learning rate η = 0.001. Figure 5(c) shows the convergent results of the seed points. It can be seen that the adaptive RPEM works quite well in this case, but it needs around 70 epochs as shown in Figure 6(b). Actually, the adaptive RPEM can be further sped up if an appropriate learning rate is adopted, which, however, is not a trivial task.

5.2. Experiment  2: Batch RPEM on Synthetic Data with K > K*

This experiment investigates the performance of the batch RPEM when k > k*. We generated 1,000 observations from a mixture of five bivariate Gaussian density distributions with the mixing proportions:

(17) $\alpha_1^* = 0.1, \quad \alpha_2^* = 0.2, \quad \alpha_3^* = 0.3, \quad \alpha_4^* = 0.2, \quad \alpha_5^* = 0.2$
and the true cluster centers:
(18) $\mu_1^* = [1.0, 1.0]^T, \quad \mu_2^* = [1.0, 2.5]^T, \quad \mu_3^* = [2.5, 2.5]^T, \quad \mu_4^* = [2.5, 1.0]^T, \quad \mu_5^* = [4.0, 2.0]^T.$
15 seed points were initialized in the input space arbitrarily. During the learning, three density components were discarded because their corresponding covariances became singular. As a result, the remaining 12 converged proportions were α1 = 0.0065, α2 = 0.0113, α3 = 0.1929, α4 = 0.0030, α5 = 0.0068, α6 = 0.2013, α7 = 0.2084, α8 = 0.0074, α9 = 0.0083, α10 = 0.0986, α11 = 0.2531, and α12 = 0.0022. It can be seen that the five principal values α3, α6, α7, α10, and α11 were estimated well, while the others were learned towards zero. A snapshot of the corresponding μj's was μ3 = [4.0348, 2.0075]T, μ6 = [0.9990, 2.4571]T, μ7 = [2.4725, 0.9220]T, μ10 = [0.9553, 1.0277]T, and μ11 = [2.5189, 2.5199]T. As shown in Figure 8, these five seed points have been successfully allocated at the cluster centers, while the batch RPEM drove the redundant seed points to the boundaries of the clusters.

5.3. Experiment  3: Batch RPEM on Color Image Segmentation

This experiment further investigated the batch RPEM algorithm on color image segmentation in comparison with the EM algorithm. We implemented the image segmentation in the red-green-blue (RGB) color space model, which represents each pixel in an image by a three-color vector. We conducted color image segmentation on a 122 × 152 hand image and a 96 × 128 house image as shown in Figures 9(a) and 10, respectively. For the former, we initially assigned 10 seed points randomly. After the algorithms converged, a snapshot of their segmentation results is shown in Figures 9(b) and 9(c). It can be seen that the tiny blue swim-ring-shaped region segmented by the batch RPEM is smoother than that by the EM. Further, the tiny nail regions have been partitioned by the batch RPEM but not by the EM. In other words, the batch RPEM algorithm performs better than the EM algorithm.
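For concreteness, a mixture-based segmentation of this kind can be sketched in Python as follows: reshape the image into an N × 3 array of RGB vectors, fit the mixture on the pixels, hard-assign each pixel by the winner-take-all rule, and color it with its cluster mean. The fitting routine fit (standing in for a batch RPEM implementation such as Algorithm 2) and its interface are assumptions made for illustration:

import numpy as np
from scipy.stats import multivariate_normal

def segment_image(img, fit, k_init=10, eps=-0.8):
    # img: H x W x 3 RGB image; fit: a mixture-fitting routine (assumed, not defined here)
    H, W, _ = img.shape
    pixels = img.reshape(-1, 3).astype(float)            # each pixel is a three-color vector
    alphas, mus, covs = fit(pixels, k_init, eps)          # e.g. a batch RPEM implementation
    post = np.array([a * multivariate_normal.pdf(pixels, mean=m, cov=C)
                     for a, m, C in zip(alphas, mus, covs)]).T
    labels = np.argmax(post, axis=1)                      # winner-take-all per pixel
    return np.asarray(mus)[labels].reshape(H, W, 3)       # replace pixels by cluster means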

For the house image, we initially assigned 80 seed points. A snapshot of the converged segmentation results of the EM and the batch RPEM is shown in Figure 11. It can be seen that the texture on the red wall and the green lawn is no longer maintained after segmentation by either the EM or the batch RPEM. However, the small white window regions on the red wall, as well as the triangular shadow area on the wall, disappeared under the EM. In contrast, the batch RPEM algorithm partitioned these regions well as shown in Figure 11(b). Actually, the batch RPEM has driven the redundant seed points far away and maintained the principal components correctly, which therefore leads to better performance in color image segmentation.


6. Conclusion

In this paper, we have developed a batch RPEM algorithm based on the MWL learning framework for Gaussian mixture clustering. Compared to the adaptive RPEM, this new one need not select a learning rate value. As a result, it generally learns faster and still preserves the capability of automatic model selection analogous to the adaptive one. We have evaluated the proposed batch RPEM algorithm on both synthetic data and color image segmentation. The numerical results have shown the efficacy of the proposed algorithm.


Acknowledgments

This work was jointly supported by the grants from the Research Grant Council of the Hong Kong SAR with the Project Code: HKBU 210309, the Natural Science Foundation of China (60974077), the Natural Science Foundation of Guangdong Province (s2011010005075), Guangzhou Technology Projects (11c42110781), the Grants 60403011 and 60973154 from the NSFC, and NCET-07-0338 from the Ministry of Education, China. This work was also partially supported by the Fundamental Research Funds for the Central Universities, HUST:2010ZD025, and Hubei Provincial Science Foundation under Grant 2010CDA006, China.


References
1. Fayyad U, Piatetsky-Shapiro G, Smyth P, Uthurusamy R. Advances in Knowledge Discovery and Data Mining. MIT Press, 1996.
2. Fritzke B. The LBG-U method for vector quantization—an improvement over LBG inspired from neural networks. Neural Processing Letters, 1997; 5(1): 35–45.
3. Linde Y, Buzo A, Gray R. An algorithm for vector quantizer design. IEEE Transactions on Communications, 1980; 28(1): 84–95.
4. Chen SY, Tong HY, Wang ZJ, Liu S, Li M, Zhang BW. Improved generalized belief propagation for vision processing. Mathematical Problems in Engineering, 2011; Article ID 416963, 12 pages.
5. Chen SY, Tong HY, Cattani C. Markov models for image labeling. Mathematical Problems in Engineering, 2012; vol. 2012, Article ID 814356, 18 pages.
6. Lim YW, Lee SU. On the color image segmentation algorithm based on the thresholding and the fuzzy C-means techniques. Pattern Recognition, 1990; 23(9): 935–952.
7. Uchiyama T, Arbib MA. Color image segmentation using competitive learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1994; 16(12): 1197–1206.
8. Jolion JM, Meer P, Bataouche S. Robust clustering with applications in computer vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1991; 13(8): 791–802.
9. Dempster A, Laird N, Rubin D. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, 1977; 39(1): 1–38.
10. Bilmes JA. A Gentle Tutorial of the EM Algorithm and Its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models. Technical Report TR-97-021, International Computer Science Institute, Berkeley, Calif, USA, 1998.
11. Xu L, Krzyzak A, Oja E. Rival penalized competitive learning for clustering analysis, RBF net, and curve detection. IEEE Transactions on Neural Networks, 1993; 4(4): 636–649.
12. Cheung YM. Rival penalized controlled competitive learning for data clustering with unknown cluster number. In: Proceedings of the 9th International Conference on Neural Information Processing (ICONIP '02), 2002.
13. Cheung YM. A rival penalized EM algorithm towards maximizing weighted likelihood for density mixture clustering with automatic model selection. In: Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), vol. 4, Cambridge, UK, 2004, pp. 633–636.
14. Cheung YM. Maximum weighted likelihood via rival penalized EM for density mixture clustering with automatic model selection. IEEE Transactions on Knowledge and Data Engineering, 2005; 17(6): 750–761.
15. Zhao XM, Cheung YM, Chen L, Aihara K. A new technique for adjusting the learning rate of the RPEM algorithm automatically. In: Proceedings of the 12th International Symposium on Artificial Life and Robotics, Oita, Japan, January 2007, pp. 597–600.

Article Categories:
  • Research Article

