Data Mining: The Textbook




11.6. SEMISUPERVISED LEARNING


11.6.1 Generic Meta-algorithms


The goal of generic meta-algorithms is to use existing classification algorithms to enhance the classification process with unlabeled data. The simplest method is self-training, in which the smoothness assumption is used to incrementally expand the labeled portions of the training data. The major drawback of this approach is that it might lead to overfitting. One way of avoiding overfitting is by using co-training. Co-training partitions the feature space and independently labels instances using classifiers trained on each of these feature spaces. The labeled instances from one classifier are used as feedback to the other, and vice versa.


11.6.1.1 Self-Training


The self-training procedure can use any existing classification algorithm A as input. The classifier A is used to incrementally assign labels to unlabeled examples for which it has the most confident prediction. As input, the self-training procedure uses the initial labeled set L, the unlabeled set U, and a user-defined parameter k that may sometimes be set to 1. The self-training procedure iteratively uses the following steps (a code sketch follows the list):




    1. Use algorithm A on the current labeled set L to identify the k instances in the unlabeled data U for which the classifier A is the most confident.




    2. Assign labels to the k most confidently predicted instances and add them to L. Remove these instances from U.
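A minimal sketch of this loop in Python is shown below, assuming a scikit-learn-style base classifier that exposes fit() and predict_proba(); the function name, the use of logistic regression, and the run-until-U-is-empty stopping rule are illustrative assumptions rather than the book's pseudocode.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_labeled, y_labeled, X_unlabeled, k=1, base=LogisticRegression):
    # L and U as in the text: L grows by k instances per iteration, U shrinks.
    L_X, L_y = X_labeled.copy(), y_labeled.copy()
    U = X_unlabeled.copy()
    while len(U) > 0:
        clf = base().fit(L_X, L_y)               # train A on the current labeled set L
        conf = clf.predict_proba(U).max(axis=1)  # confidence of the top prediction per instance
        top = np.argsort(-conf)[:k]              # indices of the k most confident predictions
        L_X = np.vstack([L_X, U[top]])           # add these instances to L ...
        L_y = np.concatenate([L_y, clf.predict(U[top])])  # ... with their predicted labels
        U = np.delete(U, top, axis=0)            # and remove them from U
    return base().fit(L_X, L_y)                  # final model trained on the expanded L
```

In practice, the loop is often stopped early, for example when the maximum confidence drops below a threshold, precisely because of the error-propagation issue discussed next.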

It is easy to see that the self-training procedure will work very well for the simple example of Fig. 11.2. However, in practice, the different classes may not be quite as cleanly separated. The major drawback of self-training is that the addition of predicted labels to the training data can lead to propagation of errors in the presence of noise. Another procedure, known as co-training, is able to avoid such overfitting more effectively.


11.6.1.2 Co-training


In co-training, it is assumed that the feature set can be partitioned into two disjoint groups F1 and F2, such that each of them is sufficient to learn the target classification function. It is important to select the two feature subsets so that they are as independent from one another as possible. Two classifiers are constructed, one on each of these feature groups. These classifiers are not allowed to interact with one another directly for prediction of unlabeled examples, though they are used to build up training sets for each other. This is the reason that the approach is referred to as co-training.


Let L be the labeled training data and U be the unlabeled data. Let L1 and L2 be the labeled sets for each of these classifiers. The sets L1 and L2 are initialized to the available labeled data L, except that they are represented in terms of disjoint feature sets F1 and F2, respectively. Over the course of the co-training process, as different examples from the initially unlabeled set U are added to L1 and L2, respectively, the training instances in L1 and L2 may vary from one another. Two classifier models A1 and A2 are constructed using the training sets L1 and L2, respectively. The following steps are then iteratively applied:





  1. Train classifier A1 using labeled set L1, and add the k most confidently predicted instances from the unlabeled set U − L2 to the training data set L2 of classifier A2.




  2. Train classifier A2 using labeled set L2, and add the k most confidently predicted instances from the unlabeled set U − L1 to the training data set L1 of classifier A1.



In many implementations of the method, the most confidently labeled examples for each class are added to the training set of the other classifier. This procedure is repeated until all instances are labeled. The two classifiers are then retrained with the expanded training data sets. This approach can be used to label not only the unlabeled data set U, but also unseen test instances. At the end of the procedure, two classifiers are returned. For an unseen test instance, each classifier may be used to determine the class label scores. The score for a test instance is determined by combining the scores of the two classifiers. For example, if the Bayes method is used as the base classifier, then the product of the posterior probabilities returned by the two classifiers may be used.
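The following is a hedged sketch of the full procedure, assuming scikit-learn-style base classifiers (Gaussian naive Bayes here, so that posterior probabilities can be multiplied at prediction time). The column-index arrays f1_cols and f2_cols encoding the feature split, and the simplified bookkeeping in which an instance leaves U as soon as either classifier labels it, are illustrative assumptions rather than the book's exact specification.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def co_train(X_lab, y_lab, X_unlab, f1_cols, f2_cols, k=1):
    L1_X, L1_y = X_lab[:, f1_cols], y_lab.copy()   # labeled view under feature set F1
    L2_X, L2_y = X_lab[:, f2_cols], y_lab.copy()   # labeled view under feature set F2
    U = X_unlab.copy()
    while len(U) > 0:
        A1 = GaussianNB().fit(L1_X, L1_y)
        A2 = GaussianNB().fit(L2_X, L2_y)
        # Step 1: A1 passes its k most confident predictions to A2's training set.
        conf = A1.predict_proba(U[:, f1_cols]).max(axis=1)
        top = np.argsort(-conf)[:k]
        L2_X = np.vstack([L2_X, U[top][:, f2_cols]])
        L2_y = np.concatenate([L2_y, A1.predict(U[top][:, f1_cols])])
        U = np.delete(U, top, axis=0)
        if len(U) == 0:
            break
        # Step 2: A2 passes its k most confident predictions to A1's training set.
        conf = A2.predict_proba(U[:, f2_cols]).max(axis=1)
        top = np.argsort(-conf)[:k]
        L1_X = np.vstack([L1_X, U[top][:, f1_cols]])
        L1_y = np.concatenate([L1_y, A2.predict(U[top][:, f2_cols])])
        U = np.delete(U, top, axis=0)
    # Retrain both classifiers on their expanded training sets and return them.
    return GaussianNB().fit(L1_X, L1_y), GaussianNB().fit(L2_X, L2_y)

def predict_co(A1, A2, X_test, f1_cols, f2_cols):
    # Combine the two classifiers by multiplying their posterior probabilities;
    # assumes both classifiers were trained on the same set of class labels.
    p = A1.predict_proba(X_test[:, f1_cols]) * A2.predict_proba(X_test[:, f2_cols])
    return A1.classes_[np.argmax(p, axis=1)]
```

The predict_co() helper mirrors the combination rule mentioned above: the posterior probabilities returned by the two classifiers are multiplied, and the highest-scoring class is reported.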


The co-training approach is more robust to noise because of the disjoint feature sets used by the two algorithms. An important assumption is that of conditional independence of the features in the two sets with respect to a particular class. In other words, after the class label is fixed, the features in one subset are conditionally independent of those in the other. The intuition for this is that the instances labeled by one classifier appear to be randomly distributed to the other, and vice versa. As a result, the approach will generally be more robust to noise than the self-training method.


11.6.2 Specific Variations of Classification Algorithms


The algorithms in the previous section were designed as generic meta-algorithms that can use virtually any known classification algorithm A for semisupervised learning. A few methods have also been designed that rely on variations of specific classification algorithms, such as variations of the Bayes classifier and support vector machines.


11.6.2.1 Semisupervised Bayes Classification with EM


An important observation is that both the EM clustering algorithm (cf. Sect. 6.5 of Chap. 6) and the naive Bayes classifier (cf. Sect. 10.5.1 of Chap. 10) use the same generative mixture model, wherein examples from each cluster (class) are generated from a predefined distribution, such as the Bernoulli or the Gaussian. In the case of the naive Bayes classifier, the iterative approach of the EM algorithm is not required because the class memberships of the training data are already fixed, which makes the E-step unnecessary. In the case of semisupervised classification, however, the unlabeled examples need to be assigned to classes in order to expand the training data. Therefore, the iterative approach of the EM algorithm again becomes essential. Semisupervised Bayes classification can be viewed as a combination of EM clustering and the naive Bayes classifier.


This method was originally proposed in the context of text data, although this discussion will assume categorical data for simplicity. Note that the binary representation of text data may also be considered categorical data. The naive Bayes algorithm requires the estimation of the conditional probabilities of the feature values for each class. Specifically, Eq. 10.22 in Sect. 10.5.1 of Chap. 10 requires the estimation of P(xj = aj | C = c). This expression represents the conditional probability of the feature value, given the class, and is estimated from the training data. The estimation cannot be performed accurately if the number of training examples is small. Consider the case of the text domain. If only five to ten labeled documents are available for a particular class, and xj is a binary variable corresponding to the presence or absence of a particular word j, then this estimation cannot be performed robustly. As discussed earlier, the joint distribution of features with labeled and unlabeled data can be very helpful in this respect.
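To make the combination of EM clustering and naive Bayes concrete, the following is a minimal sketch for binary features: labeled instances keep hard class assignments, unlabeled instances carry soft posteriors that are recomputed in each E-step, and the M-step re-estimates the class priors and the probabilities P(xj = 1 | C = c) from the resulting fractional counts. The Laplace smoothing constant, the fixed iteration count, and all variable names are assumptions for illustration, not the book's exact specification.

```python
import numpy as np

def semisup_nb_em(X_lab, y_lab, X_unlab, n_classes, iters=20, alpha=1.0):
    # Hard (fixed) responsibilities for labeled data, soft ones for unlabeled data;
    # assumes X_* are 0/1 arrays and every class occurs in the labeled data.
    R_lab = np.eye(n_classes)[y_lab]
    R_unlab = np.full((len(X_unlab), n_classes), 1.0 / n_classes)
    X_all = np.vstack([X_lab, X_unlab])
    for _ in range(iters):
        R_all = np.vstack([R_lab, R_unlab])
        # M-step: class priors and P(xj = 1 | C = c) from fractional counts,
        # with Laplace smoothing for rarely observed feature values.
        prior = R_all.sum(axis=0) / len(X_all)
        theta = (R_all.T @ X_all + alpha) / (R_all.sum(axis=0)[:, None] + 2 * alpha)
        # E-step: posterior class probabilities for the unlabeled instances only;
        # the labeled instances retain their observed class memberships.
        log_p = (X_unlab @ np.log(theta).T
                 + (1 - X_unlab) @ np.log(1 - theta).T
                 + np.log(prior))
        log_p -= log_p.max(axis=1, keepdims=True)
        R_unlab = np.exp(log_p)
        R_unlab /= R_unlab.sum(axis=1, keepdims=True)
    return prior, theta   # theta[c, j] estimates P(xj = 1 | C = c)
```

In effect, this is EM clustering in which the labeled instances anchor the mixture components to the classes, so that even a handful of labeled documents per class can be combined with the unlabeled data to sharpen the estimates of P(xj = aj | C = c).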



