Data Mining: The Textbook




f(x_i) = \sqrt{x_i}

f(x_i) = \log(x_i)

Frequency damping is optional and is often omitted. This is equivalent to setting f(x_i) to x_i. The normalized frequency h(x_i) for the ith word may be defined as follows:





h(x_i) = f(x_i) \cdot id_i \qquad (13.2)

This model is popularly referred to as the tf-idf model, where tf represents the term frequency and idf represents the inverse document frequency.
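For illustration, the following Python sketch computes the normalized frequencies h(x_i) of Eq. 13.2 from a document-term matrix of raw counts. It assumes the common choice id_i = log(n/n_i) for the inverse document frequency, where n_i is the number of documents containing the ith word; the function and variable names are illustrative.

import numpy as np

def tfidf_normalize(counts, damping=np.sqrt):
    # counts: (n_docs, d) matrix of raw term frequencies x_i
    n_docs = counts.shape[0]
    n_i = np.count_nonzero(counts, axis=0)            # number of documents containing word i
    idf = np.log(n_docs / np.maximum(n_i, 1))         # id_i = log(n / n_i), assumed definition
    damped = damping(counts) if damping else counts   # f(x_i); pass damping=None to omit damping
    return damped * idf                                # h(x_i) = f(x_i) * id_i  (Eq. 13.2)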


The normalized representation of the document is used by data mining algorithms. A popularly used measure is the cosine measure. The cosine measure between two documents with raw frequencies X = (x_1 ... x_d) and Y = (y_1 ... y_d) is defined using their normalized representations:

\cos(X, Y) = \frac{\sum_{i=1}^{d} h(x_i)\, h(y_i)}{\sqrt{\sum_{i=1}^{d} h(x_i)^2} \cdot \sqrt{\sum_{i=1}^{d} h(y_i)^2}} \qquad (13.3)
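A minimal sketch of Eq. 13.3 is shown below, assuming the two documents have already been converted into normalized vectors h(X) and h(Y) (for example, by the tfidf_normalize sketch above):

import numpy as np

def cosine(hx, hy):
    # Eq. 13.3: dot product of the normalized vectors divided by the product of their norms
    denom = np.sqrt(np.dot(hx, hx)) * np.sqrt(np.dot(hy, hy))
    return np.dot(hx, hy) / denom if denom > 0 else 0.0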

Another measure that is less commonly used for text is the Jaccard coefficient J(X, Y):
J(X, Y) = \frac{\sum_{i=1}^{d} h(x_i)\, h(y_i)}{\sum_{i=1}^{d} h(x_i)^2 + \sum_{i=1}^{d} h(y_i)^2 - \sum_{i=1}^{d} h(x_i)\, h(y_i)} \qquad (13.4)

The Jaccard coefficient is rarely used for the text domain, but it is used very commonly for sparse binary data as well as for sets. Many forms of transaction and market basket data use the Jaccard coefficient. Transaction and market basket data share many similarities with text because of their sparse and nonnegative characteristics. Most of the text mining techniques discussed in this chapter can also be applied to these domains with minor modifications.
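As an example of the set-based usage mentioned above, the following sketch applies the Jaccard coefficient to two market baskets represented as item sets; this corresponds to Eq. 13.4 when h(·) is a 0-1 indicator vector. The example baskets are purely illustrative.

def jaccard_sets(basket1, basket2):
    # |A intersect B| / |A union B|, which equals Eq. 13.4 for binary vectors
    union = len(basket1 | basket2)
    return len(basket1 & basket2) / union if union else 0.0

print(jaccard_sets({"bread", "milk", "eggs"}, {"bread", "milk", "butter"}))   # 0.5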


13.2.2 Specialized Preprocessing for Web Documents


Web documents require specialized preprocessing techniques because of some common properties of their structure and the richness of the links inside them. Two major aspects of Web document preprocessing are the removal of specific parts of the documents (e.g., tags) that are not useful, and the leveraging of the actual structure of the document. HTML tags are removed by most preprocessing techniques.


HTML documents have numerous fields in them, such as the title, the metadata, and the body of the document. Typically, analytical algorithms treat these fields with different levels of importance and therefore weight them differently. For example, the title of a document is considered more important than the body and is weighted more heavily. Another example is the anchor text in Web documents. Anchor text contains a description of the Web page pointed to by a link. Because of its descriptive nature, it is considered important, but it is sometimes not relevant to the topic of the page itself. Therefore, it is often removed from the text of the document. In some cases, where possible, the anchor text may even be added to the text of the document to which it points, because it often serves as a summary description of that document.
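As a rough sketch of tag removal and field weighting, the following code uses Python's standard html.parser module to strip tags and collect the title and body separately; the weighting factor of 3 for the title is an arbitrary illustrative choice, not a value prescribed in this chapter.

from html.parser import HTMLParser

class FieldExtractor(HTMLParser):
    # Strips HTML tags and collects title and body text separately.
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title, self.body = [], []
    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False
    def handle_data(self, data):
        (self.title if self.in_title else self.body).append(data)

parser = FieldExtractor()
parser.feed("<html><head><title>Data Mining</title></head><body>Text mining example.</body></html>")
# Weight the title more heavily by repeating its tokens (illustrative factor of 3).
weighted_text = (" ".join(parser.title) + " ") * 3 + " ".join(parser.body)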


A Web page may often be organized into content blocks that are not related to the primary subject matter of the page. A typical Web page will have many irrelevant blocks, such as advertisements, disclaimers, or notices, that are not very helpful for mining. It has been shown that the quality of mining results improves when only the text in the main block is used. However, the (automated) determination of main blocks from Web-scale collections is itself a data mining problem of interest. While it is relatively easy to decompose a Web page into blocks, it is sometimes difficult to identify the main block. Most automated methods for determining main blocks rely on the fact that a particular site will typically utilize a similar layout for the documents on the site. Therefore, if a collection of documents is available from the site, two types of automated methods can be used:





  1. Block labeling as a classification problem: The idea in this case is to create a training data set of visual rendering features for each block in the training documents, using Web browsers such as Internet Explorer. Many browsers provide an API that can be used to extract the coordinates of each block. The main block is then manually labeled for some examples, and the resulting training data set is used to build a classification model. This model is used to identify the main block in the remaining (unlabeled) documents of the site. A minimal sketch of this approach is shown after this list.




  2. Tree matching approach: Most Web sites generate their documents from a fixed template. Therefore, if the template can be extracted, the main block can be identified relatively easily. The first step is to extract tag trees from the HTML pages. These represent the frequent tree patterns in the Web site. The tree matching algorithm, discussed in the bibliographic section, can be used to determine such templates from these tag trees. After the templates have been found, it is determined which block is the main one in the extracted template. Many of the peripheral blocks often have similar content in different pages and can therefore be eliminated.
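A minimal sketch of the first (classification-based) approach is given below. It assumes that visual rendering features for each block, such as coordinates, size, and link density, have already been extracted through a browser API; the specific features, the example values, and the use of a decision tree classifier are illustrative assumptions rather than choices prescribed by the text.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical rendering features per block: [x, y, width, height, link_density]
X_train = np.array([[10, 50, 200, 100, 0.9],     # navigation bar
                    [250, 120, 700, 900, 0.1],   # main content block
                    [10, 300, 200, 400, 0.8]])   # advertisement column
y_train = np.array([0, 1, 0])                    # 1 = manually labeled main block

model = DecisionTreeClassifier().fit(X_train, y_train)

# For an unlabeled page of the same site, pick the block most likely to be the main block.
X_new = np.array([[240, 110, 710, 880, 0.15], [5, 40, 210, 90, 0.85]])
main_block_index = int(np.argmax(model.predict_proba(X_new)[:, 1]))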


13.3 Specialized Clustering Methods for Text


Most of the algorithms discussed in Chap. 6 can be extended to text data. This is because the vector space representation of text is also a multidimensional data point. The discussion in this chapter will first focus on generic modifications to multidimensional clustering algorithms, and then present specific algorithms in these contexts. Some of the clustering methods discussed in Chap. 6 are used more commonly than others in the text domain. Algorithms that leverage the nonnegative, sparse, and high-dimensional features of the text domain are usually preferable to those that do not. Many clustering algorithms require significant adjustments to address the special structure of text data. In the following, the required modifications to some of the important algorithms will be discussed in detail.


13.3.1 Representative-Based Algorithms


These correspond to the family of algorithms such as the k-means, k-modes, and k-median algorithms. Among these, the k-means algorithm is the most popularly used for text data. Two major modifications are required to apply these algorithms effectively to text data.





  1. The first modification is the choice of the similarity function. Instead of the Euclidean distance, the cosine similarity function is used.




  2. Modifications are made to the computation of the cluster centroid. Not all words are retained in the centroid. The low-frequency words in the cluster are projected out. Typically, a maximum of 200 to 400 words is retained in each centroid. This is also referred to as a cluster digest, and it provides a representative set of topical words for the cluster. Projection-based document clustering has been shown to have significant effectiveness advantages. A smaller number of words in the centroid speeds up the similarity computations as well. A small sketch of this truncation is shown after this list.
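The sketch below truncates the centroid of a cluster to its highest-weight words. The limit of 300 words is one value within the 200 to 400 range mentioned above, and the variable names are illustrative.

import numpy as np

def cluster_digest(doc_vectors, max_words=300):
    # doc_vectors: (n_docs_in_cluster, d) matrix of normalized (tf-idf) document vectors
    centroid = doc_vectors.mean(axis=0)
    if np.count_nonzero(centroid) > max_words:
        cutoff = np.sort(centroid)[-max_words]                    # weight of the max_words-th largest term
        centroid = np.where(centroid >= cutoff, centroid, 0.0)    # project out low-weight words
    return centroid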

A specialized variation of the k-means for text, which uses concepts from hierarchical clustering, will be discussed in this section. Hierarchical methods can be generalized easily to text because they are based on generic notions of similarity and distances. Furthermore, combining them with the k-means algorithm results in both stability and efficiency.


13.3.1.1 Scatter/Gather Approach


Strictly speaking, the scatter/gather terminology does not refer to the clustering algorithm itself but to the browsing ability enabled by the clustering. This section will, however, focus on the clustering algorithm. This algorithm uses a combination of k-means clustering and hierarchical partitioning. While hierarchical partitioning algorithms are very robust, they typically scale worse than Ω(n²), where n is the number of documents in the collection. On the other hand, the k-means algorithm scales as O(k · n), where k is the number of clusters. While the k-means algorithm is more efficient, it can sometimes be sensitive to the choice of seeds. This is particularly true for text data, in which each document contains only a small part of the lexicon. For example, consider the case where the document set is to be partitioned into five clusters. A vanilla k-means algorithm will select five documents from the original data as the initial seeds. The number of distinct words in these five documents will typically be a very small subset of the entire lexicon. Therefore, the first few iterations of k-means may not be able to assign many documents meaningfully to clusters when they do not contain a significant number of words from this small lexicon subset. This initial



