Data Mining: The Textbook



y = \sum_{i=1}^{m} w_i h_i(X).    (11.14)

For example, in polynomial regression, the higher powers of each dimension up to order r are used as a new set of derived features. This approach expands the number of dimensions by a factor of r, but it allows greater expressiveness in terms of nonlinear relationships. The main disadvantage of the approach is that it expands the dimensionality of the parameter set W , and can therefore result in overfitting. Therefore, it is important to use regularization.
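To make this concrete, the following sketch (a minimal illustration assuming NumPy; the synthetic data, the order r = 3, and the regularization strength are arbitrary choices, not taken from the text) expands each dimension into its powers up to order r and then fits a ridge-regularized linear model on the derived features.

```python
import numpy as np

def polynomial_features(X, r):
    # Derived features h_i(X): powers 1..r of every original dimension.
    return np.hstack([X ** p for p in range(1, r + 1)])

def ridge_fit(H, y, lam):
    # Closed-form ridge solution W = (H^T H + lam I)^{-1} H^T y.
    return np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ y)

# Hypothetical data with a cubic relationship between feature and response.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(100, 1))
y = 2.0 * X[:, 0] ** 3 - X[:, 0] + 0.1 * rng.standard_normal(100)

H = polynomial_features(X, r=3)     # expanded (nonlinear) representation
W = ridge_fit(H, y, lam=0.1)        # regularization counters the overfitting risk
y_pred = H @ W                      # predictions of the polynomial regression model
```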


Arbitrary nonlinear relationships can also be captured by methods such as kernel ridge regression. In order to use kernels, the main goal is to show that the closed-form solution to linear ridge regression can be expressed in terms of dot products between training and test instances. One way of achieving this goal is by formulating the dual of the linear ridge regression problem [448], and then using the kernel trick as in SVMs. A simpler approach is to make use of a specialized variant of the Sherman–Morrison–Woodbury identity in matrix algebra (see Exercise 14), which is true for any n × d data matrix D and scalar λ:





(D^T D + λ I_d)^{-1} D^T = D^T (D D^T + λ I_n)^{-1}.    (11.15)

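As a quick sanity check, the identity in Equation 11.15 can be verified numerically on random data (an illustrative sketch; the matrix sizes and the value of λ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, lam = 8, 3, 0.5                    # illustrative sizes and regularization value
D = rng.standard_normal((n, d))          # any n x d data matrix

lhs = np.linalg.inv(D.T @ D + lam * np.eye(d)) @ D.T   # (D^T D + lam I_d)^{-1} D^T
rhs = D.T @ np.linalg.inv(D @ D.T + lam * np.eye(n))   # D^T (D D^T + lam I_n)^{-1}
print(np.allclose(lhs, rhs))             # True: both sides are the same d x n matrix
```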

Note that I_d is a d × d identity matrix, whereas I_n is an n × n identity matrix. For an unseen test instance Z, which is expressed as a row vector, the prediction F(Z) of linear regression is given by Z W^T. By substituting the closed-form solution of ridge regression for W^T, and then making use of the aforementioned identity, we obtain the following, in which y denotes the n-dimensional column vector of response values of the training instances:



F(Z) = Z (D^T D + λ I_d)^{-1} D^T y = Z D^T (D D^T + λ I_n)^{-1} y.    (11.16)




Note that Z D^T is an n-dimensional row vector of dot products between the test instance Z and the n training instances. According to the kernel trick, we can replace this row vector with a vector κ containing the n kernel similarities between the test and training instances. Furthermore, the matrix D D^T contains the n × n dot products between the training instances. We can replace this matrix with the n × n kernel matrix K constructed on the training instances. Then, the prediction for test instance Z is as follows:



F(Z) = κ (K + λ I_n)^{-1} y.    (11.17)





The kernel trick can also be applied to other variants of linear regression, such as Fisher's discriminant and logistic regression. The extension to Fisher's discriminant is straightforward because it is a special case of linear regression, whereas the derivation for kernel logistic regression uses a dual optimization formulation, as in SVMs.
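As an illustration of Equation 11.17, the sketch below computes kernel ridge regression predictions directly from the kernel matrix. It is only a schematic example: the Gaussian kernel, its bandwidth, the regularization value, and the synthetic data are assumptions made for the illustration rather than choices prescribed by the text.

```python
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    # Kernel similarities between the rows of A and the rows of B.
    sq_dist = (np.sum(A ** 2, axis=1)[:, None]
               + np.sum(B ** 2, axis=1)[None, :]
               - 2.0 * A @ B.T)
    return np.exp(-gamma * sq_dist)

def kernel_ridge_predict(D, y, Z, lam=0.1, gamma=1.0):
    # F(Z) = kappa (K + lam I_n)^{-1} y, as in Equation 11.17.
    n = D.shape[0]
    K = gaussian_kernel(D, D, gamma)      # n x n kernel matrix on the training data
    kappa = gaussian_kernel(Z, D, gamma)  # kernel similarities of test to training points
    alpha = np.linalg.solve(K + lam * np.eye(n), y)
    return kappa @ alpha

# Hypothetical usage on synthetic data.
rng = np.random.default_rng(2)
D = rng.standard_normal((50, 2))                      # training instances
y = np.sin(D[:, 0]) + 0.1 * rng.standard_normal(50)   # numeric response
Z = rng.standard_normal((5, 2))                       # unseen test instances
print(kernel_ridge_predict(D, y, Z))
```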


11.5.5 From Decision Trees to Regression Trees


Regression trees are designed to model nonlinear relationships between the features and the response variable. If the regression model is constructed at each leaf node in a hierarchical partitioning of the data, locally optimized linear regression models can be obtained within each partition. Even when the relationship between the class variable and feature variables is nonlinear, a local linear approximation is quite effective. Each test instance can then be classified with its locally optimized linear regression model by determining its appropriate partition. This hierarchical partitioning is essentially a decision tree because the assigned partition of a test instance is determined by the split criteria at the internal nodes. The overall strategy of constructing a decision tree remains the same as in the case of categorical class variables. Similarly, the splits can use univariate (axis-parallel) splits on the feature variables, as in a traditional decision tree. However, changes need to be made to the splitting and pruning criteria because of the numeric class variable:





  1. Splitting criterion: In the case of categorical classes, the splitting criterion uses the Gini index or entropy of the class variable as the quality measure for choosing the splitting attribute. However, in the case of numeric classes, an error-based measure is used. The regression modeling approach of the previous section is applied to each child resulting from a potential split. The aggregate squared error of prediction of all the training data points in the different child nodes is computed. The split with the minimum aggregate squared error is selected among all possible splits at a particular node.

The main computational problem with this approach is that a linear regression model needs to be constructed for each possible split. An alternative is to not use linear regression in the tree construction phase. The average variance of the numeric class variable in the child nodes resulting from a possible split is used as the quality criterion for split evaluation. In other words, the Gini index splitting criterion for the categorical class variable in traditional decision-tree construction is replaced with the variance of the numeric class variable (see the sketch below). The linear regression models are constructed at the leaf nodes for prediction only after the entire tree has been constructed. While this approach will result in larger trees, it is more practical from a computational point of view.
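The sketch below (an illustrative fragment with assumed function names and synthetic data, not the book's algorithm) evaluates candidate axis-parallel splits of a single feature using the weighted average variance of the numeric class variable in the two children:

```python
import numpy as np

def split_variance(y_left, y_right):
    # Weighted average variance of the numeric class variable in the two children.
    n = len(y_left) + len(y_right)
    return (len(y_left) * np.var(y_left) + len(y_right) * np.var(y_right)) / n

def best_split_on_feature(X, y, feature):
    # Scan candidate thresholds on one feature; keep the split with minimum variance.
    best_threshold, best_score = None, np.inf
    for t in np.unique(X[:, feature])[:-1]:       # exclude the maximum value
        left = X[:, feature] <= t
        score = split_variance(y[left], y[~left])
        if score < best_score:
            best_threshold, best_score = t, score
    return best_threshold, best_score

# Hypothetical usage: the response jumps at X[:, 0] = 0.5, so the best threshold is near 0.5.
rng = np.random.default_rng(3)
X = rng.uniform(size=(200, 2))
y = np.where(X[:, 0] < 0.5, 1.0, 3.0) + 0.1 * rng.standard_normal(200)
print(best_split_on_feature(X, y, feature=0))
```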





