Machine Learning Algorithms in Web Page Classification
ABSTRACT
In this paper we use machine learning algorithms such as SVM, kNN and GIS to compare their behaviour on the web page classification problem. From the experiments we see that SVM with a small number of negative documents used to build the centroids has the smallest storage requirement and the lowest on-line test computation cost, while almost all GIS variants, whatever the number of nearest neighbors, have an even higher storage requirement and on-line test computation cost than kNN. This suggests that future work should try to reduce the storage requirement and the on-line test cost of GIS.
KEYWORDS:
Web Classifications, Machine Learning, LIBSVM, SVM, K-NN.
1. INTRODUCTION
Nowadays, the number of web pages is growing at an exponential rate and they can cover almost any information needed. However, this huge amount of web pages makes it more and more difficult for a user to find the target information effectively. Generally two solutions exist: hierarchical browsing and keyword searching. However, these pages vary to a great extent in both information content and quality, and their organization does not allow for easy search. So an efficient and accurate method for classifying this huge amount of data is essential if the web is to be exploited to its full potential. This need has been felt for a long time and many approaches have been tried to solve the problem. Many different machine learning-based algorithms have been applied to the text classification task, including k-Nearest Neighbour (k-NN) [2], Bayesian algorithms [7], Support Vector Machines (SVM) [5], neural networks [8], and decision trees [9].
2. RELATED WORK
In past years, there has been extensive investigation and rapid progress in automatic hierarchical classification. Basically, the models depend on the hierarchical structure of the
dataset and the assignment of documents to nodes in the structure. Nevertheless, most research adopts a top-down model for classification. M.-Y. Kan and H. O. N. Thi [9] summarized four structures used in text classification: virtual category tree, category tree, virtual directed acyclic category graph and directed acyclic category graph. The second and fourth structures are popularly used in many web directory services since documents can be assigned to both internal and leaf categories. These two models used category-similarity measures and distance-based measures to describe the degree of misclassification when judging classification performance. F. Sebastiani [5] found a boost in accuracy for hierarchical models over flat models using a sequential Boolean decision rule and a multiplicative decision rule. They claimed that only a few comparisons are required in their approach because of the efficiency of the sequential scheme. They also showed that SVM performs well on the virtual category tree, but the category tree is not considered in their work. E. Gabrilovich and S. Markovitch [6] proposed an efficient optimization algorithm based on incremental conditional gradient ascent in single-example sub-spaces spanned by the marginal dual variables. The classification model is a variant of the Maximum Margin Markov Network framework, equipped with an exponential family defined on the edges. This method solves the scaling problem for medium-sized datasets, but whether it is suited to very large datasets is unclear. P. N. Bennett and N. Nguyen [2] used a refined evaluation, which turns the hierarchical SVM classifier into an approximation of the Bayes optimal classifier with respect to a simple stochastic model for the labels. They reported an improvement over the hierarchical SVM algorithm by replacing its top-down evaluation with a recursive bottom-up scheme. A loss function is used in that paper to measure classification performance.

The aim of our work is to discuss the problems of imbalanced data and of measurement in multi-label classification, and to find strategies to solve them. In the experiments, we compare classification performance using both the standard precision/recall measurements and the extended loss-value measurement.
For the nonlinearly separable case, SVM uses kernel-based methods to map the input space to a so-called feature space. The basic idea of kernel methods is to find a map from the input space, which is not linearly separable, to a linearly separable feature space (Figure 2). However, the problem with the feature space is that it is usually of very large or even infinite dimension, so computing dot products in this space directly is intractable. Fortunately, kernel methods overcome this problem by finding maps such that computing dot products in the feature space reduces to computing kernel functions in the input space:
k(x, y) = \langle \phi(x), \phi(y) \rangle ,
where k(x, y) is a kernel function in the input space and ⟨φ(x), φ(y)⟩ is a dot product in the feature space. Therefore, the dot product in the feature space can be computed even if the map φ itself is unknown. Some of the most widely used kernel functions are the Gaussian RBF, polynomial, sigmoidal, and B-spline kernels.
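As a small illustration of this kernel trick (our own sketch, not part of the original system; the function names are made up), the following Python fragment shows that a polynomial kernel evaluated in the input space equals the dot product under an explicit quadratic feature map, and also evaluates the Gaussian RBF kernel, whose feature space is infinite-dimensional:

import numpy as np

def phi_quadratic(x):
    # Explicit feature map for the degree-2 homogeneous polynomial kernel in two dimensions:
    # phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2)
    x1, x2 = x
    return np.array([x1 * x1, np.sqrt(2) * x1 * x2, x2 * x2])

def poly2_kernel(x, y):
    # k(x, y) = <x, y>^2, which equals <phi(x), phi(y)> without ever building phi
    return np.dot(x, y) ** 2

def rbf_kernel(x, y, gamma=1.0):
    # Gaussian RBF kernel k(x, y) = exp(-gamma * ||x - y||^2);
    # its feature space is infinite-dimensional, so only the kernel form is practical
    diff = x - y
    return np.exp(-gamma * np.dot(diff, diff))

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])
print(np.dot(phi_quadratic(x), phi_quadratic(y)))   # 16.0: dot product in feature space
print(poly2_kernel(x, y))                           # 16.0: same value from the kernel in input space
print(rbf_kernel(x, y, gamma=0.5))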
The similarity between a new document X and a training document D_j is measured by the cosine score

S(X, D_j) = \frac{\sum_{i=1}^{m} x_i d_{ij}}{\sqrt{\sum_{i=1}^{m} x_i^{2}} \, \sqrt{\sum_{i=1}^{m} d_{ij}^{2}}}

where x_i is the weight of term i in X, d_{ij} is the weight of term i in D_j, and m is the number of terms.
The scores of the documents belonging to each corresponding category are accumulated, and a threshold can be set to assign one or more categories to the new, unknown document. Only the scores of the k nearest neighbor documents of this new query document are taken into account.
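A minimal sketch of this scheme (our own illustration with made-up names; documents are assumed to be dense term-weight vectors): the cosine score above is computed against every training document, and the scores of the k most similar documents are accumulated per category and compared against a threshold.

import numpy as np

def cosine_sim(x, d):
    # S(X, D_j): dot product divided by the product of the vector norms
    denom = np.linalg.norm(x) * np.linalg.norm(d)
    return float(np.dot(x, d) / denom) if denom > 0 else 0.0

def knn_classify(x, train_docs, train_labels, k=45, threshold=0.5):
    # Accumulate the similarity scores of the k nearest training documents per category
    # and assign every category whose accumulated score reaches the threshold
    sims = [cosine_sim(x, d) for d in train_docs]
    top_k = np.argsort(sims)[::-1][:k]           # indices of the k most similar training documents
    scores = {}
    for idx in top_k:
        for c in train_labels[idx]:              # a training document may carry several labels
            scores[c] = scores.get(c, 0.0) + sims[idx]
    return [c for c, s in scores.items() if s >= threshold]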
Rep(G), the representativeness of a generalized instance G, is computed over the training instances I ranked by their similarity to G:

Rep(G) = \sum_{I} \frac{\delta(I)}{k \cdot \mathrm{rank}(I)}

where δ(I) is 1 if I is a positive instance of the category and 0 otherwise, k is the number of nearest neighbors, and rank(I) is the position of I in the similarity ranking.

The pseudo-code is as follows:

Input: the training set T, the category C
Procedure GIS(T, C)
Let G and G_new be generalized instances, and let GS be the generalized instance set, initialized to empty.

Repeat
    Randomly select a positive instance as G
    Rank the instances in T according to their similarity to G
    Compute Rep(G)
    Repeat
        G_new = G
        G = Generalize(G_new, k nearest neighbors)
        Rank the instances in T according to their similarity to G
        Compute Rep(G)
    Until Rep(G) < Rep(G_new)
    Add G_new to GS
    Remove the top k instances from T
Until there are no positive instances left in T
Return GS

In this algorithm, the similarity formula described in the previous section is used for the ranking and generalization steps.
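A Python sketch of this procedure is given below (our own illustration: the dot-product similarity and the mean-vector generalization step are simplifying assumptions, Rep(G) follows the rank-weighted form given above, and the variable names are made up):

import numpy as np

def similarity(a, b):
    # dot-product similarity; equals the cosine score when the vectors are L2-normalized
    return float(np.dot(a, b))

def representativeness(g, instances, labels, k):
    # Rep(G) = sum over instances I of delta(I) / (k * rank(I)),
    # with delta(I) = 1 for positive instances and rank(I) the similarity rank w.r.t. G
    order = np.argsort([-similarity(g, x) for x in instances])
    return sum(1.0 / (k * (r + 1)) for r, i in enumerate(order) if labels[i])

def gis(instances, labels, k=40):
    # Build a generalized instance set for one category;
    # instances are term-weight vectors and labels[i] is True for positive documents
    instances, labels = list(instances), list(labels)
    gs = []
    while any(labels):                                      # until no positive instances remain in T
        g = instances[labels.index(True)]                   # select a positive instance as G
        rep = representativeness(g, instances, labels, k)
        while True:
            g_new = g
            order = np.argsort([-similarity(g_new, x) for x in instances])
            nearest = [instances[i] for i in order[:k]]
            g = np.mean(np.vstack([g_new] + nearest), axis=0)   # Generalize(G_new, k nearest neighbors)
            rep_new = representativeness(g, instances, labels, k)
            if rep_new < rep:                               # until Rep(G) < Rep(G_new)
                break
            rep = rep_new
        gs.append(g_new)                                    # add G_new to GS
        order = np.argsort([-similarity(g_new, x) for x in instances])
        drop = set(int(i) for i in order[:k])
        instances = [x for i, x in enumerate(instances) if i not in drop]   # remove top k from T
        labels = [l for i, l in enumerate(labels) if i not in drop]
    return gs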
4. TRAINING PHASE:
The SVM classifier of our application is implemented on top of the LIBSVM library. The library provides C-support vector classification (C-SVC), ν-support vector classification (ν-SVC) and ε-support vector regression (ε-SVR), incorporates efficient features such as caching and sequential minimal optimization, and performs well on moderate-sized problems (up to about tens of thousands of training data points). In our application we only use C-SVC with the RBF kernel function
K(x, y) = e^{-\gamma \|x - y\|^{2}}
To train our SVM classifier we use a collection of 7566 Aljazeera News web documents that have already been categorized into 16 broad categories by domain experts. After parsing and pre-processing, the total number of terms is 189815, meaning that each document is represented by a 189815-dimensional sparse vector, a dimensionality that is considered extremely difficult for other machine learning techniques such as neural networks or decision trees. Aside from the training data set, a separate testing data set of 490 web documents, classified into the same 16 categories, is used to evaluate the classifier. To decide the best parameters (i.e., C and γ) for the classifier, a grid search (i.e., a search over various pairs (C, γ)) needs to be performed. However, since a full grid search is very time-consuming, we fix the parameter γ as 1/k (where k is the number of training data points) and only try various values of C. A recommended range of values of C is 2^0, 2^1, 2^2, ..., 2^10, which is known to be good enough in practice. The classification accuracies obtained over the training and testing data for the various values of C are shown in Table 1 below.
Table 1: Classification accuracy over training and testing data for various values of the C parameter.

C       Accuracy over training data   Accuracy over testing data
2^0     59.91%                        44.90%
2^1     69.53%                        53.06%
2^2     78.09%                        57.35%
2^3     85.47%                        61.22%
2^4     90.92%                        64.69%
2^5     94.14%                        68.57%
2^6     96.52%                        71.22%
2^7     97.82%                        71.02%
2^8     98.23%                        71.02%
2^9     98.49%                        72.45%
2^10    98.61%                        72.45%
As we can see in Table 1, with C=1024 the classifier performs most accurately. Therefore, we choose C=1024 for our classifier.
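The tuning loop described above can be sketched in a few lines (an illustration only, not the original scripts: the data here are randomly generated stand-ins for the Aljazeera term vectors, and scikit-learn's SVC, which wraps LIBSVM, plays the role of the C-SVC implementation):

import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.svm import SVC

rng = np.random.RandomState(0)
n_train, n_test, n_terms = 300, 50, 2000     # small stand-ins for 7566/490 documents and 189815 terms
X_train = sparse_random(n_train, n_terms, density=0.01, random_state=rng, format="csr")
y_train = rng.randint(0, 16, size=n_train)   # 16 broad categories
X_test = sparse_random(n_test, n_terms, density=0.01, random_state=rng, format="csr")
y_test = rng.randint(0, 16, size=n_test)

gamma = 1.0 / n_train                        # gamma fixed as 1/k, with k the number of training documents
for exponent in range(11):                   # C = 2^0, 2^1, ..., 2^10
    C = 2 ** exponent
    clf = SVC(C=C, kernel="rbf", gamma=gamma)    # C-SVC with an RBF kernel (LIBSVM under the hood)
    clf.fit(X_train, y_train)
    print(C, clf.score(X_train, y_train), clf.score(X_test, y_test))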
5. DATASET:
We use a collection of 21578 Aljazeera News documents as the dataset for the algorithms. Although it is a small corpus, it is widely used in the text categorization task, which makes it easy to compare against the performance of other algorithms.
5.1 Experiment Results:

Precision    Recall      F1
0.83814      0.781316    0.845085
Here we use all negative documents as well as positive documents to build the centroid for every category in SVM; for kNN we choose k = 45; and for GIS we choose the number of nearest neighbors as 40.

5.2 Result Analysis:

5.2.1 Negative Documents in SVM:

There have been some arguments about the effectiveness of using negative documents when constructing the centroids for the GIS algorithm. Some people have claimed that using a rather small number of negative documents is enough for GIS, so some experiments were done on this question. In the following graph, the x-axis represents the number of negative documents used to build the centroids, and the y-axis is the performance of the GIS algorithm under 10-fold cross validation.
Figure: Precision, recall and F1 (10-fold cross validation) versus the number of negative documents (100, 200, 400, 800, all) used to build the centroids.
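How negative documents can enter a centroid is sketched below (our own Rocchio-style construction with made-up weights; the paper does not give the exact formula): positive documents pull the centroid toward the class, while the mean of the chosen negative documents is subtracted from it.

import numpy as np

def build_centroid(pos_docs, neg_docs, alpha=1.0, beta=0.25):
    # Rocchio-style class centroid: the mean of the positive documents minus a
    # down-weighted mean of the (possibly sub-sampled) negative documents
    centroid = alpha * np.mean(pos_docs, axis=0)
    if len(neg_docs) > 0:
        centroid = centroid - beta * np.mean(neg_docs, axis=0)
    return centroid

# Using only the first 100, 200, 400 or 800 negative documents, or all of them,
# corresponds to the x-axis of the figure above.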
We can see that the performance of the SVM algorithm does depend on the number of negative documents used to build the centroid on this corpus. It is interesting to see that precision seems to increase monotonically with the number of negative documents. This can be explained intuitively: as more and more negative documents are subtracted from the centroid, non-relevant documents end up at a larger distance from the centroid, so precision gets higher. The recall trend, however, is more complex.

5.2.2 The Number of k Nearest Neighbors in GIS

In the following graph, the x-axis represents the number of k nearest neighbors used to build the generalized instances, and the y-axis represents the performance of the GIS algorithm under 10-fold cross validation.
Figure: Precision, recall and F1 (10-fold cross validation) versus the number of nearest neighbors (40, 80, 160, 320) used to build the generalized instances.
We can see from the graph that there is a trade-off between the number of nearest neighbors and the performance. Intuitively speaking, too small a number of nearest neighbors does not give the centroid much generalization power and does not make it sufficiently insensitive to noise, while too large a number of nearest neighbors ignores local detail. So a trade-off is needed when choosing the number of nearest neighbors.
6. CONCLUSION:
From the graphs, we can see that SVM with a small number of negative documents used to build the centroids has the smallest storage requirement and the lowest on-line test computation cost, while almost all GIS variants, whatever the number of nearest neighbors, have an even higher storage requirement and on-line test computation cost than kNN. This suggests that future work should try to reduce the storage requirement and the on-line test cost of GIS. In reality, however, people often keep only the most important (high-weight) terms in each centroid instead of keeping everything, so
in that case the analysis does not carry over directly. But if we cut off the number of terms, the performance will be affected, so there is also a trade-off for this factor, and it is something we should investigate further. GIS is an approach that tries to find the right granularity for the instance set in an automatic, supervised way. There has been other work that tries to do the same in an unsupervised way [5]; their results differ on two different datasets. Our experiment does not seem to support the effectiveness of this idea, but more work is needed to explore it.
7. REFERENCES
[1] E. Baykan, M. Henzinger, L. Marian, and I. Weber. A comprehensive study of features and algorithms for URL-based topic classification. ACM Transactions on the Web, 5:15:1-15:29, July 2011.
[2] P. N. Bennett and N. Nguyen. Refined experts: improving classification in large taxonomies. In Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 11-18. ACM, 2009.
[3] A. Broder, M. Fontoura, E. Gabrilovich, A. Joshi, V. Josifovski, and T. Zhang. Robust classification of rare queries using web knowledge. In Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 231-238, New York, NY, July 2007. ACM Press.
[4] C. Castillo and B. D. Davison. Adversarial web search. Foundations and Trends in Information Retrieval, 4(5):377-486, 2010.
[5] F. Sebastiani. Machine learning in automated text categorization. ACM Computing Surveys, 34(1):1-47, March 2002.
[6] E. Gabrilovich and S. Markovitch. Text categorization with many redundant features: Using aggressive feature selection to make SVMs competitive with C4.5. In Proceedings of the 21st International Conference on Machine Learning, page 41, New York, NY, 2004. ACM Press.
[7] H. Guan, J. Zhou, and M. Guo. A class-feature-centroid classifier for text categorization. In Proceedings of the 18th International World Wide Web Conference, pages 201-210, April 2009.
[8] C.-C. Huang, S.-L. Chuang, and L.-F. Chien. LiveClassifier: Creating hierarchical text classifiers through web corpora. In Proceedings of the 13th International Conference on World Wide Web, pages 184-192, New York, NY, 2004. ACM Press.
[9] M.-Y. Kan and H. O. N. Thi. Fast webpage classification using URL features. In Proceedings of the 14th ACM International Conference on Information and Knowledge Management, pages 325-326, New York, NY, 2005. ACM Press.
[10] H. Malik. Improving hierarchical SVMs by hierarchy flattening and lazy classification. In Proceedings of the Large-Scale Hierarchical Classification Workshop, 2010.
[11] X. Qi and B. D. Davison. Hierarchy evolution for improved classification. In Proceedings of the 20th ACM International Conference on Information and Knowledge Management, pages 2193-2196, October 2011.
[12] W. A. Awad and S. M. ELseuofi. Machine learning methods for spam e-mail classification. International Journal of Computer Applications (IJCA), 16(1), 2011.