Tag Archive: machine learning


Abstract—Clustering is an easy-to-use and easy-to-implement method of unsupervised inductive inference. Clustering can be used to learn discrete- or continuous-valued hypotheses and to create compact groups of objects that display similar characteristics while maintaining a high degree of separation from other groupings. This paper is a survey of some of the methods of constructing and evaluating clusters.

Index Terms—machine learning, hierarchical agglomerative clustering, k-means clustering, unsupervised learning, Pearson’s coefficient, Euclidean distance

1. Introduction

Clustering is a method of dividing a heterogeneous group of objects into smaller, more homogeneous groups whose members display similar characteristics to one another while displaying one or more dissimilar characteristics to objects in other clusters. Clustering is an unsupervised learning technique that has many applications in data mining, pattern recognition, image analysis and market segmentation. Clustering is easy to implement and produces groupings fairly quickly.

The main purposes of a Department of Corrections at any governmental level are to enhance public safety, to provide proper care of the inmates, to supervise inmates under their jurisdiction and to assist inmates’ re-entry into society.

There is no doubt that inmates at correctional institutions are dangerous to society. However, even after they are incarcerated at these institutions, some individuals remain ongoing offenders. While all individuals in prison have displayed some sort of deviant behavior, it is hypothesized that certain combinations of personality traits make some inmates more likely to be sexual predators and some inmates more likely to be sexual victims of these predators. At most correctional facilities, sexual contact between inmates, consensual or not, is not permitted.

Identification of those inmates likely to be sexually predatory toward other inmates would greatly assist corrections facilities in their goal of providing a safer environment for incarceration. Clustering can help with this goal by comparing a particular offender to known perpetrators and victims. After comparison, victims can be incarcerated separately from predators and receive any special needs assistance that can be offered while predators can be segregated in such a fashion as to reduce the potential for successful predatory behaviors.

1.1 Outline of Research

In this research survey, we implemented two different types of clustering algorithms – a standard “bottom-up” hierarchical method, single link clustering, and a standard “top-down” partitional algorithm, k-means clustering. We evaluated different distance measure criteria, including Euclidean distance and Pearson’s correlation coefficient. Results are discussed in section 4 after running the clustering algorithms multiple times with the provided Colorado inmate dataset.

1.2 Data

The dataset that we used was provided by Dr. Coolidge of the Department of Psychology at the University of Colorado at Colorado Springs. The dataset is publicly available at http://www.cs.uccs.edu/~kalita/work/cs586/2010/CoolidgePerpetratorVictimData.csv. This dataset pertains to scores on personality disorder tests given to inmates in the State of Colorado; Dr. Coolidge’s inventory of personality disorder tests is given to all inmates in the state. The dataset contained 100 rows (25 rows describing victims of sexual abuse and 75 describing perpetrators of sexual abuse) with 14 attributes chosen by Dr. Coolidge.

The data described how inmates scored on tests for different personality disorders. The tests included antisocial (AN), avoidant (AV), borderline (BO), dependent (DE), depressive (DP), histrionic (HI), narcissistic (NA), obsessive-compulsive (OC), paranoid (PA), passive-aggressive (PG), schizotypal (ST), schizoid (SZ), sadistic (SA) and self-defeating (SD) markers. The scores on these individual tests are reported as T scores, a type of standardized score that can be mathematically transformed into other types of standardized scores. T scores follow a Gaussian distribution in which the mean is always 50 and the standard deviation is always 10.

It should be noted that even though the dataset is quite small, with only 100 rows available, the quality of the data is very good. The astute reader can appreciate that incarcerated persons might not be completely truthful in answering the test questions for a variety of reasons, such as a lack of caring or the desire to appear more “damaged” than any other inmate. The data was cross-checked with several other validation methods to ensure that the answers provided were reasonable. Test scores that were not reasonable were discarded by Dr. Coolidge and not included in this dataset.

2. Application

We chose to write the implementation of the clustering algorithms in Java because of the ease of use of the language. Java also presented superior capabilities for working with and parsing data from files. Using Java allowed us to model the problem more efficiently through the use of OO concepts, such as polymorphism and inheritance. Lastly, several Java libraries were available, such as Java-ML [1], that made it easier to analyze the clusters after the algorithms had been run.

2.1 Hierarchical Cluster Construction

In agglomerative single link clustering, clusters are constructed by comparing each point (or cluster) with every other point (or cluster). Each object is placed in a separate cluster, and at each iteration of the algorithm, we merge the closest pair of points (or clusters), until certain termination conditions are satisfied.

This algorithm requires defining the idea of a single link or the proximity of other points to a single point. For single link, the proximity of two points (or clusters) is defined as the minimum of the distance between any two points (or clusters). If there exists such a link, or edge, then the two points (or clusters) are merged together.

This is often called “the nearest neighbor clustering technique.” [4] Relating this algorithm to graph theory, this clustering technique constructs a minimum spanning tree by finding connected components, so the algorithm is quite similar to Kruskal’s or Prim’s algorithm.

MSTSingleLink (Elements, AdjacencyMatrix)
Create a set of clusters C = {t1, t2,…,tn} from Elements
Create a partial distance matrix showing the distance between all clusters in C.
k = n, where n is the number of clusters
d = 0
Repeat
  Ci, Cj = closest pair of clusters in AdjacencyMatrix
  d = dis(Ci, Cj) // record the distance threshold for the dendrogram
  C = C – {Ci} – {Cj} ∪ {Ci ∪ Cj} // merge the two closest clusters or points
  dis({Ci ∪ Cj}, C) // recalculate the distance from the new cluster to all remaining clusters in C
  k = k - 1
Until k = 1

Typically, the termination criterion for this algorithm is that all elements end up grouped together in one cluster. A better termination criterion would be to record the distances at which the merges of individual objects and clusters take place; if there is a large jump in this distance (large being defined by the user), that is an indication that the two objects or clusters should not be merged because they are highly dissimilar. Note that the running time for MSTSingleLink is O(n^2), which makes it impractical for large datasets. For further information on single link MST, see [2], [4].
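As a concrete illustration of the procedure above, the following is a minimal Java sketch of single link agglomerative clustering over raw score vectors. The class and method names are illustrative only and this is not our exact implementation; it merges the closest pair of clusters until a target number of clusters remains, rather than recording a dendrogram.

import java.util.*;

public class SingleLinkClustering {
    // Merge clusters until only 'targetClusters' remain, always joining the
    // pair whose closest members are nearest (the single link criterion).
    public static List<List<double[]>> cluster(List<double[]> points, int targetClusters) {
        List<List<double[]>> clusters = new ArrayList<>();
        for (double[] p : points) {                          // start with one cluster per point
            clusters.add(new ArrayList<>(Collections.singletonList(p)));
        }
        while (clusters.size() > targetClusters) {
            int bestI = -1, bestJ = -1;
            double bestDist = Double.MAX_VALUE;
            for (int i = 0; i < clusters.size(); i++) {      // find the closest pair of clusters
                for (int j = i + 1; j < clusters.size(); j++) {
                    double d = singleLinkDistance(clusters.get(i), clusters.get(j));
                    if (d < bestDist) { bestDist = d; bestI = i; bestJ = j; }
                }
            }
            clusters.get(bestI).addAll(clusters.remove(bestJ)); // merge the closest pair
        }
        return clusters;
    }

    // Single link distance: minimum pairwise distance between members of two clusters.
    private static double singleLinkDistance(List<double[]> a, List<double[]> b) {
        double min = Double.MAX_VALUE;
        for (double[] x : a)
            for (double[] y : b)
                min = Math.min(min, euclidean(x, y));
        return min;
    }

    private static double euclidean(double[] x, double[] y) {
        double sum = 0;
        for (int i = 0; i < x.length; i++) sum += (x[i] - y[i]) * (x[i] - y[i]);
        return Math.sqrt(sum);
    }
}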

2.2 Partitional Cluster Construction

Where a hierarchical clustering algorithm creates clusters in multiple steps, a partitional algorithm, such as k-means, creates the clusters in one step. [4] Also, in partitional clustering, the number of clusters to create must be known a priori and used as input to the algorithm.

In k-means, elements are moved among a set of clusters until some termination criterion is reached, such as convergence. A possible convergence test is to check whether no element has changed clusters between iterations. Using k-means allows one to achieve a “high degree of similarity among elements, while a high degree of dissimilarity among elements in different clusters.” [4]

KMeansCluster (Elements, k)
Create a set of items T = {t1, t2,…,tn} from Elements
Assign initial values for the means m1, m2,…,mk
Repeat
  Assign each item ti to the cluster with the closest mean
  Calculate new means for each cluster
Until convergence criteria met

Note that the running time for KMeansCluster is O(tkn), where t is the number of iterations. While k-means does not suffer from the chaining problem, it does have other issues: it does not handle outliers well, does not work with categorical data, and produces only convex cluster shapes. [4] Also, while k-means produces good results, it does not scale well and is not time-efficient. [4] While the provided dataset is not large, k-means could have problems attempting to cluster millions of objects. Lastly, it is possible for k-means to converge to a local optimum and miss the global optimum. For further information on k-means clustering, see [2], [4].
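For concreteness, a minimal Java sketch of the k-means loop described above is shown below; the names are illustrative and this is a simplified version rather than our exact implementation (which also computed the evaluation measures described in section 4).

import java.util.*;

public class KMeans {
    // Cluster 'points' into k groups; returns the cluster index assigned to each point.
    public static int[] cluster(double[][] points, int k, int maxIterations) {
        int n = points.length, dims = points[0].length;
        double[][] means = new double[k][dims];
        Random rand = new Random();
        for (int c = 0; c < k; c++) {                        // seed each mean with a random point
            means[c] = points[rand.nextInt(n)].clone();
        }
        int[] assignment = new int[n];
        for (int iter = 0; iter < maxIterations; iter++) {
            boolean changed = false;
            for (int i = 0; i < n; i++) {                    // assign each point to its nearest mean
                int best = 0;
                double bestDist = Double.MAX_VALUE;
                for (int c = 0; c < k; c++) {
                    double d = squaredDistance(points[i], means[c]);
                    if (d < bestDist) { bestDist = d; best = c; }
                }
                if (assignment[i] != best) { assignment[i] = best; changed = true; }
            }
            if (!changed) break;                             // convergence: no point changed clusters
            for (int c = 0; c < k; c++) {                    // recompute the mean of each cluster
                double[] sum = new double[dims];
                int count = 0;
                for (int i = 0; i < n; i++) {
                    if (assignment[i] == c) {
                        for (int d = 0; d < dims; d++) sum[d] += points[i][d];
                        count++;
                    }
                }
                if (count > 0)
                    for (int d = 0; d < dims; d++) means[c][d] = sum[d] / count;
            }
        }
        return assignment;
    }

    private static double squaredDistance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) sum += (a[i] - b[i]) * (a[i] - b[i]);
        return sum;
    }
}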

2.3 Distance Criterion

In both algorithms, cluster formation relies on having some notion of a distance measure. Using this metric, we can determine how “similar” two elements are, and the distance metric chosen influences the shape of our clusters. While there are many distance measures, such as Mahalanobis, Hamming, city-block and Minkowski [2], [6], in our implementation we used two: Euclidean distance and Pearson’s correlation coefficient.

2.3.1 Euclidean Distance

Euclidean distance is the ordinary distance between two points that one would measure with a ruler. It is a simple distance metric and by far the most commonly used, though one has to make sure all attributes have the same scale [2]. For two points x and y with n attributes, it is defined as d(x, y) = sqrt( (x1 − y1)^2 + (x2 − y2)^2 + … + (xn − yn)^2 ).

2.3.2 Pearson’s Correlation Coefficient

Pearson’s correlation coefficient is a measure of the linear dependence between two variables X and Y, giving a value between +1 and −1 inclusive. A value of 1 implies that a linear equation describes the relationship between X and Y perfectly, with all data points lying on a line for which Y increases as X increases. A value of −1 implies that all data points lie on a line for which Y decreases as X increases. A value of 0 implies that there is no linear correlation between the variables. [3] Although it depends on the data being analyzed, typically anything over 0.5 or below −0.5 indicates a large correlation. Pearson’s correlation coefficient can be calculated as r = Σ(Xi − X̄)(Yi − Ȳ) / ( sqrt(Σ(Xi − X̄)^2) · sqrt(Σ(Yi − Ȳ)^2) ), where X̄ and Ȳ are the means of the two variables.

For further information on Pearson’s correlation coefficient, see [3].
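Both measures reduce to a few lines of Java when applied to two inmates’ vectors of 14 T scores. The sketch below uses illustrative names and is not taken verbatim from our implementation.

public class DistanceMeasures {
    // Euclidean distance between two T-score vectors of equal length.
    public static double euclidean(double[] x, double[] y) {
        double sum = 0;
        for (int i = 0; i < x.length; i++) {
            double diff = x[i] - y[i];
            sum += diff * diff;
        }
        return Math.sqrt(sum);
    }

    // Pearson's correlation coefficient between two T-score vectors.
    public static double pearson(double[] x, double[] y) {
        double meanX = mean(x), meanY = mean(y);
        double numerator = 0, sumSqX = 0, sumSqY = 0;
        for (int i = 0; i < x.length; i++) {
            double dx = x[i] - meanX, dy = y[i] - meanY;
            numerator += dx * dy;
            sumSqX += dx * dx;
            sumSqY += dy * dy;
        }
        return numerator / Math.sqrt(sumSqX * sumSqY);
    }

    private static double mean(double[] v) {
        double sum = 0;
        for (double d : v) sum += d;
        return sum / v.length;
    }
}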

2.4 Testing

Testing of the clustering algorithms was performed with the entire dataset of 100 examples. For single link clustering, the algorithm was run using both the Euclidean distance measure and Pearson’s correlation coefficient. For k-means clustering, the algorithm was run with the Euclidean distance measure nine times; on each run, the number of clusters specified, k, was increased by one, from 2 clusters up to 10 clusters. Discussion of the results is in section 4.

3. Potential Problems

The main problem that we encountered during implementation and testing was that, with single link clustering, we observed the chaining effect. The chaining effect is where “points that are not related to each other at all are merged together to form a cluster simply because they happen to be near (perhaps via a transitive relationship) points that are close to each other.” [4] By including dissimilar objects, this can cause clusters to become skewed. A potential solution to this problem would be to specify a maximum distance threshold above which points (or clusters) would not be merged; this could also serve as part of the termination criteria. Another solution would be to use a complete link distance criterion. [4] The chaining effect is most obviously seen when the output of the clustering program is a dendrogram.

4. Evaluation

For the evaluation of the results, we have chosen to use several different evaluation criteria. For single link clustering, we evaluate our results by examining the intra-cluster and inter-cluster distances, which measure the homogeneity of the clusters. In addition, we also evaluate the distance threshold at which the clusters were merged and the entropy within each cluster. Lastly, we evaluate each cluster by calculating its recall, precision and F measure.

For k-means clustering, we evaluate the clusters using some of the same techniques (recall, precision and F measure) and we also introduce the sum of squared errors (SSE) and the Bayesian information criterion (BIC).

4.1 Macro Evaluation

In single link clustering, the distance threshold that produced the best clusters was 46.23 (see Figure 4.1). Cluster 1 contained inmates who exhibited the personalities of victims of sexual abuse, while cluster 2 contained inmates who exhibited the personalities of perpetrators of sexual abuse.

The remaining clusters, 3, 4, 5 and 6, consisted of outliers in their own clusters whose personalities exhibited behavior of both victims and perpetrators of sexual abuse. We noted that the distance thresholds required to merge clusters 4, 5 and 6 were much higher: 49.60, 55.22 and 59.63, respectively.

Cluster Cluster Members Intra-cluster Distance Inter-cluster Distance
1 1, 65, 4, 6, 97, 45, 49, 58, 53, 19, 18, 42, 67, 55, 62, 36, 7, 32, 54, 69, 39, 38, 89, 24, 26, 72, 8, 3, 9, 11, 95, 91, 27 15.64 90.82
2 2, 10, 16, 41, 50, 47, 51, 87, 78, 64, 68, 79, 77, 44, 84, 29, 31, 34, 63, 33, 90, 80, 40, 74, 82, 37, 43, 71, 48, 93, 96, 98, 22, 73, 52, 20, 57, 59, 46, 61, 75, 85, 66, 92, 83, 94, 88, 81, 70, 100, 60, 99, 86, 56, 13, 15, 30, 21, 14, 12, 23 8.73 79.20
3 5, 35, 25 0.17 66.25
4 17 0.0 67.19
5 28 0.0 67.19
6 76 0.0 67.20
Figure 4.1 – Index of clusters with intra-cluster and inter-cluster distance for Euclidean single link clustering

In Pearson’s coefficient clustering, the distance threshold that produced the best clusters was 0.035 (see Figure 4.2).
Cluster Cluster Members Intra-cluster Distance Inter-cluster Distance
1 1, 17, 14, 52, 48, 54, 27, 21, 22, 26, 77, 74, 33, 50, 57, 62, 47, 49, 79, 31, 34, 69, 67, 2, 86, 71, 100, 3, 41, 10, 29, 8, 25, 40, 83, 37, 19, 44, 46, 72, 55, 81, 88, 66, 30, 98, 38, 70, 20, 45, 36, 80, 60, 87, 13, 56, 91, 68, 51, 23, 16, 53, 89, 12, 65, 82, 94, 96, 90, 64, 58, 43 19.71 138.31
2 4, 11, 35, 84, 39, 15, 61, 63, 18, 92, 24, 76, 6, 7, 59, 95, 78 3.68 108.01
3 5, 97, 73, 93, 75 2.5 106.01
4 32, 42, 99 0.0 106.72
5 28 0.0 109.65
6 9 0.0 109.65
7 85 0.0 109.65
Figure 4.2 – Index of clusters with intra-cluster and inter-cluster distance for Pearson’s single link clustering

Based on these results, using Euclidean distance may produce better clusters in terms of intra-cluster distance. Another improvement may come from changing the policy for finding the best distance at which to merge clusters (e.g., using complete link or average link distance measures). If a more accurate method of distance finding were implemented, we would expect to see a more consistent result set because there would be less of an effect from the chaining problem.

4.2 Micro Evaluation

When the individual clusters are broken down and their members are analyzed, we obtain the following results. These recall, precision, F measure and entropy measurements assume the same clusters as in section 4.1. In our calculations, we assign a negative value to failing to identify a sexual abuse perpetrator, because such an inmate would be allowed to interact with the general inmate population instead of being placed in administrative segregation. In addition, we consider only the first two clusters, predator and victim, and not the mixed classes.

Cluster Recall Precision F-measure
1 44.00% 32.35% 0.37
2 66.67% 75.75% 0.71
Overall 55.34% 54.05% 0.54
Figure 4.3 – Recall, Precision, F-Measure for Euclidean Single Link Clustering

Cluster Recall Precision F-measure
1 64.00% 22.22% 0.33
2 13.33% 70.58% 0.22
Overall 38.67% 46.40% 0.28
Figure 4.4 – Recall, Precision, F-Measure for Pearson’s Coefficient Single Link Clustering

These results show that, despite the chaining effect, Euclidean distance appears to be the superior distance measure when clustering via hierarchical agglomerative methods.

Our next test involved k-means clustering. We ran the algorithm for 2 to 10 clusters and measured the quality of the clusters using the Bayesian information criterion (a criterion for model selection among a class of parametric models) and the sum of squared errors:

BIC = k ln(n) − 2 ln(L)
SSE = Σj Σx∈Cj ||x − mj||^2

where x is the observed data, n is the number of data points in x, k is the number of free parameters to be estimated, p(x|k) is the likelihood of the observed data given the number of parameters, L is the maximized value of the likelihood function, and mj is the mean of cluster Cj.
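The SSE figure reported in the table below can be computed directly from the final assignments and cluster means. A minimal sketch, with illustrative names and assuming the points, assignments and means are already available as arrays:

public class ClusterEvaluation {
    // Sum of squared errors: for each point, the squared Euclidean distance
    // to the mean of the cluster it was assigned to.
    public static double sumSquaredErrors(double[][] points, int[] assignment, double[][] means) {
        double sse = 0;
        for (int i = 0; i < points.length; i++) {
            double[] mean = means[assignment[i]];
            for (int d = 0; d < points[i].length; d++) {
                double diff = points[i][d] - mean[d];
                sse += diff * diff;
            }
        }
        return sse;
    }
}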

k Clusters Members BIC Score SSE
2 Cluster 1: 2, 5, 10, 12, 14, 15, 16, 20, 21, 22, 23, 25, 29, 31, 33, 34, 35, 37, 40, 41, 43, 44, 46, 47, 50, 51, 56, 57, 59, 60, 61, 63, 66, 70, 71, 73, 75, 77, 78, 79, 80, 81, 83, 84, 85, 86, 87, 88, 90, 92, 94, 98, 99, 100
Cluster 2: 1, 3, 4, 6, 7, 8, 9, 11, 13, 17, 18, 19, 24, 26, 27, 28, 30, 32, 36, 38, 39, 42, 45, 48, 49, 52, 53, 54, 55, 58, 62, 64, 65, 67, 68, 69, 72, 74, 76, 82, 89, 91, 93, 95, 96, 97 144,666.03 223,893.45
3 Cluster 1: 1, 3, 4, 6, 7, 9, 11, 13, 18, 21, 24, 30, 32, 36, 38, 41, 42, 44, 45, 47, 48, 49, 50, 52, 53, 54, 55, 58, 62, 64, 65, 67, 68, 69, 74, 78, 82, 93, 95, 96, 97, 98
Cluster 2: 2, 5, 10, 12, 14, 15, 16, 20, 22, 23, 25, 29, 31, 33, 34, 35, 37, 40, 43, 46, 51, 56, 57, 59, 60, 61, 63, 66, 70, 71, 73, 75, 77, 79, 80, 81, 83, 84, 85, 86, 87, 88, 90, 92, 94, 99, 100
Cluster 3: 8, 17, 19, 26, 27, 28, 39, 72, 76, 89, 91 142,191.31 190,017.27
4 Cluster 1: 8, 17, 26, 28, 39, 72, 76, 89,
Cluster 2: 3, 4, 6, 19, 27, 36, 42, 45, 49, 55, 62, 67, 91, 95, 97
Cluster 3: 2, 5, 10, 12, 14, 15, 16, 20, 22, 23, 25, 29, 31, 33, 34, 35, 37, 40, 43, 46, 51, 56, 57, 59, 60, 61, 63, 66, 70, 71, 73, 75, 77, 80, 81, 83, 85, 86, 88, 90, 92, 94, 99, 100
Cluster 4: 1, 7, 9, 11, 13, 18, 21, 24, 30, 32, 38, 41, 44, 47, 48, 50, 52, 53, 54, 58, 64, 65, 68, 69, 74, 78, 79, 82, 84, 87, 93, 96, 98 143,341.18 174,843.29
5 Cluster 1: 8, 17, 24, 26, 38, 65, 72, 89
Cluster 2: 2, 10, 12, 13, 15, 16, 21, 22, 23, 29, 30, 32, 33, 37, 40, 41, 44, 47, 48, 50, 51, 52, 56, 71, 73, 77, 78, 79, 84, 87, 90, 93, 96, 98
Cluster 3: 5, 14, 20, 25, 31, 34, 35, 43, 46, 57, 59, 60, 61, 63, 66, 70, 75, 80, 81, 83, 85, 86, 88, 92, 94, 99, 100
Cluster 4: 7, 28, 36, 39, 76
Cluster 5: 1, 3, 4, 6, 9, 11, 18, 19, 27, 42, 45, 49, 53, 54, 55, 58, 62, 64, 67, 68, 69, 74, 82, 91, 95, 97 141,884.73 152,909.21
6 Cluster 1: 8, 17, 24, 26, 38, 65, 72, 89
Cluster 2: 2, 5, 10, 12, 14, 20, 22, 23, 25, 29, 31, 33, 34, 35, 37, 40, 43, 46, 51, 56, 57, 59, 60, 61, 63, 66, 70, 71, 75, 77, 80, 81, 83, 85, 86, 88, 90, 92, 94, 99, 100
Cluster 3: 1, 7, 13, 15, 16, 21, 30, 32, 41, 44, 47, 48, 50, 52, 53, 54, 58, 64, 68, 69, 73, 74, 78, 79, 82, 84, 87, 93, 96, 98
Cluster 4: 4, 6, 18, 19, 36, 39, 42, 45, 49, 62, 67, 91, 97
Cluster 5: 28, 76
Cluster 6: 3, 9, 11, 27, 55, 95 141,230.52 144,776.69
7 Cluster 1: 2, 10, 12, 13, 15, 16, 21, 22, 23, 29, 30, 31, 33, 37, 40, 41, 44, 47, 48, 50, 51, 56, 73, 77, 78, 79, 84, 87, 90, 93, 94, 98
Cluster 2: 5, 14, 20, 25, 34, 35, 43, 46, 57, 59, 60, 61, 63, 66, 70, 71, 75, 80, 81, 83, 85, 86, 88, 92, 99, 100,
Cluster 3: 4, 6, 9, 11, 18, 19, 42, 45, 49, 53, 55, 58, 62, 64, 67, 68, 74, 82, 95, 96, 97
Cluster 4: 28, 76,
Cluster 5: 3, 27, 91
Cluster 6: 1, 7, 32, 36, 39, 52, 54, 69
Cluster 7: 8, 17, 24, 26, 38, 65, 72, 89 140,546.79 135,787.72
8 Cluster 1: 3, 27, 95
Cluster 2: 1, 7, 24, 30, 32, 38, 53, 54, 58, 64, 65, 68, 69, 74, 82
Cluster 3: 39
Cluster 4: 9, 11, 18, 42, 62, 67,
Cluster 5: 4, 6, 19, 36, 45, 49, 55, 91, 97
Cluster 6: 5, 10, 12, 14, 20, 22, 23, 25, 29, 31, 34, 35, 37, 40, 43, 46, 56, 57, 59, 60, 61, 63, 66, 70, 71, 75, 77, 80, 81, 83, 85, 86, 88, 90, 92, 94, 99, 100
Cluster 7: 8, 17, 26, 28, 72, 76, 89
Cluster 8: 2, 13, 15, 16, 21, 33, 41, 44, 47, 48, 50, 51, 52, 73, 78, 79, 84, 87, 93, 96, 98 140,510.19 138,410.06
9 Cluster 1: 3, 9, 11, 74, 82, 95
Cluster 2: 15, 29, 30, 31, 33, 34, 40, 41, 44, 47, 50, 73, 78, 79, 84, 87, 90
Cluster 3: 8, 17, 24, 38, 65, 72, 89,
Cluster 4: 26, 28, 39, 76
Cluster 5: 5, 14, 20, 25, 35, 46, 57, 59, 60, 61, 63, 66, 70, 75, 80, 81, 83, 85, 86, 88, 92, 100
Cluster 6: 4, 6, 18, 19, 27, 36, 42, 45, 49, 55, 62, 67, 91, 97
Cluster 7: 2, 10, 16, 21, 22, 37, 43, 51, 71, 77, 94, 98, 99
Cluster 8: 1, 7, 13, 32, 48, 52, 53, 54, 58, 64, 68, 69, 93, 96
Cluster 9: 12, 23, 56 140,889.59 126,846.88
10 Cluster 1: 28, 76
Cluster 2: 91
Cluster 3: 30, 32, 53, 54, 58, 64, 68, 69, 74, 82
Cluster 4: 5, 14, 20, 25, 35, 43, 46, 57, 59, 60, 61, 63, 66, 70, 71, 75, 80, 81, 83, 85, 86, 88, 92, 99, 100
Cluster 5: 27
Cluster 6: 9
Cluster 7: 3, 4, 6, 11, 18, 19, 42, 45, 49, 55, 62, 67, 95, 97
Cluster 8: 1, 7, 13, 21, 36, 39, 48, 52, 93, 96
Cluster 9: 2, 10, 12, 15, 16, 22, 23, 29, 31, 33, 34, 37, 40, 41, 44, 47, 50, 51, 56, 73, 77, 78, 79, 84, 87, 90, 94, 98
Cluster 10: 8, 17, 24, 26, 38, 65, 72, 89 140,789.73 117,432.09
Figure 4.6 – Cluster Members, BIC Score and SSE Score for Euclidean k-means Clustering

Cluster Recall Precision F-measure
1 48.00% 22.22% 0.30
2 17.30% 28.20% 0.21
Overall 32.65% 25.21% 0.26
Figure 4.7 – Recall, Precision, F-Measure for Euclidean k-means Clustering

In these results, we can see that, as the number of clusters increases, the sum of squared errors decreases. This is because we are including fewer dissimilar items in each of the clusters, so they more accurately represent the true nature of that cluster. We would also expect the precision, recall, and F measure to increase as the number of clusters increases. However, it would become harder to interpret the actual “class” of the clusters as k increases, as we were instructed to disregard the class of each instance in the dataset. Additionally, we might achieve better results if we used decision trees to identify the most influential personality test markers and then used a subset of those markers for clustering.

Based on all of these results, while not highly accurate, the clusters could give prison officials good insight into which attributes are the most important in identifying who might be ongoing offenders.

5. Conclusion

In this research project, different methods of constructing clusters were explored. Additionally, different distance measures were implemented and then analyzed to see how they affected the accuracy of the clusters created.

While the results of this project show only a maximum of 67% accuracy, clustering is still a valid machine learning technique. With a more advanced algorithm and a larger dataset, clustering may be able to predict predators and victims at a much better rate.

References

[1] Abeel, T., de Peer, Y. V., Saeys, Y. Java-ML: A Machine Learning Library. Journal of Machine Learning Research, Vol. 10, 2009, 931-934.

[2] Alpaydin, E. Introduction to Machine Learning, Second Edition. The MIT Press, Cambridge, MA. 2010.

[3] Coolidge, F. Statistics: A Gentle Introduction, 2nd edition. SAGE Publications, Inc. 2006.

[4] Dunham, M. Data Mining: Introductory and Advanced Topics. Prentice-Hall. 2002.

[5] Saha, S., Bandyopadhyay, S. Performance Evaluation of Some Symmetry-Based Cluster Validity Indexes. IEEE Transactions on Systems, Man and Cybernetics – Part C: Applications and Reviews. Vol. 39, No. 4. July 2009.

[6] Jain, A.K., Murty, M.N., Flynn, P.J. Data Clustering: A Review. ACM Computing Surveys. Vol. 31, No. 3. Sept. 1999.

In a previous post, I explored how one might apply classification to solve a complex problem. This post will explore the code necessary to implement that nearest neighbor classification algorithm. If you would like a full copy of the source code, it is available here in zip format.

Knn.java – This is the main driver of the code. To do the classification, we are essentially interested in finding the distance from the particular instance we are trying to classify to the other instances. We then determine the classification of the instance we want from a “majority vote” of the k closest instances. Each feature of an instance is a separate class that essentially just stores a continuous or discrete value, depending on whether you are using regression to classify your neighbors. The additional feature classes and file reader are left to the reader as an exercise. Note that it would be fairly easy to weight features using this model if you want to give one feature more clout than another in determining the neighbors.

A nice visualization of the algorithm is provided by Kardi Teknomo. As you can see, we take the k closest instances and use a “majority vote” to classify the instance. While this is an extremely simple method, it is great for noisy data and large data sets. The two drawbacks are the running time, O(n^2), and the fact that we have to determine k ahead of time. However, despite this, as shown in the previous paper, the accuracy can be quite high.

import java.util.*;

public class Knn {
	public static final String PATH_TO_DATA_FILE = "coupious.data";
	public static final int NUM_ATTRS = 9;
	public static final int K = 262;

	public static final int CATEGORY_INDEX = 0;
	public static final int DISTANCE_INDEX = 1;
	public static final int EXPIRATION_INDEX = 2;
	public static final int HANDSET_INDEX = 3;
	public static final int OFFER_INDEX = 4;
	public static final int WSACTION_INDEX = 5;
	public static final int NUM_RUNS = 1000;
	public static double averageDistance = 0;

	public static void main(String[] args) {
		ArrayList<Instance> instances = null;
		ArrayList<Neighbor> distances = null;
		ArrayList<Neighbor> neighbors = null;
		WSAction.Action classification = null;
		Instance classificationInstance = null;
		FileReader reader = null;
		int numRuns = 0, truePositives = 0, falsePositives = 0, falseNegatives = 0, trueNegatives = 0;
		double precision = 0, recall = 0, fMeasure = 0;

		falsePositives = 1; // start at 1 so the precision denominator is never zero

		reader = new FileReader(PATH_TO_DATA_FILE);
		instances = reader.buildInstances();

		do {
			classificationInstance = extractIndividualInstance(instances);

			distances = calculateDistances(instances, classificationInstance);
			neighbors = getNearestNeighbors(distances);
			classification = determineMajority(neighbors);

			System.out.println("Gathering " + K + " nearest neighbors to:");
			printClassificationInstance(classificationInstance);

			printNeighbors(neighbors);
			System.out.println("\nExpected situation result for instance: " + classification.toString());

			if(classification.toString().equals(((WSAction)classificationInstance.getAttributes().get(WSACTION_INDEX)).getAction().toString())) {
				truePositives++;
			}
			else {
				falseNegatives++;
			}
			numRuns++;

			instances.add(classificationInstance);
		} while(numRuns < NUM_RUNS);

		precision = ((double)(truePositives / (double)(truePositives + falsePositives)));
		recall = ((double)(truePositives / (double)(truePositives + falseNegatives)));
		fMeasure = ((double)(precision * recall) / (double)(precision + recall));

		System.out.println("Precision: " + precision);
		System.out.println("Recall: " + recall);
		System.out.println("F-Measure: " + fMeasure);
		System.out.println("Average distance: " + (double)(averageDistance / (double)(NUM_RUNS * K)));
	}

	public static Instance extractIndividualInstance(ArrayList<Instance> instances) {
		Random generator = new Random(new Date().getTime());
		int random = generator.nextInt(instances.size() - 1);

		Instance singleInstance = instances.get(random);
		instances.remove(random);

		return singleInstance;
	}

	public static void printClassificationInstance(Instance classificationInstance) {
		for(Feature f : classificationInstance.getAttributes()) {
			System.out.print(f.getName() + ": ");
			if(f instanceof Category) {
				System.out.println(((Category)f).getCategory().toString());
			}
			else if(f instanceof Distance) {
				System.out.println(((Distance)f).getDistance().toString());
			}
			else if (f instanceof Expiration) {
				System.out.println(((Expiration)f).getExpiry().toString());
			}
			else if (f instanceof Handset) {
				System.out.print(((Handset)f).getOs().toString() + ", ");
				System.out.println(((Handset)f).getDevice().toString());
			}
			else if (f instanceof Offer) {
				System.out.println(((Offer)f).getOfferType().toString());
			}
			else if (f instanceof WSAction) {
				System.out.println(((WSAction)f).getAction().toString());
			}
		}
	}

	public static void printNeighbors(ArrayList<Neighbor> neighbors) {
		int i = 0;
		for(Neighbor neighbor : neighbors) {
			Instance instance = neighbor.getInstance();

			System.out.println("\nNeighbor " + (i + 1) + ", distance: " + neighbor.getDistance());
			i++;
			for(Feature f : instance.getAttributes()) {
				System.out.print(f.getName() + ": ");
				if(f instanceof Category) {
					System.out.println(((Category)f).getCategory().toString());
				}
				else if(f instanceof Distance) {
					System.out.println(((Distance)f).getDistance().toString());
				}
				else if (f instanceof Expiration) {
					System.out.println(((Expiration)f).getExpiry().toString());
				}
				else if (f instanceof Handset) {
					System.out.print(((Handset)f).getOs().toString() + ", ");
					System.out.println(((Handset)f).getDevice().toString());
				}
				else if (f instanceof Offer) {
					System.out.println(((Offer)f).getOfferType().toString());
				}
				else if (f instanceof WSAction) {
					System.out.println(((WSAction)f).getAction().toString());
				}
			}
		}
	}

	public static WSAction.Action determineMajority(ArrayList<Neighbor> neighbors) {
		int yea = 0, ney = 0;

		for(int i = 0; i < neighbors.size(); i++) {
			Neighbor neighbor = neighbors.get(i);
			Instance instance = neighbor.getInstance();
			if(instance.isRedeemed()) {
				yea++;
			}
			else {
				ney++;
			}
		}

		if(yea > ney) {
			return WSAction.Action.Redeem;
		}
		else {
			return WSAction.Action.Hit;
		}
	}

	public static ArrayList<Neighbor> getNearestNeighbors(ArrayList<Neighbor> distances) {
		ArrayList<Neighbor> neighbors = new ArrayList<Neighbor>();

		for(int i = 0; i < K; i++) {
			averageDistance += distances.get(i).getDistance();
			neighbors.add(distances.get(i));
		}

		return neighbors;
	}

	public static ArrayList<Neighbor> calculateDistances(ArrayList<Instance> instances, Instance singleInstance) {
		ArrayList<Neighbor> distances = new ArrayList<Neighbor>();
		Neighbor neighbor = null;
		int distance = 0;

		for(int i = 0; i < instances.size(); i++) {
			Instance instance = instances.get(i);
			distance = 0;
			neighbor = new Neighbor();

			// for each feature, go through and calculate the "distance"
			for(Feature f : instance.getAttributes()) {
				if(f instanceof Category) {
					Category.Categories cat = ((Category) f).getCategory();
					Category singleInstanceCat = (Category)singleInstance.getAttributes().get(CATEGORY_INDEX);
					distance += Math.pow((cat.ordinal() - singleInstanceCat.getCategory().ordinal()), 2);
				}
				else if(f instanceof Distance) {
					Distance.DistanceRange dist = ((Distance) f).getDistance();
					Distance singleInstanceDist = (Distance)singleInstance.getAttributes().get(DISTANCE_INDEX);
					distance += Math.pow((dist.ordinal() - singleInstanceDist.getDistance().ordinal()), 2);
				}
				else if (f instanceof Expiration) {
					Expiration.Expiry exp = ((Expiration) f).getExpiry();
					Expiration singleInstanceExp = (Expiration)singleInstance.getAttributes().get(EXPIRATION_INDEX);
					distance += Math.pow((exp.ordinal() - singleInstanceExp.getExpiry().ordinal()), 2);
				}
				else if (f instanceof Handset) {
					// there are two calculations needed here, one for device, one for OS
					Handset.Device device = ((Handset) f).getDevice();
					Handset singleInstanceDevice = (Handset)singleInstance.getAttributes().get(HANDSET_INDEX);
					distance += Math.pow((device.ordinal() - singleInstanceDevice.getDevice().ordinal()), 2);

					Handset.OS os = ((Handset) f).getOs();
					Handset singleInstanceOs = (Handset)singleInstance.getAttributes().get(HANDSET_INDEX);
					distance += Math.pow((os.ordinal() - singleInstanceOs.getOs().ordinal()), 2);
				}
				else if (f instanceof Offer) {
					Offer.OfferType offer = ((Offer) f).getOfferType();
					Offer singleInstanceOffer = (Offer)singleInstance.getAttributes().get(OFFER_INDEX);
					distance += Math.pow((offer.ordinal() - singleInstanceOffer.getOfferType().ordinal()), 2);
				}
				else if (f instanceof WSAction) {
					WSAction.Action action = ((WSAction) f).getAction();
					WSAction singleInstanceAction = (WSAction)singleInstance.getAttributes().get(WSACTION_INDEX);
					distance += Math.pow((action.ordinal() - singleInstanceAction.getAction().ordinal()), 2);
				}
				else {
					System.out.println("Unknown category in distance calculation.  Exiting for debug: " + f);
					System.exit(1);
				}
			}
			neighbor.setDistance(distance);
			neighbor.setInstance(instance);

			distances.add(neighbor);
		}

		// simple bubble sort of the neighbors by ascending distance
		for (int i = 0; i < distances.size(); i++) {
			for (int j = 0; j < distances.size() - i - 1; j++) {
				if(distances.get(j).getDistance() > distances.get(j + 1).getDistance()) {
					Neighbor tempNeighbor = distances.get(j);
					distances.set(j, distances.get(j + 1));
					distances.set(j + 1, tempNeighbor);
				}
			}
		}

		return distances;
	}

}

Abstract—Recommendation systems take artifacts about items and provide suggestions to the user about other products they might like. There are many different types of recommender algorithms, including nearest-neighbor, linear classifiers and SVMs. However, most recommender systems are collaborative systems that rely on users to rate the products they bought. This paper presents an analysis of recommender systems using a mobile device and backend data points for a coupon delivery system.

Index Terms—machine learning, recommender systems, supervised learning, nearest neighbor, classification

1. INTRODUCTION

Recommendations are a part of everyday life. An individual constantly receives recommendations from friends, family, salespeople and Internet resources, such as online reviews. We want to make the most informed choices possible about decisions in our daily life. For example, when buying a flat screen TV, we want to have the best resolution, size and refresh rate for the money. There are many factors that influence our decisions – budget, time, product features and, most importantly, previous experience. We can analyze all the factors that led up to the decision and then make a conclusion or decision based on those results. A recommender system uses historical data to recommend new items that may be of interest to a particular user.

Coupious is a mobile phone application that gives its users coupons and deals for businesses around their geographic location. The application runs on the user’s cell phone and automatically retrieves coupons based upon GPS coordinates. Redemption of these coupons is as simple as tapping a “Use Now” button. Coupious’ service is now available on the iPhone, iPod Touch, and Android platforms. Coupious is currently available in Minneapolis, MN, West Lafayette, IN at Purdue University and Berkeley, CA. Clip Mobile is the Canada-based version of Coupious that is currently available in Toronto.

Using push technology, it is possible to integrate a mobile recommendation system into Coupious. The benefit of this would be threefold: 1) offer the customers the best possible coupons based on their personal spending habits – if a user feels they received a “good deal” with Coupious, they would be more likely to use it again and integrate it into their bargain shopping strategy, 2) offer businesses the ability to capitalize on their market demographics – the ability to reach individual customers to provide goods or services would drive more revenue and add value to the product and 3) adding recommendations to the service would immediately make the system more useful to a user, as it would present desirable, geographically proximate offers without extraneous ones.

1.1 OUTLINE OF RESEARCH

In this research project, we evaluated the batch k-nearest neighbors algorithm in Java. We chose to write the implementation of the kNN algorithm in Java because of the ease of use of the language. Java also presented superior capabilities for working with and parsing data from files. Using Java allowed us to model the problem more efficiently through the use of OO concepts, such as polymorphism and inheritance. The kNN algorithm was originally suggested by Donald Knuth in [9], where it was referenced as the post-office problem of assigning residences to the nearest post office.

The goal of this research was to find a solution that we felt would be successful in achieving the highest rate of coupon redemption. Presumably, achieving the highest rate of redemption required learning what the user likes with the smallest error percentage. Additionally, we wanted to know whether increasing the number of attributes used for computation would affect the quality of the result set.

1.2 DATA

The data that we used is from the Coupious production database. Currently, there are approximately 70,000 rows of data (“nearby” queries, impressions and details impressions) and approximately 3,400 of those represent actual coupon redemptions. The data is an aggregate from March 25th, 2009 until February 11, 2010. The results are from a mixture of different cities where Coupious is currently in production.

From a logical standpoint, Coupious is simply a conduit through which a user may earn his discount and has no vested interest in whether or not a user redeems a coupon in a particular session. However, from a business standpoint, Coupious markets the product based on being able to entice sales through coupon redemption. Therefore, for classification purposes, sessions that ended in one or more redemptions will be labeled +1 and sessions that ended without redemption will be labeled -1.

1.3 PREVIOUS WORK

While there hasn’t been any previous work in the space of mobile recommendation systems, there has been a large amount of work on recommender systems and classification in general. In [1], [2] and [3], direct marketing is studied using collaborative filtering. In [3], the authors use SVMs and latent class models to predict whether a customer would be likely to buy a particular product. The most direct comparison to this work is in [1] and [8], where SVMs and linear classifiers are used to build content-driven recommender systems.

2. APPLICATION

Broadly, recommender systems can be grouped into two categories: content-based and collaborative. In content-based systems, the recommendations are based solely on the attributes of the actual product. For example, in Coupious, the attributes of a particular coupon redemption include the distance from the merchant when the coupon was used, the date and time of the redemption, the category of the coupon, the expiry and the offer text. These attributes are physical characteristics of the coupon. Recommendations can be made to users without relying on any experience-based information.

In collaborative systems, recommendations are based not only on product attributes but also on the overlap of preferences of “like-minded” people, first introduced by Goldberg et al. [5]. For example, a user is asked to rate how well they liked a product or to give an opinion of a movie. This provides the algorithm a baseline of preference for a particular user, which allows the algorithm to associate product attributes with a positive or negative response. Since Coupious does not ask for user ratings, this paper will focus exclusively on content-based applications.

Many content-based systems have similar or common attributes. As stated in [3], the “central problem of content-based recommendation is identifying a sufficiently large set of key attributes.” If the attribute set is too small, there may not be enough information for the program to build a profile of the user. Conversely, if there are too many attributes, the program won’t be able to identify the truly influential attributes, which leads to poor performance [6]. Also, while the label for a particular feature vector with Coupious data is +1 or -1, many of the features in the data are multi-valued attributes (such as distance, date-time stamps, etc.), which may be hard to represent in a binary manner if the algorithm requires it.

In feature selection, we are “interested in finding k of the d dimensions that give us the most information and accuracy and we discard the other (d – k) dimensions [4].” How can we find the attributes that will give us the most information and accuracy? For Coupious, the attribute set is quite limited. For this research, we explicitly chose the features that would contribute to our recommendations. In all cases, all attributes were under consideration while the algorithm was running; we never partitioned the attributes across different runs to create different recommendations.

2.1 THE kNN ALGORITHM

As shown in [7], “Once the clustering is complete, performance can be very good, since the size of the group that must be analyzed is much smaller.” The nearest neighbor algorithm is such a classification algorithm. K-nearest neighbors is one of the simplest machine learning algorithms: an instance is classified by a majority vote of the closest training examples in the hypothesis space. This algorithm is an example of instance-based learning, where previous data points are stored and interpolation is performed to find a recommendation. An important distinction to make is that kNN does not try to create a model from the training data, but rather performs its computation at query time. The algorithm works by finding the previously seen data points that are “closest” to the query data point and then using their outputs for prediction. The predicted class of a query point xq is the majority class among its k nearest neighbors, f(xq) = argmax_v Σ_{i=1..k} 1[f(xi) = v], where x1,…,xk are the k training examples closest to xq.

The algorithm is shown below:

KNN (Examples, Target_Instance)

  • Determine the parameter K = number of nearest neighbors.
  • Calculate the distance between Target_Instance and all the Examples based on the Euclidean distance.
  • Sort the distances and determine the K nearest neighbors based on the Kth minimum distance.
  • Gather the Category Y of the nearest neighbors.
  • Use simple majority of the category of nearest neighbors as the prediction value of the Target_Instance.
  • Return classification of Target_Instance.

This algorithm lends itself well to the Coupious application. When a user uses the Coupious application, they want it to launch quickly and present a list of coupons within 10-15 seconds. Since this algorithm is so simple, we are able to calculate coupons that the user might enjoy fairly quickly and deliver them to the handset. Also, because Coupious doesn’t know any personal details about the user, the ability to cluster users into groups without the need for any heavy additional implementation on the front or back ends of the application is advantageous.

There are several possible problems with this approach. While this is a simple algorithm, it doesn’t take high feature dimensionality into account. If there are many features, the algorithm will have to perform a lot of computations to create the clusters. Additionally, each attribute is given the same degree of influence on the recommendation. In Coupious, the date and time the coupon was redeemed has the same amount of bearing on the recommendation as the current distance from the offer and previous redemption history. The features may not be scaled according to their importance, and the performance of the recommendation may be degraded by irrelevant features. Lastly, [6] describes the curse of dimensionality: if a product has 20 attributes, only 2 of which are actually useful, two instances that are identical on the 2 relevant attributes may still be classified completely differently when all 20 attributes are considered.

2.2 ATTRIBUTE SELECTION

While implementing the k-nearest neighbors algorithm, we decided to evaluate instances based upon seven different attributes, including coupon category, distance, expiration, offer, redemption date, handset and handset operating system, and upon the session result, a “hit” or “redemption.” The offer and expiration attributes required extra processing before they were able to be used for clustering. In both of these attributes, a “bag of words” (applying a Naive Bayes classifier to text to determine the classification) implementation was used to determine what type of offer the coupon had made.

For the OFFER attribute, we split the value space into 4 discrete values: PAYFORFREE (an instance where the customer had to pay some initial amount to receive a free item), PERCENTAGE (an instance where the customer received some percentage discount), DOLLAR (an instance where the customer received a dollar-amount discount) and UNKNOWN (an instance where the classification was unknown). While a PERCENTAGE and a DOLLAR value can compute to exactly the same discount, consumers tend to react differently when seeing percentage discounts versus dollar discounts (e.g., 50% off instead of a $5 discount, even though they equate to an identical discount if the price is $10).

For the expiration attribute, some text parsing was done to determine what type of expiration the coupon had (DATE, USES, NONE or UNKNOWN). If the parsing detected a date format, we used DATE, and if the parsing detected that it was a limited-usage coupon (either limited by total uses across a population, herein known as “global,” or limited by uses per customer, herein known as “local”), we designated that the coupon was USES. If the coupon was valid indefinitely, we designated that the coupon expiration was NONE. We did not distinguish coupons by how far in the future their expiration dates fell, by the type of limited usage (global or local), or by the number of uses remaining. A simplified sketch of this parsing step is shown below.
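Our implementation used a “bag of words” classifier for this step; purely as an illustration, a keyword-based stand-in with hypothetical enum and method names might look like the following.

public class OfferParser {
    public enum OfferType { PAYFORFREE, PERCENTAGE, DOLLAR, UNKNOWN }
    public enum ExpirationType { DATE, USES, NONE, UNKNOWN }

    // Simplified keyword matching; the production system used a bag-of-words classifier instead.
    public static OfferType classifyOffer(String offerText) {
        String text = offerText.toLowerCase();
        if (text.contains("free") && (text.contains("buy") || text.contains("purchase"))) {
            return OfferType.PAYFORFREE;   // e.g. "buy one get one free"
        } else if (text.contains("%") || text.contains("percent")) {
            return OfferType.PERCENTAGE;   // e.g. "50% off"
        } else if (text.contains("$")) {
            return OfferType.DOLLAR;       // e.g. "$5 off your order"
        }
        return OfferType.UNKNOWN;
    }

    public static ExpirationType classifyExpiration(String expiryText) {
        String text = expiryText.toLowerCase().trim();
        if (text.matches(".*\\d{1,2}/\\d{1,2}/\\d{2,4}.*")) {
            return ExpirationType.DATE;    // a date format was detected
        } else if (text.contains("use") || text.contains("limit")) {
            return ExpirationType.USES;    // limited number of uses (global or local)
        } else if (text.isEmpty() || text.contains("no expiration")) {
            return ExpirationType.NONE;    // valid indefinitely
        }
        return ExpirationType.UNKNOWN;
    }
}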

An important detail to note is that in our implementation of kNN, if an attribute was unknown, we declined to use it in the computation of the nearest neighbor, because the UNKNOWN value was typically the last value in a Java enumeration and would therefore be assigned a high integer value, which would skew the results unnecessarily if used in the computation.

For the remaining attributes (OFFERTIME, CATEGORY, DISTANCE, HANDSET, which considers both handset model and OS, and ACTION), the value space of each attribute was divided over the possible discrete values for that attribute according to Table 2.2.

Attribute Possible Values
OFFERTIME Morning, Afternoon, Evening, Night, Unknown
CATEGORY Entertainment, Automotive, Food & Dining, Health & Beauty, Retail, Sports & Recreation, Travel, Clothing & Apparel, Electronics & Appliances, Furniture & Decor, Grocery, Hobbies & Crafts, Home Services, Hotels & Lodging, Nightlife & Bars, Nonprofits & Youth, Office Supplies, Other, Pet Services, Professional Services, Real Estate, Unknown
DISTANCE Less than 2 miles, 2 to 5 miles, 5 to 10 miles, 10 to 20 miles, 20 to 50 miles, 50 to 100 miles, Unknown
HANDSET Device: iPhone, iPod, G1, Hero, myTouch, Droid, Unknown
OS: iPhone, Android, Unknown
ACTION Redeem, Hit, Unknown

Table 2.2

In the case of the handset OS, iPhone and iPod were classified together as the iPhone OS as they are the same OS with different build targets. It is important to note that kNN works equally well with continuous valued attributes as well as discrete-valued attributes. For further discussion on using kNN with real-valued attributes, see [10].

3. PROBLEMS

There were several problems encountered while implementing kNN. The first problem was “majority voting.” Majority voting is the last step in the algorithm to classify an instance and is an inherent problem with the way the kNN algorithm works. If a particular class dominates the training data, it will skew the votes towards that class, since we are only considering data at a local level, that is, the distance from our classification instance to the nearest data points. There are two ways to solve this problem: 1) balancing the dataset, or 2) weighting the neighboring instances. Balancing is the simplest technique, where an equal proportion of each class is present in the training data. A more complicated, but more effective, method is to weight the neighboring instances such that neighbors that are further away have a smaller weight value than closer neighbors; a sketch of this variant follows.
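As an illustration of the weighting option, a distance-weighted variant of the determineMajority method from the Knn.java listing earlier in this post might look like the sketch below. It reuses the Neighbor, Instance and WSAction classes shown above and is not the implementation we actually tested.

import java.util.ArrayList;

public class WeightedVoting {
    // Weight each neighbor's vote by the inverse of its distance, so closer
    // neighbors influence the classification more than distant ones.
    public static WSAction.Action determineWeightedMajority(ArrayList<Neighbor> neighbors) {
        double redeemVotes = 0, hitVotes = 0;

        for (Neighbor neighbor : neighbors) {
            // add a small constant so a zero distance does not cause division by zero
            double weight = 1.0 / (neighbor.getDistance() + 0.001);
            if (neighbor.getInstance().isRedeemed()) {
                redeemVotes += weight;
            } else {
                hitVotes += weight;
            }
        }

        return redeemVotes > hitVotes ? WSAction.Action.Redeem : WSAction.Action.Hit;
    }
}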

Determining “k” beforehand is a troublesome problem. This is due to the fact that, if “k” is too large, the separation between classifications becomes blurred. This could cause the program to group two clusters together that are, in fact, distinct clusters themselves. However, a large “k” value does reduce the effect of noise by including more samples in the hypothesis. If “k” is too small, we might not calculate a good representation of the sample space because our results were too local. In either case, if the target attribute is binary, “k” should be an odd number to avoid the possibilities of ties with majority voting.

Also, kNN has a high computation cost because it has to consider the distance of every point from itself to another point. The time-complexity of kNN is O(n), which wouldn’t scale well to hypothesis spaces with millions of instances.

Discretizing non-related attributes, like category, presented a unique challenge. When considering continuous attributes, such as distance, it was easy to discretize the data; in the case of Coupious, the distance ranges were already defined in the application, so we just had to translate those over to our classification algorithm. However, some attributes, such as category, while related at an attribute level (in that they were all categories of coupon), had no natural numeric values. In this case, we simply assigned them increasing integer values.

Another problem that we encountered is the sparsity problem. Since we implemented a content-driven model, the degree of accuracy relied upon how much data we had about a particular end user to build their profile. If the customer only had one or two sessions ending in no redemptions, it might not be possible to achieve any accuracy about this person. We dealt with this problem by artificially creating new records to supplement previous real records.

Coupious relies on the GPS module inside the smart phone to tell us where a user is currently located. From that position, the user gets a list of coupons that are close to the user’s location. However, there are no safeguards in place to guarantee that a redemption is real. A curious user may attempt to redeem a coupon when he is not at the actual merchant location. In the data, we can account for this by calculating the distance from the merchant at the time of the redemption request. However, this GPS reading may be inaccurate, as the GPS module can adjust its accuracy to save power and battery life.

Lastly, accuracy is somewhat limited because of the fact that we are using a content-only model. There is no way to interact with the user to ask if the recommendations are truly useful. To achieve this additional metric would require major changes to the application and the backend systems that are outside the scope of this research paper.

Despite the multiple problems with kNN, it is quite robust to noisy data as indicated in section 4, which makes it well-suited for this classification task, as the author can only verify the reasonableness of the data, not the integrity.

4. TESTING

Testing of the kNN logic was carried out using a 3-fold cross validation method, where the data was divided into training and test sets such that |R| = k × |S|, where |R| is the size of the training set, k is the relative size and |S| is the size of the test set.

For each test run, we chose 4,000 training examples and 1 test example at random. We attempted to keep the labeled classes in the training set as balanced as possible by setting a threshold n. This threshold prevented the classes from becoming unbalanced by more than a difference of n; if the threshold n was ever reached, we discarded random selections until we were under the threshold again, thereby rebalancing the classes (a sketch of this sampling step follows). Discussion of the results is in section 5.
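A minimal sketch of this balanced sampling step is shown below. The names are hypothetical; only the threshold n and the redeemed/not-redeemed label come from the description above.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class BalancedSampler {
    // Draw 'sampleSize' training examples at random while keeping the counts of
    // redeemed and non-redeemed sessions within 'threshold' of each other.
    public static List<Instance> sample(List<Instance> all, int sampleSize, int threshold) {
        List<Instance> shuffled = new ArrayList<>(all);
        Collections.shuffle(shuffled, new Random());

        List<Instance> sample = new ArrayList<>();
        int redeemed = 0, notRedeemed = 0;

        for (Instance candidate : shuffled) {
            if (sample.size() >= sampleSize) break;
            boolean isRedeemed = candidate.isRedeemed();
            // skip the candidate if adding it would unbalance the classes past the threshold
            if (isRedeemed && (redeemed + 1) - notRedeemed > threshold) continue;
            if (!isRedeemed && (notRedeemed + 1) - redeemed > threshold) continue;
            sample.add(candidate);
            if (isRedeemed) redeemed++; else notRedeemed++;
        }
        return sample;
    }
}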

5. EVALUATION

Even though kNN is a simplistic algorithm, the classification results were quite accurate. To test the algorithm, we performed 10,000 independent runs where an equal number of “hit” and “redemption” rows were selected at random (2,000 of each so as to keep the inductive bias as fair as possible). An individual classification instance was chosen at random from that set of 4,000 instances and was then classified according to its nearest neighbors.

5.1 MACRO EVALUATION

When evaluating the results, there was one main factor that affected the recall: the size of k, the number of neighbors considered in the calculation (see Table 5.1). In the 10,000 independent runs using random subsets of data for each test, the overall recall for the kNN algorithm was 85.32%. The average F-measure for these runs was 0.45.

However, if one considers a subset of results with k-size less than or equal to 15, the average recall was much higher – 94%. After a k-size greater than 30, we see a significant drop-off of recall. This can be attributed to the fact that the groups are becoming less defined because, as k-size grows, the nodes that are being used are “further” away from the classification instance, and therefore the results are not as “good.”

5.2 MICRO EVALUATION

When broken down into smaller sets of 1,000 runs, the recall and F-measure vary greatly. In 16 runs, we had a range from 29.40% recall to 98.20% recall. The median recall was 92.70%.

Run K Size Recall F-measure Avg. Distance
1 1 98.20% 0.49 0.18
2 2 97.80% 0.49 0.23
3 3 95.80% 0.48 0.49
4 4 95.70% 0.48 0.44
5 5 95.60% 0.48 0.46
6 6 95.20% 0.48 0.59
7 7 93.10% 0.48 0.48
8 8 92.70% 0.47 0.69
9 9 94.20% 0.48 0.82
10 10 93.40% 0.48 0.88
11 15 91.70% 0.48 1.21
12 30 87.50% 0.47 2.16
13 50 85.10% 0.46 3.63
14 100 70.80% 0.42 7.57
15 200 48.90% 0.32 16.57
16 500 29.40% 0.22 31.64
Overall 85.32% 0.45 4.25

Table 5.1

There is a large gap between our maximum and our minimum recall. This can be attributed mainly to our k-size. Based on these results, we would feel fairly confident that we could make a useful prediction, but only if we had a confidence rating on the results (due to the high variability of the results).

These results could provide good insight into what k-size or neighbors are the most influential in suggesting a coupon. This would allow us to more carefully target Coupious advertising based on the user.

When could one consider these results valid? If the average distance of a classification instance to the examples is below a threshold such that the classification truly reflects its neighbors, we could consider the results valid. Another form of validating the results could be pruning the attribute set. If we were able to prune away attributes that didn’t affect the recall in a negative manner, we would be left with a set of attributes that truly influence the customers purchasing behavior, although this task is better suited for a decision or regression tree. An improvement to this kNN algorithm might come in the form of altering the k-size based upon the population or attributes that we are considering.

6. CONCLUSION

In this research project, we have implemented the kNN algorithm for recommender systems. The algorithm for the nearest neighbor was explored and several problems were identified and overcome. Different techniques were investigated to improve the accuracy of the system. The results of this project show an overall accuracy of 85.32%, which makes kNN an excellent, simple technique for implementing recommender systems.

REFERENCES

[1] Zhang, T., Iyengar, V. S. Recommender Systems Using Linear Classifiers. Journal of Machine Learning Research 2. (2002). 313-334.
[2] Basu, C., Hirsh, H., Cohen, W. Recommendation as Classification: Using Social and Content-Based Information in Recommendation.
[3] Cheung, K. W., Kowk, J. T., Law, M. H., Tsui, K. C. Mining Customer Product Ratings for Personalized Marketing
[4] E. Alpaydin, Introduction to Machine Learning, 2nd ed. MIT Press. Cambridge, Mass, 2010.
[5] D. Goldberg, D. Nichols, B.M. Oki, D. Terry, Collaborative filtering to weave an information tapestry, Communications of the ACM 35. (12) (December 1992) 61 – 70.
[6] T.M. Mitchell, Machine Learning, New York. McGraw-Hill, 1997.
[7] B. Sarwar, G. Karypis, J. Konstan, and J. Riedl, “Recommender Systems for Large-Scale E-Commerce: Scalable Neighborhood Formation Using Clustering,” Proc. Fifth Int’l Conf. Computer and Information Technology, 2002.
[8] D. Barbella, S. Benzaid, J. Christensen, B. Jackson, X. V. Qin, D. Musicant. Understanding Support Vector Machine Classifications via a Recommender System-Like Approach. [Online]. http://www.cs.carleton.edu/faculty/dmusican/svmzen.pdf
[9] D.E. Knuth. “The Art of Computer Programming.” Addison-Wesley. 1973.
[10] C. Elkan. “Nearest Neighbor Classification.” University of California – San Diego. 2007. [Online]. http://cseweb.ucsd.edu/~elkan/151/nearestn.pdf

In a previous post, I explored how one might apply decision trees to solve a complex problem. This post will explore the code necessary to implement that decision tree. If you would like a full copy of the source code, it is available here in zip format.

Entropy.java – In Entropy.java, we are concerned with calculating entropy, the amount of uncertainty or randomness associated with a particular variable. For example, consider a classifier with two classes, YES and NO. If a particular variable or attribute, say x, has three training examples of class YES and three training examples of class NO (for a total of six), the entropy would be 1. This is because there is an equal number of both classes for this variable, which is as mixed up as the set can get. Likewise, if x had all six training examples of a particular class, say YES, the entropy would be 0, because this variable would be pure, making it a leaf node in our decision tree.

Entropy may be calculated as Entropy(S) = -Σi pi log2(pi), where pi is the proportion of examples in S that belong to class i; for two classes this reduces to -p+ log2(p+) - p- log2(p-). In code:

import java.util.ArrayList;

public class Entropy {	
	public static double calculateEntropy(ArrayList<Record> data) {
		double entropy = 0;
		
		if(data.size() == 0) {
			// nothing to do
			return 0;
		}
		
		// for each possible value i of the target attribute (PlayTennis, stored
		// at index 4 of each record), count how many records take that value
		for(int i = 0; i < Hw1.setSize("PlayTennis"); i++) {
			int count = 0;
			for(int j = 0; j < data.size(); j++) {
				Record record = data.get(j);
				
				if(record.getAttributes().get(4).getValue() == i) {
					count++;
				}
			}
				
			double probability = count / (double)data.size();
			if(count > 0) {
				entropy += -probability * (Math.log(probability) / Math.log(2));
			}
		}
		
		return entropy;
	}
	
	public static double calculateGain(double rootEntropy, ArrayList<Double> subEntropies, ArrayList<Integer> setSizes, int data) {
		double gain = rootEntropy; 
		
		for(int i = 0; i < subEntropies.size(); i++) {
			gain += -((setSizes.get(i) / (double)data) * subEntropies.get(i));
		}
		
		return gain;
	}
}

Tree.java – This tree class contains all of our code for building the decision tree. Note that at each level, we choose the attribute that presents the best gain for that node. The gain is simply the expected reduction in the entropy of X achieved by learning the state of the random variable A; gain is also known as Kullback-Leibler divergence. Gain can be calculated as

Gain(S, A) = Entropy(S) - Σv (|Sv| / |S|) * Entropy(Sv)

where the sum runs over each value v of attribute A and Sv is the subset of S for which A has value v. Notice that gain is calculated as a function of all the values of the attribute.

import java.io.*;
import java.util.*;

public class Tree {
	public Node buildTree(ArrayList<Record> records, Node root, LearningSet learningSet) {
		int bestAttribute = -1;
		double bestGain = 0;
		root.setEntropy(Entropy.calculateEntropy(root.getData()));
		
		if(root.getEntropy() == 0) {
			return root;
		}
		
		for(int i = 0; i < Hw1.NUM_ATTRS - 2; i++) {
			if(!Hw1.isAttributeUsed(i)) {
				double entropy = 0;
				ArrayList<Double> entropies = new ArrayList<Double>();
				ArrayList<Integer> setSizes = new ArrayList<Integer>();
				
				// iterate over every possible value of attribute i, keeping the entropy
				// and subset-size lists aligned (empty subsets contribute zero entropy
				// and zero weight to the gain calculation)
				for(int j = 0; j < Hw1.setSize(Hw1.attrMap.get(i)); j++) {
					ArrayList<Record> subset = subset(root, i, j);
					setSizes.add(subset.size());
					
					if(subset.size() != 0) {
						entropy = Entropy.calculateEntropy(subset);
						entropies.add(entropy);
					}
					else {
						entropies.add(0.0);
					}
				}
				
				double gain = Entropy.calculateGain(root.getEntropy(), entropies, setSizes, root.getData().size());
				
				if(gain > bestGain) {
					bestAttribute = i;
					bestGain = gain;
				}
			}
		}
		if(bestAttribute != -1) {
			int setSize = Hw1.setSize(Hw1.attrMap.get(bestAttribute));
			root.setTestAttribute(new DiscreteAttribute(Hw1.attrMap.get(bestAttribute), 0));
			root.children = new Node[setSize];
			root.setUsed(true);
			Hw1.usedAttributes.add(bestAttribute);
			
			for (int j = 0; j< setSize; j++) {
				root.children[j] = new Node();
				root.children[j].setParent(root);
				root.children[j].setData(subset(root, bestAttribute, j));
				root.children[j].getTestAttribute().setName(Hw1.getLeafNames(bestAttribute, j));
				root.children[j].getTestAttribute().setValue(j);
			}

			for (int j = 0; j < setSize; j++) {
				buildTree(root.children[j].getData(), root.children[j], learningSet);
			}

			root.setData(null);
		}
		else {
			return root;
		}
		
		return root;
	}
	
	public ArrayList<Record> subset(Node root, int attr, int value) {
		ArrayList<Record> subset = new ArrayList<Record>();
		
		for(int i = 0; i < root.getData().size(); i++) {
			Record record = root.getData().get(i);
			
			if(record.getAttributes().get(attr).getValue() == value) {
				subset.add(record);
			}
		}
		return subset;
	}
	
	public double calculateSurrogates(ArrayList<Record> records) {
		return 0;
	}
}

DiscreteAttribute.java – This class defines the integer constants (and equivalent enums) used to encode each discrete attribute value.


public class DiscreteAttribute extends Attribute {
	public static final int Sunny = 0;
	public static final int Overcast = 1;
	public static final int Rain = 2;

	public static final int Hot = 0;
	public static final int Mild = 1;
	public static final int Cool = 2;
	
	public static final int High = 0;
	public static final int Normal = 1;
	
	public static final int Weak = 0;
	public static final int Strong = 1;
	
	public static final int PlayNo = 0;
	public static final int PlayYes = 1;
	
	enum PlayTennis {
		No,
		Yes
	}
	
	enum Wind {
		Weak,
		Strong
	}
	
	enum Humidity {
		High,
		Normal
	}
	
	enum Temp {
		Hot,
		Mild,
		Cool
	}
	
	enum Outlook {
		Sunny,
		Overcast,
		Rain
	}

	public DiscreteAttribute(String name, double value) {
		super(name, value);
	}

	public DiscreteAttribute(String name, String value) {
		super(name, value);
	}
}

Hw1.java – This class is our main driver class

import java.util.*;

public class Hw1 {
	public static int NUM_ATTRS = 6;
	public static ArrayList<String> attrMap;
	public static ArrayList<Integer> usedAttributes = new ArrayList<Integer>();

	public static void main(String[] args) {
		populateAttrMap();

		Tree t = new Tree();
		ArrayList<Record> records;
		LearningSet learningSet = new LearningSet();
		
		// read in all our data
		records = FileReader.buildRecords();
		
		Node root = new Node();
		
		for(Record record : records) {
			root.getData().add(record);
		}
		
		t.buildTree(records, root, learningSet);
		traverseTree(records.get(12), root);
		return;
	}
	
	public static void traverseTree(Record r, Node root) {
		while(root.children != null) {
			double nodeValue = 0;
			for(int i = 0; i < r.getAttributes().size(); i++) {
				if(r.getAttributes().get(i).getName().equalsIgnoreCase(root.getTestAttribute().getName())) {
					nodeValue = r.getAttributes().get(i).getValue();
					break;
				}
			}
			for(int i = 0; i < root.getChildren().length; i++) {
				if(nodeValue == root.children[i].getTestAttribute().getValue()) {
					// descend into the matching branch and stop; without this return
					// the while loop would revisit the same node indefinitely
					traverseTree(r, root.children[i]);
					return;
				}
			}
			// no child matched the record's value; stop rather than loop forever
			return;
		}
		
		System.out.print("Prediction for Play Tennis: ");
		if(root.getTestAttribute().getValue() == 0) {
			System.out.println("No");
		}
		else if(root.getTestAttribute().getValue() == 1) {
			System.out.println("Yes");
		}

		return;
	}
	
	public static boolean isAttributeUsed(int attribute) {
		if(usedAttributes.contains(attribute)) {
			return true;
		}
		else {
			return false;
		}
	}
	
	public static int setSize(String set) {
		if(set.equalsIgnoreCase("Outlook")) {
			return 3;
		}
		else if(set.equalsIgnoreCase("Wind")) {
			return 2;
		}
		else if(set.equalsIgnoreCase("Temperature")) {
			return 3;
		}
		else if(set.equalsIgnoreCase("Humidity")) {
			return 2;
		}
		else if(set.equalsIgnoreCase("PlayTennis")) {
			return 2;
		}
		return 0;
	}
	
	public static String getLeafNames(int attributeNum, int valueNum) {
		if(attributeNum == 0) {
			if(valueNum == 0) {
				return "Sunny";
			}
			else if(valueNum == 1) {
				return "Overcast";
			}
			else if(valueNum == 2) {
				return "Rain";
			}
		}
		else if(attributeNum == 1) {
			if(valueNum == 0) {
				return "Hot";
			}
			else if(valueNum == 1) {
				return "Mild";
			}
			else if(valueNum == 2) {
				return "Cool";
			}
		}
		else if(attributeNum == 2) {
			if(valueNum == 0) {
				return "High";
			}
			else if(valueNum == 1) {
				return "Normal";
			}
		}
		else if(attributeNum == 3) {
			if(valueNum == 0) {
				return "Weak";
			}
			else if(valueNum == 1) {
				return "Strong";
			}
		}
		
		return null;
	}
	
	public static void populateAttrMap() {
		attrMap = new ArrayList<String>();
		attrMap.add("Outlook");
		attrMap.add("Temperature");
		attrMap.add("Humidity");
		attrMap.add("Wind");
		attrMap.add("PlayTennis");
	}
}

Node.java – This class holds the information stored at each node in the tree.


import java.util.*;

public class Node {
	private Node parent;
	public Node[] children;
	private ArrayList<Record> data;
	private double entropy;
	private boolean isUsed;
	private DiscreteAttribute testAttribute;

	public Node() {
		this.data = new ArrayList<Record>();
		setEntropy(0.0);
		setParent(null);
		setChildren(null);
		setUsed(false);
		setTestAttribute(new DiscreteAttribute("", 0));
	}

	public void setParent(Node parent) {
		this.parent = parent;
	}

	public Node getParent() {
		return parent;
	}

	public void setData(ArrayList<Record> data) {
		this.data = data;
	}

	public ArrayList<Record> getData() {
		return data;
	}

	public void setEntropy(double entropy) {
		this.entropy = entropy;
	}

	public double getEntropy() {
		return entropy;
	}

	public void setChildren(Node[] children) {
		this.children = children;
	}

	public Node[] getChildren() {
		return children;
	}

	public void setUsed(boolean isUsed) {
		this.isUsed = isUsed;
	}

	public boolean isUsed() {
		return isUsed;
	}

	public void setTestAttribute(DiscreteAttribute testAttribute) {
		this.testAttribute = testAttribute;
	}

	public DiscreteAttribute getTestAttribute() {
		return testAttribute;
	}
}

FileReader.java – The least interesting class in the code; it reads playtennis.data and builds the list of records.

import java.io.*;
import java.util.ArrayList;
import java.util.StringTokenizer;

public class FileReader {
	public static final String PATH_TO_DATA_FILE = "playtennis.data";

    public static ArrayList<Record> buildRecords() {
		BufferedReader reader = null;
		ArrayList<Record> records = new ArrayList<Record>();

        try { 
           File f = new File(PATH_TO_DATA_FILE);
           FileInputStream fis = new FileInputStream(f); 
           reader = new BufferedReader(new InputStreamReader(fis));
           
           // read the first record of the file
           String line;
           Record r = null;
           ArrayList<DiscreteAttribute> attributes;
           while ((line = reader.readLine()) != null) {
              StringTokenizer st = new StringTokenizer(line, ",");
              attributes = new ArrayList<DiscreteAttribute>();
              r = new Record();
              
              if(Hw1.NUM_ATTRS != st.countTokens()) {
            	  throw new Exception("Unknown number of attributes!");
              }
              	
			  @SuppressWarnings("unused")
			  String day = st.nextToken();
			  String outlook = st.nextToken();
			  String temperature = st.nextToken();
			  String humidity = st.nextToken();
			  String wind = st.nextToken();
			  String playTennis = st.nextToken();
			  
			  if(outlook.equalsIgnoreCase("overcast")) {
				  attributes.add(new DiscreteAttribute("Outlook", DiscreteAttribute.Overcast));
			  }
			  else if(outlook.equalsIgnoreCase("sunny")) {
				  attributes.add(new DiscreteAttribute("Outlook", DiscreteAttribute.Sunny));
			  }
			  else if(outlook.equalsIgnoreCase("rain")) {
				  attributes.add(new DiscreteAttribute("Outlook", DiscreteAttribute.Rain));
			  }
			  
			  if(temperature.equalsIgnoreCase("hot")) {
				  attributes.add(new DiscreteAttribute("Temperature", DiscreteAttribute.Hot));
			  }
			  else if(temperature.equalsIgnoreCase("mild")) {
				  attributes.add(new DiscreteAttribute("Temperature", DiscreteAttribute.Mild));
			  }
			  else if(temperature.equalsIgnoreCase("cool")) {
				  attributes.add(new DiscreteAttribute("Temperature", DiscreteAttribute.Cool));
			  }
			  
			  if(humidity.equalsIgnoreCase("high")) {
				  attributes.add(new DiscreteAttribute("Humidity", DiscreteAttribute.High));
			  }
			  else if(humidity.equalsIgnoreCase("normal")) {
				  attributes.add(new DiscreteAttribute("Humidity", DiscreteAttribute.Normal));
			  }
			  
			  if(wind.equalsIgnoreCase("weak")) {
				  attributes.add(new DiscreteAttribute("Wind", DiscreteAttribute.Weak));

			  }
			  else if(wind.equalsIgnoreCase("strong")) {
				  attributes.add(new DiscreteAttribute("Wind", DiscreteAttribute.Strong));

			  }
			  
			  if(playTennis.equalsIgnoreCase("no")) {
				  attributes.add(new DiscreteAttribute("PlayTennis", DiscreteAttribute.PlayNo));
			  }
			  else if(playTennis.equalsIgnoreCase("yes")) {
				  attributes.add(new DiscreteAttribute("PlayTennis", DiscreteAttribute.PlayYes));
			  }
			    		    
			  r.setAttributes(attributes);
			  records.add(r);
           }

        } 
        catch (IOException e) { 
           System.out.println("Uh oh, got an IOException error: " + e.getMessage()); 
        } 
        catch (Exception e) {
            System.out.println("Uh oh, got an Exception error: " + e.getMessage()); 
        }
        finally { 
           if (reader != null) {
              try {
                 reader.close();
              } catch (IOException ioe) {
                 System.out.println("IOException error trying to close the file: " + ioe.getMessage()); 
              }
           }
        }
		return records;
	}
}

**EDIT:**

playtennis.data is only a simple text file that describes the learned attribute “play tennis” for a particular learning episode. I modeled my playtennis.data file off of Tom Mitchell’s play tennis example in his book “Machine Learning.” Essentially, it contains attributes describing each learning episode: outlook (sunny, overcast, rain), temperature (hot, mild, cool), humidity (high, normal), wind (strong, weak) and play tennis (yes, no). Based on this information, one can trace the decision tree to derive the learned attribute. A sample decision tree is below. All you have to do is create a comma-delimited text file that describes each particular situation (make sure you match the columns in the text file to the parsing that occurs in the Java).
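For reference, here are a few sample lines in the comma-delimited format that the FileReader class above expects (day, outlook, temperature, humidity, wind, play tennis). The values are taken from the first rows of Mitchell's PlayTennis table; the leading day label is read and then ignored by the parser.

D1,Sunny,Hot,High,Weak,No
D2,Sunny,Hot,High,Strong,No
D3,Overcast,Hot,High,Weak,Yes
D4,Rain,Mild,High,Weak,Yes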

**EDIT 2:**

After so many requests, I have worked up a small playtennis.data file based on Tom Mitchell’s “Machine Learning” book. This follows his example data exactly and can be found here (rename the file to “playtennis.data” before running, as WordPress wouldn’t let me upload a .data file due to “security restrictions”). A small note of caution: the above code was put together very quickly for the purposes of my own learning. I am not claiming that it is by any means complete or fault tolerant; however, I do believe that the entropy and gain calculations are sound and correct.

Abstract—Decision trees are one of the most widely used methods of inductive inference. They can be used to learn discrete or continuous valued hypotheses and create compact rules for evaluation of a set of data. An advantage of decision and regression trees is that they are robust to noisy data, which makes them well suited to predicting whether a heart attack patient will be alive one year after the incident even when some of the data is missing. This paper is a survey of some of the methods of constructing and evaluating decision trees.

1.  INTRODUCTION

A myocardial infarction, commonly referred to as a heart attack, is a serious medical condition in which the blood vessels that supply blood to the heart are blocked, preventing enough oxygen from reaching the heart. The heart muscle dies from the lack of oxygen, impairing heart function or killing the patient. Heart attacks are positively correlated with diabetes, smoking, high blood pressure, obesity, age and alcohol consumption. While prognosis varies greatly based on underlying personal health, the extent of damage and the treatment given, for the period of 2005-2008 in the United States the median mortality rate at 30 days was 16.6%, with a range from 10.9% to 24.9% depending on the admitting hospital. [9] For patients who survive the heart attack itself, the survival rate at the one-year mark is approximately 96%.

Physicians would like to be able to tell their patients their possible rate of survival and predict whether or not a certain treatment could help the patient. In order to make that prediction, we can use decision trees to model whether or not the patient has a good chance of survival. Using regression trees, we can map the input space into a real-valued domain using attributes that cardiologists could examine to determine the patient’s chances of survival after a given timeframe.

In building a decision tree, we use the most influential attribute values to represent the internal nodes of the tree at each level. Each internal node tests an attribute, each edge corresponds to an attribute value and each leaf node leads to a classification – in our case, deceased or alive. We are able to traverse the tree from the root to classify an unseen example. The tree can also be expressed in the form of simple rules, which would be helpful when explaining the prognosis to the patient.

1.1  OUTLINE OF RESEARCH

In this research survey, we implemented a decision tree using an adapted ID3 algorithm. We evaluated different splitting criteria as well as different approaches to handling missing attributes in the dataset. In addition, we consider different approaches to handling continuous valued attributes and methods to reduce decision tree over-fitting. Lastly, results are discussed in section 4 after running the decision tree multiple times with the echocardiogram dataset.

1.2  DATA

The data that we used is from the University of California at Irvine machine learning repository.  The dataset that we chose was the 1989 echocardiogram dataset.  This dataset contained 132 rows, with 12 attributes, 8 of which were actually usable for decision tree construction (the remaining four were references for the original contributor of this dataset).  This dataset had missing values and all of the attributes were continuous valued.

The data described different measurements of patients who had suffered from acute myocardial infarction at some point in the past.  The attributes included “AGE-AT-HEART-ATTACK” (the patients’ age in years when the heart attack happened), “PERICARDIAL-EFFUSION” (binary choice relative to fluid around the heart), “FRACTIONAL-SHORTENING” (a measure of contractility around the heart where lower numbers are abnormal), “EPSS” (E-point septal separation which is another measure of contractility where larger numbers are abnormal), “LVDD” (left ventricular end-diastolic dimension, where larger numbers are abnormal), “WALL-MOTION-INDEX” (a measure of how many segments of the left ventricle are seen moving divided by the number of segments seen) and “ALIVE-AT-ONE” (a binary choice where 0 represents deceased or unknown and 1 is alive at one year).

It is important to note that not all rows could be used for learning. Two attributes, “SURVIVAL” and “STILL-ALIVE”, must be analyzed together. SURVIVAL describes the number of months the patient had survived since the heart attack. Some of the rows describe patients who survived less than a year and are still alive according to STILL-ALIVE (a binary attribute, 0 representing deceased and 1 representing alive). These patients cannot be used for prediction.

It has previously been noted that “the problem addressed by past researchers was to predict from other variables whether or not the patient will survive at least one year.  The most difficult part of this problem is correctly predicting that the patient will not survive.  This difficulty seems to stem from the size of the dataset.” [1]  In implementing the decision tree logic, we have found that this is the case as well.

2.  APPLICATION

We chose to write the implementation of the decision tree in Java because of the ease of use of the language. Java also provided good facilities for reading and parsing data from files, and it allowed us to model the problem more naturally through object-oriented concepts such as polymorphism and inheritance.

2.1  DECISION TREE CONSTRUCTION

The decision tree is constructed using the ID3 algorithm originally described by Quinlan [4] and shown in Mitchell [2], with adaptations by the author to handle numeric, continuous-valued attributes, missing attributes and pruning. ID3 is a simple decision tree algorithm, shown below:

ID3 (Examples, Target_Attribute, Attributes)
  • Create a root node for the tree
  • If all Examples are positive, return the single-node tree root, with label = +.
  • If all Examples are negative, return the single-node tree root, with label = -.
  • If the set of predicting Attributes is empty, then return the single-node tree root, with label = the most common
    value of the target attribute in the Examples.
  • Otherwise Begin
    • A = The Attribute that best classifies Examples.
    • Decision tree attribute for root = A.
    • For each possible value, vi, of A
      • Add a new tree branch below root, corresponding to the test A = vi.
      • Let Examples(vi) be the subset of examples that have the value vi for A
      • If Examples(vi) is empty
        • Below this new branch, add a leaf node with the label = most common target value in the examples.
      • Else
        • Below this new branch add the subtree ID3 (Examples(vi), Target_Attribute, Attributes - {A})
  • End
  • Return root

The ID3 algorithm “uses information gain as splitting criteria and growing stops when all instances belong to a single value of target feature or when best information gain is not greater than zero.” [6]  For further information on ID3, see [2], [4], [6], [7].

2.2  SPLITTING CRITERION

A decision tree is formed by having some concrete concept of splitting data into subsets which form children nodes of the parent.  In our implementation of the decision tree, we decided to use a univariate impurity-based splitting criterion called entropy.

Entropy is a measure of the impurity or chaos in the data. If all elements in a set of data belong to the same class, the entropy is zero, and if the elements are evenly mixed between two classes, the entropy is one. For a two-class problem, entropy may be measured with the following equation:

Entropy(T) = -p+ log2(p+) - p- log2(p-)

where p+ is the proportion of positive training examples in T and p- is the proportion of negative training examples in T. For further discussion on entropy, see Mitchell [2] or Alpaydin [3].

The ID3 algorithm uses this measure of entropy over each attribute's sets of values to determine the best gain, that is, the expected reduction in entropy due to splitting on A, or the difference between the entropy before splitting and the weighted entropy after splitting on A:

Gain(T, A) = Entropy(T) - Σv (|Tv| / |T|) * Entropy(Tv)

where the sum runs over each value v of attribute A and Tv is the subset of T for which A has value v. At each level of the tree, the “best” attribute can be found by choosing the one that creates the maximum reduction in entropy, called the information gain. These two calculations represent a preference for the smallest tree possible, mainly because a short hypothesis that accurately describes the data is unlikely to be a coincidence. For further discussion on entropy and gain, see Mitchell [2], Alpaydin [3], Steinberg [5] and Lior et al. [6].

2.3  MISSING VALUES

In the dataset, there are many missing values. The missing values show up mainly for the attributes ALIVE-AT-ONE (43.2%), EPSS (10.6%) and LVDD (7.5%). In addition, 66.3% of the rows had at least one missing attribute value.

Clearly, this dataset is not ideal for predicting attributes. Fortunately, decision trees are robust to noisy data. The strategy of replacing missing attribute values with the most common value among the training examples was suggested in [2]. We decided to implement this idea and initialize missing values with a surrogate value: the average of all values of the attribute for continuous-valued data, or the most common value for discrete-valued data. The replacement was done only after all the data had been read in from the dataset, instead of using a moving average while the data was still being read.

There are two main reasons we chose this method. First, it is an extremely simple method that does not require much calculation; since the tree never backs up once it has made a splitting decision, we never have to worry about the substituted values changing and forcing us to regenerate the tree. Second, computing the surrogates over the full dataset gives a finer-grained estimate of the true average value of the particular attribute.
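As a minimal sketch of this substitution step (not the exact code used for the experiments), the method below fills missing continuous values with the column mean, computed only after all rows have been read in. It assumes, purely for illustration, that the data is held as numeric arrays and that missing entries are encoded as Double.NaN.

import java.util.ArrayList;

public class SurrogateImputation {
    // Replace NaN entries in each column with the mean of that column's observed
    // (non-missing) values. Intended to run after the whole dataset has been read.
    public static void fillWithColumnMeans(ArrayList<double[]> rows) {
        if (rows.isEmpty()) {
            return;
        }
        int columns = rows.get(0).length;
        for (int c = 0; c < columns; c++) {
            double sum = 0;
            int observed = 0;
            for (double[] row : rows) {
                if (!Double.isNaN(row[c])) {
                    sum += row[c];
                    observed++;
                }
            }
            if (observed == 0) {
                continue; // no observed values in this column; leave it untouched
            }
            double mean = sum / observed;
            for (double[] row : rows) {
                if (Double.isNaN(row[c])) {
                    row[c] = mean; // surrogate value for the missing entry
                }
            }
        }
    }

    public static void main(String[] args) {
        ArrayList<double[]> rows = new ArrayList<>();
        rows.add(new double[]{62, Double.NaN});
        rows.add(new double[]{70, 4.6});
        fillWithColumnMeans(rows);
        System.out.println(rows.get(0)[1]); // prints 4.6, the column mean
    }
}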

2.4  HANDLING CONTINUOUS AND DISCRETE ATTRIBUTES

In ID3, handling discrete attributes is quite simple. For each value of the chosen attribute, a child node is created, and that branch is taken when the tree is traversed to evaluate a test instance. To handle continuous data, some partitioning of the attribute values must take place to discretize the possible range. For example, consider the attributes shown in the following table:

PlayTennis Temperature
No 40
No 48
Yes 60
Yes 72
Yes 80
No 90

We may consider creating a discrete, boolean-valued attribute such as Temperature > 54, where 54 = (48 + 60) / 2 is the midpoint between the last No and the first Yes; this attribute produces a true or false value.

For our implementation, we decided to take the average of all of the values and make that our split value, where all instances with values less than the average follow the left branch and all instances at or above the average follow the right branch. We chose this approach because it was simple to implement compared with other methods such as binary search, statistical search and weighted searches. [2] [6] This causes our tree to look closer in form to a CART tree than an ID3 tree, as CART can only produce binary trees; CART uses this approach of a single partitioning value when using univariate splitting criteria. [5] The main reason we are able to use this approach is that, for this data, larger values are increasingly “abnormal.”
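A minimal sketch of this mean-based split (again, not the exact experiment code): the threshold is the mean of the attribute's observed values, records below the threshold follow the left branch, and records at or above it follow the right branch. The representation of a record as a double[] is an assumption for illustration.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class MeanSplit {
    // Partition records on a single continuous attribute at its mean value:
    // values below the mean go to the left child, the rest go to the right child.
    public static List<List<double[]>> splitAtMean(List<double[]> records, int attributeIndex) {
        double sum = 0;
        for (double[] record : records) {
            sum += record[attributeIndex];
        }
        double threshold = records.isEmpty() ? 0 : sum / records.size();

        List<double[]> left = new ArrayList<>();
        List<double[]> right = new ArrayList<>();
        for (double[] record : records) {
            if (record[attributeIndex] < threshold) {
                left.add(record);
            } else {
                right.add(record);
            }
        }
        return Arrays.asList(left, right);
    }
}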

2.5  TESTING

Testing of the decision tree logic was carried out using the PlayTennis example shown in [2].  This example used only discrete values.  After verifying the decision tree could accurately classify these examples, the program was adapted to use continuous valued attributes.

Testing was performed with a 3-fold cross-validation method, where the data was divided into training and test sets such that |R| = k × |S|, where R is the training set, S is the test set and k is their relative size.

For each test run, we chose 88 training examples and 44 test examples at random. We attempted to keep the labeled classes in the training set as balanced as possible by setting a threshold n, which prevented the difference between the class counts from exceeding n. If the threshold n was ever reached, we discarded random selections until we were under the threshold again. Discussion of the results is in section 4.
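The sampling described above might look roughly like the following sketch, where each record carries a boolean class label and n bounds how far apart the class counts in the training set may drift. The Labeled type is a placeholder, and routing rejected draws into the test set (rather than redrawing them) is a simplification of the procedure described above.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class BalancedSplit {
    static class Labeled {
        final double[] features;
        final boolean alive; // the class label, e.g. ALIVE-AT-ONE
        Labeled(double[] features, boolean alive) { this.features = features; this.alive = alive; }
    }

    // Draw trainSize examples at random, skipping any draw that would push the class
    // counts more than n apart; everything not selected becomes the test set.
    public static List<List<Labeled>> split(List<Labeled> data, int trainSize, int n, Random random) {
        List<Labeled> pool = new ArrayList<>(data);
        Collections.shuffle(pool, random);

        List<Labeled> train = new ArrayList<>();
        List<Labeled> test = new ArrayList<>();
        int aliveCount = 0;
        int deceasedCount = 0;
        for (Labeled example : pool) {
            int newAlive = aliveCount + (example.alive ? 1 : 0);
            int newDeceased = deceasedCount + (example.alive ? 0 : 1);
            boolean staysBalanced = Math.abs(newAlive - newDeceased) <= n;
            if (train.size() < trainSize && staysBalanced) {
                train.add(example);
                aliveCount = newAlive;
                deceasedCount = newDeceased;
            } else {
                test.add(example);
            }
        }
        return Arrays.asList(train, test);
    }
}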

2.6  DECISION TREE PRUNING

In the process of building the decision tree, the accuracy of the tree is determined by the training examples.  “However, measured over an independent set of training examples, the accuracy first increases and then decreases.” [2]  This is an instance of over-fitting the data.  To prevent this over-fitting condition, the tree is evaluated and then cut back to the “essential” nodes such that the accuracy does not decrease with real-world training examples.

In our implementation of pruning, we had no stopping criteria to prevent over-fitting. We let the over-fitting occur and then used a post-pruning method called Reduced-Error Pruning, as described by Quinlan. [7] In this algorithm, the tree nodes are traversed from bottom to top, checking whether replacing each node with the most frequent class of its examples improves the accuracy of the decision tree. Pruning continues until further pruning would decrease accuracy, and the algorithm ends with the smallest tree that maintains that accuracy.

Further discussion of pruning may be found in [2], [3], [5] and [6].
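A compact sketch of the reduced-error pruning loop described above; the PrunableNode type and the accuracy function (measured on a held-out validation set) are placeholders for illustration rather than the classes used elsewhere in this post.

import java.util.ArrayList;
import java.util.List;

public class ReducedErrorPruning {
    // A stripped-down tree node, used only to illustrate the pruning procedure.
    static class PrunableNode {
        List<PrunableNode> children = new ArrayList<>();
        Integer leafLabel;     // set once the node has been turned into a leaf
        int majorityClass;     // most frequent class among training examples at this node
    }

    interface AccuracyFunction {
        double measure(PrunableNode root); // accuracy on a held-out validation set
    }

    // Bottom-up: try turning each internal node into a leaf labeled with its majority
    // class, and keep the change only if validation accuracy does not decrease.
    public static void prune(PrunableNode root, AccuracyFunction accuracy) {
        boolean prunedSomething = true;
        while (prunedSomething) {
            prunedSomething = false;
            for (PrunableNode node : internalNodesBottomUp(root)) {
                double before = accuracy.measure(root);
                List<PrunableNode> savedChildren = node.children;
                node.children = new ArrayList<>();
                node.leafLabel = node.majorityClass;
                if (accuracy.measure(root) >= before) {
                    prunedSomething = true;        // keep the pruned version
                } else {
                    node.children = savedChildren; // revert: accuracy dropped
                    node.leafLabel = null;
                }
            }
        }
    }

    // Collect internal nodes so that children are visited before their parents.
    static List<PrunableNode> internalNodesBottomUp(PrunableNode node) {
        List<PrunableNode> result = new ArrayList<>();
        for (PrunableNode child : node.children) {
            result.addAll(internalNodesBottomUp(child));
        }
        if (!node.children.isEmpty()) {
            result.add(node);
        }
        return result;
    }
}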

3.  POTENTIAL PROBLEMS

There are some problems that we encountered during implementation. The first is that ID3 does not natively support missing attributes, numeric attributes or pruning, so the algorithm had to be adapted to support these features. In adapting the algorithm, less than efficient methods were chosen. In our implementation, we split at the mean of the attribute values, which causes the surrogate values and the split points to be the same value. In addition, this method does not handle outliers in the data well and forces everything toward the center of the sample set. Implementing a better search algorithm or multivariate splitting might yield improved accuracy. Another alternative would be to use the C4.5 algorithm [8], an evolution of ID3 that adds full support for these requirements.

Another problem that we experienced is having enough training data without missing attributes to build an effective decision tree.  With the large missing attribute rate, it would be hard to get a good handle on any sort of trends in the dataset.  A possible solution may be to use a weighted method to assign the most probable value at the point where we encounter the missing value.

4.  EVALUATION

For the evaluation of the results, we have chosen not to assign a value judgement to the outcomes, so a result of ALIVE-AT-ONE equal to 1 is treated as positive and a result of ALIVE-AT-ONE equal to 0 is treated as positive as well, rather than negative. One may choose to assign intrinsic values to the instances, but the results can easily be evaluated without them; we have chosen to focus only on the results rather than make a determination of the “goodness” of a particular outcome. Since we have chosen this method of evaluation, there are no false positives or true negatives, and therefore we will not be reporting any precision calculations.

4.1  MACRO EVALUATION

In 10,000 independent runs using random subsets of data for each test, the overall recall for the decision tree was 66.82%. The average F-measure for these runs was 0.38. See Figure 4.1. In the majority of the runs, the most influential attributes were SURVIVAL, EPSS and LVDD; each of these attributes appeared in 100% of the decision trees created.

4.2  MICRO EVALUATION

When the results are broken down into smaller sets of 1,000 runs each, the recall and F-measure vary greatly. Across the 10 sets, recall ranged from 27.27% to 93.18%, with a median of 84.09%.

Run Recall F-measure
1 90.90% 0.47
2 84.09% 0.46
3 93.18% 0.48
4 88.63% 0.47
5 31.81% 0.24
6 84.09% 0.46
7 45.45% 0.31
8 27.27% 0.21
9 38.63% 0.28
10 88.63% 0.47
Overall 66.82% 0.38
Figure 4.1

There is a large gap between our maximum and our minimum recall. This can be attributed to several issues, including poor data in the dataset and less than optimal splitting choices. The accuracy of the decision tree depends partly on the data used for training. The data that we used was missing many attribute values, and almost half (43.2%) of the missing values were for the target attribute. In the absence of a value, the most common value was substituted, which, for this dataset, heavily skews the results toward predicting death. It should be noted that overly pessimistic results and overly optimistic results each present their own dangers to the patient.

Another improvement to the results may come in the form of changing the policy on attribute splitting and missing value handling.  If a more accurate method of splitting were implemented (multivariate criterion, using a better search method, etc.), we would expect to see a more consistent result set.

Based on these results, we would feel fairly confident that we could make a useful prediction, but only with an accompanying confidence rating, given the high variability across runs.

While these results are not highly accurate, they could provide good insight into which attributes are the most important for classification. In other words, the decision tree identifies the attributes that should be fed to other machine learning methods, such as clustering, artificial neural networks or support vector machines. We can be confident that these attributes are the most important because they were chosen through the entropy and gain calculations used to construct the decision tree.

5.  CONCLUSION

In this research project, different methods of constructing decision and regression trees were explored. Additionally, different methods of node splitting, missing-attribute substitution and tree pruning were investigated. While the results of this project show only about 66% recall, decision trees are still a valid machine learning technique. With augmented decision logic and a better dataset, decision trees may be able to predict discrete or continuous values at a much better rate.

References

  1. Salzberg, Stephen. University of California, Irvine Machine Learning Data Repository. 1989. [Online]. http://archive.ics.uci.edu/ml/datasets/Echocardiogram
  2. Mitchell, Tom M. Machine Learning. WCB McGraw-Hill, Boston, MA. 1997.
  3. Alpaydin, Ethem. Introduction to Machine Learning, Second Edition. The MIT Press, Cambridge, MA. 2010.
  4. Quinlan, J. R. Induction of Decision Trees. Machine Learning, Vol. 1, No. 1, pp. 81-106, March 1986.
  5. Steinberg, Dan. CART: Classification and Regression Trees. Taylor and Francis Group. pp. 179-201, 2009.
  6. Rokach, Lior and Maimon, Oded. Top-Down Induction of Decision Tree Classifiers – A Survey. IEEE Transactions on Systems, Man and Cybernetics – Part C: Applications and Reviews. Vol. 35, No. 4, pp. 476-487, November 2005.
  7. Quinlan, J. R. Simplifying Decision Trees. International Journal of Man-Machine Studies. Vol. 27, pp. 221-234, 1987.
  8. Quinlan, J. R. C4.5: Programs for Machine Learning. San Francisco, CA: Morgan Kaufmann, 1993.
  9. Krumholz, H. et al. Patterns of Hospital Performance in Acute Myocardial Infarction and Heart Failure – 30-Day Mortality and Readmission. Circulation: Cardiovascular Quality and Outcomes. 2009. [Online]. http://circoutcomes.ahajournals.org/cgi/content/abstract/2/5/407
All code owned and written by David Stites and published on this blog is licensed under MIT/BSD.