In a previous post, I explored how one might apply classification to solve a complex problem. This post will explore the code necessary to implement that nearest neighbor classification algorithm. If you would like a full copy of the source code, it is available here in zip format.

Knn.java – This is the main driver of the code. To do the classification, we are essentially interested in finding the distance between the particular instance we are trying to classify and the other instances. We then determine the classification of our instance from a “majority vote” of the k closest instances. Each feature of an instance is a separate class that simply stores a continuous or discrete value, depending on whether or not you are using regression to classify your neighbors. The additional feature classes and file reader are left to the reader as an exercise. Note that it would be fairly easy to weight features in this model if you want to give one feature more clout than another in determining the neighbors.
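
Since all of the features below reduce to enum ordinals, weighting is just a matter of scaling each feature’s contribution to the squared distance. A minimal sketch (as it might appear inside Knn.java) is shown here; the FEATURE_WEIGHTS array, its values and the helper name are illustrative assumptions rather than part of the actual implementation.

	// Hypothetical sketch: per-feature weights for the distance calculation.
	// Indices follow the attribute order used in Knn.java (category, distance,
	// expiration, handset, offer, action); the values are arbitrary examples.
	public static final double[] FEATURE_WEIGHTS = { 1.0, 2.0, 1.0, 0.5, 1.0, 1.0 };

	// Accumulate a single feature's weighted, squared ordinal difference.
	public static double weightedSquaredDifference(int featureIndex, int ordinalA, int ordinalB) {
		double diff = ordinalA - ordinalB;
		return FEATURE_WEIGHTS[featureIndex] * diff * diff;
	}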

A nice visualization of the algorithm is provided by Kardi Teknomo. The idea is that we take the k closest instances and use a “majority vote” among them to classify the new instance. While this is an extremely simple method, it works well for noisy data and large data sets. The two drawbacks are the running time (O(n^2) to classify every instance against every other) and the fact that we have to determine k ahead of time. Despite this, as shown in the previous paper, the accuracy can be quite high.

import java.util.*;

public class Knn {
	public static final String PATH_TO_DATA_FILE = "coupious.data";
	public static final int NUM_ATTRS = 9;
	public static final int K = 262;

	public static final int CATEGORY_INDEX = 0;
	public static final int DISTANCE_INDEX = 1;
	public static final int EXPIRATION_INDEX = 2;
	public static final int HANDSET_INDEX = 3;
	public static final int OFFER_INDEX = 4;
	public static final int WSACTION_INDEX = 5;
	public static final int NUM_RUNS = 1000;
	public static double averageDistance = 0;

	public static void main(String[] args) {
		ArrayList<Instance> instances = null;
		ArrayList<Neighbor> distances = null;
		ArrayList<Neighbor> neighbors = null;
		WSAction.Action classification = null;
		Instance classificationInstance = null;
		FileReader reader = null;
		int numRuns = 0, truePositives = 0, falsePositives = 0, falseNegatives = 0, trueNegatives = 0;
		double precision = 0, recall = 0, fMeasure = 0;

		// Start with a single false positive so the precision denominator below is never zero.
		falsePositives = 1;

		reader = new FileReader(PATH_TO_DATA_FILE);
		instances = reader.buildInstances();

		do {
			classificationInstance = extractIndividualInstance(instances);

			distances = calculateDistances(instances, classificationInstance);
			neighbors = getNearestNeighbors(distances);
			classification = determineMajority(neighbors);

			System.out.println("Gathering " + K + " nearest neighbors to:");
			printClassificationInstance(classificationInstance);

			printNeighbors(neighbors);
			System.out.println("\nExpected situation result for instance: " + classification.toString());

			if(classification.toString().equals(((WSAction)classificationInstance.getAttributes().get(WSACTION_INDEX)).getAction().toString())) {
				truePositives++;
			}
			else {
				falseNegatives++;
			}
			numRuns++;

			instances.add(classificationInstance);
		} while(numRuns < NUM_RUNS);

		precision = (double)truePositives / (double)(truePositives + falsePositives);
		recall = (double)truePositives / (double)(truePositives + falseNegatives);
		// Note: the standard F1 measure is 2 * (precision * recall) / (precision + recall);
		// the factor of two is omitted here, which matches the F-measure values reported below.
		fMeasure = (precision * recall) / (precision + recall);

		System.out.println("Precision: " + precision);
		System.out.println("Recall: " + recall);
		System.out.println("F-Measure: " + fMeasure);
		System.out.println("Average distance: " + (double)(averageDistance / (double)(NUM_RUNS * K)));
	}

	public static Instance extractIndividualInstance(ArrayList<Instance> instances) {
		Random generator = new Random(new Date().getTime());
		// pick a random instance to classify and remove it from the training set
		int random = generator.nextInt(instances.size());

		Instance singleInstance = instances.get(random);
		instances.remove(random);

		return singleInstance;
	}

	public static void printClassificationInstance(Instance classificationInstance) {
		for(Feature f : classificationInstance.getAttributes()) {
			System.out.print(f.getName() + ": ");
			if(f instanceof Category) {
				System.out.println(((Category)f).getCategory().toString());
			}
			else if(f instanceof Distance) {
				System.out.println(((Distance)f).getDistance().toString());
			}
			else if (f instanceof Expiration) {
				System.out.println(((Expiration)f).getExpiry().toString());
			}
			else if (f instanceof Handset) {
				System.out.print(((Handset)f).getOs().toString() + ", ");
				System.out.println(((Handset)f).getDevice().toString());
			}
			else if (f instanceof Offer) {
				System.out.println(((Offer)f).getOfferType().toString());
			}
			else if (f instanceof WSAction) {
				System.out.println(((WSAction)f).getAction().toString());
			}
		}
	}

	public static void printNeighbors(ArrayList<Neighbor> neighbors) {
		int i = 0;
		for(Neighbor neighbor : neighbors) {
			Instance instance = neighbor.getInstance();

			System.out.println("\nNeighbor " + (i + 1) + ", distance: " + neighbor.getDistance());
			i++;
			for(Feature f : instance.getAttributes()) {
				System.out.print(f.getName() + ": ");
				if(f instanceof Category) {
					System.out.println(((Category)f).getCategory().toString());
				}
				else if(f instanceof Distance) {
					System.out.println(((Distance)f).getDistance().toString());
				}
				else if (f instanceof Expiration) {
					System.out.println(((Expiration)f).getExpiry().toString());
				}
				else if (f instanceof Handset) {
					System.out.print(((Handset)f).getOs().toString() + ", ");
					System.out.println(((Handset)f).getDevice().toString());
				}
				else if (f instanceof Offer) {
					System.out.println(((Offer)f).getOfferType().toString());
				}
				else if (f instanceof WSAction) {
					System.out.println(((WSAction)f).getAction().toString());
				}
			}
		}
	}

	public static WSAction.Action determineMajority(ArrayList<Neighbor> neighbors) {
		int yea = 0, ney = 0;

		for(int i = 0; i < neighbors.size(); i++) {
			Neighbor neighbor = neighbors.get(i);
			Instance instance = neighbor.getInstance();
			if(instance.isRedeemed()) {
				yea++;
			}
			else {
				ney++;
			}
		}

		if(yea > ney) {
			return WSAction.Action.Redeem;
		}
		else {
			return WSAction.Action.Hit;
		}
	}

	public static ArrayList<Neighbor> getNearestNeighbors(ArrayList<Neighbor> distances) {
		ArrayList<Neighbor> neighbors = new ArrayList<Neighbor>();

		for(int i = 0; i < K; i++) {
			averageDistance += distances.get(i).getDistance();
			neighbors.add(distances.get(i));
		}

		return neighbors;
	}

	public static ArrayList<Neighbor> calculateDistances(ArrayList<Instance> instances, Instance singleInstance) {
		ArrayList<Neighbor> distances = new ArrayList<Neighbor>();
		Neighbor neighbor = null;
		int distance = 0;

		for(int i = 0; i < instances.size(); i++) {
			Instance instance = instances.get(i);
			distance = 0;
			neighbor = new Neighbor();

			// for each feature, go through and calculate the "distance"
			for(Feature f : instance.getAttributes()) {
				if(f instanceof Category) {
					Category.Categories cat = ((Category) f).getCategory();
					Category singleInstanceCat = (Category)singleInstance.getAttributes().get(CATEGORY_INDEX);
					distance += Math.pow((cat.ordinal() - singleInstanceCat.getCategory().ordinal()), 2);
				}
				else if(f instanceof Distance) {
					Distance.DistanceRange dist = ((Distance) f).getDistance();
					Distance singleInstanceDist = (Distance)singleInstance.getAttributes().get(DISTANCE_INDEX);
					distance += Math.pow((dist.ordinal() - singleInstanceDist.getDistance().ordinal()), 2);
				}
				else if (f instanceof Expiration) {
					Expiration.Expiry exp = ((Expiration) f).getExpiry();
					Expiration singleInstanceExp = (Expiration)singleInstance.getAttributes().get(EXPIRATION_INDEX);
					distance += Math.pow((exp.ordinal() - singleInstanceExp.getExpiry().ordinal()), 2);
				}
				else if (f instanceof Handset) {
					// there are two calculations needed here, one for device, one for OS
					Handset.Device device = ((Handset) f).getDevice();
					Handset singleInstanceDevice = (Handset)singleInstance.getAttributes().get(HANDSET_INDEX);
					distance += Math.pow((device.ordinal() - singleInstanceDevice.getDevice().ordinal()), 2);

					Handset.OS os = ((Handset) f).getOs();
					Handset singleInstanceOs = (Handset)singleInstance.getAttributes().get(HANDSET_INDEX);
					distance += Math.pow((os.ordinal() - singleInstanceOs.getOs().ordinal()), 2);
				}
				else if (f instanceof Offer) {
					Offer.OfferType offer = ((Offer) f).getOfferType();
					Offer singleInstanceOffer = (Offer)singleInstance.getAttributes().get(OFFER_INDEX);
					distance += Math.pow((offer.ordinal() - singleInstanceOffer.getOfferType().ordinal()), 2);
				}
				else if (f instanceof WSAction) {
					WSAction.Action action = ((WSAction) f).getAction();
					WSAction singleInstanceAction = (WSAction)singleInstance.getAttributes().get(WSACTION_INDEX);
					distance += Math.pow((action.ordinal() - singleInstanceAction.getAction().ordinal()), 2);
				}
				else {
					System.out.println("Unknown category in distance calculation.  Exiting for debug: " + f);
					System.exit(1);
				}
			}
			neighbor.setDistance(distance);
			neighbor.setInstance(instance);

			distances.add(neighbor);
		}

		for (int i = 0; i < distances.size(); i++) {
			for (int j = 0; j < distances.size() - i - 1; j++) {
				if(distances.get(j).getDistance() > distances.get(j + 1).getDistance()) {
					Neighbor tempNeighbor = distances.get(j);
					distances.set(j, distances.get(j + 1));
					distances.set(j + 1, tempNeighbor);
				}
			}
		}

		return distances;
	}

}

Abstract—Recommendation systems take artifacts about items and provide suggestions to the user about what other products they might like. There are many different types of recommender algorithms, including nearest-neighbor, linear classifiers and SVMs. However, most recommender systems are collaborative systems that rely on users to rate the products that they bought. This paper presents an analysis of recommender systems using a mobile device and backend data points for a coupon delivery system.

Index Terms—machine learning, recommender systems, supervised learning, nearest neighbor, classification

1. INTRODUCTION

Recommendations are a part of everyday life. An individual constantly receives recommendations from friends, family, salespeople and Internet resources, such as online reviews. We want to make the most informed choices possible about decisions in our daily life. For example, when buying a flat screen TV, we want to have the best resolution, size and refresh rate for the money. There are many factors that influence our decisions – budget, time, product features and, most importantly, previous experience. We can analyze all the factors that led up to the decision and then make a conclusion or decision based on those results. A recommender system uses historical data to recommend new items that may be of interest to a particular user.

Coupious is a mobile phone application that gives its users coupons and deals for businesses around their geographic location. The application runs on the user’s cell phone and automatically retrieves coupons based upon GPS coordinates. Redemption of these coupons is as simple as tapping a “Use Now” button. Coupious is now available on the iPhone, iPod Touch and Android platforms. Coupious is currently available in Minneapolis, MN, in West Lafayette, IN at Purdue University and in Berkeley, CA. Clip Mobile is the Canada-based version of Coupious that is currently available in Toronto.

Using push technology, it is possible to integrate a mobile recommendation system into Coupious. The benefit of this would be threefold: 1) offer customers the best possible coupons based on their personal spending habits – if a user feels they received a “good deal” with Coupious, they would be more likely to use it again and integrate it into their bargain shopping strategy, 2) offer businesses the ability to capitalize on their market demographics – the ability to reach individual customers to provide goods or services would drive more revenue and add value to the product and 3) adding coupons to the service would immediately make the system more useful to a user, as it would present desirable, geographically proximate offers without extraneous offers.

1.1 OUTLINE OF RESEARCH

In this research project, we evaluated the batch k-nearest neighbors algorithm in Java. We chose to write the implementation in Java because of the ease of use of the language. Java also presented superior capabilities for working with and parsing data from files. Using Java allowed the author to more efficiently model the problem through the use of OO concepts, such as polymorphism and inheritance. The kNN algorithm was originally suggested by Donald Knuth in [9], where it was referenced as the post-office problem, in which one would want to assign residences to the nearest post office.

The goal of this research was to find a solution that we felt would be successful in achieving the highest rate of coupon redemption. Presumably, achieving the highest rate of redemption requires learning what the user likes with the smallest error percentage. Additionally, we wanted to know whether increasing the number of attributes used for computation would affect the quality of the result set.

1.2 DATA

The data that we used is from the Coupious production database. Currently, there are approximately 70,000 rows of data (“nearby” queries, impressions and details impressions) and approximately 3,400 of those represent actual coupon redemptions. The data is an aggregate from March 25th, 2009 until February 11, 2010. The results are from a mixture of different cities where Coupious is currently in production.

From a logical standpoint, Coupious is simply a conduit through which a user may earn his discount and has no vested interest in whether or not a user redeems a coupon in a particular session. However, from a business standpoint, Coupious markets the product based on being able to entice sales through coupon redemption. Therefore, for classification purposes, sessions that ended in one or more redemptions will be labeled +1 and sessions that ended without redemption will be labeled -1.

1.3 PREVIOUS WORK

While there hasn’t been any previous work in the space of mobile recommendation systems, there has been a large amount of work on recommender systems and classification in general. In [1], [2] and [3], direct marketing is studied using collaborative filtering. In [3], the authors use SVMs and latent class models to predict whether or not a customer would be likely to buy a particular product. The most direct comparison to this work is in [1] and [8], where SVMs and linear classifiers are used in content-driven recommender systems.

2. APPLICATION

Broadly, recommender systems can be grouped into two categories: content-based and collaborative. In content-based systems, the recommendations are based solely on the attributes of the actual product. For example, in Coupious, the attributes of a particular coupon redemption include the distance from the merchant when the coupon was used, the date and time of the redemption, the category of the coupon, the expiry and the offer text. These attributes are physical characteristics of the coupon. Recommendations can be made to users without relying on any experience-based information.

In collaborative systems, recommendations are based not only on product attributes but also on the overlap of preferences of “like-minded” people, first introduced by Goldberg et al. [5]. For example, a user is asked to rate how well they liked a product or give an opinion of a movie. This provides the algorithm a baseline of preference for a particular user, which allows the algorithm to associate product attributes with a positive or negative response. Since Coupious does not ask for user ratings, this paper will focus exclusively on content-based applications.

Many content-based systems have similar or common attributes. As stated in [3], the “central problem of content-based recommendation is identifying a sufficiently large set of key attributes.” If the attribute set is too small, there may not be enough information for the program to build a profile of the user. Conversely, if there are too many attributes, the program won’t be able to identify the truly influential attributes, which leads to poor performance [6]. Also, while the label for a particular feature vector of Coupious data is +1 or -1, many of the features in the data are multi-valued attributes (such as distance, date-time stamps, etc.), which may be hard to represent in a binary manner if the algorithm requires it.

In feature selection, we are “interested in finding k of the d dimensions that give us the most information and accuracy and we discard the other (d – k) dimensions [4].” How can we find the attributes that will give us the most information and accuracy? For Coupious, the attribute set is quite limited. For this research, we explicitly decided which features would contribute to our recommendations. In all cases, all attributes were under consideration while the algorithm was running, and we never partitioned the attributes across different runs to create different recommendations.

2.1 THE kNN ALGORITHM

As shown in [7], “Once the clustering is complete, performance can be very good, since the size of the group that must be analyzed is much smaller.” The nearest neighbor algorithm is such a classification algorithm. K-nearest neighbors is one of the simplest machine learning algorithms: an instance is classified by a majority vote of the closest training examples in the hypothesis space. This algorithm is an example of instance-based learning, where previous data points are stored and interpolation is performed to find a recommendation. An important distinction to make is that kNN does not build a model from the training data, but rather performs its computation at query time. The algorithm works by finding the previously seen data points that are “closest” to the query data point and then using their outputs for prediction.
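
In the discrete-valued case, this majority vote can be written (following the notation commonly used in [6]) as:

$$\hat{f}(x_q) \leftarrow \underset{v \in V}{\arg\max} \sum_{i=1}^{k} \delta(v, f(x_i))$$

where x_1, …, x_k are the k training examples nearest to the query instance x_q, V is the set of possible classes and δ(a, b) = 1 if a = b and 0 otherwise.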

The algorithm is shown below

KNN (Examples, Target_Instance)

  • Determine the parameter K = number of nearest neighbors.
  • Calculate the distance between Target_Instance and all the Examples based on the Euclidean distance.
  • Sort the distances and determine the nearest neighbors based on the Kth minimum distance.
  • Gather the Category Y of the nearest neighbors.
  • Use simple majority of the category of nearest neighbors as the prediction value of the Target_Instance.
  • Return classification of Target_Instance.

This algorithm lends itself well to the Coupious application. When a user uses the Coupious application, they want it to launch quickly and present a list of coupons within 10-15 seconds. Since this algorithm is so simple, we are able to calculate coupons that the user might enjoy fairly quickly and deliver them to the handset. Also, because Coupious doesn’t know any personal details about the user, the ability to cluster users into groups without the need for any heavy additional implementation on the front or back ends of the application is advantageous.

There are several possible problems with this approach. While this is a simple algorithm, it doesn’t take high feature dimensionality into account. If there are many features, the algorithm will have to perform a lot of computations to create the clusters. Additionally, each attribute is given the same degree of influence on the recommendation. In Coupious, the date and time the coupon was redeemed has the same amount of bearing on the recommendation as does the current distance from the offer and previous redemption history. The features may not be scaled according to their importance, and the performance of the recommendation may be degraded by irrelevant features. Lastly, [6] describes the curse of dimensionality: if a product has 20 attributes, only 2 of which are actually useful, two instances that agree on the 2 relevant attributes may still be classified completely differently when all 20 attributes are considered.

2.2 ATTRIBUTE SELECTION

While implementing the k-nearest neighbors algorithm, we decided to evaluate instances based upon seven different attributes – coupon category, distance, expiration, offer, redemption date, handset and handset operating system – and upon the session result, a “hit” or a “redemption.” The offer and expiration attributes required extra processing before they could be used for clustering. For both of these attributes, a “bag of words” implementation (applying a Naive Bayes classifier to text to determine the classification) was used to determine what type of offer the coupon made.

For the OFFER attribute, we split the value space into 4 discrete values: PAYFORFREE (an instance where the customer had to pay some initial amount to receive a free item), PERCENTAGE (an instance where the customer received some percentage discount), DOLLAR (an instance where a customer received a dollar amount discount) and UNKNOWN (an instance where the classification was unknown). While PERCENTAGE and DOLLAR offers may compute to exactly the same discount, consumers tend to react differently when seeing percentage discounts versus dollar discounts (i.e. 50% off instead of $5 off, even though they equate to an identical discount if the price were $10).
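
A much-simplified sketch of how raw offer text might be mapped onto these values is shown below, as a method that could sit inside the Knn class. The production implementation used the bag-of-words classifier described above; this version only keys off a few indicative words, and the method name and keyword choices are illustrative assumptions. The enum mirrors the Offer.OfferType values referenced in the code listing.

// Simplified, keyword-based sketch of offer discretization. The real
// implementation used a bag-of-words classifier; the keywords here are
// illustrative assumptions. Mirrors the Offer.OfferType values above.
public enum OfferType { PAYFORFREE, PERCENTAGE, DOLLAR, UNKNOWN }

public static OfferType classifyOfferText(String offerText) {
	String text = offerText.toLowerCase();
	if(text.contains("free") && (text.contains("buy") || text.contains("purchase"))) {
		return OfferType.PAYFORFREE;   // e.g. "buy one entree, get one free"
	}
	else if(text.contains("%") || text.contains("percent")) {
		return OfferType.PERCENTAGE;   // e.g. "50% off any appetizer"
	}
	else if(text.contains("$")) {
		return OfferType.DOLLAR;       // e.g. "$5 off your purchase"
	}
	return OfferType.UNKNOWN;
}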

For the expiration attribute, some text parsing was done to determine what type of expiration the coupon had (DATE, USES, NONE or UNKNOWN). If the parsing detected a date format, we used DATE, and if the parsing detected that it was a limited usage coupon (either limited by total uses across a population, herein known as “global,” or limited by uses per customer, herein known as “local”), we designated that the coupon was USES. If the coupon was valid indefinitely, we designated that the coupon expiration was NONE. We did not further distinguish by how far in the future the expiration date was, by the type of limited usage (global or local) or by the number of uses remaining.
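
Expiration parsing can be sketched in the same spirit; the date pattern and keywords below are assumptions about the coupon text rather than the exact production rules, and the enum mirrors the Expiration.Expiry values used in the code listing.

// Rough sketch of expiration discretization. The regular expression and
// keywords are illustrative assumptions; mirrors Expiration.Expiry above.
public enum Expiry { DATE, USES, NONE, UNKNOWN }

public static Expiry classifyExpiration(String expirationText) {
	if(expirationText == null || expirationText.trim().isEmpty()) {
		return Expiry.NONE;                       // assume no text means valid indefinitely
	}
	String text = expirationText.toLowerCase();
	if(text.matches(".*\\d{1,2}/\\d{1,2}/\\d{2,4}.*")) {
		return Expiry.DATE;                       // e.g. "expires 2/11/2010"
	}
	else if(text.contains("per customer") || text.contains("uses")) {
		return Expiry.USES;                       // globally or locally limited uses
	}
	return Expiry.UNKNOWN;
}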

An important detail to note is that in our implementation of kNN, if an attribute was unknown, we declined to use it for computation of the nearest neighbor, because the UNKNOWN attribute was typically the last value in a Java enumeration and would therefore be assigned a high integer value, which would skew the results unnecessarily if used for computation.
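
In code, that check amounts to a small guard before the squared ordinal difference is accumulated. A sketch is below; it works on raw enum ordinals so it stays independent of the individual feature classes, and the method name is an illustrative assumption.

// Sketch: contribute nothing to the distance when either value is UNKNOWN,
// so the high UNKNOWN ordinal cannot skew the result. unknownOrdinal is the
// ordinal of the UNKNOWN constant for that attribute's enum.
public static double guardedSquaredDifference(int ordinalA, int ordinalB, int unknownOrdinal) {
	if(ordinalA == unknownOrdinal || ordinalB == unknownOrdinal) {
		return 0;   // decline to use this attribute in the distance
	}
	double diff = ordinalA - ordinalB;
	return diff * diff;
}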

For the remaining attributes (OFFERTIME, CATEGORY, DISTANCE, HANDSET – which considers handset model and OS – and ACTION), the value space of each attribute was divided over the possible discrete values for that attribute according to Table 2.2.

Attribute Possible Values
OFFERTIME Morning, Afternoon, Evening, Night, Unknown
CATEGORY Entertainment, Automotive, Food & Dining, Health & Beauty, Retail, Sports & Recreation, Travel, Clothing & Apparel, Electronics & Appliances, Furniture & Decor, Grocery, Hobbies & Crafts, Home Services, Hotels & Lodging, Nightlife & Bars, Nonprofits & Youth, Office Supplies, Other, Pet Services, Professional Services, Real Estate, Unknown
DISTANCE Less than 2 miles, 2 to 5 miles, 5 to 10 miles, 10 to 20 miles, 20 to 50 miles, 50 to 100 miles, Unknown
HANDSET Device: iPhone, iPod, G1, Hero, myTouch, Droid, Unknown
OS: iPhone, Android, Unknown
ACTION Redeem, Hit, Unknown

Table 2.2
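
As a concrete illustration, the DISTANCE row of Table 2.2 could be represented by an enum along these lines; the constant names are assumptions, since only the type name Distance.DistanceRange appears in the code listing above.

// Hypothetical enum for the DISTANCE value space in Table 2.2; mirrors the
// Distance.DistanceRange type referenced in the listing. UNKNOWN is kept as
// the last constant so it can be excluded from distance calculations (section 3).
public enum DistanceRange {
	LESS_THAN_2_MILES,
	TWO_TO_5_MILES,
	FIVE_TO_10_MILES,
	TEN_TO_20_MILES,
	TWENTY_TO_50_MILES,
	FIFTY_TO_100_MILES,
	UNKNOWN
}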

In the case of the handset OS, the iPhone and iPod were classified together under the iPhone OS as they run the same OS with different build targets. It is important to note that kNN works equally well with continuous-valued attributes as with discrete-valued attributes. For further discussion on using kNN with real-valued attributes, see [10].

3. PROBLEMS

There were several problems that were encountered while implementing kNN. The first problem was “majority voting.” Majority voting is the last step in the algorithm to classify an instance and is an inherent problem with the way the kNN algorithm works. If a particular class dominates the training data, it will skew the votes towards that class, since we are only considering data at a local level, that is, the distance from our classification instance to the nearest data points. There are two ways to solve this problem: 1) balancing the dataset, or 2) weighting the neighboring instances. Balancing is the simpler technique, in which an equal proportion of each class is present in the training data. A more complicated, but more effective, method is to weight the neighboring instances such that neighbors that are further away have a smaller weight value than closer neighbors.
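
A distance-weighted vote can be sketched against the Neighbor, Instance and WSAction types from the code listing above, as a drop-in companion to determineMajority; the 1/(distance + 1) weighting is an assumption chosen for illustration, not the scheme used to produce the results in section 5.

// Sketch of distance-weighted majority voting: each neighbor's vote is
// scaled by 1 / (distance + 1), so closer neighbors count for more. The
// weighting function is an illustrative assumption.
public static WSAction.Action determineWeightedMajority(ArrayList<Neighbor> neighbors) {
	double yea = 0, nay = 0;

	for(Neighbor neighbor : neighbors) {
		double weight = 1.0 / (neighbor.getDistance() + 1.0);
		if(neighbor.getInstance().isRedeemed()) {
			yea += weight;
		}
		else {
			nay += weight;
		}
	}

	return (yea > nay) ? WSAction.Action.Redeem : WSAction.Action.Hit;
}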

Determining “k” beforehand is a troublesome problem. This is due to the fact that, if “k” is too large, the separation between classifications becomes blurred. This could cause the program to group two clusters together that are, in fact, distinct clusters themselves. However, a large “k” value does reduce the effect of noise by including more samples in the hypothesis. If “k” is too small, we might not calculate a good representation of the sample space because our results are too local. In either case, if the target attribute is binary, “k” should be an odd number to avoid the possibility of ties in majority voting.

Also, kNN has a high computation cost because it has to consider the distance from the query point to every other point. The time complexity of classifying a single instance is O(n) in the number of stored examples, which wouldn’t scale well to hypothesis spaces with millions of instances.

Discretizing non-related attributes, like category, presented a unique challenge for the author. When considering continuous attributes, such as distance, it was easy to discretize the data. In the case of Coupious, the distance ranges were already defined in the application, so we just had to translate those over to our classification algorithm. However, some attributes, such as category, while related at an attribute level (in that they were all categories of coupon), had no natural numeric values. In this case, we simply assigned them increasing integer values.

Another problem that we encountered is the sparsity problem. Since we implemented a content-driven model, the degree of accuracy relied upon how much data we had about a particular end user to build their profile. If the customer only had one or two sessions ending in no redemptions, it might not be possible to achieve any accuracy about this person. We dealt with this problem by artificially creating new records to supplement previous real records.

Coupious relies on the GPS module inside the smart phone to tell us where a user is currently located. From that position, the user gets a list of coupons that are close to the user’s location. However, there are no safeguards in place to guarantee that a redemption is real. A curious user may attempt to redeem a coupon when he is not at the actual merchant location. In the data, we can account for this by calculating the distance from the merchant at the time of the redemption request. However, this GPS reading may be inaccurate, as the GPS module can adjust its accuracy to save power and battery life.
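
One way to flag such redemptions from the stored coordinates is a simple great-circle distance check, sketched below; the haversine formula itself is standard, but the 100-meter threshold and the method name are illustrative assumptions rather than part of the Coupious backend.

// Sketch: flag a redemption as suspicious if the reported GPS position is
// far from the merchant. Uses the haversine formula; the 100 m threshold
// is an arbitrary illustrative choice.
public static boolean isSuspiciousRedemption(double userLat, double userLon,
		double merchantLat, double merchantLon) {
	final double EARTH_RADIUS_METERS = 6371000.0;
	double dLat = Math.toRadians(merchantLat - userLat);
	double dLon = Math.toRadians(merchantLon - userLon);
	double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
			+ Math.cos(Math.toRadians(userLat)) * Math.cos(Math.toRadians(merchantLat))
			* Math.sin(dLon / 2) * Math.sin(dLon / 2);
	double distanceMeters = 2 * EARTH_RADIUS_METERS * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
	return distanceMeters > 100.0;
}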

Lastly, accuracy is somewhat limited because of the fact that we are using a content-only model. There is no way to interact with the user to ask if the recommendations are truly useful. To achieve this additional metric would require major changes to the application and the backend systems that are outside the scope of this research paper.

Despite the multiple problems with kNN, it is quite robust to noisy data as indicated in section 4, which makes it well-suited for this classification task, as the author can only verify the reasonableness of the data, not the integrity.

4. TESTING

Testing of the kNN logic was carried out using a 3-fold cross-validation method, where the data was divided into training and test sets such that |R| = k × |S|, where |R| is the size of the training set, k is the relative size and |S| is the size of the test set.

For each test run, we chose 4,000 training examples and 1 test example at random. We attempted to keep the labeled classes in the training set as balanced as possible by setting a threshold n. This threshold prevented the classes from becoming unbalanced by more than a difference of n. If the threshold n was ever reached, we discarded random selections until we were under the threshold again, thereby balancing the classes. Discussion of the results is in section 5.
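
A rough sketch of that balanced selection step is shown below, reusing the Instance type and isRedeemed() check from the code listing; the method name, sampling with replacement and the assumption that both classes are plentiful are simplifications for illustration.

// Sketch of balanced random selection of training examples: a draw is
// discarded whenever accepting it would let the class counts differ by
// more than the threshold n. Samples with replacement for simplicity and
// assumes both classes are plentiful in allInstances.
public static ArrayList<Instance> selectBalancedTrainingSet(
		ArrayList<Instance> allInstances, int trainingSize, int threshold) {
	ArrayList<Instance> training = new ArrayList<Instance>();
	Random generator = new Random();
	int redeemed = 0, notRedeemed = 0;

	while(training.size() < trainingSize) {
		Instance candidate = allInstances.get(generator.nextInt(allInstances.size()));
		boolean isRedeemed = candidate.isRedeemed();

		// discard this draw if it would unbalance the classes past the threshold
		if(isRedeemed && (redeemed + 1) - notRedeemed > threshold) {
			continue;
		}
		if(!isRedeemed && (notRedeemed + 1) - redeemed > threshold) {
			continue;
		}

		training.add(candidate);
		if(isRedeemed) { redeemed++; } else { notRedeemed++; }
	}

	return training;
}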

5. EVALUATION

Even though kNN is a simplistic algorithm, the classification results were quite accurate. To test the algorithm, we performed 10,000 independent runs where an equal number of “hit” and “redemption” rows were selected at random (2,000 of each so as to keep the inductive bias as fair as possible). An individual classification instance was chosen at random from that set of 4,000 instances and was then classified according to its nearest neighbors.

5.1 MACRO EVALUATION

When evaluating the results, there was one main factor that affected the recall: the size of k, the number of neighbors considered in the calculation (see Table 5.1). In the 10,000 independent runs using random subsets of data for each test, the overall recall for the kNN algorithm was 85.32%. The average F-measure for these runs was 0.45.
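
Precision and recall follow their usual definitions, and the F-measure reported here is computed as the product of the two over their sum, as in the code listing above:

$$\mathrm{precision} = \frac{TP}{TP + FP}, \qquad \mathrm{recall} = \frac{TP}{TP + FN}, \qquad F = \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}$$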

However, if one considers a subset of results with k-size less than or equal to 15, the average recall was much higher – 94%. After a k-size greater than 30, we see a significant drop-off of recall. This can be attributed to the fact that the groups are becoming less defined because, as k-size grows, the nodes that are being used are “further” away from the classification instance, and therefore the results are not as “good.”

5.2 MICRO EVALUATION

When broken down into smaller sets of 1,000 runs, the recall and F-measure vary greatly. In 16 runs, we had a range from 29.40% recall to 98.20% recall. The median recall was 92.70%.

Run K Size Recall F-measure Avg. Distance
1 1 98.20% 0.49 0.18
2 2 97.80% 0.49 0.23
3 3 95.80% 0.48 0.49
4 4 95.70% 0.48 0.44
5 5 95.60% 0.48 0.46
6 6 95.20% 0.48 0.59
7 7 93.10% 0.48 0.48
8 8 92.70% 0.47 0.69
9 9 94.20% 0.48 0.82
10 10 93.40% 0.48 0.88
11 15 91.70% 0.48 1.21
12 30 87.50% 0.47 2.16
13 50 85.10% 0.46 3.63
14 100 70.80% 0.42 7.57
15 200 48.90% 0.32 16.57
16 500 29.40% 0.22 31.64
Overall 85.32% 0.45 4.25

Table 5.1

There is a large gap between our maximum and our minimum recall. This can be attributed mainly to our k-size. Based on these results, we would feel fairly confident that we could make a useful prediction, but only if we had a confidence rating on the results (due to the high variability of the results).

These results could provide good insight into what k-size or neighbors are the most influential in suggesting a coupon. This would allow us to more carefully target Coupious advertising based on the user.

When could one consider these results valid? If the average distance of a classification instance to the examples is below a threshold such that the classification truly reflects its neighbors, we could consider the results valid. Another form of validating the results could be pruning the attribute set. If we were able to prune away attributes that didn’t affect the recall in a negative manner, we would be left with a set of attributes that truly influence the customers purchasing behavior, although this task is better suited for a decision or regression tree. An improvement to this kNN algorithm might come in the form of altering the k-size based upon the population or attributes that we are considering.

6. CONCLUSION

In this research project, we have implemented the kNN algorithm for recommender systems. The nearest neighbor algorithm was explored and several problems were identified and overcome. Different techniques were investigated to improve the accuracy of the system. The results of this project show an overall recall of 85.32%, which makes kNN an excellent, simple technique for implementing recommender systems.

REFERENCES

[1] Zhang, T., Iyengar, V. S. Recommender Systems Using Linear Classifiers. Journal of Machine Learning Research 2. (2002). 313-334.
[2] Basu, C., Hirsh, H., Cohen, W. Recommendation as Classification: Using Social and Content-Based Information in Recommendation.
[3] Cheung, K. W., Kowk, J. T., Law, M. H., Tsui, K. C. Mining Customer Product Ratings for Personalized Marketing
[4] E. Alpaydin, Introduction to Machine Learning, 2nd ed. MIT Press. Cambridge, Mass, 2010.
[5] D. Goldberg, D. Nichols, B.M. Oki, D. Terry, Collaborative filtering to weave an information tapestry, Communications of the ACM 35. (12) (December 1992) 61 – 70.
[6] T.M. Mitchell, Machine Learning, New York. McGraw-Hill, 1997.
[7] B. Sarwar, G. Karypis, J. Konstan, and J. Riedl, “Recommender Systems for Large-Scale E-Commerce: Scalable Neighborhood Formation Using Clustering,” Proc. Fifth Int’l Conf. Computer and Information Technology, 2002.
[8] D. Barbella, S. Benzaid, J. Christensen, B. Jackson, X. V. Qin, D. Musicant. Understanding Support Vector Machine Classifications via a Recommender System-Like Approach. [Online]. http://www.cs.carleton.edu/faculty/dmusican/svmzen.pdf
[9] D.E. Knuth. “The Art of Computer Programming.” Addison-Wesley. 1973.
[10] C. Elkan. “Nearest Neighbor Classification.” University of California – San Diego. 2007. [Online]. http://cseweb.ucsd.edu/~elkan/151/nearestn.pdf

All code owned and written by David Stites and published on this blog is licensed under MIT/BSD.