Implementing a Perceptron Algorithm

The Enliven team and I had the pleasure of competing in GlobalHack I in early 2014. Overall, it was a fun experience that I’ll remember as my first successful implementation of a perceptron machine learning algorithm. Although the math may seem daunting to the average developer who isn’t used to dealing with things like dot products, the basic perceptron algorithm is actually quite understandable.

My goal for this post is to cut to the core of the issue: how does one implement a quick, easy-to-use perceptron algorithm for one’s own, custom needs? I will go through concepts first and code second. The code presented at the end of this post was written from 4am-7am, so it may not be the most efficient and could certainly be expanded upon with ideas we’ll discuss in later posts. However, it should be a great resource for anyone looking to learn about this classic machine learning algorithm.

At its core, a perceptron answers a yes or no question based on past experience. In GlobalHack, we were tasked with predicting whether a sale would close based on some inputs. I’ll focus on four very simplified inputs for the sake of this post: pain (of the current situation, I presume), fit of product, success of demo, and dollar amount of project (specifically, how close the project is to our typical project). Each input will be rated from 0 to 100 and will be represented by x_1, x_2, x_3, and x_4, respectively.

So let’s back up for a second and talk about a weighted average. I’m going to assume that we’re all familiar with a typical average:

(x_1 + x_2 + x_3 + x_4) / 4

For each equation I give, I’m also going to give a generic equivalent in code (C#-ish with some prettified math symbols).
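In this case, the plain average might look like the following sketch (the ratings are hypothetical numbers of my own):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// The plain average: (x1 + x2 + x3 + x4) / 4
var ratings = new List<double> { 80, 60, 90, 70 }; // pain, fit, demo, amount
double average = ratings.Sum() / ratings.Count;
Console.WriteLine(average); // (80 + 60 + 90 + 70) / 4 = 75
```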

If you remember your factoring from school, you’ll recognize that another way of writing the above equation is:

(1/4)(x_1 + x_2 + x_3 + x_4)

Distributing gives us:

(1/4)x_1 + (1/4)x_2 + (1/4)x_3 + (1/4)x_4

Let’s go back to our example (closing a sale). If we think all inputs are equally important, then we should have something like this:

(1/4)x_1 + (1/4)x_2 + (1/4)x_3 + (1/4)x_4

But what is the average in this case? If we rate pain, fit, success, and amount from 0 to 100, then it would be the average rating of the key inputs. It’s not too much of a jump to see that we may be able to make a prediction based off of this score.

Now what happens if we decide pain really isn’t that important and we’ve underestimated how important fit is? Well, we could make the coefficient (the number in front of each input) of x_1 equal to 0 and the coefficient of x_2 equal to 1/2. We could now re-write the equation as follows:

(0)x_1 + (1/2)x_2 + (1/4)x_3 + (1/4)x_4

Of course, fractions can be rewritten as percentages:

(0%)x_1 + (50%)x_2 + (25%)x_3 + (25%)x_4

We now have a weighted average where the weights are 0%, 50%, 25%, and 25%. The way we’d represent this in typical machine learning parlance would be:

w_1x_1 + w_2x_2 + w_3x_3 + w_4x_4, where w_1 = 0, w_2 = 0.5, w_3 = 0.25, and w_4 = 0.25
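As code, the weighted score is a direct translation of that equation (again with hypothetical ratings):

```csharp
using System;

// score = w1*x1 + w2*x2 + w3*x3 + w4*x4, with weights (0, 0.5, 0.25, 0.25)
double pain = 80, fit = 60, demo = 90, amount = 70; // hypothetical ratings
double score = 0.00 * pain + 0.50 * fit + 0.25 * demo + 0.25 * amount;
Console.WriteLine(score); // 0 + 30 + 22.5 + 17.5 = 70
```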

There are two main algorithms when it comes to the perceptron: a predictive algorithm and a training algorithm. Let’s consider the predictive algorithm first:

y_j = f[w · x_j]

*from Wikipedia*

Yikes! That seems intense. But hold on - let’s break it down.

First of all, y_j means that we are naming the prediction y, and we are predicting for the jth time. For instance, if this were our fifth prediction we’ve ever made with this perceptron, j would be 5:

y_5 = f[w · x_5]

Next, w · x_j is called a dot product. It means we sum each weight multiplied by its respective input. So in our example, the pain weight gets multiplied by the pain rating, the fit weight gets multiplied by the fit rating, etc. Then we add them all up:

w · x_j = w_1x_j,1 + w_2x_j,2 + w_3x_j,3 + w_4x_j,4
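In code, the dot product is a one-liner with LINQ’s Zip (this mirrors the DotProduct helper in the full listing at the end of the post):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Multiply each weight by its matching input, then sum the products.
double DotProduct(List<double> list1, List<double> list2)
    => list1.Zip(list2, (d1, d2) => d1 * d2).Sum();

var weights = new List<double> { 0.0, 0.5, 0.25, 0.25 };
var ratings = new List<double> { 80, 60, 90, 70 }; // hypothetical ratings
Console.WriteLine(DotProduct(weights, ratings)); // 0 + 30 + 22.5 + 17.5 = 70
```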

f[ ] is a bit more complicated. First of all, remember that the prediction has to be a yes or no answer. The f and the opening and closing brackets in that expression tell us that this is a function that we’re going to nickname f. We could just as easily have called it g, or anything else, but f is more traditional in math. In code, we’d probably call this something like getPrediction.

The getPrediction function typically needs a threshold above which it concludes that the sale is likely. We want this threshold to evolve just like every other part of our algorithm, so we have to include it as a weight, w_0, paired with a constant input of 1. The prediction then only has to check whether the dot product is above zero:

```csharp
private int GetPredictedOutcome(List<double> inputList, List<double> weightList)
{
    // The threshold lives in the weight list as w0 (paired with a constant
    // input of 1), so we only compare the dot product to zero.
    if (DotProduct(weightList, inputList) > 0)
    {
        return 1;
    }
    return 0;
}
```

The reason the threshold is typically included as an input in our original predictive algorithm (as the weight w_0 with a constant input of 1) is so that the entire vector w (basically, a list in C#) can train with the rest of the weights. More on that below.
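Here is a quick sketch of why that works (the -50 threshold and the ratings are made-up numbers): moving the threshold into the weight list as w_0 turns "score > threshold" into "dot product > 0".

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// w0 = -50 plays the role of "threshold of 50": predicting
// "score > 50" is the same as predicting "w0*1 + score > 0".
var weights = new List<double> { -50, 0.0, 0.5, 0.25, 0.25 };
var inputs = new List<double> { 1, 80, 60, 90, 70 }; // the leading 1 is the bias input
double dot = weights.Zip(inputs, (w, x) => w * x).Sum();
Console.WriteLine(dot > 0 ? 1 : 0); // 70 - 50 = 20 > 0, so predict yes (1)
```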

Next up is training, the meat and potatoes of the perceptron algorithm. Without training, you would just have a plain ol’ dull algorithm, not one that can learn.

The following is the training equation:

w_i' = w_i + r(d_j - y_j) · x_j,i

*from Wikipedia*

Again: yikes! That’s a lot of math that some of you may not be comfortable with.

Conceptually, we’re looking at a whole big set of training data. We take this data one observation at a time, and for each observation, update the weights one at a time. The variables are as follows:

- i: the index of the weight we’re currently updating.
- w_i': the next incarnation of the ith (pain, for example) weight.
- w_i: the current incarnation of the ith (pain, for example) weight.
- d_j: the jth observation (for example, whether or not we actually closed the fifth sale). It’s represented by either a 1 (yes) or a 0 (no).
- y_j: the jth predicted observation (for example, whether our predictive algorithm would predict we would close the fifth sale given the current weights). It’s also represented by either a 1 (yes) or a 0 (no).
- x_j,i: the appropriate input for the jth observation. For example, when updating the pain weight for the fifth sale, we would look at the pain rating for the fifth sale.
- r: the “training velocity” (often called the learning rate). This basically allows us to throttle how fast the algorithm is learning. If it’s too small, the weights will take a very long time to reach their optimal levels. On the other hand, if it’s too large, the weights will constantly over-correct and won’t settle on their optimal levels. The training velocity must be a positive number between 0 and 1.
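The update rule can be sketched in code like this. It’s a minimal, hypothetical sketch (the Predict and TrainOne names, the starting weights, and the 0.5 learning rate are mine, not from the Hackathon code):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

double r = 0.5; // training velocity; a hypothetical value between 0 and 1

// Predict once per observation with the current weights (1 = yes, 0 = no).
int Predict(List<double> weights, List<double> inputs)
    => weights.Zip(inputs, (w, x) => w * x).Sum() > 0 ? 1 : 0;

// Nudge every weight by r * (d - y) * its input.
void TrainOne(List<double> weights, List<double> inputs, int actual)
{
    int predicted = Predict(weights, inputs);
    for (int i = 0; i < weights.Count; i++)
    {
        weights[i] += r * (actual - predicted) * inputs[i];
    }
}

var weights = new List<double> { 0, 0, 0 };     // w0 (threshold) first
var inputs = new List<double> { 1, 0.5, 0.25 }; // the constant 1 pairs with w0
TrainOne(weights, inputs, actual: 1); // the sale closed, but we predicted "no"
Console.WriteLine(string.Join(", ", weights)); // weights become 0.5, 0.25, 0.125
```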

Let’s update based on this new information:

w_i' = w_i + r(d_j - y_j) · x_j,i

Becomes (for the training of the entire fifth observation):

w_i' = w_i + r(d_5 - y_5) · x_5,i

And we’d continue to do this for each weight. The threshold weight w_0 is a special case (its rating is always 1):

w_0' = w_0 + r(d_5 - y_5) · 1

Let’s look at some possible outcomes and consequences of (d_j - y_j).

If the sale was made and was also predicted, then this term evaluates to 1 - 1 = 0, so:

w_i' = w_i + r(0) · x_j,i = w_i

If the sale was not made and was also not predicted, then this term evaluates to 0 - 0 = 0, so:

w_i' = w_i + r(0) · x_j,i = w_i

If the sale was made, but not predicted, then this term evaluates to 1 - 0 = 1, so:

w_i' = w_i + r · x_j,i

Basically, we end up deciding our weights aren’t aggressive enough, so we increment them by some fraction (decided by r) of the appropriate input, in this case x_j,i.

If the sale was not made, but was predicted, then this term evaluates to 0 - 1 = -1, so:

w_i' = w_i - r · x_j,i

This time, we basically end up deciding our weights are too aggressive, so we decrease them by some fraction (decided by r) of the appropriate input, in this case x_j,i.
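To make the four cases concrete, here’s a small worked sketch (the 0.25 starting weight, 0.5 learning rate, and 0.5 input are hypothetical numbers of my own):

```csharp
using System;

// One weight update: w' = w + r * (d - y) * input
double r = 0.5, weight = 0.25, input = 0.5; // hypothetical values

double Update(int actual, int predicted)
    => weight + r * (actual - predicted) * input;

Console.WriteLine(Update(1, 1)); // made and predicted:      0.25 (no change)
Console.WriteLine(Update(0, 0)); // lost and not predicted:  0.25 (no change)
Console.WriteLine(Update(1, 0)); // made but not predicted:  0.5  (bumped up)
Console.WriteLine(Update(0, 1)); // lost but predicted:      0    (cut down)
```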

Below is my code (modified for clarity and consistency) from the Hackathon. I’ve replaced most “var” declarations with their appropriate types for clarity.

```csharp
// These methods live in a larger class that also provides Repository,
// SaveNewWeights, and the _learnRate field.
public void Learn(Opportunity opportunity)
{
    int actualOutcome = opportunity.IsWon ? 1 : 0;
    List<double> weightList = Repository.GetWeightList();
    List<double> ratingList = GetRatingList(opportunity);
    List<double> newWeights = Train(actualOutcome, weightList, ratingList);
    SaveNewWeights(newWeights, weightList);
}

private List<double> GetRatingList(Opportunity opportunity)
{
    // (A constant 1.0 could also be added here as the input
    // for the threshold weight w0.)
    var ratingList = new List<double>();
    ratingList.Add(opportunity.Probability);
    ratingList.Add(opportunity.CompetitorRating);
    ratingList.Add(opportunity.DemoRating);
    ratingList.Add(opportunity.PainRating);
    ratingList.Add(opportunity.ProductFitRating);
    return ratingList;
}

private int GetPredictedOutcome(List<double> inputList, List<double> weightList)
{
    if (DotProduct(weightList, inputList) > 0)
    {
        return 1;
    }
    return 0;
}

private double DotProduct(List<double> list1, List<double> list2)
{
    return list1.Zip(list2, (d1, d2) => d1 * d2).Sum();
}

private List<double> Train(int actualOutcome, List<double> weightList,
    List<double> inputList)
{
    // Compute the prediction once per observation, with the current weights,
    // before any of them change.
    int predictedOutcome = GetPredictedOutcome(inputList, weightList);
    for (int i = 0; i < weightList.Count; i++)
    {
        weightList[i] = UpdateWeight(weightList[i], actualOutcome,
            predictedOutcome, inputList[i]);
    }
    return weightList;
}

private double UpdateWeight(double weight, int actualOutcome, int predictedOutcome,
    double input)
{
    return weight + _learnRate * (actualOutcome - predictedOutcome) * input;
}
```

I hope that was helpful! Please don’t hesitate to ask questions in the comments section below.
