tulians - 1 month ago
Python Question

Misclassification error in perceptron

I'm implementing a perceptron using Python and NumPy. I'm training it with the algorithm described in this Wikipedia article, but the resulting weight vector does not correctly classify sample vectors, not even those in the training set.

This is the training code I wrote:

import numpy as np

epochs = 50
center = 0
unit_step = lambda x: 0 if x < center else (0.5 if x == center else 1)

def train(data_set, labels):
    number_of_samples, dimension = data_set.shape

    # Generate the augmented data set, adding a column of '1's
    augmented_data_set = np.ones((number_of_samples, dimension + 1))
    augmented_data_set[:,:-1] = data_set

    w = 1e-6 * np.random.rand(dimension + 1)

    for _ in xrange(epochs):
        for sample, target in zip(augmented_data_set, labels):
            predicted_output = unit_step(np.dot(w, sample))
            update = (target - predicted_output) * sample
            w += update

    return w


After this, I set the training set to the vectors needed to learn the AND logic function:

training_set = np.array([[0,0],[0,1],[1,0],[1,1]])


and their corresponding class labels:

labels = np.array([-1,-1,-1,1])


where -1 represents False and 1 represents True.

After running

w = train(training_set, labels)

I test the resulting weight vector and get these incorrect results:


  • np.dot(w, [0,0,1]) = -1.0099996334232431

  • np.dot(w, [0,1,1]) = -1.0099991616719257

  • np.dot(w, [1,0,1]) = -1.009999277692496

  • np.dot(w, [1,1,1]) = -1.009999277692496



The error here is that the last case should return a value greater than 0 and close to 1. I can't see clearly what's happening here. What am I missing?

Thanks in advance.

Answer

The most obvious error is the mismatch between the training labels (-1 and 1) and the values your model can output (0, 0.5, and 1.0). Because the step function can never output -1, the error term (target - predicted_output) never reaches zero for the negative samples, so every epoch keeps pushing the weights further negative and training cannot converge. Use 0 and 1 consistently for both the labels and the step function's output.
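A minimal corrected sketch of the training code (assuming a hard threshold at 0, so the step outputs only 0 or 1, and 0/1 class labels; the 0.5 mid-point from the question is dropped for simplicity):

```python
import numpy as np

epochs = 50

# Heaviside-style step at 0: outputs only 0 or 1, matching the labels
def unit_step(x):
    return 1 if x > 0 else 0

def train(data_set, labels):
    number_of_samples, dimension = data_set.shape
    # Augment each sample with a constant 1 so the last weight acts as a bias
    augmented = np.ones((number_of_samples, dimension + 1))
    augmented[:, :-1] = data_set
    w = 1e-6 * np.random.rand(dimension + 1)
    for _ in range(epochs):
        for sample, target in zip(augmented, labels):
            predicted = unit_step(np.dot(w, sample))
            # Standard perceptron update: no change when prediction is correct
            w += (target - predicted) * sample
    return w

training_set = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
labels = np.array([0, 0, 0, 1])  # 0/1 labels, matching the step's range

w = train(training_set, labels)
print([unit_step(np.dot(w, np.append(x, 1))) for x in training_set])
# → [0, 0, 0, 1]
```

With consistent 0/1 targets the update term vanishes once a sample is classified correctly, and since AND is linearly separable the perceptron convergence theorem guarantees the loop settles well within 50 epochs.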
