Welcome aboard our exploration of the building block of Neural Networks: the perceptron! This foundational algorithm sparks your comprehension of the more advanced Neural Networks used in Machine Learning. The objective of this lesson is to understand and code a perceptron in Python from scratch. We will decipher the structure of a perceptron, its prediction method, and its training process. To conclude, we will build a fully functioning model that captures a simple logical scenario using AND operator data.
We start by delving into the perceptron, one of the simplest binary linear classifiers in the Neural Network family. A perceptron operates by accepting multiple inputs, weighting and aggregating them, and deciding the output based on that aggregate.
Think of perceptrons as a democratic process. Each "voter" (input) contributes with differing weights. The "candidate" (output) who secures the majority of votes (aggregate of weighted inputs) wins and is chosen.
Mathematically, the predicted output of a perceptron can be formulated as follows:

$$\hat{y} = f\left(\sum_{i=1}^{n} w_i x_i + b\right)$$

For our purposes:
- $\hat{y}$ is our output, which we are predicting.
- $w_i$ represents the weights; consider these the importance accorded to each contributing voter.
- $x_i$ denotes our inputs, i.e., the voters.
- $b$ is called the bias; it's akin to an incumbent's advantage, a prior tendency toward a particular party or candidate.
- $f$ is an activation function; it takes in the sum of all votes and outputs the election result.
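To make the formula concrete, here is a tiny sketch of that computation in plain Python. The weight and bias values below are purely illustrative (they are not taken from the lesson); a real perceptron learns its own values during training.

```python
# Illustrative only: hand-picked weights and bias, not learned values
weights = [0.5, 0.5]     # w_i: how much each "voter" counts
bias = -0.75             # b: the incumbent's advantage
inputs = [1, 1]          # x_i: the voters

# Weighted sum of the votes plus the bias
summation = sum(w * x for w, x in zip(weights, inputs)) + bias

# Step activation f: 1 if the tally is positive, otherwise 0
output = 1 if summation > 0 else 0
print(summation, output)  # 0.25 1
```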
Let's kickstart our algorithm by setting the stage in our Python `Perceptron` class with the `__init__` method.
```python
import numpy as np

class Perceptron:
    def __init__(self, no_of_inputs, max_iterations=100, learning_rate=0.01):
        self.max_iterations = max_iterations
        self.learning_rate = learning_rate
        self.weights = np.zeros(no_of_inputs + 1)  # index 0 holds the bias weight
```
Here:
- `no_of_inputs` refers to the number of inputs (features) the perceptron accepts.
- `max_iterations` is the maximum number of passes the model will make over the training data.
- `learning_rate` indicates how fast the weights adapt, or learn, based on the outcomes.

The weights are initialized to zero, with one additional weight reserved for the incumbent, known as the bias.
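As a quick sanity check, you could instantiate the class and inspect the freshly initialized state; the snippet below assumes the class is assembled as shown above.

```python
p = Perceptron(2)        # two inputs -> three weights (bias plus one per input)
print(p.weights)         # [0. 0. 0.]
print(p.learning_rate)   # 0.01
```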
Let's see how our system decides, using the `predict` method.
```python
def predict(self, inputs):  # defined inside the Perceptron class
    # Weighted sum of the inputs plus the bias stored at index 0
    summation = np.dot(inputs, self.weights[1:]) + self.weights[0]
    # Step activation: output 1 if the summation is positive, otherwise 0
    if summation > 0:
        activation = 1
    else:
        activation = 0
    return activation
```
First, we calculate the sum of the weighted votes, akin to election results, by taking a dot product between the inputs and weights (excluding the incumbent bias). The incumbent bias is added to this sum. Finally, the activation function (step function in this case) is used to determine the election result.
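To trace `predict` by hand, here is a small sketch that repeats the same computation outside the class. The weight values are hypothetical; in practice they come from training.

```python
import numpy as np

weights = np.array([-0.2, 0.3, 0.2])  # hypothetical: bias first, then one weight per input
inputs = np.array([1, 1])

summation = np.dot(inputs, weights[1:]) + weights[0]  # 0.3 + 0.2 - 0.2 = 0.3
print(1 if summation > 0 else 0)                      # 1, since the summation is positive
```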
Let's discuss how to train our model to yield better results using the `train` method of the perceptron.
```python
def train(self, training_inputs, labels):  # defined inside the Perceptron class
    for _ in range(self.max_iterations):
        for inputs, label in zip(training_inputs, labels):
            prediction = self.predict(inputs)
            # Perceptron learning rule: adjust each weight by the error
            # (label - prediction) scaled by the learning rate
            self.weights[1:] += self.learning_rate * (label - prediction) * inputs
            self.weights[0] += self.learning_rate * (label - prediction)
```
This training process is similar to refining policies over multiple iterations based on previous prediction errors: on each pass, the weights are nudged by the error (label minus prediction), scaled by the learning rate and the corresponding input. Each iteration enables the system to learn from its mistakes and make better decisions in the future.
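To see a single update of that rule in isolation, here is a small sketch with made-up numbers: the model under-predicts (label 1, prediction 0), so the bias and every weight attached to an active input are nudged upward by the learning rate.

```python
import numpy as np

learning_rate = 0.01
weights = np.zeros(3)             # bias at index 0, then one weight per input
inputs = np.array([1, 1])
label, prediction = 1, 0          # the model guessed 0 but the answer was 1

error = label - prediction        # +1: push the weights up
weights[1:] += learning_rate * error * inputs
weights[0] += learning_rate * error
print(weights)                    # [0.01 0.01 0.01]
```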
Let's integrate all we've learned so far and apply it to an example. We create AND operator data: each sample contains two inputs, and the label is their combination under the AND operator. This means `1` and `1` result in `1`, but any other input combination results in `0`.
```python
training_inputs = []
training_inputs.append(np.array([1, 1]))
training_inputs.append(np.array([1, 0]))
training_inputs.append(np.array([0, 1]))
training_inputs.append(np.array([0, 0]))
labels = np.array([1, 0, 0, 0])

perceptron = Perceptron(2)
perceptron.train(training_inputs, labels)
```
After the training using AND operator data, our model is now ready to predict the results for new inputs.
```python
inputs = np.array([1, 1])
print(perceptron.predict(inputs))  # Output: 1

inputs = np.array([0, 1])
print(perceptron.predict(inputs))  # Output: 0
```
Hurrah! Our model successfully passed the test, correctly predicting the results.
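If you'd like to confirm the full truth table, a quick loop over the training inputs serves as a final sanity check:

```python
for inputs in training_inputs:
    print(inputs, '->', perceptron.predict(inputs))
# [1 1] -> 1
# [1 0] -> 0
# [0 1] -> 0
# [0 0] -> 0
```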
Congratulations! You've successfully transitioned from understanding a perceptron to designing one in Python. Practicing is instrumental in solidifying this knowledge, and our upcoming exercises provide the perfect launching pad. Continue to explore machine learning, and enjoy your journey!