Welcome! We're about to explore Stochastic Gradient Descent (SGD), a pivotal optimization algorithm. SGD, a variant of Gradient Descent, is renowned for its efficiency on large datasets thanks to its stochastic nature. Stochastic means "random" and is the opposite of deterministic: a deterministic algorithm runs the same way every time, whereas a stochastic one introduces randomness. Our journey includes understanding SGD, its theoretical concepts, and implementing it in Python.
Understanding SGD starts with its structure. Unlike Gradient Descent, SGD estimates the gradient from a single randomly selected data point rather than from the entire dataset. Consequently, SGD is highly efficient for large datasets.
While SGD's efficient handling of large datasets is a blessing, its stochasticity often leads to a noisier convergence process, which can leave the model short of the absolute minimum.
We are going to use this simple example dataset:
```python
# Importing necessary library
import numpy as np

# Linear regression problem
X = np.array([0, 1, 2, 3, 4, 5])
Y = np.array([0, 1.1, 1.9, 3, 4.2, 5.2])
```
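To make the batch-versus-stochastic distinction concrete, here is a minimal sketch (not part of the lesson's required code) that contrasts the exact gradient computed over the whole dataset with the noisy single-sample estimate SGD uses; the starting guesses `m = 0, b = 0` are chosen purely for illustration:

```python
m, b = 0.0, 0.0  # illustrative starting guesses for slope and intercept

# Batch Gradient Descent: exact gradient, averaged over the entire dataset
errors = (m * X + b) - Y
full_grad_m = np.mean(errors * X)
full_grad_b = np.mean(errors)

# SGD: noisy gradient estimate from one randomly chosen data point
i = np.random.randint(len(X))
error_i = (m * X[i] + b) - Y[i]
sgd_grad_m = error_i * X[i]
sgd_grad_b = error_i

print(full_grad_m, full_grad_b)  # requires a pass over all six points
print(sgd_grad_m, sgd_grad_b)    # varies from call to call, but is far cheaper per step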
In terms of math, SGD can be formulated as follows. Imagine we are looking for a best-fit line, setting the parameters of the familiar equation $y = mx + b$. Remember, $m$ is the slope and $b$ is the y-intercept. Then:

$$m_{new} = m_{old} - \alpha \cdot (\hat{y}_i - y_i) \cdot x_i$$

$$b_{new} = b_{old} - \alpha \cdot (\hat{y}_i - y_i)$$

where:
- $m_{old}$ and $b_{old}$ are the initial (current) values of your parameters
- $m_{new}$ and $b_{new}$ are the updated parameters
- $x_i$ is a particular feature of your training set (the randomly selected sample)
- $y_i$ is the actual output for the given feature, and $\hat{y}_i = m_{old} x_i + b_{old}$ is the model's prediction for it
- $\alpha$ is the learning rate
These formulas represent the update rules for parameters $m$ and $b$. Here, the term $(\hat{y}_i - y_i)$ is the difference between the model's current prediction and the actual data point. Multiplying it by the corresponding feature $x_i$ gives the gradient for parameter $m$ based on that single sample; unlike batch Gradient Descent, no averaging over the whole dataset is needed. The same principle applies to parameter $b$, but without the multiplication by $x_i$, since $b$ is the bias term.
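To see one update in action, here is a minimal worked step, assuming starting values $m = 0$, $b = 0$, a learning rate of 0.1, and the sampled point $(2, 1.9)$ from our dataset (all chosen purely for illustration):

```python
# One SGD update step with illustrative values
m_old, b_old = 0.0, 0.0
alpha = 0.1
x_i, y_i = 2.0, 1.9            # the randomly selected sample

pred = m_old * x_i + b_old     # prediction: 0.0
error = pred - y_i             # prediction error: -1.9

m_new = m_old - alpha * error * x_i  # 0.0 - 0.1 * (-1.9) * 2 = 0.38
b_new = b_old - alpha * error        # 0.0 - 0.1 * (-1.9)     = 0.19
```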
Now, let's dive into Python to implement SGD. This process encompasses initializing parameters randomly, selecting a random training sample, calculating the gradient, updating the parameters, and running several iterations (also known as epochs).
Let's break it down with the following code:
```python
# Model initialization
m = np.random.randn()  # Initialize the slope (random number)
b = np.random.randn()  # Initialize the intercept (random number)

learning_rate = 0.01   # Define the learning rate
epochs = 10000         # Define the number of iterations

# SGD implementation
for _ in range(epochs):
    random_index = np.random.randint(len(X))  # Select a random sample
    x = X[random_index]
    y = Y[random_index]
    pred = m * x + b   # Calculate the predicted y
    # Calculate gradients for m (slope) and b (intercept)
    grad_m = (pred - y) * x
    grad_b = (pred - y)
    m -= learning_rate * grad_m  # Update m using the calculated gradient
    b -= learning_rate * grad_b  # Update b using the calculated gradient
```
After running the SGD implementation, we should see the final optimized values of `m` (slope) and `b` (intercept).
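For instance, a quick way to inspect them (the exact numbers vary between runs because of the random initialization and sampling, but for this dataset the slope should land near 1 and the intercept near 0):

```python
# Inspect the learned parameters; values differ slightly from run to run
print(f"Slope (m): {m:.3f}")
print(f"Intercept (b): {b:.3f}")
```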
We take the parameters fitted by our SGD loop and visualize the result using Matplotlib.
```python
import matplotlib.pyplot as plt

# Plot the data points
plt.scatter(X, Y, color="m", marker="o", s=30)

# Predicted line for the model
y_pred = m * X + b

# Plotting the predicted line
plt.plot(X, y_pred, color="g")

# Adding labels to the plot
plt.xlabel('X')
plt.ylabel('Y')

plt.show()
```
Here is the result:
This plot visualizes the implementation of SGD on a simple linear regression problem, showcasing the resulting model.
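If you also want to see the noisier convergence mentioned earlier, one option (a sketch, not part of the lesson's required code) is to rerun the training loop while recording the full-dataset mean squared error after each step and plotting it:

```python
# Re-run SGD while tracking the full-dataset MSE to visualize the noisy convergence
m, b = np.random.randn(), np.random.randn()
losses = []
for _ in range(1000):
    i = np.random.randint(len(X))
    error = (m * X[i] + b) - Y[i]
    m -= 0.01 * error * X[i]
    b -= 0.01 * error
    losses.append(np.mean((m * X + b - Y) ** 2))  # MSE over all points after this step

plt.plot(losses, color="b")
plt.xlabel('Iteration')
plt.ylabel('MSE')
plt.show()
```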
Today's lesson unveiled critical aspects of the Stochastic Gradient Descent algorithm. We explored its significance, advantages, disadvantages, mathematical formulation, and Python implementation. You'll soon practice these concepts in upcoming tasks, cementing your understanding of SGD and enhancing your Python coding skills in machine learning. Happy learning!