Lesson 1
Stochastic Gradient Descent: Theory and Implementation in Python
Introduction

Welcome! We're about to explore Stochastic Gradient Descent (SGD), a pivotal optimization algorithm. SGD, a variant of Gradient Descent, is renowned for its efficiency with large datasets thanks to its stochastic nature. Stochastic means “random” and is the opposite of deterministic: a deterministic algorithm runs the same way every time, while a stochastic one introduces randomness. Our journey includes understanding SGD, its theoretical concepts, and implementing it in Python.

Understanding Stochastic Gradient Descent

Understanding SGD starts with its structure. Unlike Gradient Descent, which computes the gradient over the entire dataset, SGD estimates the gradient from a single randomly selected data point at each step. Consequently, SGD is highly efficient for large datasets.

While the efficient handling of large datasets is a blessing, SGD's stochasticity makes convergence noisier: the parameters tend to hover around the minimum rather than settling exactly at it.
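To make this contrast concrete, here is a minimal sketch (not part of the lesson's later code) of how the two gradient computations differ for the linear model we fit below; the function names full_batch_gradients and stochastic_gradients are illustrative:

Python
import numpy as np

def full_batch_gradients(m, b, X, Y):
    # Gradient Descent: use every sample and average the contributions
    errors = (m * X + b) - Y
    return 2 * np.mean(errors * X), 2 * np.mean(errors)

def stochastic_gradients(m, b, X, Y):
    # SGD: estimate the gradient from one randomly chosen sample
    i = np.random.randint(len(X))
    error = (m * X[i] + b) - Y[i]
    return 2 * error * X[i], 2 * error

The stochastic estimate is noisy, but each step costs only one sample's worth of work instead of a full pass over the data.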

Defining Data

We are going to use this simple example of data:

Python
# Importing the necessary library
import numpy as np

# Linear regression problem
X = np.array([0, 1, 2, 3, 4, 5])
Y = np.array([0, 1.1, 1.9, 3, 4.2, 5.2])
Math Behind

In terms of math, SGD can be formulated as follows. Imagine we are looking for a best-fit line, setting the parameters of the familiar $y = mx + b$ equation. Remember, $m$ is the slope and $b$ is the y-intercept. Then:

$m' = m - 2\alpha \cdot ((mx_i + b) - y_i) \cdot x_i$

$b' = b - 2\alpha \cdot ((mx_i + b) - y_i)$

where:

  • $m$ and $b$ are the parameter values before the update
  • $m'$ and $b'$ are the updated parameter values
  • $x_i$ is a particular feature (input value) from your training set
  • $y_i$ is the actual output for the given input $x_i$
  • $\alpha$ is the learning rate

These formulas represent the update rules for parameters $m$ and $b$. Here, the term $(mx_i + b) - y_i$ represents the difference between the model's prediction $mx_i + b$ and the actual data point. Multiplying it by the feature $x_i$ gives the gradient for parameter $m$ estimated from that single sample; full Gradient Descent would instead average this quantity over all samples. The same principle applies to parameter $b$, but without the multiplication by $x_i$, since $b$ is the bias term.
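To see one update concretely, assume starting values $m = 0$ and $b = 0$ and a learning rate of 0.01 (these particular numbers are chosen only for illustration), and suppose the randomly selected sample is $(x_i, y_i) = (2, 1.9)$ from the data above:

Python
# One hand-worked SGD step with illustrative starting values
m, b, alpha = 0.0, 0.0, 0.01
x_i, y_i = 2, 1.9                  # the randomly chosen sample

error = (m * x_i + b) - y_i        # (m*x_i + b) - y_i = -1.9
m = m - 2 * alpha * error * x_i    # 0 - 2*0.01*(-1.9)*2 = 0.076
b = b - 2 * alpha * error          # 0 - 2*0.01*(-1.9)   = 0.038

A single step nudges the line only slightly toward the data; repeating this many times is what drives the parameters toward a good fit.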

Implementing Stochastic Gradient Descent

Now, let's dive into Python to implement SGD. This process encompasses initializing parameters randomly, selecting a random training sample, calculating the gradient, updating the parameters, and running several iterations (also known as epochs).

Let's break it down with the following code:

Python
# Model initialization
m = np.random.randn()  # Initialize the slope (random number)
b = np.random.randn()  # Initialize the intercept (random number)

learning_rate = 0.01  # Define the learning rate
epochs = 10000  # Define the number of iterations

# SGD implementation
for _ in range(epochs):
    random_index = np.random.randint(len(X))  # Select a random sample
    x = X[random_index]
    y = Y[random_index]
    pred = m * x + b  # Calculate the predicted y
    # Calculate gradients for m (slope) and b (intercept);
    # the factor of 2 from the update rule is absorbed into the learning rate
    grad_m = (pred - y) * x
    grad_b = (pred - y)
    m -= learning_rate * grad_m  # Update m using the calculated gradient
    b -= learning_rate * grad_b  # Update b using the calculated gradient

After running the SGD implementation, we should see the final optimized values of m (slope) and b (intercept).
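If you'd like a numerical check, a quick sketch like the following prints the learned parameters and the mean squared error on our data (exact values will vary between runs because of the random initialization and sampling):

Python
# Inspect the learned parameters and the fit quality
print(f"m = {m:.3f}, b = {b:.3f}")
mse = np.mean((m * X + b - Y) ** 2)
print(f"Mean squared error: {mse:.4f}")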

Testing the Algorithm

We take the parameters learned by our SGD loop and visualize the resulting fit using Matplotlib.

Python
import matplotlib.pyplot as plt

# Plot the data points
plt.scatter(X, Y, color="m", marker="o", s=30)

# Predicted line for the model
y_pred = m * X + b

# Plotting the predicted line
plt.plot(X, y_pred, color="g")

# Adding labels to the plot
plt.xlabel('X')
plt.ylabel('Y')

plt.show()

Here is the result:

The plot shows the data points together with the line produced by our SGD-trained model, showcasing the result on this simple linear regression problem.
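As an optional sanity check (not part of the original lesson code), you can compare the SGD parameters with NumPy's closed-form least-squares fit; the two lines should be very close:

Python
# Compare SGD's result with a direct least-squares line fit
m_ls, b_ls = np.polyfit(X, Y, 1)
print(f"SGD fit:           m = {m:.3f}, b = {b:.3f}")
print(f"Least-squares fit: m = {m_ls:.3f}, b = {b_ls:.3f}")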

Lesson Summary and Practice

Today's lesson unveiled critical aspects of the Stochastic Gradient Descent algorithm. We explored its significance, advantages, disadvantages, mathematical formulation, and Python implementation. You'll soon practice these concepts in upcoming tasks, cementing your understanding of SGD and enhancing your Python coding skills in machine learning. Happy learning!

Enjoy this lesson? Now it's time to practice with Cosmo!
Practice is how you turn knowledge into actual skills.