Welcome to our exciting second class in the Regression and Gradient Descent series! In the previous lesson, we covered Simple Linear Regression. Now, we're transitioning toward Multiple Linear Regression, a powerful tool for examining the relationship between a dependent variable and several independent variables.
Consider a case where we need to predict house prices, which undoubtedly depend on multiple factors, such as location, size, and the number of rooms. Multiple Linear Regression accounts for these simultaneous predictors. In today's lesson, you'll learn how to implement this concept in Python!
Multiple Linear Regression builds upon the concept of Simple Linear Regression, accounting for more than one independent variable.
Let's recall the Simple Linear Regression equation:

$$\hat{y} = \beta_0 + \beta_1 x$$
For Multiple Linear Regression, we add multiple independent variables, $x_1, x_2, \ldots, x_m$:

$$\hat{y} = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \ldots + \beta_m x_m$$
Suppose we had $n$ data points (equations), each with $m$ features ($x$ values). Then $X$ would look like:

$$X = \begin{bmatrix} 1 & x_{11} & x_{12} & \cdots & x_{1m} \\ 1 & x_{21} & x_{22} & \cdots & x_{2m} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n1} & x_{n2} & \cdots & x_{nm} \end{bmatrix}$$
Each row represents the $m$ features for a single data point. Notice how we include a column of 1's to represent the intercept (also called bias) of each equation.
For each row (equation), there is a corresponding $y$ value. So $y$ looks like:

$$y = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}$$
Solving for the coefficients (with the Normal Equation, introduced below) results in a vector $\beta$:

$$\beta = \begin{bmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_m \end{bmatrix}$$
Now, for any set of features $x_1$ through $x_m$, we can predict the $\hat{y}$ value as:

$$\hat{y} = \beta_0 + \beta_1 x_1 + \ldots + \beta_m x_m$$
To calculate all the predictions at once, we take the dot product of $X$ and $\beta$:

$$\hat{y} = X\beta$$
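To make the matrix form concrete, here is a minimal, hypothetical NumPy sketch (the numbers are invented purely for illustration and are not the lesson's dataset) showing how a design matrix with a leading column of ones and a coefficient vector combine into all predictions with a single product:

```python
import numpy as np

# Hypothetical design matrix: a column of ones (for the intercept) plus two features
X_example = np.array([[1.0, 2.0, 3.0],
                      [1.0, 4.0, 5.0]])

# Hypothetical coefficients: [beta_0 (intercept), beta_1, beta_2]
beta_example = np.array([0.5, 1.0, 2.0])

# y_hat = X @ beta gives the prediction for every row at once
print(X_example @ beta_example)  # [ 8.5 14.5]
```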
To implement Multiple Linear Regression, we'll leverage some Linear Algebra concepts. Using the Normal Equation, we can calculate the coefficients for our regression equation:

$$\beta = (X^T X)^{-1} X^T y$$
Where $X$ is a matrix of features and $y$ is a vector of the target variable values. Like Simple Linear Regression, residuals (the differences between actual and predicted values) play a significant role. The smaller these residuals, the better the model fits.
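For completeness, here is a short derivation the lesson does not spell out: the Normal Equation is exactly the choice of $\beta$ that minimizes the sum of squared residuals. Setting the gradient of $\|y - X\beta\|^2$ with respect to $\beta$ to zero gives

$$X^T (y - X\beta) = 0 \quad\Rightarrow\quad X^T X \beta = X^T y \quad\Rightarrow\quad \beta = (X^T X)^{-1} X^T y,$$

assuming $X^T X$ is invertible.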
Let's roll up our sleeves and start coding! We'll primarily rely on NumPy to handle numerical operations and matrices.
First, we set up our dataset:
```python
import numpy as np

X = np.array([[73, 67, 43],
              [91, 88, 64],
              [87, 134, 58],
              [102, 43, 37],
              [69, 96, 70]], dtype='float32')

y = np.array([56, 81, 119, 22, 103], dtype='float32')
```
Next, we calculate our vector of coefficients, $\beta$, using the Normal Equation:
- Enhance our feature matrix, $X$, with an extra column of ones to account for the intercept.
```python
# Prepend a column of ones so the first coefficient acts as the intercept
ones = np.ones(shape=(len(X), 1))
X = np.append(ones, X, axis=1)
```
- Compute the coefficients using the Normal Equation.
```python
# beta = (X^T X)^(-1) X^T y
beta = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y)
```
We could also use the `@` operator instead of `.dot`. You may choose whichever you find more comfortable:
```python
beta = np.linalg.inv(X.T @ X) @ X.T @ y
```
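As a side note (not part of the original lesson): explicitly inverting $X^T X$ can be numerically fragile when features are strongly correlated. NumPy offers more stable routines; the snippet below is just a sketch of two common alternatives that should give essentially the same `beta` here:

```python
# Solve the linear system (X^T X) beta = X^T y without forming an explicit inverse
beta_solve = np.linalg.solve(X.T @ X, X.T @ y)

# Or solve the least-squares problem directly from X and y
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
```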
After completing our model, we need to evaluate its performance. We employ the coefficient of determination ($R^2$ score) for that. It indicates how well our model fits the data. Let's recall it:

$$R^2 = 1 - \frac{SS_{res}}{SS_{tot}}$$
Here, $SS_{res}$ denotes the residual sum of squares, and $SS_{tot}$ is the total sum of squares:

$$SS_{res} = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2,$$

where $y_i$ represents the observed values and $\hat{y}_i$ represents the values predicted by the regression model.

$$SS_{tot} = \sum_{i=1}^{n} (y_i - \bar{y})^2,$$

where $y_i$ represents the observed values and $\bar{y}$ stands for the mean value of the observed data.
A higher $R^2$ value (closer to 1) indicates a good model fit.
```python
# Predictions for every data point: y_hat = X beta
predictions = X.dot(beta)

ss_residuals = np.sum(np.square(y - predictions))
ss_total = np.sum(np.square(y - np.mean(y)))
r2_score = 1 - (ss_residuals / ss_total)

print("R^2 Score:", r2_score)  # Output: R^2 Score: 0.9992
```
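If you'd like to sanity-check the from-scratch result, a library implementation should agree with it. The sketch below assumes scikit-learn is installed (it isn't used elsewhere in this lesson); note that we pass the features without the manually added column of ones, since `LinearRegression` fits its own intercept by default:

```python
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score as sk_r2_score

# X[:, 1:] drops the column of ones we appended earlier
model = LinearRegression().fit(X[:, 1:], y)
sk_predictions = model.predict(X[:, 1:])

print("sklearn R^2:", sk_r2_score(y, sk_predictions))
print("intercept:", model.intercept_, "coefficients:", model.coef_)
```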
The $R^2$ score is very close to one, meaning the obtained model is very accurate – almost perfect!
Congratulations on mastering Multiple Linear Regression! You've effectively bridged the gap from concept to implementation, designing a regression model in Python from scratch.
Prepare for the upcoming lesson to delve more deeply into Regression Analysis. Meanwhile, make sure to practice and refine your newly acquired skills!