Welcome to another enriching and interactive session. In today's module, we will delve deep into the topic of Evaluating the Predictive Performance of Models. We have successfully crafted and implemented Linear and Logistic Regression models on the Wine Quality Dataset; now it's time we focus on assessing these models' performance. Our mission in this lesson involves comprehending various evaluation metrics for regression and classification models, applying them practically with Python, and efficiently handling potential problems such as overfitting and underfitting in our models.
Model evaluation is a cornerstone in the field of machine learning. It empowers us to "grade" our model's predictions, guiding us in enhancing its performance by adjusting its parameters. This process allows us to choose the most suitable model for our task. It might be helpful to envision model evaluation as a scorecard, where each metric gives you a score on various aspects like accuracy of prediction, error rate, precision, and recall, amongst others. Excited? Let's jump right into it!
In machine learning, evaluation metrics are essentially the 'rulers' used to quantify the predictive prowess of our models. Depending on whether our target variable is continuous or categorical, we select the metrics best suited to quantify the model's performance.
For regression models, we typically utilize metrics like Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared.
Let's delve a bit deeper into each of these regression metrics:
Mean Squared Error (MSE): This metric quantifies the average of the squared prediction errors, where each error is the difference between an actual and a predicted value. The lower the MSE, the better the model performs.
Root Mean Squared Error (RMSE): This metric is simply the square root of the MSE. It carries the same units as the target variable and is often preferred because, like MSE, it penalizes larger errors more heavily than MAE does.
Mean Absolute Error (MAE): As the name implies, MAE measures the average of the absolute differences between our actual and predicted values. This metric is particularly helpful when we wish to know exactly how much our predictions deviate on average.
R-squared: The coefficient of determination, known as R-squared, quantifies the proportion of the total variance of the target variable that can be accounted for by our regression model. Higher R-squared values indicate smaller differences between observed and predicted response values. (A short hand-computed sketch of these formulas follows below.)
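To connect these definitions to actual numbers, here is a minimal NumPy sketch that computes each metric directly from its formula, using the same small illustrative arrays as the scikit-learn example later in this lesson:
Python

import numpy as np

# Illustrative values, matching the scikit-learn example below
actual = np.array([3.5, 2.9, 2.6])
predicted = np.array([3.6, 2.7, 2.4])

errors = actual - predicted

mse = np.mean(errors ** 2)       # average of squared errors
rmse = np.sqrt(mse)              # square root of MSE, in the same units as the target
mae = np.mean(np.abs(errors))    # average of absolute errors
r2 = 1 - np.sum(errors ** 2) / np.sum((actual - actual.mean()) ** 2)  # share of total variance explained

print(mse, rmse, mae, r2)
# Roughly: MSE 0.03, RMSE 0.173, MAE 0.167, R-squared 0.786 -- matching the sklearn results below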
Next, let's use Python's scikit-learn library to compute these metrics and evaluate the performance of our linear regression model, which predicts wine quality.
The metrics module of the widely used sklearn package in Python has functions to compute all of these metrics seamlessly. We will perform these calculations for the predicted wine qualities from our Linear Regression model.
Python

from sklearn import metrics
import numpy as np

# In our example, fitted is a NumPy array of wine qualities predicted by our linear regression model
fitted = np.array([3.6, 2.7, 2.4])

# While actual is a NumPy array containing the real wine qualities
actual = np.array([3.5, 2.9, 2.6])

# For calculating MAE, pass the actual and predicted arrays to mean_absolute_error()
mae = metrics.mean_absolute_error(actual, fitted)
print(f"Mean Absolute Error (MAE): {mae}")
# Mean Absolute Error (MAE): 0.16666666666666666

# For calculating MSE, use the mean_squared_error function
mse = metrics.mean_squared_error(actual, fitted)
print(f"Mean Squared Error (MSE): {mse}")
# Mean Squared Error (MSE): 0.029999999999999995

# RMSE is calculated as the square root of MSE, using the np.sqrt() function
rmse = np.sqrt(mse)
print(f"Root Mean Squared Error (RMSE): {rmse}")
# Root Mean Squared Error (RMSE): 0.1732050807568877

# For calculating the R-squared value, use the r2_score function
r2 = metrics.r2_score(actual, fitted)
print(f"R-squared: {r2}")
# R-squared: 0.7857142857142857
The output from this script provides us with MAE, MSE, RMSE, and R-squared for our predicted values from the Linear Regression model. These quantities help us assess the quality and reliability of our model's predictions.
For our Logistic Regression Model, which predicts whether a wine is good or not good, we will focus on classification metrics. These include Accuracy, Precision, Recall, F1-score, and Area Under the ROC Curve (AUC-ROC).
Let's acquire a basic understanding of these:
Accuracy: This metric measures the proportion of correctly predicted observations to the total number of observations in the dataset.
Precision: Precision helps us understand the exactness or quality of our model when it predicts positive classes.
Recall (Sensitivity): Recall, also known as sensitivity, reveals how well our model finds all the positive class data points.
F1 Score: The F1 score is the harmonic mean of Precision and Recall, aiming to find the best balance between them.
Area Under the ROC Curve (AUC-ROC): This metric measures the entire two-dimensional area underneath the ROC curve, which is traced out by plotting the true positive rate (y-axis) against the false positive rate (x-axis) as we vary the discrimination threshold. (The sketch after this list shows how the first four of these metrics can be computed by hand.)
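To make these definitions concrete, here is a minimal hand-computed sketch. The two label arrays are made up purely for illustration; the code simply counts true/false positives and negatives and applies the formulas above:
Python

import numpy as np

# Illustrative labels only: 1 stands for 'good', 0 for 'not good'
actual = np.array([1, 0, 1, 1, 0, 1])
predicted = np.array([1, 0, 0, 1, 1, 1])

# Count the four confusion-matrix cells
tp = np.sum((predicted == 1) & (actual == 1))  # true positives
fp = np.sum((predicted == 1) & (actual == 0))  # false positives
tn = np.sum((predicted == 0) & (actual == 0))  # true negatives
fn = np.sum((predicted == 0) & (actual == 1))  # false negatives

accuracy = (tp + tn) / (tp + tn + fp + fn)          # correct predictions / all predictions
precision = tp / (tp + fp)                          # how many predicted positives are truly positive
recall = tp / (tp + fn)                             # how many actual positives were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of precision and recall

print(accuracy, precision, recall, f1)
# For this toy example: accuracy 0.667, precision 0.75, recall 0.75, F1 0.75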
Now, we can use our logistic regression model to predict if a wine's quality is good or not good and then calculate these metrics.
For calculating classification metrics, we'll once again use Python's scikit-learn package. Suppose you've built a logistic regression model and made some predictions on the test data:
Python

from sklearn import metrics
import numpy as np

# Let y_test be a NumPy array with the actual wine quality classes ('good' or 'not good') for the test dataset
y_test = np.array(['not good', 'good', 'good', 'not good', 'good'])

# And let pred be a NumPy array with the classes predicted by our model for the test dataset
pred = np.array(['not good', 'good', 'not good', 'good', 'good'])

# For calculating Accuracy
accuracy = metrics.accuracy_score(y_test, pred)
print(f"Accuracy: {accuracy}")

# For calculating Precision, use the precision_score function
# Note: with string labels, we specify the positive class explicitly via the pos_label parameter
precision = metrics.precision_score(y_test, pred, pos_label="good")
print(f"Precision: {precision}")

# For calculating Recall
recall = metrics.recall_score(y_test, pred, pos_label="good")
print(f"Recall: {recall}")

# For calculating the F1 Score
f1 = metrics.f1_score(y_test, pred, pos_label="good")
print(f"F1 Score: {f1}")

# For computing AUC-ROC, we need the predicted probabilities of the positive class ('good');
# let's assume y_proba is an array of these probabilities.
# roc_auc_score has no pos_label argument, so we pass a 0/1 indicator where 1 marks the 'good' class.
y_proba = np.array([0.1, 0.7, 0.3, 0.8, 0.7])
auc_roc = metrics.roc_auc_score(y_test == 'good', y_proba)
print(f"AUC-ROC: {auc_roc}")
In the above code, the pred array contains the predicted classes for the test data, and y_test holds the actual classes. The model performance metrics are calculated for these predicted and actual classes.
Time to apply what we've been learning! Let’s evaluate a machine learning model using the Wine Quality dataset.
Python

import datasets
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn import metrics
import numpy as np

# Load the Red Wine Quality Data
wine = datasets.load_dataset('codesignal/wine-quality', split='red')
wine = pd.DataFrame(wine)

# Separate Features and Target
X = wine.drop('quality', axis=1)
Y = wine['quality']

# Split the data into training and test data
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)

# Train the Model
model = LinearRegression()
model.fit(X_train, Y_train)

# Make predictions
Y_pred = model.predict(X_test)

# Calculate metrics
mae = metrics.mean_absolute_error(Y_test, Y_pred)
mse = metrics.mean_squared_error(Y_test, Y_pred)
rmse = np.sqrt(mse)
r2 = metrics.r2_score(Y_test, Y_pred)

# Print metrics
print(f"Mean Absolute Error (MAE): {mae}")
# Mean Absolute Error (MAE): 0.4696330928661111
print(f"Mean Squared Error (MSE): {mse}")
# Mean Squared Error (MSE): 0.384471197820124
print(f"Root Mean Squared Error (RMSE): {rmse}")
# Root Mean Squared Error (RMSE): 0.6200574149384265
print(f"R-squared: {r2}")
# R-squared: 0.32838876395802286
In machine learning, balance is crucial. If your model performs well on the training data but poorly on unseen data (such as validation and test datasets), it may be overfitting. This is similar to trying to ace a specific test by memorizing all the answers without understanding the concepts, which leads to poor performance on other tests. The problem arises because the model learns the noise in the training data rather than the signal.
Conversely, we have underfitting. An underfitted model performs poorly on both training and unseen data because it hasn't learned the underlying pattern of the data.
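One quick, informal way to spot both problems is to compare the same metric on the training data and on held-out data. The sketch below assumes the X_train/X_test/Y_train/Y_test split from the wine example above and deliberately uses an unconstrained decision tree (a model prone to overfitting) purely for illustration; the exact numbers will depend on your data:
Python

from sklearn.tree import DecisionTreeRegressor
from sklearn import metrics

# A very deep tree can effectively memorize the training data (overfit)
tree = DecisionTreeRegressor(max_depth=None, random_state=0)
tree.fit(X_train, Y_train)

train_r2 = metrics.r2_score(Y_train, tree.predict(X_train))
test_r2 = metrics.r2_score(Y_test, tree.predict(X_test))

# A large gap between training and test scores suggests overfitting;
# low scores on both suggest underfitting.
print(f"Training R-squared: {train_r2}")
print(f"Test R-squared: {test_r2}")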
In subsequent lessons, we will explore these concepts in greater depth and examine how to fine-tune our models to prevent overfitting and underfitting.
Cross-validation transcends the traditional train-test split strategy and ensures that our model evaluation is unbiased. It accomplishes this by partitioning the dataset into multiple 'folds'. Each iteration holds out one fold as the test set and trains the model on the remaining folds, repeating this process for each fold. This technique guarantees that every data point gets to be part of the training and test sets, providing a more generalized and robust model evaluation method.
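To make the fold mechanics concrete, here is a minimal sketch that performs the partitioning explicitly with scikit-learn's KFold. It assumes X and y are the pandas feature matrix and target from the wine example above (there called X and Y), and clf is a model such as LinearRegression:
Python

from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression
from sklearn import metrics

clf = LinearRegression()
kf = KFold(n_splits=5, shuffle=True, random_state=0)

# Each iteration holds out one fold as the test set and trains on the remaining four
for fold, (train_idx, test_idx) in enumerate(kf.split(X), start=1):
    clf.fit(X.iloc[train_idx], y.iloc[train_idx])
    fold_r2 = metrics.r2_score(y.iloc[test_idx], clf.predict(X.iloc[test_idx]))
    print(f"Fold {fold} R-squared: {fold_r2}")

In practice, you rarely need to write this loop yourself, as the next snippet shows.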
In Python, implementing cross-validation is as straightforward as calling a function, thanks to the scikit-learn library. Here's a simple example demonstrating how to implement 5-fold cross-validation:
Python

from sklearn.model_selection import cross_val_score

# clf represents an instance of a machine learning model you've already constructed (e.g., clf = LinearRegression())
scores = cross_val_score(clf, X, y, cv=5)
In this snippet, cv specifies the number of folds, so scores holds five scores, as we're performing 5-fold cross-validation. You'll notice that these five scores might vary slightly from each other because different subsets of the data are held out as the test set in each iteration, providing a more generalized measure of model performance.
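A common way to summarize these five numbers is to report their mean and standard deviation, for example:
Python

# Summarize the cross-validation scores from the snippet above
print(f"Scores: {scores}")
print(f"Mean score: {scores.mean():.3f} (+/- {scores.std():.3f})")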
Well done! You have explored the assessment of predictive performance for regression and classification models. We've unraveled and understood evaluation metrics such as MSE, MAE, Accuracy, Precision, Recall, and many others. We used sklearn in Python to compute these metrics, quantifying the performance of our models. Moreover, we ventured into overfitting, underfitting, and cross-validation.
Next, there will be some engaging practice exercises waiting for you. These exercises will allow you to apply these skills, giving you hands-on experience evaluating real-world models. So buckle up because we are just starting with the fascinating world of machine learning!