Lesson 5
Balancing Act: Overfitting and Underfitting in Machine Learning Models
Understanding Overfitting and Underfitting in Predictive Modeling

In this lesson, we will deepen our understanding of two common challenges faced when training machine learning models: overfitting and underfitting.

First, let's define these terms. Overfitting happens when a model learns the training data so well that it captures even the irrelevant details and noise. As a result, it performs well on the training data but fails on unseen (test) data, because it cannot generalize the patterns it learned to new, real-world data.

By contrast, underfitting happens when a model performs poorly on both the training and test data because it cannot capture the underlying pattern of the data. This is usually because the model is too simple, and it likewise fails to generalize to unseen data. In terms of error rates, overfitting yields a low training error but a high test error, while underfitting yields a high error on both.

Avoiding Overfitting and Underfitting

Finding the balance between overfitting and underfitting while training models is crucial. Here are some techniques to avoid overfitting or underfitting:

  • Regularization: Regularization techniques add a penalty term to the model's error to limit how complex the model is allowed to become. They are effective at preventing overfitting because they push the model toward simpler, more general solutions.

  • Adding More Data: A larger training dataset can help reduce overfitting; the more data we have, the better the model can learn the true underlying patterns and generalize to unseen data.

  • Early Stopping: Early stopping is a technique to avoid overfitting by halting the training once the model's performance begins to degrade on a held-out validation set.

  • Cross-Validation: Cross-validation splits the dataset into multiple parts, training on some and validating on others, to assess model consistency and detect overfitting or underfitting across varied data subsets (see the sketch after this list).

The key is to strike a fair trade-off between bias (underfitting) and variance (overfitting) so that your model performs well on unseen data.
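
To make cross-validation concrete, here is a minimal sketch using scikit-learn's cross_val_score. The dataset sizes and the C value are illustrative choices for this example, not recommendations:

Python
# Minimal cross-validation sketch (illustrative sizes and C value)
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

# Generate a small toy regression dataset
X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=42)

# 5-fold cross-validation: train on four folds, validate on the fifth, rotating
model = SVR(kernel='rbf', C=1.0)
scores = cross_val_score(model, X, y, cv=5, scoring='neg_root_mean_squared_error')

print(f"RMSE per fold: {-scores}")
print(f"Mean RMSE: {-scores.mean():.2f}")

If the per-fold errors vary wildly, or the training error sits far below these validation errors, that is a signal of overfitting; uniformly high errors on every fold suggest underfitting.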

Building Overfitted and Underfitted SVM Regressors

Now let's dive into a practical demonstration using a toy dataset and Support Vector Machines (SVMs) for regression. Our focus here is to illustrate overfitting and underfitting by tweaking the parameters of SVM regressors.

In SVM regression, the parameter 'C' plays a pivotal role. It is a regularization parameter that directly influences the model's complexity and its ability to adapt to the data: 'C' controls how strongly the SVM prioritizes fitting each individual data point. When 'C' is high, the model tries to fit the training data as closely as possible, potentially overfitting by chasing minor fluctuations and noise. When 'C' is low, the model tolerates larger errors on individual points, yielding a simpler, more general model, but at the risk of underfitting if it becomes too simplistic to capture the underlying trend. Thus, 'C' acts as a lever for the trade-off between capturing the data's nuances (low bias, high variance) and maintaining generalizability to unseen data (high bias, low variance).

  • A high 'C' value can lead to overfitting because the algorithm penalizes training errors more aggressively. The model tries hard to fit every training point, capturing noise along with signal, which usually hurts its ability to generalize to unseen data.
  • Conversely, a low 'C' value can lead to underfitting by easing the penalty on training errors, which makes the model too simplistic. Such a model may miss the essential complexities of the dataset, performing poorly on both the training and unseen data.
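
For readers who like to see where 'C' lives in the math, this is the standard epsilon-insensitive SVR objective (the textbook formulation, not anything specific to our code below):

\[
\min_{w,\,b,\,\xi,\,\xi^*} \;\; \tfrac{1}{2}\lVert w \rVert^2 + C \sum_{i=1}^{n} \left( \xi_i + \xi_i^* \right)
\]
subject to
\[
y_i - \langle w, x_i \rangle - b \le \varepsilon + \xi_i, \qquad
\langle w, x_i \rangle + b - y_i \le \varepsilon + \xi_i^*, \qquad
\xi_i,\ \xi_i^* \ge 0.
\]

The first term rewards simple (flat) models, while the second penalizes training errors that fall outside the epsilon-tube; 'C' scales that penalty, so a large 'C' prioritizes fitting the points and a small 'C' prioritizes simplicity.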

Let's explore this behavior with hands-on experimentation:

Python
# Import necessary libraries
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error
from math import sqrt

# Generate a toy dataset
X, y = make_regression(n_samples=1000, n_features=20, noise=0.1, random_state=42)

# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create and fit an overfitted SVM regressor with high C
overfitted_model = SVR(kernel='rbf', C=1000.0)
overfitted_model.fit(X_train, y_train)

# Create and fit an underfitted SVM regressor with low C
underfitted_model = SVR(kernel='rbf', C=0.001)
underfitted_model.fit(X_train, y_train)

With our models built, we're prepared to move on to the evaluation phase, which will allow us to assess the implications of overfitting and underfitting in a tangible, measurable way.

Evaluating Our Models: Overfitting vs. Underfitting

After training our models, we should assess their performance on both the training and test datasets; we'll measure predictive accuracy with the Root Mean Squared Error (RMSE).

Python
# Dictionary of models for easier iteration
models = {'Overfitted model': overfitted_model, 'Underfitted model': underfitted_model}

for model_name, model in models.items():
    # Generate predictions
    train_preds = model.predict(X_train)
    test_preds = model.predict(X_test)

    # Calculate RMSE for both training and test sets
    train_rmse = sqrt(mean_squared_error(y_train, train_preds))
    test_rmse = sqrt(mean_squared_error(y_test, test_preds))

    # Print the results
    print(f"{model_name} RMSE - Training: {train_rmse:.2f}, Test: {test_rmse:.2f}")
Plain text
Overfitted model RMSE - Training: 0.46, Test: 49.34
Underfitted model RMSE - Training: 194.33, Test: 196.74

The overfitted model's RMSE on the training data is notably low at 0.46, reflecting an excellent fit to the training set. However, the drastic jump to 49.34 on the test set reveals poor generalization. This discrepancy indicates that the model has learned the training data's noise and outliers rather than the underlying patterns that carry over to new data. This is the hallmark of overfitting: a model perfectly attuned to its training data but ill-prepared for anything outside it.

In contrast, the underfitted model exhibits high RMSE values on both the training (194.33) and test (196.74) sets, performing uniformly poorly. This points to excessive simplicity: the model cannot capture the complex patterns in the data and so never learns them effectively. Consistently poor accuracy on both training and test data is the clear signature of underfitting, where the model lacks the sophistication needed to make accurate predictions.

These results show why balancing a model's complexity is critical for avoiding both overfitting and underfitting, and why tuning parameters such as 'C' in SVM regression matters for building a model that makes accurate, generalizable predictions. A common way to pick 'C' systematically is a cross-validated grid search, sketched below. The experiment reinforces the theoretical concepts discussed earlier and demonstrates how parameter choices shape a model's ability to generalize from training data to unseen data.
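
Here is a hedged sketch of tuning 'C' with scikit-learn's GridSearchCV; the candidate values are illustrative, and the snippet reuses X_train and y_train from the code above:

Python
# Tune C via cross-validated grid search (candidate values are illustrative)
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

param_grid = {'C': [0.001, 0.1, 1.0, 10.0, 100.0, 1000.0]}
search = GridSearchCV(SVR(kernel='rbf'), param_grid, cv=5,
                      scoring='neg_root_mean_squared_error')
search.fit(X_train, y_train)  # reuses the training split from earlier

print(f"Best C: {search.best_params_['C']}")
print(f"Best cross-validated RMSE: {-search.best_score_:.2f}")

Because the search scores each candidate on held-out folds rather than on the data it was fit to, it favors whichever 'C' generalizes best, not whichever fits the training data most closely.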

Lesson Recap and Next Steps

Well done! You've mastered the conceptual underpinnings of overfitting and underfitting in predictive modeling. Now you can recognize when and why these phenomena occur, and more importantly, you know how to prevent them. This knowledge is vital for building effective and robust predictive models.

Up next, you'll find exercises that provide an excellent opportunity to put these concepts into practice. Remember, hands-on experience is irreplaceable, so take the upcoming tasks seriously and give them your best!
