Lesson 2
Hyperparameter Tuning in Logistic Regression
Intro to Hyperparameters

Welcome back to the fascinating world of machine learning! Today's mission is to enhance model performance through the technique of hyperparameter tuning. Let's start with a quick refresher - what exactly are hyperparameters?

In machine learning, hyperparameters are parameters whose values are set before the training process begins. Unlike model parameters, they are not learned from the data; they are external to the model.

Consider a simple analogy. If you think of your machine learning model as a car, the model parameters are like the internal mechanisms, such as the engine, gears, and tires, which are determined by how the car is built, while the hyperparameters are like external settings, such as the angle of your steering wheel or the position of your seat, which you adjust to suit your preferences or a specific journey.

In the realm of machine learning algorithms, hyperparameters might include the K in K-Nearest Neighbors, the kernel in Support Vector Machines, or C and max_iter in Logistic Regression. In contrast, the weights or coefficients in Linear Regression or Logistic Regression are examples of model parameters, because they are learned during training.
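
For instance, here is a quick sketch of how such hyperparameters are set in sklearn (the specific values below are arbitrary choices for illustration):

Python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# K (n_neighbors) is a hyperparameter of K-Nearest Neighbors
knn = KNeighborsClassifier(n_neighbors=5)

# The kernel is a hyperparameter of Support Vector Machines
svm = SVC(kernel='rbf')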

Let's look at how to define a hyperparameter, C, in a Logistic Regression instance using sklearn.

Python
from sklearn.linear_model import LogisticRegression

# Logistic Regression with 'C' as a hyperparameter
log_reg = LogisticRegression(C=0.1)

In the above code snippet, C is a hyperparameter we manually choose during the creation of the Logistic Regression model. This C is set before the Logistic Regression model is fit to the data and is the inverse of the regularization strength.

Understanding the Role of `C` in Logistic Regression

As we journey deeper into the world of machine learning, we encounter various dials and switches that control our model's behavior. One such control, particularly for Logistic Regression, is C. To appreciate its significance, let's simplify the concept without getting into the details of the math or regularization.

Imagine you're training a machine to distinguish between cats and dogs. You hand it a bunch of photos, each labeled as either a cat or a dog. The machine, eager to please, starts noting down every detail—whisker lengths, fur color, eye size—to make its decisions.

Now, C is like your way of telling this machine how much attention to pay to these details. A high C means you're encouraging the machine to take every little detail seriously, aiming for perfection with the training photos. It's akin to a perfectionist mindset, trying to nail down everything precisely, which might make the model very complex.

On the flip side, a low C is like advising the machine to take a step back and not to obsess over every small detail. It suggests that being too meticulous might not be necessary and that a simpler approach, focusing on the broader strokes, might be better. This nudges the machine towards creating a simpler model, one that's not too hung up on capturing every nuance in the training set.

This concept might seem abstract now, but it's all about finding the right balance. Too high a C, and your model might become an overachiever on the training set but fail to generalize to new photos of cats and dogs it hasn't seen before. Too low, and the model might become too simplistic, missing out on important distinctions between cats and dogs.
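
To see this balance in action, here is a minimal sketch using a synthetic dataset (assuming sklearn's make_classification as a stand-in for the cat and dog photos; exact scores will vary):

Python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A small synthetic dataset stands in for the labeled photos
X, y = make_classification(n_samples=200, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# A "perfectionist" model (high C) versus a "broad strokes" model (low C)
for C in [100, 0.001]:
    model = LogisticRegression(C=C, max_iter=10000).fit(X_train, y_train)
    print(f"C={C}: train accuracy={model.score(X_train, y_train):.2f}, "
          f"test accuracy={model.score(X_test, y_test):.2f}")

You would typically see the high-C model fit the training set more closely; whether that translates into better accuracy on unseen data is exactly the balance described above.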

As we move forward and introduce more concepts, the strategic importance of C—in helping us strike this balance between simplicity and complexity for optimal model performance—will become even clearer.

Preliminary Data Setup

Recall the Wisconsin Breast Cancer Dataset that we're using for our machine learning lessons? Let's get it up and running quickly. We'll load the dataset, split it into training and testing sets, and then scale it using the StandardScaler for optimal model performance.

Python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Load dataset
data = load_breast_cancer()
X, y = data.data, data.target

# Split into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Scale the data for optimal performance
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
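
As a quick, optional sanity check, you can print the shapes of the resulting arrays; with this dataset's 569 samples and 30 features, a 70/30 split yields 398 training rows and 171 testing rows:

Python
# Confirm the split: expected shapes are (398, 30) and (171, 30)
print(X_train_scaled.shape, X_test_scaled.shape)
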
Introduction to GridSearchCV

GridSearchCV stands for Grid Search with Cross-Validation. It is a tool that simplifies the process of hyperparameter tuning, helping ensure that the model you train produces the best results possible. At its core, GridSearchCV systematically works through multiple combinations of hyperparameter values, cross-validating as it goes to determine which combination gives the best performance. Cross-validation is a model evaluation method that is more robust than a single split into training and testing sets, especially when data is limited.
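
To make cross-validation concrete, here is a minimal sketch using sklearn's cross_val_score (it runs on the unscaled data directly, so the numbers are purely illustrative):

Python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# 5-fold cross-validation: the data is split into 5 folds, and each fold
# takes one turn as the validation set while the rest are used for training
scores = cross_val_score(LogisticRegression(max_iter=10000), X, y, cv=5)
print("Fold accuracies:", scores.round(3))
print("Mean accuracy:", scores.mean().round(3))

Averaging across the five folds gives a more stable performance estimate than any single train/test split.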

Imagine grid search as experimenting with different amounts of flour, butter, and sugar while perfecting a cookie recipe. You try out every possible combination of these three ingredients until you bake the most delicious cookie. In a similar vein, GridSearchCV tries out all the possible combinations of hyperparameters you define. The "CV" in its name emphasizes that each combination is rigorously evaluated through cross-validation, ensuring the selection of the optimal settings for your model.

By employing GridSearchCV, you can automate the process of tuning your model's hyperparameters. This not only saves valuable time but also helps in refining the model to achieve better accuracy.

Hyperparameter Tuning for Logistic Regression

When applying this to hyperparameter tuning in Logistic Regression, we'll focus on tuning the C hyperparameter.

Python
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression

# Define the parameter values to be searched
param_grid = {'C': [0.001, 0.01, 0.1, 1, 10, 100]}

# Instantiate the grid
grid_search = GridSearchCV(LogisticRegression(max_iter=10000), param_grid, cv=5)

# Fit the grid with data
grid_search.fit(X_train_scaled, y_train)

By executing the above code, we have tasked GridSearchCV with methodically testing each specified C value for the Logistic Regression model, employing cross-validation for each value to pinpoint the one that maximizes the model's accuracy. This streamlined process automates the optimization of our hyperparameter, C, refining our model's effectiveness in making predictions. For scenarios requiring multiple hyperparameters to be tuned simultaneously, simply expand the param_grid, e.g., {'C': [0.001, 0.01, 0.1, 1, 10, 100], 'penalty': ['l1', 'l2']}, to explore additional dimensions of model configuration. One caveat: the 'l1' penalty requires a solver that supports it, such as 'liblinear' or 'saga', as in the sketch below.
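
Here is a sketch of such a multi-hyperparameter grid (LogisticRegression's default solver does not support the 'l1' penalty, so a compatible solver is specified explicitly):

Python
# A two-hyperparameter grid: 6 C values x 2 penalties = 12 combinations,
# each evaluated with 5-fold cross-validation
param_grid = {
    'C': [0.001, 0.01, 0.1, 1, 10, 100],
    'penalty': ['l1', 'l2'],
}
grid_search = GridSearchCV(
    LogisticRegression(solver='liblinear', max_iter=10000),
    param_grid,
    cv=5,
)
grid_search.fit(X_train_scaled, y_train)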

Checking the Best Parameters

After GridSearchCV is fitted with data, the instance retains the combination of parameters that provided the best results. It offers an easy way to access these optimal parameters.

Python
# Printing the best parameters
print("Best parameters:", grid_search.best_params_)

output

Best parameters: {'C': 1}
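
Beyond best_params_, the fitted GridSearchCV instance also exposes the best cross-validated score and, since it refits the best model on the full training set by default, a ready-to-use estimator you can evaluate on the held-out test data:

Python
# Cross-validated accuracy of the best parameter combination
print("Best CV accuracy:", grid_search.best_score_)

# best_estimator_ is the best model refit on the whole training set;
# evaluate it on the held-out test set
test_accuracy = grid_search.best_estimator_.score(X_test_scaled, y_test)
print("Test accuracy:", test_accuracy)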

Through this detailed exploration, you've gained the ability to tune hyperparameters, which can be the difference between a good model and a great one! This understanding provides a springboard for you to experiment with hyperparameter tuning for other models and hyperparameters. Happy coding!
