Hello! Our mission for this lesson is to master the cross-validation process, a critical part of building machine learning models. We will explore what cross-validation is, how to perform it in Python using the sklearn library, and how to interpret the results. Ready? Let's start our journey!
First things first, what does cross-validation mean in machine learning terms? As you know, in supervised learning we need a way to measure how good our model is. A common, simple approach is to divide our data into two sets: one for training and one for testing. However, this approach has a drawback: our evaluation depends on how we happened to split the data. This is where cross-validation comes in handy.
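To see this weakness concretely, here is a minimal sketch using small synthetic data and sklearn's standard train_test_split helper; it is not the lesson's dataset, just a stand-in to show how the accuracy estimate shifts with the split.

```python
# A minimal sketch contrasting single train/test splits, using small
# synthetic count data (a stand-in for the lesson's real features and labels).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(200, 20))   # hypothetical word-count features
y = rng.integers(0, 2, size=200)         # hypothetical binary labels

# Different random_state values give different splits, and therefore
# different accuracy estimates -- the weakness described above.
for seed in (1, 2, 3):
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=seed)
    clf = MultinomialNB().fit(X_train, y_train)
    print(f"Split {seed}: accuracy = {clf.score(X_test, y_test):.3f}")
```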
The core idea behind cross-validation is to divide the dataset into several subsets; the model is then trained on some of these subsets and tested on the remaining ones. We repeat this process several times, changing which subsets are used for training and testing, and in the end we average the model's performance over the different divisions of the dataset. It's named cross-validation because we're "crossing" over our subsets for training and validation.
Cross-validation gives us a more reliable measure of performance than a single train-test split. The most common type is K-fold cross-validation, where we divide the data into K subsets and train the model K times, each time using a different subset as the validation set.
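Before we hand this work over to a library function, here is a rough sketch of what K-fold cross-validation does under the hood; it again uses synthetic count data as a placeholder rather than the lesson's actual dataset.

```python
# A rough sketch of 5-fold cross-validation done "by hand" with KFold,
# using synthetic count data as a stand-in for the real dataset.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.naive_bayes import MultinomialNB

rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(200, 20))
y = rng.integers(0, 2, size=200)

kf = KFold(n_splits=5, shuffle=True, random_state=42)
fold_scores = []

for train_idx, val_idx in kf.split(X):
    # Each pass holds out a different fold for validation and trains on the rest
    clf = MultinomialNB().fit(X[train_idx], y[train_idx])
    fold_scores.append(clf.score(X[val_idx], y[val_idx]))

print("Per-fold accuracy:", [round(s, 3) for s in fold_scores])
print("Average accuracy:", round(sum(fold_scores) / len(fold_scores), 3))
```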
In Python, we can easily perform cross-validation using the sklearn library, specifically the cross_val_score() function. Here's a quick overview of its parameters:
- estimator: the machine learning model we want to evaluate.
- X: the input data.
- y: the target data.
- cv: the number of subsets to divide the data into (the "K" in "K-fold" cross-validation).

We'll be using the example code provided to create a 5-fold cross-validation. The number 5 is a common choice as it provides a good balance between the accuracy of the performance estimate and computational cost.
To see cross-validation in action, let's proceed with an example using a Naive Bayes classifier.
```python
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB

# Initialize a Naive Bayes classifier
clf = MultinomialNB()

# Apply cross-validation on the classifier
scores = cross_val_score(clf, X_tfidf, df['label'], cv=5)
```
In this example, we first initialize a Naive Bayes classifier and then perform a 5-fold cross-validation on our text classification task. The variable scores stores the accuracy for each fold, providing insight into the model's performance across different subsets of the data.
When we perform cross-validation, we obtain a series of performance scores, one for each fold. Here's how we can print these scores and their average to evaluate our model's performance.
```python
# Print the scores for each cross-validation fold
print("Cross-validated scores:", scores)

# Calculate and print the mean of the cross-validation scores
print("Average cross-validated score:", scores.mean())
```
The output will look like this:
```
Cross-validated scores: [0.96502242 0.95426009 0.95780969 0.9551167 0.96229803]
Average cross-validated score: 0.9589013855455635
```
Each number in the list of cross-validated scores represents our model's accuracy on a single fold, illustrating the model's consistency across different parts of the dataset. Observing similar performance across folds, as seen in our output, indicates that our model performs consistently.
By computing the mean of these scores, we get an average cross-validated score of approximately 95.89%. This average is a robust metric representing how our model is likely to perform on unseen data, thus providing a more reliable estimate than a single train/test split.
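If you also want a quick sense of how much the score fluctuates from fold to fold, one optional addition (not part of the lesson's required output) is to print the standard deviation of the scores alongside the mean:

```python
# scores is the NumPy array returned by cross_val_score above,
# so we can report its spread as well as its average
print("Average cross-validated score:", scores.mean())
print("Standard deviation across folds:", scores.std())
```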
Congratulations! Today, you have learned what cross-validation is, how to perform it using the Python sklearn library, and how to interpret the results. The road to mastery is through continuous practice. Don't forget to use this cross-validation technique in your future machine learning projects, particularly for text classification. Through this process, you will gain more familiarity and confidence in using the method. Happy learning!