Welcome to this segment of Introduction to Modeling Techniques for Text Classification! This part focuses on two core preprocessing techniques in modeling: Train-Test Split and Stratified Cross-Validation.
The rails of any machine learning model are laid by creating an effective split of the dataset and ensuring class balance. You'll not only learn about these core concepts but also implement them using Python's powerful scikit-learn library. Using these techniques, you'll split the SMS Spam Collection dataset for effective text classification later in the course.
In real life, as you browse your inbox, you come across both legitimate (`ham`) and promotional or unsolicited (`spam`) messages. Machine learning models help distinguish between these by labeling an incoming message as `spam` or `ham`. A good model is crucial for avoiding a cluttered inbox.
Let's start by loading the dataset. The `datasets` library can pull the data directly, and we'll convert it into a pandas DataFrame for easier data manipulation.
```python
# Import necessary libraries
import datasets
import pandas as pd

# Load the dataset
spam_dataset = datasets.load_dataset('codesignal/sms-spam-collection', split='train')
spam_dataset = pd.DataFrame(spam_dataset)

# Display the first few rows of the dataset
print(spam_dataset.head(3))
```
The output will be:
```
  label                                            message
0   ham  Go until jurong point, crazy.. Available only ...
1   ham  Ok lar... Joking wif u oni...
2  spam  Free entry in 2 a wkly comp to win FA Cup fina...
```
This output displays the first three rows of the dataset: two `ham` messages and one `spam` message. Each message is labeled as either `ham` or `spam` under the `label` column, indicating the class of each message.
By running the above code, you have loaded the SMS Spam Collection dataset, a collection of 5574 text messages each classified as either `ham` or `spam`, into a pandas DataFrame, a data structure ideal for data manipulation tasks. It's crucial to familiarize yourself with the dataset before further processing, as this provides a foundation for the preprocessing tasks ahead.
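One quick way to familiarize yourself with the data is to check how many messages fall into each class. The sketch below uses a tiny hypothetical DataFrame in place of the real dataset (the column names match the real data; the rows are made up for illustration). In the actual SMS Spam Collection, `ham` messages substantially outnumber `spam` messages, which is why class balance will matter later.

```python
import pandas as pd

# Hypothetical stand-in for the real dataset: same columns, made-up rows
spam_dataset = pd.DataFrame({
    "label": ["ham", "ham", "spam", "ham", "spam", "ham"],
    "message": ["msg 1", "msg 2", "msg 3", "msg 4", "msg 5", "msg 6"],
})

# Count how many messages belong to each class
print(spam_dataset["label"].value_counts())
```

Running `value_counts()` on the real dataset's `label` column reveals the class imbalance at a glance.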
Before we start our journey of text classification, let's understand Train-Test Split. It is a method used to separate our dataset into two parts — a training set and a test set. The training set is what our machine learning model trains on, while the test set is used to evaluate the performance of our trained model.
But why do we split our dataset? Evaluating a model on the same data it trained on would reward mere memorization; a held-out test set lets us check that the model predicts unseen data robustly, giving an honest measure of its generalizability.
Let's implement train-test split on the data:
```python
from sklearn.model_selection import train_test_split

# Define X (input features) and Y (output labels)
X = spam_dataset["message"]
Y = spam_dataset["label"]

# Perform the train test split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=42)
```
By specifying `test_size` as 0.2, we're splitting our data so that 80% of it goes to training and the remaining 20% is used for testing.
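As a quick sanity check, the split sizes follow directly from `test_size`: scikit-learn rounds the test share up to a whole number of samples and assigns the rest to training.

```python
import math

n_samples = 5574   # messages in the SMS Spam Collection
test_size = 0.2

# scikit-learn computes the test count as ceil(n_samples * test_size)
n_test = math.ceil(n_samples * test_size)
n_train = n_samples - n_test

print(n_train, n_test)  # 4459 1115
```

These counts match the ones we'll see when we validate the split below.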
While a plain random split would have worked, why did we choose stratification? Because we want both our training and test datasets to preserve the proportions of the `spam` and `ham` classes found in the full dataset. This strategy is especially helpful when we have an imbalanced dataset, where one class heavily outnumbers the other.
Let's revise our train-test split and apply stratified cross-validation:
```python
# Perform the train test split using stratified sampling
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=42, stratify=Y)
```
Now that our data is prepared, let's validate our split:
```python
# Display the number of samples in training and test datasets
print(f"Training dataset: {len(X_train)} samples")
print(f"Test dataset: {len(X_test)} samples")
```
The output will be:
```
Training dataset: 4459 samples
Test dataset: 1115 samples
```
This output confirms the successful split of our dataset into training and test datasets, with 4459 samples allocated for training and 1115 samples designated for testing, while the stratification preserves the proportional representation of both classes in each set.
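To see concretely what `stratify` buys us, the sketch below builds a small imbalanced toy dataset (not the real SMS data) with an 80/20 class ratio, splits it with stratification, and confirms that both splits keep that ratio exactly.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy imbalanced dataset: 80 "ham" labels and 20 "spam" labels
labels = ["ham"] * 80 + ["spam"] * 20
messages = [f"msg {i}" for i in range(100)]

X_train, X_test, Y_train, Y_test = train_test_split(
    messages, labels, test_size=0.2, random_state=42, stratify=labels
)

# Both splits preserve the original 80/20 class proportions
print(pd.Series(Y_train).value_counts(normalize=True))
print(pd.Series(Y_test).value_counts(normalize=True))
```

Without `stratify`, a small random test set drawn from imbalanced data can easily end up with too few (or even zero) minority-class samples, distorting evaluation.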
Great work! You've now acquired a keen understanding of train-test split and stratified cross-validation, two fundamental data preprocessing techniques. As we delve into the next parts of the course, where we teach Naive Bayes, SVMs, Decision Trees, and Random Forests for text classification, this understanding will prove crucial. Do stick around for the practice exercises to reinforce these foundational concepts as you move on in your journey to becoming a proficient Natural Language Processing Engineer.