Welcome to our journey into the heart of ensemble machine learning with the Random Forest algorithm. As an extension of decision trees, a Random Forest combines a multitude of trees into a "forest." This lesson will equip you to understand and implement a basic Random Forest in Python, focusing on the nuances of tree construction and aggregation within a forest. Let's get started!
Random Forest is a robust ensemble machine learning method that combines many decision trees to solve regression and classification tasks. For classification, each tree 'votes' for a particular class, and the class with the majority of the votes becomes the final prediction of our model.
Random Forests rely significantly on a few core hyperparameters: `n_trees`, the number of trees in the forest (increasing `n_trees` generally improves performance but adds computational cost); `max_depth`, which controls the depth, or number of levels, of the individual trees; and `random_state`, which seeds the randomness in the feature selection and bootstrapping processes when creating each tree, making results reproducible.
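Before we build our own class, here is a quick, illustrative sketch of these knobs using scikit-learn's built-in `RandomForestClassifier` (where `n_trees` is called `n_estimators`); the dataset, seed, and values tried here are our own choices, not part of the lesson:

```python
# Illustrative only: sklearn's built-in forest, evaluated with 5-fold CV.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
for n_trees in (1, 10, 100):
    rf = RandomForestClassifier(n_estimators=n_trees, max_depth=2, random_state=0)
    print(f"n_trees={n_trees:>3}  mean CV accuracy: {cross_val_score(rf, X, y, cv=5).mean():.3f}")
```

A larger forest generally scores at least as well as a single tree, but it takes proportionally longer to fit, which is the trade-off described above.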
A decision tree, the foundational building block of a Random Forest, has a flowchart-like structure: branches denote decision points, and leaves represent class outcomes. A Random Forest's strength lies in the diversity of its trees, each constructed differently to ensure variety across the forest.
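To make the flowchart analogy concrete, here is a minimal sketch that trains a single depth-limited tree on a tiny invented dataset and prints its branches and leaves using scikit-learn's `export_text` helper:

```python
# Minimal sketch: the toy data and feature name are invented for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

X = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])  # one feature
y = np.array([0, 0, 0, 1, 1, 1])                             # two class labels

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["x"]))
# |--- x <= 6.50
# |   |--- class: 0
# |--- x >  6.50
# |   |--- class: 1
```

Each `|---` line with a threshold is a decision point (branch), and each `class:` line is a leaf outcome.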
Implementing our Random Forest begins by importing the libraries:
```python
import numpy as np
from scipy import stats
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
```
We initialize our `RandomForest` class with `__init__`, creating attributes for `n_trees` and `max_depth`, an empty `trees` list that will hold the fitted trees, and a `random_states` array with a unique seed for each tree, derived from the `random_state` argument.
```python
class RandomForest:
    def __init__(self, n_trees=100, max_depth=None, random_state=None):
        self.n_trees = n_trees
        self.max_depth = max_depth
        # Derive one reproducible seed per tree from the forest-level seed
        self.random_states = np.random.RandomState(random_state).randint(0, 10000, size=n_trees)
        self.trees = []
```
Bootstrapping is a statistical method for estimating properties of an estimator by resampling with replacement from the original data sample; it is commonly used to assign measures of accuracy to sample estimates. In a Random Forest, each tree is built on its own bootstrapped dataset, which provides the necessary randomness and variety. The `bootstrapping` method, incorporated into our Random Forest, generates these datasets. Let's recall the code from the previous lesson, extended here with a `seed` argument so that each tree's sample is reproducible:
```python
    def bootstrapping(self, X, y, seed=None):
        # Draw n_samples row indices with replacement; seeding the generator
        # makes each tree's bootstrap sample reproducible.
        rng = np.random.RandomState(seed)
        n_samples = X.shape[0]
        idxs = rng.choice(n_samples, n_samples, replace=True)
        return X[idxs], y[idxs]
```
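As a quick aside with invented numbers, sampling with replacement means some rows appear multiple times while others are left out entirely; on average, only about 63% of the original rows land in any given bootstrap sample:

```python
# Illustration of sampling with replacement on 10 rows.
import numpy as np

idxs = np.random.choice(10, 10, replace=True)
print(idxs)                  # e.g. [3 3 7 0 9 1 1 4 8 3] -- duplicates allowed
print(np.unique(idxs).size)  # typically around 6-7 of the 10 original rows
```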
Then, to 'fit' the model, each iteration generates a bootstrapped dataset (seeded with that tree's entry in `random_states`) and fits a separate decision tree to it, which is then appended to our `trees` list:
```python
    def fit(self, X, y):
        for i in range(self.n_trees):
            # Each tree gets its own bootstrap sample and its own seed
            X_, y_ = self.bootstrapping(X, y, seed=self.random_states[i])
            tree = DecisionTreeClassifier(max_depth=self.max_depth, random_state=self.random_states[i])
            tree.fit(X_, y_)
            self.trees.append(tree)
```
Finally, the `predict` method of our `RandomForest` collects predictions from every tree and returns, for each sample, the class with the majority of the votes:
```python
    def predict(self, X):
        # Stack per-tree predictions into shape (n_trees, n_samples)
        tree_preds = np.array([tree.predict(X) for tree in self.trees])
        # Majority vote across trees; keepdims=False yields a 1D array of labels
        return stats.mode(tree_preds, axis=0, keepdims=False).mode
```
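To see the majority vote in isolation, here is a toy sketch with three invented trees' predictions for four samples; rows are trees, columns are samples:

```python
# Toy illustration of majority voting across trees.
import numpy as np
from scipy import stats

tree_preds = np.array([
    [0, 1, 2, 1],   # tree 1
    [0, 2, 2, 1],   # tree 2
    [1, 1, 2, 0],   # tree 3
])
print(stats.mode(tree_preds, axis=0, keepdims=False).mode)  # [0 1 2 1]
```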
To validate our `RandomForest`'s performance, let's use the widely employed Iris dataset as our testing ground:
```python
from sklearn import datasets
from sklearn.model_selection import train_test_split

iris = datasets.load_iris()
X = iris.data
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)

rf = RandomForest(n_trees=100, max_depth=2, random_state=42)
rf.fit(X_train, y_train)
y_pred = rf.predict(X_test)

print("Accuracy: ", accuracy_score(y_test, y_pred))
```
Here, we load the Iris dataset and split it into training and testing sets. We train (or 'fit') the model on the training set, then predict the classes for the test set. Finally, `accuracy_score` summarizes how well our model's predictions match the actual classes in the test data.
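As an optional sanity check (not part of the lesson's code), we can compare our from-scratch model against scikit-learn's built-in `RandomForestClassifier` on the same split, reusing the variables from the snippet above; the two should reach comparable accuracy:

```python
# Hedged sanity check: sklearn's built-in forest on the same train/test split.
from sklearn.ensemble import RandomForestClassifier

sk_rf = RandomForestClassifier(n_estimators=100, max_depth=2, random_state=42)
sk_rf.fit(X_train, y_train)
print("sklearn accuracy:", accuracy_score(y_test, sk_rf.predict(X_test)))
```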
Congratulations! We've delved deep into the heart of Random Forests, looked at the tree generation process, and engineered a basic Random Forest classifier from scratch using Python. Now it's time for practice to consolidate these concepts. After all, practice is the fuel for mastery! Happy coding!