Gradient Boosting With XGBoost
In this report, you will learn how to build and optimize models with gradient boosting. This method dominates many Kaggle competitions and achieves state-of-the-art results on a variety of datasets.
This article is part of a series clarifying some of Kaggle's terms, definitions, and competitions, as well as adding visualizations.
Introduction
For much of this course, you have made predictions with the random forest method, which achieves better performance than a single decision tree simply by averaging the predictions of many decision trees.
We refer to the random forest method as an "ensemble method". By definition, ensemble methods combine the predictions of several models (e.g., several trees, in the case of random forests).
Next, we'll learn about another ensemble method called gradient boosting.
Gradient Boosting
Gradient boosting is a method that goes through cycles to iteratively add models into an ensemble.
It begins by initializing the ensemble with a single model, whose predictions can be pretty naive. (Even if its predictions are wildly inaccurate, subsequent additions to the ensemble will address those errors.)
Then, we start the cycle:
- First, we use the current ensemble to generate predictions for each observation in the dataset. To make a prediction, we add the predictions from all models in the ensemble.
- These predictions are used to calculate a loss function (like mean squared error, for instance).
- Then, we use the loss function to fit a new model that will be added to the ensemble. Specifically, we determine model parameters so that adding this new model to the ensemble will reduce the loss. (Side note: The "gradient" in "gradient boosting" refers to the fact that we'll use gradient descent on the loss function to determine the parameters in this new model.)
- Finally, we add the new model to the ensemble, and ...
- ... repeat! (A minimal code sketch of this loop appears below.)
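To make the cycle concrete, here is a minimal sketch of gradient boosting for squared-error loss (where the negative gradient is simply the residual), built from scikit-learn decision trees. The function names, the `max_depth=3` trees, and the shrinkage factor (`learning_rate`, which we revisit later) are illustrative choices for this sketch, not XGBoost's actual implementation.
```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_boosted_ensemble(X, y, n_rounds=100, learning_rate=0.1):
    # Step 0: initialize the ensemble with a single naive model -- the mean of the target.
    base_prediction = float(np.mean(y))
    prediction = np.full(len(y), base_prediction)
    trees = []
    for _ in range(n_rounds):
        # For squared-error loss, the negative gradient is simply the residual.
        residuals = np.asarray(y) - prediction
        # Fit a new small tree to the residuals (the "new model" in the cycle).
        tree = DecisionTreeRegressor(max_depth=3)
        tree.fit(X, residuals)
        # Add the new model's (scaled) predictions to the ensemble's predictions.
        prediction = prediction + learning_rate * tree.predict(X)
        trees.append(tree)
    return base_prediction, trees

def predict_boosted_ensemble(base_prediction, trees, X, learning_rate=0.1):
    # The ensemble prediction is the initial guess plus the scaled output of every tree.
    return base_prediction + learning_rate * sum(tree.predict(X) for tree in trees)
```
Running `fit_boosted_ensemble` on training data and then `predict_boosted_ensemble` on validation data mimics, very roughly, what XGBoost does far more efficiently and with many more refinements.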
Example
We begin by loading the training and validation data in X_train, X_valid, y_train, and y_valid.
In this example, you'll work with the XGBoost library. XGBoost stands for extreme gradient boosting, which is an implementation of gradient boosting with several additional features focused on performance and speed. (Scikit-learn has another version of gradient boosting, but XGBoost has some technical advantages.)
In the next code cell, we import the scikit-learn API for XGBoost (xgboost.XGBRegressor). This allows us to build and fit a model just as we would in scikit-learn. As you'll see in the output, the XGBRegressor class has many tunable parameters -- you'll learn about those soon!
```python
from xgboost import XGBRegressor

my_model = XGBRegressor()
my_model.fit(X_train, y_train)
```
We also make predictions and evaluate the model.
```python
from sklearn.metrics import mean_absolute_error

predictions = my_model.predict(X_valid)
print("Mean Absolute Error: " + str(mean_absolute_error(predictions, y_valid)))
```
Mean Absolute Error: 280355.04334039026
Parameter Tuning
XGBoost has a few parameters that can dramatically affect accuracy and training speed. The first parameters you should understand are:
n_estimators
n_estimators specifies how many times to go through the modeling cycle described above. It is equal to the number of models that we include in the ensemble.
- Too low a value causes underfitting, which leads to inaccurate predictions on both training data and test data.
- Too high a value causes overfitting, which causes accurate predictions on training data, but inaccurate predictions on test data (which is what we care about). Typical values range from 100-1000, though this depends a lot on the learning_rate parameter discussed below.
Here is the code to set the number of models in the ensemble:
```python
my_model = XGBRegressor(n_estimators=500)
my_model.fit(X_train, y_train)
```
early_stopping_rounds
early_stopping_rounds offers a way to automatically find the ideal value for n_estimators. Early stopping causes the model to stop iterating when the validation score stops improving, even if we aren't at the hard stop for n_estimators. It's smart to set a high value for n_estimators and then use early_stopping_rounds to find the optimal time to stop iterating.
Since random chance sometimes causes a single round where validation scores don't improve, you need to specify a number for how many rounds of straight deterioration to allow before stopping. Setting early_stopping_rounds=5 is a reasonable choice. In this case, we stop after 5 straight rounds of deteriorating validation scores.
When using early_stopping_rounds, you also need to set aside some data for calculating the validation scores - this is done by setting the eval_set parameter.
We can modify the example above to include early stopping:
```python
my_model = XGBRegressor(n_estimators=500)
my_model.fit(X_train, y_train,
             early_stopping_rounds=5,
             eval_set=[(X_valid, y_valid)],
             verbose=False)
```
If you later want to fit a model with all of your data, set n_estimators to whatever value you found to be optimal when running with early stopping.
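As a rough sketch (assuming a recent XGBoost version in which the fitted XGBRegressor exposes a best_iteration attribute after early stopping), you could read off the stopping point and refit on all of your data. Here, X_full and y_full are hypothetical names for the combined training and validation data.
```python
# best_iteration is the (0-indexed) boosting round with the best validation score,
# so the matching n_estimators is best_iteration + 1.
# Note: best_iteration is assumed to be available on the fitted XGBRegressor;
# older XGBoost versions may expose this information differently.
best_n_estimators = my_model.best_iteration + 1

# Refit on all of the data with that fixed number of trees.
# X_full and y_full are hypothetical names for the combined training + validation data.
final_model = XGBRegressor(n_estimators=best_n_estimators)
final_model.fit(X_full, y_full)
```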
learning_rate
Instead of getting predictions by simply adding up the predictions from each component model, we can multiply the predictions from each model by a small number (known as the learning rate) before adding them in.
This means each tree we add to the ensemble helps us less. So, we can set a higher value for n_estimators without overfitting. If we use early stopping, the appropriate number of trees will be determined automatically.
In general, a small learning rate and a large number of estimators will yield more accurate XGBoost models, though the model will also take longer to train since it does more iterations through the cycle. By default, XGBoost sets learning_rate=0.1.
Modifying the example above to change the learning rate yields the following code:
```python
my_model = XGBRegressor(n_estimators=1000, learning_rate=0.05)
my_model.fit(X_train, y_train,
             early_stopping_rounds=5,
             eval_set=[(X_valid, y_valid)],
             verbose=False)
```
n_jobs
On larger datasets where runtime is a consideration, you can use parallelism to build your models faster. It's common to set the parameter n_jobs equal to the number of cores on your machine. On smaller datasets, this won't help.
The resulting model won't be any better, so micro-optimizing for fitting time is typically nothing but a distraction. But, it's useful in large datasets where you would otherwise spend a long time waiting during the fit command.
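If you're not sure how many cores your machine has, Python's standard library can tell you (a minimal sketch):
```python
import os

# Number of CPU cores available on this machine; a common choice for n_jobs.
print(os.cpu_count())
```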
Here's the modified example:
```python
my_model = XGBRegressor(n_estimators=1000, learning_rate=0.05, n_jobs=4)
my_model.fit(X_train, y_train,
             early_stopping_rounds=5,
             eval_set=[(X_valid, y_valid)],
             verbose=False)
```
Conclusion
XGBoost is a leading software library for working with standard tabular data (the type of data you store in Pandas DataFrames, as opposed to more exotic types of data like images and videos). With careful parameter tuning, you can train highly accurate models.
Let's apply what we have learned to a real-world Kaggle dataset:
Setup
You will work with the Housing Prices Competition for Kaggle Learn Users dataset from the previous exercise.

Run the next code cell without changes to load the training and validation sets in X_train, X_valid, y_train, and y_valid. The test set is loaded in X_test.
```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Read the data
X = pd.read_csv('../input/train.csv', index_col='Id')
X_test_full = pd.read_csv('../input/test.csv', index_col='Id')

# Remove rows with missing target, separate target from predictors
X.dropna(axis=0, subset=['SalePrice'], inplace=True)
y = X.SalePrice
X.drop(['SalePrice'], axis=1, inplace=True)

# Break off validation set from training data
X_train_full, X_valid_full, y_train, y_valid = train_test_split(X, y,
                                                                train_size=0.8, test_size=0.2,
                                                                random_state=0)

# "Cardinality" means the number of unique values in a column
# Select categorical columns with relatively low cardinality
low_cardinality_cols = [cname for cname in X_train_full.columns if
                        X_train_full[cname].nunique() < 10 and
                        X_train_full[cname].dtype == "object"]

# Select numeric columns
numeric_cols = [cname for cname in X_train_full.columns if
                X_train_full[cname].dtype in ['int64', 'float64']]

# Keep selected columns only
my_cols = low_cardinality_cols + numeric_cols
X_train = X_train_full[my_cols].copy()
X_valid = X_valid_full[my_cols].copy()
X_test = X_test_full[my_cols].copy()

# One-hot encode the data (to shorten the code, we use pandas)
X_train = pd.get_dummies(X_train)
X_valid = pd.get_dummies(X_valid)
X_test = pd.get_dummies(X_test)
X_train, X_valid = X_train.align(X_valid, join='left', axis=1)
X_train, X_test = X_train.align(X_test, join='left', axis=1)
```
Step 1: Build model
In this step, you'll build and train your first model with gradient boosting.
- Begin by setting my_model_1 to an XGBoost model. Use the XGBRegressor class, and set the random seed to 0 (random_state=0).
- Then, fit the model to the training data in X_train and y_train.
```python
from xgboost import XGBRegressor
from sklearn.metrics import mean_absolute_error

# Define the model
my_model_1 = XGBRegressor(random_state=0)

# Fit the model
my_model_1.fit(X_train, y_train)

predictions_1 = my_model_1.predict(X_valid)
mae_1 = mean_absolute_error(predictions_1, y_valid)
print("Mean Absolute Error:", mae_1)
```
Step 2: Improve the model
Now that you've trained a default model as a baseline, it's time to tinker with the parameters to see if you can get better performance!
- Begin by setting my_model_2 to an XGBoost model, using the XGBRegressor class. Use what you learned in the previous tutorial to figure out how to change the default parameters (like n_estimators and learning_rate) to get better results.
- Then, fit the model to the training data in X_train and y_train.
- Set predictions_2 to the model's predictions for the validation data. Recall that the validation features are stored in X_valid.
- Finally, use the mean_absolute_error() function to calculate the mean absolute error (MAE) corresponding to the predictions on the validation set. Recall that the labels for the validation data are stored in y_valid. Here we'll use wandb to log the loss metrics after training so that we can later compare the model performance directly from the dashboard.
```python
import wandb

def train(n_estimators, lr):
    wandb.init(project="Kaggle-XGBoost", name=str(n_estimators) + "_" + str(lr))
    my_model = XGBRegressor(n_estimators=n_estimators, learning_rate=lr)
    # Fit the model
    my_model.fit(X_train, y_train)
    # Get predictions
    predictions = my_model.predict(X_valid)
    # Calculate MAE
    mae = mean_absolute_error(predictions, y_valid)
    wandb.log({"MAE": mae})
    print("Mean Absolute Error:", mae)

# Train models with different hyperparameter combinations
train(100, 0.1)
train(100, 0.05)
train(200, 0.1)
train(200, 0.05)
train(500, 0.1)
train(500, 0.05)
train(1000, 0.1)
train(1000, 0.05)
train(1000, 0.5)
train(2000, 0.5)
train(3000, 0.05)
```
Step 3: Break the model
In this step, we will create a model that performs worse than the original model in the previous step. This will help you develop your intuition for how to set parameters. You might even find that you accidentally get better performance, which is ultimately a nice problem to have and a valuable learning experience!
- Begin by setting my_model_3 to an XGBoost model, using the XGBRegressor class. Use what you learned in the previous tutorial to figure out how to change the default parameters (like n_estimators and learning_rate) to design a model that gets a high MAE.
- Then, fit the model to the training data in X_train and y_train.
- Set predictions_3 to the model's predictions for the validation data. Recall that the validation features are stored in X_valid.
- Finally, use the mean_absolute_error() function to calculate the mean absolute error (MAE) corresponding to the predictions on the validation set. Recall that the labels for the validation data are stored in y_valid.
In order for this step to be marked correct, your model in my_model_3 must attain higher MAE than the model in my_model_1.
```python
train(3000, 0.5)  # Hopefully this will perform the worst
```
Visualization
Now let us see how all of the trained models perform by visualizing the loss metrics.
Here we can see that the model with 3000 estimators and a learning rate of `0.05` performed the best. Although we did not build the worst-performing model on our first try, we came very close. The takeaway is that if you mindlessly increase the number of estimators and the learning rate, you may end up hurting performance. So, carefully tune your hyperparameters for the best results.
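If you prefer to compare the runs programmatically rather than in the dashboard, here is a minimal sketch using the public wandb API. It assumes the runs above logged "MAE" to the Kaggle-XGBoost project; "your-entity" is a placeholder for your own W&B username or team.
```python
import wandb

# Fetch all runs in the project and print each run's logged MAE.
api = wandb.Api()
runs = api.runs("your-entity/Kaggle-XGBoost")  # "your-entity" is a placeholder

for run in runs:
    print(run.name, run.summary.get("MAE"))
```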
Keep going!
Continue to learn about data leakage. This is an important issue for a data scientist to understand, and it has the potential to ruin your models in subtle and dangerous ways!
You may also find these reports from the same series interesting.
Handling Missing Values In A Pandas Dataframe
In this tutorial, you will learn three approaches to dealing with missing values in a pandas dataframe.
Handling Categorical Features - With Examples
In this report, you will learn what a categorical variable is, along with three approaches for handling this type of data.
Using K-Fold Cross-Validation To Improve Your Machine Learning Models
In this article, we will learn how to use k-fold cross-validation for better measures of machine learning model performance, using W&B to track our results.