Handling Missing Values in a Pandas DataFrame
In this tutorial, you will learn three approaches to dealing with missing values in a pandas DataFrame.
This article is part of a series clarifying some of Kaggle's terms, definitions, and competitions, as well as adding visualizations.
Introduction
There are many ways data can end up with missing values. For example:
A 2-bedroom house won't include a value for the size of a third bedroom. A survey respondent may choose not to share their income.
Most machine learning libraries (including scikit-learn) give an error if you try to build a model using data with missing values. So you'll need to choose one of the strategies below.
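To see this concretely, here is a quick, self-contained illustration (the toy data below is made up): fitting a typical scikit-learn estimator on features containing NaN raises a ValueError.

import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data with one missing feature value (made up for illustration)
X = np.array([[1.0], [2.0], [np.nan]])
y = np.array([1.0, 2.0, 3.0])

try:
    LinearRegression().fit(X, y)
except ValueError as err:
    print("scikit-learn refused to fit:", err)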
Three Approaches
1. A Simple Option: Drop Columns with Missing Values. The simplest option is to drop columns with missing values.

Unless most values in the dropped columns are missing, the model loses access to a lot of (potentially useful!) information with this approach. As an extreme example, consider a dataset with 10,000 rows where one important column is missing a single entry. This approach would drop the column entirely!
2. A Better Option: Imputation. Imputation fills in the missing values with some number. For instance, we can fill in the mean value along each column.

The imputed value won't be exactly right in most cases, but it usually leads to more accurate models than you would get from dropping the column entirely.
3. An Extension to Imputation. Imputation is the standard approach, and it usually works well. However, imputed values may be systematically above or below their actual values (which weren't collected in the dataset). Or rows with missing values may be unique in some other way. In that case, your model would make better predictions by considering which values were originally missing.

In this approach, we impute the missing values, as before. And, additionally, for each column with missing entries in the original dataset, we add a new column that shows the location of the imputed entries.
In some cases, this will meaningfully improve results. In other cases, it doesn't help at all.
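As an aside, if you'd rather not build the indicator columns by hand, scikit-learn's SimpleImputer can generate them for you via its add_indicator parameter. A minimal sketch (the toy DataFrame is made up):

import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Toy data with one missing entry (made up for illustration)
df = pd.DataFrame({'Rooms': [2.0, 3.0, np.nan],
                   'Landsize': [150.0, 200.0, 180.0]})

# add_indicator=True appends one binary column for each feature
# that had missing values during fit
imputer = SimpleImputer(add_indicator=True)
print(imputer.fit_transform(df))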
Example
In this example, we will work with the Melbourne Housing dataset. Our model will use information such as the number of rooms and land size to predict home price.
We won't focus on the data loading step. Instead, you can imagine you are at a point where you already have the training and validation data in X_train, X_valid, y_train, and y_valid.
Define Function to Measure Quality of Each Approach
We define a function score_dataset() to compare different approaches to dealing with missing values. This function reports the mean absolute error (MAE) from a random forest model.
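The full definition (including the W&B logging used later in this report) appears in the Setup section below; stripped of logging, it looks roughly like this:

from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

# Compare approaches by the validation MAE of a small random forest
def score_dataset(X_train, X_valid, y_train, y_valid):
    model = RandomForestRegressor(n_estimators=10, random_state=0)
    model.fit(X_train, y_train)
    preds = model.predict(X_valid)
    return mean_absolute_error(y_valid, preds)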
Score from Approach 1 (Drop Columns with Missing Values)
Since we are working with both training and validation sets, we are careful to drop the same columns in both DataFrames.
# Get names of columns with missing values
cols_with_missing = [col for col in X_train.columns
                     if X_train[col].isnull().any()]

# Drop columns in training and validation data
reduced_X_train = X_train.drop(cols_with_missing, axis=1)
reduced_X_valid = X_valid.drop(cols_with_missing, axis=1)

print("MAE from Approach 1 (Drop columns with missing values):")
print(score_dataset(reduced_X_train, reduced_X_valid, y_train, y_valid))
MAE from Approach 1 (Drop columns with missing values):
183550.22137772635
Score from Approach 2 (Imputation)
Next, we use SimpleImputer to replace missing values with the mean value along each column.
Although it's simple, filling in the mean value generally performs quite well (though this varies by dataset). While statisticians have experimented with more complex ways to determine imputed values (such as regression imputation), the complex strategies typically give no additional benefit once you plug the results into sophisticated machine learning models.
from sklearn.impute import SimpleImputer

# Imputation
my_imputer = SimpleImputer()
imputed_X_train = pd.DataFrame(my_imputer.fit_transform(X_train))
imputed_X_valid = pd.DataFrame(my_imputer.transform(X_valid))

# Imputation removed column names; put them back
imputed_X_train.columns = X_train.columns
imputed_X_valid.columns = X_valid.columns

print("MAE from Approach 2 (Imputation):")
print(score_dataset(imputed_X_train, imputed_X_valid, y_train, y_valid))
MAE from Approach 2 (Imputation):
178166.46269899711
We see that Approach 2 has lower MAE than Approach 1, so Approach 2 performed better on this dataset.
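If you do want to try one of the more complex strategies mentioned above, scikit-learn ships an experimental regression-based imputer that models each feature with missing values as a function of the other features. A sketch (not used for the results reported here):

import pandas as pd
# IterativeImputer is experimental and must be enabled explicitly
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

my_imputer = IterativeImputer(random_state=0)
imputed_X_train = pd.DataFrame(my_imputer.fit_transform(X_train),
                               columns=X_train.columns)
imputed_X_valid = pd.DataFrame(my_imputer.transform(X_valid),
                               columns=X_valid.columns)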
Score from Approach 3 (An Extension to Imputation)
Next, we impute the missing values, while also keeping track of which values were imputed.
# Make copy to avoid changing original data (when imputing)
X_train_plus = X_train.copy()
X_valid_plus = X_valid.copy()

# Make new columns indicating what will be imputed
for col in cols_with_missing:
    X_train_plus[col + '_was_missing'] = X_train_plus[col].isnull()
    X_valid_plus[col + '_was_missing'] = X_valid_plus[col].isnull()

# Imputation
my_imputer = SimpleImputer()
imputed_X_train_plus = pd.DataFrame(my_imputer.fit_transform(X_train_plus))
imputed_X_valid_plus = pd.DataFrame(my_imputer.transform(X_valid_plus))

# Imputation removed column names; put them back
imputed_X_train_plus.columns = X_train_plus.columns
imputed_X_valid_plus.columns = X_valid_plus.columns

print("MAE from Approach 3 (An Extension to Imputation):")
print(score_dataset(imputed_X_train_plus, imputed_X_valid_plus, y_train, y_valid))
MAE from Approach 3 (An Extension to Imputation):
178927.503183954
Why did imputation perform better than dropping the columns?
The training data has 10,864 rows and 12 columns, and three columns contain missing data. For each of these columns, fewer than half of the entries are missing. Thus, dropping the columns removes a lot of useful information, and it makes sense that imputation would perform better.
# Shape of training data (num_rows, num_columns)
print(X_train.shape)

# Number of missing values in each column of training data
missing_val_count_by_column = (X_train.isnull().sum())
print(missing_val_count_by_column[missing_val_count_by_column > 0])

(10864, 12)
Car               49
BuildingArea    5156
YearBuilt       4307
dtype: int64
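To check the "less than half" claim directly, you can also look at the fraction of missing entries per column:

# Fraction of missing entries in each column of the training data
missing_frac = X_train.isnull().mean()
print(missing_frac[missing_frac > 0])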
Conclusion
As is common, imputing missing values (Approaches 2 and 3) yielded better results than simply dropping the columns with missing values (Approach 1).
Let's compare these approaches on real-world data next.
Setup

The next code cell loads the training and validation sets into X_train, X_valid, y_train, and y_valid.
import pandas as pd
from sklearn.model_selection import train_test_split

# Load the data
data = pd.read_csv('../input/melbourne-housing-snapshot/melb_data.csv')

# Select target
y = data.Price

# To keep things simple, we'll use only numerical predictors
melb_predictors = data.drop(['Price'], axis=1)
X = melb_predictors.select_dtypes(exclude=['object'])

# Divide data into training and validation subsets
X_train, X_valid, y_train, y_valid = train_test_split(X, y,
                                                      train_size=0.8,
                                                      test_size=0.2,
                                                      random_state=0)
In the next section, we define the score_dataset() function to train a given model and log useful metrics to the W&B dashboard. wandb has native support for scikit-learn model classes, so all the useful information about a given classifier or regressor can be logged directly with a single line of code.
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
import wandb

# Function for comparing different approaches
def score_dataset(X_train, X_valid, y_train, y_valid, name):
    wandb.init(project="Kaggle-Missing-Val", name=name)
    model = RandomForestRegressor(n_estimators=10, random_state=0)
    model.fit(X_train, y_train)
    preds = model.predict(X_valid)

    # Plot the regression learning curves
    wandb.sklearn.plot_regressor(model, X_train, X_valid, y_train, y_valid,
                                 model_name='RF')

    abs_error = mean_absolute_error(y_valid, preds)
    wandb.log({"Mean abs error": abs_error})
    return abs_error
Method 1: Drop Columns with Missing Values
# Get names of columns with missing values
cols_with_missing = [col for col in X_train.columns
                     if X_train[col].isnull().any()]

# Drop columns in training and validation data
reduced_X_train = X_train.drop(cols_with_missing, axis=1)
reduced_X_valid = X_valid.drop(cols_with_missing, axis=1)

print("MAE from Approach 1 (Drop columns with missing values):")
print(score_dataset(reduced_X_train, reduced_X_valid, y_train, y_valid, 'Drop'))

MAE from Approach 1 (Drop columns with missing values):
183550.22137772635
Method 2: Impute Missing Values
from sklearn.impute import SimpleImputer

# Imputation
my_imputer = SimpleImputer()
imputed_X_train = pd.DataFrame(my_imputer.fit_transform(X_train))
imputed_X_valid = pd.DataFrame(my_imputer.transform(X_valid))

# Imputation removed column names; put them back
imputed_X_train.columns = X_train.columns
imputed_X_valid.columns = X_valid.columns

print("MAE from Approach 2 (Imputation):")
print(score_dataset(imputed_X_train, imputed_X_valid, y_train, y_valid, 'Impute'))

MAE from Approach 2 (Imputation):
178166.46269899711
Visualization
Let us now take a look at the visualizations generated by wandb.
The mean absolute error of the model trained on the imputed dataset is lower than that of the model trained on the dataset with dropped columns. The learning curve also shows that the second run (with imputation) scores slightly better than the first run, which did not use imputation.
More Metrics
wandb also visualizes other metrics that can be used for tasks such as outlier detection, and it can display summary metrics for each run.
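For example, assuming you still have a trained model and the data splits in scope, these plots can be logged individually with functions from wandb.sklearn (a sketch; the run name here is made up):

import wandb

run = wandb.init(project="Kaggle-Missing-Val", name="extra-plots")

# Cook's-distance-style plot for spotting outlier candidates in the training data
wandb.sklearn.plot_outlier_candidates(model, X_train, y_train)

# Predicted values vs. residuals on the validation data
wandb.sklearn.plot_residuals(model, X_valid, y_valid)

# Table of summary metrics (MAE, MSE, R2, etc.) for both splits
wandb.sklearn.plot_summary_metrics(model, X_train, y_train, X_valid, y_valid)

run.finish()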
Here are some such visualizations:
Keep Going
Move on to learn what categorical variables are, along with how to incorporate them into your machine learning models. Categorical variables are very common in real-world data, but you'll get an error if you try to plug them into your models without processing them first!
Handling Categorical Features - With Examples
In this report, you will learn what a categorical variable is, along with three approaches for handling this type of data.
Using K-Fold Cross-Validation To Improve Your Machine Learning Models
In this article, we will learn how to use k-fold cross-validation for better measures of machine learning model performance, using W&B to track our results.
Gradient Boosting With XGBoost
In this report, you will learn how to build and optimize models with gradient boosting. This method dominates many Kaggle competitions and achieves state-of-the-art results on a variety of datasets.