Visualize LightGBM Performance in One Line of Code
How's your LightGBM model performing? How does it compare to other models? We have the answer!
Created on September 18|Last edited on September 18
Gradient boosting decision trees are the state of the art when it comes to building predictive models for structured data.
LightGBM, a gradient boosting framework by Microsoft, has recently dethroned XGBoost to become the go-to GBDT library (alongside CatBoost). It outperforms XGBoost in training speed, memory usage, and the size of datasets it can handle. LightGBM achieves this by using histogram-based algorithms that bucket continuous features into discrete bins during training.
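To build intuition for histogram-based training, here is a toy sketch of equal-frequency binning in plain Python. This is an illustration of the general idea, not LightGBM's actual implementation; the function names `make_bins` and `bucket` are made up for this example.

```python
import bisect

def make_bins(values, n_bins):
    """Compute bin upper edges via quantiles (equal-frequency bins).

    Toy illustration only -- LightGBM's real binning is more sophisticated.
    """
    s = sorted(values)
    return [s[(i + 1) * len(s) // n_bins - 1] for i in range(n_bins)]

def bucket(value, edges):
    """Map a continuous value to a discrete bin index."""
    return bisect.bisect_left(edges, value)

values = [0.1, 2.5, 3.7, 4.2, 8.9, 9.1, 12.0, 15.3]
edges = make_bins(values, 4)          # [2.5, 4.2, 9.1, 15.3]
binned = [bucket(v, edges) for v in values]  # [0, 0, 1, 1, 2, 2, 3, 3]
```

Once features are discretized this way, split finding only has to scan a small, fixed number of bins per feature instead of every distinct value, which is where the speed and memory savings come from.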
We want to make it incredibly easy for people to look under the hood of their models, so we built a callback that helps you visualize your LightGBM's performance in just one line of code.
```python
import lightgbm as lgb

# Import the callback
from wandb.lightgbm import wandb_callback

# Add the callback
lgb.train(params, X_train, callbacks=[wandb_callback()])
```

Once you've trained multiple models, you can compare their performance in a single dashboard like so.

Tags: W&B Features