Drought Watch Benchmark Progress

Developing the baseline and exploring submissions to the Drought Watch benchmark. Made by Stacey Svetlichnaya using Weights & Biases

Overview

Drought Watch is a community benchmark for machine learning models that detect drought from satellite imagery. With better models, index insurance companies can monitor drought conditions—and send resources to families in affected areas—more effectively. The goal is to learn from ~100K expert labels of forage quality in Northern Kenya (concretely, how many cows, from 0 to 3+, can the given location feed?) to more accurately predict drought from unlabeled satellite images. You can read more about the dataset and methods in this paper. Since this is an open, collaborative benchmark, we encourage you to share and discuss your code, training workflows, analysis, and questions—together we can build a better model faster.
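To make the prediction target concrete, here is a minimal sketch of how a single labeled example can be framed for a classifier. The patch size (65x65) and band count (10) below are illustrative assumptions rather than a specification of the dataset; see the starter code for the actual data pipeline.

```python
# A minimal sketch of the task framing: each labeled example pairs a
# multispectral image patch with an integer forage-quality label in {0, 1, 2, 3},
# where 3 stands for "3 or more cows". Shapes below are illustrative assumptions.
import numpy as np

NUM_CLASSES = 4  # forage quality: 0, 1, 2, or 3+ cows the location can feed

# a single (image, label) pair, with an assumed 65x65 patch and 10 spectral bands
image = np.zeros((65, 65, 10), dtype=np.float32)
label = 2  # an expert judged this location could feed 2 cows

# one-hot form of the label, if a model's loss expects it
one_hot = np.eye(NUM_CLASSES, dtype=np.float32)[label]
print(one_hot)  # [0. 0. 1. 0.]
```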
In this short report, I explore the community submissions made so far and summarize how we developed the baseline for this benchmark. We think there's still plenty of room for model improvement. Read through to the end for some specific suggestions and helpful tools like Weights & Biases Sweeps—we hope you give this benchmark a try and let us know how it goes!

Join the benchmark →

See the starter code on GitHub →

Try your own hyperparameter sweep in a Google Colab →

If you'd like, read more about the project in the launch blog post and our latest update.

Community Submissions So Far

Developing the simple convnet baseline
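As a rough illustration of the kind of model we started from, here is a simple convolutional baseline sketched in Keras. The layer sizes, dropout, and learning rate are placeholders rather than the exact baseline configuration, and the input shape assumes 65x65 patches with 10 spectral bands; see the starter code on GitHub for the real pipeline.

```python
# A sketch of a simple convnet baseline in Keras, assuming 65x65 satellite
# patches with 10 spectral bands and 4 forage-quality classes. Hyperparameters
# here are illustrative, not the exact baseline settings.
from tensorflow import keras
from tensorflow.keras import layers

IMG_SIZE = 65     # assumed patch height/width in pixels
NUM_BANDS = 10    # assumed number of spectral bands used as input channels
NUM_CLASSES = 4   # forage quality classes: 0, 1, 2, 3+ cows

def build_baseline(dropout=0.3, learning_rate=1e-3):
    model = keras.Sequential([
        keras.Input(shape=(IMG_SIZE, IMG_SIZE, NUM_BANDS)),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dropout(dropout),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate),
        loss="sparse_categorical_crossentropy",  # integer labels 0-3
        metrics=["accuracy"],
    )
    return model

model = build_baseline()
model.summary()
```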

Hyperparameter Sweep Example
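If you'd rather launch a sweep programmatically than from the Colab, here is a minimal sketch using the wandb Python API. The parameter ranges, metric name, and project name are illustrative assumptions, and train_one_run is a hypothetical wrapper around a training loop like the baseline sketch above.

```python
# A minimal sketch of a Weights & Biases sweep over a few baseline hyperparameters.
# Parameter names, ranges, and the project name are illustrative assumptions.
import wandb

sweep_config = {
    "method": "random",  # random search; "grid" and "bayes" also work
    "metric": {"name": "val_accuracy", "goal": "maximize"},
    "parameters": {
        "learning_rate": {"min": 1e-4, "max": 1e-2},
        "dropout": {"values": [0.1, 0.3, 0.5]},
        "batch_size": {"values": [32, 64, 128]},
    },
}

def train_one_run():
    # the sweep controller fills in run.config for each trial
    with wandb.init() as run:
        cfg = run.config
        # build_baseline is the convnet sketch from the baseline section above
        model = build_baseline(dropout=cfg.dropout, learning_rate=cfg.learning_rate)
        # ... fit `model` on the training split here, then log validation metrics
        # so the sweep can compare runs, e.g.:
        # run.log({"val_accuracy": val_acc})

sweep_id = wandb.sweep(sweep_config, project="droughtwatch")  # project name assumed
wandb.agent(sweep_id, function=train_one_run, count=20)
```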

What to try next

We have a lot of ideas for what to try next:
Thanks for reading!

Join the benchmark and let us know how it goes →