Overview

Drought Watch is a community benchmark for machine learning models that detect drought from satellite imagery. With better models, index insurance companies can monitor drought conditions—and send resources to families in the area—more effectively. The goal is to learn from ~100K expert labels of forage quality in Northern Kenya (concretely, how many cows from 0 to 3+ can the given location feed?) to more accurately predict drought from unlabeled satellite images. You can read more about the dataset and methods in this paper. Since this is an open, collaborative benchmark, we encourage you to share and discuss your code, training workflows, analysis, and questions—together we can build a better model faster.

In this short report, I explore the community submissions made so far and summarize how we developed the baseline for this benchmark. We think there's still plenty of room for model improvement. Read through to the end for some specific suggestions and helpful tools like Weights & Biases Sweeps—we hope you give this benchmark a try and let us know how it goes!

Join the benchmark →

See the starter code on GitHub →

[Try your own hyperparameter sweep in a Google Colab →](https://colab.research.google.com/drive/1gKixa6hNUB8qrn1CfHirOfTEQm0qLCSS)

If you'd like, read more about the project in the launch blog post and our latest update.

Community Submissions So Far

Developing the simple convnet baseline

This section shows the variants of the simple convnet I tried before settling on a solid baseline for the benchmark.
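As a rough illustration of what such a baseline looks like, here is a minimal Keras sketch. The input shape (65x65 pixel patches with 10 Landsat bands) and the four forage-quality classes (0, 1, 2, 3+ cows) follow the benchmark description; the specific layer sizes and dropout rate are illustrative assumptions, not the exact baseline architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models


def build_baseline(input_shape=(65, 65, 10), num_classes=4):
    """Sketch of a small convnet baseline: two conv/pool blocks, then a dense head.

    Layer sizes here are illustrative, not the exact benchmark baseline.
    """
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.3),
        # One softmax output per forage-quality class (0, 1, 2, 3+ cows)
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Most of the variants explored in a search like this differ only in these few choices: the number of conv blocks, filter counts, dense-layer width, and dropout rate.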

Hyperparameter Sweep Example
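To give a concrete sense of the format, here is a minimal sketch of a Weights & Biases sweep configuration as a Python dict. The hyperparameter names and value ranges are illustrative assumptions, not the exact sweep behind this benchmark.

```python
# Sketch of a W&B sweep configuration (Python dict form).
# Hyperparameter names and ranges below are illustrative assumptions.
sweep_config = {
    "method": "random",  # search strategy: "grid", "random", or "bayes"
    "metric": {"name": "val_accuracy", "goal": "maximize"},
    "parameters": {
        # Continuous range: sampled between min and max
        "learning_rate": {"min": 1e-4, "max": 1e-2},
        # Discrete choices: sampled from the listed values
        "batch_size": {"values": [32, 64, 128]},
        "dropout": {"values": [0.2, 0.3, 0.5]},
    },
}

# To launch the sweep (requires `wandb login`; shown for context):
# import wandb
# sweep_id = wandb.sweep(sweep_config, project="droughtwatch")
# wandb.agent(sweep_id, function=train)  # `train` is your training function
```

Each agent run receives one sampled combination via `wandb.config`, so the same training function can cover the whole search space.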

What to try next

We have a lot of ideas for what to try next.

Thanks for reading!

Join the benchmark and let us know how it goes →