Drought Watch is a community benchmark for machine learning models that detect drought from satellite imagery. With better models, index insurance companies can monitor drought conditions—and send resources to families in the area—more effectively. The goal is to learn from ~100K expert labels of forage quality in Northern Kenya (concretely: how many cows, from 0 to 3+, can the given location feed?) to more accurately predict drought from unlabeled satellite images. You can read more about the dataset and methods in this paper. Since this is an open collaborative benchmark, we encourage you to share and discuss your code, training workflows, analysis, and questions—together we can build a better model faster.
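To make the prediction target concrete, here is a minimal sketch of how the forage-quality labels can be framed as classification targets. The `label_to_class` helper and the capping at 3 are illustrative assumptions, not the benchmark's actual preprocessing code.

```python
# Hypothetical sketch: frame forage quality as 4-way classification,
# with classes 0, 1, 2, and 3 (where 3 means "3 or more cows").
# label_to_class is an illustrative helper, not official benchmark code.

def label_to_class(cows: int) -> int:
    """Map a raw expert cow count to one of 4 classes, capping at 3 (i.e. 3+)."""
    if cows < 0:
        raise ValueError("cow count must be non-negative")
    return min(cows, 3)

# Example: raw expert labels -> class indices
raw_labels = [0, 1, 2, 3, 5]
classes = [label_to_class(c) for c in raw_labels]
print(classes)  # [0, 1, 2, 3, 3]
```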
In this short report, I explore the community submissions made so far and summarize how we developed the baseline for this benchmark. We think there's still plenty of room for model improvement. Read through to the end for some specific suggestions and helpful tools like Weights & Biases Sweeps—we hope you give this benchmark a try and let us know how it goes!
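As a concrete starting point for the hyperparameter exploration mentioned above, here is a sketch of a Weights & Biases sweep configuration. The parameter names (`learning_rate`, `batch_size`), the metric name, and the `train` entry point are assumptions about your training script, not part of the benchmark.

```python
# Hypothetical W&B sweep config: a random search over two common
# hyperparameters. Parameter names assume your train() function reads
# them from wandb.config; adjust to match your own code.
sweep_config = {
    "method": "random",  # random search over the space below
    "metric": {"name": "val_accuracy", "goal": "maximize"},
    "parameters": {
        "learning_rate": {"min": 1e-5, "max": 1e-2},
        "batch_size": {"values": [16, 32, 64]},
    },
}

# Typical usage (requires a wandb login and a train() function):
#   import wandb
#   sweep_id = wandb.sweep(sweep_config, project="droughtwatch")
#   wandb.agent(sweep_id, function=train, count=20)
```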
This shows the model variants I tried before settling on the final baseline.
We have a lot of ideas for what to try next.
Thanks for reading!