Reports
Transferring Knowledge on Time Series with the Transformer
Pre-training models has proven effective in a number of domains, such as computer vision and NLP. But what about time series forecasting? Little research has explored the effectiveness of generalized pre-training in the forecasting domain. Here we explore whether pre-training on river flow data can improve COVID forecasting performance (a rough sketch of the workflow follows this entry).
4
2020-06-15
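As a hedged illustration of the pre-train-then-fine-tune workflow this report describes (not the project's actual code: the model class, file name, and hyperparameters below are placeholders), the transfer step might look roughly like this in PyTorch:

```python
# Minimal sketch, assuming a small encoder-only transformer and univariate series.
import torch
import torch.nn as nn

class SimpleTransformerForecaster(nn.Module):
    """Toy encoder-only transformer for univariate forecasting (illustrative only)."""
    def __init__(self, d_model: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.input_proj = nn.Linear(1, d_model)
        encoder_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, n_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):  # x: (batch, seq_len, 1)
        return self.head(self.encoder(self.input_proj(x)))

# 1) Pre-train on river/stream flow sequences (training loop omitted), then save weights.
model = SimpleTransformerForecaster()
# ... pre-training loop on river flow data ...
torch.save(model.state_dict(), "river_flow_pretrained.pt")  # hypothetical path

# 2) Fine-tune on COVID-19 new-case counts: reload the pre-trained weights and
#    continue training, typically with a smaller learning rate.
finetune_model = SimpleTransformerForecaster()
finetune_model.load_state_dict(torch.load("river_flow_pretrained.pt"))
optimizer = torch.optim.Adam(finetune_model.parameters(), lr=1e-4)
# ... fine-tuning loop on COVID case sequences ...
```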
Examining Variations Across Test Set MSE
In this report we will examine variations in test set loss. It is important that a model perform robustly over time, not just on the most immediate test set, so here we analyze how the models perform across different test periods.
1
2020-06-25
Using the TransformerEncoder models (US)
This report explores how well the transformer encoder works for COVID-19 forecasting. Specifically, we look at how well it predicts different periods in April and May. In this file the orange lines are the actual reported numbers of new cases (from the CDC) and the blue lines are the new cases predicted by the model. In this first instance the model is pre-trained on river/stream flow data. We plan to later compare its performance with models pre-trained on other pandemic data.
0
2020-05-28
Enhanced Evaluation Metrics
In this report we will look at how models perform when forecasting over the full test set. These enhanced evaluation metrics mean the model starts on day n and forecasts each 14-day period in the test set (a rough sketch of this rolling evaluation follows this entry).
0
2020-06-07
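The rolling evaluation described above could be sketched as follows. This is an assumption-laden illustration, not the project's API: the `model.predict(history, horizon)` interface and the flat NumPy series are placeholders.

```python
# Sketch: starting at day n (test_start), slide through the test range and
# compute the MSE of every 14-day forecast window.
import numpy as np

def rolling_14_day_mse(model, series: np.ndarray, test_start: int, horizon: int = 14):
    """Return the MSE of each 14-day forecast window in the test range."""
    window_mses = []
    for start in range(test_start, len(series) - horizon + 1):
        history = series[:start]                     # everything observed so far
        actual = series[start:start + horizon]       # next 14 days of reported cases
        predicted = model.predict(history, horizon)  # assumed forecasting interface
        window_mses.append(float(np.mean((predicted - actual) ** 2)))
    return window_mses
```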
Examining Parameter Importance USA Counties
In this document we will explore how the best (lowest-MSE) hyperparameters vary across counties in the United States when using the transformer model (a sketch of one way to set up such a comparison follows this entry). Unlike in the other notebook
0
2020-06-05
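One way such a per-county comparison could be assembled is sketched below. The CSV export and column names (`county`, `test_mse`, `learning_rate`, `n_heads`) are hypothetical stand-ins for whatever the sweep actually logs.

```python
# Sketch: keep the lowest-MSE run for each county, then inspect how the
# winning hyperparameters differ across counties.
import pandas as pd

runs = pd.read_csv("sweep_results.csv")  # hypothetical export of sweep runs
best_per_county = runs.loc[runs.groupby("county")["test_mse"].idxmin()]
print(best_per_county[["county", "learning_rate", "n_heads", "test_mse"]])
```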