Table 3 Summary of results from the FluSight influenza forecast challenges*

From: Applying infectious disease forecasting to public health: a path forward using influenza forecasting examples

|   | 2013–14 season | 2014–15 season | 2015–16 season | 2016–17 season | 2017–18 season |
|---|---|---|---|---|---|
| Number of participating teams | 9 | 5 | 11 | 21 | 22 |
| Number of submitted forecasts† | 13 | 7 | 14 | 28 | 29 |
| Season onset top skill | N/A** | 0.41 | 0.18 | 0.78 | 0.69 |
| Peak week top skill | N/A | 0.49 | 0.20 | 0.49 | 0.50 |
| Peak intensity top skill | N/A | 0.17 | 0.66 | 0.36 | 0.26 |
| 1-week ahead top skill | N/A | 0.43 | 0.89 | 0.60 | 0.54 |
| 2-weeks ahead top skill | N/A | 0.36 | 0.76 | 0.46 | 0.37 |
| 3-weeks ahead top skill | N/A | 0.37 | 0.66 | 0.41 | 0.29 |
| 4-weeks ahead top skill | N/A | 0.35 | 0.58 | 0.38 | 0.26 |
| Overall top performing team | Columbia University | Delphi group, Carnegie Mellon University | Delphi group, Carnegie Mellon University | Delphi group, Carnegie Mellon University | Delphi group, Carnegie Mellon University |

*Skill scores for the 2016–17 and 2017–18 challenges have not been published. Results from the 2018–19 challenge are not complete as of August 2019.

†The number of submitted forecasts does not include the unweighted average ensemble or historical average forecasts.

**The logarithmic scoring rule used to determine forecast skill scores was not introduced until the second year of the challenge (2014–15); skill scores for the challenge pilot (2013–14) are therefore not available.
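For context on how the skill values in the table are derived, the sketch below illustrates the logarithmic scoring rule. It is a minimal illustration, not the challenge's official implementation: it assumes the FluSight convention in which a team's skill is the exponentiated average log score, i.e. the geometric mean of the probabilities its forecasts assigned to the bins that turned out to contain the observed values. The function names and example probabilities are hypothetical.

```python
import numpy as np

def log_score(prob_correct_bin, floor=1e-10):
    """Logarithmic score: the natural log of the probability a
    forecast assigned to the bin containing the observed outcome.
    A small floor avoids -inf when a forecast assigned zero."""
    return np.log(max(prob_correct_bin, floor))

def forecast_skill(probs_correct_bin):
    """Skill = exp(mean log score), i.e. the geometric mean of the
    probabilities assigned to the observed bins. Skill is 1.0 only
    if every forecast put all of its probability on the correct bin."""
    scores = [log_score(p) for p in probs_correct_bin]
    return float(np.exp(np.mean(scores)))

# Hypothetical example: probabilities one model assigned to the
# eventually observed bins across four forecast weeks.
print(forecast_skill([0.45, 0.30, 0.55, 0.40]))  # ~0.42
```

Under that reading, a top skill of 0.78 for season onset in 2016–17 would mean the best forecast assigned, on geometric average, about 78% probability to the bin containing the true onset week.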