Comparison of surveillance systems for monitoring COVID-19 in England: A retrospective observational study.

Publication date: 

10 Oct 2023

Ref: 

Brainard, J., Lake, I. R., Morbey, R. A., Jones, N. R., Elliot, A. J., & Hunter, P. R. (2023). Comparison of surveillance systems for monitoring COVID-19 in England: A retrospective observational study. The Lancet Public Health, 8(11), E850-E858. https://

Author(s): 

Brainard, J., Lake, I. R., Morbey, R. A., Jones, N. R., Elliot, A. J., & Hunter, P. R.

Publication type: 

Article

Abstract: 

Background: During the COVID-19 pandemic, cases were tracked using multiple surveillance systems. Some systems were completely novel, and others incorporated multiple data streams to estimate case incidence and prevalence. How well these different surveillance systems worked as epidemic indicators is unclear, which has implications for future disease surveillance and outbreak management. The aim of this study was to compare case counts, prevalence and incidence, timeliness, and comprehensiveness of different COVID-19 surveillance systems in England.

Methods: For this retrospective observational study of COVID-19 surveillance systems in England, data from 12 surveillance systems were extracted from publicly available sources (Jan 1, 2020–Nov 30, 2021). The main outcomes were correlations between different indicators of COVID-19 incidence or prevalence. These data were integrated as daily time-series and comparisons undertaken using Spearman correlation between candidate alternatives and the most timely (updated daily, clinical case register) and the least biased (from comprehensive household sampling) COVID-19 epidemic indicators, with comparisons focused on the period of Sept 1, 2020–Nov 30, 2021.

Findings: Spearman correlations during the full focus period between the least biased indicator (from household surveys) and other epidemic indicator time-series were 0·94 (95% CI 0·92 to 0·95; clinical cases, the most timely indicator), 0·92 (95% CI 0·90 to 0·94; estimates of incidence generated after incorporating information about self-reported case status on the ZoeApp, which is a digital app), 0·67 (95% CI 0·60 to 0·73; emergency department attendances), 0·64 (95% CI 0·60 to 0·68; NHS 111 website visits), 0·63 (95% CI 0·56 to 0·69; wastewater viral genome concentrations), 0·60 (95% CI 0·52 to 0·66; admissions to hospital with positive COVID-19 status), 0·45 (95% CI 0·36 to 0·52; NHS 111 calls), 0·08 (95% CI –0·03 to 0·18; Google search rank for “covid”), –0·04 (95% CI –0·12 to 0·05; in-hours consultations with general practitioners), and –0·37 (95% CI –0·46 to –0·28; Google search rank for “coronavirus”). Time lags (–14 to +14 days) did not markedly improve these rho statistics. Clinical cases (the most timely indicator) captured a more consistent proportion of cases than the self-report digital app did.

Interpretation: A suite of monitoring systems is useful. The household survey system was the most comprehensive and least biased epidemic monitor, but not very timely. Data from laboratory testing, the self-reporting digital app, and attendances to emergency departments were comparatively useful, fairly accurate, and timely epidemic trackers.

Funding: National Institute for Health and Care Research Health Protection Research Unit in Emergency Preparedness and Response, a partnership between the UK Health Security Agency, King's College London, and the University of East Anglia.
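
The comparison described in the Methods and Findings amounts to computing Spearman correlations between daily indicator time-series, with candidate series shifted by –14 to +14 days to check whether a lead or lag improves agreement. The sketch below is a minimal illustration of that kind of lagged comparison, not the authors' code: it uses Python with synthetic placeholder data, and the series names (household_survey, clinical_cases) and the lagged_spearman helper are assumptions for illustration only.

import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
dates = pd.date_range("2020-09-01", "2021-11-30", freq="D")
t = np.arange(len(dates))

# Synthetic placeholder indicators (not study data): a smooth "epidemic wave"
# as the reference series, and a delayed, noisier candidate series.
reference = pd.Series(1000 * (1 + np.sin(t / 40)), index=dates, name="household_survey")
candidate = reference.shift(3).bfill() * rng.normal(1.0, 0.1, len(dates))
candidate.name = "clinical_cases"

def lagged_spearman(ref, cand, lags=range(-14, 15)):
    """Spearman rho between ref and cand for each lag (in days) applied to cand."""
    results = {}
    for lag in lags:
        paired = pd.concat([ref, cand.shift(lag)], axis=1).dropna()
        rho, _ = spearmanr(paired.iloc[:, 0], paired.iloc[:, 1])
        results[lag] = rho
    return pd.Series(results)

rhos = lagged_spearman(reference, candidate)
print(f"rho at lag 0: {rhos[0]:.2f}")
print(f"best lag: {rhos.idxmax():+d} days (rho = {rhos.max():.2f})")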