COVID-19 Survey Burden for Healthcare Workers: Literature Review and Audit

Publication date: 

25 May 2021

Ref: 

Public Health. Available online 25 May 2021. In Press, Journal Pre-proof.

Author(s): 

Gnanapragasam SN, Hodson A, Smith LE, Greenberg N, Rubin GJ, Wessely S

Publication type: 

Article

Abstract: 

Objectives: Concerns have been raised about the quantity and quality of research conducted during the COVID-19 pandemic, particularly research relating to the mental health and wellbeing of healthcare workers (HCWs). We aimed to understand the volume, source, methodological rigor and degree of overlap of COVID-19 studies conducted amongst HCWs in the United Kingdom (UK).

Study design: Mixed-methods approach comprising a literature review and an audit.

Methods: First, a literature review of published research studies; second, an audit of studies that HCWs had been invited to complete. For the literature review, we searched Medline, PsycINFO and Nexis, the webpages of three medical organisations (Royal Society of Medicine, Royal College of Nursing and British Medical Association), and the YouGov website. For the audit, a non-random purposive sample of six HCWs from different London NHS Trusts reviewed the email, WhatsApp and SMS messages they had received for study invitations.

Results: The literature review identified 27 studies; the audit identified 70 study invitations. Studies identified by the literature review were largely of poor methodological rigor: only eight studies (30%) reported a response rate, one study (4%) reported having ethical approval and one study (4%) reported funding details. There was substantial overlap in the topics measured. In the audit, volunteers received a median of 12 invitations each. The largest number of study invitations were for national surveys (n = 23), followed by local surveys (n = 16) and research surveys (n = 8).

Conclusion: HCWs have been asked to complete numerous surveys, many of which have methodological shortcomings and overlapping aims. Many studies do not follow good scientific practice and generate questionable, non-generalisable results.