Study design
We used a mixed methods approach for validation. We developed a multidimensional assessment to collect details on the sociodemographic and medical health profile of respondents, measure their pandemic stress level, identify the impact of the COVID-19 pandemic on their finances and lifestyle, and identify their level of perceived social support. The assessment was adapted from an instrument initially developed by a team of researchers, including two members of the present study team (ALN and BB), to collect data on the effect of the COVID-19 pandemic on people living with HIV (PLHIV) in the United States [23], given the similarity between the COVID-19 pandemic and the early years of the AIDS epidemic. The current study tested the multidimensional assessment for content validity, dimensionality and reliability in an international audience. In-depth interviews were also conducted with a sample of respondents to discuss their perception of how comprehensively the multidimensional assessment captured their COVID-19 experience.
Survey instrument (supplementary file 1)
SECTION 1- Sociodemographic profile (16 questions): This section had 11 questions about age, sex at birth, country of residence, current gender, highest level of education attained (none, primary, secondary, university, postgraduate), employment status, health insurance status, marital status, person(s) the respondent currently lived with, sexual orientation and sexual practices. Respondents could check one or more options where applicable. The section also included five questions assessing COVID-19 experience on a yes/no basis: whether the respondent had tested positive for COVID-19 infection, was suspected of infection but not tested, had close friends who tested positive, knew someone who died of COVID-19 infection, or had to isolate because of COVID-19 infection.
SECTION 2- Medical health profile (2 questions): Respondents selected the conditions they had from a checklist of 28 health conditions, in addition to other health conditions not on the checklist. This checklist was adapted from the study by Marg et al. [24] and included infectious diseases (hepatitis, herpes, pneumonia, shingles, and sexually transmitted disease), non-infectious diseases (diabetes, cancer, dermatologic problems, heart condition, hypertension, migraine, neurological problems, neuropathy, respiratory problems, and stroke), and geriatric conditions (arthritis, broken bones, depression, loss of hearing and vision). The section also assessed subjective cognitive state and memory based on the MAC-Q developed by Crook et al. [25]. It consists of five questions and a sixth global statement, each rated on a 5-point Likert scale. The total score is the sum of the six item scores, with the sixth global statement given double weight. The total score therefore ranges from 7 to 35, with higher scores indicating greater impairment; scores above 25 indicate memory loss [25].
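The MAC-Q scoring rule described above is a simple weighted sum. A minimal sketch in Python (the function name is ours, not part of the instrument):

```python
def mac_q_total(item_scores):
    """Compute the MAC-Q total from six item scores (each 1-5).

    The first five items count once; the sixth global statement is
    double-weighted, so totals range from 7 to 35. Scores above 25
    indicate memory loss per the cited cut-off.
    """
    if len(item_scores) != 6 or not all(1 <= s <= 5 for s in item_scores):
        raise ValueError("expected six item scores in the range 1-5")
    total = sum(item_scores[:5]) + 2 * item_scores[5]
    return total, total > 25  # (total score, memory-loss flag)
```

For example, all-minimum responses yield the floor score of 7 and all-maximum responses the ceiling of 35.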
SECTION 3- Pandemic stress level (6 questions): The Pandemic Stress Index (PSI) is a 3-section measure of behavior changes, impact on daily lives and stress the individual may have experienced during the COVID-19 pandemic [26]. The tool has not been previously validated. Authors of the original PSI recommend adding population-specific items as necessary depending on study or clinical needs. As such, several modifications were made for this study. The first section assessed behavior changes in response to the COVID-19 experience including 12 changes related to public health messages (physical distancing, isolation, quarantine), the workplace (working remotely, job loss), and to protect one’s own or others’ health (caretaking). Three additional questions assessed details of changes in work status, travel plans and use of healthcare services. The second section rated the impact of the COVID-19 experience on daily life on a 5-point scale. The third section assessed the psychosocial impact of the COVID-19 experience including 20 items related to emotional distress, and difficulties faced because of the pandemic such as change in work status, use of healthcare services and travel plans [27].
SECTION 4- Finance and lifestyle impact (12 questions): This section assessed changes in lifestyle using one question with seven items: sexual activity, use of tobacco, alcohol, marijuana, other substances, food intake, and use of screens. Responses to these items were 'increase', 'decrease', or 'no change'. A second question had 10 items: seven assessing the impact of the COVID-19 pandemic on respondents' finances, and three assessing the impact on access to food and meals. Responses to these items were either 'yes' or 'no'. The questions on finances were adapted from the Multicenter AIDS Cohort Study (MACS)/Women's Interagency HIV Study (WIHS) Combined Cohort Study questionnaire [28], while those on food and meals were adapted from the US Department of Agriculture Household Food Security Survey [29].
This section also included ten questions, adapted from the MACS/WIHS questionnaire [28], assessing further details of the impact on financial condition, critical medical care, access to healthcare, alternative treatment, access to mental health care, substance abuse care, and the ability to keep healthcare provider appointments.
SECTION 5- Perceived level of social support (6 questions): Two questions, developed by the study team, assessed respondents' feeling of isolation (on a scale of 1, 'not at all', to 10, 'extremely') and their perception of how the pandemic affected their sense of isolation. Four other questions, adapted from the Coronavirus Health Impact Survey (CRISIS) Adult Self-Report Baseline questionnaire [30], asked about the difficulty of adhering to social distancing and changes in the quality of relationships with family, friends and significant others on a 5-point Likert scale.
SECTION 6- Post-traumatic stress disorder (one question): This question comprised 17 items adapted from the post-traumatic stress disorder (PTSD) Checklist – Civilian Version (PCL-C) [31, 32], a standardized self-report rating scale assessing symptoms of post-traumatic stress. Responses were on a 5-point Likert-type scale ranging from 'not at all' (scored 1) to 'extremely' (scored 5); the total score, the sum of the item scores, ranges from 17 to 85. Scores from 17 to 27 indicate little to no severity, 28 to 29 some severity, 30 to 44 moderate to moderately high severity, and 45 and above high severity of PTSD symptoms [31]. Its diagnostic accuracy as a screening tool requires population- and context-specific validation [33].
SECTION 7- Coping strategies (one question): This section was a grid of three items adapted from the Brief Resilient Coping Scale [34]. Responses were on a 5-point Likert-type scale ranging from 'does not describe me at all' (scored 1) to 'describes me very well' (scored 5). The total score, the sum of the item scores, ranges from 3 to 15: scores of 3–8 indicate low resilient coping, 9–11 medium resilient coping and 12–15 high resilient coping.
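Both the PCL-C (Section 6) and the Brief Resilient Coping Scale above follow the same scoring pattern: sum the Likert item scores, then map the total onto published bands. A minimal sketch of this pattern, shown here with the BRCS cut-offs (function and constant names are ours):

```python
def band_score(item_scores, n_items, bands):
    """Sum Likert items (each scored 1-5) and return (total, band label).

    `bands` is a list of (upper_bound, label) pairs in ascending order.
    """
    if len(item_scores) != n_items or not all(1 <= s <= 5 for s in item_scores):
        raise ValueError(f"expected {n_items} item scores in the range 1-5")
    total = sum(item_scores)
    for upper, label in bands:
        if total <= upper:
            return total, label

# BRCS cut-offs as described above: 3-8 low, 9-11 medium, 12-15 high.
BRCS_BANDS = [(8, "low resilient coping"),
              (11, "medium resilient coping"),
              (15, "high resilient coping")]
```

The same function applies to the PCL-C by passing 17 items and the four severity bands in place of `BRCS_BANDS`.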
SECTION 8- Self-care (three questions): A checklist of eleven items assessed what respondents did to take care of their mental health during the COVID-19 pandemic. The items included talking with family and friends by phone, video chat and/or in person; spending time with pets; meditation; exercising indoors and outdoors; gardening and hobbies; learning new skills; and/or taking breaks from the news, with an option to specify other activities not on the list. Open-ended questions also asked respondents to list challenges faced during the COVID-19 pandemic and other strengths or resiliencies they tapped into that the multidimensional assessment did not capture. This section was internally developed.
SECTION 9- For PLHIV (nine questions): This section included standard questions about HIV clinical characteristics such as year of diagnosis and last viral load and CD4 count, status of current HIV medication refill, missing doses of HIV medications during the pandemic, and reasons for missed doses. Reasons for missed doses of HIV medications were adapted from the AIDS Clinical Trial Group instrument for missing medications [35].
Study procedures
Data were collected using SurveyMonkey®, an online survey platform. The survey links were configured so that responses were anonymous, participants could change their answers freely before choosing to submit, and the survey was not time-limited. One submission per electronic device was allowed. We created the questionnaire in English and translated it into four other languages (Arabic, French, Portuguese and Spanish), followed by back-translation into English by bilingual speakers to ensure accuracy, in line with World Health Organization recommendations [36]. Links were sent through emails and social media platforms to eligible participants: persons who were 18 years or older, could give consent, and could read the survey in any of the available languages. The survey opened on 29 June 2020 and closed on 31 December 2020.
Quantitative validation of survey instrument
Step 1: Completeness of the paper-based version. Seven members of the study team (the core team) reviewed a draft of the questionnaire to assess its comprehensiveness in capturing all the elements of mental health and wellbeing that the study addressed, and to check that the sequence of the questions was logical, that the questions were culturally appropriate and that they raised no ethical concerns. The review was conducted between 18 and 25 May 2020. At the end of this step, the list of medical conditions in section 2 was updated following the cited reference, a question about health insurance status was added in section 1, and two items were added to the self-care strategies in section 8.
Step 2: Clarity of the electronic format. Based on the feedback obtained in step 1, the revised questionnaire was converted to an electronic format. The seven core team members and seven other invited collaborators responded to the online survey, timing themselves to calculate how long the survey took to complete and to check clarity. The review took place between 28 May and 3 June 2020. Thirty-four unique comments were received from nine of the 14 evaluators.
Step 3: Content validity. Incorporating comments from the previous steps, the revised questionnaire was launched from 5 to 10 June 2020. Thirteen collaborators were invited to take the survey and fill out a form in which they rated the relevance of each item on a 4-point scale from least relevant (scored 1) to most relevant (scored 4). These ratings were used to calculate the content validity index (CVI) and to verify that per-item and overall values were at least 0.78 [37].
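The conventional computation behind the CVI threshold in step 3 takes the item-level CVI as the proportion of raters scoring an item 3 or 4 on the 4-point relevance scale, and the scale-level CVI (averaging method) as the mean of the item-level values. A minimal sketch (function names are ours):

```python
def item_cvi(ratings):
    """Item-level CVI: proportion of raters rating the item 3 or 4."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def scale_cvi(ratings_by_item):
    """Scale-level CVI (averaging method): mean of the item-level CVIs."""
    cvis = [item_cvi(r) for r in ratings_by_item]
    return sum(cvis) / len(cvis)
```

With 13 raters, for example, an item rated relevant (3 or 4) by 11 of them has a CVI of 11/13 ≈ 0.85, which clears the 0.78 threshold.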
Step 4: Test-retest reliability. 350 respondents to the final survey, who had agreed to be contacted again about the study, were invited to complete the questionnaire a second time within a week of their first submission. The final number of respondents included in the assessment of test-retest reliability was 227 (64.9%).
Qualitative validation of survey instrument
A qualitative exploration of the validity, specificity, and sensitivity of the multidimensional assessment was conducted in English. The qualitative validation was modelled on that conducted by Engel et al. [38]. A semi-structured topic guide was used, with open-ended questions supplemented by probes where necessary to explore participants' emerging accounts and perspectives. Participants were asked to reflect on their understanding of mental health and wellness, factors influencing it, and the possible impact of the COVID-19 pandemic on it. Next, a cognitive debriefing exercise was conducted to assess the face validity of the multidimensional assessment. Participants were provided with copies of the questionnaire and asked to consider whether the sections, questions and response options were appropriate and acceptable, interpreted accurately, and relevant to their lived experiences. Participants were asked the following questions: (1) What were your immediate thoughts about this questionnaire? (2) Was the wording of questions and response options clear? (3) Do you think the questionnaire was applicable to people affected by the COVID-19 pandemic? (4) Was it comprehensive? (5) Were there any aspects you think were missing? The interview also explored respondents' observations on practical and interpretative problems they experienced in completing the survey, including time spent filling in the questionnaire, challenges with completing it, motivation to complete it, what might cause variability in responses to the same question, and suggestions for further modification of the questionnaire.
In similar qualitative interviews, saturation has been reached with a sample of 12 persons when working with a homogeneous group [39]. The sample for the qualitative assessment was drawn from the respondents who participated in the test-retest reliability assessment, meaning respondents had been exposed to the questionnaire at least twice within two weeks. Participants in the test-retest survey were asked to volunteer for the qualitative interviews by sharing their email addresses for further contact. Volunteers were contacted by email and scheduled for either a phone interview or a teleconference interview based on their preference. Respondents were provided with soft copies of the questionnaire ahead of the interview. Those who opted for a Zoom interview were reimbursed $2.78 (N1000) for their data; phone interviewees were not reimbursed since the interviewer paid for the call.
Ethical consideration
The study protocol was submitted to the Institute of Public Health Research Ethics Committee, Obafemi Awolowo University, Ile-Ife, Nigeria (IPHOAU/12/1557). Additional ethical approvals were obtained from India (D-1791-uz and D-1790-uz), Saudi Arabia (CODJU-2006F), Brazil (CAAE N° 38,423,820.2.0000.0010) and the United Kingdom (13,283/10570). Participants were required to provide informed consent before filling in the online questionnaire. All data were irrevocably anonymised. We took measures to prevent the unintended collection of IP addresses to protect the privacy of participants and the confidentiality of the information they provided: IP addresses were instantly decoupled from the questionnaire, encrypted, and deleted at the end of the online survey by the survey tool, and the questionnaire did not install any tracking cookies on respondents' devices. The study was made available to the target population through a secure, SSL-encrypted link, and data in transit (while responding online) were encrypted using TLS cryptographic protocols. The collection tool was certified in compliance with the EU-U.S. Privacy Shield Framework and the Swiss-U.S. Privacy Shield. Because participants were anonymous and IP addresses were decoupled and encrypted automatically while the survey was online, it was not possible to provide further direct information to participants after completion of the questionnaire.
Data analysis
Quantitative analysis: For test-retest reliability of categorical responses, the Kappa statistic was calculated. When a score was obtained by summing individual item scores, we assessed the internal consistency of the items by calculating Cronbach's α and then calculated the intraclass correlation coefficient (ICC) of the overall score. We modified the Landis and Koch categorisation and classified Cronbach's α, ICC and the Kappa statistic for reliability and/or internal consistency as 0–0.39 (low), 0.40–0.80 (moderate) and 0.81–1 (excellent) [40].
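SPSS produces these statistics directly; as an illustration of what they measure, Cronbach's α for k items is α = k/(k−1) · (1 − Σσ²ᵢ/σ²ₜₒₜₐₗ), and Cohen's kappa is κ = (pₒ − pₑ)/(1 − pₑ), where pₒ is observed and pₑ chance agreement. A minimal numpy sketch (function names are ours):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)          # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def cohen_kappa(test, retest):
    """Cohen's kappa for paired categorical responses (test vs. retest)."""
    test, retest = np.asarray(test), np.asarray(retest)
    cats = np.union1d(test, retest)
    p_o = np.mean(test == retest)                   # observed agreement
    p_e = sum(np.mean(test == c) * np.mean(retest == c) for c in cats)
    return (p_o - p_e) / (1 - p_e)
```

Perfectly consistent items give α = 1, and identical test-retest responses give κ = 1, both falling in the 'excellent' band above.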
To assess the dimensionality, structure and relationships between the various items in the PSI behavior change section and psychosocial impact section, as well as the items of the self-care strategies, we used multiple correspondence analysis (MCA) of responses from the whole sample. These items were categorical variables, for which MCA is suitable. MCA is a data reduction technique similar to principal component analysis that can be used with nominal variables [41]. MCA can reveal patterns by representing the levels of variables as points in a cloud overlaid on a Euclidean space. For a two-dimensional solution, the points are plotted on X and Y axes; the proximity of these points to each other and their distribution along the axes describe emerging subgroups in a set of variables.
We used the variable principal normalization method to obtain a two-dimensional solution to enable data interpretation [42]. We calculated the discrimination measures and produced a joint plot (biplot) of category points, a type of scatter plot used to identify category relationships and subgroup membership; points (representing levels of variables) in the same quadrant belong to the same subgroup. We also displayed the discrimination measures plot, where the length and steepness of the lines indicate the importance of the variables; variables with unique characteristics lie far from the others and from the origin [43]. The data were analysed using IBM SPSS Statistics version 23.0.
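As an illustration of the mechanics behind these category points, MCA can be computed as a correspondence analysis of the one-hot indicator matrix via a singular value decomposition. The sketch below (function name and details are ours, and the scaling is not necessarily identical to SPSS's variable principal normalization) shows how perfectly associated category levels end up at the same point in the cloud:

```python
import numpy as np

def mca_category_points(variables, n_dims=2):
    """Minimal MCA: correspondence analysis of the indicator matrix.

    `variables` is a list of equal-length sequences, one per categorical
    variable. Returns category (column) coordinates for `n_dims` dimensions.
    """
    # Build the one-hot indicator matrix Z (respondents x category levels).
    cols = []
    for var in variables:
        var = np.asarray(var)
        for level in np.unique(var):
            cols.append((var == level).astype(float))
    Z = np.column_stack(cols)

    P = Z / Z.sum()                          # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)      # row and column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    coords = (Vt.T * s) / np.sqrt(c)[:, None]  # category principal coordinates
    return coords[:, :n_dims]
```

Levels that always co-occur have identical indicator columns and therefore identical coordinates, which is why proximity in the joint plot signals subgroup membership.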
Qualitative analysis: The transcribed interviews were imported into NVivo 11 to facilitate data coding, retrieval [44] and thematic analysis [45]. Thematic analysis consisted of the following stages: familiarisation with the data (reading the transcripts); generating initial codes (organising data into meaningful groups); searching for themes (sorting the codes into potential themes); reviewing themes (refining themes); and defining and naming themes (developing a thematic map of the data and describing the content of each theme) [45]. The thematic analysis identified domains that participants perceived as relevant to the focus of the survey; these domains were then compared with the content of the survey. The comments made by participants on the nine sections of the survey were also summarised.