Validating Digital Tools for Remote Clinical Research

Image: Rawpixel Ltd via Flickr (CC-BY), https://www.flickr.com/photos/147875007@N03/45789229201/


It is becoming increasingly popular to conduct psychological research remotely so that researchers can study cognition and behavior when and where they naturally occur. Remote methods also increase accessibility, as participants are not required to travel to testing locations or meet face-to-face with researchers. This has been an especially important consideration during the SARS-CoV-2 pandemic. However, it is still unclear how best to validate digital tools for remote clinical research.

Researchers at Cambridge Cognition and the University of Bristol have been thinking about various approaches to developing digital assessments for remote clinical research, which they discuss in a paper published in the Journal of Medical Internet Research.

When creating any new psychological assessment, it is important to ensure that it accurately measures the concept, behavior, or symptom it is intended to measure. Part of this validation process is ruling out the possibility that changes in the outcome of interest are the result of external influences. Traditionally, when assessments are delivered in the laboratory, researchers establish reliability through test-retest procedures: administering the assessment to the same person, in the same environment, at the same time of day on two different days should produce two similar scores. Researchers judge an assessment valid if administering it alongside a gold-standard assessment to the same person under controlled conditions produces scores that are consistent with each other.
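To make these two checks concrete, the Python sketch below computes a test-retest correlation and a concurrent-validity correlation on simulated scores. The data, sample size, and choice of Pearson correlation are illustrative assumptions, not the authors' prescribed procedure.

# Minimal sketch of classical reliability and validity checks.
# All data here are simulated for illustration only.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)

# Hypothetical scores for 30 people tested twice under identical conditions.
day1 = rng.normal(100, 15, size=30)
day2 = day1 + rng.normal(0, 5, size=30)        # same trait, small measurement noise

# Hypothetical scores on an established gold-standard assessment.
gold = 0.9 * day1 + rng.normal(0, 8, size=30)

retest_r, _ = pearsonr(day1, day2)             # test-retest reliability
validity_r, _ = pearsonr(day1, gold)           # concurrent validity
print(f"test-retest r = {retest_r:.2f}, concurrent validity r = {validity_r:.2f}")

In practice, an intraclass correlation is often preferred for test-retest reliability because it is sensitive to systematic shifts as well as rank order, but the simple correlations above convey the logic.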

“If an individual does not score similarly on an assessment taken at different time points, it does not necessarily mean that the assessment is unreliable. For example, mood can vary considerably as a function of time. Therefore, when measuring mood, or a phenomenon that is sensitive to mood, there may be considerable difference in measurements taken at different time points. Similarly, demonstrating the validity of an assessment in a controlled laboratory environment does not necessarily tell us about its validity in the real world,” says Dr Francesca Cormack, study author and Director of Research & Innovation at Cambridge Cognition.

To increase the ecological validity (ie, generalizability to real-life situations) of research findings, web-based data collection has grown in popularity over the years. As long as participants can access a computer with an internet connection and can spare at least 5 minutes, they can complete many types of cognitive tasks or questionnaires outside of the laboratory. Because researchers have less control over the environment in which web-based assessments are completed, a common validation approach has been to compare performance on the same assessment administered on the web and in the laboratory.
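One common way to run such a comparison, sketched below on simulated data, is to pair each participant's web score with their laboratory score, test for a systematic shift between settings, and check that the two settings rank participants similarly. This is an illustration of the general approach, not the exact analysis of any particular validation study.

# Illustrative web-vs-laboratory comparison on simulated paired scores.
import numpy as np
from scipy.stats import ttest_rel, pearsonr

rng = np.random.default_rng(0)
lab = rng.normal(100, 15, size=40)             # hypothetical laboratory scores
web = lab + rng.normal(2, 6, size=40)          # hypothetical web scores, slight shift

t, p = ttest_rel(web, lab)                     # systematic difference between settings?
r, _ = pearsonr(web, lab)                      # do settings rank people similarly?
print(f"paired t = {t:.2f} (p = {p:.3f}); cross-setting r = {r:.2f}")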

To capture more granular changes in behavior over time and across settings, brief assessments (those that take seconds or a few minutes to complete) can be delivered on devices that individuals carry on their person (eg, smartphones and smartwatches). However, it is more challenging to systematically evaluate assessments administered in this way, as the research environment (ie, time and space) is uncontrolled. Although it is possible to compare outcomes from a high-frequency field assessment with outcomes from a low-frequency laboratory assessment, the contexts in which the two sets of data are collected are very distinct from one another.

Therefore, the authors propose that a controlled environment may be neither necessary nor appropriate for validating such flexible data collection tools. An alternative is to compare outcomes from one high-frequency field assessment with outcomes from another, both administered in the same temporal and spatial context. “In the absence of controlled laboratory conditions, researchers must instead rely on collecting information on the respondent context and accounting for this in further analyses,” says Dr Gareth Griffith, study author and Senior Research Associate at the University of Bristol.
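One way to account for respondent context in further analyses, sketched below on hypothetical data, is a linear mixed-effects model: repeated assessments are nested within participants, and context variables (here an assumed hour-of-day and an at-home indicator) enter as covariates. The variable names, data, and model specification are assumptions for illustration, not the authors' prescribed analysis.

# Sketch: adjusting high-frequency field data for respondent context
# with a linear mixed-effects model. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_people, n_sessions = 20, 10

df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_people), n_sessions),
    "hour": rng.integers(8, 22, size=n_people * n_sessions),
    "at_home": rng.integers(0, 2, size=n_people * n_sessions),
})
# Simulated score: stable person-level differences plus context effects and noise.
person = rng.normal(0, 5, size=n_people)[df["participant"]]
df["score"] = 100 + person - 0.3 * df["hour"] + 2 * df["at_home"] \
              + rng.normal(0, 3, size=len(df))

# Random intercept per participant; fixed effects capture measurement context.
model = smf.mixedlm("score ~ hour + at_home", df, groups=df["participant"])
print(model.fit().summary())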

In the paper, the authors also discuss how to determine whether an assessment is suitable for remote research, and more specifically for high-frequency testing, and the importance of striking the right balance between the feasibility of data collection and the validity of the resulting data.

“Using brief assessments allows researchers to collect data frequently, perhaps a couple of times per week or per day, without placing too much burden on participants. However, this will depend on the specific assessment and the population being studied. Someone with a medical condition might have a much lower threshold for the number of assessments they can comfortably complete than a healthy control would. This is not only an ethical issue but can also negatively impact participant engagement and data quality. Ideally, we want assessments to be as brief as possible, but removing components of an assessment might weaken its validity. These are important tools, but as with any tool, we need to ensure that they are used appropriately,” says Dr Jennifer Ferrar, study author and Senior Research Associate at the University of Bristol.

About Cambridge Cognition
Cambridge Cognition is a neuroscience technology company developing digital health products to better understand, detect, and treat conditions affecting brain health. The company’s software products assess cognitive health in patients worldwide to improve clinical trial outcomes, identify and stratify patients early, and improve global efficiency in the pharmaceutical and health care industries.
For further information, visit https://www.cambridgecognition.com/cantab/.


Original Article
Ferrar J, Griffith GJ, Skirrow C, Cashdollar N, Taptiklis N, Dobson J, Cree F, Cormack FK, Barnett JH, Munafò MR
Developing Digital Tools for Remote Clinical Research: How to Evaluate the Validity and Practicality of Active Assessments in Field Settings
J Med Internet Res 2021;23(6):e26004
URL: https://www.jmir.org/2021/6/e26004/
doi: 10.2196/26004
