Cognitive interviewing (CI), or cognitive debriefing (CD), is now recognised as a significant part of the development process for clinical outcome assessment (COA) measures and of the assessment of their linguistic validity and cross-cultural equivalence. Despite this, studies reporting the application of CI in COA development generally lack a clearly defined rationale for the selection of technique, analysis and reporting.
In this post we discuss some of the key issues we believe need to be considered in the application of CIs as part of the development of a COA.
In their 2005 paper, Wild et al1, while acknowledging the lack of consistency in the term “cognitive debriefing”, go on to define it as:
“… testing the instrument on a small group of relevant patients or lay people in order to test alternative wording and to check understandability, interpretation, and cultural relevance of the translation…”
Although the FDA (2009) Guidance for Industry2 states that documentation to support content validity should include, amongst other information, cognitive interview summaries or transcripts, it makes no reference to the different techniques and their strengths and limitations. Irwin et al3 in their PROMIS study report the questions used during the interview but offer no further information on the rationale for technique selection. Furthermore, the ISPOR PRO Good Research Practices Task Force report (part 2)4, while providing some guidance on the application of cognitive debriefing, also lacks reference to the different techniques and their respective strengths and limitations.
Should COA developers and agencies undertaking cross-cultural adaptation be more specific in reporting the ‘why’ behind their selection of CI technique?
Defining the “why” is particularly important because there are two contrasting objectives in the use of cognitive interviews, which need to align with the objectives of the research: the Reparative and the Descriptive Approach.
The Reparative Approach tends to be the default, representing an “inspect-and-repair” model5 in which the objective is to identify and rectify flaws in an item and reduce response error; in other words, to evaluate the functioning of the questionnaire item.
In contrast, the Descriptive Approach treats the interview as a window on psychological functioning, the aim being a more general understanding of how the question or item works as a measure of a particular concept. Taking, for example, a question asking participants to rate their general health over a given time period, the focus of the cognitive interviews would be to probe participants on their understanding of the concept of general health and on why they rate their health as, say, excellent or poor.
The Reparative and Descriptive Approaches should not be seen as a dichotomy, as studies may contain elements of both and each can be utilised at different points in the development process. However, it must be recognised that there are conceptual differences between the two objectives, and these need to be made explicit as part of the development and reporting process.
Willis5 considers that the interviewing technique should be selected according to the intended analysis strategy, rather than decided after the fact. However, the rationale behind the selection of a specific technique is rarely described in the COA development literature.
Selecting the appropriate CI technique
There is a range of techniques for systematically eliciting and recording individuals’ comments on items as they respond to a questionnaire. The major techniques include concurrent probing (questions asked during each item response), retrospective probing (questions asked after all item responses), and concurrent verbalization (‘think aloud’ during each item response). Each technique has its limitations: for example, Think Aloud places a high burden on participants, while concurrent or retrospective verbal probing by the researcher carries a greater potential for interviewer bias.
Ericsson and Simon6 preferred concurrent verbalization because, according to their model, concurrent probing is more disruptive to task performance. Willis7 acknowledged that concurrent probes may produce ‘local reactivity’ (where probes about an item encourage respondents to identify spurious problems with the item) and ‘extended reactivity’ (where probes about one item encourage respondents to identify spurious problems with other items).
Analysing the CI data
When it comes to the analysis of data obtained from CI as part of COA development, again little information about appropriate analysis is available. Willis5 describes five analysis strategies:
- Text Summary: dominant themes, conclusions and problems are described in words
- Cognitive Coding: focus on respondent behaviour
- Question Feature Coding: focus on the behaviour of the evaluated survey questions
- Theme Coding: creation of labels to describe observed phenomena
- Pattern Coding: discovery of patterns and associations in participant responses
Each of the five strategies has its strengths and drawbacks. For example, Text Summaries are rich in detail and quick to produce, but the sheer volume of data is a limitation. Cognitive Coding enables quantification of the data and has a theoretical underpinning, but results in a loss of information through coding. Question Feature Coding is suitable for quantification, as coding frequencies can be reported, but coding again results in information loss. Theme Coding can fully describe question functioning, enabling decisions to be made on whether the question provides the required information. Pattern Coding is suitable for both Reparative and Descriptive purposes and provides an in-depth focus on question functioning, but requires considerable effort to map all the patterns in the data.
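To make the quantification step concrete, the sketch below tallies how often each problem code was assigned across a set of coded interviews, the kind of frequency report that the coding-based strategies support. The code labels and the data are purely illustrative, not drawn from any real study or coding scheme.

```python
from collections import Counter

# Hypothetical problem codes assigned to one item during five interviews;
# the labels and data are illustrative only, not from a real study.
coded_interviews = [
    ["vague_term", "double_barrelled"],
    ["vague_term"],
    ["recall_period_unclear", "vague_term"],
    [],  # no problems identified in this interview
    ["double_barrelled"],
]

# Tally how often each code was assigned across interviews: the kind of
# frequency report that can support a Reparative decision about an item.
frequencies = Counter(code for interview in coded_interviews for code in interview)

for code, n in frequencies.most_common():
    print(f"{code}: {n}/{len(coded_interviews)} interviews")
```

A report like this makes it easy to see which flaws recur across respondents, though, as noted above, reducing verbal data to codes inevitably loses information.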
Other aspects of data analysis in need of consideration
Having decided on the analytical approach, other issues needing attention include:
- How to aggregate data across interviews
  - Should interviewers conduct their analyses independently, then compare and combine?
  - Should there be a collaborative approach from the start?
- The impact of sample size on what can be done analytically
- Level of analysis
  - Should there be multiple levels of analysis, e.g. within interviews, across interviews and across subgroups?
So which CI and analysis strategy is best?
When this question is asked, the response should be “Best for what?”5 It is worth remembering that multiple approaches can be used in a single investigation, and for different elements of the questionnaire. Availability of resources and time constraints are also contributory factors in selecting the analysis strategy. In application, then, it is not one-size-fits-all.
Writing the Cognitive Interviewing Report
In reporting studies incorporating CIs, it is essential that critical detail is included and that no key element of the study is overlooked in the write-up.
We agree with Boeije and Willis8 that, due to the different ways in which cognitive interviews can be conducted, it is not practical to set a single standard for reporting. Nevertheless, we believe that at a minimum the write-up should include the following:
- Background on the development of the COA
- Rationale for determining the sample size
- Rationale for selecting the interviewing technique
- Level of training and expertise of the interviewers
We would suggest that researchers use the Cognitive Interviewing Reporting Framework (CIRF) of Boeije and Willis8 as a guide.
In this post we have attempted to draw attention to some of the key aspects of conducting, analysing and reporting cognitive interviews as part of the development process for Clinical Outcome Assessments. These include (a) clarifying the study objective, i.e. Reparative v Descriptive; (b) selecting the appropriate technique to meet the study objectives, e.g. Think Aloud v verbal probing; (c) choosing an analytical strategy, e.g. Text Summary or Theme Coding; and (d) writing the report, including what we consider essential elements not to be overlooked at the reporting stage.
- Wild D, et al. Principles of Good Practice for the Translation and Cultural Adaptation Process for Patient-Reported Outcomes (PRO) Measures: Report of the ISPOR Task Force for Translation and Cultural Adaptation. Value Health. 2005;8(2):94-104.
- U.S. Department of Health and Human Services, Food and Drug Administration. Guidance for Industry: Patient-Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims. December 2009.
- Irwin DE, et al. Cognitive interviewing methodology in the development of a pediatric item bank: a patient reported outcomes measurement information system (PROMIS) study. Health Qual Life Outcomes. 2009;7:3.
- Patrick DL, et al. Content validity—establishing and reporting the evidence in newly developed patient-reported outcomes (PRO) instruments for medical product evaluation: ISPOR PRO Good Research Practices Task Force report: part 2—assessing respondent understanding. Value Health. 2011;14:978-988.
- Willis GB. Analysis of the Cognitive Interview in Questionnaire Design. Oxford University Press; 2015.
- Ericsson KA, Simon HA. Verbal reports as data. Psychological Review. 1980;87(3):215-251.
- Willis GB. Cognitive Interviewing: A Tool for Improving Questionnaire Design. London: Sage; 2005.
- Boeije H, Willis G. The Cognitive Interviewing Reporting Framework (CIRF): Towards the harmonization of cognitive testing reports. Methodology. 2013;9:87-95.
Health Outcomes Insights helps healthcare agencies and pharmaceutical companies, across a range of conditions including diabetes, get targeted answers on patient behaviour whenever health outcomes are part of a programme. Visit www.healthoutinsights.com