Congratulations to Dr. Jay Verkuilen who recently presented on “Detecting Respondent Vandalism in Online Panel Data: Why It Matters and Possible Methods to Address It.”
This presentation was part of the “Cultural Psychology Approaches to Studying Trauma and Posttraumatic Stress” Symposium at the International Society for Traumatic Stress Studies 35th Annual Meeting, Boston, MA, November 14, 2019. Co-authors include Educational Psychology PhD student Zebing Wu and Dr. Andrew Rasmussen of Fordham University.
Many behavioral science researchers rely on self-administered internet survey data, particularly data gathered via platforms such as Qualtrics and MTurk. These data have many potential advantages compared to investigator-run internet or paper surveys. However, they also have important potential issues, the most notable being the presence of varying types of non-truthful, unmotivated, or “mischievous” respondents mixed in with truthful ones, which at best adds noise and at worst biases substantive conclusions. Strategies such as check items or social desirability scales may be useful for detecting unmotivated respondents. However, they add length and respondent burden, may anger motivated respondents, and are often transparent to “mischievous” respondents. With this in mind, we examine the screening strategy first proposed by Espelage and Robinson (2011) and Robinson-Cimpian (2014) for survey research on bullying and LGBTQ youth, respectively. We expand it using a finite mixture model approach based on important features of the data to identify potentially problematic responses. We illustrate using Qualtrics panel responses of Mexican and U.S. respondents (N = 2087) to trauma exposure and PTSD questionnaires. Nearly 10% of cases were identified as “potentially aberrant” based on reports of trauma frequency, though these fall into distinct subtypes. Discussion focuses on the implications for online collection of trauma-related data.
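To give a flavor of the finite-mixture screening idea, here is a minimal sketch in Python using scikit-learn. It is not the authors' actual model: the data are simulated, and the feature set (per-respondent trauma-event frequency counts), the two-component Gaussian mixture, and the "smallest component = potentially aberrant" rule are all illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Simulated features per respondent: reported trauma-event frequencies on
# three items. Most respondents report a few events; a small "aberrant"
# group endorses implausibly many. All names and rates are illustrative.
n_typical, n_aberrant = 900, 100
typical = rng.poisson(lam=2.0, size=(n_typical, 3)).astype(float)
aberrant = rng.poisson(lam=15.0, size=(n_aberrant, 3)).astype(float)
X = np.vstack([typical, aberrant])

# Two-component finite mixture: one component intended to capture typical
# response patterns, the other potentially aberrant ones.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)

# Treat the smaller component as "potentially aberrant" -- a screening
# flag for follow-up inspection, not proof of respondent vandalism.
sizes = np.bincount(labels, minlength=2)
aberrant_label = int(np.argmin(sizes))
flagged = labels == aberrant_label
print(f"Flagged {flagged.mean():.1%} of respondents as potentially aberrant")
```

In practice one would choose the number of components and the flagging rule from the data (e.g., via BIC and inspection of component means) rather than fixing two components in advance, since the abstract notes that the aberrant cases themselves fall into different subtypes.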