Q methodology traditionally involves the sorting of stimuli, such as textual phrases or images, which are then analyzed with statistical software. Coupled with these quantitative techniques, Q methodology often involves in-depth interviews and interpretative methods. In spite of these mixed-methods strengths, scholars are increasingly turning to internet-based platforms for administering Q studies, which offer access to a larger pool of potential participants. In this article, we examine issues related to participant engagement and the potential impact of low-quality sorts on data reliability. These issues are particularly germane for Q methodology studies administered on online platforms, where the distance between researcher and participant is increased. Our analysis involves the generation of random Q sorts as a proxy for low-quality data and explores the influence of this introduced low-quality data on factor loadings and interpretation. In this exploratory study, we find that the introduction of even a small number of low-quality sorts can substantially alter factor loadings; in particular, these random sorts change the composition of Q sorts that load on less dominant “minority” factors and, ultimately, the interpretation of those factors. Based on these findings, we propose an approach that allows Q methodology researchers to probe the quality of their data to detect low-quality sorts, and we offer suggestions for improving participant engagement in online studies.
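To illustrate the random-sort procedure described in the abstract, the following is a minimal sketch in Python, not the authors' actual code. It generates random Q sorts under a forced quasi-normal distribution; the 9-column grid and 36-statement Q set are hypothetical assumptions, not the design used in the study.

```python
import numpy as np

def random_q_sort(column_heights, rng):
    """Generate one random Q sort: a random assignment of statements
    to grid positions under a forced quasi-normal distribution.

    column_heights maps each scale value (e.g., -4..+4) to the number
    of statements allowed in that column.
    """
    values = np.concatenate([
        np.full(height, value)
        for value, height in column_heights.items()
    ])
    rng.shuffle(values)  # a random permutation = a random placement of statements
    return values

# Hypothetical 9-column grid for a 36-statement Q set (an assumption,
# not the distribution used in the study).
grid = {-4: 2, -3: 3, -2: 4, -1: 6, 0: 6, 1: 6, 2: 4, 3: 3, 4: 2}

rng = np.random.default_rng(seed=1)
low_quality_sorts = [random_q_sort(grid, rng) for _ in range(3)]

# Such random sorts can then be appended to the matrix of genuine sorts
# before factor analysis to observe their effect on factor loadings.
```

Because a random sort is a uniform random permutation of the statement values, its expected correlation with any genuine sort is zero, which is consistent with the finding that even a few such sorts can redistribute loadings on the weaker factors.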

Additional Metadata
Keywords data quality, internet surveys, online methods, participant engagement, Q-sort reliability
Persistent URL dx.doi.org/10.15133/j.os.2017.011
Journal Operant Subjectivity
Citation
Dairon, M., Clare, S., & Parkins, J. R. (2017). Participant Engagement and Data Reliability with Internet-Based Q Methodology: A Cautionary Tale. Operant Subjectivity, 39(3/4), 46–59. doi:10.15133/j.os.2017.011