Below is a post written by Keith A. Markus, Ph.D. He is currently a Professor in the Psychology Department at John Jay College of Criminal Justice. He serves on the graduate faculty of the CUNY Graduate Center in the Quantitative Psychology subprogram of the Educational Psychology Doctoral Program and in the Industrial and Organizational Psychology and Forensic Psychology subprograms within the Psychology Doctoral Program.
In this blog post, I would like to briefly describe and then contextualize my recent article on formative measurement (Markus, 2018) in a special issue of Methodology devoted to validity in survey research methods (Menold, Bluemke & Hubley, 2018).
Formative measurement models treat the focal construct, the variable being measured, as an effect of a set of scale items rather than as their cause. The idea traces back to Blalock (1963) but was popularized in the context of structural equation modeling by Bollen and Lennox (1991) and Edwards and Bagozzi (2000), the latter of whom gave it the name 'formative measurement'. The canonical example is socio-economic status in relation to items asking about income and educational attainment. In this case, one readily imagines additional education impacting socio-economic status; one does not as easily envision an independent manipulation of socio-economic status somehow impacting level of education.
In the decades since those articles appeared, very little scale theory has developed for formative scales. By comparison, the available theory for standard reflective scales far exceeds what one can cover in a single-semester introductory course. In the article, I identified three conceptual impediments to the development of scale theory for formative scales. First, if one redefines measurement as something entirely internal to a model, as Ken Bollen has, then one loses touch with the practical aspects of testing that drive theoretical development. Second, questions about how to evaluate formative scales typically receive answers in terms of the notion of "conceptual unity," posed as if it were transparently self-explanatory when in fact it raises more questions than it answers. Finally, there is a conceptual slippage between saying that level of education (outside a scale) causes socio-economic status (outside a scale) and saying that a response to a level-of-education item (inside a scale) causes one's level of socio-economic status (outside a scale); the ambiguity between these two interpretations constitutes the third conceptual impediment. The remainder of the article sketches how scale theory for formative scales might unfold if one were to set aside these three impediments.
I am grateful to Anita Hubley and the other editors for reaching out to suggest that I contribute this article to the special issue. It gave me an opportunity to develop more systematically material from briefer commentaries on formative measurement (Markus & Borsboom, 2013; Markus, 2014, 2016). The literature remains very much polarized on the issue of formative measurement. At one pole, advocates express unbounded enthusiasm with what strikes me as inadequate critical evaluation. At the other, critics strike me as overly eager to reject the idea of formative measurement wholesale, without giving advocates an opportunity to modify formative measurement theory in light of criticism. Neither stance provides a fertile intellectual environment for developing scale theory for formative scales. My article reflects an effort to stake out a middle ground that remains open to the possibility of formative scales but seeks to hold such scales to the same standards as reflective scales. I hope that a new generation of scholars might take up the challenge of developing such theory in detail.
It seems to me that the literature on formative measurement offers an object lesson in the broader importance of maintaining constructive engagement with feedback during any kind of theory development. Feedback is the engine that powers progress. If one rejects feedback too quickly, labeling anyone who offers criticism an enemy or opponent and construing one's task as defending the existing proposal against any and all objections, then one loses the ability to engage critically with that feedback and capitalize on it to improve the theory. Likewise, if one dismisses arguments in favor of a theory and prematurely rules it out as a live option, one ends up in much the same place. Effectively harnessing the power of feedback requires a middle stance, open to criticism but also open to potential strategies for meeting that criticism. In my view, the ability to maintain this delicate balance between productive engagement and critical reflection is what separates productive intellectual exchange from mere polemics. No theory is a finished product beyond the reach of constructive criticism, but at the same time one can only reach a satisfactory critical evaluation of a theory by giving it space to respond and adapt in the face of criticism.
By Keith A. Markus, Ph.D., John Jay College of Criminal Justice; Graduate Center, CUNY
Blalock, H. M., Jr. (1963). Making causal inferences for unmeasured variables from correlations among indicators. American Journal of Sociology, 69, 53-62.
Bollen, K. A., & Lennox, R. (1991). Conventional wisdom on measurement: A structural equation perspective. Psychological Bulletin, 110, 305-314. doi: 10.1037/0033-2909.110.2.305
Edwards, J. R., & Bagozzi, R. P. (2000). On the nature and direction of relationships between constructs and measures. Psychological Methods, 5, 155-174. doi: 10.1037/1082-989X.5.2.155
Markus, K. A. (2014). Unfinished business in clarifying causal measurement: Commentary on Bainter and Bollen. Measurement: Interdisciplinary Research and Perspectives, 12, 146-150. doi: 10.1080/15366367.2014.980106
Markus, K. A. (2016). Causal measurement models: Can criticism stimulate clarification? Measurement: Interdisciplinary Research and Perspectives. Published online September 30, 2016. doi: 10.1080/15366367.2016.1224965
Markus, K. A. (2018). Three conceptual impediments to developing scale theory for formative scales. Methodology, 14, 156-164. doi: 10.1027/1614-2241/a000154
Markus, K. A., & Borsboom, D. (2013). Frontiers of test validity theory: Measurement, causation, and meaning. New York: Routledge.
Menold, N., Bluemke, M., & Hubley, A. M. (2018). Validity: Challenges in conception, methods, and interpretation in survey research. Methodology, 14, 143-145. doi: 10.1027/1614-2241/a000159