Validity versus reliability in strictly structured interviews

  • Posted on: 22 April 2018
  • By: benfell

Back when I was applying for the Ph.D. program in Transformative Studies at California Institute of Integral Studies, I was waiting for a phone call to be interviewed for that program. I was still upset about what had happened to my Communication program at California State University, East Bay, and was bothered "that in phenomenological research, we must be careful to ask all participants exactly the same questions, as if they were replaceable parts of a machine, presuming that each communication with a participant will be understood by the participant in the same way."1

This never really left my head, even as I spent nearly two years in that program, concluded it was the wrong program for me, tried and failed to transfer to the Social and Cultural Anthropology program at the same institution, instead entered the Human Science program at Saybrook University and completed my Ph.D. there. And it resurfaces as I contemplate the possibility of involving myself in quantitative research (not my strong point) doing, you guessed it, strictly structured interviews.

So as I sat down to write this, I looked up that old blog entry, and was surprised to see that I had raised that question in the context of phenomenological research,2 because the approach I was objecting to seems distinctly positivist in style.

Basically, I was appealing to validity, which is one of two unfortunately competing values in positivist research. Oversimplified, this is the idea that your study should make sense. There are a number of tests for validity, where failures are often labeled fallacies. And it is a fallacy to assume that each communication with each participant will be understood by each participant in exactly the same way. We each inhabit distinct mental spaces which others are unable to penetrate. Communication is necessarily an attempt to bridge the gaps between those spaces, and we engage in it more in hope than in certainty that we will succeed.

But validity isn't the only value in play here. The other, which is especially important in positivism and in academia, is reliability, which is largely about reproducibility. If you conduct a study and publish the results, I should be able to repeat that study and obtain the same results. Reliability is the reason quantitative work tends to be favored, which is why, although it's not my favorite approach, I have to consider working in quantitative research.

The fly in the ointment is this: suppose that, in conducting your study, you deviate from the strictly structured format, explaining questions to participants so that they will understand them as they're meant to be understood (and I'm assuming here that the assistants actually conducting the interviews do themselves understand the questions correctly). When I come along and repeat that study, I might neither 1) share your understanding of the questions, nor 2) match your success in reproducing that understanding with my participants. This undermines the reproducibility of your study.

One of my favorite attacks on validity is about operationalization. This is the process where, in quantitative research, we choose measurable variables to reflect the actual phenomena we're interested in. There's a lot in the world, especially in the human world, that doesn't reduce well quantitatively, so this is often a ripe line of attack. In quantitative strictly structured interviews, each question aims to assign a value to a variable. The question's validity reflects the degree to which the question actually asks what we're interested in. And in a quest for validity, we would indeed endeavor to ensure that respondents understand the question the way we mean them to.

But because we place such a high value on reliability, really even higher than the value we assign to validity (which is why operationalization can be a favorite line of attack), we stick strictly to the prepared question. It is, to say the least, an uncomfortable compromise.