Beware of (Bad) Science!

For many decades the empirical social sciences (including sociology, psychology, and educational research) have struggled to be recognized as sciences and to have their findings applied in practical contexts (politics, education, therapy, etc.).

Alas, much social “science” research does not meet the scientific standards that we know from the natural sciences. This is not a problem of the different objects of study but a problem of understanding what science means. The empirical social sciences suffer from an enormously exaggerated emphasis on precision, which is justified neither by the needs of practice nor by common sense. Obviously, many empirical researchers believe that science is nothing more than precision. The dominant theories of psychological and educational measurement (Classical Test Theory, Item Response Theory) were created solely to enhance the precision of measurement, so that some differences can be detected, however tiny and insignificant they might be.

But these theories have no answer to two much more important questions:

  • The question of validity: Does an instrument actually measure what one wants to measure? Precision means nothing if we measure the wrong things.
  • The question of effect size: Are the differences large enough to be considered practically significant? (See the sketch after this list.)
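
To make the contrast between the two meanings of “significance” concrete, here is a minimal sketch in Python using simulated, purely hypothetical data (the group means, spread, and sample size are my own illustrative assumptions, not taken from any study). With a huge sample, a negligible true difference comes out “statistically significant,” while the effect size (Cohen's d) shows it is practically meaningless.

```python
# Illustrative sketch: statistical vs. practical significance with simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 50_000                                            # huge sample per group
group_a = rng.normal(loc=100.0, scale=15.0, size=n)
group_b = rng.normal(loc=100.5, scale=15.0, size=n)   # tiny true difference

# Statistical significance: p-value of an independent-samples t-test
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Practical significance: Cohen's d (mean difference in units of the pooled SD)
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
cohens_d = (group_b.mean() - group_a.mean()) / pooled_sd

print(f"p-value:   {p_value:.2e}")   # far below 0.05: "significant"
print(f"Cohen's d: {cohens_d:.3f}")  # around 0.03: practically negligible
```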

Instead of giving clear answers to these questions, researchers often offer us a multitude of helpless speculations and opinions.

In the face of this bleak situation, it is astonishing that public and private funding agencies give money only if researchers, schools, and teachers use practices that fulfil the dubious criteria of current research. These criteria have little to do with good science but much to do with money. In order to get “significant” results even when there are hardly any real differences, researchers must use huge samples, which is expensive. And funders require randomized trials, which are mostly unnecessary but cost a great deal of money for compensating control groups. Not all researchers have access to that kind of money.
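
The link between effect size, sample size, and cost can be made explicit with the standard power formula for a two-sample comparison. The following sketch is my own illustration (not from the article) of how the required sample, and hence the budget, explodes as the effect to be detected shrinks.

```python
# Approximate sample size per group for a two-sided two-sample test:
#   n per group ≈ 2 * ((z_{1-α/2} + z_{1-β}) / d)^2
from scipy.stats import norm

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate participants per group needed to detect effect size d."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return int(round(2 * ((z_alpha + z_beta) / d) ** 2))

for d in (0.8, 0.5, 0.2, 0.05):   # large, medium, small, negligible effects
    print(f"Cohen's d = {d:4.2f} -> about {n_per_group(d):>5} participants per group")
```

With these conventional settings, a large effect (d = 0.8) needs roughly 25 participants per group, a small effect (d = 0.2) roughly 400, and a negligible effect (d = 0.05) over 6,000 per group.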

Prof. Stanley Pogrow of San Francisco State University has analyzed the impact of these funding practices of the Department of Education and its institutions on educational practice in the United States. His findings are devastating:

“The only thing worse than practitioners ignoring research that has truly demonstrated practices to be effective is for the research community to certify practices as being effective that are not. It is even worse when the research community encourages government to disseminate, or encourage/require practitioners to use, such practices. Alas, the methodology prized by the top research journals and government panels for identifying effective practices makes assumptions and adjustments that introduce artificialities and errors into the analysis.”

The measurement specialist Prof. Gene Glass, who developed meta-analysis, has explicitly endorsed Pogrow’s analysis.

Anybody who wants to read about these topics in German has been able to do so for some time:
https://www.uni-konstanz.de/ag-moral/pdf/Lind-2014_Effektstaerke-Vortrag.pdf

I wrote this paper for researchers and practitioners. Unfortunately, it has not yet resonated in empirical social science research, but only in marketing research, whose customers do not want to be fooled.

Take science seriously, but beware of bad science! If you want to tell good science from bad, you can now inform yourself.
