The replication crisis in psychology, and particularly in social psychology, has created some anxiety and uncertainty in all of us. Probably hardest hit are our graduate students, who are on the front line of our research efforts, who are collecting most of the data, and who have the most to lose from changes that make publications more difficult to get. We need to do everything we can to help our young scholars.
My concerns come from a different angle – reporting our results. I am the author of a social psychology textbook, published by Flat World Knowledge. The book is available in many formats, all of which are cheaper than social psychology textbooks offered by other publishers. Flat World also provides a variety of other services to instructors, including the ability to monitor student interactions with the digital version of the book.
In any case, I am now in the process of revising the text and I need to determine how to make the reporting of social psychological research more dynamic and more in line with current thinking about the robustness of our effect sizes.
When I teach social psychology, I frequently offer my guess about an effect's size when I present it. For instance, I might say (winging it, obviously) that the effect of similarity on liking is "pretty big," but the effect of unconscious priming is "pretty small." This helps students get the idea that not all effects are equal, and it also helps answer their frequent questions about how two opposing effects might work together. For instance, if you are asked to express your liking for someone who is very similar to (versus very different from) you, whether you have recently been unconsciously primed with positive or negative words might not make much difference in comparison to the similarity dimension.
So I'm thinking that I will need to report as many effect sizes (probably as correlations) as possible. How I will get them is difficult to know, however. I'd appreciate any thoughts you might have about this issue.
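One practical wrinkle in reporting everything as correlations is that many published effects are given as Cohen's d instead. For equal group sizes, d can be converted to r with the standard formula r = d / sqrt(d² + 4). Here is a minimal sketch of that conversion; the function name is mine, and the d values are only the conventional "small" and "large" benchmarks, not estimates from any particular study:

```python
import math

def d_to_r(d: float) -> float:
    """Convert Cohen's d to a correlation r, assuming equal group sizes."""
    return d / math.sqrt(d ** 2 + 4)

# A conventionally "large" effect of d = 0.8 corresponds to r of about 0.37;
# a conventionally "small" effect of d = 0.2 corresponds to r of about 0.10.
print(round(d_to_r(0.8), 2))  # 0.37
print(round(d_to_r(0.2), 2))  # 0.10
```

Putting guesses like "pretty big" and "pretty small" on a single r scale this way would at least let students compare effects directly, even when the original reports used different metrics.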