Archive | Crisis in Research

Shouldn’t we know if the Implicit Association Test is valid before we hype it?

2 Feb

The normally careful Association for Psychological Science has a piece on its website about the Implicit Association Test. Buried in the article is this short paragraph:

Opinions on the IAT are mixed. Controversy about the test was evident in a 2013 meta-analysis by APS Fellows Fred Oswald and Phillip E. Tetlock and colleagues. They found weaker correlations between IAT scores and discriminatory behavior compared with what Greenwald, Banaji, and their colleagues found in a 2009 meta-analysis.

So there’s a debate about the validity (and the reliability, for that matter) of the IAT. But let’s not allow that pesky fact to get in the way of hyping this instrument!

Here is an account of the problems with the IAT.


Flaw in the yoga-cognitive impairment study

29 Jul

On Wednesday, I blogged about a new study of the effects of yoga on cognitive impairment. Thinking it over, I realized that some of the study’s results rest on a serious methodological flaw.

The study compares measures before the intervention to measures after the intervention within each group. For example, it looks at the Geriatric Depression Scale scores for the yoga group before and after the intervention and says that there is a statistically significant difference. But this is not the correct analysis; we want to compare the changes between the yoga group and the control group. An appropriate procedure would have been a gain score analysis: subtract the after-treatment scores from the before-treatment scores for each participant, then compare the two groups’ gain scores using an appropriate statistical test.
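Here is a minimal sketch of that gain score analysis, with made-up Geriatric Depression Scale numbers (the study’s actual data are not reproduced here):

```python
# A minimal sketch of the gain score analysis described above, with
# made-up Geriatric Depression Scale (GDS) scores. The point: test the
# *changes* between groups, not each group's before/after difference alone.
import numpy as np
from scipy import stats

# Hypothetical GDS scores (lower = less depressed)
yoga_before = np.array([12, 15, 10, 14, 11, 13, 16, 12])
yoga_after  = np.array([ 9, 12,  9, 11, 10, 10, 13, 10])
ctrl_before = np.array([13, 14, 11, 15, 12, 12, 16, 13])
ctrl_after  = np.array([12, 13, 11, 14, 12, 11, 15, 12])

# Gain scores: before minus after, so positive = improvement on the GDS
yoga_gain = yoga_before - yoga_after
ctrl_gain = ctrl_before - ctrl_after

# Compare the two groups' gains with an independent-samples t-test
t, p = stats.ttest_ind(yoga_gain, ctrl_gain)
print(f"t = {t:.2f}, p = {p:.3f}")
```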

In other words, the study had the data to compare the control and experimental groups but failed to do so. All it really shows is that the scores improved in the treatment group. That is an interesting finding, but it should be considered only exploratory and suggestive. I have no objection to publishing exploratory findings; I have done so myself. But the authors had the opportunity to run a stronger test and they failed to do so.

Bayesian reasoning and the South Park Hypothesis

3 Jun

There were many good presentations at APS this year, but by far the best was the three-hour workshop I attended on JASP and Bayesian analysis, run by Eric-Jan Wagenmakers. This led me to look up some of his writings, including this great paper: “Bayesian Benefits for the Pragmatic Researcher.”

By way of illustration, the paper tests the South Park Hypothesis: the contention that there is no correlation between box office success and the quality of Adam Sandler movies. Quality is operationalized as the freshness rating on Rottentomatoes.com.
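For readers who want to try this kind of analysis themselves, here is a minimal sketch of a default Bayesian correlation test in the spirit of the paper, using the pingouin library; the numbers below are made up for illustration, not the paper’s actual data:

```python
# A minimal sketch of a default Bayesian correlation test, in the spirit
# of Wagenmakers' analysis. All numbers are made up for illustration; the
# paper used actual freshness ratings and box office grosses.
import pingouin as pg

freshness  = [38, 19, 45, 22, 67, 31, 40, 14, 52, 29]        # hypothetical % fresh
box_office = [120, 95, 140, 160, 60, 110, 133, 148, 71, 101]  # hypothetical $M gross

# pg.corr reports Pearson's r alongside a default Bayes factor (BF10):
# a BF10 well below 1 favors the null hypothesis of no correlation,
# which is what the South Park Hypothesis predicts.
result = pg.corr(freshness, box_office, method="pearson")
print(result[["n", "r", "BF10"]])
```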


It is called the South Park Hypothesis because of this bit of dialog:

“Producer: Watch this. A.W.E.S.O.M-O, given the current trends of the movie going public, can you come up with an idea for a movie that will break $100 million box office?
Cartman: [as A.W.E.S.O.M.-O] Um… Okay, how about this: Adam Sandler is like in love with some girl. But it turns out that the girl is actually a golden retriever or something.
Mitch: Oh! Perfect!
Executive: We’ll call it “Puppy Love”.
Mitch: Give us another movie idea, A.W.E.S.O.M.-O.
Cartman: Um… How about this: Adam Sandler inherits like, a billion dollars, but first he has to become a boxer or something.
Mitch: “Punch Drunk Billionaire”.”

John Kruschke’s talk at APS

1 Jun

The APS meeting was great. I heard many good talks, including this one by John Kruschke: “Some Bayesian approaches to replication analysis and planning.”


A crisis in qualitative research?

13 May

So says educational researcher Stephen Porter:

“Qual folks are also their single best enemy. I trained in comparative politics, where qual scholars are respected, because they adopt a case study approach. Many of the qual researchers I see in education and other areas tend to do dumb things like:

  1. Abandon any approach to representative sampling when they select participants. They refer to this as “purposive sampling” but it is often just an excuse for laziness – representative samples require a lot of work to collect. In a world where K-12 students are now being trained in the nuances of populations and samples, how do you think the average person, or policymaker, reacts to your study when you admit that the people you interviewed are not representative of anything?
  2. Some qual researchers insist there are multiple realities. What do you think the average person, who lives in a single reality like most of us, thinks of this idea?
  3. Some are also opposed to any notion of causality and reject the entire concept. Yet we live in a time when voters and policymakers are desperate for solutions to society’s problems. Do you honestly think they want to hear from someone who says, “Sorry, but I can’t really say whether smaller class size causes students’ test scores to increase. I can only describe the students’ experiences”? Such an approach is not very helpful to school districts trying to decide between hiring more teachers versus increasing teacher compensation.
In short, the future of qual research looks grim.”


Is stereotype threat real?

14 Mar

I have blogged a number of times about the crisis in psychological research. Many widely publicized research findings have been called into question because of faulty methods, including small underpowered studies and p-hacking, and many findings have failed to replicate.
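To make “underpowered” concrete, here is a minimal sketch using statsmodels’ power calculator; the effect size and sample size are illustrative assumptions, not figures from any particular study:

```python
# A minimal sketch of what "underpowered" means, using statsmodels'
# power calculator. The effect size (d = 0.4, typical for social
# psychology) and n = 20 per group are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of a two-group study with 20 subjects per group
power = analysis.power(effect_size=0.4, nobs1=20, alpha=0.05)
print(f"power = {power:.2f}")  # roughly 0.23, far below the usual 0.80 target

# Sample size per group needed to reach 80% power
n = analysis.solve_power(effect_size=0.4, power=0.80, alpha=0.05)
print(f"n per group = {n:.0f}")  # roughly 100
```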

Now stereotype threat, the claim that awareness of a stereotype about one’s own group will lead to a reduction in performance, has been called into question.

“Stereotype threat is one of the most famous and influential ideas in psychology. It is thought to be a key explanation for group differences in performance – whether the group is defined by gender, race or class. But now, stereotype threat itself is under threat. New studies are questioning just how robust it is, and even whether it exists at all. The same goes for many other staples of social psychology – to the point where the whole edifice is tottering badly.”


Psychedelic cures, a case for skepticism

17 Apr

Recently, I’ve posted about research on the therapeutic potential of psychedelic drugs. Keith Humphreys, at The Reality-Based Community, makes a case for skepticism:

“Being skeptical about miracle cures is simply playing the odds. As my colleague John Ioannidis pointed out in one of the most-read papers in medical history, most medical research findings are wrong. This is particularly true of small studies, which are usually followed by larger studies that disconfirm the original miracle finding (Fish oil pills are a good example).”

I think this is good advice. My intuition tells me that psychedelics might have value, but I am prepared to change my mind based on the emerging evidence.
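For a sense of the arithmetic behind Ioannidis’ claim, here is a minimal sketch of the positive predictive value of a “significant” finding; the prior odds and power below are illustrative assumptions, not numbers from his paper:

```python
# A minimal sketch of the arithmetic behind Ioannidis' argument: the
# probability that a "significant" finding is true depends on the prior
# odds that the hypothesis is true and on the study's power. The numbers
# below are illustrative assumptions, not figures from his paper.
def ppv(prior_odds: float, power: float, alpha: float = 0.05) -> float:
    """Positive predictive value of a statistically significant result."""
    return (power * prior_odds) / (power * prior_odds + alpha)

# A long-shot hypothesis (1-in-10 odds) tested in a small, 30%-power study:
print(f"{ppv(prior_odds=0.10, power=0.30):.2f}")  # ~0.37: most "hits" are false
```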

The most important science story this week

24 Feb

This is a significant (excuse the pun) development that is unlikely to be reported in major news outlets. Basic and Applied Social Psychology, an academic journal, has banned authors from using null hypothesis significance testing procedures. This may seem like an obscure topic, but it has enormous implications for what counts as scientific evidence.

As a student, I was aware of growing criticism of null hypothesis significance testing. Strangely, when I raised these issues with faculty, most of them were unaware of the criticism. Even today, when I try to publish a paper using modern or non-parametric methods, the reviewers will often either reject it out of hand or demand special justification.
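As an illustration of the kind of non-parametric analysis I mean, here is a minimal sketch of a Mann-Whitney U test on made-up skewed data; nothing here is specific to any particular paper:

```python
# A minimal sketch of a simple non-parametric comparison: a Mann-Whitney
# U test, which does not assume normally distributed scores. The skewed
# data below are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.exponential(scale=1.0, size=30)  # hypothetical skewed scores
group_b = rng.exponential(scale=1.5, size=30)

u, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {u:.0f}, p = {p:.3f}")
```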


Questions about the bilingual advantage

9 Dec

I have written a number of times about evidence that bilingualism may be protective against dementia. More narrowly, there has also been evidence that bilingual individuals have an advantage in executive control tasks. Now a paper in Psychological Science raises the possibility that the latter claim may be a consequence of publication bias:

 “It is a widely held belief that bilinguals have an advantage over monolinguals in executive-control tasks, but is this what all studies actually demonstrate? The idea of a bilingual advantage may result from a publication bias favoring studies with positive results over studies with null or negative effects. To test this hypothesis, we looked at conference abstracts from 1999 to 2012 on the topic of bilingualism and executive control. We then determined which of the studies they reported were subsequently published. Studies with results fully supporting the bilingual-advantage theory were most likely to be published, followed by studies with mixed results. Studies challenging the bilingual advantage were published the least. This discrepancy was not due to differences in sample size, tests used, or statistical power. A test for funnel-plot asymmetry provided further evidence for the existence of a publication bias.”
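The abstract’s last line mentions a test for funnel-plot asymmetry. Here is a minimal sketch of one common such test, Egger’s regression, with made-up effect sizes and standard errors; the paper does not specify that it used exactly this procedure:

```python
# A minimal sketch of Egger's regression test for funnel-plot asymmetry,
# one common way to probe for publication bias. The effect sizes and
# standard errors below are made up for illustration.
import numpy as np
import statsmodels.api as sm

effects = np.array([0.45, 0.38, 0.52, 0.30, 0.61, 0.15, 0.48, 0.55])  # hypothetical d's
se      = np.array([0.20, 0.15, 0.25, 0.10, 0.30, 0.08, 0.22, 0.28])  # hypothetical SEs

# Regress each study's standardized effect (d / SE) on its precision
# (1 / SE). An intercept reliably different from zero suggests asymmetry:
# small, imprecise studies reporting inflated effects.
z = effects / se
X = sm.add_constant(1.0 / se)
fit = sm.OLS(z, X).fit()
print(f"Egger intercept = {fit.params[0]:.2f}, p = {fit.pvalues[0]:.3f}")
```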

Here is a summary of the paper.

“Ultimately, the findings suggest that the commonly accepted view that bilingualism confers a cognitive advantage may not accurately reflect the full body of existing scientific evidence.
According to de Bruin, these findings underscore how essential it is to review the published scientific literature with a critical eye, and how important it is that researchers share all of their findings on a given topic, regardless of the outcome.”


Citing nonexistent research

7 Nov

Some years ago, I wrote an article for The Skeptic about a widely cited education study. Unfortunately, the research had a fundamental flaw: it was never conducted.

Blogger Judge Starling writes about citations of another nonexistent study. He notes:

“My wife is an historian and she has always cautioned me not to rely on second-hand quotations. Not ever! ‘You either read the papers you quote, or you quote the secondary reference.’”

