Implicit bias purports to answer the question: Why do racial disparities persist in household income, job status, and incarceration rates, when explicit racism has, by all measures, greatly diminished over the last half-century? The reason, according to implicit-bias researchers, lies deep in our brains, outside the reach of conscious thought. We may consciously embrace racial equality, but almost all of us harbor unconscious biases favoring whites over blacks, the proponents claim. And those unconscious biases, which the implicit-bias project purports to measure scientifically, drive the discriminatory behavior that, in turn, results in racial inequality.
The need to plumb the unconscious to explain ongoing racial gaps arises for one reason: it is taboo in universities and mainstream society to acknowledge intergroup differences in interests, abilities, cultural values, or family structure that might produce socioeconomic disparities...
But the fiercest disputes concern the IAT’s validity. A psychological instrument is deemed “valid” if it actually measures what it claims to be measuring—in this case, implicit bias and, by extension, discriminatory behavior. If the IAT were valid, a high implicit-bias score would predict discriminatory behavior, as Greenwald and Banaji asserted from the start. It turns out, however, that IAT scores have almost no connection to what ludicrously counts as “discriminatory behavior” in IAT research—trivial nuances of body language during a mock interview in a college psychology laboratory, say, or a hypothetical choice to donate to children in Colombian, rather than South African, slums. Oceans of ink have been spilled debating the statistical strength of the correlation between IAT scores and lab-induced “discriminatory behavior” on the part of college students paid to take the test. The actual content of those “discriminatory behaviors” gets mentioned only in passing, if at all, and no one notes how remote those behaviors are from the discrimination that we should be worried about.
Even if we accept at face value that the placement of one’s chair in a mock lab interview or decisions in a prisoner’s-dilemma game are significant “discriminatory behaviors,” the statistical connection between IAT scores and those actions is negligible. A 2009 meta-analysis of 122 IAT studies by Greenwald, Banaji, and two management professors found that IAT scores accounted for only 5.5 percent of the variation in laboratory-induced “discrimination.” Even that low figure was arrived at by questionable methods, as Jesse Singal discussed in a masterful review of the IAT literature in New York. A team of IAT skeptics—Fred Oswald of Rice University, Gregory Mitchell of the University of Virginia law school, Hart Blanton of the University of Connecticut, James Jaccard of New York University, and Philip Tetlock of the University of Pennsylvania—noticed that Greenwald and his coauthors had counted opposite behaviors as validating the IAT. If test subjects scored high on implicit bias via the IAT but demonstrated better behavior toward out-group members (such as blacks) than toward in-group members, that counted as a validation of the IAT, on the theory that the subjects were overcompensating for their implicit bias. But studies that found a correlation between a high implicit-bias score and discriminatory behavior toward out-group members also counted as validating the IAT. In other words: heads, I win; tails, I win.
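A quick sanity check on what “only 5.5 percent of the variation” means in correlation terms (my own back-of-the-envelope conversion, not a number from the article): variance explained is the square of the correlation coefficient, so

\[
r = \sqrt{r^2} = \sqrt{0.055} \approx 0.23,
\]

a weak correlation by most conventional benchmarks.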
Greenwald and Banaji now admit that the IAT does not predict biased behavior. The psychometric problems associated with the race IAT “render [it] problematic to use to classify persons as likely to engage in discrimination,” they wrote in 2015, just two years after their sweeping claims in Blindspot. The IAT should not be used, for example, to select a bias-free jury, maintains Greenwald. “We do not regard the IAT as diagnosing something that inevitably results in racist or prejudicial behavior,” he told The Chronicle of Higher Education in January. Their fallback position: though the IAT does not predict individual biased behavior, it predicts discrimination and disadvantage in the aggregate. “Statistically small effects” can have “societally large effects,” they have argued. If a society has higher levels of implicit bias against blacks as measured on the IAT, it will allegedly have higher levels of discriminatory behavior. Hart Blanton, one of the skeptics, dismisses this argument. If you don’t know what an instrument means on an individual level, you don’t know what it means in the aggregate, he told New York’s Singal. In fairness to Greenwald and Banaji, it is true that a cholesterol score, say, is more accurate at predicting heart attacks the larger the sample of subjects. But too much debate exists about what the IAT actually measures for much confidence about large-scale effects...
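To see the individual-versus-aggregate point concretely, here is a minimal simulation sketch, assuming a score–behavior correlation of about 0.23 (consistent with the 5.5 percent figure above) and purely synthetic data; nothing in it comes from actual IAT studies.

import numpy as np

# Illustration only: synthetic data with an assumed correlation of 0.23,
# the square root of the 5.5 percent "variance explained" figure cited above.
rng = np.random.default_rng(0)
r = 0.23          # assumed individual-level correlation
n = 1_000_000     # a large, society-sized sample

# Generate a "score" and a "behavior" variable correlated at r.
score = rng.standard_normal(n)
behavior = r * score + np.sqrt(1 - r**2) * rng.standard_normal(n)

# Individual-level prediction: call someone a likely "discriminator" if their
# score is above the median, and check how often that call matches whether
# their behavior is actually above the median.
hit_rate = np.mean((score > np.median(score)) == (behavior > np.median(behavior)))

# Aggregate-level pattern: compare mean behavior of the top and bottom score halves.
gap = behavior[score > np.median(score)].mean() - behavior[score <= np.median(score)].mean()

print(f"individual hit rate (chance = 0.50): {hit_rate:.3f}")
print(f"difference in group means (SD units): {gap:.3f}")

With numbers like these, knowing an individual’s score improves on a coin flip only slightly, yet the gap between group averages is reliably detectable in a large enough sample. Both the “societally large effects” claim and Blanton’s objection are statements about that same weak correlation.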
The most recent meta-analysis deals another blow to the conventional IAT narrative. This study, not yet formally published, looked at whether changes in implicit bias allegedly measured by the IAT led to changes in “discriminatory behavior”—defined as the usual artificial lab conduct. While small changes in IAT scores can be induced in a lab setting through various psychological priming techniques, they do not produce changes in behavior, the study found.
Don't think I won't challenge anyone who tries to push this phrenology on us.