Back in mid-March, the BBC marked the third anniversary of the ousting of Saddam from Iraq with an article by Paul Reynolds. A couple of TAE readers directed me to it at the time, noting that Reynolds highlighted Juan Cole's dismal portrayal of the current situation in Iraq. (Regular TAE readers will be familiar
with both Cole and the respect the BBC seems to afford him.) I confess that I did not pay particular attention to the article at the time, and only had occasion to focus on it yesterday. Wholly apart from the use of Cole, TAE noticed that Reynolds also publicized the infamous, and discredited, Lancet claim
that the invasion had resulted in the deaths of 100,000 Iraqis. Wrote Reynolds:
Thousands of people have died. The true number of Iraqi deaths is not known
and even the Iraqi Body Count figure -- compiled largely from news reports -- of somewhere in the mid 30,000s is criticised as a possible underestimate and admitted by IBC to be a baseline. The British medical journal The Lancet suggested a figure of about 100,000 back in October 2004.
Ignore, for the moment, the bizarre use of language which has it that an anti-war organization established for the precise purpose of trumpeting the number of civilian deaths is "admitting" to something which it proclaims in no uncertain terms
in its FAQ. At least the IBC figures can be said to be based, to some extent, on hard, verifiable data. The Lancet figure of 100,000, on the other hand, has been well-debunked.
It was in fact an extrapolation based on an extremely small sample of deaths and a loose, unverifiable method of gathering data, and the headline 100,000 figure is not even an accurate characterization of the study's results. It is simply the approximate midpoint between the upper and lower limits of the study's 95% confidence interval. Big deal, you might say. If the lower limit was 90,000 and the upper limit was 110,000, then 100,000 is an accurate enough figure to give a meaningful picture, isn't it? Perhaps. But in fact, owing in part to the small sample size, as well as other statistical shortcuts used because of the difficulties and dangers involved in gathering data, the 95% confidence interval was absurdly wide: 8,000 to 194,000. This is so wide as to render the numbers useless. As Fred Kaplan put it in his critique of the study:
Imagine reading a poll reporting that George W. Bush will win somewhere between 4 percent and 96 percent of the votes in this Tuesday's election. You would say that this is a useless poll and that something must have gone terribly wrong with the sampling.
Unless, apparently, you are a reporter for the BBC, in which case you think it essential knowledge for your audience.
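For readers curious how a tiny sample produces so cartoonishly wide an interval, here is a minimal sketch. The numbers below are made up for illustration, and this is a crude Poisson/normal approximation, not the Lancet study's actual cluster-sampling method (clustering widens the interval even further):

```python
import math

def extrapolated_ci(deaths, person_months, population, months):
    """Extrapolate a death toll from a small survey, with a crude
    normal-approximation 95% confidence interval (Poisson standard error).
    All inputs here are hypothetical, for illustration only."""
    rate = deaths / person_months           # deaths per person-month observed
    se = math.sqrt(deaths) / person_months  # Poisson SE of that rate
    scale = population * months             # person-months in the whole country
    return ((rate - 1.96 * se) * scale,     # lower bound
            rate * scale,                   # point estimate
            (rate + 1.96 * se) * scale)     # upper bound

# Hypothetical survey: 20 deaths observed over 100,000 person-months,
# extrapolated to 25 million people over 18 months.
lo, mid, hi = extrapolated_ci(20, 100_000, 25_000_000, 18)
print(f"{lo:,.0f} to {hi:,.0f} (point estimate {mid:,.0f})")
```

Even in this toy version, 20 observed deaths yield an interval spanning tens of thousands around the point estimate; halving or doubling the sample size changes the width dramatically, which is the whole problem with quoting only the midpoint.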
Wondering how such discredited data could not only find its way into his article, but be presented with nary a word about its deficiencies and the controversy surrounding it, TAE put some questions to Paul Reynolds on this issue via e-mail, and, to his credit, he was more than willing to respond. Note, however, that I say he "responded", because, as you will see, to characterize most of his responses as "answers" wouldn't really do them justice.
Mr. Reynolds's responses are in blue.
Were you aware of the methodology used by the Lancet study and the details of it when you included it in your piece?
Do you think the fact that the study had a 95% confidence interval of 8,000 to 194,000 is irrelevant when trying to judge the usefulness and meaning of the 100,000 figure?
If not, why didn't you include that information for your audience?
Do you think you have a responsibility as a journalist to weigh the credibility of claims made by others before reporting them, and to present information that bears on that credibility if you decide to promote the claims? Or is your responsibility fulfilled by simply reporting factually that, for example, claim A was made by person X, with no regard for the credibility of either A or X?
FROM: PR
It was a reference only. It was not 'promoting claims'. The arguments about the Lancet report are well known.
I take it from your response (ie the arguments were well known) that your answer to my first question is that you were aware of the methodology and details of the study prior to including the reference to the study in your article.
Am I to understand it to also mean that you didn't include the relevant information about the study because you assumed it was already well known by your audience? If so, why would you include even the 100,000 figure, as that must also have been just as well known? If not, then, again, why didn't you include the information?
Finally, will you be answering my last question, or should I assume your on-the-record response is "no comment"?
FROM: PR
You can use it all [on TAE], including my suggestion that you fight it out with MediaLens!
(The reference alludes to the fact that Media Lens, an apparently left-wing critic of the media, had taken Reynolds to task over a different aspect of the same article. He had previously advised me to engage them in debate.)
I didn't ask if I could use it. I asked a) if my understanding was correct, and b) if I should assume your answer to the questions you haven't answered is an official "no comment". If you'd like, I can repeat the questions that I think I haven't yet gotten a response to.
FROM: PR
Yes, I understand the arguments about methodology. There is in fact a long correspondence involving Lancet author Les Roberts on MediaLens. It would be hard not to know the arguments. I simply used the Lancet report as illustrative of the problem over figures. I was not in this piece dealing with the row over its findings which is well known, as I said. This is always a problem with blogs and you are not alone in this. We get it from the left as well. If we do not pause in an article about something else and deal fully with the rows over one particular point, all hell breaks loose and we are accused of ignorance or bias or both! You can quote me on that.
Do you think that, as a journalist, you have a responsibility to weigh the credibility of claims before you present them in your articles? Or does it suffice simply to report factually that person A claimed X, with no thought or reference as to the credibility of either A or X?
If the former, isn't it fair to assume that you have made the judgment that the claim that the war in Iraq has cost 100,000 Iraqi lives is credible? If not, why not?
If you believed that the controversy over the 100,000 figure was so widely known that you needn't offer it as context, why did you feel the need to discuss the problem of counting Iraqi deaths at all, since logically that must be at least as widely known?
FROM: PR
In fact Scott, the 100,000 is no longer the upper limit. There are claims that it should be much higher! See Les Roberts http://www.alternet.org/story/31508/
Am I supposed to include his now higher estimates as well? I should do so, according to your rules.
Of course I weigh credibility. I am currently engaged in a hot debate with The Cat's Dream about why I reported on the Iraqi documents at all, the suggestion being that they were unreliable. In the case of the Lancet article, the arguments are well known. You ignore the point I made.
Since you do weigh credibility, and since you passed on the 100,000 Lancet claim without mentioning any caveats, is it fair to assume that you find that figure credible? If not, why not?
Far from ignoring your point about the arguments being well known, I have taken it on board and realized it leads to a further (and so far unanswered) question: Again, if you assumed that your audience already knew about the Lancet controversy, why did you feel the need to inform your audience about the Lancet figures themselves, for surely they must also have known those, too?
FROM: PR
I neither found the figure credible nor incredible. It was simply a figure. I reported it. What's the problem?
As for not mentioning the figure at all, that is a bizarre suggestion.
So, what have we learned from this exchange? We've learned that "of course" Mr. Reynolds weighs the credibility of information before passing it on to his readers...except, that is, when he doesn't, as in the case at hand.
We've learned that he thinks there is no problem with reporting a highly disputed claim without either a) establishing in his own mind that the claim is credible or b) making any mention whatsoever that the claim has been greatly disputed.
We've learned that, although he believes the controversy surrounding the 100,000 figure was so widely known that noting it for the sake of his audience was unnecessary, he also thinks it would have been "bizarre" to assume that the very same audience was equally aware of, and therefore in no more need of reminding about, the 100,000 figure itself.
Finally, while Paul should be commended for taking the time to engage his critics rather than taking the easy course and simply ignoring them (as many of his colleagues do), I think we've also learned that Michael Howard has nothing
on BBC reporters when it comes to avoiding direct answers to direct questions.