I stumbled across this blog post a week ago and thought it was a wonderful example of the way social media can be used to better biomedical science.
The New England Journal of Medicine published an article in June on the prevention of MRSA in the ICU. The study was very large, 74,256 patients, and the results looked impressive, but readers could not get the stats to add up: the numbers given in the published paper did not match the reported number needed to treat (NNT).
A post on the Intensive Care Network blog raised the following questions about the stats in the NEJM article:
ARE THE STATS CORRECT?
We were hashing this out in our journal club, but could not get the stats to add up.
If you can, PLEASE COMMENT HERE!
The NNTs of 54 and 181 seem impossibly small, with huge clinical implications.
Please try it yourself; look at Table 3, “Frequency and Rates of Outcomes during the Baseline and Intervention Periods, According to Study Group.”
With bloodstream infection from any pathogen, the Group 1 (standard care) rate is 4.1 events per 1000 patient-days. For Group 3, the rate is 3.6 events per 1000 patient-days. Even taking change from baseline into account and assuming these NNTs have been calculated AFTER randomization, between Group 1 and Group 3 we get nowhere close to their NNTs.
PLEASE have a go and see if you can match their NNTs.
If you can’t, there is a serious problem, with practice-changing implications.
It’s too late to write letters to the NEJM, so a robust discussion in a peer reviewed forum seems a good way to go.
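The journal club’s arithmetic can be sketched in a few lines. This is a rough illustration, not the paper’s actual method: it assumes the simple definition NNT = 1 / absolute risk reduction, applied directly to the Table 3 rates quoted above (4.1 vs. 3.6 events per 1000 patient-days), and the `nnt` helper is hypothetical.

```python
def nnt(rate_control, rate_treatment, per=1000):
    """Number needed to treat: 1 / absolute risk reduction.

    Rates are events per `per` units of exposure, so the result is
    expressed in the same exposure units (here, patient-days).
    """
    arr = (rate_control - rate_treatment) / per
    return 1 / arr

# Group 1 (standard care) vs. Group 3, bloodstream infection from any pathogen
print(nnt(4.1, 3.6))  # roughly 2000 patient-days per event prevented
```

On these rates alone the denominator comes out near 2000 patient-days, nowhere close to 54 or 181, which is exactly the mismatch the journal club was pointing at. (The paper’s NNTs may have been computed on a per-patient basis over the whole ICU stay; this sketch makes no attempt at that calculation.)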
The blog post’s authors intended to discuss the problem in “a peer reviewed forum,” and according to them “there was lots of insightful commentary from around the globe.”
The fact that they were able to discuss the problem with others around the world is notable but not unheard of; more and more scientists are discussing issues online. To me the biggest thing is that the paper’s lead author, Susan Huang, engaged in a discussion with the social media reviewers. In a “prompt and gracious reply” she agreed the published calculation was an error and showed “true scientific and academic integrity by contacting the NEJM as soon as there was a suggestion that the stats were incorrect.” NEJM responded by publishing a correction to the paper.
It is very cool that scientists discussed a paper’s validity online and worked together to essentially provide worldwide peer review. However, what I find even cooler is that the author engaged with the social media process AND a respected journal addressed and responded to the findings. This is an example of everything that is right with social media and professional communication. It will be interesting to see whether this type of worldwide peer review becomes more common, especially now that PubMed Commons can also foster this kind of scientific inquiry and discussion.
NEJM is a big journal with lots of very smart authors contributing papers that are subjected to careful peer review, but mistakes can still happen. Worldwide peer review via social media could help improve the process. One question I keep wondering about: if we have this type of worldwide peer review, could it cut down on the academic fraud that sometimes eludes the careful eyes of publishers’ peer reviewers? What would have happened had Wakefield’s fraudulent study linking vaccines and autism (published in 1998) been published today? Would that paper have had a chance to make it into the general public’s consciousness and be as unfortunately influential as it still is today?