Science Publishing — Some Skepticism Required

Although the volume of published scientific papers is increasing, fewer and fewer may actually be read.


Even though we’re experiencing some strategic confusion in the human space program, life in academia goes on—papers are published, meetings are attended, interns are selected to work with scientists and engineers, and proposals are submitted and funded. Despite the community’s much-publicized debate over the direction of the program and questions about its future, this turmoil hasn’t slowed the engine of scientific discourse—the publication of research results.

In order for a discovery to become part of the edifice of knowledge, it must be made known. The principal means for this is the scientific paper, a written contribution of roughly 5,000 to 30,000 words, along with images, graphics and tables. These papers are submitted to journals that usually specialize in certain topical areas. But before a scientific paper is published, it must undergo review by competent experts—selected individuals who check the work for errors in technique and procedure, and who comment on the soundness of the author(s)’ conclusions. Usually, these reviewers are scientists working in the same field—people expected to be current on the latest knowledge and thus able to evaluate the merit of the new work.

Peer review is an important system of “checks and balances,” one that helps ensure science remains a dispassionate search for the truth about how nature operates. But a falling-domino effect on the building blocks of knowledge begins once miscues find their way into the mix. When slipshod and unreliable work passes peer review, as recent articles have suggested it does, science becomes less pure and the process of review becomes tainted. A different sort of “review” has shown that many papers appearing in academic journals are fakes—jargon-laden hoaxes that passed through the crucible of peer review. The evidence comes from a computer algorithm developed to detect such fakes, papers assembled mechanically rather than genuinely written. When the program was applied to the contents of several prominent scientific journals, it identified a disturbing number of fraudulent papers.
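To make the idea concrete, here is a minimal sketch of how such a detector might work, assuming it flags manuscripts whose statistical fingerprint sits unusually close to known machine-generated text. This is an illustration only, not the algorithm the researchers used; every function name and threshold below is hypothetical.

```python
from collections import Counter

def frequency_profile(text):
    """Normalized word-frequency vector for a piece of text."""
    words = [w.lower() for w in text.split() if w.isalpha()]
    counts = Counter(words)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()} if total else {}

def profile_distance(p, q):
    """L1 distance between two frequency profiles (0 = identical)."""
    vocab = set(p) | set(q)
    return sum(abs(p.get(w, 0.0) - q.get(w, 0.0)) for w in vocab)

def looks_generated(manuscript, generator_samples, threshold=0.5):
    """Flag a manuscript whose word-frequency profile is unusually
    close to samples of known generator output. The 0.5 threshold is
    purely illustrative; a real detector would calibrate it on data."""
    m = frequency_profile(manuscript)
    nearest = min(profile_distance(m, frequency_profile(s))
                  for s in generator_samples)
    return nearest < threshold
```

Published detectors reportedly rely on more refined measures of inter-textual distance, but the principle is the same: mechanically assembled papers carry a vocabulary signature that simple statistics can expose.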

An amazing thing about this study is that the fake papers identified by the computer program would have been easily recognized as bogus if somebody knowledgeable had taken two minutes to browse through the manuscript. What this says is that the peer review process is not as rigorous as many believed it to be. My concern is slightly different but related—that the process of writing, submitting and (most astoundingly) reading the scientific literature is not very rigorous either. Allow me to offer an anecdote from a recent experience as evidence.

A few years ago, I was co-author on a paper presenting some new mapping of the Moon, done to clarify what we knew and to better understand the geologic context of the samples collected around one of the Apollo landing sites. Our work led us to conclude that the traditional interpretation of the geology of this site was wrong, and that other, less appreciated events had perhaps been more influential in its evolution. Our results led to two alternative hypotheses (call them A and B), each clearly distinct and each carrying wholly different implications for lunar history. We emphasized in the paper that we had no preference between model A and model B, and that additional work was necessary to clarify what our results might mean for the interpretation of the samples from that site.

Shortly after our paper came out, a popular article (written by an active scientist) discussed our results and their possible implications. To my surprise, this article claimed that we had argued for interpretation B—a claim that flew in the face of our explicit, deliberate statement that we had no preference. Clearly, the person writing the popular piece had not read our paper (or had not understood it, or had cherry-picked a conclusion). I would have simply written off this episode as sloppiness, but the same thing happens with other papers—I often hear published work misquoted and misinterpreted at meetings and conferences.

What seems to be happening is that even though the volume of published scientific papers increases every year, fewer and fewer of them are actually being read. I do not expect a paper on some arcane topic in a specialized technical journal to become fodder for everyday conversation, but I would expect scientists working in the field to be conversant with what has been done in their areas of specialization, both historically and currently. Sadly, that no longer seems to be the case, and that is dangerous.

When my contemporaries and I were lowly graduate students, we may not yet have known how to conduct scientific research, but we were expected to read the literature and learn what had already been done in our fields. I knew what each scientist believed about a given phenomenon and why they believed it. I could almost recite the bibliographic references of some classic, key papers from memory. Lately, I find many younger scientists to be singularly uninformed about the state of their own fields. Few have read scientific papers deeply enough to do more than vaguely outline their principal conclusions, and almost none understand the assumptions and experimental procedures behind most studies.

This lack of familiarity with the literature has arisen alongside the ability to search the web for instant information. The Internet can be a great boon—as a fast and convenient source of information, the web can serve as an excellent reference book. However, there is a growing tendency to gather facts and quasi-facts and use them as a substitute for knowledge. Assembling a loose collection of facts and data drawn from Internet searches, without an understanding of their context, allows false concepts to develop—miscues that then circulate widely in the echo chamber of the web. This destructive process is amplified when the “popularity” of a result is used to rank the returns of a given search. The dominoes continue to fall as errors acquire an air of authority through regurgitation on the multitude of Internet discussion boards dedicated to critiquing popular science.

Now we can add bogus and faked scientific papers to this fountain of misinformation and confusion. This decidedly unscientific process might produce “consensus science,” but it does not produce understanding. Science should be a rigorous process, dependent on the validity and quality of the published literature. Too few have read and thought critically about the patchwork of models, conjectures and hypotheses that too often is accepted as “understanding.” Science has always been a social construct, but more and more it seems to have become a mutually supporting social network, conducted without understanding and uninformed by wide reading or critical thought.

Even though these recent articles are not specifically oriented toward my own field, I found them relevant and timely. Perhaps if more people are made aware of the deterioration of the process of scientific inquiry, we can begin to reinstate the practices that have served us so well in the past. Journals need to stop accepting and publishing worthless contributions, and the community needs to stop writing them. The current literature should be read carefully and thoroughly. Knowledge obtained from Internet searches must be treated with skepticism—whether the information is incomplete or outright false, the damage is the same. You cannot add to the discussion if you don’t understand the conversation.
