Skepticism & the architecture of trust.
Author: Douglas Allchin
Publication: The American Biology Teacher, Vol. 74, No. 5 (May 2012). National Association of Biology Teachers. ISSN 0002-7685.
Consider the recent controversy over prostate cancer screening. The U.S. Preventive Services Task Force scaled back recommended testing. But many doctors, citing cases in which screening had detected cancer, disagreed (Harris, 2011; Brownlee & Lenzer, 2011). Whose judgment should we trust?

New England fish populations are threatened, according to experts. They suggest discontinuing cod fishing. But the fishermen report no decrease in their catches and defend their livelihood (Goodnough, 2011; Rosenberg, 2011). Whose expertise should prevail: the scientists' with their sampling and its inherent uncertainties, or the fishermen's with their intimate local knowledge?

There is a lot of alarm about global warming. But maybe it's all "hot air." Many political leaders, including several presidential candidates, cite scientific experts who say that the problem is overblown, and just politicized by biased environmental activists. Whose pronouncements should we heed?

As illustrated in these cases, interpreting science in policy and personal decision making poses important challenges. But being able to gather all the relevant evidence, gauge whether it is complete, and evaluate its quality is well beyond the average consumer of science. Inevitably, we all rely on scientific experts. The primary problem is not assessing the evidence, but knowing who to trust (Sacred Bovines, April, 2012).

In standard lore, science educators are responsible for nurturing a sense of skepticism. We want to empower students to guard themselves against health scams, pseudoscientific nonsense, and unjustified reassurances about environmental or worker safety. But one may want to challenge this sacred bovine. Skepticism, by itself, only erodes belief: blind doubt does not yield reliable knowledge. The aim, rather, as exemplified in the cases above, is to know where to place our trust. We should teach instead, as described below, the basis for informed trust in science.

* The Conundrum of Credibility

The problem of knowing who to trust is not new. In the late 1600s, Robert Boyle reflected on how to structure a scientific community, the emerging Royal Society. Investigators would need to share their findings. But reporting added a new layer between observations and knowledge: testimony was a problem (Shapin, 1994). That is, while everyone might ideally reproduce everyone else's experiments, such redundancy wasted time and resources. Scientific knowledge would grow only if you could trust what others said. But what warranted such trust?

For Boyle, it was a social problem. You could trust a fellow gentleman, bound to honor and honesty by the social norms of the upper class. By contrast, one could not place as much confidence in a servant or paid assistant, whose private interests might eclipse the pursuit of truth. Accordingly, early Western science became an elite institution, limited to "gentlemen."

The problem in modern science is not so different, although the system has changed. Indeed, as knowledge has become more specialized, the problem has been amplified. We actually know very little on our own. You read a book or newspaper, you watch a TV documentary or webcast, you listen to a friend--or a teacher: most knowledge comes from other persons. As noted by philosopher John Hardwig (1991), we are epistemically dependent on others. Trust is essential.

Indeed, a lack of trust has its costs. According to one sociological analysis, one lab lost the race to discover the structure of thyrotropin-releasing factor (TRF) because of its habit of doubt. Roger Guillemin's group tended to question and redo the experiments performed by the rival lab of Andrew Schally. That cost them extra time. Schally, on the other hand, opted not to second-guess Guillemin's results, but rather to build on them. That allowed him to leapfrog to the conclusion that TRF was not composed exclusively of amino acids. His lab was thus able to identify the other components sooner. They were the first to announce the complete structure of TRF (Latour & Woolgar, 1979, pp. 131-135). Trust is integral to scientific progress.

However, this fact alone does not tell us how to exercise trust. Scientific experts, at least, are well positioned to recognize other experts. They can easily use their own knowledge to gauge whether others have the same knowledge (Collins & Evans, 2007). Unfortunately, that's not possible for non-experts. And therein lies a deep conundrum: how can you identify an expert if you are not an expert yourself (Goldman, 2001)?

The problem is illustrated at the popular educational website "Understanding Science." In trying to help students untangle media messages, the site provides a toolkit for evaluating scientific claims. Its six probes include these:

* Are the views of the scientific community accurately portrayed?

* Is the scientific community's confidence in the ideas accurately portrayed?

* Is a controversy misrepresented or blown out of proportion?

These comparisons can indeed indicate problematic bias. Ironically, however, these are the very questions that the non-expert, as an outsider, is unable to answer. Even knowing a bit of the nature of science, or how science works, cannot help. The consumer of science might therefore seem helpless: susceptible to the whims of whoever claims to be an expert.

Of course, we address this same problem in our daily lives. Who is a trustworthy auto mechanic? Who is a qualified doctor or dentist? Even: which movie reviewer can you trust to pick your favorites consistently? These cases are more familiar. Here, too, we evaluate evidence, but evidence of a very different kind. We look for social data about someone's performance or abilities. What is their experience and demonstrated competence? For a consumer of science, the aim is only slightly different. We do not want the expert's individual "opinion." We want them to report, and possibly explain, the scientific evidence and consensus. Are they a qualified spokesperson for a specific scientific field?

So, for assessing a building contractor, caterer, or craftsperson, one may seek samples of their work. Online sales pose parallel problems of trust. Can you have confidence in a seller on eBay whom you have never met? The site aims to foster trust through a summary of each seller's ratings, made by earlier buyers. This is a track record. The concept certainly applies in science. Researchers develop a reputation based on their past work. It establishes their credibility in addressing new cases. Such measures are not an absolute guarantee, of course. But they lend confidence. With time, too, we learn the occasional pitfalls: for example, how online reviewers themselves can game the system by developing an inflated track record--say, through selective plagiarism (David & Pinch, 2005).
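The logic of such rating systems is simple enough to sketch. Below is a minimal illustration in Python--a hypothetical scoring rule, not eBay's actual algorithm, with invented numbers--showing why a long track record warrants more confidence than a short but spotless one.

```python
def trust_score(positive, total, prior=0.5, prior_weight=10):
    """Hypothetical scoring rule (illustration only): smooth sparse
    track records toward a neutral prior, so that a score earns trust
    only as the number of independent ratings grows."""
    return (positive + prior * prior_weight) / (total + prior_weight)

# A new seller with 3 of 3 positive ratings vs. a veteran with 950 of 1000:
print(round(trust_score(3, 3), 2))       # 0.62 -- promising, but unproven
print(round(trust_score(950, 1000), 2))  # 0.95 -- an established track record
```

The same asymmetry applies to scientific reputations: a long record of reliable work counts for more than a single celebrated result.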

Evidence of past performance is not always available, however. So we resort to more indirect indicators. In our daily lives, if we cannot judge someone's expertise directly, we turn to someone else we already trust--perhaps a partial expert--to provide a "testimonial." That is, we ask for references. Such information is secondary, of course. But it can be valuable, so long as one remains aware of the indirect nature of the evidence and the potential for deception.

Often we rely on venerable institutions to make these assessments of credibility for us. We look for licensed or certified professionals. In science, one looks for appropriate credentials--an advanced research degree, publication in rigorous journals, employment at a prestigious institution, service on expert commissions, and so on.

One disregards the need for credentials at one's peril. For example, in 1986 politician Lyndon LaRouche falsely depicted AIDS as contagious, easily transmitted by coughing or sneezing. Despite having no scientific credentials, he was able to persuade over 2 million voters to endorse mandatory HIV testing and the quarantine of anyone who tested positive (Toumey, 1996, pp. 81-95). That same year, Joe Newman testified before the U.S. Congress about his "energy machine," which he claimed could create more energy than it used. Would that Congress had heeded the federal judge who presided over his earlier patent application. The judge, at least (not a scientist himself), had done his homework. He consulted the National Bureau of Standards, which duly assured him that Newman had not upset the well-established law of the conservation of energy (Park, 2000, pp. 98-106).

Credentials, of course, can themselves be bogus. Medical journalist Ben Goldacre takes particular aim at "nutritionists" and other self-appointed health gurus who flaunt all kinds of titles and impressive-sounding references. As a demonstration, Goldacre secured for his dead cat the title of "certified professional member" of the American Association of Nutritional Consultants. Yes, his dead cat. Although it cost him $60. Including the certificate (Goldacre, 2010, pp. 112-130). Credentials are no absolute guarantee. But it is rare that one can vouch reliably for scientific claims without such institutionally documented expertise. That can be a first criterion for the non-expert in ascertaining who to trust in reporting evidence or conclusions.

All these methods are indirect. Their reliability is fragile. So, to guard against a single misleading indicator, one may look at multiple indicators simultaneously. Do independent assessments concur? In the same way, researchers try to build confidence in understanding cryptic phenomena by using different forms of observation. Agreement among contrasting approaches provides robustness, another standard strategy for bolstering evidence. Through such strategies, one might gain confidence that the source of information provides reliable, relevant, and complete evidence. Only then might one begin to evaluate the claims themselves.
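The leverage gained from multiple indicators can be made concrete with some rough arithmetic. In the Python sketch below, the error rates are assumed purely for illustration (they come from no study): if each indicator misleads one time in five, and the indicators err independently, then all three misleading at once is rare.

```python
# Assumed, illustrative error rates for three independent credibility checks.
error_rates = {"track record": 0.20, "testimonials": 0.20, "credentials": 0.20}

p_all_mislead = 1.0
for indicator, p in error_rates.items():
    p_all_mislead *= p  # independent errors multiply

print("Any single indicator misleads: 20.0%")
print(f"All three mislead together: {p_all_mislead:.1%}")  # prints 0.8%
```

The caveat, of course, is independence: if the same interested party supplies the credentials, the testimonials, and the track record, all three indicators can fail together.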

* By Proxy: Credentials v. Experience

A track record, a reputation among professional peers, recommendations by other known experts, and institutional credentials can all be important benchmarks for the non-scientist in assessing someone's credibility on behalf of science. At the same time, these evaluations are indirect. They are proxies for gauging the relevant experience (or knowledge, or competence, or expertise). Keeping in mind the potential for misalignment is important for interpreting exceptional cases.

For example, some scientists present themselves as experts outside their particular fields of expertise. In these instances, they are not really experts at all. A nuclear physicist is no authority on acid rain. It's much like celebrities endorsing commercial products unrelated to their actual achievements. We transfer mere impressions from one to the other. It's how our minds tend to work--unless we train them to think more slowly and deeply (Kahneman, 2011).

The tactic of using scientists as authorities in illegitimate contexts was adopted by the tobacco industry in their denials of the adverse effects of smoking on health. They enlisted Frederick Seitz. Seitz had worked on the atomic bomb, advised NATO, and served as president of the National Academy of Sciences and of Rockefeller University. Impressive credentials, indeed. But Seitz was a physicist, an expert on metals and solid-state physics. He was not an expert on smoking and health. Politically, though, he harbored some resentments against government interference and saw environmental regulations as trying to thwart democratic freedoms. His "skeptical" attitude and support of "independent" tobacco research were guided by ideology more than by scientific perspectives. Nor was Seitz an expert on several other issues where he flexed his authority: in criticizing the scientific consensus on acid rain, the ozone hole, and global warming. The same story applies to Fred Singer, another noted physicist, who wrote numerous editorials and articles on environmental issues, repeatedly supporting the tobacco and oil industries (Oreskes & Conway, 2010). The field of expertise matters, not just a generic "scientific" credential.

That was the problem, too, in New Madrid, Missouri, in late 1990. On 3 December, the town awaited a strong earthquake. The schools were closed. The city council had stockpiled water. The National Guard had an emergency hospital ready--officially, just a routine drill. State residents had bought more than $22 million in new earthquake insurance. All because of a single prediction by Iben Browning. But Browning's degree was in zoology, not seismology. Browning claimed to have predicted several earlier large earthquakes--an impressive track record, if true. But few bothered to check whether that claim was genuine. Eventually, the U.S. Geological Survey, with its collective expertise, denounced both the prediction and the method used for making it. Yet a geophysicist at a local university, director of its Center for Earthquake Studies, endorsed Browning. Few checked his credentials either: earlier, he had relied on a psychic to predict another earthquake that never happened. As you might have guessed, despite all the pother, no earthquake rattled New Madrid on that occasion (Spence et al., 1993; Toumey, 1996, pp. 3-4). Credentials matter only if they are relevant.

What of the critics of global warming? Many cite the Leipzig Declaration, a statement signed by 110 people denying a scientific consensus on the issue and asserting that satellite observations showed no climate change. That might seem persuasive, if true. Here, some journalists did investigate the credentials of the signatories, 25 of whom were television weathermen: not experts on long-range climate science. Weather is not climate. Other signers included a dentist, a medical laboratory researcher, a civil engineer, a nuclear physicist, an amateur meteorologist, and an entomologist. Of 33 European signers, 4 could not be located and 12 denied having signed the document. After whittling away those with irrelevant credentials, only 20 remained, many of them known to be funded by the oil and fuel industries (Rampton & Stauber, 2002, pp. 276-278). Not much expertise there, after all. It turns out the declaration was organized by Fred Singer, certainly no climate expert himself (see above). In any event, a consensus need not be unanimous to count as a consensus. The Intergovernmental Panel on Climate Change deserves trust. Politicians who currently dismiss its conclusions are thus not only ill informed about global warming. They are also ill informed about the very nature of scientific expertise--and thereby present questionable credentials themselves as public leaders.

Accordingly, one may well question practicing physicians who second-guess large-scale studies on the basis of their personal experience. Most doctors are not medical researchers. While they may be well situated to interpret and explain research findings, they do not necessarily have the investigatory and statistical background to evaluate them. Recently, major national expert panels have revised recommendations for mammograms and prostate cancer screening tests. Doctors have often weighed in, citing their own cases. But their anecdotal knowledge is a poor substitute for the systematic studies conducted by the researchers. Expert at one task, the doctors are not necessarily expert at another.

Such generalizations about documenting credibility do not preempt the possibility of expertise among those without conventional credentials. For example, in the mid-1980s, AIDS activists became dissatisfied with the drug approval process and medical research. They wanted a voice at the table. Here, they were willing to work for it. They went to conferences and consulted sympathetic researchers. They learned the medical vocabulary and the clinical trial protocols. They studied the virology, immunology, and biostatistics. They thus became fluent in the experts' discourse (Epstein, 1995). In essence, they became experts. Robert Gallo, co-discoverer of HIV, was once hostile to them. Later, he described one of the leaders as "one of the most impressive persons I've ever met in my life, bar none, in any field." "It's frightening sometimes how much they know," he said (Epstein, 1996, p. 338). Experts now acknowledge the activists as members of the community, although they do not boast the standard credentials. The activists participate as full voting members of the committees at the National Institutes of Health that guide AIDS drug development. They participate in the Food and Drug Administration advisory meetings. Expertise sometimes comes without the credentials.

Expertise can also be found among the indigenous peasant farmers of southern Mexico. One might be disinclined to imagine any sophisticated knowledge among those with somewhat animistic conceptions of maize and its "soul." Farmland is considered "hot" at lower elevations, as modified by the color, consistency, and rockiness of the soil, and by shade and wind. Modern fertilizers are "hot," too, and care is taken not to "burn" the crops. The crop yields seem modest by comparison with industrialized agriculture. Yet a full analysis reveals that the Oaxacan campesinos have a highly developed, ecologically sustainable system. It also addresses the dynamics of replanting, as well as local trade practices. It accommodates the variability of environmental conditions. Scientifically, the system is quite sophisticated (Gonzalez, 2001). The same complex sophistication is found in the apparently haphazard wanderings of pastoralists in the Niger River Delta (Bass, 1990, pp. 1-50). In each case, the expertise--richly developed local knowledge--is hardly reflected in formal scientific credentials.

Thus, in the case of fish stocks in North American seas, one should not peremptorily dismiss the knowledge of local fishermen. As it is, fisheries science wallows in uncertainty. At the very least, we need to reconcile the formal science, with its crude population modeling, with the informal but practical expertise of those close to the subject. This is no simple either-or case, in which "scientific" experts easily trump presumably naive non-scientists. Again, experience can, on some occasions, be found without formal credentials. Just as one sometimes finds, on the other hand, credentials without relevant experience. The alignment is not perfect. Assessing who we should trust for scientific knowledge may thus involve some careful discernment.

* The Architecture of Trust

Scientific knowledge traces a long path of transformations from the original set of observations or measurements to the report of a conclusion reaching a scientific consumer (Allchin, 1999). Trust holds the chain together. Even at the outset, investigators learn when to trust their measuring instruments and recording devices. As the data are assembled, members in a lab or research team trust each other. When a paper is submitted for publication, peer reviewers assess the quality of the interpretations. But they also trust that the raw data and images themselves are presented honestly. Fraud does occur on occasion. But if coworkers did not detect it, any overt evidence of misconduct is probably already well buried by then. On the occasions when a breach of integrity is ultimately found, it is typically the scientific error that is discovered first. Fraud and error follow similar patterns of detection--usually through stymied efforts to build on the original results.

Not all labs publish papers of the same quality. Through experience (and gossip), scientists develop a sense of each other's credibility. This provides a useful (although not infallible) shortcut for assessing the reliability of new results. More careful assessment of a study's methodology and reasoning may occur--especially if the results sharply conflict with earlier findings or form the basis for further study. But the tedious scrutiny of a paper is generally a backup. With a few well-circumscribed exceptions, trust, again, is the norm. Scientists may well disagree. When they do, one anticipates that further studies will help resolve the uncertainties.

Where the conclusions are especially significant, they may get reported in the media or in policy settings. That is where the consumer of science begins their encounter with science--not with the unmediated "evidence." One may be tempted to regard this step as mere dissemination, a simple transfer of knowledge. But we should not regard the reporting as transparent. It involves editing and framing. It is another layer of transformation, with another layer of trust. This is where, finally, all the assessment strategies described above matter most. The citizen must assess the evidence--not the scientific evidence, but the social evidence for credibility. First, can one trust the source of information, whether a respected newspaper, an advertisement, a website, a talk show host, or a political candidate? If that seems relatively secure, one can then take the next step "backwards," to assess the credibility of the expert or person making the claims. Known experts and media with confirmed track records are ideal, of course. But frequently we must settle for indirect evidence: testimonials (especially from other experts), credentials (or institutional endorsements), the relevance of those credentials, and other indicators of experience or competence. The consumer interested in reliable knowledge must find a thread that can be trusted. Robust agreement, when available, helps.

One can see the whole system at work in an episode from the early 1990s: the prospective link between electromagnetic fields (EMFs) and cancer (Park, 2000, pp. 140-161). The issue became big news when investigative reporter Paul Brodeur published an article in The New Yorker magazine in 1989. A pair of studies had detected an association between childhood leukemia and proximity to high-voltage power lines. Should a reader have found cause for alarm? Brodeur had a notable track record. Earlier he had helped publicize the dangers of asbestos and exposed industrial efforts to cover up its risks. His credibility seemed sound. The primary researcher was from the University of North Carolina and was largely confirming an earlier, less rigorous study. That checked out, too. So caution seemed warranted. But the study was also vague. It was not a clinical study of causation, only an epidemiological study of correlation. Nor was there any physiological understanding of how the effect might occur. The overall strength of the EMFs just seemed too low to be biologically significant. In the ensuing media hype, other experts were at hand to note these qualifications. There was no firm consensus, mostly due to insufficient evidence. One would have to accept the status of uncertainty and hedge one's actions on the basis of possible outcomes. But even that required attending carefully to the combined suite of expert opinions. One expert perspective is not always sufficient where consensus does not yet exist.

For the next several years, Brodeur continued to sound the alarm. He published two books and stirred up a great deal of public sentiment. Yet while his reporting might have been responsible, he was not a scientist. His own conclusions seemed to receive inordinate weight. Many concerned parents lobbied for local changes and filed lawsuits for damages, as though the science was already well established. The message of uncertainty and the provisional nature of early studies had certainly not been appreciated. At the same time, medical researchers initiated many further studies (some taking many years), trying to ascertain the nature of EMFs as a possible carcinogen. Finally, in 1996, the National Research Council reviewed over 500 studies then available. Here was an independent assessment from a panel of the foremost experts in the field. The consensus? For over 30 types of cancer, no evidence indicated harm from EMFs. A key finding articulated a flaw in the original study. The researchers had used distance from power lines as an easily measurable proxy for the degree of EMF exposure. In retrospect, that proved ill founded. Subsequent investigators had been able to enter the homes and measure the EMFs directly. Ultimately, the trust in the original measurement strategy was misplaced. The scientists had to learn that clearly, just as much as the lay public. Even credible science, alas, is not always free from error (Sacred Bovines, November, 2008).

While the scientific debate is largely resolved, lay concerns about EMFs persist. Websites alarm the unwary. They cite selected research studies and sell books. Yet they do not exhibit the signs of a credible source of scientific consensus. Conspiracy theories find a home. Warnings about cell phones also recur periodically, although their EMFs are even weaker. The danger may seem plausible. But plausibility is not credibility. Good science and what merely counts as good science can part ways in the public eye when these lessons are not heeded.

Learning about who to trust for scientific knowledge (and why) thus constitutes an important challenge. Skepticism, with its exclusively negative orientation, does not solve the problem. The consumer of science needs to be equipped with an understanding of the nature of expertise and the many indirect ways to gauge it--and how those assessments may be limited and when they can fail.

Context also matters. What circumstances motivate and guide the rendering of the scientific information? One additional factor thus looms over all media presentations: the potential of particular interests, notably profit and power, to bias the report. More on that essential dimension shaping trust in another Sacred Bovines.

DOI: 10.1525/abt.2012.74.5.17

References

Allchin, D. (1999). Do we see through a social microscope? Credibility as a vicarious selector. Philosophy of Science, 66, S287-S298.

Allchin, D. (2008). Nobel prizes and noble errors. American Biology Teacher, 70, 389-392.

Allchin, D. (2012). What counts as science. American Biology Teacher, 74, 291-294.

Bass, T. (1990). Camping with the Prince and Other Tales of Science in Africa. Boston, MA: Houghton Mifflin.

Brownlee, S. & Lenzer, J. (2011). The bitter fight over prostate screening--and why it might be better not to know. New York Times Magazine, 9 October, 40-43, 57.

Collins, H. & Evans, R. (2007). Rethinking Expertise. Chicago, IL: University of Chicago Press.

David, S. & Pinch, T.J. (2005). Six degrees of reputation: the use and abuse of online review and recommendation systems. Social Science Research Network, Working Paper Series. http://ssrn.com/abstract=857505 or http://dx.doi.org/10.2139/ssrn.857505.

Epstein, S. (1995). The construction of lay expertise: AIDS activism and the forging of credibility in the reform of clinical trials. Science, Technology, & Human Values, 20, 408-437.

Epstein, S. (1996). Impure Science: AIDS, Activism, and the Politics of Knowledge. Berkeley, CA: University of California Press.

Goldacre, B. (2010). Bad Science: Quacks, Hacks and Big Pharma Flacks. New York, NY: Faber and Faber.

Goldman, A.I. (2001). Experts: which ones should you trust? Philosophy and Phenomenological Research, 63, 85-110.

Gonzalez, R.J. (2001). Zapotec Science: Farming and Food in the Northern Sierra of Oaxaca. Austin, TX: University of Texas Press.

Goodnough, A. (2011). Scientists say cod are scant; nets say otherwise. New York Times, 10 December, A20, A27.

Hardwig, J. (1991). The role of trust in knowledge. Journal of Philosophy, 88, 693-708.

Harris, G. (2011). Some doctors launch fight against advice on prostate cancer testing. Star Tribune, 9 October, A21.

Kahneman, D. (2011). Thinking, Fast and Slow. New York, NY: Farrar, Straus and Giroux.

Latour, B. & Woolgar, S. (1979). Laboratory Life: The Construction of Scientific Facts. Princeton, NJ: Princeton University Press.

Oreskes, N. & Conway, E.M. (2010). Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. New York, NY: Bloomsbury Press.

Park, R. (2000). Voodoo Science: The Road from Foolishness to Fraud. Oxford, U.K.: Oxford University Press.

Rampton, S. & Stauber, J. (2002). Trust Us, We're Experts: How Industry Manipulates Science and Gambles with Your Future. New York, NY: Tarcher/Penguin.

Rosenberg, S.A. (2011). Scientists say cod still overfished. Boston Globe, 11 December.

Shapin, S. (1994). A Social History of Truth: Civility and Science in Seventeenth-Century England. Chicago, IL: University of Chicago Press.

Spence, W., Herrmann, R.B., Johnston, A.C. & Reagor, G. (1993). Responses to Iben Browning's Prediction of a 1990 New Madrid, Missouri, Earthquake. U.S. Geological Survey Circular 1083. Washington, D.C.: U.S. Government Printing Office. http://pubs.usgs.gov/circ/1993/1083/report.pdf.

Toumey, C. (1996). Conjuring Science: Scientific Symbols and Cultural Meanings in American Life. New Brunswick, NJ: Rutgers University Press.

Understanding Science. (2012). Untangling media messages and public policies. University of California Museum of Paleontology. [Online.] Available at http://undsci.berkeley.edu/article/0_0_0/sciencetoolkit_02.

DOUGLAS ALLCHIN, DEPARTMENT EDITOR

DOUGLAS ALLCHIN has taught both high school and college biology and now teaches history and philosophy of science at the University of Minnesota, Minneapolis, MN 55455; e-mail: allchin@sacredbovines.net. He is a Fellow at the Minnesota Center for the Philosophy of Science and edits the SHiPS Resource Center (ships.umn.edu). He hikes, photographs lichen, and enjoys tea.