Scientific Regression

Civil Defense Perspectives 33(5): September 2018

Everyone claims to be in favor of “data-based,” “evidence-based,” or “science-based” policy; demands “peer review”; and disdains “pseudoscience,” “fringe opinions,” and “outliers.”

But in these days when our society is so heavily dependent on science, “the problem with science is that so much of it simply isn’t,” writes software engineer William A. Wilson (First Things, May 2016, https://tinyurl.com/zzlbevc).

A study by the Open Science Collaboration (OSC) found that an astonishing 65% of 100 published psychology experiments failed to show statistical significance on replication, and many of the remainder showed greatly reduced effect sizes.

An unspoken rule in the pharmaceutical industry is that half of all academic biomedical research will ultimately prove false. In 2011, a group of researchers at Bayer found that in more than 75% of 67 recent drug-discovery projects based on preclinical cancer biology research, the data published in prestigious journals did not match up with their own attempts at replication.

John Ioannidis of Stanford University’s School of Medicine shows that for a wide variety of scientific settings and fields, the proportion of possible hypotheses that turn out to be true, and the accuracy with which an experiment can discern truth from falsehood, are both low. In many cases, approaching even 50% true positives requires unimaginable accuracy. Hence, Wilson writes, the eye-catching title of Ioannidis’s paper: “Why Most Published Research Findings Are False.”
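Ioannidis’s point is essentially an exercise in base rates. Here is a minimal sketch of the arithmetic; the prior, power, and significance values are illustrative assumptions of ours, not numbers from his paper:

```python
# Sketch of the base-rate arithmetic behind Ioannidis's argument.
# All specific numbers below are illustrative assumptions, not values
# taken from his paper.

def positive_predictive_value(prior, power, alpha):
    """Fraction of 'significant' findings that are actually true."""
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# If 1 in 10 tested hypotheses is true, with 80% power and the usual
# 5% significance threshold, roughly a third of positives are false:
print(positive_predictive_value(prior=0.10, power=0.80, alpha=0.05))  # ~0.64

# With a 1-in-100 prior, even perfect power leaves most positives false:
print(positive_predictive_value(prior=0.01, power=1.00, alpha=0.05))  # ~0.17
```

As the second case shows, when true hypotheses are rare, even a flawless experiment cannot push the share of true positives past 50% at conventional significance thresholds.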

The probability that a randomly selected mutation in a randomly selected gene has a given effect, such as increasing resistance to HIV, is quite low, so a positive finding is more likely than not to be spurious (ibid.). Yet a Chinese scientist used CRISPR technology to produce the first genetically edited human embryos with supposed HIV resistance (tinyurl.com/ydhnx2hj).

The fallacies discussed by Wilson permeate the EPA’s “science” and the demands to radically reduce CO2 emissions.

Bad Data

The HadCRUT4 surface temperature data used to adjust many climate models are extremely sparse, especially in the early record. From 1850 to 1853, temperatures for the entire Southern Hemisphere were calculated from one location in Indonesia and a few random ships. Australian researcher John McLean also notes that obvious errors are not corrected: “For April, June and July of 1978 Apto Uto (Colombia, ID:800890) had an average monthly temperature of 81.5°C, 83.4°C and 83.4°C [sic] respectively.” That would be 178–182 °F. Obviously, the data set is unreliable (TWTW 10/13/18, https://tinyurl.com/yd3rtfjn).

In any event, atmospheric temperatures, not surface temperatures, are needed to evaluate the 1979 Charney Report, in which climate modelers speculated that an increase in atmospheric water vapor would greatly amplify any increase in temperatures from CO2. Without this amplification, the dreaded “runaway” warming could not occur, because the greenhouse effect of CO2 itself, which is logarithmic, was already approaching saturation at the pre-industrial level of 280 ppm. In 1990, Roy Spencer and John Christy made the breakthrough discovery of how to determine atmospheric temperature from satellite data collected since 1978. The predicted tropical “hot spot” is not found in those data, so any surface warming must be caused by factors other than the greenhouse effect.
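The “logarithmic” point means that each doubling of CO2 adds the same increment of warming influence, so each added molecule matters less than the last. A minimal sketch using the widely cited simplified approximation of Myhre et al. (1998), ΔF ≈ 5.35 ln(C/C0) W/m²; the formula is a standard approximation from the literature, not one taken from the Charney Report or from this newsletter:

```python
import math

# CO2 radiative forcing under the standard simplified approximation
# delta_F ~= 5.35 * ln(C / C0) W/m^2 (Myhre et al., 1998). Used here
# only to illustrate the logarithmic shape: equal doublings of CO2
# add equal forcing increments.

def co2_forcing(c_ppm, c0_ppm=280.0):
    return 5.35 * math.log(c_ppm / c0_ppm)

for c in (280, 560, 1120):
    print(f"{c:4d} ppm: {co2_forcing(c):5.2f} W/m^2")
# 280 ppm:  0.00 W/m^2
# 560 ppm:  3.71 W/m^2   (first doubling)
# 1120 ppm:  7.42 W/m^2  (second doubling adds the same 3.71)
```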

“Most guesses are discarded when found wrong,” writes Ken Haapala. “It is past time for the US government and others to discard the guesses in the Charney Report” (TWTW 9/1/18, tinyurl.com/y9n683jt). But the report is still the core reasoning for the UN Intergovernmental Panel on Climate Change (IPCC), the U.S. Global Change Research Program (USGCRP), and many U.S. government actions, including the EPA’s Endangerment Finding concerning CO2 (TWTW 8/4/18, tinyurl.com/ybmqc4e8).

Groupthink, Bureaucratic Science, and Peer Review

Participants in the Charney Report manifested signs of groupthink, including tremendous self-confidence, an unusually monolithic community, and working hard to buttress the basic tenets instead of constantly testing them (ibid.).

Peer review is supposed to assure accuracy and integrity. But “if peer review is good at anything, it appears to be keeping unpopular ideas from being published,” writes Wilson. “Bad” papers that failed to replicate were found, on average, to be cited far more often than papers that did replicate! Some non-reproducible preclinical papers spawned an entire field, with hundreds of secondary publications that expanded on elements of the original observation but did not actually seek to confirm or falsify its fundamental basis. Once careers are based on a false premise, “peer review switches from merely useless to actively harmful.”

The highly touted “self-correcting” nature of science is at best slow and sclerotic, and may be overwhelmed by an unremitting flood of new, mostly false results. We now have an “ever more bloated scientific bureaucracy,” an influx of careerists rather than ascetic seekers of truth, and the blossoming of a Cult of Science (scientism). “Cult leadership trends heavily in the direction of educators, popularizers, and journalists.”

Statistical Falsification

Results may be simply wrong, and outright fraud is not uncommon, but statistical falsification such as “p-hacking” may be devilishly difficult to detect. “The same freedom that empowers a statistician to pick a true signal out of the noise also enables a dishonest scientist to manufacture nearly any result he or she wishes,” Wilson states. Re-analysis is impossible without the raw data, which are seldom available. Harvard University, among others, roared against rules forbidding EPA policy based on “secret science” such as the Six Cities study (see “Six Cities: Built on Dust” below).
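To see how p-hacking works in practice, consider a minimal simulation on entirely hypothetical data: run enough subgroup analyses on pure noise and, on average, one in twenty will cross the conventional p < 0.05 line by chance alone.

```python
import random
import statistics

# p-hacking illustrated on pure noise: with no real effect anywhere,
# roughly 1 in 20 "subgroup analyses" will still look significant at
# the conventional p < 0.05 level. All data here are simulated.

random.seed(42)

def t_statistic(sample):
    """One-sample t statistic against a true mean of zero."""
    n = len(sample)
    return statistics.mean(sample) / (statistics.stdev(sample) / n ** 0.5)

spurious = 0
for subgroup in range(20):              # twenty looks at the same noise process
    data = [random.gauss(0, 1) for _ in range(30)]
    if abs(t_statistic(data)) > 2.05:   # two-sided 5% cutoff for df = 29
        spurious += 1
        print(f"subgroup {subgroup}: 'significant' effect in pure noise")

print(f"{spurious} spurious 'findings' in 20 analyses of noise")
```

The dishonest version of this exercise is simply to publish the “significant” subgroup and never mention the other nineteen.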

One common distortion is created by the use of diminutive denominators. Division by zero is impossible, and division by a very small number makes the quotient very large, as in calculating the feedback effect in global warming models and the “Global Warming Potential” of trace gases like methane, hydrofluorocarbons and chlorofluorocarbons (e.g., Freon), and nitrous oxide (N2O). “Such parameters have no meaning or purpose other than generating alarm and headlines,” writes Thomas Sheahen (tinyurl.com/yakkrdmh).
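The feedback point can be made concrete. In a simple feedback model the amplification takes the form G = 1/(1 − f), so as the assumed feedback fraction f approaches 1 the denominator shrinks and the projected effect explodes. A minimal sketch; the f values are illustrative, not drawn from any particular climate model:

```python
# The small-denominator problem in a simple feedback model:
# gain G = 1 / (1 - f). The feedback fractions below are illustrative.

def gain(f):
    return 1.0 / (1.0 - f)

for f in (0.5, 0.9, 0.95, 0.99):
    print(f"f = {f:4.2f}  ->  gain = {gain(f):6.1f}")
# f = 0.50  ->  gain =    2.0
# f = 0.90  ->  gain =   10.0
# f = 0.95  ->  gain =   20.0
# f = 0.99  ->  gain =  100.0
# A 4-point shift in f (0.95 -> 0.99) multiplies the projection by five.
```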

“How does it happen that a whole generation of scientific experts is blind to obvious facts?” Freeman Dyson asks. Government-supported tribal groupthink? (tinyurl.com/y8woqen2).

Six Cities: Built on Dust

The Clinton EPA’s rules on small particulate matter (PM2.5, dust or soot of diameter ≤ 2.5 µm), which claim long-term health and economic benefits, were based on the “landmark” 1993 Harvard Six Cities study. Weaknesses of the study include its small sample size (8,111 adults), its dependence on subjects’ recollections of the past rather than objective measurement of exposure, weak correlations, and the strong confounding variable of cigarette smoking. The researchers dismissed any questioning of their work.
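The confounding problem can be made concrete with a small simulation; the numbers are hypothetical, not the Six Cities data. If smoking drives both pollution exposure and mortality risk, exposure will correlate with risk even when it has no effect at all:

```python
import random

# Confounding illustrated with simulated data (not the Six Cities data):
# smoking raises both pollution exposure (via city and occupation, say)
# and mortality risk. Exposure has no causal effect here, yet it still
# correlates with risk because both share the smoking variable.

random.seed(0)
n = 10_000
smoker = [random.random() < 0.4 for _ in range(n)]
exposure = [random.gauss(12 if s else 10, 2) for s in smoker]  # no effect on risk
risk = [(2.0 if s else 1.0) + random.gauss(0, 0.5) for s in smoker]

def correlation(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(f"exposure-risk correlation: {correlation(exposure, risk):.2f}")  # ~0.3
```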

“The entire issue is a classic in bureaucratic science,” writes Ken Haapala, President of the Science and Environmental Policy Project (SEPP). He notes that Harvard received $618 million in federal support in 2017, or 12.6% of its operating revenue, perhaps accounting for its vigorous objection to EPA transparency rules (TWTW 8/18/18, https://tinyurl.com/yaum7m6w).

Also see: “Banning Dust,” CDP July 2015, tinyurl.com/y6vurpmk; Enstrom JE, “Scientific Distortions in Fine Particulate Matter Epidemiology,” J Am Phys Surg, Spring 2018, https://tinyurl.com/y9yzv2pd; and “Restoring Scientific Integrity,” CDP May 2018, tinyurl.com/y927mfmk.

Peer Review and Progress

Based on 50 years of experience, the late Thomas Gold worried that the herd instinct may be driving scientific progress in the wrong direction, away from new discoveries. “It is important to recognize how strong the interaction of the support of science and herd behavior really is,” he wrote. “It is virtually impossible to depart from the herd and continue to have support, a chance of publication, and the other advantages that one requires to work in a field.” He gave several examples, including the outrage that greeted his asking a question about the origin of oil: What if petroleum can be generated abiogenically deep in the earth under conditions of high temperature and pressure? A funding proposal to explore the idea was rejected because some peer reviewers called it “misguided” (tinyurl.com/y8te2xjp).

The question has critical implications. What if petroleum is not a “fossil” fuel, but a renewable resource? Amazingly, merely sharing a comment on Facebook can still elicit an outraged reaction and an accusation of being a conspiracy theorist in the pay of fossil fuel companies that want to squelch development of “renewables” (https://tinyurl.com/yb3mer9b). “Red Wave” stated that J.D. Rockefeller introduced the term “fossil fuel” to imply scarcity, and that oil regenerates faster than it can be depleted. Is this commercially useful? I don’t know. But a challenge to produce a peer-reviewed article elicited a long list.

Neither Clean, Green—Nor Renewable

To the nearest whole number, the percentage of world energy consumption supplied by wind in 2014 was zero (0%). It is said that 14% of the world’s energy is supplied by “renewables,” but that is almost all from the reliable renewables (hydro, wood, and dung), not the unreliable wind and solar (Matt Ridley, The Spectator 5/13/17, https://tinyurl.com/k3y9n43).

Just to supply the 2% annual growth in world energy demand would require 350,000 new turbines per year, 1.5 times the number built since governments started subsidies in 2000. Building that many turbines would require 50 million tonnes of coal per year, about half the EU’s hard coal-mining output.
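A rough back-of-envelope can reproduce figures of this order. Every input below is an illustrative assumption of ours, chosen for round numbers; none is a published input from Ridley’s article:

```python
# Back-of-envelope check on the turbine arithmetic. Every input below
# is an illustrative assumption chosen for round numbers, not a figure
# from Ridley's article.

world_energy_twh = 160_000                 # rough world primary energy, TWh/yr
new_demand_twh = world_energy_twh * 0.02   # 2% annual growth

turbine_mw = 2.5                  # assumed nameplate rating
capacity_factor = 0.40            # assumed average output fraction
twh_per_turbine = turbine_mw * capacity_factor * 8760 / 1e6

turbines_per_year = new_demand_twh / twh_per_turbine
print(f"turbines needed per year: {turbines_per_year:,.0f}")   # ~365,000

coal_tonnes_each = 143            # assumed coal for steel and cement
coal_mt = turbines_per_year * coal_tonnes_each / 1e6
print(f"coal required: {coal_mt:,.0f} Mt/yr")                  # ~52 Mt
```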

“I have a commercial interest in coal,” Ridley writes. “Now it appears that the black stuff also gives me a commercial interest in ‘clean,’ green wind power” (ibid.).

Other nonrenewables that are essential to wind turbines—as well as to solar cells, defense, computers, communications, and other technologies—are rare earths and other exotic minerals, virtually 100% of which are imported, largely from Russia and China. The mines generate toxic and radioactive waste on an epic scale, and human rights abuse is rampant. Although the U.S. probably could achieve minerals independence, federal law prohibits the necessary exploration and mining, writes Paul Driessen (Canada Free Press 10/28/18, https://tinyurl.com/yd4oe7ob).

Peer Review Misses Huge Math Error

An alarming report that the oceans had warmed 60% faster than the UN IPCC predicted, published in Nature on Oct. 31 and widely publicized, had to be walked back when major math errors were pointed out by “climate contrarian” Nic Lewis.

“The peer review process, presumably involving credentialed climate scientists, should have caught the error before publication,” stated Roy Spencer. “For decades now those of us trying to publish papers which depart from the climate doom-and-gloom narrative have noticed a trend toward both biased and sloppy peer review of research submitted for publication in scientific journals” (https://tinyurl.com/y75md495).

Science Refuses to Retract Fraudulent Paper

The foundation for the linear no-threshold (LNT) model of cancer risk assessment was laid in a paper by the Genetics Panel of the National Academy of Sciences Biological Effects of Atomic Radiation (NAS BEAR) I Committee, published in Science in June 1956. Recently, Jerry Cuttler and others requested that the paper be retracted because of its multiple instances of serious falsification and fabrication. Editor-in-chief Marcia McNutt refused, stating that there needed to be some statute of limitations on retractions after a “field has moved on,” and denied that the paper still has a “pervasive influence.”

Edward J. Calabrese responded (tinyurl.com/yb5zhzo4), noting that the LNT model “continues to dominate all regulatory agencies, affects clinical treatments, environmental regulations, clean-up costs, medical treatment strategies, all needlessly wasting massive resources.” The committee’s recommendation to switch from the threshold model to the LNT model is widely considered to be “the most significant event in the history of risk assessment.”

The situation is “also extraordinary because substantial contemporary toxicological discoveries have revealed serious failings with the LNT model with findings more consistent with the threshold and hormesis models.”
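For readers unfamiliar with the competing models, here is a minimal sketch of the three dose-response shapes at issue; the coefficients are arbitrary illustrations, not fitted values. LNT assumes excess risk rises linearly from zero dose, the threshold model assumes no excess risk below some dose D0, and hormesis assumes small doses are mildly protective:

```python
# The three dose-response shapes at issue, with arbitrary illustrative
# coefficients. Excess risk as a function of dose:
#   LNT:       linear from zero dose
#   threshold: zero until a threshold dose d0, then linear
#   hormesis:  mildly protective below d0, then linear

def lnt(dose, k=0.01):
    return k * dose

def threshold(dose, k=0.01, d0=50.0):
    return k * max(0.0, dose - d0)

def hormesis(dose, k=0.01, benefit=0.1, d0=50.0):
    if dose < d0:
        # small net benefit (negative excess risk) at low doses
        return -benefit * (dose / d0) * (1 - dose / d0)
    return k * (dose - d0)

for d in (0, 25, 50, 100, 200):
    print(f"dose {d:3d}: LNT {lnt(d):5.2f}  "
          f"threshold {threshold(d):5.2f}  hormesis {hormesis(d):5.2f}")
```

The regulatory stakes lie entirely in the low-dose region, where the three models give qualitatively different answers from the same high-dose data.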

While after 60 years none of the panel members are alive to defend their work, the factual record of the scientific misconduct and the reasons for the falsifications is “substantive and unequivocal,” Calabrese writes. “I suspect that if the data quality were good, they would not have ‘needed’ to lie and deceive. However, their LNT goal was more important than truth.” He concludes that “the LNT model decision should have been reversed except for the ideological grip that has long enveloped this field.”

McNutt responded that the matter is closed. At a large meeting she happened to be attending, none of the distinguished scientists said they shared Calabrese’s opinion.
