SCIENCE: A FEEDBACK SUCCESS STORY

Last week, I launched a series of posts titled “The Most Important Technical Distinction in the World” by introducing the concepts of positive and negative feedback. I suggested that understanding the distinction between these two types of feedback is crucial to understanding why so many of our social institutions seem to be on the verge of crisis. In this post, I’ll look more carefully at one of those institutions: modern science.

To be clear, science itself is not in crisis. Scientists in every field imaginable continue to churn out astounding discoveries as rapidly as ever. The looming crisis is more one of public relations: large numbers of Americans have begun to express an extreme distrust of both science and scientists. And this is no small matter, for a widespread distrust of science gives rise to the possibility that our public policy will be determined less by sound scientific reasoning and more by internet conspiracy theories and other forms of quackery. This possibility was never more evident than during the COVID pandemic, when President Trump led the charge in discrediting the doctors and scientists in his own administration, promoting untested cures when he was not downplaying the notion that we were actually in a deadly pandemic.

When pressed as to the wisdom of consulting their favorite politician or online influencer for medical advice rather than the medical research community, many pandemic skeptics offered responses something like the following: “I agree we should use evidence-based reasoning to determine public policy. But where are you going to get your evidence? From scientists? How many times have scientists been wrong in their claims? And how many times have scientists been caught falsifying their data, whether to make a name for themselves, to scare politicians into funding their research, or to extend the considerable control they already enjoy over the rest of society? I just don’t trust scientists. That’s why I prefer to do my own research.”

As progressives, most of us find such arguments maddening, particularly at times like a pandemic when countless lives are at stake. Rather than simply fuming at the apparent callousness of our political opponents, however, it will be more productive if we can articulate exactly why modern science works so well—and why “doing your own research” generally does not. Here is where the distinction between positive and negative feedback can help us out.

In the centuries prior to the Scientific Revolution, when church-run universities in Europe had begun to move beyond their exclusive focus on theology and incorporate physics into their curricula, the primary mode of instruction was disputation: one professor would be assigned to argue in favor of a given proposition while another was tasked with disputing it, with students left to decide which presentation was most persuasive. This Crossfire-like approach may have been intended to provide the study of nature with some balance, but one form of argument the debating professors were allowed to invoke had the effect of generating a positive feedback loop that did not merely fail to lead investigators to the objective truths of nature, but often pointed them in precisely the opposite direction. This type of argument was the appeal to authority: to establish a particular claim, just cite what some recognized expert had already said about the matter.

To illustrate how this approach produced positive feedback, consider the case of falling bodies. In the fourth century BCE, Aristotle suggested that the speed at which a heavy object falls is proportional to its weight. This claim is intuitive enough; just compare the speed at which a rock falls to the gentle descent of a feather. When Aristotle’s works were rediscovered by European scholars in the twelfth century CE, he quickly came to be recognized as the foremost authority on matters of physics, commanding so much respect that he was often referred to simply as “The Philosopher.” Accordingly, scholars such as Aquinas began to cite Aristotle’s account of falling bodies approvingly, which meant the next generation of students could cite both Aristotle and Aquinas with respect to the rate at which bodies fall, the force of authority underlying the original claim having only been amplified. Eventually, the Aristotelian account of falling bodies came to be repeated so many times by so many different authorities that no one thought to question it—until it ran into one of the empirical experiments that helped launch the Scientific Revolution.

Legend has it that Galileo dropped weights of differing sizes off the Leaning Tower of Pisa to demonstrate that they all fall at the same rate, regardless of their weight, but this story is likely apocryphal; at a time before stopwatches had been invented, freefalling bodies fell too quickly for anyone to time their descent. Instead, by rolling bronze balls down inclined planes and using his pulse to gauge their rate of descent, Galileo demonstrated empirically that Aristotle’s account of falling bodies had been wrong from the start. In fact, all heavy bodies fall at the same accelerating rate, as long as such extraneous factors as air resistance are factored out. By discrediting one of medieval Europe’s most revered authorities—not to mention the appeal to authority as a reliable means of arriving at the truth—Galileo helped pioneer a new approach to the study of nature that would eventually become the modern scientific method. The tremendous success of this method, more formally known as the hypothetico-deductive method, can largely be ascribed to the fact that it replaced the positive feedback loop characteristic of scholastic inquiry with two distinct, if closely intertwined, negative feedback mechanisms.
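For readers who want the modern formulation, what Galileo’s inclined-plane measurements imply can be written down in a couple of lines. The notation below is today’s, not Galileo’s; g stands for the gravitational acceleration and t for elapsed time, and note that the body’s weight appears nowhere:

```latex
% Distance fallen from rest under uniform acceleration: the body's
% weight never enters the formula.
\[
d = \tfrac{1}{2} g t^{2}, \qquad g \approx 9.8\ \mathrm{m/s^{2}}
\]
% Galileo's inclined-plane observation in this notation: taking each
% equal time interval as the unit of time, the distance covered in the
% n-th interval is
\[
d_{n} - d_{n-1} = \tfrac{1}{2} g \bigl( n^{2} - (n-1)^{2} \bigr)
               = \tfrac{1}{2} g \, (2n - 1),
\]
% so successive intervals yield distances in the ratio 1 : 3 : 5 : 7,
% and so on -- the "odd-number rule" Galileo actually measured.
```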

To briefly review how the hypothetico-deductive method works, the first step involves informal observation: the scientist simply surveys some natural phenomenon, keeping an eye out for any regularities or patterns that might emerge. When such a pattern does appear, the scientist may formulate a theory to explain it, typically taking the form of a universal law or principle that accounts for all the particular instances observed. Aristotle had gotten this far with his account of falling bodies, but for the modern scientist, formulating a claim about how nature operates only gets you through the hypothetical stage of the hypothetico-deductive method. The researcher is then charged with deducing various results that we would expect to follow, should the proposed theory be correct, before returning to empirical observation to see whether the predictions are, in fact, borne out.

Typically, this second round of observation is conducted in targeted fashion by performing experiments specifically designed to test the predictions in question. If even one of the predictions is refuted by the observed evidence, the theory is shown to be incorrect, meaning the scientist must either modify the theory or abandon it. Only if a theory manages to survive every challenge the scientist can think to throw its way does it gain a degree of empirical credence, at which point the scientist may choose to go public with it. In the seventeenth century, this usually meant communicating the theory in a letter to a trusted colleague; today it would mean publishing a paper in a peer-reviewed journal. Regardless, with nature having already provided an initial check on any truth claims advanced, a second check now kicks in: other scientists.
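For the programmatically inclined, the logic of this test-and-reject loop is simple enough to sketch in a few lines of Python. What follows is a toy illustration of the negative feedback at work, not anyone’s actual research workflow, and every name in it is invented for the occasion:

```python
# A toy model of the hypothetico-deductive method as negative feedback.
# All names here are invented for illustration; no real research
# program is this tidy.

def test_theory(predictions, run_experiment):
    """Return True only if every prediction survives its experiment.

    `predictions` maps an experimental setup to the outcome the theory
    predicts; `run_experiment` reports what nature actually does.
    """
    for setup, predicted_outcome in predictions.items():
        if run_experiment(setup) != predicted_outcome:
            # Negative feedback: one failed prediction sends the
            # theorist back to the drawing board.
            return False
    # Survival never proves a theory; it only lends it credence.
    return True

# Example: Aristotle vs. Galileo on falling bodies. "Nature" here is
# the constant-acceleration model, which ignores mass entirely.
def nature(setup):
    mass_kg, time_s = setup
    return round(0.5 * 9.8 * time_s ** 2, 1)  # distance fallen, meters

aristotle = {(1, 2.0): 4.9, (10, 2.0): 49.0}  # 10x heavier, 10x farther
galileo = {(1, 2.0): 19.6, (10, 2.0): 19.6}   # weight makes no difference

print(test_theory(aristotle, nature))  # False: refuted by experiment
print(test_theory(galileo, nature))    # True: provisionally accepted
```

Note the asymmetry the code makes explicit: a single failed prediction refutes the theory outright, while any number of successes yields only provisional acceptance.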

From the earliest days of modern science, one of its defining features has been that its claims are developed in the context of a scientific community. Accordingly, no matter how brilliant a new theory may appear when first announced—and no matter how esteemed its author—the theory is not granted admission into the ranks of accepted scientific claims until other scientists have had a chance to test it. A minimum requirement for such testing is repeatability: if experimental evidence is put forward to support a theory, other scientists must be able to repeat the experiment and get the same results before the evidence is widely accepted. Ideally, however, other scientists will use the proposed theory to generate additional predictions that may not have even occurred to the original scientist, devising experiments of their own to test these new predictions. One of the benefits of this collective approach to theory-testing is therefore quantitative in nature: having more scientists test proposed theories in a greater variety of ways can help root out mistaken theories, while providing those theories that survive all this testing with even higher degrees of credence. But the collective approach to theory-testing also exploits a particular quirk of human psychology, turning a potential liability on the part of individual researchers into an asset, as viewed from the perspective of our collective pursuit of knowledge.

Even in the seventeenth century, long before psychology had entered the ranks of the empirical sciences, observers of the human condition knew that people can be biased in their thinking. If I devise some novel scientific theory, for instance, I am likely to be strongly biased in its favor. The theory, after all, could be my ticket to everlasting scientific glory, or at least to lifetime patronage by some wealthy prince. This means, however, that during the theory’s initial testing, self-interest may incline me to go easy on my theory. I may refrain from subjecting the theory to the most rigorous tests I can imagine, or I might look the other way when experimental results come back that depart from what I had predicted, perhaps ascribing the deviations to defects in my lab equipment. Such an explanation of the deviations could be correct. But it could also mean I am clinging to a theory that has been empirically refuted, unable to bear the thought of making the long trudge back to an empty drawing board, my dreams of scientific fame and financial security having been dashed.

If the individual scientist’s worldly, self-interested motives can therefore cloud what might otherwise be a sincere attempt to discern the truth of things, the fact that other scientists are biased in this same fashion can be a saving grace, as viewed from a collective perspective. Beyond advancing our knowledge of falling bodies, Galileo showed that one of the surest ways to make a name for yourself in science is by demonstrating some widely accepted theory to be wrong, particularly when the theory was first advanced by some revered authority. Here we can also think of Einstein rocketing to scientific and popular stardom when astronomical observations confirmed that Newton’s laws of space and time break down under certain conditions, thus requiring augmentation by Einstein’s theory of general relativity. Without questioning the motives of either Galileo or Einstein, the point is that even if worldly motives should incline individual scientists to go easy on their own theories when testing them, these same motives will incline scientists to be as demanding as they can when challenging the theories of their esteemed predecessors or their contemporary rivals. And this makes it extremely likely that, sooner or later, someone will root out even the subtlest errors in our accepted theories, to the benefit of our collective knowledge of nature.

To reiterate, then, one of the key reasons modern science has been able to make such rapid progress in unlocking the secrets of nature is that, as an institution, it disciplines itself by means of two forms of negative feedback. If we consider nature and our scientific claims about nature to form a system, with the system being in balance when the claims we make about nature correspond to the way nature actually operates, the hypothetico-deductive method actually encourages individual scientists to disrupt the established equilibrium by taking certain shots in the dark, proposing novel theories that no one has yet dreamed of and thus potentially advancing our knowledge in revolutionary fashion; think here again of Galileo and Einstein. Still, this does not mean just any theory proposed will gain lasting currency. If a theory is off the mark, failing to reflect the way nature actually operates, two forces will kick in to draw our collective assertions back into balance with nature. The first is the objective order of nature itself, as it refuses to behave as the incorrect theory would predict. The second is the community of other scientists, who can be counted on to use observation and experimentation to knock down any theories that do not accurately reflect the objective facts of nature.

This contrast between the scholastic and modern approaches to the study of nature may again incline us to think that positive feedback is bad, whereas negative feedback is good. To highlight the constructive role positive feedback can play in driving progress forward, however, let us consider the symbiotic relationship between modern science and technology. Gaining an understanding of how nature operates—the basic work of science—allows us to develop technologies to better perform any number of useful tasks. This can include devising more powerful instruments by which to study nature, as happened when Galileo drew upon his studies in optics to build one of the world’s first telescopes. This technological innovation allowed Galileo to discern that Jupiter has moons orbiting it—a scientific discovery so exciting that it prompted later engineers to devise even more powerful telescopes, with the cycle continuing. Or to cite a more contemporary example, the development of quantum physics in the first half of the twentieth century made possible the development of digital computers over the second half of that century. And needless to say, computers have since revolutionized every facet of scientific research, thus leading to more discoveries being made, some of which led to even more powerful computers, and so on.

The mutually reinforcing effects of scientific and technological progress having been noted, the crucial point to keep in mind is that this particular positive feedback mechanism works—advancing the cause of both science and technology—only because scientists discipline themselves by means of the twin negative feedback mechanisms of empirical testing and peer review. In fact, technology can unwittingly serve as a third source of negative feedback for the underlying science. If a new technology fails to work as predicted, this provides a strong hint that the scientific claims used to develop it may need to be revisited. Conversely, if a technology is successfully utilized by countless users over long periods of time, this provides the underlying science with strong corroboration. Choose to doubt the principles of quantum physics if you like, therefore, but every time you use your cell phone to look up arguments refuting this theory, you are really just lending it further support through the successful use of this handheld computer.

Which brings us back to “doing your own research” and the argument cited earlier, that scientists often make claims that turn out to be false, thus making a distrust of scientists eminently reasonable. The first part of this argument is incontrovertible: scientists do sometimes get things wrong, usually as a result of error but occasionally as a result of fraud. We can thus understand why it would be confusing to the average layperson when scientific claims that are widely trumpeted one day end up getting rejected the next. Are red wine and chocolate good for us or not? Still, the key point to note here is that when an incorrect or fraudulent scientific claim gets advanced, it is almost always other scientists who detect and expose the falsehood. Rarely if ever do skeptical politicians, cable pundits, or online influencers pick up on some error that the entire scientific community had missed. Indeed, the very strength of the modern scientific method is that it has a series of checks built into it designed to catch and correct departures from the objective truth, no matter what spirit they are offered in, with these checks only being strengthened by the fact that individual researchers may be driven more by self-interested motives than by a dispassionate love of truth. The result is a method of investigating nature that is neither all-powerful nor infallible. But given the finite nature of our intellects, it is the most powerful, most reliable means of discerning the objective facts of nature that anyone has yet invented, precisely because it recognizes its own fallibility and consequently incorporates multiple means of detecting and correcting errors into its standard practices.

Contrast this self-correcting dynamic with the dynamic that governs the domain to which most people turn when doing their own research: social media. This latter forum for inquiry actually bears a strong resemblance to the scholastic study of nature insofar as it tends to generate positive feedback loops better suited to perpetuating and amplifying untruths than to catching and correcting them. To illustrate, assume a single social media user shares some baseless claim with his friends, whether mistakenly, maliciously, or simply to have fun. Perhaps only ten of these friends will find the post convincing enough—or intriguing, scary, amusing, or enraging enough—to share with their own followers. These people may not even believe the false claim themselves, but if each of them can get ten more people to retweet the post, and each of their recipients can get ten more people to do the same, the claim will quickly go viral, spreading in exponential fashion. Before long, the false claim will effectively gain the status of truth—something “everybody knows”—at least within certain communities, with the claim’s authority only growing stronger as people hear it repeated by ever more of their friends. And of course, when the social media user who initiates this positive feedback loop happens to be the President of the United States, with millions of Twitter followers, the claim in question will spread that much farther and faster, completely irrespective of whether it has any basis in objective truth.
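A quick back-of-the-envelope calculation shows how little time such a cascade needs. The tenfold sharing rate below is simply the figure assumed in the scenario above, not an empirical constant:

```python
# Growth of the sharing cascade sketched above, assuming each round of
# sharing reaches ten new people (the figure from the scenario above,
# not an empirical constant).
reach = 1
for generation in range(1, 7):
    reach *= 10
    print(f"after round {generation}: {reach:,} people reached")
# After just six rounds of sharing, the claim has reached a million
# people. This is exponential growth at work: reach = 10 ** generation.
```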

This, then, gives us a more complete explanation as to why progressives were so dismayed when millions of Americans followed Donald Trump in disparaging the medical and scientific communities at the height of the COVID pandemic, instead turning to social media for their medical advice. As finite and fallible creatures, we often lack a complete and accurate understanding of how our world works. This is especially true during times of rapid change, as when a novel virus is sweeping the planet, reproducing itself exponentially in the original sense of “going viral.” But even when our knowledge is incomplete and may even be partially wrong, we have no choice but to act, with our actions often having profound implications for ourselves and others. It follows, then, that when we must act in cases of uncertainty, all we can do is play the odds.

To stick to the case at hand, when the COVID virus began spreading, we all had two choices. We could choose to follow the consensus advice of doctors and medical researchers, themselves acting on the best understanding they had at the moment of how this novel virus operates. These scientists knew better than anyone that their initial, imperfect understanding of the virus would likely contain some inaccuracies, which could lead them to make suggestions that would turn out to do more harm than good. Still, everyone had to act. So even when the pandemic was in its early days and everyone was more or less following their gut, it would seem that throwing your lot in with the gut instincts of people who have spent their professional lives studying and curing diseases would give you the best odds of a good outcome. More importantly, as the pandemic wore on and the medical research establishment threw its full weight behind studying the COVID virus, the scientific community’s evolving understanding of the virus was disciplined by a series of negative feedback mechanisms designed to root out both error and fraud. This evolving scientific understanding of how COVID operates thus had increasingly good odds of being right—and of growing even more accurate, detailed, and actionable over time. Accordingly, although any practical advice derived from this evolving body of knowledge still had some chance of being misguided and causing real harm, the odds of it being good advice only continued to improve.

The other option during the pandemic was to get online and do your own research, ultimately trusting the opinions of some politician, pundit, or social media influencer colorful enough to gain a wide following. Occasionally, one of these personalities might get lucky and offer some claim about the virus that turned out to be both true and useful. The odds of this happening were generally low, however, given that most of these commentators had little medical training or other experience with the subject at hand. As time went by, moreover, the odds of their advice being empirically justified actually went down, since claims posted on social media are not disciplined by any sort of objective, empirical checks, but rather are subject to positive feedback mechanisms that have a strong tendency not just to repeat false claims, but to exaggerate and amplify them. No one would play those odds in Las Vegas, so it seems like a terrible bet to place when the stakes involve your life and the lives of your loved ones.
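To see just how bad this bet is, consider a toy calculation whose numbers are invented purely for illustration: suppose the expert consensus turns out to be right about any given precaution 80 percent of the time, while your favorite influencer is right 20 percent of the time.

```python
# A hypothetical comparison of the two information sources; both
# accuracy rates are invented for illustration, not measured.
p_experts, p_influencer = 0.80, 0.20
decisions = 10  # independent health decisions made over the pandemic

print(f"experts right on all {decisions} calls:    {p_experts ** decisions:.1%}")
print(f"influencer right on all {decisions} calls: {p_influencer ** decisions:.2e}")
# Roughly 10.7% versus about 1 in 10 million. And unlike the expert
# consensus, the influencer's odds do not improve as evidence accrues.
```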

Next week, we will consider the two paths another key social institution has gone down in recent decades: the media. As you have probably guessed, one is the path of negative feedback, the other the path of positive feedback.
