THE MOST IMPORTANT TECHNICAL DISTINCTION IN THE WORLD

Not long ago I read The Chaos Machine by Max Fisher. It is the most disturbing book I have read in some time. The book studies the major social media platforms—Facebook, Twitter, YouTube, and others—focusing on how they have tweaked their algorithms over the past decade or so to steer users away from mainstream, fact-based content and toward the internet’s radical fringes, all in order to get more clicks and thereby generate ad revenues.

In a few weeks, I’ll come back and consider The Chaos Machine more carefully, but one of Fisher’s arguments stood out to me: he suggests the owners of the major platforms are often blinded to the potential dangers of their products by an ideological certainty that the internet is ultimately a force for social good. This argument convinced me of the need to dedicate a multi-post series to a technical distinction crucial to determining whether or not this ideological conviction is true. Perhaps I am risking hyperbole by labeling this The Most Important Technical Distinction in the World, but I believe being able to work with this distinction is indispensable for thinking our way through much of the craziness that has overtaken our world over the past decade or two, whether on the internet, in the media, in politics, or in business.

What is this pivotal technical distinction? It is the distinction between positive and negative feedback. In this post, I will present a basic explanation of both types of feedback, along with some preliminary suggestions as to why understanding the difference between the two is so crucial for analyzing all sorts of social developments. Then, over the next several weeks, I will consider how shifts in whether negative or positive feedback predominates have been influencing some of our society’s core institutions—and how much more of a shift could derail, not just these institutions, but our larger society.

To dive in, feedback is generated whenever you have a system that accepts input and converts it to output in some regular fashion, as when a computer crunches numbers, with a key proviso: some of the output loops back around to form at least a portion of the input for the next cycle through the system. When this happens, one of two dynamics can be generated, depending on what the system does with the input it receives.

When a system amplifies its input, the result is positive feedback, also known as reinforcing feedback. One of the most straightforward examples involves an acoustic amplifier. When you speak into a microphone, the signal is transmitted to an amplifier, which then reproduces your voice, only louder. Normally, this makes it easier for an audience to hear you, but when the microphone is placed directly in front of the amplifier, things quickly go haywire: your amplified voice gets fed back into the microphone, which causes the amp to reproduce the resulting sound at an even louder volume, which then gets amplified again, and again, and so on. At best, the result is an annoying shriek, but if the amplifier is not equipped with an automatic shutoff, the self-reinforcing feedback loop can cause the amp to overheat and blow.
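For readers who like to see the dynamics in code, here is a minimal sketch of the runaway loop. It assumes, purely for illustration, that each trip through the microphone-and-amp circuit multiplies the signal’s amplitude by a fixed loop gain greater than one; the specific numbers are made up.

```python
# Toy model of a runaway audio feedback loop.
# Assumption: each cycle through mic -> amp multiplies the
# signal's amplitude by a fixed loop gain greater than 1.
def feedback_cycles(initial_amplitude, loop_gain, clip_level):
    """Count cycles until the re-amplified signal hits the clip level."""
    amplitude = initial_amplitude
    cycles = 0
    while amplitude < clip_level:
        amplitude *= loop_gain   # positive feedback: output fed back as input
        cycles += 1
    return cycles, amplitude

# A quiet 0.01-unit murmur with a loop gain of 2 blows past a
# 100-unit clip level in just over a dozen cycles.
cycles, final = feedback_cycles(0.01, 2.0, 100.0)
```

The point of the toy model is how few cycles the runaway takes: because each pass multiplies rather than adds, the signal grows exponentially, and the starting amplitude barely matters.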

If positive feedback can thus cause an entire system to spin out of control by exponentially amplifying the input it receives, negative feedback works in the opposite fashion, counteracting the input it receives and thereby drawing the system back toward its initial state. The classic example here is a thermostat. A household thermostat typically allows the user to determine a set point, say 70 degrees, while having some allowable deviation programmed into it, perhaps 2 degrees. The thermostat will then monitor the input coming from an attached thermometer and respond accordingly. If the ambient temperature rises to 72 degrees, for instance, the thermostat will activate the air conditioner, thus cooling the room. Alternately, if the room temperature drops to 68 degrees, the thermostat will fire up the furnace, thereby heating the room. In either case, the result is to draw the room back to its initial state of 70 degrees. Also known as balancing feedback, negative feedback does exactly that, automatically responding to a disruption that pushes a system in one direction with another force that pushes in the opposite direction, thereby helping the system gravitate toward a balanced middle point.
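The thermostat’s balancing logic fits in a few lines of code. This is a deliberately simplified sketch: it assumes the furnace or air conditioner moves the room by one degree per time step, and the set point and band are the illustrative values from above.

```python
# Toy thermostat: set point 70 degrees with a 2-degree allowable band.
# Assumption: heating/cooling changes the room by 1 degree per step.
def thermostat_step(temp, set_point=70.0, band=2.0):
    """Return the room temperature after one control step."""
    if temp >= set_point + band:
        return temp - 1.0   # too warm: AC nudges the temperature down
    if temp <= set_point - band:
        return temp + 1.0   # too cold: furnace nudges it up
    return temp             # within the band: do nothing

# A 76-degree room is pulled back into the 68-72 band and stays there.
temp = 76.0
for _ in range(10):
    temp = thermostat_step(temp)
```

Notice the contrast with the amplifier: here the response always opposes the deviation, so the disturbance shrinks with each step instead of growing.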

These examples of positive and negative feedback are straightforward enough, but a helpful way of picturing their contrasting dynamics—and of seeing how the two types of feedback can sometimes interact—is to imagine a round rock sitting on a hillside when suddenly the rock’s rest is disrupted by an outside force, such as a strong gust of wind. Let us first consider what happens when the rock starts out at the bottom of a hillside basin.

The rock is initially at rest because the entire system is at equilibrium: the force of gravity is pulling the rock toward the center of the earth, yet this gravitational force is equaled by the electromagnetic and nuclear forces holding the ground beneath the rock together, thus preventing its further descent. When the gust of wind then comes along, this disrupts the initial equilibrium and pushes the rock partway up one of the basin’s slopes. This upward movement runs directly contrary to the pull of gravity, while moving the rock into a position where the other forces involved are no longer suspending it. Accordingly, as soon as the gust abates, gravity will draw the rock back down to the bottom of the basin, where it again comes to rest somewhere close to its starting point. Clearly, this is a system dominated by negative feedback, where the upward-pressing force of the wind is automatically counteracted by the downward-pressing force of gravity, thereby tending to keep the system in roughly the same state over the long term.

Let us now consider what happens when the rock starts out, not in a hillside basin, but resting directly on the hill’s slope. Now, when a gust of wind jars the rock enough to overcome the friction with the ground that was holding the rock in place, the rock will start to roll. As Galileo demonstrated, when a heavy body falls, its velocity increases in proportion to the time it has been falling, and the distance it covers in proportion to the square of that time. As the careening rock thus gains speed, it likewise gains momentum—mass times velocity—making it increasingly unlikely that friction with the ground below will be strong enough to arrest the rock’s descent, even if it should encounter some minor bumps. This increase in momentum will be even more pronounced if the hillside happens to be covered with snow, in which case the rock will pick up ever more snow as it rolls downhill, with the amount of snow it takes on each second only increasing because the snowball’s surface area is increasing. Now augmenting not just its velocity but also its mass with each rotation, the rock will acquire even more momentum, making it even harder to stop; hence the term “to snowball.”

Generally speaking, once a snow-covered rock has begun careening downhill in accelerating fashion, with positive feedback moving the system ever further from its initial state, the story can end in only one of two ways. The tumbling rock can find its way into a lower basin, at which point the basin’s upslope will arrest the rock’s fall and the rock will again come to rest, albeit in a very different location from where it started out. Alternately, the rock may continue gaining speed until it encounters some immovable object such as a brick wall, with results that are not entirely predictable but likely catastrophic for the snowball, and quite possibly for the wall.
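The snowball’s compounding momentum can also be sketched numerically. The model below makes two illustrative assumptions of my own: the slope adds a fixed increment of speed each step, and the snowball’s mass grows by ten percent per step as it picks up snow. The specific numbers are arbitrary; the shape of the curve is the point.

```python
# Toy model of the snowballing rock.
# Assumptions: velocity grows by a fixed increment per step (steady
# slope), and mass grows by 10% per step as the rock picks up snow.
def roll(steps, mass=10.0, velocity=0.0, accel=1.0, growth=1.10):
    """Return the rock's momentum (mass x velocity) at each step."""
    momenta = []
    for _ in range(steps):
        velocity += accel        # gains speed as it descends
        mass *= growth           # picks up ever more snow
        momenta.append(mass * velocity)
    return momenta

momenta = roll(20)
# Momentum rises faster than linearly: each step's gain exceeds the last.
gains = [b - a for a, b in zip(momenta, momenta[1:])]
accelerating = all(later > earlier for earlier, later in zip(gains, gains[1:]))
```

With both velocity and mass compounding, the momentum curve bends upward: after twenty steps the rock carries over a hundred times its initial momentum, which is why late intervention is so much harder than early intervention.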

As these examples make clear, systems dominated by negative feedback tend to be highly stable, absorbing the disruptions that strike them without moving a significant distance from their initial state. The opposite is true of systems dominated by positive feedback, which take minor disruptions and amplify them—then amplify them again—such that minor changes to a system’s initial conditions can move the whole system to a very different state, occasionally even destroying it. These contrasting characterizations may suggest that negative feedback is good and positive feedback is bad. Who wants a blown amplifier, after all? But, in fact, both types of feedback can be helpful and both can be destructive, depending on the system in question and the relative amounts of positive and negative feedback it experiences.

If negative feedback tends to promote stability, for instance, an excess of negative feedback can be stultifying, preventing a system from growing or changing, or even from fending off the inevitable forces of decay. Positive feedback, on the other hand, can sometimes be a force for rapid, beneficial change, although its reinforcing mechanisms must usually be reined in with some sort of balancing mechanism if they are not to cause a system to run out of control. As a sweeping generalization, therefore, we could say that successful systems—those that survive and thrive over time—tend to be those that incorporate both positive and negative feedback, although not necessarily in equal amounts. Typically, negative feedback mechanisms will dominate most of the time, thereby promoting general stability, while leaving room for an occasional burst of positive feedback to drive some beneficial change.
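There is a classic mathematical picture of positive feedback reined in by negative feedback: the logistic growth curve, in which growth reinforces itself while a quantity is small but is progressively balanced as the quantity approaches a carrying capacity. The sketch below uses illustrative values of my own choosing.

```python
# Logistic growth: positive feedback held in check by negative feedback.
# rate * x is the self-reinforcing push; (1 - x / capacity) is the
# balancing pull that strengthens as x approaches the carrying capacity.
def logistic_series(x0, rate, capacity, steps):
    """Return the trajectory of x under the discrete logistic update."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + rate * x * (1 - x / capacity))
    return xs

# Illustrative run: near-exponential growth early on, then a smooth
# leveling-off at the capacity instead of a blown amplifier.
series = logistic_series(x0=1.0, rate=0.5, capacity=100.0, steps=50)
```

Early in the run the curve looks just like the runaway amplifier; the difference is that the balancing term grows along with the system, so the story ends in stability rather than a crash.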

Well and good—that’s positive and negative feedback—but what does any of this have to do with the media, social media, politics, or business? We’ll flesh out some specific answers to this question over the next few weeks, but here’s a preview. From the time our earliest ancestors evolved some 6 million years ago, human beings have lived in groups. The size of the standard social unit has grown dramatically over the ages, from small troops of extended family containing two or three dozen members, to larger tribes, then to villages, towns, city-states, and nation-states, some of which now number over a billion people. And within these massive contemporary social units we can find many smaller subgroups, whether municipalities, companies, clubs and churches, or even online social networks. All of these groups are held together by some particular set of social dynamics, which are shaped by factors such as the local environment, the informal or formal rules members are expected to follow, the institutions they collectively erect, and even their underlying psychological drives and motives. And with people being extraordinarily complex, unpredictable actors, plenty of feedback is needed to hold social groups of any size together over the long term.

To revisit an example I mentioned a few weeks ago, when our ancestors lived in migratory tribes containing one or two hundred members, many of them unrelated—and thus not necessarily inclined to help one another out of familial love—one of the ways they maintained tribal stability was by observing certain unwritten rules. “Share the meat with your tribemates when you are successful hunting” appears to have been a very common rule. Not only did prescribed sharing promote friendship and tribal solidarity, but it helped keep the members of the tribe alive, since every hunter was bound to hit a slump here and there and go weeks without catching anything; absent communal sharing as a sort of insurance policy, the hunter’s entire family might starve. The benefits of sharing aside, individuals always faced a temptation to cheat, since a hunter who accepted gifts of meat from his neighbors but declined to share himself would end up with more meat than anyone. Though good for the individual hunter, such behavior was obviously bad for his tribemates and for the tribe as a whole, so righteous indignation would generally lead his tribemates to exact some sort of punishment against the cheater. This might range from declining to share any more meat with him to ostracism from the tribe, or even physical punishment up to and including death. In the technical terms we have been employing, such punishments served as a negative feedback mechanism. Should certain members of the tribe begin showing a tendency to cheat or ignore other tribal rules, challenging the group’s equilibrium, a series of swiftly executed punishments would generally induce everyone to start behaving again, thus drawing the tribe back to a more stable, enduring state.

Now let us imagine a tribe that settles on a different customary rule: “If someone kills one of your family members, you are entitled to go kill two of his family members.” Ideally, tribemates will not go around killing one another very often, but occasionally tempers will flare and a fight will break out, so consider what will happen if someone kills my brother. Outraged, I may go kill both the offender and his brother. Since I have now killed two members of his family, however, the survivors are entitled to come kill four people in my family. Which means… The exponential path this cycle of revenge killings can be expected to take is obvious, conceivably continuing until the entire tribe has wiped itself out. Which is why, in practice, you will rarely encounter a tribe practicing this particular rule. If any tribes were ever foolish enough to adopt it, they have long since removed themselves from the gene pool.
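Just how quickly the two-for-one rule destroys a tribe is worth making concrete. The sketch below assumes a tribe of 150 members and a single initial killing; both numbers are illustrative.

```python
# Toy model of the "two-for-one" revenge rule: each round of
# retaliation kills twice as many people as the round before.
def rounds_until_collapse(tribe_size, first_deaths=1):
    """Count rounds of retaliation until cumulative deaths exceed the tribe."""
    deaths, total, rounds = first_deaths, 0, 0
    while total < tribe_size:
        total += deaths
        deaths *= 2      # the aggrieved family kills twice as many back
        rounds += 1
    return rounds

# 1 + 2 + 4 + 8 + ... : even a 150-member tribe is wiped out
# in well under a dozen rounds of retaliation.
rounds = rounds_until_collapse(150)
```

The doubling is the whole story: because each round’s death toll equals all previous rounds combined plus one, no realistic tribe size buys more than a handful of extra rounds.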

This last example may seem a bit contrived; surely no tribe would ever consciously adopt a rule that would so obviously lead to its own destruction. That may be true, but plenty of social groups, ranging from tribes to collections of street gangs to entire societies, have found themselves gradually sucked into cycles of escalating violence from which they had to find a way out or risk wholesale collapse—and some of them have collapsed. Which brings me, finally, to the point of this blog series.

Many of the institutions that undergird our own society have long offered social stability because their governing dynamics are rooted in various negative feedback mechanisms. Where these institutions foster positive feedback, meanwhile, this is generally held in check by various guardrails: more negative feedback. Over the past few decades, however, many of these institutions have gradually slipped into promoting various forms of positive feedback. Early on, this typically happens without anyone having much conscious awareness of the shift taking place, so no one thinks to try to temper the positive feedback with some broader negative feedback mechanism. Later, once some people have begun to notice what is going on, those who are benefiting from the positive feedback may start consciously working to make the feedback even stronger—itself another instance of positive feedback. As a result of this shift away from negative feedback and toward positive feedback, many of our society’s core institutions have begun to lose their traditional stability. Many, in fact, seem to be running out of control, with results no one can predict. If we do not figure out how to rein in these escalating dynamics, one or more of our key social institutions could eventually become a snowball crashing into a brick wall, and it will be anyone’s guess whether the larger society will explode into pieces.

To set expectations for this series: I do not pretend to know how to steer all our institutions back onto more stable, enduring paths. I am convinced, however, that if any of us are to start devising solutions, we must first understand the problem. And to understand many of the social problems we face today, we must have a firm understanding of how positive and negative feedback work, not just in the abstract but in concrete social institutions. Next week, we will ease our way into this more concrete form of analysis by considering the operation of an institution with which most readers will presumably be familiar: modern science. This institution, in itself, stands on stable ground by virtue of its systematic use of negative feedback. The threat science currently faces comes from the fact that large segments of our population now view scientists with extreme distrust—a product, unsurprisingly, of some nasty positive feedback loops.
